ib_send_bw


Overview

The ib_send_bw (InfiniBand send bandwidth) tool is part of the Perftest package.

This article shows the tool's configuration options as of perftest package version 5.6.

# ib_send_bw -h

Usage:

ib_send_bw start a server and wait for connection

ib_send_bw <host> connect to server at <host>
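
For example, a minimal unidirectional test between two hosts looks like this (192.168.1.10 is a placeholder for the server's address). On the server:

# ib_send_bw

On the client:

# ib_send_bw 192.168.1.10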

Options:

-a, --all Run sizes from 2 till 2^23

-b, --bidirectional Measure bidirectional bandwidth (default unidirectional)

-c, --connection=<RC/XRC/UC/UD/DC> Connection type RC/XRC/UC/UD/DC (default RC)

-d, --ib-dev=<dev> Use IB device <dev> (default first device found)

-D, --duration Run test for a customized period of seconds.

-e, --events Sleep on CQ events (default poll)

-X, --vector=<completion vector> Set <completion vector> used for events

-f, --margin Measure results within margins (default=2sec)

-F, --CPU-freq Do not show a warning even if cpufreq_ondemand module is loaded, and cpu-freq is not on max.

-g, --mcg Send messages to multicast group with 1 QP attached to it.

-h, --help Show this help screen.

-i, --ib-port=<port> Use port <port> of IB device (default 1)

-I, --inline_size=<size> Max size of message to be sent in inline

-l, --post_list=<list size> Post list of WQEs of <list size> size (instead of single post)

-m, --mtu=<mtu> MTU size : 256 - 4096 (default port mtu)

-M, --MGID=<multicast_gid> In multicast, uses <multicast_gid> as the group MGID.

-n, --iters=<iters> Number of exchanges (at least 5, default 1000)

-N, --noPeak Cancel peak-bw calculation (default with peak up to iters=20000)

-O, --dualport Run test in dual-port mode.

-p, --port=<port> Listen on/connect to port <port> (default 18515)

-q, --qp=<num of qp's> Num of QPs (default 1)

-Q, --cq-mod Generate CQE only after <cq-mod> completions

-r, --rx-depth=<dep> Rx queue size (default 512). If using srq, rx-depth controls max-wr size of the srq

-R, --rdma_cm Connect QPs with rdma_cm and run test on those QPs

-s, --size=<size> Size of message to exchange (default 65536)

-S, --sl=<sl> SL (default 0)

-t, --tx-depth=<dep> Size of tx queue (default 128)

-T, --tos=<tos value> Set <tos_value> to RDMA-CM QPs. Available only with -R flag. Values 0-256 (default off)

-u, --qp-timeout=<timeout> QP timeout, timeout value is 4 usec * 2 ^(timeout), default 14

-V, --version Display version number

-w, --limit_bw=<value> Set verifier limit for bandwidth

-x, --gid-index=<index> Test uses GID with GID index (Default : IB - no gid . ETH - 0)

-y, --limit_msgrate=<value> Set verifier limit for Msg Rate

-z, --com_rdma_cm Communicate with rdma_cm module to exchange data - use regular QPs

--cpu_util Show CPU Utilization in report, valid only in Duration mode

--dlid Set a Destination LID instead of getting it from the other side.

--dont_xchg_versions Do not exchange versions and MTU with other side

--force-link=<value> Force the link(s) to a specific type: IB or Ethernet.

--inline_recv=<size> Max size of message to be sent in inline receive

--ipv6 Use IPv6 GID. Default is IPv4

--mmap=file Use an mmap'd file as the buffer for testing P2P transfers.

--mmap-offset=<offset> Offset into the mmap'd file used as the buffer for testing P2P transfers.

--mr_per_qp Create memory region for each qp.

--odp Use On Demand Paging instead of Memory Registration.

--output=<units> Set verbosity output level: bandwidth, message_rate, latency

Latency measurement is Average calculation

--perform_warm_up Perform some iterations before starting to measure, in order to warm up the memory cache. Valid in Atomic, Read and Write BW tests

--pkey_index=<pkey index> PKey index to use for QP

--report-both Report RX & TX results separately on Bidirectional BW tests

--report_gbits Report Max/Average BW of test in Gbit/sec (instead of MB/sec)

--report-per-port Report BW data on both ports when running Dualport and Duration mode

--reversed Reverse traffic direction - server sends to client

--run_infinitely Run test forever, print results every <duration> seconds

--retry_count=<value> Set retry count value in rdma_cm mode

--tclass=<value> Set the Traffic Class in GRH (if GRH is in use)

--use_exp Use Experimental verbs in data path. Default is OFF.

--use_hugepages Use Hugepages instead of contig, memalign allocations.

--use_res_domain Use shared resource domain

--verb_type=<option> Set verb type: normal, accl. Default is normal.

--wait_destroy=<seconds> Wait <seconds> before destroying allocated resources (QP/CQ/PD/MR..)
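
Putting several of the options above together, a typical sweep measures bidirectional bandwidth for all message sizes from 2 bytes up to 2^23 and reports results in Gbit/sec (the address is again a placeholder). On the server:

# ib_send_bw -a -b --report_gbits

On the client:

# ib_send_bw -a -b --report_gbits 192.168.1.10

Similarly, a duration-mode run of 10 seconds can add --cpu_util to include CPU utilization in the report:

# ib_send_bw -D 10 --cpu_util 192.168.1.10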

Rate Limiter:

--burst_size=<size> Set the amount of messages to send in a burst when using rate limiter

--rate_limit=<rate> Set the maximum rate of sent packets. Default unit is [Gbps]; use --rate_units to change that.

--rate_units=<units> [Mgp] Set the units for rate limit to MBps (M), Gbps (g) or pps (p). default is Gbps (g).

Note (1): pps not supported with HW limit.

Note (2): When using PP rate_units is forced to Kbps.

--rate_limit_type=<type> [HW/SW/PP] Limit the QPs by HW, PP or by SW. Disabled by default. When a type is not specified, HW limit is the default.

Note (1): In the latency-under-load test, SW rate limit is forced
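
As a sketch of the rate limiter (the values are illustrative), the client invocation below caps the send rate at 10 Gbps with the software limiter and sends messages in bursts of 16:

# ib_send_bw --rate_limit=10 --rate_units=g --rate_limit_type=SW --burst_size=16 192.168.1.10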
