linux – How do I get a virtualized SR-IOV InfiniBand interface UP?
I have spent several days on this, and I have managed to get SR-IOV working with a Mellanox InfiniBand card running the latest firmware.
The virtual functions show up in Dom0 as
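For context, a minimal sketch of how VFs are typically exposed on a ConnectX card in Dom0, via the `mlx4_core` module parameters (the VF count of 8 here is an assumption; adjust it to your card and workload):

```shell
# Hypothetical Dom0 setup: enable SR-IOV on the ConnectX HCA.
# num_vfs sets how many virtual functions the firmware exposes;
# probe_vf controls how many of them mlx4_core probes in Dom0 itself.
cat > /etc/modprobe.d/mlx4_core.conf <<'EOF'
options mlx4_core num_vfs=8 probe_vf=8
EOF

# Reload the driver (or reboot) so the parameters take effect;
# the VFs should then appear in lspci next to the physical function.
modprobe -r mlx4_ib mlx4_en mlx4_core
modprobe mlx4_core
lspci | grep -i mellanox
```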
I then detached 06:00.1 from Dom0, assigned it to xen-pciback, and passed it through to a Xen test domain. lspci inside the test DomU shows:
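The detach-and-passthrough step can be sketched with the `xl` toolstack (the domain name `testdomu` is a placeholder):

```shell
# Hide the VF from Dom0 and hand it to xen-pciback.
xl pci-assignable-add 06:00.1

# Either hot-plug it into the running test domain...
xl pci-attach testdomu 06:00.1

# ...or pass it through at domain creation time via the guest config:
#   pci = [ '06:00.1' ]
```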
I loaded the following modules in the DomU: mlx4_ib, rdma_ucm, ib_umad, ib_uverbs, ib_ipoib. The dmesg output from the mlx4 driver shows:

```
[   11.956787] mlx4_core: Mellanox ConnectX core driver v1.1 (Dec, 2011)
[   11.956789] mlx4_core: Initializing 0000:00:01.1
[   11.956859] mlx4_core 0000:00:01.1: enabling device (0000 -> 0002)
[   11.957242] mlx4_core 0000:00:01.1: Xen PCI mapped GSI0 to IRQ30
[   11.957581] mlx4_core 0000:00:01.1: Detected virtual function - running in slave mode
[   11.957606] mlx4_core 0000:00:01.1: Sending reset
[   11.957699] mlx4_core 0000:00:01.1: Sending vhcr0
[   11.976090] mlx4_core 0000:00:01.1: HCA minimum page size:512
[   11.976672] mlx4_core 0000:00:01.1: Timestamping is not supported in slave mode.
[   12.068079] <mlx4_ib> mlx4_ib_add: mlx4_ib: Mellanox ConnectX InfiniBand driver v1.0 (April 4, 2008)
[   12.184072] mlx4_core 0000:00:01.1: mlx4_ib: multi-function enabled
[   12.184075] mlx4_core 0000:00:01.1: mlx4_ib: operating in qp1 tunnel mode
```

An ib0 device even appears:

```
ib0  Link encap:UNSPEC  HWaddr 80-00-05-49-FE-80-00-00-00-00-00-00-00-00-00-00
     inet addr:10.10.10.10  Bcast:10.10.10.255  Mask:255.255.255.0
     UP BROADCAST MULTICAST  MTU:2044  Metric:1
     RX packets:117303 errors:0 dropped:0 overruns:0 frame:0
     TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
     collisions:0 txqueuelen:256
     RX bytes:6576132 (6.5 MB)  TX bytes:0 (0.0 B)
```

I can even ping 10.10.10.10 locally. However, those pings never go out onto the InfiniBand fabric. This appears to be because the link is down:

```
CA 'mlx4_0'
    CA type: MT4100
    Number of ports: 1
    Firmware version: 2.30.3000
    Hardware version: 0
    Node GUID: 0x001405005ef41f25
    System image GUID: 0x002590ffff175727
    Port 1:
        State: Down
        Physical state: LinkUp
        Rate: 10
        Base lid: 9
        LMC: 0
        SM lid: 1
        Capability mask: 0x02514868
        Port GUID: 0x0000000000000000
```

How do I bring it up? Why is the DomU link UP but the VF link not? The answer was actually found here:
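The state described above can be reproduced inside the DomU roughly as follows (IP address as in the question; `ibstat` comes from the infiniband-diags package — a sketch, not the exact commands used):

```shell
# Load the InfiniBand stack in the guest (-a loads several modules at once).
modprobe -a mlx4_ib rdma_ucm ib_umad ib_uverbs ib_ipoib

# Configure IPoIB and bring the interface up.
ip addr add 10.10.10.10/24 dev ib0
ip link set ib0 up

# Check the link layer: with no subnet manager reaching the VF,
# Port 1 stays "State: Down" even though "Physical state: LinkUp".
ibstat mlx4_0
```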
Solution
OpenSM must be installed and running on the hypervisor host before the VF port will come up. Start OpenSM with the option PORTS="ALL" so that it manages every port.
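A sketch of that fix on the hypervisor host. Treat the exact paths and service commands as assumptions: Debian/Ubuntu keeps the setting in /etc/default/opensm, while the RHEL family uses /etc/sysconfig/opensm.

```shell
# Install the subnet manager on the hypervisor host.
apt-get install opensm        # or: yum install opensm

# Tell the init script to run OpenSM on all ports, so the SM
# also serves the virtual functions, then (re)start the service.
echo 'PORTS="ALL"' >> /etc/default/opensm
service opensm restart
```

Afterwards, `ibstat` inside the DomU should report Port 1 as "State: Active" instead of "State: Down".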