Linux Basics: NIC Bonding Configuration on Linux
I. Bonding technology

bonding is a Linux NIC-bonding technique that aggregates n physical NICs on a server into a single logical NIC inside the system. It can raise network throughput and provide network redundancy and load balancing, among other benefits.

bonding is implemented at the Linux kernel level as a kernel module (driver). To use it, the system needs this module; you can inspect it with the modinfo command, and practically every modern kernel ships with it.

# modinfo bonding
filename:       /lib/modules/2.6.32-642.1.1.el6.x86_64/kernel/drivers/net/bonding/bonding.ko
author:         Thomas Davis, tadavis@lbl.gov and many others
description:    Ethernet Channel Bonding Driver, v3.7.1
version:        3.7.1
license:        GPL
alias:          rtnl-link-bond
srcversion:     F6C1815876DCB3094C27C71
depends:
vermagic:       2.6.32-642.1.1.el6.x86_64 SMP mod_unload modversions
parm:           max_bonds:Max number of bonded devices (int)
parm:           tx_queues:Max number of transmit queues (default = 16) (int)
parm:           num_grat_arp:Number of peer notifications to send on failover event (alias of num_unsol_na) (int)
parm:           num_unsol_na:Number of peer notifications to send on failover event (alias of num_grat_arp) (int)
parm:           miimon:Link check interval in milliseconds (int)
parm:           updelay:Delay before considering link up, in milliseconds (int)
parm:           downdelay:Delay before considering link down, in milliseconds (int)
parm:           use_carrier:Use netif_carrier_ok (vs MII ioctls) in miimon; 0 for off, 1 for on (default) (int)
parm:           mode:Mode of operation; 0 for balance-rr, 1 for active-backup, 2 for balance-xor, 3 for broadcast, 4 for 802.3ad, 5 for balance-tlb, 6 for balance-alb (charp)
parm:           primary:Primary network device to use (charp)
parm:           primary_reselect:Reselect primary slave once it comes up; 0 for always (default), 1 for only if speed of primary is better, 2 for only on active slave failure (charp)
parm:           lacp_rate:LACPDU tx rate to request from 802.3ad partner; 0 for slow, 1 for fast (charp)
parm:           ad_select:803.ad aggregation selection logic; 0 for stable (default), 1 for bandwidth, 2 for count (charp)
parm:           min_links:Minimum number of available links before turning on carrier (int)
parm:           xmit_hash_policy:balance-xor and 802.3ad hashing method; 0 for layer 2 (default), 1 for layer 3+4, 2 for layer 2+3 (charp)
parm:           arp_interval:arp interval in milliseconds (int)
parm:           arp_ip_target:arp targets in n.n.n.n form (array of charp)
parm:           arp_validate:validate src/dst of ARP probes; 0 for none (default), 1 for active, 2 for backup, 3 for all (charp)
parm:           arp_all_targets:fail on any/all arp targets timeout; 0 for any (default), 1 for all (charp)
parm:           fail_over_mac:For active-backup, do not set all slaves to the same MAC; 0 for none (default), 2 for follow (charp)
parm:           all_slaves_active:Keep all frames received on an interface by setting active flag for all slaves; 0 for never (default), 1 for always. (int)
parm:           resend_igmp:Number of IGMP membership reports to send on link failure (int)
parm:           packets_per_slave:Packets to send per slave in balance-rr mode; 0 for a random slave, 1 packet per slave (default), >1 packets per slave. (int)
parm:           lp_interval:The number of seconds between instances where the bonding driver sends learning packets to each slaves peer switch. The default is 1. (uint)

The seven bonding modes: bonding offers seven operating modes, and you must choose one when configuring it; each has its own strengths and weaknesses. There is plenty of material online describing each mode in detail; understand their characteristics and pick the one that fits your situation. In practice, modes 0, 1, 4, and 6 are the ones most commonly used.
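For quick reference, the number-to-name mapping comes straight from the mode parameter in the modinfo output above, and the driver accepts the mode in either form. A minimal sketch of two equivalent settings as they would appear in the ifcfg files used later in this article:

# 0=balance-rr  1=active-backup  2=balance-xor  3=broadcast
# 4=802.3ad     5=balance-tlb    6=balance-alb
BONDING_OPTS="mode=6 miimon=100"            # mode given by number
BONDING_OPTS="mode=balance-alb miimon=100"  # mode given by name, equivalent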
II. Configuring bonding on CentOS 7

Environment:

System: CentOS 7
NICs: em1, em2
bond0: 172.16.0.183
Bonding mode: mode 6 (adaptive load balancing)

The server's two physical NICs, em1 and em2, are bonded into a single logical NIC, bond0, using bonding mode 6.

Note: the IP address is configured on bond0; the physical NICs do not need IP addresses.

1. Stop and disable the NetworkManager service

systemctl stop NetworkManager.service     # stop the NetworkManager service
systemctl disable NetworkManager.service  # keep NetworkManager from starting at boot

Note: be sure to turn it off; if it is left running it will interfere with the bonding setup.

2. Load the bonding module

modprobe --first-time bonding

No output means the module loaded successfully. If you instead see "modprobe: ERROR: could not insert 'bonding': Module already in kernel", the module was already loaded and you can ignore the message. You can also check whether the module is loaded with:

lsmod | grep bonding
bonding 136705 0
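The module parameters listed by modinfo above can also be passed at load time, which is a quick way to experiment on a test machine. A minimal sketch, assuming the module is not yet loaded (mode=6 and miimon=100 match the values configured later via BONDING_OPTS):

# load the driver with an explicit mode and link-monitor interval
# (parameters only take effect if the module is not already loaded)
modprobe bonding mode=6 miimon=100
# the module creates bond0 by default (max_bonds defaults to 1);
# confirm the values it picked up via the bonding sysfs attributes
cat /sys/class/net/bond0/bonding/mode     # balance-alb 6
cat /sys/class/net/bond0/bonding/miimon   # 100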
3. Create the configuration file for the bond0 interface

vim /etc/sysconfig/network-scripts/ifcfg-bond0

Edit it as follows, adjusting for your environment:

DEVICE=bond0
TYPE=Bond
IPADDR=172.16.0.183
NETMASK=255.255.255.0
GATEWAY=172.16.0.1
DNS1=114.114.114.114
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
BONDING_MASTER=yes
BONDING_OPTS="mode=6 miimon=100"

BONDING_OPTS="mode=6 miimon=100" above selects working mode 6 (adaptive load balancing); miimon is how often the link state is checked, in milliseconds, here set to 100 ms. You can set mode to a different value to select another load-balancing mode as needed.

4. Edit the configuration file for the em1 interface

vim /etc/sysconfig/network-scripts/ifcfg-em1

Edit it as follows:

DEVICE=em1
USERCTL=no
ONBOOT=yes
MASTER=bond0    # must match the DEVICE value in the ifcfg-bond0 file above
SLAVE=yes
BOOTPROTO=none

5. Edit the configuration file for the em2 interface

vim /etc/sysconfig/network-scripts/ifcfg-em2

Edit it as follows:

DEVICE=em2
USERCTL=no
ONBOOT=yes
MASTER=bond0    # must match the DEVICE value in the ifcfg-bond0 file above
SLAVE=yes
BOOTPROTO=none

6. Test

Restart the network service:

systemctl restart network

Check the status of the bond0 interface (if this command errors out, the setup did not succeed; most likely the bond0 interface did not come up):

# cat /proc/net/bonding/bond0
Bonding Mode: adaptive load balancing   // bonding mode: currently alb (mode 6), i.e. high availability plus load balancing
Primary Slave: None
Currently Active Slave: em1
MII Status: up                          // link status: up (MII is short for Media Independent Interface)
MII Polling Interval (ms): 100          // link polling interval (here 100 ms)
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: em1                    // slave interface: em1
MII Status: up                          // link status: up
Speed: 1000 Mbps                        // port speed is 1000 Mbps
Duplex: full                            // full duplex
Link Failure Count: 0                   // number of link failures: 0
Permanent HW addr: 84:2b:2b:6a:76:d4    // permanent MAC address
Slave queue ID: 0

Slave Interface: em2                    // slave interface: em2
MII Status: up                          // link status: up
Speed: 1000 Mbps
Duplex: full                            // full duplex
Link Failure Count: 0                   // number of link failures: 0
Permanent HW addr: 84:2b:2b:6a:76:d5    // permanent MAC address
Slave queue ID: 0

Check the network interface details with the ifconfig command:

# ifconfig
bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST> mtu 1500
        inet 172.16.0.183 netmask 255.255.255.0 broadcast 172.16.0.255
        inet6 fe80::862b:2bff:fe6a:76d4 prefixlen 64 scopeid 0x20<link>
        ether 84:2b:2b:6a:76:d4 txqueuelen 0 (Ethernet)
        RX packets 11183 bytes 1050708 (1.0 MiB)
        RX errors 0 dropped 5152 overruns 0 frame 0
        TX packets 5329 bytes 452979 (442.3 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

em1: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST> mtu 1500
        ether 84:2b:2b:6a:76:d4 txqueuelen 1000 (Ethernet)
        RX packets 3505 bytes 335210 (327.3 KiB)
        RX errors 0 dropped 1 overruns 0 frame 0
        TX packets 2852 bytes 259910 (253.8 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

em2: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST> mtu 1500
        ether 84:2b:2b:6a:76:d5 txqueuelen 1000 (Ethernet)
        RX packets 5356 bytes 495583 (483.9 KiB)
        RX errors 0 dropped 4390 overruns 0 frame 0
        TX packets 1546 bytes 110385 (107.7 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
        inet 127.0.0.1 netmask 255.0.0.0
        inet6 ::1 prefixlen 128 scopeid 0x10<host>
        loop txqueuelen 0 (Local Loopback)
        RX packets 17 bytes 2196 (2.1 KiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 17 bytes 2196 (2.1 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
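Besides /proc/net/bonding/bond0, the bonding driver exposes its state through per-interface sysfs attributes, which is handy for scripted checks. A minimal sketch of a few useful reads (the paths are the standard bonding sysfs attributes; the values shown as comments are what this example setup would report):

cat /sys/class/net/bond0/bonding/mode          # balance-alb 6
cat /sys/class/net/bond0/bonding/slaves        # em1 em2
cat /sys/class/net/bond0/bonding/mii_status    # up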
III. Configuring bonding on CentOS 6

Configuring bonding on CentOS 6 is essentially the same as on CentOS 7 above; only some of the configuration details differ.

Environment:

System: CentOS 6
NICs: em1, em2
bond0: 172.16.0.183
Bonding mode: mode 1 (active-backup)   # mode 1 here, i.e. the active/standby mode

1. Stop and disable the NetworkManager service

service NetworkManager stop
chkconfig NetworkManager off
Note: if NetworkManager is installed, turn it off; if the commands report an error, it is not installed and you can ignore this step.

2. Load the bonding module

modprobe --first-time bonding

3. Create the configuration file for the bond0 interface

vim /etc/sysconfig/network-scripts/ifcfg-bond0

Edit it as follows, adjusting for your needs:

DEVICE=bond0
TYPE=Bond
BOOTPROTO=none
ONBOOT=yes
IPADDR=172.16.0.183
NETMASK=255.255.255.0
GATEWAY=172.16.0.1
DNS1=114.114.114.114
USERCTL=no
BONDING_OPTS="mode=1 miimon=100"

4. Bind the bond0 interface to the kernel module

vi /etc/modprobe.d/bonding.conf

Edit it as follows:

alias bond0 bonding

5. Edit the interface files for em1 and em2

vim /etc/sysconfig/network-scripts/ifcfg-em1

Edit it as follows:

DEVICE=em1
MASTER=bond0
SLAVE=yes
USERCTL=no
ONBOOT=yes
BOOTPROTO=none

vim /etc/sysconfig/network-scripts/ifcfg-em2

Edit it as follows:

DEVICE=em2
MASTER=bond0
SLAVE=yes
USERCTL=no
ONBOOT=yes
BOOTPROTO=none

6. Load the module, restart the network, and test

modprobe bonding
service network restart
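If you would rather not bounce the entire network stack, cycling just the bond usually suffices. A sketch, assuming the ifcfg files above are in place (on RHEL-family initscripts, ifdown/ifup on the bond master also handle its enslaved NICs):

ifdown bond0   # takes bond0 and its slaves down
ifup bond0     # recreates the bond and re-enslaves em1 and em2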
Check the status of the bond0 interface:

# cat /proc/net/bonding/bond0
Bonding Mode: fault-tolerance (active-backup)   # bond0 is currently in active-backup mode
Primary Slave: None
Currently Active Slave: em2
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: em1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 2
Permanent HW addr: 84:2b:2b:6a:76:d4
Slave queue ID: 0

Slave Interface: em2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 84:2b:2b:6a:76:d5
Slave queue ID: 0

Check the interfaces with the ifconfig command. Notice that in mode=1 all the MAC addresses are identical: the bond presents a single MAC address to the outside world.

# ifconfig
bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST> mtu 1500
        inet6 fe80::862b:2bff:fe6a:76d4 prefixlen 64 scopeid 0x20<link>
        ether 84:2b:2b:6a:76:d4 txqueuelen 0 (Ethernet)
        RX packets 147436 bytes 14519215 (13.8 MiB)
        RX errors 0 dropped 70285 overruns 0 frame 0
        TX packets 10344 bytes 970333 (947.5 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

em1: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST> mtu 1500
        ether 84:2b:2b:6a:76:d4 txqueuelen 1000 (Ethernet)
        RX packets 63702 bytes 6302768 (6.0 MiB)
        RX errors 0 dropped 64285 overruns 0 frame 0
        TX packets 344 bytes 35116 (34.2 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

em2: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST> mtu 1500
        ether 84:2b:2b:6a:76:d4 txqueuelen 1000 (Ethernet)
        RX packets 65658 bytes 6508173 (6.2 MiB)
        RX errors 0 dropped 6001 overruns 0 frame 0
        TX packets 1708 bytes 187627 (183.2 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
        inet 127.0.0.1 netmask 255.0.0.0
        inet6 ::1 prefixlen 128 scopeid 0x10<host>
        loop txqueuelen 0 (Local Loopback)
        RX packets 31 bytes 3126 (3.0 KiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 31 bytes 3126 (3.0 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

For a high-availability test, pull one of the network cables and watch the packet loss and latency, then plug the cable back in (simulating recovery from the failure) and check the packet loss and latency again (see the sketch below).
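A minimal sketch of that failover test, assuming another reachable host on the subnet; the gateway 172.16.0.1 is used here purely as an illustrative ping target. Run the ping in one terminal, watch the bond in another, and pull/replug a cable in between:

# terminal 1: continuous ping; gaps in the sequence numbers indicate loss,
# jumps in the round-trip time indicate the failover delay
ping -i 0.2 172.16.0.1

# terminal 2: watch which slave is currently active
# (in mode 1 exactly one slave is active; it should flip when its cable is pulled)
watch -n 1 cat /sys/class/net/bond0/bonding/active_slave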