Linux Channel Bonding -- Bonding Multiple NICs in Practice

I had long been meaning to finish this post but could never find the time. As it happens, a machine on my bench needed Channel Bonding, so I did the work and am sharing it here. In short, Linux lets you bind multiple network interfaces into a single channel using a kernel module called "bonding" together with a channel bonding interface. You can bond two NICs, or more, into one; put simply, you gain bandwidth and get a failover mechanism~

Setting up Channel Bonding is actually quite simple: edit a few files, then restart the network. Our environment here is an RHEL5 ES machine where eth0 and eth1 will be bonded into bond0, so we need to edit the following files:
[root@KHCDNSS01 ~]# vi /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
NETWORK=10.15.25.0
NETMASK=255.255.255.0
IPADDR=10.15.25.13
USERCTL=no
P.S. The 0 in bond0 depends on your needs; it can just as well be 1, 2, 3... whatever you like.

Next, edit the NIC interfaces to be enslaved. Here those are eth0 and eth1, so we edit these two files:
[root@KHCDNSS01 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0
# Broadcom Corporation NetXtreme II BCM5708 Gigabit Ethernet
DEVICE=eth0
BOOTPROTO=none
HWADDR=00:22:19:50:BC:7E
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no
[root@KHCDNSS01 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1
# Broadcom Corporation NetXtreme II BCM5708 Gigabit Ethernet
DEVICE=eth1
BOOTPROTO=none
HWADDR=00:22:19:50:BC:80
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no
[root@KHCDNSS01 ~]# cat /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=KHCDNSS01
GATEWAY=10.15.25.254
Next, edit the /etc/modprobe.conf file. (On RHEL3 this file lives at /etc/modules.conf; on RHEL4 and later it is /etc/modprobe.conf.)
[root@KHCDNSS01 ~]# vi /etc/modprobe.conf
alias eth0 bnx2
alias eth1 bnx2
alias scsi_hostadapter megaraid_sas
alias scsi_hostadapter1 ata_piix
alias bond0 bonding
options bond0 miimon=100
P.S. One thing to note here: adding more slaves to a single bond only requires more ifcfg files with MASTER=bond0; no extra module option is needed for that. The max_bonds option instead tells the bonding driver how many bond devices to create (bond0, bond1, ...), so if you want, say, three bond interfaces, extend the options line like this (a two-bond sketch of the whole file follows below):
options bond0 miimon=100 max_bonds=3
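
For reference, a minimal sketch of /etc/modprobe.conf with two separate bond devices (the aliases follow the same pattern as above; treat this as illustrative rather than tested):
alias bond0 bonding
alias bond1 bonding
options bonding miimon=100 max_bonds=2
Here the options are attached to the bonding module itself, so both bonds share them; per-bond options need the modprobe -o trick shown in the comments further down.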
With that done, all that's left is to restart the network. Before we do, let's look at the current network state:
[root@KHCDNSS01 ~]# ifconfig -a
eth0 Link encap:Ethernet HWaddr 00:22:19:50:BC:7E
inet addr:10.15.25.13 Bcast:10.15.25.255 Mask:255.255.255.0
inet6 addr: fe80::222:19ff:fe50:bc7e/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:48374222 errors:0 dropped:0 overruns:0 frame:0
TX packets:4928117 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:3254586862 (3.0 GiB) TX bytes:561588414 (535.5 MiB)
Interrupt:169 Memory:f8000000-f8012100

eth1 Link encap:Ethernet HWaddr 00:22:19:50:BC:80
inet6 addr: fe80::222:19ff:fe50:bc80/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:2 errors:0 dropped:0 overruns:0 frame:0
TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:128 (128.0 b) TX bytes:3354 (3.2 KiB)
Interrupt:169 Memory:f4000000-f4012100

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:27284 errors:0 dropped:0 overruns:0 frame:0
TX packets:27284 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:19766775 (18.8 MiB) TX bytes:19766775 (18.8 MiB)

sit0 Link encap:IPv6-in-IPv4
NOARP MTU:1480 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
Now let's restart the network:
[root@KHCDNSS01 ~]# service network restart
Shutting down interface eth0: /etc/sysconfig/network-scripts/ifdown-eth: line 101: /sys/class/net/bond0/bonding/slaves: No such file or directory
[ OK ]
Shutting down interface eth1: /etc/sysconfig/network-scripts/ifdown-eth: line 101: /sys/class/net/bond0/bonding/slaves: No such file or directory
[ OK ]
Shutting down loopback interface: [ OK ]
Bringing up loopback interface: [ OK ]
Bringing up interface bond0: [ OK ]
[root@KHCDNSS01 ~]#
These error messages on the very first restart after configuring Channel Bonding are normal: the ifdown script is trying to clean up slaves on a bond0 device that does not exist yet. You won't see them on subsequent restarts.
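If you would rather avoid even that one-time warning, one option (just a sketch, not strictly required) is to load the bonding module by hand before the restart, so that /sys/class/net/bond0 already exists when ifdown runs:
[root@KHCDNSS01 ~]# modprobe bonding
[root@KHCDNSS01 ~]# lsmod | grep bonding    # confirm the module is loaded
Now let's take a look at the bonding status: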
[root@KHCDNSS01 ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.2.4 (January 28, 2008)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:22:19:50:bc:7e

Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:22:19:50:bc:80
While we're at it, let's also check the state of the NICs:
[root@KHCDNSS01 ~]# mii-tool -v
eth0: negotiated 100baseTx-FD, link ok
product info: vendor 00:08:18, model 54 rev 6
basic mode: autonegotiation enabled
basic status: autonegotiation complete, link ok
capabilities: 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD
advertising: 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD flow-control
link partner: 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD
eth1: negotiated 100baseTx-FD, link ok
product info: vendor 00:08:18, model 54 rev 6
basic mode: autonegotiation enabled
basic status: autonegotiation complete, link ok
capabilities: 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD
advertising: 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD flow-control
link partner: 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD
[root@KHCDNSS01 ~]# ethtool eth0
Settings for eth0:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Advertised auto-negotiation: Yes
Speed: 1000Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
Auto-negotiation: on
Supports Wake-on: g
Wake-on: d
Link detected: yes
[root@KHCDNSS01 ~]# ethtool eth1
Settings for eth1:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Advertised auto-negotiation: Yes
Speed: 1000Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
Auto-negotiation: on
Supports Wake-on: g
Wake-on: d
Link detected: yes
[root@KHCDNSS01 ~]#
Now let's check the interface IPs. See, there's an extra bond0 interface now, right?
[root@KHCDNSS01 ~]# ifconfig -a
bond0 Link encap:Ethernet HWaddr 00:22:19:50:BC:7E
inet addr:10.15.25.13 Bcast:10.15.25.255 Mask:255.255.255.0
inet6 addr: fe80::222:19ff:fe50:bc7e/64 Scope:Link
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:42 errors:0 dropped:0 overruns:0 frame:0
TX packets:76 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:8088 (7.8 KiB) TX bytes:20800 (20.3 KiB)

eth0 Link encap:Ethernet HWaddr 00:22:19:50:BC:7E
inet6 addr: fe80::222:19ff:fe50:bc7e/64 Scope:Link
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:23 errors:0 dropped:0 overruns:0 frame:0
TX packets:40 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:4548 (4.4 KiB) TX bytes:10601 (10.3 KiB)
Interrupt:169 Memory:f8000000-f8012100

eth1 Link encap:Ethernet HWaddr 00:22:19:50:BC:7E
inet6 addr: fe80::222:19ff:fe50:bc7e/64 Scope:Link
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:19 errors:0 dropped:0 overruns:0 frame:0
TX packets:36 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:3540 (3.4 KiB) TX bytes:10199 (9.9 KiB)
Interrupt:169 Memory:f4000000-f4012100

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:27284 errors:0 dropped:0 overruns:0 frame:0
TX packets:27284 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:19766775 (18.8 MiB) TX bytes:19766775 (18.8 MiB)

sit0 Link encap:IPv6-in-IPv4
NOARP MTU:1480 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
[root@KHCDNSS01 ~]#
Next, a small experiment: unplug and replug the eth0 and eth1 cables a few times. I found that each unplug/replug drops roughly one packet, so five unplugs on each of the two cables lost 10 packets in total; the session, however, never dropped:
[root@KHCDNSS01 ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.2.4 (January 28, 2008)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Link Failure Count: 5
Permanent HW addr: 00:22:19:50:bc:7e

Slave Interface: eth1
MII Status: up
Link Failure Count: 5
Permanent HW addr: 00:22:19:50:bc:80
[root@KHCDNSS01 ~]#
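If you want to reproduce this failover test, a simple sketch (the pinging host is just an example) is to run a continuous ping from another machine while pulling the cables, and watch the bond state at the same time:
[root@other-host ~]# ping 10.15.25.13
[root@KHCDNSS01 ~]# watch -n1 cat /proc/net/bonding/bond0
Gaps in the ping's icmp_seq numbers show the dropped packets, while MII Status flips to down and Link Failure Count increments on each unplug.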
That's it, report complete~
13 Responses
  1. Anonymous Says:

    Sorry to bother you; I've recently been wanting to combine bandwidth too, but many articles I've read say this has to go through a switch, otherwise it can only serve as failover and won't actually multiply the bandwidth..

    May I ask you whether that is true?

    If I use your method, can I achieve 1M x 4 = 4M of bandwidth?? Please kindly advise.. thanks!


  2. Sorry, I was busy all day and couldn't respond in time. Actually, when I originally wrote this post, these machines were about to be delivered to a customer who needed dual-network failover, so the configuration was aimed at that goal rather than at aggregating NIC bandwidth. I should first explain that bonding on Linux has several different modes, and the earlier example is just one of them. The bonding kernel module can be given a mode=x setting that decides how it operates; the main modes are as follows:

    1. mode=0 (balance-rr)
    Round-robin policy: Transmit packets in sequential order from the first available slave through the last. This mode provides load balancing and fault tolerance.
    This article is exactly this case: since I did not explicitly set mode=, the default of mode 0, balance-rr (round-robin), is used. With two NICs set to balance you also get fault tolerance: if one NIC dies, the other keeps working.

    2. mode=1 (active-backup)
    Active-backup policy: Only one slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The bond's MAC address is externally visible on only one port (network adapter) to avoid confusing the switch. This mode provides fault tolerance. The primary option affects the behavior of this mode.
    Say you have two NICs: one is the primary and the other the secondary (i.e. the backup). Traffic flows only over the primary NIC; when the primary dies, the secondary automatically takes over as primary, and when the original primary recovers it becomes the secondary.

    3. mode=2 (balance-xor)
    XOR policy: Transmit based on [(source MAC address XOR'd with destination MAC address) modulo slave count]. This selects the same slave for each destination MAC address. This mode provides load balancing and fault tolerance.

    4. mode=3 (broadcast)
    Broadcast policy: transmits everything on all slave interfaces. This mode provides fault tolerance.

    5. mode=4 (802.3ad : Dynamic Link Aggregation)
    With two NICs set to Dynamic Link Aggregation, inbound traffic is 2G and outbound traffic is 2G. This mode requires connecting the NICs to a switch that supports Dynamic Link Aggregation. Fault tolerance is also included: if one NIC dies, the other keeps working.

    6. mode=5 (balance-tlb : Transmit load balancing)
    Adaptive transmit load balancing: channel bonding that does not require any special switch support. The outgoing traffic is distributed according to the current load(computed relative to the speed) on each slave. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed receiving slave.
    Inbound traffic is 1G while outbound traffic is 2G.

    7. mode=6 (balance-alb : Adaptive load balancing)
    Adaptive load balancing: includes balance-tlb + receive load balancing (rlb) for IPV4 traffic and does not require any special switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the server on their way out and overwrites the src hw address with the unique hw address of one of the slaves in the bond such that different clients use different hw addresses for the server.
    Inbound traffic is 1G while outbound traffic is 2G.

    These are modes I noted down a while ago; I haven't verified some of them myself yet~

    So you can decide which mode to use for bonding based on your actual needs. For the aggregation and load-balancing part, choose mode=0 or just leave the default; to build an active-backup setup, use mode=1. Configuration-wise, you simply add mode=x to the options line in /etc/modprobe.conf to get the type you want:

    alias bond0 bonding
    options bond0 miimon=100 mode=1
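
    As an aside, this kernel also exposes bonding settings through sysfs (the ifdown warning in the article already referenced /sys/class/net/bond0/bonding/slaves), so you can at least inspect the current mode without touching modprobe.conf. A sketch, with the caveat that writing a new mode only works while the bond is down (and, on older kernels, has no slaves enslaved):

    [root@KHCDNSS01 ~]# cat /sys/class/net/bond0/bonding/mode     # prints e.g. "balance-rr 0"
    [root@KHCDNSS01 ~]# ifdown bond0                              # bond must be down to change mode
    [root@KHCDNSS01 ~]# echo active-backup > /sys/class/net/bond0/bonding/mode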

    So let's go back to the result from the earlier configuration:
    [root@KHCDNSS01 named]# cat /proc/net/bonding/bond0
    Ethernet Channel Bonding Driver: v3.2.4 (January 28, 2008)

    Bonding Mode: load balancing (round-robin)
    MII Status: up
    MII Polling Interval (ms): 100
    Up Delay (ms): 0
    Down Delay (ms): 0

    Slave Interface: eth0
    MII Status: up
    Link Failure Count: 1
    Permanent HW addr: 00:22:19:50:bc:7e

    Slave Interface: eth1
    MII Status: up
    Link Failure Count: 1
    Permanent HW addr: 00:22:19:50:bc:80
    [root@KHCDNSS01 named]#

    Note the line Bonding Mode: load balancing (round-robin). That makes it quite clear: the machine is doing round-robin load balancing with failover. If you actually capture traffic on the two NICs, you can watch successive outgoing packets alternating between the two interfaces (a tcpdump sketch follows below), and because the driver monitors the links (miimon), unplugging one of the cables does not break the existing sessions.
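
    If you would like to watch this yourself, a quick sketch with tcpdump (using the gateway from the earlier config as an example target):

    [root@KHCDNSS01 ~]# tcpdump -ni eth0 icmp &
    [root@KHCDNSS01 ~]# tcpdump -ni eth1 icmp &
    [root@KHCDNSS01 ~]# ping 10.15.25.254
    # with balance-rr, successive outgoing echo requests alternate between the two captures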

    Now back to your question. The "multiplied bandwidth (1M x 4 = 4M)" you want corresponds to the mode=4 (802.3ad : Dynamic Link Aggregation) case above. For that mode you should prepare four NICs (a single 4-port NIC is fine too) and configure them just as in the original article, except this time with eth0, eth1, eth2 and eth3, adding mode=4 when you edit /etc/modprobe.conf (a config sketch follows below). Then connect the four cables to a switch (or two stacked switches) that supports 802.3ad LACP (Link Aggregation Control Protocol). Say they go to ports 1/1, 1/2, 1/3 and 1/4; then just use lacp agg to aggregate those four ports and you're done~ Sorry, my answer is probably a bit brief, but I have no test environment to play with at the moment, so I'm just laying out the general idea for your reference. When I have time I'll set up an environment, test it, and write it up to share with everyone~
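
    A minimal sketch of the host side of that four-NIC setup (names and options follow the same patterns as the article; untested, for illustration only):

    # /etc/modprobe.conf
    alias bond0 bonding
    options bond0 miimon=100 mode=4

    # /etc/sysconfig/network-scripts/ifcfg-ethN, repeated for eth0..eth3
    DEVICE=ethN
    BOOTPROTO=none
    ONBOOT=yes
    MASTER=bond0
    SLAVE=yes
    USERCTL=no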


  3. Anonymous Says:

    Thank you very much for the explanation.. so I really do have to buy a switch to aggregate bandwidth? Then may I ask how I should choose that switch?
    Since this is a personally hosted server the budget is very tight; could you recommend a switch that's cheap and good? Thank you so much!!!


  4. Well, I do project work, so I don't have much feel for equipment prices (I never look at the prices of these machines or devices; that's the purchasing team's job, and snatching other people's work would be immoral, heh heh)... The switches I've used most, such as the Omni6850, CISCO 3750, D-Link DES-3028 and DES-3828, all support it, but as I recall their prices will make you go @#$%... Really, just search the web for sites that list switch prices, find a model within your acceptable price range, and check whether its specs list Compliant Standards IEEE 802.3ad (LACP). I did a quick search for you; you can look at sites along the lines of Switches - IEEE 802.3ad (LACP): Compare Prices, Reviews. For reference only...


  5. Anonymous Says:

    I see.. thank you for the explanation then, and thanks for such a great article!!


  6. Anonymous Says:

    One more question.. I'm a bit confused about how to wire this up..
    I'm now planning to order two Chunghwa Telecom fixed-rate fiber lines, 10M (down) / 2M (up), to test with.. (each line comes with 6 static IPs)

    So these two lines connect to the switch, and then two cables run from the switch to the host's two NICs?

    But how should I set up my IPs? And how should my Buffalo broadband router be connected?? Do I need to buy two routers?

    How should I wire everything so that my upstream bandwidth reaches 4M??


  7. I found a machine to experiment with. It has four NICs; I bonded eth2 and eth3 into bond0 and eth0 and eth1 into bond1, with bond0 running mode=5 (balance-tlb : Transmit load balancing) and bond1 running mode=4 (802.3ad : Dynamic Link Aggregation). The configuration is as follows:

    [root@KHCTEST01 ~]# cat /etc/modprobe.conf
    alias eth0 bnx2
    alias eth1 bnx2
    alias eth2 e1000
    alias eth3 e1000

    #1950 uses sas
    alias scsi_hostadapter megaraid_sas

    alias usb-controller ehci-hcd
    alias usb-controller1 uhci-hcd
    install bond0 modprobe bonding --ignore-install -o bond0 \
    mode=balance-tlb miimon=100
    install bond1 modprobe bonding --ignore-install -o bond1 \
    mode=802.3ad miimon=100 xmit_hash_policy=layer3+4
    options e1000 RxDescriptors=2048,2048

    alias net-pf-10 off
    [root@KHCTEST01 ~]#

    On the switch side, I used four ports across two OmniSwitch 6850-24X units (stacked, so they can be treated as one switch; of course a single unit would work too). The machine's eth2 connects to switch GE1/1, eth3 to GE2/1, eth0 to GE1/2 and eth1 to GE2/2. GE1/2 and GE2/2 are then link-aggregated as follows:

    ! Link Aggregate :
    lacp linkagg 3 size 2 admin state enable
    lacp linkagg 3 actor admin key 40
    lacp agg 1/2 actor admin key 40
    lacp agg 2/2 actor admin key 40

    (I'll omit the VLAN part of the configuration...)

    After restarting the machine's network, the bonding status looks like this:

    [root@KHCTEST01 ~]# cat /proc/net/bonding/bond0
    Ethernet Channel Bonding Driver: v2.6.3-rh (June 8, 2005)

    Bonding Mode: transmit load balancing
    Primary Slave: None
    Currently Active Slave: eth3
    MII Status: up
    MII Polling Interval (ms): 100
    Up Delay (ms): 0
    Down Delay (ms): 0

    Slave Interface: eth2
    MII Status: up
    Link Failure Count: 0
    Permanent HW addr: 00:15:17:97:74:14

    Slave Interface: eth3
    MII Status: up
    Link Failure Count: 0
    Permanent HW addr: 00:15:17:97:74:15
    [root@KHCTEST01 ~]# cat /proc/net/bonding/bond1
    Ethernet Channel Bonding Driver: v2.6.3-rh (June 8, 2005)

    Bonding Mode: IEEE 802.3ad Dynamic link aggregation
    MII Status: up
    MII Polling Interval (ms): 100
    Up Delay (ms): 0
    Down Delay (ms): 0

    802.3ad info
    LACP rate: slow
    Active Aggregator Info:
    Aggregator ID: 2
    Number of ports: 2
    Actor Key: 17
    Partner Key: 40

    Partner Mac Address: 00:e0:b1:b0:3e:7f

    Slave Interface: eth1
    MII Status: up
    Link Failure Count: 1
    Permanent HW addr: 00:22:19:50:bd:64
    Aggregator ID: 2

    Slave Interface: eth0
    MII Status: up
    Link Failure Count: 1
    Permanent HW addr: 00:22:19:50:bd:66
    Aggregator ID: 2
    [root@KHCTEST01 ~]#

    From the status above, bond1 is clearly running in 802.3ad Dynamic link aggregation mode, and the LACP details also match what the switch reports~

    These test results are offered for everyone's reference (a quick iperf sketch for measuring the aggregate follows below). Report complete~
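
    If you also want to check that the aggregate really carries more than one link's worth of traffic, a sketch using iperf (the bond IP is a placeholder; parallel streams matter because a single TCP flow hashes onto a single slave):

    [root@KHCTEST01 ~]# iperf -s
    [root@client ~]# iperf -c <bond1-ip> -P 4 -t 30
    # with the xmit_hash_policy=layer3+4 option above, different streams can hash onto different slaves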


  8. 小松 Says:

    洋蔥爸比,
    hello~
    I followed your method and tried it on RHEL5,
    but when I restart the network
    it responds that the bond0 device does not exist.
    Is there any workaround?
    My MSN: yuyu521@livemail.tw
    Thanks~


  9. 小松 Says:

    What I've gotten to now is that it says
    "error: the IP address is already in use by another host".
    Ugh...


  10. Anonymous Says:

    Hello~
    I have a question for you: I currently have two NICs of different models, each with two network ports, and I'd like to bond them. Is that feasible?
    That is, eth0 and eth1 are one NIC, while eth2 and eth3 are a different model. I'd like eth0 and eth2 to form bond0, and eth1 and eth3 to form bond1. Would that cause any problems?


  11. As for bonding across different NIC models, I've honestly never tried it, because every machine I've done this on used identical NICs. That said, from the material I've read there is no restriction against mixing models, and whether Channel Bonding works comes down to kernel support: as long as your Linux kernel recognizes both NICs, I personally think it should be fine. After configuring, just confirm the state of all the NICs and the switch (a quick verification sketch follows below).
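
    A quick sketch for confirming the kernel recognizes both models before enslaving them (interface names are from your question; the drivers will vary):

    [root@host ~]# lspci | grep -i ethernet
    [root@host ~]# ethtool -i eth0
    [root@host ~]# ethtool -i eth2
    [root@host ~]# cat /proc/net/bonding/bond0
    # lspci should list both cards, ethtool -i shows the driver per interface,
    # and after setup both slaves should appear under /proc/net/bonding/bond0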


  12. Anonymous Says:

    May I ask whether you have tested that the 802.3ad mode can actually achieve bandwidth aggregation?




  13. Anonymous Says:

    mode=6 (balance-alb : Adaptive load balancing)
    Adaptive load balancing: includes balance-tlb + receive load balancing (rlb) for IPV4 traffic and does not require any special switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the server on their way out and overwrites the src hw address with the unique hw address of one of the slaves in the bond such that different clients use different hw addresses for the server.

    May I ask one more thing: when it says above that inbound traffic is 1G and outbound is 2G, does that mean the download traffic on the bonded host is 1G while the upload traffic is 2G?