[DRBD-user] OT: simple bonding question w/ 2 nodes and crossover cables.

Leroy van Logchem leroy.vanlogchem at wldelft.nl
Wed Sep 5 21:04:50 CEST 2007

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.

> Will bonding mode 5 or 6 work in this situation? I've
> set it up with both mode=5 and mode=6 and it seems I
> can't get any traffic unless I kill one of the links.
I did this yesterday and it works great. Configure the kernel module, for
example via modprobe.conf:
options bond0 miimon=100 mode=0 # balance-rr
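
On CentOS the same file usually also carries an alias line so the module is
loaded automatically under the bond0 name; a minimal sketch, assuming
/etc/modprobe.conf:

alias bond0 bonding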

The bonding driver module will provide a bond0 device (the master).
Assign an IP address and netmask to this interface only, and add the two
slaves without IPs. This is distribution dependent; on CentOS the slave
configuration looks roughly like the sketch below.
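
A minimal sketch, assuming the usual CentOS ifcfg files under
/etc/sysconfig/network-scripts/ (the address is a placeholder, not from the
original post):

# ifcfg-bond0
DEVICE=bond0
IPADDR=10.0.0.1
NETMASK=255.255.255.0
BOOTPROTO=none
ONBOOT=yes

# ifcfg-eth1 (ifcfg-eth2 is identical apart from DEVICE)
DEVICE=eth1
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes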

After starting bond0 (on CentOS that's 'ifup bond0') you'll see:

# more /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.0.3 (March 23, 2006)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth1
MII Status: up
Link Failure Count: 0

Slave Interface: eth2
MII Status: up
Link Failure Count: 0

This gives about 1.9 Gbps throughput when benchmarked with iperf.
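
For reference, a test of this kind can be run with iperf between the two
nodes; a sketch, assuming 10.0.0.1 is the peer's bond0 address:

# on one node, start the iperf server
iperf -s

# on the other node, run four parallel streams for 30 seconds
iperf -c 10.0.0.1 -P 4 -t 30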

A few sysctl adjustments might be helpful:

# TCP adjustments for >2Gb/s
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_rmem = 4096        87380   16777216
net.ipv4.tcp_wmem = 4096        87380   16777216
net.ipv4.tcp_reordering = 100
net.core.netdev_max_backlog = 10000
net.core.rmem_default = 110592
net.core.wmem_default = 110592
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
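
These can be applied without a reboot; a sketch, assuming the lines above
were added to /etc/sysctl.conf:

sysctl -p

or, to try a single key on the fly:

sysctl -w net.core.rmem_max=16777216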

See man ifenslave and the bonding driver documentation for more info.

One way of checking that it really round-robins is to run tcpdump on one of
the slaves:
tcpdump -i eth1 -p icmp
and then ping the bond0 IP from the other node.
Since packets alternate between the two links, that slave should see only
every other echo request, i.e. one ping per 2 seconds at ping's default
one-second interval.

