[DRBD-user] bond for drbd identical performance with one link down

Bart Coninckx bart.coninckx at telenet.be
Thu May 20 20:07:31 CEST 2010

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hi,

Admittedly not a DRBD issue per se, but I guess this list has quite some 
experience in the area: I have two gigabit NICs bonded in balance-rr mode 
for DRBD sync. They are directly linked (no switch) to the corresponding 
pair in the other DRBD node.
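
Just to give an idea of the setup, the bond is configured roughly like this 
(a minimal sketch; the interface names, the IP and the miimon value are only 
examples, not copied from my actual config):

# /etc/sysconfig/network/ifcfg-bond0 (illustrative)
STARTMODE='auto'
BOOTPROTO='static'
IPADDR='10.0.2.2/24'
BONDING_MASTER='yes'
BONDING_SLAVE_0='eth1'
BONDING_SLAVE_1='eth2'
BONDING_MODULE_OPTS='mode=balance-rr miimon=100'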

Before syncing things I was testing the performance and failover. Netperf 
shows for instance this:


iscsi2:/etc/sysconfig/network # netperf -p 2222 -H 10.0.2.3
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.2.3 (10.0.2.3) port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed 
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    10.00     977.83


Pulling one cable gives me about the same speed, while I would expect it to 
be at least 20% slower. It seems the round robin does not speed things up.
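
For what it's worth, the netperf run above is a single TCP stream; something 
like the following could be used to push two streams in parallel and see 
whether the aggregate gets above one link (just a sketch, reusing the same 
control port and peer address):

# two concurrent 30-second TCP_STREAM tests against the peer
netperf -p 2222 -H 10.0.2.3 -l 30 &
netperf -p 2222 -H 10.0.2.3 -l 30 &
wait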

The bonds on both sides show up fine in /proc/net/bonding/bond0.
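
(In case it helps, this is roughly the check I do; the grep pattern is just 
an example:)

# confirm the mode is round-robin and both slaves report MII link up
grep -E 'Bonding Mode|Slave Interface|MII Status' /proc/net/bonding/bond0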

Does anyone have any idea what I'm doing wrong?

Cheers,


Bart


