[DRBD-user] bond for drbd identical performance with one link down

Lee Riemer lriemer at bestline.net
Thu May 20 20:15:36 CEST 2010

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Do you need multiple destination IPs to balance properly? My understanding
is that a single TCP stream will only traverse a single link; hence the
MPIO requirement.
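
If that's the case, two parallel streams together should push more than one
link's worth of traffic. A rough way to check (the port and address are taken
from your test below; -l sets the test length in seconds):

    # Run two netperf streams at once and compare the summed
    # throughput against the single-stream figure
    netperf -p 2222 -H 10.0.2.3 -l 30 &
    netperf -p 2222 -H 10.0.2.3 -l 30 &
    wait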

On 5/20/2010 1:07 PM, Bart Coninckx wrote:
> Hi,
>
> Admittedly not a DRBD issue per se, but I guess this list represents quite
> some experience in the area: I have two gigabit NICs bonded in balance-rr mode
> for DRBD sync. They are directly linked (no switch) to the corresponding pair
> in the other DRBD node.
>
> Before syncing things I was testing the performance and failover. Netperf
> shows for instance this:
>
>
> iscsi2:/etc/sysconfig/network # netperf -p 2222 -H 10.0.2.3
> TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.2.3 (10.0.2.3)
> port 0 AF_INET
> Recv   Send    Send
> Socket Socket  Message  Elapsed
> Size   Size    Size     Time     Throughput
> bytes  bytes   bytes    secs.    10^6bits/sec
>
>   87380  16384  16384    10.00     977.83
>
>
> Pulling one cable gives me about the same speed. I would expect it to be at
> least 20% slower. It seems the round robin does not speed things up.
>
> The bonds on both sides show up fine in /proc/net/bonding/bond0.
>
> Anyone have any idea what I'm doing wrong?
>
> Cheers,
>
>
> Bart
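
For what it's worth, one way to see whether balance-rr is actually striping
across both slaves is to compare the per-slave byte counters around a
transfer. A rough sketch (eth0/eth1 are placeholders; use the slave names
listed in /proc/net/bonding/bond0):

    # Confirm the bond really is in round-robin mode
    grep "Bonding Mode" /proc/net/bonding/bond0

    # Sample per-slave TX counters before and after the netperf run;
    # if only one counter grows, the stream is riding a single link
    cat /sys/class/net/eth0/statistics/tx_bytes
    cat /sys/class/net/eth1/statistics/tx_bytes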


