[DRBD-user] Drbd and network speed

Fortier,Vincent [Montreal] Vincent.Fortier1 at EC.GC.CA
Wed Sep 16 19:25:47 CEST 2009

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


> -----Original Message-----
> From: drbd-user-bounces at lists.linbit.com 
> [mailto:drbd-user-bounces at lists.linbit.com] On behalf of 
> Diego Remolina
> Sent: 16 September 2009 09:17
> > 
> > Linux bonding of _two_ NICs in "balance-rr" mode, after some tuning of 
> > the network stack sysctls, should give you about 1.6 to 1.8 x the 
> > throughput of a single link.
> > For a single TCP connection (as DRBD's bulk data socket is), bonding 
> > more than two will degrade throughput again, mostly due to packet 
> > reordering.
> 
> I've tried several bonding modes, and with balance-rr the most 
> I got was about 1.2 Gbps in netperf tests. IIRC, the other 
> issue with balance-rr is that there can be retransmissions, 
> which slow down the transfers.
> 
> Any specific information on how to accomplish the 1.6 to 1.8 x 
> would be really appreciated.
> 
> I am currently replicating two DRBD devices over separate 
> bonds in active-backup mode (two bonds with 2 Gigabit 
> interfaces each, using mode=1 miimon=100).
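
For what it's worth, the "tuning of the network stack sysctls" mentioned above usually means raising the TCP buffer limits and the reordering tolerance. The values below are only a sketch of what I would experiment with, not settings taken from this thread:

# /etc/sysctl.conf fragment -- sketch only, values are starting points to tune from
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
# balance-rr delivers segments out of order; a higher tcp_reordering
# keeps the stack from mistaking that for packet loss
net.ipv4.tcp_reordering = 127
net.core.netdev_max_backlog = 2500

Load the settings with "sysctl -p" and re-run netperf/dd to see whether the round-robin bond gets any closer to the 1.6 to 1.8 x figure.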

I've done some testing with two Gigabit NICs in different bonding modes, and balance-rr was the slowest one.

I'd love to know how to achieve that speed, because as things stand now it will most probably end up being active-backup or simply a single-NIC setup.
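
For anyone who wants to reproduce this on lenny, a two-NIC bond is typically defined along these lines in /etc/network/interfaces (with ifenslave-2.6); the address and the eth1/eth3 slave names are placeholders, and only the mode line would change between runs. Option spellings differ between ifenslave versions (newer ones expect bond-slaves/bond-mode/bond-miimon):

# /etc/network/interfaces -- sketch only, adjust names and addresses to your setup
auto bond0
iface bond0 inet static
        address 192.168.10.1
        netmask 255.255.255.0
        slaves eth1 eth3
        bond_mode balance-rr
        bond_miimon 100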

eth1:/# dd if=/dev/zero of=/apps/dd-test-file bs=1M count=5120 oflag=sync
5120+0 records in
5120+0 records out
5368709120 bytes (5.4 GB) copied, 61.6557 s, 87.1 MB/s

eth3:/# dd if=/dev/zero of=/apps/dd-test-file bs=1M count=5120 oflag=sync
5120+0 records in
5120+0 records out
5368709120 bytes (5.4 GB) copied, 64.3204 s, 83.5 MB/s

active-backup:/# dd if=/dev/zero of=/apps/dd-test-file bs=1M count=5120 oflag=sync
5120+0 records in
5120+0 records out
5368709120 bytes (5.4 GB) copied, 65.5199 s, 81.9 MB/s
5368709120 bytes (5.4 GB) copied, 63.8162 s, 84.1 MB/s

802.3ad:/# dd if=/dev/zero of=/apps/dd-test-file bs=1M count=5120 oflag=sync
5120+0 records in
5120+0 records out
5368709120 bytes (5.4 GB) copied, 63.6863 s, 84.3 MB/s

balance-rr:/# dd if=/dev/zero of=/apps/dd-test-file bs=1M count=5120 oflag=sync
5120+0 records in
5120+0 records out
5368709120 bytes (5.4 GB) copied, 82.6274 s, 65.0 MB/s

balance-xor:/# dd if=/dev/zero of=/apps/dd-test-file bs=1M count=5120 oflag=sync
5120+0 records in
5120+0 records out
5368709120 bytes (5.4 GB) copied, 64.5289 s, 83.2 MB/s

Note that to make the tests valid I had to reboot the system between runs: otherwise, even though /proc/net/bonding/bondX would get updated, the bond would still act as if it were using the first bonding mode associated with those NICs (at least on lenny).
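
Something like the sequence below might allow switching modes cleanly without the full reboot, by actually unloading the bonding driver; I have not verified that it behaves any better than what I saw on lenny, so treat it as a sketch (interface and bond names are examples only):

ifdown bond0                  # stop the bond and release the slaves
rmmod bonding                 # unload the driver so the old mode is really gone
modprobe bonding mode=active-backup miimon=100
ifup bond0                    # re-read /etc/network/interfaces and re-enslave
cat /proc/net/bonding/bond0   # confirm the mode really changed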

Also note that without oflag=sync I would always get results of 125-130 MB/s.

> My peak speed for replication is ~120 MB/s and, as I stated 
> before, my backend is about 5 times faster. So if I could 
> really accomplish the 1.6 to 1.8 x with a few tweaks, that 
> would be great.
> 
> OTOH, 10 GbE copper NICs have reached decent pricing; the Intel 
> cards are ~US $600. Please keep in mind you will need a 
> special cable (SFP+ Direct Attach, which is around US $50 for 
> a 2-meter cable; I am sure you can get better pricing on those).
> 
> http://www.intel.com/Products/Server/Adapters/10-Gb-AF-DA-DualPort/10-Gb-AF-DA-DualPort-overview.htm
> 

Yeah, might be another option...

- vin


