[DRBD-user] performance issues with DRBD :(

Lars Ellenberg lars.ellenberg at linbit.com
Thu Dec 10 16:52:56 CET 2009


On Thu, Dec 10, 2009 at 01:23:52PM +0100, Beck Ingmar wrote:
> 
> 
> Hi Florian,
> 
> why does bonding the replication connection not improve performance?
> 
> We currently have a direct connection for the 1 Gb DRBD replication,
> and I wanted to activate active-active bonding (with an additional
> LAN port). I would expect nearly double the replication performance.
> Andreas Kurz had suggested as much at a workshop.
> 
> Or does it depend on whether the connection between the nodes is
> direct or goes via 1-n switches?
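
For reference, activating an active-active (balance-rr) bond typically
looks roughly like the sketch below. This is only an illustration:
the interface names bond0/eth1/eth2 and the address are placeholders,
and the exact steps depend on your distribution's network tooling.

	# load the bonding driver in round-robin mode
	modprobe bonding mode=balance-rr miimon=100
	# assign the DRBD replication address and bring up the bond
	ip addr add 10.0.0.1/24 dev bond0
	ip link set bond0 up
	# enslave the two replication NICs (placeholder names)
	ifenslave bond0 eth1 eth2

That said, balance-rr will not simply double single-stream throughput.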

Quoting linux/Documentation/networking/bonding.txt:

...

balance-rr: This mode is the only mode that will permit a single
	TCP/IP connection to stripe traffic across multiple
	interfaces. It is therefore the only mode that will allow a
	single TCP/IP stream to utilize more than one interface's
	worth of throughput.  This comes at a cost, however: the
	striping generally results in peer systems receiving packets out
	of order, causing TCP/IP's congestion control system to kick
	in, often by retransmitting segments.

	It is possible to adjust TCP/IP's congestion limits by
	altering the net.ipv4.tcp_reordering sysctl parameter.  The
	usual default value is 3, and the maximum useful value is 127.
	For a four interface balance-rr bond, expect that a single
	TCP/IP stream will utilize no more than approximately 2.3
	interface's worth of throughput, even after adjusting
	tcp_reordering.

...
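
If you want to experiment with that knob anyway, it is a plain sysctl;
benchmark your own setup before and after:

	# raise the TCP reordering tolerance
	# (default 3; max useful value 127, per the doc quoted above)
	sysctl -w net.ipv4.tcp_reordering=127
	# to persist across reboots, add to /etc/sysctl.conf:
	#   net.ipv4.tcp_reordering = 127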

That 2.3x estimate likely was correct at some point in time, or for 100 MBit/s links.

In my experience, bonding two 1GBit links can yield about 1.7 to 1.8
times the throughput of a single link; adding a third drops back to
about 1.0 but adds latency, and adding a fourth deteriorates to less
than a single link.

This is for a _single_ TCP bulk transfer.  Of course, the sum total over
all transfers on those bonded links will be higher.

But for a single TCP bulk transfer, in my experience, more than two
1GBit links in balance-rr bonding are counterproductive.

Results may vary considerably with different NICs, boards, and bus
and processor speeds.
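
So if in doubt, measure a single TCP bulk stream on your own hardware,
e.g. with iperf (the peer hostname is a placeholder):

	# on the receiving node
	iperf -s
	# on the sending node: one TCP stream for 30 seconds
	iperf -c peer-node -t 30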


-- 
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
__
please don't Cc me, but send to list   --   I'm subscribed


