[DRBD-user] bonding more than two NICs

Lars Ellenberg lars.ellenberg at linbit.com
Tue Apr 22 14:01:48 CEST 2014

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


On Fri, Apr 18, 2014 at 08:59:10PM +0200, Bart Coninckx wrote:
> Hi all,
> 
> In the past I read somewhere that when bonding more than two NICs there is a severe speed penalty, as TCP re-ordering needs to happen.
> 
> I'm currently building a two-node DRBD cluster that uses Infiniband
> for DRBD. The cluster offers SCST targets. I would like to offer the
> best speed possible to the iSCSI clients (which are on gigabit NICs). 
> Has anyone tried 3 or 4 card bonding? What performance do you get out of this?

You have to distinguish between "round-robin" balancing
(which is the only mode I know of that can give a *single* TCP connection
more than a single physical link's bandwidth) and any of the "hashing"
balancing modes, in which a single TCP connection will always use just
one physical link, but you get additional *aggregate* bandwidth
across all physical links, as long as you have "enough" connections
for the balancing algorithms to work their "statistical magic".
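For illustration only, here is a toy Python sketch of how a hash-based
policy (something along the lines of xmit_hash_policy=layer3+4) picks a
slave. It is a simplified stand-in, not the kernel's actual computation,
and pick_slave and the addresses/ports are made up; the point is only
that the slave index is a pure function of the flow's addresses and
ports, so a given TCP connection is always pinned to one physical link:

    # Toy model of hash-based slave selection (layer3+4 style).
    # NOT the kernel's real implementation, just the principle:
    # the chosen slave depends only on the flow's 4-tuple.
    import ipaddress

    NUM_SLAVES = 4  # e.g. a 4-NIC bond

    def pick_slave(src_ip, src_port, dst_ip, dst_port, num_slaves=NUM_SLAVES):
        """Map a TCP 4-tuple to a slave index (simplified hash)."""
        ip_part = int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
        return (ip_part ^ src_port ^ dst_port) % num_slaves

    # One iSCSI session: same 4-tuple, therefore always the same link.
    print(pick_slave("192.168.1.10", 51234, "192.168.1.20", 3260))
    print(pick_slave("192.168.1.10", 51234, "192.168.1.20", 3260))  # identical

So no matter how many slaves the bond has, that one connection can never
exceed the bandwidth of the single link it hashes to.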

With balance-rr, you will likely have to measure; the outcome
may depend on a lot of things, including link and IRQ latencies.
Choose your deployment-local "optimum" of
single TCP session throughput vs. number of bonded channels.

For the other bonding modes, you won't be able to push a single TCP
session beyond the saturation of a single physical link, but your
aggregate throughput will increase with the number of bonded channels
and the number of communication partners (mapped to different
"balance buckets").
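To make the "balance buckets" point concrete, a small toy calculation
(same simplified stand-in hash as above, not the kernel's code, with
made-up initiator addresses): with many initiators the flows spread out
over the bonded links, so the aggregate can approach N times a single
link even though each individual connection stays capped at one.

    # Toy illustration: 32 hypothetical iSCSI initiators, one target portal,
    # spread over a 4-NIC bond by the simplified hash from the sketch above.
    import ipaddress
    from collections import Counter

    NUM_SLAVES = 4
    TARGET_IP, TARGET_PORT = "192.168.1.20", 3260  # hypothetical iSCSI portal

    def pick_slave(src_ip, src_port, dst_ip, dst_port, num_slaves=NUM_SLAVES):
        ip_part = int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
        return (ip_part ^ src_port ^ dst_port) % num_slaves

    initiators = [(f"192.168.1.{100 + i}", 51000) for i in range(32)]
    buckets = Counter(pick_slave(ip, port, TARGET_IP, TARGET_PORT)
                      for ip, port in initiators)
    print(buckets)  # flows land on all 4 slaves -> aggregate ~ 4x one link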


-- 
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
__
please don't Cc me, but send to list   --   I'm subscribed


