[DRBD-user] bonding more than two NICs

raulhp raulhp at ugr.es
Wed Apr 23 11:50:35 CEST 2014

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hi Bart,

I think these three links will give you some idea of the performance 
you can get with three or more physical NICs.

Maximum Throughput
https://www.kernel.org/doc/Documentation/networking/bonding.txt

Two examples using Gigabit
http://www.scl.ameslab.gov/Projects/MP_Lite/
http://www.ibm.com/developerworks/data/library/techarticle/dm-1202streamslinuxperf/

If the NICs are integrated on the motherboard, you can get about 90 
percent of the possible performance; if you are using PCI cards, the 
performance is poorer, around 50 to 60 percent. Also, remember that 
all the connectivity in between must support Gigabit.

Regards
Ra



On 2014-04-22 14:01, Lars Ellenberg wrote:
> On Fri, Apr 18, 2014 at 08:59:10PM +0200, Bart Coninckx wrote:
>> Hi all,
>>
>> In the past I read somewhere that bonding more than two NICs carries
>> a severe speed penalty, as TCP re-ordering needs to happen.
>>
>> I'm currently building a two-node DRBD cluster that uses InfiniBand
>> for DRBD. The cluster offers SCST targets. I would like to offer the
>> best speed possible to the iSCSI clients (which are on gigabit NICs).
>> Has anyone tried 3- or 4-card bonding? What performance do you get
>> out of this?
>
> You have to distinguish between "round-robin" balancing (which is the
> only mode I know that can give a *single* TCP connection more than
> single physical link bandwidth) and any of the "hashing" balancing
> modes, in which the single TCP connection will always use just one
> physical link, but you get additional *aggregated* bandwidth over all
> TCP links, as long as you have "enough" connections that the
> balancing algorithms can work their "statistical magic".
>
> With balance-rr, you likely have to measure; the outcome may depend
> on a lot of things, including link and IRQ latencies. Choose your
> deployment-local "optimum" of single-TCP-session throughput vs. the
> number of bonding channels.
>
> For other bonding modes, you won't be able to increase single TCP
> session throughput beyond single physical link saturation, but your
> aggregate throughput will increase with the number of bonding
> channels and the number of communication partners (mapped to
> different "balance buckets").
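
To make the "balance buckets" point concrete: below is a toy sketch 
(in Python, not the kernel's actual code) of a layer3+4-style transmit 
hash as described in bonding.txt. Every packet of a given TCP 
connection hashes to the same slave, which is why one connection can 
never exceed one physical link in the hashing modes, while many 
connections spread out. Addresses and ports are placeholders, and the 
exact formula in your kernel version may differ.

    # Toy model of a layer3+4 transmit hash, loosely following the
    # formula documented in bonding.txt; real kernels may differ.
    import ipaddress

    def xmit_slave(src_ip, dst_ip, src_port, dst_port, slave_count):
        """Return the slave index a given flow would be pinned to."""
        src = int(ipaddress.ip_address(src_ip))
        dst = int(ipaddress.ip_address(dst_ip))
        h = (src_port ^ dst_port) ^ ((src ^ dst) & 0xffff)
        return h % slave_count

    # A single iSCSI connection always lands on the same slave ...
    print(xmit_slave("192.168.1.10", "192.168.1.20", 45123, 3260, 3))
    # ... but many client connections spread over all three slaves.
    for port in range(45123, 45131):
        print(port, "->", xmit_slave("192.168.1.10", "192.168.1.20",
                                     port, 3260, 3))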
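
And since Lars says you likely have to measure: if iperf is not at 
hand, even a crude socket test shows the difference between one stream 
and several in parallel. A minimal sketch (host, port and sizes are 
placeholders; run the receiver on one node, the sender on the other):

    # Crude single-stream throughput probe. Start with argument "recv"
    # on one node, without arguments on the other; run several senders
    # in parallel to see aggregate throughput in the hashing modes.
    import socket, sys, time

    HOST, PORT = "192.168.1.10", 5001   # placeholder peer address
    CHUNK, TOTAL = 1 << 20, 1 << 30     # 1 MiB chunks, 1 GiB total

    def receiver():
        srv = socket.socket()
        srv.bind(("", PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        while conn.recv(CHUNK):         # drain until sender closes
            pass

    def sender():
        s = socket.create_connection((HOST, PORT))
        buf = b"\0" * CHUNK
        t0, sent = time.time(), 0
        while sent < TOTAL:
            s.sendall(buf)
            sent += CHUNK
        s.close()
        print("%.1f MB/s" % (sent / (time.time() - t0) / 1e6))

    if __name__ == "__main__":
        receiver() if sys.argv[1:] == ["recv"] else sender()

With balance-rr a single sender may go past one link's speed; with a 
hashing mode it will stay pinned to one link no matter how many slaves 
are in the bond.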



