[DRBD-user] bonding more than two network cards still a bad idea?

Bart Coninckx bart.coninckx at telenet.be
Tue Oct 5 09:19:47 CEST 2010


On Monday 04 October 2010 20:45:30 J. Ryan Earl wrote:

> 77MB/sec is low for a single GigE link if your backing store can do
> 250MB/sec.  I think you should test on your hardware with a single GigE--no
> bonding--and work on getting close to the 110-120MB/sec range before
> pursuing bonding optimization.  Did you go through:
> http://www.drbd.org/users-guide-emb/p-performance.html ?

Hi JR, thanks for your reply. I tried that with another setup, to not much 
avail, but will try it again.
 
> I use the following network sysctl tuning:
> 
> # Tune TCP and network parameters
> net.ipv4.tcp_rmem = 4096 87380 16777216
> net.ipv4.tcp_wmem = 4096 65536 16777216
> net.core.rmem_max = 16777216
> net.core.wmem_max = 16777216
> vm.min_free_kbytes = 65536
> net.ipv4.tcp_max_syn_backlog = 8192
> net.core.netdev_max_backlog = 25000
> net.ipv4.tcp_no_metrics_save = 1
> net.ipv4.route.flush = 1
> 
> This gives me up to 16MB TCP windows and considerable backlog to tolerate
> latency at high throughput.  It's tuned for 40gbit IPoIB; you could
> reduce some of these numbers for slower connections...
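For reference, the 16MB ceiling in those tcp_rmem/tcp_wmem lines lines up with
the bandwidth-delay product. A quick back-of-the-envelope sketch (the RTT
figures below are illustrative assumptions, not measurements from this setup):

```python
# Bandwidth-delay product: the TCP window needed to keep a link full.
def window_bytes(bandwidth_bits_per_s, rtt_s):
    """Minimum TCP window (bytes) to saturate a link of the given
    bandwidth at the given round-trip time."""
    return int(bandwidth_bits_per_s * rtt_s / 8)

# 40 Gbit/s IPoIB at an assumed 3 ms RTT needs ~15 MB of window,
# which is why a 16 MB (16777216-byte) maximum fits that tuning:
print(window_bytes(40e9, 0.003))   # 15000000

# A single GigE link at an assumed 0.2 ms RTT needs far less:
print(window_bytes(1e9, 0.0002))   # 25000
```

So on a plain GigE or 2x GigE bond, smaller maxima would already suffice.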

Will try that.

> Anyway, what NICs are you using?  

Currently a mix of one bnx2 card and one e1000 card. I will move the bond to 
two bnx2 ports on one card. Netperf shows close to 2 Gbit/sec though ...
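Converting that netperf figure into the MB/sec units bonnie++ and DRBD report
makes the gap plain; a rough sketch (unit conversion only, assuming binary MB):

```python
# netperf reports bits/s; bonnie++ and DRBD throughput are in MB/s.
def gbit_to_mbytes(gbit_per_s):
    # 1 Gbit/s = 1e9 bits/s; using binary MB (1 MB = 2**20 bytes).
    return gbit_per_s * 1e9 / 8 / 2**20

print(round(gbit_to_mbytes(2.0)))   # 238  -> raw ceiling of the 2 Gbit bond
print(round(gbit_to_mbytes(1.0)))   # 119  -> raw ceiling of a single GigE
```

So even a single GigE link should allow well above the observed 77 MB/sec,
which suggests the limit is elsewhere than raw link capacity.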

> Older interrupt-based NICs like the
> e1000/e1000e (older Intel) and tg3 (older Broadcom) will not perform as
> well as the newer RDMA-based hardware, but they should be well above the
> 77MB/sec range.  Does your RAID controller have a power-backed write
> cache?  

Yes.

> Have you tried RAID10?

No, but since the bonnie++ test without DRBD gives 250 MB/sec of raw 
throughput, I guess this is not where our bottleneck is ... 

> 
> -JR


thx again,

B.
