[DRBD-user] DRBD+ scaling in the 10Gbit ballpark

H.D. devnull at deleted.on.request
Wed Sep 19 13:31:09 CEST 2007

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


On 19.09.2007 13:21, Leroy van Logchem wrote:
> We have been trying to overcome the 1Gbps ethernet bottleneck over the last
> month. First we used a 10Gbps Intel PCI-e interface, which capped at about
> 2.5Gbps. Tuning didn't really help much, so we switched to quad-port gigabit
> Intel PCI-e interfaces. These only performed okay when using just two ports
> bonded. For bonding we tried the kernel traffic scheduler tc with the teql
> module as well as plain ifenslave bonds. The tc approach can panic current
> 2.6 kernels quite easily, so we stayed with the regular
> 'bond0 miimon=100 mode=0'. We still don't know why using 3 or 4 NICs doesn't
> increase the maximum throughput. If anyone knows, please share.

Hm ok, that doesn't sound too promising. I have only used bonding with 2
ports. As CPU load still seems negligible while saturating a 2Gbit bonded
link, I expected this to scale.
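
For reference, a minimal sketch of the plain 'mode=0 miimon=100' bonding
setup mentioned above, as I understand it (the interface names eth1/eth2 and
the IP address are placeholders, not taken from this thread):

  # load the bonding driver in round-robin (balance-rr) mode,
  # with MII link monitoring every 100 ms
  modprobe bonding mode=0 miimon=100

  # bring up the bond device on the dedicated replication network
  ifconfig bond0 10.0.0.1 netmask 255.255.255.0 up

  # enslave the two gigabit ports to bond0
  ifenslave bond0 eth1 eth2

The DRBD resource configuration would then simply use the bond0 addresses of
both nodes.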


-- 
Regards,
H.D.


