[DRBD-user] DRBD+ scaling in the 10Gbit ballpark

Leroy van Logchem leroy.vanlogchem at wldelft.nl
Wed Sep 19 14:07:18 CEST 2007

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


H.D. wrote:
> On 19.09.2007 13:21, Leroy van Logchem wrote:
>> We have been trying to overcome the 1 Gbps Ethernet bottleneck for the
>> last month. First we used a 10 Gbps Intel PCI-e interface, which capped
>> at about 2.5 Gbps. Tuning didn't really help much, so we switched to
>> quad-port gigabit Intel PCI-e interfaces. These only performed well
>> when using just two ports bonded. For bonding we used the kernel
>> traffic shaper tc with the teql module as well as plain ifenslave
>> bonds. The kernel tc can panic current 2.6 kernels quite easily, so we
>> stayed with the regular 'bond0 miimon=100 mode=0'. We still don't know
>> why using 3 or 4 NICs doesn't speed up the maximum throughput. If
>> anyone knows, please share.
>
> Hm ok, that doesn't sound too promising. I have only used bonding with
> 2 ports. As CPU load still seems negligible while saturating a 2 Gbit
> bond link, I expected this to scale.
>

That was our expectation too, until we increased the number of
interfaces. With a bit of luck someone will shed some light on this and
open our eyes to something obvious <g>.
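
For reference, the balance-rr bond mentioned above is set up roughly
like this (interface names, the address and the config file location
are only examples, your distro may differ):

  # /etc/modprobe.conf
  alias bond0 bonding
  options bonding mode=0 miimon=100

  # bring the bond up and enslave the gigabit ports
  ifconfig bond0 192.168.10.1 netmask 255.255.255.0 up
  ifenslave bond0 eth1 eth2 eth3 eth4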
I'm wondering about the PCI-e bus speeds: an x4 link *should* support
about 1 GByte/s, but whether that's really the case..
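
For what it's worth: a PCIe 1.x lane runs at 2.5 GT/s with 8b/10b
encoding, so an x4 link carries roughly 4 x 2 Gbit/s = 8 Gbit/s, i.e.
about 1 GByte/s per direction before protocol overhead. Whether the
card actually negotiated x4 can be checked with lspci (the bus address
below is just an example):

  # show the negotiated PCI Express link width of the NIC
  lspci -vv -s 04:00.0 | grep -i width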

-- 
Leroy
