[DRBD-user] DRBD+ scaling in the 10Gbit ballpark

Leroy van Logchem leroy.vanlogchem at wldelft.nl
Wed Sep 19 13:21:14 CEST 2007


H.D. wrote:
> I'm trying to evaluate the requirements for a new project:
> - 2 machines
> - 6-12TB, DRBD+ mirrored, active/passive.
> - 8x bonded GBit dedicated to DRBD
> - 16GB RAM
> - 4-8 cores.
> Is this a realistic setup? Will I still be able to achieve write rates
> near the physical limit of the disk subsystem? I expect 700MB/sec. Will
> TCP/IP processing push me to my knees?
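For what it's worth, the 700MB/sec target can be sanity-checked against the raw capacity of the proposed bond. A back-of-envelope sketch (the ~6% framing-overhead figure is my assumption, not a measurement):

```shell
# Rough ceiling for an 8x GigE bond, all figures approximate.
# 8 links x 1000 Mbit/s = 8000 Mbit/s raw; divide by 8 bits per byte.
echo $(( 8 * 1000 / 8 ))            # 1000 MB/s raw payload ceiling
# Assume ~6% lost to Ethernet/IP/TCP framing overhead:
echo $(( 8 * 1000 * 94 / 100 / 8 )) # ~940 MB/s usable ceiling
```

So 700MB/sec fits under the wire ceiling, but only if the bond actually scales across all eight ports -- which, as described below, is exactly what we could not get to happen past two.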
We have been trying to overcome the 1Gbps Ethernet bottleneck for the last month.
First we tried a 10Gbps Intel PCI-e interface, which capped at about 2.5Gbps.
Tuning didn't really help much, so we switched to quad-gigabit Intel PCI-e NICs.
These only performed okay when using just two ports bonded. For bonding we
used the kernel traffic shaper tc with the teql module as well as plain
ifenslave bonds.
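For the record, the teql variant we tried looks roughly like this (a sketch; interface names and the address are placeholders for whatever your replication link uses, and it has to be mirrored on both hosts):

```shell
# Trunk eth1/eth2 through the teql link equalizer instead of the
# bonding driver: attach the teql0 qdisc to each slave NIC.
modprobe sch_teql
tc qdisc add dev eth1 root teql0
tc qdisc add dev eth2 root teql0
ip link set dev teql0 up
ip addr add 10.0.31.2/24 dev teql0   # example replication subnet
```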
The kernel tc can panic current 2.6 kernels quite easily, so we stayed with the
regular 'bond0 miimon=100 mode=0'. We still don't know why using 3 or 4 NICs
doesn't speed up the maximum throughput. If anyone knows, please share.
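For completeness, the round-robin bond we settled on can be brought up along these lines (again a sketch; interface names and the address are placeholders):

```shell
# mode=0 is balance-rr (packets round-robined across slaves);
# miimon=100 checks slave link state every 100 ms.
modprobe bonding mode=0 miimon=100
ip link set bond0 up
ifenslave bond0 eth1 eth2 eth3 eth4
ip addr add 10.0.31.1/24 dev bond0   # dedicated DRBD link (example address)
```

Note that balance-rr is the only bonding mode that can stripe a single TCP connection -- such as DRBD's replication socket -- across slaves at all; the hash-based modes pin one connection to one slave, so they would cap DRBD at a single gigabit regardless of port count.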

