[DRBD-user] DRBD over 10GE, limited throughput

Ben RUBSON ben.rubson at gmail.com
Fri Apr 11 10:48:50 CEST 2014

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hello,

I think I need your help :)

Servers :
2 identical servers, Xeon E5-2670 v2 2.5 GHz (40 cores), 128 GB of RAM,
RAID LSI 9271-8i.

Storage :
RAID 10 storage, throughput with dd and bonnie++ gives 680 MB/s read and
write (expected value).
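
A read test of this sort, for example (illustrative flags ; /dev/c0v1 is
the array, as in the metadata command below ; direct I/O to bypass the
page cache) :
dd if=/dev/c0v1 of=/dev/null bs=1M count=8192 iflag=direct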

Network :
Replication link is a 10 GbE link ; throughput with iperf (6 threads or
more) gives 7.7 Gb/s (918 MB/s) bidirectional.
It could be tuned further, but it is already enough to sustain my RAID
array at 680 MB/s.
Latency of 10.8 ms between the 2 servers.
MTU 9012.
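
Note the bandwidth-delay product with 10.8 ms of latency :
918 MB/s x 0.0108 s ~= 10 MB have to be in flight, so the TCP buffers on
both sides must be able to hold at least that much. A sketch of the
sysctl tuning I have in mind (illustrative values, not yet validated) :
# raise the TCP autotuning ceilings well above the ~10 MB BDP
sysctl -w net.core.rmem_max=33554432
sysctl -w net.core.wmem_max=33554432
sysctl -w net.ipv4.tcp_rmem="4096 87380 33554432"
sysctl -w net.ipv4.tcp_wmem="4096 65536 33554432"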

Metadata to suit RAID array layout :
drbdmeta 1 v08 /dev/c0v1 internal create-md \
    --al-stripes 4 --al-stripe-size-kB 32
al-extents 6433
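
To double-check that the layout was applied, I dump the metadata header
(assuming dump-md reports the activity-log geometry) :
drbdmeta 1 v08 /dev/c0v1 internal dump-md | head -n 20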

Software :
Debian stable / 7 Wheezy
Kernel 3.10.23
DRBD 8.4.4

Problem :
I can't manage to reach 680 MB/s on initial replication.

Initial configuration :
protocol C
disk-barrier no
disk-flushes no
resync-rate 680M
c-plan-ahead 0 (to force resync at the fixed resync-rate value, for
testing purposes)
# finish: 103:20:15 speed: 41,976 (41,976) want: 696,320 K/sec
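
In drbd.conf terms the resource looks like this (a sketch, assuming the
8.4 section layout ; resource name illustrative) :
resource r0 {
    net {
        protocol C;
    }
    disk {
        disk-barrier  no;
        disk-flushes  no;
        resync-rate   680M;
        c-plan-ahead  0;   # disable the dynamic resync controller for the test
    }
}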

Guides studied :
http://www.drbd.org/users-guide/s-throughput-tuning.html
http://www.drbd.org/users-guide/s-latency-tuning.html

Tests done :
Setting max-buffers to its max value 131072 gives the best improvement :
# finish: 16:25:16 speed: 264,036 (257,292) want: 696,320 K/sec
Tuning the other parameters (max-epoch-size, unplug-watermark,
sndbuf-size, scheduler) gives practically nothing at all...
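
For reference, the net section as tested (sketch ; sndbuf-size 0 means
the kernel auto-tunes the socket buffer, hence the sysctl limits above) :
net {
    max-buffers  131072;
    sndbuf-size  0;    # 0 = kernel auto-tuning
}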

Questions :
How can I sustain synchronisation at the RAID array's maximum
throughput ?
Perhaps first : how do I find the bottleneck ?
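
In the meantime, this is how I watch where things saturate during the
resync (standard tools ; device and interface names to adjust) :
watch -n1 cat /proc/drbd    # resync speed as DRBD reports it
iostat -xm 1                # per-device MB/s and utilisation
sar -n DEV 1                # per-interface network throughput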

Thank you very much for your support,

Best regards,

Ben