[DRBD-user] DRBD over 10GE, limited throughput

Ben RUBSON ben.rubson at gmail.com
Fri Apr 11 14:26:59 CEST 2014

2014-04-11 14:03 GMT+02:00 Ben RUBSON <ben.rubson at gmail.com>:

> 2014-04-11 12:30 GMT+02:00 Steve Thompson wrote:
>> On Fri, 11 Apr 2014, Ben RUBSON wrote:
>>> Replication link is a 10 GbE link, throughput with iperf (6 threads or
>>> more) gives 7.7 Gb/s (918 MB/s) bidirectional.
>>> Could be tuned, but well, enough to sustain my RAID array at 680 MB/s.
>>> Latency of 10.8 ms between the 2 servers.
>>> MTU 9012.
>> Something that strikes me is your latency and throughput: assuming it is
>> not a typo, that latency is remarkably high. Between two of my 10GbE
>> servers, I see an average latency of 0.07ms (from ping) and an iperf
>> throughput of 9.8 Gb/s with MTU=9000 and 1 thread.
> Yes, I did not mention that the replication link is a long-distance link;
> the 2 servers are hundreds of kilometers apart :-)
> Which explains the 10.8 ms latency.
> I made further tests, and it seems that my issue was that I had not tuned
> my TCP stack for the 10 G speed and the high latency.
> However, even with tuning (window size &co), I did not manage to go beyond
> 7.3 Gb/s (iperf, 1 thread).
> Without tuning I was at 2.15 Gb/s (iperf, 1 thread).
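For reference, the window-size arithmetic behind that tuning can be sketched
with the figures from this thread (a rough back-of-the-envelope calculation,
not a measurement):

```python
# Bandwidth-delay product (BDP) of the long-distance link discussed above:
# a TCP connection needs a window at least this large to keep the pipe full.
link_gbps = 10.0   # nominal link speed from the thread
rtt_ms = 10.8      # round-trip latency from the thread

# BDP in bytes = bytes per second * round-trip time in seconds.
bdp_bytes = link_gbps * 1e9 / 8 * (rtt_ms / 1e3)
print(f"BDP: {bdp_bytes / 2**20:.1f} MiB")  # ≈ 12.9 MiB

# Conversely, the throughput a single stream can reach with a given window:
def max_throughput_gbps(window_bytes, rtt_ms):
    return window_bytes * 8 / (rtt_ms / 1e3) / 1e9

# The untuned 2.15 Gb/s corresponds to an effective window of only about
# 2.9 MB (2.15e9 / 8 * 0.0108), far below the ~13.5 MB BDP.
```

So a single untuned stream stalls well short of line rate on a path like
this, which is consistent with the 2.15 Gb/s vs 7.3 Gb/s numbers above.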

OK, just to clarify: my long-distance link provider just told me that I
won't be able to go beyond this rate.
So, TCP stack tuning was the initial issue here.
Hope this will help others playing with 10G :-)
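In case it helps, the Linux knobs usually raised for a long fat network like
this are the socket buffer limits. The 16 MiB ceiling below is my own
assumption, sized a little above the ~13.5 MB bandwidth-delay product of
this link; it is not a value taken from the thread:

```shell
# Raise the hard caps on socket receive/send buffers (bytes).
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
# min / default / max TCP buffer sizes; autotuning can grow up to max.
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"
```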

Thank you again,

Best regards,
