<div dir="ltr">2014-04-11 12:30 GMT+02:00 Steve Thompson <span dir="ltr">wrote</span>:<br><div class="gmail_extra"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="">On Fri, 11 Apr 2014, Ben RUBSON wrote:<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Replication link is a 10 GbE link; throughput with iperf (6 threads or<br>
more) gives 7.7 Gb/s (918 MB/s) bidirectional.<br>
Could be tuned, but well, enough to sustain my RAID array at 680 MB/s.<br>
Latency of 10.8 ms between the 2 servers.<br>
MTU 9012.<br>
</blockquote>
<br></div>
Something that strikes me is your latency and throughput: assuming it is not a typo, that latency is remarkably high. Between two of my 10GbE servers, I see an average latency of 0.07ms (from ping) and an iperf throughput of 9.8 Gb/s with MTU=9000 and 1 thread.<br>
</blockquote><div><br></div><div>Yes, I did not mention that the replication link is a long-distance link: the 2 servers are separated by hundreds of kilometers :-)<br></div><div>Which explains the 10.8 ms latency.<br><br></div><div>
I ran further tests, and it seems that my issue is that I had not tuned my TCP stack for the 10 G speed.<br></div><div>However, even with tuning (window size etc.), I did not manage to go beyond 7.3 Gb/s (iperf, 1 thread).<br>
</div><div>Without tuning I was at 2.15 Gb/s (iperf, 1 thread).<br><br></div><div>Did you do anything specific to reach 9.8 Gb/s?<br><br>Thank you!<br><br>Best regards,<br><br>Ben<br><br></div></div></div></div>
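For reference, a quick bandwidth-delay product (BDP) calculation shows why window tuning matters so much on this link. This sketch only uses the figures quoted in the thread (10 Gb/s link, 10.8 ms RTT); the actual achievable window depends on the kernel's TCP buffer limits, which are not given here.

```python
# Bandwidth-delay product: the amount of data that must be "in flight"
# (and hence the TCP window needed) to keep a long-fat link fully utilized.
# Figures from the thread: 10 Gb/s link speed, 10.8 ms round-trip latency.
link_bps = 10e9      # 10 Gb/s in bits per second
rtt_s = 0.0108       # 10.8 ms round-trip time in seconds

bdp_bytes = link_bps / 8 * rtt_s  # convert bits/s to bytes/s, multiply by RTT
print(f"BDP = {bdp_bytes / 2**20:.1f} MiB")
```

With a 10.8 ms RTT the required window is roughly 13 MB, far above typical default TCP buffer sizes, which is consistent with the untuned 2.15 Gb/s result.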