On Thu, Jan 6, 2011 at 8:11 AM, Or Gerlitz <ogerlitz@voltaire.com> wrote:
> On 1/4/2011 2:46 AM, Sean McCreadie wrote:
>> I have tested write performance using fio, and right now the best I
>> can get is write throughput and IOPS that are about 50-75% of what I
>> see with DRBD replication disabled.
>
> Were you able to get closer to the throughput of the non-replicated
> configuration when using protocol A instead of C? Using very much the
> same parameters as in your configuration, I see that protocol A buys
> me better latency but no additional bandwidth.
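>
> For reference, the only change relative to the protocol C setup is
> the protocol line in the resource section of drbd.conf (a sketch
> only; the resource name, devices, and the remaining net/disk options
> are elided):
>
>     resource r0 {
>       protocol A;   # A = async: a write completes once it reaches
>                     # the local disk and the local TCP send buffer;
>                     # C = sync: only after the peer acks its write
>       ...
>     }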
>
> I'm at 500 MB/s when replicating one DRBD LUN, 1 GB/s for two, and
> the same for three. My underlying interconnect is InfiniBand/DDR,
> whose bandwidth is 1900 MB/s. These disks can yield 1600 MB/s in a
> DRBD read or write test when the connection is down, so I'm also
> hitting a bottleneck that is still unknown to me.
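>
> (For the write test, assuming an fio streaming run like Sean's; the
> device path and sizes below are placeholders rather than my exact
> parameters:
>
>     fio --name=seqwrite --filename=/dev/drbd0 --rw=write --bs=1M \
>         --direct=1 --ioengine=libaio --iodepth=32 --runtime=60 \
>         --time_based --group_reporting
>
> and the same with --rw=read for the read side.)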

Have you checked CPU usage? Look at all your processors; one of them
may be maxed out. I've noticed the DRBD threads tend to all run on a
single core.
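
The quickest check is per-core utilization while the test is running,
e.g. with mpstat from the sysstat package (or pressing '1' in top to
expand the per-CPU rows):

    mpstat -P ALL 1

If one core sits near 100% (watch the %sys column) while the rest are
mostly idle, the DRBD kernel threads for that resource (named along
the lines of drbd0_worker / drbd0_receiver / drbd0_asender) are likely
the bottleneck.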

-JR