<div dir="ltr"><div>Okay I will try to be a little bit more clear. The read speeds are not important as they are saturating the network at 112 MB/s. The issue is the write speed of 89 MB/s. </div><div><br></div><div>If I run a write test over nfs with drbd replicating I get 89 MB/s</div><div>If I run a write test on the server where the drbd device is mounted (and still replicating) I get 112 MB/s</div><div>If I run a write test over nfs with the drbd secondary down, I get 112 MB/s</div><div><br></div><div>I am doing my testing with the benchmark bonnie++ running the command "bonnie++ -u 0:0 -d /path/to/mount"</div><div>Each machine is a bare metal server with 12 drives in RAID 6. The servers each have a gigabit link for connection to the network as well as a back to back gigabit link for drbd replication.</div><div><br></div><br><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Jun 10, 2016 at 4:09 AM, Igor Cicimov <span dir="ltr"><<a href="mailto:icicimov@gmail.com" target="_blank">icicimov@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class=""><p dir="ltr"><br>
On 7 Jun 2016 3:18 pm, "Stephano-Shachter, Dylan" <<a href="mailto:dstathis@seas.harvard.edu" target="_blank">dstathis@seas.harvard.edu</a>> wrote:<br>
><br>
> Hello all,<br>
><br>
> I am building an HA NFS server using drbd and pacemaker. Everything is working well except I am getting lower write speeds than I would expect. I have been doing all of my benchmarking with bonnie++. I always get read speeds of about 112 MB/s which is just about saturating the network. When I perform a write, however, I get about 89 MB/s which is significantly slower.<br>
><br>
> The weird thing is that if I run the test locally, on the server (not using nfs), I get 112 MB/s read. Also, if I run the tests over nfs but with the secondary downed via "drbdadm down name", then I also get 112 MB/s. </p>
</span><p dir="ltr">This is confusing, you are just saying that the reads are same in case of drbd and nfs and without. Or you meant writes here? What does locally mean? Different partition without drbd? Or drbd without nfs? Nothing in drbd is local it is block level replicated storage.</p><span class="">
<p dir="ltr">I can't understand what is causing the bottleneck if it is not drbd replication or nfs. <br>
></p>
</span><p dir="ltr">How exactly are you testing and what is the physical disk, meaning raid or not? Is this a virtual or bare metal server? <br>
The reads are faster due to caching so did you account for that in your read test, ie reading a file at least twice the ram size?</p>
<p dir="ltr">Not exactly an answer just trying to get some more info about your setup.</p>
<p dir="ltr"></p><div><div class="h5">> If anyone could help me to figure out what is slowing down the write performance if would be very helpful. My configs are<br>
><br>
><br>
> --------------------drbd-config-----------------------------<br>
><br>
><br>
> # /etc/drbd.conf<br>
> global {<br>
> usage-count yes;<br>
> cmd-timeout-medium 600;<br>
> cmd-timeout-long 0;<br>
> }<br>
><br>
> common {<br>
> net {<br>
> protocol C;<br>
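> # (Protocol C is fully synchronous: a write is not acknowledged until the<br>
> # peer confirms it, so every write pays the replication round trip.)<br>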
> after-sb-0pri discard-zero-changes;<br>
> after-sb-1pri discard-secondary;<br>
> after-sb-2pri disconnect;<br>
> max-buffers 8000;<br>
> max-epoch-size 8000;<br>
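> # (Both raised from the 2048 default to allow more in-flight replication<br>
> # data on the back-to-back link.)<br>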
> }<br>
> disk {<br>
> resync-rate 1024M;<br>
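> # (resync-rate only caps background resynchronization; it has no effect<br>
> # on the speed of normal application write replication.)<br>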
> }<br>
> handlers {<br>
> pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";<br>
> pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";<br>
> local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";<br>
> split-brain "/usr/lib/drbd/notify-split-brain.sh root";<br>
> }<br>
> }<br>
><br>
> # resource <res_name> on <host1>: not ignored, not stacked<br>
> # defined at /etc/drbd.d/<res_name>.res:1<br>
> resource <res_name> {<br>
> on <host2> {<br>
> device /dev/drbd1 minor 1;<br>
> disk /dev/sdb1;<br>
> meta-disk internal;<br>
> address ipv4 55.555.55.55:7789;<br>
> }<br>
> on <host1> {<br>
> device /dev/drbd1 minor 1;<br>
> disk /dev/sdb1;<br>
> meta-disk internal;<br>
> address ipv4 55.555.55.55:7789;<br>
> }<br>
> net {<br>
> allow-two-primaries no;<br>
> after-sb-0pri discard-zero-changes;<br>
> after-sb-1pri discard-secondary;<br>
> after-sb-2pri disconnect;<br>
> }<br>
> }<br>
><br>
><br>
><br>
> -----------------------nfs.conf-----------------------------<br>
><br>
><br>
><br>
> MOUNTD_NFS_V3="yes"<br>
> RPCNFSDARGS="-N 2"<br>
> LOCKD_TCPPORT=32803<br>
> LOCKD_UDPPORT=32769<br>
> MOUNTD_PORT=892<br>
> RPCNFSDCOUNT=48<br>
> #RQUOTAD_PORT=875<br>
> #STATD_PORT=662<br>
> #STATD_OUTGOING_PORT=2020<br>
> STATDARG="--no-notify"<br>
><br></div></div>
</blockquote></div><br></div></div>