[DRBD-user] Performance issues with drbd and nfs

Stephano-Shachter, Dylan dstathis at seas.harvard.edu
Fri Jun 10 15:25:25 CEST 2016

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Okay, I will try to be a bit clearer. The read speeds are not the issue, as
they are saturating the network at 112 MB/s. The problem is the write speed
of 89 MB/s.
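For context, some rough math on what the gigabit link can actually carry
(assuming a 1500-byte MTU and ordinary TCP/IP framing overhead):

    1 Gbit/s raw                  = 125 MB/s
    usable TCP payload on GbE     ~ 941 Mbit/s ~ 117 MB/s
    reads at 112 MB/s             ~ 95% of usable line rate
    writes at 89 MB/s             ~ 76% of usable line rate

so the reads really are network-bound, while the writes leave roughly 20% on
the table.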

If I run a write test over nfs with drbd replicating, I get 89 MB/s.
If I run a write test on the server where the drbd device is mounted (and
still replicating), I get 112 MB/s.
If I run a write test over nfs with the drbd secondary down, I get 112 MB/s.
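Since the only difference between the 89 MB/s case and the two 112 MB/s cases
is drbd replication over the back-to-back link, the settings I keep seeing
mentioned for protocol C throughput (but have not tried yet) are the
send-buffer and activity-log sizes. Purely as a sketch, with placeholder
values rather than anything tested here:

    # untested placeholders; these would go in the common section of /etc/drbd.conf
    net {
        sndbuf-size    0;       # 0 = let the kernel auto-tune the TCP send buffer (the default on recent versions)
    }
    disk {
        al-extents     3389;    # a larger activity log means fewer metadata updates under heavy writes
    }

With protocol C a write is not complete until the peer's disk has it, so
anything that adds latency on the replication path shows up directly in the
write numbers.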

I am doing my testing with the benchmark bonnie++, running the command
"bonnie++ -u 0:0 -d /path/to/mount".
Each machine is a bare-metal server with 12 drives in RAID 6. The servers
each have a gigabit link to the network as well as a back-to-back gigabit
link for drbd replication.
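To rule out the page cache (per the question below about reading a file at
least twice the RAM size), I can pin the test sizes explicitly. A sketch with
made-up numbers, assuming for illustration a server with 64 GB of RAM:

    # bonnie++ with the data set pinned to 2x an assumed 64 GB of RAM (sizes are illustrative)
    bonnie++ -u 0:0 -d /path/to/mount -r 65536 -s 131072 -f -n 0

    # quick cross-check of raw sequential write speed, bypassing the page cache entirely
    dd if=/dev/zero of=/path/to/mount/ddtest bs=1M count=16384 oflag=direct

The -f flag skips the slow per-character tests and -n 0 skips the small-file
creation runs, so the sequential block numbers come back quickly.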



On Fri, Jun 10, 2016 at 4:09 AM, Igor Cicimov <icicimov at gmail.com> wrote:

>
> On 7 Jun 2016 3:18 pm, "Stephano-Shachter, Dylan" <
> dstathis at seas.harvard.edu> wrote:
> >
> > Hello all,
> >
> > I am building an HA NFS server using drbd and pacemaker. Everything is
> > working well except I am getting lower write speeds than I would expect.
> > I have been doing all of my benchmarking with bonnie++. I always get read
> > speeds of about 112 MB/s, which is just about saturating the network. When
> > I perform a write, however, I get about 89 MB/s, which is significantly slower.
> >
> > The weird thing is that if I run the test locally, on the server (not
> > using nfs), I get 112 MB/s read. Also, if I run the tests over nfs but with
> > the secondary downed via "drbdadm down name", then I also get 112 MB/s.
>
> This is confusing; you are just saying that the reads are the same with
> drbd and nfs and without. Or did you mean writes here? What does locally
> mean? A different partition without drbd? Or drbd without nfs? Nothing in
> drbd is local; it is block-level replicated storage.
>
> > I can't understand what is causing the bottleneck if it is not drbd
> > replication or nfs.
> >
>
> How exactly are you testing, and what is the physical disk, meaning RAID
> or not? Is this a virtual or bare-metal server?
> The reads are faster due to caching, so did you account for that in your
> read test, i.e. reading a file at least twice the RAM size?
>
> Not exactly an answer, just trying to get some more info about your setup.
>
> > If anyone could help me figure out what is slowing down the write
> > performance, it would be very helpful. My configs are:
> >
> >
> > --------------------drbd-config-----------------------------
> >
> >
> > # /etc/drbd.conf
> > global {
> >     usage-count yes;
> >     cmd-timeout-medium 600;
> >     cmd-timeout-long 0;
> > }
> >
> > common {
> >     net {
> >         protocol           C;
> >         after-sb-0pri    discard-zero-changes;
> >         after-sb-1pri    discard-secondary;
> >         after-sb-2pri    disconnect;
> >         max-buffers      8000;
> >         max-epoch-size   8000;
> >     }
> >     disk {
> >         resync-rate      1024M;
> >     }
> >     handlers {
> >         pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
> >         pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
> >         local-io-error   "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
> >         split-brain      "/usr/lib/drbd/notify-split-brain.sh root";
> >     }
> > }
> >
> > # resource <res_name> on <host1>: not ignored, not stacked
> > # defined at /etc/drbd.d/<res_name>.res:1
> > resource <res_name> {
> >     on <host2> {
> >         device           /dev/drbd1 minor 1;
> >         disk             /dev/sdb1;
> >         meta-disk        internal;
> >         address          ipv4 55.555.55.55:7789;
> >     }
> >     on <host1> {
> >         device           /dev/drbd1 minor 1;
> >         disk             /dev/sdb1;
> >         meta-disk        internal;
> >         address          ipv4 55.555.55.55:7789;
> >     }
> >     net {
> >         allow-two-primaries  no;
> >         after-sb-0pri    discard-zero-changes;
> >         after-sb-1pri    discard-secondary;
> >         after-sb-2pri    disconnect;
> >     }
> > }
> >
> >
> >
> > -----------------------nfs.conf-----------------------------
> >
> >
> >
> > MOUNTD_NFS_V3="yes"
> > RPCNFSDARGS="-N 2"
> > LOCKD_TCPPORT=32803
> > LOCKD_UDPPORT=32769
> > MOUNTD_PORT=892
> > RPCNFSDCOUNT=48
> > #RQUOTAD_PORT=875
> > #STATD_PORT=662
> > #STATD_OUTGOING_PORT=2020
> > STATDARG="--no-notify"
> >
>
>