[DRBD-user] slow drbd over tripple gigabit bonding balance-rr

Zoltan Patay zoltanpatay at gmail.com
Sat Aug 1 02:14:38 CEST 2009

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Mark,

The syncer rate does not define the overall DRBD performance between the two
nodes; it only sets a limit for resynchronization, to preserve normal system
performance while a resync runs in the background (so the resync traffic
cannot eat up all the bandwidth between your DRBD nodes, and normal DRBD
usage can continue).

It has nothing to do with the working performance of a single DRBD device
pair; see more here:
http://www.drbd.org/users-guide/s-configure-syncer-rate.html

Originally it was set to 80M in this case, but it should probably be even
lower: despite the iperf results, I have never seen DRBD go over 117MB/s
(a single gigabit link hitting its performance wall), so it should probably
be as low as 35MB/s.
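
For reference, this is the kind of conservative setting I have in mind (a
sketch only; 35M is just my guess at roughly a third of the available
bandwidth, not a tested value):

common {
  syncer { rate 35M; }
}

Again, this only throttles resync traffic; it does not limit normal
replication I/O.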

As I wrote before, the two boxes have three gigabit links dedicated to DRBD;
these are bonded in balance-rr mode with ARP IP monitoring. Basically I can
unplug any of the cables between the nodes, in any order, and plug them back
in: as long as a single link is left, the connection is uninterrupted, and
the bandwidth scales up with every additional gigabit link that is alive.
Pretty swift, actually.
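
For completeness, the bonding is set up roughly like this (a sketch; the
interface names and the 192.168.10.x addresses are placeholders for our
dedicated replication interfaces):

# modprobe.conf: balance-rr with ARP monitoring of the peer
alias bond0 bonding
options bond0 mode=balance-rr arp_interval=100 arp_ip_target=192.168.10.2

# bring up the bond and enslave the three dedicated gigabit NICs
ifconfig bond0 192.168.10.1 netmask 255.255.255.0 up
ifenslave bond0 eth1 eth2 eth3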

What I would like to see is higher write rates. I know how the different
bonding modes work, and I also know balance-rr is the only one where a
single connection can scale beyond the capacity of a single card; that
clearly happens when I benchmark the link with iperf, but I have never seen
it with DRBD.
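
The iperf benchmark itself is nothing fancy, roughly this over the bonded
link (192.168.10.2 again being a placeholder for the peer's bond address):

# on the receiving node
iperf -s

# on the sending node, a 10-second test
iperf -c 192.168.10.2 -t 10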

Now, since this is a Xen Dom0, I have been able to do more testing in the
DomU itself (half of the testing was done before I wrote to the mailing
list).

In the paravirtualized DomU, the DRBD devices are imported as xvdb to xvdf,
and they were used:

1) as physical volumes for LVM in the DomU itself, setting up the logical
volumes using striping in LVM

2) as part of a RAID0 stripe

3) the RAID0 stripe from 2) as a physical volume for LVM

There is no significant difference between these three setups.
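
For reference, the three setups were created roughly like this (device
names, sizes and stripe parameters are placeholders, not the exact values
used):

# 1) DRBD devices as PVs, striping done by LVM
pvcreate /dev/xvdb /dev/xvdc /dev/xvdd
vgcreate vg_test /dev/xvdb /dev/xvdc /dev/xvdd
lvcreate -i 3 -I 64 -L 20G -n lv_test vg_test

# 2) plain RAID0 stripe over the DRBD devices
mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/xvdb /dev/xvdc /dev/xvdd

# 3) the RAID0 stripe from 2) as the single PV for LVM
pvcreate /dev/md0
vgcreate vg_test /dev/md0
lvcreate -L 20G -n lv_test vg_test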

I have even tested a RAID0 stripe over the DRBD volumes in the Xen Dom0
itself, with the same results.

I also know about the default read-ahead performance issues with LVM block
devices, and "blockdev --setra" is used as a workaround on all levels.

Whenever I disconnect DRBD, performance is as expected (close to the
measured RAID10 performance).
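
(The disconnected numbers come from simply dropping the replication link for
the resource and re-running the same dd, along these lines:)

drbdadm disconnect OpenVZ_C1C2_B_LVM5
dd if=/dev/zero of=/dev/drbd26 bs=10M count=100
drbdadm connect OpenVZ_C1C2_B_LVM5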

The reason this is so annoying is that both the DRBD wiki and the mailing
list hint that even over a dual gigabit link in balance-rr the performance
is much better; see for yourself:

http://www.drbd.org/home/wiki/?tx_drwiki_pi1[keyword]=performance

http://www.nabble.com/DRBD-Performance-td18745802.html

http://lists.linbit.com/pipermail/drbd-user/2008-July/009893.html


Also, in case it is not clear, this is a nested LVM setup (the DRBD devices
are the physical volumes for the LVM in the paravirtualized Xen instance,
while DRBD itself runs in Dom0):


Xen Dom0:  6x HDD --> RAID10 --> LVM --> DRBD (one device per xvd*)
Xen DomU:  xvdb (PV), xvdc (PV), xvdd (PV), ... --> LVM --> file systems

As a side note, I am a seasoned sysadmin of fifteen years and have used
Linux for practically everything for the last ten years, working with it at
least twelve hours a day (usually more than that; I am lucky to do for a
living what I love, so work and fun are the same).

So, does anybody know how those magical numbers were achieved over those links?

z

On Thu, Jul 30, 2009 at 9:18 AM, Mark Watts <m.watts at eris.qinetiq.com> wrote:

> On Thu, 2009-07-30 at 03:57 -0400, Zoltan Patay wrote:
> > using "dd if=/dev/zero of=/dev/drbd26 bs=10M count=100" I get:
> >
> > drbd connected
> > 1048576000 bytes (1.0 GB) copied, 13.6526 seconds, 76.8 MB/s
> > 1048576000 bytes (1.0 GB) copied, 13.4238 seconds, 78.1 MB/s
> > 1048576000 bytes (1.0 GB) copied, 13.2448 seconds, 79.2 MB/s
> >
> > drbd disconnected
> > 1048576000 bytes (1.0 GB) copied, 4.04754 seconds, 259 MB/s
> > 1048576000 bytes (1.0 GB) copied, 4.06758 seconds, 258 MB/s
> > 1048576000 bytes (1.0 GB) copied, 4.06758 seconds, 258 MB/s
> >
> > The three (intel) gigabit PCIe cards are bonded with balance-rr, and
> > iperf gives me:
> >
> > iperf 0.0-10.0 sec  2.52 GBytes  2.16 Gbits/sec (276.48MB/s)
> >
> > So clearly there is enough speed for both on the network and in the
> > backend to support higher speeds. The boxes are with cross-over
> > back-to-back no-switch.
> >
> > version: 8.3.0 (api:88/proto:86-89)
> > GIT-hash: 9ba8b93e24d842f0dd3fb1f9b90e8348ddb95829 build by
> > phil at fat-tyre, 2008-12-18 15:26:13
> >
> > global { usage-count yes; }
> >        common { syncer { rate 650M; } }
>
> Try actually setting this to a sensible value for 3 x 1Gbit links.
> eg: 300M
>
> > resource OpenVZ_C1C2_B_LVM5 {
> >   protocol C;
> >   startup {degr-wfc-timeout 120;}
> >   disk {on-io-error
> > detach;no-disk-flushes;no-md-flushes;no-disk-drain;no-disk-barrier;}
> >   net {
> >     cram-hmac-alg sha1;
> >     shared-secret "OpenVZ_C1C2_B";
> >     allow-two-primaries;
> >     after-sb-0pri discard-zero-changes;
> >     after-sb-1pri discard-secondary;
> >     after-sb-2pri disconnect;
> >     rr-conflict disconnect;
> >     timeout 300;
> >     connect-int 10;
> >     ping-int 10;
> >     max-buffers 2048;
> >     max-epoch-size 2048;
> >   }
> >   syncer {rate 650M;al-extents 257;verify-alg crc32c;}
>
> And here too.
>
>
> Mark.
> --
> Mark Watts BSc RHCE MBCS
> Senior Systems Engineer, Managed Services Manpower
> www.QinetiQ.com
> QinetiQ - Delivering customer-focused solutions
> GPG Key: http://www.linux-corner.info/mwatts.gpg
>