[DRBD-user] DRBD with SSD primary, spindle drive secondary, buckets, funnels, and pipes

Lars Ellenberg lars.ellenberg at linbit.com
Fri Sep 21 14:39:51 CEST 2012


On Fri, Sep 21, 2012 at 07:11:53AM -0300, Andrew Eross wrote:
> Hi guys,
> I've been doing the pre-requisite Google research and I haven't reached a
> conclusion, so thought I'd ask here. I have an experimental pair of
> identical XenServers setup with DRBD running over a Gigabit cross-over
> cable. The only difference is that the primary has a SSD and the secondary
> is a normal spindle drive.
> dd tests on the underlying hardware show:
> * the spindle server is capable of writing at ~70MB/s
> * the SSD server at ~250MB/s
> If I put the primary into drbd standalone mode, I also get about ~250MB/s
> when writing to the DRBD device.
> When running in primary/secondary mode, however, we only get around the
> ~65MB/s mark, which makes perfect sense with protocol C.
> I was expecting that if I switched to protocol A, I would be able to let
> the SSD drive write at its full speed (e.g. 250MB/s) only at the price of
> the secondary potentially falling a little bit behind, however performance
> is almost exactly the same with protocol A, B, or C at around 60-70MB/s.

Throughput != Latency.

(thanks, ascii-art.de)
         /=//==//=/  \
        |=||==||=|    |
        |=||==||=|~-, |
    jgs  \=\\==\\=\`=.:
                   \         /
                    \ funnel/
                     \     /
                      \   /
                       \ /
                        `---- pipe ----

Ok, so if that funnel is big enough for one bucket,
you can pour out one bucket quasi instantaneously.

During the time it takes you to fetch the next bucket,
the funnel asynchronously drains through the (thin) pipe.

"Feels" like a "fat pipe", but is not.

Now, if you fetch the new bucket faster than the funnel can drain,
you reach congestion, and you have to pour more slowly.

Unless spilling is allowed ;-)
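The funnel arithmetic above can be sketched in a few lines. This is a toy
model, not DRBD code: buffer size, and the fast and slow rates, are made-up
illustrative numbers standing in for the send buffer, the SSD, and the
spindle/replication path.

```python
# Toy model of the funnel analogy: a fast writer pours into a
# buffer (the funnel) that drains through a slower pipe.
# All numbers are illustrative, not measured DRBD figures.

def perceived_rate(burst_mb, buffer_mb, fast_mbps, slow_mbps):
    """Effective write rate (MB/s) for a single burst of burst_mb."""
    if burst_mb <= buffer_mb:
        # The whole burst fits in the funnel: full speed, the
        # funnel drains asynchronously afterwards.
        return fast_mbps
    # The first buffer_mb is absorbed at the fast rate; the rest
    # is throttled down to the drain rate of the pipe.
    seconds = buffer_mb / fast_mbps + (burst_mb - buffer_mb) / slow_mbps
    return burst_mb / seconds

# A small burst fits the funnel -> "feels" like a fat pipe.
print(round(perceived_rate(100, 128, 250, 70)))    # 250
# Sustained writing converges toward the slow side's rate.
print(round(perceived_rate(10000, 128, 250, 70)))  # 71
```

The longer the burst relative to the buffer, the closer the perceived rate
gets to the slow side's ~70MB/s, which is exactly the behaviour observed.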

> I then tried combining that with "on-congestion pull-ahead;" to see if that
> would allow the primary to write at full speed, but still, same result.
> Is this simply not do-able for some reason to let the primary write at a
> faster speed than the secondary?

For a short peak period, yes, see above.
To extend that peak period (increase the size of that funnel),
we have the drbd-proxy (contact LINBIT).

But even with massive buffers (funnels),
the sustained mid term/long term average write rate
obviously cannot exceed the minimum bandwidth within the whole system.
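For reference, the knobs mentioned in this thread live in the net section
of drbd.conf. This is only a hedged sketch: the option names exist in
DRBD 8.4, but the sizes below are placeholders you would have to tune
(and measure) for your own setup.

```
resource r0 {
  net {
    protocol A;                # asynchronous: ack as soon as data is
                               # in the local TCP send buffer
    sndbuf-size 10M;           # the "funnel": TCP send buffer size
    on-congestion pull-ahead;  # when the funnel is full, go Ahead
                               # (mark out-of-sync) instead of blocking
    congestion-fill 8M;        # pull ahead once this much is in flight
  }
}
```

Without a large enough sndbuf-size, protocol A still blocks almost
immediately on the default buffer, which is why switching protocols
alone did not change the measured throughput.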

: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
please don't Cc me, but send to list   --   I'm subscribed
