[DRBD-user] DRBD with SSD primary, spindle drive secondary, buckets, funnels, and pipes

Andrew Eross eross at locatrix.com
Fri Sep 21 18:12:47 CEST 2012

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Thanks Lars, that really helped! I totally get what you're saying here now.

I've sent off a request to the LINBIT sales folks to ask about a proxy
trial. More out of experimental curiosity than anything, I'd be interested
to give that a shot and see what happens with our SSD/spindle combination...
realistically, we'll just need to fork over the money, buy a second SSD
for our secondary box, and call it a day.

On Fri, Sep 21, 2012 at 9:39 AM, Lars Ellenberg
<lars.ellenberg at linbit.com> wrote:

> On Fri, Sep 21, 2012 at 07:11:53AM -0300, Andrew Eross wrote:
> > Hi guys,
> >
> > I've been doing the prerequisite Google research and I haven't reached a
> > conclusion, so I thought I'd ask here. I have an experimental pair of
> > identical XenServers set up with DRBD running over a Gigabit crossover
> > cable. The only difference is that the primary has an SSD and the
> > secondary is a normal spindle drive.
> >
> > dd tests on the underlying hardware show:
> > * the spindle server is capable of writing at ~70MB/s
> > * the SSD server at ~250MB/s
> >
> > If I put the primary into drbd standalone mode, I also get about ~250MB/s
> > when writing to the DRBD device.
> >
> > When running in primary/secondary mode, however, we only get around the
> > ~65MB/s mark, which makes perfect sense with protocol C.
> >
> > I was expecting that if I switched to protocol A, I would be able to let
> > the SSD drive write at its full speed (e.g. 250MB/s), only at the price of
> > the secondary potentially falling a little bit behind; however, performance
> > is almost exactly the same with protocol A, B, or C at around 60-70MB/s.
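
For reference, a drbd.conf sketch of the protocol A experiment described
above might look roughly like the following; the resource name, hostnames,
addresses, and disk paths are placeholders, not details from this thread:

    resource r0 {
      protocol A;            # asynchronous: a local write completes once the data
                             # has reached the local disk and the TCP send buffer
      net {
        sndbuf-size 10M;     # the send buffer ("funnel"); 0 means auto-tune
      }
      on ssd-primary {
        device    /dev/drbd0;
        disk      /dev/sda3;           # SSD-backed volume (placeholder)
        address   192.168.10.1:7789;
        meta-disk internal;
      }
      on spindle-secondary {
        device    /dev/drbd0;
        disk      /dev/sdb3;           # spindle-backed volume (placeholder)
        address   192.168.10.2:7789;
        meta-disk internal;
      }
    }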
>
> Throughput != Latency.
>
>
>
> (thanks, ascii-art.de)
>           ___________
>          /=//==//=/  \
>         |=||==||=|    |
>         |=||==||=|~-, |
>         |=||==||=|^.`;|
>     jgs  \=\\==\\=\`=.:
>           `"""""""`^-,`.
>                    `.~,'
>                   ',~^:,
>                   `.^;`.
>                    ^-.~=;.
>                       `.^.:`.
>                    \         /
>                     \ funnel/
>                      \     /
>                       \   /
>                        \ /
>                         `---- pipe ----
>
>
>
> Ok, so if that funnel is big enough for one bucket,
> you can pour out one bucket quasi-instantaneously.
>
> During the time it takes you to fetch the next bucket,
> the funnel asynchronously drains through the (thin) pipe.
>
> "Feels" like a "fat pipe", but is not.
>
> Now, if you fetch the new bucket faster than the funnel can drain,
> you reach congestion, and you have to pour more slowly.
>
> Unless spilling is allowed ;-)
>
> > I then tried combining that with "on-congestion pull-ahead;" to see if
> > that would allow the primary to write at full speed, but still, same result.
> >
> > Is it simply not doable, for some reason, to let the primary write at a
> > faster speed than the secondary?
>
> For a short peak period, yes, see above.
> To extend that peak period (increase the size of that funnel),
> we have the drbd-proxy (contact LINBIT).
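
A rough sketch of the "spilling allowed" case as configuration (protocol A
or B is required for pull-ahead; the numbers here are illustrative guesses,
not recommendations from this thread):

    net {
      on-congestion      pull-ahead;  # instead of blocking the writer, go
                                      # "Ahead" and mark the blocks out of sync
      congestion-fill    8M;          # treat the link as congested once this
                                      # much data is queued but not yet sent
      congestion-extents 1000;        # ...or once this many activity-log
                                      # extents are dirty
      sndbuf-size        10M;         # the in-kernel "funnel"; congestion-fill
                                      # should stay below it
    }

With only the in-kernel send buffer this buys a short burst at best; making
the funnel substantially bigger is what drbd-proxy is for, as noted above.
Blocks marked out of sync are resynced in the background once the
congestion clears.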
>
> But even with massive buffers (funnels),
> the sustained mid-/long-term average write rate
> obviously cannot exceed the minimum bandwidth within the whole system.
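
To put rough numbers on that with the figures from earlier in the thread:
writes arrive at ~250MB/s while the secondary/link drains at ~70MB/s, so any
buffer fills at about 180MB/s net. A hypothetical 1GB funnel therefore
absorbs only about 1000/180, i.e. 5-6 seconds, of full-speed writing before
the primary is throttled back to ~70MB/s again.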
>
> --
> : Lars Ellenberg
> : LINBIT | Your Way to High Availability
> : DRBD/HA support and consulting http://www.linbit.com
>
> DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
> __
> please don't Cc me, but send to list   --   I'm subscribed
>