[DRBD-user] Limit Syncer Speed

Lars Ellenberg lars.ellenberg at linbit.com
Wed Dec 14 22:37:44 CET 2016

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


On Sat, Dec 10, 2016 at 04:02:45AM +0100, Jasmin J. wrote:
> Hello!
> 
> Adam, THX for your answer.
> 
> I have now used a 100 Mbit switch to test this further, and indeed the
> performance went down to 12 MB/s.
> 
> I am now wondering how I should understand the Protocol option in DRBD. It
> clearly reads:
>    Protocol A: write IO is reported as completed, if it has reached local disk
>    and local TCP send buffer.
> I would assume this applies to any operation on the disk, both the initial
> sync and normal write operations.
>
> But it seems DRBD doesn't behave as the configuration
> suggests.

What makes you believe that?

> Maybe anyone can explain this.

Maybe this nice post from 2012 helps to illustrate what congestion is?
Pasted here for your convenience, even though it is in the archives.

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

[DRBD-user] DRBD with SSD primary, spindle drive secondary,
buckets, funnels, and pipes

On Fri, Sep 21, 2012 at 07:11:53AM -0300, Andrew Eross wrote:
> Hi guys,
> 
> I've been doing the prerequisite Google research and I haven't reached a
> conclusion, so I thought I'd ask here. I have an experimental pair of
> identical XenServers set up with DRBD running over a Gigabit crossover
> cable. The only difference is that the primary has an SSD and the secondary
> is a normal spindle drive.
> 
> dd tests on the underlying hardware show:
> * the spindle server is capable of writing at ~70MB/s
> * the SSD server at ~250MB/s
> 
> If I put the primary into drbd standalone mode, I also get about ~250MB/s
> when writing to the DRBD device.
> 
> When running in primary/secondary mode, however, we only get around the
> ~65MB/s mark, which makes perfect sense with protocol C.
> 
> I was expecting that if I switched to protocol A, I would be able to let
> the SSD drive write at its full speed (e.g. 250MB/s), only at the price of
> the secondary potentially falling a little bit behind; however, performance
> is almost exactly the same with protocol A, B, or C, at around 60-70MB/s.

Throughput != Latency.



(thanks, ascii-art.de)
          ___________
         /=//==//=/  \
        |=||==||=|    |
        |=||==||=|~-, |
        |=||==||=|^.`;|
    jgs  \=\\==\\=\`=.:
          `"""""""`^-,`.
                   `.~,'
                  ',~^:,
                  `.^;`.
                   ^-.~=;.
                      `.^.:`.
                   \         /
                    \ funnel/
                     \     /
                      \   /
                       \ /
                        `---- pipe ----



Ok, so if that funnel is big enough for one bucket,
you can pour out one bucket quasi-instantaneously.

During the time it takes you to fetch the next bucket,
the funnel asynchronously drains through the (thin) pipe.

"Feels" like a "fat pipe", but is not.

Now, if you fetch the new bucket faster than the funnel can drain,
you reach congestion, and you have to pour more slowly.

Unless spilling is allowed ;-)
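
To put rough numbers on that (a back-of-the-envelope sketch in Python,
nothing DRBD-specific; the 10 MB "funnel" size is a made-up assumption,
the rates are taken from the figures above):

    # How long can the primary pour in at "SSD speed" before the funnel
    # (e.g. a TCP send buffer) is full and congestion kicks in?
    write_rate  = 250e6   # bytes/s going into the funnel (SSD, from above)
    drain_rate  =  65e6   # bytes/s the pipe drains (spindle secondary)
    funnel_size =  10e6   # bytes of buffering -- purely an assumption

    burst = funnel_size / (write_rate - drain_rate)
    print("fast burst lasts about %.3f s" % burst)             # ~0.054 s
    print("then writes throttle to ~%.0f MB/s" % (drain_rate / 1e6))

After that burst, every further write has to wait for the funnel to drain,
so the writer is effectively limited to the pipe's speed.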

> Is this simply not doable for some reason, to let the primary write at a
> faster speed than the secondary?

For a short peak period, yes, see above.
To extend that peak period (increase the size of that funnel),
we have the drbd-proxy (contact LINBIT).

But even with massive buffers (funnels),
the sustained mid-term/long-term average write rate
obviously cannot exceed the minimum bandwidth within the whole system.
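
A toy model of that (again plain Python, not DRBD code; buffer sizes and
rates are assumptions based on the figures above): a bigger funnel only
stretches the initial burst; the long-run average still ends up at the
pipe's rate.

    # Writer at 250 MB/s into a bounded buffer ("funnel") that drains at
    # 65 MB/s through the pipe. The writer stalls whenever the buffer is full.
    def avg_rate(buf_mb, seconds=600, write=250.0, drain=65.0):
        buffered = accepted = 0.0
        for _ in range(seconds):
            take = min(write, buf_mb - buffered + drain)  # stall on congestion
            buffered = max(0.0, buffered + take - drain)
            accepted += take
        return accepted / seconds

    for buf in (10, 100, 1000):     # MB of buffering (funnel size)
        print("%5d MB buffer -> avg %.1f MB/s" % (buf, avg_rate(buf)))
    # all three averages come out close to the 65 MB/s pipe rate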

-- 
: Lars Ellenberg
: LINBIT | Keeping the Digital World Running
: DRBD -- Heartbeat -- Corosync -- Pacemaker

DRBD® and LINBIT® are registered trademarks of LINBIT
__
please don't Cc me, but send to list -- I'm subscribed


