[DRBD-user] Limit Syncer Speed

Lars Ellenberg lars.ellenberg at linbit.com
Mon Dec 19 12:09:59 CET 2016

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


On Sat, Dec 17, 2016 at 10:22:38PM +0100, Jasmin J. wrote:
> Hi!
> 
> I am not sure what I did test yesterday, but today it is definitely not
> working!
> Now I am getting "drbd_rs_complete_io(,5576704 [=170]) called, but refcnt is
> 0!?" and "drbd_rs_complete_io() called, but extent not found" outputs in
> syslog.

Ahead/Behind mode is NOT intended to be used with the small buffers
the TCP stack gives you.

It is intended to be used if you have a local synchronous cluster
for redundancy, and add a remote asynchronous replication via DRBD Proxy
to some DR-site.
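
For illustration, here is a minimal drbd.conf sketch of that
long-distance leg, loosely following the DRBD Proxy example in the
LINBIT user guide; the host names, addresses, and devices are made up:

    resource wan-leg {
        protocol A;                # asynchronous for the long-distance hop
        device    /dev/drbd15;
        disk      /dev/VG/wan-leg;
        meta-disk internal;

        on alice {
            address 127.0.0.1:7915;        # DRBD connects to its local proxy
            proxy on alice {
                inside  127.0.0.1:7815;    # proxy side facing DRBD
                outside 192.168.23.1:7715; # proxy side facing the WAN
            }
        }
        on bob {
            address 127.0.0.1:7915;
            proxy on bob {
                inside  127.0.0.1:7815;
                outside 192.168.23.2:7715;
            }
        }
    }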

Proxy does the buffering of data write bursts, optionally adds
compression (be careful with the choice of algorithm and settings for
your scenario, or the CPU may become your bottleneck), and then lets
the data drain.
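
As a hedged example (the values and plugin choice are illustrative;
check what your DRBD Proxy version supports), the buffer size and
compression live in the resource's proxy section:

    proxy {
        memlimit 512M;        # proxy buffer: size it for your write bursts
        plugin {
            zlib level 9;     # trades CPU for WAN bandwidth; benchmark first
        }
    }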

[Jasmin pointed me to pieces of code off list]
Yes, that is a layer violation; yes, it could be done differently; no,
it is not a problem, and it is certainly not the cause of the effects
you are seeing.

Ahead/Behind is known to have some races when it frequently flaps.
I even know roughly what would need to be done to fix that, but fixing
the "ahead-behind flapping" scenario is pretty low priority for us.

Ahead/Behind works fine when
 * the amount of resync requests in flight is limited,
   fits well into the buffer, and is very unlikely to ever be a reason
   for congestion itself. That means c-fill-target is smaller (MUCH
   smaller) than the (proxy) buffer, and obviously also much smaller
   than congestion-fill (see the example settings after this list).
 * the buffers are large enough to hold a substantial amount more than
   just congestion-fill
 * it takes the system a few seconds to minutes to even reach
   congestion-fill, even under load
 * it takes the system several seconds to minutes to drain a full buffer 
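
A hedged sketch of settings that respect those orderings; the option
names are standard drbd.conf net/disk options, but the numbers are
invented, and only their relative sizes matter:

    # c-fill-target  <<  congestion-fill  <<  proxy memlimit
    net {
        on-congestion      pull-ahead; # switch to Ahead instead of blocking
        congestion-fill    400M;       # well below the proxy buffer capacity
        congestion-extents 1000;
    }
    disk {
        c-fill-target 1M;   # in-flight resync data: MUCH smaller than
                            # congestion-fill and the buffer (check your
                            # version's default unit for this option)
    }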

Large (proxy) buffers only make sense when the average buffer usage is
much smaller than the buffer capacity, not when the buffer stays
"almost full almost all the time". It is for buffering write *bursts*.

We used to have only an "emergency disconnect" for the case where the
buffer capacity was not enough (and congestion would thus have slowed
down the production site). The monitoring necessary to guess the
"optimal time" to then reconnect turned out to be too complex for most
users, so we added Ahead/Behind (pull ahead instead of disconnecting),
and some heuristics to guess a good time to start the resync.
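
Both behaviours map onto the on-congestion net option; the old
emergency-disconnect behaviour is still available as a policy (a
sketch, keywords as documented in the drbd.conf man page):

    net {
        # on-congestion block;       # default: stall writes until the peer catches up
        # on-congestion disconnect;  # the old "emergency disconnect" behaviour
        on-congestion pull-ahead;    # become Ahead, track changes in the bitmap
    }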

You use DRBD "in case something happens to the Primary".  And in that
case, you typically expect the Secondary to be in a state where it can
take over.

If you keep flapping between congested -> ahead -> resync
(but the resync then causes congestion again) -> ahead -> ...,
you may as well disable DRBD altogether, because typically,
"in case something happens", your Secondary will be Inconsistent.

If you want "rate-limitted rsync (or csync2, for that matter) triggered
by inotify change notifications" semantics, may I suggest that DRBD with
its "quasi synchronous block level replication" semantics might not be
the optimal tool for the job?


-- 
: Lars Ellenberg
: LINBIT | Keeping the Digital World Running
: DRBD -- Heartbeat -- Corosync -- Pacemaker

DRBD® and LINBIT® are registered trademarks of LINBIT
__
please don't Cc me, but send to list -- I'm subscribed


