[DRBD-user] Adjusting al-extents on-the-fly

mahadevsb mahadevsb mahadevsb at gmail.com
Sat May 31 14:42:40 CEST 2014




-----Original Message-----
From: Lars Ellenberg
Sent: 5/30/2014 6:43 AM
To: drbd-user at lists.linbit.com
Subject: Re: [DRBD-user] Adjusting al-extents on-the-fly

On Wed, May 28, 2014 at 01:23:55PM +1000, Stuart Longland wrote:
> Hi Lars,
> On 27/05/14 20:31, Lars Ellenberg wrote:
> >> The system logs PLC-generated process data every 5 seconds, and at two
> >> times of the day, at midnight and midday, it misses a sample, with the
> >> logging taking 6 seconds.  There's no obvious CPU spike at this time, so
> >> my hunch is I/O, and so I'm looking at ways to try and improve this.
> > 
> > Funny how if "something" happens,
> > and there is DRBD anywhere near it,
> > it is "obviously" DRBD's fault, naturally.
> 
> No, it's not "obviously" DRBD's fault.  It is a factor, as is the CPU.
> Rather, it's the network and/or disk, on both of which DRBD relies,
> and (to a lesser extent) CPU time.
> 
> I'm faced with a number of symptoms, and so it is right I consider *all*
> factors, including DRBD and the I/O subsystems that underpin it.

Okok ...

> >> iotop didn't show any huge spikes that I'd imagine the disks would have
> >> trouble with.  Then again, since it's effectively polling, I could have
> >> "blinked" and missed it.
> > 
> > If your data gathering and logging thingy misses a sample
> > because of the logging to disk (assuming for now that this is in fact
> > what happens), you are still doing it wrong.
> > 
> > Make the data sampling asynchronous wrt. flushing data to disk.
> 
> Sadly how it does the logging is outside my control.  The SCADA package
> is one called MacroView, and is made available for a number of platforms
> under a proprietary license.  I do not have the source code, however it
> has been used successfully on quite a large number of systems.
> 
> The product has been around since the late '80s on numerous Unix
> variants.  Its methods may not be "optimal", but they seem to work well
> enough in a large number of cases.
> 
> The MacroView Historian basically reads its data from shared memory
> segments exported by PLC drivers, computes whatever summary data is
> needed, then writes this out to disk.  So the process is both I/O- and
> possibly CPU-intensive.
> 
> I can't do much about the CPU other than fiddling with `nice` without a
> hardware upgrade (which may yet happen; time will tell).
> 
> I don't see the load average skyrocketing, which is why I suspected I/O:
> either disk writes that are being bottle-necked by the gigabit network
> link, or perhaps the disk controller.
> 
> The DRBD installation there was basically configured and gotten to a
> working state; there was a little monkey-see-monkey-do learning in the
> beginning, so it's possible that performance can be enhanced with a
> little tweaking.
> 
> The literature suggests a number of parameters are dependent on the
> hardware used, and this is what I'm looking into.
> 
> This is one possibility I am investigating: being mindful that this is a
> live production cluster that I'm working on.  Thus I have to be careful
> what I adjust, and how I adjust it.

Sure.

Well, IO subsystems may have occasional latency spikes.
DRBD may trigger, be responsible for, or even cause
additional latency spikes.

IF your SCADA would "catch one sample, then synchronously log it",
a particularly high latency spike might cause it to miss the next sample.

I find that highly unlikely:
both that sampling and logging would be so tightly coupled,
and that a latency spike would take that long (if nothing else is
going on, and the system is not completely overloaded;
on really loaded systems, with arbitrary queue lengths and buffer bloat,
I can easily make latency spike for minutes).
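
Such spikes are easy enough to observe directly, for example with
iostat from the sysstat package:

    # sample extended per-device statistics once per second;
    # "await" is the average time (ms) a request spends queued and
    # being served, so latency spikes show up immediately
    iostat -x 1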

As this is "pro" stuff, I think it is safe to assume
that gathering data and logging that data are not so tightly coupled,
which leads me to believe that its missing a sample
has nothing to do with persisting the previous sample(s) to disk.

Especially if it happens so regularly, twice a day, at noon and midnight.
What is so "special" about those times?
Flushing logs?  Log rotation?
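
That much is easy to check; for example (the usual locations on
Debian-ish and RHEL-ish systems, adjust for yours):

    # anything scheduled at 00:00 or 12:00?
    grep -r "" /etc/crontab /etc/cron.d/
    crontab -l          # repeat for each relevant user
    # distro log rotation is usually driven from here:
    ls /etc/cron.daily/ /etc/logrotate.d/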

You wrote "with the logging taking 6 seconds".
What exactly does that mean?
"The logging"?
"Taking 6 seconds"?
What exactly takes six seconds?
How do you know?
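
If you have not actually measured it, strace can time the individual
syscalls; something like this (the PID is of course hypothetical):

    # -T appends the time spent in each syscall, -ttt prefixes
    # wall-clock timestamps; watch the historian's writes and fsyncs
    strace -f -ttt -T -e trace=write,fsync -p 1234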

Are some clocks slightly off,
and do they get adjusted twice a day?
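
Also quick to rule out; for instance (log file path varies by distro):

    # if ntpd is in use, check peer offsets and for clock steps
    ntpq -p
    grep -Ei 'ntpd|step|adjust' /var/log/syslog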

> >> DRBD is configured with a disk partition on a RAID array as its backing
> > 
> > Wrong end of the system to tune in this case, imo.
> 
> Well, hardware configuration and BIOS settings are out of my reach as
> I'm in Brisbane and the servers in question are somewhere in Central
> Queensland some 1000km away.
> 
> > This (adjusting of the "al-extents" only) is a rather boring command
> > actually.  It may stall IO on a very busy backend a bit,
> > changes some internal "caching hash table size" (sort of),
> > and continues.
> 
> Does the change of the internal 'caching hash table size' do anything
> destructive to the DRBD volume?

No.  Really.
Why would we do something destructive to your data
because you change some synchronisation parameter?
And I even just wrote it was "boring, at most briefly stalls, then
continues IO".  I did not write
it-will-reformat-and-panic-the-box-be-careful-dont-use.

But unless your typical working set size is much larger than what
the current setting covers, this is unlikely to help.
(Each al-extent covers 4 MiB of the backing device,
so 257 al-extents correspond to a working set of roughly 1 GiByte.)
If it is not about the size, but the change rate, of your working set,
you will need to upgrade to DRBD 8.4.
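
For reference, on 8.3 that on-the-fly change amounts to something like
this (the resource name "r0" is illustrative):

    # in /etc/drbd.conf (or the resource file); on drbd 8.3,
    # al-extents lives in the syncer section, and each extent
    # covers 4 MiB of the backing device
    resource r0 {
        syncer {
            al-extents 1801;   # ~7 GiByte working set
        }
    }

    # apply the changed setting to the running resource:
    drbdadm adjust r0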

> http://www.drbd.org/users-guide-8.3/re-drbdsetup.html mentions that
> --create-device "In case the specified DRBD device (minor number) does
> not exist yet, create it implicitly."
> 
> Unfortunately, to me "device" is ambiguous: is this the block device file
> in /dev, or the actual logical DRBD device (i.e. the partition)?

So what. "In case .* does not exist yet".
Well, it does exist.
So that's a no-op, right?

Anyways.  That flag is passed from drbdadm to drbdsetup *always*
(in your drbd version).
And it does no harm. Not even to your data.
It's an internal convenience flag.

> I don't want to create a new device, I just want to re-use the existing
> one that's there and keep its data.
> 
> > As your server seems to be rather not-so-busy, IO wise,
> > I don't think this will even be noticable.
> 
> Are there other parameters that I should be looking at?

If this is about DRBD tuning,
well, yes, there are many things to consider.
If there were just one optimal set of values,
those would be hardcoded, and not tunables.

> Sync-rates perhaps?

Did you have a resync going on during your "interesting" times?
If not, why bother with it, at this time, for this issue?
If yes, why would you always resync at noon and midnight?

> Once again, the literature suggests this should be higher if the writes
> are small and "scattered" in nature, which, given we're logging data from
> numerous sources, I'd expect to be the case.

The sync rate is not relevant at all here.
Those parameters control the background resynchronization
after connection loss and re-establishment.
As I understand it, your DRBD is healthy, connected,
and happily replicating.  No resync.
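
You can verify that at a glance; a healthy, fully replicated 8.3
resource looks something like this in /proc/drbd (output abbreviated):

    $ cat /proc/drbd
     0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----

cs:SyncSource or cs:SyncTarget (plus a progress bar) would indicate an
ongoing background resync; cs:Connected means there is none.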

> Thus following the documentation's recommendations (and not being an
> expert myself) I figured I'd try carefully adjusting that figure to
> something more appropriate.

Sure, careful is good.
Test system is even better ;-)

If you really want to improve on random write latency with DRBD,
you need to upgrade to 8.4. (8.4.5 will be released within days).

I guess that upgrade is too scary for such a system?

Also, you could use auditctl to find out in detail what is happening
on your system.  You likely want to play with that on a test system first
as well, until you get the event filters right,
or you could end up spamming your production system's logs.
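
For example (the watched path is hypothetical; point it at wherever
your historian writes):

    # audit every write under the historian's data directory
    auditctl -w /var/lib/macroview -p w -k historian-io
    # afterwards, list matching events with readable timestamps
    ausearch -k historian-io -i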

-- 
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
__
please don't Cc me, but send to list   --   I'm subscribed