[DRBD-user] Pool free report by drbdmanage can't be trusted

Julien Escario escario at azylog.net
Thu Feb 4 12:39:45 CET 2016


On 04/02/2016 10:07, Robert Altnoeder wrote:
> On 01/29/2016 11:08 AM, Julien Escario wrote:
>> On 25/01/2016 15:19, Julien Escario wrote:
>>> So I'm wondering how and when the 'pool free' value is calculated. Is it
>>> recalculated only when a new resource is created? Or deleted?
> It is normally recalculated whenever drbdmanage changes something that
> modifies the amount of available space.

Right now, this is not the case, at least with python-drbdmanage 0.91-1 and the
lvm-thinlv backend. Creating a new resource does not seem to trigger a
recalculation of free space.

Perhaps it really is recalculated at *creation* time. But on the SyncSource,
thin allocation means nearly zero space is used, while the initial sync
allocates a lot of space on the SyncTarget. After the initial sync, free space
is definitely not recalculated.
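For reference, the actual allocation of a thin pool can be checked directly on each node with LVM's own tools, independently of what drbdmanage reports. A sketch, assuming an lvm-thinlv backend in a volume group named `drbdpool` (the VG name is just an example, adjust to your setup):

```shell
# data_percent shows how much of the thin pool / thin volume is
# actually allocated, as opposed to its virtual size.
lvs -o lv_name,lv_size,data_percent,metadata_percent drbdpool

# Comparing data_percent on the SyncTarget before and after the initial
# sync shows whether the full-sync really allocated the whole volume.
```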

> Another problem is that any backend that uses thin allocation
> essentially returns data from fantasy land, so the pool free value in
> drbdmanage will never be anything better than a rough estimate. Actually
> allocating a volume of the reported free size might work or might not
> work, depending on how much actual storage ends up being allocated upon
> creating the resource on each node.

Yup! That's actually my major concern. The data returned by lvm-thin looks
pretty 'logical' here, and the pool fills up slowly. That does not seem to be
the case for Urban Larsson, who gets really strange numbers.

No, the real issue is the initial sync on thin LVM. But perhaps we have a
solution with 9.0.1 and the rs-discard-granularity parameter.

It seems to be a drbdsetup parameter. Could it be used with drbdmanage? In a
future release?

I would be happy to know how this should be set (does it depend on the backend
storage block size?).
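If it is configurable per resource, I would expect something like the following in the resource file. This is only a sketch: the 64K value, and the idea that it should match the thin pool's chunk size, are my assumptions, not documented recommendations:

```
resource r0 {
    disk {
        # Discard (rather than write zeroes) in chunks of this size during
        # resync, so a thin SyncTarget does not allocate untouched blocks.
        rs-discard-granularity 64K;
    }
}
```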

> Right now, DRBD still full-syncs (which will also change in the future,
> because obviously that does not make a lot of sense with thin
> allocation), and drbdmanage does not yet have lots of logic for
> estimating thin allocation, and for both reasons, all the values
> returned by drbdmanage as it is now are usually very conservative and
> fat-allocation-like regarding free space.

Not quite. I get fairly accurate free-space reports on my 3-node setup. At
least I'm being conservative with the disks on the SyncSource node: I create
VMs with a round-robin-like algorithm. Not really the worst situation.

But yes, being able to skip the initial sync would be a great feature! (As we
KNOW there is no data at creation time.)

>>> Is there a way to force it to rescan free space? (With a D-Bus command, perhaps?)
>>> That could perhaps be done by a cron job running at a defined frequency?
> We have a prototype of a daemon that does something like that
> (drbdmanage-poolmonitord). While update-pool always locks the control
> volumes and updates pool data, the daemon uses a monitoring function
> (the update_pool_check() D-Bus API) that first checks whether the amount
> of space has changed, and only if it did, it triggers the update-pool
> command.
> That will be released with some future release of drbdmanage, as it will
> be especially useful if a storage pool is shared by drbdmanage and other
> allocators.

Well, sharing the storage with other allocators doesn't really seem like a good
idea for now ;-)

If two or more nodes try to lock the control volumes, is there some kind of
waiting queue for the lock? I suppose this has already been anticipated.
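In the meantime, a poor man's version of that daemon could be a simple cron entry on one node. A sketch only, assuming `drbdmanage update-pool` is the command described above; the 5-minute interval is an arbitrary choice:

```
# /etc/cron.d/drbdmanage-update-pool (illustrative)
# Refresh drbdmanage's view of free pool space every 5 minutes.
*/5 * * * * root /usr/bin/drbdmanage update-pool >/dev/null 2>&1
```

Unlike the planned daemon, this unconditionally locks the control volumes on every run (no update_pool_check() pre-check), so the interval probably should not be too aggressive.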

Thanks for the details! I would be happy to help with tests in different
scenarios from a 'customer' point of view, if that would be useful!

Best regards,

