[DRBD-user] DM to Linstor migration issue

Robert Altnoeder robert.altnoeder at linbit.com
Wed Aug 22 15:46:18 CEST 2018


On 08/22/2018 02:58 PM, Roland Kammerer wrote:
> So far I would conclude the migration script did the only thing it could
> do; the "33554432K" values match in the ctrlvol and in the generated script.
>
> And from LINSTOR then we get:
>
>> Description:
>>     Initialization of storage for resource 'vm-115-disk-1' volume 0 failed
>> Cause:
>>     Storage volume 0 of resource 'vm-115-disk-1' too large. Expected
>> 33561640KiB, but was : 33587200KiB.
> Where none of these values matches the "33554432K".
>
> Robert, any ideas?

Yes, that's the net size of 33,554,432 kiB plus 7,208 kiB of DRBD internal
meta data for 7 peers.
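
Just to spell out the arithmetic, a rough sketch in Python; the bitmap
figure follows from DRBD's 1 bit per 4 kiB of data per peer, while the
exact split of the remainder between activity log, superblock and
alignment is approximate:

    net_kib      = 33_554_432   # net size from the drbdmanage control volume
    expected_kib = 33_561_640   # gross size LINSTOR expects (from the error)

    meta_kib   = expected_kib - net_kib   # 7208 kiB of internal meta data
    bitmap_kib = 7 * net_kib // 32768     # 7 peers, 1 bit per 4 kiB block
                                          # = 7168 kiB; the remaining ~40 kiB
                                          # covers activity log, superblock
                                          # and alignment
    print(meta_kib, bitmap_kib)           # 7208 7168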

The only potentially interesting thing I noticed about the 33,587,200 kiB
number is that it is 32 MiB aligned, whereas the net size calculated by
LINSTOR, 33,561,640 kiB, is only 8 kiB aligned. LVM would normally round
that number up to the next multiple of its extent size, which is 4 MiB by
default (LINSTOR's storage drivers factor in LVM's extent size, so if the
volume size matches LVM's extent alignment, that is fine).
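
A quick check of those alignments, using nothing but the numbers from the
error message (Python):

    actual_kib   = 33_587_200
    expected_kib = 33_561_640

    print(actual_kib % (32 * 1024))     # 0    -> 32 MiB aligned
    print(expected_kib % 8)             # 0    -> 8 kiB aligned
    print(expected_kib % (4 * 1024))    # 3112 -> not 4 MiB aligned

    # rounding up to the default 4 MiB extent size would only give:
    print(-(-expected_kib // 4096) * 4096)   # 33562624, still far from 33587200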

A possible cause for the mismatch could be a change in the LVM extent size
at some point (e.g., the logical volume being restored into a different or
newly created volume group with different properties). I am not sure
whether LVM can be configured to align volumes to a larger boundary than
the extent size, but that might be possible too.
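
For example, if the volume group the volume was restored into happened to
use a 32 MiB extent size instead of the default 4 MiB, LVM's usual
round-up to whole extents would land exactly on the size we see. That is
only a guess about this particular setup, but the arithmetic fits (Python):

    expected_kib = 33_561_640
    extent_kib   = 32 * 1024                 # hypothetical 32 MiB extent size
    rounded_kib  = -(-expected_kib // extent_kib) * extent_kib
    print(rounded_kib)                       # 33587200 -> the reported LV size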

All I can say so far is that it's all a bit mysterious. We might need to
make the storage driver more configurable regarding the checks it performs
on the size of backend storage volumes. That would ease migration whenever
existing volumes have properties that do not seem to make sense with the
storage configuration that LINSTOR sees, for whatever reason, be it prior
manual intervention or migration/restore of volumes to a storage backend
with different properties.
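
Just to illustrate what such a relaxed check could look like, here is a
minimal sketch; this is not LINSTOR code, and the tolerance parameter is
purely hypothetical:

    # Hypothetical sketch, not actual LINSTOR code; 'tolerance_extents' is
    # an invented knob for illustration only.
    def backing_volume_acceptable(actual_kib, expected_kib,
                                  extent_kib=4 * 1024, tolerance_extents=0):
        # an existing backing volume must never be smaller than expected
        if actual_kib < expected_kib:
            return False
        # round the expected size up to the next extent boundary, then allow
        # a configurable number of extra extents on top of that
        rounded_kib = -(-expected_kib // extent_kib) * extent_kib
        return actual_kib <= rounded_kib + tolerance_extents * extent_kib

    # with a tolerance of 8 extra 4 MiB extents, the volume from this thread
    # would be accepted instead of rejected:
    print(backing_volume_acceptable(33_587_200, 33_561_640,
                                    tolerance_extents=8))   # True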

> "common": {
>         "props": {
>             "/dmconf/cluster/max-peers": "7", 
>             "/dso/disko/al-extents": "6007", 
>             "/dso/neto/max-buffers": "8000", 
>             "/dso/neto/max-epoch-size": "8000", 
>             "/dso/neto/verify-alg": "sha1", 
>             "serial": "3550"
>         }
>     }, 
>
> So "max-peers" is set in this setup, which means that at some point it
> was changed, because "7" is the default, so it was set to "something"
> and then set back to "7".

Yes, it might be, unless someone explicitly set it to 7 even though that
would have been the default anyway.

>
> Regards, rck


br,
Robert


