[DRBD-user] DM to Linstor migration issue

Robert Altnoeder robert.altnoeder at linbit.com
Tue Aug 21 11:23:43 CEST 2018

On 08/21/2018 09:23 AM, Yannis Milios wrote:
> Checked the error reports that are listed by using the 'linstor
> error-reports list', and I can see multiple occurrences of the
> following error:
> Description:
>     Initialization of storage for resource 'vm-115-disk-1' volume 0 failed
> Cause:
>     Storage volume 0 of resource 'vm-115-disk-1' too large. Expected
> 33561640KiB, but was : 33587200KiB.

I guess that may have been caused by a different peer count in drbdmanage.
Volumes are defined by their net (usable) size in both drbdmanage and
LINSTOR, but the actual backend storage is larger than the net size
because of DRBD's internal meta data. One of the factors that the size of
DRBD meta data depends on is the number of peer slots.
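To illustrate why the peer slot count matters, here is a back-of-envelope sketch. The constants are my own approximation of the DRBD 9 on-disk layout (1 bitmap bit per 4 KiB of data per peer slot, plus a small fixed activity-log/superblock reserve), not an exact formula; real sizes also depend on alignment and striping:

```python
# Rough estimate of DRBD 9 meta data overhead (illustration only).
# Assumptions, not authoritative: 1 bitmap bit per 4 KiB of data per peer
# slot, a 32 KiB activity log and a 4 KiB superblock.
def drbd_md_size_kib(net_size_kib: int, peer_slots: int) -> int:
    # 1 bit per 4 KiB of data => net_size_kib / 32768 KiB of bitmap
    # per peer slot, rounded up
    bitmap_kib_per_peer = -(-net_size_kib // 32768)
    return bitmap_kib_per_peer * peer_slots + 32 + 4

# The same 32 GiB net volume needs more backing storage
# with 7 peer slots than with 3:
print(drbd_md_size_kib(33554432, 3))
print(drbd_md_size_kib(33554432, 7))
```

So if drbdmanage created the backing volume for a higher peer count than LINSTOR now assumes, the backend volume ends up larger than LINSTOR expects, which matches the error above.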

You could try to resize the volume (volume-definition set-size) to match
the 33587200 KiB that the error report shows as the actual backend size,
which will effectively make the backend somewhat larger than that. If the
peer count is indeed different from what LINSTOR thinks it should be,
that may leave a mismatch between the net size reported by LINSTOR and
the actual net size, but it should at least make the volume usable until
we can come up with a fix for any issues caused by a differing peer count.
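Assuming the current LINSTOR client syntax, that resize would look roughly like this (resource name and size taken from the error report above; check 'linstor volume-definition set-size --help' for your version):

```shell
# Grow the net size of volume 0 of 'vm-115-disk-1' to the
# actual backend size reported in the error.
linstor volume-definition set-size vm-115-disk-1 0 33587200KiB
```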

You could also try to figure out the actual peer count of that resource
and set the correct peer count on the LINSTOR resource-definition or
volume-definition (or on each resource/volume). I am not sure LINSTOR
will let you do that after the resources have been created, though; we
might need to fix this in the migration script.
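One way to inspect the peer slot count stored in the existing meta data might be drbdadm's meta data dump (a sketch; the volume must not be in use, and the exact field name can differ between meta data versions):

```shell
# Take the resource down, then dump its on-disk meta data and look
# for the peer slot count (e.g. a 'max-peers' field in v09 meta data).
drbdadm down vm-115-disk-1
drbdadm dump-md vm-115-disk-1
```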

Anyhow, the peer count causing this problem is just a guess on my part.
Several other factors can cause similar problems, such as a volume that
was manually resized in the past or a mismatch in LVM's extent size.
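If you want to rule out the LVM factor, the extent size is easy to check (standard LVM tooling; backing LVs are rounded up to a multiple of it, which can also produce small size mismatches):

```shell
# Show the extent size of each volume group.
vgs -o vg_name,vg_extent_size
```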

