[DRBD-user] DM to Linstor migration issue

Roland Kammerer roland.kammerer at linbit.com
Wed Aug 22 14:58:29 CEST 2018


On Tue, Aug 21, 2018 at 08:23:44AM +0100, Yannis Milios wrote:
> Hello,
> 
> I was testing the DM to Linstor migration script by following the steps in
> the documentation on a 3-node test cluster.
> The migration script completed successfully; resources and volume
> definitions were created normally.
> However, after rebooting the 3 nodes, none of the DRBD resources come up
> (drbdtop shows an empty list, and the same applies to drbdadm, drbdsetup,
> etc.).
> I checked the error reports listed by 'linstor error-reports list' and can
> see multiple occurrences of the following error:
> 
> Reported error:
> ===============
> 
> Description:
>     Initialization of storage for resource 'vm-115-disk-1' volume 0 failed
> Cause:
>     Storage volume 0 of resource 'vm-115-disk-1' too large. Expected
> 33561640KiB, but was : 33587200KiB.
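
For reference, the full stack trace behind each of those entries can be
pulled from the client as well; a sketch, with <report-id> standing in for
whatever id 'linstor error-reports list' printed:

linstor error-reports show <report-id>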

Summing up what we have so far:

- control volume:

"vm-115-disk-1": {
            "_name": "vm-115-disk-1", 
            "_port": 7015, 
            "_state": 0, 
            "props": {
                "/dso/neto/allow-two-primaries": "yes", 
                "create_date": "2018-06-03T16:07:27.489149", 
                "serial": "4111"
            }, 
            "snapshots": {}, 
            "volumes": {
                "0": {
                    "_id": 0, 
                    "_size_kiB": 33554432, 
                    "_state": 0, 
                    "minor": 140, 
                    "props": {
                        "current-gi": "F05EFD8273528987", 
                        "serial": "2527"
                    }
                }
            }
        }, 
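
To cross-check these numbers against what actually sits on the disks, the
backing LVs can be listed directly; a sketch, assuming the 'drbdpool'
storage pool lives on a VG of the same name and the LV is named after the
OverrideVlmId from the generated script (note that 33554432 KiB is exactly
32 GiB net):

lvs --units k -o lv_name,lv_size drbdpool
blockdev --getsize64 /dev/drbdpool/vm-115-disk-1_00   # gross size in bytes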

- migration script:

linstor resource-definition create --port 7015 vm-115-disk-1
linstor resource-definition drbd-options --allow-two-primaries yes vm-115-disk-1
linstor volume-definition create --vlmnr 0 --minor 140 vm-115-disk-1 33554432K
linstor volume-definition set-property vm-115-disk-1 0 OverrideVlmId vm-115-disk-1_00
linstor resource create --node-id 2 --storage-pool drbdpool pve1 vm-115-disk-1
linstor resource create --node-id 0 --storage-pool drbdpool pve3 vm-115-disk-1
linstor resource create --node-id 1 --storage-pool drbdpool pve2 vm-115-disk-1
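
After running it, what LINSTOR actually registered can be double-checked
with plain client commands (nothing migration specific):

linstor volume-definition list
linstor resource list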

So far I would conclude that the migration script did the only thing it
could do: the "33554432K" in the ctrlvol and in the generated script match.

And from LINSTOR we then get:

> Description:
>     Initialization of storage for resource 'vm-115-disk-1' volume 0 failed
> Cause:
>     Storage volume 0 of resource 'vm-115-disk-1' too large. Expected
> 33561640KiB, but was : 33587200KiB.

Neither of these values matches the "33554432K".
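
Spelled out (shell arithmetic over the numbers from the error report):

echo $((33561640 - 33554432))   # 7208 KiB: expected gross minus net size,
                                # presumably the internal meta-data LINSTOR
                                # calculated for this volume
echo $((33587200 - 33554432))   # 32768 KiB: actual LV minus net size,
                                # i.e. the LV carries a full 32 MiB on top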

Robert, any ideas?

Also interesting, but I don't think related:

"common": {
        "props": {
            "/dmconf/cluster/max-peers": "7", 
            "/dso/disko/al-extents": "6007", 
            "/dso/neto/max-buffers": "8000", 
            "/dso/neto/max-epoch-size": "8000", 
            "/dso/neto/verify-alg": "sha1", 
            "serial": "3550"
        }
    }, 

So "max-peers" is set in this setup, which means that at some point it
was changed, because "7" is the default, so it was set to "something"
and then set back to "7".
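
If someone wants to verify what the existing LVs were actually created
with, dumping the on-disk meta-data while the resource is down should show,
if I remember correctly, the peer slots it was sized for; a sketch (needs a
matching res file, and the resource must not be in use):

drbdadm dump-md vm-115-disk-1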

Regards, rck

