[DRBD-user] DM to Linstor migration issue
yannis.milios at gmail.com
Wed Aug 22 13:34:32 CEST 2018
> Do you still have the migration script? Could you post the part for that
> resource? Would be interesting which value the script tried to set.
Yes, I do. You can find it here
The problem in my case was not limited to one resource; it affected all of them.
In short, none of the resources could come online. To fix that, I had to
resize the volume definitions for each resource.
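For reference, the fix was along these lines, using the LINSTOR client's
volume-definition subcommand (the resource name, volume number and size below
are placeholders, not the actual values from my cluster):

```shell
# Inspect the current volume definitions to spot the wrong sizes
linstor volume-definition list

# Resize the volume definition of each affected resource
# (vm-100-disk-1, volume 0 and 20G are example values only)
linstor volume-definition set-size vm-100-disk-1 0 20G
```

After that, the resources could be brought online again.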
> Anything uncommon? Did you change the max number of peers or anything?
Not sure, to be honest. I have made several modifications to this cluster
over time, but perhaps I can give you some clues; maybe the answer is
somewhere in there ... :-)
- Initially, all 3 nodes were using ZFS Thin as the DRBD backend. Now, 2
nodes are using LVM Thin, and 1 is still using ZFS Thin.
- All resources were created automatically by the drbdmanage-proxmox plugin,
sometimes with redundancy 2 and sometimes with redundancy 3 (I was playing
around with this option).
- There were occasions where a resource that had initially been created by
the drbdmanage-proxmox plugin with redundancy 2 was later manually assigned
to the 3rd node using the drbdmanage command, in order to achieve a
redundancy of 3.
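If memory serves, the manual assignment was done roughly like this (the
resource and node names are placeholders; the exact drbdmanage invocation may
have differed slightly, as the tool is now deprecated):

```shell
# Assign an existing redundancy-2 resource to the third node,
# raising its redundancy to 3 (example names only)
drbdmanage assign-resource vm-100-disk-1 node3
```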
- IIRC, on only one occasion I had to manually export the DRBD metadata of a
resource, change the max-peers option from 1 to 7, and then import it back.
I am not sure why it was set to 1 in the first place, but I had to make this
modification, otherwise the peers refused to sync.
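The export/import itself was a standard drbdmeta dump-md/restore-md cycle; a
sketch of the procedure follows (the minor number, backing device and resource
name are placeholders for the real values on that node):

```shell
# Take the resource down first so the metadata is not in use
drbdadm down vm-100-disk-1

# Export the on-disk metadata to a text file
# (/dev/drbd0 and the backing device are example values)
drbdmeta /dev/drbd0 v09 /dev/vg0/vm-100-disk-1 internal dump-md > md.txt

# Edit md.txt and change "max-peers 1;" to "max-peers 7;"

# Write the modified metadata back and bring the resource up again
drbdmeta /dev/drbd0 v09 /dev/vg0/vm-100-disk-1 internal restore-md md.txt
drbdadm up vm-100-disk-1
```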
> It is good that there is a fix and you guys managed to migrate. I still
> wonder why this did not trigger in my tests.
As you can see from the above, perhaps my setup is not the ideal one to draw
conclusions from, but still, I would have accepted it if only some of the
resources had failed, not all of them?! Maybe Roberto can also give some tips
from his setup?
Thanks for the good work!