[DRBD-user] DM to Linstor migration issue

Roberto Resoli roberto at resolutions.it
Thu Aug 23 09:30:20 CEST 2018


On 22/08/2018 13:34, Yannis Milios wrote:
> Hi Roland,
> 
>     Do you still have the migration script? Could you post the part for that
>     resource? It would be interesting to see which value the script tried to set.
> 
> 
> Yes, I do. You can find it here 
> https://privatebin.net/?a12ad8f1c97bcb15#XLlAENrDGQ7OYn/Mq4Uvq7vwZuZ+jyjRBLIUPMepYgE=
> 
> The problem in my case was not just with one resource, but with all of
> them. In short, none of the resources could come online. To fix that, I
> had to resize the volume definitions for each resource.
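> 
> For reference, the resize was done with the linstor client; IIRC the
> commands were roughly the following (the exact syntax may differ
> between linstor-client versions, and <resource>/<size> are just
> placeholders):
> 
>     # list the current volume definitions and their sizes
>     linstor volume-definition list
>     # grow the volume definition of volume 0 of a resource
>     linstor volume-definition set-size <resource> 0 <size>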
> 
>     Anything uncommon? Did you change the max number of peers or anything?
> 
> 
> Not sure, to be honest. I have done several modifications on this
> cluster over time, but perhaps I can give you some clues; maybe the
> answer is somewhere in there ... :-)
> 
> - Initially, all 3 nodes were using ZFS Thin as the DRBD backend. Now,
> 2 nodes are using LVM Thin and 1 is using ZFS.
> - All resources were created automatically by the drbdmanage-proxmox
> plugin, sometimes with redundancy 2 and sometimes with redundancy 3 (I
> was playing around with this option).
> - There were occasions where a resource that had initially been created
> by the drbdmanage-proxmox plugin with redundancy 2 was later assigned
> to the 3rd node manually, using the drbdmanage command, in order to
> reach a redundancy of 3.
> - IIRC, on only one occasion I had to manually export the DRBD metadata
> of a resource, change the max-peers option from 1 to 7 and then import
> it back (rough commands below). Not sure why it was set to 1 in the
> first place, but yes, I had to make this modification, otherwise the
> peers were refusing to sync.
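> 
> From memory, the manual assignment was done with something like
> "drbdmanage assign <resource> <nodename>", and the metadata fix was
> roughly along these lines (the minor number, backing device and file
> path are placeholders, and the exact field name in the dump may
> differ):
> 
>     drbdadm down <resource>
>     drbdmeta <minor> v09 <backing_device> internal dump-md > /tmp/<resource>.md
>     # edit the max-peers value in the dump from 1 to 7
>     drbdmeta <minor> v09 <backing_device> internal restore-md /tmp/<resource>.md
>     drbdadm up <resource>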
> 
>     It is good that there is a fix and you guys managed to migrate. I
>     still wonder why this did not trigger in my tests.
> 
> 
> As you can see from the above, perhaps my setup is not ideal for
> drawing conclusions, but still, I could accept it if some of the
> resources had failed, but not all of them?! Maybe Roberto can also give
> some tips from his setup?

In my (Roberto's) case the history is simpler: all resources were
created (when DRBD9 was still semi-officially supported by PVE) using
the original drbdmanage proxmox plugin, with an lvm-thin backend and
redundancy three.
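
For reference, the corresponding entry in /etc/pve/storage.cfg was
essentially the one suggested at the time; from memory it looked
something like this (the storage name is just an example):

    drbd: drbdstorage
            content images,rootdir
            redundancy 3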

My hw infrastructure is unfortunately not completely uniform. One of
the three nodes is an apu2 board used for proxmox/drbd quorum. The
DRBD-dedicated disk on it is connected via a USB-SATA adapter, because
I wanted to keep three-way replication anyway.

Most oos (out-of-sync) events occur on this node, mainly after a
reboot, and I suspect this is due to the sub-optimal hw setup; on the
other two nodes the SATA disks (same size) are connected via a
BBU-equipped PCI controller.
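
For the record, the oos blocks show up in the DRBD status and
statistics output; they can be spotted with something along these
lines (<resource> is a placeholder):

    # per-connection/per-peer state of the resource
    drbdadm status <resource>
    # detailed statistics, including the out-of-sync counter
    drbdsetup status <resource> --verbose --statistics
    # start an online verify to detect oos blocks (needs verify-alg set)
    drbdadm verify <resource>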

> Thanks for the good work!
> Yannis

Thanks from me as well, glad if our reports can help.

rob

