<div dir="ltr"><div class="gmail_quote"><div>Hi Roland,</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Do you still have the migration script? Could you post the part for that<br>
resource? It would be interesting to see which value the script tried to set.<br>
<br></blockquote><div><br></div><div>Yes, I do. You can find it here: <a href="https://privatebin.net/?a12ad8f1c97bcb15#XLlAENrDGQ7OYn/Mq4Uvq7vwZuZ+jyjRBLIUPMepYgE=">https://privatebin.net/?a12ad8f1c97bcb15#XLlAENrDGQ7OYn/Mq4Uvq7vwZuZ+jyjRBLIUPMepYgE=</a></div><div><br></div><div>The problem in my case was not limited to a single resource; it affected all of them. In short, none of the resources could come online. To fix that, I had to resize the volume definitions for each resource.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Anything uncommon? Did you change the max number of peers or anything?<br></blockquote><div><br></div><div>Not sure, to be honest. I have made several modifications to this cluster over time, but perhaps I can give you some clues; maybe the answer is somewhere in there ... :-) </div><div><br></div><div>- Initially, all 3 nodes were using ZFS Thin as the DRBD backend. Now, 2 nodes are using LVM Thin and 1 is still on ZFS.</div><div>- All resources were created automatically by the drbdmanage-proxmox plugin, sometimes with redundancy 2 and sometimes with redundancy 3 (I was experimenting with this option).</div><div>- On some occasions, a resource that was initially created by the drbdmanage-proxmox plugin with redundancy 2 was later assigned to the 3rd node manually, using the drbdmanage command, in order to reach a redundancy of 3.</div><div>- IIRC, on only one occasion I had to manually export the DRBD metadata of a resource, change the max-peers option from 1 to 7, and then import it back. I am not sure why it was set to 1 in the first place, but without this modification the peers refused to sync.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">It is good that there is a fix and you guys managed to migrate. I still<br>
wonder why this did not trigger in my tests.<br></blockquote><div><br></div><div>As you can see from the above, my setup is perhaps not the ideal one to draw conclusions from. Still, I could have accepted some of the resources failing, but not all of them?! Maybe Roberto can also give some tips from his setup?</div><div><br></div><div>Thanks for the good work!</div><div><br></div><div>Yannis</div><div><br></div></div></div>
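<div><br></div><div>P.S. For anyone hitting the same max-peers issue, the export/edit/import step I described can be sketched roughly as follows. This is only an outline, assuming DRBD 9 (v09 on-disk metadata format) with internal metadata; the resource name, minor number, and backing device below are placeholders for your own values, not what I actually used:</div>

```shell
# Placeholder values -- substitute your own resource/minor/backing device.
RES=vm-100-disk-1
MINOR=0
BACKING=/dev/vg0/vm-100-disk-1

# Take the resource down before touching its on-disk metadata.
drbdadm down "$RES"

# Dump the metadata to a text file.
drbdmeta "$MINOR" v09 "$BACKING" internal dump-md > /tmp/md.dump

# Edit /tmp/md.dump and change the line "max-peers 1;" to "max-peers 7;",
# e.g. with sed:
sed -i 's/max-peers 1;/max-peers 7;/' /tmp/md.dump

# Write the modified metadata back and bring the resource up again.
drbdmeta "$MINOR" v09 "$BACKING" internal restore-md /tmp/md.dump
drbdadm up "$RES"
```

<div>Needless to say, take a backup of the dump file (and ideally of the backing volume) before restoring modified metadata.</div>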