Hi folks,

I am trying to get DRBD9 working on my servers. My configuration is as follows:

I have 5 nodes; 2 of them are the primary file servers (fs1 and fs2),
the other 3 are virtualisation hosts running Proxmox (virt1…virt3).

All of them have dedicated network cards for the DRBD connections:
 10.10.10.33/26

The virtualisation hosts have additional network cards.

All nodes run the newest DRBD from the LINBIT repo.

After several hours all nodes seem to be connected and talk to each other
(drbdmanage n -> all have state "ok"):

+---------------------------------------------+
| Name  | Pool Size | Pool Free |     | State |
|---------------------------------------------|
| fs1   |   7630888 |   6479712 |     |    ok |
| fs2   |   7630888 |   6436592 |     |    ok |
| virt1 |     19260 |     19252 |     |    ok |
| virt2 |     19260 |     19252 |     |    ok |
| virt3 |     19260 |     19252 |     |    ok |
+---------------------------------------------+

With drbdadm I can see that fs1 is the primary control node (the drbdmanage leader).

When trying to deploy a VM or move storage to DRBD, I nearly always get:

  TASK ERROR: storage migration failed: drbd error: Could not forward data to leader

Sometimes this works with the setting "redundancy 1"; with "redundancy 2" or higher it has never worked. At higher redundancy the log shows something like "Initial split brain detected" and the operation fails completely. (My storage.cfg entry is appended below.)

Once the storage runs correctly at redundancy 1, I can assign it to the second file server (the commands I use are also appended below). When doing this while the deploy is still running, it will most likely fail at 99.8% or so, with some blocks never syncing.

So what am I possibly doing wrong?

Thanks in advance for useful hints,
cheers, Frank
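
P.S.: In case it matters, the storage definition in /etc/pve/storage.cfg is essentially the stock DRBD entry from the Proxmox wiki (the storage name "drbd1" is only a placeholder here; "redundancy" is the value I vary between 1 and 2):

    drbd: drbd1
            content images,rootdir
            redundancy 2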
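
P.P.S.: For the manual assignment to the second file server I use roughly the following (volume name and size are just examples, not my real ones):

    # create the volume deployed on a single node first (redundancy 1)
    drbdmanage add-volume vm-100-disk-1 10GB --deploy 1

    # afterwards add a second copy on fs2
    drbdmanage assign vm-100-disk-1 fs2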