Dear list friends, I've noticed some strange behaviour on two nodes with the drbd.conf setup listed below.

The two nodes are connected back to back (crossover) through two bonded NICs, configured for high availability, not for performance.

For some reason, at startup the bonded DRBD network didn't come up correctly (maybe because of the crossover connection? Never mind, that's not the issue I want to talk about), so the nodes could not ping each other on 10.1.1.x and DRBD had no connection.

The scenario I expected was WFConnection on both nodes, which it was, but with Primary/Unknown on server-1 and Secondary/Unknown on server-2.

My issue is that they were both Secondary instead, and I had to manually issue the "drbdadm primary" command on server-1.
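
For reference, this is roughly what I did on server-1 to recover (using the resource name drbd0 from the config below; the exact commands are from memory):

    cat /proc/drbd          # both sides reported WFConnection and Secondary/Unknown
    drbdadm primary drbd0   # manual promotion on server-1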
That worked, so it's not a big deal, but it would be better if it worked as expected out of the box.

Could it be due to some setting in drbd.conf?

What follows is the drbd.conf content; thank you in advance for any kind of tip.

Robert
_________________________________________

---- drbd.conf ----

global {
    usage-count yes;
}

common {
    syncer {
        rate 50M;
        verify-alg md5;
        csums-alg md5;
    }
}
# a single SATA disk does about 80 MB/s, maybe it is better to set half of that speed

resource drbd0 {
    protocol C;

    startup {
        become-primary-on xenserver-1;
    }

    net {
        cram-hmac-alg md5;
        shared-secret "pwdpwd";
        sndbuf-size 0;
        rcvbuf-size 0;
        data-integrity-alg md5;
    }

    disk {
        max-bio-bvecs 1;
        on-io-error detach;
        no-disk-flushes;
    }

    on xenserver-1 {
        device    /dev/drbd0;
        disk      /dev/sdb;
        address   10.1.1.1:7789;
        meta-disk internal;
    }

    on xenserver-2 {
        device    /dev/drbd0;
        disk      /dev/sdb;
        address   10.1.1.2:7789;
        meta-disk internal;
    }

    handlers {
        # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
    }
}
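
In case it matters, I haven't set any wait-for-connection timeouts in the startup section. If the fix is something along the lines of the snippet below, please point it out (the timeout values here are only an illustration, not something I currently run):

    startup {
        become-primary-on xenserver-1;
        wfc-timeout      60;   # how long to wait for the peer at normal boot (illustrative value)
        degr-wfc-timeout 30;   # shorter wait when the cluster was degraded before reboot (illustrative value)
    }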