Hi,

I am mounting shared storage as hardware RAID 5 -> LVM -> DRBD -> OCFS2 on two identical servers. Everything is working, but now, during testing, DRBD frequently ends up in a split-brain condition.
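For context, each OCFS2 volume was set up roughly like this (a rough sketch with DRBD 8.3 syntax; the size and mount point here are placeholders, not my real values, and the create-md/up steps were run on both nodes):

    lvcreate -L 50G -n lvm3 vg0                        # LV on top of the hardware RAID 5
    drbdadm create-md r10                              # on both nodes
    drbdadm up r10                                     # on both nodes
    drbdadm -- --overwrite-data-of-peer primary r10    # initial sync, on one node only
    mkfs.ocfs2 -N 2 /dev/drbd10                        # once, from either node
    mount -t ocfs2 /dev/drbd10 /mnt/ocfs2              # on both nodes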
The two servers are connected back-to-back with a crossover cable on a Gigabit Ethernet link.

My messages log file:

Apr 11 14:05:06 node2 kernel: drbd10: Split-Brain detected, dropping connection!
Apr 11 14:06:14 node2 kernel: drbd10: Split-Brain detected, dropping connection!
Apr 11 15:13:41 node2 kernel: drbd10: Split-Brain detected, dropping connection!
Apr 11 15:35:18 node2 kernel: drbd10: Split-Brain detected, 0 primaries, automatically solved. Sync from peer node
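When a split brain is reported, this is roughly how I check the state of the affected resource on each node (standard DRBD 8.x tools; r10 is the resource from the log above):

    cat /proc/drbd          # overall state of all resources
    drbdadm cstate r10      # connection state of the resource
    drbdadm role r10        # local/peer roles (Primary/Primary in this setup)
    drbdadm dstate r10      # disk states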
My drbd.conf file:

global {
    usage-count yes;
}

common {
    syncer {
        rate 80M;
        #al-extents 257;
        al-extents 2401;
        verify-alg sha1;
    }

    protocol C;

    handlers {
        split-brain "/usr/lib/drbd/notify-split-brain.sh rafael.rezo@gmail.com";
        pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";
        pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";
        local-io-error "echo o > /proc/sysrq-trigger ; halt -f";
        outdate-peer "/usr/sbin/drbd-peer-outdater";
    }

    disk {
        on-io-error detach;
    }

    net {
        allow-two-primaries;
        after-sb-0pri discard-younger-primary;
        after-sb-1pri consensus;
        after-sb-2pri disconnect;
        rr-conflict disconnect;
    }
}

resource r0 {
    on node1 {
        device    /dev/drbd0;
        disk      /dev/vg0/lvm1;
        address   192.168.1.1:7788;
        meta-disk internal;
    }
    on node2 {
        device    /dev/drbd0;
        disk      /dev/vg0/lvm1;
        address   192.168.1.2:7788;
        meta-disk internal;
    }
}

resource r1 {
    on node1 {
        device    /dev/drbd1;
        disk      /dev/vg0/lvm2;
        address   192.168.1.1:7789;
        meta-disk internal;
    }
    on node2 {
        device    /dev/drbd1;
        disk      /dev/vg0/lvm2;
        address   192.168.1.2:7789;
        meta-disk internal;
    }
}

resource r2 {
    on node1 {
        device    /dev/drbd2;
        disk      /dev/vg0/vg_ocfsteste;
        address   192.168.1.1:7790;
        meta-disk internal;
    }
    on node2 {
        device    /dev/drbd2;
        disk      /dev/vg0/vg_ocfsteste;
        address   192.168.1.2:7790;
        meta-disk internal;
    }
}

resource r10 {
    on node1 {
        device    /dev/drbd10;
        disk      /dev/vg0/lvm3;
        address   192.168.1.1:7791;
        meta-disk internal;
    }
    on node2 {
        device    /dev/drbd10;
        disk      /dev/vg0/lvm3;
        address   192.168.1.2:7791;
        meta-disk internal;
    }
}

Can somebody help me?

Thanks very much
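P.S. For now, when the handlers do not resolve it automatically, I recover manually with something like this (DRBD 8.3 syntax; run on the node whose changes I am willing to throw away, after unmounting the OCFS2 filesystem there):

    drbdadm secondary r10
    drbdadm -- --discard-my-data connect r10
    # and on the other node, if it is also StandAlone:
    drbdadm connect r10

Is this the right approach, or should the after-sb policies in my net section be handling this automatically in a two-primary setup?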