On Sun, Nov 27, 2011 at 9:46 PM, Nick Khamis <symack@gmail.com> wrote:
I could be wrong, but topics as important as a disk replicator's
ability to recover automatically from split brain have been covered
multiple times on its list. Not to mention the thorough documentation.

http://www.drbd.org/users-guide/s-configure-split-brain-behavior.html
http://www.drbd.org/users-guide/s-configure-split-brain-behavior.html#s-automatic-split-brain-recovery-configuration
http://www.drbd.org/users-guide/s-configure-split-brain-behavior.html#s-split-brain-notification

How about it......

Nick from Toronto.

On Sat, Nov 26, 2011 at 4:49 PM, trm asn <trm.nagios@gmail.com> wrote:
> Dear List,
>
> I have one HA NFS setup with DRBD. The primary is the NFS1 server and
> the secondary is the NFS2 server.
>
> Please help me out with configuring automatic recovery from split brain.
>
> Below are my config and package details.
>
> Packages:
> kmod-drbd83-8.3.8-1.el5.centos
> drbd83-8.3.8-1.el5.centos
>
> /etc/drbd.conf [ the same on both boxes ]
>
> common { syncer { rate 100M; al-extents 257; } }
>
> resource main {
>     protocol C;
>     handlers { pri-on-incon-degr "halt -f"; }
>     disk    { on-io-error detach; }
>     startup { degr-wfc-timeout 60; wfc-timeout 60; }
>
>     on NFS1 {
>         address   10.20.137.8:7789;
>         device    /dev/drbd0;
>         disk      /dev/sdc;
>         meta-disk internal;
>     }
>     on NFS2 {
>         address   10.20.137.9:7789;
>         device    /dev/drbd0;
>         disk      /dev/sdc;
>         meta-disk internal;
>     }
> }
>
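
Following the pages Nick linked, my understanding is that the automatic
recovery policies go in a net section of the resource, together with an
optional split-brain notification handler. A rough sketch of what I think
the addition to resource main would look like on DRBD 8.3 (the policy
choices below are only an example and would have to match the actual setup):

    handlers {
        pri-on-incon-degr "halt -f";
        # notification hook; assumes the notify-split-brain.sh script from
        # the drbd83 package is installed at this path
        split-brain "/usr/lib/drbd/notify-split-brain.sh root";
    }
    net {
        after-sb-0pri discard-zero-changes;  # neither was primary: keep the side that has changes
        after-sb-1pri discard-secondary;     # one was primary: drop the secondary's changes
        after-sb-2pri disconnect;            # both were primary: no auto recovery, stay StandAlone
    }

Automatic recovery always throws away one node's writes, so disconnect
(manual resolution) is the conservative choice whenever both nodes may
have been primary.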

Separately, I am getting the packet-loss warning messages pasted below,
and because of them both servers end up in StandAlone status. Is there
any mechanism to increase the tolerated packet-drop count in DRBD?
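
Those WARN lines come from Heartbeat itself rather than from DRBD, so I
suspect the relevant knobs are Heartbeat's timing directives in
/etc/ha.d/ha.cf rather than anything in drbd.conf. A node is only declared
dead after deadtime seconds with no heartbeat at all, so raising deadtime
relative to keepalive lets more consecutive packets be lost before a
failover is triggered. A sketch with illustrative values only (they would
need tuning for this network):

    # /etc/ha.d/ha.cf -- values are illustrative only
    keepalive 2     # send a heartbeat every 2 seconds
    warntime 10     # warn about late heartbeats after 10 seconds
    deadtime 30     # declare the peer dead after 30 seconds of silence
    initdead 60     # allow extra time while nodes are booting

With keepalive 2 and deadtime 30, roughly fourteen consecutive packets
would have to disappear before a failover, which should ride out the
single lost packets in the log below.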

Dec 7 19:23:13 NFS1 heartbeat: [12280]: WARN: 1 lost packet(s) for [nfs2] [1782:1784]
Dec 7 19:27:21 NFS1 heartbeat: [12280]: WARN: 1 lost packet(s) for [nfs2] [1906:1908]
Dec 7 19:28:27 NFS1 heartbeat: [12280]: WARN: 1 lost packet(s) for [nfs2] [1939:1941]
Dec 7 19:38:49 NFS1 heartbeat: [12280]: WARN: 1 lost packet(s) for [nfs2] [2250:2252]
Dec 7 19:40:01 NFS1 heartbeat: [12280]: WARN: 1 lost packet(s) for [nfs2] [2286:2288]
Dec 7 19:41:31 NFS1 heartbeat: [12280]: WARN: 1 lost packet(s) for [nfs2] [2331:2333]
Dec 7 19:46:01 NFS1 heartbeat: [12280]: WARN: 1 lost packet(s) for [nfs2] [2466:2468]
Dec 7 19:46:47 NFS1 heartbeat: [12280]: WARN: 1 lost packet(s) for [nfs2] [2489:2491]
Dec 7 19:46:59 NFS1 heartbeat: [12280]: WARN: 1 lost packet(s) for [nfs2] [2495:2497]
Dec 7 19:47:09 NFS1 heartbeat: [12280]: WARN: 1 lost packet(s) for [nfs2] [2500:2502]
Dec 8 06:52:48 NFS1 heartbeat: [12280]: WARN: 1 lost packet(s) for [nfs2] [90:92]
Dec 8 06:52:54 NFS1 heartbeat: [12280]: WARN: 1 lost packet(s) for [nfs2] [93:95]
Dec 8 06:59:14 NFS1 heartbeat: [12280]: WARN: 1 lost packet(s) for [nfs2] [283:285]

Thanks & Regards,
Tarak Ranjan