Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
On 12/08/2011 08:14 AM, trm asn wrote:
>
>
> On Sun, Nov 27, 2011 at 9:46 PM, Nick Khamis <symack at gmail.com> wrote:
>
> I could be wrong, but topics as important as a disk replicator's
> ability to automatically recover from split brain have been covered
> multiple times on its list, not to mention the thorough documentation.
>
> http://www.drbd.org/users-guide/s-configure-split-brain-behavior.html
> http://www.drbd.org/users-guide/s-configure-split-brain-behavior.html#s-automatic-split-brain-recovery-configuration
> http://www.drbd.org/users-guide/s-configure-split-brain-behavior.html#s-split-brain-notification
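
For reference, the notification described in the last link boils down to a
single handler line. A minimal sketch, assuming the notify-split-brain.sh
helper that ships with the 8.3 packages is installed under /usr/lib/drbd/
and that root should receive the mail:

  handlers {
    # run when DRBD detects a split brain; mails the given recipient
    split-brain "/usr/lib/drbd/notify-split-brain.sh root";
  }

This sits inside the resource (or common) section of drbd.conf.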
>
> How about it......
>
> Nick from Toronto.
>
>
>
> On Sat, Nov 26, 2011 at 4:49 PM, trm asn <trm.nagios at gmail.com> wrote:
> > Dear List,
> >
> > I have an HA NFS setup with DRBD. The primary is the NFS1 server and
> > the secondary is the NFS2 server.
> >
> > Please help me configure automatic recovery from split brain.
> >
> > Below is my config & package details.
> >
> >
> > Packages:
> > kmod-drbd83-8.3.8-1.el5.centos
> > drbd83-8.3.8-1.el5.centos
> >
> > /etc/drbd.conf [ same on both boxes ]
> >
> > common { syncer { rate 100M; al-extents 257; } }
> > resource main {
> >   protocol C;
> >   handlers { pri-on-incon-degr "halt -f"; }
> >   disk { on-io-error detach; }
> >   startup { degr-wfc-timeout 60; wfc-timeout 60; }
> >
> >   on NFS1 {
> >     address 10.20.137.8:7789;
> >     device /dev/drbd0;
> >     disk /dev/sdc;
> >     meta-disk internal;
> >   }
> >   on NFS2 {
> >     address 10.20.137.9:7789;
> >     device /dev/drbd0;
> >     disk /dev/sdc;
> >     meta-disk internal;
> >   }
> > }
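
As a rough illustration of the automatic recovery settings from the guide
linked above, applied to the resource shown here: they live in a net
section, and the policy choices below are only examples; which node's
changes may safely be discarded depends on the setup.

  resource main {
    net {
      # no primaries at split-brain detection: keep the node with no changes
      after-sb-0pri discard-zero-changes;
      # one primary: throw away the secondary's changes
      after-sb-1pri discard-secondary;
      # two primaries: do not resolve automatically, just disconnect
      after-sb-2pri disconnect;
    }
    # the existing protocol, handlers, disk, startup and on sections stay as they are
  }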
> >
> >
>
>
>
> Below I am seeing packet loss warning messages, and because of that both
> servers end up in StandAlone status. Is there any mechanism in DRBD to
> increase the number of lost packets it tolerates?
That has nothing to do with DRBD; these are messages from Heartbeat's
messaging layer ... flaky network?
Regards,
Andreas
--
Need help with DRBD & Pacemaker?
http://www.hastexo.com/now
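
Two side notes that may help here. First, the tolerance for lost heartbeat
packets is a Heartbeat setting, not a DRBD one; the timings live in ha.cf,
roughly like this (the parameter names are standard Heartbeat ones, the
values are purely illustrative):

  keepalive 2     # seconds between heartbeat packets
  warntime 10     # log a warning after this long without a packet
  deadtime 30     # only declare the peer dead after this long
  initdead 60     # extra allowance while the cluster is starting up

Second, once both nodes already sit in StandAlone after a split brain, DRBD
8.3 normally needs a manual reconnect before any after-sb policy can help
with future events. The usual sequence from the users guide, assuming main
is the resource and NFS2 is the node whose changes are to be discarded
(swap the roles as appropriate):

  # on the split-brain victim (assumed here to be NFS2)
  drbdadm secondary main
  drbdadm -- --discard-my-data connect main

  # on the surviving node (NFS1), if it is also StandAlone
  drbdadm connect main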
>
>
>
> Dec 7 19:23:13 NFS1 heartbeat: [12280]: WARN: 1 lost packet(s) for
> [nfs2] [1782:1784]
> Dec 7 19:27:21 NFS1 heartbeat: [12280]: WARN: 1 lost packet(s) for
> [nfs2] [1906:1908]
> Dec 7 19:28:27 NFS1 heartbeat: [12280]: WARN: 1 lost packet(s) for
> [nfs2] [1939:1941]
> Dec 7 19:38:49 NFS1 heartbeat: [12280]: WARN: 1 lost packet(s) for
> [nfs2] [2250:2252]
> Dec 7 19:40:01 NFS1 heartbeat: [12280]: WARN: 1 lost packet(s) for
> [nfs2] [2286:2288]
> Dec 7 19:41:31 NFS1 heartbeat: [12280]: WARN: 1 lost packet(s) for
> [nfs2] [2331:2333]
> Dec 7 19:46:01 NFS1 heartbeat: [12280]: WARN: 1 lost packet(s) for
> [nfs2] [2466:2468]
> Dec 7 19:46:47 NFS1 heartbeat: [12280]: WARN: 1 lost packet(s) for
> [nfs2] [2489:2491]
> Dec 7 19:46:59 NFS1 heartbeat: [12280]: WARN: 1 lost packet(s) for
> [nfs2] [2495:2497]
> Dec 7 19:47:09 NFS1 heartbeat: [12280]: WARN: 1 lost packet(s) for
> [nfs2] [2500:2502]
> Dec 8 06:52:48 NFS1 heartbeat: [12280]: WARN: 1 lost packet(s) for
> [nfs2] [90:92]
> Dec 8 06:52:54 NFS1 heartbeat: [12280]: WARN: 1 lost packet(s) for
> [nfs2] [93:95]
> Dec 8 06:59:14 NFS1 heartbeat: [12280]: WARN: 1 lost packet(s) for
> [nfs2] [283:285]
>
>
> Thanks & Regards,
> Tarak Ranjan
>
>
>