[DRBD-user] RedHat Clustering Services does not fence when DRBD breaks

Joe Hammerman jhammerman at saymedia.com
Tue Nov 23 20:04:38 CET 2010

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Well, yes, that's exactly the problem; when one node broke, fencing (at the RHCS level) never kicked in, and things went south very quickly.

I'll give your suggestion a try in our dev environment, and let you know how it goes.

Thanks Jakov!

On 11/23/10 1:16 AM, "Jakov Sosic" <jakov.sosic at srce.hr> wrote:

On 11/22/2010 08:04 PM, Joe Hammerman wrote:
> Well we're running DRBD in Primary - Primary mode, so the service should
> be enabled on both machines at the same time.
>
> GFS breaks when DRBD loses sync, and both nodes become unusable, since
> neither can guarantee write integrity.  If one of the nodes were fenced, when
> it rebooted it would come back as, at worst, Secondary. Then the node that
> was never fenced stays online, and we have 100% uptime.
>
> This is a pretty non-standard setup, huh?

But what's the point of a two-node cluster if your setup cannot withstand
the loss of one node? In the case of sync loss, one node should be
fenced so that the other can keep working with GFS mounted. Your goal
should be to achieve that.

You should indeed resolve this at the DRBD level, so that when DRBD loses
sync one node gets fenced. Something like:

disk {
   fencing resource-and-stonith;             # freeze I/O and call the handler below when replication breaks
}
handlers {
   outdate-peer "/sbin/obliterate-peer.sh";  # fences the peer through RHCS
}
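
For context, those two sections live inside the full resource definition.
Here is a minimal sketch of what the whole thing could look like for a
dual-primary setup like yours (DRBD 8.3-style syntax; the resource name,
hostnames, devices and addresses are placeholders you will have to adjust):

resource r0 {
   protocol C;                        # synchronous replication, required for GFS

   net {
      allow-two-primaries;            # allow Primary/Primary operation
   }

   startup {
      become-primary-on both;         # promote both nodes when DRBD starts
   }

   disk {
      fencing resource-and-stonith;   # freeze I/O and call the handler below
   }

   handlers {
      # known as "fence-peer" in newer DRBD releases
      outdate-peer "/sbin/obliterate-peer.sh";
   }

   on node1 {                         # must match `uname -n` on that host
      device    /dev/drbd0;
      disk      /dev/sdb1;
      address   192.168.1.1:7788;
      meta-disk internal;
   }

   on node2 {
      device    /dev/drbd0;
      disk      /dev/sdb1;
      address   192.168.1.2:7788;
      meta-disk internal;
   }
}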

You can get this script from:
http://people.redhat.com/lhh/obliterate-peer.sh
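
In case you want to know what it does before trusting it: roughly speaking,
it asks the RHCS fencing layer to kill the peer and then reports back to
DRBD. A simplified sketch of such a handler, not the real script (fence_node
is the RHCS command-line fencing tool; exit code 7 is DRBD's "peer was
STONITHed" return code):

#!/bin/bash
# Simplified sketch of a fence-peer handler for RHCS -- NOT the real
# obliterate-peer.sh; get that from the URL above.

# Placeholder: the real script works out the peer from the cluster
# membership rather than hard-coding a name.
PEER="node2"

# Keep asking RHCS to fence the peer until it succeeds.
while ! fence_node "$PEER"; do
   sleep 5
done

# Exit code 7 tells DRBD the peer has been STONITHed, so the frozen
# I/O on this node can safely be resumed.
exit 7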


Also please take a look at:
http://gfs.wikidev.net/DRBD_Cookbook
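
That cookbook covers putting GFS on top of the dual-primary DRBD device.
The short version looks roughly like this (a sketch assuming GFS2 with the
lock_dlm protocol; the cluster name, filesystem name and mount point are
placeholders, and the cluster name must match your cluster.conf):

# Run once, on one node only, after the DRBD resource is up to date:
mkfs.gfs2 -p lock_dlm -t mycluster:gfs0 -j 2 /dev/drbd0

# Run on both nodes, once both are Primary:
mount -t gfs2 /dev/drbd0 /mnt/gfs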



I hope this helps!



--
Jakov Sosic
