[DRBD-user] Best Practice with DRBD RHCS and GFS2?

J. Ryan Earl oss at jryanearl.us
Fri Oct 29 22:07:30 CEST 2010


On Fri, Oct 29, 2010 at 12:49 PM, Colin Simpson <Colin.Simpson at iongeo.com>wrote:

> I have made one slight mod to his method in cluster.conf: I
> personally have multiple services using the same file mounts, and I
> also have cluster.conf managing my GFS2 mounts for me, e.g.
>
> <clusterfs device="/dev/CluVG0/CluVG0-projects"
>         force_unmount="0"
>         fstype="gfs2" mountpoint="/mnt/projects" name="projects"
>         options="acl"/>
>
> The issue is: I don't want to force_unmount, as other services might be
> using this mount point (though they may not actually be in it at the time).
>

I don't think you need to use force_unmount; at least I haven't needed to.
As I understand it, the same resource reference can be used as a dependency
by multiple services.  If two services depend on the same RHCS resource,
stopping one of them just decrements the reference count on that resource;
rgmanager doesn't try to stop the resource until rgmanager itself is stopped
completely, and even at a reference count of 0 it appears to leave the
resource running.  What I saw was that the RHCS-controlled GFS2 mount would
persist even after disabling and stopping the RHCS service (or "group", as
it's sometimes called).


> I still have the issue that a restart always brings the device up as
> Secondary/Primary. I wonder if the startup script doesn't do enough on
> restart? I notice the "start" section does:
>
> $DRBDADM sh-b-pri all # Become primary if configured


Yeah, maybe that's it.  I can reproduce this behavior 100% of the time as
well on dual-primary DRBD resources.
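One thing worth double-checking: as I understand it, sh-b-pri only promotes
resources that have become-primary-on configured, so a dual-primary resource
would normally carry something like this in drbd.conf (resource name r0 is
illustrative):

    resource r0 {
      startup {
        become-primary-on both;
      }
    }

With that in place you'd expect a restart to come back Primary/Primary. As a
workaround until the init script is sorted out, promoting by hand after a
restart should work:

    drbdadm primary all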

-JR
