[DRBD-user] Best Practice with DRBD RHCS and GFS2?

Colin Simpson Colin.Simpson at iongeo.com
Wed Oct 27 20:18:25 CEST 2010

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Grr, sadly I've just tried waiting for it to become fully "UpToDate",
with a mount in place, but the GFS2 mount remains hung even after it
reaches that state. The noquota option is probably a false lead, as the
mount.gfs2 manual page says quotas are off by default anyway.

I do like your idea of putting /etc/init.d/gfs2 as the outermost
resource, though I think I may be unable to use it, for the same reason
I dismissed using "force_unmount=1" in the clusterfs resource (and I
can't see the advantage of your approach over force_unmount, though
again I may be missing something). Namely, I have multiple services
using the same mounts, in my case Samba and NFS.

I know a umount is probably safe, as it will get a "device is busy"
error if another service is still using the mount. But some services,
e.g. Samba, may not be "in" the mount point at that moment (if no one
is accessing a file in there at the time), so will they have the rug
pulled out from under them?
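
For what it's worth, here is roughly how I'd check whether a mount is
still busy before unmounting; a minimal sketch, assuming the
/mnt/projects mount point from your config (fuser -m reports any
process with open files on that filesystem):

#!/bin/sh
# Sketch: only unmount if no process still holds the filesystem.
# Assumes the mount point is /mnt/projects (from the config below).
MNT=/mnt/projects

# fuser -m exits 0 if at least one process is using the mount.
if fuser -m "$MNT" >/dev/null 2>&1; then
    echo "$MNT is busy, not unmounting" >&2
    exit 1
fi
umount "$MNT"

Of course, a service like Samba may hold no open files at the instant
of the check and still lose the directory a moment later, which is
exactly my worry.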

Another weird thing on my DRBD just now: is there any reason why
bringing the DRBD service up with "restart" causes it to come up as
Secondary/Primary, while a plain "start" does the right thing and comes
up Primary/Primary? See below (OK, the restart generates some spurious
errors because DRBD isn't running at that point, but I'd have thought
it still shouldn't do this):

[root@node2 ~]# /etc/init.d/drbd stop
Stopping all DRBD resources.
[root@node2 ~]# /etc/init.d/drbd start
Starting DRBD resources:    [ d(r0) s(r0) n(r0) ].
[root@node2 ~]# more /proc/drbd
version: 8.2.6 (api:88/proto:86-88)
GIT-hash: 3e69822d3bb4920a8c1bfdf7d647169eba7d2eb4 build by
buildsvn@c5-i386-build, 2008-10-03 11:42:32

 1: cs:Connected st:Primary/Primary ds:UpToDate/UpToDate C r---
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 oos:0
[root@node2 ~]# /etc/init.d/drbd stop
Stopping all DRBD resources.
[root@node2 ~]# /etc/init.d/drbd restart
Restarting all DRBD resourcesNo response from the DRBD driver! Is the
module loaded?
Command '/sbin/drbdsetup /dev/drbd1 down' terminated with exit code 20
command exited with code 20
ERROR: Module drbd does not exist in /proc/modules
.
[root@node2 ~]# more /proc/drbd
version: 8.2.6 (api:88/proto:86-88)
GIT-hash: 3e69822d3bb4920a8c1bfdf7d647169eba7d2eb4 build by
buildsvn@c5-i386-build, 2008-10-03 11:42:32

 1: cs:Connected st:Secondary/Primary ds:UpToDate/UpToDate C r---
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 oos:0
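
For now I can recover by promoting the node by hand after a restart; a
minimal sketch, assuming the resource is named r0 as in the start
output above:

# After a restart leaves this node Secondary, promote it manually
# (resource name r0 taken from the start output above)
drbdadm primary r0
# Confirm both nodes now report Primary/Primary
cat /proc/drbd

That works around it, but I'd still like to understand why restart
behaves differently from start.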

Any ideas?

Thanks

Colin


On Tue, 2010-10-26 at 17:37 +0100, J. Ryan Earl wrote:
> On Fri, Oct 22, 2010 at 12:49 PM, Colin Simpson
> <Colin.Simpson at iongeo.com> wrote:
>         Maybe I just need to leave it for a long time? Or, I wonder,
>         because you have "noquota" in your mount options and the oops
>         is in the gfs2_quotad module, do you simply never see it?
> 
> 
> I saw that too... I'm not sure if the noquota option has any effect;
> I didn't have any problems before adding it, but I saw in some tuning
> document that it could help performance.
>  
>         
>         Though I don't see why you are adding the /etc/init.d/gfs2
>         service to the cluster.conf, as all that does is mount gfs2
>         filesystems from fstab (and you say these are noauto in
>         there), so will this do anything? The inner "clusterfs"
>         directives will handle the actual mount?
> 
> 
> It's to handle the unmount, so that the volume goes down cleanly when
> the rgmanager service stops. clusterfs won't stop the mount, so I put
> the mount in /etc/fstab with "noauto" and let rgmanager mount and
> unmount GFS2.
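>
> A sketch of the corresponding fstab entry (using the device and mount
> point from the config below):
>
>   /dev/CluVG0/CluVG0-projects  /mnt/projects  gfs2  noauto,acl  0 0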
>  
>         <resources>
>           <clusterfs device="/dev/CluVG0/CluVG0-projects" force_umount="0"
>                      fstype="gfs2" mountpoint="/mnt/projects" name="projects"
>                      options="acl"/>
>           <nfsexport name="tcluexports"/>
>           <nfsclient name="NFSprojectsclnt" options="rw"
>                      target="192.168.1.0/24"/>
>           <ip address="192.168.1.60" monitor_link="1"/>
>         </resources>
>         <service autostart="1" domain="clusterA" name="NFSprojects">
>           <ip ref="192.168.1.60"/>
>           <clusterfs fstype="gfs2" ref="projects">
>             <nfsexport ref="tcluexports">
>               <nfsclient name=" " ref="NFSprojectsclnt"/>
>             </nfsexport>
>           </clusterfs>
>         </service>
> 
> 
> YMMV, but I found it best to keep 'chkconfig gfs2 off' and control
> that as a script resource from rgmanager. It fixed order-of-operation
> issues, such as the GFS2 volume still being mounted during shutdown.
> I'd wrap all your GFS2 clusterfs stanzas within a script for gfs2. I
> suspect your GFS2 is recovering after an unclean shutdown; if you're
> using quotas, that could add time to that operation, I suppose. Does
> it eventually come up if you just wait?
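>
> Concretely, on each node that means something like (just a sketch):
>
>   chkconfig gfs2 off   # init no longer mounts GFS2 at boot; rgmanager
>                        # drives /etc/init.d/gfs2 start/stop instead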
> 
> 
> -JR

