On Fri, Oct 22, 2010 at 12:49 PM, Colin Simpson <span dir="ltr"><<a href="mailto:Colin.Simpson@iongeo.com">Colin.Simpson@iongeo.com</a>></span> wrote:<br><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
Maybe I just need to leave for a long time ? Or I wonder because you<br>
have "noquota" in your mount options and the oops is in gfs2_quotad<br>
modules you never see it?<br></blockquote><div><br></div><div>I saw that too... I'm not sure whether the noquota option actually has any effect. I didn't have any problems before adding it; I only added it because a tuning document suggested it could help performance.</div>
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;"><br>
Though I don't see why you are adding the /etc/init.d/gfs2 service to<br>
the cluster.conf, as all that does is mount gfs2 filesystems from fstab<br>
(and you say these are noauto in there), so will this do anything? The<br>
inner "clusterfs" directives will handle the actual mount?<br></blockquote><div><br></div><div>It's there to handle the unmount, so that the volume goes down cleanly when the rgmanager service stops. clusterfs won't unmount the filesystem on stop, so I put the mount in /etc/fstab with "noauto" and let rgmanager do the mounting and unmounting of GFS2.</div>
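<div> </div><div>For reference, the /etc/fstab entry I mean looks roughly like this (device, mount point, and the acl option taken from your config; adjust to taste):</div><div> </div><div>/dev/CluVG0/CluVG0-projects  /mnt/projects  gfs2  noauto,acl  0 0</div><div> </div><div>The "noauto" is what keeps the init scripts from mounting it at boot, so rgmanager stays in control of the mount lifecycle.</div>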
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
<resources><br>
<clusterfs device="/dev/CluVG0/CluVG0-projects" force_umount="0"<br>
fstype="gfs2" mountpoint="/mnt/projects" name="projects" options="acl"/><br>
<nfsexport name="tcluexports"/><br>
<nfsclient name="NFSprojectsclnt" options="rw"<br>
target="<a href="http://192.168.1.0/24" target="_blank">192.168.1.0/24</a>"/><br>
<ip address="192.168.1.60" monitor_link="1"/><br>
</resources><br>
<service autostart="1" domain="clusterA" name="NFSprojects"><br>
<ip ref="192.168.1.60"/><br>
<clusterfs fstype="gfs" ref="projects"><br>
<nfsexport ref="tcluexports"><br>
<nfsclient name=" " ref="NFSprojectsclnt"/><br>
</nfsexport><br>
</clusterfs><br>
</service></blockquote><div><br></div><div>YMMV, but I found it best to keep 'chkconfig gfs2 off' and control the gfs2 init script as a script resource from rgmanager. That fixed order-of-operations issues for me, such as the GFS2 volume still being mounted during shutdown. I'd wrap all of your clusterfs stanzas inside a script resource for gfs2. I suspect your GFS2 filesystem is recovering after an unclean shutdown; if you're using quotas, that could add time to the recovery, I suppose. Does it eventually come up if you just wait?</div>
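<div><br></div><div>Something along these lines (just a sketch — the resource name "gfs2script" is made up, and the path assumes the stock init script):</div><div><br></div><div><resources><br>
<script file="/etc/init.d/gfs2" name="gfs2script"/><br>
...<br>
</resources><br>
<service autostart="1" domain="clusterA" name="NFSprojects"><br>
<ip ref="192.168.1.60"/><br>
<script ref="gfs2script"><br>
<nfsexport ref="tcluexports"><br>
<nfsclient ref="NFSprojectsclnt"/><br>
</nfsexport><br>
</script><br>
</service></div><div><br></div><div>That way the script's stop action (which unmounts the fstab GFS2 entries) runs in the right order relative to the NFS export and IP teardown.</div>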
<div><br></div><div>-JR</div></div>