[DRBD-user] how to avoid Stale NFS file handle with multiple drbds?
Todd.Denniston at ssa.crane.navy.mil
Mon Dec 12 16:27:36 CET 2005
Raoul Borenius wrote:
> Hi Brad,
> On Sat, Dec 10, 2005 at 07:46:06AM -0500, Brad Barnett wrote:
>>On Wed, 7 Dec 2005 10:45:20 +0100
>>raoul at sgs.dfn.de (Raoul Borenius) wrote:
>>>has anyone succeeded in setting up multiple nfs-exported drbd-volumes
>>>without getting the 'Stale NFS file handle'-error during failover?
>>>Following http://linux-ha.org/DRBD_2fNFS I can export one drbd-volume
>>>with symlinked /var/lib/nfs which works perfectly without error during
>>>failover. But what do I need to do if I have two or more drbd-volumes on
>>>my nfs-server (which will be exported to different clients)?
>>I have to ask this, just in case you may be doing something strange. ;)
>>If you have a large raid, there is no need to set up a separate partition
>>for each NFS export. That is, if you have four NFS exports, they can all
>>live as directories on the same raid/drbd device and each be NFS-exported
>>individually.
> That's true of course. But we've had some bad experiences with multiple
> simultaneous disk-failures so we're trying to do without raid.
>>Now, I know you may have four different drives, non-raided, and that you
>>may be exporting each of them, but I thought I would mention the above, just
>>in case you thought each NFS export needed its own partition.
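To illustrate that point with a sketch (hypothetical paths): if a single
drbd-backed filesystem is mounted at, say, /srv/nfs, each export is just a
subdirectory of it, with its own line in /etc/exports:

  /srv/nfs/mail  184.108.40.206/24(rw,sync)
  /srv/nfs/home  184.108.40.206/24(rw,sync)
  /srv/nfs/www   184.108.40.206/24(rw,sync)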
>>>/var/lib/nfs can only be symlinked to one of the volumes which
>>>means that I would get a 'Stale NFS file handle'-error during failover
>>>on all the other volumes.
>>>I've tried to set up /var/lib/nfs as drbd0 and then my data-volumes as
>>>drbd1, drbd2 etc. but that did not help.
>>It should help. Is it that the drbd0 partition is not ready soon enough?
>>I would think that a heartbeat failover would mean that all drbd partitions
>>are ready before NFS is started.
>>Do you know why this failed? It seems like a logical idea...
> Here is my haresources, with only one data-volume:
> b1 drbddisk::varlibnfs \
> drbddisk::mail \
> Filesystem::/dev/drbd0::/var/lib/nfs::ext3 \
> Filesystem::/dev/drbd1::/srv/nfs/mail::ext3 \
> killnfsd \
> nfs-common \
> nfs-kernel-server \
> sleep::3 \
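As far as I know from the linux-ha howto referenced above, the killnfsd
resource in that list is just a tiny helper script (a sketch, an assumption
on my part) that kills the kernel nfsd threads so they come back up against
the failed-over /var/lib/nfs state:

  #!/bin/sh
  # force nfsd to die so it restarts with the new state directory
  killall -9 nfsd
  exit 0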
>>If it looks like the drbd failover worked, that is, the /var/lib/nfs
>>symlink was pointed to drbd0 and drbd0 was there before NFS started, then it
>>may be something else that is causing stale handles during failover.
Just for clarification:
You indicate you use a softlink to put the drbd0 filesystem in place of
/var/lib/nfs, but above you mount drbd0 at /var/lib/nfs ... which is it, and
is it the same on BOTH systems?
I have several exports from different drbd-managed partitions, with a
softlink /var/lib/nfs -> /nfsstate/var/lib/nfs, where /nfsstate is my drbd0,
and until recently I did not have problems with stale nfs ... WAIT A MINUTE,
now I know why I was seeing stale nfs handles the last time I failed over:
after doing an `ls -l /var/lib/nfs`, I see that for some reason the systems
have blown away my softlinks and replaced them with new directories resident
on the boot disk. Must fix. Thanks for the wake-up.
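For the record, the symlink setup looks roughly like this on my boxes (a
sketch, with /nfsstate as the drbd0 mount point as above):

  mount /dev/drbd0 /nfsstate                 # on the current primary only
  mkdir -p /nfsstate/var/lib/nfs             # once, seeded from the old state dir
  mv /var/lib/nfs /var/lib/nfs.orig          # on BOTH nodes
  ln -s /nfsstate/var/lib/nfs /var/lib/nfs   # on BOTH nodes, so it survives failover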
> Maybe it's just that there needs to be some sort of sync between
> /var/lib/nfs and the actual exported files? And if those are put on
> different drbds there might always be some kind of inconsistency between
> the two...?
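One thing not tried anywhere in this thread: the NFS server builds file
handles from the exported filesystem's identity, so if the device numbers
differ between the two nodes, clients' handles can go stale even when
/var/lib/nfs fails over cleanly. Linux exports(5) lets you pin that identity
per export with fsid= (check your nfs-utils version supports it); a sketch,
with the second export hypothetical:

  /srv/nfs/mail  184.108.40.206/24(rw,sync,fsid=1)
  /srv/nfs/home  184.108.40.206/24(rw,sync,fsid=2)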
>>Personally, I moved from a live, non-drbd setup to a drbd setup, so I had
>>several non-friendly mount options on my NFS clients that caused the
>>failover to fail in exactly that way.
>>What are your NFS export options, and your NFS client mount options?
>>Double-check these, for sure...
> I'm using nfs-defaults:
> /srv/nfs/mail 184.108.40.206/24(rw,sync)
> On client:
> raoul at b5:~$ mount
> /dev/sda1 on / type ext3 (rw,errors=remount-ro)
> proc on /proc type proc (rw)
> sysfs on /sys type sysfs (rw)
> devpts on /dev/pts type devpts (rw,gid=5,mode=620)
> tmpfs on /dev/shm type tmpfs (rw)
> usbfs on /proc/bus/usb type usbfs (rw)
> tmpfs on /tmp type tmpfs (rw)
> tmpfs on /var/tmp type tmpfs (rw)
> tmpfs on /dev type tmpfs (rw,size=10M,mode=0755)
> nfs:/srv/nfs/mail on /mnt type nfs (rw,addr=220.127.116.11)
> raoul at b5:~$
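On the client side, mount options matter for failover: soft mounts can
return I/O errors mid-failover, while hard mounts just keep retrying until
the server is back. A failover-friendlier /etc/fstab line would be something
like this (a suggestion, not verified in this thread; hard is the default,
intr lets you interrupt a hung process):

  nfs:/srv/nfs/mail  /mnt  nfs  rw,hard,intr  0  0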
>>>I've been reading the list-archives and googling around but haven't
>>>found anything that pointed me to where the problem lies.
>>>Is it not possible to use more than one volume for nfs-export?
> It would be nice to hear if anyone has succeeded in using more than one
> drbd volume with NFS.
> At the moment I'm about to give up and set up software-raid and one
> drbd on top of that.
> Thanks for your help anyway!
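If you do go the software-raid route, the layering is straightforward: build
the md device first, then point the drbd resource's disk at it. A sketch
with hypothetical device names (resource and host names taken from the
haresources above; address and meta-disk lines omitted, adjust to your drbd
version's syntax):

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

and then in drbd.conf, something like:

  resource mail {
    on b1 {
      device  /dev/drbd1;
      disk    /dev/md0;
    }
  }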