[DRBD-user] Mount filesystem on both servers?

Gordan Bobic drbd at bobich.net
Thu Jan 22 15:56:21 CET 2009


On Thu, 22 Jan 2009 15:17:44 +0100, Peter Funk <pf at artcom-gmbh.de> wrote:

> Hmmm... What about file-locking?  
> 
> Normally a rpc.statd is required for this to work, which for example on
> Ubuntu Linux is started as part of the package/service nfs-common.  
> Since rpc.statd will be started by /etc/init.d/nfs-common it is needed 
> ---as the name `common` already suggests--- on both the NFS server and 
> NFS client sides.
> 
> Unfortunately /var/lib/nfs which is in turn needed by rpc.statd to start 
> up has to be kept on the shared DRBD to avoid stale nfs file locks
> on any clients after a failover switch.

So set up both nfs and nfslock as fail-over resources in addition to your
data volume, and keep /var/lib/nfs on a DRBD volume as well so it fails
over with them.
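With heartbeat R1-style configuration this can be sketched as one resource
group in /etc/ha.d/haresources; the node name, IP address, DRBD resource
names, device paths, and mount points below are assumptions for
illustration only:

```
# /etc/ha.d/haresources -- one resource group, started left-to-right on
# takeover and stopped right-to-left on release (hypothetical names/paths)
node1 IPaddr::192.168.1.100 \
      drbddisk::r0 Filesystem::/dev/drbd0::/srv/export::ext3 \
      drbddisk::r1 Filesystem::/dev/drbd1::/var/lib/nfs::ext3 \
      nfs-common nfs-kernel-server
```

Because the group starts left to right, rpc.statd (from nfs-common) only
comes up after the migrated /var/lib/nfs is mounted, so clients can
reclaim their locks after a failover rather than hitting stale ones.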

> Conclusion:
> Together with heartbeat it is not that simple to exchange the roles
> of the NFS server and the NFS client between two nodes back and forth.

Not really: you just need additional fail-over resources/services. Instead
of only failing over the IP address, you make heartbeat start the relevant
services and data volumes on the secondary server. Personally I prefer a
fully active-active setup if I can at all help it, but the active/passive
setup should be possible, too. In fact, you mostly answered your own
question re: /var/lib/nfs.

> At least as I insisted on using heartbeat r1 style config files because
> I wanted to avoid the fancy and IMHO rather complex XML cluster resource 
> manager files introduced with heartbeat 2.x I ran into severe problems
> and had to give up for the time being.

I tend to use RHCS in preference to heartbeat, but that's mostly because I
tend to use GFS on most of my clustered systems, so RHCS is mandatory, and
heartbeat would just be redundant and mean maintaining two configs instead
of one.

Gordan


