[DRBD-user] Fault Tolerant NFS

Marcel Kraan marcel at kraan.net
Wed Jun 6 08:22:11 CEST 2012


Friends of mine told me that Corosync was giving strange errors, so I used haresources with heartbeat instead.

heartbeat with haresources is easy to use:
you have one file on both servers,
and both copies are exactly the same.

server1 and server2
general config (identical on both):
cat /etc/ha.d/ha.cf 
keepalive 2
deadtime 30
ucast eth0 kvmstorage1.localhost
ucast eth0 kvmstorage2.localhost
#bcast eth0
#udpport 695
node kvmstorage1.localhost kvmstorage2.localhost
auto_failback on
logfile /var/log/ha-log
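
One prerequisite the config above takes for granted: heartbeat will not start without a shared /etc/ha.d/authkeys file, identical on both nodes and readable only by root. A minimal sketch (the sha1 secret here is a placeholder, pick your own):

```shell
# /etc/ha.d/authkeys -- must be identical on both nodes and mode 600
cat > /etc/ha.d/authkeys <<'EOF'
auth 1
1 sha1 ReplaceThisSharedSecret
EOF
chmod 600 /etc/ha.d/authkeys
```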


server1 (this IP is the virtual IP)
cat /etc/ha.d/haresources 
kvmstorage1.localhost IPaddr::192.168.123.209/24/eth0 drbddisk::main Filesystem::/dev/drbd0::/datastore::ext4 nfslock nfs rpcidmapd mysqld


server2
cat /etc/ha.d/haresources 
kvmstorage1.localhost IPaddr::192.168.123.209/24/eth0 drbddisk::main Filesystem::/dev/drbd0::/datastore::ext4 nfslock nfs rpcidmapd mysqld
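
For anyone new to haresources, here is roughly how heartbeat reads that line (my annotations, not part of the file):

```shell
# kvmstorage1.localhost                     <- preferred (primary) node for this group
# IPaddr::192.168.123.209/24/eth0           <- bring up the virtual IP on eth0
# drbddisk::main                            <- promote DRBD resource "main" to Primary
# Filesystem::/dev/drbd0::/datastore::ext4  <- mount the DRBD device
# nfslock nfs rpcidmapd mysqld              <- init scripts, started in this order
#
# On takeover heartbeat starts the resources left to right;
# on release it stops them right to left.
```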

all the services listed here are started automatically, but their init scripts must be available in /etc/rc.d/...
you don't need to start them manually or via "service": heartbeat starts them (it does not restart them) when the node becomes active.
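
A quick way to check that requirement is to look for the init scripts by hand; a small sketch (assuming the scripts live in /etc/init.d, which is where the /etc/rc.d entries usually point on Red Hat style systems):

```shell
# Report which haresources services have an init script installed
for svc in nfslock nfs rpcidmapd mysqld; do
    if [ -x "/etc/init.d/$svc" ]; then
        echo "ok: $svc"
    else
        echo "MISSING: $svc"
    fi
done
```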

You also need to share the NFS state directory.
In my case /var/lib/nfs needs to live at /datastore/nfs:

# on both servers
mount /dev/drbd0 /datastore
mv /var/lib/nfs /datastore/
ln -s /datastore/nfs /var/lib/nfs
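
After the move it is worth verifying the symlink before testing a failover; a small sanity check:

```shell
# /var/lib/nfs should now be a symlink into the DRBD-backed mount
if [ -L /var/lib/nfs ] && [ "$(readlink /var/lib/nfs)" = "/datastore/nfs" ]; then
    echo "NFS state directory relocated correctly"
else
    echo "WARNING: /var/lib/nfs is not a symlink to /datastore/nfs" >&2
fi
```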

marcel

On 6 jun. 2012, at 00:27, Yount, William D wrote:

> Should I be using heartbeat instead of Corosync?
> 
> 
> -----Original Message-----
> From: Marcel Kraan [mailto:marcel at kraan.net] 
> Sent: Tuesday, June 05, 2012 5:19 PM
> To: Yount, William D
> Cc: Felix Frank; drbd-user at lists.linbit.com
> Subject: Re: [DRBD-user] Fault Tolerant NFS
> 
> This is what heartbeat does.
> It mounts the DRBD disk and starts all the programs given in haresources; the virtual IP comes up on the second server.
> So basically your first server becomes the second.
> When the first server comes back up, it will take over again.
> 
> I can shut down the first or second server without the service going down (maybe 5 or 10 seconds for switching).
> 
> works great...
> 
> On 5 jun. 2012, at 23:59, Yount, William D wrote:
> 
>> I am looking for a fault tolerant solution. By this, I mean I want there to be an automatic switch over if one of the two storage servers goes down with no human intervention. 
>> 
>> Initially, I followed this guide: 
>> https://help.ubuntu.com/community/HighlyAvailableNFS
>> That works fine, but there are several steps that require human intervention in case of a server failure:
>> 	Promote secondary server to primary
>> 	Mount drbd partition to export path
>> 	Restart nfs-kernel-server (if necessary)
>> 
>> I was trying to get dual primaries set up, thinking that if one goes out the other will take over automatically. There just seem to be so many moving pieces that don't always work the way they are supposed to. I have been reading all the material I can get my hands on, but a lot of it seems contradictory or only applicable on certain OS versions with certain versions of OCFS2, DRBD and Pacemaker. 
>> 
>> It doesn't matter to me if it is master/slave or dual primaries. I am just trying to find something that actually works.
>> 
>> 
>> 
>> -----Original Message-----
>> From: Felix Frank [mailto:ff at mpexnet.de]
>> Sent: Tuesday, June 05, 2012 2:42 AM
>> To: Yount, William D
>> Cc: drbd-user at lists.linbit.com
>> Subject: Re: [DRBD-user] Fault Tolerant NFS
>> 
>> On 06/05/2012 07:41 AM, Yount, William D wrote:
>>> Does anyone have a good resource for setting up a fault tolerant NFS 
>>> cluster using DRBD? I am currently using DRBD, Pacemaker, Corosync 
>>> and
>>> OCFS2 on Ubuntu 12.04.
>> 
>> Those are all right, but I don't really see how OCFS2 is required.
>> Dual-primary? Not needed for HA NFS.
>> 
>> But it should still work.
>> 
>>> High availability doesn't meet my needs. I have spent quite a while 
>>> reading and trying out every combination of settings, but nothing 
>>> seems to work properly.
>> 
>> What are the exact limitations you're facing? Stale mounts after failover?
>> _______________________________________________
>> drbd-user mailing list
>> drbd-user at lists.linbit.com
>> http://lists.linbit.com/mailman/listinfo/drbd-user
