<html><head></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space; ">Hello William,<div><br></div><div>I'm not using LVM, but plain GPT ('GUID') partitions of 4 TB each.</div><div>The partitions are running on virtual machines under KVM,</div><div>so the virtual machines are syncing the DRBD partitions.</div><div><br></div><div>I use heartbeat with haresources because it's so easy to use.</div><div>I followed this tutorial and it was perfect for me:</div><div><a href="http://houseoflinux.com/high-availability/building-a-high-available-file-server-with-nfs-v4-drbd-8-3-and-heartbeat-on-centos-6/page-2">http://houseoflinux.com/high-availability/building-a-high-available-file-server-with-nfs-v4-drbd-8-3-and-heartbeat-on-centos-6/page-2</a></div><div><br></div><div><br></div><div><br></div><div><br><div><div>On 6 jun. 2012, at 05:50, Yount, William D wrote:</div><br class="Apple-interchange-newline"><blockquote type="cite"><div>I understand what heartbeat does in the general sense. Actually configuring it correctly and making it work the way it is supposed to is the problem.<br><br>I have read the official DRBD/Heartbeat documentation (<a href="http://www.linbit.com/fileadmin/tech-guides/ha-nfs.pdf">http://www.linbit.com/fileadmin/tech-guides/ha-nfs.pdf</a>). That covers an LVM setup that isn't applicable to me: I use LVM but have just one logical volume, so there is no need to group them.<br><br>I have been able to cobble together a set of steps based on the official documentation and other guides. Different documents take different approaches, and they often contain contradictory information.<br><br>I have two servers with two 2 TB hard drives each. I am using software RAID with logical volumes. I have one 50 GB LV for the OS, one 30 GB LV for swap and one 1.7 TB LV for storage. All I want is to mirror that 1.7 TB LV across servers and then have Pacemaker/Heartbeat switch over to the second server. 
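A minimal Pacemaker sketch of that goal (one DRBD-backed filesystem, a virtual IP and the NFS server, all moving together on failover) might look like the following. This is only an illustration, not the list's recommended setup: the resource names (p_drbd_r0, g_nfs, etc.) and the DRBD resource name "r0" are hypothetical, while the device /dev/drbd0, mount point /Storage and virtual IP 10.89.99.30 are taken from this thread.

```shell
# Sketch of a single-primary DRBD + NFS failover configuration for the
# Pacemaker crm shell. All names prefixed p_/ms_/g_ are made up here.
crm configure <<'EOF'
primitive p_drbd_r0 ocf:linbit:drbd \
    params drbd_resource="r0" \
    op monitor interval="29s" role="Master" \
    op monitor interval="31s" role="Slave"
ms ms_drbd_r0 p_drbd_r0 \
    meta master-max="1" clone-max="2" notify="true"
primitive p_fs_storage ocf:heartbeat:Filesystem \
    params device="/dev/drbd0" directory="/Storage" fstype="ext4"
primitive p_ip_nfs ocf:heartbeat:IPaddr2 \
    params ip="10.89.99.30" cidr_netmask="24"
primitive p_nfsserver lsb:nfs-kernel-server
# Group starts in order: mount filesystem, start NFS, bring up the IP.
group g_nfs p_fs_storage p_nfsserver p_ip_nfs
# The group may only run where DRBD is Master, and only after promotion.
colocation col_nfs_on_drbd inf: g_nfs ms_drbd_r0:Master
order o_drbd_before_nfs inf: ms_drbd_r0:promote g_nfs:start
EOF
```

With constraints like these, no dual-primary (and hence no OCFS2) is needed: Pacemaker promotes the surviving node's DRBD to Master and then starts the group there.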
<br><br>I am not sure if I need to define nfs-kernel-server, LVM, exportfs and drbd0 as services. I am using the LCMC application to monitor the configuration. <br><br>Using the steps that I attached, if the primary server goes down, the secondary does nothing. It doesn't mount /dev/drbd0 to /Storage and it doesn't start accepting traffic on 10.89.99.30. <br><br>-----Original Message-----<br>From: Marcel Kraan [mailto:marcel@kraan.net] <br>Sent: Tuesday, June 05, 2012 5:19 PM<br>To: Yount, William D<br>Cc: Felix Frank; <a href="mailto:drbd-user@lists.linbit.com">drbd-user@lists.linbit.com</a><br>Subject: Re: [DRBD-user] Fault Tolerant NFS<br><br>This is what heartbeat does.<br>It mounts the DRBD disk and starts all the programs given in haresources; the virtual IP will be up and running on the second server.<br>So basically your first server becomes the second.<br>When the first server comes up again, it will take over again.<br><br>I can shut down the first or second server without the service going down (maybe 5 or 10 seconds for switching).<br><br>Works great...<br><br>On 5 jun. 2012, at 23:59, Yount, William D wrote:<br><br><blockquote type="cite">I am looking for a fault-tolerant solution. By this, I mean I want an automatic switchover, with no human intervention, if one of the two storage servers goes down. 
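What Marcel describes above (mount the DRBD disk, start the listed programs, bring up the virtual IP) is driven by a single line in heartbeat's haresources file. A sketch, reusing the device, mount point and IP from this thread; the node name "node1", the DRBD resource name "r0" and the ext4 filesystem type are assumptions:

```shell
# /etc/ha.d/haresources -- one logical line per resource group.
# Resources start left-to-right on takeover, stop right-to-left on release.
node1 drbddisk::r0 \
      Filesystem::/dev/drbd0::/Storage::ext4 \
      IPaddr::10.89.99.30/24 \
      nfs-kernel-server
```

Here drbddisk promotes the DRBD resource to primary, Filesystem mounts it, IPaddr brings up the service address, and nfs-kernel-server is started last, which covers exactly the manual steps listed further down in this thread.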
<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite">Initially, I followed this guide: <br></blockquote><blockquote type="cite"><a href="https://help.ubuntu.com/community/HighlyAvailableNFS">https://help.ubuntu.com/community/HighlyAvailableNFS</a><br></blockquote><blockquote type="cite">That works fine, but there are several steps that require human intervention in case of a server failure:<br></blockquote><blockquote type="cite"><span class="Apple-tab-span" style="white-space:pre">        </span>Promote the secondary server to primary<br></blockquote><blockquote type="cite"><span class="Apple-tab-span" style="white-space:pre">        </span>Mount the DRBD partition to the export path<br></blockquote><blockquote type="cite"><span class="Apple-tab-span" style="white-space:pre">        </span>Restart nfs-kernel-server (if necessary)<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite">I was trying to get a dual-primary setup working, thinking that if one goes down the other will take over automatically. There just seem to be so many moving pieces that don't always work the way they are supposed to. I have been reading all the material I can get my hands on, but a lot of it seems contradictory or only applicable to certain OS versions with certain versions of OCFS2, DRBD and Pacemaker. <br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite">It doesn't matter to me if it is master/slave or dual primaries. 
I am just trying to find something that actually works.<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite">-----Original Message-----<br></blockquote><blockquote type="cite">From: Felix Frank [mailto:ff@mpexnet.de]<br></blockquote><blockquote type="cite">Sent: Tuesday, June 05, 2012 2:42 AM<br></blockquote><blockquote type="cite">To: Yount, William D<br></blockquote><blockquote type="cite">Cc: <a href="mailto:drbd-user@lists.linbit.com">drbd-user@lists.linbit.com</a><br></blockquote><blockquote type="cite">Subject: Re: [DRBD-user] Fault Tolerant NFS<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite">On 06/05/2012 07:41 AM, Yount, William D wrote:<br></blockquote><blockquote type="cite"><blockquote type="cite">Does anyone have a good resource for setting up a fault tolerant NFS <br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite">cluster using DRBD? I am currently using DRBD, Pacemaker, Corosync <br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite">and<br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite">OCFS2 on Ubuntu 12.04.<br></blockquote></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite">Those are all right, but I don't really see how OCFS2 is required.<br></blockquote><blockquote type="cite">Dual-primary? Not needed for HA NFS.<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite">But it should still work.<br></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite"><blockquote type="cite">High availability doesn't meet my needs. 
I have spent quite a while <br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite">reading and trying out every combination of settings, but nothing <br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite">seems to work properly.<br></blockquote></blockquote><blockquote type="cite"><br></blockquote><blockquote type="cite">What are the exact limitations you're facing? Stale mounts after failover?<br></blockquote><blockquote type="cite">_______________________________________________<br></blockquote><blockquote type="cite">drbd-user mailing list<br></blockquote><blockquote type="cite"><a href="mailto:drbd-user@lists.linbit.com">drbd-user@lists.linbit.com</a><br></blockquote><blockquote type="cite"><a href="http://lists.linbit.com/mailman/listinfo/drbd-user">http://lists.linbit.com/mailman/listinfo/drbd-user</a><br></blockquote><br><span><drbd.rtf></span></div></blockquote></div><br></div></body></html>