<div>Thanks Todd - you're right - I had indeed created the link for /var/lib/nfs incorrectly. </div>
<div>Any ideas on how to trigger a forced sync if, for some reason, the two disks are known to be out of sync? Or is that not recommended? For example, what if - just to be sure - I set up a script to periodically sync the two disks? If that were possible, would it cause any untoward side effects?
</div>
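<div>For reference, a sketch of what I have in mind (assuming a drbd setup with a resource named r1, matching the haresources entry quoted below - command names per the drbdadm tool, so treat this as an assumption about the exact syntax for this drbd version):

```shell
# Check the current connection and sync state first;
# DRBD exposes it in /proc/drbd
cat /proc/drbd

# On the node whose local copy should be DISCARDED (the out-of-date
# one), mark the local disk inconsistent and pull all data from the peer:
drbdadm invalidate r1

# Alternatively, run on the up-to-date node to overwrite the peer:
# drbdadm invalidate-remote r1

# Watch the resync progress
watch -n 5 cat /proc/drbd
```

Though I suspect running something like this periodically from a script would be a bad idea, since it forces a full-device resync every time rather than relying on drbd's normal continuous replication.</div>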
<div>--<br><br> </div>
<div><span class="gmail_quote">On 7/7/06, <b class="gmail_sendername">pv</b> <<a href="mailto:vishnubhatt@gmail.com">vishnubhatt@gmail.com</a>> wrote:</span>
<blockquote class="gmail_quote" style="PADDING-LEFT: 1ex; MARGIN: 0px 0px 0px 0.8ex; BORDER-LEFT: #ccc 1px solid">
<div>
<div>I'm going to make /var/lib/nfs point to /data/nfs (which is exported and sits on top of logical volumes); I'm testing that as we speak. Thanks for the response. </div>
<div> </div>
<div>Had yet another question - say for some reason you (as a user/admin) realize that the two disks/devs are not in sync - is there a way to force a sync from a user-land command?</div>
<div>--<br><br> </div></div>
<div><span class="e" id="q_10c4b17f65bcf624_1">
<div><span class="gmail_quote">On 7/6/06, <b class="gmail_sendername">Todd Denniston</b> <<a onclick="return top.js.OpenExtLink(window,event,this)" href="mailto:Todd.Denniston@ssa.crane.navy.mil" target="_blank">Todd.Denniston@ssa.crane.navy.mil
</a>> wrote:</span>
<blockquote class="gmail_quote" style="PADDING-LEFT: 1ex; MARGIN: 0px 0px 0px 0.8ex; BORDER-LEFT: #ccc 1px solid">pv wrote:<br>> I have two servers - fedora1, fedora2, wired up w/ drbd and heartbeat and I'm<br>> exporting the data (residing on the drbd dev) using NFS which is visible via
<br>> the cluster interface (via heartbeat).<br>><br>> The export is visible via a cluster interface (e.g. /data/export). I mount this<br>> export (via the cluster interface) from yet another m/c (nfs client). And I'm
<br>> doing simple pings and copying/touching files - when I turn on/off one of the<br>> nodes (e.g fedora1), the mount point moves from fedora1 to fedora2 and back<br>> respectively very well. I copy smaller files and do the same, I do not seem to
<br>> have any problem, but when I try copying a large ISO image file, I see the<br>> following issue:<br>><br>> 1) between the two servers, fedora1 is the primary and I start copying the<br>> large file. at this time - both servers are up and running and copy begins.
<br>> 2) in between the copy, I turn off fedora1 and observe that fedora2 has taken<br>> over the cluster interface as well as the nfs export/mount point, but I do not<br>> see the large file that I started before the failure.
<br><br>(2) is unexpected by me.<br>are you indicating that the file is not visible at the nfs client or when<br>logged into the server?<br>On the server is a big problem.<br>At the client is probably some minor misconfiguration of the services on the
<br>server.<br><br>> 3) upon failback, I get a stale NFS handle error - in my haresources file, I<br>> have the following entry:<br><br>what does `ls -ld /var/lib/nfs` (on both machines) return?<br>The reason I ask is that the data in /var/lib/nfs needs to be on a disk shared
<br>between the machines and needs to be accessible _before_ starting nfs and<br>nfslock services.<br><br><br>><br>> fedora1 IPaddr::<a onclick="return top.js.OpenExtLink(window,event,this)" href="http://172.30.30.200/24/eth0" target="_blank">
172.30.30.200/24/eth0</a> drbddisk::r1<br>> Filesystem::/dev/drbd1::/data::ext3 restart-nfs restart-rpcidmapd<br>><br><br>do you control any other related services with ha? like the nfs service?<br><br>> --<br>>
<br>> Any idea if my config is wrong or is there an issue dealing w/ large files <br>> either in drbd or the ha or NFS itself? Thx in advance.<br><br>not enough info, but I know I have worked with files bigger than 2GB and had
<br>no problems on drbd 0.6.13 on Fedora Core 1 over NFS managed by Heartbeat. <br>--<br>Todd Denniston<br>Crane Division, Naval Surface Warfare Center (NSWC Crane)<br>Harnessing the Power of Technology for the Warfighter
<br>__<br>please use the "List-Reply" function of your email client.<br><br></blockquote></div><br></span></div></blockquote></div><br>