<div dir="ltr"><div class="gmail_default" style="font-family:verdana,sans-serif"><br></div><div class="gmail_default" style="font-family:verdana,sans-serif">Here is the scenario:<br></div><div class="gmail_default" style="font-family:verdana,sans-serif"><br></div><div class="gmail_default" style="font-family:verdana,sans-serif">Two identical servers running RHEL 6.7.<br></div><div class="gmail_default" style="font-family:verdana,sans-serif">Three RAID5 targets, each with one LVM volume group and one logical volume defined on top of it.<br></div><div class="gmail_default" style="font-family:verdana,sans-serif">A DRBD device is defined on top of each logical volume, and an XFS file system on top of each DRBD device. <br><br></div><div class="gmail_default" style="font-family:verdana,sans-serif">The two servers sit directly on top of one another in the rack and are connected by a single Ethernet cable serving as a private network. <br><br></div><div class="gmail_default" style="font-family:verdana,sans-serif">The configuration works as far as synchronization goes: the DRBD devices replicate between the two servers. <br><br></div><div class="gmail_default" style="font-family:verdana,sans-serif">We do NOT have Pacemaker as part of this configuration, at management's request. <br><br></div><div class="gmail_default" style="font-family:verdana,sans-serif">We have the XFS file system mounted on server1, and it is exported via NFS. <br></div><div class="gmail_default" style="font-family:verdana,sans-serif"><br></div><div class="gmail_default" style="font-family:verdana,sans-serif">The difficulty lies in performing failover actions without Pacemaker automation. 
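<div class="gmail_default" style="font-family:verdana,sans-serif">For reference, the manual failover sequence I am attempting on server2 looks roughly like the sketch below. The resource name (r0), device (/dev/drbd0), and mount point (/mnt/data) are placeholders, not our actual names:<br></div>

```shell
# Manual failover on server2 after server1 has gone down.
# r0, /dev/drbd0, and /mnt/data are placeholders -- substitute the
# names from your own drbd.conf.

# 1. Check the state of the local replica before touching it; the
#    disk state should be UpToDate (or at least Consistent).
cat /proc/drbd

# 2. Promote the secondary. With the peer unreachable, DRBD may refuse
#    a plain promote; --force overrides that in DRBD 8.4 (older 8.3
#    releases use "drbdadm -- --overwrite-data-of-peer primary r0").
drbdadm primary --force r0

# 3. Mount the DRBD device itself, not the backing logical volume --
#    mounting /dev/<vg>/<lv> directly while DRBD holds it open is a
#    common source of "already mounted"/"device is busy" errors.
mount -t xfs /dev/drbd0 /mnt/data
```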
<br><br></div><div class="gmail_default" style="font-family:verdana,sans-serif">While the file system is mounted on server1, its on-disk state, including the flags that mark it as mounted, is successfully mirrored to server2.<br><br></div><div class="gmail_default" style="font-family:verdana,sans-serif">If I disconnect all cables from server1 to simulate a system failure, promote server2 to primary for one of these devices, and attempt to mount the file system, the error displayed is "file system already mounted". <br><br></div><div class="gmail_default" style="font-family:verdana,sans-serif">I have searched the xfs_admin and mount man pages thoroughly and have not found an option that would overcome this state. <br><br></div><div class="gmail_default" style="font-family:verdana,sans-serif">Our purpose in replicating is to preserve and recover data in case of failure, but in our current configuration we are unable to recover or use the secondary copy. <br><br></div><div class="gmail_default" style="font-family:verdana,sans-serif">How can I recover and use this data without introducing Pacemaker into our configuration?<br></div><div class="gmail_default" style="font-family:verdana,sans-serif"><br></div><div class="gmail_default" style="font-family:verdana,sans-serif">Thanks for your help.<br></div><div class="gmail_default" style="font-family:verdana,sans-serif"> <br></div><div><div class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div>-James Ault<br></div></div></div></div></div></div>
</div>