<div class="gmail_quote">On Fri, Nov 19, 2010 at 3:45 PM, Joe Hammerman <span dir="ltr"><<a href="mailto:jhammerman@saymedia.com">jhammerman@saymedia.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
> Hey all, we are attempting to roll out DRBD in our environment. The issue
> we are encountering is not with DRBD itself but with RHCS. Does the DRBD
> device need to be defined as a resource in order for its failure to
> trigger fencing? The DRBD nodes are VMs, and the DRBD devices are
> incorporated into LVM volumes formatted with GFS2.
>
> Should I use the DRBD fence-peer handler script to call fence_vm?
Short answer is: no.

In general, you should not manage active-active DRBD devices with RHCS. The only DRBD devices you want RHCS to manage are those that are active on one node at a time (at least with CentOS/RHEL 5; I haven't checked 6 yet), i.e. cases where promoting the DRBD device is a dependency of a service, such as an ext4 filesystem on the DRBD device that must be active and mounted. In the active-active case, think of DRBD as just shared storage and treat it as you would, say, a SAN or iSCSI LUN block device. If you are worried about connectivity to the shared storage, you could set up a quorum disk on one of the LVs on top of the DRBD PV.
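For example, something along these lines (just a sketch; the VG/LV names, the qdisk label, and the cluster.conf values are made-up placeholders, so adjust them for your setup):

    # Carve a small LV for the quorum disk out of the VG that sits on the DRBD PV
    lvcreate -L 64M -n lv_qdisk vg_drbd

    # Label it as a quorum disk for qdiskd
    mkqdisk -c /dev/vg_drbd/lv_qdisk -l drbd_qdisk

    # Then point cluster.conf at that label, e.g.:
    <quorumd interval="1" tko="10" votes="1" label="drbd_qdisk">
      <heuristic program="ping -c1 -w1 10.0.0.1" score="1" interval="2"/>
    </quorumd>

The idea, as I read it, is that a node which loses access to that storage also loses its qdisk vote and gets dropped (and fenced) by the cluster, rather than relying on DRBD itself to trigger the fencing.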
<div><br></div></span><div><span class="Apple-style-span" style="font-family: arial, sans-serif; font-size: 13px; border-collapse: collapse; color: rgb(136, 136, 136); ">-JR</span> </div></div>