[DRBD-user] DRBD and Heartbeat V2 CRM mode unwanted auto failback

Daniel Stickney dstickney at pronto.com
Mon Feb 4 22:05:27 CET 2008

Hello everyone,

I have DRBD 8.0.6 and Heartbeat 2.1.3 configured in CRM mode. Everything 
is working well except for one small but critical detail: when the primary 
DRBD resource fails over to the second node and the first node later comes 
back online, the DRBD primary resource automatically fails back to the 
first node. We don't want this behavior; we would rather the DRBD primary 
resource stay where it is until the second node fails or we move it 
manually. We have "default-resource-stickiness" set to INFINITY in the 
<crm_config> section of cib.xml, but this unwanted auto-failback still 
occurs. Does anyone here have DRBD and Heartbeat v2 with CRM set up so 
that there is no auto-failback? If so, could you please share your 
cib.xml config?

(During testing, if we remove the DRBD resource and configure just an IP 
address, the "default-resource-stickiness" of INFINITY works exactly as 
expected. If the IP is on halinux1 and we put halinux1 into standby, the 
IP fails over to halinux2. When we bring halinux1 out of standby, the IP 
stays on halinux2 as we want. This seems to show that 
default-resource-stickiness works as we expect, just not for the 
master_slave DRBD resource.)

Here is the cib.xml file (trimmed; some attribute values were cut off when pasting):
<cib admin_epoch="0" have_quorum="true" ignore_dtd="false" num_peers="2" cib_feature_revision="1.3" generated="true" epoch="269" num_updates="1" cib-last-written="Mon Feb  4 13:57:50 2008" ccm_transition="4"
       <cluster_property_set id="cib-bootstrap-options">
           <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="false"/>
           <nvpair id="cib-bootstrap-options-no-quorum-policy" name="no-quorum-policy" value="ignore"/>
           <nvpair name="last-lrm-refresh" id="cib-bootstrap-options-last-lrm-refresh" value="1201741039"/>
           <nvpair name="default-resource-stickiness" id="cib-bootstrap-options-default-resource-stickiness" value="INFINITY"/>
           <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="2.1.3-node:
       <node uname="halinux1" type="normal"
             <nvpair name="standby" id="standby-d2c440e4-9668-4a70-b7e2-de7f52834325" value="false"/>
       <node uname="halinux2" type="normal"
             <nvpair name="standby" id="standby-216a5f87-c472-4ce6-a3f1-7ce4f6dc1bae" value="false"/>
       <master_slave id="ms-drbd0">
         <meta_attributes id="ma-ms-drbd0">
             <nvpair id="ma-ms-drbd0-1" name="clone_max" value="2"/>
             <nvpair id="ma-ms-drbd0-2" name="clone_node_max" value="1"/>
             <nvpair id="ma-ms-drbd0-3" name="master_max" value="1"/>
             <nvpair id="ma-ms-drbd0-4" name="master_node_max" value="1"/>
             <nvpair id="ma-ms-drbd0-5" name="notify" value="yes"/>
             <nvpair id="ma-ms-drbd0-6" name="globally_unique" 
             <nvpair id="ma-ms-drbd0-7" name="target_role" 
         <primitive id="DRBD" class="ocf" provider="heartbeat" type="drbd">
           <instance_attributes id="ia-DRBD">
               <nvpair id="ia-DRBD-1" name="drbd_resource" value="mysql"/>
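
One thing we are considering trying (a sketch only; the nvpair id below is 
invented for illustration, and we have not confirmed that this is the fix) 
is adding an explicit per-resource stickiness nvpair to the ms-drbd0 
meta_attributes themselves, in case the cluster-wide default is not being 
applied to the master role of the master_slave resource:

```xml
<!-- Hypothetical addition inside <meta_attributes id="ma-ms-drbd0">;
     the id "ma-ms-drbd0-8" is an example, not from our running config. -->
<nvpair id="ma-ms-drbd0-8" name="resource_stickiness" value="INFINITY"/>
```

If anyone has confirmed whether per-resource stickiness behaves differently 
from the cluster-wide default for master_slave resources, that would help.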

Here is the ha.cf file:
use_logd yes
udpport 695
bcast eth0
node    halinux1
node    halinux2
crm on
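
For reference, when we do want to move the resource by hand, we plan to use 
crm_resource's migrate options (a sketch; option spellings may differ by 
version, so check crm_resource --help on your build):

```
# Ask the CRM to move ms-drbd0 to halinux2 (run on a cluster node as root).
crm_resource --migrate --resource ms-drbd0 --host halinux2

# Later, remove the location constraint that --migrate created, so the
# CRM is again free to place the resource itself.
crm_resource --un-migrate --resource ms-drbd0
```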

Thanks for your time,

Daniel Stickney - Linux Systems Administrator
Email: dstickney at pronto.com
