[DRBD-user] DRBD STONITH - how is Pacemaker constraint cleared?

Bob Schatz bschatz at yahoo.com
Tue Aug 2 21:21:29 CEST 2011

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hi,

I set up DRBD and Pacemaker with STONITH configured for both DRBD and Pacemaker. (Configs at the bottom of this email.)

When I reboot the PRIMARY DRBD node (cnode-1-3-6), Pacemaker shows this location constraint:

location drbd-fence-by-handler-ms-glance-drbd ms-glance-drbd \
    rule $id="drbd-fence-by-handler-rule-ms-glance-drbd" $role="Master" -inf: #uname ne cnode-1-3-5

and transitions the SECONDARY to PRIMARY. This makes sense to me: the rule gives the Master role a -inf score on every node whose #uname is not cnode-1-3-5, so only the surviving node can be promoted.

However, when cnode-1-3-6 comes back up (with cnode-1-3-5 still running as PRIMARY), the location constraint is not cleared as I would have expected. DRBD is also not started on cnode-1-3-6 (I assume because of the location constraint). Since cnode-1-3-5 is still up, I would expect the constraint to be removed and cnode-1-3-6 to come back as SECONDARY.
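As a manual workaround, the stuck constraint can presumably be removed by hand from the crm shell (a sketch, using the constraint id shown above):

# On either node, delete the fencing constraint the handler left behind:
crm configure delete drbd-fence-by-handler-ms-glance-drbd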

Am I correct that this location constraint should be cleared?

I assumed this would be done by the DRBD after-resync-target handler ("/usr/lib/drbd/crm-unfence-peer.sh"), but I do not believe it is being called.
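One way to check whether the handler actually fires (a sketch; it assumes the stock LINBIT scripts log via syslog, and the syslog path may differ on other distributions):

# Did either handler script get invoked?
grep -E 'crm-(un)?fence-peer' /var/log/syslog

# after-resync-target only fires on a node once it finishes a resync as
# SyncTarget, so check the connection/resync state on the rebooted node:
cat /proc/drbd
drbdadm cstate glance-repos-drbd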

BTW, I am pretty sure I have duplicate ordering constraints in my Pacemaker configuration (pointed out by Andrew on the Pacemaker mailing list), but I am not sure whether that is related to this problem (a possible reduced set is sketched after the configuration below).


Thanks,

Bob

drbd.conf file:


global {
 usage-count yes;
}

common {
 protocol C;
}

resource glance-repos-drbd {
 disk {
   fencing resource-and-stonith;
 }
 handlers {
   fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
   after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
 }
 on cnode-1-3-5 {
   device    /dev/drbd1;
   disk      /dev/glance-repos/glance-repos-vol;
   address   10.4.1.29:7789;
   flexible-meta-disk /dev/glance-repos/glance-repos-drbd-meta-vol;
 }
 on cnode-1-3-6 {
   device    /dev/drbd1;
   disk      /dev/glance-repos/glance-repos-vol;
   address   10.4.1.30:7789;
   flexible-meta-disk /dev/glance-repos/glance-repos-drbd-meta-vol;
 }
 syncer {
   rate 40M;
 }
}
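If needed, the unfence handler can also be exercised by hand the same way DRBD would call it (a sketch; it assumes the stock script reads the resource name from the DRBD_RESOURCE environment variable, which is how DRBD passes it to handlers):

# Run as root on the node whose constraint should be cleared; the script
# removes the drbd-fence-by-handler-* constraint if it decides it is safe:
DRBD_RESOURCE=glance-repos-drbd /usr/lib/drbd/crm-unfence-peer.sh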

Pacemaker configuration:

node cnode-1-3-5
node cnode-1-3-6

primitive glance-drbd-p ocf:linbit:drbd \
    params drbd_resource="glance-repos-drbd" \
    op start interval="0" timeout="240" \
    op stop interval="0" timeout="100" \
    op monitor interval="59s" role="Master" timeout="30s" \
    op monitor interval="61s" role="Slave" timeout="30s"

primitive glance-fs-p ocf:heartbeat:Filesystem \
    params device="/dev/drbd1" directory="/glance-mount" fstype="ext4" \
    op start interval="0" timeout="60" \
    op monitor interval="60" timeout="60" OCF_CHECK_LEVEL="20" \
    op stop interval="0" timeout="120"

primitive glance-ip-p ocf:heartbeat:IPaddr2 \
    params ip="10.4.0.25" nic="br100" \
    op monitor interval="5s"

primitive glance-lvm-p ocf:heartbeat:LVM \
    params volgrpname="glance-repos" exclusive="true" \
    op start interval="0" timeout="30" \
    op stop interval="0" timeout="30" \
    meta target-role="Started"

primitive node-stonith-5-p stonith:external/ipmi \
    op monitor interval="10m" timeout="1m" target_role="Started" \
    params hostname="cnode-1-3-5 cnode-1-3-6" ipaddr="172.23.8.99" userid="ADMIN" passwd="foo" interface="lan"

primitive node-stonith-6-p stonith:external/ipmi \
    op monitor interval="10m" timeout="1m" target_role="Started" \
    params hostname="cnode-1-3-5 cnode-1-3-6" ipaddr="172.23.8.100" userid="ADMIN" passwd="foo" interface="lan"

group group-glance-fs glance-fs-p glance-ip-p \
    meta target-role="Started"

ms ms-glance-drbd glance-drbd-p \
    meta master-node-max="1" clone-max="2" clone-node-max="1" globally-unique="false" notify="true" target-role="Master"

clone cloneLvm glance-lvm-p

location drbd-fence-by-handler-ms-glance-drbd ms-glance-drbd \
    rule $id="drbd-fence-by-handler-rule-ms-glance-drbd" $role="Master" -inf: #uname ne cnode-1-3-5

location loc-node-stonith-5 node-stonith-5-p \
    rule $id="loc-node-stonith-5-rule" -inf: #uname eq cnode-1-3-5

location loc-node-stonith-6 node-stonith-6-p \
    rule $id="loc-node-stonith-6-rule" -inf: #uname eq cnode-1-3-6

colocation coloc-drbd-and-fs-group inf: ms-glance-drbd:Master group-glance-fs 

order order-glance-drbd-demote-before-stop-drbd inf: ms-glance-drbd:demote ms-glance-drbd:stop 

order order-glance-drbd-promote-before-fs-group inf: ms-glance-drbd:promote group-glance-fs:start 

order order-glance-drbd-start-before-drbd-promote inf: ms-glance-drbd:start ms-glance-drbd:promote 

order order-glance-fs-stop-before-demote-drbd inf: group-glance-fs:stop ms-glance-drbd:demote

order order-glance-lvm-before-drbd 0: cloneLvm ms-glance-drbd:start 

property $id="cib-bootstrap-options" \
    dc-version="1.0.8-042548a451fce8400660f6031f4da6f0223dd5dd" \
    cluster-infrastructure="openais" \
    expected-quorum-votes="2" \
    stonith-enabled="true" \
    no-quorum-policy="ignore" \
    last-lrm-refresh="1311899021"

rsc_defaults $id="rsc-options" \
    resource-stickiness="100"
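On the ordering duplications mentioned above: since Pacemaker already orders start before promote and demote before stop internally for an ms resource, I believe only the cross-resource orderings are needed. A sketch of the reduced set:

order order-glance-drbd-promote-before-fs-group inf: ms-glance-drbd:promote group-glance-fs:start
order order-glance-fs-stop-before-demote-drbd inf: group-glance-fs:stop ms-glance-drbd:demote
order order-glance-lvm-before-drbd 0: cloneLvm ms-glance-drbd:start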