[DRBD-user] Need help setting fence handler and pacemaker 1.1.10 with DRBD 8.3.16 config

Lars Ellenberg lars.ellenberg at linbit.com
Thu Jun 26 12:11:38 CEST 2014



I already relayed an answer through Matt, off list.
But here it is, for the benefit of the archives:

On Thu, Jun 19, 2014 at 12:54:29AM -0400, Digimer wrote:
> Hi all,
> 
>   This is something of a repeat of my questions on the Pacemaker
> mailing list, but I think I am dealing with a DRBD issue, so allow
> me to ask again here.
> 
>   I have drbd configured thus:
> 
> ====
> # /etc/drbd.conf
> common {
>     protocol               C;
>     net {
>         allow-two-primaries;
>     }
>     disk {
>         fencing          resource-and-stonith;
>     }
>     handlers {
>         fence-peer       /usr/lib/drbd/crm-fence-peer.sh;
>     }

You need to unfence it, too:
add an after-resync-target handler that calls crm-unfence-peer.sh.
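
Roughly like this (a sketch; adjust the paths if your package installs
the handler scripts elsewhere):

    handlers {
        fence-peer            /usr/lib/drbd/crm-fence-peer.sh;
        after-resync-target   /usr/lib/drbd/crm-unfence-peer.sh;
    }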

> I've set up pacemaker thus:
> 
> ====
> Cluster Name: an-anvil-04
> Corosync Nodes:
> 
> Pacemaker Nodes:
>  an-a04n01.alteeve.ca an-a04n02.alteeve.ca
> 
> Resources:
>  Master: drbd_r0_Clone
>   Meta Attrs: master-max=2 master-node-max=1 clone-max=2
> clone-node-max=1 notify=true
>   Resource: drbd_r0 (class=ocf provider=linbit type=drbd)
>    Attributes: drbd_resource=r0
>    Operations: monitor interval=30s (drbd_r0-monitor-interval-30s)
>  Master: lvm_n01_vg0_Clone
>   Meta Attrs: master-max=2 master-node-max=1 clone-max=2
> clone-node-max=1 notify=true
>   Resource: lvm_n01_vg0 (class=ocf provider=heartbeat type=LVM)
>    Attributes: volgrpname=an-a04n01_vg0
>    Operations: monitor interval=30s (lvm_n01_vg0-monitor-interval-30s)
> 
> Stonith Devices:
>  Resource: fence_n01_ipmi (class=stonith type=fence_ipmilan)
>   Attributes: pcmk_host_list=an-a04n01.alteeve.ca
> ipaddr=an-a04n01.ipmi action=reboot login=admin passwd=Initial1
> delay=15
>   Operations: monitor interval=60s (fence_n01_ipmi-monitor-interval-60s)
>  Resource: fence_n02_ipmi (class=stonith type=fence_ipmilan)
>   Attributes: pcmk_host_list=an-a04n02.alteeve.ca
> ipaddr=an-a04n02.ipmi action=reboot login=admin passwd=Initial1
>   Operations: monitor interval=60s (fence_n02_ipmi-monitor-interval-60s)
> Fencing Levels:
> 
> Location Constraints:
>   Resource: drbd_r0_Clone
>     Constraint: drbd-fence-by-handler-r0-drbd_r0_Clone
>       Rule: score=-INFINITY role=Master
> (id:drbd-fence-by-handler-r0-rule-drbd_r0_Clone)
>         Expression: #uname ne an-a04n01.alteeve.ca
> (id:drbd-fence-by-handler-r0-expr-drbd_r0_Clone)
> Ordering Constraints:
>   promote drbd_r0_Clone then start lvm_n01_vg0_Clone (Mandatory)
> (id:order-drbd_r0_Clone-lvm_n01_vg0_Clone-mandatory)
> Colocation Constraints:
> 
> Cluster Properties:
>  cluster-infrastructure: cman
>  dc-version: 1.1.10-14.el6_5.3-368c726
>  last-lrm-refresh: 1403147476
>  no-quorum-policy: ignore
>  stonith-enabled: true
> ====
> 
> Note the -INFINITY rule; I didn't add that, crm-fence-peer.sh did on
> start. This brings me to the question:
> 
> When pacemaker starts the DRBD resource, immediately the peer is
> resource fenced. Note that only r0 is configured at this time to
> simplify debugging this issue.
> 
> Here is the DRBD related /var/log/messages entries on node 1
> (an-a04n01) from when I start pacemaker:

*please* don't wrap log lines.

> ====
> Jun 19 00:14:24 an-a04n01 kernel: block drbd0: disk( Diskless -> Attaching )
> Jun 19 00:14:24 an-a04n01 kernel: block drbd0: disk( Attaching -> Consistent )
> Jun 19 00:14:24 an-a04n01 kernel: block drbd0: attached to UUIDs 561F3328043888C0:0000000000000000:052A1A6B59936EC5:05291A6B59936EC5
> Jun 19 00:14:24 an-a04n01 kernel: block drbd0: conn( StandAlone -> Unconnected )
> Jun 19 00:14:24 an-a04n01 kernel: block drbd0: conn( Unconnected -> WFConnection )
> Jun 19 00:14:24 an-a04n01 pengine[16894]:   notice: LogActions: Promote drbd_r0:0#011(Slave -> Master an-a04n01.alteeve.ca)
> Jun 19 00:14:24 an-a04n01 pengine[16894]:   notice: LogActions: Promote drbd_r0:1#011(Slave -> Master an-a04n02.alteeve.ca)
> Jun 19 00:14:24 an-a04n01 kernel: block drbd0: helper command: /sbin/drbdadm fence-peer minor-0

There. Since DRBD is promoted while still "only Consistent",
it has to call the fence-peer handler.

> Jun 19 00:14:25 an-a04n01 kernel: block drbd0: Handshake successful: Agreed network protocol version 97
> Jun 19 00:14:25 an-a04n01 crm-fence-peer.sh[17156]: INFO peer is reachable, my disk is Consistent: placed constraint 'drbd-fence-by-handler-r0-drbd_r0_Clone'
> Jun 19 00:14:25 an-a04n01 kernel: block drbd0: helper command: /sbin/drbdadm fence-peer minor-0 exit code 4 (0x400)
> Jun 19 00:14:25 an-a04n01 kernel: block drbd0: fence-peer helper returned 4 (peer was fenced)

Which is successful,
which allows this state transition:

> Jun 19 00:14:25 an-a04n01 kernel: block drbd0: role( Secondary -> Primary ) disk( Consistent -> UpToDate ) pdsk( DUnknown -> Outdated )

Meanwhile, the other node did the same,
and the cib arbitrates:

> Jun 19 00:14:25 an-a04n01 cib[16890]:  warning: update_results: Action cib_create failed: Name not unique on network (cde=-76)
> Jun 19 00:14:25 an-a04n01 cib[16890]:    error: cib_process_create: CIB Update failures   <failed>
> Jun 19 00:14:25 an-a04n01 cib[16890]:    error: cib_process_create: CIB Update failures     <failed_update id="drbd-fence-by-handler-r0-drbd_r0_Clone" object_type="rsc_location" operation="cib_create" reason="Name not unique on network">
> Jun 19 00:14:25 an-a04n01 cib[16890]:    error: cib_process_create: CIB Update failures       <rsc_location rsc="drbd_r0_Clone" id="drbd-fence-by-handler-r0-drbd_r0_Clone">
> Jun 19 00:14:25 an-a04n01 cib[16890]:    error: cib_process_create: CIB Update failures         <rule role="Master" score="-INFINITY" id="drbd-fence-by-handler-r0-rule-drbd_r0_Clone">
> Jun 19 00:14:25 an-a04n01 cib[16890]:    error: cib_process_create: CIB Update failures           <expression attribute="#uname" operation="ne" value="an-a04n02.alteeve.ca" id="drbd-fence-by-handler-r0-expr-drbd_r0_Clone"/>

"Works as designed."

> Jun 19 00:14:26 an-a04n01 kernel: block drbd0: Resync done (total 1 sec; paused 0 sec; 0 K/sec)
> Jun 19 00:14:26 an-a04n01 kernel: block drbd0: conn( SyncSource -> Connected ) pdsk( Inconsistent -> UpToDate )

And there the after-resync-target handler on the peer should remove the constraint again.

> ====
> 
> This causes pacemaker to declare the resource failed, though
> eventually the crm-unfence-peer.sh is called, clearing the -INFINITY
> constraint and node 2 (an-a04n02) finally promotes to Primary.
> However, the drbd_r0 resource remains listed as failed.

A failed promote should result in "recovery",
and that is supposed to be demote (?) -> stop -> start

Do you need to manually do a cleanup there?
Is it enough to wait for "cluster recheck interval"?

Or does it recover by itself anyways,
and only records that it has failed once?
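
If a manual cleanup is needed, something like this should clear the
recorded failure (pcs as shipped with RHEL 6.5; the lower-level
crm_resource call is equivalent):

    # forget the failed promote so the resource shows as clean again
    pcs resource cleanup drbd_r0

    # or:
    crm_resource --cleanup --resource drbd_r0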

> I have no idea why this is happening, and I am really stumped. Any
> help would be much appreciated.

Because pacemaker is allowed to try to promote an "only Consistent" DRBD,
and both nodes do so nearly simultaneously, before DRBD is done with its
handshake.

> digimer
> 
> PS - RHEL 6.5 fully updated, DRBD 8.3.15, Pacemaker 1.1.10

Upgrade to 8.4
(which also gives you much improved (random) write performance).

Or at least use the resource agent that comes with 8.4
(which is supposed to be backward compatible with 8.3;
if it is not, let me know).
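
Dropping in the 8.4 agent could look like this (a sketch, assuming the
agent sits at scripts/drbd.ocf in the unpacked 8.4 source tree):

    # keep the 8.3 agent around, then install the 8.4 one in its place
    cp /usr/lib/ocf/resource.d/linbit/drbd{,.orig}
    install -m 0755 drbd-8.4/scripts/drbd.ocf \
        /usr/lib/ocf/resource.d/linbit/drbd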

It has a parameter "adjust_master_scores",
which is a list of four numbers, explained there.
Set the first to 0 (that's the master score for a Consistent drbd),
which keeps pacemaker from even trying to promote a resource that
does not know whether it has "good" data.
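
For example, with pcs (the last three values here are what I believe
the agent's defaults to be; double-check against the agent you install):

    # zero only the first value, the master score for a Consistent disk
    pcs resource update drbd_r0 adjust_master_scores="0 10 1000 10000"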

Alternatively, a small negative location constraint on the master role
(on any location; so "defined #uname" might be a nice location)
should offset the master score just enough to have the same effect.
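
In plain CIB XML that could look like this (ids made up; the score must
be at least as negative as the master score the agent assigns to a
merely Consistent disk):

    <rsc_location rsc="drbd_r0_Clone" id="drbd-master-needs-uptodate">
      <rule role="Master" score="-5" id="drbd-master-needs-uptodate-rule">
        <expression attribute="#uname" operation="defined"
                    id="drbd-master-needs-uptodate-expr"/>
      </rule>
    </rsc_location>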

-- 
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
__
please don't Cc me, but send to list   --   I'm subscribed


