[DRBD-user] Trying to Understand crm-fence-peer.sh
Bryan K. Walton
bwalton+1539795345 at leepfrog.com
Fri Jan 11 15:54:00 CET 2019
I'm trying to understand what crm-fence-peer.sh does and how it does
it. I'm using DRBD 8.4 with Pacemaker in a two-node cluster, with a
single primary.
I'm doing fabric fencing, and I believe I have stonith configured
correctly. I have the following in my drbd config:
handlers {
    fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
    after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
    ...
}

disk {
    fencing resource-and-stonith;
    ...
}
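(Side note: as I understand it, changes to these settings can be
re-applied without restarting DRBD; a minimal sketch, assuming the
resource name r0 from the logs below:

    # Re-read the configuration and apply changed options for r0
    drbdadm adjust r0
)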
I can run "stonith_admin -F <node>" and the
switch ports for the other node will get disabled. Similarly, I can run
"pcs stonith fence <node> --off" and I will get the same result.
Now I'm doing more fence testing, like running "init 6" on the primary
node, which is also the DC. When I do this, I see the following in the
logs on the surviving node:
Jan 11 08:49:53 storage2 kernel: drbd r0: helper command: /sbin/drbdadm
fence-peer r0
Jan 11 08:49:53 storage2 crm-fence-peer.sh[15594]:
DRBD_CONF=/etc/drbd.conf DRBD_DONT_WARN_ON_VERSION_MISMATCH=1
DRBD_MINOR=1 DRBD_PEER=storage1 DRBD_PEERS=storage1
DRBD_PEER_ADDRESS=192.168.0.2 DRBD_PEER_AF=ipv4 DRBD_RESOURCE=r0
UP_TO_DATE_NODES='' /usr/lib/drbd/crm-fence-peer.sh
Jan 11 08:49:53 storage2 crm-fence-peer.sh[15594]: INFO peer is
reachable, my disk is UpToDate: placed constraint
'drbd-fence-by-handler-r0-StorageClusterClone'
Jan 11 08:49:53 storage2 kernel: drbd r0: helper command: /sbin/drbdadm
fence-peer r0 exit code 4 (0x400)
Jan 11 08:49:53 storage2 kernel: drbd r0: fence-peer helper returned 4
(peer was fenced)
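If it helps, this is how I'd expect to be able to see the constraint
the handler reports placing (a sketch; the constraint id is taken from
the log line above, and the exact pcs syntax may vary by version):

    pcs constraint location show --full
    # or query the CIB directly:
    cibadmin --query --xpath \
      "//rsc_location[@id='drbd-fence-by-handler-r0-StorageClusterClone']"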
But the switch ports connected to the fenced node are still enabled.
What am I missing here?
Thanks!
Bryan Walton
--
Bryan K. Walton 319-337-3877
Linux Systems Administrator Leepfrog Technologies, Inc