[DRBD-user] crm-fence-peer.sh did not place the constraint!

ArekW arkaduis at gmail.com
Sun Jul 16 23:14:12 CEST 2017

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hi,
On a two-node cluster, when I run a failover test I get these messages
in the logs on the healthy node:

Jul 16 22:55:39 centos2 kernel: drbd storage centos1: fence-peer
helper broken, returned 1
Jul 16 22:55:39 centos2 kernel: drbd storage: State change failed:
Refusing to be Primary while peer is not outdated
Jul 16 22:55:39 centos2 kernel: drbd storage: Failed: role( Secondary
-> Primary ) susp-io( no -> fencing)
Jul 16 22:55:39 centos2 kernel: drbd storage centos1: helper command:
/sbin/drbdadm fence-peer
Jul 16 22:55:39 centos2 crm-fence-peer.sh[32094]:
DRBD_BACKING_DEV_0=/dev/storage/lvstorage DRBD_CONF=/etc/drbd.conf
DRBD_LL_DISK=/dev/storage/lvstorage DRBD_MINOR=1 DRBD_MINOR_0=1
DRBD_MY_ADDRESS=192.168.50.152 DRBD_MY_AF=ipv4 DRBD_MY_NODE_ID=1
DRBD_NODE_ID_0=centos1 DRBD_NODE_ID_1=centos2
DRBD_PEER_ADDRESS=192.168.50.151 DRBD_PEER_AF=ipv4 DRBD_PEER_NODE_ID=0
DRBD_RESOURCE=storage DRBD_VOLUME=0 UP_TO_DATE_NODES=0x00000002
/usr/lib/drbd/crm-fence-peer.sh
Jul 16 22:55:39 centos2 crm-fence-peer.sh[32094]: WARNING could not
determine my disk state: did not place the constraint!

Stonith resets the failed node, but DRBD cannot come back up. The
message above repeats every few seconds.
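
If it helps with diagnosis, these are the commands I can run on the
surviving node while the message repeats (standard drbd-utils and
Pacemaker CLI tools; as far as I understand the script, the constraint
it is supposed to place is named drbd-fence-by-handler-<resource>-...):

# DRBD state as reported by the DRBD 9 tools
drbdadm status storage

# cluster state, and whether the handler placed a fencing constraint
crm_mon -1
cibadmin --query --scope constraints | grep drbd-fence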

My config:
resource storage {
  protocol C;
  meta-disk internal;
  device /dev/drbd1;
  disk /dev/storage/lvstorage;
  handlers {
    #pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
    #pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
    #local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
    fence-peer "/usr/lib/drbd/crm-fence-peer.sh --timeout 30 --dc-timeout 60";
    after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
    #split-brain "/usr/lib/drbd/notify-split-brain.sh root";
    #out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
  }
  syncer {
    verify-alg sha1;
  }
  net {
    allow-two-primaries;
    fencing resource-and-stonith;
  }
  on centos1 {
    address  192.168.50.151:7789;
  }
  on centos2 {
    address  192.168.50.152:7789;
  }
}
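
For completeness, the Pacemaker side is set up roughly like this (a
sketch from memory; resource names and option values are illustrative,
not copied from the cluster):

# DRBD resource agent plus a master/slave wrapper (pcs syntax on CentOS 7)
pcs resource create drbd_storage ocf:linbit:drbd drbd_resource=storage \
    op monitor interval=29s role=Master op monitor interval=31s role=Slave
pcs resource master ms_drbd_storage drbd_storage \
    master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true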

drbdadm --version
DRBDADM_BUILDTAG=GIT-hash:\ 98b6340c328b763a11c6fb63a6dc340722621ac2\
build\ by\ mockbuild@\,\ 2017-06-12\ 12:17:40
DRBDADM_API_VERSION=2
DRBD_KERNEL_VERSION_CODE=0x090007
DRBD_KERNEL_VERSION=9.0.7
DRBDADM_VERSION_CODE=0x090000
DRBDADM_VERSION=9.0.0
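
In case it is useful: I believe the handler can also be run by hand on
the surviving node (my assumption is that the script mainly needs
DRBD_RESOURCE in the environment; the kernel normally passes the rest):

# invoke the fence handler manually and check whether the exit code
# matches the "returned 1" from the kernel log
DRBD_RESOURCE=storage /usr/lib/drbd/crm-fence-peer.sh --timeout 30 --dc-timeout 60
echo $?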


Stonith is configured and working OK. Please explain what I am missing.


