[DRBD-user] operation monitor failed 'not configured' - how to tell what's not configured?

Klint Gore kgore4 at une.edu.au
Wed Sep 24 02:17:34 CEST 2014

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hi everyone,

I'm trying to bring up DRBD + Corosync + Pacemaker + NFS on CentOS 7, and I'm getting this message in pacemaker.log:

Sep 24 09:21:29 [2757] hans0.une.edu.au    pengine:    error: unpack_rsc_op:    Preventing master_drbd from re-starting anywhere in the cluster : operation monitor failed 'not configured' (rc=6)

How do I tell which bit isn't configured?
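
If I'm reading the OCF return codes right, rc=6 is OCF_ERR_CONFIGURED, so it's the drbd resource agent itself that thinks something is wrong, but the pengine line doesn't say which check tripped. The closest I've got to narrowing it down is grepping for the agent's own messages and running its monitor action by hand - rough sketch below (homeagbu is just one of my resources, and the return code may well differ outside pacemaker because the CRM_meta clone variables aren't set):

#look for the agent's own error text around the failure
grep -iE "drbd|not configured" /var/log/pacemaker.log /var/log/messages

#call the linbit drbd agent's monitor action directly and show its return code
OCF_ROOT=/usr/lib/ocf OCF_RESKEY_drbd_resource=homeagbu \
  /usr/lib/ocf/resource.d/linbit/drbd monitor; echo "rc=$?"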

The details
[root@hans0 log]# uname -a
Linux hans0.une.edu.au 3.10.0-123.6.3.el7.x86_64 #1 SMP Wed Aug 6 21:12:36 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
[root@hans0 log]# cat /etc/redhat-release
CentOS Linux release 7.0.1406 (Core)

[root@hans0 ~]# yum list installed |grep -E "(coro|pacemaker|drbd)"
corosync.x86_64                       2.3.3-2.el7                      @base
corosynclib.x86_64                    2.3.3-2.el7                      @base
drbd84-utils.x86_64                   8.9.1-1.el7.elrepo               @elrepo
kmod-drbd84.x86_64                    8.4.5-1.el7.elrepo               @elrepo
pacemaker.x86_64                      1.1.10-32.el7_0                  @updates
pacemaker-cli.x86_64                  1.1.10-32.el7_0                  @updates
pacemaker-cluster-libs.x86_64         1.1.10-32.el7_0                  @updates
pacemaker-libs.x86_64                 1.1.10-32.el7_0                  @updates

[root@hans0 ~]# drbd-overview
1:homeagbu/0  Connected Secondary/Secondary UpToDate/UpToDate
2:backdesk/0  Connected Secondary/Secondary UpToDate/UpToDate
3:genomics/0  Connected Secondary/Secondary UpToDate/UpToDate
4:backserv/0  Connected Secondary/Secondary UpToDate/UpToDate
5:agbudata/0  Connected Secondary/Secondary UpToDate/UpToDate

#cluster setup script
pcs cluster auth hans0 hans1
pcs cluster setup --name agbunfs hans0 hans1

#disable stonith for now
pcs property set stonith-enabled=false

#only 2 in cluster - ignore quorum
pcs property set no-quorum-policy=ignore

#set stickiness
pcs resource defaults resource-stickiness=200

#make a virtual ip
pcs resource create virtual_ip ocf:heartbeat:IPaddr2 ip=10.1.1.39 cidr_netmask=22 nic=eno2 op monitor interval=20s

pcs resource create drbd_homeagbu ocf:linbit:drbd drbd_resource=homeagbu op monitor interval=29s role="Master" op monitor interval=35s role="Slave"
pcs resource create drbd_backdesk ocf:linbit:drbd drbd_resource=backdesk op monitor interval=29s role="Master" op monitor interval=35s role="Slave"
pcs resource create drbd_backserv ocf:linbit:drbd drbd_resource=backserv op monitor interval=29s role="Master" op monitor interval=35s role="Slave"
pcs resource create drbd_genomics ocf:linbit:drbd drbd_resource=genomics op monitor interval=29s role="Master" op monitor interval=35s role="Slave"
pcs resource create drbd_agbudata ocf:linbit:drbd drbd_resource=agbudata op monitor interval=29s role="Master" op monitor interval=35s role="Slave"

pcs resource group add drbd_group drbd_homeagbu drbd_backdesk drbd_backserv drbd_genomics drbd_agbudata

pcs resource master master_drbd drbd_group master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
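
Since master_drbd is the resource named in the error, I've also been double checking what pcs actually created there (just an inspection step, not part of the setup):

#show the master/slave resource definition and meta attributes as pcs sees them
pcs resource show master_drbd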

#this is to mount the filesystem
pcs resource create shareha Filesystem device="/dev/drbd/by-res/homeagbu" directory="/mnt/homeagbu" fstype="xfs"
pcs resource create sharebd Filesystem device="/dev/drbd/by-res/backdesk" directory="/mnt/backdesk" fstype="xfs"
pcs resource create sharebs Filesystem device="/dev/drbd/by-res/backserv" directory="/mnt/backserv" fstype="xfs"
pcs resource create sharege Filesystem device="/dev/drbd/by-res/genomics" directory="/mnt/genomics" fstype="xfs"
pcs resource create sharead Filesystem device="/dev/drbd/by-res/agbudata" directory="/mnt/agbudata" fstype="xfs"

pcs resource group add fs-group shareha sharebd sharebs sharege sharead


#this is to do the exports
pcs resource create export-rulesha exportfs clientspec=10.1.1.0/22 options="rw,no_root_squash" directory="/mnt/homeagbu/agbu" fsid=1000
pcs resource create export-rulesbd exportfs clientspec=10.1.1.0/22 options="rw,no_root_squash" directory="/mnt/backdesk" fsid=1001
pcs resource create export-rulesbs exportfs clientspec=10.1.1.0/22 options="rw,no_root_squash" directory="/mnt/backserv" fsid=1002
pcs resource create export-rulesge exportfs clientspec=10.1.1.0/22 options="rw,no_root_squash" directory="/mnt/genomics" fsid=1003
pcs resource create export-rulesad exportfs clientspec=10.1.1.0/22 options="rw,no_root_squash" directory="/mnt/agbudata" fsid=1004


#this is the nfs server
pcs resource create nfss nfsserver nfs_shared_infodir=/mnt/homeagbu/nfs nfs_ip=10.1.1.39

pcs resource group add nfs-group nfss export-rulesha export-rulesbd export-rulesbs export-rulesge export-rulesad

pcs constraint colocation add master_drbd virtual_ip INFINITY with-rsc-role="Master"
pcs constraint colocation add master_drbd fs-group INFINITY with-rsc-role="Master"
pcs constraint colocation add master_drbd nfs-group  INFINITY with-rsc-role="Master"

pcs constraint order promote master_drbd then start fs-group
pcs constraint order fs-group then start virtual_ip
pcs constraint order virtual_ip then start nfs-group
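
For completeness, this is how I've been reviewing what ended up in the CIB after the script runs - I'm not sure whether crm_verify would even flag the kind of misconfiguration the agent is complaining about:

#list the constraints, dump the whole cluster configuration, and validate the live CIB
pcs constraint
pcs config
crm_verify -L -V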

[root@hans0 log]# pcs cluster status
Cluster Status:
Last updated: Wed Sep 24 09:50:04 2014
Last change: Tue Sep 23 16:21:26 2014 via cibadmin on hans0
Stack: corosync
Current DC: hans0 (1) - partition with quorum
Version: 1.1.10-32.el7_0-368c726
2 Nodes configured
22 Resources configured

PCSD Status:
  hans0: Online
  hans1: Online
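
pcs cluster status only gives the summary above; for the failing monitor itself I've been looking at the full status with fail counts, along these lines:

#one-shot status including inactive resources, fail counts and any failed actions
crm_mon -1rf
pcs status --full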




--
Klint Gore
Database Manager
Sheep CRC
A.G.B.U.
University of New England
Armidale NSW 2350

Ph: 02 6773 3789
Fax: 02 6773 3266
EMail: kgore4@une.edu.au
