[DRBD-user] FileSystem Resource Won't Start

Jake Smith jsmith at argotec.com
Tue Dec 11 19:51:29 CET 2012

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


----- Original Message -----

> From: "Eric Robinson" <eric.robinson at psmnv.com>
> To: drbd-user at lists.linbit.com
> Sent: Tuesday, December 11, 2012 1:01:43 PM
> Subject: [DRBD-user] FileSystem Resource Won't Start

> When I try to start a filesystem resource, I get the following in the
> logs. The resource will not start on either node. Any idea what this
> means? The underlying device is available.

> --

> Dec 11 09:56:16 ha09a pengine[2642]: warning: unpack_rsc_op:
> Processing failed op start for p_fs_clust08 on ha09a: not installed
> (5)
> Dec 11 09:56:16 ha09a pengine[2642]: notice: unpack_rsc_op:
> Preventing p_fs_clust09 from re-starting on ha09a: operation start
> failed 'not installed' (rc=5)
> Dec 11 09:56:16 ha09a pengine[2642]: warning: unpack_rsc_op:
> Processing failed op start for p_fs_clust09 on ha09a: not installed
> (5)
> Dec 11 09:56:16 ha09a pengine[2642]: warning: unpack_rsc_op:
> Processing failed op start for p_fs_clust08 on ha09b: unknown error
> (1)
> Dec 11 09:56:16 ha09a pengine[2642]: warning: unpack_rsc_op:
> Processing failed op start for p_fs_clust09 on ha09b: unknown error
> (1)
> Dec 11 09:56:16 ha09a pengine[2642]: warning:
> common_apply_stickiness: Forcing p_fs_clust08 away from ha09a after
> 1000000 failures (max=1000000)
> Dec 11 09:56:16 ha09a pengine[2642]: warning:
> common_apply_stickiness: Forcing p_fs_clust09 away from ha09a after
> 1000000 failures (max=1000000)
> Dec 11 09:56:16 ha09a pengine[2642]: warning:
> common_apply_stickiness: Forcing p_fs_clust08 away from ha09b after
> 1000000 failures (max=1000000)
> Dec 11 09:56:16 ha09a pengine[2642]: warning:
> common_apply_stickiness: Forcing p_fs_clust09 away from ha09b after
> 1000000 failures (max=1000000)

Your resources are not starting in the proper order, and/or they are starting on the wrong node. You need to tell Pacemaker the order in which they must start and which resources need to be colocated with each other; specifically, the relationships between the DRBD master/slave resources and the filesystem/VIP groups. The "not installed" start failures fit that picture: most likely the Filesystem agent is being started before DRBD has been promoted on that node, so /dev/drbd0 and /dev/drbd1 are not usable yet. More here: 
http://clusterlabs.org/doc/en-US/Pacemaker/1.1-plugin/html/Pacemaker_Explained/s-resource-ordering.html 
http://clusterlabs.org/doc/en-US/Pacemaker/1.1-plugin/html/Pacemaker_Explained/s-resource-colocation.html 

Also here - scroll down from Configure to the subsections for colocation and order: 
http://www.nongnu.org/crmsh/crm.8.html#cmdhelp_configure 
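
For reference, the constraint syntax from that man page looks roughly like this (simplified; see the page for the full grammar):

    order <id> <score>: <first-resource>[:<action>] <then-resource>[:<action>]
    colocation <id> <score>: <resource>[:<role>] <with-resource>[:<role>]

Concrete constraints for your configuration are below.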
> Here is what the cib looks like.

> node ha09a \
>     attributes standby="off"
> node ha09b \
>     attributes standby="off"
> primitive p_drbd0 ocf:linbit:drbd \
>     params drbd_resource="ha01_mysql" \
>     op monitor interval="31s" role="Slave" \
>     op monitor interval="30s" role="Master"
> primitive p_drbd1 ocf:linbit:drbd \
>     params drbd_resource="ha02_mysql" \
>     op monitor interval="31s" role="Slave" \
>     op monitor interval="30s" role="Master"
> primitive p_fs_clust08 ocf:heartbeat:Filesystem \
>     params device="/dev/drbd0" directory="/ha01_mysql" fstype="ext3" options="noatime" \
>     meta target-role="Started"
> primitive p_fs_clust09 ocf:heartbeat:Filesystem \
>     params device="/dev/drbd1" directory="/ha02_mysql" fstype="ext3" options="noatime"
> primitive p_vip_clust08 ocf:heartbeat:IPaddr2 \
>     params ip="192.168.10.210" cidr_netmask="32" \
>     op monitor interval="30s"
> primitive p_vip_clust09 ocf:heartbeat:IPaddr2 \
>     params ip="192.168.10.211" cidr_netmask="32" \
>     op monitor interval="30s"
> group g_clust08 p_fs_clust08 p_vip_clust08 \
>     meta target-role="Started"
> group g_clust09 p_fs_clust09 p_vip_clust09 \
>     meta target-role="Started"
> ms ms_drbd0 p_drbd0 \
>     meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true" target-role="Master"
> ms ms_drbd1 p_drbd1 \
>     meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true" target-role="Master"
Add something like this (note the crmsh keyword is "colocation", not "collocate", and the role name is capitalized "Master"): 

order o_drbd_then_group_clust08 inf: ms_drbd0:promote g_clust08:start 
order o_drbd_then_group_clust09 inf: ms_drbd1:promote g_clust09:start 
colocation c_group_clust08_on_drbd_master inf: g_clust08 ms_drbd0:Master 
colocation c_group_clust09_on_drbd_master inf: g_clust09 ms_drbd1:Master 
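
Once those constraints are in, you will also need to clear the failcounts that have piled up (the policy engine is forcing the resources away after 1000000 failures), or Pacemaker will not retry them on those nodes. Something like this should do it, assuming crmsh:

    crm resource cleanup p_fs_clust08
    crm resource cleanup p_fs_clust09

Then watch crm_mon and confirm the groups follow the DRBD masters.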

HTH 

Jake 

> property $id="cib-bootstrap-options" \
>     dc-version="1.1.8-4.el6-394e906" \
>     cluster-infrastructure="openais" \
>     expected-quorum-votes="2" \
>     stonith-enabled="false" \
>     no-quorum-policy="ignore" \
>     last-lrm-refresh="1355247551"
> rsc_defaults $id="rsc-options" \
>     resource-stickiness="100"
> #vim:set syntax=pcmk

> --
> Eric Robinson
