[DRBD-user] Ensuring drbd is started before mounting filesystem

Andreas Kurz andreas at hastexo.com
Mon Oct 24 00:10:23 CEST 2011

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


On 10/23/2011 11:18 PM, Nick Khamis wrote:
> Hello Everyone,
> 
> I was wondering if it's possible to use the "order" directive to ensure that
> drbd is fully started before attempting to mount the filesystem? I tried the
> following:
> 
> node mydrbd1 \
>         attributes standby="off"
> node mydrbd2 \
>         attributes standby="off"
> primitive myIP ocf:heartbeat:IPaddr2 \
>         op monitor interval="60" timeout="20" \
>         params ip="192.168.2.5" cidr_netmask="24" \
>         nic="eth1" broadcast="192.168.2.255" \
>         lvs_support="true"
> primitive myDRBD ocf:linbit:drbd \
>         params drbd_resource="r0.res" \
>         op monitor role=Master interval="10" \
>         op monitor role=Slave interval="30"
> ms msMyDRBD myDRBD \
>         meta master-max="1" master-node-max="1" \
>         clone-max="2" clone-node-max="1" \
>         notify="true" globally-unique="false"
> primitive myFilesystem ocf:heartbeat:Filesystem \
>         params device="/dev/drbd0" directory="/service" fstype="ext3" \
>         op monitor interval="15" timeout="60" \
>         meta target-role="Started"
> group MyServices myIP myFilesystem meta target-role="Started"
> order drbdAfterIP \
>         inf: myIP msMyDRBD
> order filesystemAfterDRBD \
>         inf: msMyDRBD:promote myFilesystem:start

There is no colocation between the DRBD master and the filesystem. An order
constraint only sequences the operations; it does not tie the resources to
the same node, so Pacemaker is free to start the filesystem on the node where
DRBD is still Secondary. That is exactly what your log shows.
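
A minimal sketch of the missing piece, reusing your resource names (untested
against your cluster, so adapt as needed): colocate the whole group with the
master role, and order the group, rather than the bare filesystem primitive,
after the promote. The constraint ids here are just examples:

  colocation filesystemWithDRBDMaster inf: MyServices msMyDRBD:Master
  order filesystemAfterDRBD inf: msMyDRBD:promote MyServices:start

With both constraints in place the mount is only ever attempted on the node
that has just been promoted.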

> location prefer-mysql1 MyServices inf: mydrbd1
> location prefer-mysql2 MyServices inf: mydrbd2

These two constraints make no sense: INFINITY location preferences for both
nodes cancel each other out, so the group ends up with no node preference at
all.
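
If you want the group to prefer one node, score only that node, for example
(a sketch, pick whatever score suits your setup):

  location prefer-mydrbd1 MyServices 100: mydrbd1

Together with your resource-stickiness of 100 the group should then stay
where it is after a failover instead of bouncing straight back.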

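By the way: mount reports "Wrong medium type" when /dev/drbd0 is still in the
Secondary role, and the "FATAL: Module scsi_hostadapter not found" line is
harmless modprobe noise from the Filesystem agent. Assuming your resource
really is named r0.res, as in your drbd_resource parameter, you can verify
the role on each node with:

  drbdadm role r0.res
  cat /proc/drbd
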
Regards,
Andreas

-- 
Need help with DRBD?
http://www.hastexo.com/now


> property $id="cib-bootstrap-options" \
>         no-quorum-policy="ignore" \
>         stonith-enabled="false" \
>         expected-quorum-votes="5" \
>         dc-version="1.1.6-9971ebba4494012a93c03b40a2c58ec0eb60f50c" \
>         cluster-recheck-interval="0" \
>         cluster-infrastructure="openais"
> rsc_defaults $id="rsc-options" \
>         resource-stickiness="100"
> 
> However, it still seems that Pacemaker attempts to mount the filesystem
> before DRBD is promoted:
> 
> Oct 23 17:18:55 mydrbd1 crmd: [5074]: info: send_direct_ack: ACK'ing
> resource op myDRBD:1_notify_0 from
> 61:1:0:f47dbe28-970a-4750-b0b9-a40bf6401b5f:
> lrm_invoke-lrmd-1319404735-8
> Oct 23 17:18:55 mydrbd1 crmd: [5074]: info: process_lrm_event: LRM
> operation myDRBD:1_notify_0 (call=10, rc=0, cib-update=0,
> confirmed=true) ok
> Oct 23 17:18:56 mydrbd1 crmd: [5074]: info: do_lrm_rsc_op: Performing
> key=8:2:0:f47dbe28-970a-4750-b0b9-a40bf6401b5f op=myFilesystem_start_0
> )
> Oct 23 17:18:56 mydrbd1 lrmd: [5071]: info: rsc:myFilesystem:11: start
> Oct 23 17:18:56 mydrbd1 crmd: [5074]: info: do_lrm_rsc_op: Performing
> key=22:2:0:f47dbe28-970a-4750-b0b9-a40bf6401b5f
> op=myDRBD:1_monitor_30000 )
> Oct 23 17:18:56 mydrbd1 lrmd: [5071]: info: rsc:myDRBD:1:12: monitor
> Oct 23 17:18:56 mydrbd1 lrmd: [5071]: info: RA output:
> (myIP:start:stderr) ARPING 192.168.2.5 from 192.168.2.5 eth1
> Sent 5 probes (5 broadcast(s))
> Received 0 response(s)
> 
> Oct 23 17:18:56 mydrbd1 crmd: [5074]: info: process_lrm_event: LRM
> operation myDRBD:1_monitor_30000 (call=12, rc=0, cib-update=15,
> confirmed=false) ok
> Oct 23 17:18:56 mydrbd1 lrmd: [5071]: info: RA output:
> (myFilesystem:start:stderr) FATAL: Module scsi_hostadapter not found.
> 
> Oct 23 17:18:57 mydrbd1 lrmd: [5071]: info: RA output:
> (myFilesystem:start:stderr) /dev/drbd0: Wrong medium type
> 
> Oct 23 17:18:57 mydrbd1 lrmd: [5071]: info: RA output:
> (myFilesystem:start:stderr) mount: block device /dev/drbd0 is
> write-protected, mounting read-only
> Oct 23 17:18:57 mydrbd1 lrmd: [5071]: info: RA output:
> (myFilesystem:start:stderr)
> 
> Oct 23 17:18:57 mydrbd1 lrmd: [5071]: info: RA output:
> (myFilesystem:start:stderr) mount: Wrong medium type
> Oct 23 17:18:57 mydrbd1 lrmd: [5071]: info: RA output:
> (myFilesystem:start:stderr)
> 
> Oct 23 17:18:57 mydrbd1 crmd: [5074]: info: process_lrm_event: LRM
> operation myFilesystem_start_0 (call=11, rc=1, cib-update=16,
> confirmed=true) unknown error
> Oct 23 17:18:57 mydrbd1 attrd: [5072]: notice: attrd_ais_dispatch:
> Update relayed from mydrbd2
> Oct 23 17:18:57 mydrbd1 attrd: [5072]: notice: attrd_trigger_update:
> Sending flush op to all hosts for: fail-count-myFilesystem (INFINITY)
> Oct 23 17:18:57 mydrbd1 attrd: [5072]: notice: attrd_perform_update:
> Sent update 14: fail-count-myFilesystem=INFINITY
> Oct 23 17:18:57 mydrbd1 attrd: [5072]: notice: attrd_ais_dispatch:
> Update relayed from mydrbd2
> Oct 23 17:18:57 mydrbd1 attrd: [5072]: notice: attrd_trigger_update:
> Sending flush op to all hosts for: last-failure-myFilesystem
> (1319404808)
> Oct 23 17:18:57 mydrbd1 attrd: [5072]: notice: attrd_perform_update:
> Sent update 17: last-failure-myFilesystem=1319404808
> Oct 23 17:18:57 mydrbd1 crmd: [5074]: info: do_lrm_rsc_op: Performing
> key=3:4:0:f47dbe28-970a-4750-b0b9-a40bf6401b5f op=myFilesystem_stop_0
> )
> Oct 23 17:18:57 mydrbd1 lrmd: [5071]: info: rsc:myFilesystem:13: stop
> Oct 23 17:18:57 mydrbd1 crmd: [5074]: info: do_lrm_rsc_op: Performing
> key=55:4:0:f47dbe28-970a-4750-b0b9-a40bf6401b5f op=myDRBD:1_notify_0 )
> Oct 23 17:18:57 mydrbd1 lrmd: [5071]: info: rsc:myDRBD:1:14: notify
> Oct 23 17:18:58 mydrbd1 crmd: [5074]: info: send_direct_ack: ACK'ing
> resource op myDRBD:1_notify_0 from
> 55:4:0:f47dbe28-970a-4750-b0b9-a40bf6401b5f:
> lrm_invoke-lrmd-1319404738-9
> Oct 23 17:18:58 mydrbd1 crmd: [5074]: info: process_lrm_event: LRM
> operation myDRBD:1_notify_0 (call=14, rc=0, cib-update=0,
> confirmed=true) ok
> Oct 23 17:18:58 mydrbd1 lrmd: [5071]: info: cancel_op: operation
> monitor[12] on ocf::drbd::myDRBD:1 for client 5074, its parameters:
> CRM_meta_clone=[1] CRM_meta_timeout=[20000]
> CRM_meta_notify_slave_resource=[ ] CRM_meta_notify_active_resource=[ ]
> CRM_meta_notify_demote_uname=[ ] drbd_resource=[r0.res]
> CRM_meta_notify_inactive_resource=[myDRBD:0 myDRBD:1 ]
> CRM_meta_master_node_max=[1] CRM_meta_notify_stop_resource=[ ]
> CRM_meta_notify_master_resource=[ ] CRM_meta_clone_node_max=[1]
> CRM_meta_clone_max=[2] CRM_meta_notify=[true]
> CRM_meta_notify_slave_uname=[ ] CR cancelled
> Oct 23 17:18:58 mydrbd1 crmd: [5074]: info: do_lrm_rsc_op: Performing
> key=22:4:0:f47dbe28-970a-4750-b0b9-a40bf6401b5f op=myDRBD:1_stop_0 )
> Oct 23 17:18:58 mydrbd1 lrmd: [5071]: info: rsc:myDRBD:1:15: stop
> Oct 23 17:18:58 mydrbd1 crmd: [5074]: info: process_lrm_event: LRM
> operation myDRBD:1_monitor_30000 (call=12, status=1, cib-update=0,
> confirmed=true) Cancelled
> Oct 23 17:18:58 mydrbd1 lrmd: [5071]: info: RA output:
> (myFilesystem:stop:stderr) /dev/drbd0: Wrong medium type
> 
> Oct 23 17:18:58 mydrbd1 crmd: [5074]: info: process_lrm_event: LRM
> operation myFilesystem_stop_0 (call=13, rc=0, cib-update=17,
> confirmed=true) ok
> Oct 23 17:18:58 mydrbd1 lrmd: [5071]: info: RA output: (myDRBD:1:stop:stdout)
> 
> Please Help,
> 
> Nick.


