Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
Hi guys,

I'm building a Pacemaker cluster and am having problems with the LVM resources: they start and then stop straight away. I've used the Linbit guide to set this up, but I can't see why this is happening. I've copied the config and the relevant output below in case anyone can help.

Thanks in advance,
Matt

crm_mon:

============
Last updated: Tue Jul 3 19:31:44 2012
Stack: Heartbeat
Current DC: upnas2 (b1fa8c22-cd5e-4d1f-8683-2e9b0766f09e) - partition with quorum
Version: 1.0.12-unknown
2 Nodes configured, unknown expected votes
8 Resources configured.
============

Online: [ upnas2 upnas1 ]

 Master/Slave Set: ms_drbd_infiniband2
     Masters: [ upnas2 ]
     Slaves: [ upnas1 ]
 Resource Group: rg_infiniband2
     p_target_infiniband2    (ocf::heartbeat:iSCSITarget):        Started upnas2
     p_lvm_infiniband2       (ocf::heartbeat:LVM):                Stopped
     p_lu_infiniband2_lun12  (ocf::heartbeat:iSCSILogicalUnit):   Stopped
     p_ip2                   (ocf::heartbeat:IPaddr2):            Stopped
 Master/Slave Set: ms_drbd_infiniband3
     Masters: [ upnas2 ]
     Slaves: [ upnas1 ]
 Resource Group: rg_infiniband3
     p_target_infiniband3    (ocf::heartbeat:iSCSITarget):        Started upnas2
     p_lvm_infiniband3       (ocf::heartbeat:LVM):                Stopped
     p_lu_infiniband3_lun13  (ocf::heartbeat:iSCSILogicalUnit):   Stopped
     p_ip3                   (ocf::heartbeat:IPaddr2):            Stopped
 Master/Slave Set: ms_drbd_infiniband4
     Masters: [ upnas2 ]
     Slaves: [ upnas1 ]
 Resource Group: rg_infiniband4
     p_target_infiniband4    (ocf::heartbeat:iSCSITarget):        Started upnas2
     p_lvm_infiniband4       (ocf::heartbeat:LVM):                Stopped
     p_lu_infiniband4_lun14  (ocf::heartbeat:iSCSILogicalUnit):   Stopped
     p_ip4                   (ocf::heartbeat:IPaddr2):            Stopped
 Master/Slave Set: ms_drbd_infiniband1
     Masters: [ upnas1 ]
     Slaves: [ upnas2 ]
 Resource Group: rg_infiniband1
     p_target_infiniband1    (ocf::heartbeat:iSCSITarget):        Started upnas1
     p_lvm_infiniband1       (ocf::heartbeat:LVM):                Stopped
     p_lu_infiniband1_lun11  (ocf::heartbeat:iSCSILogicalUnit):   Stopped
     p_ip1                   (ocf::heartbeat:IPaddr2):            Stopped

Failed actions:
    p_lvm_infiniband2_start_0 (node=upnas2, call=43, rc=7, status=complete): not running
    p_lvm_infiniband3_start_0 (node=upnas2, call=45, rc=7, status=complete): not running
    p_lvm_infiniband4_start_0 (node=upnas2, call=49, rc=7, status=complete): not running
    p_lvm_infiniband1_start_0 (node=upnas2, call=51, rc=7, status=complete): not running
    p_lvm_infiniband1_start_0 (node=upnas1, call=42, rc=7, status=complete): not running

pvscan (upnas1):

  PV /dev/sda8    VG vg_upnas_4   lvm2 [16.09 GB / 16.09 GB free]
  PV /dev/sda7    VG vg_upnas_3   lvm2 [16.09 GB / 16.09 GB free]
  PV /dev/sda6    VG vg_upnas_2   lvm2 [16.09 GB / 16.09 GB free]
  PV /dev/drbd1   VG vg_upnas_1   lvm2 [16.09 GB / 16.09 GB free]
  Total: 4 [64.35 GB] / in use: 4 [64.35 GB] / in no VG: 0 [0 ]

vgscan (upnas1):

  Reading all physical volumes. This may take a while...
  Found volume group "vg_upnas_4" using metadata type lvm2
  Found volume group "vg_upnas_3" using metadata type lvm2
  Found volume group "vg_upnas_2" using metadata type lvm2
  Found volume group "vg_upnas_1" using metadata type lvm2

pvscan (upnas2):

  PV /dev/sda5    VG vg_upnas_1   lvm2 [16.09 GB / 16.09 GB free]
  PV /dev/drbd4   VG vg_upnas_4   lvm2 [16.09 GB / 16.09 GB free]
  PV /dev/drbd3   VG vg_upnas_3   lvm2 [16.09 GB / 16.09 GB free]
  PV /dev/drbd2   VG vg_upnas_2   lvm2 [16.09 GB / 16.09 GB free]
  Total: 4 [64.35 GB] / in use: 4 [64.35 GB] / in no VG: 0 [0 ]

vgscan (upnas2):

  Reading all physical volumes. This may take a while...
  Found volume group "vg_upnas_1" using metadata type lvm2
  Found volume group "vg_upnas_4" using metadata type lvm2
  Found volume group "vg_upnas_3" using metadata type lvm2
  Found volume group "vg_upnas_2" using metadata type lvm2
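For reference, the failing start can be reproduced outside Pacemaker by driving the resource agent by hand on the node where it fails (a rough sketch only: the agent path assumes the stock resource-agents layout under /usr/lib/ocf, and vg_upnas_2 stands in for any of the four VGs):

# Run the same start/monitor actions the cluster runs:
export OCF_ROOT=/usr/lib/ocf
export OCF_RESKEY_volgrpname=vg_upnas_2
/usr/lib/ocf/resource.d/heartbeat/LVM start;   echo "start rc=$?"
/usr/lib/ocf/resource.d/heartbeat/LVM monitor; echo "monitor rc=$?"

# Or ask LVM directly whether the VG activates and what it contains:
vgchange -a y vg_upnas_2
lvs vg_upnas_2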
Found volume group "vg_upnas_1" using metadata type lvm2 Found volume group "vg_upnas_4" using metadata type lvm2 Found volume group "vg_upnas_3" using metadata type lvm2 Found volume group "vg_upnas_2" using metadata type lvm2 crm configure: primitive p_lvm_infiniband2 ocf:heartbeat:LVM \ params volgrpname="vg_upnas_2" \ op monitor interval="31s" \ op start interval="0" timeout="40s" \ op stop interval="0" timeout="40s" primitive p_lvm_infiniband3 ocf:heartbeat:LVM \ params volgrpname="vg_upnas_3" \ op monitor interval="31s" \ op start interval="0" timeout="40s" \ op stop interval="0" timeout="40s" primitive p_lvm_infiniband4 ocf:heartbeat:LVM \ params volgrpname="vg_upnas_4" \ op monitor interval="31s" \ op start interval="0" timeout="40s" \ op stop interval="0" timeout="40s" primitive p_target_infiniband1 ocf:heartbeat:iSCSITarget \ params iqn="iqn.unipro.iscsi:Target01" tid="11" incoming_username="alice1" incoming_password="wonderland" \ op monitor interval="10s" primitive p_target_infiniband2 ocf:heartbeat:iSCSITarget \ params iqn="iqn.unipro.iscsi:Target02" tid="12" incoming_username="alice2" incoming_password="wonderland" \ op monitor interval="10s" primitive p_target_infiniband3 ocf:heartbeat:iSCSITarget \ params iqn="iqn.unipro.iscsi:Target03" tid="13" incoming_username="alice3" incoming_password="wonderland" \ op monitor interval="10s" primitive p_target_infiniband4 ocf:heartbeat:iSCSITarget \ params iqn="iqn.unipro.iscsi:Target04" tid="14" incoming_username="alice4" incoming_password="wonderland" \ op monitor interval="10s" group rg_infiniband1 p_target_infiniband1 p_lvm_infiniband1 p_lu_infiniband1_lun11 p_ip1 group rg_infiniband2 p_target_infiniband2 p_lvm_infiniband2 p_lu_infiniband2_lun12 p_ip2 group rg_infiniband3 p_target_infiniband3 p_lvm_infiniband3 p_lu_infiniband3_lun13 p_ip3 group rg_infiniband4 p_target_infiniband4 p_lvm_infiniband4 p_lu_infiniband4_lun14 p_ip4 ms ms_drbd_infiniband1 p_drbd_infiniband1 \ meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true" ms ms_drbd_infiniband2 p_drbd_infiniband2 \ meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true" ms ms_drbd_infiniband3 p_drbd_infiniband3 \ meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true" ms ms_drbd_infiniband4 p_drbd_infiniband4 \ meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true" location drbd-fence-by-handler-infiniband2-ms_drbd_infiniband2 ms_drbd_infiniband2 \ rule $id="drbd-fence-by-handler-infiniband2-rule-ms_drbd_infiniband2" $role="Master" -inf: #uname ne upnas2 location drbd-fence-by-handler-infiniband3-ms_drbd_infiniband3 ms_drbd_infiniband3 \ rule $id="drbd-fence-by-handler-infiniband3-rule-ms_drbd_infiniband3" $role="Master" -inf: #uname ne upnas2 location drbd-fence-by-handler-infiniband4-ms_drbd_infiniband4 ms_drbd_infiniband4 \ rule $id="drbd-fence-by-handler-infiniband4-rule-ms_drbd_infiniband4" $role="Master" -inf: #uname ne upnas2 colocation c_infiniband1_on_drbd inf: rg_infiniband1 ms_drbd_infiniband1:Master colocation c_infiniband2_on_drbd inf: rg_infiniband2 ms_drbd_infiniband2:Master colocation c_infiniband3_on_drbd inf: rg_infiniband3 ms_drbd_infiniband3:Master colocation c_infiniband4_on_drbd inf: rg_infiniband4 ms_drbd_infiniband4:Master order o_drbd_before_infiniband1 inf: ms_drbd_infiniband1:promote rg_infiniband1:start order o_drbd_before_infiniband2 inf: ms_drbd_infiniband2:promote rg_infiniband2:start order o_drbd_before_infiniband3 inf: 
order o_drbd_before_infiniband4 inf: ms_drbd_infiniband4:promote rg_infiniband4:start
property $id="cib-bootstrap-options" \
        dc-version="1.0.12-unknown" \
        cluster-infrastructure="Heartbeat" \
        stonith-enabled="false" \
        no-quorum-policy="ignore" \
        default-resource-stickiness="200"
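And for completeness, this is roughly the retest loop after each change (a sketch; it assumes the DRBD resources carry the same infiniband1..4 names that appear in the fence-handler constraints above, with infiniband2 as the example):

# Confirm DRBD is Primary where the group should run:
cat /proc/drbd
drbdadm role infiniband2          # DRBD resource name assumed from the constraint names

# Clear the old failed action and take a one-shot look at the result:
crm resource cleanup p_lvm_infiniband2
crm_mon -1 -r -f                  # one-shot status, inactive resources, fail counts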