[DRBD-user] Having Trouble with LVM on DRBD

Igor Cicimov icicimov at gmail.com
Thu Feb 25 23:28:39 CET 2016



On 26/02/2016 8:53 AM, "Eric Robinson" <eric.robinson at psmnv.com> wrote:
>
> > And your pacemaker config is???
> >
> > Run
> >
> > #  crm configure show
> > and paste it here.
>
> Pacemaker 1.1.12.
>
> Here’s the config…
>
> [root at ha13a /]# crm configure show
> node ha13a
> node ha13b
> primitive p_drbd0 ocf:linbit:drbd \
>         params drbd_resource=ha01_mysql \
>         op monitor interval=31s role=Slave \
>         op monitor interval=30s role=Master
> primitive p_drbd1 ocf:linbit:drbd \
>         params drbd_resource=ha02_mysql \
>         op monitor interval=29s role=Slave \
>         op monitor interval=28s role=Master
> primitive p_fs_clust17 Filesystem \
>         params device="/dev/vg_drbd0/lv_drbd0" directory="/ha01_mysql" fstype=ext3 options=noatime
> primitive p_fs_clust18 Filesystem \
>         params device="/dev/vg_drbd1/lv_drbd1" directory="/ha02_mysql" fstype=ext3 options=noatime
> primitive p_vip_clust17 IPaddr2 \
>         params ip=192.168.9.104 cidr_netmask=32 \
>         op monitor interval=30s
> primitive p_vip_clust18 IPaddr2 \
>         params ip=192.168.9.105 cidr_netmask=32 \
>         op monitor interval=30s
> ms ms_drbd0 p_drbd0 \
>         meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true target-role=Master
> ms ms_drbd1 p_drbd1 \
>         meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true target-role=Master
> location cli-prefer-p_fs_clust17 p_fs_clust17 role=Started inf: ha13b
> colocation c_clust17 inf: p_vip_clust17 ms_drbd0:Master
> colocation c_clust18 inf: p_vip_clust18 ms_drbd1:Master
> order o_clust17 inf: ms_drbd0:promote p_vip_clust17
> order o_clust18 inf: ms_drbd1:promote p_vip_clust18
> property cib-bootstrap-options: \
>         dc-version=1.1.11-97629de \
>         cluster-infrastructure="classic openais (with plugin)" \
>         no-quorum-policy=ignore \
>         stonith-enabled=false \
>         maintenance-mode=false \
>         expected-quorum-votes=2 \
>         last-lrm-refresh=1456434863
>

I'm confused: I don't see the VG(s) and LV(s) under cluster control. Have you
done that bit?
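
For the Filesystem resources to find /dev/vg_drbd0/lv_drbd0 and
/dev/vg_drbd1/lv_drbd1 after a failover, the cluster needs to activate the VG
on whichever node becomes DRBD master. A minimal sketch of what that could
look like here (the p_lvm_* and group names are made up, and I'm assuming
vg_drbd0 sits on the ha01_mysql DRBD device and vg_drbd1 on ha02_mysql):

```
primitive p_lvm_drbd0 ocf:heartbeat:LVM \
        params volgrpname=vg_drbd0 \
        op monitor interval=30s
primitive p_lvm_drbd1 ocf:heartbeat:LVM \
        params volgrpname=vg_drbd1 \
        op monitor interval=30s
group g_clust17 p_lvm_drbd0 p_fs_clust17 p_vip_clust17
group g_clust18 p_lvm_drbd1 p_fs_clust18 p_vip_clust18
colocation c_g_clust17 inf: g_clust17 ms_drbd0:Master
colocation c_g_clust18 inf: g_clust18 ms_drbd1:Master
order o_g_clust17 inf: ms_drbd0:promote g_clust17:start
order o_g_clust18 inf: ms_drbd1:promote g_clust18:start
```

The groups would replace your existing per-resource colocation/order
constraints, so that VG activation, mount and VIP all start in order on the
node where the DRBD resource was just promoted.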
>
>
> crm_mon shows…
>
> Last updated: Thu Feb 25 13:49:06 2016
> Last change: Thu Feb 25 13:49:04 2016
> Stack: classic openais (with plugin)
> Current DC: ha13b - partition with quorum
> Version: 1.1.11-97629de
> 2 Nodes configured, 2 expected votes
> 8 Resources configured
>
> Online: [ ha13a ha13b ]
>
> Master/Slave Set: ms_drbd0 [p_drbd0]
>      Masters: [ ha13a ]
>      Slaves: [ ha13b ]
> Master/Slave Set: ms_drbd1 [p_drbd1]
>      Masters: [ ha13b ]
>      Slaves: [ ha13a ]
> p_vip_clust17   (ocf::heartbeat:IPaddr2):       Started ha13a
> p_vip_clust18   (ocf::heartbeat:IPaddr2):       Started ha13b
> p_fs_clust17    (ocf::heartbeat:Filesystem):    Started ha13a
> p_fs_clust18    (ocf::heartbeat:Filesystem):    Started ha13b
>
> Failed actions:
>     p_fs_clust17_start_0 on ha13b 'not installed' (5): call=124, status=complete, last-rc-change='Thu Feb 25 13:49:04 2016', queued=0ms, exec=46ms
>     p_fs_clust18_start_0 on ha13a 'not installed' (5): call=124, status=complete, last-rc-change='Thu Feb 25 13:49:04 2016', queued=0ms, exec=47ms
>
> …however, the filesystems are properly mounted.
>
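
A 'not installed' (rc=5) from the Filesystem agent usually means the device
it was told to mount doesn't exist on that node, which fits a VG that was
never activated there after promotion. A quick way to confirm on the node
where the start failed (commands assume the VG/LV names from your config):

```
lvs vg_drbd0                # is lv_drbd0 listed, and is it active?
ls -l /dev/vg_drbd0/lv_drbd0   # does the device node exist at all?
vgchange -ay vg_drbd0       # activate it by hand; a cleanup/restart of
                            # p_fs_clust17 should then succeed
```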
>
> When I try to fail over, it fails…
>
> --
> Eric Robinson
>