[DRBD-user] problem using lv as drbd backend device

Digimer lists at alteeve.ca
Mon Oct 7 20:27:18 CEST 2013

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


I knew you could make it work; it just seems to me that any benefit of
LVM under DRBD is lost to the added setup/maintenance complexity.

In the end, though, whatever works for you is a good solution. :)

digimer

On 07/10/13 04:27, Mia Lueng wrote:
> I found a solution:
> 1. drbdadm sh-ll-dev drbd0 to find drbd0's backend LV.
> 2. Map that LV to its dm-X node and ls /sys/block/dm-X/holders to find
>    the frontend LV still holding it open.
> 3. dmsetup remove -f $frontlv to release it (see the sketch below).
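
For reference, a minimal shell sketch of those three steps. The resource
name drbd0 comes from this thread; the variable names and the frontend
mapping name are placeholders, and "dmsetup remove -f" forcibly tears down
a device-mapper entry, so it should only be run against a mapping known to
be stale:

    BACKING=$(drbdadm sh-ll-dev drbd0)            # e.g. /dev/drbdvg/drbd0_lv
    DM=$(basename "$(readlink -f "$BACKING")")    # resolve the LV symlink to its dm-X node
    ls /sys/block/"$DM"/holders                   # frontend LV(s) still holding it open
    dmsetup remove -f oravg-oralv                 # placeholder: the stale frontend mapping
    drbdadm up drbd0                              # attaching the backing device should now work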
> 
> 
> 
> 2013/10/7 Digimer <lists at alteeve.ca>
> 
>     On 06/10/13 23:26, Mia Lueng wrote:
>     > I have built a DRBD cluster. The storage stack is as follows:
>     >
>     > backend LV ---> drbd0 ---> PV ---> VG ---> user LV
>     >
>     > That is, I create the DRBD device on top of an LV, and then create a
>     > volume group on the DRBD device again.
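
As an illustration only, such a stack could be built roughly like this; the
VG/LV names and sizes are taken from the lvs output further down in this
mail, and the exact commands are assumptions, not Mia's actual setup:

    lvcreate -L 800M -n drbd0_lv drbdvg    # backend LV on the SAN volume group
    drbdadm create-md drbd0                # DRBD metadata on that LV
    drbdadm up drbd0
    pvcreate /dev/drbd0                    # LVM again, this time on top of DRBD
    vgcreate oravg /dev/drbd0
    lvcreate -L 1000M -n oralv oravg       # the user-visible LV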
>     >
>     > In /etc/lvm/lvm.conf I add a filter so that pvscan does not probe the
>     > backend LV. This works fine in normal operation.
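
A hedged sketch of what such a filter in the devices { } section of
/etc/lvm/lvm.conf might look like; the regular expressions are assumptions
based on the device names in this thread, and on a real cluster the reject
pattern usually also has to cover the /dev/mapper and /dev/dm-* aliases of
the backing LV:

    # accept DRBD devices, never scan the DRBD backing LV directly,
    # accept everything else (SAN LUN, etc.)
    filter = [ "a|^/dev/drbd[0-9]+$|", "r|^/dev/drbdvg/drbd0_lv$|", "a|.*|" ]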
>     >
>     > Now A is primary and B is secondary. If I break the link to A's
>     > storage (FC SAN), the HA cluster detects the error and fails the
>     > resources over from A to B. But the DRBD resource and the filesystem
>     > cannot be stopped on A, so A is rebooted (stop-failure handling) and
>     > B takes over all resources. When A rejoins the cluster, the DRBD
>     > resource cannot be started as secondary automatically: the backend LV
>     > cannot be attached to the DRBD resource.
>     >
>     > vcs2:~ # lvs
>     >   LV       VG     Attr   LSize    Origin Snap%  Move Log Copy%  Convert
>     >   drbd0_lv drbdvg -wi-ao  800.00M
>     >   oralv    oravg  -wi-a- 1000.00M
>     > vcs2:~ # modprobe drbd
>     > vcs2:~ # drbdadm up drbd0
>     > 0: Failure: (104) Can not open backing device.
>     > Command 'drbdsetup 0 disk /dev/drbdvg/drbd0_lv /dev/drbdvg/drbd0_lv
>     > internal --set-defaults --create-device --on-io-error=pass_on
>     > --no-disk-barrier --no-disk-flushes' terminated with exit code 10
>     > vcs2:~ # fuser -m /dev/drbdvg/drbd0_lv
>     > vcs2:~ # lvdisplay /dev/drbdvg/drbd0_lv
>     >   --- Logical volume ---
>     >   LV Name                /dev/drbdvg/drbd0_lv
>     >   VG Name                drbdvg
>     >   LV UUID                Np92C2-ttuq-yM16-mDf2-5TLE-rn5g-rWrtVq
>     >   LV Write Access        read/write
>     >   LV Status              available
>     >   # open                 1
>     >   LV Size                800.00 MB
>     >   Current LE             200
>     >   Segments               1
>     >   Allocation             inherit
>     >   Read ahead sectors     auto
>     >   - currently set to     1024
>     >   Block device           252:6
>     >
>     >
>     > My solution is:
>     > 1. Restore the default /etc/lvm/lvm.conf and run pvscan/vgchange -ay
>     >    to activate the LV (now reachable through the backend LV), then
>     >    deactivate it again.
>     > 2. Change lvm.conf back to the cluster configuration and run
>     >    pvscan/vgchange -ay again.
>     > 3. Start drbd0 and attach the backend LV.
>     > 4. Run drbdadm verify drbd0 on the primary node.
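
Roughly transcribed as shell, under the assumption that the two lvm.conf
variants are kept as separate files and swapped into place; the file names
are placeholders, and the oravg VG name is taken from the lvs output above:

    cp /etc/lvm/lvm.conf.default /etc/lvm/lvm.conf   # 1. default filter: backend LV is scanned
    pvscan
    vgchange -ay oravg                               #    activate the frontend VG ...
    vgchange -an oravg                               #    ... and deactivate it again
    cp /etc/lvm/lvm.conf.cluster /etc/lvm/lvm.conf   # 2. cluster filter: backend LV ignored
    pvscan
    vgchange -ay
    drbdadm up drbd0                                 # 3. attach the backend LV again
    drbdadm verify drbd0                             # 4. on the primary node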
>     >
>     > It does work.
>     >
>     > Does anyone have a better solution? Thanks.
> 
>     I played with this same configuration and decided that the headache of
>     LVM both under and over DRBD was not justified. I have instead used
>     partition -> drbd -> lvm and life has been very much easier.
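
For comparison, a sketch of that simpler layout; the partition, resource
name and VG/LV names here are purely illustrative:

    # /dev/sdb1 ---> drbd0 ---> PV ---> VG ---> LVs
    drbdadm create-md r0        # r0 backed by the plain partition /dev/sdb1
    drbdadm up r0
    pvcreate /dev/drbd0
    vgcreate datavg /dev/drbd0
    lvcreate -L 10G -n data_lv datavg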
> 
>     --
>     Digimer
>     Papers and Projects: https://alteeve.ca/w/
>     What if the cure for cancer is trapped in the mind of a person without
>     access to education?
> 
> 


-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?


