<div dir="ltr">I find a solution : <div>1. drbdadm sh-ll-dev drbd0 find drbd0's backend lv</div><div>2. map lv to dm-x , ls /sys/block/dm-x/holders to find the frontend lv</div><div>3. dmsetup remove -f $frontlv</div>

2013/10/7 Digimer <lists@alteeve.ca>
<div class="HOEnZb"><div class="h5">On 06/10/13 23:26, Mia Lueng wrote:<br>
> I have built a DRBD cluster. The storage setup is like the following:
>
> backend LV ---> drbd0 ---> pv ---> vg ---> userlv
>
> That means I create a DRBD device on an LV, and then create a volume
> group on that DRBD device again.
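
For anyone reproducing that stack, it can be built roughly like this (a
sketch using the VG/LV names and sizes from the lvs output further down;
the exact commands and options used originally are not shown here):

    # backend LV that DRBD sits on
    lvcreate -L 800M -n drbd0_lv drbdvg

    # create DRBD metadata and bring the resource up on top of it
    drbdadm create-md drbd0
    drbdadm up drbd0

    # frontend PV/VG/LV on the replicated device
    pvcreate /dev/drbd0
    vgcreate oravg /dev/drbd0
    lvcreate -L 1000M -n oralv oravg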
>
> In /etc/lvm/lvm.conf, I added a filter so that pvscan does not probe the
> backend LV. This works fine in the normal situation.
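
Such a filter looks something like the excerpt below. This is illustrative
only; the exact patterns are not shown in the thread and depend on the
device paths on each node.

    # /etc/lvm/lvm.conf (excerpt): let pvscan see /dev/drbd* and the system
    # disks, but never the LVs of the backend VG
    filter = [ "a|^/dev/drbd.*|", "r|^/dev/drbdvg/.*|", "a|.*|" ]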
>
> Now A is primary and B is secondary. When the link to A's storage (FC
> SAN) breaks, the HA cluster detects the error and fails the resources
> over from A to B. But the DRBD resource and the filesystem cannot be
> stopped on A, so A is rebooted (due to the stop-failure handling) and B
> takes over all the resources. When A rejoins the cluster, the DRBD
> resource cannot be started as secondary automatically: the backend LV
> cannot be attached to the DRBD resource.
>
> vcs2:~ # lvs
>   LV       VG     Attr   LSize    Origin Snap%  Move Log Copy%  Convert
>   drbd0_lv drbdvg -wi-ao  800.00M
>   oralv    oravg  -wi-a- 1000.00M
> vcs2:~ # modprobe drbd
> vcs2:~ # drbdadm up drbd0
> 0: Failure: (104) Can not open backing device.
> Command 'drbdsetup 0 disk /dev/drbdvg/drbd0_lv /dev/drbdvg/drbd0_lv
> internal --set-defaults --create-device --on-io-error=pass_on
> --no-disk-barrier --no-disk-flushes' terminated with exit code 10
> vcs2:~ # fuser -m /dev/drbdvg/drbd0_lv
> vcs2:~ # lvdisplay /dev/drbdvg/drbd0_lv
>   --- Logical volume ---
>   LV Name                /dev/drbdvg/drbd0_lv
>   VG Name                drbdvg
>   LV UUID                Np92C2-ttuq-yM16-mDf2-5TLE-rn5g-rWrtVq
>   LV Write Access        read/write
>   LV Status              available
>   # open                 1
>   LV Size                800.00 MB
>   Current LE             200
>   Segments               1
>   Allocation             inherit
>   Read ahead sectors     auto
>   - currently set to     1024
>   Block device           252:6
>
> My solution is:
> 1. restore the default configuration of /etc/lvm/lvm.conf and run
> pvscan/vgchange -ay to activate the LV on drbd0 (which now sits directly
> on the backend LV), then deactivate it again
> 2. change lvm.conf back to the cluster configuration and run
> pvscan/vgchange -ay again
> 3. start drbd0 and attach the backend LV
> 4. run drbdadm verify drbd0 on the primary node
>
> It does work.
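
Spelled out as commands, that recovery sequence is roughly the following.
It is a sketch: the two lvm.conf copies (lvm.conf.default and
lvm.conf.cluster) are hypothetical names for the unfiltered and filtered
configurations, which the thread does not name.

    # 1. use a default (unfiltered) lvm.conf so the frontend VG is found on
    #    the backend LV, activate it once, then deactivate it again
    cp /etc/lvm/lvm.conf.default /etc/lvm/lvm.conf
    pvscan
    vgchange -ay oravg
    vgchange -an oravg

    # 2. put the filtered (cluster) lvm.conf back and rescan
    cp /etc/lvm/lvm.conf.cluster /etc/lvm/lvm.conf
    pvscan

    # 3. bring the DRBD resource up; the backend LV can be attached again
    drbdadm up drbd0

    # 4. on the primary node, verify the replicated data
    drbdadm verify drbd0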
>
> Does anyone have a better solution? Thanks.

I played with this same configuration and decided that the headache of
LVM both under and over DRBD was not justified. I have instead used
partition -> drbd -> lvm, and life has been very much easier.
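
In that layout DRBD sits directly on a partition and LVM only lives on top
of /dev/drbd0. A minimal resource definition for it might look like the
sketch below; hostnames, addresses and the partition are illustrative and
not taken from this thread.

    resource drbd0 {
        on nodeA {
            device    /dev/drbd0;
            disk      /dev/sda3;        # plain partition instead of a backend LV
            address   192.168.1.1:7788;
            meta-disk internal;
        }
        on nodeB {
            device    /dev/drbd0;
            disk      /dev/sda3;
            address   192.168.1.2:7788;
            meta-disk internal;
        }
    }

The VG is then created on the replicated device exactly as before (pvcreate
/dev/drbd0; vgcreate oravg /dev/drbd0), so only the layer below DRBD changes.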
<span class="HOEnZb"><font color="#888888"><br>
--<br>
Digimer<br>
Papers and Projects: <a href="https://alteeve.ca/w/" target="_blank">https://alteeve.ca/w/</a><br>
What if the cure for cancer is trapped in the mind of a person without<br>
access to education?<br>
</font></span></blockquote></div><br></div>