[DRBD-user] Why isn't DRBD recognized as valid LVM PV?

Ralph.Grothe at itdz-berlin.de Ralph.Grothe at itdz-berlin.de
Wed Mar 12 18:26:36 CET 2008


Hi Luciano,

I reordered lvm.conf's filter, putting a reject of md4 first.
(I also dropped md0, which holds /boot, and md2, which is swap
and not on an LV.)

# df /boot
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/md0                101018     30523     65279  32% /boot
 
# swapon -s
Filename                                Type            Size    Used    Priority
/dev/md2                                partition       2530104 0       -1

# cd /etc/lvm

# rcsdiff lvm.conf
===================================================================
RCS file: lvm.conf,v
retrieving revision 1.2
diff -r1.2 lvm.conf
54c54
<     filter = [ "a|^/dev/sd[ab][1-9]?$|", "a|^/dev/md[0-9]$|", "a|^/dev/drbd[0-9]$|", "r|.*|" ]
---
>     filter = [ "r/md4/", "a|^/dev/drbd[0-3]$|", "a|^/dev/md[135]$|", "r|.*|" ]
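(For reference: LVM tries filter patterns in order, and the first one
that matches a device name decides accept or reject. The behavior of
the new filter can be sketched with a small shell function --
filter_check is a hypothetical helper, and grep -E merely approximates
LVM's own regex matching:)

```shell
#!/bin/sh
# Sketch of LVM's first-match filter semantics for:
#   filter = [ "r/md4/", "a|^/dev/drbd[0-3]$|", "a|^/dev/md[135]$|", "r|.*|" ]
# Patterns are tried in order; the first match wins.
filter_check() {
    dev=$1
    echo "$dev" | grep -q  'md4'              && { echo reject; return; }  # "r/md4/"
    echo "$dev" | grep -Eq '^/dev/drbd[0-3]$' && { echo accept; return; }  # "a|^/dev/drbd[0-3]$|"
    echo "$dev" | grep -Eq '^/dev/md[135]$'   && { echo accept; return; }  # "a|^/dev/md[135]$|"
    echo reject                                                            # trailing "r|.*|"
}

filter_check /dev/md4      # DRBD's backing device is rejected first
filter_check /dev/drbd0    # the DRBD device itself is accepted
filter_check /dev/md1      # the other md PVs stay visible
```

(With the old filter, "a|^/dev/md[0-9]$|" also accepted /dev/md4, so
LVM could find the same PV label on the backing device as on
/dev/drbd0 and prefer the former -- presumably what went wrong here.)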


# drbdadm state r0
Primary/Unknown
 
# drbdadm dstate r0
UpToDate/DUnknown
 
# drbdadm sh-dev r0
/dev/drbd0

# pvcreate -ff /dev/drbd0 
  Physical volume "/dev/drbd0" successfully created

# pvscan
  PV /dev/md5     VG vgrh     lvm2 [9.54 GB / 3.91 GB free]
  PV /dev/md3     VG vgdata   lvm2 [27.94 GB / 9.44 GB free]
  PV /dev/md1     VG vg00     lvm2 [9.54 GB / 3.91 GB free]
  PV /dev/drbd0               lvm2 [30.74 GB]
  Total: 4 [77.76 GB] / in use: 3 [47.02 GB] / in no VG: 1 [30.74 GB]



Wow, was this due to the filter change, or to my using -ff with
pvcreate this time?


# pvdisplay -m /dev/drbd0 
  --- NEW Physical volume ---
  PV Name               /dev/drbd0
  VG Name               
  PV Size               30.74 GB
  Allocatable           NO
  PE Size (KByte)       0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               L8I98d-aRlp-zlRe-o83m-kqyL-AbmD-svsL0W
   
# vgcreate -s 8m vgdrbd /dev/drbd0 
  Volume group "vgdrbd" successfully created

# lvcreate -L 2g -n lv_fiddle vgdrbd 
  Logical volume "lv_fiddle" created

# mkfs.ext3 -q /dev/vgdrbd/lv_fiddle 

# mount /dev/vgdrbd/lv_fiddle /mnt/tmp1

# df /mnt/tmp1
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/vgdrbd-lv_fiddle
                       2064208     68680   1890672   4% /mnt/tmp1


Great,
now let me check whether complete deactivation and reactivation
(which will later be handled by the Heartbeat resource scripts)
also works.

# umount /mnt/tmp1

# vgchange -a n vgdrbd
  0 logical volume(s) in volume group "vgdrbd" now active

# drbdadm secondary r0

# drbdadm detach r0

# drbdadm state r0
Unconfigured

# pvs
  /dev/drbd0: open failed: Wrong medium type
  PV         VG     Fmt  Attr PSize  PFree
  /dev/md1   vg00   lvm2 a-    9.54G 3.91G
  /dev/md3   vgdata lvm2 a-   27.94G 9.44G
  /dev/md5   vgrh   lvm2 a-    9.54G 3.91G


Oops. I wonder whether "Wrong medium type" only disappears once
DRBD is completely shut down on this node?

# service drbd stop
Stopping all DRBD resources.
# pvs
  PV         VG     Fmt  Attr PSize  PFree
  /dev/md1   vg00   lvm2 a-    9.54G 3.91G
  /dev/md3   vgdata lvm2 a-   27.94G 9.44G
  /dev/md5   vgrh   lvm2 a-    9.54G 3.91G
# pvscan
  PV /dev/md5   VG vgrh     lvm2 [9.54 GB / 3.91 GB free]
  PV /dev/md3   VG vgdata   lvm2 [27.94 GB / 9.44 GB free]
  PV /dev/md1   VG vg00     lvm2 [9.54 GB / 3.91 GB free]
  Total: 3 [47.02 GB] / in use: 3 [47.02 GB] / in no VG: 0 [0   ]


Yeah.

And back online...


# service drbd start
Starting DRBD resources:    [ d(r0) s(r0) n(r0) ].
..........
***************************************************************
 DRBD's startup script waits for the peer node(s) to appear.
 - In case this node was already a degraded cluster before the
   reboot the timeout is 120 seconds. [degr-wfc-timeout]
 - If the peer was available before the reboot the timeout will
   expire after 0 seconds. [wfc-timeout]
   (These values are for resource 'r0'; 0 sec -> wait forever)
 To abort waiting enter 'yes' [ -- ]:[  10]:[  11]:[  12]:yes[  13]:

# pvscan
  PV /dev/md5   VG vgrh     lvm2 [9.54 GB / 3.91 GB free]
  PV /dev/md3   VG vgdata   lvm2 [27.94 GB / 9.44 GB free]
  PV /dev/md1   VG vg00     lvm2 [9.54 GB / 3.91 GB free]
  Total: 3 [47.02 GB] / in use: 3 [47.02 GB] / in no VG: 0 [0   ]
# drbdadm disconnect r0
# drbdadm attach r0
Failure: (124) Device is attached to a disk (use detach first)
Command 'drbdsetup /dev/drbd0 disk /dev/md4 /dev/md4 internal --set-defaults --create-device --on-io-error=detach' terminated with exit code 10
# drbdadm dstate r0
UpToDate/DUnknown
# pvscan
  PV /dev/md5   VG vgrh     lvm2 [9.54 GB / 3.91 GB free]
  PV /dev/md3   VG vgdata   lvm2 [27.94 GB / 9.44 GB free]
  PV /dev/md1   VG vg00     lvm2 [9.54 GB / 3.91 GB free]
  Total: 3 [47.02 GB] / in use: 3 [47.02 GB] / in no VG: 0 [0   ]
# drbdadm state r0
Secondary/Unknown
# drbdadm primary r0
# pvscan
  PV /dev/md5     VG vgrh     lvm2 [9.54 GB / 3.91 GB free]
  PV /dev/md3     VG vgdata   lvm2 [27.94 GB / 9.44 GB free]
  PV /dev/md1     VG vg00     lvm2 [9.54 GB / 3.91 GB free]
  PV /dev/drbd0   VG vgdrbd   lvm2 [30.73 GB / 28.73 GB free]
  Total: 4 [77.75 GB] / in use: 4 [77.75 GB] / in no VG: 0 [0   ]
# vgchange -a y vgdrbd
  1 logical volume(s) in volume group "vgdrbd" now active
# lvs vgdrbd
  LV        VG     Attr   LSize Origin Snap%  Move Log Copy% 
  lv_fiddle vgdrbd -wi-a- 2.00G                              



Great, it seems to work now.
Such a tiny mistake prevented it all along and cost me hours.

Many thanks for your help!

Regards

Ralph


> -----Original Message-----
> From: Luciano Rocha [mailto:strange at nsk.no-ip.org]
> Sent: Wednesday, March 12, 2008 5:15 PM
> To: Grothe, Ralph
> Cc: drbd-user at lists.linbit.com
> Subject: Re: [DRBD-user] Why isn't DRBD recognized as valid LVM PV?
> 
> 
> On Wed, Mar 12, 2008 at 04:27:55PM +0100, 
> Ralph.Grothe at itdz-berlin.de wrote:
> > Hello DRBD Users,
> <snip>
> > 
> > I first thought to have found the reason in a misconfigured
> > lvm.conf filter.
> > But then the default "match all", which I had commented out,
> > should have matched any drbd[0-9] anyway.
> > 
> > # grep -B3 '^ *filter' /etc/lvm/lvm.conf
> >     # By default we accept every block device:
> >     #filter = [ "a/.*/" ]
> > 
> >     filter = [ "a|^/dev/sd[ab][1-9]?$|", "a|^/dev/md[0-9]$|", "a|^/dev/drbd[0-9]$|", "r|.*|" ]
> 
> Could you try a filter that finds only the drbd devices? And 
> others, if
> you have other PVs, but not /dev/md4 (actually, just inserting
> "r|md4" at the very beginning should be enough).
> 
> Then, pvcreate /dev/drbd...; pvscan; and vgcreate.
> 
> -- 
> lfr
> 0/0
> 


