[DRBD-user] drbd Input/output error

Dan Barker dbarker at visioncomm.net
Sun Feb 16 15:20:47 CET 2014

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


LVM grabbed the device before DRBD got to it.

If you have VGs on drbd resources, you need a filter so LVM doesn't grab the raw backing device at boot time. It works before you lay down the VG, because at that point the VG doesn't exist yet. Once you have created logical volumes on the drbd resource, on the next boot LVM claims the backing device before DRBD can get to it.

You need device name filters in the configuration (I don't know the exact lines, I've not done this myself; you show the file, /etc/lvm/lvm.conf, but every filter line in it is commented out) that skip /dev/mdX as PV candidates, so DRBD can claim those devices and bring up the drbd resources.
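
Untested on my end, but going by the commented examples already in your lvm.conf, a filter along these lines should keep LVM off the raw md device so DRBD can claim it (the device name is taken from your vgdisplay output below; adjust to your setup):

    # /etc/lvm/lvm.conf - reject the DRBD backing device, accept everything else
    filter = [ "r|^/dev/md2$|", "a|.*|" ]

    # or, stricter: accept only drbd devices as PV candidates
    # (only safe if no other VG lives on a non-drbd device)
    # filter = [ "a|^/dev/drbd[0-9]+$|", "r|.*|" ]

The filter results are cached on disk (your own lvm.conf comments say so), so run vgscan afterwards, or remove /etc/lvm/cache/.cache, so the stale scan of /dev/md2 doesn't linger. Those same comments suggest that on setups using lvmetad the expressions should go into global_filter instead.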

Your data is probably fine, but on which node?
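
To answer that, check both nodes before doing anything destructive (the resource name below is a placeholder, substitute the one from your drbd configuration):

    cat /proc/drbd              # connection state, roles and disk states at a glance
    drbdadm role <resource>     # Primary/Secondary on this node
    drbdadm dstate <resource>   # UpToDate / Inconsistent / Diskless ...

Whichever node reports an UpToDate disk for drbd1 is the one whose copy you want to keep.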

[root@wirt ~]# cat /etc/lvm/lvm.conf | grep -i filt
    # A filter that tells LVM2 to only use a restricted set of devices.
    # The filter consists of an array of regular expressions.  These
    # Don't have more than one filter line active at once: only one gets used.
    # filter = [ "r|.*|", "a|/dev/drbd[0-9]$|" ]
    # filter = [ "r|/dev/cdrom|" ]
    # filter = [ "a/loop/", "r/.*/" ]
    # filter =[ "a|loop|", "r|/dev/hdc|", "a|/dev/ide|", "r|.*|" ]
    # filter = [ "a|^/dev/hda8$|", "r/.*/" ]
    # Since "filter" is often overridden from command line, it is not suitable
    # for system-wide device filtering (udev rules, lvmetad). To hide devices
    # global_filter. The syntax is the same as for normal "filter"
    # above. Devices that fail the global_filter are not even opened by LVM.
    # global_filter = []
    # The results of the filtering are cached on disk to avoid
    # mlock_filter = [ "locale/locale-archive", "gconv/gconv-modules.cache" ]

Dan

> -----Original Message-----
> From: drbd-user-bounces at lists.linbit.com [mailto:drbd-user-
> bounces at lists.linbit.com] On Behalf Of Piotr Kloc
> Sent: Sunday, February 16, 2014 4:37 AM
> To: Pascal Berton
> Cc: drbd-user at lists.linbit.com
> Subject: Re: [DRBD-user] drbd Input/output error
> 
> > Reading your info, I understand that /dev/md2 is at the same time the
> > backing device of vg1 (pvdisplay recognizes md2 as a valid PV) and of
> > drbd1, which is nonsense.
> > I'd rather expect you to put vg1 on top of drbd1, not md2, or to create
> > two resources, drbd1 and drbd2, that would use vg1-vm1 and vg1-vm2 as
> > their respective backing stores.
> > Drbd resources and PVs have their own metadata structures that
> > potentially overlap in your current configuration, hence your problems,
> > I guess...
> > Could you send us the results of vgdisplay -v vg1 to confirm this?
> 
> 
> I created the volume group on /dev/drbd1;
> I think the command was   vgcreate vg1 /dev/drbd1
> 
> but now I see that vg1 is on /dev/md2, so it's messed up :(
> 
> 
> [root@wirt ~]# vgdisplay -v vg1
>     Using volume group(s) on command line
>     Finding volume group "vg1"
>   /dev/drbd1: read failed after 0 of 4096 at 1892425662464: Input/output
> error
>   /dev/drbd1: read failed after 0 of 4096 at 1892425728000: Input/output
> error
>   /dev/drbd1: read failed after 0 of 4096 at 0: Input/output error
>   /dev/drbd1: read failed after 0 of 4096 at 4096: Input/output error
>     /dev/drbd1: read failed after 0 of 4096 at 0: Input/output error
>   --- Volume group ---
>   VG Name               vg1
>   System ID
>   Format                lvm2
>   Metadata Areas        1
>   Metadata Sequence No  6
>   VG Access             read/write
>   VG Status             resizable
>   MAX LV                0
>   Cur LV                2
>   Open LV               2
>   Max PV                0
>   Cur PV                1
>   Act PV                1
>   VG Size               1.72 TiB
>   PE Size               4.00 MiB
>   Total PE              451189
>   Alloc PE / Size       409600 / 1.56 TiB
>   Free  PE / Size       41589 / 162.46 GiB
>   VG UUID               TIp3Ii-v6u4-E23S-wELl-2PzS-CQ4S-zSaAwS
> 
>   --- Logical volume ---
>   LV Path                /dev/vg1/vm1
>   LV Name                vm1
>   VG Name                vg1
>   LV UUID                cQ47kS-QJHW-rVR8-3PC1-PDyD-r9md-SEwDtW
>   LV Write Access        read/write
>   LV Creation host, time wirt.feb.net.pl, 2013-12-17 21:41:19 +0100
>   LV Status              available
>   # open                 1
>   LV Size                1.17 TiB
>   Current LE             307200
>   Segments               1
>   Allocation             inherit
>   Read ahead sectors     auto
>   - currently set to     512
>   Block device           253:0
> 
>   --- Logical volume ---
>   LV Path                /dev/vg1/vm2
>   LV Name                vm2
>   VG Name                vg1
>   LV UUID                PHNzo5-y4UW-KUBT-9hcG-HsmE-5St4-KpGWK7
>   LV Write Access        read/write
>   LV Creation host, time wirt.feb.net.pl, 2014-02-15 00:09:53 +0100
>   LV Status              available
>   # open                 1
>   LV Size                400.00 GiB
>   Current LE             102400
>   Segments               1
>   Allocation             inherit
>   Read ahead sectors     auto
>   - currently set to     512
>   Block device           253:4
> 
>   --- Physical volumes ---
>   PV Name               /dev/md2
>   PV UUID               5i9EaO-fJKs-e10m-fr0i-0oC2-QaKm-OSel3f
>   PV Status             allocatable
>   Total PE / Free PE    451189 / 41589
> 
> 
> _______________________________________________
> drbd-user mailing list
> drbd-user at lists.linbit.com
> http://lists.linbit.com/mailman/listinfo/drbd-user


