Note: "permalinks" may not be as permanent as we would like;
direct links to old sources may well be a few messages off.
On Fri, Nov 11, 2011 at 03:33:33PM -0700, David Mohr wrote:
> Hi,
>
> we have a live server running drbd 8.3.7 in primary/secondary mode.
> It has a 10tb resource configured right now, but the underlying
> storage device is 20tb big. The resource is configured directly on a
> raid partition, but we're using lvm on top of drbd.
>
> Our initial plan was to set up a second drbd resource. But after
> reading the documentation some more, it seems like a better choice
> to keep just one resource and resize it so that drbd can better
> manage the sync traffic. If that means only using 16tb, then that's
> fine.
>
> Now it's 8.3.7, so there is no volume support yet. We tried to take
> one server down and follow the offline grow instructions in the
> manual, but drbdadm complained:
>
> root@s1a:~# drbdadm create-md vm1
> >pvs stderr: /dev/sdb1: Skipping (regex)
> >pvs stderr: Failed to read physical volume "/dev/sdb1"
> >pvs stderr: Unlocking /var/lock/lvm/P_global
> >pvs stderr: _undo_flock /var/lock/lvm/P_global
> >
> >md_offset 15999998881792
> >al_offset 15999998849024
> >bm_offset 15999510564864
> >
> >Found LVM2 physical volume signature
> >15624522036 kB left usable by current configuration
> >Could not determine the size of the actually used data area.
> >
> >Device size would be truncated, which
> >would corrupt data and result in
> >'access beyond end of device' errors.
> >If you want me to do this, you need to zero out the first part
> >of the device (destroy the content).
> >You should be very sure that you mean it.
> >Operation refused.
> >
> >Command 'drbdmeta 0 v08 /dev/sdb1 internal create-md' terminated
> >with exit code 40
> >drbdadm create-md vm1: exited with code 40
>
> I'm assuming drbd sees the lvm that is contained _within_ this
> resource and bails out.
>
> What would be the best course of action to make the additional
> storage space available?
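[Editor's note: the "Skipping (regex)" line in the output above is produced by a device filter in lvm.conf rejecting /dev/sdb1. The exact pattern on this system is not shown in the thread; a hypothetical filter of that shape, which accepts the stacked DRBD device and rejects the raw backing partition, might look like:

```
devices {
    # Scan only DRBD devices; reject the raw backing partition
    # so LVM does not see the PV signature twice.
    filter = [ "a|/dev/drbd.*|", "r|/dev/sdb1|", "r|.*|" ]
}
```

This is the standard setup when LVM runs on top of DRBD, and it is exactly what makes pvs refuse to read sdb1 below.]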
drbdmeta create-md detects the LVM metadata signature, and calls out
to pvs to do the metadata parsing. Your filter in lvm.conf correctly
tells pvs to ignore sdb1, so drbdmeta cannot determine the size, and
plays it safe.

If you invoke it like so:

  LVM_SYSTEM_DIR= drbdadm create-md vm1

pvs will not find your lvm.conf, and will default to its builtin
filter settings, which should allow it to read and parse the LVM
metadata block. drbdmeta should then figure out that the currently
used space is much less than the device size, and that it is ok to
create the metadata.

> We are trying to avoid version upgrades as
> much as possible since this is a production system (yes, I realize
> it was failure on our part to not get everything set up right away).
> I could live with upgrading drbd to 8.3.8,

You should upgrade to 8.3.12, actually. Which, btw, can also handle
up to 1 Petabyte per device, if you are on 64bit and have enough RAM
to store the bitmap.

> but we're on 2.6.34 and 8.3.8 recommends a newer kernel.

Huh? What makes you think so?

-- 
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com
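[Editor's note: the workaround above, together with the usual follow-up for an offline grow, can be sketched as below. The resource name vm1 is from the thread; the device path /dev/drbd0 is an assumption, and the resize steps are the standard procedure from the DRBD and LVM manuals, not something stated in this message:

```shell
# On the node whose metadata is being re-created (currently taken down):
# an empty LVM_SYSTEM_DIR makes pvs ignore the host's lvm.conf filter,
# so drbdmeta can parse the LVM signature and size the used data area.
LVM_SYSTEM_DIR= drbdadm create-md vm1

# Once the backing devices on both nodes are large enough and the
# nodes are connected again, grow the DRBD resource (on the primary):
drbdadm resize vm1

# Finally, grow the LVM physical volume that sits on top of DRBD
# (/dev/drbd0 is an assumed device name):
pvresize /dev/drbd0
```

The order matters: the DRBD resource must be grown before the PV on top of it can be extended.]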