[DRBD-user] 0.6.x to 0.7.0 upgrade questions

Lars Ellenberg Lars.Ellenberg at linbit.com
Wed Aug 4 02:28:58 CEST 2004


/ 2004-08-03 22:30:40 +0200
\ Stefan Andersson:
> Hello,
> I read the
> http://svn.drbd.org/drbd/tags/drbd-0.7.1/upgrade_0.6.x_to_0.7.0.txt
> and have some questions:
> >  In order to do this upgrade you either need to
> >
> >   A) shrink your filesystems on the DRBD devices by at least 128MB, or
> >   B) grow the backing_storage of the DRBD devices by at least 128MB, or
> >   C) have one separate block_device for all meta data
> >
> >  A)
> >  ext2/ext3   resize2fs
> >  reiserfs    resize_reiserfs     
> >  xfs         xfsdump, xfsrestore ; xfs can only grow
> >
> >  B)
> >  lvresize    in case you run DRBD on LVM
> >  (fdisk)     (Only do this if you know what you are doing.)
> >
> >  C)
> >  The size of the meta-data device needs to be at least n*128MB,
> >  where n is the number of DRBD resources you want to use.
> I use drbd on top of a lvm volume. From the description, I assume that
> the meta-data is stored at the end of the block device, as storing it at
> the beginning would both wreck the existing filesystem and be
> overwritten by mkfs.

yes, as long as you set "meta-data internal;".
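
for reference, a hedged sketch of what that looks like in a 0.7-style
drbd.conf (host, device and address are invented for illustration; in the
config file itself the keyword is spelled "meta-disk", if I recall the 0.7
syntax correctly):

```
resource mail {
  protocol C;
  on node1 {
    device    /dev/drbd0;
    disk      /dev/vg0/mail;     # backing LV; meta data lives at its end
    address   10.0.0.1:7788;
    meta-disk internal;          # store meta data inside the backing device
  }
  # ... matching "on node2 { }" section ...
}
```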

> So, if I lvresize my logical volume to hold the metadata and upgrade to
> 0.7.x, what will happen if I would like a bigger filesystem?
> Assume that I have a 50GB filesystem now running 0.6.x. I expand this
> with 128MB to hold the metadata, and upgrade to 0.7.x.
> Later, I increase the size to 70.128GB on both machines, update drbd.conf
> to reflect the new size, resize2fs. This means the old metadata is
> overwritten by the filesystem. Does drbd handle this and recreate the
> metadata in the last 128MB of the resized logical volume.

you don't do the resize2fs on the lower level device (that would void your
agreement with the drbd module), but on the drbd device.
drbd says its size is 50GB, even though you may have increased the
lv below it. so you first need to "drbdadm resize mail [or whatever]",
which will move the meta-data for you. then you do the same on the other
node. then you do the resize2fs on the active node.
and it should just work.
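
put together, the whole online grow looks roughly like this (resource name
"mail", LV "/dev/vg0/mail" and the +20G figure are made-up examples, not
taken from your setup):

```shell
# on BOTH nodes: grow the backing LV first -- do NOT touch the fs yet
lvresize -L +20G /dev/vg0/mail

# on BOTH nodes: tell drbd about the new size; this relocates the
# internal meta data to the new end of the device
drbdadm resize mail

# on the ACTIVE node only: grow the filesystem, on the drbd device,
# never on the backing LV
resize2fs /dev/drbd0
```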

there will be problems if you down the drbd after you did the lvresize
but before the drbdadm resize: drbd won't find its meta data anymore,
and will just create a new set!

[Philipp, this indeed needs to be improved... due in, say, 0.7.4 ? ]

> How is this place determined? Seek to the end of the device minus 128MB?

right. aligned at 4K. to be exact, the sector offset of the meta data area
for "internal" meta data is determined by:
((capacity of lower level device in 512 byte sectors) & ~7UL) - 128*1024*2
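
the same computation done by hand in shell, using a 50GB volume as an
example size (on a real system you would read the sector count from the
device, e.g. with blockdev --getsz):

```shell
# example: a 50GB backing device, expressed in 512-byte sectors
sectors=$((50 * 1024 * 1024 * 2))

# mask off the low 3 bits to round down to a 4K boundary,
# then step back 128MB (= 128*1024*2 sectors of 512 bytes)
offset=$(( (sectors & ~7) - 128 * 1024 * 2 ))

echo "$offset"   # sector where the internal meta data area starts
```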

> Is there any check being done to assure that it is not occupied by a
> filesystem?


if it is currently mounted or otherwise "claimed", we don't touch it.

but if we can successfully claim it, and you say
 drbdsetup /dev/drbd0 disk [whatever] internal -1
(which is what drbdadm attach or up or adjust do)
then there either is a valid drbd meta data block, or we create one.
if that was just a typo, tough luck :(

yes, we could require some "--just-create-it-I-MEAN-WHAT-I-SAY" flag,
in case we don't find valid meta data...  maybe we actually will, and
drbdadm would then ask you pesky questions like whether you are sure
that you really want to ... you get the idea.  but currently we don't.
rm -r * does not ask questions either. so what.

see also the DRBD-0.7.x quick start guide:

	Lars Ellenberg

please use the "List-Reply" function of your email client.
