[DRBD-user] resize disks/partitions

Lars Ellenberg lars.ellenberg at linbit.com
Mon May 12 20:23:30 CEST 2014

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


On Tue, Apr 29, 2014 at 10:07:06AM +0100, Lee Musgrave wrote:
> hi,
> did try posting this before, but it looks like it didn't make it to the
> list.
> 
> i have two servers running drbd 8.3.11 on ubuntu 12.04
> 
> i have two resources on /dev/sdb  (sdb1 and sdb2)
> and one resource on /dev/sdc  (sdc1)
> 
> the metadata is kept on /dev/sda2
> 
> /dev/drbd0 uses sdb1  metadata is /dev/sda2[0]
> /dev/drbd1 uses sdb2  metadata is /dev/sda2[1]
> /dev/drbd2 uses sdc1  metadata is /dev/sda2[2]
> 
> drbd0 has a jfs filesystem containing iscsi config data
> drbd1 is a blockio iscsitarget containing vm disk images
> drbd2 is a fileio iscsitarget containing an ocfs2 partition
> 
> i need to move all these resources to larger disks,
> 
> if i leave the current resource disks attached, i can attach the new disks
> (currently unpartitioned) to one of the servers as /dev/sdd and /dev/sde.
> on the other server, i can only attach one of the new disks without
> removing one of the current resource disks.
> 
> i've looked at the documentation, and am still unsure of the best way
> forward.

Sorry for the late reply,
too bad no-one else has taken this up.

You intend to replace the lower-level disks with larger ones,
not just resize them in place as in "leave the data where it is,
but extend the capacity at the end".

So you need a full resync anyway (the new disks contain no old data).

Simply take down one node, replace the disks, and re-create the meta data;
if you like, you could switch to LVM, to internal meta data,
or to what we call "flexible external" meta data, or whatever.
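
A minimal sketch of that step, assuming the external meta data stays on
/dev/sda2 and the resources are called r0, r1 and r2 (placeholders, use
whatever names your drbd.conf actually defines):

  # on the node that is down, after partitioning the new, larger disks
  # (e.g. /dev/sdd1, /dev/sdd2, /dev/sde1) and adjusting drbd.conf:
  drbdadm create-md r0
  drbdadm create-md r1
  drbdadm create-md r2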

Then configure these newly initialized DRBD devices
and connect them to the existing other node.
Wait for the full sync.
Switch roles.
Do the same again on the other node.
Once you connect this time, apart from triggering the second full resync,
DRBD will recognize that both lower-level disks have grown in capacity
and extend the effective capacity of the replicated device automatically.
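
Roughly, on the node that got the new disks (resource name r0 is again a
placeholder):

  drbdadm up r0            # attach the new backing device and connect to the peer
  watch cat /proc/drbd     # wait until the sync target shows UpToDate
  # then switch roles: stop whatever uses the device on the old Primary,
  # demote it there, and promote the freshly synced node:
  drbdadm secondary r0     # on the old Primary
  drbdadm primary r0       # on the node with the new disks

After that, repeat the disk replacement and create-md on the other node.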

Then you can grow the contained file systems
on the then Primary with the appropriate file system tools.
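
As a hedged example for the JFS file system on drbd0 (the mount point is a
placeholder); for the two iSCSI targets the additional space is dealt with
on the initiator side instead:

  # JFS supports growing online via a remount on the Primary:
  mount -o remount,resize /mnt/iscsi-config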

> having external metadata, it appears the new size should be recognized
> automatically. i can't do this online, since they are new disks, not just a
> resize (plus i'm not using LVM). doing it offline, it looks like i need to
> do both nodes at the same time, which gives me no way of moving the data
> over.
> 
> can i not replace the disks on the secondary node with the new disks,
> configured with the larger partitions, and let them do a full sync from the
> primary? then disconnect the primary, change the disks on that, and let
> that do another full re-sync from what was the secondary?

Right.

> if not, what is the best way to get this done? downtime will not be an
> issue; this is currently in test, getting it ready for production. i
> cannot, however, afford to lose the data that is already on the drbd
> partitions. although if it can be done without downtime then good, i can
> get the procedure tested and documented in case it needs to be done in
> future when the systems are live.

If you had things on LVM,
you could add a new PV,
do lvextend or pvmove as you see fit,
without losing redundancy during the capacity upgrade.
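
A rough sketch of that approach, assuming a volume group vg0 holding the
DRBD backing LVs (all device and LV names are placeholders):

  pvcreate /dev/sdd1               # prepare the new, larger disk as a PV
  vgextend vg0 /dev/sdd1           # add it to the volume group
  pvmove /dev/sdb1 /dev/sdd1       # migrate extents off the old disk, online
  vgreduce vg0 /dev/sdb1           # the old disk can then be removed
  lvextend -L +200G /dev/vg0/r0    # grow the backing LV
  drbdadm resize r0                # have DRBD pick up the new capacity

The pvmove keeps both replicas intact the whole time, which is the
"without losing redundancy" part.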

-- 
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
__
please don't Cc me, but send to list   --   I'm subscribed


