[DRBD-user] LVM snapshot for off site backup

Lars Ellenberg Lars.Ellenberg at linbit.com
Mon Feb 12 15:28:13 CET 2007


/ 2007-02-11 04:26:09 -0600
\ Johnny Hughes:
> Hello,
> 
> We have been successfully using DRBD for our samba PDC and for our main
> mail server for several years.  We've never really had any issues and
> DRBD has provided failover capability several times.  Thank you for the
> great product.
> 
> In my current setup the drbd partition is a separate drive, and it is on
> a standard linux partition (fdisk output):
> 
> /dev/hdb1               1       19457   156288321   83  Linux
> 
> The DRBD solution is on (2) CentOS-4 machines (I am the lead CentOS-4
> developer).
> 
> I am thinking about how to provide an off site backup of the information
> on the live DRBD device for an emergency recovery capability.  This
> would just be a partition that contains all the data so that the machine
> could be used in the event of a catastrophic failure at one site.  It
> would not need failover or other capabilities and would only be used if
> a site had a problem for an extended period of time.
> 
> One of the items I am backing up is the /var/lib/mysql directory, so I
> was thinking that the best way to do this so that everything is
> consistent is to convert the underlying device from a normal Linux
> partition to LVM2.  I could then just take an LVM snapshot of the
> underlying device and rsync that off site, then remove the snapshot
> while DRBD was running.
> 
> There are other possibilities for the data after converting both the
> devices that DRBD lives on to LVM2.  (like a script stopping heartbeat
> and drbd on the secondary machine, mounting the LVM device and doing a
> snapshot, unmounting the device and turning on drbd then heartbeat and
> letting the secondary catch back up).
> 
> I am just wondering if someone is already doing this and if so, what are
> they doing?

I'd recommend converting it to lvm,
then taking the snapshot locally.

for the snapshot, you need additional space anyway.
the recommended setup would be:

have some local raid1 / raid5 on /dev/mdX
pvcreate .... /dev/mdX
vgcreate .... /dev/mdX
lvcreate .... -n drbd0-md ...
lvcreate .... -n drbd0 ...
and have drbd meta-disk  on /dev/vg*/drbd0-md,
drbd disk on /dev/vg*/drbd0.
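
a concrete sketch of the setup above; the device name, VG name and
LV sizes here are just assumptions, adjust them to your hardware
(and leave free space in the VG for snapshots later):

```shell
# /dev/md0, vg_drbd, and all sizes are placeholders -- adjust to taste.
pvcreate /dev/md0
vgcreate vg_drbd /dev/md0
# small LV for the drbd metadata
lvcreate -L 128M -n drbd0-md vg_drbd
# main LV for the drbd data; do NOT allocate the whole VG,
# snapshots need room too
lvcreate -L 100G -n drbd0 vg_drbd
```

in drbd.conf the resource would then point at these LVs, along the
lines of "disk /dev/vg_drbd/drbd0;" and
"meta-disk /dev/vg_drbd/drbd0-md[0];".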

you can upgrade the current primary first:
upgrade its hardware and disk layout,
change its drbd.conf to reflect those changes,
then "drbdadm attach all",
and check that the device comes up as "Inconsistent",
so it will receive a full sync.

drbdadm syncer all; drbdadm connect all
wait for the sync,
switchover,
upgrade the other box...
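
the per-node sequence above, sketched as commands (this assumes the
drbdadm syntax of that era; "all" operates on every configured
resource):

```shell
drbdadm attach all             # attach the new lower-level devices
grep Inconsistent /proc/drbd   # verify this node will get a full sync
drbdadm syncer all             # load the resync parameters
drbdadm connect all            # connect to the peer; resync starts
# watch /proc/drbd until the resync finishes, then switch over
# (e.g. via heartbeat) and repeat on the other box
```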

then you can script your lvm snapshots and backup.
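
such a script could look roughly like this; the VG/LV names, snapshot
size, mountpoint and remote target are assumptions. since you back up
/var/lib/mysql, you may also want to wrap the lvcreate in a
"FLUSH TABLES WITH READ LOCK" to get a consistent mysql state:

```shell
#!/bin/sh
# hedged sketch: snapshot the drbd backing LV, rsync it off site,
# then remove the snapshot again.
set -e
lvcreate -s -L 5G -n drbd0-snap /dev/vg_drbd/drbd0
mount -o ro /dev/vg_drbd/drbd0-snap /mnt/snap
rsync -a --delete /mnt/snap/ backup@offsite:/backups/drbd0/
umount /mnt/snap
lvremove -f /dev/vg_drbd/drbd0-snap
```

note the snapshot size (-L 5G here) only has to hold the changes
written to the origin LV while the snapshot exists; if it fills up,
the snapshot becomes invalid.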

we have similar setups in production,
running reports against a "snapshot db"
on the (drbd+, 3rd) standby node.

a word about lvm snapshots from my experience:
for reliable scripted snapshot creation and removal you will need a
recent (2.6.16 and later) kernel, therefore recent module-utils and
other direct kernel dependencies, a recent dmsetup and libdevmapper,
recent (and properly configured) udev, and so on...

otherwise it may do unexpected things like freezing your io-subsystem,
oopsing/locking up the kernel during snapshot creation or removal...

for more information, browse the lvm(-dev) mailing list archives.

-- 
: Lars Ellenberg                            Tel +43-1-8178292-0  :
: LINBIT Information Technologies GmbH      Fax +43-1-8178292-82 :
: Vivenotgasse 48, A-1120 Vienna/Europe    http://www.linbit.com :
__
please use the "List-Reply" function of your email client.
