[DRBD-user] DRBD and LVM Snapshot with 2 nodes configuration

Lars Ellenberg Lars.Ellenberg at linbit.com
Wed Apr 7 08:36:50 CEST 2004

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


/ 2004-04-07 01:22:33 +0200
\ Andreas Semt:
> >for 5):
> >  you can "bring up" (tell the fs to resume) any user immediately
> >  after you took the snapshot. LVM does NOT access any "underlying fs",
> >  the FS is accessing the LV.
> >  creating a snapshot basically sets up a COW mapping for the LV in
> >  question. so each access to the LV after you took the snapshot
> >  will first copy the existing block (if not already copied) to the
> >  snapshot, and then continue (I think they do it this way, and not
> >  the other way 'round, but I'd need to look it up to be sure).
> 
> I believed the original data blocks (or Logical Extents) are only
> copied to the snapshot if these data blocks on the LV are changed?

that is what I am saying :)
the difference is *when* the copy takes place. and that's not when
you set up the COW table (create the snapshot), but "just in time" when
the data blocks change later on.
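
To make the timing concrete, a minimal LVM2 sketch (the VG and LV
names here are made up):

  # creating the snapshot only sets up the COW table; nothing is copied yet
  lvcreate --snapshot --size 2G --name data-snap /dev/vg0/data

  # from now on, the first write to any extent of "data" first copies the
  # old content into the snapshot area, then the write proceeds; so the
  # snapshot keeps presenting the state from creation time:
  mount -o ro /dev/vg0/data-snap /mnt/snap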

> However, the old problem is: how to make a snapshot of a drbd'ed block
> device while all services are running (and get their data from the fs on
> the drbd block device)? Back to LVM2 on top of drbd (with filters):
> 1) Can I solve the problem with this approach (drbd stays online, an
> lvm2 filter hides drbd's backing device from LVM, and I can take a
> *clean* snapshot)?

Yes, now you have the fs directly above LVM, where it expects it to be.
But here, too, you need to make sure that you have enough space in the
VG to create the snapshot.
Depending on your requirements and setup, you might get away with
adding non-DRBD PVs to the VG to hold the snapshot. Or you have two
DRBDs as two PVs, and reserve one of them for the snapshot of the other.
Or only use part of one DRBD for the LV and FS, and use the rest of the
same DRBD as snapshot area. Or ...
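
A sketch of such a setup with LVM2; every device name below is an
example only, and the filter has to match your real devices:

  # /etc/lvm/lvm.conf: scan the drbd device and a local spare disk,
  # but NOT the partition that backs drbd (assumed to be /dev/hdb1 here)
  filter = [ "a|/dev/nb0|", "a|/dev/sdc1|", "r|.*|" ]

  pvcreate /dev/nb0                     # replicated PV
  pvcreate /dev/sdc1                    # local, non-replicated PV
  vgcreate vg0 /dev/nb0 /dev/sdc1
  lvcreate --size 10G --name data vg0 /dev/nb0   # keep the data on drbd!
  # snapshot area on the local PV (the trailing PV restricts allocation):
  lvcreate --snapshot --size 2G --name data-snap /dev/vg0/data /dev/sdc1

Anything on the non-DRBD PV is of course not replicated, so only
throw-away space like the snapshot area belongs there.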

If you have more than one DRBD as PV in the same VG, you have to make
sure that you always fail over all dependent PVs *at the same time!*
And you should make sure that all of them live on some local RAID.
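
In the same pseudo-command style as the procedure quoted below
(make_secondary/make_primary stand in for whatever your heartbeat/drbd
scripts really call):

  node-A# stop_all_services && umount /dev/vg0/data
  node-A# vgchange -a n vg0                         # release the VG
  node-A# make_secondary nb0 && make_secondary nb1  # both PVs together!
  node-B# make_primary nb0 && make_primary nb1      # never only one
  node-B# vgscan && vgchange -a y vg0
  node-B# mount /dev/vg0/data /mnt/data && start_all_services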

> or
> 2.) [the other way]
> 
> you wrote:
> >so what should work is backup/snapshot during downtime:
> >
> >    node-A: Primary; node-B: Secondary
> >node-A# stop_all_services && umount /dev/nb0 && drbd stop
> >    node-A: Secondary; node-B: Secondary
> >either node# lvmcreate {snapshot}
> >either node# drbd start ; mount_that_device && start_all_services
> >then do anything you want to do with the snapshot, and delete the
> >snapshot after you are done with it.  you should be able to access the
> >drbd device as usual while accessing the snapshot.
> >
> 
> That means LVM and drbd can coexist, but for a snapshot drbd has to
> be offline so LVM can contact the underlying fs and create the snapshot.
> But if the fs is on top of drbd and drbd is offline, how can lvm reach
> the fs?

there IS NO underlying FS.
The filesystem is ON TOP of the block device.
And if it is not mounted, there is no point in notifying it
about anything...
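
Spelled out with real LVM2 commands, assuming drbd sits on top of an LV
called "backing" in VG "vg0" (names invented; "drbd stop/start" stands
in for your init script, as above):

  node-A# stop_all_services && umount /dev/nb0
  node-A# drbd stop                  # node-A is Secondary now
  # LVM snapshots the LV underneath drbd; no filesystem is involved:
  either node# lvcreate --snapshot --size 2G --name backing-snap /dev/vg0/backing
  either node# drbd start
  node-A# mount /dev/nb0 /mnt/data && start_all_services
  # take your backup from /dev/vg0/backing-snap, then remove it:
  either node# lvremove -f /dev/vg0/backing-snap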

> After snapshot creation, I can put drbd online again and drbd AND lvm
> work with the same data (the fs sees only the /dev/nb0 device). Right?

yes.

> >the difficult thing is: doing a clean and consistent snapshot while drbd
> >is online.
> 
> Doing a snapshot while drbd is online is not a good solution, so drbd 
> off | snapshot | drbd on gives a *clean* snapshot. I hope so ...

That's it.

	Lars Ellenberg


