[DRBD-user] configuring 2 services on 2 hosts

J. Ryan Earl oss at jryanearl.us
Thu Jan 6 19:50:46 CET 2011


Reply Inline:

On Thu, Jan 6, 2011 at 11:40 AM, Stefan Seifert <nine at detonation.org> wrote:

> On Thursday 06 January 2011 18:24:56 J wrote:
> > Putting in the lvm mainly to do snapshot style backups.  This makes me
> > wonder if it's a doable usecase to add more drives and have more than
> > one drbd partition in a pv group and then resize the lv if needed to
> > increase space (Man that's wild stuff).
> >
> > But anyway, is setting up a pv on top of a drbd a good idea?
> It is certainly a supported configuration, but in your case I'd do it the
> other
> way round: putting two drbd devices on top of lvm. This way you can resize
> without trouble and distribute the space easily.

With DRBD as a PV on top of MD, you can still allocate that PV out
incrementally and resize the individual LVs as needed.  If you put in bigger
disks, you can just add another partition for the new space, create a new
PV, and concatenate it onto the existing VG in linear mode.  The existing VG
is then made up of 2 PVs on the same disk in linear mode.  Space-management
flexibility shouldn't play into this decision, since having LVM instanced
twice in the block device stack doesn't add any new flexibility over a
single instance.
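A rough sketch of that growth path for the MD -> DRBD -> LVM stack (all device, resource, and VG names below are examples I made up, not anything from this thread; run on both nodes where noted, as root):

```shell
# New partitions on each disk become a new MD mirror; a new DRBD
# resource on top of it becomes the new PV for the existing VG.
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3

# After adding a matching "resource r1" stanza to drbd.conf on both nodes:
drbdadm create-md r1
drbdadm up r1
# On the node that should be primary (initial sync may require forcing
# primary; see drbdadm(8) for your DRBD version):
drbdadm primary r1

# Concatenate the new DRBD device onto the existing VG in linear mode
pvcreate /dev/drbd1
vgextend vg_data /dev/drbd1

# Then grow an LV and its filesystem as needed
lvextend -L +50G /dev/vg_data/data
resize2fs /dev/vg_data/data   # use the tool matching your filesystem
```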

> And you can still use it for
> snapshotting even on the secondary node! So in short, I'd do it this way:
> md0: sda1, sdb1
> mount md0 /boot
> md1: sda2, sdb2
> use md1 as pv for vg1
> have swap, / and your application and database volumes in vg1
> Use application and database volumes for drbd and put filesystems on top.

I'm pretty sure I understand how you intend for the DRBD devices to be laid
out above, even though you don't state it.  The key distinction between
snapshotting above or below DRBD is whether the snapshot data itself is
replicated to both nodes.  With DRBD on top of LVs (i.e. an LV as backing
store), the snapshot's copy-on-write volume exists on only 1 node.  I can't
imagine why you would want that outside of some fringe performance cases,
since taking the snapshot under DRBD increases the risk of losing the
snapshot in a node failure.  e.g. an active-passive setup where the passive
node has significantly more spare I/O than the primary, and a local
snapshot on the secondary is used for offline analytics and reporting.  I
guess it really depends on whether you want the copy-on-write data mirrored
or not.
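For the snapshot-below-DRBD case, a minimal sketch of a local-only snapshot of the backing LV on the secondary (vg1/db, the 5G COW size, and the mount point are hypothetical; this assumes external DRBD metadata, so the backing LV contains the filesystem directly):

```shell
# Copy-on-write snapshot of the DRBD backing LV; the COW volume lives
# only on this node and is never replicated by DRBD.
lvcreate --snapshot --size 5G --name db_snap /dev/vg1/db

# Mount it read-only for backup/reporting, then drop it when done
mount -o ro /dev/vg1/db_snap /mnt/db_snap
# ... run backup/analytics against /mnt/db_snap ...
umount /mnt/db_snap
lvremove -f /dev/vg1/db_snap
```

With internal DRBD metadata the backing LV also carries the metadata block, so the snapshot can't simply be mounted like this; that's one more reason the LVM-above-DRBD layout is the less surprising place to snapshot.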
