[DRBD-user] How to colocate multiple DRBD resources in HB2?

Lars Ellenberg lars.ellenberg at linbit.com
Tue Feb 24 13:33:31 CET 2009

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


On Tue, Feb 24, 2009 at 10:05:24AM +0100, Kai Stian Olstad wrote:
> On Mon, Feb 23, 2009 at 2:17 PM, Lars Ellenberg
> <lars.ellenberg at linbit.com> wrote:
> > On Mon, Feb 23, 2009 at 10:50:30AM +0100, Kai Stian Olstad wrote:
> >> Hi,
> >>
> >> I'm setting up a file server cluster with SLES 10, DRBD 8.2.7, LVM2,
> >> Samba 3.2.8 and Heartbeat 2.1.4.
> >> The DRBD resources are master/slave (ocf/heartbeat).
> >>
> >> In this configuration there will be multiple DRBD resources
> >> added to the same volume group in LVM.
> >
> > you mean these are PVs of a VG made from multiple DRBDs?
> >
> > you are aware that we recommend against such a setup,
> > as long as we cannot provide write ordering across multiple DRBDs.
> 
> Yes, they are PVs of one VG made from multiple DRBDs. The reason for this
> is that our SAN doesn't support increasing the size of a disk, so we
> have to add a new one.
> 
> I wasn't aware that you don't recommend this setup. I probably
> overlooked it in the documentation.
> What is the recommended solution in this kind of setup? Is it LVM ->
> DRBD -> LVM?

the problem:
	DRBD ensures logical write ordering within one DRBD.
	DRBD does not (yet) do anything
	about write ordering across multiple DRBDs.

	drbd0 -> pv0
	drbd1 -> pv1

	vgXY created from (pv0, pv1)

	some LV within vgXY, which happens to
	cross the pv0/pv1 boundary.

	loss of connectivity for some reason,
	race condition: one of drbd0/drbd1 may still get
	some writes through while the other already gave up.

	then you do a switchover.

	you may be unlucky enough to not notice immediately,
	but the data in "some LV" mentioned above is now inconsistent.

there may be all sorts of failure scenarios where pv0/pv1 do not in fact
represent the same point in "storage time".
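
for illustration, that stacking in LVM commands (device names and sizes
are made up, just a sketch of the setup in question):

	pvcreate /dev/drbd0                  # drbd0 -> pv0
	pvcreate /dev/drbd1                  # drbd1 -> pv1
	vgcreate vgXY /dev/drbd0 /dev/drbd1
	# with e.g. two 100G PVs, any LV larger than a single PV
	# necessarily crosses the pv0/pv1 boundary:
	lvcreate -n someLV -L 150G vgXY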

my recommendation would be one VG per DRBD.

if you can live with it, it is probably a good idea to have
one DRBD (thus one PV, and one VG) per hardware array.
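
to make that concrete, a minimal sketch (VG names are made up):

	pvcreate /dev/drbd0
	vgcreate vg_array0 /dev/drbd0        # one VG per DRBD / per array
	pvcreate /dev/drbd1
	vgcreate vg_array1 /dev/drbd1
	# every LV now lives entirely within one DRBD,
	# so DRBD's write ordering guarantee covers it completely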

if you just have to increase the size of the VG itself,
and not only the overall available storage,
you may consider doing a linear or striped set
below DRBD.

e.g. create an MD raid0 from multiple hardware arrays,
then put DRBD on top of that.
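
roughly like this; device names, hostnames and addresses are examples
only, adjust to your setup:

	# stripe (or linearly concatenate) the arrays below DRBD
	mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc

	# then use the MD device as the backing disk in drbd.conf
	resource r0 {
	  protocol C;
	  on node-a {
	    device    /dev/drbd0;
	    disk      /dev/md0;
	    address   10.0.0.1:7788;
	    meta-disk internal;
	  }
	  on node-b {
	    device    /dev/drbd0;
	    disk      /dev/md0;
	    address   10.0.0.2:7788;
	    meta-disk internal;
	  }
	}

DRBD then sees a single backing device, so its write ordering guarantee
covers everything in the VG; and with a linear set you should even be
able to extend it later (mdadm --grow on both nodes, then
"drbdadm resize r0", then pvresize).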

btw, if you decide to stay with multiple DRBDs per VG:
as for the constraints, I'd probably not colocate the DRBD resources
with each other directly, but add a "promote (and colocate) before
VG activation" constraint to each of the DRBDs, which should take
care of the colocation anyway.
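
something along these lines, assuming the master/slave resources are
called ms_drbd0 and ms_drbd1, and the VG is activated by an
ocf:heartbeat:LVM resource called vg_r0 (names are made up, and the
exact constraint attribute names may differ between CRM versions, so
double check against the crm DTD shipped with your heartbeat):

	<constraints>
	  <rsc_order id="drbd0_before_vg" from="vg_r0" action="start"
	             type="after" to="ms_drbd0" to_action="promote"/>
	  <rsc_order id="drbd1_before_vg" from="vg_r0" action="start"
	             type="after" to="ms_drbd1" to_action="promote"/>
	  <rsc_colocation id="vg_with_drbd0_master" from="vg_r0"
	                  to="ms_drbd0" to_role="master" score="INFINITY"/>
	  <rsc_colocation id="vg_with_drbd1_master" from="vg_r0"
	                  to="ms_drbd1" to_role="master" score="INFINITY"/>
	</constraints>

the idea being: the VG can only be activated where both DRBDs are
master, so the CRM should end up promoting them on the same node
without an explicit colocation between the DRBD resources themselves.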

-- 
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
__
please don't Cc me, but send to list   --   I'm subscribed


