MD by HB management; was: [DRBD-user] Why isn't DRBD recognized as valid LVM PV?

Ralph.Grothe at itdz-berlin.de
Fri Mar 14 10:31:11 CET 2008



Hi Lars,

thanks for the reply.

Btw, I remember meeting you once at LinuxConf EU 2007,
where we shared a taxi from Cambridge railway station.

> -----Original Message-----
> From: drbd-user-bounces at lists.linbit.com
> [mailto:drbd-user-bounces at lists.linbit.com] On Behalf Of Lars Ellenberg
> Sent: Thursday, March 13, 2008 7:06 PM
> To: drbd-user at lists.linbit.com
> Subject: Re: MD by HB management; was: [DRBD-user] Why isn't
> DRBD recognized as valid LVM PV?
> 
> 
> On Thu, Mar 13, 2008 at 01:15:34PM +0100, 
> Ralph.Grothe at itdz-berlin.de wrote:
> > > I stacked my devices like
> > > 
> > > 0xfd partition => level 1 MD => DRBD => LVM2 PV
> > > 
> > > As I see from the haresources of the old cluster 
> > > the MDs haven't been managed by Heartbeat separately yet.
> > > 
> > > I assume because of the 0xfd part. type that the arrays are
> > > auto-assembled according to the UUIDs of their members
> > > already within initrd on boot up,
> > > including the one which serves as the base of the DRBD.
> > > 
> > > I would like to change this so that assembly and stopping of
> > > this particular MD array is entirely performed by heartbeat.
> 
> > I think what I want (viz. discriminating autoraid assembly)
> > isn't going to work because, of course, the OS /boot MD as
> > well as the MD PV of vg00 need to be autostarted.
> 
> and "the one which serves as the base of the DRBD"
> does not need to be autostarted? why?

Yes, of course it does.
It was just a perhaps far-fetched idea that this MD might more
appropriately be started and stopped by the init script that
manages drbd.
But I agree that this was a silly thought.
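
For reference, the by-hand equivalent of what I had in mind would
have looked something like this (only a sketch; /dev/md3 and the
member partitions are made-up example names):

  # assemble the array that backs the DRBD lower-level device
  mdadm --assemble /dev/md3 /dev/sda5 /dev/sdb5

  # ... bring up the drbd resource on top of it ...

  # stop the array again once the drbd resource has been taken down
  mdadm --stop /dev/md3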

> 
> I don't see why you would want heartbeat
> to manage your md raids in your setup at all.
> 
> that would only be useful (and necessary) if you had that md on
> "shared scsi" disk, which obviously could only be active
> on one node at a time, so it would need to be assembled
> at failover time after stonith succeeded fencing the other node.
> 
> I don't see any use for this in a DRBD setup.
> I think it would even be cumbersome to set up, since you need
> the md active even on the "passive", namely currently Secondary,
> DRBD node.
> 
> am I missing something?

No, you're right.


May I ask you something slightly off-topic, although I'm sure it
is covered in great depth somewhere in the docs: with my brand new
DRBD 8.2.5 installation, could I make the DRBD resource Primary on
both nodes simultaneously and have the concurrent write accesses
from both nodes guarded by some sophisticated locking mechanism,
as is presumably provided by e.g. GFS or OCFS2?
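
From skimming the docs I guess the drbd.conf bits for this would
look roughly like the following (untested on my side; "r0" is just
a placeholder resource name):

  resource r0 {
    startup {
      become-primary-on both;   # promote the resource on both nodes
    }
    net {
      allow-two-primaries;      # permit Primary/Primary operation
      # plus, presumably, sensible after-sb-* split-brain policies
    }
    ...
  }

with a cluster filesystem such as GFS or OCFS2 on top of the
/dev/drbdX device.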

Currently this cluster is HB1, active/standby.

Instead of moving to HB2, which I fear would have to be paid for
with a loss of ease of administration, I would rather have HB1
manage shared storage in the guise of a DRBD+GFS combo, along
with an LVS load balancer.
This shared storage would then be used by a varying number of web
services/apps which I would like to provide in OpenVZ VEs running
on the same cluster nodes, but which wouldn't have to be HA
(i.e. not HB managed).
This may sound pretty awkward, because one usually separates the
LVS cluster from the real servers it forwards to.
And I know that trying to save on hardware costs this way while
trying to maintain HA at the same time is something of a
contradiction in itself.
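
In HB1 terms I picture the haresources line looking roughly like
this (purely a sketch with made-up names, and glossing over the
fact that a dual-primary GFS mount wouldn't really be a failover
resource):

  node1 drbddisk::r0 \
        Filesystem::/dev/drbd0::/srv/share::gfs \
        IPaddr::192.168.10.100/24/eth0 \
        ldirectord::ldirectord.cf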

But anyway, do you think this could work at all,
and would it make any sense?

Cheers

Ralph





