[DRBD-user] DRBD and (C)LVM

Manuel Prinz manuel.prinz at uni-due.de
Tue Nov 2 23:39:05 CET 2010

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hi J. Ryan,

thanks for your answers! As you may have guessed, I'm quite new to the whole
HA topic, so I'm sorry if my request sounded amateurish; that's exactly what
I'm trying to overcome by asking questions here and learning.

On Tue, Nov 02, 2010 at 02:50:05PM -0500, J. Ryan Earl wrote:
> On Tue, Nov 2, 2010 at 9:15 AM, Manuel Prinz <manuel.prinz at uni-due.de> wrote:
> > A short introduction: I have two RAID arrays which I'd like to join via
> > LVM (as PVs) and replicate them via DRBD.
> 
> 2 RAID arrays *per host* you mean?  How are your RAID arrays configured?

Yes, sorry for being unclear. Each host has two RAID-6 arrays configured.
That's far from optimal, I know; it's a limitation of the controller. (The
only other option I see would be to drop hardware RAID altogether and go with
JBOD plus software RAID. But that's off-topic. I don't mind if you share your
thoughts on it, though.)

> I recommend:
> 
> C. Create a software RAID0 'md' device between your two array controllers.
> Use the md device as your backing storage.  Put LVM on top of the final
> DRBD device.

That sounds pretty clever!
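
Just to make sure I understand the layering, here is a rough sketch of how I'd
set that up (device names, host names, and the resource name r0 are
placeholders of mine; please correct me if I got the order wrong):

    # RAID0 across the two controller arrays (placeholder device names)
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc

    # /etc/drbd.d/r0.res (hypothetical resource, backed by the md device)
    resource r0 {
      on nodeA {
        device    /dev/drbd0;
        disk      /dev/md0;
        address   192.168.1.1:7789;
        meta-disk internal;
      }
      on nodeB {
        device    /dev/drbd0;
        disk      /dev/md0;
        address   192.168.1.2:7789;
        meta-disk internal;
      }
    }

    # bring up the resource, then put LVM on the replicated device
    drbdadm create-md r0
    drbdadm up r0
    pvcreate /dev/drbd0
    vgcreate vg_data /dev/drbd0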

> "CLVM" and "LVM" with locking_type=3 are pretty much the same thing.  For
> locking_type=3 to be used, the clvmd service needs to be running but yea,
> changing the locking type to 3 is what turns LVM into CLVM.
> […]
> Or later modify the VG to be clustered.  The VG must be clustered, which
> means all the RHCS cluster infrastructure must be running including clvmd.

Thanks for clarifying!
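
Just so I have it written down correctly: if I understood you right, the
relevant pieces would look roughly like this (vg_data is a placeholder name,
and this assumes cman and clvmd are already running):

    # /etc/lvm/lvm.conf, global section: switch to cluster-wide locking
    global {
        locking_type = 3
    }

    # either create the VG as clustered right away ...
    vgcreate --clustered y vg_data /dev/drbd0
    # ... or mark an existing VG as clustered later
    vgchange --clustered y vg_data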

> Fencing is required for cman and thus CLVM and GFS2.  Not sure why you think
> there should be no concurrent access issues, there are.

I was confused because STONITH is disabled in a lot of examples, including
the white paper I mentioned (which uses OCFS2). It appears it was a bit naive
of me to expect that people writing about the topic know what they are doing.
Of course you're right: there can be issues below the filesystem layer, so
fencing is always a good idea. At least that's what I took from your
statement. Did I get that right?
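
On the DRBD side, I suppose that means enabling fencing and a fence-peer
handler, roughly like this (the handler scripts below are the ones from the
DRBD/Pacemaker examples; I still have to find out which helper is appropriate
for an RHCS/cman setup):

    # addition to the r0 resource sketched above
    resource r0 {
      disk {
        fencing resource-and-stonith;
      }
      handlers {
        fence-peer          "/usr/lib/drbd/crm-fence-peer.sh";
        after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
      }
    }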

> GFS2 is an enterprise ready, stable, proven, robust clustered filesystem
> with OK performance.  I'd definitely say OCFS2 of 1.2.x and earlier did not
> qualify as that.  There were/are outstanding bugs open for years.  I haven't
> done any work with OCFS2-1.4.x which was released a few months ago.  How
> about you go bleeding edge and let us all know if you're willing to do the
> leg work and/or take the risk. :-)

If going with OCFS2, I was going to use 1.4.4 anyway, as that's what my
distribution of choice provides. Since I have no spare hardware for testing,
I have to decide for one or the other. I'm already leaning towards OCFS2, so
yeah, maybe you'll hear me screaming soon at this very place! ;)

> Now you're in a different land.  I thought you were talking about putting a
> clustered file-system on your DRBD nodes.

I'm not sure if you're trying to tell me that this is off-topic for this
list, or that the idea is insane in a way I do not see. If I have a clustered
FS on the DRBD nodes, someone has to put data there, right? So exporting the
storage via iSCSI seemed to be a (reasonable?) way to accomplish that. If it's
off-topic here, I'd very much appreciate a pointer to a better forum!
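
To be concrete about what I meant: I was thinking of exporting the DRBD device
(or a volume on top of it) via tgt, along these lines (the IQN and the backing
device are placeholders):

    # /etc/tgt/targets.conf (scsi-target-utils; made-up IQN)
    <target iqn.2010-11.de.uni-due:storage.drbd0>
        backing-store /dev/drbd0
    </target>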

Again, thanks for your answers!

Best regards,
Manuel


