[DRBD-user] DRBD and (C)LVM

J. Ryan Earl oss at jryanearl.us
Tue Nov 2 20:50:05 CET 2010

On Tue, Nov 2, 2010 at 9:15 AM, Manuel Prinz <manuel.prinz at uni-due.de> wrote:

> A short introduction: I have two RAID arrays which I'd like to join via
> LVM (as PVs) and replicate them via DRBD.

You mean 2 RAID arrays *per host*? How are your RAID arrays configured?

> First, I can think of two solutions to that:
>     A. Create two DRBD resources on top of the arrays, one on each of
>        /dev/sd[bc]. The two DRBD resources would be used as PVs, with
>        the VG and LV(s) created on top of that. The FS would reside on
>        the LV.
>     B. Create one DRBD resource on top of an LV. Both /dev/sd[bc] would
>        be used as PVs. An LV would be created, with a DRBD resource on
>        top. The FS would reside on the DRBD resource.

I recommend:

C. Create a software RAID0 'md' device across your two arrays. Use the md
device as the DRBD backing storage. Put LVM on top of the final DRBD
device.

> I did not find any information about which setup to prefer.

B is particularly bad; there is no reason you should ever do this. In
general, 'md' performance beats LVM performance when it comes to striping
and aggregating block devices. A is going to be a bigger headache to manage
than C, and you may incur some extra CLVM overhead with the 2 shared PVs.
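
Concretely, C would look roughly like this. This is only a sketch: the
hostnames and IP addresses are made up, and I'm assuming the two arrays
show up as /dev/sdb and /dev/sdc as in your mail.

  # on both hosts: stripe the two arrays into one md device
  mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc

  # /etc/drbd.d/r0.res -- one DRBD resource backed by the md device
  resource r0 {
    on alpha {
      device    /dev/drbd0;
      disk      /dev/md0;
      address   192.168.10.1:7788;
      meta-disk internal;
    }
    on bravo {
      device    /dev/drbd0;
      disk      /dev/md0;
      address   192.168.10.2:7788;
      meta-disk internal;
    }
  }

  # then put LVM on top of the replicated device
  pvcreate /dev/drbd0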

> Second, I have some questions regarding Active/Active setups. I
> understand the need of a FS that has support for distributed locking. If
> such a setup runs on top of LVM, would I need CLVM or is LVM with
> locking_type=3 sufficient?

"CLVM" and "LVM" with locking_type=3 are pretty much the same thing.  For
locking_type=3 to be used, the clvmd service needs to be running but yea,
changing the locking type to 3 is what turns LVM into CLVM.
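
For reference, on RHEL/CentOS that boils down to something like this on
every node (service names may differ on other distributions):

  # /etc/lvm/lvm.conf
  global {
      locking_type = 3    # cluster-wide locking via clvmd/DLM
  }

  # the cluster stack has to be up before clvmd
  service cman start
  service clvmd start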

> Do I need to pass --clustered=y to vgcreate?

Or later modify the VG to be clustered. The VG must be clustered, which
means all the RHCS cluster infrastructure must be running, including clvmd.
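
For example (the VG name and PV are just placeholders):

  # create the VG as clustered from the start
  vgcreate --clustered y vg_data /dev/drbd0

  # or mark an existing VG as clustered later
  vgchange --clustered y vg_data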

> As there should be no problem with concurrent access, is STONITH
> required in such a setup? The LinuxTag white paper disables it but does
> not give an explanation. I guess it's because of the FS but am not sure.

Fencing is required for cman, and thus for CLVM and GFS2. Not sure why you
think there should be no concurrent access issues; there are.
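
To illustrate, the per-node fencing stanza in cluster.conf looks roughly
like this; fence_ipmilan is only an example agent, and the names and
addresses are made up. Use whatever fence agent matches your hardware.

  <clusternode name="alpha" nodeid="1">
    <fence>
      <method name="1">
        <device name="ipmi-alpha"/>
      </method>
    </fence>
  </clusternode>

  <fencedevices>
    <fencedevice agent="fence_ipmilan" name="ipmi-alpha"
                 ipaddr="10.0.0.1" login="admin" passwd="secret"/>
  </fencedevices>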

> There's also a lot of FUD on the net regarding GFS2 and OCFS2,
> especially in regard to Pacemaker integration. Is GFS2 really better
> integrated and more reliable?

GFS2 is an enterprise-ready, stable, proven, robust clustered filesystem
with OK performance. I'd definitely say OCFS2 1.2.x and earlier did not
qualify as that; there were/are outstanding bugs open for years. I haven't
done any work with OCFS2 1.4.x, which was released a few months ago. How
about you go bleeding edge and let us all know, if you're willing to do the
leg work and/or take the risk. :-)
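
For completeness, once the clustered LV exists, creating GFS2 on it is a
one-liner. The cluster name, filesystem name, LV path and journal count
below are placeholders; the cluster name must match cluster.conf, and you
want one journal per node that will mount the filesystem:

  mkfs.gfs2 -p lock_dlm -t mycluster:data -j 2 /dev/vg_data/lv_data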

> Last, I wonder what's the best solution to export the storage to other
> nodes. I have bad experiences with NFS, and iSCSI looks like the way to
> go. With a DLM-aware FS it should be OK to access them from several
> nodes. Or is there a better way to export the storage to other nodes?

Now you're in a different land. I thought you were talking about putting a
clustered filesystem on your DRBD nodes.
