[DRBD-user] primary/primary drbd + md + lvm

Matthew Ingersoll matth at digitalwest.net
Fri Sep 18 18:11:46 CEST 2009

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


On Sep 17, 2009, at 8:33 PM, Gianluca Cecchi wrote:

> On Thu, Sep 17, 2009 at 11:42 PM, Michael Tokarev <mjt at tls.msk.ru> wrote:
> Gianluca Cecchi wrote:
> []
>
> And what about doing something like this instead:
>
> - sda and sdb on both nodes
> - raid0 on both nodes so that on each node you have one md0 device
> - only one drbd resource based on md0 on both
>
> This is wrong.  If either disk fails, the whole raid0 fails,
> i.e., half the thing fails.
>
> With the reverse approach, i.e., two drbd resources on top of the
> disks and raid0 on top of the drbd resources, if any disk
> fails, only that 1/4 of the whole thing fails.
>
> It's classical raid10.  It's always done like
> disk => mirroring => striping, but never like
> disk => striping => mirroring.
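
For concreteness, here is a minimal sketch of that disk => mirroring =>
striping layering, assuming DRBD 8.3-era config syntax; the hostnames,
addresses, ports, and device names below are placeholders:

  # /etc/drbd.conf excerpt: one resource per physical disk
  resource r0 {
    protocol C;
    net { allow-two-primaries; }   # needed for primary/primary
    on node-a {
      device    /dev/drbd0;
      disk      /dev/sda;
      address   192.168.1.1:7788;
      meta-disk internal;
    }
    on node-b {
      device    /dev/drbd0;
      disk      /dev/sda;
      address   192.168.1.2:7788;
      meta-disk internal;
    }
  }
  # r1 would be identical but map /dev/drbd1 to /dev/sdb on port 7789

  # Stripe across the two mirrored devices: create the md once on one
  # node, then assemble it on the other (the superblock replicates
  # through drbd)
  mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/drbd0 /dev/drbd1
  mdadm --assemble /dev/md0 /dev/drbd0 /dev/drbd1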
>
>
> - use drbd0 device as PV for your VG
> - add clvmd to your cluster layer (you need cman too for clvmd)
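
A minimal sketch of those two steps, assuming cman and clvmd are already
running on both nodes and drbd0 is Primary on both; the VG and LV names
are illustrative:

  # /etc/lvm/lvm.conf on both nodes: let clvmd coordinate LVM metadata
  #   locking_type = 3

  # Run once, on either node:
  pvcreate /dev/drbd0
  vgcreate --clustered y vg_shared /dev/drbd0
  lvcreate -L 10G -n lv_data vg_shared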
>
> I'm doing it but only with one disk per node
>
> With one disk per node it's simple raid1.
>
> /mjt
>
>
>
> My approach is used in real deployments too and is named raid0+1, while
> the one in the initial post is the so-called raid1+0 (aka raid10, as you wrote).
>
> See also http://en.wikipedia.org/wiki/Nested_RAID_levels
>
> BTW: in general, the failures to take care of happen not only at the hw
> level (the hard disk in your comment) but also in sw (in the scenario
> here, possibly in the sw raid layer or the drbd layer, for example).
>
> Gianluca

I'm not so concerned with raid1+0 vs raid0+1 as with the use of striping
on the same volume between two active nodes.  Are there any pros/cons to
md vs lvm striping?  More specifically, are they considered safe in this
setup, or will corruption slowly creep in?
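
Just to make the two alternatives in the question concrete, a rough
sketch of each (device, VG, and LV names are illustrative, and neither
line settles the safety question):

  # md striping: each node runs a raid0 over the two drbd mirrors
  mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/drbd0 /dev/drbd1

  # lvm striping: skip md and stripe the LV across both PVs instead
  pvcreate /dev/drbd0 /dev/drbd1
  vgcreate --clustered y vg_shared /dev/drbd0 /dev/drbd1
  lvcreate -i 2 -I 64 -L 10G -n lv_data vg_shared   # 2 stripes, 64 KiB stripe size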

--
Matth




