[DRBD-user] primary/primary drbd + md + lvm

Gianluca Cecchi gianluca.cecchi at gmail.com
Fri Sep 18 05:33:34 CEST 2009

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


On Thu, Sep 17, 2009 at 11:42 PM, Michael Tokarev <mjt at tls.msk.ru> wrote:

> Gianluca Cecchi wrote:
> []
>
>> And what about doing something like this instead:
>>
>> - sda and sdb on both nodes
>> - raid0 on both nodes so that on each node you have one md0 device
>> - only one drbd resource based on md0 on both nodes
>>
>
> This is wrong.  If either disk fails, the whole raid0 fails,
> i.e., half of the whole thing fails.
>
> With the reverse approach, i.e., two drbd resources on top of
> the disks and raid0 on top of the drbd resources - if any disk
> fails, only that 1/4 of the whole thing fails.
>
> It's the classical raid10.  It's always done like
> disk => mirroring => striping, but never like
> disk => striping => mirroring.
>
>> - use the drbd0 device as PV for your VG
>> - add clvmd to your cluster layer (you need cman too for clvmd)
>>
>> I'm doing it but only with one disk per node
>>
>
> With one disk per node it's just a simple raid1.
>
> /mjt
>



My approach is also used in reality and is named raid0+1, while the one from
the initial post is the so-called raid1+0 (aka raid10, as you wrote).

See also http://en.wikipedia.org/wiki/Nested_RAID_levels
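
To make the two layouts concrete, here is a rough sketch of both (device
names, hostnames, addresses and the resource names r0/r1 are only examples,
not taken from any real setup):

# raid0+1: stripe first on each node, then mirror via DRBD
# on each node:
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb

# one dual-primary DRBD resource backed by md0, roughly:
resource r0 {
  protocol C;
  net { allow-two-primaries; }
  on node1 {
    device    /dev/drbd0;
    disk      /dev/md0;
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/md0;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
# then: drbdadm create-md r0 && drbdadm up r0 (on both nodes)

# raid1+0 as you describe: one DRBD resource per disk pair
# (r0 backed by sda, r1 backed by sdb, defined as above), then
# stripe on top of the DRBD devices on each node:
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/drbd0 /dev/drbd1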

BTW: in general, the failures to take care of happen not only at the hw level
(the hard disk in your comment) but also in sw (in this scenario, possibly in
the sw raid layer or the drbd layer, for example).
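
And for completeness, a rough sketch of the drbd0-as-PV plus clvmd steps
quoted above (the VG/LV names and sizes are only examples):

# lvm.conf on both nodes must use cluster locking for clvmd:
#   locking_type = 3
# with cman and clvmd running, and the resource primary on both nodes:
pvcreate /dev/drbd0
vgcreate -cy vg_drbd /dev/drbd0    # -cy marks the VG as clustered
lvcreate -L 10G -n lv_test vg_drbd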

Gianluca