[DRBD-user] MD Raid-0 over DRBD or DRBD over MD Raid-0

Greg Freemyer greg.freemyer at gmail.com
Thu Feb 15 16:22:59 CET 2007



On 2/15/07, Graham Wood <gwood at dragonhold.org> wrote:
>
> > My original plan was to create the raid-0 on top of 16 DRBD
> > devices so that in the event of a single physical disk failure, only a single
> > physical disk, and not the entire 16-drive stripe, would have to be rebuilt.
> MD is not (as far as I know) cluster aware.
>
> That means that if you create the MD device on one node over the DRBD
> device, the other node will see that the MD device is already active
> and get a little upset.  You can almost definitely force it to go
> active on both nodes, but the metadata would not (as a result) be
> consistent.  On a RAID0 stripe this may not be too bad, but I really
> wouldn't recommend it.
>
> I'd also be very wary of doing anything with that many drives as a
> RAID0 - but I'm paranoid.
>
> Graham

I jumped over here from the heartbeat list because I wanted to see the
responses.  I think this is a good and important question, and if it is
not currently workable, I would argue that supporting it should be a
design goal for some future release of heartbeat, md, and/or drbd.

re: paranoid:
He is effectively trying to do a RAID10.  That is in theory the most
reliable of the basic RAID levels (better than raid3/4/5 for sure; I
don't know about raid6, raid50, or raid60).  In all cases raid is only
reliable if the system is well monitored and failed disks are rapidly
replaced.  I.e. if you leave a disk in a failed state for a month, you
have a huge window of vulnerability for a second disk crash bringing
down the whole raid.

Specifically, he wants to stripe together 16 mirror pairs.  Each
mirror pair should be extremely reliable if the failed drive is
rapidly detected, replaced, and resync'ed.  The RAID10 setup would be
roughly 1/16th as reliable, since losing any one of the 16 pairs takes
down the whole stripe, but in theory that should still be very good.
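To put rough numbers on that intuition, here is a back-of-the-envelope sketch.  The 1% per-disk failure probability is a made-up illustration value (not from this thread), and disk failures are assumed independent:

```python
# Toy reliability model for a stripe of mirror pairs.
# Assumption (hypothetical): each disk fails independently with
# probability p_disk within the window of vulnerability.

def pair_failure_prob(p_disk):
    """A mirror pair is lost only if both of its disks fail."""
    return p_disk ** 2

def stripe_survival_prob(p_pair, n_pairs=16):
    """A RAID0 stripe of mirror pairs survives only if every pair survives."""
    return (1.0 - p_pair) ** n_pairs

p_pair = pair_failure_prob(0.01)               # 0.0001 per pair
print(round(stripe_survival_prob(p_pair), 4))  # ~0.9984 for the 16-pair stripe
```

For small pair-failure probabilities this is approximately 1 - 16*p_pair, which is the "1/16th as reliable" intuition above.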

re: MD not cluster aware.
I'm assuming the OP wants to have MD itself managed by heartbeat in an
Active / Passive setup.  If so, you only have one instance of MD
running at a time.  MD maintains all of its meta-data on the
underlying disks I believe, so drbd should be replicating the MD
meta-data between the nodes as required.

Heartbeat could manage the failover if you have a full computer failure.

If you have a complete disk failure, drbd should initiate i/o shipping
to the alternate disk, right?  So the potential exists to have a
functioning RAID10 even in the presence of 16 disk failures (i.e.
exactly one failure from each of the mirror pairs).  OTOH, if you lose
both disks from any one pair, the whole raid10 is failed.
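As an illustration of how narrow that best case is, here is a toy count of failure patterns (my own addition, assuming any 16 of the 32 disks are equally likely to be the failed ones):

```python
import math

# 32 disks total, arranged as 16 mirror pairs.  Of all the ways to lose
# 16 of the 32 disks, the array survives only the patterns that take
# exactly one disk from each pair: 2 choices per pair.
total_16_disk_failures = math.comb(32, 16)  # 601080390
survivable_patterns = 2 ** 16               # 65536

print(survivable_patterns, total_16_disk_failures)
```

So while 16 simultaneous failures can in principle be survived, only a tiny fraction of 16-disk failure patterns leave the array functional; the practical win is surviving any single failure per pair.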

What happens (from a drbd perspective) if you only have sector level
failures on a disk?

Greg
-- 
Greg Freemyer
The Norcross Group
Forensics for the 21st Century


