[DRBD-user] Any recommendations or cautions on using RAID under DRBD?

Doug Knight dknight at wsi.com
Tue Mar 4 15:10:10 CET 2008

Thanks Gordan,
I re-read my email after your response and realized that high availability
is another big priority (hence our use of Heartbeat). My thought was that
an initial drive failure in the RAID array should be handled within the
current primary system, with DRBD/Heartbeat failover as the next tier
of failure recovery. Which brings up another question: how does DRBD
handle RAID set failures? Obviously, a mirrored drive's failure would be
transparent, but what about a striped set? Is there a way to determine
that a striped set has failed and is rebuilding, and to trigger DRBD to
fail over to the secondary system, where no rebuild is taking place?
Or maybe that's a Linux-HA question?
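
As a rough illustration of the detection side of that question (not anything DRBD or Heartbeat does out of the box): with Linux software RAID, a redundant array's health is visible in /proc/mdstat, where an underscore in the bracketed status (e.g. [U_]) marks a failed member. A monitoring script along these lines could, hypothetically, be wired into a Heartbeat resource check; note that for a pure RAID0 stripe a member failure kills the whole array rather than degrading it, so that case shows up as outright I/O failure instead.

```python
import re

def degraded_arrays(mdstat_text):
    """Return names of md arrays whose /proc/mdstat status shows a
    failed member, e.g. a [U_] instead of [UU]."""
    degraded = []
    current = None
    for line in mdstat_text.splitlines():
        m = re.match(r"^(md\d+)\s*:", line)
        if m:
            # Start of a new array's section.
            current = m.group(1)
        elif current:
            # Status lines carry a bracketed run of U/_ characters.
            status = "".join(re.findall(r"\[([U_]+)\]", line))
            if "_" in status:
                degraded.append(current)
                current = None
    return degraded

# Hypothetical /proc/mdstat excerpt with md1 running degraded:
sample = """\
Personalities : [raid0] [raid1]
md0 : active raid1 sda1[0] sdb1[1]
      104320 blocks [2/2] [UU]
md1 : active raid1 sdc1[0] sdd1[1](F)
      488386496 blocks [2/1] [U_]
"""
print(degraded_arrays(sample))  # ['md1']
```
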

On Tue, 2008-03-04 at 13:44 +0000, drbd at bobich.net wrote:

> On Tue, 4 Mar 2008, Doug Knight wrote:
> > Performance is a first priority, followed by
> > disk space, so I've been looking at RAID 10 and RAID 50 as possible
> > setups. We're planning on using 15K drives, which limits each physical
> > drive to 146GB, which is why I need the striping. The controllers will
> > have battery backed up cache, which from the postgres groups I've been
> > told will give us a significant performance boost.
> [...]
> > The system we're looking at is the Dell PowerEdge 6800, most
> > likely with a non-DRBD mirrored system drive, plus a DRBD replicated
> > RAID 10 or 50 array supporting around 500GB.
> If performance is your 1st priority, then do you really need more than 
> RAID 01? Have each machine with a RAID0 stripe (apart from possibly the 
> system/root partition), and then mirror it using DRBD (RAID1). RAID 51 
> seems a bit OTT if you want performance first, and you'll still get 
> mirroring from DRBD.
> Just make sure that your bus and controller can keep up with 
> the throughput of so many drives. The PCI(X/e/...) bus will only handle so 
> much bandwidth, and you are unlikely to see more than 80% of it even under 
> optimal conditions. If your drives between them can churn out more 
> than that at a sustained rate (or even just in buffer->controller 
> bursts - new drives have big caches!), then you might as well get 
> slower/bigger/cheaper (pick any two ;-) drives.
> Gordan
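
Gordan's RAID0-stripe-plus-DRBD suggestion would look something like the resource fragment below (DRBD 8.x syntax; the node names, md device, and addresses are hypothetical placeholders, not anything from this thread). Each node stripes locally and DRBD provides the mirror layer across the network:

```
resource r0 {
  protocol C;                      # synchronous replication
  on node1 {
    device    /dev/drbd0;
    disk      /dev/md1;            # local RAID0 stripe (hypothetical)
    address   192.168.1.1:7788;
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/md1;
    address   192.168.1.2:7788;
    meta-disk internal;
  }
}
```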
> _______________________________________________
> drbd-user mailing list
> drbd-user at lists.linbit.com
> http://lists.linbit.com/mailman/listinfo/drbd-user
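
Gordan's bus-bandwidth caveat can be sanity-checked with back-of-the-envelope arithmetic. All figures below are illustrative assumptions (PCI-X at 64 bits/133 MHz, ~70 MB/s sustained per 15K drive), not measurements from the hardware discussed:

```python
# Rough bus-saturation check. All figures are assumptions for
# illustration, not measurements.

BUS_MB_S = 8 * 133          # PCI-X: 8-byte-wide bus at 133 MHz ~ 1064 MB/s
USABLE = 0.80 * BUS_MB_S    # ~80% of theoretical under optimal conditions
PER_DRIVE = 70              # assumed sustained MB/s per 15K drive
N_DRIVES = 8                # e.g. 8 x 146 GB drives in a RAID 10

aggregate = N_DRIVES * PER_DRIVE
print(f"usable bus: {USABLE:.0f} MB/s, drives: {aggregate} MB/s")
print("bus-limited" if aggregate > USABLE else "drive-limited")
```

Under these assumed numbers eight drives stay under the usable bus bandwidth, but doubling the spindle count (or counting cache bursts) would tip it the other way, which is Gordan's point.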