[DRBD-user] configuring 2 services on 2 hosts

Digimer linux at alteeve.com
Thu Jan 6 19:43:55 CET 2011


On 01/06/2011 01:13 PM, J. Ryan Earl wrote:
> Reply inline:
> 
> On Thu, Jan 6, 2011 at 7:46 AM, Digimer <linux at alteeve.com
> <mailto:linux at alteeve.com>> wrote:
> 
>     On 01/06/2011 03:27 AM, Felix Frank wrote:
>     >> In any case, be sure to have (at least) RAID 1 on each node
>     backing the
>     >> DRBD devices to help minimize downtime. Drives fail fairly
>     frequently...
>     >> software RAID 1 is an inexpensive route to much better uptime. :)
>     >
>     > If the budget isn't severely restricted, I'd also throw in an actual
>     > RAID controller, software RAID being an unnecessary pain.
>     >
>     > Cheers,
>     > Felix
> 
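
For example, a minimal software RAID 1 to back a DRBD resource might look
like this (device and file names here are only examples):

  # mirror two partitions; the resulting md device becomes DRBD's backing disk
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

  # then, in drbd.conf, point the resource's backing disk at it, e.g.:
  #   disk /dev/md0;
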
>     If I may provide a counter-point arguing in favour of software RAID;
> 
>     With hardware RAID, your array and your data are bound to that controller.
>     Should the controller fail at some point, you will find yourself
>     scrambling trying to find a compatible controller, and you will be down
>     until you do (shy of falling back to recovering from backup). 
> 
> 
> Given that DRBD is mirroring the data to another host, this shouldn't
> matter.
>  
> 
>     I had this
>     happen to me enough that I now won't use hardware RAID unless it's for
>     performance (or similar) reasons that make software RAID unfeasible.
> 
> 
> A battery-backed write cache will offer one of the larger performance
> improvements for DRBD, especially with lots of transactions/FUA-style
> block I/O.  If you're running a SQL database, you want a battery-backed
> write cache on a RAID controller.
>  
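
If the controller cache really is battery- or flash-backed, DRBD can also be
told to skip flushes on that device; a minimal sketch, assuming DRBD
8.3-style drbd.conf (do not do this on a plain, unprotected disk cache):

  disk {
    no-disk-barrier;    # cache is non-volatile, write barriers not needed
    no-disk-flushes;    # skip flushes for data writes
    no-md-flushes;      # skip flushes for DRBD's own metadata
  }
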
> 
> 
>     With software RAID, you have a familiar set of tools (mdadm) that many
>     people can help you with. More importantly though, you can move your
>     array to almost any other machine and get it up and running again with
>     relatively little effort, potentially dramatically reducing your mean
>     time to recovery.
> 
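
For illustration, on a replacement machine the array can usually be picked
up with little more than (hypothetical device layout):

  mdadm --examine --scan     # show which arrays exist in the on-disk superblocks
  mdadm --assemble --scan    # assemble everything that was found
  cat /proc/mdstat           # confirm the arrays are up
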
> 
> Manually rebuilding MD devices is time consuming and error prone.  You
> have to replicate the partition structure manually to the new drive,
> possibly expire the failed drive, and add in each partition manually for
> each MD device with separate mdadm invocations.  Hardware RAID is
> operationally much better when you're working with a team of people.
>  See amber light on drive, yank and replace.  No OS level commands
> required, which, if done incorrectly, could even corrupt the remaining
> data.  If you're the only admin and it's sitting in your closet at home,
> that's one thing; if you're managing many of these things, perhaps in a
> team, and they are distributed geographically across data centers
> thousands of miles away with remote hands and eyes, that's another.
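
To make that concrete, a typical replacement after swapping out a failed
/dev/sda might run along these lines (example device and partition names):

  # copy the partition layout from the surviving disk to the new one
  sfdisk -d /dev/sdb | sfdisk /dev/sda

  # re-add each partition to its array, one mdadm call per md device
  mdadm /dev/md0 --add /dev/sda1
  mdadm /dev/md1 --add /dev/sda2
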
> 
> If you're going to use software RAID, I suggest the first thing you do
> is simulate physical failure and recovery of disks, practice the
> process, script it out.  Verify your block drivers don't crap out
> hot-plugging the drives, all that.
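
A dry run of that test, again with example names, could be as simple as:

  mdadm /dev/md0 --fail /dev/sda1      # simulate a member failure
  mdadm /dev/md0 --remove /dev/sda1    # pull it from the array
  mdadm /dev/md0 --add /dev/sda1       # re-add it and let it resync
  watch cat /proc/mdstat               # follow the rebuild progress
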
> 
> -JR

Regarding the testing: do this *regardless* of the route you take. :)

J.'s arguments /for/ hardware RAID are certainly valid, and deciding to
go with hardware RAID for those reasons is quite reasonable. I think you
now have the arguments for both sides and should be able to decide which
suits you.

As a final counter-point in favour of software RAID: a non-homogeneous
infrastructure, which many shops without large IT budgets end up with, is
another (or perhaps restated) argument, as the array remains swappable
between hosts despite disparate hardware.

Cheers

-- 
Digimer
E-Mail: digimer at alteeve.com
AN!Whitepapers: http://alteeve.com
Node Assassin:  http://nodeassassin.org


