[DRBD-user] configuring 2 services on 2 hosts

J. Ryan Earl oss at jryanearl.us
Thu Jan 6 19:13:31 CET 2011


Reply inline:

On Thu, Jan 6, 2011 at 7:46 AM, Digimer <linux at alteeve.com> wrote:

> On 01/06/2011 03:27 AM, Felix Frank wrote:
> >> In any case, be sure to have (at least) RAID 1 on each node backing the
> >> DRBD devices to help minimize downtime. Drives fail fairly frequently...
> >> software RAID 1 is an inexpensive route to much better uptime. :)
> >
> > If the budget isn't severely restricted, I'd also throw in an actual
> > RAID controller, software RAID being an unnecessary pain.
> >
> > Cheers,
> > Felix
> If I may provide a counter-point arguing in favour of software RAID:
> with hardware RAID, your array and your data are bound to that
> controller. Should the controller fail at some point, you will find
> yourself scrambling to find a compatible controller, and you will be
> down until you do (short of falling back to recovering from backup).

Given that DRBD is mirroring the data to another host, this shouldn't be
an outage: you can fail over to the peer node while you source a
replacement controller.
> I had this
> happen to me enough that I now won't use hardware RAID unless it's for
> performance (or similar) reasons that make software RAID unfeasible.

A battery-backed write cache offers one of the larger performance
improvements for DRBD, especially with lots of transactions/FUA-style
block I/O.  If you're running a SQL database, you want a backed write
cache on a RAID controller.

> With software RAID, you have a familiar set of tools (mdadm) that many
> people can help you with. More importantly though, you can move your
> array to almost any other machine and get it up and running again with
> relatively little effort, potentially dramatically reducing your mean
> time to recovery.

Manually rebuilding MD devices is time-consuming and error-prone.  You have
to replicate the partition structure onto the new drive, possibly expire
the failed drive, and add each partition back with a separate mdadm
invocation for each MD device.  Hardware RAID is operationally much better
when you're working with a team of people: see the amber light on a drive,
yank and replace.  No OS-level commands are required, which, if done
incorrectly, could even corrupt the remaining data.  If you're the only
admin and the box is sitting in your closet at home, that's one thing; if
you're managing many of these machines, perhaps in a team, with the boxes
distributed across data centers thousands of miles away with remote hands
and eyes, that's another.
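For the record, the manual rebuild I'm describing looks roughly like the
sketch below.  Device names (/dev/sda, /dev/sdb, md0, md1) and the
two-partition layout are purely illustrative, and the commands are
destructive, so treat this as an outline of the steps, not a drop-in
script:

```shell
# Failed drive was /dev/sdb; a fresh replacement now sits at /dev/sdb.
# Arrays /dev/md0 and /dev/md1 are assumed for this example.

# 1. Mark the dead partitions failed and remove them from each array:
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
mdadm /dev/md1 --fail /dev/sdb2 --remove /dev/sdb2

# 2. Replicate the partition table from the surviving drive
#    onto the replacement (this is the step people get wrong):
sfdisk -d /dev/sda | sfdisk /dev/sdb

# 3. Add each partition back -- one mdadm invocation per MD device:
mdadm /dev/md0 --add /dev/sdb1
mdadm /dev/md1 --add /dev/sdb2

# 4. Watch the resync progress:
cat /proc/mdstat
```

Every one of those steps is an opportunity for a typo against the wrong
device, which is exactly the point about hardware RAID's yank-and-replace
workflow.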

If you're going to use software RAID, I suggest the first thing you do is
simulate physical failure and recovery of disks: practice the process and
script it out.  Verify your block drivers don't crap out when hot-plugging
the drives, all that.
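One way to rehearse the recovery sequence without risking real disks is a
throwaway RAID 1 array on loop devices.  This is only a sketch -- it
assumes root, the md and loop modules, and the illustrative device name
/dev/md9 -- and it doesn't exercise the hot-plug path, which you still
need to test on real hardware:

```shell
# Build two 100 MB backing files and attach them as loop devices:
truncate -s 100M disk0.img disk1.img
LOOP0=$(losetup -f --show disk0.img)
LOOP1=$(losetup -f --show disk1.img)

# Create a two-disk RAID 1 array from them:
mdadm --create /dev/md9 --level=1 --raid-devices=2 "$LOOP0" "$LOOP1"

# Simulate a drive failure, then practice the recovery steps:
mdadm /dev/md9 --fail "$LOOP1" --remove "$LOOP1"
mdadm /dev/md9 --add "$LOOP1"
cat /proc/mdstat

# Tear it all down afterwards:
mdadm --stop /dev/md9
losetup -d "$LOOP0" "$LOOP1"
rm disk0.img disk1.img
```

Once the sequence is second nature (and scripted), a real failure at
3 a.m. is a lot less exciting.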
