[DRBD-user] drbd8 and 80+ 1TB mirrors/cluster, can it be done?

Tim Nufire drbd-user_tim at ibink.com
Tue May 27 18:17:53 CEST 2008

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Christian, thanks for your feedback :-) My responses are inline below...

Tim

On May 26, 2008, at 10:51 PM, Christian Balzer wrote:

>> * 80 SATA ports per server from HighPoint RocketRAID PCI cards and
>> built-in bays
>> * 80 1TB HDs per server, mostly in external enclosures
>>
> This sounds like a lot of pain and suffering in the making, you will
> probably be a lot better off with something like this:
> http://www.servercase.com/miva/miva?/Merchant2/merchant.mv+Screen=PROD&Store_Code=SC&Product_Code=RMC8E-XP-SM%28++RSC-8ED-D5R-SA1C1-0-R%29&Category_Code=6U%2F8U+Rackmounts

My business depends on being able to deliver lots of reliable storage
at the lowest possible cost. In terms of capital costs, this means
something like $0.50 per formatted GB. I looked at servers like the
one you mention above but couldn't make the numbers work :-/ Instead,
I'm using enclosures like:

http://www.addonics.com/products/raid_system/rack_overview.asp and
http://www.addonics.com/products/raid_system/mst4.asp

>> I am running Debian "Etch" 4.0r3 i386, drbd v8.0.12, Heartbeat v2.0.7,
>> LVM v2.02.07 and mdadm v2.5.6. I'm using MD to assign stable device
>> names (RAID1 with just 1 disk, /dev/md0, /dev/md1, etc...), drbd8 to
>> mirror the disks 1-1 to the second server, and then LVM to create a few
>> logical volumes on top of the drbd8 mirrors.
>>
> You might want to check out Debian backports for more up to date  
> packages
> where it matters, like heartbeat.

This was suggested on the heartbeat mailing list as well. What is the
most stable/recommended heartbeat release to use?
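
For context, here is roughly how each of the per-disk stacks described
above gets built. This is only a sketch; the device names, resource
name, host names and addresses are made up, and in practice the 80
resources are generated from a template:

  # single-disk RAID1, purely to get a stable /dev/mdN name
  mdadm --create /dev/md0 --level=1 --raid-devices=1 --force /dev/sda

  # one DRBD resource per disk in /etc/drbd.conf, roughly:
  #   resource disk0 {
  #     protocol C;
  #     on node-a { device /dev/drbd0; disk /dev/md0;
  #                 address 10.0.0.1:7700; meta-disk internal; }
  #     on node-b { device /dev/drbd0; disk /dev/md0;
  #                 address 10.0.0.2:7700; meta-disk internal; }
  #   }
  drbdadm create-md disk0
  drbdadm up disk0
  # on the node holding the good copy (drbd 8.0 syntax):
  drbdadm -- --overwrite-data-of-peer primary disk0

  # LVM on top of the DRBD devices
  pvcreate /dev/drbd0 /dev/drbd1
  vgcreate archive_vg /dev/drbd0 /dev/drbd1
  lvcreate -L 500G -n backups archive_vg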

> And you really want to use a resilient approach on the lowest level,  
> in
> your case RAID5 over the whole disks (with one spare at least given  
> that
> with 80 drives per node you are bound to have disk failures frequently
> enough). In fact I'd question the need for having a DRBD mirror of
> archive backups in the first place, but that is your call and money.  
> ^^

Unfortunately, cost is driving most of my decisions, and RAID5 adds
10-20% to the total cost (with, say, 8-disk RAID5 sets, one drive in
eight goes to parity, about 12.5%, before counting hot spares). I'm
using DRBD in part because it both replicates data and provides high
availability for the servers/services. I'll have some spare drives
racked and powered so that when drives go bad I can just re-mirror to
a good drive, leaving the dead device in the rack indefinitely.
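
To make the "leave the dead drive in the rack" plan concrete, the swap
I have in mind looks something like this (resource and device names are
again made up, and this assumes on-io-error is set to detach so DRBD is
already running diskless off the peer when the disk dies):

  # say the dead disk backed resource disk5 on this node
  drbdadm detach disk5     # drop the bad backing device, if DRBD hasn't already
  # point disk5's "disk" line in /etc/drbd.conf at the powered-up spare,
  # e.g. /dev/md80, then:
  drbdadm create-md disk5  # fresh metadata on the spare
  drbdadm attach disk5     # reattach; a full resync pulls data back from the peer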

Has anyone else tried to do something like this? How many drives can  
DRBD handle? How much total storage? If I'm the first then I'm  
guessing drive failures will be the least of my issues :-/

Tim


