[DRBD-user] drbd8 and 80+ 1TB mirrors/cluster, can it be done?

Tim Nufire drbd-user_tim at ibink.com
Tue May 27 06:51:38 CEST 2008



Hello,

I'm building out a storage farm and want to use drbd8 to mirror 80+
1TB drives between 2 nodes in a cluster. Everything was going well in
my initial testing, but when I added my 4th 1TB drive I hit the vmalloc
"out of memory" errors described here:

http://www.linux-ha.org/DRBD/FAQ#head-c6586035cbdd5cdae726b02406b838ee2fa56eae

I'm running a 32-bit OS, so my first step is certainly to upgrade to
64-bit... Will that be enough to get me to 80+ 1TB mirrors on a single
cluster, or will I hit other scaling issues? From the link above, it
looks like I'll need to dedicate about 2.5GB of RAM to the drbd
bitmaps... Can this be done on a 64-bit OS if I add enough RAM to my
servers? My storage farm will be used to archive backup files which
are written once and rarely accessed, so I don't think drive
performance will be an issue as long as the servers can handle the
overhead of attaching 80+ large disks.
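For what it's worth, the ~2.5GB figure is easy to sanity-check. A sketch of the arithmetic, assuming (per the FAQ linked above) that the DRBD 8 activity bitmap costs roughly 1 bit per 4 KiB of backing storage:

```shell
#!/bin/bash
# Back-of-envelope estimate of DRBD bitmap memory for 80 x 1 TiB devices.
# Assumption: 1 bit of bitmap per 4 KiB block (per the linux-ha FAQ);
# real overhead may vary with the DRBD version and metadata layout.
per_tb_mib=$(( 1024**4 / 4096 / 8 / 1024 / 1024 ))  # bits -> bytes -> MiB per 1 TiB
total_mib=$(( per_tb_mib * 80 ))
echo "bitmap per 1 TiB device: ${per_tb_mib} MiB"
echo "bitmap for 80 devices:   ${total_mib} MiB"
```

That works out to 32 MiB per 1 TiB device, or about 2560 MiB (~2.5 GiB) for 80 of them, which agrees with the FAQ's number and already exceeds the 2 GB of RAM in the servers below.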

My cluster hardware is as follows...

* 2 Dell 2900 Servers with dual quad-core Xeons & 2 GB of RAM
* 80 SATA ports per server from HighPoint RocketRAID PCI cards and  
built-in bays
* 80 1TB HDs per server, mostly in external enclosures

I am running Debian "Etch" 4.0r3 i386, drbd v8.0.12, Heartbeat v2.0.7,
LVM v2.02.07 and mdadm v2.5.6. I'm using MD to assign stable device
names (RAID1 with just 1 disk, /dev/md0, /dev/md1, etc...), drbd8 to
mirror the disks 1-to-1 to the second server, and then LVM to create a
few logical volumes on top of the drbd8 mirrors.
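In case it helps others follow the layering, one disk's worth of this stack would correspond to a drbd.conf resource roughly like the sketch below (the resource name "r0", hostnames, addresses, and port are placeholders, not my actual config; one such stanza, with a unique port and minor number, would be needed per disk):

```
resource r0 {
  protocol C;
  on node-a {
    device    /dev/drbd0;
    disk      /dev/md0;       # single-disk RAID1 used only for a stable name
    address   10.0.0.1:7788;  # unique port per resource
    meta-disk internal;
  }
  on node-b {
    device    /dev/drbd0;
    disk      /dev/md0;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
```

LVM physical volumes then go on /dev/drbd0, /dev/drbd1, etc., so the logical volumes sit on top of the mirrored layer.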

Any insights the folks on this list can provide on how I can scale
this setup will be greatly appreciated and save me days of trial and
error!

Thanks,
Tim


