[DRBD-user] Maximum size of DRBD configuration?

Brian Thomas bthomas at wolfgeek.net
Sat Jan 17 01:39:14 CET 2004


The problem you're having is exactly the one I ran into. It's not an
issue with drbd per se, but with the method it uses to hold its
bitmaps for the configured devices.

Searching the archive should turn up the thread on this; the fix
requires a kernel modification to make it work.

Here's a c'n'p of what I posted to the list. It's a solution that I've
had running for over two weeks now, under heavy load, with no
problems. You just need to modify page.h and recompile your kernel.


Essentially, even if you make a change like turning on HIGHMEM, you
cannot vmalloc() more than 128MB of kernel memory, as defined by

system-a# grep "__VMALLOC_RES" /usr/src/linux/include/asm-i386/page.h
#define __VMALLOC_RESERVE       (128 << 20)
#define VMALLOC_RESERVE  ((unsigned long)__VMALLOC_RESERVE)

This turns out to be insufficient: even though my bitmaps only need
about 108MB, that, plus whatever else lives in the vmalloc area, is
apparently too much for the 128MB reserve.
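For a rough sanity check, here is a short Python sketch (mine, not from the original post) estimating the bitmap memory, under the assumption that DRBD 0.6 keeps roughly one dirty bit per 4 KiB block; the per-device size of 1669316 MiB is taken from the /proc/drbd output further down.

```python
# Rough estimate of DRBD bitmap memory vs. the default vmalloc reserve.
# Assumption (not stated in the post): one dirty bit per 4 KiB block.

MiB = 1 << 20

device_mib = 1669316                       # per-device size from /proc/drbd, in MiB
blocks = device_mib * MiB // (4 * 1024)    # number of 4 KiB blocks per device
bitmap_bytes = blocks // 8                 # one bit per block

devices = 2
total = devices * bitmap_bytes
reserve = 128 << 20                        # default __VMALLOC_RESERVE

print(f"bitmap per device: {bitmap_bytes / MiB:.1f} MiB")    # ~50.9 MiB
print(f"total for {devices} devices: {total / MiB:.1f} MiB") # ~101.9 MiB
print(f"default vmalloc reserve: {reserve / MiB:.0f} MiB")   # 128 MiB
```

At roughly 102 MiB the two bitmaps alone are already in the neighborhood of the ~108MB figure above, and the rest of the 128MB vmalloc area has other users, which would explain why the default reserve is too small here.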

I was able to fix it by making this change:

#define __VMALLOC_RESERVE       (512 << 20)

Once I did this, recompiled, and rebooted, it works great:

cat /proc/drbd
version: 0.6.10 (api:64/proto:62)

0: cs:SyncingAll st:Secondary/Primary ns:0 nr:91800 dw:91800 dr:0 pe:0
	[>...................] sync'ed:  0.1% (1669226/1669316)M
	finish: 1846:28:12h speed: 256 (250) K/sec
1: cs:SyncingAll st:Secondary/Primary ns:0 nr:91800 dw:91800 dr:0 pe:0
	[>...................] sync'ed:  0.1% (1669226/1669316)M
	finish: 1780:31:31h speed: 260 (250) K/sec

(Before you say it, yes, I know my syncer is slow. :) )
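As an aside, the "finish" estimate in that output is consistent with the other numbers; a quick back-of-the-envelope check (mine, not from the post) of remaining data divided by sync speed:

```python
# Cross-check the "finish" estimate printed by /proc/drbd for device 0.
remaining_mib = 1669226        # MiB left to sync, from the progress line
speed_kib_s = 256              # current syncer speed in K/sec

seconds = remaining_mib * 1024 / speed_kib_s
hours = seconds / 3600
print(f"estimated finish: {hours:.0f} hours")  # same ballpark as the 1846:28:12 shown
```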



On Fri, Jan 16, 2004 at 06:52:07PM -0500, Trey Palmer wrote:
> We have a pair of large smb/nfs fileservers running stock
> 2.4.22 on Red Hat 9, using heartbeat and DRBD 0.6.9 from the
> CVS tree on 11/21/2003.  Each machine is a dual Xeon with 1GB
> RAM attached to a net total of 4 TB external hardware RAID
> storage.  The cross-connect is a bonded pair of e1000 gigabit
> cards on each machine.
> We found through trial and error that drbd in our setup only seems
> to allow about 2.6 TB of total space to be configured. drbdsetup
> fails to properly initialize any device that would push the total
> configured space over that amount, although the devices created
> up to that point continue to work fine.
> Is this something that is dependent on kernel tuning and the like,
> is it limited by something hardcoded into drbd, or is there perhaps
> some hardware limitation?
> If it's simply that drbd can't handle any more, will the problem
> be fixed by upgrading to 0.7 at some point in the future?
> Thanks very much for any help, and thanks also for all the hard
> work making drbd a great tool.
> --
> Trey Palmer       Unix Systems Administrator     trey at isye.gatech.edu
> Georgia Tech Industrial and Systems Engineering	  404-385-3080
> _______________________________________________
> drbd-user mailing list
> drbd-user at lists.linbit.com
> http://lists.linbit.com/mailman/listinfo/drbd-user
