Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
Lars Ellenberg wrote:
> On Tue, Dec 02, 2008 at 01:09:04PM +0000, Hari Sekhon wrote:
>
>> Hi,
>>
>> I have a server with just around 17TB usable storage which I want to
>> replicate at the block level via drbd.
>>
>> I got the complaint regarding the 16TB max partition size supported, so
>> I reduced the 17TB partition to 16TB and successfully set up the drbd
>> metadata on it.
>>
>> However, now I can only see 8TB of it.
>> I understand the limit is 8TB, having looked through previous mails to
>> this list, but then why did it let me use up a 16TB partition and
>> waste half of it?
>>
>
> drbd used to be able to handle 4 TB at max.
> then some drbd versions could handle 8 TB on 64bit kernels,
> but incorrectly pretended they could support 16 TB.
>
> that was then fixed in more recent drbd, to actually only support 8 TB
> and not try to use more, as otherwise you'd Oops the kernel sooner or later.
>
> and now, even more recent drbd, namely drbd 8.3 (to be released "soon"),
> supports 16 TB (even on 32bit kernels!). it will then probably support
> even more (probably only on 64bit kernels, though) with some of its
> dot-releases.
>
>> This is running on an x86_64 CentOS 5 server with drbd 8.0.13 (I had been
>> using drbd 8.2 but this kept crashing the system with a kernel panic
>> when trying to write to the mounted drbd0 partition with xfs on it.
>> Downgrading to 8.0.13 and then re-creating the metadata, mounting and
>> retrying solved the problem.)
>>
>> Is there any way of getting the thing to use all 16TB of the partition?
>>
>> Otherwise I'll have to destroy it and rework it as 2x8TB, which would
>> be a pain. I'd then be tempted to put LVM on top of those 2 drbds,
>> even though I know Lars recommends against it, because I really could do
>> with the 16TB of contiguous space replicated...
>>
>
> try drbd 8.3, even though it's only a release candidate right now.

Ok, I fetched 8.3 to try it out, but now I've got an error building. I'll post
that separately though, to not hijack my own thread...

-h

--
Hari Sekhon
Always open to interesting opportunities
http://www.linkedin.com/in/harisekhon
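For what it's worth, a rough sketch of the 2x8TB-plus-LVM fallback mentioned
above could look something like the following. The hostnames, backing
partitions, IP addresses, and resource/volume-group names are placeholders,
and the exact drbdadm invocation for the initial sync differs a little between
drbd versions, so treat this as an outline rather than a recipe.

    # /etc/drbd.conf excerpt: two ~8TB resources instead of one 16TB one
    # (hostnames, disks and addresses below are made up for illustration)
    resource r0 {
      protocol C;
      on node1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;          # first ~8TB partition
        address   192.168.1.1:7788;
        meta-disk internal;
      }
      on node2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.1.2:7788;
        meta-disk internal;
      }
    }
    resource r1 {
      protocol C;
      on node1 {
        device    /dev/drbd1;
        disk      /dev/sdc1;          # second ~8TB partition
        address   192.168.1.1:7789;
        meta-disk internal;
      }
      on node2 {
        device    /dev/drbd1;
        disk      /dev/sdc1;
        address   192.168.1.2:7789;
        meta-disk internal;
      }
    }

    # on both nodes: create metadata and bring the resources up
    drbdadm create-md r0
    drbdadm create-md r1
    drbdadm up r0
    drbdadm up r1

    # on the node that should become primary: force the initial sync
    # (drbd 8.0/8.2 style invocation)
    drbdadm -- --overwrite-data-of-peer primary r0
    drbdadm -- --overwrite-data-of-peer primary r1

    # on the primary: join the two drbd devices into one ~16TB volume
    pvcreate /dev/drbd0 /dev/drbd1
    vgcreate vg_replicated /dev/drbd0 /dev/drbd1
    lvcreate -l 100%FREE -n data vg_replicated
    mkfs.xfs /dev/vg_replicated/data

If LVM is layered on top of drbd like this, lvm.conf's filter usually has to
be adjusted so that LVM scans /dev/drbd* and ignores the backing partitions,
otherwise both nodes may see duplicate PVs; that extra care is part of why
stacking LVM on top of drbd tends to be discouraged on this list.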