Note: "permalinks" may not be as permanent as we would like;
direct links to old messages may well be a few messages off.
On Mon, 15 Oct 2007 00:12:32 +0200, Rene Mayrhofer <rene.mayrhofer at gibraltar.at> wrote:

> On Sunday 14 October 2007, Kelly Byrd wrote:
>> > c) Using drbd8 volumes as the Xen backend devices, on top of LVM2.
>> > That is, both cluster nodes hold one (or two, the second one for
>> > swap) LV(s) for each Xen domU and those are synced with one drbd8
>> > primary/secondary volume each.
>>
>> I do this with VMware Server:
>> - Big storage (raid0, raid5, raid10, whatever) as /dev/md0
>> - One volume group, many logical volumes.
>> - One drbd resource (pri/sec) per logical volume; make a filesystem
>>   on this.
>> - One VM per drbd.
>>
>> With this, I avoid pri/pri, but I can still migrate VMs one at a time
>> from node to node.
>
> And this construction is stable for you? How many drbd resources do
> you have running? On which hardware, with which kernel, if you don't
> mind me asking?
>
> I am currently running Debian etch on HP servers (with 3/4 GB RAM and
> raid1 storage backends for the volume groups - one at each node), but
> it's far from stable, and all crashes/hangs/reboots in the last few
> months were caused by drbd in one form or the other.

I just went into production after a month or so of testing various
configurations. Lars suggested the above.

- I'm running 22 drbds, approx 90GB each, with XFS on each.
- Stable? So far. My torture test was to do full syncs while running
  iozone in the VMs. Worked great many, many times.
- OS details: CentOS5 with the 2.6.18-8.1.14.el5.centos.plus kernel.
- DRBD is 8.0.4, also from the centosplus repository.
- I'm running on cheap servers: a couple of SuperMicro 5015 1U boxes,
  each with a single Intel quad-core CPU, 8GB of RAM, and four 500GB
  SATA drives.
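For what it's worth, the one-resource-per-LV layout described above can be
sketched as a drbd.conf fragment like the following. This is only an
illustration, not Kelly's actual config: the node names, IP addresses,
ports, and VG/LV names are all made up, and you'd adjust them (plus sync
rate, handlers, etc.) for your own setup.

```
# /etc/drbd.conf fragment (DRBD 8.x): one resource per logical volume.
# Hostnames, addresses, and volume names below are illustrative only.
resource vm01 {
  protocol C;

  on nodeA {
    device    /dev/drbd0;
    disk      /dev/vg0/vm01;       # LV backing this VM's disk
    address   192.168.10.1:7789;   # replication link
    meta-disk internal;
  }
  on nodeB {
    device    /dev/drbd0;
    disk      /dev/vg0/vm01;
    address   192.168.10.2:7789;
    meta-disk internal;
  }
}
```

Each further VM would get its own stanza (vm02 on /dev/drbd1 and the next
port, and so on), and the filesystem goes on the drbd device on whichever
node is primary, e.g. mkfs.xfs /dev/drbd0. Migrating a VM is then just
demote/promote of that one resource, with no need for primary/primary.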