[DRBD-user] DRBD inside KVM virtual machine

Nick Morrison nick at nick.on.net
Fri Oct 21 12:00:50 CEST 2011



My apologies if this is a frequently-asked question, but I hope it can be answered easily.  I'm a bit new to this, so forgive my n00bness.

I have two physical servers, reasonably specced.  I have a 250GB LVM volume spare on each physical server (/dev/foo/storage).  I would like to build a KVM/QEMU virtual machine on each physical server, connect /dev/foo/storage to the virtual machines, and run DRBD inside the two VM guests.
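For reference, the DRBD resource inside the guests might look something like the sketch below. Everything here is an assumption for illustration: the hostnames (nfs-vm-a/b), the replication IPs, and the backing device /dev/vdb (i.e. the LV as it appears inside the guest when attached via virtio).

```
resource r0 {
  protocol C;

  device    /dev/drbd0;
  disk      /dev/vdb;       # the passed-through LV as seen inside the guest (assumed)
  meta-disk internal;

  on nfs-vm-a {             # hypothetical guest hostnames
    address 10.0.0.1:7789;
  }
  on nfs-vm-b {
    address 10.0.0.2:7789;
  }
}
```

The guests replicate over their own network interfaces, so the replication traffic crosses the hosts' physical NICs just as it would with DRBD on bare metal.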

From there, I plan to run heartbeat/pacemaker to provide an HA/failover NFS server to the other VM guests residing on the same physical servers.
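A minimal pacemaker layout for that, in crm shell syntax, might look like the sketch below. The resource names, mount point, filesystem type and service IP are all placeholders I've made up; only the resource agents (ocf:linbit:drbd, ocf:heartbeat:Filesystem, ocf:heartbeat:IPaddr2) are standard.

```
primitive p_drbd ocf:linbit:drbd \
    params drbd_resource=r0 \
    op monitor interval=29s role=Master \
    op monitor interval=31s role=Slave
ms ms_drbd p_drbd \
    meta master-max=1 clone-max=2 notify=true
primitive p_fs ocf:heartbeat:Filesystem \
    params device=/dev/drbd0 directory=/srv/nfs fstype=ext4
primitive p_ip ocf:heartbeat:IPaddr2 \
    params ip=192.168.100.10 cidr_netmask=24
group g_nfs p_fs p_ip
colocation c_nfs inf: g_nfs ms_drbd:Master
order o_nfs inf: ms_drbd:promote g_nfs:start
```

The colocation and order constraints ensure the filesystem and service IP only ever start on whichever guest currently holds the DRBD Primary role.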


I started this project by running DRBD and heartbeat/pacemaker/NFS on the physical machines themselves, NFS-mounting a folder containing the VM guests' hard disk .img files. I ran into problems when switching primary/secondary to move the NFS server: under some circumstances I couldn't unmount /dev/drbd0, because the kernel reported that something still had it locked (even though the NFS server had supposedly been killed). I assume this is a complication of mounting an NFS share on the same server that exports it. So I decided to think about doing the NFS serving from inside a KVM guest instead.
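For what it's worth, when the unmount fails you can ask the kernel what is still holding the device. The transcript below is a sketch (device name, mount point and output will vary); the exportfs step matters because the kernel NFS server can pin a filesystem through its export table even after nfsd is stopped.

```
# fuser -vm /dev/drbd0     # list processes holding the device or its mount
# lsof /dev/drbd0          # alternative view of open file handles
# exportfs -ua             # flush all NFS exports so knfsd releases the fs
# umount /dev/drbd0
# drbdadm secondary r0     # only now can the resource be demoted
```

If fuser and lsof show nothing but the unmount still fails, the kernel NFS export table is the usual suspect.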

I've also toyed with OCFS2 and Gluster, but I thought an active/passive DRBD (+ NFS server) setup would carry less risk of split-brain.

Am I mad?  Should it work?  Will performance suck compared with running DRBD directly on the physical machines?  I expect fairly high CPU usage during DRBD syncs, since QEMU's I/O (even with virtio) will load the CPU, but perhaps that will be minimal, or perhaps I can configure QEMU to let the VM guest talk more directly to the physical host's block device.
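On the "talk very directly" point: the closest libvirt/QEMU gets is passing the LV into the guest as a raw block device with host-side caching disabled. A sketch of the <disk> element (target name and LV path taken from the setup described above; cache/io settings are my assumption for what "direct" should mean here):

```xml
<disk type='block' device='disk'>
  <!-- raw passthrough of the host LV; cache='none' bypasses the host
       page cache so writes go straight to the physical device -->
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/foo/storage'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```

With cache='none' the guest's DRBD sees write completions only after the host has actually issued them, which matters for DRBD's durability assumptions.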

Your thoughts are welcomed!

