[DRBD-user] DRBD inside KVM virtual machine

Arnold Krille arnold at arnoldarts.de
Fri Oct 21 20:22:59 CEST 2011



On Friday 21 October 2011 12:00:50 Nick Morrison wrote:
> My apologies if this is a frequently-asked question, but I hope it can be
> answered easily.  I'm a bit new to this, so forgive my n00bness.
> 
> I have two physical servers, reasonably specced.  I have a 250GB LVM volume
> spare on each physical server (/dev/foo/storage).  I would like to build a
> KVM/QEMU virtual machine on each physical server, connect /dev/foo/storage
> to the virtual machines, and run DRBD inside the two VM guests.

Why would you want to do that?

> From there, I plan to run heartbeat/pacemaker to provide a HA/failover NFS
> server to other VM guests residing on the same physical servers.
> 
> Rationale:
> 
> I started this project by doing the DRBD and heartbeat/pacemaker/NFS on the
> physical machines, and nfs-mounting a folder containing the VM guest's
> hard disk .img files, but ran into problems when I tried switching
> primary/secondary and moving the NFS server - under some circumstances, I
> couldn't unmount my /dev/drbd0, because the kernel said something still
> had it locked (even though the NFS server was supposedly killed.)  I am
> assuming this is a complication with mounting an NFS share on the same
> server as it's shared from.  So:  I decided to think about doing the NFS
> serving from inside a KVM, instead.
> 
> I've also toyed with OCFS2 and Gluster; I thought perhaps doing an
> active/passive DRBD (+NFS server) would create less risk of split-brain.

I toyed with ocfs2 for about two days, then went with gfs2, and since then our 
dual-primary system has been providing the storage for the virtual systems 
just fine. That is, we mostly use files as disks for the VMs. If you are okay 
with clustered lvm, you can also do that, but as far as I see it, that is 
harder to extend unless you export the underlying drbd resource via iSCSI/AoE 
to the third and following nodes.
With gfs2 (or ocfs2) the third node can just mount the same dir via nfs.
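
For reference, a dual-primary setup like ours needs the resource to allow two 
primaries and to have sane split-brain policies. A minimal sketch of the 
relevant drbd.conf fragment (the resource name r0 is made up, and the exact 
option syntax depends on your DRBD version, so check it against the docs):

    resource r0 {
      net {
        allow-two-primaries;                  # both nodes may be Primary at once
        after-sb-0pri discard-zero-changes;   # auto-recover trivial split-brain
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;             # dual-primary split-brain: give up, fix by hand
      }
      startup {
        become-primary-on both;               # promote both nodes at boot
      }
    }

With allow-two-primaries you really do need a cluster filesystem (gfs2/ocfs2) 
on top, plus proper fencing, or a split-brain will eat your data.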

> Am I mad?  Should it work?  Will performance suck compared with running
> DRBD directly on the physical machines?

It adds one more layer of complexity and more weak points: for your cluster to 
work, you not only need the hosts to work but also some special virtual 
machines. And of course you add an extra layer of buffering and CPU load.
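
On your original unmount problem: before moving everything into VMs, it is 
worth finding out what actually still holds /dev/drbd0. A quick sketch (the 
device path is from your mail; that fuser and lsof are installed is an 
assumption):

```shell
# Sketch: show what keeps a mounted device busy before demoting/unmounting it.
show_holders() {
    dev="$1"
    fuser -vm "$dev" || true     # processes with files open on the mounted fs
    lsof +f -- "$dev" || true    # open handles on the block device itself
}

# Example: show_holders /dev/drbd0
```

In my experience the usual culprit with NFS is the kernel nfsd itself: the 
export pins the filesystem even after clients are gone, so stop the NFS server 
completely and unexport everything (exportfs -ua) before trying to unmount and 
demote the DRBD device.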

Have fun,

Arnold