[DRBD-user] Setup question using DRBD, cluster FS & KVM

Florian Haas florian at hastexo.com
Fri Nov 18 08:55:47 CET 2011


Hi Trey,

On 11/18/11 07:22, Trey Dockendorf wrote:
> In preparation to begin testing DRBD I've found my superiors have new
> requirements.  The desired setup is 3 nodes in the cluster, replicating
> a LVM volume between all 3.  The volume will contain QCOW2 images for
> KVM, and to have concurrent access I've been looking at using GFS2.

OK, so if it's 3 nodes that a hypervisor is meant to run on, then that
rules out putting DRBD-based storage and the hypervisors on the same
boxes. (It certainly doesn't rule out the use of DRBD altogether; see
below.)

> The potential complication is all 3 servers are not identical.  Two are
> identical, Dell 2950s.  The other is a new Dell R510.  The 2950s run 6 x
> SATA 7200RPM in RAID 6, and the R510 has its system on RAID1 SAS and 6
> x SAS 10000RPM in RAID 6.  Is it correct that with DRBD, the combination
> of mismatched performance of the disk I/O would be a problem?  How much
> more difficult is a 3 node cluster over 2 node?

Concurrent write access across all three nodes is impossible with DRBD.
What DRBD currently supports is adding a third node in a stacked
configuration, but that is only useful for backup and DR purposes. You
can't really think of the third node as a fully-fledged member of the
virtualization cluster.
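To illustrate what "stacked" means here, a sketch of how such a setup is
typically declared in drbd.conf: a normal two-node resource, plus a
stacked resource layered on top of it that replicates to the third box.
All hostnames, addresses and device names below are made up for the
example, not taken from your setup:

# /etc/drbd.d/r0.res -- hypothetical sketch
resource r0 {
  protocol C;                    # synchronous between the two primary-capable nodes
  on alpha {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on bravo {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}

# Stacked resource: sits on top of r0 and replicates, typically
# asynchronously (protocol A), to the backup/DR node.
resource r0-U {
  protocol A;
  stacked-on-top-of r0 {
    device    /dev/drbd10;
    address   10.0.0.100:7789;   # floating IP that follows the r0 Primary
  }
  on charlie {
    device    /dev/drbd10;
    disk      /dev/sdb1;
    address   10.0.0.3:7789;
    meta-disk internal;
  }
}

Note that only whichever node is Primary for r0 can run the stacked
resource, which is exactly why the third node can't participate as an
equal cluster member.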

> Also, if I'm able to get an iSCSI, what role would DRBD play in the model
> of 3 servers w/ shared storage?  I assume to allow concurrent access to
> the same iSCSI space, that I would have to still use a cluster file
> system (live migration).  Would DRBD then be used to replicate the
> system partitions or with KVM is it only useful to replicate the VM
> store when not using shared storage?

You can put your iSCSI on DRBD-backed storage, and then use that iSCSI
target as centralized storage for your hypervisors. You may not even
need to set up cLVM. That's what this talk explains:


(Free-of-charge registration required, or just use your Google Profile
or WordPress account, or anything else that supports OpenID, to log in.)

You can also take a look at this Tech Guide, which I wrote while working
at Linbit and which is still hosted on their site:


> With or without a shared storage device (most likely without), how would
> failover work for the virtual servers?  Is that where Pacemaker comes
> in?  Basically a way to trigger the still-alive servers to bring up the
> VMs that were running on the failed server.

Yes, watch the talk. :)
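In short: yes, that's exactly what Pacemaker does. Each VM becomes a
cluster resource, typically via the ocf:heartbeat:VirtualDomain agent,
and Pacemaker restarts (or live-migrates) it on a surviving node when
the node it runs on fails. A minimal crm shell sketch; the VM name,
config path and timeouts are made-up examples:

# crm configure sketch -- hypothetical names and values
primitive p_vm_web ocf:heartbeat:VirtualDomain \
    params config="/etc/libvirt/qemu/web.xml" \
           hypervisor="qemu:///system" \
           migration_transport="ssh" \
    meta allow-migrate="true" \
    op start timeout="120s" \
    op stop timeout="120s" \
    op monitor interval="30s" timeout="60s"

With allow-migrate="true" and shared storage, planned moves become live
migrations; on an actual node failure the VM is simply restarted
elsewhere, which is why fencing/STONITH is mandatory in such a cluster.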

