[DRBD-user] Three node cluster?

Michael michael.auckland at gmail.com
Tue Apr 17 20:19:51 CEST 2012

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


On Wed, Apr 18, 2012 at 6:03 AM, Arnold Krille <arnold at arnoldarts.de> wrote:
> On Tuesday 17 April 2012 10:08:33 you wrote:
>> On Tue, Apr 17, 2012 at 3:05 AM, Arnold Krille <arnold at arnoldarts.de> wrote:
>> > On 15.04.2012 22:05, Björn Enroth wrote:
>> >> I am looking for information on how to deal with a KVM three-node
>> >> cluster with DRBD.
>> >> I have a "baby machine" Ubuntu 11.10 pacemaker/drbd cluster with two
>> >> nodes, local disks with drbd set up in between. This is working
>> >> flawlessly.
>> >>
>> >> My challenge now is that I want to add a third node with the same setup.
>> >> How do I handle drbd in this setup? I'd like to have all nodes active,
>> >> to be able to migrate resources, mainly kvm virtual guests, around the
>> >> cluster as I see fit. I'd also like pacemaker to be able to dynamically
>> >> handle the load.
>> >
>> > While drbd is great, this is exactly our intended use-case and also
>> > exactly the reason I am looking at other storage solutions. drbd can't
>> > do more than two nodes.
>> >
>> > You can of course distribute the drbd-resources so that some are n1/n2,
>> > some n2/n3 and some n1/n3, but that becomes an administrator's nightmare.
>> > And once you decide that you need four nodes with the data present on at
>> > least three nodes, you are stuck.
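>> > To illustrate, the pairwise layout means one drbd resource per node
>> > pair, something like this (untested sketch, hostnames, devices and IPs
>> > made up):
>> >
>> >   resource r0 {                  # mirrored between n1 and n2
>> >     protocol C;
>> >     on n1 {
>> >       device    /dev/drbd0;
>> >       disk      /dev/sdb1;
>> >       address   10.0.0.1:7788;
>> >       meta-disk internal;
>> >     }
>> >     on n2 {
>> >       device    /dev/drbd0;
>> >       disk      /dev/sdb1;
>> >       address   10.0.0.2:7788;
>> >       meta-disk internal;
>> >     }
>> >   }
>> >   # plus r1 on n2/n3 and r2 on n1/n3, each with its own device
>> >   # minor and port - three configs to keep consistent by hand.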
>> > You can layer (stack) the drbd-resources, but that is meant more for
>> > semi-distant mirrors and manual fail-over.
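>> > (Stacking, for reference, looks roughly like this in drbd 8.3 - an
>> > untested sketch, resource and host names invented:
>> >
>> >   resource r0-U {
>> >     protocol A;
>> >     stacked-on-top-of r0 {
>> >       device  /dev/drbd10;
>> >       address 192.168.42.1:7789;
>> >     }
>> >     on backup-node {
>> >       device    /dev/drbd10;
>> >       disk      /dev/sdb1;
>> >       address   192.168.42.2:7789;
>> >       meta-disk internal;
>> >     }
>> >   }
>> >
>> > The upper resource r0-U replicates the lower one, r0, to a third box.)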
>> > And if you want live-migrations for your vms with more than two primary
>> > filesystem nodes...
>> >
>> > I am currently looking at glusterfs; there are also moosefs and
>> > ceph(fs), but only the first is considered stable enough that Red Hat
>> > offers commercial support for it. There are also other distributed
>> > cluster filesystems like lustre, but they lack redundancy.
>>
>> FWIW, I agree that GlusterFS is probably the best available option at
>> this time for this use case. I'd recommend Ceph (Qemu-RBD,
>> specifically) if the virtualization cluster were larger, but the
>> GlusterFS option combines excellent ease-of-use with good redundancy
>> and would probably be your best bet for this cluster size.
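>>
>> (For reference, qemu can use an rbd image directly as a guest disk,
>> roughly like this - pool and image names made up, and I'm going from
>> memory:
>>
>>   qemu-img create -f rbd rbd:vmpool/vm1-disk 10G
>>   qemu -drive format=rbd,file=rbd:vmpool/vm1-disk,cache=writeback ...
>>
>> so each guest disk is just an rbd image and no cluster filesystem is
>> needed.)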
>
> Afaik ceph can do replication of the kind "give me three replicas, no
> matter how many backend-nodes are up" - a feature gluster is missing in
> the current stable version (but I've been told that it's coming in 3.3,
> due in the next months).
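>
> (With ceph the replica count is just a per-pool setting, e.g. something
> like
>
>   ceph osd pool set rbd size 3
>
> if I remember the command right.)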
>
> I like the rbd-device interface of ceph, but in my tests with ceph this
> last winter (yes, the whole week! :) ceph died on me twice and took its
> data with it. glusterfs also has problems when the participating nodes
> change ip-addresses, but at least all the data stays accessible...
>
> But all these technologies are fairly new; no one has yet found the
> ultimate solution to the split-brain problem, for example, so your
> mileage may vary...
>
> A sidenote to get back on topic: while my co-admin had his small victory
> some months ago, when it was decided to go back from drbd-hosting to his
> extra iscsi machine, last week that machine had trouble starting after a
> power outage. The only vm running fine was that last (and unimportant) vm
> coming from drbd...
>
> Have fun,
>
> Arnold

I had a similar setup (a 3-node kvm cluster) with drbd between nodes 1-2
and 2-3 (so effectively two storage nodes). That setup was very complex,
and I have now moved everything to gluster (3.3b3), a much easier setup.
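
With gluster 3.3, a replicated volume for vm images is quick to set up,
something like this (hostnames and brick paths are made up):

  gluster volume create vmstore replica 3 \
      node1:/export/brick node2:/export/brick node3:/export/brick
  gluster volume start vmstore

Each node holds a full copy of the data, so any node can mount the volume
and run guests from it.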

To control the cluster, it is easier to use the Linux Cluster Management
Console (http://lcmc.sourceforge.net/), which also handles live migration
of VMs.

To avoid the split-brain problem, use the same solution as with drbd: STONITH.
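
For example, with the external/ipmi stonith plugin under pacemaker
(addresses and credentials made up):

  crm configure primitive st-node1 stonith:external/ipmi \
      params hostname=node1 ipaddr=10.0.0.101 userid=admin \
      passwd=secret interface=lan
  crm configure property stonith-enabled=true

One such primitive per node, with a location constraint so it never runs
on the node it is supposed to shoot.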

--
Michael


