[DRBD-user] 3-node active/active/active config?

Jiann-Ming Su sujiannming at gmail.com
Wed Nov 18 21:07:21 CET 2009

On Wed, Nov 18, 2009 at 4:19 AM, Lars Ellenberg
<lars.ellenberg at linbit.com> wrote:
> On Tue, Nov 17, 2009 at 11:30:04PM -0500, Jiann-Ming Su wrote:
>> On Tue, Nov 17, 2009 at 12:50 PM, Lars Ellenberg
>> <lars.ellenberg at linbit.com> wrote:
>> >
>> > No. You did not understand.
>> >
>> > It is not a question of performance.
>> > Or whether a write reached all *nodes*.
>> >
>> > In your setup, it is technically *impossible*
>> > for a write to reach all lower level *disks*.
>> >
>> Can you give a brief explanation of why that is the case?
> I thought I already did?

Gianluca's explanation cleared it up for me.

>> Is the problem the dual path?  What if a single path were used in a
>> stacked configuration, where one DRBD device serves as the backing
>> device for another DRBD resource?
> "Stacked DRBD", three (or four) node setups, are perfecly fine and
> supported.  It is NOT possible to have more than two nodes _active_
> though.  See the User's Guide or contact LINBIT for details.

Ah, okay.  Thanks for clarifying that.
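
For anyone following along, a three-node stacked setup looks roughly
like the sketch below (modeled on the stacked example in the User's
Guide; the hostnames, devices, and addresses are placeholders I made
up).  The lower resource replicates synchronously between the two
local nodes, and the stacked resource replicates the active side out
to the third node:

  resource r0 {
    protocol C;
    device    /dev/drbd0;
    disk      /dev/sda6;
    meta-disk internal;

    on alice {
      address 10.0.0.1:7788;
    }
    on bob {
      address 10.0.0.2:7788;
    }
  }

  # Stacked resource: lives on whichever node is currently
  # primary for r0, and replicates to the remote third node.
  resource r0-U {
    protocol A;

    stacked-on-top-of r0 {
      device  /dev/drbd10;
      address 192.168.42.1:7788;
    }

    on charlie {
      device    /dev/drbd10;
      disk      /dev/sda6;
      address   192.168.42.2:7788;
      meta-disk internal;
    }
  }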

>> > A sure way to data corruption.
>> >
>> > Got me this time?
>> >
>> >  ;)
>> >
>> > So by all means: use one iSCSI on DRBD cluster,
>> > and have any number of ocfs2 clients via iSCSI.
>> > Or double check if NFS can do the trick for you.
>> >
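If we did go the iSCSI-on-DRBD route, I assume the export side would
be something like this IET (iscsitarget) snippet on whichever node is
currently DRBD primary (a sketch only; the IQN and device name are
placeholders):

  # /etc/ietd.conf -- export the DRBD device as an iSCSI LUN
  Target iqn.2009-11.com.example:storage.drbd0
      Lun 0 Path=/dev/drbd0,Type=blockio

The OCFS2 clients would then all log in to that one target, so only
the two DRBD nodes ever touch the backing disks directly.
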
>> We have three geographically independent locations
> Who is "We", and where are those locations?
> How far apart? Network latency? Bandwidth?

Gigabit-attached, with less than 5 ms ping times between the sites.

> Data Volume?

Less than 1GB.

> Approximate number of Files? Directories? Files per Directory?

Roughly 5000-10000 files and directories combined.

> Average and peak _file_ creation/deletion/modification rate?

Over the course of a day, the modification rate ranges from about 500
files/hr up to 10000 files/hr.

> Average _data_ change rate?
> Peak data change rate?
>> that have to share data, but still remain independent.
> And you think you want to drive one of the currently available
> cluster filesystems in some "geographically dispersed" mode.
> Yeah, right.
> Cluster filesystems are latency-critical.

For this application, filesystem performance, and write performance in
particular, is not that critical.  What matters is data replication.
We're much more interested in being able to write and modify files
from any of the three nodes.
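
To put the latency point in numbers for our links: with synchronous
replication (protocol C), a write doesn't complete until the remote
peer acknowledges it, so at roughly 5 ms round-trip a single
synchronous writer tops out around 1000 ms / 5 ms = ~200 writes per
second, no matter how fast the local disks are.  Our peak
modification rate of ~10000 files/hr works out to under 3 files per
second, so that ceiling would be acceptable for us.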

> Even if you got the all-active, fully-meshed replication to work (we at
> LINBIT are working on adding that functionality in some later version of
> DRBD), latency would kill your performance.

Performance is in the eye of the beholder... ;-)

> And whenever you have a network hiccup, you'd have no availability,
> because you'd need to reboot at least one node.

Yeah, that's one of the nice things about the two-node config: it's
relatively resilient to network issues.
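
For the two-node case, we've also been looking at DRBD's automatic
split-brain recovery policies to soften network hiccups; something
along these lines in the resource's net section (a sketch; the right
policies depend on whose data you can afford to discard):

  net {
    # No node was primary during the split: if only one side
    # changed its data, sync from that side.
    after-sb-0pri discard-zero-changes;
    # One node was primary: discard the secondary's changes.
    after-sb-1pri discard-secondary;
    # Both were primary: don't guess, stay disconnected.
    after-sb-2pri disconnect;
  }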

> I'm sure you have an interesting setup.
> But to save you a lot of time experimenting with things that simply
> won't work outside the lab, or possibly not even there, I think you
> really could use some consultancy  ;)

Yeah, that's why I asked here first. :-)  Thanks for all your insights.

Jiann-Ming Su
"I have to decide between two equally frightening options.
 If I wanted to do that, I'd vote." --Duckman
"The system's broke, Hank.  The election baby has peed in
the bath water.  You got to throw 'em both out."  --Dale Gribble
"Those who vote decide nothing.
Those who count the votes decide everything."  --Joseph Stalin
