[DRBD-user] 3-node active/active/active config?

Lars Ellenberg lars.ellenberg at linbit.com
Wed Nov 18 10:19:36 CET 2009

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


On Tue, Nov 17, 2009 at 11:30:04PM -0500, Jiann-Ming Su wrote:
> On Tue, Nov 17, 2009 at 12:50 PM, Lars Ellenberg
> <lars.ellenberg at linbit.com> wrote:
> >
> > No. You did not understand.
> >
> > It is not a question of performance.
> > Or whether a write reached all *nodes*.
> >
> > In your setup, it is technically *impossible*
> > for a write to reach all lower level *disks*.
> >
> 
> Can you give a brief explanation of why that is the case?

I thought I already did?

> > again, your fancy cool and whatever setup won't work.
> > DO NOT DO THIS.
> >
> 
> 
> Is the problem the dual path?  What if a single path was used in some
> stacked configuration where a drbd is used as a backing device for
> another drbd share?

"Stacked DRBD", three (or four) node setups, are perfecly fine and
supported.  It is NOT possible to have more than two nodes _active_
though.  See the User's Guide or contact LINBIT for details.
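
For reference, a minimal sketch of such a stacked resource in
drbd.conf (DRBD 8.3-ish syntax; all hostnames, devices and addresses
below are made up, see the User's Guide for the real thing):

    resource data-lower {
      protocol C;              # synchronous, within the local site
      on alpha {
        device    /dev/drbd0;
        disk      /dev/sda7;
        address   10.0.0.1:7788;
        meta-disk internal;
      }
      on bravo {
        device    /dev/drbd0;
        disk      /dev/sda7;
        address   10.0.0.2:7788;
        meta-disk internal;
      }
    }

    resource data-upper {
      protocol A;              # asynchronous, for the long-haul link
      stacked-on-top-of data-lower {
        device    /dev/drbd10;
        address   192.168.42.1:7789;
      }
      on charlie {             # the third, remote node
        device    /dev/drbd10;
        disk      /dev/sda7;
        address   192.168.42.2:7789;
        meta-disk internal;
      }
    }

The upper resource runs on whichever node of the lower pair is
currently Primary, and replicates from there to the third node.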

> > A sure way to data corruption.
> >
> > Got me this time?
> >
> >  ;)
> >
> > So by all means: use one iSCSI on DRBD cluster,
> > and have any number of ocfs2 clients via iSCSI.
> > Or double check if NFS can do the trick for you.
> >
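
A rough sketch of that export, using e.g. iSCSI Enterprise Target
(the IQN and device name below are made up):

    # /etc/ietd.conf on whichever DRBD node is currently Primary
    Target iqn.2009-11.com.example:storage.drbd0
        Lun 0 Path=/dev/drbd0,Type=blockio

The OCFS2 nodes then all log in to that one target; the DRBD
replication stays hidden below the iSCSI layer.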
> 
> We have three geographically independent locations

Who is "We", and where are those locations?
How far apart? Network latency? Bandwidth?
Data Volume?
Approximate number of Files? Directories? Files per Directory?
Average and peak _file_ creation/deletion/modification rate?
Average _data_ change rate?
Peak data change rate?

> that have to share data, but still remain independent. 

And you think you want to drive one of the currently available
cluster filesystems in some "geographically dispersed" mode.

Yeah, right.

Cluster file systems are latency critical.

Even if you got all-active, fully-meshed replication to work (we at
LINBIT are working on adding that functionality in some later version
of DRBD), latency would kill your performance.
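
To put rough numbers on that (assuming, say, a 20 ms round trip
between your sites): every synchronous, serialized write waits at
least one round trip, so

    1000 ms / 20 ms per write  =  at most ~50 such writes per second

where the same workload over a 0.2 ms LAN could do ~5000.  And a
cluster filesystem's lock traffic pays that same per-message price,
on top of the replication itself.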

And whenever you have a network hiccup, you'd have no availability,
because you'd need to reboot at least one node.

I'm sure you have an interesting setup.
But to save you a lot of time experimenting with things that simply
won't work outside the lab, or possibly not even there, I think you
really could use some consultancy  ;)

-- 
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
__
please don't Cc me, but send to list   --   I'm subscribed


