[DRBD-user] stacked primaries-scenarios and drbd proxy size

Lars Ellenberg lars.ellenberg at linbit.com
Thu Sep 27 15:41:30 CEST 2012

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


On Thu, Sep 27, 2012 at 01:08:37PM +0200, Nils Stöckmann wrote:
> Hi Lars,
> 
> I more or less realized on Monday that DRBD is not what I need, and
> unsubscribed from drbd-users today just minutes before I saw your
> answer to my mail. I somehow missed it in a block of unread mail,
> sorry about that.
> 
> I will be giving Coda, a distributed file system, a try.
> 
> If you feel my response could be helpful to others, please forward it to
> the drbd-users list.

I have a few more comments inline.

> Am 21.09.2012 10:16, schrieb Lars Ellenberg:
> >> To accomplish this,  I had the idea to build this:
> >>
> >>
> >> MAIN SITE            ||          Small Office Site
> >> A           B                    C
> >> |           |                    |
> >> RAID        RAID                 RAID
> >> |
> >> DRBD1-Gbit--DRBD1       
> >> |             | 
> >> |           DRBD2------VPN-------DRBD2
> >> |             |                    |
> >> LVM          LVM                  LVM
> >>
> >> Nodes A and B shall be used for load balancing and shall be able to
> >> dynamically switch tasks and active services.
> > What exactly do you want to load-balance,
> > and why do you think you need to load-balance it?

> Actually, no single service needs to be load-balanced. However, a lot of
> services will be running on the machines, and having them all run on one
> would reduce its responsiveness to an extent that is noticeable to
> the users. Not bad, but still noticeable.
> The resulting configuration is more like high availability.

So you don't need cluster file systems or multi-primary at all.
At least not for "load balancing".

You can have some DRBD resources Primary on A, some on B, and if you
choose to, even a few on C, and move them around as needed.

By default, you would distribute the services
(e.g. via pacemaker node-preference location constraints).
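
A minimal sketch of such preferences in crmsh (resource, group and
node names are made up; adjust scores to taste):

    # keep the web stack on A and the database on B by default;
    # pacemaker may still move either one when a node fails
    crm configure location prefer-web-on-A grp_web 100: nodeA
    crm configure location prefer-db-on-B  grp_db  100: nodeB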

> >> Is this actually possible? The "three nodes" DRBD manual page doesn't
> >> explicitly forbid multiple primaries, however it doesn't explicitly say
> >> it's possible, either.
> >>
> >> I have the idea to create several gfs and a few ocfs volumes on lvm.
> > Forget it.
> >
> > First, do not mix ocfs2 and gfs2 on the same system.
> >
> > Second, don't use cluster file systems where you don't need them.
> > No, you don't need them.
> Why?
> > Then, latency over your VPN would kill you, or rather the performance
> > and responsiveness of any clustered file system.
> That's what came to my mind on Monday. Dual-primary needs protocol C,
> protocol C means synchronous writes, and synchronous means "you have to
> wait until the VPN has transferred everything".

Fortunately, if I understand correctly, you don't even need your data
everywhere at the same time, but only on service migration?
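
For reference, the usual single-primary three-node setup stacks a
second resource on top of the local one and runs the WAN leg with
protocol A, so VPN latency does not stall local writes. Roughly like
this (DRBD 8.x syntax; host names, disks and addresses are made up):

    resource data {
      protocol C;                # synchronous on the fast local link
      on alpha {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.1:7788;
        meta-disk internal;
      }
      on bravo {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7788;
        meta-disk internal;
      }
    }

    resource data-U {
      protocol A;                # asynchronous over the VPN
      stacked-on-top-of data {
        device    /dev/drbd10;
        # usually a floating IP that follows the lower-level Primary
        address   192.168.42.1:7789;
      }
      on charlie {
        device    /dev/drbd10;
        disk      /dev/sdb1;
        address   192.168.42.2:7789;
        meta-disk internal;
      }
    }

Note that C stays Secondary here; DRBD Proxy would merely buffer the
protocol A stream so the VPN can lag without blocking anything.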

> > Also, you'd need to reliably hard-reset (stonith) at least one of the
> > nodes for each and every connectivity hiccup.
> >
> > Furthermore, it just does not work.
> > You envision
> >   A -- B-upper
> >        B-lower ---------- C
> >
> > Writes done on C would reach B-lower,
> > but A would never know about them.

> Thanks for stating that explicitly. I wasn't sure about this, but
> assumed ANY change on B would lead to an update, be it from B-upper or
> B-lower (a "bidirectional stack", so to speak).

> >> As an alternative,
> > Right.
> > You'll need an alternative.
> >
> > So maybe step back a few steps,
> > and let us know what you really *need*.
> >
> > Not what you wish for, or what you think you would like,
> > if it was even possible, because it would be cool... :-)

> What I really need is a way to do three-way live read/write
> data synchronization, including over the internet.

*Why?*
What for?

Probably many people wish for this,
but few actually *need* it.

> rsync is not really an option because it lacks liveness and has the
> deleted-vs-new-file problem.
> At the moment I continuously run into csync2 errors ("permission denied"
> and "format error while receiving data"; I posted about that on
> csync2-users), which is one reason I don't consider it an alternative.
> Plus: it's not really live. It has to scan the file system for changes
> each time.

There is an inotify daemon for csync2...
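
If you do give csync2 another go, a crude way to make it (near) live,
assuming inotify-tools and /data as the synced tree:

    # re-run csync2 whenever something changes below /data
    # (no coalescing or rate limiting, for brevity)
    inotifywait -m -r -e modify,create,delete,move /data |
    while read -r dir event file; do
        csync2 -x
    done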

> I want changes to be transmitted live, as DRBD does for two
> primaries.

Again, why, and what for?

 ;-)

> >> At the moment, we have the following data to be clustered:
> >> 450K Files
> >> 60K Folders
> >> 150GB Data
> > That is really "nothing".
> > But not very interesting for the replication, either.
> Can you put that differently? I'm not sure what you
> intend to say.

It is a comparatively small data set
(depending on your point of view, naturally).
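(For scale: 150GB over 450K files works out to roughly 330KB per file
on average.)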

The amount of data is not that interesting for replication,
much more interesting is the change rate.

You then wrote a bit about your estimated change rate
in the following paragraphs, so ignore that comment.
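
If you ever want to put a harder number on the change rate, sysstat's
iostat gives a rough estimate (assuming the data sits on sdb; the
second report shows the average write rate over one minute):

    iostat -d -k sdb 60 2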

-- 
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com


