[DRBD-user] three node backup setup without public/internal IP address

Lars Ellenberg lars.ellenberg at linbit.com
Wed May 7 12:54:34 CEST 2014

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


On Tue, May 06, 2014 at 02:02:14PM +0200, Sven Duscha wrote:
> Hi,
> 
> I am trying to set up a three-way redundant DRBD. The way I understand
> it, this is only possible as a primary/secondary-pair with an "external"
> stacked-on-top backup.
> 
> resource r0 {
>         address        146.107.216.240:7789;

107.146.in-addr.arpa.	10800	IN	SOA	dmz-dns3.helmholtz-muenchen.de.

By the way, we also occasionally run trainings in Munich and Vienna,
in case that is of interest.

> The problem is that my third node is within the same IP range as the
> other two.

That is a common setup.

> Therefore there is no dedicated second interface with an external IP.

You don't need a second interface,
and you don't need an external IP either.

> I then get an error message about a doubly assigned IP address.

No. You get a message about a doubly assigned (IP address:port) tuple.
If you cannot use a dedicated "service" IP for the stacked DRBD,
at least use a different port.
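
For example, a minimal sketch of a stacked resource on a different port
(DRBD 8.x stacked syntax; the third node's name, device, and IP below
are made up):

    resource r0-U {
            stacked-on-top-of r0 {
                    device  /dev/drbd10;
                    address 146.107.216.240:7790;   # same IP as r0, port 7790 != 7789
            }
            on charlie {                            # hypothetical third node
                    device    /dev/drbd10;
                    disk      /dev/sdb1;
                    address   146.107.216.242:7790; # same range is fine
                    meta-disk internal;
            }
    }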

What you *should* do is have a service IP (it's OK if that is in the
same range), and put that as a secondary IP on the interface,
on the node that is supposed to be primary.

That way, the third node can connect to the same (service) IP,
regardless of which node of the "primary cluster" is currently active.
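
A rough sketch of that (the service IP .250 and the interface name are
made up):

    # on whichever node currently holds the primary role for r0:
    ip addr add 146.107.216.250/24 dev eth0

    # the stacked resource then connects via the service IP:
    #   stacked-on-top-of r0 { ... address 146.107.216.250:7790; }

Under Pacemaker you would typically let an ocf:heartbeat:IPaddr2
resource manage that address, colocated with the DRBD master role,
so it moves together with the primary.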

But that is not required; you could use the node IPs as well,
though reconnecting to the third node after a switchover on the main
cluster will be cumbersome then.
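
To illustrate the cumbersome part: after each switchover you would edit
the stacked resource's address in drbd.conf on both sides to point at
the new primary's node IP, and then run something like

    drbdadm --stacked adjust r0-U   # on the primary cluster side
    drbdadm adjust r0-U             # on the third node

(resource name as in the sketch above; exact option placement may
differ between DRBD versions).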

> Is there a way to create such a setup within the same cluster? I read up a
> bit on the Pacemaker section which proposes a three-way and four-way
> method, though, also intended to be "external". Do I need the commercial
> DRBD-Proxy for the Pacemaker-setup or is this optional?

DRBD Proxy is independent of "third node" setups,
though typically they are used together.

DRBD Proxy helps in masking latency peaks caused by write bursts
if the replication link is "high latency, low throughput",
where both "high latency" and "low throughput" are obviously relative:
they range from 300 ms and 10 MBit (for some customers) to 10 ms and 10 GBit
(for other customers), and are to be compared with the typical
performance characteristics of the primary system.

Obviously it only helps to mask write *bursts*.
If your sustained average write rate exceeds the average drain rate
through compression and the replication link, you can only either
disconnect, or throttle the primary system to that drain rate.
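
A back-of-the-envelope example (numbers made up): the proxy buffer
needed is roughly

    (burst write rate - drain rate) x burst duration

so a 60 s burst at 50 MB/s against a 10 MBit (~1.2 MB/s) link needs
about (50 - 1.2) x 60 ~ 2.9 GB of buffer, while a *sustained*
50 MB/s against that link can never drain, however large the buffer.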

-- 
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
__
please don't Cc me, but send to list   --   I'm subscribed


