[DRBD-user] Three node setup questions

Lars Ellenberg lars.ellenberg at linbit.com
Wed Feb 18 10:13:06 CET 2009

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


On Tue, Feb 17, 2009 at 03:52:16PM -0700, David.Livingstone at cn.ca wrote:
> Hello,
> 
> I currently have two two-node clusters running heartbeat and drbd (see
> background below). I also have a two-node test cluster which I decided to
> update to the latest releases of everything. In so doing I downloaded and
> installed drbd 8.3.0 (drbd-8.3.0.tar.gz), which includes three-node setups
> using stacked resources. Specifically, having a third backup/brp node
> geographically removed from our production cluster is very appealing.
> 
> I have looked at the online manual (http://www.drbd.org/users-guide/)
> and read the current information for three-node setups, and have some
> observations/questions :
> - An illustration/figure of a three-node setup would help.

There are several ways to do it.
You can also have four nodes: two two-node DRBD pairs, the Primary of
each serving as the "lower" resource of a "stacked" DRBD between them.

> - From your "Creating a three-node setup" example on which machine does
> the stacked-on-top-of address run(ie 192.168.42.1) ?

The IP should be managed by heartbeat/pacemaker. It needs to be present
before you promote the "upper" resource to Primary.
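
With the Pacemaker CRM shell that could look roughly like this (a hedged
sketch only; p_ip_stacked and ms_drbd_r0 are made-up names, and ms_drbd_r0
is assumed to be the master/slave resource already defined for the lower
DRBD resource):

  primitive p_ip_stacked ocf:heartbeat:IPaddr2 \
          params ip="192.168.42.1" cidr_netmask="24"

  # keep the stacked IP on whichever node is Primary for the lower resource,
  # and bring it up only after that resource has been promoted
  colocation c_ip_on_r0_master inf: p_ip_stacked ms_drbd_r0:Master
  order o_r0_promote_before_ip inf: ms_drbd_r0:promote p_ip_stacked:start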

> In my case my third
> node is not on the same ip segment as my two other nodes.

That doesn't matter, as long as the nodes can reach each other.

> - After doing some searching I hit on the http://drbd-plus.linbit.com
> page, which mentions the configuration keywords "ignore-on" and "use-csums".
> Neither of these exists in the drbd.conf man page. Are they needed ?

These are solved differently now.
ignore-on was not flexible enough, so it was dropped.
use-csums has been replaced by csums-alg (so you can choose the
algorithm to be used for the checksum-based resync).
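
In DRBD 8.3 that option lives in the syncer section of drbd.conf; roughly
(sketch only, the digest is just an example):

  resource r0 {
    syncer {
      rate       10M;
      csums-alg  sha1;    # checksum-based resync with the chosen digest
      verify-alg sha1;    # optional, used by "drbdadm verify"
    }
    # ... on/net/disk sections as before ...
  }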

> - The manual talks about the drbdupper resource used in R1 style
> clusters. What about CRM style clusters ?

"interessting" setups with "interessting" constraints.
or use drbdupper resource anyways.
We probably need a blog post or other feature about this.
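
Building on the IP sketch above, the stacked resource itself would be
handled roughly like this in the CRM (again a hedged sketch; the names
p_drbd_r0-U / ms_drbd_r0-U and the resource r0-U are made up, and the
OCF agent may be ocf:heartbeat:drbd instead, depending on what your
packages ship):

  primitive p_drbd_r0-U ocf:linbit:drbd \
          params drbd_resource="r0-U"

  # clone-max=1: only one instance inside this cluster; the DRBD peer for
  # r0-U is the external third node
  ms ms_drbd_r0-U p_drbd_r0-U \
          meta master-max="1" clone-max="1" notify="true"

  # run the stacked resource only where the stacked IP is, and only after it
  colocation c_r0-U_with_ip inf: ms_drbd_r0-U p_ip_stacked
  order o_ip_before_r0-U inf: p_ip_stacked:start ms_drbd_r0-U:start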

> - In the R1 style configuration you state :
> "The third node, which is set aside from the Heartbeat cluster, will
> have the other half of the stacked resource available permanently."
> I presume by this you mean that if the two-node cluster disappears, the
> mounting/application startup on the backup node is done manually ?

more or less, yes.
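
For the record, the manual takeover on the third node would be roughly
(sketch only; device, mount point and resource name are made up, and
depending on the connection/disk state you may have to force the
promotion; check the drbdadm man page):

  # on the backup node, once the production site is known to be down
  drbdadm primary r0-U          # promote the upper resource locally
  mount /dev/drbd10 /srv/data   # mount the replicated filesystem
  # ... then start the application by hand or via its init script ...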

> Other Questions :
> - Is the manual available for download/printing ?

No. We hand it out in training sessions, though.

> - Has anyone used the nx_lsa (Linux Sockets Acceleration) driver to run drbd ?

I'm not exactly sure what that is supposed to do.

> Background :
> 1. Current two-node production clusters :
> - HW : - Proliant DL380G5
> - Crossover for drbd : HP NC510C (NetXen) 10GB using nx_nic
> - SW : - RHEL5 and kernel-PAE-2.6.18-92.1.10.el5
> - drbd : drbd-8.2.6-3, drbd-km-2.6.18_92.1.10.el5PAE-8.2.6-3,
> - heartbeat/pacemaker :
> heartbeat-2.99.0-3.1
> heartbeat-common-2.99.0-3.1
> heartbeat-resources-2.99.0-3.1
> pacemaker-heartbeat-0.6.6-17.2
> pacemaker-pygui-1.4-5.1
> 
> 2. Test two-node cluster :
> - HW : - Proliant DL380G4
> - SW : - Latest RHEL5 and kernel-2.6.18-128.1.1.el5
> - drbd : drbd-8.3.0-3, drbd-km-2.6.18_128.1.1.el5-8.3.0-3
> - heartbeat/pacemaker :
> heartbeat-2.99.2-6.1.i386.rpm
> heartbeat-common-2.99.2-6.1.i386.rpm
> heartbeat-resources-2.99.2-6.1.i386.rpm
> pacemaker-1.0.1-3.1.i386.rpm
> pacemaker-pygui-1.4-11.9.i386.rpm
> 


-- 
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
__
please don't Cc me, but send to list   --   I'm subscribed


