[DRBD-user] 3 Node Quorum DRBD Pacemaker Cluster isn't Promoting
Brune
Brune.Max at aol.com
Tue Sep 24 16:10:15 CEST 2019
Goal: A three-node quorum cluster for High Availability. One of these
nodes should act as a diskless tiebreaker.
OS: All three nodes run CentOS 7.
DRBD 9 is installed and configured with the following configuration files
on ALL nodes:
/etc/drbd.d/global_common.conf
global {
        usage-count yes;
}

common {
        handlers {
                quorum-lost "echo b > /proc/sysrq-trigger";
        }
        startup {
        }
        options {
                quorum majority;
                auto-promote yes;
                on-no-quorum suspend-io;
        }
        disk {
        }
        net {
                protocol C;
        }
}
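To rule out a mismatch between what is written in the files and what DRBD actually applies, the merged configuration can be dumped (a sketch; assumes the resource name r0 from the file below):

```shell
# Print the fully merged configuration (global_common.conf + resource file)
# exactly as DRBD parses it; quorum and auto-promote should appear here.
drbdadm dump r0
```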
/etc/drbd.d/r0.res
resource r0 {
        meta-disk internal;
        device /dev/drbd0;
        net {
                allow-two-primaries no;
        }
        on kvmhost0.maxcloud.org {
                disk      none;
                address   100.64.1.26:7789;
                node-id   0;
        }
        on kvmhost1.maxcloud.org {
                disk      /dev/ImagesVG/kvmLV;
                address   100.64.1.27:7789;
                node-id   1;
        }
        on kvmhost2.maxcloud.org {
                disk      /dev/ImagesVG/kvmLV;
                address   100.64.1.28:7789;
                node-id   2;
        }
        connection-mesh {
                hosts kvmhost1.maxcloud.org kvmhost2.maxcloud.org kvmhost0.maxcloud.org;
        }
}
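With the resource up, the per-node connection and quorum state can be inspected on any node (a sketch; resource name taken from the file above):

```shell
# Summary: role, disk state, and peer connections for r0.
drbdadm status r0

# More detail, including the quorum field per resource.
drbdsetup status r0 --verbose
```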
Pacemaker and Corosync are only installed and configured on the nodes
with actual DRBD disks (in my case, as shown in the attached screenshots,
kvmhost1 and kvmhost2):
/etc/corosync/corosync.conf
totem {
    version: 2
    cluster_name: kvmcluster
    transport: udpu
    interface {
        ringnumber: 0
        bindnetaddr: {address of kvmhost}
        broadcast: yes
        mcastport: 5405
    }
}

quorum {
    provider: corosync_votequorum
    two_node: 1
}

nodelist {
    node {
        ring0_addr: kvmhost1.maxcloud.org
        name: kvmhost1.maxcloud.org
        nodeid: 1
    }
    node {
        ring0_addr: kvmhost2.maxcloud.org
        name: kvmhost2.maxcloud.org
        nodeid: 2
    }
}

logging {
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: yes
    timestamp: on
}
My issue is that after I put the primary node into standby (via Pacemaker)
or shut it down completely, Pacemaker does not start the resources on the
other node. Furthermore, DRBD reports that it has no quorum after I shut
down the primary node, which is probably why Pacemaker does not start the
resources.
I presumably made a mistake configuring Pacemaker and DRBD, but I cannot
figure out where.
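When the failure occurs, the quorum view of both layers can be compared on the surviving node (a sketch; Corosync and DRBD each compute quorum independently, so they can disagree):

```shell
# Corosync/Pacemaker view: expected votes, total votes, quorate yes/no.
corosync-quorumtool -s

# DRBD's own view: look for quorum:no on the resource whose I/O is suspended.
drbdadm status r0
```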
-------------- next part --------------
A non-text attachment was scrubbed...
Name: step 3 waiting until cluster stops all recources.PNG
Type: image/png
Size: 161317 bytes
Desc: not available
URL: <http://lists.linbit.com/pipermail/drbd-user/attachments/20190924/9513b2fe/attachment-0003.png>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: step 2 standby primary node.PNG
Type: image/png
Size: 161669 bytes
Desc: not available
URL: <http://lists.linbit.com/pipermail/drbd-user/attachments/20190924/9513b2fe/attachment-0004.png>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: step 1 everything looks good.PNG
Type: image/png
Size: 160881 bytes
Desc: not available
URL: <http://lists.linbit.com/pipermail/drbd-user/attachments/20190924/9513b2fe/attachment-0005.png>