[DRBD-user] DRBD9 and Proxmox - no quorum in 2 nodes cluster

Shafeek Sumser shafeeks at gmail.com
Tue Mar 14 10:47:31 CET 2017

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hi Yannis,

Thanks for the information you provided.

On pve1, I initiated the cluster and added the node pve2. When drbdctrl is
primary on pve1 (secondary on pve2) and I shut down pve2, the DRBD storage
remains available: I can do any manipulation and the VM keeps working. But
the other way round, if I shut down pve1 (where drbdctrl is primary), the
DRBD storage is not available on pve2. Moreover, drbdmanage commands
(list-nodes, list-volumes, etc.) do not work on pve2; they say:

root at pve2:~# drbdmanage list-nodes
Waiting for server: ...............
No nodes defined

The log goes as follows:
Mar 14 13:39:39 pve2 drbdmanaged[20776]: INFO       Leader election by wait for connections
Mar 14 13:39:39 pve2 drbdmanaged[20776]: INFO       DrbdAdm: Running external command: drbdsetup wait-connect-resource --wait-after-sb=yes --wfc-timeout=2 .drbdctrl
Mar 14 13:39:41 pve2 drbdmanaged[20776]: ERROR      DrbdAdm: External command 'drbdsetup': Exit code 5
Mar 14 13:39:41 pve2 drbdmanaged[20776]: ERROR      drbdsetup/stderr: degr-wfc-timeout has to be shorter than wfc-timeout
Mar 14 13:39:41 pve2 drbdmanaged[20776]: ERROR      drbdsetup/stderr: degr-wfc-timeout implicitly set to wfc-timeout (2s)
Mar 14 13:39:41 pve2 drbdmanaged[20776]: ERROR      drbdsetup/stderr: outdated-wfc-timeout has to be shorter than degr-wfc-timeout
Mar 14 13:39:41 pve2 drbdmanaged[20776]: ERROR      drbdsetup/stderr: outdated-wfc-timeout implicitly set to degr-wfc-timeout (2s)
Mar 14 13:39:41 pve2 drbdmanaged[20776]: WARNING    Resource '.drbdctrl': wait-connect-resource not finished within 2 seconds
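
For what it's worth, I can still look at the control volume directly on pve2,
bypassing drbdmanage. I am assuming here that the usual drbd-utils status
commands work on the .drbdctrl resource the same way as on an ordinary
resource:

root at pve2:~# drbdadm status .drbdctrl
root at pve2:~# drbdsetup status --verbose .drbdctrl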


Regarding the split-brain issue: for the moment I can't find anything in the
log indicating that a split-brain situation has been detected on the
surviving node, i.e. pve2. I have run 'drbdmanage primary drbdctrl', but the
DRBD storage is still not available. How can I resolve the split brain
manually, so that the DRBD storage continues to work even when pve1 (the
primary) is down?
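
For an ordinary DRBD resource, the manual split-brain recovery described in
the DRBD user's guide looks roughly like the sketch below. I am not sure it
applies unchanged to the .drbdctrl resource that drbdmanage maintains, so
please treat these commands as my assumption rather than something I have
verified:

# on the node whose changes should be discarded (the split-brain "victim")
drbdadm disconnect <resource>
drbdadm secondary <resource>
drbdadm connect --discard-my-data <resource>

# on the other node (the split-brain "survivor")
drbdadm connect <resource>

Is that the right direction for this setup, or does drbdmanage need to do the
recovery itself?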

I will also try the scenario with a third DRBD node (pve3) added to the
cluster (via the drbdmanage add-node command on pve1) and will let you know.
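
If I have the syntax right, that would be something like the following on
pve1 (the address is just a placeholder for pve3's replication-network IP):

root at pve1:~# drbdmanage add-node pve3 <ip-of-pve3>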

Thanks

Shafeek



On Mon, Mar 13, 2017 at 10:41 PM, Yannis Milios <yannis.milios at gmail.com>
wrote:

> > the drbd storage becomes unavailable and the drbd quorum is lost..
>
> From my experience, using only 2 nodes on drbd9 does not work well, meaning
> that the cluster loses quorum and you have to manually troubleshoot the
> split brain.
> If you really need a stable system, then use 3 drbd nodes. You could
> possibly use the 3rd node as a drbd control node only ?? Just guessing...
>
> Yannis
> --
> Sent from Gmail Mobile
>



-- 
Shafeek SUMSER