[DRBD-user] Impossible to get primary node.
Robert Altnoeder
robert.altnoeder at linbit.com
Fri Sep 27 11:11:38 CEST 2019
> On 26 Sep 2019, at 11:20, Rob Kramer <rob at solution-space.com> wrote:
>
> I'm using a dual-node pacemaker cluster with drbd9 on centos 7.7. DRBD is set up for 'resource-only' fencing, and the setup does not use STONITH.
If that cluster’s availability is somewhat important, which is normally the reason why it is a cluster in the first place, then it really should use node-level fencing (STONITH).
> The issue is that if both nodes are stopped in sequence, then there is no way to start the cluster with only the node that was powered down first, because DRBD considers the data outdated.
>
> I understand that using outdated data should be prevented, but in my case outdated data is better than no system at all (in case the other node is completely dead).
In the typical scenario, that is not a problem, because there is a difference between a node being cleanly stopped and a node simply disappearing. If the peer just fails, DRBD on the secondary does not mark itself outdated.
One of the few cases where you would encounter the problem you describe is when the secondary is stopped first (e.g. for maintenance), so it marks itself outdated, and then the primary fails and you need to restart from the now-outdated secondary.
> Any drbd command to force the outdated node to be primary fails:
>
> [*fims2] ~> drbdadm primary tapas --force
> tapas: State change failed: (-7) Refusing to be Primary while peer is not outdated
> Command 'drbdsetup primary tapas --force' terminated with exit code 11
>
> I can't find any sequence of commands that can convince drbd (or pacemaker) that I *want* to use outdated data.
This should work:
drbdadm del-peer tapas:fims1
drbdadm primary --force tapas
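For completeness, a sketch of the whole recovery sequence on the surviving node (fims2 here), including re-establishing the peer afterwards; the assumption is that the resource configuration for 'tapas' still lists fims1, so 'drbdadm adjust' can recreate the connection from it once fims1 is repaired:

# On fims2, which only has outdated data:
drbdadm del-peer tapas:fims1      # drop the peer so DRBD no longer waits for it
drbdadm primary --force tapas     # promote despite the outdated flag

# Later, when fims1 is back online and its DRBD is started again,
# recreate the connection from the on-disk configuration:
drbdadm adjust tapas

Note that after forcing primary on outdated data, any changes that were only on fims1 are lost, and fims1 will resync from fims2 when it reconnects.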
br,
Robert