[DRBD-user] Impossible to get primary node.

Rob Kramer rob at solution-space.com
Thu Sep 26 11:20:34 CEST 2019


Hi all,

I'm using a dual-node pacemaker cluster with drbd9 on centos 7.7. DRBD 
is set up for 'resource-only' fencing, and the setup does not use 
STONITH. The issue is that if both nodes are stopped in sequence, then 
there is no way to start the cluster with only the node that was powered 
down first, because DRBD considers the data outdated.

I understand that using outdated data should be prevented, but in my 
case outdated data is better than no system at all (in case the other 
node is completely dead). Every drbd command to force the outdated node 
to be primary fails:

   [*fims2] ~> drbdadm primary tapas --force
   tapas: State change failed: (-7) Refusing to be Primary while peer is 
not outdated
   Command 'drbdsetup primary tapas --force' terminated with exit code 11
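
For completeness, the state drbd sees on the outdated node can be 
inspected with the standard drbd9 status commands before forcing 
(resource name is from my config; output omitted):

   [*fims2] ~> drbdadm status tapas    ## resource, device and peer overview
   [*fims2] ~> drbdadm dstate tapas    ## local disk state
   [*fims2] ~> drbdadm cstate tapas    ## connection state to the peer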

I can't find any sequence of commands that can convince drbd (or 
pacemaker) that I *want* to use outdated data. If I remove the 'fencing 
resource-only' entry from the drbd config, then I can do a sequence of 
commands that makes the primary --force work (basically, put the cluster 
in maintenance mode, down and up drbd, then primary --force; see the 
sketch below). I've made sure that stray fencing constraints are removed 
from the cluster cib as well.
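
For reference, the sequence that works once 'fencing resource-only' is 
removed looks roughly like this. It's a sketch rather than a recipe: the 
constraint id in step 2 is just a placeholder for whatever 
crm-fence-peer.9.sh actually created in the cib, and the pcs syntax may 
differ slightly between versions.

   ## 1. keep pacemaker from interfering while drbd is handled by hand
   [*fims2] ~> pcs property set maintenance-mode=true

   ## 2. list and remove any leftover fence constraint (id is an example)
   [*fims2] ~> pcs constraint --full | grep drbd-fence
   [*fims2] ~> pcs constraint remove drbd-fence-by-handler-tapas-<id>

   ## 3. restart drbd so it re-reads the config, then force primary
   [*fims2] ~> drbdadm down tapas
   [*fims2] ~> drbdadm up tapas
   [*fims2] ~> drbdadm primary tapas --force

   ## 4. hand control back to pacemaker
   [*fims2] ~> pcs property set maintenance-mode=false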

Surely there has to be some way to force drbd to listen to me, and stop 
trying to protect my data at the cost of leaving me with no runnable 
system at all?

This is the first system that we've rolled out that uses drbd9; it's 
possible that the --force would work OK in 8.x.

I've included the drbd config below.

Cheers!

      Rob


---------------------------------------------

resource tapas {
   protocol C;

   startup {
     wfc-timeout            0;    ## Infinite!
     outdated-wfc-timeout    120;
     degr-wfc-timeout        120;  ## 2 minutes.
   }

   disk {
     on-io-error     detach;
     resync-rate     60M;              ## ~0.5 * 125 MB/s (1Gb/s)
   }

   handlers {
     split-brain "/opt/sol/tapas/bin/split-brain-helper.sh";

     fence-peer "/usr/lib/drbd/crm-fence-peer.9.sh";
     after-resync-target "/usr/lib/drbd/crm-unfence-peer.9.sh";
   }

   net {
     max-buffers         8000;
     max-epoch-size      8000;
     sndbuf-size         2M;

     fencing resource-only;

     after-sb-0pri       discard-least-changes;
   }

   device       /dev/drbd0;
   disk         /dev/mapper/centos-drbd;
   meta-disk    internal;

   on fims1 {
     address    x.36:7789;
   }

   on fims2 {
     address    x.37:7789;
   }
}


