[DRBD-user] Secondary server not mounting drbd partition if auto_failback is off

jan gestre ipcopper.ph at gmail.com
Sat Jul 17 14:55:49 CEST 2010


Hi Everyone,

I've set up a two-node DRBD + Heartbeat test environment. Everything is
fine when auto_failback is set to on. However, with auto_failback set to
off, when I stop Heartbeat on node1, node2 does not mount /dev/drbd0 and
remains Secondary. Am I missing something?

Here are my configuration files:


/etc/drbd.conf:

global { usage-count yes; }

common {
        syncer { rate 10M; }
}

resource r0 {
        protocol C;
        disk { on-io-error detach; }
        startup { wfc-timeout 30; degr-wfc-timeout 20; }
        on node1.cluster.local {
                device    /dev/drbd0;
                disk      /dev/sda3;
                meta-disk internal;
        }
        on node2.cluster.local {
                device    /dev/drbd0;
                disk      /dev/sda3;
                meta-disk internal;
        }
}



/etc/ha.d/ha.cf:

debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 30
warntime 10
initdead 60
udpport 694
bcast eth1
ucast eth1
auto_failback on
node node1.cluster.local
node node2.cluster.local
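(One note on the ha.cf above: Heartbeat's ucast directive normally takes the peer node's IP address as a second argument, alongside the interface. A sketch of the usual form; the address below is a placeholder, not taken from my actual config:)

	# unicast heartbeat over eth1 to the other node
	# 10.0.0.2 is a placeholder for the peer's eth1 address
	ucast eth1 10.0.0.2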



/etc/ha.d/haresources:

node1.cluster.local drbddisk::r0 IPaddr:: httpd
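(For comparison: as far as I understand, Heartbeat only mounts the DRBD device if a Filesystem resource is listed in haresources, between drbddisk and the service. A sketch of such a line; the mount point /var/www and the ext3 filesystem type are assumptions for illustration, and the IPaddr address is left unspecified as above:)

	# mount point and fs type below are assumptions, adjust to your setup
	node1.cluster.local drbddisk::r0 Filesystem::/dev/drbd0::/var/www::ext3 IPaddr:: httpd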

