[DRBD-user] Secondary server not mounting drbd partition if auto_failback is off

jan gestre ipcopper.ph at gmail.com
Sat Jul 17 14:55:49 CEST 2010

Hi Everyone,

I've set up a two-node DRBD + Heartbeat test environment. Everything works
fine when auto_failback is set to on, but when I stop Heartbeat on node1
with auto_failback set to off, node2 does not mount /dev/drbd0 and stays
Secondary. Am I missing something?
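
For reference, these are the commands I use on node2 to confirm it never
gets promoted, plus the manual promotion and mount I run just to rule out a
problem in DRBD itself (plain drbd-utils and mount, nothing exotic):

    cat /proc/drbd                   # node2 still reports itself Secondary
    drbdadm role r0                  # same information, per resource
    drbdadm primary r0               # manual promotion, to rule out DRBD itself
    mount /dev/drbd0 /replication    # manual mount of the replicated filesystem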

Here are my configuration files:

--------
drbd.conf

global {
        usage-count yes;
}
common {
  syncer { rate 10M; }
}
resource r0 {
        protocol C;
        disk { on-io-error detach; }
        startup { wfc-timeout 30; degr-wfc-timeout 20; }
        on node1.cluster.local {
                device    /dev/drbd0;
                disk      /dev/sda3;
                address   172.16.88.101:7789;
                meta-disk internal;
        }
        on node2.cluster.local {
                device    /dev/drbd0;
                disk      /dev/sda3;
                address   172.16.88.102:7789;
                meta-disk internal;
        }
}
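
Both nodes have an identical drbd.conf. Before each failover test I check
that the resource is Connected and UpToDate on both sides:

    cat /proc/drbd
    drbdadm cstate r0        # connection state
    drbdadm dstate r0        # disk state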

---------------------

ha.cf

debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 30
warntime 10
initdead 60
udpport 694
bcast eth1
ucast eth1 192.168.136.133
auto_failback on
node node1.cluster.local
node node2.cluster.local
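
For the failing test, the only change I make in ha.cf is the failback
setting:

    auto_failback off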

----------

haresources

node1.cluster.local drbddisk::r0 Filesystem::/dev/drbd0::/replication::ext3 IPaddr::172.16.88.103/24/eth0/172.16.88.255 httpd
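
As far as I understand it, for the group above Heartbeat's ResourceManager
runs roughly the following on the active node; running them one by one is
how I test the individual pieces by hand:

    /etc/ha.d/resource.d/drbddisk r0 start
    /etc/ha.d/resource.d/Filesystem /dev/drbd0 /replication ext3 start
    /etc/ha.d/resource.d/IPaddr 172.16.88.103/24/eth0/172.16.88.255 start
    /etc/init.d/httpd start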


TIA.


