[DRBD-user] switch was down, all drbd machines rebooted

Heiko rupertt at gmail.com
Fri Jul 3 16:01:27 CEST 2009

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hello,

I had an earlier discussion here where we came to the conclusion that using
Protocol C can cause crashes.
Yesterday we had problems with one of our switches, so the DRBD-enabled
machines could not see each other; then all the machines rebooted, produced
split-brains, and caused a lot of work.
Do you think the crashes/reboots are caused by the same problem, or can we
prevent this behaviour by tuning our Heartbeat/DRBD configuration? I'll attach
our drbd.conf and ha.cf.

---------------------------------
drbd.conf

common {
  protocol C;
}



resource drbd_backend {
  startup {
    degr-wfc-timeout 120;    # 2 minutes.
  }
  disk {
    on-io-error   detach;
  }
  net {
  }
  syncer {
        rate 500M;
        al-extents 257;
  }

  on xen-B1.fra1.mailcluster {
    device    /dev/drbd0;
    disk      /dev/md3;
    address   172.20.2.1:7788;
    meta-disk internal;
  }
  on xen-A1.fra1.mailcluster {
    device    /dev/drbd0;
    disk      /dev/md3;
    address   172.20.1.1:7788;
    meta-disk internal;
  }
}
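
Would something along these lines help? This is only a sketch, untested, and
the handler paths are assumptions that may differ per distribution. Since our
net section is empty, DRBD presumably falls back to its default split-brain
behaviour (disconnect and wait for manual intervention), so with DRBD 8.x we
could add automatic recovery policies plus fencing, roughly:

resource drbd_backend {
  # ... unchanged sections omitted ...
  disk {
    on-io-error   detach;
    fencing       resource-only;   # call the fence-peer handler when the peer is unreachable
  }
  net {
    after-sb-0pri discard-zero-changes;  # no Primary at detection: sync from the node that has changes
    after-sb-1pri discard-secondary;     # one Primary: throw away the Secondary's changes
    after-sb-2pri disconnect;            # two Primaries: do not resolve automatically
  }
  handlers {
    # paths are assumptions; drbd-peer-outdater comes with heartbeat/dopd,
    # notify-split-brain.sh ships with recent DRBD 8.x packages
    fence-peer  "/usr/lib/heartbeat/drbd-peer-outdater -t 5";
    split-brain "/usr/lib/drbd/notify-split-brain.sh root";
  }
}

As far as I understand it, the after-sb-* policies only decide what happens
once a split-brain already exists; the fencing/dopd part is what should keep a
partitioned peer from being promoted in the first place.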

---------------------------------------
ha.cf

#use_logd on
logfile /var/log/ha-log
debugfile /var/log/ha-debug
logfacility local0
keepalive 2
deadtime 10
warntime 3
initdead 20
udpport 694
ucast eth0 172.20.1.1
ucast eth0 172.20.2.1
node xen-A1.fra1.mailcluster
node xen-B1.fra1.mailcluster
auto_failback on
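
On the Heartbeat side we only have one communication path, and it goes through
the switch that failed, plus we have no STONITH, so during the outage both
nodes probably considered each other dead. Would adding something like the
following make sense? Again only a sketch; the interface names, addresses,
the dopd path and the stonith line are placeholders/assumptions for our setup.

# a second heartbeat path that does not depend on the switch
# (eth1 and the 10.0.0.x addresses are placeholders for a crossover link)
ucast eth1 10.0.0.1
ucast eth1 10.0.0.2
# or a serial link instead:
#serial /dev/ttyS0
#baud 19200

# run dopd so the fence-peer handler in drbd.conf can outdate the peer
# (path is an assumption, e.g. /usr/lib64/heartbeat/dopd on some systems)
respawn hacluster /usr/lib/heartbeat/dopd
apiauth dopd gid=haclient uid=hacluster

# STONITH so a dead-looking node is really powered off before takeover;
# the line below is the example form from the stock ha.cf, a real device,
# plugin and credentials would have to be substituted
#stonith_host * baytech 10.0.0.3 mylogin mysecretpassword

# often suggested with DRBD: avoid a second failover when the old node returns
auto_failback off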



Thanks a lot


.r