[DRBD-user] WFC stays a long time before connect

Pierre LEBRECH pierre.lebrech at laposte.net
Thu Jul 9 12:11:18 CEST 2009

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hello,

Starting context: a 3-node cluster, every node connected, HA services on node1, DRBD 8.3.2rc1 on Linux 2.6.30.

I switched HA services to node2 by running "/usr/lib/heartbeat/hb_standby all" on node1.
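While the handover runs, the connection states can be watched from another terminal; the cs: field in /proc/drbd shows WFConnection while a peer is being waited for:

  # refresh all DRBD resource states once per second
  watch -n1 cat /proc/drbd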


Well, it worked, but drbd1 (the stacked resource) stayed in WFConnection (1) for many seconds (15-20) before connecting.
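The state can also be queried per resource; note that on the lower nodes (node1/node2) the stacked resource needs the --stacked flag:

  # on node3, where r0-U is a plain resource
  drbdadm cstate r0-U
  # on node1/node2, where r0-U sits on top of r0
  drbdadm --stacked cstate r0-U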

I think it should be faster.

Any ideas?

Thanks.


(1) on both node1 and node3


Here is the drbd.conf:

global {
  usage-count yes;
}
common {
  syncer { rate 10M; }
  net {
    max-buffers 40000;
  }
}

resource r0 {
  protocol C;
  handlers {
    pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";
    pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";
    local-io-error "echo o > /proc/sysrq-trigger ; halt -f";
  }
  startup {
    wfc-timeout  0;
    degr-wfc-timeout 120;
  }
  disk {
    on-io-error   detach;
  }
  net {
    after-sb-0pri disconnect;
    after-sb-1pri disconnect;
    after-sb-2pri disconnect;
    rr-conflict disconnect;
  }
  syncer {
    rate 90M;
    al-extents 128;
    csums-alg md5;
  }

  on node1 {
      device     /dev/drbd0;
      disk       /dev/md2;
      address    10.0.0.1:7788;
      meta-disk  /dev/md1 [0];
  }
  on node2 {
      device     /dev/drbd0;
      disk       /dev/md2;
      address    10.0.0.2:7788;
      meta-disk  /dev/md1 [0];
  }
}

resource r0-U {
  protocol C;

  syncer {
    csums-alg md5;
    rate 5M;
  }

  stacked-on-top-of r0 {
    device    /dev/drbd1;
    address   192.168.2.15:7788;
  }

  on node3 {
    device    /dev/drbd1;
    disk      /dev/md2;
    address   192.168.2.14:7788;
    meta-disk internal;
  }
}
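For context, the manual equivalent of what heartbeat does on the incoming primary (node2) for the stacked setup is roughly the following; a sketch, assuming r0 is already up there and the resource agents normally perform these steps:

  drbdadm primary r0               # promote the lower-level resource first
  drbdadm --stacked up r0-U        # bring up the stacked device on top of drbd0
  drbdadm --stacked primary r0-U   # then promote the stacked resource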


