[DRBD-user] halt after split brain on Red Hat Cluster 5

Chris Harms chris at cmiware.com
Tue Jul 3 03:01:20 CEST 2007


Hi All,

I'm having a problem after simulating a network failure (unplugging the 
cables) and reconnecting.  When I reconnect the cables, both nodes get 
halted by the system and nothing is logged.  I have removed the default 
split-brain settings in drbd.conf and replaced them with what I thought 
were innocuous commands (see the config below).
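
(In case it helps to reproduce this: pulling the cable should be roughly 
equivalent to taking the replication interface down on one node, e.g.

  ifconfig eth1 down

where eth1 is just a stand-in for whatever the replication link is here.)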

Is there an unlisted default setting in DRBD that might issue a halt to 
the system?  Also, if I want the cluster manager to handle fencing, what 
would be good settings for the after-split-brain handlers?
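
What I had in mind, very roughly, is to leave the after-sb policies at 
non-destructive values so DRBD only disconnects on split brain, and have 
the handlers do nothing but log, leaving the actual fencing to the 
cluster manager.  Something like this (untested, and I'm not sure my 
DRBD version even has a split-brain handler):

  handlers {
    # log only; let the cluster manager fence the bad node
    pri-lost-after-sb "echo 'pri-lost-after-sb in DRBD' >> /var/log/drbd-errors.log";
    split-brain       "echo 'split brain detected in DRBD' >> /var/log/drbd-errors.log";
  }
  net {
    # never auto-resolve a real conflict; disconnect and wait for the
    # cluster manager (or an operator) to sort it out
    after-sb-0pri discard-zero-changes;
    after-sb-1pri disconnect;
    after-sb-2pri disconnect;
  }

The idea is that discard-zero-changes only auto-resolves the trivial case 
where one side wrote nothing during the split; if both sides wrote data, 
the nodes just stay disconnected.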

My resource is set up as follows:

global {
        usage-count no;
}
resource res {
  protocol C;
  handlers {
    pri-on-incon-degr "echo 'pri-on-incon-degr in DRBD' >> /var/log/drbd-errors.log";
    pri-lost-after-sb "echo 'pri-lost-after-sb in DRBD' >> /var/log/drbd-errors.log";
    local-io-error    "echo 'local-io-error in DRBD' >> /var/log/drbd-errors.log";
  }
  startup {
    wfc-timeout         0;  ## Infinite!
    degr-wfc-timeout   60;  ## 1 minute.
  }
  disk {
    on-io-error detach;
  }
  net {
    # timeout           60;
    # connect-int       10;
    # ping-int          10;
    # max-buffers     2048;
    # max-epoch-size  2048;
    after-sb-0pri discard-younger-primary;
    after-sb-1pri consensus;
    after-sb-2pri disconnect;
  }
  syncer {
    rate   50M;
    al-extents 257;
  }

  on node1 {
    device      /dev/drbd0;
    disk        /dev/sda5;
    address     192.168.13.203:7789;
    meta-disk   internal;
  }

  on node2 {
    device     /dev/drbd0;
    disk       /dev/sda3;
    address    192.168.13.206:7789;
    meta-disk  internal;
  }
}
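
In case it's relevant, this is roughly how I apply config changes and 
check the state before pulling the cables:

  drbdadm adjust res     # push the changed net/handler options to the running resource
  cat /proc/drbd         # confirm both nodes are Connected and UpToDate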
