Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
Hi,
I recently got a DRBD (8.4.2-2) cluster up (still testing). It seems to work nicely with Pacemaker CRM in several scenarios I have tested. Here is my config.
global {
    usage-count yes;
}

common {
    handlers {
        outdate-peer        /usr/lib/drbd/crm-fence-peer.sh;
        fence-peer          /usr/lib/drbd/crm-fence-peer.sh;
        after-resync-target /usr/lib/drbd/crm-unfence-peer.sh;
        local-io-error      "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
        split-brain         "/usr/lib/drbd/notify-split-brain.sh root";
    }
    startup {
        degr-wfc-timeout 0;
    }
    net {
        shared-secret 1QP69G4kWDslx2TMiaEStI6bwaGH5y8d;
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
    }
    disk {
        on-io-error call-local-io-error;
        fencing     resource-and-stonith;
    }
}
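
(The per-resource sections are not shown above. Against this common section, a minimal sketch of one would look roughly like the following; the resource name, device paths, hostnames, and addresses are only placeholders, not my actual values.)

resource r0 {
    # Placeholder values -- substitute the real DRBD device,
    # backing disk, hostnames, and replication addresses.
    device    /dev/drbd0;
    disk      /dev/sdb1;
    meta-disk internal;

    on node-a {
        address 192.168.1.1:7789;
    }
    on node-b {
        address 192.168.1.2:7789;
    }
}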
The "local-io-error" handler only gets called when the primary node has a disk issue; I have not seen the secondary node call it when the secondary itself had disk access problems. Is this by design?
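
In case it helps to reproduce: one way to provoke an I/O error on the secondary's backing store for testing is a device-mapper error table. This is only a sketch and assumes the backing disk is already a device-mapper volume; the name test_backing is a placeholder.

# On the secondary: swap the backing volume's table for an
# all-error table (test_backing is a placeholder name).
SECTORS=$(blockdev --getsz /dev/mapper/test_backing)
dmsetup suspend test_backing
dmsetup load test_backing --table "0 $SECTORS error"
dmsetup resume test_backing

# Then generate write traffic on the primary and watch the
# secondary's kernel log and handler output, e.g.:
#   tail -f /var/log/messages | grep -i drbd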
Thanks,
Prakash