[DRBD-user] Mystery with online verify and Out of sync sectors.

Dimitrij Hilt dimitrij.hilt at fhe3.com
Fri Jun 13 19:38:27 CEST 2008



We are having big trouble with DRBD online verification. We have two
servers (Supermicro with an Areca 1680/SAS controller and 4 SAS hard
drives in RAID 10). The servers run Debian with MySQL and our own
vanilla kernel (newest) and drbd-8.2.6.

Every time after 'drbdadm verify all' we get a lot of "Out of sync"
messages. It makes no difference where 'drbdadm verify all' is run: the
messages appear whether it runs on the Primary or the Secondary.

I have tried disconnecting all applications and running 'drbdadm verify
all' on idle DRBD devices: same issue.
I have tried invalidating one DRBD device, running a full sync, and
running 'drbdadm verify all' directly after the full sync: same issue.
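For reference, the test sequence described above looks roughly like this
(a sketch; "mysql" is our resource name from the config below, commands
run as root, and the disconnect/connect step at the end is the usual way
to resynchronize blocks that verify has marked):

```shell
# Force a full resync so both replicas start out known-identical:
drbdadm invalidate mysql       # marks the local disk Inconsistent
# ... wait for the resync to finish (watch /proc/drbd) ...

# Run an online verify directly afterwards:
drbdadm verify mysql

# Mismatching blocks are reported in the kernel log:
dmesg | grep -i "out of sync"

# verify only marks bad blocks; a disconnect/connect cycle
# actually resynchronizes them:
drbdadm disconnect mysql
drbdadm connect mysql
```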

I can also see that write operations block on the Primary while 'drbdadm
verify all' runs; not at the beginning, but after 20-30 minutes.

The hardware is new and has a dedicated NIC for DRBD traffic with a
crossover cable. All RAM is ECC and shows no problems, and the RAID
controllers show no problems either.

Any idea why this happens?

Our config:

global {
    minor-count 8;
    usage-count no;
}

common {
  syncer {
        rate 20M;
  }
}

# this need not be r#, you may use phony resource names,
# like "resource web" or "resource mail", too

resource mysql {

  protocol C;

  handlers {
    pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";
    local-io-error "echo o > /proc/sysrq-trigger ; halt -f";
    outdate-peer "/usr/lib/heartbeat/drbd-peer-outdater -t 5";
    pri-lost "echo pri-lost. Have a look at the log files. | mail -s 'DRBD Alert' MAIL";
    split-brain "echo split-brain. drbdadm -- --discard-my-data connect SOURCE ? | mail -s 'DRBD Alert' MAIL";
    out-of-sync "echo out-of-sync. drbdadm down $DRBD_RESOURCE. drbdadm ::::0 set-gi $DRBD_RESOURCE. drbdadm up $DRBD_RESOURCE. | mail -s 'DRBD Alert' MAIL";
  }

  startup {
    wfc-timeout  0;
    degr-wfc-timeout 120;    # 2 minutes.
  }

  disk {
    on-io-error   detach;
  }

  net {
    max-buffers     2048;
    ko-count 4;
    cram-hmac-alg "sha1";
    shared-secret "secret";
    after-sb-0pri disconnect;
    after-sb-1pri disconnect;
    after-sb-2pri disconnect;
    rr-conflict disconnect;
    data-integrity-alg "md5";
  }

  syncer {
    rate 20M;
    al-extents 3833;
    cpu-mask 1;
    verify-alg md5;
  }

  on host-a {
    device     /dev/drbd0;
    disk       /dev/vg00/dbmysql;
    meta-disk  /dev/vg00/drbdmeta[0];
  }

  on host-b {
    device    /dev/drbd0;
    disk      /dev/vg00/dbmysql;
    meta-disk  /dev/vg00/drbdmeta[0];
  }
}

Dimitrij Hilt
