[DRBD-user] DRBD LVM2 Trouble

Tyler Seaton tyler at imprev.com
Thu Mar 6 02:52:31 CET 2008



Hey Guys,

I have a pretty bad situation on my hands.

We had a node configured running DRBD 8.0.6. The goal was to keep this  
running in standalone mode until we provisioned a matching machine. We  
purchased the matching machine and finally had it fully configured  
today. I kicked off the initial sync, and had hoped that we would have  
both machines in sync within a day or two.

This was unfortunately not the case. When I kicked off the sync all  
seemed well at first; however, our application quickly began throwing  
errors as the primary node's filesystem became read-only. I quickly  
shut off drbd on the secondary node and attempted to return the  
original configuration to the primary server. Sadly no amount of  
backpedaling has helped us. We are currently dead in the water.

DRBD was configured on the primary node with LVM on top. We have/had 3  
resources configured, the first 2 being 2TB in size and the 3rd being  
1.4-5TB in size. Since stopping the initial sync I have not been able  
to mount the LVM volume group that sits above the three resources.  
NOTE: the sdb partitions on nfs2 are numbered differently.
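In case it is relevant: since the VG is stacked on DRBD, I understand LVM on both nodes normally needs a filter so it scans the /dev/drbd* devices rather than the backing /dev/sdb* partitions, otherwise the PVs can get activated on the raw disks. A sketch of what I mean for /etc/lvm/lvm.conf (the exact regexes are my assumption, based on the device names in my config below):

```
devices {
    # Accept the DRBD devices, reject the raw backing partitions (sdb*),
    # accept everything else. First matching pattern wins.
    filter = [ "a|^/dev/drbd[0-9]+$|", "r|^/dev/sdb.*|", "a|.*|" ]
}
```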

/var/log/messages was giving the following messages:

Mar  5 14:38:35 nfs2 kernel: drbd2: rw=0, want=3434534208,  
limit=3421310910
Mar  5 14:38:35 nfs2 kernel: attempt to access beyond end of device
Mar  5 14:38:35 nfs2 kernel: drbd2: rw=0, want=3434534216,  
limit=3421310910
Mar  5 14:38:35 nfs2 kernel: attempt to access beyond end of device
Mar  5 14:38:35 nfs2 kernel: drbd2: rw=0, want=3434534224,  
limit=3421310910
Mar  5 14:38:35 nfs2 kernel: attempt to access beyond end of device
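For what it's worth, the gap between what the kernel wants and what drbd2 now reports works out to roughly 6 GiB (sector counts taken from the log above; I'm assuming 512-byte sectors):

```shell
# Sector counts from the kernel log above (512-byte sectors assumed)
want=3434534216
limit=3421310910
diff=$((want - limit))
echo "$diff sectors"                             # → 13223306 sectors
echo "$((diff * 512 / 1024 / 1024 / 1024)) GiB"  # → 6 GiB
```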


Here is my conf:

global {
     usage-count no;
}

common {

   syncer {rate 20M;}

   handlers {
     local-io-error "echo o > /proc/sysrq-trigger ; halt -f";
     outdate-peer "/sbin/drbd-peer-outdater";
   }

   startup {
     wfc-timeout  120;       # s
     degr-wfc-timeout 120;    # 2 minutes.
   }

   disk {
     on-io-error   detach;
   }

   net {
     sndbuf-size 512k;
     max-buffers     2048;
     unplug-watermark   128;
     cram-hmac-alg "xxx";
     shared-secret "xxx";
     after-sb-0pri disconnect;
     after-sb-1pri disconnect;
     after-sb-2pri disconnect;
     rr-conflict disconnect;
   }

}

resource drbd_sdb1 {
   protocol C;

   syncer {
     al-extents 1031;
   }

   on nfs2.imprev.net {
     device     /dev/drbd0;
     disk       /dev/sdb1;
     address    216.145.23.104:7788;
     flexible-meta-disk  /dev/sdb2;
   }

   on nfs1.imprev.net {
     device    /dev/drbd0;
     disk      /dev/sdb1;
     address   216.145.23.103:7788;
     flexible-meta-disk /dev/sdb2;
   }
}

resource drbd_sdb3 {
   protocol C;

   syncer {
     al-extents 1031;
   }

   on nfs2.imprev.net {
     device     /dev/drbd1;
     disk       /dev/sdb3;
     address    216.145.23.104:7789;
     flexible-meta-disk  /dev/sdb4;
   }

   on nfs1.imprev.net {
     device    /dev/drbd1;
     disk      /dev/sdb3;
     address   216.145.23.103:7789;
     flexible-meta-disk /dev/sdb4;
   }
}

resource drbd_sdb5 {
   protocol C;

   syncer {
     al-extents 1031;
   }

   on nfs2.imprev.net {
     device     /dev/drbd2;
     disk       /dev/sdb5;
     address    216.145.23.104:7790;
     flexible-meta-disk  /dev/sdb6;
   }

   on nfs1.imprev.net {
     device    /dev/drbd2;
     disk      /dev/sdb5;
     address   216.145.23.103:7790;
     flexible-meta-disk /dev/sdb6;
   }
}
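If it helps anyone reproduce what I'm seeing: a quick way to check whether each drbd device has come up smaller than what the PV metadata expects. `check_size` is just a hypothetical helper I sketched; on the live system the actual sector count would come from `blockdev --getsz` and the expected size from `pvs`:

```shell
# Hypothetical helper: compare a device's actual sector count against
# the size LVM expects (both in 512-byte sectors).
check_size() {
  dev=$1
  have=$2
  need=$3
  if [ "$have" -lt "$need" ]; then
    echo "$dev: short by $((need - have)) sectors"
  else
    echo "$dev: ok"
  fi
}

# On the live system the inputs would come from:
#   have=$(blockdev --getsz /dev/drbd2)
#   pvs --units s -o pv_name,dev_size,pv_size   # expected PV size
# Using the numbers from the kernel log above:
check_size /dev/drbd2 3421310910 3434534224
# → /dev/drbd2: short by 13223314 sectors
```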
