[DRBD-user] drbd status not in sync on the cluster nodes

Vadym Chepkov chepkov at yahoo.com
Tue Apr 13 01:56:17 CEST 2010


I have a very strange error I have never seen before.
The nodes disagree about which state they are in:

on xen-1:

 2: cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r----
    ns:252 nr:0 dw:252 dr:22041307 al:14 bm:0 lo:0 pe:10 ua:0 ap:0 ep:1 wo:b oos:0

on xen-2:

 2: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r----
    ns:12268 nr:235396 dw:247660 dr:44040844 al:30 bm:28 lo:0 pe:0 ua:1430 ap:0 ep:1431 wo:f oos:0
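(For reference, the interesting fields of such a /proc/drbd line can be pulled out with standard shell tools. A minimal sketch using the xen-1 line above; the variable names are mine:)

```shell
# Extract connection state (cs:), roles (ro:) and disk states (ds:)
# from a /proc/drbd status line; the sample is the xen-1 line above.
line=' 2: cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r----'
cs=$(printf '%s\n' "$line" | grep -o 'cs:[^ ]*' | cut -d: -f2)
ro=$(printf '%s\n' "$line" | grep -o 'ro:[^ ]*' | cut -d: -f2)
ds=$(printf '%s\n' "$line" | grep -o 'ds:[^ ]*' | cut -d: -f2)
echo "state=$cs roles=$ro disks=$ds"
```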

This is the resource configuration:

resource vtiger {
  net {
  }
  on xen-1 {
    device    /dev/drbd2;
    disk      /dev/vg1/vtiger;
    flexible-meta-disk internal;
  }
  on xen-2 {
    device    /dev/drbd2;
    disk      /dev/vg1/vtiger;
    flexible-meta-disk internal;
  }
}

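(One thing worth checking when two nodes disagree is that they are actually running identical configurations; drbdadm can print its parsed view of a resource. A sketch, to be run on each node and diffed, assuming a live DRBD stack:)

```shell
# Print drbdadm's parsed view of the resource; the output from the two
# nodes should be identical apart from the hostnames.
drbdadm dump vtiger
# The kernel's own view of the state machine is in /proc/drbd:
cat /proc/drbd
```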
vtiger is a logical volume backing a Xen guest:

disk = [ "drbd:vtiger,xvda,w" ]

This happens whenever I bring the VM down (I marked it unmanaged in Pacemaker first, but I think it still calls
On the node it was running on, it goes to Secondary; the second node still thinks it is Primary.

When I issue the command, I get:

# drbdadm secondary vtiger
No response from the DRBD driver! Is the module loaded?

Only a reboot seems to help after that :(
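(Before falling back to a reboot, it might be worth confirming whether the kernel module is still there and whether the driver itself still answers; a hypothetical sequence, not from the original post:)

```shell
# Confirm the drbd kernel module is still loaded and the driver answers.
lsmod | grep drbd
cat /proc/drbd
# If userland is merely out of sync with the module, tearing the
# resource down and bringing it back up may avoid a full reboot:
drbdadm down vtiger
drbdadm up vtiger
```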

Sincerely yours,
  Vadym Chepkov
