Hi,

I have a very strange error I have never seen before.
The nodes disagree about what status they are in:
on xen-1:
2: cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r----
ns:252 nr:0 dw:252 dr:22041307 al:14 bm:0 lo:0 pe:10 ua:0 ap:0 ep:1 wo:b oos:0
on xen-2:
2: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r----
ns:12268 nr:235396 dw:247660 dr:44040844 al:30 bm:28 lo:0 pe:0 ua:1430 ap:0 ep:1431 wo:f oos:0
These modules are installed:
drbd-km-2.6.18_164.15.1.el5xen-8.3.7-12
drbd-udev-8.3.7-1
drbd-xen-8.3.7-1
drbd-pacemaker-8.3.7-1
drbd-utils-8.3.7-1
resource vtiger {
  net {
    allow-two-primaries;
  }
  on xen-1 {
    device             /dev/drbd2;
    disk               /dev/vg1/vtiger;
    address            10.0.0.1:7787;
    flexible-meta-disk internal;
  }
  on xen-2 {
    device             /dev/drbd2;
    disk               /dev/vg1/vtiger;
    address            10.0.0.2:7787;
    flexible-meta-disk internal;
  }
}
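As an aside, in the ro: field of the /proc/drbd lines above the pair is local/peer. A minimal sketch (POSIX sh; the helper name is made up) that pulls that field out of a status line:

```shell
#!/bin/sh
# Extract the ro: (roles) field from a /proc/drbd status line, e.g.
#   "2: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r----"
# yields "Secondary/Primary" -- the local role first, then the peer's view.
drbd_roles() {
    printf '%s\n' "$1" | sed -n 's/.*ro:\([^ ]*\).*/\1/p'
}

drbd_roles "2: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r----"
```

`drbdadm role vtiger` reports the same local/peer pair directly, without parsing /proc/drbd.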
vtiger is a logical volume used as the disk of a Xen guest:
disk = [ "drbd:vtiger,xvda,w" ]
This happens whenever I bring the VM down (I marked it unmanaged in Pacemaker first, but I think it still calls
On the node the VM was running on, DRBD goes to Secondary, but the other node still thinks the peer is Primary.
When I then issue this command, I get:
# drbdadm secondary vtiger
No response from the DRBD driver! Is the module loaded?
Only a reboot seems to help after that :(
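For completeness, this is the sequence I would normally expect to reset a single resource without a full reboot (drbd 8.3 commands; it does not help here because the driver itself stops responding). Left as a dry run that only prints the commands:

```shell
#!/bin/sh
# Usual way to bounce one DRBD resource without rebooting the node.
# RUN=echo makes this a dry run; set RUN="" to actually execute.
RUN=echo
RES=vtiger
$RUN drbdadm down "$RES"   # disconnect and detach the resource
$RUN modprobe -r drbd      # unload the kernel module (all resources must be down)
$RUN modprobe drbd         # reload it
$RUN drbdadm up "$RES"     # attach and reconnect
```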
Sincerely yours,
Vadym Chepkov