[DRBD-user] DRBD Unconfigured state after service switch

Simone Del Pinto delpintosimone at gmail.com
Fri Feb 22 11:26:07 CET 2013



Hi guys,

we are using DRBD 8.3.13 on our two Linux servers to keep the data of our
MySQL server replicated.

Corosync and Pacemaker ensure that we have a virtual IP running and that
our DB is always up on one of those nodes.
During a failover test we noticed that when we put the master node into
standby ( crm node standby ), all services switch to the other node, but
DRBD on the old master goes into the "Unconfigured" state.
Below is the situation before the standby:

crm_mon -V1
============
Last updated: Fri Feb 22 11:20:41 2013
Last change: Thu Feb 21 17:44:58 2013 via crm_attribute on FRCVD2047
Stack: openais
Current DC: FRCVD2046 - partition with quorum
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
2 Nodes configured, 2 expected votes
5 Resources configured.
============

Online: [ FRCVD2046 FRCVD2047 ]

 p_IPaddr2_zenoss_cluster_ip    (ocf::heartbeat:IPaddr2):       Started FRCVD2046
 p_zends_fs     (ocf::heartbeat:Filesystem):    Started FRCVD2046
 p_zends_service        (lsb:zends):    Started FRCVD2046
 Master/Slave Set: ms_zends_drbd [p_zends_drbd]
     Masters: [ FRCVD2046 ]
     Slaves: [ FRCVD2047 ]

Now the situation after "crm node standby":

============
Last updated: Fri Feb 22 11:20:59 2013
Last change: Fri Feb 22 11:20:53 2013 via crm_attribute on FRCVD2046
Stack: openais
Current DC: FRCVD2046 - partition with quorum
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
2 Nodes configured, 2 expected votes
5 Resources configured.
============

Node FRCVD2046: standby
Online: [ FRCVD2047 ]

 p_IPaddr2_zenoss_cluster_ip    (ocf::heartbeat:IPaddr2):       Started FRCVD2047
 p_zends_fs     (ocf::heartbeat:Filesystem):    Started FRCVD2047
 p_zends_service        (lsb:zends):    Started FRCVD2047
 Master/Slave Set: ms_zends_drbd [p_zends_drbd]
     Masters: [ FRCVD2047 ]
     Stopped: [ p_zends_drbd:1 ]

And now the DRBD situation on the "old" master node (FRCVD2046):

service drbd status
drbd driver loaded OK; device status:
version: 8.3.13 (api:88/proto:86-96)
GIT-hash: 83ca112086600faacab2f157bc5a9324f7bd7f77 build by dag@Build64R6, 2012-09-04 12:06:10
m:res        cs            ro  ds  p  mounted  fstype
0:zendsdata  Unconfigured

Now, if I run a DRBD service reload, I have this scenario:

service drbd reload
Reloading DRBD configuration: .
[root@FRCVD2046 davide.lonero]# service drbd status
drbd driver loaded OK; device status:
version: 8.3.13 (api:88/proto:86-96)
GIT-hash: 83ca112086600faacab2f157bc5a9324f7bd7f77 build by dag@Build64R6, 2012-09-04 12:06:10
m:res        cs         ro                 ds                 p  mounted  fstype
0:zendsdata  Connected  Secondary/Primary  UpToDate/UpToDate  C

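If it matters, my understanding is that the init script's reload roughly
amounts to re-reading /etc/drbd.conf and applying it to the running module
by hand; something along these lines (resource name taken from the status
output above, and I have not verified this is exactly what the script does):

```shell
# Re-read the on-disk configuration and apply any differences to the
# running resource; this should take it out of "Unconfigured":
drbdadm adjust zendsdata

# If the resource were completely down, "up" would attach the backing
# disk and start the connection in one step:
drbdadm up zendsdata
```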
My questions are:

- Why does this happen?
- Is there a way to work around this issue?
- Is there a way to "force" the cluster and/or DRBD to reload the service
after a stop procedure?
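For the last question, what I am hoping for is something along the lines of
the following sketch, with the resource and node names from the outputs
above; I have not confirmed this is the right approach:

```shell
# Clear Pacemaker's recorded state for the master/slave set, so it
# re-probes the DRBD instance instead of leaving it "Stopped":
crm resource cleanup ms_zends_drbd

# Bring the old master back online so the slave instance can start there:
crm node online FRCVD2046
```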

Thanks in advance to anyone who can save me...

Simone