[DRBD-user] DRBD 9: Bug or feature

cgasmith at comcast.net
Wed Sep 16 20:08:44 CEST 2015

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


While testing DRBD 9.0 in a three-node mesh, executing drbdadm down r0 on the primary node does not promote the other nodes to primary (no big deal, I'll do that with Pacemaker later), but drbdadm reports that it cannot detach the block device since it was primary.

[root at mom3 chucks]# drbdadm status r0
This command will ignore resource names!
r0 role:Primary
  disk:UpToDate
  mom1 role:Secondary
    peer-disk:UpToDate
  mom2 role:Secondary
    peer-disk:UpToDate

[root at mom3 chucks]# drbdadm down r0 
r0: State change failed: (-2) Need access to UpToDate data 
additional info from kernel: 
failed to detach 
Command 'drbdsetup down r0' terminated with exit code 17 

Even though, seen from another node, the resource IS down:
[root at mom1 chucks]# drbdadm status
r0 role:Secondary
  disk:UpToDate
  mom2 role:Secondary
    peer-disk:UpToDate
  mom3 connection:Connecting

And the node seems unable to restore the fact that it was Primary: I have to promote another node to Primary, and the old primary ends up demoted to Secondary.


So if not a "bug" / "feature" what would be the appropriate sequence, 
1) demote current primary to secondary 
2) promote new node to primary 
3) bring down old primary 
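
In command form that would roughly be the following (just a sketch with my node names; mom3 is the current primary here and mom1 the node taking over):

  # on mom3 (current primary): demote it first
  drbdadm secondary r0

  # on mom1 (the node taking over): promote it
  drbdadm primary r0

  # back on mom3 (now Secondary): take the resource down
  drbdadm down r0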

I see some trouble brewing with this as I progress to Pacemaker and begin hard failover testing (i.e. power / communication loss on 1 or 2 nodes at a time).
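
For the record, the Pacemaker side I have in mind is roughly the following (only a sketch, using pcs and the ocf:linbit:drbd resource agent; the names drbd_r0 / drbd_r0_master are placeholders of mine):

  pcs resource create drbd_r0 ocf:linbit:drbd drbd_resource=r0 \
      op monitor interval=29s role=Master \
      op monitor interval=31s role=Slave
  pcs resource master drbd_r0_master drbd_r0 \
      master-max=1 master-node-max=1 clone-max=3 clone-node-max=1 notify=true

The two monitor intervals just need to differ so Pacemaker can monitor both roles; clone-max=3 matches the three-node mesh.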
------------------------- 
particulars............... 
--------------------------- 
drbd 
version: 9.0.0 (api:2/proto:86-110) 
GIT-hash: 360c65a035fc2dec2b93e839b5c7fae1201fa7d9 
drbd-utils 
Version: 8.9.3 (api:2) 
GIT-hash: c11ba026bbbbc647b8112543df142f2185cb4b4b 

[root at mom1 chucks]# drbdadm dump 
# /etc/drbd.conf 
global {
    usage-count yes;
}

common {
    net {
        protocol C;
    }
}

# resource r0 on mom1: not ignored, not stacked
# defined at /etc/drbd.d/r0.res:1
resource r0 {
    on mom1 {
        node-id   0;
        device    /dev/drbd1 minor 1;
        disk      /dev/vg_submother1/local_storage;
        meta-disk internal;
        address   ipv4 192.168.110.10:7789;
    }
    on mom2 {
        node-id   1;
        device    /dev/drbd1 minor 1;
        disk      /dev/vg_supermother2/local_storage;
        meta-disk internal;
        address   ipv4 192.168.110.20:7789;
    }
    on mom3 {
        node-id   2;
        device    /dev/drbd1 minor 1;
        disk      /dev/vg_mom3/local_storage;
        meta-disk internal;
        address   ipv4 192.168.110.30:7789;
    }
    connection-mesh {
        hosts mom1 mom2 mom3;
        net {
            use-rle no;
        }
    }
}


