Note: "permalinks" may not be as permanent as we would like;
direct links to old sources may well be a few messages off.
On Sun, Jul 17, 2016 at 3:55 PM, Roland Kammerer <roland.kammerer at linbit.com> wrote:
> On Sun, Jul 17, 2016 at 07:28:16AM -0500, T.J. Yang wrote:
> > Hi
> >
> > I am learning DRBD9 using CentOS 7.2 (see R1 gdoc).
> >
> > All the latest rpms have been created and deployed on test nodes A, B, C.
> >
> > Currently I am stuck: I am not able to add the second node, failing with
> > the following error. Please provide a pointer to where I might have gone
> > wrong.
> >
> > [root at centos7A ~]# drbdmanage add-node centos7B 192.168.42.130
> > Operation completed successfully
> > Operation completed successfully
> > Executing join command using ssh.
> > IMPORTANT: The output you see comes from centos7B
> > IMPORTANT: Your input is executed on centos7B
> > You are going to join an existing drbdmanage cluster.
> > CAUTION! Note that:
> >   * Any previous drbdmanage cluster information may be removed
> >   * Any remaining resources managed by a previous drbdmanage
> >     installation that still exist on this system will no longer be
> >     managed by drbdmanage
> > Confirm:
> >   yes/no: yes
> > ERROR:dbus.proxies:Introspect error on :1.39:/interface:
> > dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NoReply: Did not
> > receive a reply. Possible causes include: the remote application did not
> > send a reply, the message bus security policy blocked the reply, the reply
> > timeout expired, or the network connection was broken.
> > Error: Cannot connect to the drbdmanaged process using DBus
> > The DBus subsystem returned the following error description:
> > org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible
> > causes include: the remote application did not send a reply, the message
> > bus security policy blocked the reply, the reply timeout expired, or the
> > network connection was broken.
> > Error: Attempt to execute the join command remotely failed
> > Join command for node centos7B:
> >   drbdmanage join -p 6999 192.168.42.130 1 centos7A 192.168.42.129 0
> >   kOSMSN72ywYj+wGBogHG
>
> Just in case: You can get the "join" command for your node centos7B when
> you execute "drbdmanage howto-join centos7B".
>
> I would do the following:
> - Check if the control-volume on centos7A is up ("drbdsetup status
>   .drbdctrl")

I accidentally resolved the above issue by rebooting CentOS7A and B (not
sure if I did anything more than reboot). Now I have the control volume
up on both A and B:

[root at centos7A drbd]# drbdsetup status .drbdctrl
.drbdctrl role:Secondary
  volume:0 disk:UpToDate
  volume:1 disk:UpToDate
  centos7B role:Secondary
    volume:0 peer-disk:UpToDate
    volume:1 peer-disk:UpToDate
[root at centos7A drbd]# date
Sun Jul 17 16:19:21 CDT 2016
[root at centos7A drbd]#

> - Make sure the drbdmanage process is stopped on centos7B ("drbdmanage
>   shutdown -q"). Make sure it is stopped (ps aux && kill if necessary).
> - Enter the "join" command you got on centos7A for the node centos7B on
>   centos7B

I went on to redo the whole process on another set of two CentOS7 VMs,
using example rpms created from the git source, and ended up with the
exact same DBus error message again. I did try to stop the drbdmanage
process on node 2 (va03t) before joining it.

1.
Make sure the control volume is up on drbd03A:

[root at drbd03A ~]# sudo drbdsetup status .drbdctrl
.drbdctrl role:Secondary
  volume:0 disk:UpToDate
  volume:1 disk:UpToDate
[root at drbd03A ~]# drbdmanage n
+------------------------------------------------------------------+
| Name    | Pool Size | Pool Free |                      |   State |
|------------------------------------------------------------------|
| drbd03A |     16380 |     16372 |                      |      ok |
+------------------------------------------------------------------+
[root at drbd03A ~]#

2.
Make sure the drbdmanage process is not running on drbd03B:

[root at drbd03B ~]# ps -eaf |egrep 'drbd'|egrep -v egrep
root      3539     2  0 17:39 ?        00:00:00 [drbd-reissue]
root      3984  2283  0 17:42 pts/2    00:00:00 grep -E --color=auto drbd
[root at drbd03B ~]#

3.
Add the second node:

[root at drbd03A ~]# drbdmanage add-node drbd03B 10.65.184.3
Operation completed successfully
Operation completed successfully
Executing join command using ssh.
IMPORTANT: The output you see comes from drbd03B
IMPORTANT: Your input is executed on drbd03B
You are going to join an existing drbdmanage cluster.
CAUTION! Note that:
  * Any previous drbdmanage cluster information may be removed
  * Any remaining resources managed by a previous drbdmanage installation
    that still exist on this system will no longer be managed by drbdmanage
Confirm:
  yes/no: yes
ERROR:dbus.proxies:Introspect error on :1.27:/interface:
dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NoReply: Did not
receive a reply. Possible causes include: the remote application did not
send a reply, the message bus security policy blocked the reply, the reply
timeout expired, or the network connection was broken.
Error: Cannot connect to the drbdmanaged process using DBus
The DBus subsystem returned the following error description:
org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible
causes include: the remote application did not send a reply, the message
bus security policy blocked the reply, the reply timeout expired, or the
network connection was broken.
Error: Attempt to execute the join command remotely failed
Join command for node drbd03B:
  drbdmanage join -p 6999 10.65.184.3 1 drbd03A 192.168.42.129 0
  JqgmtgCBj2knivkRIIub
[root at drbd03A ~]#

3.1
On drbd03B, while the join command was being run over ssh from drbd03A:

[root at drbd03B ~]# ps -eaf |egrep 'drbd'|egrep -v egrep
root      3539     2  0 17:39 ?        00:00:00 [drbd-reissue]
root      3984  2283  0 17:42 pts/2    00:00:00 grep -E --color=auto drbd
[root at drbd03B ~]# ps -eaf |egrep 'drbd'|egrep -v egrep
root      3539     2  0 17:39 ?        00:00:00 [drbd-reissue]
root      4148  4144  0 17:44 ?        00:00:00 /usr/bin/python /usr/bin/drbdmanage join -p 6999 10.65.184.3 1 drbd03A 192.168.42.129 0 JqgmtgCBj2knivkRIIub
root      4199     2  0 17:44 ?        00:00:00 [drbd_w_.drbdctr]
root      4201     2  0 17:44 ?        00:00:00 [drbd0_submit]
root      4205     2  0 17:44 ?        00:00:00 [drbd1_submit]
root      4221     2  0 17:44 ?        00:00:00 [drbd_s_.drbdctr]
root      4224     2  0 17:44 ?        00:00:00 [drbd_r_.drbdctr]
root      4226     1  0 17:44 ?        00:00:00 /usr/bin/python /usr/bin/dbus-drbdmanaged-service
root      4269  4226  0 17:44 ?        00:00:00 drbdsetup events2 all
root      4407  2283  0 17:45 pts/2    00:00:00 grep -E --color=auto drbd
[root at drbd03B ~]#

So, in short, rebooting both nodes helped resolve the issue. drbd03A
sent the IP address of centos7A (192.168.42.129); this IP somehow got
carried over from my first pair of CentOS7 VMs. After drbd03A was
rebooted, I was able to join the second node:

[root at drbd03A ~]# drbdmanage n
+------------------------------------------------------------------+
| Name    | Pool Size | Pool Free |                      |   State |
|------------------------------------------------------------------|
| drbd03A |     16380 |     16372 |                      |      ok |
| drbd03B |     16380 |     16372 |                      |      ok |
+------------------------------------------------------------------+
[root at drbd03A ~]#

Thanks Roland for the debug procedure on the 2nd node.

> Regards, rck
>
> _______________________________________________
> drbd-user mailing list
> drbd-user at lists.linbit.com
> http://lists.linbit.com/mailman/listinfo/drbd-user

-- 
T.J. Yang
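[Editor's note] Step 1 of the debug procedure (verify the control volume is
healthy) can be scripted. This is a minimal sketch that only does text
matching on `drbdsetup status .drbdctrl` output; the here-string below is
the sample output captured on drbd03A in this thread, and on a live node
you would pipe in the real command output instead.

```shell
#!/bin/sh
# Count how many local volumes of the .drbdctrl resource are UpToDate.
# The leading space in the pattern avoids matching "peer-disk:UpToDate"
# lines, which describe the remote node rather than the local disks.
# Sample output from drbd03A (replace with: drbdsetup status .drbdctrl):
status='.drbdctrl role:Secondary
  volume:0 disk:UpToDate
  volume:1 disk:UpToDate'

uptodate=$(printf '%s\n' "$status" | grep -c ' disk:UpToDate')
if [ "$uptodate" -eq 2 ]; then
    echo "control volume looks healthy on this node"
else
    echo "control volume not fully UpToDate ($uptodate of 2 volumes)"
fi
```

With the sample output above this prints "control volume looks healthy on
this node", since both volume 0 and volume 1 report disk:UpToDate.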
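[Editor's note] The root cause found in this thread (a stale peer address,
192.168.42.129, embedded in the generated join command) could be caught
before the join is executed. A rough sketch, assuming the positional
argument layout seen in the join commands quoted above; `expected_peer_ip`
is a hypothetical placeholder for the address the first node should
actually be advertising.

```shell
#!/bin/sh
# Sanity-check the peer IP inside a generated join command before running
# it. Argument positions follow the commands shown in this thread:
#   drbdmanage join -p PORT LOCAL_IP NODE_ID PEER_NAME PEER_IP PEER_ID SECRET
# so the peer IP is field 8 when split on whitespace.
join_cmd="drbdmanage join -p 6999 10.65.184.3 1 drbd03A 192.168.42.129 0 JqgmtgCBj2knivkRIIub"
expected_peer_ip="10.65.184.1"   # hypothetical: address drbd03A is reachable at

peer_ip=$(printf '%s\n' "$join_cmd" | awk '{print $8}')
if [ "$peer_ip" != "$expected_peer_ip" ]; then
    echo "WARNING: join command carries stale peer IP $peer_ip (expected $expected_peer_ip)"
fi
```

Run against the join command from this thread, the check flags
192.168.42.129 as stale, which is exactly the leftover address from the
first pair of VMs.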