Hi, Roland

I'm sorry for the late reply, and thank you for your advice.
I didn't understand how DRBD starts.
As you mentioned, the res file wasn't being read.
I included the path of the res files in drbd.conf, and the two machines have
connected successfully (see the snippet below).
I still have a problem with this environment, but it is related to the AWS
environment itself, so I will ask AWS support about it.

Thank you for your help.
I appreciate it.
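
For reference, the relevant part of my drbd.conf now looks roughly like this
(a sketch from my setup; on my machines the drbdmanage-generated res files
live under /var/lib/drbd.d, but your paths may differ):

include "/etc/drbd.d/global_common.conf";
include "/etc/drbd.d/*.res";
include "/var/lib/drbd.d/*.res";


2017-07-25 15:50 GMT+09:00 Roland Kammerer <roland.kammerer@linbit.com>:

On Mon, Jul 24, 2017 at 05:25:40PM +0900, 大川敬臣 wrote: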
> Please let me ask you questions about DRBD.
> I'm testing DRBD 9.0.8 with RHEL 7.3 in an AWS environment.
>
> [Configurations]
> OS: RHEL 7.3
> Kernel version: 3.10.0-514.el7.x86_64
> DRBD: drbd-9.0.8-1
>       drbd-utils-9.0.0
>       drbdmanage-0.99.5
> Host names: drbd-01, drbd-02
> Disk for DRBD: /dev/xvdb
>
> - The DRBD volume is going to be used as the MySQL data volume.
> - The AWS env is only a test environment; the PROD env will be VMs on ESXi.
> [Installation steps]
> - Installing required packages
> # yum -y install kernel-devel.x86_64
> # yum -y install gcc.x86_64
> # yum -y install flex.x86_64
>
> - Installing DRBD
> # cd /usr/local/src/
> # tar zxf drbd-9.0.8-1.tar.gz
> # cd drbd-9.0.8-1
> # make KDIR=/usr/src/kernels/$(uname -r)
> # make install
> # modprobe drbd
> # cat /proc/drbd
>
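
Side note: if "cat /proc/drbd" does not report the version you just built
(9.0.8 here), an older drbd module is probably still loaded. A quick check,
assuming no DRBD resource is in use yet:

# modprobe -r drbd && modprobe drbd
# modinfo -F version drbd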

> - Installing drbd-utils
> # cd /usr/local/src/
> # tar zxf drbd-utils-9.0.0.tar.gz
> # cd drbd-utils-9.0.0
> # ./configure
> # make
> # make install
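
Note: a plain ./configure installs the utilities under /usr/local, so they
look for drbd.conf (and the res files it includes) under /usr/local/etc
rather than /etc. If the res files seem to be ignored later, building with
explicit paths may help; a sketch, mirroring what the packaged builds use:

# ./configure --prefix=/usr --localstatedir=/var --sysconfdir=/etc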

>
> - Installing DRBD Manage
> # cd /usr/local/src/
> # tar zxf drbdmanage-0.99.5.tar.gz
> # cd drbdmanage-0.99.5
> # ./setup.py build
> # ./setup.py install
>
> [Initialize DRBD]
> - Creating the pool for DRBD
> # vgcreate drbdpool /dev/xvdb1
> Physical volume "/dev/xvdb1" successfully created.
> Volume group "drbdpool" successfully created
> #
>
> - Execute on drbd-01
> # drbdmanage init 172.31.1.155
>
> You are going to initialize a new drbdmanage cluster.
> CAUTION! Note that:
> * Any previous drbdmanage cluster information may be removed
> * Any remaining resources managed by a previous drbdmanage installation
> that still exist on this system will no longer be managed by drbdmanage
>
> Confirm:
>
> yes/no: yes
> Empty drbdmanage control volume initialized on '/dev/drbd0'.
> Empty drbdmanage control volume initialized on '/dev/drbd1'.
> Waiting for server: .
> Operation completed successfully
> #
> # drbdadm status
> .drbdctrl role:Primary
>   volume:0 disk:UpToDate
>   volume:1 disk:UpToDate
>
> #
> #
> # lvdisplay -c
> /dev/drbdpool/.drbdctrl_0:drbdpool:3:1:-1:2:8192:1:-1:0:-1:253:0
> /dev/drbdpool/.drbdctrl_1:drbdpool:3:1:-1:2:8192:1:-1:0:-1:253:1
> #
> # drbdmanage list-nodes
> +---------------------------------------------------------------+
> | Name    | Pool Size | Pool Free | State                       |
> |---------------------------------------------------------------|
> | drbd-01 |     10236 |     10228 | ok                          |
> +---------------------------------------------------------------+
> #
> # drbdmanage new-node drbd-02 172.31.8.103
> Operation completed successfully
> Operation completed successfully
>

Please read the following lines again:

> Executing join command using ssh.
> IMPORTANT: The output you see comes from drbd-02
> IMPORTANT: Your input is executed on drbd-02

Reread the lines above^^ ;-)

> You are going to join an existing drbdmanage cluster.
> CAUTION! Note that:
> * Any previous drbdmanage cluster information may be removed
> * Any remaining resources managed by a previous drbdmanage installation
> that still exist on this system will no longer be managed by drbdmanage
>
> Confirm:
>
> yes/no: yes
> Waiting for server to start up (can take up to 1 min)
> Waiting for server: ......
> Operation completed successfully
> Give leader time to contact the new node
> Operation completed successfully
> Operation completed successfully

I guess at that point your second node was successfully joined.

> #
> #
> # drbdmanage howto-join drbd-02
> IMPORTANT: Execute the following command only on node drbd-02!
> drbdmanage join -p 6999 172.31.8.103 1 drbd-01 172.31.1.155 0 aez1qL969FHRDYJH4qYD
> Operation completed successfully
> #
> #
>
> - Execute on drbd-02
> # drbdmanage join -p 6999 172.31.8.103 1 drbd-01 172.31.1.155 0 aez1qL969FHRDYJH4qYD
> You are going to join an existing drbdmanage cluster.
> CAUTION! Note that:
> * Any previous drbdmanage cluster information may be removed
> * Any remaining resources managed by a previous drbdmanage installation
> that still exist on this system will no longer be managed by drbdmanage
>
> Confirm:
>
> yes/no: yes
> Waiting for server to start up (can take up to 1 min)
> Operation completed successfully
> #
> #

Why did you rejoin it? It was already joined.

>
> - Execute on drbd-01
> # drbdmanage list-nodes
> +---------------------------------------------------------------+
> | Name    | Pool Size | Pool Free | State                       |
> |---------------------------------------------------------------|
> | drbd-01 |     10236 |     10228 | online/quorum vote ignored  |
> | drbd-02 |     10236 |     10228 | offline/quorum vote ignored |
> +---------------------------------------------------------------+
> [root@drbd-01 drbd.d]#
>
> - Execute on drbd-02
> # drbdmanage list-nodes
> +---------------------------------------------------------------+
> | Name    | Pool Size | Pool Free | State                       |
> |---------------------------------------------------------------|
> | drbd-01 |     10236 |     10228 | offline/quorum vote ignored |
> | drbd-02 |     10236 |     10228 | online/quorum vote ignored  |
> +---------------------------------------------------------------+
> #
> #
>
> When I checked syslog on drbd-01, the messages below were written.
> ----------
> Jul 24 03:27:58 drbd-01 dbus-daemon: .drbdctrl role:Primary
> Jul 24 03:27:58 drbd-01 dbus-daemon: volume:0 disk:UpToDate
> Jul 24 03:27:58 drbd-01 dbus-daemon: volume:1 disk:UpToDate
> Jul 24 03:29:55 drbd-01 drbdmanaged[2221]: INFO DrbdAdm: Running external command: drbdadm -vvv adjust .drbdctrl
> Jul 24 03:29:55 drbd-01 drbdmanaged[2221]: ERROR DrbdAdm: External command 'drbdadm': Exit code 1
> Jul 24 03:29:55 drbd-01 dbus-daemon: .drbdctrl role:Primary
> ----------
>
> on drbd-02
> ----------
> Jul 24 03:29:59 drbd-02 drbdmanaged[2184]: INFO DrbdAdm: Running external command: drbdadm -vvv adjust .drbdctrl
> Jul 24 03:29:59 drbd-02 drbdmanaged[2184]: ERROR DrbdAdm: External command 'drbdadm': Exit code 1
> Jul 24 03:29:59 drbd-02 drbdmanaged[2184]: INFO DRBDManage starting as potential leader node
> Jul 24 03:29:59 drbd-02 dbus-daemon: .drbdctrl role:Secondary
> Jul 24 03:29:59 drbd-02 dbus-daemon: volume:0 disk:Inconsistent
> Jul 24 03:29:59 drbd-02 dbus-daemon: volume:1 disk:Inconsistent
> Jul 24 03:29:59 drbd-02 dbus-daemon: drbd-01 connection:Connecting

Looks like the two machines never connected successfully on the DRBD level.
I saw that recently on AWS, and it is usually a problem with the network.
Are all the ports that you need open? And did you use the node name shown by
"uname -n" for the join?
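
Both nodes must be able to reach each other on the ports DRBD uses; the
control volume listens on TCP port 6999 (the "-p 6999" from the join command
above), and each resource you create later gets its own port from
drbdmanage's port range. A rough way to check, assuming nc is installed and
using the addresses from above:

# on drbd-01: is something listening on the control volume port?
ss -tln | grep 6999
# on drbd-02: can we reach drbd-01 on that port?
nc -zv 172.31.1.155 6999

On AWS that usually also means the security groups of both instances have to
allow this traffic between the two nodes.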

Review the res file of the control volume and try to connect the .drbdctrl
on the DRBD level. As long as the nodes cannot connect on the DRBD level,
the next higher level - drbdmanage - will fail.
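
A minimal sketch of what I mean, assuming the drbdmanage-generated res file
for the control volume is in place (typically under /var/lib/drbd.d):

# on both nodes:
drbdadm dump .drbdctrl     # does drbdadm find and parse the res file at all?
drbdadm adjust .drbdctrl   # reapply the config and retry the connection
drbdadm status .drbdctrl   # the connection should leave "Connecting"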

Regards, rck

_______________________________________________
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user