<html>
<head>
<meta content="text/html; charset=windows-1252"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
Just for fun, I did the following:<br>
root@san3:/etc/drbd.d# drbd-overview <br>
0:.drbdctrl/0 Connected(2*) Secondary(2*) UpToDa/UpToDa <br>
1:.drbdctrl/1 Connected(2*) Secondary(2*) UpToDa/UpToDa <br>
root@san3:/etc/drbd.d# drbdmanage add-volume test3 5GB --deploy 2<br>
Operation completed successfully<br>
Operation completed successfully<br>
root@san3:/etc/drbd.d# drbd-overview <br>
0:.drbdctrl/0 Connected(2*) Secondary(2*) UpToDa/UpToDa <br>
1:.drbdctrl/1 Connected(2*) Secondary(2*) UpToDa/UpToDa <br>
101:oldNFS/0 Connected(2*) Secondary(2*) Incons/UpToDa <br>
102:test3/0 Connected(2*) Secondary(2*) UpToDa/Incons <br>
<br>
That is, I added a volume on the "other" node, and it magically found both
new volumes and started syncing them both.<br>
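For anyone following along, here is a rough sketch (using the drbd-overview output above as embedded sample data rather than live commands) of how one could flag resources that are not yet UpToDate on both sides:<br>

```shell
# Sample drbd-overview output copied from the transcript above; on a real
# node you would pipe `drbd-overview` into the filter instead.
sample='0:.drbdctrl/0 Connected(2*) Secondary(2*) UpToDa/UpToDa
1:.drbdctrl/1 Connected(2*) Secondary(2*) UpToDa/UpToDa
101:oldNFS/0 Connected(2*) Secondary(2*) Incons/UpToDa
102:test3/0 Connected(2*) Secondary(2*) UpToDa/Incons'

# Column 4 is the local/peer disk state; print resources still syncing.
printf '%s\n' "$sample" | awk '$4 != "UpToDa/UpToDa" { print $1 }'
# prints:
#   101:oldNFS/0
#   102:test3/0
```
<br>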
<br>
So, I'm not sure why the issue happened in the first place, or why
adding a second volume from the second server fixed it. Any advice
would be appreciated,<br>
<br>
Regards,<br>
Adam<br>
<br>
<div class="moz-cite-prefix">On 7/04/2016 17:52, Adam Goryachev
wrote:<br>
</div>
<blockquote cite="mid:570611AB.3020601@websitemanagers.com.au"
type="cite">
<meta http-equiv="content-type" content="text/html;
charset=windows-1252">
Hi,<br>
<br>
I'm trying to build a new cluster of servers using debian plus
DRBD9 and drbdmanage.<br>
<br>
After a number of attempts, I thought I had everything right, and
it's all been "ok" for a couple of weeks.<br>
<br>
Today, I rebooted both machines (new power being installed for the
UPS), and then I tried to create a new volume of 700GB.<br>
<br>
Here is what I did:<br>
<tt>san2:~# drbdmanage add-volume oldNFS 700GB --deploy 2<br>
Operation completed successfully<br>
Operation completed successfully<br>
san2:~# drbdmanage list-nodes<br>
+------------------------------------------------------------------------------------------------------------+<br>
| Name | Pool Size | Pool Free | | State |<br>
|------------------------------------------------------------------------------------------------------------|<br>
| san2.websitemanagers.com.au | 3777040 | 3109316 | | ok |<br>
| san3 | 1830932 | 1830924 | | ok |<br>
+------------------------------------------------------------------------------------------------------------+<br>
san2:~# drbdmanage list-volumes --show Port<br>
+------------------------------------------------------------------------------------------------------------+<br>
| Name | Vol ID | Size | Minor | Port | | State |<br>
|------------------------------------------------------------------------------------------------------------|<br>
| oldNFS | 0 | 667572 | 101 | 7001 | | ok |<br>
| test1 | 0 | 9536 | 100 | 7000 | | ok |<br>
+------------------------------------------------------------------------------------------------------------+<br>
san2:~# lvs<br>
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert<br>
.drbdctrl_0 drbdpool -wi-ao---- 4.00m<br>
.drbdctrl_1 drbdpool -wi-ao---- 4.00m<br>
oldNFS_00 drbdpool -wi-ao---- 652.07g<br>
san2:~# dpkg -l | grep drbd<br>
ii drbd-utils 8.9.6-1 amd64 RAID 1 over TCP/IP for Linux (user utilities)<br>
ii python-drbdmanage 0.94-1 all DRBD distributed resource management utility<br>
san2:~# cat /proc/drbd<br>
version: 9.0.1-1 (api:2/proto:86-111)<br>
GIT-hash: f57acfc22d29a95697e683fb6bbacd9a1ad4348e build by root@san2.websitemanagers.com.au, 2016-03-01 00:38:53<br>
Transports (api:14): tcp (1.0.0)<br>
<br>
</tt><b>So far, everything looks good, so I checked the other node to see
what was happening there....</b><tt><br>
<br>
root@san3:~# lvs<br>
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert<br>
.drbdctrl_0 drbdpool -wi-ao---- 4.00m<br>
.drbdctrl_1 drbdpool -wi-ao---- 4.00m<br>
backup_system_20141006_193935 san1small -wi-a----- 8.00g<br>
swap san1small -wi-ao---- 3.72g<br>
system san1small -wi-ao---- 13.97g<br>
<br>
</tt><b>Hmmm, that's strange, we don't have any new LV here?</b><tt><br>
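</tt>Judging purely from the san2 output above (where volume 0 of oldNFS is backed by the LV drbdpool/oldNFS_00), drbdmanage's LVM backend appears to name each backing LV after the resource plus a zero-padded volume ID. A small sketch of what one would therefore expect to find on san3 if the resource had been deployed there (the naming rule is my inference from the transcript, not something stated in the docs):<br>

```shell
# Derive the backing-LV name we'd expect for volume 0 of oldNFS,
# assuming the "<resource>_<two-digit volume ID>" pattern seen on san2.
resource=oldNFS
vol_id=0
expected_lv=$(printf '%s_%02d' "$resource" "$vol_id")
echo "drbdpool/$expected_lv"
# prints: drbdpool/oldNFS_00
# On the node itself one would then check for it with something like:
#   lvs drbdpool --noheadings -o lv_name | grep -Fx "$expected_lv"
```

So the absence of a drbdpool/oldNFS_00 LV on san3 is consistent with the resource never having been created there.<tt><br>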
<br>
root@san3:~# drbdmanage list-nodes<br>
+------------------------------------------------------------------------------------------------------------+<br>
| Name | Pool Size | Pool Free | | State |<br>
|------------------------------------------------------------------------------------------------------------|<br>
| san2.websitemanagers.com.au | 3777040 | 3109316 | | ok |<br>
| san3 | 1830932 | 1830924 | | ok |<br>
+------------------------------------------------------------------------------------------------------------+<br>
root@san3:~# drbdmanage list-volumes --show Port<br>
+------------------------------------------------------------------------------------------------------------+<br>
| Name | Vol ID | Size | Minor | Port | | State |<br>
|------------------------------------------------------------------------------------------------------------|<br>
| oldNFS | 0 | 667572 | 101 | 7001 | | ok |<br>
| test1 | 0 | 9536 | 100 | 7000 | | ok |<br>
+------------------------------------------------------------------------------------------------------------+<br>
root@san3:~# dpkg -l | grep drbd<br>
ii drbd-utils 8.9.6-1 amd64 RAID 1 over TCP/IP for Linux (user utilities)<br>
ii python-drbdmanage 0.94-1 all DRBD distributed resource management utility<br>
root@san3:~# cat /proc/drbd<br>
version: 9.0.1-1 (api:2/proto:86-111)<br>
GIT-hash: f57acfc22d29a95697e683fb6bbacd9a1ad4348e build by root@san1, 2016-03-01 00:38:33<br>
Transports (api:14): tcp (1.0.0)<br>
<br>
Reading more docs, I then find this section:<br>
<a moz-do-not-send="true" class="moz-txt-link-freetext"
href="http://www.drbd.org/doc/users-guide-90/s-check-status">http://www.drbd.org/doc/users-guide-90/s-check-status</a><br>
san2:~# drbd-overview <br>
0:.drbdctrl/0 Connected(2*) Secondary(2*) UpToDa/UpToDa <br>
1:.drbdctrl/1 Connected(2*) Secondary(2*) UpToDa/UpToDa <br>
101:oldNFS/0 Connec/C'ting Second/Unknow UpToDa/DUnkno <br>
<br>
root@san3:/etc/drbd.d# drbd-overview <br>
0:.drbdctrl/0 Connected(2*) Secondary(2*) UpToDa/UpToDa <br>
1:.drbdctrl/1 Connected(2*) Secondary(2*) UpToDa/UpToDa <br>
<br>
So it would seem that the problem is that the config hasn't been
sent to the other node, which simply doesn't know anything about
the new resource.....<br>
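</tt>If I were debugging this again, a first check might be whether the two nodes even agree on which resources their kernels know about. A sketch, with the resource lists hard-coded from the drbd-overview output above so it is self-contained (on real nodes they would come from drbd-overview or drbdsetup status on each host):<br>

```shell
# Resource names each node's kernel reports, per the drbd-overview
# output above (hard-coded sample data for this sketch).
san2_resources='.drbdctrl
oldNFS'
san3_resources='.drbdctrl'

# Resources san2 has that san3 lacks: their config never reached the peer.
comm -23 <(printf '%s\n' "$san2_resources" | sort) \
         <(printf '%s\n' "$san3_resources" | sort)
# prints: oldNFS
```

(The process substitution requires bash.)<tt><br>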
<br>
san2:~# drbdadm status oldNFS --verbose<br>
drbdsetup status oldNFS <br>
oldNFS role:Secondary<br>
disk:UpToDate<br>
san3 connection:Connecting<br>
<br>
Can anyone help advise where I should look, or what I might need
to do to get this working?<br>
<br>
Thanks,<br>
Adam<br>
<br>
</tt> <br>
<br>
<pre wrap="">_______________________________________________
drbd-user mailing list
<a class="moz-txt-link-abbreviated" href="mailto:drbd-user@lists.linbit.com">drbd-user@lists.linbit.com</a>
<a class="moz-txt-link-freetext" href="http://lists.linbit.com/mailman/listinfo/drbd-user">http://lists.linbit.com/mailman/listinfo/drbd-user</a>
</pre>
</blockquote>
<br>
</body>
</html>