Got it. Thanks.

Date: Sat, 22 Dec 2012 01:11:03 +0100
From: andreas@hastexo.com
To: drbd-user@lists.linbit.com
Subject: Re: [DRBD-user] “The peer's disk size is too small!” messages on attempts to add rebuilt peer

On 12/21/2012 06:39 PM, Anthony G. wrote:
>> well, you could try the one I put in my previous answer ... and it does
>> not need to be of the exact size on nfs1 ... equal or more
>
> I will try that. It's probably apparent, but I'm new to LVM and DRBD.
> Is the "drbdadm adjust nfs" on nfs2 something that I can do while that
> system is up and running and servicing production requests?

Yes, that can be done online ... use the "-d" switch for a dry run and you
should only see a connect command as output.

Regards,
Andreas
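A minimal sketch of that dry run on nfs2, using the resource name "nfs" from
this thread; the "-d" switch only prints the commands drbdadm would run, so it
is safe to try first, and the exact output depends on the DRBD version in use:

    drbdadm -d adjust nfs   # preview: should print little more than a connect
    drbdadm adjust nfs      # apply for real; safe while the node is in service
    cat /proc/drbd          # then check the resource's connection state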
>
> Thanks, again,
>
> -Anthony
>
>> Date: Fri, 21 Dec 2012 18:12:23 +0100
>> From: andreas@hastexo.com
>> To: drbd-user@lists.linbit.com
>> CC: agenerette@hotmail.com
>> Subject: Re: [DRBD-user] “The peer's disk size is too small!” messages
>> on attempts to add rebuilt peer
>>
>> Please don't bypass the mailing-list ...
>>
>> On 12/21/2012 06:04 PM, Anthony G. wrote:
>> > Thank you for your input. That was my first thought, but I caught hell
>> > trying to get the partition sizes to match. I'm not sure which size
>> > reading I need to take on -nfs2, and then which specific lvcreate
>> > command I need to execute on -nfs1 to get the size on the latter set
>> > properly.
>>
>> well, you could try the one I put in my previous answer ... and it does
>> not need to be of the exact size on nfs1 ... equal or more
>>
>> > I've recreated the lv, though (just to try and make some progress), and
>> > am now getting the following when I try to 'service drbd start' on -nfs1:
>> >
>> >   DRBD's startup script waits for the peer node(s) to appear.
>> >   - In case this node was already a degraded cluster before the
>> >     reboot the timeout is 0 seconds. [degr-wfc-timeout]
>> >   - If the peer was available before the reboot the timeout will
>> >     expire after 0 seconds. [wfc-timeout]
>> >   (These values are for resource 'nfs'; 0 sec -> wait forever)
>> >   To abort waiting enter 'yes' [ 123]: yes
>> >
>> > 'netstat -a' doesn't show -nfs2 listening on port 7789, but I do see
>> > drbd-related processes running on that box.
>>
>> so the resource on nfs2 is in disconnected state ... do a "drbdadm
>> adjust nfs" on nfs2
>>
>> Regards,
>> Andreas
>>
>> > -Anthony
>> >
>> > Date: Fri, 21 Dec 2012 17:25:01 +0100
>> > From: andreas@hastexo.com
>> > To: drbd-user@lists.linbit.com
>> > Subject: Re: [DRBD-user] “The peer's disk size is too small!” messages
>> > on attempts to add rebuilt peer
>> >
>> > On 12/21/2012 12:13 AM, Anthony G. wrote:
>> >> Hi,
>> >>
>> >> There's so much information relating to my current configuration that
>> >> I'm not sure what I should post here. Let me start by saying that I had
>> >> two Ubuntu 10.04 hosts configured in a DRBD relationship: sf02-nfs1
>> >> (primary) and sf02-nfs2 (secondary). -nfs1 suffered a major filesystem
>> >> fault. I had to make -nfs2 primary and rebuild -nfs1. I want to
>> >> eventually have all of my machines on 12.04, so I took this as an
>> >> opportunity to put -nfs1 on that OS.
>> >>
>> >> Here is a copy of my main configuration file (/etc/drbd.d/nfs.res):
>> >>
>> >>   resource nfs {
>> >>     on sf02-nfs2 {
>> >>       device    /dev/drbd0;
>> >>       disk      /dev/ubuntu/drbd-nfs;
>> >>       address   10.0.6.2:7789;
>> >>       meta-disk internal;
>> >>     }
>> >>     on sf02-nfs1 {
>> >>       device    /dev/drbd0;
>> >>       disk      /dev/ubuntuvg/drbd-nfs;
>> >>       address   10.0.6.1:7789;
>> >>       meta-disk internal;
>> >>     }
>> >>   }
>> >>
>> >> I'm trying to re-introduce -nfs1 into the DRBD relationship and am
>> >> having trouble. I have:
>> >>
>> >> 1.) created the resource "nfs" on -nfs1 ('drbdadm create-md nfs');
>> >>
>> >> 2.) run 'drbdadm primary nfs' on -nfs2 and 'drbdadm secondary nfs'
>> >>     on -nfs1;
>> >>
>> >> 3.) run 'drbdadm -- --overwrite-data-of-peer primary all' from -nfs2.
>> >>
>> >> But /var/log/kern.log shows:
>> >>
>> >> =====
>> >> Dec 19 19:55:47 sf02-nfs2 kernel: [9284165.843938] block drbd0: Handshake successful: Agreed network protocol version 91
>> >> Dec 19 19:55:47 sf02-nfs2 kernel: [9284165.843949] block drbd0: conn( WFConnection -> WFReportParams )
>> >> Dec 19 19:55:47 sf02-nfs2 kernel: [9284165.844171] block drbd0: Starting asender thread (from drbd0_receiver [2452])
>> >> Dec 19 19:55:47 sf02-nfs2 kernel: [9284165.844539] block drbd0: data-integrity-alg: <not-used>
>> >> Dec 19 19:55:47 sf02-nfs2 kernel: [9284165.844610] block drbd0: *The peer's disk size is too small!*
>> >> Dec 19 19:55:47 sf02-nfs2 kernel: [9284165.844617] block drbd0: conn( WFReportParams -> Disconnecting )
>> >> Dec 19 19:55:47 sf02-nfs2 kernel: [9284165.844626] block drbd0: error receiving ReportSizes, l: 32!
>> >> Dec 19 19:55:47 sf02-nfs2 kernel: [9284165.844680] block drbd0: asender terminated
>> >> Dec 19 19:55:47 sf02-nfs2 kernel: [9284165.844691] block drbd0: Terminating asender thread
>> >> Dec 19 19:55:47 sf02-nfs2 kernel: [9284165.844746] block drbd0: Connection closed
>> >> Dec 19 19:55:47 sf02-nfs2 kernel: [9284165.844755] block drbd0: conn( Disconnecting -> StandAlone )
>> >> Dec 19 19:55:47 sf02-nfs2 kernel: [9284165.844791] block drbd0: receiver terminated
>> >> Dec 19 19:55:47 sf02-nfs2 kernel: [9284165.844794] block drbd0: Terminating receiver thread
>> >> =====
>> >>
>> >> So, it seems that a difference in the size of drbd0 on the respective
>> >> machines is the source of my trouble. 'cat /proc/partitions' (output
>> >> pasted at the end of this message) on each machine tells me that
>> >> -nfs2's partition is around 348148 blocks larger than -nfs1's. -nfs2
>> >> contains my company's production data, so I do not, of course, want
>> >> to do anything destructive there. I can, however, certainly recreate
>> >> the resource on -nfs1.
>> >>
>> >> Does anyone out there know what steps I need to take to make the
>> >> partition sizes match? Of course, I'm working under the belief that
>> >> the "peer's disk size is too small" message points to the source of
>> >> my trouble. Let me know, of course, if I need to post more information
>> >> on my setup.
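For reference, the /proc/partitions figures quoted at the bottom of that
message pin the gap down exactly (the #blocks column is in 1 KiB units); a
quick back-of-the-envelope check:

    # drbd0 on sf02-nfs2:         1706329200 KiB
    # drbd0 on sf02-nfs1:         1705981052 KiB
    echo $(( 1706329200 - 1705981052 ))   # -> 348148 KiB on the DRBD devices

    # backing LV (dm-2) on nfs2:  1706381312 KiB
    # backing LV (dm-2) on nfs1:  1706033152 KiB
    echo $(( 1706381312 - 1706033152 ))   # -> 348160 KiB, exactly 340 MiB

So the LV under DRBD on -nfs1 needs to grow by at least roughly 340 MiB before
the two backing devices are the same size.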
>> > You are using LVM, so simply resize the lv below DRBD on nfs1 to be at
>> > least of the same size, or bigger, a la:
>> >
>> > lvresize -L+200M ubuntuvg/drbd-nfs
>> >
>> > ... then recreate the meta-data on that resized lv on nfs1, and on nfs1
>> > do a:
>> >
>> > drbdadm up nfs
>> >
>> > Regards,
>> > Andreas
>> >
>> > --
>> > Need help with DRBD?
>> > http://www.hastexo.com/now
>> >
>> >> Thanks,
>> >>
>> >> -Anthony
>> >>
>> >> ==========
>> >>
>> >> root@sf02-nfs1:/dev/ubuntuvg# cat /proc/partitions
>> >> major minor  #blocks     name
>> >>
>> >>    8     0   1952448512  sda
>> >>    8     1       512000  sda1
>> >>    8     2            1  sda2
>> >>    8     5   1886388224  sda5
>> >>  252     0     20971520  dm-0
>> >>  252     1      5242880  dm-1
>> >>  252     2   1706033152  dm-2
>> >>  147     0   1705981052  drbd0
>> >>
>> >> root@sf02-nfs2:/etc/drbd.d# cat /proc/partitions
>> >> major minor  #blocks     name
>> >>
>> >>    8     0   1952448512  sda
>> >>    8     1       248832  sda1
>> >>    8     2            1  sda2
>> >>    8     5   1952196608  sda5
>> >>  252     0    209715200  dm-0  ubuntuvg-root
>> >>  252     1     36098048  dm-1  ubuntuvg-swap
>> >>  252     2   1706381312  dm-2  ubuntuvg-drbd--nfs
>> >>  147     0   1706329200  drbd0
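Pulling those pieces together, the recovery on the rebuilt node would look
roughly like this (a sketch only, using the names from this thread and sized
from the ~340 MiB gap worked out above rather than the illustrative "+200M"):

    # on sf02-nfs1
    drbdadm down nfs                      # if the resource is currently up and waiting
    lvresize -L +512M ubuntuvg/drbd-nfs   # grow the backing LV past the size on -nfs2
    drbdadm create-md nfs                 # recreate the internal meta-data on the bigger LV
    drbdadm up nfs                        # bring the resource back up

    # on sf02-nfs2 (only if the resource there is still StandAlone)
    drbdadm -d adjust nfs                 # dry run first
    drbdadm adjust nfs                    # then reconnect for real

If the two nodes connect but do not start resynchronising on their own, the
'drbdadm -- --overwrite-data-of-peer primary all' step from the original post
may need to be repeated on sf02-nfs2, since -nfs1's meta-data was recreated
from scratch.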
_______________________________________________
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user