Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
> well, you could try the one I put in my previous answer ... and it does
> not need to be of the exact size on nfs1 ... equal or more
>
I will try that. It's probably apparent, but I'm new to LVM and DRBD. Is the "drbdadm adjust nfs" on nfs2 something that I can do while that system is up and running and servicing Production requests?
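
Just so I know what to look at, I assume the following are read-only checks
that are safe to run on the live node before and after (assuming the DRBD 8.3
userland, which matches the "protocol version 91" in the log below):

cat /proc/drbd        # overall resource, connection and disk state
drbdadm cstate nfs    # connection state of the 'nfs' resource only
drbdadm dstate nfs    # local/peer disk state
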
Thanks, again,
-Anthony
> Date: Fri, 21 Dec 2012 18:12:23 +0100
> From: andreas at hastexo.com
> To: drbd-user at lists.linbit.com
> CC: agenerette at hotmail.com
> Subject: Re: [DRBD-user] “The peer's disk size is too small!” messages on attempts to add rebuilt peer
>
>
> Please don't bypass the mailing-list ...
>
> On 12/21/2012 06:04 PM, Anthony G. wrote:
> > Thank you for your input. That was my first thought, but I caught hell
> > trying
> > to get the partition sizes to match. I'm not sure which size reading I
> > need to
> > take on -nfs2 and then which specific lvcreate command I need to execute on
> > -nfs1 to get the size on the latter set properly.
>
> well, you could try the one I put in my previous answer ... and it does
> not need to be of the exact size on nfs1 ... equal or more
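>
> For instance, something like this (just a sketch -- the size is the dm-2
> figure from your /proc/partitions output, in 1K blocks, and LVM rounds it
> up to a whole extent anyway, which still satisfies "equal or more"):
>
> # on nfs2: read the exact size of the backing device, in bytes
> blockdev --getsize64 /dev/ubuntu/drbd-nfs
> # on nfs1: create an lv at least that large
> lvcreate -n drbd-nfs -L 1706381312k ubuntuvg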
>
> >
> > I've recreated the lv, though (just to try and make some progress), and
> > am now
> > getting the following when I try to 'service drbd start' on -nfs1:
> >
> > DRBD's startup script waits for the peer node(s) to appear.
> > - In case this node was already a degraded cluster before the
> > reboot the timeout is 0 seconds. [degr-wfc-timeout]
> > - If the peer was available before the reboot the timeout will
> > expire after 0 seconds. [wfc-timeout]
> > (These values are for resource 'nfs'; 0 sec -> wait forever)
> > To abort waiting enter 'yes' [ 123]:yes
> >
> > 'netstat -a' doesn't show -nfs2 listening on port 7789, but I do see
> > drbd-related
> > processes running on that box.
>
> so the resource on nfs2 is in disconnected state .... do a "drbdadm
> adjust nfs" on nfs2
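>
> Something like this, that is (a sketch; adjust only re-applies nfs.res to
> the already-running resource):
>
> drbdadm cstate nfs    # will likely show StandAlone or WFConnection
> drbdadm adjust nfs    # re-apply nfs.res, bringing the network part back up
> cat /proc/drbd        # should then move towards WFConnection / Connected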
>
> Regards,
> Andreas
>
> >
> > -Anthony
> >
> > Date: Fri, 21 Dec 2012 17:25:01 +0100
> > From: andreas at hastexo.com
> > To: drbd-user at lists.linbit.com
> > Subject: Re: [DRBD-user] “The peer's disk size is too small!” messages
> > on attempts to add rebuilt peer
> >
> > On 12/21/2012 12:13 AM, Anthony G. wrote:
> >> Hi,
> >>
> >> There's so much information relating to my current configuration that
> >> I'm not sure what I should post here. Let me start by saying that I had
> >> two Ubuntu 10.04 hosts configured in a DRBD relationship: sf02-nfs1
> >> (primary) and sf02-nfs2 (secondary). -nfs1 suffered a major filesystem
> >> fault. I had to make -nfs2 primary and rebuild -nfs1. I want to
> >> eventually have all of my machines on 12.04, so I took this as an
> >> opportunity to set -nfs1 on that OS.
> >>
> >> Here is a copy of my main configuration file (/etc/drbd.d/nfs.res):
> >>
> >> resource nfs {
> >>   on sf02-nfs2 {
> >>     device    /dev/drbd0;
> >>     disk      /dev/ubuntu/drbd-nfs;
> >>     address   10.0.6.2:7789;
> >>     meta-disk internal;
> >>   }
> >>   on sf02-nfs1 {
> >>     device    /dev/drbd0;
> >>     disk      /dev/ubuntuvg/drbd-nfs;
> >>     address   10.0.6.1:7789;
> >>     meta-disk internal;
> >>   }
> >> }
> >>
> >>
> >> I'm trying to re-introduce -nfs1 into the DRBD relationship and am
> >> having trouble. I have:
> >>
> >>
> >> 1.) created the resource "nfs" on -nfs1 ('drbdadm create-md nfs')
> >>
> >> 2.) run 'drbdadm primary nfs' on -nfs2 and 'drbdadm secondary nfs' on -nfs1.
> >>
> >> 3.) run 'drbdadm -- --overwrite-data-of-peer primary all' from -nfs2.
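> >>
> >> Or, spelled out as the commands those three steps amount to:
> >>
> >> drbdadm create-md nfs                             # on sf02-nfs1
> >> drbdadm primary nfs                               # on sf02-nfs2
> >> drbdadm secondary nfs                             # on sf02-nfs1
> >> drbdadm -- --overwrite-data-of-peer primary all   # from sf02-nfs2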
> >>
> >>
> >> But /var/log/kern.log shows:
> >>
> >> =====
> >>
> >> Dec 19 19:55:47 sf02-nfs2 kernel: [9284165.843938] block drbd0: Handshake successful: Agreed network protocol version 91
> >> Dec 19 19:55:47 sf02-nfs2 kernel: [9284165.843949] block drbd0: conn( WFConnection -> WFReportParams )
> >> Dec 19 19:55:47 sf02-nfs2 kernel: [9284165.844171] block drbd0: Starting asender thread (from drbd0_receiver [2452])
> >> Dec 19 19:55:47 sf02-nfs2 kernel: [9284165.844539] block drbd0: data-integrity-alg: <not-used>
> >> Dec 19 19:55:47 sf02-nfs2 kernel: [9284165.844610] block drbd0: *The peer's disk size is too small!*
> >> Dec 19 19:55:47 sf02-nfs2 kernel: [9284165.844617] block drbd0: conn( WFReportParams -> Disconnecting )
> >> Dec 19 19:55:47 sf02-nfs2 kernel: [9284165.844626] block drbd0: error receiving ReportSizes, l: 32!
> >> Dec 19 19:55:47 sf02-nfs2 kernel: [9284165.844680] block drbd0: asender terminated
> >> Dec 19 19:55:47 sf02-nfs2 kernel: [9284165.844691] block drbd0: Terminating asender thread
> >> Dec 19 19:55:47 sf02-nfs2 kernel: [9284165.844746] block drbd0: Connection closed
> >> Dec 19 19:55:47 sf02-nfs2 kernel: [9284165.844755] block drbd0: conn( Disconnecting -> StandAlone )
> >> Dec 19 19:55:47 sf02-nfs2 kernel: [9284165.844791] block drbd0: receiver terminated
> >> Dec 19 19:55:47 sf02-nfs2 kernel: [9284165.844794] block drbd0: Terminating receiver thread
> >>
> >> =====
> >>
> >>
> >> So, it seems that a difference in the size of drbd0 on the respective
> >> machines is the source of my trouble. 'cat /proc/partitions' (output
> >> pasted at the end of this message) on each machine tells me that -nfs2's
> >> partition is around 348148 blocks larger than -nfs1's. -nfs2 contains
> >> my company's Production data, so I do not, of course, want to do
> >> anything destructive there. I can, however, certainly recreate the
> >> resource on -nfs1.
> >>
> >>
> >> Does anyone out there know what steps I need to take to make the
> >> partition sizes match? Of course, I'm working under the belief that the
> >> "peer's disk size is too small" message points up the source of my
> >> trouble. Let me know, of course, if I need to post more information on
> >> my setup.
> >
> > You are using LVM, so simply resize the lv below DRBD on nfs1 to be at
> > least the same size or bigger, e.g.:
> >
> > lvresize -L+200M ubuntuvg/drbd-nfs
> >
> > ... then recreate the meta-data on that resized lv and, still on nfs1, do a:
> >
> > drbdadm up nfs
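> >
> > Putting it together on nfs1, roughly (a sketch; the -L increment is just an
> > example, it only has to leave the lv at least as large as the one on nfs2):
> >
> > lvresize -L+200M ubuntuvg/drbd-nfs   # grow the backing lv
> > lvs --units k ubuntuvg/drbd-nfs      # check the new size against nfs2's lv
> > drbdadm create-md nfs                # recreate the meta-data
> > drbdadm up nfs                       # attach and connect the resource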
> >
> >
> > Regards,
> > Andreas
> >
> > --
> > Need help with DRBD?
> > http://www.hastexo.com/now
> >
> >>
> >>
> >> Thanks,
> >>
> >>
> >> -Anthony
> >>
> >>
> >>
> >>
> >>
> >>
> >> ==========
> >>
> >> root at sf02-nfs1:/dev/ubuntuvg# cat /proc/partitions
> >> major minor  #blocks  name
> >>
> >>    8     0  1952448512  sda
> >>    8     1      512000  sda1
> >>    8     2           1  sda2
> >>    8     5  1886388224  sda5
> >>  252     0    20971520  dm-0
> >>  252     1     5242880  dm-1
> >>  252     2  1706033152  dm-2
> >>  147     0  1705981052  drbd0
> >>
> >> root at sf02-nfs2:/etc/drbd.d# cat /proc/partitions
> >> major minor  #blocks  name
> >>
> >>    8     0  1952448512  sda
> >>    8     1      248832  sda1
> >>    8     2           1  sda2
> >>    8     5  1952196608  sda5
> >>  252     0   209715200  dm-0  ubuntuvg-root
> >>  252     1    36098048  dm-1  ubuntuvg-swap
> >>  252     2  1706381312  dm-2  ubuntuvg-drbd--nfs
> >>  147     0  1706329200  drbd0
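> >>
> >> (If those #blocks figures are 1 KiB units, as I believe they are, the
> >> backing lv on -nfs2 is 1706381312 - 1706033152 = 348160 KiB, i.e. 340 MiB,
> >> larger than the one on -nfs1, and the drbd0 devices differ by the 348148
> >> blocks mentioned above -- so whatever I do on -nfs1 has to make up at least
> >> that difference.)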
> >>
> >>
> >>
> >>