Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
-----Original Message-----
From: Florian Haas [mailto:florian at hastexo.com]
Sent: Sunday, October 07, 2012 5:47 AM
To: Dan Barker
Cc: drbd List
Subject: Re: [DRBD-user] Does oversize disk hurt anything?
>On Fri, Oct 5, 2012 at 7:59 PM, Dan Barker <dbarker at visioncomm.net> wrote:
>> I just lost a disk on my secondary node. I looked EVERYWHERE and couldn't
>> find the spare disks I bought for just such an occurrence, so I put in a
>> handy disk, twice the size.
>>
>> drbdadm create-md r1
>> drbdadm attach r1
>>
>> and off we go.
>>
>> If memory serves, create-md will build the metadata at the END of the disk.
>> Won't that cause a lot of seeking out to the hub, when seeking to about the
>> middle of the platters would have done the trick, had the metadata been at
>> the same offset as on the primary?
>
>Well, if you had created a partition (/dev/sdc1) rather than using the full disk (/dev/sdc), you could have set up that partition to match the size of the disk on your primary.
Partition. Great idea. If I had thought of that, I'd have bought only one new 500G disk instead of two. Thanks for the hint. 1T disks cost the same as 500G these days.
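For the archive, here's roughly what that would have looked like (parted shown; the sector count is just an example for a 500G drive, use whatever the surviving disk actually reports):

# On DrbdR1 (the node that still has the original 500G disk):
blockdev --getsz /dev/sdc                # backing device size in 512-byte sectors

# On DrbdR0, partition the new 1T disk to exactly that size before create-md:
SECTORS=976773168                        # example value; use the number reported above
parted -s /dev/sdc mklabel msdos
parted -s /dev/sdc mkpart primary 2048s $((2048 + SECTORS - 1))s

# ...then use "disk /dev/sdc1;" in r1.res and run create-md / attach as before.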
>Besides, if you're using a RAID controller with a battery- or flash-backed write cache, then it won't matter much. I wrote about this years ago on my blog:
>http://fghaas.wordpress.com/2009/08/20/internal-metadata-and-why-we-recommend-it/
It's not RAID; it's simply a single SATA disk driven by the motherboard's controller.
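For what it's worth, the chunk that lives out at the end of the disk is small. A back-of-the-envelope estimate, using the usual rule of thumb of roughly 32 KiB of internal metadata per GiB (so treat the numbers as approximate):

# Rough size of the internal metadata DRBD keeps at the tail of the device:
# about ceil(sectors / 2^18) * 8 + 72 sectors, i.e. ~32 KiB per GiB of storage.
SECTORS=$(blockdev --getsz /dev/sdc)
MD_SECTORS=$(( (SECTORS + 262143) / 262144 * 8 + 72 ))
echo "~$(( MD_SECTORS / 2048 )) MiB of metadata at the end of /dev/sdc"

For the 1T disk that works out to around 30 MiB of bitmap and activity log, so the concern is really about where it sits, not how big it is.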
>
>> version: 8.4.0 (api:1/proto:86-100)
>> GIT-hash: 28753f559ab51b549d16bcf487fe625d5919c49c build by
>> root at DrbdR0,
>> 2012-05-28 12:09:30 (Yes, I know. I need to upgrade).
>
>True. Rather urgently if you're on 8.4.0.
>
Agreed.
>> Failed disk: WD 500G
>> Replaced by: WD 1T
>> On server: DrbdR0
>>
>> cat /etc/drbd.d/r1.res
>> resource r1 {
>>     on DrbdR0 {
>>         volume 0 {
>>             device    /dev/drbd1 minor 1;
>>             disk      /dev/sdc;
>>             meta-disk internal;
>>         }
>>         address ipv4 10.20.30.46:7790;
>>     }
>>     on DrbdR1 {
>>         volume 0 {
>>             device    /dev/drbd1 minor 1;
>>             disk      /dev/sdc;
>>             meta-disk internal;
>>         }
>>         address ipv4 10.20.30.47:7790;
>>     }
>>     startup {
>>         become-primary-on DrbdR1;
>
>Why? Your cluster manager (typically Pacemaker) should take care of that for you.
No cluster manager and no HA stack; failover is easy to do by hand. This is a lab environment, so HA is not really needed. The consumers of the DRBD storage are ESXi hosts. To "take" the primary server offline, I do the following (a command-level sketch follows the list):
DrbdR0: drbdadm primary all (allow-two-primaries is on)
DrbdR0: start iet
ESXi (all): verify all four paths to both DRBD nodes are online
DrbdR1: stop iet
DrbdR0: stop drbd
DrbdR0: <perform maintenance, or whatever>
I think I'll add a new first step: verify that no DRBD resource is Diskless (and that all are UpToDate)!
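Spelled out as commands, it looks roughly like this (the iet init script name varies by install, so adjust as needed):

# On the node taking over (here DrbdR0), after checking that /proc/drbd shows
# every volume Connected and UpToDate/UpToDate, with nothing Diskless:
cat /proc/drbd
drbdadm primary all                 # allow-two-primaries is already set
/etc/init.d/iscsi-target start      # init script name assumed; start iet here

# On the ESXi hosts: confirm all four paths to both DRBD nodes are live.

# On the node going down for maintenance:
/etc/init.d/iscsi-target stop       # drop iet there first
/etc/init.d/drbd stop               # then stop drbd and do the maintenance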
There is a slight possibility of a problem when running dual primaries, since all my ESXi allocations are thin-provisioned, but the risk is remote enough for this purpose: I don't run dual-primary for more than a few minutes, and certainly not while the ESXi guests are doing any heavy allocation.
>
>Cheers,
>Florian
>
Thanks for the help.
Dan in Atlanta