[DRBD-user] Does oversize disk hurt anything?

Dan Barker dbarker at visioncomm.net
Mon Oct 8 00:00:33 CEST 2012

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


>-----Original Message-----
>From: Florian Haas [mailto:florian at hastexo.com]
>Sent: Sunday, October 07, 2012 5:46 PM
>To: Dan Barker
>Cc: drbd List
>Subject: Re: [DRBD-user] Does oversize disk hurt anything?
>
>On Sun, Oct 7, 2012 at 2:20 PM, Dan Barker <dbarker at visioncomm.net> wrote:
>>>Well if you had created a partition (/dev/sdc1) rather than use the full disk (/dev/sdc), then you could have set up that partition to match the size of the disk on your primary.
>>
>> Partition. Great idea. If I had thought of that, I'd have bought only one new 500G disk instead of two. Thanks for the hint. 1T disks cost the same as 500G these days.
>
>The physical device sizes differing isn't a problem at all; DRBD will just select the smaller size of the two.
I know drbd is only using the outer 500G of the oversize disk. It's just that the metadata sits near the hub, at the far end of the whole device. A partition would have placed it mid-disk, but I didn't think of that.
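
For anyone who wants to set it up that way from the start, something like this should do it (a rough sketch; /dev/sdc and the 500G figure are just my lab's numbers, so adjust to your hardware):

    # Carve a partition the size of the smaller disk, so DRBD's
    # internal metadata lands at the ~500G mark (mid-disk on a 1T
    # drive) instead of at the far end of the whole device.
    parted --script /dev/sdc mklabel gpt mkpart primary 1MiB 500GiB
    # Then point the resource's "disk" at /dev/sdc1 rather than
    # /dev/sdc and run drbdadm create-md on it.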
>
>>>Why? Your cluster manager (typically Pacemaker) should take care of that for you.
>>
>> No cluster manager, so N/A. Easy manual failover. This is a lab environment and HA is not really needed. The users of drbd storage are ESXi hosts. To "take" the primary server offline I:
>> DrbdR0: drbdadm primary all (allow-two-primaries is on)
>> DrbdR0: start iet
>> ESXi (all): verify all four paths to both drbd are online
>
>We may have had this discussion before, but:
>http://fghaas.wordpress.com/2011/11/29/dual-primary-drbd-iscsi-and-multipath-dont-do-that/
>
>> Thanks for the help.
>
>Pleasure.
>
>Cheers,
>Florian

Of course I've been following the dont-do-that threads. I've been down that path several times; it works great for a while, and then it doesn't <g>. But that was a couple of years ago.

What I am currently doing is different; the exposure is very brief, if at all.
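
Concretely, the whole switchover is just a handful of commands (a rough sketch; "iet" here is the iSCSI Enterprise Target, and the init script name may differ on your distro):

    # On the standby node, publish the second set of paths:
    drbdadm primary all              # both nodes primary from here on
    /etc/init.d/iscsi-target start   # the "start iet" step above

    # On the old primary, once ESXi shows all four paths online:
    /etc/init.d/iscsi-target stop    # ESXi flips I/O to the new paths
    drbdadm secondary all            # back to a single primary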

When the second DRBD node publishes its iSCSI paths, ESXi discovers them but keeps routing all I/O over the original path; it's not concurrent multipath. Only when the original path dies (when I stop iet on the "primary" drbd) does ESXi start active I/O on the "other" path.
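
You can watch that happen from the ESXi side (roughly; the exact esxcli namespace depends on the ESXi version):

    # List every path ESXi sees, with its state, so you can tell
    # which one is actually carrying I/O:
    esxcli storage core path list
    # Show the path selection policy and working path per device:
    esxcli storage nmp device list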

I think your fears are about simultaneous dual access, not about what I'm doing. I wouldn't recommend anyone else do it this way; it's just how I'm doing it with the hardware lying around.

Thanks for the feedback. Here's some feedback for you: drbd is Great! Thanks for making it available. Best wishes to you at Hastexo.

Dan



