[DRBD-user] lvm2, striping, and drbd

kbyrd-drbd kbyrd-drbd at memcpy.com
Mon Aug 27 16:53:58 CEST 2007

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


grrr, sent from the wrong "From" address. List admins, please
ignore the previous message, which is identical to this one.


> On Mon, 27 Aug 2007 08:51:09 +0300, "Michael Tewner" <tewner at gmail.com>
> wrote:
>>
>> On 8/27/07, Lars Ellenberg <lars.ellenberg at linbit.com> wrote:
>>> On Sun, Aug 26, 2007 at 06:00:16PM -0700, kbyrd-drbd wrote:
>>> >
>>> > I haven't used drbd yet; I'm getting ready to test and deploy it.
>>> > I've searched the archives and seen various posts about md and
>>> > lvm2, so I know some of this has been covered, but I'm confused
>>> > about the current state of things with 0.8.
>>> >
>>> > I'd like something that feels like clustered RAID1+0 (that is,
>>> > striping on top of drbd). My ideal plan: lvm2 striping on top of
>>> > four active/active drbd pairs, with GFS running on top of that.
>>> > Do I need cLVM instead of plain LVM2 for this? Does LVM striping
>>> > even work with drbd?
>>>
>>> don't.
>>>
>>> DRBD does not (yet) support consistency bundling of several drbd
>>> devices, so whenever you have a connection loss, your four devices
>>> will disconnect in a slightly different order, and the consistency
>>> of your stripe set cannot be guaranteed.
>>>
>>> I also think you'd get better performance out of drbd on top of raid.
>>>
> 
>> Doesn't that setup seem somewhat obsessive?
>>
>>
> 
> Was that directed at me or at Lars? If me, what about it? I'm new to
> all this, and I'm happy to hear if my reasoning isn't sound and I can
> simplify things. With this setup, I can lose more than one drive as
> long as no single pair goes out. I get more performance locally
> because I'm using more spindles. And if drbd needs to resync after I
> replace a drive, that drbd instance only has to resync one drive's
> worth of data. Whether LVM or md provides the RAID0 layer doesn't
> matter to me; I just thought md might be problematic because stuff
> would be changing underneath it.
> 
> What's a common active/active setup with two nodes where you've
> glued together multiple drives in each node?
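
For reference, here's roughly what I had in mind. This is only a
sketch: the device names, volume names, sizes, and stripe parameters
below are made up, and it assumes four dual-primary drbd devices
(/dev/drbd0 through /dev/drbd3) already exist on both nodes and that
clvmd is running so the volume group can be shared between them:

  # turn each drbd device into an LVM physical volume
  pvcreate /dev/drbd0 /dev/drbd1 /dev/drbd2 /dev/drbd3

  # group them into one volume group (name is made up)
  vgcreate vg_cluster /dev/drbd0 /dev/drbd1 /dev/drbd2 /dev/drbd3

  # create a logical volume striped across all four drbd devices
  # -i 4 = four stripes, -I 64 = 64 KiB stripe size (example values)
  lvcreate -i 4 -I 64 -L 400G -n lv_gfs vg_cluster

  # GFS would then be created on /dev/vg_cluster/lv_gfs and mounted
  # on both nodes (mkfs and cluster locking details omitted)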

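And, to make sure I understand the alternative: if I followed Lars's
suggestion of drbd on top of raid, I suppose it would look something
like the sketch below, with one md array per node built from the
local disks and a single drbd resource (and therefore a single resync
stream) on top of it. Hostnames, device names, and addresses are
invented:

  # on each node: one local array built from the four disks
  # (RAID10 here for extra local redundancy; RAID0 would also work,
  #  since drbd already mirrors across the two nodes)
  mdadm --create /dev/md0 --level=10 --raid-devices=4 \
        /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

  # /etc/drbd.conf: a single dual-primary resource on top of the array
  resource r0 {
      protocol C;
      net { allow-two-primaries; }
      on node-a {
          device    /dev/drbd0;
          disk      /dev/md0;
          address   192.168.1.1:7788;
          meta-disk internal;
      }
      on node-b {
          device    /dev/drbd0;
          disk      /dev/md0;
          address   192.168.1.2:7788;
          meta-disk internal;
      }
  }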


