[DRBD-user] primary/primary drbd + md + lvm

Gianluca Cecchi gianluca.cecchi at gmail.com
Thu Sep 17 20:35:13 CEST 2009



On Thu, Sep 17, 2009 at 7:36 PM, Matthew Ingersoll <matth at digitalwest.net> wrote:

> I'm testing a 2 node primary/primary drbd setup but had a few concerns
> related to the use of md for raid0 striping.  The setup is as follows:
>
> Each node runs two drbd devices in a primary/primary setup.  These devices
> are then striped using the mdadm utility.  From there, logical volumes are
> setup using LVM (I'm running ais + clvmd to sync the nodes).  The following
> output should explain most of this (identical on node00 and node01):
>
> root at node00:~# cat /proc/mdstat
> Personalities : [raid0]
> md0 : active raid0 drbd1[1] drbd0[0]
>      117190272 blocks 64k chunks
>
> root at node00:~# pvs -a
>  PV         VG   Fmt  Attr PSize   PFree
>  /dev/md0   san0 lvm2 a-   111.76G 31.76G
>
>
> From there, logical volumes are created and shared via iscsi.  Doing
> round-robin tests on iscsi has not shown any corruption yet (this means I'm
> reading/writing to both node00 and node01).  My main concern is the striping
> portion and what is actually going on there.  I also tested the striping
> using only LVM - this appears to work fine too.
>
>
What about doing something like this instead:

- sda and sdb on both nodes
- raid0 on both nodes, so that on each node you have one md0 device
- only one drbd resource, based on md0, on both nodes
- use drbd0 device as PV for your VG
- add clvmd to your cluster layer (you need cman too for clvmd)
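The resource for that layout would use md0 as the backing device on each node. A minimal sketch of the drbd.conf section, assuming a resource named r0 and placeholder IP addresses (the hostnames node00/node01 are from the thread; everything else is an example):

```
resource r0 {
  protocol C;
  net {
    allow-two-primaries;      # required for a primary/primary setup
  }
  startup {
    become-primary-on both;   # let both nodes go primary at start
  }
  on node00 {
    device    /dev/drbd0;
    disk      /dev/md0;       # the local raid0 array backs the resource
    address   192.168.1.10:7788;   # placeholder address
    meta-disk internal;
  }
  on node01 {
    device    /dev/drbd0;
    disk      /dev/md0;
    address   192.168.1.11:7788;   # placeholder address
    meta-disk internal;
  }
}
```

With this arrangement replication happens above the stripe, so drbd sees one consistent block device per node instead of two independently replicated halves.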

I'm doing this myself, but with only one disk per node.
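The build order on each node could be sketched as follows; device names and the resource name r0 are examples, not taken from the thread, and the commands assume a drbd 8.x toolchain:

```shell
# 1. create the local raid0 stripe from the two disks (example device names)
mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=64 /dev/sda /dev/sdb

# 2. initialize drbd metadata on md0 and bring the resource up (both nodes)
drbdadm create-md r0
drbdadm up r0

# 3. on ONE node only, kick off the initial sync, then promote the peer too
# drbdadm -- --overwrite-data-of-peer primary r0

# 4. once both nodes are primary, put clustered LVM on top of the drbd device
pvcreate /dev/drbd0
vgcreate -c y san0 /dev/drbd0   # -c y marks the VG as clustered for clvmd
```

The key difference from the original setup is step order: the stripe is purely local and assembled before drbd starts, so drbd replicates the already-striped device rather than each leg separately.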