On Thu, Sep 17, 2009 at 7:36 PM, Matthew Ingersoll <matth@digitalwest.net> wrote:
> I'm testing a two-node primary/primary DRBD setup, but have a few concerns about the use of md for RAID 0 striping. The setup is as follows:
>
> Each node runs two DRBD devices in a primary/primary setup. These devices are then striped using the mdadm utility. From there, logical volumes are set up using LVM (I'm running ais + clvmd to keep the nodes in sync). The following output should explain most of this (identical on node00 and node01):
>
> root@node00:~# cat /proc/mdstat
> Personalities : [raid0]
> md0 : active raid0 drbd1[1] drbd0[0]
>       117190272 blocks 64k chunks
>
> root@node00:~# pvs -a
>   PV       VG   Fmt  Attr PSize   PFree
>   /dev/md0 san0 lvm2 a-   111.76G 31.76G
>
> From there, logical volumes are created and shared via iSCSI. Doing round-robin tests over iSCSI has not shown any corruption yet (meaning I'm reading from and writing to both node00 and node01). My main concern is the striping portion and what is actually going on there. I also tested the striping using only LVM; this appears to work fine too.

And what about doing something like this instead (rough sketch after the list):

- sda and sdb on both nodes
- RAID 0 on both nodes, so that on each node you have one md0 device
- only one DRBD resource, based on md0, on both nodes
- use the drbd0 device as the PV for your VG
- add clvmd to your cluster layer (you need cman too for clvmd)
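
For the md and DRBD layers that could look roughly like this. Untested sketch: the resource name r0, the IP addresses, and the port are placeholders, and the config is DRBD 8.x syntax; adjust to your version.

# on each node: stripe the two disks into a single md device
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb

# one DRBD resource on top of md0, e.g. in /etc/drbd.conf
resource r0 {
  protocol C;
  net {
    allow-two-primaries;          # required for primary/primary
  }
  on node00 {
    device    /dev/drbd0;
    disk      /dev/md0;
    address   192.168.0.10:7788;  # placeholder address
    meta-disk internal;
  }
  on node01 {
    device    /dev/drbd0;
    disk      /dev/md0;
    address   192.168.0.11:7788;  # placeholder address
    meta-disk internal;
  }
}

# run create-md and up on both nodes; force the initial sync
# from one node only, then promote the other node as well
drbdadm create-md r0
drbdadm up r0
drbdadm -- --overwrite-data-of-peer primary r0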
I'm doing it this way, but with only one disk per node.
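
The LVM part on top of drbd0, with clvmd already running, would then be something like this (the LV name and size are just examples; san0 is the VG name from your output):

pvcreate /dev/drbd0
vgcreate -c y san0 /dev/drbd0   # -c y marks the VG as clustered for clvmd
lvcreate -L 10G -n lv0 san0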