<div class="gmail_quote">On Thu, Sep 17, 2009 at 11:42 PM, Michael Tokarev <span dir="ltr"><<a href="mailto:mjt@tls.msk.ru">mjt@tls.msk.ru</a>></span> wrote:<br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
> Gianluca Cecchi wrote:
> []
>> And what about instead doing something like this:
>>
>> - sda and sdb on both nodes
>> - raid0 on both nodes so that on each node you have one md0 device
>> - only one drbd resource based on md0 on both
>
> This is wrong. If either disk fails, the whole raid0 fails,
> i.e., half the thing fails.
>
> With the reverse approach, i.e., two drbd resources on top of
> the disks and raid0 on top of the drbd resources, if any disk
> fails only that 1/4 of the whole thing fails.
>
> It's classical raid10. It's always done like
> disk => mirroring => striping, but never like
> disk => striping => mirroring.
>
>> - use drbd0 device as PV for your VG
>> - add clvmd to your cluster layer (you need cman too for clvmd)
>>
>> I'm doing it but only with one disk per node
>
> With one disk per node it's simple raid1.
>
> /mjt

My approach is also done in practice, and it is named raid0+1, while the one
from the initial post is the so-called raid1+0 (aka raid10, as you wrote).

See also http://en.wikipedia.org/wiki/Nested_RAID_levels
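To make the difference concrete, here is a rough sketch of the two orderings.
Node names, addresses, device and resource names are only placeholders, the
drbd.conf fragment is abbreviated, and the two orderings are alternatives,
not meant to be combined -- this is not my actual configuration:

  # Ordering A -- raid0+1, the one I described: stripe first, then replicate.
  # On each node, stripe the two local disks into a single md device:
  mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb
  # A single drbd resource then replicates the whole stripe between the nodes;
  # the /etc/drbd.conf fragment would be something like:
  #   resource r0 {
  #     on node1 { device /dev/drbd0; disk /dev/md0; address 10.0.0.1:7788; meta-disk internal; }
  #     on node2 { device /dev/drbd0; disk /dev/md0; address 10.0.0.2:7788; meta-disk internal; }
  #   }
  drbdadm create-md r0 && drbdadm up r0

  # Ordering B -- raid1+0 (raid10), the one you describe: replicate each disk
  # first, then stripe over the replicated devices. Here drbd.conf would
  # define r0 on /dev/sda -> /dev/drbd0 and r1 on /dev/sdb -> /dev/drbd1:
  drbdadm create-md r0 && drbdadm up r0
  drbdadm create-md r1 && drbdadm up r1
  # On the node that is primary for both resources, stripe across them:
  mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/drbd0 /dev/drbd1

In ordering B a failed sda only degrades drbd0, which can keep serving I/O
from its peer (depending on the on-io-error policy), while in ordering A a
failed sda takes down the whole local md0 and the node has to rely entirely
on the remote copy.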
BTW: in general, the failures to take care of happen not only at the hw level
(the hard disk in your comment) but also in sw (in this scenario, possibly in
the sw raid layer or the drbd layer, for example).

Gianluca
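
P.S. For the archives, the LVM layer on top of the replicated device looks
roughly like this in my one-disk-per-node setup. The VG/LV names and the size
are only examples, and it has to run on a node where drbd0 is primary, with
cman and clvmd running:

  pvcreate /dev/drbd0                 # the drbd device is the physical volume
  vgcreate -c y vg_drbd /dev/drbd0    # clustered VG, managed through clvmd
  lvcreate -n lv_data -L 10G vg_drbd  # the LV is then known cluster-wide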