With your setup, your read performance is going to be limited by your RAID selection. Be prepared to experiment and to document the performance of the various nodes.

With a 1G interconnect, write performance will be dictated by network speed. You'll want jumbo frames at a minimum, and you might have to mess with buffer sizes. Keep in mind that latency is just as important as throughput.
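
By way of illustration, the sort of changes involved might look something like this. This is only a rough sketch: the interface name (eth1) and every value below are placeholders you'd have to tune and benchmark on your own hardware.

    # on both nodes, on the dedicated cross-connect NIC (eth1 here)
    ip link set dev eth1 mtu 9000

    # allow larger socket buffers kernel-wide
    sysctl -w net.core.rmem_max=8388608
    sysctl -w net.core.wmem_max=8388608

and in drbd.conf, the net section has the relevant knobs, for example:

    net {
        sndbuf-size 512k;   # DRBD send buffer size; experiment with values
        max-buffers 8000;   # buffers DRBD may use on the receiving side
    }

When you test, check latency across the link as well as raw throughput; something like "ping -M do -s 8972" between the nodes will also confirm that jumbo frames actually made it end to end.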

There is a performance tuning page on the LINBIT site. I spent a day messing with various parameters, but found no appreciable improvements.

With 4 drives, I think you'll get better performance with RAID 10.
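
If you do try RAID 10, rebuilding the md array is straightforward. The device names below are placeholders for your four drives, and the create step wipes whatever was on the old array, so this is for the before/after testing phase, not a live system:

    # stop the existing RAID6 array
    mdadm --stop /dev/md0

    # recreate the same disks as a 4-disk RAID10 (destroys the old array's data)
    mdadm --create /dev/md0 --level=10 --raid-devices=4 \
        /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2

With four drives, RAID10 gives you the same usable capacity as RAID6 but avoids the parity read-modify-write penalty on small writes, which is exactly the kind of I/O a busy DomU generates.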

However, I think you'll need to install a benchmark like iozone and spend a lot of time doing before/after comparisons.
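
As a hypothetical starting point (the file size and path are placeholders; make the test file larger than the DomU's RAM so you're measuring the disks rather than the page cache):

    # sequential write/rewrite (-i 0) and read/reread (-i 1), file sizes up to 4 GB
    iozone -a -i 0 -i 1 -g 4g -f /mnt/backup/iozone.tmp

Run the same test inside the DomU against each LV, then in Dom0 against the bare LV and against the md device underneath it, and note where the numbers fall off; that tells you which layer (RAID, LVM, DRBD, or Xen's block layer) is costing you the most.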

Mike

On Sat, Jun 5, 2010 at 10:27 AM, Miles Fidelman <mfidelman@meetinghouse.net> wrote:
> Hi Folks,
>
> I've been doing some experimenting to see how far I can push some old hardware into a virtualized environment - partially to see how much use I can get out of the hardware, and partially to learn more about the behavior of, and interactions between, software RAID, LVM, DRBD, and Xen.
>
> What I'm finding is that it's really easy to get into a state where one of my VMs is spending all of its time in i/o wait (95%+). Other times, everything behaves fine.
>
> So... I'm curious about where the bottlenecks are.
>
> What I'm running:
> - two machines, 4 disk drives each, two 1G ethernet ports (1 each to the outside world, 1 each as a cross-connect)
> - each machine runs Xen 3 on top of Debian Lenny (the basic install)
> - very basic Dom0s - just running the hypervisor and i/o (including disk management)
> ---- software RAID6 (md)
> ---- LVM
> ---- DRBD
> ---- heartbeat to provide some failure migration
> - each Xen VM uses 2 DRBD volumes - one for root, one for swap
> - one of the VMs has a third volume, used for backup copies of files
>
> What I'd like to dig into:
> - Dom0 plus one DomU running on each box
> - only one of the DomUs is doing very much - and it's running about 90% idle, the rest split between user cycles and wait cycles
> - start a disk intensive job on the DomU (e.g., tar a bunch of files on the root LV, put them on the backup LV)
> - i/o WAIT goes through the roof
>
> It's pretty clear that this configuration generates a lot of complicated disk activity. Since DRBD is at the top of the disk stack, I figure this list is a good place to ask the question:
>
> Any suggestions on how to track down where the delays are creeping in, what might be tunable, and any good references on these issues?
>
> Thanks very much,
>
> Miles Fidelman
>
> --
> In theory, there is no difference between theory and practice.
> In practice, there is. .... Yogi Berra

--
Dr. Michael Iverson
Director of Information Technology
Hatteras Printing