On Aug 9, 2011, at 7:50 AM, Jean-Francois Chevrette wrote:

> Hi everyone,
>
> we have this fairly simple setup where we have two CentOS 5.5 nodes
> running xen 3.4.2 compiled from sources (kernel 2.6.18-xen) and DRBD
> 8.3.7 also compiled from sources. Both nodes have two data partitions
> which are synced by DRBD. Each node is running a single VM from either
> of the partitions in a standard Primary/Secondary mode. This way each
> node can fully utilize its CPU and memory resources, and we still have
> storage failover capabilities. The VMs are using the drbd devices
> directly (no LVM and such). Both nodes are connected through a gigabit
> ethernet port and a crossover cable.
>
> Over time, as the VMs' resource usage rose, they started behaving
> strangely. After investigating, everything points to an IO problem, as
> reads and writes are very slow.
>
> My tests have shown that while the DRBD replication is connected and
> running, IO performance is very bad. Not only is it bad inside the VM,
> but also on the host node. It is as if DRBD caused the underlying IO
> subsystem to become very slow. I should say that the servers are using
> Adaptec 5405 raid cards with BBUs and write cache enabled. As for
> disks, we have 4x SATA drives configured as a RAID-10.
>
> As soon as I disconnect DRBD, the IO performance is way better both
> inside and outside the VMs.
<snip>

Hi Jean-Francois,

I have also been having major performance problems using a similar setup. One thing that makes me think there might be two different problems at hand here, though, is that you report both reads and writes being slow -- for me, read performance has been OK, but DRBD slows down my disk writes enormously.

Have you tried running the throughput & latency testing scripts in the DRBD user guide? If so, I'd be curious to see what results you get.
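[For readers without the user guide's scripts handy, here is a rough stand-in for that kind of sequential-write throughput test. The file path, sizes, and block count are arbitrary placeholders, not values from the DRBD guide; run it against a file on the DRBD-backed filesystem and again against one on the backing store to compare.]

```python
import os
import time

def write_throughput(path, total_mb=64, block_kb=1024):
    """Measure sequential write throughput (MB/s), syncing at the end.

    A crude stand-in for the dd-style tests in the DRBD user guide:
    write total_mb megabytes in block_kb-sized chunks, fsync once,
    and report megabytes per wall-clock second.
    """
    block = b"\0" * (block_kb * 1024)
    n_blocks = (total_mb * 1024) // block_kb
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    start = time.perf_counter()
    for _ in range(n_blocks):
        os.write(fd, block)
    os.fsync(fd)                      # make sure data actually hits disk
    elapsed = time.perf_counter() - start
    os.close(fd)
    os.remove(path)                   # clean up the scratch file
    return total_mb / elapsed

# e.g. print("%.1f MB/s" % write_throughput("/mnt/drbd-fs/tp_test"))
```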
On my system I get about 50% of the throughput via the DRBD device that I get on the underlying LVM volume, and about a 100x increase in latency via DRBD as compared to the raw LogVol, so my systems become almost completely unresponsive when MySQL starts doing lots of small writes (for example, I've measured syslog's fsync()s taking 5-10 full seconds to complete).

My current theory is that this may be some nasty interaction with the 2.6.18-based Red Hat (or CentOS, in your case) kernel, since that's what I'm running, and another poster here said he'd been getting poor performance on a RH system but good performance on Fedora (with a newer kernel). I'm currently attempting it on a vanilla 3.0.1 kernel I compiled from a kernel.org source tarball and xen 4.1.1 (also compiled from source), but I'm not sure whether I'll be able to get a full two-node system set up that way in order to do a really comprehensive test.

If you find out anything more about it or discover a solution, please do post to the list!

Thanks,
Zev Weiss
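[The fsync() stalls described above can be measured directly. A minimal sketch, assuming nothing beyond the standard library -- the path and iteration counts are illustrative, and the averaging is cruder than what a proper latency test would do:]

```python
import os
import time

def fsync_latency(path, writes=100, size=512):
    """Average fsync() latency (seconds) for small appended writes.

    Performs `writes` writes of `size` bytes, fsync()ing after each,
    and returns the mean wall-clock time per write+fsync pair. Point
    `path` at a file on the filesystem under test (e.g. one backed
    by a DRBD device) to see replication-induced latency.
    """
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    buf = b"\0" * size
    start = time.perf_counter()
    for _ in range(writes):
        os.write(fd, buf)
        os.fsync(fd)                  # force each write to stable storage
    elapsed = time.perf_counter() - start
    os.close(fd)
    os.remove(path)                   # clean up the scratch file
    return elapsed / writes

# e.g. print("%.3f ms/fsync" % (fsync_latency("/mnt/drbd-fs/lat_test") * 1000))
```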