Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
On Fri, May 20, 2011 at 05:32:52PM +0900, Junko IKEDA wrote:
> Hi,
>
> We are trying to configure a DRBD resource as a Physical Volume, using
> examples from the following document:
> http://www.drbd.org/users-guide/s-lvm-drbd-as-pv.html
>
> DRBD works well, but there are some performance problems.
> We run "dd" like this:
> # dd if=/dev/zero of=/mnt/dd/10GB.dat bs=1M count=10000 oflag=direct
>
> then "iostat" shows strange output:
> # iostat -xmd 10 /dev/cciss/c0d3
> -------------------------------------------
> |             | wMB/s | w/s
> -------------------------------------------
> | hdd         | 68.6  |  805
> | drbd        | 47.3  |  910
> | lvm on drbd | 31.8  | 8140
> -------------------------------------------
>
> In the "lvm on drbd" case, w/s is much larger than in the other cases,
> CPU %util also rises to 100%, and DRBD sync/resync performance drops
> significantly. Pacemaker then detects this as a DRBD failure.
>
> Has anyone ever seen a similar problem,
> or are there any tips for the physical volume settings?

DRBD version, LVM version, device mapper version (kernel version),
distribution?

What about oflag=dsync instead of direct?

What about not specifying any oflag, but doing
# dd if=/dev/zero of=/mnt/dd/10GB.dat bs=1M count=10000 conv=notrunc,fsync

What about reads? With iflag=direct?
Without iflag, but still cache cold, after an
echo 3 > /proc/sys/vm/drop_caches?

Do you have another data point with more recent versions (of all of it),
especially a more recent LVM + upstream kernel? Or on older distributions?

> Hardware information:
> I/O scheduler: deadline
> partition size: 100GB
> CPU: 6 core x 2
> Memory: 12GiB
> HDD: SAS 10000rpm, 300GiB
>
> Best Regards,
> Junko IKEDA
>
> NTT DATA INTELLILINK CORPORATION

--
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.

__
please don't Cc me, but send to list -- I'm subscribed
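
Spelled out, the test variants suggested above look roughly like the
commands below. The write commands are quoted from the thread; the read
commands are only a sketch of what is being asked for, reusing the file
and device names from the original mail, so treat them as illustrative
rather than as a prescribed procedure.

Write, O_DIRECT (the original test):
# dd if=/dev/zero of=/mnt/dd/10GB.dat bs=1M count=10000 oflag=direct

Write, O_DSYNC instead of O_DIRECT:
# dd if=/dev/zero of=/mnt/dd/10GB.dat bs=1M count=10000 oflag=dsync

Buffered write, fsync at the end:
# dd if=/dev/zero of=/mnt/dd/10GB.dat bs=1M count=10000 conv=notrunc,fsync

Direct read:
# dd if=/mnt/dd/10GB.dat of=/dev/null bs=1M iflag=direct

Buffered read, cache cold:
# echo 3 > /proc/sys/vm/drop_caches
# dd if=/mnt/dd/10GB.dat of=/dev/null bs=1M

In another terminal, watch request counts and sizes while each test runs:
# iostat -xmd 10 /dev/cciss/c0d3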