[DRBD-user] performance issue with a physical volume synchronization

Junko IKEDA tsukishima.ha at gmail.com
Fri May 20 10:32:52 CEST 2011



Hi,

We are trying to configure a DRBD resource as a Physical Volume, following
the examples in this document:
http://www.drbd.org/users-guide/s-lvm-drbd-as-pv.html

DRBD works well, but we are seeing some performance problems.
We run "dd" like this:
# dd if=/dev/zero of=/mnt/dd/10GB.dat bs=1M count=10000 oflag=direct

Then "iostat" shows some strange output:
# iostat -xmd 10 /dev/cciss/c0d3
---------------------------------------
               | wMB/s | w/s
---------------------------------------
 hdd           | 68.6  |  805
 drbd          | 47.3  |  910
 lvm on drbd   | 31.8  | 8140
---------------------------------------

In the "lvm on drbd" case, w/s is much larger than in the other cases,
CPU %util also rises to 100%, and DRBD sync/resync performance drops
significantly. Pacemaker then detects this as a DRBD error.
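For what it's worth, dividing wMB/s by w/s in the table above gives the average size of each write request the device sees. A minimal sketch of that arithmetic (using only the figures quoted in this post; the ~4 KiB result for "lvm on drbd" is what suggests the 1 MiB dd writes are being split into page-sized requests somewhere in the LVM layer):

```python
# Average write request size implied by the iostat figures above:
# wMB/s divided by w/s, converted to KiB. Values are copied from
# the table in this post.
cases = {
    "hdd":         (68.6,  805),
    "drbd":        (47.3,  910),
    "lvm on drbd": (31.8, 8140),
}

for name, (wmb_s, w_s) in cases.items():
    avg_kib = wmb_s * 1024 / w_s  # MiB/s -> KiB per request
    print(f"{name}: {avg_kib:.1f} KiB/request")
# hdd: ~87 KiB, drbd: ~53 KiB, lvm on drbd: ~4 KiB
```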

Has anyone seen a similar problem,
or are there any tips for physical volume settings?

Hardware information:
I/O scheduler: deadline
partition size: 100GB
CPU: 2 x 6 cores
Memory: 12GiB
HDD: SAS 10000rpm, 300GiB

Best Regards,
Junko IKEDA

NTT DATA INTELLILINK CORPORATION
