[DRBD-user] the performance issue with a physical volume synchronization

Junko IKEDA tsukishima.ha at gmail.com
Mon May 23 12:44:08 CEST 2011

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hi,

It seems that the LVM2 included in RHEL 5 has the same problem,
so we ran the same test on RHEL 6.1.
In this case, DRBD ran into an I/O error.

Does DRBD support a merge function ("merge fn")?
I heard that device-mapper on RHEL 6 splits I/O down to page size if the
low-level block device does not provide one.
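
If it helps to narrow this down, one thing I can check is whether requests
really arrive at DRBD split down to page size, for example by comparing the
queue limits and watching the average request size during a dd run
(the device names dm-0 and drbd0 below are only examples, not necessarily
the ones in our setup):

# cat /sys/block/dm-0/queue/max_sectors_kb
# cat /sys/block/drbd0/queue/max_sectors_kb
# iostat -x 1    (avgrq-sz shows the average request size, in sectors, reaching the device)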

Thanks,
Junko IKEDA

On 21 May 2011 at 4:22, Junko IKEDA <tsukishima.ha at gmail.com> wrote:
> Hi,
>
>> DRBD version, LVM version, Device mapper version (kernel version),
>> distribution?
>
> DRBD version
> 8.3.10 (we first noticed this with 8.3.5 and updated DRBD after that)
>
> LVM version
> LVM2 (included in RHEL 5.2)
>
> kernel version
> 2.6.18-92.EL5 (x86_64)
>
> distribution
> RHEL 5.2
>
>> What about oflag=dsync instead of direct?
>> What about not specifying any oflag, but doing
>> # dd if=/dev/zero of=/mnt/dd/10GB.dat bs=1M count=10000 conv=notrunc,fsync
>
> OK, I'll try it.
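
If I understand the suggestions correctly, the runs would look roughly like
this (same file and sizes as in our earlier tests):

# dd if=/dev/zero of=/mnt/dd/10GB.dat bs=1M count=10000 oflag=dsync
# dd if=/dev/zero of=/mnt/dd/10GB.dat bs=1M count=10000 conv=notrunc,fsync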
>
>> What about reads?
>> With iflag=direct?
>
> We just felt that reads were also slow,
> but we did not record any actual numbers.
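
I will record numbers for reads this time; presumably something like the
following, reading back the file written above:

# dd if=/mnt/dd/10GB.dat of=/dev/null bs=1M iflag=direct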
>
>> What about without iflag, but still cache cold,
>> after an echo 3 > /proc/sys/vm/drop_caches?
>
> Yes, we cleared the caches via drop_caches,
> and also unmounted and remounted the device.
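
For completeness, the cache-cold procedure between runs is roughly along
these lines (the device name /dev/drbd0 is only an example for our setup):

# sync
# echo 3 > /proc/sys/vm/drop_caches
# umount /mnt/dd
# mount /dev/drbd0 /mnt/dd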
>
>> Do you have an other data point with more recent (all of it),
>> especially with more recent LVM + upstream kernel?
>
> We tried the same test on RHEL 5.2, 5.5, and 6.1,
> but the results were the same.
>
> Thanks,
> Junko IKEDA
>


