[DRBD-user] Extremely high latency problem

Bret Mette bret.mette at dbihosting.com
Thu Jun 5 21:07:59 CEST 2014



Arnold,

I realize my last message may have come off as rude; that was not my
intention, and I apologize if it was received that way. If my test is
flawed, please explain how. As I said, I took it directly from the DRBD
manual.


- Bret


On Thu, Jun 5, 2014 at 11:16 AM, Bret Mette <bret.mette at dbihosting.com>
wrote:

> I took the 512 byte blocksize directly from the recommended latency test
> in the DRBD manual.
>
> http://www.drbd.org/users-guide/s-measure-latency.html
>
> "This test writes 1,000 512-byte chunks of data to your DRBD device, and
> then to its backing device for comparison. 512 bytes is the smallest block
> size a Linux system (on all architectures except s390) is expected to
> handle."
>
> I'm also performing the same test on a non-DRBD device which
> performs perfectly fine. So why would I want to tune my tests to yield
> better results when I have a comparison that is already pointing out a
> problem in latency?
>
>
> On Thu, Jun 5, 2014 at 11:10 AM, Arnold Krille <arnold at arnoldarts.de>
> wrote:
>
>> On Thu, 5 Jun 2014 09:30:37 -0700 Bret Mette
>> <bret.mette at dbihosting.com> wrote:
>> > dd if=/dev/zero of=./testbin  bs=512 count=1000 oflag=direct
>> > 512000 bytes (512 kB) copied, 0.153541 s, 3.3 MB/s
>> >
>> > This was run against /root/testbin which is /dev/md1 with no LVM or
>> > DRBD
>> >
>> >
>> >
>> > dd if=/dev/zero of=./testbin  bs=512 count=1000 oflag=direct
>> > 512000 bytes (512 kB) copied, 32.3254 s, 15.8 kB/s
>> >
>> > This was run against /mnt/tmp which is DRBD /dev/drbd2 backed by an
>> > LVM logical volume, with the logical volume backed by /dev/md127 while
>> > /dev/drbd2 was in the connected state
>>
>> Change your dd-parameters for meaningful results!
>>
>> Disks are read and written in 4k chunks. Writing 512 bytes actually
>> means reading 4k, replacing those 512 bytes, and writing the 4k back.
>> Slow by design!
>>
>> Use at least a 4k chunk size for dd, and use a higher count: 4k * 1000
>> is roughly 4M, and most disks today have a cache bigger than that.
>>
>> And when your test only runs for 0.15 seconds, you don't even out
>> background activity from the OS.
>>
>> Performance tests with dd should produce files several gigabytes in
>> size.
>>
>> The next question is whether you really want to optimize for the
>> linear access that dd more or less measures. Better to use a tool like
>> dbench to test random access, which is a 99.99999% (*) more common
>> usage pattern. Apart from copying big disk images or video files,
>> every other use case (even using disk images for virtual machines or
>> editing audio/video files) is random access, which means seeking on
>> the hard disk.
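A sketch of the dbench run Arnold suggests (flag spellings follow dbench's usual CLI and may differ by version; the mount point and client count are assumptions, not from the thread):

```shell
# Run 4 simulated clients against the DRBD-backed mount for 60 seconds:
# -D sets the working directory, -t the run time in seconds. Guarded so
# the sketch degrades gracefully where dbench isn't installed.
if command -v dbench >/dev/null 2>&1; then
    dbench -D /mnt/tmp -t 60 4
else
    echo "dbench not installed"
fi
```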
>>
>> Have fun,
>>
>> Arnold
>>
>> (*) Could be my estimation is a few 9s short...
>>
>> _______________________________________________
>> drbd-user mailing list
>> drbd-user at lists.linbit.com
>> http://lists.linbit.com/mailman/listinfo/drbd-user
>>
>>
>