[DRBD-user] Full resync vs real-time sync

Gennadiy Nerubayev parakie at gmail.com
Fri May 1 19:16:19 CEST 2009



On Fri, May 1, 2009 at 10:34 AM, Lars Ellenberg
<lars.ellenberg at linbit.com>wrote:

> what is your micro benchmark?


Iometer... which is not particularly micro :p

> for a sequential write throughput micro benchmark,
> I suggest
>
> dd if=/dev/zero of=/dev/drbdX bs=4M count=1000 oflag=direct
>
> do variations in bs= and count=  (to reveal possible issues
> with cpu cache sizes).
>
> also do variations of
> oflag=direct
>        no page cache/buffer cache involved,
> oflag=dsync
>        completely through buffer cache/page cache,
>        but does the equivalent of "fsync" for every "bs"
> no oflag, but conv=fsync
>        completely through buffer cache/page cache,
>        and does a real fsync only once all count * bs
>        blocks are written
>
> smallish bs (< the size of your cpu cache), say bs=32k, a high count,
> and oflag=direct is closest to what the resync is doing.
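For reference, the three quoted variants can be scripted roughly as below. This is a sketch: a scratch file under /var/tmp stands in for /dev/drbdX so it can run anywhere, and bs/count are scaled down from the suggested bs=4M count=1000.

```shell
#!/bin/sh
# Sketch of the three dd variants quoted above. A scratch file stands in
# for /dev/drbdX (assumption); scale bs= and count= back up for real tests.
OUT=/var/tmp/dd_sketch.img

# O_DIRECT: no page cache/buffer cache involved
dd if=/dev/zero of="$OUT" bs=32k count=100 oflag=direct

# O_DSYNC: through the buffer cache/page cache, but the equivalent of
# an "fsync" happens for every bs-sized write
dd if=/dev/zero of="$OUT" bs=32k count=100 oflag=dsync

# Buffered, with a single real fsync once all count * bs blocks are written
dd if=/dev/zero of="$OUT" bs=32k count=100 conv=fsync

stat -c %s "$OUT"   # 32k * 100 = 3276800 bytes
```

Note that oflag=direct can fail with EINVAL on filesystems without O_DIRECT support (e.g. tmpfs), which is why the scratch file lives in /var/tmp rather than /tmp.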


I did a number of dd runs; the results are attached. The 32k direct writes
are the worst when connected.


> you can also start pinning your "dd" to a single cpu,
> preferably the same your DRBD kernel threads are running on.
> or allow only the first two cores, or whatever.
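The per-process pinning suggested above can be done without a reboot using taskset from util-linux; a sketch (the core number, scratch path, and reduced sizes are assumptions, and of= should point at /dev/drbdX for the real benchmark):

```shell
#!/bin/sh
# Sketch: pin a dd run to CPU 0 with taskset (util-linux) rather than
# booting with nosmp/maxcpus=1. Paths and core number are assumptions.
taskset -c 0 dd if=/dev/zero of=/var/tmp/pinned.img bs=32k count=10 conv=fsync
stat -c %s /var/tmp/pinned.img   # 32k * 10 = 327680 bytes
```

`taskset -c 0,1` would restrict the run to the first two cores, matching the "or allow only the first two cores" suggestion.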


Both boxes have a single dual-core, non-hyperthreaded CPU. I can repeat
the benchmarks on one core - would passing nosmp and maxcpus=1 to the kernel
be sufficient for this test case?

Thanks,

-Gennadiy

