Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
/ 2004-07-30 01:34:12 +0200
\ Bernd Schubert:
>
> > (latency figures...)
>
> Data from a nfs-mounted directory:
>
>
>   1.) async exported   :   2.) sync exported
>                         :
[*] clearly shows an average overhead of nfs sync exports
of a factor of 10! (the max even almost a factor of 20...)
these measure how long a write([record size]) call takes,
in microseconds. (with -c -e, add in how long the fflush/fsync
takes; a small timing sketch follows below the diagrams.)
here the record size is 4k, so the actual data transfer time is
negligible. that gives good figures for the "round trip time" of
drbd protocol C:
application
 `nfs-client -> nfs-server -> drbd -> drbd-peer
                              >dio     >dio       <--- disk io
 ,nfs-client <- nfs-server <- drbd <- drbd-peer
application
without -e and without a sync export, you probably only get the time
for application -> nfs-client.
btw, with drbd proto B, you could reduce it to
application
 `nfs-client -> nfs-server -> drbd -> drbd-peer
                              >dio             /
 ,nfs-client <- nfs-server <- drbd <----------'
application
and with drbd proto A it is (when you use fsync)
application
 `nfs-client -> nfs-server -> drbd -> [ drbd-peer ]
                              >dio
 ,nfs-client <- nfs-server <- drbd
application
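for illustration, a minimal C sketch of what one of those timed samples
boils down to: one 4k write(), plus the fsync() that -e effectively adds
to the timing. (this is not iozone, and /mnt/nfs/testfile is just a
made-up placeholder path.)

/* minimal sketch, not iozone: time a single 4k write() and the
 * following fsync() in microseconds.  the path is only a placeholder
 * for a file on the nfs mount backed by drbd. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>

static long elapsed_us(struct timeval a, struct timeval b)
{
    return (b.tv_sec - a.tv_sec) * 1000000L + (b.tv_usec - a.tv_usec);
}

int main(void)
{
    char buf[4096];
    struct timeval t0, t1, t2;
    int fd = open("/mnt/nfs/testfile", O_WRONLY | O_CREAT, 0644);

    if (fd < 0) { perror("open"); return 1; }
    memset(buf, 0, sizeof(buf));

    gettimeofday(&t0, NULL);
    if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) {
        perror("write");
        return 1;
    }
    gettimeofday(&t1, NULL);
    fsync(fd);               /* roughly what -e adds to the timing */
    gettimeofday(&t2, NULL);

    printf("write: %ld us, fsync: %ld us\n",
           elapsed_us(t0, t1), elapsed_us(t1, t2));
    close(fd);
    return 0;
}

with a sync export and drbd proto C, the fsync part is where the full
round trip above shows up; with an async export and no fsync, the data
typically has not even left the client cache yet.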
I assume: -s 1m -r 4k ? with or without -c -e ?
drbd proto C ?
> N:        1024   :   N:        1024
> min:      1203   :   min:      1810
> avg:      1529   :   avg:     17103   <== [*]
> max:     37856   :   max:    695349
(roughly 1.5M/sec)
this may be -s 1g -r 1m ?
> N:        1024   :   N:        1024
> min:     91182   :   min:     95326
> avg:    110153   :   avg:    238704
> max:  18805320   :   max:    989322
for larger transfers, with sync export,
we have 1024k/0.24 sec ==> avg. throughput: 4.16M/sec
this may be -s 1g -r 4m ?
> N:         256   :   N:         256
> min:    364772   :   min:    495749
> avg:    384725   :   avg:    962792   [ factor 2.5]
> max:   4927510   :   max:   2502739
for even larger transfers, this still holds:
4M/0.96 sec ==> avg. throughput: 4.16M/sec
both max values degrade
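just to spell out the arithmetic used above and below: record size
divided by the average write latency gives the throughput. a tiny
throwaway helper, with the numbers taken from the sync avg columns:

/* sketch of the conversion: average per-write latency (us) and
 * record size give an approximate throughput in MB/sec. */
#include <stdio.h>

static double mb_per_sec(double record_mb, double avg_latency_us)
{
    return record_mb / (avg_latency_us / 1e6);
}

int main(void)
{
    printf("%.2f MB/sec\n", mb_per_sec(1.0, 238704.0)); /* ~4.2 */
    printf("%.2f MB/sec\n", mb_per_sec(4.0, 962792.0)); /* ~4.2 */
    return 0;
}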
now, what is this:
only a different drbd protocol?
other iozone options?
same assumptions: -s 1m -r 4k ? with or without -c -e ?
> N:        1024   :   N:        1024
> min:       983   :   min:      3957
> avg:      1494   :   avg:      4292   [still factor 4]
> max:      1944   :   max:    224765
this may be -s 1g -r 1m ?
> N:        1024   :   N:        1024
> min:     91151   :   min:     94015
> avg:    108411   :   avg:    410282
> max:  17054200   :   max:   1587899
this may be -s 1g -r 4m ?
> N:         256   :   N:         256
> min:    366288   :   min:    509567
> avg:    371261   :   avg:   1650030
> max:   1373654   :   max:   3948684
here the numbers for short transfers have decreased
(see the diagrams above),
but the numbers for large transfers have increased considerably...
throughput drops to 2.x MB/sec. this is strange...
wait, this was with 0.7.0, right?
drbd 0.7.1 removed a serious handicap from protocols other than 'C' ...
and you already reported that you now get an average throughput
on a single nfs client of 9 - 10 MB/sec
(iirc, with nfs sync exports of reiserfs,
drbd proto A, kernel 2.4.27-rcX)
so these figures should look very different now...
we should invent some generic "drbd_bench", I think, to get reliable,
comparable values for different setups on different hardware...
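no such tool exists yet, so purely as a sketch of the idea (the name,
the fixed 4k record size, the 1024 samples and the use of O_SYNC are
all just assumptions here): do N synchronous small writes against a
file on the setup under test and report min/avg/max latency in
microseconds, directly comparable to the latency columns above:

/* hypothetical "drbd_bench" sketch -- no such tool exists (yet).
 * it does SAMPLES O_SYNC writes of RECORD_SIZE bytes to the given
 * file and reports per-write latency min/avg/max in microseconds. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>

#define RECORD_SIZE 4096
#define SAMPLES     1024

static long elapsed_us(struct timeval a, struct timeval b)
{
    return (b.tv_sec - a.tv_sec) * 1000000L + (b.tv_usec - a.tv_usec);
}

int main(int argc, char **argv)
{
    char buf[RECORD_SIZE];
    struct timeval t0, t1;
    long dt, min = 0, max = 0, sum = 0;
    int i, fd;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <testfile on the fs under test>\n", argv[0]);
        return 1;
    }
    /* O_SYNC: write() only returns once the data is on stable
     * storage, i.e. with drbd proto C once the peer has it on
     * disk as well. */
    fd = open(argv[1], O_WRONLY | O_CREAT | O_SYNC, 0644);
    if (fd < 0) { perror("open"); return 1; }
    memset(buf, 0, sizeof(buf));

    for (i = 0; i < SAMPLES; i++) {
        gettimeofday(&t0, NULL);
        if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) {
            perror("write");
            return 1;
        }
        gettimeofday(&t1, NULL);
        dt = elapsed_us(t0, t1);
        sum += dt;
        if (i == 0 || dt < min) min = dt;
        if (dt > max) max = dt;
    }
    printf("N: %d  min: %ld  avg: %ld  max: %ld  (us)\n",
           SAMPLES, min, sum / SAMPLES, max);
    close(fd);
    return 0;
}

the point would mostly be one agreed, fixed set of parameters, so that
numbers from different setups and different hardware really are
comparable.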
Lars Ellenberg
--
please use the "List-Reply" function of your email client.