[DRBD-user] drbd-0.7.0 with linux-2.4. slow?

Bernd Schubert bernd-schubert at web.de
Thu Jul 29 15:24:15 CEST 2004

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


On Thursday 29 July 2004 12:45, Lars Ellenberg wrote:
> / 2004-07-29 01:10:31 +0200
>
> \ Bernd Schubert:
> > > can you post more verbose test results on some website, or here?
> > > which file system?
> >
> > The filesystem is reiserfs.
>
> btw, what did drbd report about its estimated
> syncer performance during initial full sync, or during any other sync?
> (grep for "drbd.: Resync done" in syslog)

This seems to vary pretty strongly:

syslog.6:Jul 19 13:13:37 hamilton1 kernel: drbd0: Resync done (total 21 sec; 28867 K/sec)
syslog.6:Jul 19 13:13:42 hamilton1 kernel: drbd1: Resync done (total 25 sec; 5734 K/sec)
syslog.6:Jul 19 13:23:17 hamilton1 kernel: drbd2: Resync done (total 33 sec; 26686 K/sec)
syslog.6:Jul 19 13:31:38 hamilton1 kernel: drbd3: Resync done (total 37 sec; 22939 K/sec)
syslog.6:Jul 19 13:34:12 hamilton1 kernel: drbd4: Resync done (total 171 sec; 27792 K/sec)

syslog.4:Jul 23 10:12:54 hamilton1 kernel: drbd0: Resync done (total 28 sec; 37595 K/sec)
syslog.4:Jul 23 10:13:22 hamilton1 kernel: drbd1: Resync done (total 56 sec; 18797 K/sec)
syslog.4:Jul 23 10:13:51 hamilton1 kernel: drbd2: Resync done (total 85 sec; 12384 K/sec)
syslog.4:Jul 23 10:14:01 hamilton1 kernel: drbd3: Resync done (total 93 sec; 3577 K/sec)
syslog.4:Jul 23 10:14:29 hamilton1 kernel: drbd4: Resync done (total 120 sec; 8772 K/sec)

syslog.1:Jul 26 11:11:55 hamilton1 kernel: drbd3: Resync done (total 12 sec; 27722 K/sec)
syslog.1:Jul 26 11:12:11 hamilton1 kernel: drbd0: Resync done (total 33 sec; 31899 K/sec)
syslog.1:Jul 26 11:12:28 hamilton1 kernel: drbd4: Resync done (total 45 sec; 23392 K/sec)
syslog.1:Jul 26 11:12:41 hamilton1 kernel: drbd1: Resync done (total 61 sec; 16786 K/sec)
syslog.1:Jul 26 11:13:13 hamilton1 kernel: drbd2: Resync done (total 89 sec; 11827 K/sec)


Are you sure that these numbers are correct at all? When I look at 
/proc/drbd during a sync, I always get the impression that only the average 
sync rate reported for the first syncing device can be correct. 
IMHO it seems to calculate the average sync rate for the other devices 
including the wait time, and therefore reports smaller numbers.
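
For cross-checking I could measure the momentary rate per device directly 
from /proc/drbd instead of relying on the final summary. A rough sketch, 
assuming the ns: field really is the network-send counter in KB and that 
the devices appear in /proc/drbd in minor order:

  #!/bin/sh
  # sample the ns: counters twice, 10 seconds apart, and print the
  # per-device delta as K/sec
  T=10
  grep -o 'ns:[0-9]*' /proc/drbd | cut -d: -f2 > /tmp/ns.before
  sleep $T
  grep -o 'ns:[0-9]*' /proc/drbd | cut -d: -f2 > /tmp/ns.after
  paste /tmp/ns.before /tmp/ns.after | \
      awk -v t=$T '{ printf "drbd%d: %d K/sec\n", NR-1, ($2-$1)/t }'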

>
> do you run iozone on some client on the nfs mount,
> or on the host itself on a direct mount?

Those numbers came from the direct mount, of course. If we got those 
numbers on the clients, we certainly wouldn't complain ;)

>
> [ snip iozone output ]
>
> I'd also be interested in
>  rm wol.dat rwol.dat
>  iozone -s 4m -r 4k -i0 -o -c -e -Q
>  iozone -s 1g -r 1m -r4m -i0 -o -c -e -Q
> and the resulting output of
>  awk '
>   /^ +[0-9]+ +[0-9]+$/ {
> 	if ($2 > max) max=$2;
> 	if (min==0 || $2 < min) min=$2;
> 	sum+=$2;
> 	N++;
>   }
>   /^$/ {
> 	printf "N:\t%8d\nmin:\t%8d\navg:\t%8d\nmax:\t%8d\n",
> 		N, min, sum/N, max;
> 	N=min=max=sum=0;
>   }' wol.dat rwol.dat
>
> (latency figures...)
>
> generally you want to include -c -e when running on nfs.

O.k., I will produce these numbers late in the evening; probably fewer
users will be writing to the nfs mounts at that time.
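
To keep the runs comparable across the five drbd devices I will probably 
wrap it into a small loop, roughly like this (the /mnt/drbd* mount points 
are only placeholders for wherever the devices are actually mounted here; 
the 1g run from above would go the same way):

  #!/bin/sh
  # run the suggested -Q latency test on each drbd-backed filesystem and
  # summarize the per-offset write latencies from wol.dat / rwol.dat
  for dir in /mnt/drbd*; do
      cd "$dir" || continue
      rm -f wol.dat rwol.dat
      iozone -s 4m -r 4k -i0 -o -c -e -Q > /dev/null
      echo "== $dir =="
      awk '
        /^ +[0-9]+ +[0-9]+$/ { if ($2 > max) max=$2;
                               if (min==0 || $2 < min) min=$2;
                               sum+=$2; N++ }
        /^$/ { if (N) printf "N: %6d  min: %8d  avg: %8d  max: %8d\n",
                             N, min, sum/N, max;
               N=min=max=sum=0 }
      ' wol.dat rwol.dat
  done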

>
> you are aware that -i0 only tests "linear" access.
> for more interesting figures, include -i2 and/or -i8
> (random/random_mix)

Yeah, but for finding the bottleneck it was sufficient.

>
> > I guess that we have too much memory for this test (3GB), so the numbers
> > are slightly unrealistic [for drbd-unconnected iozone-async ...]
>
> do they change with -c -e ?

Probably; I will repeat that in the evening as well. When I wrote 4GB, the 
rate from test 2.2 dropped to 60MB/s.
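
The rerun would then look roughly like this, with a file clearly larger 
than the 3GB of RAM and -c -e so the page cache cannot hide the real 
write rate (the record size is only picked as an example):

  # unconnected write test; -c includes close() and -e includes fsync()
  # in the timing, so cached writes don't inflate the numbers
  iozone -s 4g -i0 -r 1m -c -e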

> now, 2.4 only has *one* thread to flush *all* devices.
> 2.6. can basically flush all devices "in parallel" ...
> what backing storage device(s) did you use?

It's one ide-to-scsi raid array. Hmm, when one only writes locally to 
one of the drbd partitions, several flush threads shouldn't be required, 
should they? Or does flushing to the network also count?
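
If it really is the single 2.4 flush thread that limits things, it should 
at least show up busy while iozone runs; that much I can check with the 
standard 2.4 threads, nothing drbd-specific:

  # buffer flushing on 2.4 is done by the bdflush and kupdated kernel
  # threads; their tunables live in /proc/sys/vm/bdflush
  ps ax | egrep 'bdflush|kupdated'
  cat /proc/sys/vm/bdflush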

>
> process scheduler latency as well as interrupt latency may have an
> impact (HZ value). io-scheduler may have an impact.
> maybe something has changed in the implementation of the network stack,
> or likely in the nfs implementation, also.

What a pity that it still runs so unstably.

> >
> > O.k., I will do that. Switching between the protocols won't do any harm
> > to the filesystem, will it?
>
> well...
> if it does, it's a serious bug.
> it did not happen to me so far.

Ok, I will try it then. Anyway, we still have backups.
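
If I understand it right, that is just a matter of changing the protocol 
line per resource in drbd.conf and reloading the configuration, roughly 
like this (assuming drbdadm adjust picks up a changed protocol in 0.7):

  # per resource in /etc/drbd.conf:
  #     protocol A;        # instead of C, for the test
  drbdadm adjust all       # or stop/start the resources via the init script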

>
>
> what you can do to try and tune drbd:
> play with sndbuf-size
>      increasing it may help throughput,
>      decreasing it may help latency,
>      ... or vice versa ... it depends ...

Will try this as well (see the drbd.conf sketch further below, after the 
max-buffers point).

> play with the mtu of the link (jumbo frames)

Already set to an MTU of 9000 from the very beginning.

> play with max-buffers

Will do this as well.
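
For reference, roughly where these knobs would go in drbd.conf, if I read 
the 0.7 example configuration right (the values, the peer host name, the 
devices and the addresses are only placeholders, not recommendations):

  resource r0 {
    protocol C;
    net {
      sndbuf-size 512k;    # bigger may help throughput, smaller may help latency
      max-buffers 2048;    # cap on the buffers drbd may allocate for requests
    }
    syncer {
      rate 30M;            # resync bandwidth ceiling, placeholder value
    }
    on hamilton1 {
      device    /dev/drbd0;
      disk      /dev/sda5;
      address   192.168.1.1:7788;
      meta-disk internal;
    }
    on hamilton2 {
      device    /dev/drbd0;
      disk      /dev/sda5;
      address   192.168.1.2:7788;
      meta-disk internal;
    }
  }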

Maybe I will also try to use sysconnect's latest drivers; however, under 2.6 
they showed some instabilities which I'm happy to be rid of under 2.4. 
I already complained about their latest published driver and they 
promised to send me new ones.
This server still causes many unexpected problems (drbd speed is only one 
of them :-/ ).

Thanks a lot for your great help,
	Bernd


