[DRBD-user] write performance for DRBD devices seems to be slow

Warren Beldad advisory22 at gmail.com
Wed Jul 20 14:23:20 CEST 2005



Here are my new results with both max-buffers and max-epoch-size set to
the maximum. If I have it right, that is:
max-buffers 131072;
max-epoch-size 20000;

File size set to 4096 KB
	Record Size 4 KB
	SYNC Mode. 
	Include fsync in write timing
	Include close in write timing
	Command line used: ./iozone -s4m -r4k -o -e -c
	Output is in Kbytes/sec
	Time Resolution = 0.000001 seconds.
	Processor cache size set to 1024 Kbytes.
	Processor cache line size set to 32 bytes.
	File stride size set to 17 * record size.

                                                            random  random    bkwd  record  stride
              KB  reclen   write rewrite    read    reread    read   write    read rewrite    read   fwrite frewrite   fread  freread
            4096       4     680    5126  598212   590718  518354    2568  520067    4272  520464     9865     9076 1144118  1204691

It seems nothing has changed. These two parameters, max-buffers and
max-epoch-size, are under the net section, which as I understand it
covers the network configuration (the size of the TCP socket send
buffer, etc., according to the manual). So I suspect they do not
affect performance on the raw device. I am looking for tuning
parameters for the raw device itself, but it seems there are none.
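For reference, here is roughly where those options sit in drbd.conf; this is a sketch only, with the resource name r0 and the sndbuf-size value as placeholders (sndbuf-size being the TCP send-buffer option the manual refers to):

```
resource r0 {
  net {
    # These tune the replication link, not the raw backing device:
    max-buffers    131072;  # receive-side buffer pages
    max-epoch-size 20000;   # max write requests per barrier epoch
    sndbuf-size    512k;    # TCP socket send buffer (placeholder value)
  }
}
```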
thanks,
warren 
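For anyone reproducing the dd numbers quoted further down: as Lars notes, plain dd does not fsync, so its write timing mostly measures the page cache. A sketch of a sync-aware variant, assuming GNU coreutils dd (for conv=fsync) and root access (for dropping caches):

```shell
# Write test: conv=fsync makes dd call fsync() on the output file
# before exiting, so the sync cost is included in the reported time.
time dd if=/dev/zero of=testfile.txt bs=512k count=2000 conv=fsync

# Read test: drop the page cache first, otherwise the file is
# reread from RAM rather than from disk (needs root).
sync
echo 3 > /proc/sys/vm/drop_caches
time dd if=testfile.txt of=/dev/null bs=512k
```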

On 7/13/05, Diego Julian Remolina <dijuremo at ibb.gatech.edu> wrote:
> If you were looking for standard benchmarks, you should probably try bonnie++
> from: http://www.coker.com.au/bonnie++/
> 
> Here is the link to the readme file that explains how the tests are done and
> how the reads and writes are performed.
> 
> http://www.coker.com.au/bonnie++/readme.html
> 
> I am also attaching a file with my results from running bonnie++ on top of an
> Areca ARC-1160 PCI-X-to-SATA RAID controller.  I run a total of 15 disks: 2
> mirrored for the OS, 12 in RAID 10 for storage, and 1 hot spare.
> 
> I have 1.5 TB of storage on my raid10 array, split into two partitions: part1,
> which is 1 TB, and part2, which is the rest.  I have only tested part1.  I ran
> bonnie++ 3 times each on the raw device and on the drbd device, and took the
> average of the 3 runs.  Here is the comparison:
> 
> Sequential Output Per Char using putc():  drbd performs lower:  5.54%
> Sequential Output Block using write(2):   drbd performs lower: 46.49%
> Sequential Output Rewrite:                drbd performs lower: 23.99%
> 
> However, keep in mind that with drbd, I am still getting
> 47MB/s per char, 123MB/s on block and 65MB/s on rewrites, which is still pretty
> good.
> 
> All tests were performed using ext3 with write-back caching enabled on the Areca
> raid controller. The test machine has dual Opteron 270 CPUs and 4 GB of RAM
> (which is why the test file size is 8 GB), running RHEL4.
> 
> Here is a link to another page where bonnie++ was used to compare raid 5 vs
> raid10 but no drbd is involved, you may want to use those results for reference.
> 
> http://johnleach.co.uk/documents/benchmarks/raidperf.html
> 
> Diego
> 
> Quoting Roger Tsang <perj8 at hotmail.com>:
> 
> > Try increasing your max-buffers and max-epoch-size to MAX.
> >
> >
> > >To: drbd-user at lists.linbit.com
> > >Subject: Re: [DRBD-user] write performance for DRBD devices seems to be
> > >slow
> > >Date: Tue, 12 Jul 2005 19:27:29 +0800
> > >
> > >OK, I have added my iozone results below.
> > >I am new to drbd, maybe a month or two. Does anyone have standard
> > >benchmark results for drbd, or where can I find them? I am really
> > >worried about the write performance of our drbd; the difference is
> > >just too big, and it will be even bigger under NFS and Samba.
> > >
> > >LVM2 + XFS
> > >File size set to 4096 KB
> > >     Record Size 4 KB
> > >     SYNC Mode.
> > >     Include fsync in write timing
> > >     Include close in write timing
> > >     Command line used: ./iozone -s4m -r4k -o -e -c
> > >     Output is in Kbytes/sec
> > >     Time Resolution = 0.000001 seconds.
> > >     Processor cache size set to 1024 Kbytes.
> > >     Processor cache line size set to 32 bytes.
> > >     File stride size set to 17 * record size.
> > >                                                            random  random    bkwd  record  stride
> > >              KB  reclen   write rewrite    read    reread    read   write    read rewrite    read   fwrite frewrite   fread  freread
> > >            4096       4   11680   35386 1126176  1158032  926910   34997  939673   35074  907599    64502    68108 1016369  1079917
> > >
> > >LVM2 + DRBD + XFS
> > >File size set to 4096 KB
> > >     Record Size 4 KB
> > >     SYNC Mode.
> > >     Include fsync in write timing
> > >     Include close in write timing
> > >     Command line used: ./iozone -s4m -r4k -o -e -c
> > >     Output is in Kbytes/sec
> > >     Time Resolution = 0.000001 seconds.
> > >     Processor cache size set to 1024 Kbytes.
> > >     Processor cache line size set to 32 bytes.
> > >     File stride size set to 17 * record size.
> > >                                                            random  random    bkwd  record  stride
> > >              KB  reclen   write rewrite    read    reread    read   write    read rewrite    read   fwrite frewrite   fread  freread
> > >            4096       4     619    5138  520068   516903  460170    2494  459762    4503  461883    10358     9790 1021212  1003702
> > >
> > >thanks, warren
> > >
> > >On 7/8/05, Lars Ellenberg <Lars.Ellenberg at linbit.com> wrote:
> > > > / 2005-07-07 20:18:06 +0800
> > > > \ Warren Beldad:
> > > > > Hi all!
> > > > >
> > > > > Here are some simple results for the performance of my drbd test
> > > > > machines. Using the dd command, I first measure the performance of
> > > > > the hard disk: format /dev/sda with XFS, mount it, write a 1 GB
> > > > > file (twice the size of my RAM) to that disk with dd at different
> > > > > block sizes, and then read the file back into /dev/null.
> > > > > time dd if=/dev/zero of=testfile.txt bs=512k count=2000
> > > > > time dd if=testfile.txt of=/dev/null bs=512k
> > > >
> > > > your benchmark is broken.
> > > > man close
> > > > man fsync
> > > > no, dd does not do fsync.
> > > >
> > > > --
> > > > : Lars Ellenberg                                  Tel +43-1-8178292-0  :
> > > > : LINBIT Information Technologies GmbH            Fax +43-1-8178292-82 :
> > > > : Schoenbrunner Str. 244, A-1120 Vienna/Europe   http://www.linbit.com :
> > > > __
> > > > please use the "List-Reply" function of your email client.
> > > > _______________________________________________
> > > > drbd-user mailing list
> > > > drbd-user at lists.linbit.com
> > > > http://lists.linbit.com/mailman/listinfo/drbd-user
> > > >
> >
> >
> >
> 
> 
> 
>


