[DRBD-user] Benchmark results made with Bonnie++ on drbd.

Diego Julian Remolina diego.remolina at ibb.gatech.edu
Tue Oct 11 20:18:42 CEST 2005



Those results look good.

DRBD will be a bit slower than a regular partition, depending on the 
protocol you are using.  With protocol C, a write is not reported as 
successful until it has reached both machines, so it takes a bit more 
time to make sure the data has been written on both ends. If you use 
protocol A or B, writes may speed up a bit, but you risk consistency. 
Reads, on the other hand, do not need that double check, so they take 
no extra time.  That is why your drbd read results from bonnie should 
almost match your regular partition reads.
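
For reference, the protocol and the max-buffers/max-epoch-size settings 
you mention all live in drbd.conf.  Something along these lines -- 
0.7-style syntax from memory, so double-check drbd.conf(5); the 
addresses, backing devices and buffer numbers below are placeholders, 
not tuned recommendations:

  resource r0 {
    protocol C;              # C: write is acked only once it reached both nodes
                             # A/B ack earlier and trade safety for speed
    net {
      max-buffers    2048;   # example value only
      max-epoch-size 2048;   # example value only
    }
    syncer {
      rate 30M;              # cap background resync so it does not starve normal I/O
    }
    on serveur1 {
      device    /dev/drbd0;
      disk      /dev/sda3;          # placeholder backing device
      address   192.168.1.1:7788;   # placeholder IP
      meta-disk internal;
    }
    on serveur2 {
      device    /dev/drbd0;
      disk      /dev/sda3;          # placeholder backing device
      address   192.168.1.2:7788;   # placeholder IP
      meta-disk internal;
    }
  }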

I may be wrong, but I thought I had previously read on the list that 
drbd can also choose to read from either the local or the secondary 
machine, so that may also make reads just a bit slower.  But like I 
said before, I am NOT sure about this. DRBD gods, please comment on 
this if you will...

I ran my own benchmarks using 12 SATA HDDs on an Areca ARC-1160 
controller. All benchmark results are averaged over 3 runs (a rough 
script for doing that is sketched after the partition list below).  I 
benchmarked Raid10, Raid10 with DRBD, Raid5 and Raid6, creating a 
partition every 1TB and benchmarking every partition.

Raid 10: 2 partitions
Raid 5 and 6: 3 partitions.
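
In case you want to script the repeated runs, this is roughly what I 
use -- just a sketch, assuming bonnie++'s -q CSV mode and the 
bon_csv2txt helper that ships with it; the target directory and run 
count are placeholders:

  #!/bin/sh
  # Run bonnie++ several times on one target and collect the CSV summary
  # lines, so the runs can be compared/averaged afterwards.
  # -q sends only the CSV line to stdout; the readable report goes to stderr.
  TARGET=/drbd                     # directory to test (placeholder)
  RUNS=3
  OUT=bonnie-$(date +%Y%m%d).csv

  for i in $(seq 1 $RUNS); do
      bonnie++ -d "$TARGET" -u root -s3g -q >> "$OUT"
  done

  # turn the collected CSV lines back into readable tables
  bon_csv2txt < "$OUT"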

Writes seem to be limited by the gigabit connection used for drbd 
according to my benchmarks (DRBD block writes of 123738 KB/s look a lot 
like hitting the gigabit connection limit).
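
Back of the envelope, assuming bonnie counts K as 1024 bytes (with 
K = 1000 the ceiling is 125,000 instead):

  1 Gbit/s / 8 bits per byte   = 125,000,000 bytes/s
  125,000,000 / 1024           ~ 122,000 KB/s theoretical wire maximum

so 123738 KB/s of block writes is essentially at what the replication 
link can carry, before even counting TCP and DRBD protocol overhead.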

Diego

julien WICQUART wrote:
> Hi,
> 
> You will find attached the results of the tests I made with bonnie++ on drbd and/or nfs.
> 
> Can someone tell me if these results seem to be good?
> 
> I tried to tune these parameters in the drbd.conf file, without
> improving the data access speed:
> max-epoch-size
> max-buffers
> 
> Excuse me for my english!
> 
> Julien WICQUART
> 
> 
> ------------------------------------------------------------------------
> 
> Benchmark results using bonnie++ on drbd and nfs
> 
> The purpose of the tests is to find out how using drbd and/or nfs
> impacts data access speed.
> 
> 1. Environment
> 
> 2 servers:
> 
>     PowerEdge SC1425
>     CPU: Intel Xeon 2.8 GHz
>     RAM: 1 GB DDR
>     Hard disk: 160 GB SATA
> 
> The 2 servers are connected by a 1 Gb/s ethernet network for drbd 
> synchronisation and a 100 Mb/s network for nfs access.
> 
> The 2 servers use a drbd raid partition /drbd (60 GB).
> 
> Note: the drbd raid partition was nearly empty during the tests
> (~580 MB used).
> 
> Server "serveur1" is the NFS server and exports 2 partitions:
> 
>     * /     : local root filesystem
>     * /drbd : network raid (drbd) partition
> 
> Server "serveur2" is the NFS client and mounts 2 partitions:
> 
>     * /mnt/racine
>     * /mnt/drbd
> 
> 
> 2. Tests
> 
> serveur1:
> 
>     * / : bonnie++ test on the local disk (/)
>       command used: bonnie++ -d / -u root -s3g
> 
>     * /drbd : bonnie++ test on the drbd raid partition
>       command used: bonnie++ -d /drbd -u root -s3g
> 
> serveur2:
> 
>     * /mnt/racine : bonnie++ test on the nfs partition /mnt/racine
>       command used: bonnie++ -d /mnt/racine -u root -s3g
> 
>     * /mnt/drbd : bonnie++ test on the nfs partition /mnt/drbd
>       command used: bonnie++ -d /mnt/drbd -u root -s3g
> 
> ide:
> 
> The ide test was made on a PC with a Pentium 4 2.4 GHz, 512 MB SDRAM
> and a 40 GB IDE hard disk, just for information and comparison.
> command used: bonnie++ -u root -s3g
> 
> 3. Results
> 
>                        --------- Sequential Output ---------   ----- Sequential Input -----   Random Seeks
>                         Per-Char      Block        Rewrite       Per-Char       Block
> Partition     Space     KB/s %CPU    KB/s %CPU    KB/s %CPU     KB/s %CPU     KB/s %CPU       /sec %CPU
> /             3 GB     36267   99   59546   24   25987    8    30581   73    61702    9      150.9    0
> /drbd         3 GB     31627   99   45398   22   23847    9    29501   72    60896    9      159.3    0
> /mnt/racine   3 GB     10156   26   10028    1    2823   69    11703   28    11679    1      145.6    0
> /mnt/drbd     3 GB      9623   25    9607    1    3044   66    11706   28    11677    1      145.3    0
> ide              -     30372   89   35855   13   19631    7    28207   73    40600    6       79.7    -
> 
> 
> 
>                        -------- Sequential creations --------   --------- Random creations ----------
>                         Creation       Read      Destruction     Creation       Read      Destruction
> Partition     Files     /sec %CPU    /sec %CPU    /sec %CPU      /sec %CPU    /sec %CPU    /sec %CPU
> /             16        2688   98                                2712   99                 8258   96
> /drbd         16        2595   98                                2699   98                 8145   95
> /mnt/racine   16         141    1     453   81     405    3       139    1    1193    3     319    2
> /mnt/drbd     16         107    1     455   82     399    3       108    1    1077    3     301    2
> ide            -
> 
> 
> 4. Conclusion
> 
> 4.1 drbd
> 
> In the test configuration, we can see that the drbd network raid layer
> does not slow down data access very much.
> 
> 4.2 nfs
> 
> Using nfs divides the data access speed by about 3; this is certainly
> due to the nfs protocol.
> 
> 
> 5. Appendix
> 
> 5.1 Results of the tests on the drbd raid partition /drbd at 95% full
> 
> The purpose of this test is to find out whether drbd slows down data
> access when the synchronised partition is nearly full.
> 
> 
> 
>                        --------- Sequential Output ---------   ----- Sequential Input -----   Random Seeks
>                         Per-Char      Block        Rewrite       Per-Char       Block
> Partition     Space     KB/s %CPU    KB/s %CPU    KB/s %CPU     KB/s %CPU     KB/s %CPU       /sec %CPU
> /drbd         3 GB     28808   97   44207   23   23673   10    29975   75    58854    9      159.1    0
> 
> 
> 
>                        -------- Sequential creations --------   --------- Random creations ----------
>                         Creation       Read      Destruction     Creation       Read      Destruction
> Partition     Files     /sec %CPU    /sec %CPU    /sec %CPU      /sec %CPU    /sec %CPU    /sec %CPU
> /drbd         16        2609   98                                2672   98                 8045   96
> 
> 
> Drbd does not seem to suffer from the fact that the drbd raid
> partition was 95% full.
> 
> 