[DRBD-user] A drbd *bottleneck*

Marcelo Azevedo marzevd at gmail.com
Sun Jun 22 08:17:10 CEST 2008


drbd version: 8.2.6 (api:88/proto:86-88)

Tests performed:
iperf shows ~125MB/s, and pureftpd also shows ~125MB/s
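
(For reference, the raw link throughput was measured with a plain iperf run,
roughly like below; the hostname is just a placeholder for the peer node:)

iperf -s                        # on the receiving node
iperf -c <peer-host> -t 30      # on the sending node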

physical -> drbd : full 4GB resync = ~105MB/s, which also equals
physical -> drbd -> ext3 in cs=StandAlone/WFConnection mode = ~105MB/s
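
(As far as I understand, the resync speed above is whatever the syncer rate
permits; a drbd.conf fragment along these lines, resource name made up, would
allow resync to run at that speed:)

resource r0 {
  syncer {
    rate 110M;   # let resync use up to ~110MB/s of the link
  }
}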

The StandAlone/WFConnection tests were done using dd and bonnie++; bonnie++
shows about 10MB/s lower write performance (it is the more rigorous test):
------------------------------------------------------------------------------------------------------------------
time dd if=/dev/zero of=./testfile bs=16384 count=500000
500000+0 records in
500000+0 records out
8192000000 bytes (8.2 GB) copied, 78.5591 seconds, 104 MB/s

real    1m18.971s
user    0m0.376s
sys     0m32.726s

bonnie++ -u 0 -n 0 -s 7180 -f -b -d ./
Using uid:0, gid:0.
Writing intelligently...done
Rewriting...done
Reading intelligently...done
start 'em...done...done...done...
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
cluster2.loca 7180M           89458  46 61011  29           157652  15 658.3   0
cluster2.local,7180M,,,89458,46,61011,29,,,157652,15,658.3,0,,,,,,,,,,,,,

~89MB/s write, ~157MB/s read

------------------------------------------------------------------------------------------------------------------
***** Now, the bottleneck shows up when connected, in primary/primary or
primary/secondary mode *****:
-------------------------------------------------------------------------------------------------------------------

time dd if=/dev/zero of=./testfile bs=16384 count=500000
500000+0 records in
500000+0 records out
8192000000 bytes (8.2 GB) copied, 100.704 seconds, 81.3 MB/s

bonnie++ -u 0 -n 0 -s 7180 -f -b -d ./

Using uid:0, gid:0.
Writing intelligently...done
Rewriting...done
Reading intelligently...done
start 'em...done...done...done...
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
cluster1.loca 7180M           54283  17 59925  20           158998  15 583.0   0
cluster1.local,7180M,,,54283,17,59925,20,,,158998,15,583.0,0,,,,,,,,,,,,,

~54MB/s write / ~159MB/s read
-----------------------------------------------------------------------------------------------------------------------------------------
Why the 30-40MB/s difference compared to the resync or StandAlone/WFConnection
speed?

What operations during normal I/O can affect performance versus a drbd resync,
and how can I fix them? If resync goes over the same network and runs at
speeds equal to standalone mode, what could hinder performance this much in
normal primary/secondary or primary/primary mode?
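
(For what it's worth, these are the net-section knobs I understand can matter
for connected-mode writes in drbd 8.x; the values are only examples, not my
actual settings:)

resource r0 {
  protocol C;                  # replication protocol (C = synchronous)
  net {
    max-buffers      8000;     # buffers for data in flight to the peer
    max-epoch-size   8000;     # max write requests between barriers
    sndbuf-size      512k;     # TCP send buffer size
    unplug-watermark 1024;     # when to kick the backing device queue
  }
}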

BTW, I have the no-md-flushes and no-disk-flushes options enabled, since
without them I am lucky to get even 10MB/s write speed, but you probably
know about that...
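
(For completeness, those two options sit in the disk section of drbd.conf;
roughly like this, resource name made up. As I understand it they are only
safe with a battery-backed write cache:)

resource r0 {
  disk {
    no-disk-flushes;   # don't send cache flushes to the backing device
    no-md-flushes;     # don't flush the meta-data writes either
  }
}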

All the best, Marcelo.