# /etc/drbd.conf
common {
  protocol C;
  syncer {
    rate 33M;
  }
}

resource xendrive {
  on cluster1.local {
    device    /dev/drbd0;
    disk      /dev/sda3;
    address   10.10.10.1:7788;
    meta-disk internal;
  }
  on cluster2.local {
    device    /dev/drbd0;
    disk      /dev/sda3;
    address   10.10.10.2:7788;
    meta-disk internal;
  }
  net {
    sndbuf-size 137k;
    timeout 50;
    allow-two-primaries;
    cram-hmac-alg sha1;
    shared-secret TeleWebDrbdCluster2008;
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
    after-sb-2pri call-pri-lost-after-sb;
    rr-conflict disconnect;
  }
  disk {
    on-io-error call-local-io-error;
    fencing resource-and-stonith;
    no-disk-flushes;
    no-md-flushes;
  }
  syncer {
    al-extents 3833;
  }
  startup {
    wfc-timeout 0;
    degr-wfc-timeout 10;
  }
  handlers {
    local-io-error "echo BAD | mail -s 'DRBD Alert Local-io-error' root";
    outdate-peer "/usr/local/sbin/obliterate";
    split-brain "echo split-brain. drbdadm -- --discard-my-data connect $DRBD_RESOURCE ? | mail -s 'DRBD Alert' root";
  }
}
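A note on the handlers section: /usr/local/sbin/obliterate is a local script, not something DRBD ships. A minimal sketch of what such an outdate-peer handler can look like, assuming IPMI power fencing (the peer BMC address and credentials below are placeholders, not from this setup):

#!/bin/bash
# /usr/local/sbin/obliterate -- sketch of a DRBD outdate-peer handler.
# DRBD calls this when it must fence the peer (fencing resource-and-stonith).

# Placeholder: a real script would determine the peer node dynamically.
PEER_BMC=10.10.10.102

# Power-fence the peer via its IPMI BMC (address and credentials are examples).
ipmitool -H "$PEER_BMC" -U admin -P secret chassis power off || exit 1

# Exit code 7 is DRBD's "peer was stonithed" convention, so I/O may resume.
exit 7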
On Sun, Jun 22, 2008 at 9:17 AM, Marcelo Azevedo <marzevd@gmail.com> wrote:
drbd version: 8.2.6 (api:88/proto:86-88)

Tests performed:
iperf shows ~125 MB/s, pureftpd also shows ~125 MB/s

physical -> drbd: full 4 GB resync = ~105 MB/s, which equals physical -> drbd -> ext3 in cs=StandAlone/WFConnection mode = ~105 MB/s
The StandAlone/WFConnection test was done using dd and bonnie++; bonnie++ shows about 10 MB/s less write performance (it is the more rigorous test):
------------------------------------------------------------------------------------------------------------------
time dd if=/dev/zero of=./testfile bs=16384 count=500000
500000+0 records in
500000+0 records out
8192000000 bytes (8.2 GB) copied, 78.5591 seconds, 104 MB/s

real    1m18.971s
user    0m0.376s
sys     0m32.726s
bonnie++ -u 0 -n 0 -s 7180 -f -b -d ./
Using uid:0, gid:0.
Writing intelligently...done
Rewriting...done
Reading intelligently...done
start 'em...done...done...done...
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
cluster2.loca 7180M           89458  46 61011  29           157652  15 658.3   0
cluster2.local,7180M,,,89458,46,61011,29,,,157652,15,658.3,0,,,,,,,,,,,,,

~89 MB/s write, ~157 MB/s read

------------------------------------------------------------------------------------------------------------------
***** Now the bottleneck appears when connected, in primary/primary or primary/secondary mode: *****
-------------------------------------------------------------------------------------------------------------------

time dd if=/dev/zero of=./testfile bs=16384 count=500000
500000+0 records in
500000+0 records out
8192000000 bytes (8.2 GB) copied, 100.704 seconds, 81.3 MB/s

bonnie++ -u 0 -n 0 -s 7180 -f -b -d ./

Using uid:0, gid:0.
Writing intelligently...done
Rewriting...done
Reading intelligently...done
start 'em...done...done...done...
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
cluster1.loca 7180M           54283  17 59925  20           158998  15 583.0   0
cluster1.local,7180M,,,54283,17,59925,20,,,158998,15,583.0,0,,,,,,,,,,,,,
~55 MB/s write / ~159 MB/s read
-----------------------------------------------------------------------------------------------------------------------------------------
Why the 30-40 MB/s difference compared to resync or StandAlone/WFConnection mode speed?
What operations in normal I/O activity can affect performance, versus a DRBD resync operation, and how can I fix them?
If resync is done via the network and operates at speeds equal to standalone mode, what could hinder performance like this in normal primary/secondary or primary/primary mode?
BTW - I have the no-md-flushes and no-disk-flushes options on, since without them I am lucky to get even 10 MB/s write speed, but you probably know about that...

All the best, Marcelo.
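For anyone reproducing the raw-link baseline Marcelo quotes, an iperf pair along these lines is the usual approach (his exact invocation isn't in the post, so the flags here are illustrative):

iperf -s                     # on cluster1.local: listen for the test
iperf -c 10.10.10.1 -t 30    # on cluster2.local: push over the replication link for 30 s

Note that 125 MB/s is the theoretical payload ceiling of a gigabit link, so if the replication interconnect is a single GigE port it is already running at wire speed.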
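On the flush options: no-disk-flushes and no-md-flushes are only safe when the controller's write cache is battery-backed; with a volatile cache they trade the ~10 MB/s floor Marcelo mentions for possible data loss on power failure. Settings like these can be applied to a running resource without a restart:

drbdadm adjust xendrive    # re-reads drbd.conf and applies changed options to the live resource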