Note: "permalinks" may not be as permanent as we would like;
direct links to old sources may well be a few messages off.
# /etc/drbd.conf
common {
  protocol C;
  syncer {
    rate 33M;
  }
}

resource xendrive {
  on cluster1.local {
    device    /dev/drbd0;
    disk      /dev/sda3;
    address   10.10.10.1:7788;
    meta-disk internal;
  }
  on cluster2.local {
    device    /dev/drbd0;
    disk      /dev/sda3;
    address   10.10.10.2:7788;
    meta-disk internal;
  }
  net {
    sndbuf-size 137k;
    timeout 50;
    allow-two-primaries;
    cram-hmac-alg sha1;
    shared-secret TeleWebDrbdCluster2008;
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
    after-sb-2pri call-pri-lost-after-sb;
    rr-conflict disconnect;
  }
  disk {
    on-io-error call-local-io-error;
    fencing resource-and-stonith;
    no-disk-flushes;
    no-md-flushes;
  }
  syncer {
    al-extents 3833;
  }
  startup {
    wfc-timeout 0;
    degr-wfc-timeout 10;
  }
  handlers {
    local-io-error "echo BAD | mail -s 'DRBD Alert Local-io-error' root";
    outdate-peer /usr/local/sbin/obliterate;
    split-brain "echo split-brain. drbdadm -- --discard-my-data connect $DRBD_RESOURCE ? | mail -s 'DRBD Alert' root";
  }
}
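Configs pasted through mail clients are easily wrapped or truncated mid-stanza. A quick brace count catches a missing closing brace before drbdadm complains; this is a generic sketch (demonstrated on a temp file so it is self-contained, not on a live /etc/drbd.conf):

```shell
# Count { and } in a drbd.conf-style file; unequal counts usually mean
# a stanza was truncated, e.g. by mail line wrapping.
CONF=$(mktemp)
printf 'resource r0 {\n  on node1 { device /dev/drbd0; }\n}\n' > "$CONF"
OPEN=$(grep -o '{' "$CONF" | wc -l)
CLOSE=$(grep -o '}' "$CONF" | wc -l)
if [ "$OPEN" -eq "$CLOSE" ]; then
  echo "braces balanced"
else
  echo "stanza truncated"
fi
rm -f "$CONF"
```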
On Sun, Jun 22, 2008 at 9:17 AM, Marcelo Azevedo <marzevd at gmail.com> wrote:
> drbd ver : version: 8.2.6 (api:88/proto:86-88)
>
> Tests performed:
> iperf shows 125MB/s~, pureftpd also shows 125MB/s~
>
> physical -> drbd: full 4GB resync = 105MB/s~, which also equals
> physical -> drbd -> ext3 in cs=standalone/WFconnection mode = 105MB/s~
>
> standalone/WFconnection test was done using dd and bonnie++; bonnie++
> shows about 10MB/s less write performance (a more rigorous test):
>
> ------------------------------------------------------------------------------------------------------------------
> time dd if=/dev/zero of=./testfile bs=16384 count=500000
> 500000+0 records in
> 500000+0 records out
> 8192000000 bytes (8.2 GB) copied, 78.5591 seconds, 104 MB/s
>
> real 1m18.971s
> user 0m0.376s
> sys 0m32.726s
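(As a quick cross-check of dd's arithmetic above — dd reports decimal MB, i.e. 10^6 bytes:)

```shell
# 8192000000 bytes copied in 78.5591 s -> dd's reported rate in MB/s
awk 'BEGIN { printf "%.0f MB/s\n", 8192000000 / 78.5591 / 1e6 }'
# -> 104 MB/s
```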
>
> bonnie++ -u 0 -n 0 -s 7180 -f -b -d ./
> Using uid:0, gid:0.
> Writing intelligently...done
> Rewriting...done
> Reading intelligently...done
> start 'em...done...done...done...
> Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
>                     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> cluster2.loca 7180M           89458  46 61011  29           157652  15 658.3   0
> cluster2.local,7180M,,,89458,46,61011,29,,,157652,15,658.3,0,,,,,,,,,,,,,
>
> 89MB/s~ write, 157MB/s~ read
>
>
> ------------------------------------------------------------------------------------------------------------------
> ***** Now the bottleneck is when in primary/primary or
> primary/secondary mode: *****
>
> -------------------------------------------------------------------------------------------------------------------
>
> time dd if=/dev/zero of=./testfile bs=16384 count=500000
> 500000+0 records in
> 500000+0 records out
> 8192000000 bytes (8.2 GB) copied, 100.704 seconds, 81.3 MB/s
>
> bonnie++ -u 0 -n 0 -s 7180 -f -b -d ./
>
> Using uid:0, gid:0.
> Writing intelligently...done
> Rewriting...done
> Reading intelligently...done
> start 'em...done...done...done...
> Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
>                     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> cluster1.loca 7180M           54283  17 59925  20           158998  15 583.0   0
> cluster1.local,7180M,,,54283,17,59925,20,,,158998,15,583.0,0,,,,,,,,,,,,,
>
> 55MB/s~ write / 159MB/s~ read
>
> -----------------------------------------------------------------------------------------------------------------------------------------
> Why the 30-40MB/s difference compared to resync or
> standalone/WFconnection mode speed?
>
> What operations in normal I/O activity can affect performance vs. DRBD
> resync, and how can I fix them?
> If resync is done over the network and operates at speeds equal to
> standalone mode, what could hinder performance in normal
> primary/secondary or primary/primary mode like this?
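(For scale, the gap between the two dd runs quoted above works out to roughly a fifth of the standalone write rate:)

```shell
# Relative write slowdown: standalone dd (104 MB/s) vs connected dd (81.3 MB/s)
awk 'BEGIN { printf "%.0f%% slower\n", (104 - 81.3) / 104 * 100 }'
# -> 22% slower
```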
>
> btw - I have the no-md-flushes and no-disk-flushes options on, since
> without them I am lucky to get more than 10MB/s write speed, but you
> probably know about that...
>
> All the best , Marcelo.
>
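One caveat about the dd runs quoted above: without a sync flag, dd can report the speed of the page cache rather than of the disk (or DRBD device) underneath. A minimal sketch of the distinction, with the size shrunk to ~64MB so it runs quickly (the thread used 8GB; conv=fdatasync assumes GNU dd):

```shell
# Write the same file twice: once buffered (the page cache may absorb it),
# once with conv=fdatasync so dd's timing includes the flush to stable storage.
TESTFILE=$(mktemp)
dd if=/dev/zero of="$TESTFILE" bs=16384 count=4000 2>/dev/null
dd if=/dev/zero of="$TESTFILE" bs=16384 count=4000 conv=fdatasync 2>/dev/null
SIZE=$(wc -c < "$TESTFILE")
rm -f "$TESTFILE"
echo "$SIZE bytes written"
```

Comparing the two reported rates shows how much of a "fast" dd number is cache rather than storage.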