Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
Thanks, Chris.
I had turned off disk-barrier and disk-flushes in the config file, but it made no difference.
I also tested with oflag=direct; it is no faster than before:
# dd if=/dev/zero of=test1 bs=1M count=1000 conv=fdatasync oflag=direct
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 25.4056 seconds, 41.3 MB/s
# dd if=/dev/zero of=test1 bs=1M count=1000 oflag=direct
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 27.502 seconds, 38.1 MB/s
# echo 3 > /proc/sys/vm/drop_caches
# dd if=/dev/zero of=test1 bs=1M count=1000 oflag=direct
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 25.6661 seconds, 40.9 MB/s
# dd if=/dev/zero of=test1 bs=1M count=1000 oflag=direct
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 25.5987 seconds, 41.0 MB/s
From: Chris Dickson
Date: 2012-04-25 22:35
To: feng zheng
CC: drbd-user
Subject: Re: [DRBD-user] drbd write performance slow, per disk 40M/s by dd command
Try turning off disk-barrier and disk-flushes and see if that makes a difference.
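For example, something like this in the disk section (a sketch using the
DRBD 8.4 option names; adjust to your setup):

disk
{
    disk-barrier no;
    disk-flushes no;
}

Keep in mind this is only safe if the backing storage has a battery-backed
write cache, since you give up flush-based crash safety on the lower device.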
2012/4/25 feng zheng <zf5984599 at gmail.com>
Hi all,
When I use DRBD, I find the write performance is very slow compared
with testing without the DRBD module.
1. the environment:
-) CentOS 5.6
-) 2.6.18 kernel
-) drbd 8.4.1
-) drbd.conf:
resource r0
{
    protocol B;  # memory-synchronous: a write completes once it reaches the peer's RAM
    net
    {
        max-buffers 8000;      # receive-side buffer count
        max-epoch-size 8000;   # max write requests between two barriers
        sndbuf-size 512K;      # TCP send buffer size
    }
    disk
    {
        al-extents 3389;       # activity-log size
    }
    on OSS211
    {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 192.168.100.231:7788;
        meta-disk internal;
    }
    on OSS213
    {
        device /dev/drbd0;
        disk /dev/sde1;
        address 192.168.100.213:7788;
        meta-disk internal;
    }
}
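To confirm which settings are actually in effect, the parsed
configuration can be printed with drbdadm's dump subcommand:

# drbdadm dump r0

This shows the resource definition as DRBD understands it.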
2. Test scenario:
*) Without the DRBD module,
dd writes a 1 GB stream to one disk, formatted as ext3:
# echo 3 > /proc/sys/vm/drop_caches
# dd if=/dev/zero of=test1 bs=1M count=1000 conv=fdatasync
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 10.9905 seconds, 95.4 MB/s
*) With the DRBD module,
dd writes the same 1 GB stream to the DRBD device, also ext3:
# cat /proc/drbd
version: 8.4.1 (api:1/proto:86-100)
GIT-hash: 91b4c048c1a0e06777b5f65d312b38d47abaea80 build by
root@OSS213, 2012-04-16 21:38:36
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate B r-----
ns:1260036 nr:0 dw:1260036 dr:297 al:330 bm:0 lo:0 pe:0 ua:0 ap:0
ep:1 wo:b oos:0
#
# dd if=/dev/zero of=test1 bs=1M count=1000 conv=fdatasync
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 26.7392 seconds, 39.2 MB/s
# cat /proc/drbd
version: 8.4.1 (api:1/proto:86-100)
GIT-hash: 91b4c048c1a0e06777b5f65d312b38d47abaea80 build by
root@OSS213, 2012-04-16 21:38:36
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate B r-----
All of the tests above write to the same disk. From the results
above, with DRBD the write performance is about 39 MB/s, while
without DRBD it is about 95 MB/s.
3. My questions are:
-) Is it normal for write performance to drop this much?
I had read the following on the DRBD website:
"15.1. Hardware considerations:
.... A single, reasonably recent, SCSI or SAS disk will
typically allow streaming writes of roughly 40MB/s to the single disk."
But compared with the ~95 MB/s the disk achieves on its own, this is
still very slow.
-) If this is not normal, how can I tune it? Is something wrong in
the config file?
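A quick way to check whether the replication link itself tops out near
40 MB/s is to push a raw stream over the network (a sketch; netcat
option syntax varies between versions, and port 7000 is arbitrary).

On the secondary (OSS211):
# nc -l -p 7000 > /dev/null
On the primary (OSS213):
# dd if=/dev/zero bs=1M count=1000 | nc 192.168.100.231 7000

If dd reports much more than 40 MB/s here, the network is not the
bottleneck.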
Thanks a lot.
BRs,
feng