Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
Yes, the network is OK. I used netperf to test it:
# netperf -H 192.168.100.231 -L 30
TCP STREAM TEST from 30 (0.0.0.30) port 0 AF_INET to 192.168.100.231 (192.168.100.231) port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384   16384    10.03       941.37
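(For reference: 941.37 x 10^6 bit/s is roughly 117 MB/s, about three times
the ~40 MB/s seen with dd, so the raw link speed itself should not be the
limit.)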
So the network is OK. I then enabled jumbo frames (MTU = 9000 on both
machines), but it made no difference: the dd result is still about 40 MB/s.
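A quick way to confirm that jumbo frames are actually in effect end to end
(a sketch; eth0 is an assumed interface name, substitute the replication NIC):

    # Set the MTU on both machines (eth0 is an assumption)
    ip link set dev eth0 mtu 9000
    # Send a do-not-fragment ping just under the 9000-byte MTU:
    # 8972 = 9000 - 20 (IP header) - 8 (ICMP header)
    ping -M do -s 8972 -c 3 192.168.100.231

If the ping reports "Message too long", some hop (NIC or switch) is not
passing jumbo frames.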
From: Trevor Hemsley
Date: 2012-04-25 23:13
To:
CC: drbd-user at lists.linbit.com
Subject: Re: [DRBD-user] drbd write performance slow, per disk 40M/s by dd command
Did you test your network connection to make sure that it can transfer at a greater speed than that? Maybe it is the bottleneck - jumbo frames on?
On 25/04/12 15:40, Chris Dickson wrote:
Also use oflag=direct in both tests and perform them a few times, sometimes high speeds are the result of caching.
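For example, the same test with the page cache bypassed (file name and size
as in the original test):

    # oflag=direct opens the output file with O_DIRECT, bypassing the page cache
    dd if=/dev/zero of=test1 bs=1M count=1000 oflag=direct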
On Wed, Apr 25, 2012 at 10:35 AM, Chris Dickson <chrisd1100 at gmail.com> wrote:
Try turning off disk-barrier and disk-flushes and see if that makes a difference.
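In DRBD 8.4 configuration syntax that would look roughly like the following
(a sketch; disabling flushes is generally only considered safe with a
battery-backed write cache):

    disk
    {
        al-extents   3389;   # as in the original config
        disk-barrier no;
        disk-flushes no;
    }

After editing the config on both nodes, `drbdadm adjust r0` applies the
change without restarting the resource.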
2012/4/25 feng zheng <zf5984599 at gmail.com>
Hi, dear all:
When I use DRBD, I find the write performance very slow compared with
the same test without the DRBD module.
1. the environment:
-) CentOS 5.6
-) 2.6.18 kernel
-) drbd 8.4.1
-) drbd.conf:
resource r0
{
    protocol B;
    net
    {
        max-buffers    8000;
        max-epoch-size 8000;
        sndbuf-size    512K;
    }
    disk
    {
        al-extents 3389;
    }
    on OSS211
    {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.100.231:7788;
        meta-disk internal;
    }
    on OSS213
    {
        device    /dev/drbd0;
        disk      /dev/sde1;
        address   192.168.100.213:7788;
        meta-disk internal;
    }
}
2. Test scenario:
*) Without the DRBD module, use dd to write a 1 GB stream to one disk,
formatted as ext3:
[para]# echo 3 > /proc/sys/vm/drop_caches
[para]# dd if=/dev/zero of=test1 bs=1M count=1000 conv=fdatasync
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 10.9905 seconds, 95.4 MB/s
*) With the DRBD module, dd a 1 GB stream to the DRBD-backed disk, also
formatted as ext3:
[para]# cat /proc/drbd
version: 8.4.1 (api:1/proto:86-100)
GIT-hash: 91b4c048c1a0e06777b5f65d312b38d47abaea80 build by root@OSS213, 2012-04-16 21:38:36
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate B r-----
ns:1260036 nr:0 dw:1260036 dr:297 al:330 bm:0 lo:0 pe:0 ua:0 ap:0
ep:1 wo:b oos:0
[para]#
[para]# dd if=/dev/zero of=test1 bs=1M count=1000 conv=fdatasync
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 26.7392 seconds, 39.2 MB/s
[para]# cat /proc/drbd
version: 8.4.1 (api:1/proto:86-100)
GIT-hash: 91b4c048c1a0e06777b5f65d312b38d47abaea80 build by root@OSS213, 2012-04-16 21:38:36
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate B r-----
All the tests above write to the same disk. From the results above, the
write performance with DRBD is 39 MB/s, while without it the performance
is about 95 MB/s.
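One way to separate the replication cost from the local I/O cost (a sketch,
not part of the original message; r0 is the resource from the config above,
and the peer will need a resync afterwards):

    # Temporarily stop replication and repeat the same dd test.
    drbdadm disconnect r0
    dd if=/dev/zero of=test1 bs=1M count=1000 conv=fdatasync
    # Reconnect; DRBD resynchronizes the blocks written while disconnected.
    drbdadm connect r0

If throughput returns to ~95 MB/s while disconnected, the network or
replication path is the limit; if it stays near 40 MB/s, look at the local
I/O stack (barriers/flushes) instead.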
3. My questions are:
-) Is it normal for the write performance to drop this much?
I read the following on the DRBD website:
"15.1. Hardware considerations:
.... A single, reasonably recent, SCSI or SAS disk will
typically allow streaming writes of roughly 40MB/s to the single disk."
But this is still very slow.
-) If it is not normal, how can I tune it? Is something in the config
file incorrect?
thanks a lot
BRs,
feng
_______________________________________________
drbd-user mailing list
drbd-user at lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user