Hello list:
Could someone please help me find my performance bottleneck? Raw writes run at
~90 MB/sec, but writes to the connected drbd device drop to ~20 MB/sec.
Of course, I expected to see better write performance than what I obtained.
I have read a few archived posts by Lars, and followed his steps at:
http://lists.linbit.com/pipermail/drbd-user/2006-August/005495.html
http://www.gossamer-threads.com/lists/drbd/users/10689#10689
Both servers are Dell 2650s with identical hardware, connected via gigabit
Ethernet.
The drbd resource is backed by a single SCSI drive per server: 36 GB, 15K RPM,
U320 (ST336754LC).
There is no file system on the drives, just a maximum-size primary Linux
partition.
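One thing I still want to rule out on my side is the drives' own write cache;
if I have the sdparm syntax right, something like this should show whether
write-back caching (WCE) is enabled on these SCSI disks (just a sketch, I am
not quoting its output here):
        sdparm --get=WCE /dev/sdb    # WCE=1 would mean the drive's write cache is on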
The dm benchmark gives these results:
Tested READ and WRITE of the bare block device:
        echo 3 > /proc/sys/vm/drop_caches && ./dm -o /dev/null -b 1M -s 500M -m -p -i /dev/sdb
        91.65 MB/sec (524288000 B / 00:05.455296)
(results similar on both servers, ~90 MB/sec READ)
        ./dm -a 0 -b 1M -s 500M -y -m -p -o /dev/sdb
        90.24 MB/sec (524288000 B / 00:05.540775)
(results similar on both servers, ranging from ~80 MB/sec to ~90 MB/sec WRITE)
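As a rough cross-check of the raw numbers, I believe plain dd with O_DIRECT
should give comparable figures (sketch only, 500 MB in 1 MB blocks as above;
the write of course destroys whatever is on /dev/sdb):
        dd if=/dev/zero of=/dev/sdb bs=1M count=500 oflag=direct conv=fsync
        dd if=/dev/sdb of=/dev/null bs=1M count=500 iflag=direct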
Tested READ and WRITE of the drbd block device (disconnected):
cat /proc/drbd
version: 8.0.1 (api:86/proto:86)
SVN Revision: 2784 build by root at mailbox1, 2007-03-26 12:09:17
0: cs:StandAlone st:Primary/Unknown ds:UpToDate/Inconsistent r---
ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0
resync: used:0/31 hits:0 misses:0 starving:0 dirty:0 changed:0
act_log: used:0/257 hits:0 misses:0 starving:0 dirty:0 changed:0
        echo 3 > /proc/sys/vm/drop_caches && ./dm -o /dev/null -b 1M -s 500M -m -p -i /dev/drbd0
        91.26 MB/sec (524288000 B / 00:05.478794)
(results similar on both servers, ~90 MB/sec READ on /dev/drbd0)
        ./dm -a 0 -b 1M -s 500M -y -m -p -o /dev/drbd0
        55.08 MB/sec (524288000 B / 00:09.078228)
(results similar on both servers, ~50 MB/sec to ~60 MB/sec WRITE, sometimes
as low as ~40 MB/sec)
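For reference, as far as I know the disconnected case can be reproduced simply
by taking the resource StandAlone, running the test, and reconnecting:
        drbdadm disconnect mail      # resource "mail" goes StandAlone on this node
        (run the dm write test against /dev/drbd0 here)
        drbdadm connect mail         # reconnect; drbd resyncs the blocks written in between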
It gets worse...
Tested READ and WRITE of the drbd block device (connected):
cat /proc/drbd
version: 8.0.1 (api:86/proto:86)
SVN Revision: 2784 build by root at mailbox1, 2007-03-26 12:09:17
0: cs:Connected st:Primary/Primary ds:UpToDate/UpToDate C r---
ns:35831816 nr:0 dw:0 dr:35831816 al:0 bm:2188 lo:0 pe:0 ua:0 ap:0
resync: used:0/31 hits:2237302 misses:2188 starving:0 dirty:0
changed:2188
act_log: used:0/257 hits:0 misses:0 starving:0 dirty:0 changed:0
        echo 3 > /proc/sys/vm/drop_caches && ./dm -o /dev/null -b 1M -s 500M -m -p -i /dev/drbd0
        85.40 MB/sec (524288000 B / 00:05.854730)
(results similar on both servers, ~85 MB/sec READ on /dev/drbd0)
        ./dm -a 0 -b 1M -s 500M -y -m -p -o /dev/drbd0
        19.79 MB/sec (524288000 B / 00:25.260552)
(results similar on both servers, ~20 MB/sec WRITE on /dev/drbd0)
stock kernel-2.6.20.3
drbd-8.0.1 built as module
ethernet MTU = 9000
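Since the big drop only shows up once the peers are connected, I also want to
sanity-check the replication link itself. If I have the tools right, something
along these lines should verify raw TCP throughput and that the 9000-byte MTU
really holds end to end (10.10.10.20 is the peer, as in the config below;
sketch only):
        iperf -s                              # on mailbox2
        iperf -c 10.10.10.20                  # on mailbox1: raw TCP throughput to the peer
        ping -M do -s 8972 -c 3 10.10.10.20   # 8972 + 28 bytes IP/ICMP header = 9000, must pass unfragmented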
/etc/drbd.conf:

global {
    minor-count 32;
    dialog-refresh 1;
    usage-count no;
}
common {
}
resource mail {
    protocol C;
    handlers {
        pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";
        pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";
        local-io-error "echo o > /proc/sysrq-trigger ; halt -f";
        outdate-peer "/usr/sbin/drbd-peer-outdater";
    }
    startup {
        wfc-timeout 0;
        degr-wfc-timeout 120;
    }
    disk {
        on-io-error detach;
        fencing dont-care;
    }
    net {
        allow-two-primaries;
        #timeout 60;
        #connect-int 10;
        #ping-int 10;
        #ping-timeout 5;
        sndbuf-size 512k;
        max-buffers 8192;
        unplug-watermark 8192;
        max-epoch-size 8192;
        ko-count 4;
        cram-hmac-alg "sha1";
        shared-secret "FooFunFactory";
        after-sb-0pri disconnect;
        after-sb-1pri disconnect;
        after-sb-2pri disconnect;
        rr-conflict disconnect;
    }
    syncer {
        rate 125M;
        al-extents 257;
    }
    on mailbox1 {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 10.10.10.10:7788;
        meta-disk internal;
    }
    on mailbox2 {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 10.10.10.20:7788;
        meta-disk internal;
    }
}
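I am happy to experiment with the sndbuf-size / max-buffers / unplug-watermark /
max-epoch-size values above. As far as I understand the tooling, changed net
options can be pushed to the running resource and the active settings inspected
roughly like this (sketch, command names as I understand them):
        drbdadm adjust mail          # apply the edited drbd.conf to the running resource
        drbdsetup /dev/drbd0 show    # dump the parameters drbd is actually using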
Any advice???
Thanks
Duane Cox