<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html;charset=UTF-8" http-equiv="Content-Type">
</head>
<body bgcolor="#ffffff" text="#000000">
Hi,<br>
<br>
I just want to say that we were deploying our servers with 3ware
controllers until some time ago. 3ware cards are VERY slow, at least on Linux.<br>
<br>
I don't know if your problem is the 3ware, but 3ware performance is
very, very bad; you should use Areca cards like we do.<br>
<br>
You should use iozone or something similar to stress-test your
filesystem before moving to DRBD. With 3ware controllers you will
notice that under heavy I/O the OS becomes very slow. I don't know why,
but I think it has something to do with the 3ware hardware and the
Linux drivers, even with the latest BIOS and the latest drivers.<br>
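<br>
A minimal iozone run like this should stress the filesystem enough to
show the slowdown (just a sketch; the file size, record size, thread
count and the /mnt paths are assumptions, adjust them to your box):<br>
<pre wrap=""># sequential write/re-write (-i 0) and read/re-read (-i 1),
# 4 threads, 1MB records, 2GB file per thread
iozone -i 0 -i 1 -r 1m -s 2g -t 4 -F /mnt/ioz1 /mnt/ioz2 /mnt/ioz3 /mnt/ioz4
</pre>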
<br>
All our tests were done on CentOS 5, and I will also take this
opportunity to tell you that the 9650 is a lot better than the older
models, but even the 9650 is nothing compared with Areca cards. The
difference is roughly 1 (3ware) to 20 (Areca).<br>
<br>
And by the way, we have setups similar to yours, with no problems at
all, even with 3ware cards, but those are much slower than the ones
with Areca.<br>
<br>
Use iperf to test your network bandwidth, and check that the write
cache is enabled on the controller. Good luck.<br>
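<br>
Something like this is what we run (the tw_cli controller/unit numbers
below are only a guess for your box, and the output differs a bit
between models):<br>
<pre wrap=""># network: run "iperf -s" on one node, then from the other node:
iperf -c 172.16.15.3 -t 30
# 3ware: show unit details, including the write cache setting
tw_cli /c0/u0 show
</pre>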
<br>
Rudolph Bott wrote:
<blockquote cite="mid:11227578.161229513885228.JavaMail.rudi@toronto"
type="cite">
<pre wrap="">----- "Lars Ellenberg" <a class="moz-txt-link-rfc2396E" href="mailto:lars.ellenberg@linbit.com"><lars.ellenberg@linbit.com></a> schrieb:
</pre>
<blockquote type="cite">
<pre wrap="">On Tue, Dec 16, 2008 at 08:23:39PM +0000, Rudolph Bott wrote:
</pre>
<blockquote type="cite">
<pre wrap="">Hi List,
I was wondering if anyone might be able to share some performance
information about his/her DRBD setup. Ours comes along with the
following Hardware:
Hardware: Xeon QuadCore CPU, 2GB RAM, Intel Mainboard with 2
</pre>
</blockquote>
<pre wrap="">Onboard
</pre>
<blockquote type="cite">
<pre wrap="">e1000 NICs and one additional plugged into a regular PCI slot,
</pre>
</blockquote>
<pre wrap="">3ware
</pre>
<blockquote type="cite">
<pre wrap="">9650SE (PCI-Express) with 4 S-ATA Disks in a RAID-10 array
Software: Ubuntu Hardy LTS with DRBD 8.0.11 (from the ubuntu
</pre>
</blockquote>
<pre wrap="">repository), Kernel 2.6.24
</pre>
<blockquote type="cite">
<pre wrap="">one NIC acts as "management interface", one as the DRBD Link, one
</pre>
</blockquote>
<pre wrap="">as
</pre>
<blockquote type="cite">
<pre wrap="">the heartbeat interface. On top of DRBD runs LVM to allow the
</pre>
</blockquote>
<pre wrap="">creation
</pre>
<blockquote type="cite">
<pre wrap="">of volumes (which are in turn exported via iSCSI). Everything seems
</pre>
</blockquote>
<pre wrap="">to
</pre>
<blockquote type="cite">
<pre wrap="">run smoothly - but I'm not quite satisfied with the write speed
available on the DRBD device (locally, I don't care about the iSCSI
part yet).
All tests were done with dd (either copying from /dev/zero or to
/dev/null with 1, 2 or 4GB sized files). Reading gives me speeds at
around 390MB/sec which is way more than enough - but writing does
</pre>
</blockquote>
<pre wrap="">not
</pre>
<blockquote type="cite">
<pre wrap="">exceed 39MB/sec. Direct writes to the raid controller (without
</pre>
</blockquote>
<pre wrap="">DRBD)
</pre>
<blockquote type="cite">
<pre wrap="">are at around 95MB/sec which is still below the limit of
</pre>
</blockquote>
<pre wrap="">Gig-Ethernet.
</pre>
<blockquote type="cite">
<pre wrap="">I spent the whole day tweaking various aspects (Block-Device
</pre>
</blockquote>
<pre wrap="">tuning,
</pre>
<blockquote type="cite">
<pre wrap="">TCP-offload-settings, DRBD net-settings etc.) and managed to raise
</pre>
</blockquote>
<pre wrap="">the
</pre>
<blockquote type="cite">
<pre wrap="">write speed from initially 25MB/sec to 39MB/sec that way.
Any suggestions what happens to the missing ~60-50MB/sec that the
3ware controller is able to handle? Do you think the PCI bus is
"overtasked"? Would it be enough to simply replace the onboard NICs
with an additional PCI-Express Card or do you think the limit is
elsewhere? (DRBD settings, Options set in the default Distro Kernel
etc.).
</pre>
</blockquote>
<pre wrap="">drbdadm dump all
</pre>
</blockquote>
<pre wrap=""><!---->
common {
syncer {
rate 100M;
}
}
resource storage {
protocol C;
on nas03 {
device /dev/drbd0;
disk /dev/sda3;
address 172.16.15.3:7788;
meta-disk internal;
}
on nas04 {
device /dev/drbd0;
disk /dev/sda3;
address 172.16.15.4:7788;
meta-disk internal;
}
net {
unplug-watermark 1024;
after-sb-0pri disconnect;
after-sb-1pri disconnect;
after-sb-2pri disconnect;
rr-conflict disconnect;
}
disk {
on-io-error detach;
}
syncer {
rate 100M;
al-extents 257;
}
startup {
wfc-timeout 20;
degr-wfc-timeout 120;
}
handlers {
pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";
pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";
local-io-error "echo o > /proc/sysrq-trigger ; halt -f";
}
}
</pre>
<blockquote type="cite">
<pre wrap="">drbdsetup /dev/drbd0 show
</pre>
</blockquote>
<pre wrap=""><!---->disk {
size 0s _is_default; # bytes
on-io-error detach;
fencing dont-care _is_default;
}
net {
timeout 60 _is_default; # 1/10 seconds
max-epoch-size 2048 _is_default;
max-buffers 2048 _is_default;
unplug-watermark 1024;
connect-int 10 _is_default; # seconds
ping-int 10 _is_default; # seconds
sndbuf-size 131070 _is_default; # bytes
ko-count 0 _is_default;
after-sb-0pri disconnect _is_default;
after-sb-1pri disconnect _is_default;
after-sb-2pri disconnect _is_default;
rr-conflict disconnect _is_default;
ping-timeout 5 _is_default; # 1/10 seconds
}
syncer {
rate 102400k; # bytes/second
after -1 _is_default;
al-extents 257;
}
protocol C;
_this_host {
device "/dev/drbd0";
disk "/dev/sda3";
meta-disk internal;
address 172.16.15.3:7788;
}
_remote_host {
address 172.16.15.4:7788;
}
</pre>
<blockquote type="cite">
<pre wrap="">what exactly does your micro benchmark look like?
</pre>
</blockquote>
<pre wrap=""><!---->dd if=/dev/zero of=/mnt/testfile bs=1M count=2048
dd if=/mnt/testfile of=/dev/null
</pre>
<blockquote type="cite">
<pre wrap="">how do "StandAlone" and "Connected" drbd compare?
</pre>
</blockquote>
<pre wrap=""><!---->Standalone:
root@nas03:/mnt# dd if=/dev/zero of=/mnt/testfile bs=1M count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2,1 GB) copied, 54,1473 s, 39,7 MB/s
Connected:
root@nas03:/mnt# dd if=/dev/zero of=/mnt/testfile bs=1M count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2,1 GB) copied, 60,1652 s, 35,7 MB/s
</pre>
<blockquote type="cite">
<pre wrap="">what thoughput does the drbd resync achieve?
</pre>
</blockquote>
<pre wrap=""><!---->~ 63MB/sec
hmm...when I take the information above into account I would say...maybe LVM is the bottleneck? The speed comparison to local writes (achieving ~95mb/sec) were done on the root fs, which is direct on the sda device, not ontop of LVM.
</pre>
<blockquote type="cite">
<pre wrap="">--
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting <a class="moz-txt-link-freetext" href="http://www.linbit.com">http://www.linbit.com</a>
DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
__
please don't Cc me, but send to list -- I'm subscribed
_______________________________________________
drbd-user mailing list
<a class="moz-txt-link-abbreviated" href="mailto:drbd-user@lists.linbit.com">drbd-user@lists.linbit.com</a>
<a class="moz-txt-link-freetext" href="http://lists.linbit.com/mailman/listinfo/drbd-user">http://lists.linbit.com/mailman/listinfo/drbd-user</a>
</pre>
</blockquote>
<pre wrap=""><!---->_______________________________________________
drbd-user mailing list
<a class="moz-txt-link-abbreviated" href="mailto:drbd-user@lists.linbit.com">drbd-user@lists.linbit.com</a>
<a class="moz-txt-link-freetext" href="http://lists.linbit.com/mailman/listinfo/drbd-user">http://lists.linbit.com/mailman/listinfo/drbd-user</a>
</pre>
</blockquote>
</body>
</html>