On Wed, Jul 30, 2008 at 08:41:27PM -0500, nathan at robotics.net wrote:
> On Wed, 30 Jul 2008, nathan at robotics.net wrote:
>
>> What is the max performance anyone has seen with DRBD? With the default
>> config I am seeing 80 MB/s write and 159 MB/s read. I googled around
>> and found Florian's blog and set al-extents 3833, max-buffers 8192,
>> unplug-watermark 128 and was able to get that up to 118 MB/s write and
>> 180 MB/s read.

rum & kugel @ linbit's lab

OS:     Debian etch (4.0)
Kernel: backports
        Linux rum 2.6.22-4-amd64 #1 SMP Tue Feb 12 10:29:27 UTC 2008
        x86_64 GNU/Linux
        (yeah, well, it is slightly stale; we are going to 2.6.26 now)
CPUs:   2 x Intel(R) Xeon(R) CPU E5310 @ 1.60GHz (= 8 cores)
RAM:    4 GByte (DIMM Synchronous 667 MHz)
DRBD version: somewhere in between 8.2.6 and 8.2.7 :)

Throughput on LV:  239.9 MiByte/s
DRBD standalone:   239.8 MiByte/s
DRBD connected:    163.8 MiByte/s (protocol C, over 2x 1GbE bonding balance-rr)

We used two e1000 ethernet cards as a bond for the network connection in
this test. Using a Dolphinics interconnect, with a nominal bandwidth of
2.5 GBit/s, we got

DRBD connected:    235 MiByte/s

So we basically max out our hardware here. Latency figures are fine as
well. We have not yet tried InfiniBand or 10 GbE in that lab, but for
more throughput we'd need a faster storage subsystem in our lab anyway.
We do have a few faster storage systems in production, but they are
read-mostly, with not much write load on average, so they are "only"
connected with 1 GbE, which obviously limits the write throughput there
to about 111 MiByte/s.

> Hate to reply to my own post, but wanted to show that I am getting 2.19
> Gbit/s between hosts over InfiniBand, and reads from the device DRBD is
> using are 543 MB/s.

I guess you have a decent non-volatile, battery-backed write cache, and
you have it enabled, right? Then you can safely say "no-disk-flushes"
and "no-md-flushes" in drbd.conf.
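Put together, the flush options and the tuning knobs quoted above would
go into drbd.conf roughly like this. This is only a sketch in the
DRBD 8.2-era config syntax; the resource name "r0" is made up, and the
no-*-flushes options are safe ONLY with a non-volatile, battery-backed
write cache:

  resource r0 {
    protocol C;

    syncer {
      al-extents 3833;         # larger activity log, fewer metadata updates
    }

    net {
      max-buffers      8192;   # the tuning from Florian's blog
      unplug-watermark 128;    # see below: try values up to max-buffers
    }

    disk {
      # ONLY with battery-backed write cache enabled:
      no-disk-flushes;
      no-md-flushes;
    }
  }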
You may experiment more with the "unplug-watermark"; maybe your storage
likes it better in the order of max-buffers. You may also check whether
current drbd-8.2.git gives you an advantage, as we changed some socket
options and the send buffer handling.

> [root at xen1 src]# iperf -c 172.16.0.220
> ------------------------------------------------------------
> Client connecting to 172.16.0.220, TCP port 5001
> TCP window size: 16.0 KByte (default)
> ------------------------------------------------------------
> [  3] local 172.16.0.221 port 33300 connected with 172.16.0.220 port 5001
> [  3]  0.0-10.0 sec  2.54 GBytes  2.18 Gbits/sec
>
> [root at xen0 src]# sync; dd if=/dev/sdb of=/dev/null bs=4096 count=1M
> 1048576+0 records in
> 1048576+0 records out
> 4294967296 bytes (4.3 GB) copied, 7.91526 seconds, 543 MB/s

Half (all?) of that may have been in cache already. For READ performance
tests, to start cache-cold, do

  echo 3 > /proc/sys/vm/drop_caches
  dd if=/dev/drbd ...

or, to bypass the cache,

  dd if=/dev/drbd ... iflag=direct

Also, for write throughput tests on the block device, use

  dd if=/dev/zero of=/dev/XYZ bs=500M count=1 oflag=direct,dsync

where the "*flag=direct" is the important part. For some reason, when
writing to a block device directly instead of into a file on a file
system on that block device, the buffer cache slows things down.

For throughput tests on a file system, to get rid of the allocation
overhead, ignore the first pass, then do

  dd if=/dev/zero of=/mnt/point/some/file bs=500M count=1 conv=fsync,notrunc

where the important part is the conv=fsync; the notrunc leaves the file
layout as is, so the file system does not have to do the extra work of
de-allocating and re-allocating the data blocks.
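The two-pass file-system write test can be wrapped in a small script.
This is only a sketch, assuming GNU dd on Linux; TARGET, BS and COUNT
are illustrative defaults I made up (for a real measurement, point
TARGET at a file on the DRBD-backed mount and use something like
BS=500M COUNT=1):

```shell
#!/bin/sh
# Sketch of the write-throughput test described above (assumed names).
set -e
TARGET=${TARGET:-/tmp/drbd-throughput-test}
BS=${BS:-1M}
COUNT=${COUNT:-16}

# First pass: pays the block-allocation overhead; ignore its timing.
dd if=/dev/zero of="$TARGET" bs="$BS" count="$COUNT" conv=fsync 2>/dev/null

# Second pass: conv=fsync makes dd flush before reporting its timing;
# notrunc keeps the existing layout, so the file system does not have
# to de-allocate and re-allocate the data blocks.
RESULT=$(dd if=/dev/zero of="$TARGET" bs="$BS" count="$COUNT" \
            conv=fsync,notrunc 2>&1 | grep copied)
echo "$RESULT"

rm -f "$TARGET"
```

Report the MB/s figure from the second pass only.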
--
: Lars Ellenberg                            http://www.linbit.com :
: DRBD/HA support and consulting              sales at linbit.com :
: LINBIT Information Technologies GmbH       Tel +43-1-8178292-0  :
: Vivenotgasse 48, A-1120 Vienna/Europe      Fax +43-1-8178292-82 :
__
please don't Cc me, but send to list -- I'm subscribed