Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
Greetings,
I'm currently working on a new server that I plan to use for an iSCSI
SAN. Because of a hardware availability issue I have to initially
configure this as a single-node DRBD system until I complete the
migration of data from another server with similar hardware.
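For context, the resource was brought up on the single node with the
usual DRBD 8.4 steps, roughly (a sketch, using the resource name "data"
from the config below):

drbdadm create-md data
drbdadm up data
drbdadm primary --force data   # no peer yet, so promotion must be forced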
In a nutshell, my problem is that I'm seeing horrible write performance
when writing to the DRBD device versus writing to the underlying RAID
array:
root@san01-a:~# dd if=/dev/zero of=/dev/drbd1 bs=1M count=10000 oflag=direct
10485760000 bytes (10 GB) copied, 25.5897 s, 410 MB/s
root@san01-a:~# dd if=/dev/zero of=/dev/sda2 bs=1M count=10000 oflag=direct
10485760000 bytes (10 GB) copied, 7.07579 s, 1.5 GB/s
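Since dd with oflag=direct issues one request at a time, a queued
benchmark may be a useful cross-check; a sketch with fio (the job
parameters here are illustrative, not what I actually ran):

fio --name=drbdwrite --filename=/dev/drbd1 --rw=write --bs=1M \
    --ioengine=libaio --iodepth=32 --direct=1 --size=10G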
Quick hardware description:
SuperMicro mainboard w/ Xeon E5-2603
16 GB RAM
Dual 120 GB SSDs in a Linux RAID 1 for the OS - Debian Wheezy
Adaptec ASR7805 w/ BBU - latest firmware (32033)
22 Toshiba 1 TB SAS drives (MG03SCA100) in a RAID 60 array w/ 128k stripe
Dual 10G links (Intel card) to the second server when it is ready
drbdsetup show:
root@san01-a:~# drbdsetup show
resource meta {
    options {
        cpu-mask            "ff";
    }
    net {
        max-epoch-size      20000;
        max-buffers         131072;
        unplug-watermark    131072;
        after-sb-0pri       discard-least-changes;
        after-sb-1pri       consensus;
    }
    _remote_host {
        address             ipv4 1.1.200.2:7788;
    }
    _this_host {
        address             ipv4 1.1.200.1:7788;
        volume 0 {
            device          minor 0;
            disk            "/dev/sda1";
            meta-disk       internal;
            disk {
                disk-flushes    no;
                md-flushes      no;
                al-extents      3389;
            }
        }
    }
}
resource data {
    options {
        cpu-mask            "ff";
    }
    net {
        max-epoch-size      20000;
        max-buffers         131072;
        unplug-watermark    131072;
        after-sb-0pri       discard-least-changes;
        after-sb-1pri       consensus;
    }
    _remote_host {
        address             ipv4 1.1.200.2:7789;
    }
    _this_host {
        address             ipv4 1.1.200.1:7789;
        volume 0 {
            device          minor 1;
            disk            "/dev/sda2";
            meta-disk       internal;
            disk {
                disk-flushes    no;
                md-flushes      no;
                al-extents      3389;
            }
        }
    }
}
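For completeness, the on-disk config this corresponds to is roughly the
following /etc/drbd.d/data.res (a sketch; the hostnames san01-a and
san01-b are placeholders for our real node names):

resource data {
    options {
        cpu-mask "ff";
    }
    net {
        max-epoch-size      20000;
        max-buffers         131072;
        unplug-watermark    131072;
        after-sb-0pri       discard-least-changes;
        after-sb-1pri       consensus;
    }
    disk {
        disk-flushes    no;
        md-flushes      no;
        al-extents      3389;
    }
    on san01-a {
        device      /dev/drbd1;
        disk        /dev/sda2;
        address     1.1.200.1:7789;
        meta-disk   internal;
    }
    on san01-b {
        device      /dev/drbd1;
        disk        /dev/sda2;
        address     1.1.200.2:7789;
        meta-disk   internal;
    }
}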
=====================
I've tried this with various settings, and about the only one that
really makes a difference is cpu-mask "ff". The others (md-flushes,
al-extents, disk-flushes, max-buffers) can all be removed with no real
change in performance.
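For anyone reproducing this, the test loop was essentially
edit-adjust-retest (a sketch; resource name "data" as above):

# edit /etc/drbd.d/data.res, then apply to the running resource:
drbdadm adjust data
# and re-run the benchmark:
dd if=/dev/zero of=/dev/drbd1 bs=1M count=10000 oflag=direct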
DRBD and drbd-utils were built from the git code. From /proc/drbd:
version: 8.4.5 (api:1/proto:86-101)
GIT-hash: 1d360bde0e095d495786eaeb2a1ac76888e4db96 build by root@san01, 2015-02-05 11:09:59
 0: cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown C r----s
    ns:0 nr:0 dw:0 dr:664 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:511948
 1: cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown C r----s
    ns:0 nr:0 dw:10240000 dr:996 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:17570968856
root@san01-a:~# drbdadm -v
Version: 8.9.2rc2 (api:1)
GIT-hash: faeb645ecbf334347e0512b4fa2d7549543b5b50 build by root@san01, 2015-02-10 13:41:16
root@san01-a:~# uname -a
Linux san01 3.2.0-4-amd64 #1 SMP Debian 3.2.65-1+deb7u1 x86_64 GNU/Linux
The strangest thing is that I built a system with all the same hardware
8 months ago and was seeing 700 MB/s writes. The difference was that I
had both nodes available while creating the DRBD resources; this time I
am starting with only one node.
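Once the second box arrives, the plan is the standard join (a sketch;
run on the new peer after installing DRBD and copying the config):

drbdadm create-md data
drbdadm up data    # connects to the primary and starts the initial sync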
--
Brian