Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
Hi all,
Any ideas? I'm essentially blocked now; I can't go into production with
these write speeds.
Am I the only one who sees a >60% degradation in write speed when the two
nodes are connected to each other, with full disk speed only in the
disconnected state?
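
For reference, this is roughly the comparison I keep repeating on the
primary (the resource is called "data" and is mounted on /data, as in the
config quoted below); first the connected case, then the same write after
disconnecting:

st2:/data# sync; echo 3 > /proc/sys/vm/drop_caches; sync
st2:/data# dd if=/dev/zero of=5G bs=1M count=5000; rm -f 5G
st2:/data# drbdadm disconnect data
st2:/data# sync; echo 3 > /proc/sys/vm/drop_caches; sync
st2:/data# dd if=/dev/zero of=5G bs=1M count=5000; rm -f 5G
st2:/data# drbdadm connect data

The first dd is the slow one; after the disconnect the same write runs at
full disk speed.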
Regards,
Bram.
Bram Matthys wrote, on 3-11-2013 11:26:
> Hi,
>
> I'm now using a RAID10 array and (following Arnold's advice) the metadata
> is on a separate (SSD) disk.
> Write speeds are still bad, but they are ONLY low when both nodes are
> connected.
>
> root@st2:/data# sync; echo 3 >/proc/sys/vm/drop_caches ;sync; dd
> if=/dev/zero of=5G bs=1M count=5000; rm -f 5G
> 5000+0 records in
> 5000+0 records out
> 5242880000 bytes (5.2 GB) copied, 57.1802 s, 91.7 MB/s
>
> root@st2:/data# drbdadm disconnect data
> root@st2:/data# sync; echo 3 >/proc/sys/vm/drop_caches ;sync; dd
> if=/dev/zero of=5G bs=1M count=5000; rm -f 5G
> 5000+0 records in
> 5000+0 records out
> 5242880000 bytes (5.2 GB) copied, 21.4724 s, 244 MB/s
>
> When I (re)connect data again, the resync speed is fine:
> st1# cat /proc/drbd
> version: 8.3.11 (api:88/proto:86-96)
> srcversion: F937DCB2E5D83C6CCE4A6C9
>
> 1: cs:SyncTarget ro:Secondary/Primary ds:Inconsistent/UpToDate C r-----
> ns:0 nr:5121168 dw:5121168 dr:2010340 al:0 bm:133 lo:0 pe:206 ua:0 ap:0
> ep:1 wo:d oos:3130232
> [======>.............] sync'ed: 38.9% (3056/4996)M
> finish: 0:00:14 speed: 220,760 (220,760) want: 256,000 K/sec
>
> And, as mentioned earlier, netperf achieves near-10GbE speed:
> # netperf -H 192.168.220.1
> TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.220.1
> (192.168.220.1) port 0 AF_INET : demo
> Recv Send Send
> Socket Socket Message Elapsed
> Size Size Size Time Throughput
> bytes bytes bytes secs. 10^6bits/sec
>
> 87380 16384 16384 10.00 9897.79
> (same in both directions)
>
> It doesn't matter whether I make st1 or st2 primary and run the tests
> there: both servers achieve around 240 MB/s write speed in disconnected
> mode and around 80-90 MB/s when connected, so roughly 1/3rd.
>
> Any ideas?
>
> # drbdadm dump
> # /etc/drbd.conf
> common {
>     protocol C;
>     startup {
>         degr-wfc-timeout 120;
>         wfc-timeout      120;
>     }
> }
>
> # resource data on st2: not ignored, not stacked
> resource data {
>     on st1 {
>         device    /dev/drbd1 minor 1;
>         disk      /dev/md4;
>         address   ipv4 192.168.220.1:7789;
>         meta-disk /dev/md3 [0];
>     }
>     on st2 {
>         device    /dev/drbd1 minor 1;
>         disk      /dev/md4;
>         address   ipv4 192.168.220.2:7789;
>         meta-disk /dev/md3 [0];
>     }
>     net {
>         data-integrity-alg sha256;
>         max-buffers        8000;
>         max-epoch-size     8000;
>         sndbuf-size        512k;
>     }
>     disk {
>         no-disk-barrier;
>         no-disk-flushes;
>     }
>     syncer {
>         csums-alg sha256;
>         rate      250M;
>     }
> }
>
> NOTE: I was using protocol A without the no-disk-* options before; I only
> switched to protocol C with no-disk-* to see if it made any significant
> difference. It didn't.
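>
> For completeness, switching between those variants is nothing fancier than
> editing /etc/drbd.conf on both nodes and re-applying it, roughly:
>
> st1# drbdadm adjust data
> st2# drbdadm adjust data
>
> followed by the same dd test as above.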
>
> root@st2:/data# uname -a
> Linux st2 3.2.0-4-amd64 #1 SMP Debian 3.2.46-1+deb7u1 x86_64 GNU/Linux
> root@st2:/data# cat /proc/drbd
> version: 8.3.11 (api:88/proto:86-96)
> srcversion: F937DCB2E5D83C6CCE4A6C9
>
> 1: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
> ns:5121332 nr:0 dw:15358180 dr:5130289 al:3773 bm:377 lo:0 pe:0 ua:0
> ap:0 ep:1 wo:d oos:0
>
> root@st2:/data# dpkg --list|grep drbd
> ii drbd8-utils 2:8.3.13-2 amd64
> RAID 1 over tcp/ip for Linux utilities
>
> If you need anything else, just let me know.
>
> Thanks again,
>
> Bram.
>
> Arnold Krille wrote, on 7-10-2013 1:37:
>> Hi,
>>
>> On Sun, 06 Oct 2013 18:54:12 +0200 Bram Matthys <syzop at vulnscan.org>
>> wrote:
>>> I'm currently testing DRBD and am having write performance problems.
>>> On the local raid array I achieve 124MB/s, but with DRBD I get only
>>> 41MB/s out of it, or if the secondary node is down (to rule out
>>> network issues) then 53MB/s at best.
>>> Tried protocols A / B / C, with and without no-disk-barrier and
>>> no-disk-flushes, but this didn't change much (only +/- 2 MB/s
>>> difference).
>> <snip>
>>
>> Yep, internal meta-disk: write block, seek to end, write log, seek to
>> front, write block, seek to end, write log...
>>
>> Put the meta-disk on a different hd and watch your write rate go up.
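>>
>> Roughly (untested sketch, adapt the device names to your setup), the
>> per-host section then becomes something like:
>>
>> on st1 {
>>     device    /dev/drbd1;
>>     disk      /dev/md4;
>>     address   192.168.220.1:7789;
>>     meta-disk /dev/sdX [0];
>> }
>>
>> instead of using "meta-disk internal;".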
>>
>> - Arnold
>>
>>
>>
>
>
--
Bram Matthys
Software developer/IT consultant syzop at vulnscan.org
Website: www.vulnscan.org
PGP key: www.vulnscan.org/pubkey.asc
PGP fp: EBCA 8977 FCA6 0AB0 6EDB 04A7 6E67 6D45 7FE1 99A6