[DRBD-user] drbd 8.4.1 slow, <1/4 speed of backing device, <1/2 of network throughput

France mailinglists at isg.si
Tue Mar 13 00:16:39 CET 2012

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hi all again,

If anyone cares: it seems that with 8.3.12 I _can_ get the expected read 
performance even when connected. Disabling barriers also _does_ make a 
difference to write performance, whereas with 8.4.1 it didn't. Default 
write speed is still slow. I originally thought the performance was the 
same, because I hadn't checked read speed and hadn't done any tuning.

Hopefully I can do some more tuning to increase the write performance.
That said, with a software RAID 0 of SATA disks I don't believe it is safe 
to run with no disk flushes or barriers.
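
If I do end up running without flushes, I would at least want to check, and 
probably disable, the drives' volatile write caches first. A rough sketch, 
untested here (/dev/sdX stands in for each RAID 0 member):

hdparm -W /dev/sdX      # report the current write-caching setting
hdparm -W0 /dev/sdX     # turn off the volatile write cache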

Does anyone care to comment on my results?
On an unrelated note, 8.4.1 seemed to perform better when bs was set to 
10M instead of 1M, while 8.3.12 works fine with either dd bs setting. Those 
tests are not included in this mail.
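
(To be explicit, the 10M runs were just the same kind of dd invocation with a 
larger block size and a scaled-down count, roughly:

dd if=/dev/zero of=BRISI bs=10M count=251 oflag=direct

so the total amount of data written stays comparable.)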

Here are the results for connected DRBD _with_ ext3 (this time I added a filesystem):
Write: 84.3 MB/s  Read: 463 MB/s
Tuning just:
                 no-disk-flushes;
                 no-disk-barrier;
Write: 132 MB/s
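
For anyone reproducing this: the two options go into the disk {} section of 
the resource on both nodes, and the running configuration can then be 
re-applied without downtime with something like (r0 being a placeholder for 
the resource name here):

drbdadm adjust r0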

Actual tests:
[root@s3 brisi]# dd if=/dev/zero of=BRISI bs=1M count=512 oflag=direct
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 6.36925 s, 84.3 MB/s
Barriers disabled:
[root@s3 brisi]# dd if=/dev/zero of=BRISI bs=1M count=2512 oflag=direct
2512+0 records in
2512+0 records out
2634022912 bytes (2.6 GB) copied, 19.9935 s, 132 MB/s
[root@s3 brisi]# dd if=BRISI of=/dev/null bs=1M count=2512 iflag=direct
2512+0 records in
2512+0 records out
2634022912 bytes (2.6 GB) copied, 5.69369 s, 463 MB/s

Regards,
M.

On 9/3/12 8:59 PM, France wrote:
> Hi all,
>
> my latest DRBD install seems awfully slow when the resources are connected 
> and only a bit better when disconnected. On the other hand, the initial 
> sync, or a resync after I let one node fall behind, is as fast as 
> expected and uses all of the available network bandwidth.
>
> LV backing device speed:
> Write: 446 MB/s Read: 477 MB/s
> DRBD disconnected:
> Write: 208 MB/s Read: 255 MB/s
> DRBD connected:
> Write: 73.4 MB/s Read: 255 MB/s
> Expected DRBD connected:
> Write: 200+MB/s Read: 400+MB/s
>
> Below is more info. Please help me get sequential write speed up to at 
> least the network throughput.
>
> CentOS 6.2: 2.6.32-220.7.1.el6.x86_64
> DRBD version: 8.4.1 (api:1/proto:86-100)
> (The same problem seems to persist if I downgrade to 8.3.12.)
> 2x 1GbE network cards bonded in round-robin, MTU 9000.
> Backing device is LVM on software RAID 0.
> No filesystems were used in the tests.
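>
> To rule out the obvious, the bond mode and MTU can be double-checked with 
> something like this (bond0 being whatever the bond interface is called here):
>
> cat /proc/net/bonding/bond0      # mode should show round-robin (balance-rr)
> ip link show bond0 | grep mtu    # should report mtu 9000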
>
> Network speed tests are consistently around 1.75 Gb/s, tested with iperf.
> [root@s2 drbd.d]# iperf -c 192.168.168.3
> ------------------------------------------------------------
> Client connecting to 192.168.168.3, TCP port 5001
> TCP window size: 27.8 KByte (default)
> ------------------------------------------------------------
> [  3] local 192.168.168.2 port 49739 connected with 192.168.168.3 port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  3]  0.0-10.0 sec  2.07 GBytes  1.78 Gbits/sec
>
> Typical /proc/drbd output during a sync:
>  2: cs:SyncTarget ro:Secondary/Secondary ds:Inconsistent/UpToDate C r-----
>     ns:0 nr:1999064 dw:1998040 dr:0 al:0 bm:125 lo:3 pe:13 ua:2 ap:0 ep:1 wo:b oos:142260
>         [=================>..] sync'ed: 93.5% (142260/2140300)K
>         finish: 0:00:00 speed: 222,004 (222,004) want: 191,920 K/sec
>  2: cs:SyncTarget ro:Secondary/Primary ds:Inconsistent/UpToDate C r-----
>     ns:0 nr:1635328 dw:1633792 dr:0 al:0 bm:99 lo:4 pe:27 ua:3 ap:0 ep:1 wo:b oos:938496
>         [===========>........] sync'ed: 63.6% (938496/2572288)K
>         finish: 0:00:04 speed: 204,224 (204,224) want: 224,760 K/sec
>
> Speed test of backing device:
> [root@s3 ~]# dd if=/dev/zero of=/dev/diski/s2 bs=1M count=2512 oflag=direct
> 2512+0 records in
> 2512+0 records out
> 2634022912 bytes (2.6 GB) copied, 5.9058 s, 446 MB/s
> [root@s3 ~]# dd if=/dev/diski/s2 of=/dev/null bs=1M count=2512 iflag=direct
> 2512+0 records in
> 2512+0 records out
> 2634022912 bytes (2.6 GB) copied, 5.5232 s, 477 MB/s
>
> Speed test of drbd in disconnected mode:
> [root@s3 ~]# dd if=/dev/zero of=/dev/drbd2 bs=1M count=2512 oflag=direct
> 2512+0 records in
> 2512+0 records out
> 2634022912 bytes (2.6 GB) copied, 12.6405 s, 208 MB/s
> [root@s3 ~]# dd if=/dev/drbd2 of=/dev/null bs=1M count=2512 iflag=direct
> 2512+0 records in
> 2512+0 records out
> 2634022912 bytes (2.6 GB) copied, 10.3494 s, 255 MB/s
>
> Speed test of drbd in connected mode:
> [root@s3 ~]# dd if=/dev/zero of=/dev/drbd2 bs=1M count=2512 oflag=direct
> 2512+0 records in
> 2512+0 records out
> 2634022912 bytes (2.6 GB) copied, 35.8805 s, 73.4 MB/s
> [root@s3 ~]# dd if=/dev/drbd2 of=/dev/null bs=1M count=2512 iflag=direct
> 2512+0 records in
> 2512+0 records out
> 2634022912 bytes (2.6 GB) copied, 10.3434 s, 255 MB/s
>
> Currently active settings:
>     disk {
>         on-io-error detach;
>         resync-rate 420M;
>                 #disk-barrier no;
>                 #disk-flushes no;
>                 #c-plan-ahead 0; # use a static resync rate (disable the dynamic controller)
>         c-max-rate 420M; # (222 MB/s achieved)
>         al-extents 3389;
>     }
>
>     net {
> #               csums-alg crc32c;
>         sndbuf-size 0;
> #               max-buffers 8000;
> #               max-epoch-size 8000;
>         unplug-watermark 16;
>         after-sb-0pri discard-least-changes;
>         after-sb-1pri call-pri-lost-after-sb;
>         after-sb-2pri call-pri-lost-after-sb;
>     }
>
> I did try the optimization advice from
> http://www.drbd.org/users-guide/ch-latency.html
> but the improvements were marginal, if any.
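>
> For reference, the scheduler part of that advice boils down to something like 
> this on the backing devices (sdX is a placeholder for each member disk; 
> values as suggested in that chapter, if I remember it correctly):
>
> echo deadline > /sys/block/sdX/queue/scheduler
> echo 0 > /sys/block/sdX/queue/iosched/front_merges
> echo 150 > /sys/block/sdX/queue/iosched/read_expire
> echo 1500 > /sys/block/sdX/queue/iosched/write_expire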
>
> Regards,
> France
> _______________________________________________
> drbd-user mailing list
> drbd-user at lists.linbit.com
> http://lists.linbit.com/mailman/listinfo/drbd-user



