[DRBD-user] Very poor performance

Arnold Krille arnold at arnoldarts.de
Thu Aug 23 22:09:14 CEST 2012



On Friday 24 August 2012 01:56:37 Adam Goryachev wrote:
> I have a pair of DRBD machines, and I'm getting very poor performance
> from the DRBD when the second server is connected. I've been working on
> resolving the performance issues for a few months, with not much luck.
> Here is some info on the current configuration:
> Both machines are identical (apart from drives):
> CPU Intel(R) Xeon(R) CPU E3-1230 V2 @ 3.30GHz
> RAM 8G
> One machine (the primary) has Intel 480G SSD drives (currently just 2)
> in RAID5 (one drive missing, will be added in another few days)
> The second machine (backup) has 2 x WD Caviar Black 2TB HDD (in RAID1,
> limited to 960GB)
> Actually, I created the partitions on the SSD a little smaller than max
> capacity because I read this can assist with performance, and matched
> the HDD's size to the SSD RAID block size.

Making the partitions on an SSD smaller than its maximum capacity is not for
performance but for wear-leveling: the SSD lives longer when you only use about
75-80% of its capacity.

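For example, leaving the last ~20% of the drive unpartitioned is easy to script. A minimal sketch, assuming a 480 GB SSD; the capacity and /dev/sdX device name are placeholders, not taken from the original post:

```shell
# Hypothetical example: over-provision a 480 GB SSD by partitioning only 80% of it.
# Sizes are in MiB; /dev/sdX is a placeholder, not a real device.
DISK_MIB=$((480 * 1024))            # raw capacity of the drive in MiB
PART_MIB=$((DISK_MIB * 80 / 100))   # use only 80%, leave the rest unpartitioned
echo "partition end: ${PART_MIB} MiB"
# parted -s /dev/sdX mklabel gpt mkpart primary 1MiB "${PART_MIB}MiB"   # run as root
```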
> I am using all software from Debian Stable with all updates/security
> updates installed:
> drbd8-utils 2:8.3.7-2.1

I have had good experience compiling the latest 8.3 from git and building Debian
packages from it: fewer bugs and better performance on Debian stable.

> Linux san1 2.6.32-5-amd64 #1 SMP Sun May 6 04:00:17 UTC 2012 x86_64
> GNU/Linux
> My config file for DRBD is:
> resource storage2 {
>     protocol A;
>     device /dev/drbd2 minor 2;
>     disk /dev/md1;
>     meta-disk internal;
>     on san1 {
>         address;
>     }
>     on san2 {
>         address;
>     }
>     net {
>         after-sb-0pri discard-younger-primary;
>         after-sb-1pri discard-secondary;
>         after-sb-2pri call-pri-lost-after-sb;
>         max-buffers 8000;
>         max-epoch-size 8000;
>         unplug-watermark 4096;
>         sndbuf-size 512k;
>     }
>     startup {
>         wfc-timeout 10;
>         degr-wfc-timeout 20;
>     }
>     syncer {
>         rate 100M;
>     }
> }
> root at san1:/etc/drbd.d# cat /proc/drbd
> version: 8.3.7 (api:88/proto:86-91)
> srcversion: EE47D8BF18AC166BE219757
>  2: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate A r----
>     ns:30035906 nr:0 dw:29914667 dr:42547522 al:49271 bm:380 lo:0 pe:0
> ua:0 ap:0 ep:1 wo:b oos:0
> root at san1:/etc/drbd.d# dd if=/dev/zero of=/dev/mapper/vg0-testdisk
> oflag=direct bs=1M count=100
> 100+0 records in
> 100+0 records out
> 104857600 bytes (105 MB) copied, 2.48851 s, 42.1 MB/s

With larger blocks (4 MB or 8 MB) you might get better values.

But beware: sequential throughput is a very bad indicator of performance unless
your workload is pure video playback. Better to test on a real filesystem with a
real benchmark; I am using dbench with great success, as it reproduces the typical
usage patterns of business computers.

> However, if I stop DRBD on the secondary:
> Then I get good performance:
> Can anyone suggest how I might improve performance while the secondary
> is connected? In the worst case scenario, I would expect a write speed
> to these drives between 100 and 150M/s. I can't test writing at the
> moment, since they are used by DRBD, but read performance:
> root at san2:/etc/drbd.d# dd if=/dev/md1 of=/dev/null oflag=dsync bs=1M
> count=100
> 100+0 records in
> 100+0 records out
> 104857600 bytes (105 MB) copied, 0.706124 s, 148 MB/s
> I am using a single 1G crossover ethernet for DRBD sync
> Should I increase the connection between the servers for sync to 2 x 1G
> which would exceed max write speed of the disks on san2?

First: get a dual link; that made my disks the limiting factor.
Second: use external metadata, at least on the HDDs, and put the metadata on a
different disk. That made my dual-gigabit link the limiting factor again...
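External metadata is set per host in the resource config; a hedged sketch assuming DRBD 8.3 syntax, where /dev/sdc1 is a hypothetical small partition on a separate spindle (the index in brackets selects a metadata slot on that partition):

```
resource storage2 {
    ...
    on san2 {
        disk      /dev/md1;
        meta-disk /dev/sdc1[0];   # placeholder device: metadata on a different disk
        address;
    }
}
```

Note that switching from internal to external metadata means the metadata has to be re-created (drbdadm create-md <resource>), so plan for a resync or a metadata dump/restore.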

> Is there some other config option to say "I don't mind if san2 is behind
> san1, just cache the changes and make them when you can", similar to
> running DRBD over a slow remote connection?

This is, afaik, what drbd-proxy does, but I will leave it to the sales people to
explain the details.

Have fun,

