[DRBD-user] multiple DRBD + solid state drive + 10G Ethernet performance tuning. Help!!

Julien Escario escario at azylog.net
Fri Oct 29 18:13:59 CEST 2010

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


I asked for the same thing (without SSD) a few weeks ago.

Someone answered that this level of performance is perfectly normal in a
dual-primary configuration.
It seems to be due to network latency (first server I/O + network latency +
second server I/O + network latency for the ACK).
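
A quick back-of-envelope check supports that: at the ~8 MB/s per device reported
below, with 4k synchronous writes, each write takes roughly half a millisecond,
which is on the order of one local I/O plus a replication round trip plus the
remote I/O. A minimal sketch of the arithmetic (shell, using the figures quoted
below):

    # ~2048 writes/s are needed to sustain 8 MB/s with 4 KiB blocks
    echo '8 * 1024 * 1024 / 4096' | bc      # -> 2048
    # so each synchronous write completes in about 1/2048 s
    echo 'scale=3; 1000 / 2048' | bc        # -> .488 (ms per write)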

I finally decided that DRBD is unusable in a dual-primary setup because of the
performance drop.

Julien

On 29/10/2010 18:09, wang xuchen wrote:
> Hi all,
>
> I have encountered a DRBD write performance bottleneck issue.
>
> According to the DRBD documentation, "DRBD then reduces that throughput maximum
> by its additional throughput overhead, which can be expected to be less than 3 percent."
>
> My current test environment is:
>
> (1) Hard drive: a 300 GB SSD with 8 partitions on it, each of which has a DRBD
> device created on top of it. I used the dd utility to test its raw performance:
> 97 MB/s with a 4k block size.
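>
> (For reference, a local baseline of that kind is usually taken with something
> like the following; this is an illustrative invocation, not the exact command
> used, and writing to the backing partition is only safe before the DRBD
> metadata has been created on it:)
>
>      dd if=/dev/zero of=/dev/fioa3 bs=4k count=100000 oflag=direct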
>
>
> (2) Network: a dedicated 10G Ethernet card for data replication:
> ethtool eth2
> Settings for eth2:
> ...
>          Speed: 10000Mb/s
> ...
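>
> (For small synchronous writes the round-trip latency of this link matters more
> than its bandwidth; both can be checked independently of DRBD with, for example,
> iperf and ping. Illustrative commands; the replication IP is taken from the
> config below:)
>
>      # on NSS_108
>      iperf -s
>      # on Server1
>      iperf -c 192.168.202.108
>      ping -c 10 192.168.202.108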
>
> (3) DRBD configuration: (Here is one of them).
>
>      on Server1 {
>          device           /dev/drbd3 minor 3;
>          disk             /dev/fioa3;
>          address          ipv4 192.168.202.107:7793;
>          meta-disk        internal;
>      }
>      on NSS_108 {
>          device           /dev/drbd3 minor 3;
>          disk             /dev/fioa3;
>          address          ipv4 192.168.202.108:7793;
>          meta-disk        internal;
>      }
>      net {
>          allow-two-primaries;
>          after-sb-0pri    discard-zero-changes;
>          after-sb-1pri    consensus;
>          after-sb-2pri    call-pri-lost-after-sb;
>          rr-conflict      disconnect;
>          max-buffers      4000;
>          max-epoch-size   16000;
>          unplug-watermark 4000;
>          sndbuf-size       2M;
>          data-integrity-alg crc32c;
>      }
>      syncer {
>          rate             300M;
>          csums-alg        md5;
>          verify-alg       crc32c;
>          al-extents       3800;
>          cpu-mask           2;
>      }
> }
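>
> (One note on the net section above: data-integrity-alg makes DRBD compute a
> checksum over every replicated request; it is mainly a diagnostic option and
> costs throughput. A sketch of the same section with it disabled, everything
> else unchanged:)
>
>      net {
>          ...
>          # data-integrity-alg crc32c;
>      }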
>
> (4) Test result:
>
> I have a simple script which runs multiple instances of dd against their
> corresponding DRBD devices:
>
> dd if=/dev/zero of=/dev/drbd1 bs=4k count=10000 oflag=direct &
> ....
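>
> (For illustration, an equivalent loop; the actual device numbering is an
> assumption here:)
>
> for i in 1 2 3 4 5 6 7 8; do
>     dd if=/dev/zero of=/dev/drbd$i bs=4k count=10000 oflag=direct &
> done
> wait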
>
> For one device, I got roughly 8 MB/s. As the test went on, I increased the number
> of devices to see if that would help performance. Unfortunately, as the number of
> devices grows, the throughput just gets divided among them, with the total adding
> up to about 10 MB/s.
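>
> (One way to check whether the limit is per-request latency rather than
> bandwidth is to repeat the test with a much larger block size, e.g.:)
>
> dd if=/dev/zero of=/dev/drbd1 bs=1M count=1000 oflag=direct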
>
> Can somebody give me a hint on what was going wrong?
>
> Many Thanks.
> Ben
>
>
> Commit yourself to constant self-improvement


