[DRBD-user] Tuning DRBD for small writes

Lin Zhao lin at groupon.com
Thu Jan 10 22:09:09 CET 2013

BTW, we are using DRBD 8.2.

On Thu, Jan 10, 2013 at 1:08 PM, Lin Zhao <lin at groupon.com> wrote:

> All,
>
> I'm setting up DRBD for my system, but poor disk write performance is
> really throttling it, even with Protocol A. The write speed needs to be at
> least 20 MB/s to avoid becoming a bottleneck. Does anyone have advice on
> tuning for small writes? 10 KB writes are the most common operation my
> system performs.
>
> It doesn't make much sense to me that Protocol A performance is so far off
> the native disk's. Does anyone have a theory?
>
> # for i in $(seq 5); do dd if=/dev/zero of=/dev/drbd1 bs=10K count=1000
> oflag=direct; done
> 1000+0 records in
> 1000+0 records out
> 10240000 bytes (10 MB) copied, 8.43085 s, 1.2 MB/s
> 1000+0 records in
> 1000+0 records out
> 10240000 bytes (10 MB) copied, 8.48178 s, 1.2 MB/s
> 1000+0 records in
> 1000+0 records out
> 10240000 bytes (10 MB) copied, 8.99026 s, 1.1 MB/s
> 1000+0 records in
> 1000+0 records out
> 10240000 bytes (10 MB) copied, 8.80776 s, 1.2 MB/s
> 1000+0 records in
> 1000+0 records out
> 10240000 bytes (10 MB) copied, 10.2134 s, 1.0 MB/s
>
> # for i in $(seq 5); do dd if=/dev/zero of=/dev/sda2 bs=10K count=1000
> oflag=direct; done
> 1000+0 records in
> 1000+0 records out
> 10240000 bytes (10 MB) copied, 0.17393 s, 58.9 MB/s
> 1000+0 records in
> 1000+0 records out
> 10240000 bytes (10 MB) copied, 0.157572 s, 65.0 MB/s
> 1000+0 records in
> 1000+0 records out
> 10240000 bytes (10 MB) copied, 0.157324 s, 65.1 MB/s
> 1000+0 records in
> 1000+0 records out
> 10240000 bytes (10 MB) copied, 0.157506 s, 65.0 MB/s
> 1000+0 records in
> 1000+0 records out
> 10240000 bytes (10 MB) copied, 0.157329 s, 65.1 MB/s
>
>
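> A quick way to separate per-request overhead from raw throughput (a
> follow-up I have not run yet, so treat it only as a sketch) is to repeat
> the same dd test with increasing block sizes; if throughput scales roughly
> with the block size, the cost is per write request rather than per byte:
>
> # illustrative only; each run writes a different total amount
> for bs in 10K 64K 512K 4M; do
>   dd if=/dev/zero of=/dev/drbd1 bs=$bs count=256 oflag=direct
> done
>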
> ---------------------------------------------------------------------------------
>
> dump of drbd.conf:
> global {
>   usage-count yes;
> }
>
> common {
>   protocol A;
>   syncer {
>     rate 400M;
>   }
> }
>
> resource r0 {
>
>   on mbus16 {
>     device    /dev/drbd1;
>     disk      /dev/sda2;
>     address   10.20.76.55:7789;
>     meta-disk internal;
>   }
>   on mbus16-backup {
>     device    /dev/drbd1;
>     disk      /dev/md2;
>     address   10.20.40.100:7789;
>     meta-disk internal;
>   }
>
> }
>
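> In case it helps the discussion, this is the kind of tuning I have been
> looking at (untested on this pair so far; whether each option is available
> depends on the exact 8.2.x release, and the values are only illustrative
> starting points, not recommendations):
>
> common {
>   protocol A;
>   syncer {
>     rate 400M;
>     al-extents 3389;     # larger activity log -> fewer metadata updates
>   }
>   net {
>     sndbuf-size 512k;    # bigger TCP send buffer helps Protocol A
>     max-buffers 8000;
>     max-epoch-size 8000;
>     unplug-watermark 16;
>   }
>   disk {
>     no-disk-barrier;     # only safe with battery-backed / non-volatile
>     no-disk-flushes;     # write cache on the backing storage
>     no-md-flushes;
>   }
> }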
>
> ---------------------------------------------------------------------------------
>
> Network latency is negligible:
> # ping mbus16-backup.snc1
> PING mbus16-backup.snc1 (10.20.40.100) 56(84) bytes of data.
> 64 bytes from mbus16-backup.snc1 (10.20.40.100): icmp_seq=1 ttl=62
> time=0.158 ms
> 64 bytes from mbus16-backup.snc1 (10.20.40.100): icmp_seq=2 ttl=62
> time=0.187 ms
> 64 bytes from mbus16-backup.snc1 (10.20.40.100): icmp_seq=3 ttl=62
> time=0.186 ms
> 64 bytes from mbus16-backup.snc1 (10.20.40.100): icmp_seq=4 ttl=62
> time=0.192 ms
> 64 bytes from mbus16-backup.snc1 (10.20.40.100): icmp_seq=5 ttl=62
> time=0.152 ms
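>
> For what it's worth, a rough back-of-the-envelope check on the numbers
> above (my own arithmetic, so please double-check):
>
>   8.43 s / 1000 writes ~= 8.4  ms per 10 KB write through /dev/drbd1
>   0.17 s / 1000 writes ~= 0.17 ms per write directly to /dev/sda2
>
> so each write through drbd carries roughly 8 ms of extra overhead, far
> more than the ~0.2 ms network round trip, which makes me suspect
> per-request overhead inside drbd rather than the network itself.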
>
> --
> Lin Zhao
> Data Platform Engineer
> 3101 Park Blvd, Palo Alto, CA 94306
>



-- 
Lin Zhao
Data Platform Engineer
3101 Park Blvd, Palo Alto, CA 94306