[DRBD-user] slow disk throughput

Lee Christie Lee at titaninternet.co.uk
Mon Jun 16 23:45:26 CEST 2008

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Heh, we saw exactly the same thing - excellent disk performance using
"dd" on a "normal" local disk, and terrible performance when writing to
a DRBD volume, even when the node is standalone, i.e. not replicating.
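
For reference, the comparison we ran was roughly along these lines (the
mount points are just examples, not our actual paths):

# 5GB sequential write to a plain local filesystem...
dd if=/dev/zero of=/mnt/local/out.file bs=1M count=5000
# ...and the same write to a filesystem on the DRBD device
dd if=/dev/zero of=/mnt/drbd/out.file bs=1M count=5000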

The fix for us was to add the no-disk-flushes and no-md-flushes
parameters. I didn't research them in depth, but as I understand it they
stop DRBD from issuing explicit cache flushes for data and metadata
writes, and there is mention that this can put your data at risk if the
write cache is volatile - we're using battery-backed RAID controllers,
so I figure there's no issue.

disk {
    on-io-error detach;
    no-disk-flushes;
    no-md-flushes;
}
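
If you want to try it, add those two options on both nodes and apply
them to the running resource; something like this should do it (the
resource name is taken from your drbd.conf):

# on both nodes, after editing /etc/drbd.conf
drbdadm adjust drbd0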

Now we are synchronising between two nodes at about 120MB/s - full Gig-E
saturation. Tomorrow two 10GbE cards are arriving :)

We're also building a mail cluster, designed for around 200,000 users
and a starting capacity of 5TB - a little bigger than your project.

Mail me off-list if you want to share any strategies regarding
DRBD/failover/backup/filesystem, etc.

Regards
Lee.

> -----Original Message-----
> From: drbd-user-bounces at lists.linbit.com 
> [mailto:drbd-user-bounces at lists.linbit.com] On Behalf Of 
> William Francis
> Sent: 16 June 2008 21:48
> To: drbd-user at lists.linbit.com
> Subject: [DRBD-user] slow disk throughput
> 
> 
> 
> I'm trying to tune DRBD between two new computers with very fast
> RAID10 disk. I understand that there's a performance hit when using
> DRBD, but this seems unusually high from what I've read elsewhere.
> 
> My end usage is as a mail server serving about 100 people, which means
> lots of small reads and writes. I don't have a good test for that, so
> I'm using dd to test basic throughput, and I'm seeing only about 10-15%
> of the raw-disk performance on the DRBD partition, as measured with
> vmstat. When we tried to bring the mail server up, we very often saw
> iowait times of 90%+.
> 
> Raw throughput on the same RAID10 array, non-drbd partition:
> 
> time dd if=/dev/zero of=/tmp/delete/out.file bs=1M count=5000
> 
> root at d243:/opt/delete# vmstat 1
> procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
>  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
>  1  2    264 2458700  24396 1526716    0    0     8 169552 1141  195  1 35 41 23
>  1  2    264 2276340  24584 1703764    0    0    12 182552 1213  308  0 37 12 50
>  1  3    264 2117804  24752 1860468    0    0     8 177404 1109 1115  0 39 10 51
> 
> 
> Notice the 170K+ blocks/s of write throughput in the "bo" column.
> 
> The DRBD partition, with the same dd command:
> 
> procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
>  r  b   swpd   free   buff  cache   si   so    bi    bo    in    cs us sy id wa
>  1  1    304 157968  15292 3802460    0    0     0  8622  6641  8026  0  4 80 16
>  0  1    304 158256  15296 3802456    0    0     0  7208 10241 11711  0  6 30 64
>  0  0    304 159156  15024 3801788    0    0     4  9238  1293  1073  0 16 63 21
>  0  1    304 157912  15032 3803036    0    0     0 12273  8828 10401  0  6 86  8
>  0  1    304 159208  15044 3801588    0    0     0 12278  8964  9651  0  9 64 27
> 
> 
> Now it does about 8K-12K blocks per second, though it will do 25K for
> a little while before settling down to that speed.
> 
> The network link is Gig-E with ping times of about 0.060 ms (no
> "crossover" cable).
> 
> The filesystem is ext3 with 4K blocks. If I stop DRBD on the slave
> machine, I get about a 20% performance increase. Should I expect more?
> 
> Here's my drbd.conf - thanks for any ideas
> 
> Will
> 
> 
> 
> global {
>     usage-count yes;
> }
> common {
>   syncer { rate 100M; }
> }
> resource drbd0 {
>   protocol C;
>   handlers {
>     pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";
>     pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";
>     local-io-error "echo o > /proc/sysrq-trigger ; halt -f";
>     outdate-peer "/usr/sbin/drbd-peer-outdater";
>   }
>   startup {
>   }
>   disk {
>     on-io-error   detach;
>   }
>   net {
>     max-buffers     2048;
>     unplug-watermark   2048;
>     max-epoch-size  2048;
>     after-sb-0pri discard-younger-primary;
>     after-sb-1pri consensus;
>     after-sb-2pri disconnect;
>     rr-conflict disconnect;
>   }
>   syncer {
>     rate 50M;
>     al-extents 1009;
>   }
>   on d242 {
>     device     /dev/drbd0;
>     disk       /dev/sda3;
>     address    10.2.8.17:7788;
>     meta-disk  internal;
>   }
>   on d243 {
>     device    /dev/drbd0;
>     disk      /dev/sda3;
>     address   10.2.8.18:7788;
>     meta-disk internal;
>   }
> }
> 
> _______________________________________________
> drbd-user mailing list
> drbd-user at lists.linbit.com
> http://lists.linbit.com/mailman/listinfo/drbd-user
> 
