[DRBD-user] DRBD performance and heavy load

Mario Peschel mario at uni.de
Mon Mar 3 00:27:52 CET 2008

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hi Sam,

This is the result of our dd test:

# dd if=/dev/zero bs=4096 count=10000 of=/tmp/testfile oflag=dsync
10000+0 records in
10000+0 records out
40960000 bytes (41 MB) copied, 263.415 seconds, 155 kB/s
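
That works out to 263.415 s / 10000 writes = about 26 ms per 4 KiB
synchronous write; it looks as if every single write has to wait for the
platter, i.e. as if the disks' write cache were disabled behind the
controller.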

Very slow. :-( But it shows that DRBD is not the cause of this poor 
performance.

We also got two new Dell R200 servers with the newer SAS 6iR RAID 
controller; these are the results:

# dd if=/dev/zero bs=4096 count=10000 of=/tmp/testfile oflag=dsync
10000+0 records in
10000+0 records out
40960000 bytes (41 MB) copied, 8.22052 seconds, 5.0 MB/s
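
Here it is 8.22 s / 10000 writes = about 0.8 ms per write, the kind of
number you get when a write cache is absorbing the synchronous writes.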

Looks like the newer controller works better under Linux.

But do you have any idea how to get the 5iR controller working a bit faster?
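
If the 5iR really is running the disks with their write cache off, it
might be visible from the OS. Assuming the volume shows up as /dev/sda,
that sdparm is installed, and that the controller passes the SCSI
caching mode page through (none of which we have verified), the WCE bit
can be read with:

# sdparm --get=WCE /dev/sda

If that reports 0, "sdparm --set=WCE /dev/sda" (or "hdparm -W1 /dev/sda"
on plain SATA) might enable it, though without a battery-backed cache
this trades data safety on power loss for speed.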

Mario

Sam wrote:
> Hi Mario,
> 
> I think the problem might be with the SAS 5iR. I see similarly poor
> performance for small writes on a Dell SC1435 with the same
> controller.
> 
> Try the following test and let me know your result (oflag=dsync forces
> every write to stable storage, so it measures synchronous write latency
> rather than cache throughput):
> 
> dd if=/dev/zero bs=4096 count=10000 of=/some/file/on/your/raid/disk oflag=dsync
> 
> Sam
> 
> On Fri, Feb 29, 2008 at 1:22 AM, Mario Peschel <mario at uni.de> wrote:
>> Hello,
>>
>>  We have two brand new Dell PowerEdge 860 servers (Quad-Core Xeon X3220,
>>  2.4 GHz) with 4 GB memory, 2x 250 GB SATA HDDs and a SAS 5iR RAID
>>  controller running RAID1 each. Both servers are connected with a gigabit
>>  crossover cable for DRBD replication.
>>
>>  We're using Debian Linux with kernel 2.6.18-fza-5-amd64 (for use with
>>  OpenVZ) and DRBD 8.2.4.
>>
>>  We noticed a heavy load when running some performance tests with bonnie++:
>>
>>  rod:/root# bonnie++ -d /vz/tmp -u 1000
>>  Using uid:1000, gid:1000.
>>  Writing with putc()...done
>>  Writing intelligently...done
>>  Rewriting...done
>>  Reading with getc()...done
>>  Reading intelligently...done
>>  start 'em...done...done...done...
>>  Create files in sequential order...done.
>>  Stat files in sequential order...done.
>>  Delete files in sequential order...done.
>>  Create files in random order...done.
>>  Stat files in random order...done.
>>  Delete files in random order...done.
>>  Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
>>                      -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
>>  Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
>>  rod              8G  1790   2  4477   1  5257   1 65267  83 63655   7 151.5   0
>>                      ------Sequential Create------ --------Random Create--------
>>                      -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>>                files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>>                   16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
>>  rod,8G,1790,2,4477,1,5257,1,65267,83,63655,7,151.5,0,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
>>
>>  Our configuration (drbd.conf) on both servers:
>>
>>  global {
>>    usage-count yes;
>>  }
>>  common {
>>    syncer { rate 40M; }
>>  }
>>  resource r0 {
>>    protocol C;
>>    handlers {
>>      pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";
>>      pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";
>>      local-io-error "echo o > /proc/sysrq-trigger ; halt -f";
>>      outdate-peer "/usr/lib/heartbeat/drbd-peer-outdater -t 5";
>>    }
>>    startup {
>>      degr-wfc-timeout 120;
>>    }
>>    disk {
>>      on-io-error   detach;
>>    }
>>    net {
>>      after-sb-0pri disconnect;
>>      after-sb-1pri disconnect;
>>      after-sb-2pri disconnect;
>>      rr-conflict disconnect;
>>    }
>>    syncer {
>>      rate 40M;
>>      al-extents 257;
>>    }
>>    on rod {
>>      device     /dev/drbd0;
>>      disk       /dev/sda3;
>>      address    192.168.1.2:7788;
>>      meta-disk  internal;
>>    }
>>    on todd {
>>      device    /dev/drbd0;
>>      disk      /dev/sda3;
>>      address   192.168.1.1:7788;
>>      meta-disk internal;
>>    }
>>  }
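>>
>>  (To rule out an ongoing resync skewing the numbers, the connection and
>>  disk states can be checked on both nodes while the tests run:
>>
>>  # cat /proc/drbd
>>
>>  Both sides should show cs:Connected and ds:UpToDate/UpToDate.)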
>>
>>  This is very slow: block writes at about 4.4 MB/s against block reads of
>>  about 64 MB/s. Does anyone have an idea why, or what we can try to boost
>>  the performance?
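>>
>>  (One comparison that should isolate DRBD's share of the overhead, assuming
>>  /vz sits on /dev/drbd0 and /tmp on a plain partition of the same disks:
>>
>>  # dd if=/dev/zero bs=4096 count=10000 of=/vz/tmp/testfile oflag=dsync
>>  # dd if=/dev/zero bs=4096 count=10000 of=/tmp/testfile oflag=dsync
>>
>>  If both are about equally slow, the bottleneck is below DRBD.)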
>>
>>  Thanks,
>>
>>  Mario
