[DRBD-user] DRBD performance and heavy load

Mario Peschel mario at uni.de
Fri Feb 29 10:22:53 CET 2008


Hello,

we have two brand-new Dell PE860 servers (Quad-Core Xeon X3220, 2.4 GHz) 
with 4 GB of memory, 2x 250 GB SATA HDDs and a SAS 5iR RAID controller 
running RAID 1 on each. Both servers are connected with a gigabit 
crossover cable for DRBD replication.

We're using Debian Linux with kernel 2.6.18-fza-5-amd64 (for use with 
OpenVZ) and drbd 8.2.4.

We noticed heavy load while running some performance tests with bonnie++:

rod:/root# bonnie++ -d /vz/tmp -u 1000
Using uid:1000, gid:1000.
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
rod              8G  1790   2  4477   1  5257   1 65267  83 63655   7 151.5   0
                     ------Sequential Create------ --------Random Create--------
                      -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
               files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                  16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
rod,8G,1790,2,4477,1,5257,1,65267,83,63655,7,151.5,0,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
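To tell whether the slow writes come from DRBD replication or from the 
backing RAID 1 itself, a quick dd baseline against the raw disk helps for 
comparison. This is only a sketch: TESTDIR defaults to /tmp as a 
placeholder and should really point at a filesystem on the backing disk 
that is *not* on /dev/drbd0, so the number is comparable with the 
bonnie++ run on /vz/tmp.

```shell
# Rough sequential-write baseline that bypasses DRBD.
# TESTDIR is a placeholder; point it at a filesystem on the raw disk.
TESTDIR=${TESTDIR:-/tmp}
TESTFILE="$TESTDIR/ddtest.$$"

# Write 128 MiB sequentially; conv=fsync forces the data to disk before
# dd reports throughput, so the page cache does not inflate the number.
dd if=/dev/zero of="$TESTFILE" bs=1M count=128 conv=fsync 2>"$TESTFILE.log"
SIZE=$(wc -c < "$TESTFILE")
tail -n 1 "$TESTFILE.log"    # dd's summary line with the MB/s figure

rm -f "$TESTFILE" "$TESTFILE.log"
```

If dd against the backing disk is fast while writes through /dev/drbd0 
are not, the overhead is on the replication side: with protocol C every 
write has to be acknowledged by the peer's disk before it completes.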

Our configuration (drbd.conf) on both servers:

global {
   usage-count yes;
}
common {
   syncer { rate 40M; }
}
resource r0 {
   protocol C;
   handlers {
     pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";
     pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";
     local-io-error "echo o > /proc/sysrq-trigger ; halt -f";
     outdate-peer "/usr/lib/heartbeat/drbd-peer-outdater -t 5";
   }
   startup {
     degr-wfc-timeout 120;
   }
   disk {
     on-io-error   detach;
   }
   net {
     after-sb-0pri disconnect;
     after-sb-1pri disconnect;
     after-sb-2pri disconnect;
     rr-conflict disconnect;
   }
   syncer {
     rate 40M;
     al-extents 257;
   }
   on rod {
     device     /dev/drbd0;
     disk       /dev/sda3;
     address    192.168.1.2:7788;
     meta-disk  internal;
   }
   on todd {
     device    /dev/drbd0;
     disk      /dev/sda3;
     address   192.168.1.1:7788;
     meta-disk internal;
   }
}
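For what it's worth, the config above mostly uses the defaults, and a few 
of them are conservative for a dedicated gigabit link. The fragment below 
is a hypothetical tuning sketch: the option names exist in DRBD 8.x, but 
the values are illustrative only, and each change should be benchmarked 
rather than copied.

```
# Hypothetical tuning fragment (values illustrative, not recommendations):
resource r0 {
  net {
    max-buffers     8000;   # more in-flight buffers for the data socket
    max-epoch-size  8000;   # allow larger write bursts per epoch
    sndbuf-size     512k;   # bigger TCP send buffer on the replication link
  }
  syncer {
    al-extents 1801;        # larger activity log, fewer metadata updates
  }
}
```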

This is very slow. Does anyone have an idea why, or what we can try to 
boost performance?

Thanks,

Mario


