[DRBD-user] drbd performance

joe_p joep at limelightnetworks.com
Fri Dec 8 23:22:00 CET 2006


I hit a snag and was hoping for some help.  

I have two Gentoo machines with drbd running on them.
Dedicated GbE links with a crossover cable and jumbo frames enabled.
The drives are Fujitsu 147GB SAS (15k RPM). The RAID setup is RAID10 using
md software RAID: seven RAID1 md devices (md0-md6) were set up, then md7 was
created to stripe the data across the mirrors.  We are using an LSI

When I run the test with the following command on the primary machine,
with DRBD running on both machines, I get very good results:
dm -x -a 0 -s 4g -b 20m -m -y -p -o  /db/tmp/hello
53.97 MB/sec (4294967296 B / 01:15.895806)
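(For anyone without the dm tool: a rough equivalent of the sequential-write test above can be sketched in Python. The function name and default sizes below are mine, not from the post; the defaults mirror the 4 GiB file and 20 MB block size of the dm run, and should be scaled down for a quick smoke test.)

```python
import os
import time

def write_throughput(path, total_bytes=4 * 1024**3, block=20 * 1024**2):
    """Write total_bytes to path in block-sized chunks and return MB/sec.

    Illustrative sketch only; sizes default to the 4g/20m used in the
    dm benchmark quoted above.
    """
    buf = b"\0" * block
    t0 = time.monotonic()
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    try:
        written = 0
        while written < total_bytes:
            written += os.write(fd, buf)
        os.fsync(fd)  # include flush time so the cache doesn't hide the cost
    finally:
        os.close(fd)
    elapsed = time.monotonic() - t0
    return written / elapsed / 1e6
```

Run it against a file on the DRBD-backed filesystem (e.g. /db/tmp/hello) to compare numbers with and without the secondary connected.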

When I turn on the MySQL database and start replication with DRBD off on the
secondary machine, everything runs well.
When I turn on DRBD on the secondary machine, I see latency/hang problems.

When I saw the latency/hang, I ran a very simple vi test on a file on the
DRBD raid, and as you will see, it hangs.
When the process hangs, you can see it stuck in the "D" state while
running iostats, and the process hits timeouts during fsync().
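(To put numbers on those fsync() stalls, a small probe like the following can help; the function name and defaults are mine, not part of the original report. It times repeated fsync() calls on a file, which is essentially what vi's save was blocking on.)

```python
import os
import time

def fsync_latencies(path, rounds=20, payload=b"x" * 4096):
    """Append payload and fsync() `rounds` times; return per-call latencies.

    Hypothetical probe: large outliers here would correspond to the
    hangs seen during the vi test on the DRBD device.
    """
    latencies = []
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o600)
    try:
        for _ in range(rounds):
            os.write(fd, payload)
            t0 = time.monotonic()
            os.fsync(fd)  # the call that was timing out in the D-state hangs
            latencies.append(time.monotonic() - t0)
    finally:
        os.close(fd)
    return latencies
```

Pointing it at a file on the DRBD mount with the secondary connected, then disconnected, should show whether the stalls track the replication link.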

drbd.conf -
 resource database {
   protocol A;
   incon-degr-cmd "exit 1";
   startup {
     degr-wfc-timeout 900;
   }
   on db1 {
     device    /dev/drbd0;
     disk      /dev/vg1/db;
     meta-disk /dev/vg1/db[0];
   }
   on db2 {
     device    /dev/drbd0;
     disk      /dev/vg1/db;
     meta-disk /dev/vg1/db[0];
   }
   syncer {
     rate 100000;
     al-extents 3833;
   }
 }
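(Not part of the original config, but for a setup like this the net section is where replication buffering is usually tuned; a sketch, with illustrative values only, assuming the option names from the drbd 0.7-era man pages:)

```
   net {
     sndbuf-size    512k;
     max-buffers    2048;
     max-epoch-size 2048;
   }
```

Whether these help here is an open question; they only bound how much data DRBD can have in flight to the secondary, which matters when fsync() stalls track the replication link.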
