[DRBD-user] DRBD write speed

Maxim Ianoglo dotnox at gmail.com
Sun May 8 19:46:18 CEST 2011

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hello,

I am getting some "strange" results when testing write speed on the DRBD Primary.
Every time, more data gets written than a 1Gb link can handle: I get about
133 MB/s with the 1Gb link saturated and both nodes in sync.
Also, if I run a test with files smaller than 1GB (for example 900MB), I always
get results between 650 and 750 MB/s, and it does not matter which replication
protocol I use in DRBD.
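
By "in sync" I mean the connection and disk state reported by DRBD itself
(a minimal check; the exact version line depends on the setup):

  # "in sync" means the resource line shows something like:
  #   cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate
  cat /proc/drbd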

Does this have something to do with the DRBD implementation? Buffers or something similar?
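
For reference, the tests look roughly like this (a sketch; the mount point
and sizes are examples, not my exact commands):

  # Cached write: dd reports the speed of the page cache, which can
  # exceed both the 1Gb link and the disks.
  dd if=/dev/zero of=/mnt/drbd0/test.img bs=1M count=900

  # Direct or synced variants: bypass or drain the cache to measure
  # what DRBD and the disks actually sustain.
  dd if=/dev/zero of=/mnt/drbd0/test.img bs=1M count=900 oflag=direct
  dd if=/dev/zero of=/mnt/drbd0/test.img bs=1M count=900 conv=fsync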

Here is my configuration file:
global { 
  usage-count no; 
}
resource repdata {
  protocol B;

  startup { 
    wfc-timeout 0; 
    degr-wfc-timeout 100; 
  }
  disk { 
    on-io-error detach; 
    # barriers and flushes disabled: safe only with a battery-backed
    # controller cache, and lets the controller cache absorb writes
    no-disk-barrier;
    no-disk-flushes;
    no-md-flushes;
  }
  net {
    max-buffers 20000;
    max-epoch-size 20000;
    unplug-watermark 1024;
    sndbuf-size 0;
    cram-hmac-alg "sha1"; 
    shared-secret "secret"; 
    data-integrity-alg "crc32c"; 
  }
  syncer {
    rate 35M;   # caps background resync only, not normal replication
    verify-alg "crc32c";
    csums-alg "crc32c";
  }
  on node1 {
    device /dev/drbd0;
    disk /dev/sda9;
    address 10.10.1.141:7788;
    meta-disk internal;
  }
  on node2 {
    device /dev/drbd0;
    disk /dev/sda8;
    address 10.10.1.142:7788;
    meta-disk internal;
  }
}
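
(When I say the protocol does not matter: switching between tests is just
this one line; my understanding of the acknowledgement semantics is in the
comments.)

  # protocol A;   # asynchronous: ack after local write + local TCP send buffer
  # protocol B;   # memory-synchronous: ack once the peer holds the data in RAM
  # protocol C;   # synchronous: ack only after the peer's disk write completes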

OS: CentOS 5.6 x86_64
RAID controller H700 with 1GB cache.
--
Maxim Ianoglo


