[DRBD-user] DRBD 9 Peack CPU load

Mats Ramnefors mats at ramnefors.com
Tue May 17 00:21:26 CEST 2016


I am testing DRBD 9 and 8.4 in simple two-node active/passive clusters with NFS.

Copying files from a third server to the NFS share using dd, I typically see an average of 20% CPU load (with v9) on the primary during transfers of larger files; I am testing with 0.5 and 2 GB files.
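For reference, the test workload looks roughly like this (a sketch; the mount point /mnt/nfs and the file size are assumptions, since the post does not give the exact command):

```shell
# Write a 2 GB test file to the NFS-mounted share with dd,
# flushing to disk at the end so the transfer is fully committed.
# /mnt/nfs is a hypothetical mount point for the exported share.
dd if=/dev/zero of=/mnt/nfs/testfile bs=1M count=2048 conv=fsync
```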

At the very end of the transfer the DRBD process briefly peaks at 70 - 100% CPU.
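One way to see which DRBD threads are responsible during the peak is to list per-thread CPU usage (a sketch; thread names such as drbd_s_&lt;resource&gt; vary by DRBD version and resource name):

```shell
# List per-thread CPU usage, highest first, and filter for DRBD
# kernel threads (prints nothing on a host without DRBD loaded).
ps -eLo tid,pcpu,comm --sort=-pcpu | grep -i drbd
```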

This occasionally causes Corosync to believe the node is down. Increasing the Corosync token timeout to 2000 ms fixes the symptom, but I am wondering about the root cause and any possible fixes?
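For reference, the token-timeout workaround goes in the totem section of /etc/corosync/corosync.conf (a sketch; other totem parameters omitted):

```
totem {
    version: 2
    # Raise the token-loss timeout from the Corosync 2.x default
    # of 1000 ms to 2000 ms, so a brief CPU stall on the DRBD node
    # does not trigger a false node-down event.
    token: 2000
}
```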

This is the DRBD configuration.

resource san_data {
  protocol C;
  meta-disk internal;
  device /dev/drbd1;
  disk /dev/nfs/share;
  net {
    verify-alg sha1;
    cram-hmac-alg sha1;
    shared-secret "****************";
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
    after-sb-2pri disconnect;
  }
  on san1 {
  }
  on san2 {
  }
}
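The on sections were cut off in the post; a complete resource normally lists each node's replication endpoint, e.g. (the IP addresses and port here are purely hypothetical, for illustration only):

```
  on san1 {
    # hypothetical replication address for node san1
    address 192.168.1.1:7789;
  }
  on san2 {
    # hypothetical replication address for node san2
    address 192.168.1.2:7789;
  }
```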

The nodes are two VMs on different ESXi hosts (Dell T620). The hosts are very lightly loaded. The network is 1 Gb at the moment, through a Catalyst switch, and does not appear to be saturated.

BTW, when can we expect a DRBD resource agent for v9? It took me a while to figure out why DRBD 9 was not working with Pacemaker and then to find a patch for the agent :)

Cheers Mats
