[DRBD-user] DRBD9 slow performance

Rudolf Kasper rkasper at sonog.de
Mon Mar 21 22:13:36 CET 2016


I've got a third setup running with DRBD 9 right now, and I'm wondering
why performance in this new version of DRBD is so slow. I see a lot of
traffic going on, and then suddenly the traffic slows down. drbdadm
status then shows "congested:yes" on the secondary and "blocked:lower"
on the primary device. The network is a gigabit network, and with the
same servers and DRBD 8.3 I was able to get at least 80 MB/s out of it.
When the status is like this, performance drops to around 2 MB/s for
around 2-5 minutes, then it speeds up for about 20 seconds and slows
down again.
I also tested it with a crosslink connection.
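
The "congested:yes" state usually means the replication send buffer is filling faster than the peer drains it. The net-section options below are the ones commonly tuned for that; the values here are only illustrative guesses for this kind of setup, not tested recommendations (note that on-congestion/congestion-fill only apply with protocol A, not the protocol C used here):

```
net {
        # Illustrative values only -- tune for your own hardware.
        sndbuf-size    0;      # 0 = let the kernel auto-tune the send buffer
        max-buffers    8000;
        max-epoch-size 8000;

        # Only meaningful with protocol A:
        # on-congestion      pull-ahead;
        # congestion-fill    1M;
        # congestion-extents 1000;
}
```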

The setup is quite normal, and nothing has been tweaked yet.

I couldn't find the bottleneck.
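
To narrow the bottleneck down, it may help to measure the disk and the network separately, outside of DRBD. A rough sketch (paths and host names are placeholders for this setup):

```shell
# Illustrative checks only -- adjust paths/hosts for your environment.

# 1) Sequential write speed of local storage
#    (writes a temp file with an fsync at the end, then removes it):
dd if=/dev/zero of=/tmp/ddtest bs=1M count=256 conv=fsync 2>&1 | tail -n 1
rm -f /tmp/ddtest

# 2) Raw TCP throughput over the replication link
#    (requires iperf3 installed on both nodes):
#    on node2:  iperf3 -s
#    on node1:  iperf3 -c node2 -t 10
```

If dd alone is much faster than DRBD, and iperf3 shows a clean ~940 Mbit/s, the problem is more likely in the interaction (buffering, flushes) than in either component on its own.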

global {
         #usage-count yes;
         # minor-count dialog-refresh disable-ip-verification
         # cmd-timeout-short 5; cmd-timeout-medium 121; cmd-timeout-long
}

common {
         handlers {
                 # These are EXAMPLE handlers only.
                 # They may have severe implications,
                 # like hard resetting the node under certain
                 # Be careful when chosing your poison.

                 # pri-on-incon-degr /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                 # pri-lost-after-sb /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                 # local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
                 # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
                 # split-brain "/usr/lib/drbd/notify-split-brain.sh
                 # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh
                 # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
                 # after-resync-target
         }

         startup {
                 wfc-timeout 20;
                 degr-wfc-timeout 20;
                 # outdated-wfc-timeout wait-after-sb
         }

         options {
                 # cpu-mask on-no-data-accessible
         }

         disk {
                 # size on-io-error fencing disk-barrier disk-flushes
                 # disk-drain md-flushes resync-rate resync-after
                 # c-plan-ahead c-delay-target c-fill-target c-max-rate
                 # c-min-rate disk-timeout
         }

         net {
                 # protocol timeout max-epoch-size max-buffers
                 # connect-int ping-int sndbuf-size rcvbuf-size ko-count
                 # allow-two-primaries cram-hmac-alg shared-secret
                 # after-sb-1pri after-sb-2pri always-asbp rr-conflict
                 # ping-timeout data-integrity-alg tcp-cork on-congestion
                 # congestion-fill congestion-extents csums-alg
                 # use-rle
                 protocol C;
         }
}
resource r0 {
       net {
               cram-hmac-alg sha1;
               shared-secret "Cwzvaerfyficjdh6";
       }
       volume 0 {
               device    /dev/drbd0;
               disk      /dev/sda3;
               meta-disk internal;
       }
       on node1 {
               node-id   0;
       }
       on node2 {
               node-id   1;
       }
       connection {
               host      node1 port 7000;
               host      node2 port 7000;
               net {
                         protocol C;
               }
       }
}
It would be nice if anyone could help me with this issue.

kind regards
