[DRBD-user] Performance issue about DRBD transport

Mark Wu wudx05 at gmail.com
Fri Feb 26 02:38:05 CET 2016

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hi guys,

I am testing the performance of the DRBD transport added in DRBD9. The test
results look problematic: the DRBD transport is even slower than the iSCSI
transport.

Here are the results of a 4k random write test with fio on the SSD:

                Raw disk   DRBD8 Local   DRBD9 Local   DRBD Transport   iSCSI Transport with DRBD9
IOPS               79756         55549         28624            16343                        33435
Latency (us)      199.65        286.36        138.43           977.73                       463.08
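(A quick consistency check, assuming the latencies are fio's average completion
latencies: at a fixed queue depth, latency ≈ iodepth / IOPS. For the DRBD
transport, 16 / 16343 IOPS ≈ 979 us, which matches the 977.73 us above.)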

Please note that "DRBD8 Local" was tested with a tuned configuration, while
all DRBD9 tests used the configurations generated by drbdmanage.

I assumed the DRBD transport would perform better than the iSCSI transport,
so I don't understand why it is even slower. Any suggestions for tuning the
DRBD transport? Thanks!


My test setup and configuration are as follows:

Disk: Samsung SSD 845DC EVO 480GB
NIC:  10GbE
OS Kernel: 3.10.0-327.10.1.el7
DRBD9 kernel module: 9.0.1-1 (GIT-hash:
f57acfc22d29a95697e683fb6bbacd9a1ad4348e)
drbdmanage-0.92
drbd-utils-8.9.6
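
For completeness, the versions above can be read off a running node with the
standard tooling (a sketch; /proc/drbd prints the loaded module version and
GIT-hash):

[root@drbd1 ~]# cat /proc/drbd        # loaded kernel module version and GIT-hash
[root@drbd1 ~]# drbdadm --version     # drbd-utils version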

The tuned DRBD8 configuration was copied from
http://www.linbit.com/en/resources/technical-publications/101-performance/545-intel-sata-ssd-testing-with-drbd-2
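
Roughly, that style of tuning boils down to raising the buffer and
activity-log limits and disabling flushes; a minimal sketch in drbd.conf
syntax, assuming power-loss-protected storage (the concrete values mirror the
drbdsetup output below, not necessarily the article's):

resource r0 {
    net {
        max-buffers     81920;    # more in-flight buffers on the data socket
        max-epoch-size  20000;    # larger write epochs between barriers
        sndbuf-size     524288;   # TCP send buffer; 0 lets the kernel autotune
    }
    disk {
        al-extents      6433;     # bigger activity log, fewer metadata updates
        disk-flushes    no;       # only safe with battery/capacitor-backed disks
        md-flushes      no;
    }
}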

DRBD9 volume configuration (nodes drbd1 and drbd2 provide the storage for
volume vol2; drbd3 is the diskless client):

[root@drbd1 ~]# drbdsetup show vol2
resource vol2 {
    _this_host {
        node-id 0;
        volume 0 {
            device minor 102;
            disk "/dev/drbdpool/vol2_00";
            meta-disk internal;
            disk {
                size             39062500s; # bytes
                disk-flushes     no;
                md-flushes       no;
                al-extents       6433;
                al-updates       no;
                read-balancing   least-pending;
            }
        }
    }
    connection {
        _peer_node_id 1;
        path {
            _this_host ipv4 192.168.253.131:7001;
            _remote_host ipv4 192.168.253.132:7001;
        }
        net {
            max-epoch-size   20000;
            sndbuf-size      524288; # bytes
            cram-hmac-alg    "sha1";
            shared-secret    "q9Ku9G/Z/fhG1b3aemcD";
            verify-alg       "sha1";
            max-buffers      81920;
            _name            "drbd2";
        }
        volume 0 {
            disk {
                resync-rate      409600k; # bytes/second
                c-plan-ahead     10; # 1/10 seconds
                c-fill-target    88s; # bytes
                c-max-rate       614400k; # bytes/second
                c-min-rate       10240k; # bytes/second
            }
        }
    }
}


[root@drbd3 ~]# drbdsetup show vol2
resource vol2 {
    _this_host {
        node-id 2;
        volume 0 {
            device minor 102;
        }
    }
    connection {
        _peer_node_id 0;
        path {
            _this_host ipv4 192.168.253.133:7001;
            _remote_host ipv4 192.168.253.131:7001;
        }
        net {
            cram-hmac-alg    "sha1";
            shared-secret    "q9Ku9G/Z/fhG1b3aemcD";
            _name            "drbd1";
        }
    }
    connection {
        _peer_node_id 1;
        path {
            _this_host ipv4 192.168.253.133:7001;
            _remote_host ipv4 192.168.253.132:7001;
        }
        net {
            cram-hmac-alg    "sha1";
            shared-secret    "q9Ku9G/Z/fhG1b3aemcD";
            _name            "drbd2";
        }
    }
}

The iSCSI target is exported on drbd2 via LIO. I switched the primary from
drbd3 to drbd2 when testing the iSCSI transport.
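
For reference, the role switch is just the usual drbdadm demote/promote pair
(a sketch, assuming no other node holds the primary role at the time):

[root@drbd3 ~]# drbdadm secondary vol2   # demote the diskless client
[root@drbd2 ~]# drbdadm primary vol2     # promote the node exporting the iSCSI target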



fio job file:
[global]
bs=4k
ioengine=libaio
iodepth=16
size=10g
direct=1
runtime=300
directory=/mnt/ssd1
filename=ssd.test.file

[rand-write]
rw=randwrite
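
The same job file was used for every column in the table; the invocation is
simply (assuming it is saved as rand-write.fio; the file name is arbitrary):

[root@drbd3 ~]# fio rand-write.fio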