[DRBD-user] Performance issues / noticeable DRBD overhead

Marco Fischer MFischer at brainbits.net
Thu Jul 16 10:35:48 CEST 2009



Hi Folks!

Has anyone measured the DRBD overhead?

I have two Sun servers syncing a 50 GB partition via DRBD with
protocol C, in an active/active configuration.
The servers have a dedicated 1000 Mbit interface for DRBD, connected
via a crossover cable.
The synced volume carries an OCFS2 filesystem, which uses the dedicated
link as well.
The sync speed is limited to 100M, so there is a little bandwidth left
for the OCFS2 locking traffic.
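For reference, a setup like the one described above might look roughly like
this in drbd.conf (DRBD 8.0 syntax; the hostnames, backing devices, and
addresses below are made up, not taken from the actual cluster):

```
resource r0 {
  protocol C;                  # synchronous replication, as described above

  syncer {
    rate 100M;                 # cap resync bandwidth on the dedicated link
  }

  net {
    allow-two-primaries;       # needed for active/active with OCFS2
  }

  on node1 {                   # hypothetical hostname
    device    /dev/drbd0;
    disk      /dev/sda3;       # hypothetical backing partition
    address   10.0.0.1:7788;   # hypothetical crossover-link address
    meta-disk internal;
  }

  on node2 {
    device    /dev/drbd0;
    disk      /dev/sda3;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
```

One caveat: the syncer rate only throttles resynchronization traffic, not
normal ongoing protocol C replication, so it does not by itself reserve
bandwidth for OCFS2 locking during regular operation.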

One of the two nodes is my "application-active" node, running Apache
and MySQL from the DRBD volume; the other is my "application-passive"
node, which is activated if the first one goes down.

Yesterday I benchmarked the volume, because I was seeing some
performance issues.

I ran three tests with UnixBench's filesystem suite:

A. on the DRBD volume as-is, with protocol C replication active
B. on the DRBD volume with replication stopped (other node shut down)
C. on a physical ext3 partition, to measure the subsystem's baseline performance

My (selected) results are:

  Results
===========
File Read 1024 bufsize, 2000 blocks
-----------------------------------
   1 thread      4 threads
A:  72.10 MB/s    69.73 MB/s
B:  73.84 MB/s    77.62 MB/s
C: 102.15 MB/s   231.92 MB/s


File Write 1024 bufsize, 2000 blocks
------------------------------------
   1 thread      4 threads
A:  16.48 MB/s    11.62 MB/s
B:  17.65 MB/s    12.70 MB/s
C:  46.19 MB/s    28.08 MB/s


File Read 4096 bufsize, 8000 blocks
-----------------------------------
   1 thread      4 threads
A: 177.32 MB/s   307.13 MB/s
B: 178.84 MB/s   301.32 MB/s
C: 198.46 MB/s   469.85 MB/s


File Write 4096 bufsize, 8000 blocks
------------------------------------
   1 thread      4 threads
A:  57.69 MB/s    43.69 MB/s
B:  63.44 MB/s    45.38 MB/s
C: 161.90 MB/s    90.30 MB/s


As you can see, replication with protocol C adds only a small overhead,
maybe 7 or 8% compared to standalone DRBD.

But I think the DRBD subsystem itself has a serious read overhead of
roughly 40%, and a write overhead of around 150%, compared to a
non-DRBD volume.
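Those overhead figures can be recomputed from the table; a small sketch
(variable names are mine, the MB/s values are the 1024-bufsize,
single-thread results from above):

```python
# Recompute DRBD overhead from the single-thread, 1024-bufsize results.
# A = DRBD volume with protocol C replication, C = plain ext3 partition.
read_a, read_c = 72.10, 102.15    # File Read, MB/s
write_a, write_c = 16.48, 46.19   # File Write, MB/s

def overhead_pct(plain: float, drbd: float) -> float:
    """How much faster the plain volume is than the DRBD one, in percent."""
    return (plain / drbd - 1) * 100

print(f"read overhead:  {overhead_pct(read_c, read_a):.0f}%")    # prints 42%
print(f"write overhead: {overhead_pct(write_c, write_a):.0f}%")  # prints 180%
```

The single-thread write penalty works out to about 180%; the 4-thread
write results (28.08 vs. 11.62 MB/s) give about 142%, which is where
the "around 150%" estimate comes from.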

Does anybody else see performance results like mine?

I use:
DRBD 8.0.14 as shipped with Debian Lenny
OCFS2 1.4.1 as shipped with Debian Lenny

Two dual-Opteron 2216 Sun servers with 73 GB 15K SAS disks in a RAID 1
configuration

Kind regards
Marco Fischer

