[DRBD-user] Performance issues / noticeable DRBD overhead

Lars Ellenberg lars.ellenberg at linbit.com
Thu Jul 16 18:04:24 CEST 2009



On Thu, Jul 16, 2009 at 10:35:48AM +0200, Marco Fischer wrote:
> Hi Folks!
> 
> Has anyone measured the DRBD overhead?
> 
> I have two serious Sun servers syncing a 50 GB partition via DRBD with
> protocol C, in an active/active configuration.
> The servers have a dedicated 1000 Mbit interface for DRBD, connected via a
> crossover cable.
> The synced volume carries an OCFS2 filesystem, which also uses the dedicated link.
> The sync speed is limited to 100M, so there is a little bandwidth left for
> the OCFS2 locking traffic.

the drbd "sync-rate" is a throttle for resynchronization only
(after having been disconnected, and thus degraded, for some time;
 think "raid rebuild").

it has absolutely no effect on live replication ("normal operation").
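
in drbd.conf, that throttle is the rate setting in the syncer section.
just as an illustration (the resource name and value are made up, not a
recommendation for your setup):

    resource r0 {
      syncer {
        rate 100M;   # caps background resync bandwidth after a disconnect;
                     # it does not limit normal protocol C replication traffic
      }
      # device/disk/net/address settings omitted
    }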

> One of the two nodes is my "application-active" node, running Apache and
> MySQL on the DRBD volume; the other node is my "application-passive" node,
> which will be activated if the first one goes down.
> 
> Yesterday I had to benchmark the volume, because I had some performance
> issues.
> 
> I ran 3 tests with the UnixBench fs benchmarks

hm.
I prefer simple dd ;)
more control.
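
something like this, for example (the test file path is just a placeholder;
pick oflag/iflag depending on whether you want to bypass the page cache):

    # sequential write, bypassing the page cache
    dd if=/dev/zero of=/mnt/ocfs2/ddtest bs=1M count=1024 oflag=direct
    # sequential write through the cache, flushed at the end
    dd if=/dev/zero of=/mnt/ocfs2/ddtest bs=1M count=1024 conv=fsync
    # sequential read; drop caches first, or use iflag=direct
    dd if=/mnt/ocfs2/ddtest of=/dev/null bs=1M iflag=direct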

> 
> A. on the DRBD volume as it is, replication with protocol C active
> B. on the DRBD volume, replication stopped (other node shut down)
> C. on a physical ext3 partition, to test the subsystem's performance

are you comparing OCFS2 vs ext3 results here?

> My (selected) results are:
> 
>   Results
> ===========
> File Read 1024 bufsize, 2000 blocks
> -----------------------------------
>    1 thread      4 threads
> A:  72.10 MB/s    69.73 MB/s
> B:  73.84 MB/s    77.62 MB/s
> C: 102.15 MB/s   231.92 MB/s

badly tuned system, I'd say.
usually we see negligible overhead on reads,

> File Write 1024 bufsize, 2000 blocks
> ------------------------------------
>    1 thread      4 threads
> A:  16.48 MB/s    11.62 MB/s
> B:  17.65 MB/s    12.70 MB/s
> C:  46.19 MB/s    28.08 MB/s

and about 5 to 10% overhead on writes
(unless you have a bottleneck).
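
if you want to rule out tuning as the cause, the usual knobs in drbd.conf
look something like this (8.0 syntax; the values are common starting points
to experiment with, not tested recommendations for your hardware):

    resource r0 {
      net {
        max-buffers     8000;   # more receive buffers on the peer
        max-epoch-size  8000;   # allow larger write epochs between barriers
        sndbuf-size     512k;   # larger TCP send buffer on the replication link
      }
      syncer {
        al-extents 3389;        # bigger activity log, fewer metadata updates
      }
    }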

> File Read 4096 bufsize, 8000 blocks
> -----------------------------------
>    1 thread      4 threads
> A: 177.32 MB/s   307.13 MB/s
> B: 178.84 MB/s   301.32 MB/s
> C: 198.46 MB/s   469.85 MB/s
> 
> File Write 4096 bufsize, 8000 blocks
> ------------------------------------
>    1 thread      4 threads
> A:  57.69 MB/s    43.69 MB/s
> B:  63.44 MB/s    45.38 MB/s
> C: 161.90 MB/s    90.30 MB/s
> 
> 
> As you can see, the replication with protocol C has no noticeable overhead,
> maybe 7 or 8% compared to standalone DRBD.
> 
> But I think that the DRBD subsystem has a serious read overhead of 40%
> and a write overhead of around 150% compared to non-DRBD volumes.
> 
> Does anybody else see performance results like mine?
> 
> I use:
> DRBD 8.0.14 as shipped with Debian Lenny
> OCFS2 1.4.1 as shipped with Debian Lenny
> 
> Two dual-Opteron 2216 Sun servers with 73 GB 15K SAS disks in a RAID 1
> configuration
> 
> Kind regards
> Marco Fischer


-- 
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
__
please don't Cc me, but send to list   --   I'm subscribed


