[DRBD-user] performance issues with DRBD :(

Jakov Sosic jakov.sosic at srce.hr
Wed Dec 16 03:48:10 CET 2009


On Thu, 2009-12-10 at 19:11 +0100, Lars Ellenberg wrote:

> Then maybe you are simply looking in the wrong direction.
> exactly same hardware?
> exactly same kernel and stuff?
> io scheduler? -> use deadline.
> otherwise: we sell "DRBD Healthchecks" ;)

Hello to all!

I've found the bottlenecks, so let me start over.

I was using a 3ware controller with RAID10 over 4 SATA Barracudas. I
googled around and found that 3ware RAID performance is really poor for a
controller of that class. I did have the write cache turned on, although
I didn't have a BBU.

So, first step first - I switched from hardware RAID 10 to Linux
software RAID 10. Performance increased noticeably: almost 30% on all
bonnie++ tests, and in one test even 400%. That was it - I was
convinced. I've reconfigured both my DRBD nodes to software RAID-10.
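For anyone wanting to reproduce the switch, it goes roughly like this (device names, chunk size and mount point are illustrative, not from my setup, and the commands need root and spare disks, so treat this as a sketch):

```shell
# Build a 4-disk Linux software RAID-10 array (device names hypothetical)
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# Use the deadline elevator, as Lars suggested earlier in the thread
echo deadline > /sys/block/sda/queue/scheduler

# Benchmark with bonnie++ against a filesystem on the new array
bonnie++ -d /mnt/md0 -u nobody
```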

Next, I noticed that when DRBD is disconnected, my performance
doubles. So I started to investigate why. The problem turned out to be
that DRBD replication was going through the same 4 round-robin bonded
interfaces as the iSCSI export. So I split those 4 interfaces into two
bonds, both round-robin: one for iSCSI, the other for DRBD replication
only. Now I get nearly the same performance as when DRBD is
disconnected, with only a minor decrease (~5-10%).
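The split looks roughly like this (interface names and addresses are illustrative, not my actual config; this is a sketch, not something to paste as-is):

```shell
# Split four NICs into two round-robin (balance-rr) bonds
modprobe bonding mode=balance-rr miimon=100 max_bonds=2

ifenslave bond0 eth0 eth1    # bond0: iSCSI export traffic
ifenslave bond1 eth2 eth3    # bond1: DRBD replication only

ip addr add 10.0.0.1/24 dev bond0
ip addr add 10.0.1.1/24 dev bond1
# ...then point the DRBD resource's "address" lines at the 10.0.1.x
# network so replication stays off the iSCSI bond
```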

One thing still bothers me - it seems I cannot utilize my disks 100%
over the network, no matter what I do with them or how hard I thrash
them. I would like to find the next bottleneck, although the results I
get in a Xen domU are now fantastic: 4x faster than with hardware RAID
and DRBD replication on the same bond as iSCSI. Also, I previously had
RAID6 on this crappy controller, so this is roughly a 40x performance
increase :) But still, having come this far, I want to go even
further... I would be really happy if I could thrash the disks to the
maximum, if that's possible. Then again, Xen over (C)LVM over iSCSI over
DRBD over mdraid is a lot of layers, and any of them could be causing
the slowdown (although I suspect iSCSI).
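A quick back-of-envelope check hints the network itself may be the ceiling. Assuming ~100 MB/s streaming per Barracuda (an assumption, not measured) and two GigE links per bond:

```shell
# Hypothetical figures: 4 disks at ~100 MB/s each in RAID-10,
# behind a round-robin bond of 2 GigE links.
DISK_MBS=100
NET_MBS=$((2 * 1000 / 8))       # 2 x 1 Gbit/s links, 8 bits per byte
RAID10_READ=$((4 * DISK_MBS))   # reads can hit all 4 spindles
RAID10_WRITE=$((2 * DISK_MBS))  # each write goes to both mirror halves
echo "bond ceiling:  ${NET_MBS} MB/s"
echo "raid10 read:   ${RAID10_READ} MB/s"
echo "raid10 write:  ${RAID10_WRITE} MB/s"
```

Under those assumptions sequential reads (~400 MB/s) already exceed what a 2-link bond can carry (~250 MB/s before TCP overhead), so the disks can never be driven to 100% from across the network.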

|    Jakov Sosic    |    ICQ: 28410271    |   PGP: 0x965CAE2D   |
|                                                               |
