[DRBD-user] DRBD deadly slow

Lars Ellenberg lars.ellenberg at linbit.com
Thu Nov 13 16:59:54 CET 2008

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


On Thu, Nov 13, 2008 at 04:13:42PM +0100, Felix Zachlod wrote:
> Hello Group!
> 
> I have been trying to set up a drbd configuration with two storage servers
> running under rPath Linux (Openfiler). The kernel version is 2.6.26. The
> systems each run on a quad-core Xeon processor with 4GB memory and have a
> 16-channel ICP RAID controller with BBWC, with RAID 60 configured
> underneath. Writing to a sample partition /dev/sdbX on each of the
> machines reaches about 120MByte/s, which seems poor for a RAID of
> 16 1TB SATA drives, but okay, that is not the point here.
>
> Trying to write to the drbd1 device gives a performance of ~3MByte/s if
> both nodes are online, and ~8MByte/s when writing to the primary node
> while the secondary is offline.
>
> I watched the webinar about performance tuning for drbd and tried
> several options such as max-buffers, al-extents and no-disk-flushes,
> which all in all did not help. The performance increase from these
> tuning parameters is no more than 10-15 percent, which might well be
> measurement inaccuracy anyway.

as your controller cache has a working BBU,
you can safely use "no-disk-flushes;".
starting with drbd 8.2.7,
you should then also say "no-disk-barriers;".
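
for example, a minimal sketch of the disk section
(resource name "r0" is just a placeholder for yours):

  disk {
    no-disk-flushes;
    no-disk-barriers;   # drbd 8.2.7 and later only
  }

then a "drbdadm adjust r0" should apply it online.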

verify that the settings are actually used with
 drbdsetup /dev/drbd0 show
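
if the full output is too noisy, something like
 drbdsetup /dev/drbd0 show | grep -e flush -e barrier
narrows it down (assuming the option names appear
verbatim in the output).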

> Both machines are connected with two crossover cables and a balance-rr
> bonding device, which gives a throughput of ~210MByte/s using TCP. But
> since writing to the local disk is also very slow through the drbd
> driver, I think the fault has to be found somewhere in the I/O
> configuration.

in that setup, I'd expect you to be able to saturate your raid controller.
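
if you want to double check the link anyway, a quick iperf
run (assuming iperf is installed on both nodes) would do:

 # on the secondary:
 iperf -s
 # on the primary:
 iperf -c <ip-of-secondary> -t 30

but with ~210MByte/s over tcp already measured, the wire is
most likely not your problem.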

> I tested performance using:
> 
> dd if=/dev/zero of=/dev/drbd1 bs=512M count=1
> 
> giving me a performance of ~8MByte/s
> 
> and 
> 
> dd if=/dev/zero of=/dev/sdb3 bs=512M count=1
> 
> giving me a performance of ~110-120MByte/s
> 
> (where drbd1 is pointing to /dev/sdb1 on each node).
> 
> Read performance is not as bad as write performance: I get 230MByte/s vs.
> 125MByte/s (but it's bad enough).

does "oflag=direct" resp. "iflag=direct" change anything in those micro
benchmarks?
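
i.e. retest with something along these lines (smaller blocks,
more of them, so dd reports a more meaningful rate than a
single 512M write would):

 dd if=/dev/zero of=/dev/drbd1 bs=1M count=512 oflag=direct
 dd if=/dev/drbd1 of=/dev/null bs=1M count=512 iflag=direct

careful: the first command overwrites the start of /dev/drbd1,
just like your original test did.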

> 
> If anyone has a hint what to try next it would be greatly appreciated.
> 
> 
> Thank you in advance.
> With kind regards, Felix
> 

-- 
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
__
please don't Cc me, but send to list   --   I'm subscribed


