[DRBD-user] write performance

Lars Ellenberg Lars.Ellenberg at linbit.com
Wed Aug 23 12:45:00 CEST 2006

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


/ 2006-08-23 11:20:56 +0200
\ Anselm Strauss:
> Hi.
> 
> On my drbd system the write performance is far below what I expected. I have two Opteron servers connected with 
> gigabit ethernet, and each one has two 500 GB SATA disks. drbd runs on a dedicated cross-connect gigabit link. I'm 
> pretty sure that neither the CPU, network, RAM nor the disks are a bottleneck.
> For a start I created two drbd devices that are primary on one server, each device on a separate disk. I exported the 
> devices with lustre to another host. The second drbd host just receives the mirrored data over the dedicated link 
> and doesn't export anything.
> I get about 20 MB/s for a single drbd device and about 28 MB/s when I stripe over both. I also tried multiple drbd 
> devices per disk. I can reach 35 MB/s with 4 drbd devices, 2 devices on each disk, but that's it. I tried to modify 
> buffers and the protocol, but that seems to be the limit.
> When I disable mirroring in drbd ("drbdadm down all" on the second server) I can reach rates of 40 MB/s, 85 MB/s and 
> 88 MB/s in the cases above, respectively.

> Is this the performance impact I have to expect from drbd?

no.

> Has anybody reached higher write performance with drbd?

yes.  but you do not have to take my word for it.

maybe some happy users on this list can confirm that drbd overhead is
typically in the range of 1 to 3%, not 50%.

> Where could be the bottleneck?

test the stack from bottom to top. single component tests.

you could follow some hints in
 http://www.gossamer-threads.com/lists/drbd/users/10689#10689
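
as a first single-component check, measure the raw throughput of the
dedicated link itself, e.g. with iperf (the peer address below is a
placeholder, use the ip of the crossover link):

  # on one node
  iperf -s

  # on the other node, over the crossover link
  iperf -c <peer-ip-on-crossover-link>

a healthy gbit link should show something on the order of 900 Mbit/s here.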

first, test the read and write performance of the disks,
using the bare block device.
on both servers. inner and outer cylinders. several times.
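
a simple sequential dd run is usually good enough for a baseline.
something like the following (careful, this overwrites the disk;
/dev/sdX and the sizes are placeholders, adapt them to your boxes):

  # sequential write, start of the disk (outer cylinders), bypassing the page cache
  dd if=/dev/zero of=/dev/sdX bs=1M count=1024 oflag=direct

  # sequential read
  dd if=/dev/sdX of=/dev/null bs=1M count=1024 iflag=direct

  # same again near the end of the device (inner cylinders)
  dd if=/dev/zero of=/dev/sdX bs=1M count=1024 seek=400000 oflag=direct

if your dd does not know oflag=direct/iflag=direct, write an amount well
above the RAM size instead, so the page cache cannot flatter the numbers.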

put drbd on top of that, disconnected.

do the full set of benchmarks again, now on /dev/drbdX
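
with the peer down, that could look like this (resource name r0 and
device /dev/drbd0 are just examples, use whatever your config defines):

  # on the secondary: take its side of the resource down
  drbdadm down r0          # or: drbdadm disconnect r0

  # on the primary: repeat the same dd runs, now against the drbd device
  dd if=/dev/zero of=/dev/drbd0 bs=1M count=1024 oflag=direct
  dd if=/dev/drbd0 of=/dev/null bs=1M count=1024 iflag=direct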

connect drbd.

do the full set of benchmarks again, still on /dev/drbdX
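
bring the peer back first and wait until the resource is connected and
consistent again before measuring (same example names as above):

  # on the secondary
  drbdadm up r0            # or: drbdadm connect r0

  # on either node: wait for cs:Connected and an up-to-date disk state
  cat /proc/drbd

  # then rerun the same dd tests on /dev/drbd0 on the primary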

put a file system on /dev/drbdX

test again for no-drbd, disconnected-drbd, connected-drbd,
this time using a file in the file system, not the block device.
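
for the file system step, something along these lines (ext3 and the
mount point are just examples, use whatever you actually plan to deploy):

  mkfs.ext3 /dev/drbd0
  mount /dev/drbd0 /mnt/test

  # write a file well above RAM size, then force it out to disk
  dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=4096
  sync

  # remount before reading back, so you measure the disk, not the page cache
  umount /mnt/test && mount /dev/drbd0 /mnt/test
  dd if=/mnt/test/bigfile of=/dev/null bs=1M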

put lustre on the stack,
test with that on no-drbd, disconnected-drbd, connected-drbd.
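
the file-level test itself stays the same, just run it on a lustre
client against the mounted file system (mount point is a placeholder):

  dd if=/dev/zero of=/mnt/lustre/bigfile bs=1M count=4096
  sync
  dd if=/mnt/lustre/bigfile of=/dev/null bs=1M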

then we have the data,
and can start to interpret the findings.

yes, there may be some performance penalties, especially when using
networked file systems on top of a networked block device.  but until
now, we have always been able to tune this to the point where the drbd
overhead was well within tolerable limits.
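
the usual knobs live in the net and syncer sections of drbd.conf.
the values below are only a sketch of where to start looking, not a
recommendation; what actually helps depends on your hardware:

  resource r0 {
    protocol C;
    net {
      max-buffers      2048;
      max-epoch-size   2048;
      sndbuf-size      512k;
      unplug-watermark 128;
    }
    syncer {
      # rate only limits resynchronization, not normal replication
      rate       30M;
      al-extents 257;
    }
    # disk and "on <host>" sections omitted
  }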

as a side note:
we have a deployment where we cluster an iSCSI server using drbd.
we had a hard time trying to tune DRBD, until it turned out that one of
the ("identical") hardware raids used as storage delivered inconsistent
performance for some unknown reason.
it was replaced with another one.
things went smoothly once again.

-- 
: Lars Ellenberg                                  Tel +43-1-8178292-0  :
: LINBIT Information Technologies GmbH            Fax +43-1-8178292-82 :
: Schoenbrunner Str. 244, A-1120 Vienna/Europe   http://www.linbit.com :
__
please use the "List-Reply" function of your email client.


