[DRBD-user] DRBD Performance

Ralf Schenk rs at databay.de
Fri Nov 17 12:25:02 CET 2006

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hello!

I've successfully set up DRBD 8.0pre6 on two Xen-based Xeon machines
running Ubuntu Server.

I did some benchmarks and I'm not fully satisfied with the performance.
The machines are connected back-to-back (crossover) via Gigabit adapters
(PCI-X) with jumbo frames (MTU 9000), and netio shows near wire-speed
throughput of about 116000 KByte/s in each direction.
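(For reference, the jumbo-frame setup amounts to something like the
following; the interface name eth1 is just an example:)

    # enable jumbo frames on the dedicated replication interface
    ip link set dev eth1 mtu 9000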

While benchmarking I noticed that less data was transferred over the
wire than the benchmark actually wrote. Is any compression used in
DRBD's network protocol?

If so, is it possible to switch it off? My internal hardware RAID 1
SATA storage is not capable of reading or writing at Gigabit speed
anyway.

Are there best-practice settings for
max-buffers
max-epoch-size
sndbuf-size
or other parameters for a fast Gigabit interconnect with Protocol C?
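For illustration, this is the kind of net section I mean; the numbers
below are just placeholders, not values I have tested:

    resource r0 {
      protocol C;
      net {
        max-buffers     2048;   # buffers DRBD may allocate for incoming data
        max-epoch-size  2048;   # max write requests between two write barriers
        sndbuf-size     512k;   # TCP send buffer of the replication socket
      }
    }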

At the moment the write-speed penalty of the DRBD device compared to
the underlying device is about 25%. Is that to be expected?
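A comparison along these lines illustrates what I mean (device names
are examples, and the writes are destructive):

    # write directly to the backing device (example name /dev/sdb1)
    dd if=/dev/zero of=/dev/sdb1 bs=1M count=1024 oflag=direct
    # the same write through the DRBD device
    dd if=/dev/zero of=/dev/drbd0 bs=1M count=1024 oflag=direct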

-- 
__________________________________________________

Ralf Schenk
fon (02 41) 9 91 21-0
fax (02 41) 9 91 21-59
rs at databay.de

Databay AG
Hüttenstraße 7
D-52068 Aachen
www.databay.de
_________________________________________________
