Note: "permalinks" may not be as permanent as we would like;
direct links to old sources may well be a few messages off.
/ 2006-11-17 12:25:02 +0100 \ Ralf Schenk:

> Hello!
>
> I've successfully set up DRBD 8.0pre6 on two Ubuntu Server and Xen based
> Xeon machines.
>
> I did some benchmarks and I'm not fully content with the performance.
> The machines are connected crossover via Gigabit adapters (PCI-X)
> with jumbo frames (MTU 9000), and netio shows near-wirespeed throughput
> of about 116000 KByte/s in each direction.

Jumbo frames do not necessarily increase DRBD throughput. Benchmark this.

> While benchmarking I saw that there was less data transferred over the
> wire than the benchmark had to write. Is there any compression used in
> the network protocol of DRBD?
>
> Is there a best-practice setting for
>   max-buffers
>   max-epoch-size
>   sndbuf-size
> or other settings for a fast Gigabit interconnect and Protocol C usage?

These depend more on the underlying storage device than on the network
link. There are no "best practices"; each setup behaves differently.

> At the moment the performance decrease in writing speed between the
> underlying device and the DRBD device is about 25%. Is that ok?

Whether it is "ok", you have to decide. We have seen less than 2% in
our test setups, and expect this to become better.

--
: Lars Ellenberg                                  Tel +43-1-8178292-0  :
: LINBIT Information Technologies GmbH            Fax +43-1-8178292-82 :
: Schoenbrunner Str. 244, A-1120 Vienna/Europe    http://www.linbit.com :

__
please use the "List-Reply" function of your email client.
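For readers wondering where the tunables mentioned above are set: in DRBD 8.x they belong in the net section of drbd.conf, with the protocol chosen at the resource level. A minimal sketch follows; the resource name, device paths, and numeric values are illustrative placeholders only, not recommendations, and should be tuned against your own storage.

```
resource r0 {
  protocol C;                 # synchronous replication, as discussed above

  net {
    # illustrative values -- benchmark against your backing storage
    max-buffers     2048;
    max-epoch-size  2048;
    sndbuf-size     512k;
  }

  # device, disk, and address stanzas for both nodes go here
}
```

Changing one parameter at a time and re-running the same benchmark is the only reliable way to see which, if any, of these helps a given setup.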
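To quantify an overhead figure like the 25% mentioned above, run the same write benchmark once against the backing device and once against the DRBD device, then compare the throughput numbers. A minimal dd-based sketch, using a scratch file in /tmp as a stand-in for the real mount points (which are assumptions about your setup):

```shell
# Write 16 MiB with an fsync at the end so the figure reflects real
# storage throughput, not just the page cache. Repeat this against a
# file on the backing device and one on the mounted DRBD device, and
# compare the MB/s lines dd prints; the gap is the replication cost.
dd if=/dev/zero of=/tmp/drbd_bench bs=1M count=16 conv=fsync 2>&1 | tail -n 1
```

For raw-device numbers, `oflag=direct` (bypassing the page cache) gives a harsher but more comparable figure; it requires a filesystem or device that supports O_DIRECT.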