On Tuesday 08 April 2008 05:27:25 Christian Balzer wrote:
> These are (except for the 2 primaries) nearly exactly the same settings I
> came up with after a lot of testing.
>
> In my case it's a dual quad core box with 24GB RAM and 8 1TB SATA drives
> in a RAID-5 that give me about 120MB/s writes on the "bare" MD device.
> The first try with default values on DRBD gave me about 41MB/s writes and
> a big "WTH?" feeling.

If your local I/O subsystem pulls 120 MB/s, the expected maximum DRBD
throughput is around 105 MB/s:

- the disk does 120,
- Gigabit Ethernet realistically does 110,
- so the network is your bottleneck,
- deduct the roughly 5% DRBD throughput penalty,
- and you end up at around 105 MB/s.

And that's a throughput we routinely tune DRBD to.

> In addition to that I turned on "use-bmbv" since nobody here actually
> managed to give me a good reason (other than cargo-cult quoting the
> manual) not to with my particular setup (identical disks and all on
> Linux MD) and it made about a 5MB/s difference in writes...

The general mantra here is: if it works for you, fine.

> My personal conclusion is that if one wants to build a high-speed
> (writes) DRBD setup where speed is definitely more important than
> storage capacity, go for a RAID-10 (preferably the Linux MD RAID-10
> with far or offset replication) with as many and as fast drives as you
> can fit. For extra oomph, consider a battery-backed ramdisk or solid
> state drive to hold the meta-data.

Or just use hardware RAID with BBWC.

Also, don't confuse throughput and latency, and measure block device
performance before file system performance.

Cheers,
Florian

--
: Florian G. Haas
: LINBIT Information Technologies GmbH
: Vivenotgasse 48, A-1120 Vienna, Austria

Please note: when replying, there is no need to CC my personal address.
Replying to the list is fine. Thank you.
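[Editor's note: the bottleneck arithmetic in the reply can be sketched in a
few lines of Python. The 5% DRBD overhead figure is the rule of thumb quoted
in the post, not a measured constant, and the function name is illustrative.]

```python
# Back-of-the-envelope DRBD throughput estimate, using the numbers
# from the post: 120 MB/s local disk, ~110 MB/s usable Gigabit
# Ethernet, and a ~5% DRBD replication penalty (assumed rule of thumb).

def expected_drbd_throughput(disk_mbps, net_mbps, drbd_penalty=0.05):
    """DRBD goes no faster than its slowest leg, minus its own overhead."""
    bottleneck = min(disk_mbps, net_mbps)   # slowest component wins
    return bottleneck * (1 - drbd_penalty)

# Disk at 120, GigE at 110: the network is the bottleneck.
print(expected_drbd_throughput(120, 110))   # ~104.5, i.e. around 105 MB/s
```

Note that making the disk faster changes nothing once the network is the
bottleneck; only a faster replication link (or dropping synchronous
replication) moves the ceiling.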
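[Editor's note: the "don't confuse throughput and latency" point can be
illustrated with a toy model. The round-trip time and link speed below are
assumed example values, not DRBD measurements: a link with excellent
throughput can still be slow for small synchronous writes, because each
write waits out a full network round trip.]

```python
# Toy model: effective write rate when every block must be acknowledged
# over the network before the next one is issued (synchronous replication).

LINK_MBPS = 110.0   # usable Gigabit Ethernet throughput, MB/s (assumed)
RTT_MS = 0.2        # LAN round-trip time, ms (assumed)

def sync_write_rate(block_kb):
    """Effective MB/s when each block pays one round trip."""
    block_mb = block_kb / 1024.0
    transfer_s = block_mb / LINK_MBPS      # time on the wire
    rtt_s = RTT_MS / 1000.0                # time waiting for the ack
    return block_mb / (transfer_s + rtt_s)

print(sync_write_rate(4))      # 4 KB sync writes: heavily latency-bound
print(sync_write_rate(4096))   # 4 MB writes: close to link throughput
```

This is why benchmarking with large streaming writes says little about
small-transaction (e.g. database) workloads, and why raw block device
numbers should be taken before file system numbers get layered on top.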