On Tue, Sep 18, 2007 at 07:17:49AM -0700, Kelly Byrd wrote:
> Reading this:
>
> "To get best performance out of DRBD on top of software raid (or any
> other driver with a merge_bvec_fn() function) you might enable this
> function, iff you know for sure that the merge_bvec_fn() function will
> deliver the same results on all nodes of your cluster. I.e. the physical
> disks of the software raid are of the exact same type. USE THIS OPTION
> ONLY IF YOU KNOW WHAT YOU ARE DOING."
>
> If I'm using drbd on top of the md driver, it seems ok to turn on
> 'use-bmbv'. I currently have all identical drives underneath my raid0
> md. I am concerned about the last couple of sentences. Do the drives
> under md really need to be identical?

Software raid needs to split a bio on chunk boundaries and on device
boundaries. The chunk size is configurable (and is irrelevant for e.g.
raid1); the device boundaries are the thing to worry about here.

DRBD on the receiving side simply reassembles the bio as it receives it;
we have nothing in place to break it up into several requests there. So
if the md on your primary accepts a bio that the md on the other node
would need to split due to device boundaries, DRBD will go "protocol
error" and drop the connection.

So yes, you had better make sure the device boundaries are at the exact
same places if you want to use this. And I very much doubt that this
measurably increases performance in a real-life scenario. But I'm
interested to see your benchmarks, then.

Btw, DRBD actually expects to run on something with local redundancy,
too. So please consider raid10 instead.

-- 
: Lars Ellenberg                           http://www.linbit.com :
: DRBD/HA support and consulting           sales at linbit.com   :
: LINBIT Information Technologies GmbH     Tel +43-1-8178292-0   :
: Vivenotgasse 48, A-1120 Vienna/Europe    Fax +43-1-8178292-82  :

__
please use the "List-Reply" function of your email client.
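The device-boundary problem described above can be illustrated with a toy model (Python, not kernel code; the sector counts and the example bio are invented for illustration). A bio is acceptable whole only if it does not straddle a component-device boundary, which is what a merge_bvec_fn enforces; a primary whose devices break at one offset may accept a request that a secondary with slightly different device sizes would have to split:

```python
# Toy model of the device-boundary mismatch (hypothetical numbers,
# not the actual md/DRBD implementation).

def device_boundaries(device_sizes):
    """Cumulative end offsets (in sectors) of each component device."""
    bounds, offset = [], 0
    for size in device_sizes:
        offset += size
        bounds.append(offset)
    return bounds

def md_accepts(bio_start, bio_len, device_sizes):
    """A bio is accepted whole only if it does not cross a
    component-device boundary."""
    bio_end = bio_start + bio_len
    return not any(bio_start < b < bio_end
                   for b in device_boundaries(device_sizes))

# Primary: two devices of 1000 sectors each -> boundary at sector 1000.
primary = [1000, 1000]
# Secondary: nominally "identical" drives that are 8 sectors smaller/larger,
# shifting the boundary to sector 992.
secondary = [992, 1008]

bio_start, bio_len = 980, 16   # sectors 980..996

print(md_accepts(bio_start, bio_len, primary))    # True: fits below 1000
print(md_accepts(bio_start, bio_len, secondary))  # False: crosses 992
# DRBD on the secondary cannot split the incoming bio, so in real life
# this mismatch surfaces as a "protocol error" and a dropped connection.
```

This is why "identical drive model" is not quite the requirement: what must match is where the component devices begin and end, since that determines which bios each node's md will accept unsplit.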