On 05/08/2011 07:46 PM, Maxim Ianoglo wrote:
> Hello,
>
> Getting some "strange" results on write testing on the DRBD Primary.
> Every time I get more data written than a 1Gb link can handle.
> I get about 133 MB/s with the 1Gb link saturated and both nodes in sync.
> Also, if I run a test with files smaller than 1GB (for example 900MB),
> I always get results between 650-750MB/s, no matter which sync
> protocol I use in DRBD.
>
> Does this have something to do with DRBD's implementation? Buffers or
> something...
>
> Here is my configuration file:
<snip>
> disk {
>     on-io-error detach;
>     no-disk-barrier;
>     no-disk-flushes;
>     no-md-flushes;
> }

Hi,

as far as I can tell, DRBD is happily stuffing your write cache without doing on-the-spot syncing. That's basically what you're telling it to do. (It *is* syncing, of course, but the writes are acknowledged by the local RAID controller before the Secondary has received them.)

I remember you telling us about a BBU on your RAID controller, so this is probably what you want.

If you want to know the raw performance of DRBD, I think you can either
a) enable disk flushes, or
b) disable your write cache.

Depending on your workload, you may be writing at full cache speed practically all the time, in which case the value you measured is valid.

A question to those more adept at the concepts: come to think of it, I'm not sure why this is actually a good idea. If the Primary crashes in this setup, won't the Secondary come up with up to 900MB of missing writes? And when the Primary is restored, won't it mark the data salvaged from the BBU'ed cache as dirty, based on the activity log? (Yes, that's two questions, actually.)

Regards,
Felix
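P.S.: One way to take the cache out of the measurement on the dd side, independent of the DRBD flush settings, is to force the data to actually reach the backing device before dd reports a rate. A rough sketch (the target path is just a placeholder for a file on your DRBD-backed filesystem, and the sizes are arbitrary):

```shell
# Write 1 GiB, then fdatasync before dd prints the throughput figure.
# The reported rate then reflects what actually reached the array,
# not just what landed in the page/controller cache.
dd if=/dev/zero of=/mnt/drbd/testfile bs=1M count=1024 conv=fdatasync

# Alternatively, bypass the page cache on every write with O_DIRECT
# (note: some filesystems, e.g. tmpfs, do not support O_DIRECT):
dd if=/dev/zero of=/mnt/drbd/testfile bs=1M count=1024 oflag=direct
```

With a BBU'ed controller cache you may still see better-than-link numbers from the second variant, since O_DIRECT only bypasses the kernel's cache, not the controller's.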