Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
Ross S. W. Walker wrote:
> > It seems that
> >
> > a. changing the net parameters did help
> >
> >     sndbuf-size      240k;
> >     max-buffers      20480;
> >     max-epoch-size   16384;
> >     unplug-watermark 20480;
> >
> > b. changing the bonding mode of the interfaces from balance-rr to
> > balance-xor did help too.
> >
> > I now get about 85 MB/s. Maybe that's by accident, but I could watch
> > the write performance go down when increasing the sndbuf-size.
>
> Well, I can see increasing sndbuf as increasing latency, so it makes
> sense that decreasing it would also decrease latency a bit; why not try
> 128K and see where that puts you? If you have direct connections and a
> fast network, there is no real need for a large sndbuf. If I were
> using protocol A and a slow network of T1s, then I would use a very
> large sndbuf.
>
> Statistically speaking, though, when doing a benchmark over a short
> period of time, 82-83-85 MB/s are about the same. I find that a
> 15-minute run will normally get rid of the 3-5 MB/s swings between
> runs and narrow it down to 1-2 MB/s swings.
>
> It looks like you are approaching the part of tuning where you are
> receiving diminishing returns and will need to do more and more tuning
> to squeeze less and less out, so I would say that 85 MB/s is what
> you're going to see unless you can find a way to run drbd with
> multiple paths, which I don't think it has the capability to do.
>
> Well, let me know if you can squeeze any more out of it. You might
> want to see if there are any filesystem optimizations you can do now
> to get some extra performance out of it.

OK, now I changed the filesystem of the 300GB LVM LV to XFS
(rates in MB/s, latencies in ms):

Sequential Writes
File  Blk   Num                   Avg      Maximum  Lat%     Lat%     CPU
Size  Size  Thr  Rate    (CPU%)   Latency  Latency  >2s      >10s     Eff
----  ----  ---  ------  ------   -------  -------  -------  -------  ---
8000  4096    1   92.29  40.35%     0.152  1435.65  0.00000  0.00000  229

Random Writes
File  Blk   Num                   Avg      Maximum  Lat%     Lat%     CPU
Size  Size  Thr  Rate    (CPU%)   Latency  Latency  >2s      >10s     Eff
----  ----  ---  ------  ------   -------  -------  -------  -------  ---
8000  4096    1   22.11  23.88%     0.031     0.13  0.00000  0.00000   93

I did some tests at the end of last year, and XFS seemed to be faster
than ext3. But I didn't expect that it would affect the performance of
drbd in connected mode that much. The random writes especially are much
higher than with ext3. I'll have to think about that...

ralf
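
P.S. For anyone following along, here is roughly how the options quoted
above sit in drbd.conf; a minimal sketch, assuming the net-section
syntax of the DRBD 8.0 series, with the resource name and host/disk
details as placeholders, and with 128k being Ross's suggestion to try
rather than a measured setting:

    resource r0 {                   # resource name is a placeholder
      net {
        sndbuf-size      128k;      # thread used 240k; 128k is Ross's suggestion
        max-buffers      20480;
        max-epoch-size   16384;
        unplug-watermark 20480;
      }
      # on-host and disk sections omitted
    }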
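
The bonding-mode switch is typically done through the bonding module
options; a sketch for a modprobe.conf-style setup (bond0 and miimon=100
are my assumptions, not taken from this thread), followed by reloading
the bonding module or restarting networking:

    alias bond0 bonding
    options bond0 mode=balance-xor miimon=100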
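
The table layout above looks like tiobench output; if that's what was
used, a run with matching parameters would be along these lines (flag
names per tiobench.pl; /mnt/test is a placeholder, and --numruns is
added per Ross's point about averaging over longer runs):

    tiobench.pl --dir /mnt/test --size 8000 --block 4096 --threads 1 --numruns 3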
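
And the filesystem step itself, for completeness; the device and mount
point are placeholders (use whatever device carries the filesystem in
this stack, DRBD device or LV), noatime is my assumption rather than
something ralf mentions, and mkfs.xfs -f destroys any existing data:

    mkfs.xfs -f /dev/drbd0
    mount -o noatime /dev/drbd0 /mnt/test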