> -----Original Message-----
> From: drbd-user-bounces at lists.linbit.com
> [mailto:drbd-user-bounces at lists.linbit.com] On Behalf Of Ralf Gross
> Sent: Monday, January 15, 2007 4:46 PM
> To: drbd-user at lists.linbit.com
> Subject: Re: [DRBD-user] drbd performance with GbE in connected mode
>
> Ross S. W. Walker wrote:
> > > It seems that
> > >
> > > a. changing the net parameters did help
> > >
> > >     sndbuf-size 240k;
> > >     max-buffers 20480;
> > >     max-epoch-size 16384;
> > >     unplug-watermark 20480;
> > >
> > > b. changing the bonding mode of the interfaces from balance-rr to
> > > balance-xor did help too.
> > >
> > > I now get about 85 MB/s. Maybe that's by accident, but I could
> > > watch the write performance go down when increasing the
> > > sndbuf-size.
> >
> > Well, I can see increasing sndbuf as increasing latency, so it makes
> > sense that decreasing it would also decrease latency a bit; why not
> > try 128K and see where that puts you. If you have direct connections
> > and a fast network, there is no real need for a large sndbuf. If I
> > were using Prot A and a slow network of T1s, then I would use a very
> > large sndbuf.
> >
> > Statistically speaking, though, when doing a benchmark over a short
> > period of time, 82-83-85 MB/s are about the same. I find that a
> > 15-minute run will normally get rid of the 3-5 MB/s swings between
> > runs and narrow them down to 1-2 MB/s swings.
> >
> > It looks like you are approaching the part of tuning where you are
> > receiving diminishing returns and will need to do more and more
> > tuning to squeeze less and less out, so I would say that 85 MB/s is
> > what you're going to see unless you can find a way to run drbd with
> > multiple paths, which I don't think it has the capability to do.
> >
> > Well, let me know if you can squeeze any more out of it. You might
> > want to see if there are any filesystem optimizations you can do now
> > to get some extra performance out of it.
>
> Ok, now I changed the fs of the 300 GB LVM LV to xfs.
>
> Sequential Writes
> File  Blk   Num                  Avg      Maximum  Lat%     Lat%     CPU
> Size  Size  Thr  Rate   (CPU%)   Latency  Latency  >2s      >10s     Eff
> ----  ----  ---  -----  ------   -------  -------  -------  -------  ---
> 8000  4096  1    92.29  40.35%   0.152    1435.65  0.00000  0.00000  229
>
> Random Writes
> File  Blk   Num                  Avg      Maximum  Lat%     Lat%     CPU
> Size  Size  Thr  Rate   (CPU%)   Latency  Latency  >2s      >10s     Eff
> ----  ----  ---  -----  ------   -------  -------  -------  -------  ---
> 8000  4096  1    22.11  23.88%   0.031    0.13     0.00000  0.00000  93
>
> I did some tests at the end of last year and xfs seemed to be faster
> than ext3, but I didn't expect that it would impact the performance of
> drbd in connected mode that much. Especially the random writes are
> much higher than with ext3.
>
> I'll have to think about that...

Looks good, you're only seeing around a 25% loss instead of a 42% loss.
But you are now comparing apples with oranges; who knows, maybe you would
have gotten 150 MB/s sequential write on xfs to begin with...

I don't know if anyone has put up figures on the estimated performance
loss due to the drbd backend. If I were a guessing man I would probably
say 30%... but that is made up, just a guess; it would need a more
scientific approach to be more definitive.

-Ross
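For context, the tunables Ralf lists all live in the net { } section of
drbd.conf. A minimal sketch of how they might be set in a DRBD 8.x-style
config follows; the resource name, protocol choice, hostnames, devices and
addresses are placeholders, not details taken from the thread:

    resource r0 {
        protocol C;
        net {
            sndbuf-size      240k;    # TCP send buffer; large values mainly help slow or high-latency links
            max-buffers      20480;   # buffer pages DRBD may use for queuing writes
            max-epoch-size   16384;   # max write requests between two write barriers
            unplug-watermark 20480;   # queued requests before the backing device is "kicked"
        }
        on nodeA { device /dev/drbd0; disk /dev/vg0/data; address 10.0.0.1:7788; meta-disk internal; }
        on nodeB { device /dev/drbd0; disk /dev/vg0/data; address 10.0.0.2:7788; meta-disk internal; }
    }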
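The bonding change (balance-rr to balance-xor) is a Linux bonding-driver
setting rather than a DRBD one. On the RHEL/CentOS-style systems common at
the time it would usually be set through module options, roughly like the
sketch below; the bond name, slave interfaces and miimon interval are
assumptions, not details from the thread:

    # /etc/modprobe.conf (or a file under /etc/modprobe.d/)
    alias bond0 bonding
    options bond0 mode=balance-xor miimon=100

    # eth0/eth1 are then enslaved via the usual ifcfg files,
    # i.e. MASTER=bond0 and SLAVE=yes in ifcfg-eth0 and ifcfg-eth1

With balance-xor each peer is pinned to one slave (by a MAC-based hash by
default), so a single DRBD TCP connection stays on one NIC and avoids the
packet reordering that balance-rr can introduce, at the cost of capping
that connection at one NIC's bandwidth.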
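The two tables above are in tiobench's report format (rates in MB/s,
latencies in milliseconds). The exact invocation isn't quoted in this
excerpt; a run along the lines below should produce comparable output,
with the target directory assumed and the option spelling worth checking
against your tiobench.pl --help:

    tiobench.pl --dir /mnt/drbd0 --size 8000 --block 4096 --threads 1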
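As for the loss percentages Ross quotes, they are simply
1 - (throughput through drbd / throughput on the bare local filesystem).
Working backwards from the 92.29 MB/s sequential write above, a ~25% loss
implies a local xfs baseline of roughly 92.29 / 0.75, i.e. about 123 MB/s;
that baseline is inferred here and is not quoted in this excerpt.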