Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
On Thu, Mar 19, 2009 at 10:16:57AM +1000, Andrew (Anything) wrote:
> Hi Gordan & Lars
> Thanks for your reply.
>
> I set DRBD's rate to something ridiculous just to ensure it wasn't part of
> the problem.
> The bs=512 count=1000 tests I've been doing only use 5-6 Mbit.
>
> I guess I won't try no-disk-drain then ;P
>
> They're 100 Mbit, connected via crossover cable. And because it's VMware,
> they are AMD pcnet32 interfaces, which as far as I can tell don't support
> coalescing tuning.
>
> DRBD has no trouble pushing exactly 10 MB/sec while syncing or doing large
> files.
>
> 64 bytes from 192.168.0.40: icmp_seq=7 ttl=64 time=0.462 ms
> 64 bytes from 192.168.0.40: icmp_seq=8 ttl=64 time=0.484 ms
> 64 bytes from 192.168.0.40: icmp_seq=9 ttl=64 time=0.467 ms
>
> RTT isn't fantastic in this 2 x VMware sample; I didn't know it was this
> bad, actually. But definitely better than 2 ms.

Well, that was a rough guess only.
Do a flood ping with large packets:
    ping -w5 -f -s 5000
and see what that gives.

> I'll see what I can do about setting up 2 x GigE systems I can put Linux on
> natively to test with.
> I had hoped I could just see the result of no-disk-flushes straight away,
> evaluate it, and put it on our live servers; seems nothing can go the easy
> way for me. ;)
>
> Do you think this 0.400 ms latency is the reason why I see no change
> when I use the no-disk-flushes options?

Most likely it hides any such effect.

--
: Lars Ellenberg
: LINBIT HA-Solutions GmbH
: DRBD®/HA support and consulting  http://www.linbit.com

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.

__
please don't Cc me, but send to list -- I'm subscribed
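
For context on the test Andrew refers to: the "bs=512 count=1000" runs are
presumably dd against a filesystem on the DRBD device. A minimal sketch of
such a small-write latency test, assuming a hypothetical mount point of
/mnt/drbd and oflag=dsync to force each write to complete before the next,
so that per-write latency rather than bandwidth is what gets measured:

    # 1000 sequential 512-byte writes, each synced before the next one starts;
    # this is the pattern where no-disk-flushes could make a visible difference
    dd if=/dev/zero of=/mnt/drbd/latency-test bs=512 count=1000 oflag=dsync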
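
For anyone picking this thread up from the archive: the options under
discussion go in the disk section of the resource definition. A rough sketch,
assuming DRBD 8.x configuration syntax; the resource name r0 is a
placeholder, and no-disk-drain is left out per the advice earlier in the
thread:

    resource r0 {
      disk {
        no-disk-flushes;   # don't flush the backing device after data writes
        no-md-flushes;     # same for the metadata area (optional)
        # no-disk-drain;   # intentionally not set here
      }
      # ... device, disk, address, meta-disk definitions as usual ...
    }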