Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
Hello Lars!

> Maybe this nice post from 2012
> helps to realize what congestion is?
> Pasted here for your convenience, even though it is in the archives.

Thanks for answering, it explained it pretty well.

> But even with massive buffers (funnels),
> the sustained mid-term/long-term average write rate
> obviously cannot exceed the minimum bandwidth within the whole system.

This depends on how Protocol A is implemented. It seems that it slows down the write speed when the "funnel" is full. The main goal appears to be keeping both nodes in sync, even if this costs write speed.

The original poster from 2012 asked this:

>> I was expecting that if I switched to protocol A, I would be able to let
>> the SSD drive write at its full speed (e.g. 250MB/s), only at the price of
>> the secondary potentially falling a little bit behind

I guess he meant the peer going out of sync here. I know (because of the explanation) that it isn't implemented that way. But wouldn't it be possible to implement a Protocol A2 (or D) that simply writes at full speed to the local disk regardless of whether the peer can follow, lets the peer go out of sync, and uses the already implemented resync mechanism, with the configured sync parameter settings, to catch the peer up during the heavy write load and keep syncing until it is consistent again?

There is already a bitmap of inconsistent blocks. Instead of trying to update the peer with each write, the driver could simply set the bit in the bitmap when the buffer (funnel) gets full and switch back to the already implemented sync mode (which is currently used only after a peer reconnects).

BR,
Jasmin
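PS: To make the idea concrete, here is a rough, purely illustrative sketch of the write path I have in mind, written as a small standalone C program. None of the names (replicate_block, oos_bitmap, SEND_BUF_SLOTS, ...) are real DRBD identifiers, and the buffer is just a counter; it only shows the decision "queue the block for the peer while the funnel has room, otherwise set its bit in the out-of-sync bitmap and keep writing locally", with the existing resync logic shipping the marked blocks later.

/*
 * Hypothetical sketch of the proposed "Protocol A2 / D" write path.
 * All identifiers are made up for illustration; this is not DRBD code.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_BLOCKS     1024   /* blocks tracked by the bitmap        */
#define SEND_BUF_SLOTS 16     /* size of the "funnel" toward the peer */

static uint8_t  oos_bitmap[NUM_BLOCKS / 8];  /* out-of-sync (dirty) blocks */
static unsigned send_buf_used;               /* slots currently queued     */

static void bitmap_set(unsigned blk)   { oos_bitmap[blk / 8] |=  (uint8_t)(1u << (blk % 8)); }
static bool bitmap_test(unsigned blk)  { return oos_bitmap[blk / 8] & (1u << (blk % 8)); }
static void bitmap_clear(unsigned blk) { oos_bitmap[blk / 8] &= (uint8_t)~(1u << (blk % 8)); }

/* Called for every write after it has been submitted to the local disk. */
static void replicate_block(unsigned blk)
{
    if (send_buf_used < SEND_BUF_SLOTS) {
        /* Normal Protocol A behaviour: queue the block for the peer. */
        send_buf_used++;
        printf("block %u queued for peer\n", blk);
    } else {
        /*
         * Proposed behaviour: the funnel is full, so do NOT throttle the
         * local write. Just mark the block out-of-sync and let the
         * existing resync mechanism ship it later.
         */
        bitmap_set(blk);
        printf("block %u marked out-of-sync (buffer full)\n", blk);
    }
}

/* Background resync: walk the bitmap whenever the funnel has room again. */
static void resync_step(void)
{
    for (unsigned blk = 0; blk < NUM_BLOCKS && send_buf_used < SEND_BUF_SLOTS; blk++) {
        if (bitmap_test(blk)) {
            bitmap_clear(blk);
            send_buf_used++;
            printf("block %u resynced to peer\n", blk);
        }
    }
}

int main(void)
{
    /* Simulate a burst of writes that overruns the funnel... */
    for (unsigned blk = 0; blk < 24; blk++)
        replicate_block(blk);

    /* ...then the peer drains the buffer and the resync catches up. */
    send_buf_used = 0;
    resync_step();
    return 0;
}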