<div dir="ltr">Dear Lars,<div><br><div>Thank you for your answer.</div><div>First, I know you proxy mechanism.</div><div>Please see my questions below,</div><div><br></div><div><div class="gmail_extra"><br><div class="gmail_quote">2016-02-15 23:42 GMT+09:00 Lars Ellenberg <span dir="ltr"><<a href="mailto:lars.ellenberg@linbit.com" target="_blank">lars.ellenberg@linbit.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On Sun, Feb 14, 2016 at 07:34:55PM +0900, 김재헌 wrote:<br>
> > Hi,
> >
> > With async congestion mode, local disk I/O performance is much
> > slower than in sync replication mode.
> >
> > 1. Version
> > - V9.0.1-1, GIT-hash: f57acfc22d29a95697e683fb6bbacd9a1ad4348e
> > - VM: CentOS 7
> >
> > 2. Configuration
> > protocol A;
> > sndbuf-size 256K;
> > on-congestion pull-ahead;
> > congestion-fill 128K;
>
> That is a nonsense configuration.
>
> These congestion parameters are intended to be used
> with a "DRBD-Proxy" in between, or long fat pipes.
>
> Useful values would be several hundred megabyte,
> with a proxy memlimit of those several hundred megabyte
> plus some.

Yes, you are right.

But this configuration is only a test!
I want to observe local disk I/O performance with protocol A
replication during a heavy congestion situation. The sndbuf-size is
therefore not important here; a small congestion-fill value is all
this test needs.
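For reference, my understanding of the pull-ahead decision is roughly
the following sketch (hypothetical names and simplified logic, not the
actual DRBD source). With congestion-fill at 128K the connection should
flip into the Ahead state almost immediately, which is the behaviour I
want to provoke:

    #include <stdbool.h>
    #include <stddef.h>

    /* Illustrative constant taken from the test configuration above. */
    #define CONGESTION_FILL (128 * 1024)    /* congestion-fill 128K */

    /* Sketch of the pull-ahead decision as I understand it: once the
     * amount of data queued for the peer exceeds congestion-fill, the
     * resource goes Ahead and only out-of-sync information is sent
     * until the peer catches up. */
    static bool should_pull_ahead(size_t bytes_queued_for_peer)
    {
        return bytes_queued_for_peer >= CONGESTION_FILL;
    }

So with these values the test spends most of its time in the
Ahead/Behind state, which is exactly the congestion situation I want
to measure.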
> If it does not behave "nicely" with very low values,
> then that's expected.

"Expected"?

Do you mean that under heavy congestion (rapid ahead/behind switching)
protocol A should be slower than protocol C? If so, that is exactly my
question: why? I would expect local disk I/O performance in protocol A
not to be affected by whether or not congestion occurs.
> In any case, even with protocol A,
> IO completion occurs when we have both
> a) local disk completion
> b) successfully sent any data (or out-of-sync information)
>
> > I think there seems to be a problem in the following areas:
> > - Before congestion, completion of local disk I/O is handled in the
> >   complete_master_bio function in the drbd_sender thread.
> > - But even when congestion occurs, it is, I think, still handled at
> >   the same place.
> > - In other words, although the local disk write has already
> >   finished, the copying application never receives this completion
> >   signal and stays pending.
> > - The application waits for completion until got_BarrierAck
> >   receives the just-requested block from the peer.
> > - I think local I/O completion should happen as soon as congestion
> >   is detected, without waiting for the peer's ack.
> >
> > Do I misunderstand something about the DRBD congestion mechanism?
>
> Completion is not supposed to wait for any actual DRBD protocol
> ACKs, not even "barrier acks".
>
> But it waits at least until send() returns.
> send() of data while not yet switched to congested, or
> send() of "this block is now out-of-sync" when already congested.

I see... I will review the code again later.
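So, if I read you correctly, the completion rule is roughly the
following sketch (hypothetical names, not the actual drbd code): even
with protocol A the master bio completes only after the local write has
finished AND send() has returned, and what is sent depends on the
congestion state:

    #include <stdbool.h>

    struct write_state {
        bool local_write_done;  /* a) local disk completion            */
        bool send_returned;     /* b) send() returned: the data itself
                                 *    while not congested, or the
                                 *    out-of-sync information once
                                 *    Ahead                            */
    };

    /* The application-visible completion (complete_master_bio) can
     * only happen when both conditions hold. */
    static bool may_complete_master_bio(const struct write_state *s)
    {
        return s->local_write_done && s->send_returned;
    }

That would explain what I am seeing: if send() itself blocks on a full
send buffer, the local I/O completion stalls with it, even in
protocol A.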

> If that is too much for your network, then *disconnect*.
> Or use periodic file level rsync instead of DRBD.
> Or something like that.
>
> --
> : Lars Ellenberg
> : http://www.LINBIT.com | Your Way to High Availability
> : DRBD, Linux-HA and Pacemaker support and consulting
>
> DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
> __
> please don't Cc me, but send to list -- I'm subscribed

Thanks.