<div dir="ltr"><br><div>Also, I triggered the resync with "drbdadm invalidate-remote wandk0" on vmA, and I am seeing the same bandwidth limit of around 30 MB/s.</div><div><br></div><div><div>[root@vmC ~]# cat /proc/drbd</div><div>version: 8.4.7-1 (api:1/proto:86-101)</div><div>GIT-hash: 3a6a769340ef93b1ba2792c6461250790795db49 build by phil@Build64R7, 2016-01-12 14:29:40</div><div><br></div><div> 1: cs:SyncTarget ro:Secondary/Primary ds:Inconsistent/UpToDate A r-----</div><div> ns:0 nr:8636123 dw:8635099 dr:0 al:0 bm:0 lo:1 pe:69 ua:1 ap:0 ep:1 wo:d oos:44851772</div><div> [>...................] sync'ed: 5.0% (43800/46076)M</div><div> finish: 0:24:52 speed: 30,052 (32,852) want: 4,194,304 K/sec</div><div>[root@vmC ~]#</div></div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Jul 1, 2016 at 11:18 AM, T.J. Yang <span dir="ltr"><<a href="mailto:tjyang2001@gmail.com" target="_blank">tjyang2001@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Thanks, Igor, for another tip: switching over to the asynchronous protocol A for the WAN network.<div>There was no improvement, even though the <a href="http://www.gossamer-threads.com/lists/drbd/users/27510" target="_blank">other email thread</a> suggested we should see a 2x write bandwidth increase.<br><div><br></div><div><div>[root@vmA2vmC ~]# ./drbdtest.bash</div><span class=""><div>1+0 records in</div><div>1+0 records out</div></span><div>536870912 bytes (537 MB) copied, 0.854164 s, 629 MB/s</div><span class=""><div>1+0 records in</div><div>1+0 records out</div></span><div>536870912 bytes (537 MB) copied, 0.578418 s, 928 MB/s</div><span class=""><div>1+0 records in</div><div>1+0 records out</div></span><div>536870912 bytes (537 MB) copied, 0.631658 s, 850 MB/s</div><span class=""><div>1+0 records in</div><div>1+0 records out</div></span><div>536870912 bytes (537 MB) copied, 0.577289 s, 930 MB/s</div><span class=""><div>1+0 
records in</div><div>1+0 records out</div></span><div>536870912 bytes (537 MB) copied, 0.773748 s, 694 MB/s</div><span class=""><div>1+0 records in</div><div>1+0 records out</div></span><div>536870912 bytes (537 MB) copied, 0.635504 s, 845 MB/s</div><span class=""><div>1+0 records in</div><div>1+0 records out</div></span><div>536870912 bytes (537 MB) copied, 0.63825 s, 841 MB/s</div><span class=""><div>1+0 records in</div><div>1+0 records out</div></span><div>536870912 bytes (537 MB) copied, 0.602557 s, 891 MB/s</div><span class=""><div>1+0 records in</div><div>1+0 records out</div></span><div>536870912 bytes (537 MB) copied, 0.583016 s, 921 MB/s</div><span class=""><div>1+0 records in</div><div>1+0 records out</div></span><div>536870912 bytes (537 MB) copied, 0.606236 s, 886 MB/s</div><span class=""><div>1+0 records in</div><div>1+0 records out</div></span><div>536870912 bytes (537 MB) copied, 19.3083 s, 27.8 MB/s</div><span class=""><div>1+0 records in</div><div>1+0 records out</div></span><div>536870912 bytes (537 MB) copied, 17.7925 s, 30.2 MB/s</div><span class=""><div>1+0 records in</div><div>1+0 records out</div></span><div>536870912 bytes (537 MB) copied, 16.5809 s, 32.4 MB/s</div><span class=""><div>1+0 records in</div><div>1+0 records out</div></span><div>536870912 bytes (537 MB) copied, 17.4038 s, 30.8 MB/s</div><span class=""><div>1+0 records in</div><div>1+0 records out</div></span><div>536870912 bytes (537 MB) copied, 16.5223 s, 32.5 MB/s</div><span class=""><div>1+0 records in</div><div>1+0 records out</div></span><div>536870912 bytes (537 MB) copied, 16.7767 s, 32.0 MB/s</div><span class=""><div>1+0 records in</div><div>1+0 records out</div></span><div>536870912 bytes (537 MB) copied, 15.1224 s, 35.5 MB/s</div><span class=""><div>1+0 records in</div><div>1+0 records out</div></span><div>536870912 bytes (537 MB) copied, 16.668 s, 32.2 MB/s</div><span class=""><div>1+0 records in</div><div>1+0 records out</div></span><div>536870912 bytes (537 MB) 
copied, 16.5105 s, 32.5 MB/s</div><span class=""><div>1+0 records in</div><div>1+0 records out</div></span><div>536870912 bytes (537 MB) copied, 16.6164 s, 32.3 MB/s</div><div>1000+0 records in</div><div>1000+0 records out</div><div>512000 bytes (512 kB) copied, 1.39062 s, 368 kB/s</div><div>1000+0 records in</div><div>1000+0 records out</div><div>512000 bytes (512 kB) copied, 1.40759 s, 364 kB/s</div><div>1000+0 records in</div><div>1000+0 records out</div><div>512000 bytes (512 kB) copied, 1.6254 s, 315 kB/s</div><div>1000+0 records in</div><div>1000+0 records out</div><div>512000 bytes (512 kB) copied, 1.61219 s, 318 kB/s</div><div>1000+0 records in</div><div>1000+0 records out</div><div>512000 bytes (512 kB) copied, 2.18549 s, 234 kB/s</div><div>1000+0 records in</div><div>1000+0 records out</div><div>512000 bytes (512 kB) copied, 1.22317 s, 419 kB/s</div><div>1000+0 records in</div><div>1000+0 records out</div><div>512000 bytes (512 kB) copied, 1.42974 s, 358 kB/s</div><div>1000+0 records in</div><div>1000+0 records out</div><div>512000 bytes (512 kB) copied, 1.7516 s, 292 kB/s</div><div>1000+0 records in</div><div>1000+0 records out</div><div>512000 bytes (512 kB) copied, 1.72105 s, 297 kB/s</div><div>1000+0 records in</div><div>1000+0 records out</div><div>512000 bytes (512 kB) copied, 1.57352 s, 325 kB/s</div><div>[root@vmA2vmC ~]# cat ./drbdtest.bash</div><div>TEST_FILE=drbd-write-test.img</div><div># write to local disk</div><div>for i in $(seq 10); do</div><div> dd if=/dev/zero of=/root/$TEST_FILE bs=512M count=1 oflag=direct</div><div> sleep 5</div><div>done</div><div>rm -f /root/$TEST_FILE</div><div><br></div><div># write to /dev/drbd1 mounted as /pub</div><div>for i in $(seq 10); do</div><div> dd if=/dev/zero of=/pub/$TEST_FILE bs=512M count=1 oflag=direct</div><div> sleep 5</div><div>done</div><div>rm -f /pub/$TEST_FILE</div><div><br></div><div><br></div><div>for i in $(seq 10); do</div><div> dd if=/dev/zero of=/pub/$TEST_FILE bs=512 count=1000 
oflag=direct</div><div> sleep 5</div><div>done</div><div>rm -f /pub/$TEST_FILE</div><div><br></div><div><br></div><div><div>[root@vmA2vmC ~]# cat /proc/drbd</div><div>version: 8.4.7-1 (api:1/proto:86-101)</div><div>GIT-hash: 3a6a769340ef93b1ba2792c6461250790795db49 build by phil@Build64R7, 2016-01-12 14:29:40</div><div><br></div><div> 1: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate A r-----</div><div> ns:5253671 nr:0 dw:5253671 dr:1577 al:51 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0</div><div>[root@vmA2vmC ~]#</div></div><div><br></div></div></div></div><div class="gmail_extra"><div><div class="h5"><br><div class="gmail_quote">On Fri, Jul 1, 2016 at 10:46 AM, Igor Cicimov <span dir="ltr"><<a href="mailto:icicimov@gmail.com" target="_blank">icicimov@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><p dir="ltr"><span><br>
On 1 Jul 2016 3:48 pm, "T.J. Yang" <<a href="mailto:tjyang2001@gmail.com" target="_blank">tjyang2001@gmail.com</a>> wrote:<br>
><br>
> Hi All<br>
><br>
> I am new to DRBD performance tuning, and I have been studying (R0). <br>
> I am also browsing others' efforts in the drbd-user archive (R1).<br>
> I was able to get a 350 MB/s rsync rate (R2) for two CentOS 7.2 VMs (A and B) on the same LAN, using the tuning from the (R1) thread. <br>
><br>
> My goal is to pair CentOS 7.2 VM C with VM A over a fast WAN pipe (R3). But when I reuse B's DRBD config and change only the IP info, the rsync rate drops back to 30-40 MB/s (see R4).<br>
><br>
> I tried jumbo frame tuning, raising the MTU from 1500 to 8000 (an RH Support article recommends 8000, not 9000). But this change on VMs A and C does not improve the rsync rate.<br>
><br>
><br>
> Does the networking team need to make any changes on their switch/router for the DRBD case?<br>
><br></span>
You need jumbo frames enabled on the switch too.</p>
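To check whether jumbo frames actually survive the whole path (switch and any routers included), a quick probe from vmA can help; the interface name eth0 below is an assumption:

```shell
# With an MTU of 8000, an unfragmentable ICMP probe may carry at most the MTU
# minus the 20-byte IP header and the 8-byte ICMP header.
MTU=8000
PAYLOAD=$((MTU - 20 - 8))
echo "probe payload: $PAYLOAD bytes"   # 7972

# Run on vmA as root (eth0 is an assumed interface name):
#   ip link set dev eth0 mtu 8000
#   ping -M do -s "$PAYLOAD" 10.64.5.245   # -M do forbids fragmentation;
#                                          # an error means some hop still caps the MTU
```

If the large probes fail while 1472-byte ones succeed, the switch/router MTU still needs raising, which matches the point above.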
<p dir="ltr"></p><div><div>><br>
> References:<br>
> R0: <a href="https://www.drbd.org/en/doc/users-guide-84/p-performance" target="_blank">https://www.drbd.org/en/doc/users-guide-84/p-performance</a><br>
> R1: <a href="http://lists.linbit.com/pipermail/drbd-user/2016-January/022611.html" target="_blank">http://lists.linbit.com/pipermail/drbd-user/2016-January/022611.html</a><br>
> R2:<br>
><br>
> #this is between vmA(10.65.184.1) and vmB(10.65.184.3), same subnet.<br>
><br>
> [root@vmA ~]# ./drbd-pm-test.bash wandk0 # script from R0<br>
><br>
> testing wandk0 on /dev/drbd1<br>
><br>
> 1+0 records in<br>
><br>
> 1+0 records out<br>
><br>
> 536870912 bytes (537 MB) copied, 8.00925 s, 67.0 MB/s<br>
><br>
> 1+0 records in<br>
><br>
> 1+0 records out<br>
><br>
> 536870912 bytes (537 MB) copied, 1.72338 s, 312 MB/s<br>
><br>
> 1+0 records in<br>
><br>
> 1+0 records out<br>
><br>
> 536870912 bytes (537 MB) copied, 1.84181 s, 291 MB/s<br>
><br>
> 1+0 records in<br>
><br>
> 1+0 records out<br>
><br>
> 536870912 bytes (537 MB) copied, 1.66079 s, 323 MB/s<br>
><br>
> 1+0 records in<br>
><br>
> 1+0 records out<br>
><br>
> 536870912 bytes (537 MB) copied, 1.69359 s, 317 MB/s<br>
><br>
> testing wandk0 on backing device:/dev/centos/wandk0<br>
><br>
> 1+0 records in<br>
><br>
> 1+0 records out<br>
><br>
> 536870912 bytes (537 MB) copied, 0.504366 s, 1.1 GB/s<br>
><br>
> 1+0 records in<br>
><br>
> 1+0 records out<br>
><br>
> 536870912 bytes (537 MB) copied, 0.550144 s, 976 MB/s<br>
><br>
> 1+0 records in<br>
><br>
> 1+0 records out<br>
><br>
> 536870912 bytes (537 MB) copied, 0.502675 s, 1.1 GB/s<br>
><br>
> 1+0 records in<br>
><br>
> 1+0 records out<br>
><br>
> 536870912 bytes (537 MB) copied, 0.473032 s, 1.1 GB/s<br>
><br>
> 1+0 records in<br>
><br>
> 1+0 records out<br>
><br>
> 536870912 bytes (537 MB) copied, 0.470139 s, 1.1 GB/s<br>
><br>
> [root@vmA ~]#<br>
><br>
><br>
><br>
> R3:<br>
><br>
> [root@vmC ~]# iperf3 -s -p 5900<br>
><br>
> warning: this system does not seem to support IPv6 - trying IPv4<br>
><br>
> -----------------------------------------------------------<br>
><br>
> Server listening on 5900<br>
><br>
> -----------------------------------------------------------<br>
><br>
> Accepted connection from 10.65.184.1(vmA), port 56750<br>
><br>
> [ 5] local 10.64.5.245 port 5900 connected to 10.65.184.1 port 56754<br>
><br>
> [ ID] Interval Transfer Bandwidth<br>
><br>
> [ 5] 0.00-1.00 sec 73.9 MBytes 620 Mbits/sec<br>
><br>
> [ 5] 1.00-2.00 sec 100 MBytes 842 Mbits/sec<br>
><br>
> [ 5] 2.00-3.00 sec 107 MBytes 895 Mbits/sec<br>
><br>
> [ 5] 3.00-4.00 sec 113 MBytes 947 Mbits/sec<br>
><br>
> [ 5] 4.00-5.00 sec 117 MBytes 984 Mbits/sec<br>
><br>
> [ 5] 5.00-6.00 sec 120 MBytes 1.01 Gbits/sec<br>
><br>
> [ 5] 6.00-7.00 sec 123 MBytes 1.03 Gbits/sec<br>
><br>
> [ 5] 7.00-8.00 sec 124 MBytes 1.04 Gbits/sec<br>
><br>
> [ 5] 8.00-9.00 sec 124 MBytes 1.04 Gbits/sec<br>
><br>
> [ 5] 9.00-10.00 sec 125 MBytes 1.04 Gbits/sec<br>
><br>
> [ 5] 10.00-10.04 sec 5.25 MBytes 1.25 Gbits/sec<br>
><br>
> - - - - - - - - - - - - - - - - - - - - - - - - -<br>
><br>
> [ ID] Interval Transfer Bandwidth<br>
><br>
> [ 5] 0.00-10.04 sec 0.00 Bytes 0.00 bits/sec sender<br>
><br>
> [ 5] 0.00-10.04 sec 1.11 GBytes 947 Mbits/sec receiver<br>
><br>
> -----------------------------------------------------------<br>
><br>
> Server listening on 5900<br>
><br>
> -----------------------------------------------------------<br>
><br>
> ^Ciperf3: interrupt - the server has terminated<br>
><br>
> [root@vmC ~]# date<br>
><br>
> Wed Jun 29 12:26:37 EDT 2016<br>
><br>
> [root@vmC ~]#<br>
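The iperf3 run above shows the raw TCP path sustaining ~947 Mbit/s, so the pipe itself is not the bottleneck. With synchronous protocol C over a WAN, each write waits for a round trip, so replication throughput is bounded by the bandwidth-delay product; a back-of-the-envelope check (the 10 ms RTT is an assumed figure; measure the real one with ping):

```shell
# Bandwidth-delay product: bytes that must be in flight to fill the pipe.
RATE_BITS=1000000000   # ~1 Gbit/s, per the iperf3 run
RTT_MS=10              # assumed WAN round-trip time; measure with ping
BDP_BYTES=$((RATE_BITS / 8 * RTT_MS / 1000))
echo "BDP: $BDP_BYTES bytes"   # 1250000, i.e. about 1.2 MB
```

If DRBD's effective send buffer is far below this figure, writes stall waiting for acknowledgements regardless of link speed, which would explain a 30-40 MB/s ceiling on a ~1 Gbit/s pipe.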
><br>
><br>
><br>
> R4: only getting 35 MB/s across the WAN network.<br>
><br>
> [root@vmA ~]# ./scratch-test.bash wandk0 # from R1<br>
><br>
> 1: cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r-----<br>
><br>
> Writing via wandk0 on /dev/drbd1<br>
><br>
> 1+0 records in<br>
><br>
> 1+0 records out<br>
><br>
> 536870912 bytes (537 MB) copied, 15.1859 s, 35.4 MB/s<br>
><br>
> <snipped><br>
><br>
> 536870912 bytes (537 MB) copied, 15.598 s, 34.4 MB/s<br>
><br>
> 1+0 records in<br>
><br>
> 1+0 records out<br>
><br>
> 536870912 bytes (537 MB) copied, 16.9145 s, 31.7 MB/s<br>
><br>
> Writing directly into backing device:/dev/centos/wandk0<br>
><br>
> 1+0 records in<br>
><br>
> 1+0 records out<br>
><br>
> 536870912 bytes (537 MB) copied, 0.625202 s, 859 MB/s<br>
><br>
> <snipped><br>
><br>
> 1+0 records in<br>
><br>
> 1+0 records out<br>
><br>
> 536870912 bytes (537 MB) copied, 0.566128 s, 948 MB/s<br>
><br>
> [root@vmA ~]# date<br>
><br>
> Thu Jun 30 13:07:16 EDT 2016<br>
><br>
> [root@vmA ~]#<br>
><br>
><br>
><br>
> -- <br>
> T.J. Yang<br>
><br></div></div>
> _______________________________________________<br>
> drbd-user mailing list<br>
> <a href="mailto:drbd-user@lists.linbit.com" target="_blank">drbd-user@lists.linbit.com</a><br>
> <a href="http://lists.linbit.com/mailman/listinfo/drbd-user" target="_blank">http://lists.linbit.com/mailman/listinfo/drbd-user</a><br>
><br>
<p></p>
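The /proc/drbd line earlier shows want: 4,194,304 K/sec while the resync runs at ~30 MB/s, which suggests the DRBD 8.4 dynamic resync controller and the network path, not the configured rate, are the limit. Below is a sketch of resource options worth experimenting with for a WAN link; every value is illustrative, not a recommendation (check drbd.conf(5) for exact semantics):

```
resource wandk0 {
  net {
    protocol A;         # asynchronous replication, as already tested
    sndbuf-size 2M;     # give TCP room to cover the bandwidth-delay product
  }
  disk {
    c-plan-ahead 20;    # enable the dynamic resync controller (tenths of a second)
    c-fill-target 1M;   # resync data to keep in flight on the wire
    c-max-rate 110M;    # cap near the ~1 Gbit/s iperf3 result
    c-min-rate 10M;     # lower bound for resync while application I/O is active
  }
}
```

The same values can also be tried at runtime with `drbdadm disk-options` / `drbdadm net-options`, so each change can be measured without restarting the resource.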
</blockquote></div><br><br clear="all"><div><br></div></div></div><span class="HOEnZb"><font color="#888888">-- <br><div data-smartmail="gmail_signature">T.J. Yang</div>
</font></span></div>
</blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature" data-smartmail="gmail_signature">T.J. Yang</div>
</div>