Hi Andreas

Thank you.

I couldn't try "drbdadm invalidate-remote mysql" with both nodes connected because it forces a full resync (SyncSource -> SyncTarget)...
If I disconnect the primary first and then issue "drbdadm invalidate-remote mysql" I get:

0: State change failed: (-15) Need a connection to start verify or resync
Command 'drbdsetup invalidate-remote 0' terminated with exit code 11
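
(For reference, this is roughly the sequence that produces the error above; it assumes the replication link was taken down with "drbdadm disconnect" on the primary:)

# on the primary
drbdadm disconnect mysql
drbdadm invalidate-remote mysql   # refused, since the peer is now unreachable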

Kind regards,
Fred

On Wed, Feb 1, 2012 at 9:18 PM, Andreas Kurz <andreas@hastexo.com> wrote:
On 02/01/2012 05:15 PM, Frederic DeMarcy wrote:
> Hi Andreas
>
> Commenting out "csums-alg" doesn't seem to make any noticeable difference...
> However, commenting out "data-integrity-alg" and running Test #2 again
> increases the throughput from ~ 61MB/s to ~ 97MB/s!
> Note that I may well be running into the 1Gb/s crossover link limit here, since
> my network tests showed ~ 0.94 Gb/s.
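> (For reference: 0.94 Gb/s is roughly 0.94 * 1000 / 8 = ~117 MB/s of raw TCP throughput, so ~ 97MB/s is indeed getting close to the practical limit of the link once replication overhead is accounted for.)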
>
> Also Test #1 was wrong in my email... It should have been split in 2:
> Test #1
> On non-DRBD device (/dev/sda)
> # dd if=/dev/zero of=/home/userxxx/disk-test.xxx bs=1M count=4096 oflag=direct
> Throughput ~ 420MB/s
>
> DRBD partition (/dev/sdb) on primary (secondary node disabled)
> Using Base DRBD config
> # dd if=/dev/zero of=/var/lib/mysql/TMP/disk-test.xxx bs=1M count=4096 oflag=direct
> Throughput ~ 205MB/s

Is the result the same if you execute a "drbdadm invalidate-remote
mysql" on the primary before doing the "single node" test? ... that
would disable activity log updates ...
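
For example, a minimal sketch (assuming the resource is still named "mysql"):

# on the primary, right before the single-node dd run
drbdadm invalidate-remote mysql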

Regards,
Andreas

--
Need help with DRBD?
http://www.hastexo.com/services/remote

>
> With the above -alg settings commented out, disabling the secondary node and
> running Test #1 again (correctly split this time) shows the same
> throughputs of ~ 420MB/s and ~ 205MB/s.
>
> Fred
>
> On Wed, Feb 1, 2012 at 1:48 PM, Andreas Kurz <andreas@hastexo.com> wrote:
>
> Hello,
>
> On 02/01/2012 01:04 PM, Frederic DeMarcy wrote:
> > Hi
> >
> > Note 1:
> > Scientific Linux 6.1 with kernel 2.6.32-220.4.1.el6.x86_64
> > DRBD 8.4.1 compiled from source
> >
> > Note 2:
> > server1 and server2 are 2 VMware VMs on top of ESXi 5. However
> > they reside on different physical 2U servers.
> > The specs for the 2U servers are identical:
> > - HP DL380 G7 (2U)
> > - 2 x Six Core Intel Xeon X5680 (3.33GHz)
> > - 24GB RAM
> > - 8 x 146 GB SAS HDs (7xRAID5 + 1s)
> > - Smart Array P410i with 512MB BBWC
> >
> Have you tried to change the I/O scheduler to deadline or noop in
> the VMs?
>
> ... see below ...
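>
> For example (a sketch; assuming the DRBD backing disk shows up as /dev/sdb inside the VM):
>
> # show the current scheduler (the active one is shown in [brackets])
> cat /sys/block/sdb/queue/scheduler
> # switch to deadline (or noop) at runtime
> echo deadline > /sys/block/sdb/queue/scheduler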
> >
> > Note 3:
> > I've tested the network throughput with iperf, which yields close to 1Gb/s:
> > [root@server1 ~]# iperf -c 192.168.111.11 -f g
> > ------------------------------------------------------------
> > Client connecting to 192.168.111.11, TCP port 5001
> > TCP window size: 0.00 GByte (default)
> > ------------------------------------------------------------
> > [  3] local 192.168.111.10 port 54330 connected with 192.168.111.11 port 5001
> > [ ID] Interval       Transfer     Bandwidth
> > [  3]  0.0-10.0 sec  1.10 GBytes  0.94 Gbits/sec
> >
> > [root@server2 ~]# iperf -s -f g
> > ------------------------------------------------------------
> > Server listening on TCP port 5001
> > TCP window size: 0.00 GByte (default)
> > ------------------------------------------------------------
> > [  4] local 192.168.111.11 port 5001 connected with 192.168.111.10 port 54330
> > [ ID] Interval       Transfer     Bandwidth
> > [  4]  0.0-10.0 sec  1.10 GBytes  0.94 Gbits/sec
> >
> > Scp'ing a large file from server1 to server2 yields ~ 57MB/s, but I
> > guess that's due to the encryption overhead.
> >
> > Note 4:
> > MySQL was not running.
> >
> >
> > Base DRBD config:
> > resource mysql {
> >   startup {
> >     wfc-timeout 3;
> >     degr-wfc-timeout 2;
> >     outdated-wfc-timeout 1;
> >   }
> >   net {
> >     protocol C;
> >     verify-alg sha1;
> >     csums-alg sha1;
> >
> using csums-based resync is only interesting for WAN setups where you
> need to sync over a rather thin connection
> >
> >     data-integrity-alg sha1;
> >
> using data-integrity-alg is definitely not recommended (slow) for live
> setups; use it only if you have to assume there is buggy hardware on the way
> between your nodes ... like NICs pretending checksums are ok while they
> are not
> >
> and out of curiosity ... have you already given DRBD 8.3.12 a try?
> >
> Regards,
> Andreas
> >
> --
> Need help with DRBD?
> http://www.hastexo.com/now
> >
> >
> >     cram-hmac-alg sha1;
> >     shared-secret "MySecret123";
> >   }
> >   on server1 {
> >     device    /dev/drbd0;
> >     disk      /dev/sdb;
> >     address   192.168.111.10:7789;
> >     meta-disk internal;
> >   }
> >   on server2 {
> >     device    /dev/drbd0;
> >     disk      /dev/sdb;
> >     address   192.168.111.11:7789;
> >     meta-disk internal;
> >   }
> > }
> >
> >
> > After any change in the /etc/drbd.d/mysql.res file I issued a
> > "drbdadm adjust mysql" on both nodes.
> >
> > Test #1
> > DRBD partition on primary (secondary node disabled)
> > Using Base DRBD config
> > # dd if=/dev/zero of=/var/lib/mysql/TMP/disk-test.xxx bs=1M count=4096 oflag=direct
> > Throughput ~ 420MB/s
> >
> > Test #2
> > DRBD partition on primary (secondary node enabled)
> > Using Base DRBD config
> > # dd if=/dev/zero of=/var/lib/mysql/TMP/disk-test.xxx bs=1M count=4096 oflag=direct
> > Throughput ~ 61MB/s
> >
> > Test #3
> > DRBD partition on primary (secondary node enabled)
> > Using Base DRBD config with:
> >   protocol B;
> > # dd if=/dev/zero of=/var/lib/mysql/TMP/disk-test.xxx bs=1M count=4096 oflag=direct
> > Throughput ~ 68MB/s
> >
> > Test #4
> > DRBD partition on primary (secondary node enabled)
> > Using Base DRBD config with:
> >   protocol A;
> > # dd if=/dev/zero of=/var/lib/mysql/TMP/disk-test.xxx bs=1M count=4096 oflag=direct
> > Throughput ~ 94MB/s
> >
> > Test #5
> > DRBD partition on primary (secondary node enabled)
> > Using Base DRBD config with:
> >   disk {
> >     disk-barrier no;
> >     disk-flushes no;
> >     md-flushes no;
> >   }
> > # dd if=/dev/zero of=/var/lib/mysql/TMP/disk-test.xxx bs=1M count=4096 oflag=direct
> > Disk throughput ~ 62MB/s
> >
> > No real difference from Test #2. Also, cat /proc/drbd still shows
> > wo:b in both cases, so I'm not even sure
> > these disk {..} parameters have been taken into account...
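> > For reference, a quick way to double-check what DRBD actually applied (just a sketch):
> > # dump the configuration DRBD is currently running with
> > drbdsetup show
> > # the wo: field in /proc/drbd shows the write-ordering method in use:
> > # b = barrier, f = flush, d = drain, n = none
> > cat /proc/drbd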
> >
> > Test #6
> > DRBD partition on primary (secondary node enabled)
> > Using Base DRBD config with:
> >   protocol B;
> >   disk {
> >     disk-barrier no;
> >     disk-flushes no;
> >     md-flushes no;
> >   }
> > # dd if=/dev/zero of=/var/lib/mysql/TMP/disk-test.xxx bs=1M count=4096 oflag=direct
> > Disk throughput ~ 68MB/s
> >
> > No real difference from Test #3. Also, cat /proc/drbd still shows
> > wo:b in both cases, so I'm not even sure
> > these disk {..} parameters have been taken into account...
> >
> >
> > What else can I try?
> > Is it worth trying DRBD 8.3.x?
> >
> > Thx.
> >
> > Fred
> >
> >
> > On 1 Feb 2012, at 08:35, James Harper wrote:
> >
> >>> Hi
> >>>
> >>> I've configured DRBD with a view to using it with MySQL (and later on
> >>> Pacemaker + Corosync) in a 2-node primary/secondary
> >>> (master/slave) setup.
> >>>
> >>> ...
> >>>
> >>> No replication over the 1Gb/s crossover cable is taking place since the
> >>> secondary node is down, yet there's 2x lower disk performance.
> >>>
> >>> I've tried to add:
> >>>   disk {
> >>>     disk-barrier no;
> >>>     disk-flushes no;
> >>>     md-flushes no;
> >>>   }
> >>> to the config but it didn't seem to change anything.
> >>>
> >>> Am I missing something here?
> >>> On another note, is 8.4.1 the right version to use?
> >>>
> >>
> >> If you can do it just for testing, try changing to protocol B
> >> with one primary and one secondary and see how that impacts your
> >> performance, both with barriers/flushes on and off. I'm not sure if
> >> it will help, but if protocol B makes things faster then it might
> >> hint as to where to start looking...
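> >> For example (a sketch only, using the net-section style of the 8.4 config quoted above):
> >>   net {
> >>     protocol B;   # or A, just for the comparison
> >>     ...
> >>   }
> >> followed by a "drbdadm adjust <resource>" on both nodes to apply it.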
> >>
> >> James

_______________________________________________
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user