Frederic,

Paravirtual SCSI is supposed to be fully efficient only past a given level of IOPS; below that level of activity an LSI SAS adapter used to have the reputation of performing better. (That was the case in v4; I don't have any update regarding v5, but I suspect it still holds.) However, since both disks are plugged into the same adapter and you issue the same test command for both disks, this can't explain what you're seeing... So it does look like your problem is effectively within your VM, and not "around" it. Let's keep searching, then... :-)

Best regards,

Pascal.
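PS: If you want to double-check from inside the guest that both disks really sit behind the same paravirtual adapter, something along these lines should do. It's only a quick sketch, assuming the lsscsi and pciutils packages are installed; the exact output wording may differ:

# The paravirtual SCSI driver should be loaded if the controller type really is pvscsi
# (vmw_pvscsi is the mainline/RHEL6 module name; a VMware Tools build may differ)
lsmod | grep vmw_pvscsi

# The virtual HBA(s) the guest sees; a single PVSCSI controller should show up here
lspci | grep -i scsi

# SCSI addresses of the disks: with both disks on controller 0 you should see
# something like [0:0:1:0] for /dev/sda and [0:0:2:0] for /dev/sdb
lsscsi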
From: Frederic DeMarcy [mailto:fred.demarcy.ml@gmail.com]
Sent: Wednesday, 1 February 2012 17:20
To: Pascal BERTON (EURIALYS)
Cc: drbd-user@lists.linbit.com
Subject: Re: [DRBD-user] Slower disk throughput on DRBD partition

Hi Pascal

1) Both vdisks for /dev/sda and /dev/sdb are on the same datastore, which is made up of the entire RAID5 array capacity (7 HDs + 1 spare), minus the space used by the ESXi installation.
2) HD1 (/dev/sda) is SCSI (0:1) and HD2 (/dev/sdb) is SCSI (0:2). Both were initialized with Thick Provisioning Eager Zeroed. The SCSI controller type is paravirtual.

Fred

On Wed, Feb 1, 2012 at 2:13 PM, Pascal BERTON (EURIALYS) <pascal.berton@eurialys.fr> wrote:

Frederic,

Let's take care of the virtualisation layer, which might induce significant side effects.
Are sda and sdb:
1) vdisk files located on the same datastore?
2) vdisks plugged into the same virtual SCSI interface? What type of SCSI interface?

Best regards,

Pascal.

-----Original Message-----
From: drbd-user-bounces@lists.linbit.com [mailto:drbd-user-bounces@lists.linbit.com] On Behalf Of Frederic DeMarcy
Sent: Wednesday, 1 February 2012 13:05
To: drbd-user@lists.linbit.com
Subject: Re: [DRBD-user] Slower disk throughput on DRBD partition

Hi

Note 1:
Scientific Linux 6.1 with kernel 2.6.32-220.4.1.el6.x86_64
DRBD 8.4.1 compiled from source

Note 2:
server1 and server2 are two VMware VMs on top of ESXi 5. However, they reside on different physical 2U servers.
The specs for the 2U servers are identical:
 - HP DL380 G7 (2U)
 - 2 x six-core Intel Xeon X5680 (3.33GHz)
 - 24GB RAM
 - 8 x 146GB SAS HDs (7 x RAID5 + 1 spare)
 - Smart Array P410i with 512MB BBWC

Note 3:
I've tested the network throughput with iperf, which yields close to 1Gb/s:

[root@server1 ~]# iperf -c 192.168.111.11 -f g
------------------------------------------------------------
Client connecting to 192.168.111.11, TCP port 5001
TCP window size: 0.00 GByte (default)
------------------------------------------------------------
[  3] local 192.168.111.10 port 54330 connected with 192.168.111.11 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.10 GBytes  0.94 Gbits/sec

[root@server2 ~]# iperf -s -f g
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 0.00 GByte (default)
------------------------------------------------------------
[  4] local 192.168.111.11 port 5001 connected with 192.168.111.10 port 54330
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  1.10 GBytes  0.94 Gbits/sec

Scp'ing a large file from server1 to server2 yields ~ 57MB/s, but I guess that's due to the encryption overhead.

Note 4:
MySQL was not running.


Base DRBD config:

resource mysql {
  startup {
    wfc-timeout 3;
    degr-wfc-timeout 2;
    outdated-wfc-timeout 1;
  }
  net {
    protocol C;
    verify-alg sha1;
    csums-alg sha1;
    data-integrity-alg sha1;
    cram-hmac-alg sha1;
    shared-secret "MySecret123";
  }
  on server1 {
    device    /dev/drbd0;
    disk      /dev/sdb;
    address   192.168.111.10:7789;
    meta-disk internal;
  }
  on server2 {
    device    /dev/drbd0;
    disk      /dev/sdb;
    address   192.168.111.11:7789;
    meta-disk internal;
  }
}

After any change in the /etc/drbd.d/mysql.res file I issued a "drbdadm adjust mysql" on both nodes.

Test #1
DRBD partition on primary (secondary node disabled)
Using the base DRBD config
# dd if=/dev/zero of=/var/lib/mysql/TMP/disk-test.xxx bs=1M count=4096 oflag=direct
Throughput ~ 420MB/s

Test #2
DRBD partition on primary (secondary node enabled)
Using the base DRBD config
# dd if=/dev/zero of=/var/lib/mysql/TMP/disk-test.xxx bs=1M count=4096 oflag=direct
Throughput ~ 61MB/s

Test #3
DRBD partition on primary (secondary node enabled)
Using the base DRBD config with:
  protocol B;
# dd if=/dev/zero of=/var/lib/mysql/TMP/disk-test.xxx bs=1M count=4096 oflag=direct
Throughput ~ 68MB/s

Test #4
DRBD partition on primary (secondary node enabled)
Using the base DRBD config with:
  protocol A;
# dd if=/dev/zero of=/var/lib/mysql/TMP/disk-test.xxx bs=1M count=4096 oflag=direct
Throughput ~ 94MB/s

Test #5
DRBD partition on primary (secondary node enabled)
Using the base DRBD config with:
  disk {
    disk-barrier no;
    disk-flushes no;
    md-flushes no;
  }
# dd if=/dev/zero of=/var/lib/mysql/TMP/disk-test.xxx bs=1M count=4096 oflag=direct
Disk throughput ~ 62MB/s

No real difference from Test #2. Also, cat /proc/drbd still shows wo:b in both cases, so I'm not even sure these disk {..} parameters have been taken into account...

Test #6
DRBD partition on primary (secondary node enabled)
Using the base DRBD config with:
  protocol B;
  disk {
    disk-barrier no;
    disk-flushes no;
    md-flushes no;
  }
# dd if=/dev/zero of=/var/lib/mysql/TMP/disk-test.xxx bs=1M count=4096 oflag=direct
Disk throughput ~ 68MB/s

No real difference from Test #3. Also, cat /proc/drbd still shows wo:b in both cases, so I'm not even sure these disk {..} parameters have been taken into account...
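For reference, one way to check whether those disk {} options were actually applied is the following; this is only a rough sketch, assuming the resource is named mysql as above:

# Show the configuration as drbdadm has parsed it; the disk {} section
# should list disk-barrier / disk-flushes / md-flushes if the file was read correctly
drbdadm dump mysql

# Re-apply the on-disk configuration to the running resource (run on both nodes)
drbdadm adjust mysql

# The wo: field in /proc/drbd reports the current write-ordering method:
# b = barrier, f = flush, d = drain, n = none. With disk-barrier and
# disk-flushes set to no it should typically drop from b to d.
grep wo: /proc/drbd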
What else can I try?
Is it worth trying DRBD 8.3.x?

Thx.

Fred


On 1 Feb 2012, at 08:35, James Harper wrote:

>> Hi
>>
>> I've configured DRBD with a view to using it with MySQL (and later on
>> Pacemaker + Corosync) in a 2-node primary/secondary (master/slave) setup.
>>
>> ...
>>
>> No replication over the 1Gb/s crossover cable is taking place since the
>> secondary node is down, yet there's x2 lower disk performance.
>>
>> I've tried to add:
>>   disk {
>>     disk-barrier no;
>>     disk-flushes no;
>>     md-flushes no;
>>   }
>> to the config but it didn't seem to change anything.
>>
>> Am I missing something here?
>> On another note, is 8.4.1 the right version to use?
>>
>
> If you can do it just for testing, try changing to protocol B with one
> primary and one secondary and see how that impacts your performance, both
> with barriers/flushes on and off. I'm not sure if it will help, but if
> protocol B makes things faster then it might hint as to where to start
> looking...
>
> James

_______________________________________________
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user