I would think that it is a huge benefit to do so. I don't have specific numbers for Gigabit Ethernet yet (I need to build two crossover cables to do the test), but I do have some preliminary data using 10 Gb/s InfiniBand. I used netperf to generate these numbers.
ipoib, datagram mode (maximum MTU 2044), standard TCP parameters: 2467 Mbps
ipoib, connected mode (maximum MTU 65520), standard TCP parameters: 5488 Mbps
ipoib, connected mode (maximum MTU 65520), modified TCP parameters: 7560 Mbps one way, 6226 Mbps in reverse
ipoib, connected mode (maximum MTU 65520), modified TCP parameters, after updating the outdated firmware on one node: 7862 Mbps in both directions
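For anyone who wants to repeat the comparison, the steps look roughly like this on Linux. This is a sketch: the interface name ib0 and the peer address are assumptions, and the netperf invocation is just the basic TCP stream test:

    # switch the IPoIB interface from datagram to connected mode,
    # then raise the MTU to the connected-mode maximum:
    echo connected > /sys/class/net/ib0/mode
    ip link set ib0 mtu 65520

    # start the netperf daemon on the receiving node:
    netserver

    # run a 30-second TCP stream test from the sending node:
    netperf -H 192.168.10.2 -t TCP_STREAM -l 30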
Clearly, there are huge advantages to adjusting the MTU and the send/receive buffers. While an MTU of 65520 is huge by Ethernet standards, I only saw about a 3 to 5% reduction in performance with an MTU of 10000.
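The "modified TCP parameters" were changes of this general kind: larger socket buffer limits set via sysctl. The values below are illustrative examples, not the exact figures from my tests:

    # raise the maximum socket buffer sizes:
    sysctl -w net.core.rmem_max=16777216
    sysctl -w net.core.wmem_max=16777216

    # raise the TCP autotuning limits (min, default, max in bytes):
    sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
    sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"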
BTW, my total expenditure for the point-to-point InfiniBand link was US$184.

On Sat, May 22, 2010 at 11:09 AM, Lee Riemer <lriemer@bestline.net> wrote:
> Also, have people been enabling jumbo frames and tweaking TCP windows?
>
> On 5/22/2010 3:00 AM, Phil Stricker wrote:
>> Hi!
>> On 22.05.2010 00:15, Matteo Tescione wrote:
>>> I've been using 802.3ad dynamic link aggregation and have been able to achieve 1.3-1.4 Gbit/s with a dual-port Gigabit Intel NIC connected to a managed switch (Netgear).
>> Sorry, but that is not possible for DRBD replication. 802.3ad only improves performance beyond 1 Gbit/s when you transfer data to multiple hosts/MAC addresses, because it balances the streams via a modulo operation on the MAC addresses of the two peers. As Ben Timby wrote, the one mode that provides link aggregation for a single TCP stream is balance-rr.
>>
>> Phil
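For reference, setting up a balance-rr bond with the stock Linux bonding driver looks roughly like this. This is a sketch: the interface names and address are made up, and it assumes the classic ifenslave-style tooling:

    # load the bonding driver in round-robin mode with link monitoring:
    modprobe bonding mode=balance-rr miimon=100

    # bring the bond up, then enslave both gigabit ports:
    ifconfig bond0 192.168.20.1 netmask 255.255.255.0 up
    ifenslave bond0 eth0 eth1

Note that balance-rr can deliver packets out of order, so the single-stream throughput gain usually comes with some TCP reordering overhead.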
--
Dr. Michael Iverson
Director of Information Technology
Hatteras Printing