Igor,

I'm basically doing the same thing, only with MHEA28-XTC cards. I wouldn't think you'll have any problems creating a similar setup with the MHES cards.

I've not attempted to use infiniband sdr, just ipoib. I am running opensm on one of the nodes. I'm getting throughput numbers like this:

cirrus:~$ netperf -H stratus-ib
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to stratus-ib.focus1.com (172.16.24.1) port 0 AF_INET : demo
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    10.00    7861.61

A couple of things to watch out for:

1. Upgrade the firmware on the cards to the latest and greatest version. I saw about a 25% increase in throughput as a result. The firmware updater was a pain to compile, but that was mostly due to Ubuntu's fairly rigid default compiler flags.
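
If you use the open-source mstflint tool for the burn, the procedure is roughly as follows; the PCI address and firmware image name below are only placeholders, so substitute your own:

  # locate the HCA on the PCI bus
  lspci | grep Mellanox

  # check the firmware version currently on the card
  mstflint -d 04:00.0 query

  # burn the new image (example filename)
  mstflint -d 04:00.0 -i fw-25204-latest.bin burn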

2. Run the cards in connected mode, rather than datagram mode, and put the MTU at the max value of 65520. My performance benchmarks of drbd show that this is the best setup.

The replication rate on my setup is completely limited by the bandwidth of my disk subsystem, which is about 200 MB/s for writes. I can share some performance comparisons between this and bonded gigabit ethernet, if you would like. However, I won't be able to provide it until tomorrow, as it is a holiday in the US today, and I don't have ready access to the data.
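
In case it helps, switching the interface into connected mode and raising the MTU is just the following (assuming the IPoIB interface is ib0):

  echo connected > /sys/class/net/ib0/mode
  ip link set ib0 mtu 65520

DRBD itself only sees the IPoIB addresses, so the resource config keeps its usual shape. A minimal sketch, with hostnames, backing disks and port picked purely as examples:

  resource r0 {
    protocol C;
    on cirrus {
      device    /dev/drbd0;
      disk      /dev/sdb1;          # example backing device
      address   172.16.24.2:7789;   # IPoIB address (example)
      meta-disk internal;
    }
    on stratus {
      device    /dev/drbd0;
      disk      /dev/sdb1;
      address   172.16.24.1:7789;   # IPoIB address from the netperf test above
      meta-disk internal;
    }
  }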

On Mon, May 31, 2010 at 6:17 AM, Igor Neves <igor@3gnt.net> wrote:
> Hi,
>
> I'm looking for a 10Gbit backend for storage drbd replication. I'm expecting to set up an infiniband solution connected back to back; this means both nodes will be connected together without a switch.
>
> I wonder, if I bought two of these MHES14-xtc cards and a cable, would I be able to produce such a setup?
>
> Link to the cards: http://www.mellanox.com/content/pages.php?pg=products_dyn&product_family=19&menu_section=41
>
> Another question: I intend to use this with the infiniband sdr support added to drbd in 8.3.3, and I found this in the specs of the card:
>
> "In addition, the card includes internal Subnet Management Agent (SMA) and General Service Agents, eliminating the requirement for an external management agent CPU."
>
> Does this mean I don't need to run openSM on any of the nodes? Will I just need to set up the two cards and a cable, connect them, and set up IPoIB to start replicating at 10Gbit?
>
> Thanks very much,
>
> --
> Igor Neves <igor.neves@3gnt.net>
> 3GNTW - Tecnologias de Informação, Lda
>
> SIP: igor@3gnt.net
> MSN: igor@3gnt.net
> JID: igor@3gnt.net
> PSTN: 00351 252377120
>

--
Dr. Michael Iverson
Director of Information Technology
Hatteras Printing