[DRBD-user] Infiniband card support and help

Robert Dunkley Robert at saq.co.uk
Tue Jun 1 09:16:47 CEST 2010



Hi Igor,

SDP performance can be measured with the qperf utility included with the OFED drivers/utilities. I was testing single-port 3rd-generation Mellanox DDR (20/16Gbit) cards (the same series as the ones you link to) on Opterons two years ago and was getting 11Gbit/sec. The newer 4th-generation ConnectX Mellanox cards achieve 13-14Gbit/sec in DDR, so if you are buying new, get ConnectX.
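If you want to reproduce the measurement, a qperf run along these lines should work (the host name is only a placeholder, and the exact test names can vary a little between OFED versions):

   # on the first node, start the qperf server
   node1$ qperf

   # on the second node, measure IPoIB TCP and SDP bandwidth/latency
   node2$ qperf node1-ib tcp_bw tcp_lat
   node2$ qperf node1-ib sdp_bw sdp_lat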

For comparison, in datagram mode I was getting 3.2Gbit/sec, and in connected mode with a 32KB MTU I was getting 7Gbit/sec. This was two years ago, so newer versions of OFED may well have improved performance.

Hope this helps,

Rob

From: drbd-user-bounces at lists.linbit.com [mailto:drbd-user-bounces at lists.linbit.com] On Behalf Of Igor Neves
Sent: 31 May 2010 14:33
To: Michael Iverson
Cc: drbd-user at lists.linbit.com
Subject: Re: [DRBD-user] Infiniband card support and help

Hi,



On 05/31/2010 12:45 PM, Michael Iverson wrote: 

Igor, 

I'm basically doing the same thing, only with MHEA28-XTC cards. I wouldn't think you'll have any problems creating a similar setup with the MHES cards.

I've not attempted to use InfiniBand SDP, just IPoIB. I am running opensm on one of the nodes. I'm getting throughput numbers like this:

It would be very nice if you could test your setup with DRBD's InfiniBand SDP support; you probably will not need to re-sync anything.
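For reference, my understanding is that switching to it is mostly a matter of changing the address family in drbd.conf once both nodes run 8.3.3 or later, roughly like this (the resource name and the second address are only placeholders):

   resource r0 {
     # disk and meta-disk options omitted for brevity
     on cirrus {
       device    /dev/drbd0;
       address   sdp 172.16.24.2:7788;
     }
     on stratus {
       device    /dev/drbd0;
       address   sdp 172.16.24.1:7788;
     }
   }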




cirrus:~$ netperf -H stratus-ib
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to stratus-ib.focus1.com (172.16.24.1) port 0 AF_INET : demo
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    10.00    7861.61


Nice.




A couple of things to watch out for:

1. Upgrade the firmware on the cards to the latest and greatest version. I saw about a 25% increase in throughput as a result. The firmware updater was a pain to compile, but that was mostly due to Ubuntu's fairly rigid default compiler flags.
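If the updater you end up with is mstflint from the OFED/MFT tools, a typical session looks roughly like this (the PCI address and firmware image name are placeholders):

   # find the HCA's PCI address
   lspci | grep Mellanox

   # check the firmware currently on the card
   mstflint -d 04:00.0 query

   # burn the new image (make sure it matches the board's PSID first)
   mstflint -d 04:00.0 -i fw-new.bin burn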


Will watch out for that!




2. Run the cards in connected mode, rather than datagram mode, and put the MTU at the max value of 65520. My performance benchmarks of drbd show that this is the best setup. 
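Done by hand, that usually amounts to something like the following (the interface name ib0 is an assumption; most distros also let you make it persistent in their IPoIB/openib configuration):

   echo connected > /sys/class/net/ib0/mode
   ip link set ib0 mtu 65520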


If I use the InfiniBand SDP support from DRBD, should I care about MTU?




The replication rate on my setup is completely limited by the bandwidth of my disk subsystem, which is about 200 MB/s for writes. I can share some performance comparisons between this and bonded gigabit ethernet, if you would like. However, I won't be able to provide it until tomorrow, as it is a holiday in the US today, and I don't have ready access to the data.


We have a couple of setups with I/O performance greater than 500MB/sec (roughly 4Gbit/sec on the wire), so we really need 10Gbit trunks.

Thanks for the help, but I don't need performance results from Gbit setups; we have a couple, and we know the problems! :) Anyway, if you want to paste them here, I guess no one will complain.




On Mon, May 31, 2010 at 6:17 AM, Igor Neves <igor at 3gnt.net> wrote:

Hi,

I'm looking for a 10Gbit backend for DRBD storage replication. I'm planning to set up an InfiniBand solution connected back to back, meaning both nodes will be connected directly without a switch.

I wonder: if I bought two of these MHES14-XTC cards and a cable, would I be able to build such a setup?

Link to the cards: http://www.mellanox.com/content/pages.php?pg=products_dyn&product_family=19&menu_section=41

Another question: I intend to use this with the InfiniBand SDP support added to DRBD in 8.3.3, and I found this in the card's specs.

"In addition, the card includes internal Subnet Management Agent (SMA) and General Service Agents, eliminating the requirement for an external management agent CPU."

Does this mean I don't need to run openSM on either node? Would I just need to set up the two cards and a cable, connect them, and configure IPoIB to start replicating at 10Gbit?
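Should a subnet manager still turn out to be necessary on a back-to-back link, starting opensm on just one of the nodes is usually enough, and ibstat shows whether the ports come up at the expected rate:

   # on one node only, run the subnet manager as a daemon
   opensm -B

   # on either node, check port state and rate
   ibstat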

Thanks very much,


Thanks, once again.



-- 
Igor Neves <igor.neves at 3gnt.net>
3GNTW - Tecnologias de Informação, Lda
 
 SIP: igor at 3gnt.net
 MSN: igor at 3gnt.net
 JID: igor at 3gnt.net
 PSTN: 00351 252377120
 

