[DRBD-user] Infiniband card support and help

Igor Neves igor at 3gnt.net
Mon May 31 15:32:40 CEST 2010



Hi,



On 05/31/2010 12:45 PM, Michael Iverson wrote:
> Igor,
>
> I'm basically doing the same thing, only with MHEA28-XTC cards. I 
> wouldn't think you'll have any problems creating a similar setup with 
> the MHES cards.
>
> I've not attempted to use infiniband sdr, just ipoib. I am running 
> opensm on one of the nodes. I'm getting throughput numbers like this:
>

It would be very nice if you could test your setup with the DRBD
InfiniBand SDR support; you probably won't need to re-sync anything.
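
For reference, assuming the feature meant here is the SDP (Sockets
Direct Protocol) address family that drbd.conf accepts for InfiniBand,
the change would presumably only touch the address lines. A rough,
untested sketch (resource name, disks and IPoIB addresses are
placeholders):

    # minimal sketch only: hostnames, disks and IPoIB addresses are
    # placeholders; "sdp" is the address-family keyword from drbd.conf
    resource r0 {
      on node-a {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   sdp 172.16.24.1:7788;
        meta-disk internal;
      }
      on node-b {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   sdp 172.16.24.2:7788;
        meta-disk internal;
      }
    }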

> cirrus:~$ netperf -H stratus-ib
> TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
> stratus-ib.focus1.com (172.16.24.1) port 0 AF_INET : demo
> Recv   Send    Send
> Socket Socket  Message  Elapsed
> Size   Size    Size     Time     Throughput
> bytes  bytes   bytes    secs.    10^6bits/sec
>
>  87380  16384  16384    10.00    7861.61

Nice.

>
> A couple of things to watch out for:
>
> 1. Upgrade the firmware on the cards to the latest and greatest 
> version. I saw about a 25% increase in throughput as a result. The 
> firmware updater was a pain to compile, but that was mostly due to 
> Ubuntu's fairly rigid default compiler flags.

I'll watch out for that!
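
For anyone else doing this, the open-source mstflint tool seems to be
the usual way to query and burn firmware on Mellanox HCAs. A rough
sketch (the PCI address and firmware file name are placeholders):

    # find the HCA's PCI address
    lspci | grep -i mellanox

    # check the firmware version currently on the card
    mstflint -d 04:00.0 query

    # burn a newer image (file name is a placeholder)
    mstflint -d 04:00.0 -i fw-25204.bin burn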

> 2. Run the cards in connected mode, rather than datagram mode, and put 
> the MTU at the max value of 65520. My performance benchmarks of drbd 
> show that this is the best setup.

If I use the InfiniBand SDR support in DRBD, do I still need to care
about the MTU?
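
Either way, for plain IPoIB the connected-mode and MTU change from
point 2 above would presumably look like this (interface name ib0 is
an assumption, and it has to be done on both nodes):

    # switch the IPoIB interface from datagram to connected mode
    echo connected > /sys/class/net/ib0/mode

    # 65520 is the IPoIB connected-mode maximum MTU
    ip link set dev ib0 mtu 65520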

> The replication rate on my setup is completely limited by the 
> bandwidth of my disk subsystem, which is about 200 MB/s for writes. I 
> can share some performance comparisons between this and bonded gigabit 
> ethernet, if you would like. However, I won't be able to provide it 
> until tomorrow, as it is a holiday in the US today, and I don't have 
> ready access to the data.

We have a couple of setups with I/O performance greater than
500MB/sec, so we really need 10Gbit trunks.

Thanks for the help, but I don't need performance results from Gbit
setups; we have a couple, and we know the problems! :) Anyway, if you
want to paste them here, I guess no one will complain.

> On Mon, May 31, 2010 at 6:17 AM, Igor Neves <igor at 3gnt.net> wrote:
>
>     Hi,
>
>     I'm looking for a 10Gbit backend for DRBD storage replication. I'm
>     expecting to set up an InfiniBand solution connected back to back,
>     meaning both nodes will be connected together without a switch.
>
>     I wonder, if I bought two of these MHES14-XTC cards and a cable,
>     would I be able to produce such a setup?
>
>     Link to the cards:
>     http://www.mellanox.com/content/pages.php?pg=products_dyn&product_family=19&menu_section=41
>
>     Another question: I intend to use this with the InfiniBand SDR
>     support added to DRBD in 8.3.3, and I found this in the card's specs.
>
>     "In addition, the card includes internal Subnet Management Agent
>     (SMA) and General Service Agents, eliminating the requirement for
>     an external management agent CPU."
>
>     Does this mean I don't need to run openSM on any node? Would I just
>     need to set up two cards and a cable, connect them, and configure
>     IPoIB to start replicating at 10Gbit?
>
>     Thanks very much,
>
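
Following Michael's note above about running opensm on one of the
nodes, a rough sketch of bringing such a back-to-back link up might
look like this (init script and device naming are assumptions):

    # check that the HCA and its ports are visible; without a subnet
    # manager the port state stays in "Initializing"
    ibstat

    # start a subnet manager on one of the two nodes
    /etc/init.d/opensm start    # or: opensm -B

    # the port state should now report "Active"
    ibstat | grep -i state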

Thanks, once again.

-- 
Igor Neves <igor.neves at 3gnt.net>
3GNTW - Tecnologias de Informação, Lda

  SIP: igor at 3gnt.net
  MSN: igor at 3gnt.net
  JID: igor at 3gnt.net
  PSTN: 00351 252377120

