[DRBD-user] Infiniband card support and help

Michael Iverson miverson at 4hatteras.com
Mon May 31 13:45:22 CEST 2010



Igor,

I'm basically doing the same thing, only with MHEA28-XTC cards. I wouldn't
think you'll have any problems creating a similar setup with the MHES cards.

I've not attempted to use the InfiniBand SDP transport, just IPoIB. I am
running OpenSM on one of the nodes. I'm getting throughput numbers like this:

cirrus:~$ netperf -H stratus-ib
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
stratus-ib.focus1.com (172.16.24.1) port 0 AF_INET : demo
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    10.00    7861.61
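
Since OpenSM came up: on a back-to-back link the ports sit in the Initializing
state until a subnet manager is running somewhere, so a quick sanity check with
the standard infiniband-diags tools is worth doing before benchmarking (tool
names are the stock OFED ones; device and port numbers will vary on your
hardware):

```shell
# Show HCA port state and link rate; with OpenSM running on one node,
# "State" should read Active and "Rate" should show 10 for SDR cards.
ibstat

# Dump device capabilities (firmware version, port MTU caps, etc.)
ibv_devinfo

# Then the same throughput test as above, against the IPoIB address.
netperf -H stratus-ib
```
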

A couple of things to watch out for:

1. Upgrade the firmware on the cards to the latest and greatest version. I
saw about a 25% increase in throughput as a result. The firmware updater was
a pain to compile, but that was mostly due to Ubuntu's fairly rigid default
compiler flags.

2. Run the cards in connected mode, rather than datagram mode, and set the
MTU to its maximum value of 65520. My performance benchmarks of drbd show
that this is the best setup.
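
In case it saves anyone a search, point 2 boils down to something like the
following (assuming the IPoIB interface is named ib0, which it usually is;
adjust for your system, and note these settings don't persist across reboots
unless you add them to your network configuration):

```shell
# Switch the IPoIB interface from datagram to connected mode.
echo connected > /sys/class/net/ib0/mode

# Raise the MTU to the IPoIB connected-mode maximum.
ip link set ib0 mtu 65520

# Verify both took effect.
cat /sys/class/net/ib0/mode
ip link show ib0
```
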

The replication rate on my setup is completely limited by the bandwidth of
my disk subsystem, which is about 200 MB/s for writes. I can share some
performance comparisons between this and bonded gigabit ethernet, if you
would like. However, I won't be able to provide it until tomorrow, as it is
a holiday in the US today, and I don't have ready access to the data.
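
For context, DRBD itself only sees an IP address, so running replication over
IPoIB is just a matter of pointing the resource at the InfiniBand interface's
address. A rough sketch of what that looks like in drbd.conf (hostnames,
device paths, and the port are made up; the 172.16.24.x addresses follow the
netperf output above):

```
resource r0 {
  protocol C;
  on cirrus {
    device    /dev/drbd0;
    disk      /dev/sdb1;            # backing device: an assumption
    address   172.16.24.2:7788;     # IPoIB address (assumed)
    meta-disk internal;
  }
  on stratus {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   172.16.24.1:7788;     # stratus-ib resolves here above
    meta-disk internal;
  }
}
```

If you do try the SDP transport added in 8.3.3 instead of plain IPoIB, my
understanding is that the address lines take an address-family prefix
(`address sdp 172.16.24.1:7788;`), but check the drbd.conf man page for your
version before relying on that.
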


On Mon, May 31, 2010 at 6:17 AM, Igor Neves <igor at 3gnt.net> wrote:

> Hi,
>
> I'm looking for a 10Gbit backend for DRBD storage replication. I'm
> planning to set up an InfiniBand solution connected back to back, meaning
> both nodes will be connected directly, without a switch.
>
> I wonder: if I bought two of these MHES14-XTC cards and a cable, would I
> be able to produce such a setup?
>
> Link to the cards:
> http://www.mellanox.com/content/pages.php?pg=products_dyn&product_family=19&menu_section=41
>
> Another question: I intend to use this with the InfiniBand SDP support added
> to DRBD in 8.3.3, and I found this in the card's specs.
>
> "In addition, the card includes internal Subnet Management Agent (SMA) and
> General Service Agents, eliminating the requirement for an external
> management agent CPU."
>
> Does this mean I don't need to run OpenSM on either node? Will I just need
> to install the two cards, connect them with a cable, and set up IPoIB to
> start replicating at 10Gbit?
>
> Thanks very much,
>
> --
> Igor Neves<igor.neves at 3gnt.net>
> 3GNTW - Tecnologias de Informação, Lda
>
>  SIP: igor at 3gnt.net
>  MSN: igor at 3gnt.net
>  JID: igor at 3gnt.net
>  PSTN: 00351 252377120
>
>
> _______________________________________________
> drbd-user mailing list
> drbd-user at lists.linbit.com
> http://lists.linbit.com/mailman/listinfo/drbd-user
>



-- 
Dr. Michael Iverson
Director of Information Technology
Hatteras Printing

