[DRBD-user] ib or 10gbe?

Marcus Sorensen shadowsor at gmail.com
Mon Apr 30 17:04:08 CEST 2012

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


If the IB card is Mellanox, the newer ConnectX models (2 and 3) can run
as either native 10 Gbit Ethernet or InfiniBand; it's just a matter of
which driver you load. That may let you experiment and decide which
works best.
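
For example (off the top of my head -- the PCI address below is made
up, check lspci for yours), with the mlx4 driver stack you can flip the
port type at runtime through sysfs:

    # mlx4_core exposes the port-type knob per port
    modprobe mlx4_core
    # current mode of port 1 (values: ib, eth, auto)
    cat /sys/bus/pci/devices/0000:04:00.0/mlx4_port1
    # switch port 1 to Ethernet, then load the matching ULP driver
    echo eth > /sys/bus/pci/devices/0000:04:00.0/mlx4_port1
    modprobe mlx4_en    # for 10GbE; use mlx4_ib for InfiniBand mode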

On Mon, Apr 30, 2012 at 8:29 AM, Florian Haas <florian at hastexo.com> wrote:
> Hi James,
>
> On Mon, Apr 30, 2012 at 1:11 PM, James Harper
> <james.harper at bendigoit.com.au> wrote:
>> I'm considering configurations for a pair of new servers - a 2-node Xen cluster with shared storage.
>>
>> It looks like I can build an HP server with direct-connected 10 GbE or IB for approximately the same price. Given the choice, what is the preference these days? The link will be dedicated to DRBD, so other communications over the link are unimportant.
>>
>> And are the HP IB cards well supported under Linux? Anecdotal reports appreciated!
>
> We serve customers that use both, and in general recent distributions
> support both OFED (for IB) and 10 GbE quite well. If your main pain
> point is latency, you'll want to go with IB; if it's throughput,
> you're essentially free to pick and choose -- although of course _not_
> having to install any of the OFED libraries may be a plus for 10 GbE.
> Cost of switches is usually not much of a factor in the decision, as
> most people tend to wire their DRBD clusters back-to-back, but if
> you're planning on a switched topology, you may have to factor that
> in as well.
>
> Both IB and 10 GbE do require a fair amount of kernel and DRBD tuning
> before DRBD can actually max them out. Don't expect to take your
> distro's standard sysctls and the default DRBD config and have
> everything magically go a million times faster.
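>
> As a rough starting point (the numbers below are illustrative, not a
> recommendation -- benchmark on your own hardware), the usual suspects
> are the TCP buffer sysctls and a handful of drbd.conf knobs:
>
>   # sysctl settings -- raise the socket buffer ceilings
>   net.core.rmem_max = 16777216
>   net.core.wmem_max = 16777216
>   net.ipv4.tcp_rmem = 4096 87380 16777216
>   net.ipv4.tcp_wmem = 4096 65536 16777216
>
>   # drbd.conf (8.3 syntax) -- per-resource tuning sketch
>   resource r0 {
>     net {
>       sndbuf-size    0;      # 0 = auto-tune (8.2.7 and later)
>       max-buffers    8000;
>       max-epoch-size 8000;
>     }
>     syncer {
>       al-extents 3389;
>     }
>   }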
>
> Generally speaking, also don't expect too much of a performance boost
> when using SDP (Sockets Direct Protocol) over IB. In general, we've
> found that the performance gain over IPoIB is negligible or even
> negative -- but that's fine; chances are you'll max out your
> underlying storage hardware with IPoIB anyhow. :) SDP also currently
> suffers from a module refcount issue that is fixed in git
> (http://git.drbd.org/gitweb.cgi?p=drbd-8.3.git;a=commit;h=c2c2067c661c7cba213b0301e2b39f17c1419e51)
> but as yet unreleased, so that's a bit of an SDP show-stopper too...
> but as pointed out, IPoIB does do the trick nicely.
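>
> For reference, in DRBD 8.3 the choice between IPoIB and SDP is just
> the address family in the resource section -- a minimal sketch
> (hostnames, device paths, and IPs are made up):
>
>   resource r0 {
>     on alice {
>       device    /dev/drbd0;
>       disk      /dev/sdb1;
>       meta-disk internal;
>       address   10.0.0.1:7788;      # IPoIB: plain TCP over ib0
>       # address sdp 10.0.0.1:7788;  # same IP, SDP address family
>     }
>     on bob {
>       device    /dev/drbd0;
>       disk      /dev/sdb1;
>       meta-disk internal;
>       address   10.0.0.2:7788;
>     }
>   }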
>
> Hope this helps.
> Cheers,
> Florian
>
> --
> Need help with High Availability?
> http://www.hastexo.com/now


