[DRBD-user] DRBD utilize Infiniband inter-connection?

Gennadiy Nerubayev parakie at gmail.com
Sun Jan 4 19:42:58 CET 2009

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


On Mon, Dec 29, 2008 at 10:31 PM, Joe,Yu <smartjoe at gmail.com> wrote:

> On Tue, Dec 30, 2008 at 3:42 AM, Nathan Stratton <nathan at robotics.net> wrote:
>
>> On Tue, 30 Dec 2008, Joe,Yu wrote:
>>
>>> Hello guys,
>>>
>>> Since DRBD nodes normally sync all their data over the GigE network,
>>> does it make sense to use InfiniBand hardware in place of GigE as the
>>> interconnect between DRBD nodes for an outstanding performance boost?
>>>
>>
>> We use IPoIB; it's a performance hit compared to verbs, but much better
>> than GigE.
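
(For anyone trying IPoIB here: a noticeable part of the gap to verbs can
come from datagram mode's small MTU. The Linux IPoIB driver also offers
connected mode, which allows a much larger MTU. The interface name and
values below are only examples; adjust them for your own fabric:

    echo connected > /sys/class/net/ib0/mode
    ip link set ib0 mtu 65520

This is just a sketch of the usual tuning, not a promise of any
particular throughput.)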
>>
>>> We deployed a four-node Oracle RAC cluster a year ago with HP's
>>> InfiniBand products and gained a big performance improvement. This
>>> encourages us to consider this ever-cheaper hardware solution for a
>>> faster DRBD cluster.
>>>
>>
>> With all of the InfiniBand parts on eBay, or even at the cost of buying
>> new, it is worth it.
>>
>
>
> Is there any plan for LINBIT and the DRBD community to develop a native
> DRBD transport on top of IB, just as we do in HPC clusters, where we
> achieve high-performance parallel I/O by running Lustre over IB?
>

I'd love to see DRBD over RDMA myself (see my earlier post about IPoIB sync
speed), but IIRC it was stated that it's a long way off, if it happens at
all, possibly for DRBD 9.
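
In the meantime, nothing DRBD-specific is needed to replicate over IPoIB:
you simply bind the resource's replication address to the IP assigned to
the IPoIB interface instead of the GigE one. A minimal sketch of such a
resource, where the hostnames, disks, and 10.10.10.x addresses are
placeholders for your own IPoIB subnet:

    resource r0 {
      protocol C;
      on node-a {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.10.10.1:7788;   # IP assigned to ib0 (IPoIB)
        meta-disk internal;
      }
      on node-b {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.10.10.2:7788;   # IP assigned to ib0 (IPoIB)
        meta-disk internal;
      }
    }

DRBD just sees a TCP endpoint, so it cannot tell IPoIB from GigE; the
verbs/RDMA path would need the native transport discussed above.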

-Gennadiy