[DRBD-user] DRBD with NIC bonding and Crossover cables - no switch

Bart Coninckx bart.coninckx at telenet.be
Sat Sep 25 10:00:44 CEST 2010

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


On Friday 24 September 2010 23:51:18 Matt Ball, IT Hardware Manager wrote:
>   I have set up NIC bonding with DRBD successfully, but we need to
> reduce the number of switch ports required, as we have to deploy
> several clusters remotely. Each node has two bonded NIC pairs, 4
> NICs/cables in total, plus one for iLO. So we have 6 cables that must go
> into the switch (3 from each cluster node), but we want to avoid also
> running the cables for the DRBD bond into the switch.
> 
> We decided to try connecting the DRBD resources over that bond0
> channel using crossover cables instead of going through the switch. No
> joy: we cannot ping, there is no communication, and the destination on a
> traceroute shows up as a 169.254.* link-local address...
> 
> All the NICs are gigabit, and the cables are gigabit crossover cables.
> 
> We have tried mode-1, mode-5, and mode-6, all of which are documented
> to work without special switch configuration.
> 
> Is there any way to connect bonded NICs (a trunk) back to back and
> have DRBD communicate through them?
> 
> I apologize in advance if this is not appropriate for the DRBD mailing
> list; I realize it may be outside the scope of this forum. If so,
> please disregard.

We have this running with balance-rr. What is important is to raise the 
net.ipv4.tcp_reordering sysctl; otherwise the round-robin striping delivers 
segments out of order and you get poor performance. By the way, gigabit 
NICs negotiate the wiring automatically (Auto MDI-X), so crossover cables 
are not needed; maybe that's where things go wrong ...
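For reference, a minimal sketch of roughly what we bring up; the interface 
names eth2/eth3, the 10.0.0.x addresses, and the reordering value 127 are 
only examples, so substitute your own (and mirror it on the peer node):

  # load the bonding driver in round-robin mode with link monitoring
  modprobe bonding mode=balance-rr miimon=100

  # address the bond (use 10.0.0.2 on the other node)
  ifconfig bond0 10.0.0.1 netmask 255.255.255.0 up

  # enslave the two directly connected NICs
  ifenslave bond0 eth2 eth3

  # raise the reordering threshold so round-robin striping does not
  # trigger spurious TCP fast retransmits (default is 3)
  sysctl -w net.ipv4.tcp_reordering=127

  # sanity check: both slaves should show "MII Status: up"
  cat /proc/net/bonding/bond0

Then point the DRBD resource's address at the bond IP in drbd.conf and it 
should replicate over the direct link.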


B.
