[DRBD-user] GigE vs Bonded NICs

CA Lists lists at creativeanvil.com
Thu Jul 5 18:19:44 CEST 2007

Graham Wood wrote:
>> My thought was that this might degrade the reliability of DRBD by 
>> leaving it without its own private communication channel.
> If you were to bond the NICs, then there is certainly the possibility 
> that other network access to the server could flood the network.  
> However, it's unlikely that enough data could be thrown at them to 
> flood a 2Gbps connection.
> Other problems relate to the data going out on the "public" network, 
> and the possibility of an IP conflict causing further trouble (e.g. if 
> a 3rd machine on the network were set to the same IP as one of the 
> nodes, whether accidentally or deliberately by someone trying to 
> "attack" your storage).
Well, everything that has access to these servers is on the network, and 
I'm the only one with any access, so something coming up on the same IP 
is possible but unlikely. Good point, though.
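
One mitigation, for what it's worth: keeping the replication addresses on 
a private, non-routed subnet means a stray box on the public LAN can't 
easily collide with them. A minimal drbd.conf sketch of that, where the 
hostnames, disks and the 10.99.0.x addresses are only placeholders:

    resource r0 {
      protocol C;
      on node1 {
        device    /dev/drbd0;
        disk      /dev/sda3;
        address   10.99.0.1:7788;   # replication IP on the private subnet
        meta-disk internal;
      }
      on node2 {
        device    /dev/drbd0;
        disk      /dev/sda3;
        address   10.99.0.2:7788;
        meta-disk internal;
      }
    }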
>> Or is bonding everything together so that it all can run at 2Gbps a 
>> good idea?
> The extra bandwidth is only relevant if you are seeing a bottleneck 
> within your system.  If the filesystem is working well without any 
> delays, then the additional bandwidth is not that relevant.  The main 
> advantage of bonding the interfaces would be the increased redundancy. 
>  By the sound of it, your system has single connections between each 
> device in the environment - which means that a single NIC/cable 
> failure could cause one of the servers to disappear.
> Personally I think the best answer would be to bond the interfaces 
> (dual active), and then use VLANs on top to keep the traffic 
> segregated.  This gives you the additional redundancy and bandwidth, 
> as well as still keeping the internal data separate from public data 
> to prevent the accidents/attacks discussed above.  The downside to 
> this is that the switches would need to support it - and if you want 
> to keep the networking redundant you would need a pair of pretty 
> recent switches to support the dual active functionality - since with 
> so few nodes involved, the various methods that don't need switch 
> support probably wouldn't help much.
Yeah, one of the big benefits I saw in it was the redundancy it offered. 
Thanks for the info.
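
In case it helps anyone else digging through the archives, here's roughly 
what that bond-plus-VLAN setup looks like done by hand, assuming eth0/eth1 
and switches that can do LACP across the pair; the interface names, VLAN 
IDs and addresses below are just placeholders:

    # Bond eth0+eth1 in 802.3ad (LACP) mode; miimon polls link state
    modprobe bonding mode=802.3ad miimon=100
    ifconfig bond0 up
    ifenslave bond0 eth0 eth1

    # 802.1q VLANs on top of the bond: one public, one private for DRBD
    modprobe 8021q
    vconfig add bond0 10
    vconfig add bond0 20
    ifconfig bond0.10 192.168.1.10 netmask 255.255.255.0  # public traffic
    ifconfig bond0.20 10.99.0.1 netmask 255.255.255.0     # DRBD traffic

(Whether you do it by hand like this or through the distro's network 
scripts obviously depends on the distribution.)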

Also, someone earlier asked if the disks could keep up with the 2Gbps - 
from running hdparm -tT, it would appear they can outperform 1Gbps 
(roughly 125 MB/s) by just a little, but could not keep up with 2Gbps 
(250 MB/s). Again, I was mostly interested in the redundancy offered, as 
well as the potential for a bit more speed, but didn't want problems from 
mixing too many different kinds of traffic on the network.
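
For the record, that was just the usual quick-and-dirty read test; the 
buffered figure is the one to compare against the wire speeds above:

    # -T measures cached reads, -t buffered sequential disk reads; run it
    # a few times on an otherwise idle system and average the results
    hdparm -tT /dev/sda

Sequential hdparm reads are a best case, of course; real DRBD traffic 
with mixed writes will usually come in lower.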

Thanks again to all that responded.
