Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
> My thought was that this may degrade the reliability of DRBD by it
> not having its own private communication channel.

If you were to bond the NICs, then there is certainly the possibility that other network access to the server could flood the network. However, it is unlikely that enough data could be thrown at them to flood a 2Gbps connection. The other problems relate to the replication data going out on the "public" network, and the possibility of an IP conflict causing trouble (e.g. if a third machine on the network was set to the same IP as one of the nodes, whether accidentally or deliberately by someone trying to "attack" your storage).

> Or is bonding everything together so that it all can run at 2Gbps a
> good idea?

The extra bandwidth is only relevant if you are seeing a bottleneck within your system. If the filesystem is working well without any delays, then the additional bandwidth is not that relevant. The main advantage of bonding the interfaces would be the increased redundancy. By the sound of it, your system has single connections between each device in the environment, which means that a single NIC or cable failure could cause one of the servers to disappear.

Personally I think the best answer would be to bond the interfaces (dual active), and then use VLANs on top to keep the traffic segregated (see the sketch in the P.S. below). This gives you the additional redundancy and bandwidth while still keeping the internal data separate from the public data, preventing the accidents/attacks discussed above.

The downside is that the switches need to support it. If you also want the networking itself to be redundant, you would need a pair of fairly recent switches that can do dual-active aggregation across the pair; with so few nodes involved, the various bonding modes that don't need switch support probably wouldn't help much.

Graham
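
P.S. For concreteness, here is roughly what the bond-plus-VLAN setup could look like on a Debian-style system. This is only a sketch: the interface names (eth0/eth1), the VLAN IDs (10 for public traffic, 20 for DRBD) and the addresses are all assumptions, and the 802.3ad mode is exactly the part that needs the switch support mentioned above.

  # /etc/network/interfaces fragment (needs the ifenslave and vlan packages)
  auto bond0
  iface bond0 inet manual
      bond-slaves eth0 eth1
      bond-mode 802.3ad        # dual-active LACP; requires switch support
      bond-miimon 100          # check link state every 100 ms

  # VLAN 10 on the bond: "public" traffic
  auto bond0.10
  iface bond0.10 inet static
      address 192.168.10.1
      netmask 255.255.255.0
      vlan-raw-device bond0

  # VLAN 20 on the bond: DRBD replication, kept off the public segment
  auto bond0.20
  iface bond0.20 inet static
      address 10.0.20.1
      netmask 255.255.255.0
      vlan-raw-device bond0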
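
DRBD then just needs to be pointed at the private VLAN addresses so the replication never touches the public segment. Again only a sketch; the resource name, hostnames and disk devices are made up:

  # /etc/drbd.conf fragment
  resource r0 {
      protocol C;
      on node1 {
          device    /dev/drbd0;
          disk      /dev/sda7;
          address   10.0.20.1:7788;   # the VLAN 20 address from above
          meta-disk internal;
      }
      on node2 {
          device    /dev/drbd0;
          disk      /dev/sda7;
          address   10.0.20.2:7788;
          meta-disk internal;
      }
  }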