[DRBD-user] Recommendations on Bonded and VLANed interfaces

Diego Julian Remolina diego.remolina at ibb.gatech.edu
Fri Mar 30 15:53:50 CEST 2007

Hi List,

I wanted to ask your recommendation for the following:

Most of my servers come with two built-in Gigabit NICs, and I have added some single-, dual- and
quad-port PCI/PCI-X/PCI-Express Gigabit network cards to them. In the past I configured them with one
NIC dedicated to DRBD and the other NICs dedicated to each of my subnets (I had 3 subnets and have
just added a new one).

I am currently running RAID 10, 5 and 6 on different servers, and my write performance maxes out due
to the 1 Gigabit link that I use for DRBD; see the first 3 lines of the benchmark table for a server
with RAID 10:

https://services.ibb.gatech.edu/wiki/index.php/Benchmarks:Storage#Benchmark_Results_3
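For reference, my rough math on why the single DRBD link is the bottleneck (these are estimates, not
measurements):

    1 Gbit/s / 8                          ~= 125 MB/s theoretical
    minus Ethernet/TCP and DRBD overhead  -> roughly 100-115 MB/s usable

which is well below what the arrays can write locally.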

The file servers are currently not heavily loaded and the 1 Gigabit links on each subnet seem fine.
However, I want to configure them the best way possible. This is what I have thought about doing and
would like your input on:

My Cisco switches support trunking and are already configured to do this.

1. Bond all the interfaces on each server (up to six on the machine with two built-in NICs and a
quad-port Gigabit add-on card).

2. Add a VLAN on the bonded interface for each of my subnets, plus one VLAN for the DRBD link.
Basically, I would be mixing the traffic of five subnets (my four regular subnets and the private
10.0.0.0/24 for the DRBD connection) on one bonded interface; a rough sketch of what I have in mind
follows below.

Has anyone tried this? Have you seen any issues with this kind of setup? Would you recommend it, or
advise against it?

Worst case, I would just bond two interfaces for DRBD and use the rest for my VLANs, along the lines
of the DRBD fragment below.
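In that case the DRBD bond would look like the sketch above, just with mode=balance-rr instead of
802.3ad (as far as I understand, round-robin is one of the few modes where a single TCP stream can
use more than one link, and it is simplest over a direct back-to-back connection). DRBD itself only
needs to be pointed at the address that lives on that bond; hostnames, disks, IPs and the port below
are placeholders:

    # /etc/drbd.conf (fragment) -- resource bound to the IP configured on
    # the dedicated replication bond
    resource r0 {
      protocol C;
      on fileserver1 {
        device    /dev/drbd0;
        disk      /dev/sda5;
        address   10.0.0.1:7788;
        meta-disk internal;
      }
      on fileserver2 {
        device    /dev/drbd0;
        disk      /dev/sda5;
        address   10.0.0.2:7788;
        meta-disk internal;
      }
    }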

Thanks,

Diego


