[DRBD-user] 10Gb ethernet ?

Lee Christie Lee at titaninternet.co.uk
Thu Jun 19 21:07:42 CEST 2008

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Again, I believe that in 802.3ad, when transmitting to the same destination, packets will always go over the same link, because the slave is chosen by hashing the (fixed) source/destination addresses.
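
For what it's worth, the kernel's bonding documentation (Documentation/networking/bonding.txt) describes the default 802.3ad transmit hash ("layer2") as an XOR of the endpoint MAC addresses modulo the slave count, which is exactly why a single src/dst pair can never spread across links. A wee illustration (the MAC octets below are made up):

#!/bin/bash
# The "layer2" transmit hash, roughly per bonding.txt:
#   slave = (src MAC XOR dst MAC) modulo slave_count
# With fixed endpoints the result never changes, so one point-to-point
# flow always rides one link, no matter how many slaves you add.
src_mac=0x1a   # hypothetical last octet of the local NIC's MAC
dst_mac=0x2b   # hypothetical last octet of the peer's MAC
for slave_count in 2 3 4; do
    echo "with $slave_count slaves this pair always picks slave $(( (src_mac ^ dst_mac) % slave_count ))"
done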
 
I think we tested FTP throughput between two servers (connected via a Cisco switch) using the various bonding methods to see if there were any performance gains. I believe the answer was no, because the traffic was point-to-point; however, I note with interest the snippet below, which suggests the two different ports need to be in different VLANs. I'm not quite sure why that would be, but I'll take it at face value :)
 
The bottom line is that no aggregation of multiple "slow" links will ever beat a single "fast" link. And there are compromises/caveats/complexity along the way.
 
We use link bonding for resilience across Cisco 3750 stacks, as two switches can be treated as a single logical entity; when you drop a switch or a link, there is almost no packet loss.
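
For the curious, the host side of that looks roughly like the below - a sketch only, with RHEL-style file locations and made-up names/addresses rather than our exact config:

# /etc/modprobe.conf - load the bonding driver in 802.3ad (LACP) mode
alias bond0 bonding
options bond0 mode=802.3ad miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=192.168.10.1          # example address
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth0 (eth1 identical apart from DEVICE)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=no

On the 3750 stack the two ports (one per member switch) go into the same channel group, something like "channel-group 1 mode active" on each - but check your IOS release, as early ones only did cross-stack aggregation statically.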
 
10GbE is getting cheaper all the time. Intel are currently offering a 2-for-1 deal on the 10GbE adapters we bought in for testing, so we paid 700 GBP for the pair. Configuration was a simple matter of dropping in the cards, building the kernel module, a wee bit of modprobe/kudzu, configuring eth2, and bingo - in business.
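
In case it helps anyone, the whole bring-up amounted to something like this (a sketch, not our literal commands - ixgbe is the usual module for the Intel 10GbE cards, and the address is invented):

modprobe ixgbe                      # load the Intel 10GbE driver
cat > /etc/sysconfig/network-scripts/ifcfg-eth2 <<'EOF'
DEVICE=eth2
IPADDR=10.0.0.1                     # example back-to-back replication address
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
EOF
ifup eth2
ethtool eth2 | grep -i speed        # should report 10000Mb/s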
 
 


________________________________

	From: drbd-user-bounces at lists.linbit.com [mailto:drbd-user-bounces at lists.linbit.com] On Behalf Of Sören Malchow
	Sent: 19 June 2008 17:05
	To: Ralf Gross
	Cc: drbd-user-bounces at lists.linbit.com; drbd-user at lists.linbit.com
	Subject: Re: [DRBD-user] 10Gb ethernet ?
	
	

	Hi, 
	
	I am sorry if I miss the point, but I did not read the older messages. 
	
	But why not use 802.3ad, aka dynamic link aggregation? We have that successfully configured with HP and Nortel switches (also with machines running DRBD over those links). 
	
	As far as I know, Cisco switches also support 802.3ad. 
	
	Regards 
	Soeren 
	
From: Ralf Gross <Ralf-Lists at ralfgross.de>
Sent: 19.06.2008 17:49
To: drbd-user at lists.linbit.com
Subject: Re: [DRBD-user] 10Gb ethernet ?


	Lars Ellenberg wrote:
	> On Thu, Jun 19, 2008 at 01:03:11PM +0100, Lee Christie wrote:
	> > In any event, I'm no expert on channel bonding, but in a 2-server
	> > configuration, where the IPs and MAC addresses are fixed at either end,
	> > how can you use all 4 channels? I was always under the impression that
	> > the bonding used an algorithm based on src/dest IP/MAC to choose which
	> > link to send data down, so in a point to point config it would always be
	> > the same link. 
	> 
	> "balance-rr" aka mode 0 for linux bonding schedules packets round robin
	> over the available links.
	
	balance-rr will not help in a Cisco environment, because the switch
	will still use the same port and does not perform round-robin load
	balancing. The only way to get round-robin working with Cisco switches
	was to use 2 different VLANs.
	
	eth0 <---- vlan x ----> eth0
	eth1 <---- vlan y ----> eth1
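	
	[Sketch of that two-VLAN trick, for reference: each NIC pair sits in
	its own access VLAN, so the switch cannot collapse both flows onto a
	single egress port, while the hosts stripe across them round-robin.]
	
	# /etc/modprobe.conf on both hosts (illustrative values)
	alias bond0 bonding
	options bond0 mode=balance-rr miimon=100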
	
	At least this is my experience, and our CCNPs (or whatever they are
	called) told me the same.
	
	With the above trick I was able to get ~1.6x GbE throughput with
	the netpipe benchmark (after tuning the reorder kernel parameter). I
	didn't use the connection for DRBD; I tried to speed up our backup.
	But the funny thing was, it slowed things down. Even ftpd or Samba was
	slower over this link than it was with the XOR mode or just one GbE NIC.
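	
	[The "reorder kernel parameter" above is presumably
	net.ipv4.tcp_reordering; a minimal sketch of the tuning, with an
	illustrative value:]
	
	# tolerate heavier packet reordering before TCP mistakes it for loss
	# (the default is 3; 127 is just an example value)
	sysctl -w net.ipv4.tcp_reordering=127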
	
	> but still, for a single tcp connection, given some tcp_reorder tuning,
	> the strong gain you get from 2x 1GbE (1.6 to 1.8 * that of one channel)
	> degrades again to effectively less than one channel if you try to use 4x.
	> 
	> again, "more" is not always "better".
	> for the usage pattern of drbd (single tcp connection with bulk data) the
	> throughput-optimum linux bonding seems to be 2x, with 3x you are back to
	> around the same throughput as 1x, with 4x you are even worse than 1x,
	> because packet reordering over bonded GbE and tcp congestion control
	> don't work well together for single tcp links.
	
	Very true.
	
	Ralf
	_______________________________________________
	drbd-user mailing list
	drbd-user at lists.linbit.com
	http://lists.linbit.com/mailman/listinfo/drbd-user
	
	

