[DRBD-user] 10Gb ethernet ?

Sören Malchow Soeren.Malchow at interone.de
Thu Jun 19 18:05:12 CEST 2008

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hi,

I am sorry if I am missing the point, but I have not read the older messages.

But why not use 802.3ad, aka dynamic link aggregation? We have that
successfully configured with HP and Nortel switches (also with machines
running DRBD over those links).

As far as I know, Cisco switches also support 802.3ad.
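
For what it's worth, a minimal sketch of such a bonding setup on the
Linux side (module options, addresses and interface names are only
examples; your distribution may wire this up through its own network
scripts):

    # load the bonding driver in 802.3ad (LACP) mode
    modprobe bonding mode=802.3ad miimon=100 xmit_hash_policy=layer3+4

    # bring the bond up and enslave both GbE ports
    ifconfig bond0 10.0.0.1 netmask 255.255.255.0 up
    ifenslave bond0 eth0 eth1

The switch ports need a matching LACP channel group on their side, and
keep in mind that a single TCP connection still only ever uses one
slave, because the hash stays constant for a given flow.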

Regards
Soeren

Ralf Gross <Ralf-Lists at ralfgross.de> wrote on 19.06.2008 17:49
to drbd-user at lists.linbit.com, subject: Re: [DRBD-user] 10Gb ethernet ?

Lars Ellenberg wrote:
> On Thu, Jun 19, 2008 at 01:03:11PM +0100, Lee Christie wrote:
> > In any event, I'm no expert on channel bonding, but in a 2-server
> > configuration, where the IPs and MAC addresses are fixed at either end,
> > how can you use all 4 channels? I was always under the impression that
> > the bonding used an algorithm based on src/dest IP/MAC to choose which
> > link to send data down, so in a point-to-point config it would always be
> > the same link.
> 
> "balance-rr" aka mode 0 for linux bonding schedules packets round robin
> over the available links.

balance-rr will not help in a Cisco environment, because the switch
will still use the same ports and does not perform round-robin load
balancing. The only way to get round robin working with Cisco switches
was to use two different VLANs:

eth0 <---- vlan x ----> eth0
eth1 <---- vlan y ----> eth1

At least that is my experience, and our CCNPs (or whatever they are
called) told me the same.
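
Roughly, the Linux side of that looks like the sketch below (addresses
and interface names are placeholders; the switch ports for eth0 and
eth1 are plain access ports in VLAN x and VLAN y, with no port-channel
configured):

    # round-robin bond over the two NICs, one per VLAN
    modprobe bonding mode=balance-rr miimon=100
    ifconfig bond0 192.168.10.1 netmask 255.255.255.0 up
    ifenslave bond0 eth0 eth1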

With the above trick I was able to get ~1.6x GbE throughput with
the netpipe benchmark (after tuning the reorder kernel parameter). I
didn't use the connection for DRBD; I tried to speed up our backups.
But the funny thing was, it slowed down: even ftpd or Samba was slower
over this link than it was with the XOR mode or just one GbE NIC.
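
For completeness, the reorder kernel parameter mentioned above is
presumably net.ipv4.tcp_reordering; raising it keeps the round-robin
striping from being mistaken for packet loss. The exact value is only
an example:

    # allow more out-of-order segments before TCP assumes loss
    # (the stock default is 3, which balance-rr exceeds quickly)
    sysctl -w net.ipv4.tcp_reordering=127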

> but still, for a single tcp connection, given some tcp_reorder tuning,
> the strong gain you get from 2x 1GbE (1.6 to 1.8 * that of one channel)
> degrades again to effectively less than one channel if you try to use 4x.
> 
> again, "more" is not always "better".
> for the usage pattern of drbd (single tcp connection with bulk data) the
> throughput-optimum linux bonding seems to be 2x, with 3x you are back to
> around the same throughput as 1x, with 4x you are even worse than 1x,
> because packet reordering over bonded GbE and tcp congestion control
> don't work well together for single tcp links.

Very true.

Ralf
_______________________________________________
drbd-user mailing list
drbd-user at lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user
