Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
On Thu, Jun 19, 2008 at 01:03:11PM +0100, Lee Christie wrote:
> > But have you tested the raid-0 doing a lot more?
>
> Around 300-400MB/s if I recall...
>
> > Have you tested both nodes with iperf?
>
> No.
>
> > This card has a problem: it's not fault tolerant, and that is one
> > advantage of quad-NIC Gbit Ethernet.
>
> This may provoke a strong reaction ;) but in 8 years of running a
> hosting company I've never seen a NIC break in a running server. We use
> multiple adapters to give fault tolerance over the switches they are
> connected to, not over the NICs themselves.
>
> In any event, I'm no expert on channel bonding, but in a 2-server
> configuration, where the IPs and MAC addresses are fixed at either end,
> how can you use all 4 channels? I was always under the impression that
> the bonding used an algorithm based on src/dest IP/MAC to choose which
> link to send data down, so in a point-to-point config it would always be
> the same link.

"balance-rr", aka mode 0 for Linux bonding, schedules packets round robin
over the available links.

Still, for a single TCP connection, even with some tcp_reordering tuning,
the strong gain you get from 2x 1GbE (1.6 to 1.8 times the throughput of
one channel) degrades again to effectively less than one channel if you
try to use 4x.

Again, "more" is not always "better". For the usage pattern of DRBD (a
single TCP connection carrying bulk data), the throughput optimum for
Linux bonding seems to be 2x. With 3x you are back to around the same
throughput as 1x, and with 4x you are even worse than 1x, because packet
reordering over bonded GbE and TCP congestion control don't work well
together for single TCP links.

--
: Lars Ellenberg                            http://www.linbit.com :
: DRBD/HA support and consulting              sales at linbit.com :
: LINBIT Information Technologies GmbH       Tel +43-1-8178292-0  :
: Vivenotgasse 48, A-1120 Vienna/Europe      Fax +43-1-8178292-82 :
__
please don't Cc me, but send to list -- I'm subscribed
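
A minimal sketch of the 2x balance-rr setup and tcp_reordering tuning
described above. The interface names (eth1, eth2), the bond0 address, and
the tcp_reordering value are illustrative assumptions, not details taken
from this thread:

    # load the bonding driver in round-robin mode (mode 0 = balance-rr);
    # miimon=100 polls slave link state every 100 ms
    modprobe bonding mode=balance-rr miimon=100

    # assumed dedicated back-to-back replication link between the two nodes
    ip addr add 10.0.0.1/24 dev bond0
    ip link set bond0 up

    # enslave exactly two GbE ports -- the throughput sweet spot described above
    ifenslave bond0 eth1 eth2

    # let TCP tolerate more out-of-order segments before treating them as loss;
    # 127 is only an example value, benchmark before and after changing it
    sysctl -w net.ipv4.tcp_reordering=127

Measuring the raw link with iperf between the two nodes, as suggested
earlier in the thread, is the quickest way to see whether the second slave
actually adds throughput before layering DRBD on top.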