[DRBD-user] Directly connected GigE ports bonded together no switch

Jake Smith jsmith at argotec.com
Wed Aug 10 23:20:12 CEST 2011

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.



----- Original Message -----
> From: "Bart Coninckx" <bart.coninckx at telenet.be>
> To: drbd-user at lists.linbit.com
> Sent: Wednesday, August 10, 2011 3:34:48 PM
> Subject: Re: [DRBD-user] Directly connected GigE ports bonded together no switch
> 
> On 08/10/11 19:04, Herman wrote:
> >> On 2011-08-09 16:46, Herman wrote:
> >>> Sorry if this is covered elsewhere.
> >>>
> >>> I know the Linux Bonding FAQ is supposed to talk about this, but I
> >>> didn't see anything specific in it on what parameters to use.
> >>>
> >>> Basically, I want to bond two GigE ports between two servers which
> >>> are connected with straight cables, no switch, and use them for
> >>> DRBD.
> >>>
> >>> I tried the various bonding modes with "miimon=100", but none of
> >>> them worked. Say the eth1 ports on both servers were cabled
> >>> together, and the same for eth5. Then I could create the bond with
> >>> eth1 and eth5. However, if I downed one of the ports on one server,
> >>> say eth1, it would fail over on that server to eth5, but the other
> >>> server would not fail over to eth5.
> >>>
> >>> Eventually, I decided to use "arp_interval=100" and
> >>> "arp_ip_target=<ip of other bonded pair>" instead of "miimon=100".
> >>> This seems to work as I expected, with the bond properly failing
> >>> over.
> >>>
> >>> Is this the right way to do this kind of bonding?
> >>>
> >>> Also, right now I'm using "mode=active-backup". Would one of the
> >>> other modes allow higher throughput and still allow automatic
> >>> failover and transparency to DRBD?
> >>
> >> Use balance-rr with e.g. miimon=100; that should do fine.
> >>
> >> Regards,
> >> Andreas
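
For reference, the two setups being discussed boil down to something
like this on both nodes (the bond name, interface names and peer IP
below are only placeholders; adjust for however your distro configures
bonding):

    # /etc/modprobe.d/bonding.conf -- example only, names/IPs are placeholders
    # active-backup with ARP monitoring, as Herman ended up with:
    options bonding mode=active-backup arp_interval=100 arp_ip_target=192.168.10.2
    # or balance-rr with MII link monitoring, as Andreas suggests:
    # options bonding mode=balance-rr miimon=100

Both ends need matching settings, and arp_ip_target on each node should
point at the other node's address on the bond.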
> >
> > Andreas and Andi,
> >
> > Thanks for your suggestions to use balance-rr. I did try balance-rr
> > with miimon=100; however, it didn't seem to work the way I wanted it
> > to. Perhaps the way I was testing it isn't proper for miimon?
> >
> > I attempted to make one of the two links fail by doing "ifconfig
> > eth3 down". This appeared to work fine on the server I ran that on.
> > I could still ping the other server. However, from the 2nd server,
> > when I ping the 1st, I lost every other packet.
> > Checking /proc/net/bonding/bond2 showed that it still thought that
> > both links were up.
> >
> > Is this because miimon still thinks a port is good if there is a
> > cable and a powered NIC on both ends, and it doesn't care whether
> > the other NIC is actually responding?
> >
> > And arp monitoring works because it actually checks the
> > reachability of the target IP.
> >
> > If this is the case, maybe arp monitoring is more reliable for
> > direct connections, since a NIC failure (where the NIC fails but the
> > link stays up) is more likely than a cable failure? Maybe I don't
> > have a good understanding of this.
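
As far as I understand it, yes: plain miimon only watches the local
carrier/link state, so a NIC that keeps link up but stops passing
traffic (or an interface downed administratively on the far end) is not
noticed, while ARP monitoring requires the peer to actually answer. You
can see what the driver believes about each slave with (bond name is
just an example):

    cat /proc/net/bonding/bond2
    # each "Slave Interface" stanza shows an "MII Status" line; with
    # miimon that reflects local carrier only, not peer reachability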
> >
> > In addition, I tried to use scp to test the throughput through the
> > bonded link, but I actually got almost the same results via
> > active-backup as with balance-rr.  Am I doing something wrong?
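
scp is probably not the best yardstick here: it pushes a single TCP
stream and the ssh encryption overhead alone can cap it well below wire
speed in either mode. Something like iperf gives a cleaner picture of
what the bond itself can do (the address is a placeholder):

    # on one node
    iperf -s
    # on the other node: a 30-second test with two parallel streams
    iperf -c 192.168.10.2 -t 30 -P 2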
> >
> > Thanks,
> > Herman
> >
> >
> > 
> I noticed an improvement on SLES11 only after tuning the
> tcp_reordering parameter.
> 
> B.
> 
> 
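For what it's worth, tcp_reordering is just a sysctl; something along
these lines (the value is only an example to experiment with; the
default is 3):

    # check the current value
    sysctl net.ipv4.tcp_reordering
    # raise it so TCP tolerates the out-of-order delivery balance-rr causes
    sysctl -w net.ipv4.tcp_reordering=127
    # add it to /etc/sysctl.conf if it helps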

I tuned the MTU on the direct-link bond to 9000 and saw roughly a 10% improvement in throughput. The effect on latency was negligible, though.
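
In case it's useful: that is just jumbo frames on both ends of the
back-to-back link; roughly (the bond name is an example, and both nodes
need the same MTU):

    ifconfig bond2 mtu 9000
    # or, with iproute2:
    # ip link set dev bond2 mtu 9000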

I was getting a consistent 180-185MB/s with MTU 1500 using the throughput testing script in the DRBD Users Guide; iperf showed 1.75-1.85Gb/s.
After raising the MTU I get 198-199MB/s consistently, with highs of 209-215MB/s. Without DRBD my storage controller delivers 225MB/s, so there is now almost no cost on the throughput side. iperf was rock solid at 1.97-1.98Gb/s repeatedly.
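
For anyone who wants to repeat the comparison: the write test boils
down to something along these lines (the device name is an example, and
it writes straight to the DRBD device, so only point it at a scratch or
test resource):

    # sequential write throughput to the replicated device,
    # bypassing the page cache
    dd if=/dev/zero of=/dev/drbd0 bs=1M count=512 oflag=direct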

Jake


