[DRBD-user] Directly connected GigE ports bonded together no switch

Jake Smith jsmith at argotec.com
Wed Aug 10 19:23:23 CEST 2011



Herman, 

To test the throughput I would use iperf: 

Install iperf on both servers. 
Start iperf as server on node 1: 
$ iperf -s 
On node 2: 
$ iperf -c node1_ip_on_direct_bond_link -i2 

After a few seconds you should start seeing throughput info on node 2. 

Mine hovered around 1.75-1.85Gb/s. I use miimon=100 and balance-rr. 
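For reference, a balance-rr bond on a direct back-to-back link can be brought up by hand roughly like this (a sketch only; the interface names eth1/eth5, the bond name, and the 10.0.0.x address are placeholders for your own):

```shell
# Sketch: round-robin bond over two directly connected NICs,
# with MII link monitoring every 100 ms.
modprobe bonding mode=balance-rr miimon=100

# Bring up the bond and enslave both direct-link ports
ifconfig bond0 10.0.0.1 netmask 255.255.255.0 up
ifenslave bond0 eth1 eth5

# Verify slave state and link monitoring
cat /proc/net/bonding/bond0
```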

Not sure about the NIC down... I can't ifdown my enslaved NICs. However, if I do a physical disconnect it behaves properly. 

HTH 

Jake 

----- Original Message -----

From: "Herman" <herman6x9 at ymail.com> 
To: drbd-user at lists.linbit.com 
Sent: Wednesday, August 10, 2011 1:04:11 PM 
Subject: Re: [DRBD-user] Directly connected GigE ports bonded together no switch 

> On 2011-08-09 16:46, Herman wrote: 
> > Sorry if this is covered elsewhere. 
> > 
> > I know the Linux Bonding FAQ is supposed to talk about this, but I 
> > didn't see anything specific in it on what parameters to use. 
> > 
> > Basically, I want to bond two GigE ports between two servers which are 
> > connected with straight cables with no switch and use them for DRBD. 
> > 
> > I tried the various bonding modes with "miimon=100", but none of them 
> > worked. Say the eth1 ports on both servers were cabled together, and the 
> > same for eth5. Then, I could create the bond with eth1 and eth5. 
> > However, if I downed one of the ports on one server, say eth1, it would 
> > failover on that server to eth5, but the other server would not 
> > failover to eth5. 
> > 
> > Eventually, I decided to use "arp_interval=100" and "arp_ip_target=<ip 
> > of other bonded pair>" instead of "miimon=100". This seems to work as 
> > I expected, with the bond properly failing over. 
> > 
> > Is this the right way to do this kind of bonding? 
> > 
> > Also, right now I'm using "mode=active-backup". Would one of the other 
> > modes allow higher throughput and still allow automatic failover and 
> > transparency to DRBD? 
> 
> use balance-rr and e.g. miimon=100, that should do fine 
> 
> Regards, 
> Andreas 

Andreas and Andi, 

Thanks for your suggestions to use balance-rr. I did try balance-rr 
with miimon=100; however, it didn't seem to work the way I wanted it to. 
Perhaps the way I was testing it isn't proper for miimon? 

I attempted to make one of the two links fail by doing "ifconfig eth3 
down". This appeared to work fine on the server I ran it on: I could 
still ping the other server. However, from the 2nd server, when I pinged 
the 1st, I lost every other packet. 
Checking /proc/net/bonding/bond2 showed that it still thought 
that both links were up. 

Is this because miimon still considers a port good as long as there is a 
cable and a powered NIC on both ends, even if the other NIC isn't 
responding? 

ARP monitoring, on the other hand, works because it actually checks the 
reachability of the target IP. 
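The ARP-monitored setup I mean looks roughly like this (a sketch only; bond2, the eth names, and the addresses are placeholders, and arp_ip_target should be the peer's IP on the direct link):

```shell
# Sketch: ARP link monitoring instead of miimon.
# The bond probes 10.0.0.2 every 100 ms, so a dead peer NIC
# is detected even while carrier stays up.
modprobe bonding mode=active-backup arp_interval=100 arp_ip_target=10.0.0.2

ifconfig bond2 10.0.0.1 netmask 255.255.255.0 up
ifenslave bond2 eth1 eth3
```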

If this is the case, maybe arp monitoring is more reliable for direct 
connections, since a NIC can fail while still reporting link up, and 
that seems more likely than a cable failure? Maybe I don't have a good 
understanding of this. 

In addition, I tried to use scp to test the throughput through the 
bonded link, but I actually got almost the same results via 
active-backup as with balance-rr. Am I doing something wrong? 
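(One possible factor: scp pushes a single TCP stream and is often limited by ssh's encryption overhead, so it may not exercise both links. Something like iperf with parallel streams may be a fairer comparison; a sketch, with the address a placeholder for node 1's IP on the bonded link:)

```shell
# On node 1 (server side):
iperf -s

# On node 2: two parallel TCP streams over the bond,
# reporting every 2 seconds
iperf -c 10.0.0.1 -P 2 -i 2
```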

Thanks, 
Herman 


_______________________________________________ 
drbd-user mailing list 
drbd-user at lists.linbit.com 
http://lists.linbit.com/mailman/listinfo/drbd-user 



