Herman,

To test the throughput I would use iperf:

Install iperf on both servers.
Start iperf as a server on node 1:
$ iperf -s
On node 2:
$ iperf -c node1_ip_on_direct_bond_link -i 2

After a few seconds you should start seeing throughput info on node 2.

Mine hovered around 1.75-1.85 Gb/s. I use miimon=100 and balance-rr.

Not sure about the NIC down... I can't ifdown my enslaved NICs. However, if I do a physical disconnect it behaves properly.

HTH

Jake

----- Original Message -----
From: "Herman" <herman6x9@ymail.com>
To: drbd-user@lists.linbit.com
Sent: Wednesday, August 10, 2011 1:04:11 PM
Subject: Re: [DRBD-user] Directly connected GigE ports bonded together no switch

> On 2011-08-09 16:46, Herman wrote:
> > Sorry if this is covered elsewhere.
> >
> > I know the Linux Bonding FAQ is supposed to talk about this, but I
> > didn't see anything specific in it on what parameters to use.
> >
> > Basically, I want to bond two GigE ports between two servers which are
> > connected with straight cables and no switch, and use them for DRBD.
> >
> > I tried the various bonding modes with "miimon=100", but none of them
> > worked. Say the eth1 ports on both servers were cabled together, and the
> > same for eth5. Then, I could create the bond with eth1 and eth5.
> > However, if I downed one of the ports on one server, say eth1, it would
> > fail over on that server to eth5, but the other server would not
> > fail over to eth5.
> >
> > Eventually, I decided to use "arp_interval=100" and "arp_ip_target=<ip
> > of other bonded pair>" instead of "miimon=100". This seems to work as
> > I expected, with the bond properly failing over.
> >
> > Is this the right way to do this kind of bonding?
> >
> > Also, right now I'm using "mode=active-backup". Would one of the other
> > modes allow higher throughput and still allow automatic failover and
> > transparency to DRBD?
>
> use balance-rr and e.g. miimon=100, that should do fine
>
> Regards,
> Andreas

Andreas and Andi,

Thanks for your suggestions to use balance-rr. I did try balance-rr
with miimon=100; however, it didn't seem to work the way I wanted it to.
Perhaps the way I was testing it isn't appropriate for miimon?

I attempted to make one of the two links fail by running "ifconfig eth3
down". This appeared to work fine on the server I ran it on: I could
still ping the other server. However, when I pinged the 1st server from
the 2nd, I lost every other packet.
Checking /proc/net/bonding/bond2 showed that it still thought
that both links were up.

Is this because miimon still considers a port good as long as there is a
cable and a powered NIC on both ends, and it doesn't care whether the
other NIC is actually responding?

And ARP monitoring works because it actually checks the reachability of
the target IP.

If this is the case, maybe ARP monitoring is more reliable for direct
connections, since a NIC failure (where the port fails but still reports
link up) is more likely than a cable failure? Maybe I don't have a good
understanding of this.
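For reference, a minimal sketch of the ARP-monitored setup described
above, as bonding module options (the peer address 192.168.10.2 and the
bond/slave names are placeholders, not taken from this thread):

# /etc/modprobe.d/bonding.conf
# ARP monitoring sends ARP requests to the peer every 100 ms, so a NIC
# that keeps link up but stops passing traffic is noticed on both hosts;
# miimon only watches the local carrier state.
options bonding mode=active-backup arp_interval=100 arp_ip_target=192.168.10.2

After loading the module and enslaving the two ports, each slave's state
can be checked with:

$ cat /proc/net/bonding/bond0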
In addition, I tried to use scp to test the throughput over the bonded
link, but I got almost the same results with active-backup as with
balance-rr. Am I doing something wrong?

Thanks,
Herman
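As for throughput testing, iperf (as in Jake's reply at the top of the
thread) is a better gauge than scp, since it sends plain TCP and avoids
scp's encryption overhead. A minimal run, with 192.168.10.1 standing in
for node 1's address on the direct bond link:

On node 1:
$ iperf -s

On node 2 (report every 2 seconds; -P 2 opens two parallel TCP streams):
$ iperf -c 192.168.10.1 -i 2 -P 2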