On Fri, Sep 17, 2010 at 02:12:35PM -0500, J. Ryan Earl wrote:
> >
> > If that gets you connected, then it's that bug.
> > I think I even patched it in kernel once,
> > but don't find that right now,
> > and don't remember the SDP version either.
> > I think it was
> > drivers/infiniband/ulp/sdp/sdp_main.c:addr_resolve_remote()
> > missing an (... || ... = AF_INET_SDP)
>
> Is this the fix to which you refer?
> http://www.mail-archive.com/general@lists.openfabrics.org/msg10615.html

That's certainly relevant as well,
but it would have returned EAFNOSUPPORT, which is 97.

I doubt that this has anything to do with performance, btw;
it is just the address lookup during connect.

My guess is that if you strace a netcat in userland, using your SDP
preload thingy, you'll likely see that it only creates the socket as
AF_INET_SDP, but all the rest of the network functions keep using
AF_INET, so no one ever noticed.
If that is intentional, we'd have to adjust that in DRBD.
If not, it needs to be fixed in the SDP stack.

DRBD over SDP performance tuning is a bit tricky,
and no, I don't remember the details, it's been a while.

I think CPU usage dropped considerably, that's a plus.
But neither single write latency nor sequential throughput of a single
connection improved much, or they even degraded relative to IPoIB.
If you have multiple DRBD resources, thus multiple connections,
the cumulative throughput scaled better, though.

But please go ahead and tune your stack, on your hardware,
which may be more capable than the test lab hardware we used.
Depending on your hardware, and the quality of your SDP tuning expert's
recommendations, your findings may be different.

-- 
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
__
please don't Cc me, but send to list -- I'm subscribed
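
For illustration only, a minimal sketch of the kind of preload shim
described above, assuming it works by intercepting socket() and
rewriting AF_INET to AF_INET_SDP. The AF_INET_SDP value, the shim name,
and the overall approach are assumptions here, not the actual libsdp
code:

  /* hypothetical LD_PRELOAD shim: only socket() is wrapped, so every
   * later connect()/bind()/accept() still passes plain AF_INET
   * sockaddrs -- which matches what strace would show. */
  #define _GNU_SOURCE
  #include <dlfcn.h>
  #include <sys/socket.h>

  #ifndef AF_INET_SDP
  #define AF_INET_SDP 27          /* assumed value, check your sdp headers */
  #endif

  int socket(int domain, int type, int protocol)
  {
          static int (*real_socket)(int, int, int);

          if (!real_socket)
                  real_socket = (int (*)(int, int, int))
                          dlsym(RTLD_NEXT, "socket");

          /* only the address family of the socket itself is switched */
          if (domain == AF_INET && type == SOCK_STREAM)
                  domain = AF_INET_SDP;

          return real_socket(domain, type, protocol);
  }

Built as a shared object (gcc -shared -fPIC -o sdp_shim.so sdp_shim.c -ldl)
and run with LD_PRELOAD=./sdp_shim.so, something like netcat would then
show exactly the pattern described: AF_INET_SDP in the socket() call,
plain AF_INET everywhere else.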