Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
Hi Christian,

I'd like to know:

- what filesystem did you use for the test?
- what tool did you use to measure the latency?
- what network card did you use for the test?
- what switch did you use for the test?
- did you use jumbo frames? If yes, what was the size of the frames?

Thanks,
Marcos

----- Original Message ----
From: Christian Balzer <chibi at gol.com>
To: drbd-user at lists.linbit.com
Sent: Saturday, October 2, 2010 3:01:00
Subject: Re: [DRBD-user] bonding more than two network cards still a bad idea?

On Thu, 30 Sep 2010 21:21:47 -0500, J. Ryan Earl wrote:

[lots and lots of useful information and data in response to Bart Coninckx]

In addition to this, two more items of possible interest.

I also tested and use GigE bonding (balance-rr) with more than two links, four to be precise. And while the last time I tested this it "only" achieved 3.51Gb/s of bandwidth, that is still better than nothing, and definitely not the loss of bandwidth compared to a dual link that gets preached here at times (and which might well be true for certain HW/SW scenarios). A minimal configuration sketch for such a bond follows at the end of this message.

And to hammer home the point Ryan made somewhat in passing: latency is king in this game. The reason I'm considering QDR InfiniBand for my next DRBD clusters is not the need for more "speed" as in bandwidth; most of my setups will do fine with what a dual or quad GigE link can provide. But round-trip time, latency, becomes much, much more of an issue in real-life production environments than write speed (lots of parallel transactions, as opposed to bonnie writing/reading one big file); a small probe sketch illustrating this also follows below.

Regards,

Christian
--
Christian Balzer        Network/Systems Engineer
chibi at gol.com           Global OnLine Japan/Fusion Communications
http://www.gol.com/
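
For anyone who wants to reproduce a balance-rr bond like the one described above, here is a minimal sketch of the classic Linux bonding configuration. The interface names (eth0 through eth3), the address, and the miimon interval are illustrative assumptions, not details from Christian's actual setup:

  # /etc/modprobe.d/bonding.conf -- options for the bonding module
  options bonding mode=balance-rr miimon=100

  # load the driver, configure the bond, and enslave the four GigE ports
  modprobe bonding
  ifconfig bond0 192.168.10.1 netmask 255.255.255.0 up
  ifenslave bond0 eth0 eth1 eth2 eth3

  # optional: jumbo frames on the bond (the MTU propagates to the slaves)
  ifconfig bond0 mtu 9000

balance-rr is the only bonding mode that stripes a single TCP connection across all slave links, which is why it can exceed one link's throughput for a single DRBD connection; the usual price is some out-of-order delivery on the receiving side.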
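
To make the latency point concrete, here is a small, self-contained Python sketch that measures average round-trip time using many small synchronous exchanges, a pattern much closer to DRBD's synchronous replication writes than a bulk bonnie run. The host, port, and payload size are placeholder assumptions; any simple TCP echo server on the peer will do:

  #!/usr/bin/env python
  # Round-trip latency probe: many small, synchronous request/reply
  # exchanges over TCP. This is RTT-bound, unlike a streaming test.
  import socket
  import time

  HOST = "peer.example.com"  # hypothetical peer running a TCP echo server
  PORT = 9000                # hypothetical port
  ROUNDS = 1000
  PAYLOAD = b"x" * 64        # small message, not a big streaming buffer

  sock = socket.create_connection((HOST, PORT))
  sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # no Nagle delay

  start = time.time()
  for _ in range(ROUNDS):
      sock.sendall(PAYLOAD)              # send a small request
      received = 0
      while received < len(PAYLOAD):     # wait for the complete echo
          chunk = sock.recv(len(PAYLOAD) - received)
          if not chunk:
              raise RuntimeError("peer closed the connection")
          received += len(chunk)
  elapsed = time.time() - start
  sock.close()

  print("average round trip: %.1f microseconds" % (elapsed / ROUNDS * 1e6))

Comparing what this prints over a single link, a bond, and IPoIB on InfiniBand makes the latency argument tangible: for small synchronous writes the bandwidth of the path barely matters, the round-trip time dominates.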