Hi all,

Some time ago I read that bonding more than two NICs carries a severe speed penalty, because packets arrive out of order and TCP has to re-order them at the receiver. I'm currently building a two-node DRBD cluster that uses InfiniBand for the DRBD replication link. The cluster exports SCST iSCSI targets, and I'd like to offer the best possible speed to the iSCSI clients (which are on gigabit NICs).

Has anyone tried bonding 3 or 4 cards? What performance did you get out of it?

Thanks!

BC
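For context, the re-ordering penalty depends on the bonding mode: balance-rr stripes successive packets of one TCP flow across all slaves, which is what triggers receiver-side re-ordering, while 802.3ad (LACP) hashes each flow onto a single slave, avoiding re-ordering but capping any one flow at a single link's speed. A minimal sketch with iproute2 (interface names eth1..eth3 are placeholders for your gigabit NICs; a matching LACP channel group must exist on the switch):

```shell
# Load the bonding driver and create the bond in 802.3ad (LACP) mode.
# Use "mode balance-rr" instead if you want per-packet striping and can
# tolerate out-of-order TCP segments.
modprobe bonding
ip link add bond0 type bond mode 802.3ad

# Slaves must be down before they can be enslaved.
ip link set eth1 down && ip link set eth1 master bond0
ip link set eth2 down && ip link set eth2 master bond0
ip link set eth3 down && ip link set eth3 master bond0

ip link set bond0 up
cat /proc/net/bonding/bond0   # verify mode and slave status
```

With 802.3ad, aggregate throughput scales with the number of concurrent iSCSI initiators rather than for a single client, since each flow stays on one gigabit link.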