[DRBD-user] proxmox linstor network recommendations
Matthias Weigel
matthias.weigel at maweos.de
Mon Aug 28 16:14:17 CEST 2023
Hi Nicholas,
if your two 40G switches are not connected to each other, then either
- connect them
- or use only one, but then you have no redundancy
- or get your own 40G switches
- or use an overly complicated routing setup
If the 40G switches are connected, you can use
Failover/Active-Backup/mode1 bonding. But only if they are connected!
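For reference, a minimal active-backup bond on Proxmox (ifupdown2,
/etc/network/interfaces) could look roughly like the sketch below. The
interface names (enp1s0f0/enp1s0f1) and the address are placeholders, not
taken from your setup:

    iface enp1s0f0 inet manual

    iface enp1s0f1 inet manual

    auto bond0
    iface bond0 inet static
            address 10.10.10.11/24
            bond-slaves enp1s0f0 enp1s0f1
            bond-mode active-backup
            bond-primary enp1s0f0
            bond-miimon 100
    # dedicated DRBD/LINSTOR replication network

You would then point LINSTOR/DRBD at the bond0 address as the node's
replication interface.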
Also there are some more things to consider:
- disable "green" features for the NIC in Bios, and Powersave in OS.
Otherwise some "green" software might disable the backup NIC, as it
seems to be not in use.
- check whether IRQ or NIC-driver CPU affinity gives you better
performance (see the /proc/interrupts sketch below).
- NIC "Load sharing" probably does not give you any benefit here,
because of the hashing strategies. Either an IP or a MAC based strategy
always results in the same (only one) physical NIC port being used. So
one port idle, one port used.
- if your 40G connection is fiber optics, test UDLD (Unidirectional Link
Detection). There is no standard for this; every vendor does something
different. Test how your setup detects and behaves when one fiber out of
a pair gets disconnected, and test both directions. Both sides (switch
and NIC) have to recognize the UDLD condition and take the port down.
- check the "flow control" setting. It should be the same on your NIC
and the switch port.
- if you use jumbo frames, the switches and your NICs need to use the
same MTU (a quick verification is sketched below).
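Regarding the "green" features: a quick way to check and disable Energy
Efficient Ethernet from the OS side, assuming the NIC driver supports it
(the interface name is a placeholder):

    ethtool --show-eee enp1s0f0          # check whether EEE is enabled
    ethtool --set-eee enp1s0f0 eee off   # disable it if the driver allows

The BIOS-side settings (C-states, ASPM and similar) you still have to
change in the BIOS setup itself.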
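For the IRQ affinity check, something along these lines (an untested
sketch; the interface name, IRQ number and CPU mask are placeholders):

    grep enp1s0f0 /proc/interrupts        # find the NIC's IRQ numbers
    cat /proc/irq/123/smp_affinity        # current CPU mask for IRQ 123
    echo 4 > /proc/irq/123/smp_affinity   # pin IRQ 123 to CPU 2 (mask 0x4)

If irqbalance is running it may overwrite manual settings, so either stop
it or use its ban options.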
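Flow control can be checked and set per port with ethtool, for example
(interface name again a placeholder):

    ethtool -a enp1s0f0                   # show current pause/flow-control settings
    ethtool -A enp1s0f0 rx on tx on       # example: enable both directions

Whatever you choose, set the switch port to match.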
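And for jumbo frames, a quick way to verify that the whole path really
carries the larger MTU (addresses assume a 9000-byte MTU and are
placeholders):

    ip link show bond0               # mtu should read 9000 on every hop
    ping -M do -s 8972 10.10.10.12   # 9000 minus 20 (IP) and 8 (ICMP), no fragmentation

If the ping fails with "message too long", something in the path is still
at 1500.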
Best Regards
Matthias
On 25.08.23 at 11:08, Nicholas Papadakos wrote:
> Hello,
> I have a Proxmox cluster and I want to install LINSTOR on it. The
> cluster has 3 nodes with 2x 1G ports and 2x 40G ports. The two 40G ports
> go to separate switches. The obvious choice would be to use the 40G
> network for the LINSTOR sync, but what would be the optimal config? A
> Linux bond? And if yes, which mode?
>
> I would prefer to avoid LACP.
> The 40G switches are not connected to each other (they are used in a
> multipath iSCSI configuration on a VMware cluster and I would prefer not
> to touch that config).
>
> Thank you in advance.
>
> _______________________________________________
> Star us on GITHUB: https://github.com/LINBIT
> drbd-user mailing list
> drbd-user at lists.linbit.com
> https://lists.linbit.com/mailman/listinfo/drbd-user