Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
Hello there. I'm a long-time happy user of DRBD - many thanks for the great work!

I have a two-node setup: two SMP servers, each with two S-ATA disks and two gigabit ports. One gigabit port is the public network to the internet, the other is a crossover link between the two nodes carrying DRBD data and internal (LAN) traffic. I have the obvious setup with a primary DRBD device on each node, mirrored to the corresponding drive on the other node. FYI, I run LVM2 on top of the DRBD devices and use logical volumes for Xen paravirtualized guests. I don't need live migration, so I prefer the flexibility of having LVM between DRBD and Xen.

Now I'm adding a dual-gigabit card to each node (Intel PRO/1000MT, PCI-X). The current LAN port will be connected to a switch to create a LAN for internal traffic between these two and other servers. The two dual-gigabit cards will be crossed over between the two nodes and will carry DRBD traffic (with jumbo frames). The simplest setup would be one DRBD device per gigabit link (roughly the layout sketched in the P.S. below). I use "normal" 7200rpm S-ATA disks, not fancy 15k rpm ones or other exotic hardware, and I run the latest stable 8.0.x release.

Is there any reason why I should consider a bonding setup with both DRBD devices on the bonded link? Would you recommend such a setup? Availability-wise, I don't see any scenario where I would lose only one of the two links; performance-wise, I think a gigabit link and a single 7200rpm SATA disk should be more or less evenly matched.

Also, I'm going to do some RTFM on this, but do you recommend any special DRBD settings to make the best use of the direct gigabit links with jumbo frames? (I have listed a few candidates I plan to benchmark in the second P.S.)

Many thanks,
-- Luca Lesinigo
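
P.S. In case it helps to make "one DRBD device per gigabit link" concrete, this is roughly the layout I have in mind. It is only a minimal sketch: the hostnames, disks and 10.0.x.x addresses below are placeholders, not my real configuration.

    # /etc/drbd.conf (fragment): resource r0 rides on the first crossover
    # link, r1 on the second, so each DRBD device gets its own gigabit link
    resource r0 {
        protocol C;
        on node1 {
            device    /dev/drbd0;
            disk      /dev/sda3;
            address   10.0.1.1:7788;   # first direct gigabit link
            meta-disk internal;
        }
        on node2 {
            device    /dev/drbd0;
            disk      /dev/sda3;
            address   10.0.1.2:7788;
            meta-disk internal;
        }
    }
    resource r1 {
        protocol C;
        on node1 {
            device    /dev/drbd1;
            disk      /dev/sdb3;
            address   10.0.2.1:7788;   # second direct gigabit link
            meta-disk internal;
        }
        on node2 {
            device    /dev/drbd1;
            disk      /dev/sdb3;
            address   10.0.2.2:7788;
            meta-disk internal;
        }
    }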
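
P.P.S. For the jumbo-frame question, these are the knobs I was planning to experiment with, after setting MTU 9000 on the crossover interfaces themselves (that part happens at the interface level, e.g. with "ip link set dev ethX mtu 9000", not in drbd.conf). The values below are just starting points I intend to benchmark on my own hardware, not recommendations I found anywhere.

    # per-resource tuning, to be benchmarked on the direct gigabit links
    syncer {
        rate        100M;      # cap resync so it does not starve normal I/O
        al-extents  257;       # larger activity log, fewer metadata updates
    }
    net {
        sndbuf-size     512k;  # bigger TCP send buffer for the dedicated link
        max-buffers     2048;  # more receive buffers on the peer
        max-epoch-size  2048;  # allow larger write bursts between barriers
    }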