On Mon, Feb 27, 2017 at 05:57:54PM -0800, Sean M. Pappalardo wrote:
> Hello.
>
> My first question is: how is the DRBD 9 on PVE 4 plugin supposed to
> work? I get the idea that it will manage the creation of LVs for me as I
> create/destroy VMs from the PVE GUI like PVE does with LVM. Is this
> correct?

Yes.

> The doc also implies that it will handle live-migration as
> well, even though dual-primary is not yet supported. Is this true?

Yes, it uses dual-primary, but only during the migration. From what we
have seen, it works reasonably well in these scenarios.

> Before finding the "upgrading DRBD" page in the docs, I manually moved
> all VMs off of one machine that was a DRBD node, upgraded it to Proxmox
> VE 4.4, installed the Linbit drbdmanage-proxmox package, then blew away
> the LVs on it and renamed the VG to drbdpool, following
> https://www.drbd.org/en/doc/users-guide-90/s-proxmox-configuration. Then
> I ran drbdmanage init <IP address of the direct-connected NIC> and it
> hung (I assumed because the other node is still actively using DRBD 8.3
> and was probably confusing this DRBD 9 node) so I broke out of it, then
> disabled the DRBD NIC on the other node and ran that command again on
> this one. This time it successfully created the .drbdctrl LVs. So I
> added the stuff to /etc/pve/storage.cfg with "redundancy 2".

Strange; "init" is relatively easy, it should not fail there. I don't
think the other node was the problem. Maybe the old kernel module was
still loaded, or you did not give it enough time? Hard to tell now, but
okay, it worked the second time.

> Now the GUI is not showing a free space figure for that storage, which
> is problem #1. I tried to create a test VM to see if it would work
> anyway, but it fails with "drbd error: Deployment node count exceeds the
> number of nodes in the cluster" which is problem #2.
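[Editor's sketch, not drbdmanage's actual code: the error in problem #2 comes down to a simple comparison between the configured redundancy and the number of nodes drbdmanage knows about. The function name `can_deploy` is illustrative.]

```python
def can_deploy(redundancy: int, cluster_nodes: int) -> bool:
    """Illustrates the check behind "Deployment node count exceeds
    the number of nodes in the cluster": a resource can only be
    deployed to at most as many nodes as are registered."""
    return redundancy <= cluster_nodes

# "redundancy 2" in storage.cfg, but only one drbdmanage node: rejected.
assert can_deploy(2, 1) is False
# With "redundancy 1", a single-node cluster can deploy.
assert can_deploy(1, 1) is True
```

In other words, until the second node is added with drbdmanage, the storage.cfg entry would need "redundancy 1" for deployments to succeed.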
Yes, you configured "redundancy 2", which is obviously not possible with
one node (drbdmanage only knows about the drbdmanage nodes, which is 1
in your case). If you really want to go that road, see below; first get
drbdmanage working on the command line.

> So my second question is: should I be able to create a DRBD "cluster"
> consisting of a single node to start? If so, how? (My plan is to get
> this node working stand-alone, move the other node's VMs to it, upgrade
> the other node to PVE 4.4/DRBD 9, reconfigure its storage to match, then
> add it to the DRBD cluster and sync happily again.)

There is no "import old resources" feature, so that is "not supported".
So you want to temporarily connect DRBD 8.3 and DRBD 9 managed by
drbdmanage? That calls for trouble. Multiple pages of trouble.

> Third question: Is DRBD 9 considered production stable yet? (This should
> probably be my first question! :) )

From what I see here, you were happy with an outdated 8.3 setup, so why
not be happy by upgrading to a current 8.4? This obviously depends on
how dynamic your setup is.

Regards, rck