[DRBD-user] Proxmox VE 4.x DRBD 9 plugin - how is it supposed to work?

Sean M. Pappalardo spappalardo at renegadetech.com
Tue Feb 28 02:57:54 CET 2017

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hello.

I have a 3-node Proxmox VE cluster with a pair of nodes running DRBD
8.3. I have the following storage hierarchy:
- Physical disks
- Hardware RAID
- LVM Partitions
- LVM PVs from those
- VG called DRBD-Backing
- LVs named node1 and node2 (each node is normally the primary user of
the volume named for it)
- DRBD devices using these LVs
- XFS file systems
- VM disk image files (qcow2; I know I need to convert these to raw)
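
For context, each DRBD device in this stack is defined by a resource
along these lines (hostnames, minor numbers, and IPs are placeholders
for my actual values):

    resource node1 {
            protocol C;
            on pve1 {
                    device    /dev/drbd0;
                    disk      /dev/DRBD-Backing/node1;
                    address   10.10.10.1:7788;
                    meta-disk internal;
            }
            on pve2 {
                    device    /dev/drbd0;
                    disk      /dev/DRBD-Backing/node1;
                    address   10.10.10.2:7788;
                    meta-disk internal;
            }
    }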

I set these up in Proxmox VE 3.x as Directory storage items (mapped to
/dev/drbdX), so PVE itself doesn't know anything about the DRBD layer
running in the background.
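
For reference, the old entries look something like this (the storage
IDs and mount points are placeholders for my actual ones):

    # /etc/pve/storage.cfg on the PVE 3.x nodes
    dir: drbd-node1
            path /mnt/drbd-node1
            content images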

My first question is: how is the DRBD 9 plugin on PVE 4 supposed to
work? My understanding is that it will create and destroy LVs for me as
I create/destroy VMs from the PVE GUI, much as PVE already does with
plain LVM storage. Is this correct? The docs also imply that it handles
live migration, even though dual-primary is not yet supported. Is that
true?

Before finding the "upgrading DRBD" page in the docs, I manually moved
all VMs off of one machine that was a DRBD node, upgraded it to Proxmox
VE 4.4, and installed the LINBIT drbdmanage-proxmox package. I then
blew away the LVs on it and renamed the VG to drbdpool, following
https://www.drbd.org/en/doc/users-guide-90/s-proxmox-configuration.
Next I ran "drbdmanage init <IP address of the direct-connected NIC>",
but it hung (I assumed because the other node was still actively using
DRBD 8.3 and was probably confusing this DRBD 9 node), so I broke out
of it, disabled the DRBD NIC on the other node, and ran the command
again on this one. This time it successfully created the .drbdctrl
LVs, so I added the storage entry to /etc/pve/storage.cfg with
"redundancy 2".
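
For reference, the command and config were roughly as follows (the IP
is a placeholder for my actual direct-connected NIC address, and the
storage name is arbitrary):

    # initialize the drbdmanage control volume on this node
    drbdmanage init 10.10.10.1

    # /etc/pve/storage.cfg entry, modeled on the LINBIT guide
    drbd: drbdstorage
            content images,rootdir
            redundancy 2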

Now the GUI is not showing a free-space figure for that storage, which
is problem #1. I tried to create a test VM to see if it would work
anyway, but it fails with "drbd error: Deployment node count exceeds
the number of nodes in the cluster", which is problem #2.
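
In case it helps with diagnosis, here is roughly what I plan to check
next (these are standard drbdmanage listing commands; output elided):

    # confirm which nodes drbdmanage thinks are in the cluster
    drbdmanage list-nodes

    # confirm the control volume and any resources deployed so far
    drbdmanage list-resources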

So my second question is: should I be able to create a DRBD "cluster"
consisting of a single node to start? If so, how? (My plan is to get
this node working stand-alone, move the other node's VMs to it, upgrade
the other node to PVE 4.4/DRBD 9, reconfigure its storage to match, then
add it to the DRBD cluster and sync happily again.)
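
My untested guess is that the redundancy setting has to match the
single-node cluster until the second node joins, something like:

    # /etc/pve/storage.cfg while only this node is in the DRBD cluster
    # (to be raised back to 2 once the second node is added)
    drbd: drbdstorage
            content images,rootdir
            redundancy 1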

Third question: Is DRBD 9 considered production stable yet? (This should
probably be my first question! :) )

Thank you very much for any help anyone can provide!

Sincerely,
Sean M. Pappalardo
