[DRBD-user] Proxmox VE 4.x DRBD 9 plugin - how is it supposed to work?

Roland Kammerer roland.kammerer at linbit.com
Wed Mar 1 09:41:35 CET 2017


On Tue, Feb 28, 2017 at 07:52:03AM -0800, Sean M. Pappalardo wrote:
> Hello again.
> On 02/28/2017 12:44 AM, Roland Kammerer wrote:
> > Yes it uses dual-primary, only during the migration. From what we saw it
> > works reasonably well in these scenarios.
> Any risk of data loss? I plan to use this in production, though don't
> plan on live-migrating VMs often.

Sorry, but how do you expect me to answer such a question? Obviously,
only code/functionality that does not exist is 100% safe. I don't mean
to be rude, but I cannot answer that.

> >> is problem #1. I tried to create a test VM to see if it would work
> >> anyway, but it fails with "drbd error: Deployment node count exceeds the
> >> number of nodes in the cluster" which is problem #2.
> > 
> > Yes, you configured "redundancy 2", obviously that is not possible with
> > one node (drbdmanage only knows about the drbdmanage nodes, which is 1 in
> > your case).
> Would setting it to "redundancy 1" work then? (I did try that setting to
> see if it would show free space but no. I didn't yet try adding a VM in
> that state.)

Yes, that is what I used for testing most of the time.
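For reference, in the Proxmox VE 4.x plugin the redundancy count is set in
the storage definition, not in drbdmanage itself. A minimal single-node
entry in /etc/pve/storage.cfg would look roughly like this (the storage
name "drbd1" is just an example):

```
drbd: drbd1
        content images,rootdir
        redundancy 1
```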

> > There is no "import old resources" feature,
> Interesting. The docs say that DRBD 9 can use DRBD 8.x config files with
> a few adjustments. (E.g. syncer rate is no longer recognized.) But I'm
> no longer trying to do that.

That was meant from a drbdmanage point of view. There is no "drbdmanage
import-my-manual-resources --all --version=8". drbd8 and drbd9 should be
able to talk to each other, but to be honest that is not a well-tested
setup.

> > and therefore "not
> > supported". So you want to temporarily connect drbd8.3 and drbd9 managed
> > by drbdmanage? That calls for trouble. Multiple pages' worth of trouble.
> Not exactly, I just want to be able to activate storage on this DRBD9
> node - stand-alone for now - so I can put some VMs on it. Then I will
> upgrade the other node's OS, wipe and reconfigure its storage, then
> actually add it to the DRBD 9 cluster.

I think that could work, something along these lines. I've never tested
it, so you will want to try it in a test environment first:
- Start with 1 node and "redundancy 1".
- Put VMs on it.
- Add the second node to the drbdmanage cluster.
- Set "redundancy 2".
- New VMs should then be replicated on both nodes. The old ones are only
  on the first node; you have to do this for every old resource:
  "drbdmanage assign onlyOnHostAxyz HostB"
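The steps above could be sketched roughly as follows. This is untested;
all node names, IP addresses, and resource names below are placeholders,
and the redundancy count is changed in /etc/pve/storage.cfg, not via
drbdmanage:

```shell
# 1) On the first node: initialize a single-node drbdmanage cluster
drbdmanage init 192.168.0.10            # example IP of the first node

# 2) Create VMs through Proxmox as usual, with "redundancy 1" in
#    /etc/pve/storage.cfg; their disks become drbdmanage resources

# 3) Later: join the rebuilt second node to the cluster
drbdmanage add-node hostb 192.168.0.11  # placeholder name and IP

# 4) Raise "redundancy 1" to "redundancy 2" in /etc/pve/storage.cfg

# 5) Replicate each pre-existing resource to the new node by hand
drbdmanage assign vm-100-disk-1 hostb   # repeat for every old resource
drbdmanage list-assignments             # verify the deployment state
```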

Regards, rck
