[DRBD-user] DRBD 9 and pacemaker

mdsreg_linbit at microdata.co.uk
Mon Oct 23 20:40:17 CEST 2017

I am setting up a test drbd 9 cluster with Pacemaker. The documentation 
states that you can either use drbd as if it were a SAN or have Pacemaker 
control drbd directly. I am using drbdmanage to handle resource 
creation and was hoping to keep everything as simple as possible: let 
drbdmanage do most of the hard work and allow automatic resource 
promotion. At this stage, I intend to set up just an active/passive 
cluster.
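For reference, my understanding from the DRBD 9 docs is that automatic
promotion is governed by the auto-promote option in the resource's
options section; a minimal sketch (using my resource name vm-data-01,
everything else assumed) would be:

```
resource vm-data-01 {
  options {
    # promote to Primary on first open of the device,
    # demote back to Secondary on last close
    auto-promote yes;
  }
}
```

With drbdmanage creating the resources, I assume this ends up in the
generated configuration rather than being hand-edited.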

So far, I have zfs and drbd set up as I want, in a zfs -> drbd -> zfs 
-> nfs stack (a zvol backing the drbd device, and a zpool on top of the 
drbd device exported via nfs). So, for example, I have a drbd device 
called vm-data-01 and I have created a zpool called vm-data-01 on top 
of it, and on my first node this has been exported via nfs. It works 
fine. I can also migrate this resource to my second node and again, it 
works fine; the nfs share also exports correctly.
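To make the layering concrete, the stack on the active node looks
roughly like the following. The pool name, zvol size, and drbd minor
number are hypothetical, and the drbd resource itself is created
through drbdmanage, so the drbdadm call is only illustrative:

```
# 1. zvol on the local pool as the drbd backing device (names assumed)
zfs create -V 100G tank/vm-data-01-backing

# 2. bring the drbd resource up (drbdmanage normally handles this)
drbdadm up vm-data-01

# 3. zpool on top of the drbd device (minor number assumed)
zpool create vm-data-01 /dev/drbd100

# 4. export the pool's root filesystem over nfs
zfs set sharenfs=on vm-data-01
```

On migration, the pool is exported on one node, the drbd roles swap,
and the pool is imported and re-shared on the other node.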

My main question is whether or not this is the "correct" way to do it 
and whether there are any serious pitfalls. Also, how do I prevent 
auto-promotion of the drbd block device if the node I am migrating to is 
not up to date?
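On that last point: as I read the DRBD 9 manual, auto-promote should
already refuse to make a node Primary while its local data is
Inconsistent or Outdated, and the quorum options can tighten this
further. Is something along these lines the right approach? A sketch
only (option names are from the DRBD 9 documentation; availability
depends on the exact 9.x version):

```
resource vm-data-01 {
  options {
    auto-promote yes;
    # only allow promotion when a majority of nodes are reachable
    quorum majority;
    # fail I/O rather than risk writing to stale data
    on-no-quorum io-error;
  }
}
```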

Many thanks.
