[DRBD-user] Default drbdmanage system-kill behavior

Mariusz Mazur mmazur at axeos.com
Fri Oct 27 14:57:12 CEST 2017



By default, drbdmanage will at some point during normal operation lock
up all LVM-related operations on a server, anywhere between running
the first 'drbdmanage init' and a year into a production deployment.
This is known and documented behavior.

For any new drbd9 user there are two ways of avoiding it:
1. Divine inspiration compels one to read documentation section 5.4,
titled "Configuring storage plugins", thoroughly, and while going
through 5.4.1 to comprehend immediately that failing to do what it
says will kill your system at some point. (This is not actually stated
there.)
2. Be lucky enough to stumble upon the problem right away with an
'init' or 'add-node', and persist long enough to figure out what the
issue is.
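For context, the kind of change section 5.4.1 is about is an
/etc/lvm/lvm.conf adjustment so that LVM stops scanning the DRBD
devices layered on top of its own volumes. A rough sketch (the exact
filter regex and settings are assumptions here and depend on your
device layout; check the guide before applying anything):

```
devices {
    # Assumed example: keep LVM from scanning DRBD devices,
    # which is what eventually wedges LVM operations.
    filter = [ "r|^/dev/drbd.*|" ]
}
global {
    # The guide recommends not using lvmetad with drbdmanage's
    # LVM storage plugin.
    use_lvmetad = 0
}
```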

From brief contact with a Linbit developer, it seems to me that
company policy is 'new users just need to know to read 5.4.1'.
Preferably before 5.1 and 5.2, which actually show how to use
init/add-node.

If I wanted to come up with a good way to leave new users thinking
"should've just used gluster, like everybody else", I don't think I'd
do a better job.


(Btw: are there any other 5.4.1s a new user should be aware of?)


