Note: "permalinks" may not be as permanent as we would like;
direct links to old sources may well be a few messages off.
> Date: Fri, 4 Jul 2014 14:18:48 +0200
> From: lars.ellenberg at linbit.com
> To: drbd-user at lists.linbit.com
> Subject: Re: [DRBD-user] One-line doubt when clusterizing DRBD resources...
>
> Note that in most parts of the world,
> "doubt" is NOT the same as "question"
>
> ;-)

Apart from an absolute lack of English fluency on my part, it was a blatant
attempt to look knowledgeable and attract someone (who really is) on a
supposedly subtle subject ;-> A very successful attempt, it seems :-))

> > Do they need to be halted on both nodes with "drbdadm down res_name"
> > before stopping drbd service and clusterizing them all?
>
> Not necessarily.
>
> Depends on what you want, what you expect, and what you do.
>
> If you have existing DRBD resources,
> which are in active use,
> but do not use a cluster manager yet,
> you now want to add a cluster manager (pacemaker),
> and you expect it to take over control, without interfering,
> then you should experiment with this in a test environment first.

Actually I was starting from scratch, and I did test (while reading tons of
your posts etc. ;> ), but a different problem afflicted my tests (a wrong
fence-peer handler), and at one point I was afraid that this "dubious
practice" could instead be part of the problem.

> What you can do is start to configure pacemaker in "maintenance-mode",
> and once you are positive that it is set up the way you want it,
> take it out of maintenance-mode.
>
> At which point it will "reprobe" the state of the world (ok, this
> cluster), and if it finds all resources already active and in line with
> the configured policies, it will not take any action.

Really brilliant! Many thanks for this suggestion: I will surely consider
this strategy from now on, even when starting from scratch.
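For what it's worth, the maintenance-mode strategy Lars describes can be
sketched with pcs roughly as below; the resource name and VirtualDomain
parameters are illustrative only, not taken from my actual configuration:

```shell
# Put the whole cluster in maintenance mode: Pacemaker stops starting,
# stopping and monitoring resources, but keeps its configuration.
pcs property set maintenance-mode=true

# Configure resources while the cluster is hands-off
# (names and parameters here are hypothetical examples).
pcs resource create vm_example ocf:heartbeat:VirtualDomain \
    config=/etc/libvirt/qemu/example.xml

# Hand control back; Pacemaker reprobes the cluster state and takes
# no action if everything is already in line with the policies.
pcs property set maintenance-mode=false
```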
What I actually did instead was the following (I only have DRBD-backed KVM
resources on CentOS 6.5):

*) begin with the cluster (CMAN+Pacemaker) stopped/unconfigured on both nodes
*) manually start the DRBD service on both nodes
*) create-md and up the resource on both nodes
*) make the resource primary on a selected node
*) virt-install/test there, then shut down the VM
*) make the resource secondary on the above-selected node
*) down the resource on both nodes
*) repeat for all resources
*) manually stop the DRBD service on both nodes
*) start/configure the cluster on both nodes
*) batch-define resources (pcs -f resource_cfg...) and test them "live" one at a time

On the "doubt" itself, I now understand (from your answer and from tests,
with the other problems corrected) that it is not necessary to totally
quiesce DRBD (as long as it finds a consistent status when evaluating the
clustered resources).

Many thanks again.

Regards,
Giuseppe
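The per-resource DRBD preparation in the list above corresponds roughly to
the commands below; the resource name "r0" is a hypothetical example, and
the `primary --force` form assumes DRBD 8.4 syntax (on 8.3 the equivalent
would be `drbdadm -- --overwrite-data-of-peer primary r0`):

```shell
# On BOTH nodes: start DRBD, write metadata, and bring the resource up
service drbd start
drbdadm create-md r0     # initialize DRBD metadata on the backing device
drbdadm up r0            # attach the disk and connect to the peer

# On ONE selected node only: make it the initial sync source / primary
drbdadm primary --force r0
# ... virt-install and test the VM here, then shut the VM down ...

# Hand the resource back before clusterizing it
drbdadm secondary r0     # on the node that was primary
drbdadm down r0          # on BOTH nodes
service drbd stop        # on BOTH nodes, once all resources are down
```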