<html>
<head>
<style><!--
.hmmessage P
{
margin:0px;
padding:0px
}
body.hmmessage
{
font-size: 12pt;
font-family:Calibri
}
--></style></head>
<body class='hmmessage'><div dir='ltr'>> Date: Fri, 4 Jul 2014 14:18:48 +0200<br><div>> From: lars.ellenberg@linbit.com<br>
> To: drbd-user@lists.linbit.com<br>
> Subject: Re: [DRBD-user] One-line doubt when clusterizing DRBD resources...<br>
> <br>
> Note that in most parts of the world,<br>
> "doubt" is NOT the same as "question"<br>
> <br>
>         -)<br>
<br>
Apart from an absolute lack of English fluency on my part, it was a blatant attempt to look knowledgeable and to attract someone (who really is) to a supposedly subtle subject ;-><br>
<br>
A very successful attempt, it seems :-))<br>
<br>
> > Do they need to be halted on both nodes with "drbdadm down res_name"<br>
> > before stopping drbd service and clusterizing them all?<br>
> <br>
> Not necessarily.<br>
> <br>
> Depends on what you want, what you expect, and what you do.<br>
> <br>
> If you have existing DRBD resources,<br>
> which are in active use,<br>
> but do not use a cluster manager yet,<br>
> you now want to add a cluster manager (pacemaker),<br>
> and you expect it to take over control, without interfering,<br>
> then you should experiment with this in a test environment first.<br>
<br>
Actually, I was starting from scratch, and testing I did (while reading tons of your posts, etc. 
;> ), but a different problem afflicted my tests (a wrong fence-peer handler), and at one point I was afraid that this "dubious practice" could instead be part of the problem.<br>
<br>
> What you can do is start to configure pacemaker in "maintenance-mode",<br>
> and once you are positive that it is set up the way you want it,<br>
> take it out of maintenance-mode.<br>
> <br>
> At which point it will "reprobe" the state of the world (ok, this<br>
> cluster), and if it finds all resources already active and in line with<br>
> the configured policies, it will not take any action.<br>
<br>
Really brilliant!<br>
Many thanks for this suggestion: I will surely consider this strategy from now on, even when starting from scratch.<br>
<br>
What I actually did instead was (I only have DRBD-backed KVM resources on CentOS 6.5):<br>
<br>
*) begin with the cluster (CMAN+Pacemaker) stopped/unconfigured on both nodes<br>
*) manually start the DRBD service on both nodes<br>
*) create-md and up the resources on both nodes<br>
*) make a resource primary on a selected node<br>
*) virt-install/test there, then shut down the VM<br>
*) make the resource secondary on the above-selected node<br>
*) down the resource on both nodes<br>
*) repeat for all resources<br>
*) manually stop the DRBD service on both nodes<br>
*) start/configure the cluster on both nodes<br>
*) batch-define the resources (pcs -f resource_cfg...) and test them "live" one at a time<br>
<br>
On the "doubt" itself, I now understand (from your answer and from my tests, with the other problems corrected) that it is not necessary to totally quiesce DRBD, as long as it finds a consistent state when evaluating the clustered resources.<br>
<br>
Many thanks again.<br>
<br>
Regards,<br>
Giuseppe<br>
</div></div></body>
</html>