<div style="line-height:1.7;color:#000000;font-size:14px;font-family:Arial">Failed to send to one or more email server, so send again.<div style="color: rgb(0, 0, 0); line-height: 1.7; font-family: Arial; font-size: 14px;"><div style="color: rgb(0, 0, 0); line-height: 1.7; font-family: Arial; font-size: 14px;"><div><p><br></p></div><pre><br>At 2016-09-27 15:47:37, "Nick Wang" <<a href="mailto:nwang@suse.com">nwang@suse.com</a>> wrote:
>>>> On 2016-9-26 at 19:17, in message
><CACp6BS7W6PyW=453WkrRFGSZ+f0mqHqL2m9FjSXFicOCQ+wiwA@mail.gmail.com>, Igor
>Cicimov <igorc@encompasscorporation.com> wrote:
>> On 26 Sep 2016 7:26 pm, "mzlld1988" <mzlld1988@163.com> wrote:
>> >
>> > I applied the attached patch file to scripts/drbd.ocf, then pacemaker
>> > can start drbd successfully, but only on two nodes; the third node's
>> > drbd is down. Is that right?
>> Well, you didn't say you have 3 nodes. Usually you use pacemaker with 2
>> nodes and drbd.
>The patch is supposed to help in the 3-node (or more) scenario, as long as
>there is only one Primary.
>Is the 3-node DRBD cluster working without pacemaker? And how did you
>configure it in pacemaker?
According to http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf, I
executed the following commands to configure drbd in pacemaker:

[root@pcmk-1 ~]# pcs cluster cib drbd_cfg
[root@pcmk-1 ~]# pcs -f drbd_cfg resource create WebData ocf:linbit:drbd \
         drbd_resource=wwwdata op monitor interval=60s
[root@pcmk-1 ~]# pcs -f drbd_cfg resource master WebDataClone WebData \
         master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 \
         notify=true
[root@pcmk-1 ~]# pcs cluster cib-push drbd_cfg
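
By the way, Clusters from Scratch describes a two-node cluster, and
clone-max=2 tells pacemaker to run only two clone instances in the whole
cluster, so I guess that may be why the third node's drbd stays down.
Would raising it to the node count be correct? Something like this (just
my guess, not from the guide):

[root@pcmk-1 ~]# pcs -f drbd_cfg resource master WebDataClone WebData \
         master-max=1 master-node-max=1 clone-max=3 clone-node-max=1 \
         notify=true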
>> > And another question: can pacemaker successfully stop the slave node?
>> > My result is that pacemaker can't stop the slave node.
>> >
>Yes, you need to check the log to see which resource prevents pacemaker
>from stopping.
Pacemaker can't stop the slave node's drbd. I think the reason may be the
same as in my previous email (see attached file), but no one replied to
that email.

[root@drbd ~]# pcs status
Cluster name: mycluster
Stack: corosync
Current DC: drbd.node103 (version 1.1.15-e174ec8) - partition with quorum
Last updated: Mon Sep 26 04:36:50 2016  Last change: Mon Sep 26 04:36:49 2016 by root via cibadmin on drbd.node101

3 nodes and 2 resources configured

Online: [ drbd.node101 drbd.node102 drbd.node103 ]

Full list of resources:

 Master/Slave Set: WebDataClone [WebData]
     Masters: [ drbd.node102 ]
     Slaves: [ drbd.node101 ]

Daemon Status:
  corosync: active/corosync.service is not a native service, redirecting to /sbin/chkconfig.
  Executing /sbin/chkconfig corosync --level=5
  enabled
  pacemaker: active/pacemaker.service is not a native service, redirecting to /sbin/chkconfig.
  Executing /sbin/chkconfig pacemaker --level=5
  enabled
  pcsd: active/enabled

-------------------------------------------------------------
Failed to execute 'pcs cluster stop drbd.node101'

= Error messages on drbd.node101 (secondary node)
Sep 26 04:39:26 drbd lrmd[3521]: notice: WebData_stop_0:4726:stderr [ Command 'drbdsetup down r0' terminated with exit code 11 ]
Sep 26 04:39:26 drbd lrmd[3521]: notice: WebData_stop_0:4726:stderr [ r0: State change failed: (-10) State change was refused by peer node ]
Sep 26 04:39:26 drbd lrmd[3521]: notice: WebData_stop_0:4726:stderr [ additional info from kernel: ]
Sep 26 04:39:26 drbd lrmd[3521]: notice: WebData_stop_0:4726:stderr [ failed to disconnect ]
Sep 26 04:39:26 drbd lrmd[3521]: notice: WebData_stop_0:4726:stderr [ Command 'drbdsetup down r0' terminated with exit code 11 ]
Sep 26 04:39:26 drbd crmd[3524]: error: Result of stop operation for WebData on drbd.node101: Timed Out | call=12 key=WebData_stop_0 timeout=100000ms
Sep 26 04:39:26 drbd crmd[3524]: notice: drbd.node101-WebData_stop_0:12 [ r0: State change failed: (-10) State change was refused by peer node\nadditional info from kernel:\nfailed to disconnect\nCommand 'drbdsetup down r0' terminated with exit code 11\nr0: State change failed: (-10) State change was refused by peer node\nadditional info from kernel:\nfailed to disconnect\nCommand 'drbdsetup down r0' terminated with exit code 11\nr0: State change failed: (-10) State change was refused by peer node\nadditional info from kernel:\nfailed t

= Error messages on drbd.node102 (primary node)
Sep 26 04:39:25 drbd kernel: drbd r0 drbd.node101: Preparing remote state change 3578772780 (primary_nodes=4, weak_nodes=FFFFFFFFFFFFFFFB)
Sep 26 04:39:25 drbd kernel: drbd r0: State change failed: Refusing to be Primary while peer is not outdated
Sep 26 04:39:25 drbd kernel: drbd r0: Failed: susp-io( no -> fencing)
Sep 26 04:39:25 drbd kernel: drbd r0 drbd.node101: Failed: conn( Connected -> TearDown ) peer( Secondary -> Unknown )
Sep 26 04:39:25 drbd kernel: drbd r0/0 drbd1 drbd.node101: Failed: pdsk( UpToDate -> DUnknown ) repl( Established -> Off )
Sep 26 04:39:25 drbd kernel: drbd r0 drbd.node101: Aborting remote state change 3578772780
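
The "susp-io( no -> fencing)" and "Refusing to be Primary while peer is
not outdated" messages on the primary make me guess that the DRBD fencing
policy refuses the disconnect because the peer's disk cannot be outdated.
I plan to check this by hand on the secondary, roughly like this (just my
guess at the relevant checks):

# Check the resource state on drbd.node101 before stopping
[root@drbd ~]# drbdadm status r0

# Check the effective fencing policy in the running configuration
[root@drbd ~]# drbdsetup show r0 | grep -i fencing

# Try the refused state change manually: outdate the local disk, then down
[root@drbd ~]# drbdadm outdate r0
[root@drbd ~]# drbdadm down r0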
>> > I'm looking forward to your answers. Thanks.
>> >
>> Yes, it works with 2-node drbd9 configured in the standard way, not via
>> drbdmanage. Haven't tried any other layout.
>>
>> >
>
>Best regards,
>Nick