Hello guys,

For two weeks I have been struggling with a DRBD + Pacemaker configuration in order to build an HA NFS server.

I tried all the examples Google was able to show me, without success. I have also read lots of articles on this distribution list and was not able to end up with a working configuration either.

This thread is interesting enough:
"secundary not finish synchronizing"
http://drbd-user.linbit.narkive.com/fTSxwkgw/secundary-not-finish-synchronizing

especially this quote:

"To be able to avoid DRBD data divergence due to cluster split-brain, you'd need both. Stonith alone is not good enough, DRBD fencing policies alone are not good enough. You need both."
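If I read that quote right, the Pacemaker half of "both" is simply keeping fencing enabled at the cluster level, while the DRBD fence-peer handlers take care of the resource level. A minimal sketch of what I assume is meant (stonith-enabled is the standard Pacemaker property):

    # cluster-wide fencing must stay on, otherwise the DRBD
    # crm-fence-peer handler has nothing to escalate to
    pcs property set stonith-enabled=true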
you'd need both.">
Still, I have not been able to make it work.

Now that I have expressed my feelings about the product(s) :) let me summarize my setup:

2 identical VMs, each with an LVM volume and a SINGLE NIC.

DRBD 9.0.9

    # rpm -qa | grep drbd
    drbd90-utils-9.1.0-1.el7.elrepo.x86_64
    kmod-drbd90-9.0.9-1.el7_4.elrepo.x86_64

Pacemaker 1.1.16

    # rpm -qa | grep pacemaker
    pacemaker-1.1.16-12.el7_4.8.x86_64
    pacemaker-libs-1.1.16-12.el7_4.8.x86_64
    pacemaker-cluster-libs-1.1.16-12.el7_4.8.x86_64
    pacemaker-cli-1.1.16-12.el7_4.8.x86_64

Corosync 2.4.0

    # rpm -qa | grep corosync
    corosynclib-2.4.0-9.el7_4.2.x86_64
    corosync-2.4.0-9.el7_4.2.x86_64

DRBD resource on both nodes:

    # cat /etc/drbd.d/r0.res
    resource r0 {
        net {
            # fencing resource-only;
            fencing resource-and-stonith;
        }

        handlers {
            fence-peer "/usr/lib/drbd/crm-fence-peer.9.sh";
            after-resync-target "/usr/lib/drbd/crm-unfence-peer.9.sh";
        }

        protocol C;

        on nfs1 {
            device    /dev/drbd0;
            disk      /dev/mapper/vg_cdf-lv_cdf;
            address   10.200.50.21:7788;
            meta-disk internal;
        }
        on nfs2 {
            device    /dev/drbd0;
            disk      /dev/mapper/vg_cdf-lv_cdf;
            address   10.200.50.22:7788;
            meta-disk internal;
        }
    }

Everything is good up until this point: I mounted the volume on both nodes and was able to watch the data fly.
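you'd need both.">
(For reference, this is how I verify the replication state; drbdadm status is the DRBD 9 command, and the output in the comments is roughly what I see once the initial sync is done:)

    # on the current primary
    drbdadm status r0
    # r0 role:Primary
    #   disk:UpToDate
    #   nfs2 role:Secondary
    #     peer-disk:UpToDate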
The problem occurs with Pacemaker on top, because I was not able to configure it to end up with a Master and a Slave resource; all I get is a Master and a stopped one.

Here are the Pacemaker configs:

    pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=10.200.50.20 cidr_netmask=24 op monitor interval=30s

    pcs cluster cib drbd_cfg
    pcs -f drbd_cfg resource create Data ocf:linbit:drbd drbd_resource=r0 op monitor interval=60s
    pcs -f drbd_cfg resource master DataClone Data master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
    pcs -f drbd_cfg constraint colocation add DataClone with ClusterIP INFINITY
    pcs -f drbd_cfg constraint order ClusterIP then DataClone
    pcs cluster cib-push drbd_cfg

    pcs cluster cib fs_cfg
    pcs -f fs_cfg resource create DataFS Filesystem device="/dev/drbd0" directory="/var/vols/itom" fstype="xfs"
    pcs -f fs_cfg constraint colocation add DataFS with DataClone INFINITY with-rsc-role=Master
    pcs -f fs_cfg constraint order promote DataClone then start DataFS
    pcs cluster cib-push fs_cfg

    pcs cluster cib nfs_cfg
    pcs -f nfs_cfg resource create nfsd nfsserver nfs_shared_infodir=/var/vols/nfsinfo
    pcs -f nfs_cfg resource create nfscore exportfs clientspec="*" options=rw,sync,anonuid=1999,anongid=1999,all_squash directory=/var/vols/core fsid=1999
    pcs -f nfs_cfg resource create nfsdca exportfs clientspec="*" options=rw,sync,anonuid=1999,anongid=1999,all_squash directory=/var/vols/dca fsid=1999
    pcs -f nfs_cfg resource create nfsnode1 exportfs clientspec="*" options=rw,sync,anonuid=1999,anongid=1999,all_squash directory=/var/vols/node1 fsid=1999
    pcs -f nfs_cfg resource create nfsnode2 exportfs clientspec="*" options=rw,sync,anonuid=1999,anongid=1999,all_squash directory=/var/vols/node2 fsid=1999
    pcs -f nfs_cfg constraint order DataFS then nfsd
    pcs -f nfs_cfg constraint order nfsd then nfscore
    pcs -f nfs_cfg constraint order nfsd then nfsdca
    pcs -f nfs_cfg constraint order nfsd then nfsnode1
    pcs -f nfs_cfg constraint order nfsd then nfsnode2
    pcs -f nfs_cfg constraint colocation add nfsd with DataFS INFINITY
    pcs -f nfs_cfg constraint colocation add nfscore with nfsd INFINITY
    pcs -f nfs_cfg constraint colocation add nfsdca with nfsd INFINITY
    pcs -f nfs_cfg constraint colocation add nfsnode1 with nfsd INFINITY
    pcs -f nfs_cfg constraint colocation add nfsnode2 with nfsd INFINITY
    pcs cluster cib-push nfs_cfg
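(After each cib-push I sanity-check what actually landed in the CIB with the two commands below; maybe someone can spot a wrong constraint there:)

    pcs constraint show --full   # ordering/colocation constraints with their IDs
    pcs status resources         # compact view of what is running where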
And the STONITH resources:

    pcs stonith create nfs1_fen fence_ipmilan pcmk_host_list="nfs1" ipaddr=100.200.50.21 login=user passwd=pass lanplus=1 cipher=1 op monitor interval=60s
    pcs constraint location nfs1_fen avoids nfs1
    pcs stonith create nfs2_fen fence_ipmilan pcmk_host_list="nfs2" ipaddr=100.200.50.22 login=user passwd=pass lanplus=1 cipher=1 op monitor interval=60s
    pcs constraint location nfs2_fen avoids nfs2

And here is the status of the cluster:

    # pcs status
    Cluster name: nfs-cluster
    Stack: corosync
    Current DC: nfs2 (version 1.1.16-12.el7_4.8-94ff4df) - partition with quorum
    Last updated: Thu Apr 26 13:31:20 2018
    Last change: Thu Apr 26 09:10:44 2018 by root via cibadmin on nfs1

    2 nodes configured
    11 resources configured

    Online: [ nfs1 nfs2 ]

    Full list of resources:

     ClusterIP  (ocf::heartbeat:IPaddr2):      Started nfs1
     Master/Slave Set: DataClone [Data]
         Masters: [ nfs1 ]
         Stopped: [ nfs2 ]        <-- this is the problem
     DataFS     (ocf::heartbeat:Filesystem):   Started nfs1
     nfsd       (ocf::heartbeat:nfsserver):    Started nfs1
     nfscore    (ocf::heartbeat:exportfs):     Started nfs1
     nfsdca     (ocf::heartbeat:exportfs):     Started nfs1
     nfsnode1   (ocf::heartbeat:exportfs):     Started nfs1
     nfsnode2   (ocf::heartbeat:exportfs):     Started nfs1
     nfs1_fen   (stonith:fence_ipmilan):       Stopped
     nfs2_fen   (stonith:fence_ipmilan):       Stopped

    Failed Actions:
    * nfs1_fen_start_0 on nfs2 'unknown error' (1): call=97, status=Timed Out, exitreason='none',
        last-rc-change='Thu Apr 26 09:10:45 2018', queued=0ms, exec=20009ms
    * nfs2_fen_start_0 on nfs1 'unknown error' (1): call=118, status=Timed Out, exitreason='none',
        last-rc-change='Thu Apr 26 09:11:03 2018', queued=0ms, exec=20013ms

    Daemon Status:
      corosync: active/enabled
      pacemaker: active/enabled
      pcsd: active/enabled
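Regarding the two Failed Actions: is testing the agents outside Pacemaker a valid check? Something like the call below, going by the fence_ipmilan man page (credentials and address are the same placeholders as above):

    # ask nfs1's BMC for its power status, bypassing the cluster
    fence_ipmilan --ip=100.200.50.21 --username=user --password=pass \
        --lanplus --action=status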
So, with the above config, DRBD starts on the "promoted" master node with a Connecting status, because the "slave" node's DRBD is not running.

This is my first concern: how do I instruct Pacemaker to start the DRBD processes on both hosts/VMs at cluster startup (a real Master/Slave pair, with the synchronization actually happening)? Right now I have to start DRBD on the slave manually before the remaining resources get deployed/started, so there is no automation, no resilience, etc.

My second concern is about STONITH: is fence_ipmilan applicable to the current implementation (2 VMs with a single NIC each)?

Third one: how do I test that the failover indeed happens? I was trying to force the switch via a constraint like "pcs constraint location ClusterIP prefers nfs2=INFINITY", or by disconnecting the NIC.

If somebody could share their experience and, why not, some sample configs, I would appreciate it. Any additional feedback regarding the current configuration is more than welcome.

Many thanks,
Mihai

PS: Although "Clusters from Scratch" is a really good book
(http://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/1.1/pdf/Clusters_from_Scratch/Pacemaker-1.1-Clusters_from_Scratch-en-US.pdf),
I was not able to make it work :(

PPS: This is just a personal assessment in order to understand the power of these technologies.