<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Mar 24, 2017 at 7:19 PM, Raman Gupta <span dir="ltr"><<a href="mailto:ramangupta16@gmail.com" target="_blank">ramangupta16@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Hi All,<div><br></div><div><div>I am having a problem where if in GFS2 dual-Primary-DRBD Pacemaker Cluster, a node crashes then the running node hangs! The CLVM commands hang, the libvirt VM on running node hangs. </div><div><br></div><div>Env:</div><div>---------</div><div>CentOS 7.3</div><div>DRBD 8.4 </div><div>gfs2-utils-3.1.9-3.el7.x86_64<br></div><div>Pacemaker 1.1.15-11.el7_3.4<br></div><div>corosync-2.4.0-4.el7.x86_64<br></div><div><br></div><div><br></div><div>Infrastructure:</div><div>------------------------</div><div><div>1) Running A 2 node Pacemaker Cluster with proper fencing between the two. Nodes are server4 and server7.</div><div><br></div><div>2) Running DRBD dual-Primary and hosting GFS2 filesystem.</div><div><br></div><div>3) Pacemaker has DLM and cLVM resources configured among others.</div><div><br></div><div>4) A KVM/QEMU virtual machine is running on server4 which is holding the cluster resources.</div><div><br></div></div><div><br></div><div>Normal:</div><div>------------</div><div>5) In normal condition when the two nodes are completely UP then things are fine. The DRBD dual-primary works fine. The disk of VM is hosted on DRBD mount directory /backup and VM runs fine with Live Migration happily happening between the 2 nodes.</div><div><br></div><div><br></div><div>Problem:</div><div>----------------</div><div>6) Stop server7 [shutdown -h now] ---> LVM commands like pvdisplay hangs, VM runs only for 120s ---> After 120s DRBD/GFS2 panics (/var/log/messages below) in server4 and DRBD mount directory (/backup) becomes unavailable and VM hangs in server4. The DRBD though is fine on server4 and in Primary/Secondary mode in WFConnection state.<br></div><div><br></div><div>Mar 24 11:29:28 server4 crm-fence-peer.sh[54702]: invoked for vDrbd</div><div>Mar 24 11:29:28 server4 crm-fence-peer.sh[54702]: WARNING drbd-fencing could not determine the master id of drbd resource vDrbd</div><div><b>Mar 24 11:29:28 server4 kernel: drbd vDrbd: helper command: /sbin/drbdadm fence-peer vDrbd exit code 1 (0x100)</b></div><div><b>Mar 24 11:29:28 server4 kernel: drbd vDrbd: fence-peer helper broken, returned 1</b></div></div></div></blockquote><div><br></div><div>I guess this is the problem. 
> Mar 24 11:32:01 server4 kernel: INFO: task kworker/8:1H:822 blocked for more than 120 seconds.
> Mar 24 11:32:01 server4 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Mar 24 11:32:01 server4 kernel: kworker/8:1H D ffff880473796c18 0 822 2 0x00000080
> Mar 24 11:32:01 server4 kernel: Workqueue: glock_workqueue glock_work_func [gfs2]
> Mar 24 11:32:01 server4 kernel: ffff88027674bb10 0000000000000046 ffff8802736e9f60 ffff88027674bfd8
> Mar 24 11:32:01 server4 kernel: ffff88027674bfd8 ffff88027674bfd8 ffff8802736e9f60 ffff8804757ef808
> Mar 24 11:32:01 server4 kernel: 0000000000000000 ffff8804757efa28 ffff8804757ef800 ffff880473796c18
> Mar 24 11:32:01 server4 kernel: Call Trace:
> Mar 24 11:32:01 server4 kernel: [<ffffffff8168bbb9>] schedule+0x29/0x70
> Mar 24 11:32:01 server4 kernel: [<ffffffffa0714ce4>] drbd_make_request+0x2a4/0x380 [drbd]
> Mar 24 11:32:01 server4 kernel: [<ffffffff812e0000>] ? aes_decrypt+0x260/0xe10
> Mar 24 11:32:01 server4 kernel: [<ffffffff810b17d0>] ? wake_up_atomic_t+0x30/0x30
> Mar 24 11:32:01 server4 kernel: [<ffffffff812ee6f9>] generic_make_request+0x109/0x1e0
> Mar 24 11:32:01 server4 kernel: [<ffffffff812ee841>] submit_bio+0x71/0x150
> Mar 24 11:32:01 server4 kernel: [<ffffffffa063ee11>] gfs2_meta_read+0x121/0x2a0 [gfs2]
> Mar 24 11:32:01 server4 kernel: [<ffffffffa063f392>] gfs2_meta_indirect_buffer+0x62/0x150 [gfs2]
> Mar 24 11:32:01 server4 kernel: [<ffffffff810d2422>] ? load_balance+0x192/0x990
>
> 7) After server7 is up again, the Pacemaker cluster is started, DRBD is started and the logical volume is activated; only after that does the DRBD mount directory (/backup) become available again on server4 and the VM resume. So from the moment server7 goes down until it is completely back up, the VM on server4 hangs.
>
> Can anyone help with how to avoid the surviving node hanging when the other node crashes?
>
> Attaching the DRBD config file.

Do you actually have fencing configured in Pacemaker? Since you have the DRBD fencing policy set to "resource-and-stonith" you *must* have fencing set up in Pacemaker too. Have you also set no-quorum-policy="ignore" in Pacemaker? Maybe show us your Pacemaker config too, so we don't have to guess...
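To make that concrete, a minimal sketch of what "fencing set up in Pacemaker" would look like with pcs. IPMI fencing is only an example agent here, and the device names, addresses and credentials below are placeholders, not something taken from your setup:

    # two-node cluster properties (no-quorum-policy as asked about above)
    pcs property set no-quorum-policy=ignore
    pcs property set stonith-enabled=true

    # one fence device per node (fence_ipmilan and its parameters are examples only)
    pcs stonith create fence-server4 fence_ipmilan \
        pcmk_host_list="server4" ipaddr="10.0.0.14" login="admin" passwd="secret" lanplus=1
    pcs stonith create fence-server7 fence_ipmilan \
        pcmk_host_list="server7" ipaddr="10.0.0.17" login="admin" passwd="secret" lanplus=1

    # keep each fence device off the node it is meant to kill
    pcs constraint location fence-server4 avoids server4
    pcs constraint location fence-server7 avoids server7

    # verify both devices are configured and started
    pcs stonith show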
Not related to the problem, but I would also add the "after-resync-target" handler:

handlers {
        ...
        fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
        after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
}
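For context, in a DRBD 8.4 config the fencing policy and those handlers normally sit together roughly like this (just a sketch, your attached config will differ in the details):

    resource vDrbd {
            disk {
                    fencing resource-and-stonith;   # suspend I/O until the peer is fenced/outdated
            }
            handlers {
                    fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
                    after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
            }
            # ... net (allow-two-primaries), volumes, per-host sections, etc.
    }

With resource-and-stonith, DRBD suspends I/O while the fence-peer handler runs and only resumes once the peer is confirmed fenced or outdated, which is why a failing crm-fence-peer.sh turns a peer crash into a full freeze on the survivor.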
>
> --Raman