<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=us-ascii">
<style type="text/css" style="display:none;"><!-- P {margin-top:0;margin-bottom:0;} --></style>
</head>
<body dir="ltr">
<div id="divtagdefaultwrapper" style="font-size:12pt;color:#000000;font-family:Calibri,Helvetica,sans-serif;" dir="ltr">
<p style="margin-top:0;margin-bottom:0">A bit of progress, but call traces are still being dumped in the logs. I waited for the full initial sync to finish, then I created the file system from a different node, ae-fs02, instead of ae-fs01. Initially, the command hung for a while, but it eventually succeeded. However, the following call traces were dumped:</p>
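<p style="margin-top:0;margin-bottom:0">(For reference, the "wait for the initial sync" step can be scripted by polling <span style="font-family:monospace">drbdadm status</span> until no peer still reports <span style="font-family:monospace">Inconsistent</span>. This is only a sketch; the parsing is text-based and is demonstrated here against a captured sample rather than live output:)</p>

```shell
# Sketch: a peer still syncing shows "peer-disk:Inconsistent" in
# `drbdadm status <resource>`; once resync finishes it reports UpToDate.
sync_done() {
    # succeeds when the status text reports no Inconsistent peer
    ! printf '%s\n' "$1" | grep -q 'Inconsistent'
}

# Sample taken from the status output shown later in this thread:
status='ae-fs02 role:Secondary
  replication:SyncSource peer-disk:Inconsistent done:5.12'

if sync_done "$status"; then
    echo "sync complete"
else
    echo "still syncing"    # prints: still syncing
fi
```

<p style="margin-top:0;margin-bottom:0">(Against a live cluster one would run this in a loop around <span style="font-family:monospace">drbdadm status test</span> with a sleep between polls.)</p>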
<p style="margin-top:0;margin-bottom:0"><br>
</p>
<p style="margin-top:0;margin-bottom:0"><span style="font-family:monospace">[ 5687.457691] drbd test: role( Secondary -> Primary )
<br>
[ 5882.661739] INFO: task mkfs.xfs:80231 blocked for more than 120 seconds. <br>
[ 5882.661770] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
<br>
[ 5882.661796] mkfs.xfs D ffff9df559b1cf10 0 80231 8839 0x00000080 <br>
[ 5882.661800] Call Trace: <br>
[ 5882.661807] [<ffffffffb7d12f49>] schedule+0x29/0x70 <br>
[ 5882.661809] [<ffffffffb7d108b9>] schedule_timeout+0x239/0x2c0 <br>
[ 5882.661819] [<ffffffffc08a1f1b>] ? drbd_make_request+0x23b/0x360 [drbd] <br>
[ 5882.661824] [<ffffffffb76f76e2>] ? ktime_get_ts64+0x52/0xf0 <br>
[ 5882.661826] [<ffffffffb7d1245d>] io_schedule_timeout+0xad/0x130 <br>
[ 5882.661828] [<ffffffffb7d1357d>] wait_for_completion_io+0xfd/0x140 <br>
[ 5882.661833] [<ffffffffb76cee80>] ? wake_up_state+0x20/0x20 <br>
[ 5882.661837] [<ffffffffb792308c>] blkdev_issue_discard+0x2ac/0x2d0 <br>
[ 5882.661843] [<ffffffffb792c141>] blk_ioctl_discard+0xd1/0x120 <br>
[ 5882.661845] [<ffffffffb792cc12>] blkdev_ioctl+0x5e2/0x9b0 <br>
[ 5882.661849] [<ffffffffb7859691>] block_ioctl+0x41/0x50 <br>
[ 5882.661854] [<ffffffffb782fb90>] do_vfs_ioctl+0x350/0x560 <br>
[ 5882.661857] [<ffffffffb77ccc77>] ? do_munmap+0x317/0x470 <br>
[ 5882.661859] [<ffffffffb782fe41>] SyS_ioctl+0xa1/0xc0 <br>
[ 5882.661862] [<ffffffffb7d1f7d5>] system_call_fastpath+0x1c/0x21 <br>
[ 6002.650486] INFO: task mkfs.xfs:80231 blocked for more than 120 seconds. <br>
[ 6002.650514] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
<br>
[ 6002.650539] mkfs.xfs D ffff9df559b1cf10 0 80231 8839 0x00000080 <br>
[ 6002.650543] Call Trace: <br>
[ 6002.650550] [<ffffffffb7d12f49>] schedule+0x29/0x70 <br>
[ 6002.650552] [<ffffffffb7d108b9>] schedule_timeout+0x239/0x2c0 <br>
[ 6002.650563] [<ffffffffc08a1f1b>] ? drbd_make_request+0x23b/0x360 [drbd] <br>
[ 6002.650567] [<ffffffffb76f76e2>] ? ktime_get_ts64+0x52/0xf0 <br>
[ 6002.650569] [<ffffffffb7d1245d>] io_schedule_timeout+0xad/0x130 <br>
[ 6002.650571] [<ffffffffb7d1357d>] wait_for_completion_io+0xfd/0x140 <br>
[ 6002.650575] [<ffffffffb76cee80>] ? wake_up_state+0x20/0x20 <br>
[ 6002.650579] [<ffffffffb792308c>] blkdev_issue_discard+0x2ac/0x2d0 <br>
[ 6002.650582] [<ffffffffb792c141>] blk_ioctl_discard+0xd1/0x120 <br>
[ 6002.650585] [<ffffffffb792cc12>] blkdev_ioctl+0x5e2/0x9b0 <br>
[ 6002.650588] [<ffffffffb7859691>] block_ioctl+0x41/0x50 <br>
[ 6002.650591] [<ffffffffb782fb90>] do_vfs_ioctl+0x350/0x560 <br>
[ 6002.650594] [<ffffffffb77ccc77>] ? do_munmap+0x317/0x470 <br>
[ 6002.650596] [<ffffffffb782fe41>] SyS_ioctl+0xa1/0xc0 <br>
[ 6002.650599] [<ffffffffb7d1f7d5>] system_call_fastpath+0x1c/0x21 <br>
[ 6122.639403] INFO: task mkfs.xfs:80231 blocked for more than 120 seconds. <br>
[ 6122.639426] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
<br>
[ 6122.639451] mkfs.xfs D ffff9df559b1cf10 0 80231 8839 0x00000080 <br>
[ 6122.639455] Call Trace: <br>
[ 6122.639463] [<ffffffffb7d12f49>] schedule+0x29/0x70 <br>
[ 6122.639465] [<ffffffffb7d108b9>] schedule_timeout+0x239/0x2c0 <br>
[ 6122.639476] [<ffffffffc08a1f1b>] ? drbd_make_request+0x23b/0x360 [drbd] <br>
[ 6122.639480] [<ffffffffb76f76e2>] ? ktime_get_ts64+0x52/0xf0 <br>
[ 6122.639482] [<ffffffffb7d1245d>] io_schedule_timeout+0xad/0x130 <br>
[ 6122.639484] [<ffffffffb7d1357d>] wait_for_completion_io+0xfd/0x140 <br>
[ 6122.639489] [<ffffffffb76cee80>] ? wake_up_state+0x20/0x20 <br>
[ 6122.639493] [<ffffffffb792308c>] blkdev_issue_discard+0x2ac/0x2d0 <br>
[ 6122.639496] [<ffffffffb792c141>] blk_ioctl_discard+0xd1/0x120 <br>
[ 6122.639499] [<ffffffffb792cc12>] blkdev_ioctl+0x5e2/0x9b0 <br>
[ 6122.639501] [<ffffffffb7859691>] block_ioctl+0x41/0x50 <br>
[ 6122.639504] [<ffffffffb782fb90>] do_vfs_ioctl+0x350/0x560 <br>
[ 6122.639507] [<ffffffffb77ccc77>] ? do_munmap+0x317/0x470 <br>
[ 6122.639509] [<ffffffffb782fe41>] SyS_ioctl+0xa1/0xc0 <br>
[ 6122.639512] [<ffffffffb7d1f7d5>] system_call_fastpath+0x1c/0x21<br>
<br>
</span><br>
</p>
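<p style="margin-top:0;margin-bottom:0">(Worth noting: every one of the traces above ends in <span style="font-family:monospace">blkdev_issue_discard</span>, so the stall looks discard-related rather than a general I/O hang. A way to test that theory — this is a sketch, not a confirmed fix, and the device name <span style="font-family:monospace">drbd100</span> is taken from the minor number in the volume listing — is to check what the device advertises and, if needed, skip the discard pass:)</p>

```shell
# Check whether the block device advertises discard support at all.
# A zvol-backed DRBD device that advertises discard but services it
# very slowly would match the hung-task reports above.
dev=drbd100
if [ -r "/sys/block/$dev/queue/discard_max_bytes" ]; then
    cat "/sys/block/$dev/queue/discard_max_bytes"   # 0 means no discard
else
    echo "no such device: $dev"
fi

# If discard is the culprit, skipping it should let mkfs complete:
#   mkfs.xfs -K /dev/drbd100    # -K = do not discard blocks at mkfs time
```
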
<p style="margin-top:0;margin-bottom:0">This does not look normal to me. Do you think this is a RHEL 7.5-specific issue?</p>
<p style="margin-top:0;margin-bottom:0"><br>
</p>
<p style="margin-top:0;margin-bottom:0"><span style="font-family:monospace"># cat /etc/redhat-release <br>
Red Hat Enterprise Linux Server release 7.5 (Maipo)<br>
</span><span style="font-family:monospace"># uname -a <br>
Linux ae-fs02 3.10.0-862.el7.x86_64 #1 SMP Wed Mar 21 18:14:51 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux<br>
<br>
</span>Diego</p>
<div id="Signature">
<div id="divtagdefaultwrapper" style="font-size: 12pt; color: rgb(0, 0, 0); background-color: rgb(255, 255, 255); font-family: Calibri, Arial, Helvetica, sans-serif, EmojiFont, 'Apple Color Emoji', 'Segoe UI Emoji', NotoColorEmoji, 'Segoe UI Symbol', 'Android Emoji', EmojiSymbols;">
</div>
</div>
</div>
<hr style="display:inline-block;width:98%" tabindex="-1">
<div id="divRplyFwdMsg" dir="ltr"><font face="Calibri, sans-serif" style="font-size:11pt" color="#000000"><b>From:</b> Remolina, Diego J<br>
<b>Sent:</b> Wednesday, May 2, 2018 12:33:44 PM<br>
<b>To:</b> Roland Kammerer; drbd-user@lists.linbit.com<br>
<b>Subject:</b> Re: [DRBD-user] New 3-way drbd setup does not seem to take i/o</font>
<div> </div>
</div>
<div dir="ltr">
<div id="x_divtagdefaultwrapper" dir="ltr" style="font-size:12pt; color:#000000; font-family:Calibri,Helvetica,sans-serif">
<p style="margin-top:0; margin-bottom:0">Dear Roland,</p>
<p style="margin-top:0; margin-bottom:0"><br>
</p>
<p style="margin-top:0; margin-bottom:0">I cleared the current cluster configuration with drbdmanage uninit on all nodes, manually cleared the zvol from the ZFS pool, rebooted the servers, and started fresh.</p>
<p style="margin-top:0; margin-bottom:0"><br>
</p>
<p style="margin-top:0; margin-bottom:0">Once again there is a hang when I try to create an XFS filesystem on top of the drbd device. I do see some panics in the logs (scroll all the way to the end):</p>
<p style="margin-top:0; margin-bottom:0"><br>
</p>
<p style="margin-top:0; margin-bottom:0"><a href="http://termbin.com/b5u3" class="x_OWAAutoLink" id="LPlnk876059">http://termbin.com/b5u3</a><br>
</p>
<p style="margin-top:0; margin-bottom:0"><br>
</p>
<p style="margin-top:0; margin-bottom:0">I am running this on RHEL 7.5 on kernel: <span style="font-family:monospace">3.10.0-862.el7.x86_64<br>
</span></p>
<p style="margin-top:0; margin-bottom:0"><span style="font-family:monospace"><br>
</span></p>
<p style="margin-top:0; margin-bottom:0"><span style="font-family:monospace"><span>Am I hitting a bug? Is it possible the problem is that I am not waiting for the initial sync to finish? I have already upgraded the kernel module to the latest 9.0.14, announced today.</span><br>
</span></p>
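<p style="margin-top:0; margin-bottom:0">(As a sanity check after a kmod upgrade — this is only a sketch, assuming the module is named <span style="font-family:monospace">drbd</span> — one can confirm the running kernel actually picked up the upgraded module rather than an older one:)</p>

```shell
# `modinfo` reports the version of the module on disk for the running
# kernel; /proc/drbd reports the version that is currently loaded.
v=$(modinfo -F version drbd 2>/dev/null || echo "not installed")
echo "on-disk module version: $v"
head -1 /proc/drbd 2>/dev/null || echo "drbd module not currently loaded"
```

<p style="margin-top:0; margin-bottom:0">(If the two disagree, a reboot or <span style="font-family:monospace">rmmod/modprobe</span> cycle is needed before the new module takes effect.)</p>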
<p style="margin-top:0; margin-bottom:0"><span style="font-family:monospace"><br>
</span></p>
<p style="margin-top:0; margin-bottom:0"><span style="font-family:monospace"><span style="font-family:monospace"># rpm -qa |grep kmod-drbd</span></span></p>
<p style="margin-top:0; margin-bottom:0"><span style="font-family:monospace"><span style="font-family:monospace"></span></span><span style="font-family:monospace; font-size:12pt">kmod-</span><span style="font-family:monospace; font-size:12pt; font-weight:bold; color:rgb(255,84,84)">drbd</span><span style="font-family:monospace; font-size:12pt">-9.0.14_3.10.0_862-1.el7.x86_64</span><span style="font-family:monospace"><span style="font-family:monospace"><br>
</span></span></p>
<p style="margin-top:0; margin-bottom:0"><span style="font-family:monospace; font-size:12pt"><br>
</span></p>
<p style="margin-top:0; margin-bottom:0"><span style="font-family:monospace; font-size:12pt"><span style="font-family:monospace">[root@ae-fs01 tmp]# drbdmanage list-nodes
<br>
+------------------------------------------------------------------------------------------------------------+
<br>
| <span style="color:rgb(24,178,178)">Name</span> | <span style="color:rgb(178,104,24)">
Pool Size</span> | <span style="color:rgb(178,104,24)">Pool Free</span> | |
<span style="color:rgb(24,178,24)">State</span> | <br>
|------------------------------------------------------------------------------------------------------------|
<br>
| <span style="color:rgb(24,178,178)">ae-fs01</span> | <span style="color:rgb(178,104,24)">13237248</span> | <span style="color:rgb(178,104,24)">12730090</span> | | <span style="color:rgb(24,178,24)">ok</span>
| <br>
| <span style="color:rgb(24,178,178)">ae-fs02</span> | <span style="color:rgb(178,104,24)">13237248</span> | <span style="color:rgb(178,104,24)">12730095</span> | | <span style="color:rgb(24,178,24)">ok</span>
| <br>
| <span style="color:rgb(24,178,178)">ae-fs03</span> | <span style="color:rgb(178,104,24)">13237248</span> | <span style="color:rgb(178,104,24)">12730089</span> | | <span style="color:rgb(24,178,24)">ok</span>
| <br>
+------------------------------------------------------------------------------------------------------------+
<br>
[root@ae-fs01 tmp]# drbdmanage list-volumes <br>
+------------------------------------------------------------------------------------------------------------+
<br>
| <span style="color:rgb(24,178,178)">Name</span> | <span style="color:rgb(178,104,24)">
Vol ID</span> | <span style="color:rgb(178,104,24)">Size</span> | <span style="color:rgb(178,104,24)">
Minor</span> | | <span style="color:rgb(24,178,24)">
State</span> | <br>
|------------------------------------------------------------------------------------------------------------|
<br>
| <span style="color:rgb(24,178,178)">test</span> | <span style="color:rgb(178,104,24)">0</span> |
<span style="color:rgb(178,104,24)">465.66 GiB</span> | <span style="color:rgb(178,104,24)">100</span> | | <span style="color:rgb(24,178,24)">ok</span> |
<br>
+------------------------------------------------------------------------------------------------------------+
<br>
[root@ae-fs01 tmp]# drbdadm status <br>
.drbdctrl role:<span style="font-weight:bold; color:rgb(84,255,255)">Primary</span>
<br>
volume:0 disk:<span style="font-weight:bold; color:rgb(84,255,84)">UpToDate</span>
<br>
volume:1 disk:<span style="font-weight:bold; color:rgb(84,255,84)">UpToDate</span>
<br>
ae-fs02 role:Secondary <br>
volume:0 peer-disk:<span style="color:rgb(24,178,24)">UpToDate</span> <br>
volume:1 peer-disk:<span style="color:rgb(24,178,24)">UpToDate</span> <br>
ae-fs03 role:Secondary <br>
volume:0 peer-disk:<span style="color:rgb(24,178,24)">UpToDate</span> <br>
volume:1 peer-disk:<span style="color:rgb(24,178,24)">UpToDate</span> <br>
<br>
test role:<span style="font-weight:bold; color:rgb(84,255,255)">Primary</span> <br>
disk:<span style="font-weight:bold; color:rgb(84,255,84)">UpToDate</span> <br>
ae-fs02 role:Secondary <br>
replication:<span style="color:rgb(178,24,24)">SyncSource</span> peer-disk:<span style="color:rgb(178,24,24)">Inconsistent</span> done:5.12
<br>
ae-fs03 role:Secondary <br>
replication:<span style="color:rgb(178,24,24)">SyncSource</span> peer-disk:<span style="color:rgb(178,24,24)">Inconsistent</span> done:5.15<br>
<br>
</span></span></p>
<p style="margin-top:0; margin-bottom:0"><span style="font-family:monospace; font-size:12pt">Thanks,</span><br>
</p>
<p style="margin-top:0; margin-bottom:0"><span style="font-family:monospace"><br>
</span></p>
<p style="margin-top:0; margin-bottom:0"><span style="font-family:monospace">Diego</span></p>
<div id="x_Signature">
<div id="x_divtagdefaultwrapper" style="font-size:12pt; color:rgb(0,0,0); background-color:rgb(255,255,255); font-family:Calibri,Arial,Helvetica,sans-serif,EmojiFont,"Apple Color Emoji","Segoe UI Emoji",NotoColorEmoji,"Segoe UI Symbol","Android Emoji",EmojiSymbols">
</div>
</div>
</div>
<hr tabindex="-1" style="display:inline-block; width:98%">
<div id="x_divRplyFwdMsg" dir="ltr"><font face="Calibri, sans-serif" color="#000000" style="font-size:11pt"><b>From:</b> drbd-user-bounces@lists.linbit.com &lt;drbd-user-bounces@lists.linbit.com&gt; on behalf of Roland Kammerer &lt;roland.kammerer@linbit.com&gt;<br>
<b>Sent:</b> Wednesday, May 2, 2018 2:30:54 AM<br>
<b>To:</b> drbd-user@lists.linbit.com<br>
<b>Subject:</b> Re: [DRBD-user] New 3-way drbd setup does not seem to take i/o</font>
<div> </div>
</div>
<div class="x_BodyFragment"><font size="2"><span style="font-size:11pt">
<div class="x_PlainText">On Tue, May 01, 2018 at 04:14:52PM +0000, Remolina, Diego J wrote:<br>
> Hi, was wondering if you could guide me as to what could be the issue here. I configured 3 servers with drbdmanage-0.99.16-1 and drbd-9.3.1-1 and related packages.<br>
> <br>
> <br>
> I created a zfs pool, then used the zfs2.Zfs2 plugin and created a<br>
> resource. All seems fine, up to the point when I want to test the<br>
> resource and create a file system in it. At that point, if I try to<br>
> create say an XFS filesystem, things freeze. If I create a ZFS pool on<br>
> the drbd device, the creation succeeds, but then I cannot write or<br>
> read from that.<br>
> <br>
> <br>
> # zfs list<br>
> NAME USED AVAIL REFER MOUNTPOINT<br>
> mainpool 11.6T 1.02T 24K none<br>
> mainpool/export_00 11.6T 12.6T 7.25G -<br>
> <br>
> The plugin configuration:<br>
> [GLOBAL]<br>
> <br>
> [Node:ae-fs01]<br>
> storage-plugin = drbdmanage.storage.zvol2.Zvol2<br>
> <br>
> [Plugin:Zvol2]<br>
> volume-group = mainpool<br>
> <br>
> <br>
> # drbdmanage list-nodes<br>
> +------------------------------------------------------------------------------------------------------------+<br>
> | Name | Pool Size | Pool Free | | State |<br>
> |------------------------------------------------------------------------------------------------------------|<br>
> | ae-fs01 | 13237248 | 1065678 | | ok |<br>
> | ae-fs02 | 13237248 | 1065683 | | ok |<br>
> | ae-fs03 | 13237248 | 1065672 | | ok |<br>
> +------------------------------------------------------------------------------------------------------------+<br>
> <br>
> <br>
> # drbdmanage list-volumes<br>
> +------------------------------------------------------------------------------------------------------------+<br>
> | Name | Vol ID | Size | Minor | | State |<br>
> |------------------------------------------------------------------------------------------------------------|<br>
> | export | 0 | 10.91 TiB | 106 | | ok |<br>
> +------------------------------------------------------------------------------------------------------------+<br>
> <br>
> But trying to make one node primary and creating a file system, either<br>
> a new zfs pool for data or an XFS file system, fails.<br>
> <br>
> <br>
> # drbdadm primary export<br>
> # drbdadm status<br>
> .drbdctrl role:Secondary<br>
> volume:0 disk:UpToDate<br>
> volume:1 disk:UpToDate<br>
> ae-fs02 role:Primary<br>
> volume:0 peer-disk:UpToDate<br>
> volume:1 peer-disk:UpToDate<br>
> ae-fs03 role:Secondary<br>
> volume:0 peer-disk:UpToDate<br>
> volume:1 peer-disk:UpToDate<br>
> <br>
> export role:Primary<br>
> disk:UpToDate<br>
> ae-fs02 role:Secondary<br>
> peer-disk:UpToDate<br>
> ae-fs03 role:Secondary<br>
> peer-disk:UpToDate<br>
> <br>
> # zpool create export /dev/drbd106<br>
> # zfs set compression=lz4 export<br>
> # ls /export<br>
> ls: reading directory /export: Not a directory<br>
> <br>
> If I destroy the pool and try to format /dev/drbd106 as XFS, it just<br>
> hangs forever. Any ideas as to what is happening?<br>
<br>
Carving out zvols which are then used by DRBD should work. Putting<br>
another zfs/zpool on top might have its quirks, especially with<br>
auto-promote. And maybe the failed XFS was then a follow-up problem.<br>
<br>
So start with something easier:<br>
create a small (like 10M) resource with DM and then try to create the<br>
XFS on it (without the additional zfs steps).<br>
<br>
Regards, rck<br>
_______________________________________________<br>
drbd-user mailing list<br>
drbd-user@lists.linbit.com<br>
<a href="http://lists.linbit.com/mailman/listinfo/drbd-user">http://lists.linbit.com/mailman/listinfo/drbd-user</a><br>
</div>
</span></font></div>
</div>
</body>
</html>