Hi,

unfortunately, I have encountered some rather serious problems.

While running a series of I/O benchmarks / stress tests, I got the following lockup:

Dec 22 17:53:19 host kernel: BUG: soft lockup detected on CPU#5!
Dec 22 17:53:19 host kernel:
Dec 22 17:53:19 host kernel: Call Trace:
Dec 22 17:53:19 host kernel: <IRQ> [<ffffffff802a360e>] softlockup_tick+0xdb/0xed
Dec 22 17:53:19 host kernel: [<ffffffff8028783c>] update_process_times+0x42/0x68
Dec 22 17:53:19 host kernel: [<ffffffff8026c30d>] smp_local_timer_interrupt+0x23/0x47
Dec 22 17:53:19 host kernel: [<ffffffff8026ca01>] smp_apic_timer_interrupt+0x41/0x47
Dec 22 17:53:19 host kernel: [<ffffffff8025878a>] apic_timer_interrupt+0x66/0x6c
Dec 22 17:53:19 host kernel: <EOI> [<ffffffff8835478a>] :xfs:xfs_trans_update_ail+0x78/0xcd
Dec 22 17:53:19 host kernel: [<ffffffff88353961>] :xfs:xfs_trans_chunk_committed+0x9f/0xe4
Dec 22 17:53:19 host kernel: [<ffffffff883539f0>] :xfs:xfs_trans_committed+0x4a/0xdd
Dec 22 17:53:19 host kernel: [<ffffffff88349830>] :xfs:xlog_state_do_callback+0x173/0x31c
Dec 22 17:53:19 host kernel: [<ffffffff88360868>] :xfs:xfs_buf_iodone_work+0x0/0x37
Dec 22 17:53:19 host kernel: [<ffffffff88349ac1>] :xfs:xlog_iodone+0xe8/0x10b
Dec 22 17:53:19 host kernel: [<ffffffff80249152>] run_workqueue+0x94/0xe5
Dec 22 17:53:19 host kernel: [<ffffffff80245aec>] worker_thread+0x0/0x122
Dec 22 17:53:19 host kernel: [<ffffffff8028f823>] keventd_create_kthread+0x0/0x61
Dec 22 17:53:19 host kernel: [<ffffffff80245bdc>] worker_thread+0xf0/0x122
Dec 22 17:53:19 host kernel: [<ffffffff8027c8e1>] default_wake_function+0x0/0xe
Dec 22 17:53:19 host kernel: [<ffffffff8028f823>] keventd_create_kthread+0x0/0x61
Dec 22 17:53:19 host kernel: [<ffffffff8028f823>] keventd_create_kthread+0x0/0x61
Dec 22 17:53:19 host kernel: [<ffffffff8023057c>] kthread+0xd4/0x107
Dec 22 17:53:19 host kernel: [<ffffffff80258aa0>] child_rip+0xa/0x12
Dec 22 17:53:19 host kernel: [<ffffffff8028f823>] keventd_create_kthread+0x0/0x61
Dec 22 17:53:19 host kernel: [<ffffffff802304a8>] kthread+0x0/0x107
Dec 22 17:53:19 host kernel: [<ffffffff80258a96>] child_rip+0x0/0x12

This happened on two CPU cores at the same time. The system is still responsive, but the affected xfs and pdflush threads have entered state D and cannot be stopped. The filesystem cannot be unmounted, nor can the system be shut down gracefully.

The question is whether this could be related to DRBD. I am getting more and more convinced that the issue is due to the "certified" SCSI driver not working properly, but I just want to rule out that DRBD is involved.
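For reference, this is roughly how the stuck threads can be inspected further. Only a sketch: it assumes the magic SysRq interface is available on this kernel and that the commands are run as root.

  # List tasks stuck in uninterruptible sleep (state D) and the kernel
  # function they are currently waiting in
  ps axo pid,stat,wchan:32,comm | awk '$2 ~ /^D/'

  # Enable SysRq (if needed), dump all task stacks to the kernel log,
  # then read them back from the ring buffer
  echo 1 > /proc/sys/kernel/sysrq
  echo t > /proc/sysrq-trigger
  dmesg | tail -n 200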
Thanks,

  Thomas


On 19.12.2008, at 20:44, Thomas Reinhold wrote:

> On 18.12.2008, at 17:27, Lars Ellenberg wrote:
>
>> On Thu, Dec 18, 2008 at 04:46:10PM +0100, Thomas Reinhold wrote:
>>> Hi,
>>>
>>> I've done a little further testing and ran DRBD directly on top of the
>>> raid set (without using dm_crypt). Still got the same disk flush errors
>>> with flushing enabled.
>>>
>>> So can I assume that either the lower-level scsi driver megaraid_sas
>>> (Debian 2.6.18.6-amd64) or the raid controller (LSI MegaRaid 1078) does
>>> not support flushing?
>>
>> absolutely.
>>
>>> And another question: can disabling flushing in DRBD cause any problems
>>> other than data corruption at power loss?
>>
>> I'd say "no" if you promise not to sue me in case I'm wrong.
>
> How could I sue you for using a free product? I would have to buy a
> support contract first ;-)
>
> Anyway, thanks for your help! I have disabled the raid controller cache
> for now, as I dislike the idea of having too much data in the cache
> (even though we are using a UPS).
>
> The disk caches are still enabled, however, as the performance impact of
> disabling both caches would be too great. We'll see how that works with
> XFS.
>
> If we encounter any problems, I'll get back to the list.
>
> Regards,
>
>   Thomas
>
>> --
>> : Lars Ellenberg
>> : LINBIT | Your Way to High Availability
>> : DRBD/HA support and consulting http://www.linbit.com
>>
>> DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
>> __
>> please don't Cc me, but send to list -- I'm subscribed
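For completeness: the flush behaviour discussed in the quoted thread is configured per resource in the disk section of drbd.conf. The following is a minimal sketch, not the exact configuration used here; the resource name r0 is made up, and the option names vary between DRBD 8.x releases, so check man drbd.conf for the version in use.

  # drbd.conf (excerpt), hypothetical resource name
  resource r0 {
    disk {
      # Do not issue cache flushes to the backing device or to the
      # meta-data area. Only reasonable with a battery-backed controller
      # cache, or with the volatile write caches switched off.
      no-disk-flushes;
      no-md-flushes;
    }
  }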
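On the controller side, the cache settings mentioned above are typically inspected and changed with LSI's MegaCli utility. Again only a sketch, assuming MegaCli is installed; the flag spelling differs between MegaCli versions, so verify with MegaCli -h before running anything.

  # Show the current cache policy of all logical drives on all adapters
  MegaCli -LDGetProp -Cache -LAll -aAll

  # Switch the logical drives to write-through, i.e. stop caching writes
  # in the controller
  MegaCli -LDSetProp WT -LAll -aAll

  # Show, and keep enabled, the write caches of the physical disks
  # behind the controller
  MegaCli -LDGetProp -DskCache -LAll -aAll
  MegaCli -LDSetProp -EnDskCache -LAll -aAll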