[Drbd-dev] DRBD8: Panic in drbd_bm_write_sect() after an io error during resync.

Montrose, Ernest Ernest.Montrose at stratus.com
Fri Feb 16 18:42:56 CET 2007


Lars,
I will apply the patch and see what happens.
As for your last comment: the end_io handler definitely keeps ticking,
it seems, regardless of the reference count.

Thanks!
EM-- 

-----Original Message-----
From: drbd-dev-bounces at linbit.com [mailto:drbd-dev-bounces at linbit.com]
On Behalf Of Lars Ellenberg
Sent: Friday, February 16, 2007 12:32 PM
To: drbd-dev at linbit.com
Subject: Re: [Drbd-dev] DRBD8: Panic in drbd_bm_write_sect() after an
io error during resync.

/ 2007-02-16 09:55:12 -0500
\ Montrose, Ernest:
> Phil,
> Thanks!
> 
> I think all these panics on I/O errors are related to the same bug.
> 
> Your comments made me look at it from a different angle. Looking at
> the logs around the failure shows a problem on repeated I/O errors;
> the state machine is somewhat confused. It essentially goes from
> UpToDate -> Failed, which is fine, then from Failed -> Diskless,
> fine, then we go and wait for mdev->local_cnt to drop to zero, like
> you explained. Then we get more I/O errors, and our problem starts:
> we go from Diskless -> Failed again. (This does not seem correct,
> since we just left that state.)

even though I dislike our overall state engine design, it may be enough
to do

--- drbd/drbd_main.c    (revision 2754)
+++ drbd/drbd_main.c    (working copy)
@@ -604,6 +604,11 @@
                dec_local(mdev);
        }

+       /* If we are Diskless, we can only go to Attaching. */
+       if ( (os.disk == Diskless) && (ns.disk != Attaching) ) {
+               ns.disk = Diskless;
+       }
+
        /* Early state sanitising. Dissalow the invalidate ioctl to connect */
        if( (ns.conn == StartingSyncS || ns.conn == StartingSyncT) &&
                os.conn < Connected ) {


> Then Failed -> Diskless again.
> We get more I/O errors (not good).
> mdev->bc is eventually set to NULL.
> We go and wait again for mdev->local_cnt to be false (not good).
> Now we die an awful, ungodly death. :)
> 
> Here is the full log around the failure:
> Feb 15 16:01:57 captain kernel: end_request: I/O error, dev sda, sector 17554615
> Feb 15 16:01:57 captain kernel: drbd0: disk( UpToDate -> Failed )
> Feb 15 16:01:57 captain kernel: drbd0: Local IO failed. Detaching...
> Feb 15 16:01:57 captain kernel: drbd_io_error: EM--****** Handling an IO error***mdev->bc is valid***********************
> Feb 15 16:01:57 captain kernel: drbd0: disk( Failed -> Diskless )
> Feb 15 16:01:57 captain kernel: drbd0: Notified peer that my disk is broken.
> Feb 15 16:01:57 captain kernel: after_state_ch: EM-- *******Waiting for mdev->local_cnt to be FALSE ******
> Feb 15 16:01:57 captain kernel: end_request: I/O error, dev sda, sector 17554623
> Feb 15 16:01:57 captain kernel: drbd0: disk( Diskless -> Failed )

right. this is not allowed.

but this also means that our reference counting of in-flight local
requests is not ok, since once local_cnt is zero, there should be no
more in-flight requests to the local disk that might trigger the end_io
handler.

-- 
: Lars Ellenberg                            Tel +43-1-8178292-55 :
: LINBIT Information Technologies GmbH      Fax +43-1-8178292-82 :
: Vivenotgasse 48, A-1120 Vienna/Europe    http://www.linbit.com :
_______________________________________________
drbd-dev mailing list
drbd-dev at lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-dev
