[DRBD-user] Kernel hung on DRBD / MD RAID

Andreas Bauer ab at voltage.de
Tue Feb 21 00:03:57 CET 2012

From:	Lars Ellenberg <lars.ellenberg at linbit.com>
Sent:	Mon 20-02-2012 23:14

> On Mon, Feb 20, 2012 at 10:16:50PM +0100, Andreas Bauer wrote:
> > The underlying device should never get stuck in the first place, so it
> > would be sufficient to handle it manually when it happens. But when I
> > "force-detach", the DRBD device would change to be readonly correct?
> Not as long as the peer is still reachable and up-to-date.

Just to make sure I understand it correctly...

So when vm-master is Primary, vm-slave is Secondary, and I force-detach the backing device on vm-master, will DRBD automatically make vm-slave the Primary and direct writes to that host?

> > I guess a running VM on top of it wouldn't like that.
> > 
> > Can DRBD 8.3.11 force-detach manually?
> I think 8.3.12 got that feature. It may still not cover all corner cases.
> For a manually forced detach while IO is already stuck on the lower level 
> device,
> you need to "drbdadm detach --force" the first time you try, or the
> "polite" detach may get stuck itself and prevent the later --force.
> And you may or may not need an additional "resume-io" to actually get
> all "hung" IO to be -EIO-ed on that level.
> They may still be completed OK to upper layers (file system),
> if the peer was reachable and up-to-date.
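For my own notes, the sequence described above would look something like the following. This is an untested sketch based on the quoted explanation; "r0" is a placeholder resource name, and the exact behavior depends on the DRBD version (8.3.12+ for the forced detach of already-stuck IO):

```shell
# Force-detach the stuck backing device. Use --force on the first
# attempt -- a "polite" detach may itself get stuck behind the hung
# IO and then block a later --force.
drbdadm detach --force r0

# Possibly needed afterwards: complete the hung IO with -EIO on the
# lower level. Upper layers (the file system) may still see these
# requests complete OK if the peer is reachable and up-to-date.
drbdadm resume-io r0

# The resource should now be running diskless off the peer.
drbdadm dstate r0
```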

Thanks. I will see whether I get a chance to upgrade at some point.

If this hang happens again, I will hopefully have a chance to play around and investigate further.
