[DRBD-user] two questions about manually "drbdadm detach all"

Lars Ellenberg lars.ellenberg at linbit.com
Mon Jan 12 10:15:57 CET 2009

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


you need to be subscribed here for your posts to go through.
if you are not subscribed, your posts may get lost.

On Thu, Jan 08, 2009 at 11:14:33PM +0800, yls wrote:
> According to the introduction at http://www.drbd.org/docs/about/, there
> are two paragraphs about Diskless and detach:
>     on {Disk states}: "Diskless. No local block device has been assigned
> to the DRBD driver. This may mean that the resource has never attached
> to its backing device, that it has been manually detached using drbdadm
> detach, or that it automatically detached after a lower-level I/O
> error."
>    on {Disk error handling strategies}: "Masking I/O errors.  If DRBD is
> configured to detach on lower-level I/O error, DRBD will do so,
> automatically, upon occurrence of the first lower-level I/O error. The
> I/O error is masked from upper layers while DRBD transparently fetches
> the affected block from the peer node, over the network. From then
> onwards, DRBD is said to operate in diskless mode, and carries out all
> subsequent I/O operations, read and write, on the peer node. Performance
> in this mode is inevitably expected to suffer, but the service continues
> without interruption, and can be moved to the peer node in a deliberate
> fashion at a convenient time."
> 
> Here are my questions:
> (1) can I manually do "drbdadm detach all" on the primary node of the
> cluster?

yes.
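
for example (resource name "r0" and the /proc/drbd format of drbd 8 are
assumptions here; 0.7's output looks slightly different):

  # detach the local backing device on the primary; the resource
  # stays up, and I/O is served by the peer over the network
  drbdadm detach all        # or just one resource: drbdadm detach r0

  # the disk state should now read Diskless
  cat /proc/drbd            # drbd 8: look for "ds:Diskless/UpToDate"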

> I want to simulate some kind of disk failure in this way. 

not good.
none of the error handling paths would be exercised: a manual detach
bypasses the I/O-error handling entirely.

if you want to simulate some kind of I/O error, put a device-mapper
target between DRBD and the lower-level device; I think there are
targets that arbitrarily and intentionally cause all sorts of weird
behaviour, from delays to outright errors.
or use the generic I/O fault injection framework of the Linux kernel.
or even the DRBD-internal fault injection
(/sys/module/drbd/parameters/*fault*).
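
a minimal sketch of the device-mapper approach (backing device /dev/sdb1
and resource name r0 are assumptions; do this on a test box):

  # wrap the real backing device in a "linear" mapping first
  SZ=$(blockdev --getsz /dev/sdb1)
  echo "0 $SZ linear /dev/sdb1 0" | dmsetup create r0-lower
  # point "disk /dev/mapper/r0-lower;" at it in drbd.conf and attach

  # later, swap in an "error" target: every I/O to it now fails,
  # which actually exercises drbd's on-io-error handling
  dmsetup suspend r0-lower
  echo "0 $SZ error" | dmsetup reload r0-lower
  dmsetup resume r0-lower

and the drbd-internal variant, if your module was built with fault
injection enabled:

  # bitmask of fault types, and a percentage rate; see the drbd
  # source for which bit maps to which I/O path
  echo 16 > /sys/module/drbd/parameters/enable_faults
  echo 10 > /sys/module/drbd/parameters/fault_rate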

> (2) if the DRBD version is 0.7.24, can I expect "From then onwards, DRBD
> is said to operate in diskless mode, and carries out all subsequent I/O
> operations, read and write, on the peer node. " ??

yes of course.
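
you can see it for yourself: with the local disk detached, I/O on the
drbd device still completes, served by the peer (device name is an
assumption, and the write below clobbers data, so use a scratch
resource):

  dd if=/dev/zero of=/dev/drbd0 bs=4k count=1 oflag=direct
  dd if=/dev/drbd0 of=/dev/null bs=4k count=1 iflag=direct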

> are there any differences in this COMMAND between version 0.7.24 and
> 8.0/later??

drbd 0.7 will not let you re-_attach_ on a Primary; you'd have to make
it secondary first.

you will _always_ get a full sync if you detach a primary.
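
roughly (resource name r0 assumed):

  # drbd 0.7: demote before re-attaching
  drbdadm secondary r0
  drbdadm attach r0
  cat /proc/drbd            # a full resync of the device follows
  drbdadm primary r0        # promote again once it suits you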

0.7 is known to have races and bad behaviour when handling I/O errors
under load.

actually, the last bug I know of relating to these detach/attach code
paths was only fixed in the most recent drbd 8.3.0 release.

please don't even bother looking at 0.7.

-- 
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
__
please don't Cc me, but send to list   --   I'm subscribed


