Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
On 2011-10-13 20:00, Jojy Varghese wrote:
> Hi
>
> We are testing drbd for our storage cluster and had a question
> about the behavior we are seeing. We have layered drbd on top of a
> device mapper layer. When we simulate a block error using the dm
> layer, we see that the requests for those particular blocks are
> forwarded to the peer node. We are using the 8.3.x version of drbd.
> The documentation says that the default behavior is to take out the
> defective node even if there is 1 block error.

Er, no. It never was, and the documentation (at least the User's Guide)
never said so.

- Prior to 8.4, the default behavior was to simply not do anything
  about the I/O error and pass it right up to the calling layer, which
  was expected to handle it.
  http://www.drbd.org/users-guide-legacy/s-configure-io-error-behavior.html

- Since 8.4, the default behavior is to transparently read from or
  write to the affected block on the peer node, "detaching" from the
  local (faulty) device and masking the I/O error.
  http://www.drbd.org/users-guide/s-configure-io-error-behavior.html

Removal from the cluster in case of an I/O error (by way of a
deliberate kernel panic) was an option in 0.7, and can still be
configured via a local-io-error handler (a config sketch follows at the
end of this message). If it was ever the default, that would have been
prior to 0.7 -- i.e. long before I started working with DRBD, so I
wouldn't know.

Hope this helps.

Cheers,
Florian

-- 
Need help with DRBD? http://www.hastexo.com/now
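
To make the above concrete, here is a minimal drbd.conf sketch showing
where these behaviors are configured; the resource name and the handler
script path are placeholders, not anything from the original post:

    resource r0 {
      disk {
        # "detach" is the 8.4 default: mask the local I/O error and
        # serve the affected blocks from the peer. Before 8.4 the
        # default was "pass_on", i.e. hand the error back to the
        # calling layer unchanged.
        on-io-error detach;

        # To invoke the handler below instead, use:
        # on-io-error call-local-io-error;
      }
      handlers {
        # Only invoked with "call-local-io-error". The script path is
        # hypothetical; this is where a deliberate panic or reboot
        # could be wired in if node removal is really what you want.
        local-io-error "/usr/local/sbin/handle-io-error.sh";
      }
      # (remaining resource configuration omitted)
    }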