[DRBD-user] Disk Corruption = DRBD Failure?

Charles Kozler charles at fixflyer.com
Tue Oct 11 17:09:15 CEST 2011

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hi,

I have been reading the docs and am still unclear on a few things.

Assume I have a two-node setup with DRBD in Primary/Primary, with Xen 
writing to /dev/drbd0 on node1. I use Primary/Primary for live migration, 
and in my Xen DomU configuration file I use the phy: handler rather than 
the drbd: handler.
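For context, the relevant pieces of my setup look roughly like this (the 
resource name r0, hostnames, backing devices, and addresses below are 
placeholders, not my actual values):

    resource r0 {
        protocol C;                  # synchronous replication, required for dual-primary

        startup {
            become-primary-on both;  # bring both nodes up as Primary
        }

        net {
            allow-two-primaries;     # permit Primary/Primary
        }

        on node1 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   192.168.1.1:7789;
            meta-disk internal;
        }
        on node2 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   192.168.1.2:7789;
            meta-disk internal;
        }
    }

and in the DomU config the disk line points straight at the DRBD device 
via phy: instead of using the drbd: handler:

    # Xen DomU config: raw phy: access to the DRBD device
    disk = [ 'phy:/dev/drbd0,xvda,w' ]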

Now, what happens if the backing disk on node1 begins to fail and the 
blocks where /dev/drbd0 resides become corrupted while we continue to 
write to it? Will those bad/corrupted blocks be replicated to node2?

Example aside, in short, I am wondering whether a failing disk on one node 
will result in DRBD replicating bad block data to the peer node. I know 
there is a place in the docs describing an integrity checker that uses the 
kernel's crypto algorithms (such as md5), so maybe that is an option to 
prevent it?
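If I am reading that part of the docs correctly, it would mean adding 
something like the following to the r0 sketch above (md5 here is just an 
example; I assume any digest offered by the kernel crypto API would do):

    resource r0 {
        net {
            allow-two-primaries;
            # checksum every data block sent over the replication link,
            # so corruption in transit can be detected by the receiver
            data-integrity-alg md5;
        }
        # ... rest of the resource as sketched above ...
    }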

In either case, is there any way to prevent bad block data from node1 
being replicated to node2?

-- 
Regards,
Chuck Kozler
Lead Infrastructure & Systems Administrator
---
Office: 1-646-290-6267 | Mobile: 1-646-385-3684
FIX Flyer

