On 3 Nov 2011, at 10:32, Florian Haas wrote:

>> I'm now finding myself a cable to connect these hosts directly.
>
> That may or may not solve your issue; the original motivation for the
> data integrity feature was to catch issues with NICs, not cables.

Used a new cable, directly connected the hosts, and disabled checksum
offloading on the NICs too. For posterity, here's what I did:

ethtool -k eth3

.. to check the current status, and

ethtool -K eth3 rx off
ethtool -K eth3 tx off
ethtool -K eth3 gso off

.. to disable offloading.

A couple of minutes later, I got the error again:

block drbd3: Digest integrity check FAILED.

It only appears to be happening on drbd3 and drbd5, and it's always
kvm-host-02 that reports the failure. I guess that's because the virtual
machines using those resources write more often; kvm-host-02 is the
Secondary for them. It recovers quickly and resynchronises itself. Will
the process writing to the Primary experience IO wait when this happens?
I'm using protocol C. Will keep monitoring.

> Have we mentioned storage replication is complicated? :)

I'm getting the picture.... :-)

>> I guess the syntax has changed. How do I enable this in Lucid's DRBD 8.3.7?
>
> It belongs in the "syncer" section in 8.3.x.
>
> http://www.drbd.org/users-guide-legacy/s-use-online-verify.html

Thanks. Verified my disks - as expected, all is well. I'm planning on
triggering a verify every week for all of my volumes out of cron.

Cheers,
Nick
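
P.S. For anyone finding this in the archives, here's roughly what the
two pieces look like. This is a sketch, not my exact files: the resource
name "r0", the choice of sha1, and the cron path/schedule are all
assumptions.

```shell
# --- drbd.conf fragment (DRBD 8.3.x) ---
# Online verify is enabled per-resource via "verify-alg" in the
# "syncer" section; sha1 is one commonly available kernel digest.
#
# resource r0 {
#     syncer {
#         verify-alg sha1;
#     }
# }

# --- /etc/cron.d/drbd-verify (hypothetical path and schedule) ---
# Run an online verify of every configured resource early on Sundays.
# 42 4 * * 0  root  /sbin/drbdadm verify all
```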