Hello, I'm using OCFS2 on top of LVM on top of DRBD in a KVM virtual server and have been fighting kernel bugs (OCFS2, virtio) for weeks. Now that things seem to be starting to work, I have found something else that is weird.

Kernel: 126.96.36.199
DRBD: 8.3.4

When starting DRBD, I see thousands of errors like these:

  [ 4962.247701] end_request: I/O error, dev vda, sector 0
  [ 4962.263811] end_request: I/O error, dev vda, sector 0
  [ 4962.276360] end_request: I/O error, dev vda, sector 0

DRBD does not make use of that device. I found a bug report about the kernel and barriers here:

  https://bugzilla.redhat.com/show_bug.cgi?id=514901

which seems to state that virtio_blk (the driver I'm using) had problems like this, apparently related to barriers not being supported by virtio_blk.

The machine was working well, so I rule out errors on the disk itself (it would probably be another device and another sector, not vda, which is not used, and not sector zero, which contains the MBR and works fine).

Playing with DRBD, I tried both with and without "no-disk-barrier" in the disk section, but this has no effect.

Assuming that
- the patches fixing that error should have gone into 2.6.31-rc1 (I'm using 188.8.131.52), and
- with 184.108.40.206 + DRBD 8.3.3 everything was working well,

what I'm asking is: should I submit a bug report to LKML, or is this something related to DRBD?

The attachment shows an excerpt of the log (drbdadm up resource && drbdadm down all). The DRBD configuration is probably not relevant; I tried all sorts of combinations of the disk options in order to move forward.

Max

P.S.: I should mention that the same configuration on a non-LVM partition does work OK.
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: ioerr.txt
URL: <http://lists.linbit.com/pipermail/drbd-user/attachments/20091014/088478d8/attachment.txt>
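[Editor's note: for readers trying to reproduce the barrier experiment described above, a minimal sketch of a DRBD 8.3 disk section with barriers (and, optionally, flushes) disabled is shown below. The resource name, hostname, device paths, and address are placeholders, not the poster's actual configuration.]

```
resource r0 {
  disk {
    # DRBD 8.3 syntax: tell DRBD not to use write barriers on the
    # backing device; "no-disk-flushes" additionally disables flushes.
    no-disk-barrier;
    no-disk-flushes;
  }
  on node1 {
    device    /dev/drbd0;
    disk      /dev/vg0/lv_drbd;   # LVM logical volume as backing store
    address   192.168.1.1:7788;
    meta-disk internal;
  }
}
```

After changing the disk section, the options can be re-applied with "drbdadm adjust r0" (or by taking the resource down and up again, as in the log excerpt).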