Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
On Tue, Jul 08, 2014 at 10:44:47AM -0500, Ross Anderson wrote:
> Greetings,
>
> I will have to echo this issue. Upgraded to 3.15.4 and started seeing
> lots of these messages. System is Active/Active with external
> metadata. 10Gb link w/ HW SAS raid. Any help appreciated.
>
> Thanks,
> Ross
>
> I am using a DRBD active/active cluster as well and have LVM volumes
> on it. But I do not have any error messages, nor do I have any errors
> in my VMs, which are using qemu/kvm as well.
>
> I am not even sure what my messages are telling me. I am using an
> external metadata disk.
>
> Thomas
>
> On Monday 07 July 2014 11:46:24 Marcus Pereira wrote:
> >
> > I have the same problem. Yesterday I upgraded to kernel 3.15.3 and my
> > kernel.log is filling with these messages:
> >
> > Jul 7 11:23:05 virtx kernel: [55267.193678] block drbd39: meta_data io:
> > kworker/u65:1 [31208]:drbd_md_sync_page_io(,16s,WRITE)
> > _al_write_transaction+0x4f9/0xa00 [drbd]

These are DEBUG level messages and should have become "dynamic debug"
messages, switched off by default, as they have been before.
For no good reason they became not dynamic but "static", always-on,
debug messages. We'll fix that.

If you "roll your own", the patch below silences them:

diff --git a/drivers/block/drbd/drbd_int.h b/drivers/block/drbd/drbd_int.h
index a76ceb3..1e1b88f 100644
--- a/drivers/block/drbd/drbd_int.h
+++ b/drivers/block/drbd/drbd_int.h
@@ -132,8 +132,10 @@ void drbd_printk_with_wrong_object_type(void);
 		__drbd_printk_peer_device, level, fmt, ## args), \
 	drbd_printk_with_wrong_object_type()))))
 
+#if defined(DEBUG)
 #define drbd_dbg(obj, fmt, args...) \
 	drbd_printk(KERN_DEBUG, obj, fmt, ## args)
+#endif
 #define drbd_alert(obj, fmt, args...) \
 	drbd_printk(KERN_ALERT, obj, fmt, ## args)
 #define drbd_err(obj, fmt, args...) \

> > Seems to happen on all my drbd devices (currently 52 on this server)
> > and is affecting functionality.
> > I use an external meta-disk. Disk and meta-disk are LVM volumes, and
> > the blocks are used by VPSs running under kvm/libvirt/qemu.
> >
> > Some of my guest VPSs show errors like this:
> > Jul 7 09:09:51 webx kernel: [28863.199248] EXT4-fs warning (device
> > vda1): ext4_end_bio:317: I/O error writing to inode 1050005 (offset 0
> > size 0 starting block 4871811)
> > Jul 7 09:09:51 webx kernel: [28863.199278] Buffer I/O error on device
> > vda1, logical block 4871555

*that* is something else, and should not be caused by the above debug
messages.

> > Jul 7 09:09:51 webx kernel: [28863.199378] EXT4-fs warning (device
> > vda1): ext4_end_bio:317: I/O error writing to inode 1050005 (offset 0
> > size 0 starting block 4871814)
> > Jul 7 09:09:51 webx kernel: [28863.199381] Buffer I/O error on device
> > vda1, logical block 4871558
> > Jul 7 09:09:51 webx kernel: [28863.199383] Buffer I/O error on device
> > vda1, logical block 4871559
> >
> > And sometimes the guests remount the filesystem read-only due to
> > the errors.

-- 
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
__
please don't Cc me, but send to list -- I'm subscribed
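
(For reference: "dynamic debug" above refers to the kernel's
CONFIG_DYNAMIC_DEBUG facility. Below is a minimal sketch of the usual
gating pattern, modeled on pr_debug() in include/linux/printk.h -- an
illustration only, not the actual DRBD fix, and it drops the obj
device prefix for brevity:

/*
 * Sketch of the standard kernel debug-macro gating, not Lars's patch.
 * Three cases: dynamic (compiled in, off by default, runtime toggle),
 * DEBUG build (always on), normal build (compiled out).
 */
#if defined(CONFIG_DYNAMIC_DEBUG)
/* Registered with dynamic debug: present in the binary, OFF by
 * default; can be enabled per module/function/call site at runtime. */
#define drbd_dbg(obj, fmt, args...) \
	dynamic_pr_debug(fmt, ## args)
#elif defined(DEBUG)
/* DEBUG builds: always printed at KERN_DEBUG level. */
#define drbd_dbg(obj, fmt, args...) \
	drbd_printk(KERN_DEBUG, obj, fmt, ## args)
#else
/* Normal builds: expands to nothing at runtime, but no_printk()
 * still type-checks the format arguments at compile time. */
#define drbd_dbg(obj, fmt, args...) \
	no_printk(fmt, ## args)
#endif

With the dynamic variant, the messages stay compiled in but default to
off, and can be switched on at runtime through debugfs, e.g.:

  echo 'module drbd +p' > /sys/kernel/debug/dynamic_debug/control

which is why such messages are silent on a stock kernel unless you
explicitly ask for them.)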