Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
Hello, Lars,
Just to clarify: are you recommending 8.3.12 over 8.4.0 for all production use, or only when using multiple volumes per resource?
Thanks,
Steve Roth
-----Original Message-----
From: drbd-user-bounces at lists.linbit.com [mailto:drbd-user-bounces at lists.linbit.com] On Behalf Of Lars Ellenberg
Sent: Tuesday, December 13, 2011 03:52
To: drbd-user at lists.linbit.com
Subject: Re: [DRBD-user] DRBD list_add corruption and hang
On Fri, Dec 09, 2011 at 12:57:38AM +0000, Newman, Brad wrote:
> I'm using DRBD 8.4.0 with Heartbeat and Pacemaker on CentOS 6.
> I've repeatedly encountered an issue while setting up DRBD - based on the logs, it appears to be caused by a corrupted linked list.
Yes, 8.4.0 is seriously broken,
at least if you actually try to use multiple volumes per resource.
We are working on the 8.4.1 release.
I recommend staying with 8.3.x (8.3.12 at the time of writing) in production for the time being.
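For reference, a rough and untested sketch of what your two-volume resource might look like when split back into 8.3-style single-volume resources: 8.3 has no "volume" keyword, so each volume becomes its own resource on its own replication port. The resource names and the second port below are only placeholders:

    resource files {
        protocol  C;                       # in 8.3, protocol is a resource-level statement
        device    /dev/drbd1;
        disk      /dev/vg01/lv_files_rep;
        meta-disk internal;
        # handlers/net/startup/disk sections and the floating addresses
        # carry over, keeping the existing port 7789
    }

    resource db {
        protocol  C;
        device    /dev/drbd2;
        disk      /dev/vg01/lv_db_rep;
        meta-disk internal;
        # same sections again, but on a port of its own, e.g. 7790
    }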
> Sometimes the problem occurs while the two nodes are syncing, but more
> often DRBD completes syncing and then fails after some apparently
> random interval (but in half a dozen trials, the failure occurred
> every time, and generally in less than an hour).
>
> The problem manifests itself as the primary node reporting the
> connection state as ProtocolError, and the standby reporting its
> connection state as WFConnection. Once the state on the primary is
> ProtocolError, DRBD appears to hang. I/O to the replicated volume
> also hangs, and the kernel produces "hung task" messages. Demoting
> and stopping DRBD on the primary times out, and I've been forced to
> `reboot -f` to recover the node.
>
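As a rough illustration only (the exact commands are not in the report; the resource name foo comes from the config below), "demoting and stopping DRBD" on the primary typically means something like:

    cat /proc/drbd          # overall status; shows the ProtocolError / WFConnection states
    drbdadm cstate foo      # per-resource connection state
    drbdadm secondary foo   # demote - this is what times out in the failure described
    drbdadm down foo        # disconnect and detach - also hangs
    reboot -f               # the last resort actually used to recover the node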
> The connection state change to ProtocolError is always accompanied by
> kernel oops messages about "list_add corruption". However, I've seen
> several different stack traces with the oops message - so I assume
> that the stack traces are from where the problem is being detected,
> not from the code that's corrupting the list.
>
> Has anyone seen this before - is this a known issue that's already
> been fixed? Is this being caused by a problem with my configuration?
> In my searches I found a couple references to issues that look
> somewhat similar, but they were rather old and did not seem to have
> any resolutions. Any help would be greatly appreciated.
>
> Thanks,
>
> Brad Newman
>
> uname:
> Linux str00057.cup.hp.com 2.6.32-71.29.1.el6.x86_64 #1 SMP Mon Jun 27
> 19:49:27 BST 2011 x86_64 x86_64 x86_64 GNU/Linux (CentOS 6.0)
>
> DRBD version:
> version: 8.4.0 (api:1/proto:86-100)
>
> drbd.conf:
> resource foo {
>     handlers {
>         fence-peer          "/usr/lib/drbd/crm-fence-peer.sh";
>         after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
>         pri-on-incon-degr   "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
>         pri-lost-after-sb   "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
>         local-io-error      "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
>     }
>     volume 0 {
>         device    /dev/drbd1;
>         disk      /dev/vg01/lv_files_rep;
>         meta-disk internal;
>     }
>     volume 1 {
>         device    /dev/drbd2;
>         disk      /dev/vg01/lv_db_rep;
>         meta-disk internal;
>     }
>     net {
>         protocol      C;
>         sndbuf-size   0;
>         after-sb-0pri discard-least-changes;
>         after-sb-1pri discard-secondary;
>         after-sb-2pri disconnect;
>     }
>     startup {
>         wfc-timeout          60;
>         degr-wfc-timeout     10;
>         outdated-wfc-timeout 10;
>     }
>     disk {
>         fencing resource-only;
>     }
>     floating ipv6 [fe80::250:56ff:fea0:1bb%eth1]:7789;
>     floating ipv6 [fe80::250:56ff:fea0:12c%eth1]:7789;
> }
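For context, a minimal sketch (assuming the usual drbdadm workflow, which is not spelled out in the report) of how a two-volume resource like this is typically brought up on first setup, which is the phase during which the failures below were triggered:

    drbdadm create-md foo          # initialize metadata; in 8.4 this covers all volumes of the resource
    drbdadm up foo                 # attach the backing disks and connect to the peer
    drbdadm primary --force foo    # on one node only: promote and start the initial full sync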
>
> And here's the log output from one of the trials:
> Dec 8 12:52:02 str00057 kernel: d-con foo: Handshake successful: Agreed network protocol version 100
> Dec 8 12:52:02 str00057 kernel: d-con foo: conn( WFConnection -> WFReportParams )
> Dec 8 12:52:02 str00057 kernel: d-con foo: Starting asender thread (from drbd_r_foo [11258])
> Dec 8 12:52:02 str00057 kernel: block drbd1: max BIO size = 1048576
> Dec 8 12:52:02 str00057 kernel: block drbd1: drbd_sync_handshake:
> Dec 8 12:52:02 str00057 kernel: block drbd1: self 77A9D37AA2C609DD:0000000000000004:0000000000000000:0000000000000000 bits:786399 flags:0
> Dec 8 12:52:02 str00057 kernel: block drbd1: peer 0000000000000004:0000000000000000:0000000000000000:0000000000000000 bits:786399 flags:0
> Dec 8 12:52:02 str00057 kernel: block drbd1: uuid_compare()=2 by rule 30
> Dec 8 12:52:02 str00057 kernel: block drbd1: Becoming sync source due to disk states.
> Dec 8 12:52:02 str00057 kernel: block drbd1: Writing the whole bitmap, full sync required after drbd_sync_handshake.
> Dec 8 12:52:02 str00057 kernel: block drbd1: bitmap WRITE of 24 pages took 5 jiffies
> Dec 8 12:52:02 str00057 kernel: block drbd1: 3072 MB (786399 bits) marked out-of-sync by on disk bit-map.
> Dec 8 12:52:02 str00057 kernel: block drbd1: peer( Unknown -> Secondary ) conn( WFReportParams -> WFBitMapS ) pdsk( DUnknown -> Inconsistent )
> Dec 8 12:52:02 str00057 kernel: block drbd1: Resumed AL updates
> Dec 8 12:52:02 str00057 kernel: block drbd1: send bitmap stats [Bytes(packets)]: plain 0(0), RLE 21(1), total 21; compression: 100.0%
> Dec 8 12:52:02 str00057 kernel: block drbd2: max BIO size = 1048576
> Dec 8 12:52:02 str00057 kernel: block drbd2: drbd_sync_handshake:
> Dec 8 12:52:02 str00057 kernel: block drbd2: self 4999C5E69B17CBF9:0000000000000004:0000000000000000:0000000000000000 bits:2097079 flags:0
> Dec 8 12:52:02 str00057 kernel: block drbd2: peer 0000000000000004:0000000000000000:0000000000000000:0000000000000000 bits:2097079 flags:0
> Dec 8 12:52:02 str00057 kernel: block drbd2: uuid_compare()=2 by rule 30
> Dec 8 12:52:02 str00057 kernel: block drbd2: Becoming sync source due to disk states.
> Dec 8 12:52:02 str00057 kernel: block drbd2: Writing the whole bitmap, full sync required after drbd_sync_handshake.
> Dec 8 12:52:02 str00057 kernel: block drbd2: bitmap WRITE of 64 pages took 4 jiffies
> Dec 8 12:52:02 str00057 kernel: block drbd2: 8192 MB (2097079 bits) marked out-of-sync by on disk bit-map.
> Dec 8 12:52:02 str00057 kernel: block drbd2: peer( Unknown -> Secondary ) conn( WFReportParams -> WFBitMapS ) pdsk( DUnknown -> Inconsistent )
> Dec 8 12:52:02 str00057 kernel: block drbd2: Resumed AL updates
> Dec 8 12:52:02 str00057 kernel: block drbd2: send bitmap stats [Bytes(packets)]: plain 0(0), RLE 21(1), total 21; compression: 100.0%
> Dec 8 12:52:02 str00057 kernel: block drbd1: receive bitmap stats [Bytes(packets)]: plain 0(0), RLE 21(1), total 21; compression: 100.0%
> Dec 8 12:52:02 str00057 kernel: block drbd1: helper command: /sbin/drbdadm before-resync-source minor-1
> Dec 8 12:52:02 str00057 kernel: block drbd1: helper command: /sbin/drbdadm before-resync-source minor-1 exit code 0 (0x0)
> Dec 8 12:52:02 str00057 kernel: block drbd1: conn( WFBitMapS -> SyncSource )
> Dec 8 12:52:02 str00057 kernel: block drbd1: Began resync as SyncSource (will sync 3145596 KB [786399 bits set]).
> Dec 8 12:52:02 str00057 kernel: block drbd2: receive bitmap stats [Bytes(packets)]: plain 0(0), RLE 21(1), total 21; compression: 100.0%
> Dec 8 12:52:02 str00057 kernel: block drbd2: helper command: /sbin/drbdadm before-resync-source minor-2
> Dec 8 12:52:02 str00057 kernel: block drbd1: updated sync UUID 77A9D37AA2C609DD:0001000000000004:0000000000000004:0000000000000000
> Dec 8 12:52:02 str00057 kernel: block drbd2: helper command: /sbin/drbdadm before-resync-source minor-2 exit code 0 (0x0)
> Dec 8 12:52:02 str00057 kernel: block drbd2: conn( WFBitMapS -> SyncSource )
> Dec 8 12:52:02 str00057 kernel: block drbd2: Began resync as SyncSource (will sync 8388316 KB [2097079 bits set]).
> Dec 8 12:52:02 str00057 kernel: block drbd2: updated sync UUID 4999C5E69B17CBF9:0001000000000004:0000000000000004:0000000000000000
> Dec 8 12:52:14 str00057 kernel: ------------[ cut here ]------------
> Dec 8 12:52:14 str00057 kernel: WARNING: at lib/list_debug.c:30 __list_add+0x8f/0xa0() (Not tainted)
> Dec 8 12:52:14 str00057 kernel: Hardware name: VMware Virtual Platform
> Dec 8 12:52:14 str00057 kernel: list_add corruption. prev->next should be next (ffff88010d57f1e8), but was ffff880233cb3480. (prev=ffff880233cb3480).
> Dec 8 12:52:14 str00057 kernel: Modules linked in: drbd(U) libcrc32c sit tunnel4 ipt_REJECT nf_conntrack_ipv4 nf_defrag_ipv4 iptable_filter ip_tables ip6t_REJECT nf_conntrack_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables ipv6 vsock(U) ext3 jbd xfs exportfs dm_mirror dm_region_hash dm_log ppdev parport_pc parport e1000 vmware_balloon vmci(U) i2c_piix4 i2c_core sg shpchp ext4 mbcache jbd2 sr_mod cdrom sd_mod crc_t10dif ata_generic pata_acpi ata_piix vmw_pvscsi dm_mod [last unloaded: microcode]
> Dec 8 12:52:14 str00057 kernel: Pid: 11258, comm: drbd_r_foo Not tainted 2.6.32-71.29.1.el6.x86_64 #1
> Dec 8 12:52:14 str00057 kernel: Call Trace:
> Dec 8 12:52:14 str00057 kernel: <IRQ> [<ffffffff8106b947>] warn_slowpath_common+0x87/0xc0
> Dec 8 12:52:14 str00057 kernel: [<ffffffff8106ba36>] warn_slowpath_fmt+0x46/0x50
> Dec 8 12:52:14 str00057 kernel: [<ffffffff8126967f>] __list_add+0x8f/0xa0
> Dec 8 12:52:14 str00057 kernel: [<ffffffffa03e9dcd>] drbd_endio_read_sec_final+0xad/0x3b0 [drbd]
> Dec 8 12:52:14 str00057 kernel: [<ffffffff811a2cf4>] ? bio_free+0x64/0x70
> Dec 8 12:52:14 str00057 kernel: [<ffffffffa03ea280>] drbd_peer_request_endio+0x1b0/0x800 [drbd]
> Dec 8 12:52:14 str00057 kernel: [<ffffffff810563f3>] ? __wake_up+0x53/0x70
> Dec 8 12:52:14 str00057 kernel: [<ffffffff811a19ad>] bio_endio+0x1d/0x40
> Dec 8 12:52:14 str00057 kernel: [<ffffffffa0001514>] dec_pending+0xe4/0x1e0 [dm_mod]
> Dec 8 12:52:14 str00057 kernel: [<ffffffffa00018af>] clone_endio+0x9f/0xd0 [dm_mod]
> Dec 8 12:52:14 str00057 kernel: [<ffffffff811a19ad>] bio_endio+0x1d/0x40
> Dec 8 12:52:14 str00057 kernel: [<ffffffff812403cb>] req_bio_endio+0xab/0x110
> Dec 8 12:52:14 str00057 kernel: [<ffffffff8124140f>] blk_update_request+0xff/0x440
> Dec 8 12:52:14 str00057 kernel: [<ffffffff81241777>] blk_update_bidi_request+0x27/0x80
> Dec 8 12:52:14 str00057 kernel: [<ffffffff8124284f>] blk_end_bidi_request+0x2f/0x80
> Dec 8 12:52:14 str00057 kernel: [<ffffffff812428f0>] blk_end_request+0x10/0x20
> Dec 8 12:52:14 str00057 kernel: [<ffffffff8134b0bf>] scsi_io_completion+0xaf/0x550
> Dec 8 12:52:14 str00057 kernel: [<ffffffff813424b2>] scsi_finish_command+0xc2/0x130
> Dec 8 12:52:14 str00057 kernel: [<ffffffff8134b6cd>] scsi_softirq_done+0x14d/0x170
> Dec 8 12:52:14 str00057 kernel: [<ffffffff81247e25>] blk_done_softirq+0x85/0xa0
> Dec 8 12:52:14 str00057 kernel: [<ffffffff81073d67>] __do_softirq+0xb7/0x1e0
> Dec 8 12:52:14 str00057 kernel: [<ffffffff81095c50>] ? hrtimer_interrupt+0x140/0x250
> Dec 8 12:52:14 str00057 kernel: [<ffffffff810142cc>] call_softirq+0x1c/0x30
> Dec 8 12:52:14 str00057 kernel: [<ffffffff81015f35>] do_softirq+0x65/0xa0
> Dec 8 12:52:14 str00057 kernel: [<ffffffff81073b65>] irq_exit+0x85/0x90
> Dec 8 12:52:14 str00057 kernel: [<ffffffff814d0a31>] smp_apic_timer_interrupt+0x71/0x9c
> Dec 8 12:52:14 str00057 kernel: [<ffffffff81013c93>] apic_timer_interrupt+0x13/0x20
> Dec 8 12:52:14 str00057 kernel: <EOI> [<ffffffff814cbab7>] ? _spin_unlock_irqrestore+0x17/0x20
> Dec 8 12:52:14 str00057 kernel: [<ffffffff8134286c>] scsi_dispatch_cmd+0x1bc/0x340
> Dec 8 12:52:14 str00057 kernel: [<ffffffff8134a1c5>] scsi_request_fn+0x425/0x5c0
> Dec 8 12:52:14 str00057 kernel: [<ffffffff81240bc2>] __generic_unplug_device+0x32/0x40
> Dec 8 12:52:14 str00057 kernel: [<ffffffff8123a7c9>] elv_insert+0x79/0x1f0
> Dec 8 12:52:14 str00057 kernel: [<ffffffff8123a9b2>] __elv_add_request+0x72/0xd0
> Dec 8 12:52:14 str00057 kernel: [<ffffffff812435af>] __make_request+0x12f/0x4c0
> Dec 8 12:52:14 str00057 kernel: [<ffffffff81241982>] generic_make_request+0x1b2/0x4f0
> Dec 8 12:52:14 str00057 kernel: [<ffffffff811a26c0>] ? __bio_add_page+0x110/0x230
> Dec 8 12:52:14 str00057 kernel: [<ffffffffa03f160c>] drbd_submit_peer_request+0x1dc/0x2c0 [drbd]
> Dec 8 12:52:14 str00057 kernel: [<ffffffffa03f7128>] receive_DataRequest+0x488/0x710 [drbd]
> Dec 8 12:52:14 str00057 kernel: [<ffffffffa03efd36>] ? drbd_recv_all+0x16/0x40 [drbd]
> Dec 8 12:52:14 str00057 kernel: [<ffffffffa03f0608>] drbdd+0xb8/0x180 [drbd]
> Dec 8 12:52:14 str00057 kernel: [<ffffffffa03f6c30>] drbdd_init+0x40/0xb0 [drbd]
> Dec 8 12:52:14 str00057 kernel: [<ffffffffa0405a39>] drbd_thread_setup+0xa9/0x160 [drbd]
> Dec 8 12:52:14 str00057 kernel: [<ffffffff810141ca>] child_rip+0xa/0x20
> Dec 8 12:52:14 str00057 kernel: [<ffffffffa0405990>] ? drbd_thread_setup+0x0/0x160 [drbd]
> Dec 8 12:52:14 str00057 kernel: [<ffffffff810141c0>] ? child_rip+0x0/0x20
> Dec 8 12:52:14 str00057 kernel: ---[ end trace 649ecfd255e04a60 ]---
> Dec 8 12:52:14 str00057 kernel: d-con foo: BAD! BarrierAck #3293967630 received with n_writes=1, expected n_writes=2!
> Dec 8 12:52:14 str00057 kernel: d-con foo: peer( Secondary -> Unknown ) conn( SyncSource -> ProtocolError )
And so on.
--
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com
DRBD(r) and LINBIT(r) are registered trademarks of LINBIT, Austria.
__
please don't Cc me, but send to list -- I'm subscribed
_______________________________________________
drbd-user mailing list
drbd-user at lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user