[Drbd-dev] drbdadm bug in dual-primary/corosync/ocfs cluster
Matthew Christ
matthew.j.christ at gmail.com
Thu Nov 10 07:53:52 CET 2011
Hello:

I'm testing a dual-primary DRBD cluster with corosync and OCFS2, and I get
the following bugcheck whenever a node is fenced, loses power, or loses
network connectivity. The kernel bugcheck shows up on the surviving node.
DRBD version: 8.3.10 (drbd-8.3.10.tar.gz).
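For reference, the relevant parts of my resource configuration look roughly
like this (a sketch only: the second hostname, backing devices, addresses,
and handler paths are placeholders, not my exact values):

    resource r0 {
      protocol C;
      net {
        allow-two-primaries;            # dual-primary for OCFS2
      }
      disk {
        fencing resource-and-stonith;   # 'resource-only' triggers it as well
      }
      handlers {
        fence-peer          "/usr/lib/drbd/crm-fence-peer.sh";
        after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
      }
      on vmhost0 {
        device    /dev/drbd0;
        disk      /dev/sda3;
        address   192.168.1.10:7788;
        meta-disk internal;
      }
      on vmhost1 {
        device    /dev/drbd0;
        disk      /dev/sda3;
        address   192.168.1.11:7788;
        meta-disk internal;
      }
    }

The trace below is from the surviving node (vmhost0):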
Nov 9 23:55:10 vmhost0 external/ipmi[28054]: debug: ipmitool output:
Chassis Power Control: Reset
Nov 9 23:55:11 vmhost0 kernel: [ 2391.883321] bnx2 0000:08:00.0:
eth0: NIC Copper Link is Down
Nov 9 23:55:11 vmhost0 kernel: [ 2391.902305] br1: port 1(eth0)
entering forwarding state
[...a bunch of crm fencing/resource migration crud snipped...]
Nov 9 23:55:14 vmhost0 kernel: [ 2394.029531] block drbd0: conn(
Unconnected -> WFConnection )
Nov 9 23:55:14 vmhost0 kernel: [ 2394.030015] BUG: unable to handle
kernel NULL pointer dereference at 0000000000000030
Nov 9 23:55:14 vmhost0 kernel: [ 2394.030019] IP:
[<ffffffff8143c48b>] sock_ioctl+0x1f/0x201
Nov 9 23:55:14 vmhost0 kernel: [ 2394.030027] PGD 215908067 PUD
21cda7067 PMD 0
Nov 9 23:55:14 vmhost0 kernel: [ 2394.030031] Oops: 0000 [#2] SMP
Nov 9 23:55:14 vmhost0 kernel: [ 2394.030033] last sysfs file:
/sys/fs/ocfs2/loaded_cluster_plugins
Nov 9 23:55:14 vmhost0 kernel: [ 2394.030036] CPU 2
Nov 9 23:55:14 vmhost0 kernel: [ 2394.030037] Modules linked in:
ocfs2 ocfs2_nodemanager ocfs2_stack_user ocfs2_stackglue dlm drbd
lru_cache ipt_MASQUERADE iptable_nat nf_nat nf_conntrack_ipv4
nf_defrag_ipv4 xt_state nf_conntrack ipt_REJECT iptable_mangle
iptable_filter ip_tables bnx2
Nov 9 23:55:14 vmhost0 kernel: [ 2394.030049]
Nov 9 23:55:14 vmhost0 kernel: [ 2394.030052] Pid: 28141, comm:
drbdadm Tainted: G D 2.6.38-gentoo-r6STI #4 Dell Inc.
PowerEdge 1950/0NK937
Nov 9 23:55:14 vmhost0 kernel: [ 2394.030057] RIP:
0010:[<ffffffff8143c48b>] [<ffffffff8143c48b>] sock_ioctl+0x1f/0x201
Nov 9 23:55:14 vmhost0 kernel: [ 2394.030061] RSP:
0018:ffff88021ce27e68 EFLAGS: 00010282
Nov 9 23:55:14 vmhost0 kernel: [ 2394.030063] RAX: 0000000000000000
RBX: 0000000000005401 RCX: 00007fff27903490
Nov 9 23:55:14 vmhost0 kernel: [ 2394.030065] RDX: 00007fff27903490
RSI: 0000000000005401 RDI: ffff88021ce69300
Nov 9 23:55:14 vmhost0 kernel: [ 2394.030067] RBP: ffff88021ce27e98
R08: 00007fff279034d0 R09: 00007ff44488ee60
Nov 9 23:55:14 vmhost0 kernel: [ 2394.030070] R10: 00007fff279032b0
R11: 0000000000000202 R12: 00000000ffffffe7
Nov 9 23:55:14 vmhost0 kernel: [ 2394.030072] R13: 00007fff27903490
R14: ffff88021e404840 R15: 0000000000000000
Nov 9 23:55:14 vmhost0 kernel: [ 2394.030075] FS:
00007ff444a91700(0000) GS:ffff8800cf900000(0000)
knlGS:0000000000000000
Nov 9 23:55:14 vmhost0 kernel: [ 2394.030077] CS: 0010 DS: 0000 ES:
0000 CR0: 0000000080050033
Nov 9 23:55:14 vmhost0 kernel: [ 2394.030080] CR2: 0000000000000030
CR3: 000000021afa6000 CR4: 00000000000006e0
Nov 9 23:55:14 vmhost0 kernel: [ 2394.030082] DR0: 0000000000000000
DR1: 0000000000000000 DR2: 0000000000000000
Nov 9 23:55:14 vmhost0 kernel: [ 2394.030084] DR3: 0000000000000000
DR6: 00000000ffff0ff0 DR7: 0000000000000400
Nov 9 23:55:14 vmhost0 kernel: [ 2394.030087] Process drbdadm (pid:
28141, threadinfo ffff88021ce26000, task ffff8802139fadc0)
Nov 9 23:55:14 vmhost0 kernel: [ 2394.030088] Stack:
Nov 9 23:55:14 vmhost0 kernel: [ 2394.030090] ffff88021ce27e78
0000000000000246 ffff88021ce69300 00000000ffffffe7
Nov 9 23:55:14 vmhost0 kernel: [ 2394.030094] 00007fff27903490
00007fff27903490 ffff88021ce27f28 ffffffff811286f4
Nov 9 23:55:14 vmhost0 kernel: [ 2394.030098] ffff88021ce27f78
ffffffff8151309f 0000000000000000 0000000000000000
Nov 9 23:55:14 vmhost0 kernel: [ 2394.030101] Call Trace:
Nov 9 23:55:14 vmhost0 kernel: [ 2394.030108] [<ffffffff811286f4>]
do_vfs_ioctl+0x484/0x4c5
Nov 9 23:55:14 vmhost0 kernel: [ 2394.030113] [<ffffffff8151309f>] ?
page_fault+0x1f/0x30
Nov 9 23:55:14 vmhost0 kernel: [ 2394.030116] [<ffffffff81128786>]
sys_ioctl+0x51/0x77
Nov 9 23:55:14 vmhost0 kernel: [ 2394.030121] [<ffffffff8102f93b>]
system_call_fastpath+0x16/0x1b
Nov 9 23:55:14 vmhost0 kernel: [ 2394.030123] Code: 00 5b 41 5c 41 5d
41 5e 41 5f c9 c3 55 48 89 e5 41 56 41 55 49 89 d5 41 54 53 89 f3 48
83 ec 10 4c 8b b7 a0 00 00 00 49 8b 46 20 <4c> 8b 60 30 8d 83 10 76 ff
ff 83 f8 0f 77 0d 4c 89 e7 e8 4e 3e
Nov 9 23:55:14 vmhost0 kernel: [ 2394.030143] RIP
[<ffffffff8143c48b>] sock_ioctl+0x1f/0x201
Nov 9 23:55:14 vmhost0 kernel: [ 2394.030146] RSP <ffff88021ce27e68>
Nov 9 23:55:14 vmhost0 kernel: [ 2394.030148] CR2: 0000000000000030
Nov 9 23:55:14 vmhost0 kernel: [ 2394.033648] ---[ end trace
16ec925abb8aa89d ]---
Nov 9 23:55:14 vmhost0 kernel: [ 2394.033727] block drbd0: helper
command: /sbin/drbdadm fence-peer minor-0 exit code 0 (0x9)
Nov 9 23:55:14 vmhost0 kernel: [ 2394.033730] block drbd0: fence-peer
helper broken, returned 0
Nov 9 23:55:14 vmhost0 kernel: [ 2394.250451] bnx2 0000:08:00.0:
eth0: NIC Copper Link is Up, 1000 Mbps full duplex, receive & transmit
flow control ON
* The bug only occurs when fencing is set to 'resource-only' or
'resource-and-stonith'.
* Bug occurs in Ubuntu 11.10 and Gentoo, using the stable versions of
DRBD provided by their respective repositories.
* Bug occurs with kernel versions 3.1, 3.0, 2.6.39, 2.6.38.
* Bug occurs both with the 'fence-peer' handler shipped by the distribution
packages and with the newest 'fence-peer' handler from your git repo.
Everything seems to work fine when I turn off DRBD's fencing, and I don't
strictly need it anyway since corosync handles STONITH.
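To be clear, by "turning off fencing" I mean going back to the DRBD default
in the disk section (again just sketching the relevant lines; 'r0' is an
example resource name):

    disk {
      fencing dont-care;   # default: no fence-peer callout, and no oops
    }

followed by "drbdadm adjust r0" on both nodes.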
Let me know if you need additional information.
Cheers,
Matt
-------------- next part --------------
A non-text attachment was scrubbed...
Name: drbd.bug
Type: application/octet-stream
Size: 16065 bytes
Desc: not available
URL: <http://lists.linbit.com/pipermail/drbd-dev/attachments/20111110/e1fcc5fe/attachment.obj>