[DRBD-user] Out of memory error when invoking fence-handler

Digimer lists at alteeve.ca
Sun Nov 9 22:05:52 CET 2014

CentOS 6.6, DRBD 8.3.16.

So this sucked:

After rebooting and restoring, I retried and got the same result a 
second time. I then moved my VMs to the other node, crash-tested that 
node, and again saw the "out of mem, failed to invoke fence-peer 
helper" message. Finally, I rebooted both nodes; I haven't yet tested 
whether that resolved the issue.

Anyone seen this before?
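For context, fencing is wired up the usual resource-and-stonith way. This 
is a from-memory sketch, not a paste of my actual config; the resource 
name and handler path are placeholders:

```
# /etc/drbd.d/r0.res -- illustrative fragment only, names are placeholders
resource r0 {
    disk {
        # Suspend I/O and call the fence-peer handler on loss of the peer
        fencing resource-and-stonith;
    }
    handlers {
        # Handler that asks RHCS (cman/fenced) to fence the peer;
        # install path may differ on your system
        fence-peer "/sbin/rhcs_fence";
    }
}
```

The "out of mem" message below happens before this handler is ever 
spawned, so the handler itself never runs on the first attempt.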

====
Nov  9 15:18:40 fea-c01n01 kernel: block drbd0: PingAck did not arrive 
in time.
Nov  9 15:18:40 fea-c01n01 kernel: block drbd0: peer( Primary -> Unknown 
) conn( Connected -> NetworkFailure ) pdsk( UpToDate -> DUnknown ) susp( 
0 -> 1 )
Nov  9 15:18:40 fea-c01n01 kernel: block drbd0: asender terminated
Nov  9 15:18:40 fea-c01n01 kernel: block drbd0: Terminating drbd0_asender
Nov  9 15:18:40 fea-c01n01 kernel: block drbd0: Connection closed

*** Nov  9 15:18:40 fea-c01n01 kernel: block drbd0: out of mem, failed 
to invoke fence-peer helper

Nov  9 15:18:40 fea-c01n01 kernel: block drbd0: conn( NetworkFailure -> 
Unconnected )
Nov  9 15:18:40 fea-c01n01 kernel: block drbd0: receiver terminated
Nov  9 15:18:40 fea-c01n01 kernel: block drbd0: Restarting drbd0_receiver
Nov  9 15:18:40 fea-c01n01 kernel: block drbd0: receiver (re)started
Nov  9 15:18:40 fea-c01n01 kernel: block drbd0: conn( Unconnected -> 
WFConnection )
Nov  9 15:18:42 fea-c01n01 corosync[3256]:   [TOTEM ] A processor 
failed, forming new configuration.
Nov  9 15:18:42 fea-c01n01 kernel: block drbd1: PingAck did not arrive 
in time.
Nov  9 15:18:42 fea-c01n01 kernel: block drbd1: peer( Primary -> Unknown 
) conn( Connected -> NetworkFailure ) pdsk( UpToDate -> DUnknown ) susp( 
0 -> 1 )
Nov  9 15:18:42 fea-c01n01 kernel: block drbd1: asender terminated
Nov  9 15:18:42 fea-c01n01 kernel: block drbd1: Terminating drbd1_asender
Nov  9 15:18:42 fea-c01n01 kernel: block drbd1: Connection closed
Nov  9 15:18:42 fea-c01n01 kernel: block drbd1: out of mem, failed to 
invoke fence-peer helper
Nov  9 15:18:42 fea-c01n01 kernel: block drbd1: conn( NetworkFailure -> 
Unconnected )
Nov  9 15:18:42 fea-c01n01 kernel: block drbd1: receiver terminated
Nov  9 15:18:42 fea-c01n01 kernel: block drbd1: Restarting drbd1_receiver
Nov  9 15:18:42 fea-c01n01 kernel: block drbd1: receiver (re)started
Nov  9 15:18:42 fea-c01n01 kernel: block drbd1: conn( Unconnected -> 
WFConnection )
Nov  9 15:18:44 fea-c01n01 corosync[3256]:   [QUORUM] Members[1]: 1
Nov  9 15:18:44 fea-c01n01 corosync[3256]:   [TOTEM ] A processor joined 
or left the membership and a new membership was formed.
Nov  9 15:18:44 fea-c01n01 kernel: dlm: closing connection to node 2
Nov  9 15:18:44 fea-c01n01 kernel: GFS2: fsid=fea-cluster-01:shared.1: 
jid=0: Trying to acquire journal lock...
Nov  9 15:18:44 fea-c01n01 fenced[3323]: fencing node fea-c01n02.feaind.com
Nov  9 15:18:44 fea-c01n01 corosync[3256]:   [CPG   ] chosen downlist: 
sender r(0) ip(10.20.10.1) ; members(old:2 left:1)
Nov  9 15:18:44 fea-c01n01 corosync[3256]:   [MAIN  ] Completed service 
synchronization, ready to provide service.
Nov  9 15:19:03 fea-c01n01 fenced[3323]: fence fea-c01n02.feaind.com success
Nov  9 15:19:05 fea-c01n01 rgmanager[3496]: Marking service:storage_n02 
as stopped: Restricted domain unavailable
Nov  9 15:19:05 fea-c01n01 rgmanager[3496]: Marking service:libvirtd_n02 
as stopped: Restricted domain unavailable
Nov  9 15:21:16 fea-c01n01 kernel: INFO: task kslowd001:6376 blocked for 
more than 120 seconds.
Nov  9 15:21:16 fea-c01n01 kernel:      Not tainted 2.6.32-504.el6.x86_64 #1
Nov  9 15:21:16 fea-c01n01 kernel: "echo 0 > 
/proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov  9 15:21:16 fea-c01n01 kernel: kslowd001     D 0000000000000012 
0  6376      2 0x00000080
Nov  9 15:21:16 fea-c01n01 kernel: ffff880864a93b40 0000000000000046 
000000030000fbc8 0000000000000018
Nov  9 15:21:16 fea-c01n01 kernel: ffff880800000008 ffffffffa04a4030 
ffff880864a9f538 ffffffffa04a3fe0
Nov  9 15:21:16 fea-c01n01 kernel: ffff880800000003 ffff880864a9f5e8 
ffff880864a91af8 ffff880864a93fd8
Nov  9 15:21:16 fea-c01n01 kernel: Call Trace:
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffffa04a4030>] ? 
gdlm_ast+0x0/0x210 [gfs2]
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffffa04a3fe0>] ? 
gdlm_bast+0x0/0x50 [gfs2]
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffffa0483100>] ? 
gfs2_glock_holder_wait+0x0/0x20 [gfs2]
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffffa048310e>] 
gfs2_glock_holder_wait+0xe/0x20 [gfs2]
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff8152ac7f>] 
__wait_on_bit+0x5f/0x90
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffffa0483100>] ? 
gfs2_glock_holder_wait+0x0/0x20 [gfs2]
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff8152ad28>] 
out_of_line_wait_on_bit+0x78/0x90
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff8109eb80>] ? 
wake_bit_function+0x0/0x50
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffffa04840e5>] 
gfs2_glock_wait+0x45/0x90 [gfs2]
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffffa0487619>] 
gfs2_glock_nq+0x2c9/0x410 [gfs2]
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffffa049ae23>] 
gfs2_recover_work+0xc3/0x7b0 [gfs2]
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff81063bf3>] ? 
perf_event_task_sched_out+0x33/0x70
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff810096f0>] ? 
__switch_to+0xd0/0x320
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffffa049ae1b>] ? 
gfs2_recover_work+0xbb/0x7b0 [gfs2]
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffffa0487939>] ? 
gfs2_glock_nq_num+0x59/0xa0 [gfs2]
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff8106d59b>] ? 
enqueue_task_fair+0xfb/0x100
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff81117ef3>] 
slow_work_execute+0x233/0x310
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff81118127>] 
slow_work_thread+0x157/0x360
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff8109eb00>] ? 
autoremove_wake_function+0x0/0x40
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff81117fd0>] ? 
slow_work_thread+0x0/0x360
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff8109e66e>] kthread+0x9e/0xc0
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff8100c20a>] child_rip+0xa/0x20
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff8109e5d0>] ? kthread+0x0/0xc0
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff8100c200>] ? child_rip+0x0/0x20
Nov  9 15:21:16 fea-c01n01 kernel: INFO: task gfs2_quotad:6384 blocked 
for more than 120 seconds.
Nov  9 15:21:16 fea-c01n01 kernel:      Not tainted 2.6.32-504.el6.x86_64 #1
Nov  9 15:21:16 fea-c01n01 kernel: "echo 0 > 
/proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov  9 15:21:16 fea-c01n01 kernel: gfs2_quotad   D 0000000000000013 
0  6384      2 0x00000080
Nov  9 15:21:16 fea-c01n01 kernel: ffff880864abfc20 0000000000000046 
0000000000000000 0000000400000018
Nov  9 15:21:16 fea-c01n01 kernel: 000000000000001c ffffffffa04a4030 
00000067797edaa8 ffffffffa04a3fe0
Nov  9 15:21:16 fea-c01n01 kernel: ffff880800000005 00000001000229a9 
ffff8808648a05f8 ffff880864abffd8
Nov  9 15:21:16 fea-c01n01 kernel: Call Trace:
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffffa04a4030>] ? 
gdlm_ast+0x0/0x210 [gfs2]
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffffa04a3fe0>] ? 
gdlm_bast+0x0/0x50 [gfs2]
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffffa0483100>] ? 
gfs2_glock_holder_wait+0x0/0x20 [gfs2]
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffffa048310e>] 
gfs2_glock_holder_wait+0xe/0x20 [gfs2]
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff8152ac7f>] 
__wait_on_bit+0x5f/0x90
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffffa0483100>] ? 
gfs2_glock_holder_wait+0x0/0x20 [gfs2]
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff8152ad28>] 
out_of_line_wait_on_bit+0x78/0x90
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff8109eb80>] ? 
wake_bit_function+0x0/0x50
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffffa04840e5>] 
gfs2_glock_wait+0x45/0x90 [gfs2]
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffffa0487619>] 
gfs2_glock_nq+0x2c9/0x410 [gfs2]
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff81087fdb>] ? 
try_to_del_timer_sync+0x7b/0xe0
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffffa04a0469>] 
gfs2_statfs_sync+0x59/0x1c0 [gfs2]
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff8152a84a>] ? 
schedule_timeout+0x19a/0x2e0
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffffa04a0461>] ? 
gfs2_statfs_sync+0x51/0x1c0 [gfs2]
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffffa0497c27>] 
quotad_check_timeo+0x57/0xb0 [gfs2]
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffffa0497eb4>] 
gfs2_quotad+0x234/0x2b0 [gfs2]
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff8109eb00>] ? 
autoremove_wake_function+0x0/0x40
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffffa0497c80>] ? 
gfs2_quotad+0x0/0x2b0 [gfs2]
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff8109e66e>] kthread+0x9e/0xc0
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff8100c20a>] child_rip+0xa/0x20
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff8109e5d0>] ? kthread+0x0/0xc0
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff8100c200>] ? child_rip+0x0/0x20
Nov  9 15:21:16 fea-c01n01 kernel: INFO: task qemu-kvm:24410 blocked for 
more than 120 seconds.
Nov  9 15:21:16 fea-c01n01 kernel:      Not tainted 2.6.32-504.el6.x86_64 #1
Nov  9 15:21:16 fea-c01n01 kernel: "echo 0 > 
/proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov  9 15:21:16 fea-c01n01 kernel: qemu-kvm      D 0000000000000013 
0 24410      1 0x00000080
Nov  9 15:21:16 fea-c01n01 kernel: ffff880651e439c8 0000000000000082 
0000000000000000 ffffffff8126b3c4
Nov  9 15:21:16 fea-c01n01 kernel: ffff8810702c9400 ffff881072d1cb00 
00000066cdb4e376 ffffffffa000461c
Nov  9 15:21:16 fea-c01n01 kernel: ffff880651e43988 00000001000221b5 
ffff88065a893ab8 ffff880651e43fd8
Nov  9 15:21:16 fea-c01n01 kernel: Call Trace:
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff8126b3c4>] ? 
blk_unplug+0x34/0x70
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffffa000461c>] ? 
dm_table_unplug_all+0x5c/0x100 [dm_mod]
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff8152a1b3>] 
io_schedule+0x73/0xc0
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff811ce7bd>] 
__blockdev_direct_IO_newtrunc+0xb7d/0x1270
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff811ca120>] ? 
blkdev_get_block+0x0/0x20
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff811cef27>] 
__blockdev_direct_IO+0x77/0xe0
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff811ca120>] ? 
blkdev_get_block+0x0/0x20
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff811cb1a7>] 
blkdev_direct_IO+0x57/0x60
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff811ca120>] ? 
blkdev_get_block+0x0/0x20
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff81124ef2>] 
generic_file_direct_write+0xc2/0x190
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff81126811>] 
__generic_file_aio_write+0x3a1/0x490
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff810b2b43>] ? 
futex_wake+0x93/0x150
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff811ca6fc>] 
blkdev_aio_write+0x3c/0xa0
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff8118dd5a>] 
do_sync_write+0xfa/0x140
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff8108e41e>] ? 
send_signal+0x3e/0x90
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff8109eb00>] ? 
autoremove_wake_function+0x0/0x40
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff8108e856>] ? 
group_send_sig_info+0x56/0x70
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff8123a5eb>] ? 
selinux_file_permission+0xfb/0x150
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff8122d446>] ? 
security_file_permission+0x16/0x20
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff8118e058>] vfs_write+0xb8/0x1a0
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff8118eae2>] 
sys_pwrite64+0x82/0xa0
Nov  9 15:21:16 fea-c01n01 kernel: [<ffffffff8100b072>] 
system_call_fastpath+0x16/0x1b
Nov  9 15:21:43 fea-c01n01 corosync[3256]:   [TOTEM ] A 
processor joined or left the membership and a new membership was formed.
Nov  9 15:21:43 fea-c01n01 corosync[3256]:   [QUORUM] Members[2]: 1 2
Nov  9 15:21:43 fea-c01n01 corosync[3256]:   [QUORUM] Members[2]: 1 2
Nov  9 15:21:43 fea-c01n01 corosync[3256]:   [CPG   ] chosen downlist: 
sender r(0) ip(10.20.10.1) ; members(old:1 left:0)
Nov  9 15:21:43 fea-c01n01 corosync[3256]:   [MAIN  ] Completed service 
synchronization, ready to provide service.
Nov  9 15:22:27 fea-c01n01 kernel: block drbd0: Handshake successful: 
Agreed network protocol version 97
Nov  9 15:22:27 fea-c01n01 kernel: block drbd0: conn( WFConnection -> 
WFReportParams )
Nov  9 15:22:27 fea-c01n01 kernel: block drbd0: Starting asender thread 
(from drbd0_receiver [5969])
Nov  9 15:22:27 fea-c01n01 kernel: block drbd1: Handshake successful: 
Agreed network protocol version 97
Nov  9 15:22:27 fea-c01n01 kernel: block drbd0: data-integrity-alg: 
<not-used>
Nov  9 15:22:27 fea-c01n01 kernel: block drbd1: conn( WFConnection -> 
WFReportParams )
Nov  9 15:22:27 fea-c01n01 kernel: block drbd1: Starting asender thread 
(from drbd1_receiver [5973])
Nov  9 15:22:27 fea-c01n01 kernel: block drbd0: drbd_sync_handshake:
Nov  9 15:22:27 fea-c01n01 kernel: block drbd0: self 
47E6EB8E53DA07A5:0000000000000000:512448D1B622C51D:512348D1B622C51D 
bits:0 flags:0
Nov  9 15:22:27 fea-c01n01 kernel: block drbd0: peer 
47E6EB8E53DA07A4:0000000000000000:512448D1B622C51D:512348D1B622C51D 
bits:130048 flags:2
Nov  9 15:22:27 fea-c01n01 kernel: block drbd0: uuid_compare()=-1 by rule 40
Nov  9 15:22:27 fea-c01n01 kernel: block drbd0: I shall become 
SyncTarget, but I am primary!
Nov  9 15:22:27 fea-c01n01 kernel: block drbd1: data-integrity-alg: 
<not-used>
Nov  9 15:22:27 fea-c01n01 kernel: block drbd1: drbd_sync_handshake:
Nov  9 15:22:27 fea-c01n01 kernel: block drbd1: self 
18E7A58FEF0C2589:0000000000000000:6D79ACDE04B51A3F:6D78ACDE04B51A3F 
bits:0 flags:0
Nov  9 15:22:27 fea-c01n01 kernel: block drbd1: peer 
18E7A58FEF0C2588:0000000000000000:6D79ACDE04B51A3F:6D78ACDE04B51A3F 
bits:130048 flags:2
Nov  9 15:22:27 fea-c01n01 kernel: block drbd1: uuid_compare()=-1 by rule 40
Nov  9 15:22:27 fea-c01n01 kernel: block drbd1: I shall become 
SyncTarget, but I am primary!
Nov  9 15:22:27 fea-c01n01 kernel: block drbd1: conn( WFReportParams -> 
Disconnecting )
Nov  9 15:22:27 fea-c01n01 kernel: block drbd1: error receiving 
ReportState, l: 4!
Nov  9 15:22:27 fea-c01n01 kernel: block drbd0: conn( WFReportParams -> 
Disconnecting )
Nov  9 15:22:27 fea-c01n01 kernel: block drbd0: error receiving 
ReportState, l: 4!
Nov  9 15:22:27 fea-c01n01 kernel: block drbd0: asender terminated
Nov  9 15:22:27 fea-c01n01 kernel: block drbd0: Terminating drbd0_asender
Nov  9 15:22:27 fea-c01n01 kernel: block drbd1: asender terminated
Nov  9 15:22:27 fea-c01n01 kernel: block drbd1: Terminating drbd1_asender
Nov  9 15:22:27 fea-c01n01 kernel: block drbd0: Connection closed
Nov  9 15:22:27 fea-c01n01 kernel: block drbd1: Connection closed
Nov  9 15:22:27 fea-c01n01 kernel: block drbd0: helper command: 
/sbin/drbdadm fence-peer minor-0
Nov  9 15:22:27 fea-c01n01 kernel: block drbd0: conn( Disconnecting -> 
StandAlone )
Nov  9 15:22:27 fea-c01n01 kernel: block drbd1: helper command: 
/sbin/drbdadm fence-peer minor-1
Nov  9 15:22:27 fea-c01n01 kernel: block drbd1: conn( Disconnecting -> 
StandAlone )
Nov  9 15:22:27 fea-c01n01 kernel: block drbd0: receiver terminated
Nov  9 15:22:27 fea-c01n01 kernel: block drbd0: Terminating drbd0_receiver
Nov  9 15:22:27 fea-c01n01 kernel: block drbd1: receiver terminated
Nov  9 15:22:27 fea-c01n01 kernel: block drbd1: Terminating drbd1_receiver

*** Nov  9 15:22:27 fea-c01n01 rhcs_fence: Attempting to fence peer 
using RHCS from DRBD...

Nov  9 15:22:27 fea-c01n01 rhcs_fence: Attempting to fence peer using 
RHCS from DRBD...
Nov  9 15:22:37 fea-c01n01 corosync[3256]:   [TOTEM ] A processor 
failed, forming new configuration.
Nov  9 15:22:39 fea-c01n01 corosync[3256]:   [QUORUM] Members[1]: 1
Nov  9 15:22:39 fea-c01n01 corosync[3256]:   [TOTEM ] A processor joined 
or left the membership and a new membership was formed.
Nov  9 15:22:39 fea-c01n01 corosync[3256]:   [CPG   ] chosen downlist: 
sender r(0) ip(10.20.10.1) ; members(old:2 left:1)
Nov  9 15:22:39 fea-c01n01 corosync[3256]:   [MAIN  ] Completed service 
synchronization, ready to provide service.
Nov  9 15:22:39 fea-c01n01 kernel: dlm: closing connection to node 2
Nov  9 15:22:39 fea-c01n01 fenced[3323]: fencing node fea-c01n02.feaind.com
Nov  9 15:22:46 fea-c01n01 fenced[3323]: fence fea-c01n02.feaind.com success
Nov  9 15:22:47 fea-c01n01 fence_node[1906]: fence fea-c01n02.feaind.com 
success

*** Nov  9 15:22:47 fea-c01n01 kernel: block drbd0: helper command: 
/sbin/drbdadm fence-peer minor-0 exit code 7 (0x700)

Nov  9 15:22:47 fea-c01n01 kernel: block drbd0: fence-peer helper 
returned 7 (peer was stonithed)
Nov  9 15:22:47 fea-c01n01 kernel: block drbd0: pdsk( DUnknown -> 
Outdated )
Nov  9 15:22:47 fea-c01n01 kernel: block drbd0: new current UUID 
0B0310DD5442A87D:47E6EB8E53DA07A5:512448D1B622C51D:512348D1B622C51D
Nov  9 15:22:47 fea-c01n01 kernel: block drbd0: susp( 1 -> 0 )
Nov  9 15:22:47 fea-c01n01 kernel: GFS2: fsid=fea-cluster-01:shared.1: 
jid=0: Looking at journal...
Nov  9 15:22:47 fea-c01n01 kernel: GFS2: fsid=fea-cluster-01:shared.1: 
jid=0: Acquiring the transaction lock...
Nov  9 15:22:47 fea-c01n01 kernel: GFS2: fsid=fea-cluster-01:shared.1: 
jid=0: Replaying journal...
Nov  9 15:22:47 fea-c01n01 kernel: GFS2: fsid=fea-cluster-01:shared.1: 
jid=0: Replayed 16 of 37 blocks
Nov  9 15:22:47 fea-c01n01 kernel: GFS2: fsid=fea-cluster-01:shared.1: 
jid=0: Found 10 revoke tags
Nov  9 15:22:47 fea-c01n01 kernel: GFS2: fsid=fea-cluster-01:shared.1: 
jid=0: Journal replayed in 1s
Nov  9 15:22:47 fea-c01n01 kernel: GFS2: fsid=fea-cluster-01:shared.1: 
jid=0: Done
Nov  9 15:22:47 fea-c01n01 fence_node[1907]: fence fea-c01n02.feaind.com 
success
Nov  9 15:22:47 fea-c01n01 kernel: block drbd1: helper command: 
/sbin/drbdadm fence-peer minor-1 exit code 7 (0x700)
Nov  9 15:22:47 fea-c01n01 kernel: block drbd1: fence-peer helper 
returned 7 (peer was stonithed)
Nov  9 15:22:47 fea-c01n01 kernel: block drbd1: pdsk( DUnknown -> 
Outdated )
Nov  9 15:22:47 fea-c01n01 kernel: block drbd1: new current UUID 
FA229355474F1103:18E7A58FEF0C2589:6D79ACDE04B51A3F:6D78ACDE04B51A3F
Nov  9 15:22:47 fea-c01n01 kernel: block drbd1: susp( 1 -> 0 )
====
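For anyone reading along: the "exit code 7" above is the fence-peer 
handler reporting that the peer was stonithed, which is why pdsk then 
moves to Outdated and I/O resumes. A rough sketch of the exit-code 
contract as I remember it from the drbd.conf documentation — only code 7 
is confirmed by the log above, treat the rest as illustrative:

```python
# Hedged sketch of DRBD fence-peer handler exit codes. Only 7 is
# confirmed by the log ("fence-peer helper returned 7 (peer was
# stonithed)"); 4 and 5 are from memory of the drbd.conf man page.
FENCE_PEER_EXIT = {
    4: "peer's disk was set to Outdated",
    5: "peer is down / unreachable",
    7: "peer was stonithed",  # matches "exit code 7 (0x700)" in the log
}

def interpret(code: int) -> str:
    """Map a fence-peer handler exit status to its meaning."""
    return FENCE_PEER_EXIT.get(code, "unexpected exit code %d" % code)

print(interpret(7))
```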

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without 
access to education?


