Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
Hi all,
[Please CC me in replies, I am not subscribed to this list.]
I am trying to use ocfs2 on top of drbd8 and am currently failing with what
appears to be the same error as described in
http://lists.linbit.com/pipermail/drbd-user/2006-August/005483.html
Packages are from Debian etch (drbd8-utils 2:8.0.0-1 with the modules compiled
from drbd8-module-source 8.0.0-1, ocfs2-tools 1.2.1-1.3 with kernel modules
from the Debian kernel package linux-image-2.6.18-4-xen-686
2.6.18.dfsg.1-11). Kernel command line from /proc/cmdline is more or less
boring:
[root@jupiter2 ~]# cat /proc/cmdline
root=/dev/md1 ro selinux=1 enforcing=0 irqpoll console=tty0
With this setup, I get
[root@jupiter ~]# mount -t ocfs2 /dev/drbd0 /home/
ocfs2_hb_ctl: I/O error on channel while starting heartbeat
mount.ocfs2: Error when attempting to run /sbin/ocfs2_hb_ctl: "Operation not permitted"
At the same time, dmesg reports
(21368,0):o2hb_setup_one_bio:290 ERROR: Error adding page to bio i = 7, vec_len = 4096, len = 0, start = 0
(21368,0):o2hb_read_slots:385 ERROR: status = -5
(21368,0):o2hb_populate_slot_data:1299 ERROR: status = -5
(21368,0):o2hb_region_dev_write:1399 ERROR: status = -5
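These errors appear on both nodes of the cluster. Since o2hb_setup_one_bio()
apparently fails while adding a page to a bio, I wonder whether the request
queue limits of /dev/drbd0 differ from those of the backing device; a
comparison like the following might show it (sysfs layout as on 2.6.18, mdX
standing in for the actual backing RAID1 device):

[root@jupiter2 ~]# cat /sys/block/drbd0/queue/max_hw_sectors_kb
[root@jupiter2 ~]# cat /sys/block/mdX/queue/max_hw_sectors_kb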
drbd8 itself seems to be up correctly:
[root@jupiter2 ~]# cat /proc/drbd
version: 8.0.0 (api:86/proto:86)
SVN Revision: 2713 build by rene@moon, 2007-02-26 14:34:42
 0: cs:Connected st:Primary/Primary ds:UpToDate/UpToDate C r---
    ns:492376 nr:0 dw:492376 dr:3107 al:150 bm:0 lo:0 pe:0 ua:0 ap:0
        resync: used:0/31 hits:0 misses:0 starving:0 dirty:0 changed:0
        act_log: used:0/257 hits:15922 misses:150 starving:0 dirty:0 changed:150
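For completeness: the resource is set up for dual-primary operation. The
relevant part of drbd.conf looks roughly like this (resource name, port, disk
paths and meta-disk setting are placeholders, trimmed to the essentials):

resource r0 {
  protocol C;
  net {
    allow-two-primaries;   # needed to mount ocfs2 on both nodes at once
  }
  on jupiter {
    device    /dev/drbd0;
    disk      /dev/mdX;    # backing software RAID1, placeholder
    address   192.168.255.30:7788;
    meta-disk internal;
  }
  on jupiter2 {
    device    /dev/drbd0;
    disk      /dev/mdX;
    address   192.168.255.29:7788;
    meta-disk internal;
  }
}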
The ocfs2 configuration is pretty simple as well:
[root@jupiter2 ~]# cat /etc/ocfs2/cluster.conf
cluster:
        node_count = 2
        name = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.255.30
        number = 0
        name = jupiter
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.255.29
        number = 1
        name = jupiter2
        cluster = ocfs2
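The file was copied verbatim to both hosts; a quick checksum comparison along
these lines shows whether the copies really match:

[root@jupiter2 ~]# md5sum /etc/ocfs2/cluster.conf
[root@jupiter2 ~]# ssh jupiter md5sum /etc/ocfs2/cluster.conf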
o2cb seems to be fine as well, with identical status output on both nodes:
[root@jupiter2 ~]# /etc/init.d/o2cb status
Module "configfs": Loaded
Filesystem "configfs": Mounted
Module "ocfs2_nodemanager": Loaded
Module "ocfs2_dlm": Loaded
Module "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking cluster ocfs2: Online
Checking heartbeat: Not active
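(As far as I understand, "Not active" is expected at this point, since the
on-disk heartbeat is only started when a volume gets mounted.) The filesystem
itself should be inspectable independently of the heartbeat; something like
this could confirm that the superblock is readable (command from memory):

[root@jupiter2 ~]# debugfs.ocfs2 -R "stats" /dev/drbd0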
mkfs.ocfs2 was called without specific options for block or cluster sizes, so
it used the default of 4k. The devices underlying /dev/drbd0 are software
RAID1 arrays (which work flawlessly).
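In other words, the filesystem was created more or less like this (the label
is a placeholder, the slot count matches the two nodes):

[root@jupiter2 ~]# mkfs.ocfs2 -N 2 -L home /dev/drbd0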
Has this issue seen a solution since last year? Any help is greatly
appreciated!
with best regards,
Rene
--
-------------------------------------------------
Gibraltar firewall http://www.gibraltar.at/