<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2//EN">
<HTML>
<HEAD>
<META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=Windows-1252">
<META NAME="Generator" CONTENT="MS Exchange Server version 6.0.6603.0">
<TITLE>RE: [DRBD-user] OCFS2 over DRBDv8</TITLE>
</HEAD>
<BODY>
<!-- Converted from text/plain format -->
<P><FONT SIZE=2>have you configured correctly the file /etc/ocfs2/cluster.conf ?<BR>
here is one example:<BR>
#######################################################################<BR>
cluster:<BR>
&nbsp;&nbsp;&nbsp;&nbsp;node_count = 2 # number of nodes in the cluster<BR>
&nbsp;&nbsp;&nbsp;&nbsp;name = clusterteste # name of the cluster<BR>
node:<BR>
&nbsp;&nbsp;&nbsp;&nbsp;ip_port = 7777 # port number that will be used by ocfs2<BR>
&nbsp;&nbsp;&nbsp;&nbsp;ip_address = 192.168.30.1 # IP address of node 0<BR>
&nbsp;&nbsp;&nbsp;&nbsp;number = 0 # number that identifies node 0 (must be<BR>
&nbsp;&nbsp;&nbsp;&nbsp;# a unique number for each node)<BR>
&nbsp;&nbsp;&nbsp;&nbsp;name = no1 # hostname of node zero<BR>
&nbsp;&nbsp;&nbsp;&nbsp;cluster = clusterteste # name of the cluster this node is part of<BR>
node:<BR>
&nbsp;&nbsp;&nbsp;&nbsp;ip_port = 7777<BR>
&nbsp;&nbsp;&nbsp;&nbsp;ip_address = 192.168.30.2<BR>
&nbsp;&nbsp;&nbsp;&nbsp;number = 1<BR>
&nbsp;&nbsp;&nbsp;&nbsp;name = no2<BR>
&nbsp;&nbsp;&nbsp;&nbsp;cluster = clusterteste<BR>
<BR>
##############################################################<BR>
<BR>
You must copy this file to every node that will be part of the OCFS2 cluster, in this case the two DRBD 8 servers.<BR>
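For example, assuming the hostnames no1 and no2 from the configuration above, and that you edited the file on no1, you could propagate it with scp:<BR>
<BR>
# scp /etc/ocfs2/cluster.conf root@no2:/etc/ocfs2/cluster.conf<BR>
<BR>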
Now check the status of the cluster:<BR>
# /etc/init.d/o2cb status<BR>
<BR>
The output will look something like this:<BR>
<BR>
Module "configfs": Not loaded<BR>
Filesystem "configfs": Not mounted<BR>
Module "ocfs2_nodemanager": Not loaded<BR>
Module "ocfs2_dlm": Not loaded<BR>
Module "ocfs2_dlmfs": Not loaded<BR>
Filesystem "ocfs2_dlmfs": Not mounted<BR>
<BR>
Load the modules:<BR>
<BR>
# /etc/init.d/o2cb load<BR>
<BR>
Loading module "configfs": OK<BR>
Creating directory '/config': OK<BR>
Mounting configfs filesystem at /config: OK<BR>
Loading module "ocfs2_nodemanager": OK<BR>
Loading module "ocfs2_dlm": OK<BR>
Loading module "ocfs2_dlmfs": OK<BR>
Mounting ocfs2_dlmfs filesystem at /dlm: OK<BR>
<BR>
Bring the cluster online:<BR>
<BR>
# /etc/init.d/o2cb online clusterteste<BR>
Starting cluster clusterteste: OK<BR>
<BR>
To take the cluster offline:<BR>
<BR>
# /etc/init.d/o2cb offline clusterteste<BR>
Cleaning heartbeat on clusterteste: OK<BR>
Stopping cluster clusterteste: OK<BR>
<BR>
To unload the modules:<BR>
<BR>
# /etc/init.d/o2cb unload<BR>
Unmounting ocfs2_dlmfs filesystem: OK<BR>
Unloading module "ocfs2_dlmfs": OK<BR>
Unmounting configfs filesystem: OK<BR>
Unloading module "configfs": OK<BR>
<BR>
To configure the OCFS2 cluster services to start at boot:<BR>
<BR>
# /etc/init.d/o2cb configure<BR>
Configuring the O2CB driver.<BR>
This will configure the on-boot properties of the O2CB driver.<BR>
The following questions will determine whether the driver is loaded on<BR>
boot. The current values will be shown in brackets ('[]'). Hitting<BR>
<ENTER> without typing an answer will keep that current value. Ctrl-C<BR>
will abort.<BR>
Load O2CB driver on boot (y/n) [n]: y<BR>
Cluster to start on boot (Enter "none" to clear) []: clusterteste<BR>
Writing O2CB configuration: OK<BR>
Cluster clusterteste already online<BR>
<BR>
<BR>
IMPORTANT: to mount the device, the OCFS2 cluster must be online on both machines!<BR>
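<BR>
Once the cluster is online on both nodes, you can mount the filesystem on each of them. A minimal sketch, assuming the DRBD device /dev/drbd0 and a hypothetical mount point /mnt/ocfs2:<BR>
<BR>
# mkdir -p /mnt/ocfs2<BR>
# mount -t ocfs2 /dev/drbd0 /mnt/ocfs2<BR>
<BR>
Run the same commands on the second node to get concurrent access.<BR>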
<BR>
<BR>
Information translated from Portuguese and extracted from:<BR>
<A HREF="http://guialivre.governoeletronico.gov.br/seminario/index.php/DocumentacaoTecnologiasDRBDOCFS2#OCFS2">http://guialivre.governoeletronico.gov.br/seminario/index.php/DocumentacaoTecnologiasDRBDOCFS2#OCFS2</A><BR>
<BR>
<BR>
Leonardo Rodrigues de Mello<BR>
<BR>
<BR>
<BR>
-----Original Message-----<BR>
From: drbd-user-bounces@lists.linbit.com on behalf of Kilian CAVALOTTI<BR>
Sent: Thu 17/8/2006 11:48<BR>
To: ocfs2-users@oss.oracle.com; drbd-user@linbit.com<BR>
Cc: <BR>
Subject: [DRBD-user] OCFS2 over DRBDv8<BR>
<BR>
Hi all,<BR>
<BR>
I'm new to OCFS2, but not so new to DRBD. I'd like to use the new<BR>
primary/primary feature of DRBDv8 to create a shared storage space and<BR>
concurrently access it from multiple clients, using OCFS2.<BR>
<BR>
I configured two hosts with DRBD, allowed two primaries, and successfully<BR>
made each partition primary.<BR>
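<BR>
(For reference, a minimal sketch of the dual-primary setting in DRBD 8's drbd.conf, the resource name r0 being an assumption:)<BR>
<BR>
resource r0 {<BR>
&nbsp;&nbsp;net {<BR>
&nbsp;&nbsp;&nbsp;&nbsp;allow-two-primaries;<BR>
&nbsp;&nbsp;}<BR>
}<BR>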
<BR>
# cat /proc/drbd<BR>
version: 8.0pre4 (api:84/proto:82)<BR>
SVN Revision: 2375M build by root@moby, 2006-08-17 15:54:17<BR>
0: cs:Connected st:Primary/Primary ds:UpToDate/UpToDate r---<BR>
ns:0 nr:1398278 dw:1398278 dr:98 al:0 bm:1895 lo:0 pe:0 ua:0 ap:0<BR>
resync: used:0/7 hits:86007 misses:1381 starving:0 dirty:0<BR>
changed:1381<BR>
act_log: used:0/257 hits:0 misses:0 starving:0 dirty:0 changed:0<BR>
1: cs:Unconfigured<BR>
<BR>
I tried to format the volume with a traditional filesystem, and<BR>
successfully mounted it on both nodes.<BR>
<BR>
I then tried with ocfs2. On the first node, mkfs and mount went without a<BR>
hitch, but on the second one, I systematically get an error when I try to<BR>
do anything on the volume (fsck'ing, starting ocfs2-heartbeat, mounting,<BR>
etc.). dmesg shows the following,<BR>
<BR>
drbd0: role( Secondary -> Primary )<BR>
drbd0: Writing meta data super block now.<BR>
(6672,0):o2hb_setup_one_bio:290 ERROR: Error adding page to bio i = 1,<BR>
vec_len = 4096, len = 0<BR>
, start = 0<BR>
(6672,0):o2hb_read_slots:385 ERROR: status = -5<BR>
(6672,0):o2hb_populate_slot_data:1279 ERROR: status = -5<BR>
(6672,0):o2hb_region_dev_write:1379 ERROR: status = -5<BR>
<BR>
<BR>
It seems that the heartbeat process can't write to the device, for an<BR>
unknown reason:<BR>
<BR>
open("/sys/kernel/config/cluster", O_RDONLY|O_NONBLOCK|O_DIRECTORY) = 4<BR>
fstat(4, {st_mode=S_IFDIR|0755, st_size=0, ...}) = 0<BR>
fcntl(4, F_SETFD, FD_CLOEXEC) = 0<BR>
getdents64(4, /* 3 entries */, 4096) = 88<BR>
getdents64(4, /* 0 entries */, 4096) = 0<BR>
close(4) = 0<BR>
mkdir("/sys/kernel/config/cluster/ocfs2_cluster/heartbeat/D6F76726AFE4472CBF0650A1FEF09135",<BR>
0755) = 0<BR>
open("/sys/kernel/config/cluster/ocfs2_cluster/heartbeat/D6F76726AFE4472CBF0650A1FEF09135/block_bytes",<BR>
O_WRONLY) = 4<BR>
write(4, "512", 3) = 3<BR>
close(4) = 0<BR>
open("/sys/kernel/config/cluster/ocfs2_cluster/heartbeat/D6F76726AFE4472CBF0650A1FEF09135/start_block",<BR>
O_WRONLY) = 4<BR>
write(4, "2176", 4) = 4<BR>
close(4) = 0<BR>
open("/sys/kernel/config/cluster/ocfs2_cluster/heartbeat/D6F76726AFE4472CBF0650A1FEF09135/blocks",<BR>
O_WRONLY) = 4<BR>
write(4, "255", 3) = 3<BR>
close(4) = 0<BR>
open("/dev/drbd0", O_RDWR) = 4<BR>
open("/sys/kernel/config/cluster/ocfs2_cluster/heartbeat/D6F76726AFE4472CBF0650A1FEF09135/dev",<BR>
O_WRONLY) = 5<BR>
write(5, "4", 1) = -1 EIO (Input/output error)<BR>
close(5) = 0<BR>
close(4) = 0<BR>
rmdir("/sys/kernel/config/cluster/ocfs2_cluster/heartbeat/D6F76726AFE4472CBF0650A1FEF09135")<BR>
= 0<BR>
semop(0, 0x7fff930bfe30, 1) = 0<BR>
close(3) = 0<BR>
write(2, "mkfs.ocfs2", 10mkfs.ocfs2) = 10<BR>
write(2, ": ", 2: ) = 2<BR>
write(2, "I/O error on channel", 20I/O error on channel) = 20<BR>
write(2, " ", 1 ) = 1<BR>
write(2, "while initializing the dlm", 26while initializing the dlm) = 26<BR>
write(2, "\r\n", 2<BR>
<BR>
I can't figure out whether it's a DRBD- or an OCFS2-related issue, and I'd take any<BR>
enlightenment with gratitude.<BR>
<BR>
BTW, I use amd64, debian-provided 2.6.17 kernel, drbd8-module-source<BR>
8.0pre4-1 (I tried SVN trunk too), and ocfs2-tools 1.2.1-1.<BR>
<BR>
Thanks in advance,<BR>
--<BR>
Kilian CAVALOTTI Network and systems administrator<BR>
UPMC / CNRS - LIP6 (C870)<BR>
8, rue du Capitaine Scott Tel. : 01 44 27 88 54<BR>
75015 Paris - France Fax. : 01 44 27 70 00<BR>
_______________________________________________<BR>
drbd-user mailing list<BR>
drbd-user@lists.linbit.com<BR>
<A HREF="http://lists.linbit.com/mailman/listinfo/drbd-user">http://lists.linbit.com/mailman/listinfo/drbd-user</A><BR>
<BR>
<BR>
</FONT>
</P>
</BODY>
</HTML>