Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
Have you configured the file /etc/ocfs2/cluster.conf correctly? Here is one example:

#######################################################################
cluster:
        node_count = 2                  # number of nodes in the cluster
        name = clusterteste             # name of the cluster

node:
        ip_port = 7777                  # port number that will be used by OCFS2
        ip_address = 192.168.30.1       # IP address of node 0
        number = 0                      # number that identifies node 0 (must be
                                        # a unique number for each node)
        name = no1                      # hostname of node zero
        cluster = clusterteste          # name of the cluster the node is part of

node:
        ip_port = 7777
        ip_address = 192.168.30.2
        number = 1
        name = no2
        cluster = clusterteste
#######################################################################

You must copy this file to all nodes that will be part of the OCFS2 cluster, in this case the two DRBD 8 servers.

Now check the status of the cluster:

# /etc/init.d/o2cb status

The result will look like:

Module "configfs": Not loaded
Filesystem "configfs": Not mounted
Module "ocfs2_nodemanager": Not loaded
Module "ocfs2_dlm": Not loaded
Module "ocfs2_dlmfs": Not loaded
Filesystem "ocfs2_dlmfs": Not mounted

Load the modules:

# /etc/init.d/o2cb load
Loading module "configfs": OK
Creating directory '/config': OK
Mounting configfs filesystem at /config: OK
Loading module "ocfs2_nodemanager": OK
Loading module "ocfs2_dlm": OK
Loading module "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK

Bring the cluster online:

# /etc/init.d/o2cb online clusterteste
Starting cluster clusterteste: OK

If you want to take the cluster offline:

# /etc/init.d/o2cb offline clusterteste
Cleaning heartbeat on clusterteste: OK
Stopping cluster clusterteste: OK

If you want to unload the modules:

# /etc/init.d/o2cb unload
Unmounting ocfs2_dlmfs filesystem: OK
Unloading module "ocfs2_dlmfs": OK
Unmounting configfs filesystem: OK
Unloading module "configfs": OK

Configuring the OCFS2 cluster services to start at boot:

# /etc/init.d/o2cb configure
Configuring the O2CB driver.

This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot. The current values will be shown in brackets ('[]'). Hitting
<ENTER> without typing an answer will keep that current value. Ctrl-C
will abort.

Load O2CB driver on boot (y/n) [n]: y
Cluster to start on boot (Enter "none" to clear) []: clusterteste
Writing O2CB configuration: OK
Cluster clusterteste already online

IMPORTANT: TO MOUNT THE DEVICE, THE OCFS2 CLUSTER MUST BE ONLINE ON BOTH MACHINES!

Information translated from Portuguese and extracted from:
http://guialivre.governoeletronico.gov.br/seminario/index.php/DocumentacaoTecnologiasDRBDOCFS2#OCFS2

Leonardo Rodrigues de Mello

-----Original Message-----
From: drbd-user-bounces at lists.linbit.com on behalf of Kilian CAVALOTTI
Sent: Thu 17/8/2006 11:48
To: ocfs2-users at oss.oracle.com; drbd-user at linbit.com
Cc:
Subject: [DRBD-user] OCFS2 over DRBDv8

Hi all,

I'm new to OCFS2, but not so new to DRBD. I'd like to use the new
primary/primary feature of DRBDv8 to create a shared storage space and
concurrently access it from multiple clients, using OCFS2.

I configured two hosts with DRBD, allowed two primaries, and successfully
made each partition primary.
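For reference, dual-primary operation is enabled in DRBD 8 with the allow-two-primaries option in the net section of the resource definition. A minimal sketch of such a resource follows; the resource name, hostnames, backing disks and addresses are placeholders, not taken from this thread:

resource r0 {
  protocol C;                      # synchronous replication; required for dual-primary use
  net {
    allow-two-primaries;           # permit both nodes to be Primary at the same time
  }
  on node1 {                       # placeholder hostname
    device    /dev/drbd0;
    disk      /dev/sda5;           # placeholder backing device
    address   192.168.30.1:7788;   # placeholder address and port
    meta-disk internal;
  }
  on node2 {                       # placeholder hostname
    device    /dev/drbd0;
    disk      /dev/sda5;           # placeholder backing device
    address   192.168.30.2:7788;   # placeholder address and port
    meta-disk internal;
  }
}

With a configuration along these lines, each node can be promoted with "drbdadm primary r0", which should lead to the Primary/Primary state shown below.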
# cat /proc/drbd
version: 8.0pre4 (api:84/proto:82)
SVN Revision: 2375M build by root at moby, 2006-08-17 15:54:17
 0: cs:Connected st:Primary/Primary ds:UpToDate/UpToDate r---
    ns:0 nr:1398278 dw:1398278 dr:98 al:0 bm:1895 lo:0 pe:0 ua:0 ap:0
        resync: used:0/7 hits:86007 misses:1381 starving:0 dirty:0 changed:1381
        act_log: used:0/257 hits:0 misses:0 starving:0 dirty:0 changed:0
 1: cs:Unconfigured

I tried to format the volume with a traditional filesystem, and
successfully mounted it on both nodes. I then tried with OCFS2. On the
first node, mkfs and mount went without a hitch, but on the second one, I
systematically get an error when I try to do anything on the volume
(fsck'ing, starting ocfs2-heartbeat, mounting, etc.). dmesg shows the
following:

drbd0: role( Secondary -> Primary )
drbd0: Writing meta data super block now.
(6672,0):o2hb_setup_one_bio:290 ERROR: Error adding page to bio i = 1, vec_len = 4096, len = 0 , start = 0
(6672,0):o2hb_read_slots:385 ERROR: status = -5
(6672,0):o2hb_populate_slot_data:1279 ERROR: status = -5
(6672,0):o2hb_region_dev_write:1379 ERROR: status = -5

It seems that the heartbeat process can't write to the device, for an
unknown reason:

open("/sys/kernel/config/cluster", O_RDONLY|O_NONBLOCK|O_DIRECTORY) = 4
fstat(4, {st_mode=S_IFDIR|0755, st_size=0, ...}) = 0
fcntl(4, F_SETFD, FD_CLOEXEC) = 0
getdents64(4, /* 3 entries */, 4096) = 88
getdents64(4, /* 0 entries */, 4096) = 0
close(4) = 0
mkdir("/sys/kernel/config/cluster/ocfs2_cluster/heartbeat/D6F76726AFE4472CBF0650A1FEF09135", 0755) = 0
open("/sys/kernel/config/cluster/ocfs2_cluster/heartbeat/D6F76726AFE4472CBF0650A1FEF09135/block_bytes", O_WRONLY) = 4
write(4, "512", 3) = 3
close(4) = 0
open("/sys/kernel/config/cluster/ocfs2_cluster/heartbeat/D6F76726AFE4472CBF0650A1FEF09135/start_block", O_WRONLY) = 4
write(4, "2176", 4) = 4
close(4) = 0
open("/sys/kernel/config/cluster/ocfs2_cluster/heartbeat/D6F76726AFE4472CBF0650A1FEF09135/blocks", O_WRONLY) = 4
write(4, "255", 3) = 3
close(4) = 0
open("/dev/drbd0", O_RDWR) = 4
open("/sys/kernel/config/cluster/ocfs2_cluster/heartbeat/D6F76726AFE4472CBF0650A1FEF09135/dev", O_WRONLY) = 5
write(5, "4", 1) = -1 EIO (Input/output error)
close(5) = 0
close(4) = 0
rmdir("/sys/kernel/config/cluster/ocfs2_cluster/heartbeat/D6F76726AFE4472CBF0650A1FEF09135") = 0
semop(0, 0x7fff930bfe30, 1) = 0
close(3) = 0
write(2, "mkfs.ocfs2", 10mkfs.ocfs2) = 10
write(2, ": ", 2: ) = 2
write(2, "I/O error on channel", 20I/O error on channel) = 20
write(2, " ", 1 ) = 1
write(2, "while initializing the dlm", 26while initializing the dlm) = 26
write(2, "\r\n", 2

I can't figure out whether it's a DRBD- or an OCFS2-related issue, and I'd
take any enlightenment with gratitude.

BTW, I use amd64, the Debian-provided 2.6.17 kernel, drbd8-module-source
8.0pre4-1 (I tried SVN trunk too), and ocfs2-tools 1.2.1-1.

Thanks in advance,

--
Kilian CAVALOTTI
Network and systems administrator
UPMC / CNRS - LIP6 (C870)
8, rue du Capitaine Scott
75015 Paris - France
Tel.: 01 44 27 88 54
Fax.: 01 44 27 70 00