[DRBD-user] drbd, ocfs2 and heartbeat

Lee Musgrave lee at sclinternet.co.uk
Fri Oct 4 17:30:53 CEST 2013


I'm hoping someone here can help me, since no one seems to be active on the
ocfs2 mailing list.

I'm new to DRBD, Heartbeat and OCFS2, but I have managed to get partitions
created and exported for iSCSI and NFS, and they are working fine (iSCSI
performance is great, NFS is slow). OCFS2, however, has been a nightmare.
The iSCSI and NFS partitions are primary/secondary; the OCFS2 partitions
are dual-primary. I was having problems getting both machines to mount
/dev/drbd0 simultaneously in a reliable manner on virtualized servers. I
have now tried it on physical servers and it is reliable, although
concurrent writing to a file is slow.
I've been using Ubuntu 12.04, sticking with the packages installed by
default via apt-get. I've now also tried 13.10 beta2, since performance is
supposed to be greatly improved in DRBD 8.4.3, again sticking with what
apt-get installs.
I want to export the OCFS2 cluster so that it holds all the website data
for up to 20 servers (Ubuntu 12.04), but I can't get it working. All the
website stats/logging go into the same files, so concurrent writes to a
single file are a must.
Can I export it using Heartbeat and NFS? Does Heartbeat work with
dual-primaries? Does exporting an OCFS2 partition over NFS still allow
concurrent writes to a single file?
Everything seems OK as far as DRBD and OCFS2 go when accessing the
filesystem locally on these two servers, but I have no idea how to make it
available to the webservers. I don't want to create a 22-node cluster
because of the bandwidth overhead that would involve.
What do I need to put in the configuration files to get this working?

Here's what I currently have on a system I'm just trying OCFS2 on.

Dual-homed servers, nas1: and
                    nas2: and
So, as I understand it, all the synchronisation/replication for the config
below should occur on the 10.10.10.* subnet (direct connection with a
crossover cable, 1 Gb network, MTU 9000).
Webservers should connect via the 192.168.0.* subnet.


# You can find an example in  /usr/share/doc/drbd.../drbd.conf.example

include "drbd.d/global_common.conf";
include "drbd.d/*.res";

resource ocfs2.config {
    protocol C;

    handlers {
        pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";
        pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";
        local-io-error "echo o > /proc/sysrq-trigger ; halt -f";
        outdate-peer "/usr/lib/heartbeat/drbd-peer-outdater -t 5";
    }

    startup {
        wfc-timeout 120;
        degr-wfc-timeout 120;
        become-primary-on both;
    }

    disk {
        on-io-error detach;
        resync-rate 100M;
    }

    net {
        cram-hmac-alg sha1;
        shared-secret "password";
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
        rr-conflict disconnect;
    }

    on nas1 {
        device /dev/drbd0;
        disk /dev/sda4;
        meta-disk /dev/sda3[0];
    }

    on nas2 {
        device /dev/drbd0;
        disk /dev/sda4;
        meta-disk /dev/sda3[0];
    }
}

And my /etc/ocfs2/cluster.conf:

cluster:
    node_count = 2
    name = datacluster

node:
    ip_port = 7777
    ip_address =
    number = 1
    name = nas1
    cluster = datacluster

node:
    ip_port = 7777
    ip_address =
    number = 2
    name = nas2
    cluster = datacluster
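For completeness: my understanding of the Ubuntu ocfs2-tools packaging is that the o2cb init script also has to be enabled before the mount works. This is what I assume /etc/default/o2cb should contain (other settings left at package defaults):

```shell
# assumed /etc/default/o2cb settings (Ubuntu ocfs2-tools package):
O2CB_ENABLED=true             # start the o2cb cluster stack at boot
O2CB_BOOTCLUSTER=datacluster  # cluster (from cluster.conf) to bring online
```

After changing it I restart the stack with "service o2cb restart" on both nodes.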

Formatted using: mkfs.ocfs2 -T mail -L "datacluster" /dev/drbd0
Mounted using:   mount -t ocfs2 /dev/drbd0 /test
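For the NFS export side, this is what I've pieced together so far; the subnet, export options and client mount point are all my guesses, not something I've tested:

```shell
# guessed /etc/exports entry on nas1 (options are assumptions):
#   /test 192.168.0.0/24(rw,sync,no_subtree_check)

# re-read the exports table after editing /etc/exports:
exportfs -ra

# and on a webserver (mount point assumed):
mount -t nfs nas1:/test /var/www
```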


My /etc/ha.d/haresources line is currently:

nas1 IPaddr:: drbddisk::ocfs2.config
Filesystem::/dev/drbd0::/test::ocfs2 nfs-kernel-server
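My guess is that, since both nodes are already primary and the filesystem is mounted on both, the haresources line shouldn't need drbddisk or Filesystem at all, and Heartbeat would just float a service IP for the NFS clients. Something like this (the 192.168.0.100 address is made up):

```
nas1 IPaddr::192.168.0.100/24 nfs-kernel-server
```

Is that right, or does nfs-kernel-server still need to be managed differently with dual-primaries?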

I can find articles that say OCFS2 can be exported over NFS, but I can't
find anything that actually shows how.
Any help or advice would be greatly appreciated, and would alleviate weeks
of head-bashing :(

