Here's my drbd.conf
resource "res1" {
    protocol C;

    startup {
        wfc-timeout 0;
        degr-wfc-timeout 120;
    }

    disk {
        on-io-error detach;
    }

    net {
        allow-two-primaries;
    }

    syncer {
        rate 15M;
    }

    on "machine1.systems.com" {
        device    /dev/drbd0;
        disk      /dev/sdd;
        address   192.168.30.12:7788;
        meta-disk internal;
    }

    on "machine2.systems.com" {
        device    /dev/drbd0;
        disk      /dev/sde;
        address   192.168.30.14:7788;
        meta-disk internal;
    }
}
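A side note (my assumption, not something stated in your post): with `allow-two-primaries` set and no split-brain recovery policies configured, DRBD 8.3 simply disconnects the peers whenever it detects a split brain, so every occurrence needs manual recovery. If automatic recovery is acceptable for your data, the net section can carry policies like the sketch below; review the `after-sb-*` semantics in the users guide first, since `discard-secondary` and similar policies can throw away one node's changes:

```
net {
    allow-two-primaries;
    # Automatic split-brain recovery policies (DRBD 8.3):
    after-sb-0pri discard-zero-changes;   # neither node Primary: keep the node with changes
    after-sb-1pri discard-secondary;      # one node Primary: discard the Secondary's changes
    after-sb-2pri disconnect;             # both Primary: no safe automatic choice, disconnect
}
```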
service drbd start
drbdadm adjust res1     (on both sides)
drbdadm primary res1    (on both sides)
Then, *on the local primary side*:
pvcreate /dev/drbd0
vgcreate drbdtest /dev/drbd0
lvcreate -n drbdlv1 -L 50M drbdtest
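An aside (an assumption on my part, not from your post): with the PV on /dev/drbd0, LVM on each node may also scan the backing disks (/dev/sdd, /dev/sde) and see duplicate PV signatures there, which causes its own confusion. The usual fix is a filter in /etc/lvm/lvm.conf so that only the DRBD device is scanned, e.g.:

```
# /etc/lvm/lvm.conf — accept only /dev/drbd* devices, reject everything else
filter = [ "a|^/dev/drbd.*|", "r|.*|" ]
```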
Then, on the *remote primary side*, I took a snapshot:

lvcreate -s -n snap_drbdlv1 -L 50M /dev/drbdtest/drbdlv1
I am using drbd-8.3.0 on CentOS machines running kernel 2.6.18-92.el5.
*My problem*
When I tested scenarios like the above, I got split-brain errors at
different intervals.
The first time it happened after 10 minutes, and I recovered from it using
http://www.drbd.org/users-guide/s-resolve-split-brain.html.
When I tested again, I got a split brain after 30 minutes (though no
operation was going on at that time, and I was closely watching
/var/log/messages), then after recovery I got it after 1 hour, and so on.
So I am totally confused about what exactly triggers the split brain.
Even when I am not doing any I/O, I sometimes get it.
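For reference, the manual recovery procedure from the page linked above boils down to the following (a sketch, using "res1" from the config above; be sure which node's changes you intend to discard before running it):

```
# On the split-brain "victim" (the node whose changes will be discarded):
drbdadm secondary res1
drbdadm -- --discard-my-data connect res1

# On the surviving node (only needed if it is in StandAlone state):
drbdadm connect res1
```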
I am not using any shared file system at the moment. It might be required
only if I want that level of concurrency.
---------------------------------
Thanks and Regards,
Himanshu Padmanabhi