[DRBD-user] Dual primary// GFS2// Cannot mount /dev/drbd0 on second DRBD node

Kaloyan Kovachev kkovachev at varna.net
Tue Aug 9 14:48:39 CEST 2011

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


>> clean_start="1" has done what it was there for, so it should be removed
>> now.
>> - you need quorum to access the GFS without risking data corruption.
> Changes in cluster.conf will be picked up after restarting cman, I
> guess. So I'd better umount and make node2 secondary for the time
> being, right?

no need to stop node2 - just remove clean_start from the config.
clean_start governs what a node does when it starts and cannot see the
second node (after post_join_delay): with clean_start="1" it assumes it
is the only user of the GFS (for example, because the other node is
dead) and starts using the filesystem right away. Without it, the node
issues fencing against the other node and keeps waiting for it to join,
which prevents split brain and data corruption.
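For illustration, a minimal sketch of the relevant cluster.conf line
(the post_join_delay value here is only an example - keep whatever you
already have):

    <!-- before: clean_start="1" skips startup fencing -->
    <fence_daemon clean_start="1" post_join_delay="30"/>

    <!-- after: the node waits post_join_delay seconds for its peer,
         then fences it if it never appears -->
    <fence_daemon post_join_delay="30"/>

Depending on your cluster version you may not even need a full cman
restart to push the change: bump config_version in cluster.conf and run
either "ccs_tool update /etc/cluster/cluster.conf" or "cman_tool
version -r" - check which one your cman release expects.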

> 
>> <fencedevice name="human" agent="fence_manual"/> - you need a fencing
>> device, but use some other method (hint: take a look at fence_xvmd)
>> instead of manual. Manual fencing is fine for tests, but not for
>> production.
> Hmm, the machines' names are still "xen1" and "xen2" because I tested
> Xen on them in the beginning. But now I am actually running KVM on
> them. :-/
> 

I am not using Xen or KVM, so I can't be sure, but I think fence_xvmd
works with KVM too.
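Untested sketch of what the cluster.conf side might look like, assuming
the guests are libvirt domains also named "xen1" and "xen2" (fence_xvmd
runs on the hosts, the fence_xvm agent runs in the guests):

    <clusternode name="xen1" nodeid="1">
        <fence>
            <method name="1">
                <!-- "domain" is the libvirt VM name - adjust to yours -->
                <device name="xvm" domain="xen1"/>
            </method>
        </fence>
    </clusternode>
    ...
    <fencedevices>
        <fencedevice name="xvm" agent="fence_xvm"/>
    </fencedevices>

You would also need the shared key file (/etc/cluster/fence_xvm.key by
default) present on the hosts and inside the guests so fence_xvmd
accepts the fencing requests.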



