[DRBD-user] Some clarification on DRBD functionality

drbd at bobich.net
Mon Feb 18 16:09:30 CET 2008

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


On Mon, 18 Feb 2008, Pablo Gómez wrote:

> Hello everyone,
>
> I am new here and have been reading for a while; I set up DRBD, tested
> it, and I have some doubts regarding the suitability of DRBD for my
> problem.
>
> After my tests, I understand that DRBD automatically syncs a partition
> (previously written with valuable data) and allows it to be mounted on
> a single system in a cluster: read and write by one system,
> automatically synced to the other(s), on which it is not mounted or
> usable. (Am I right, or am I seeing only part of the capabilities?)
>
> My main question: is there any possibility of using a DRBD partition
> as R/W for everyone (in the cluster)?
>
> I am using 8.0.6 on SuSE 10.3

Yes, it's supposed to be able to do that. I've been trying to get the same 
thing to work (CentOS5, DRBD 8.0.11, GFS), but there appear to be 
problems.
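
For reference, 8.0.x only does dual-primary if you enable it explicitly
in the net section. Roughly like this (a minimal sketch rather than my
actual config; resource name, hostnames, disks and addresses are made
up):

resource r0 {
  protocol C;                 # synchronous replication, required here
  net {
    allow-two-primaries;      # let both nodes be Primary at once
    # split-brain recovery policies; pick what suits your data
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
    after-sb-2pri disconnect;
  }
  on node1 {
    device    /dev/drbd0;
    disk      /dev/sda5;
    address   192.168.0.1:7788;
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/sda5;
    address   192.168.0.2:7788;
    meta-disk internal;
  }
}

You then run "drbdadm primary r0" on both nodes and put a cluster
filesystem (GFS, OCFS2) on top; mounting a non-cluster filesystem like
ext3 on both nodes at once will destroy it.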

1) When mounting GFS off a still-syncing DRBD on the machine that's 
catching up, DRBD throws errors about detecting a concurrent remote 
write, which causes DLM and GFS to abort, and the machine leaves the 
cluster.

2) Performance when accessing the FS, even with just a single node up, 
appears to be heavily degraded in 8.0.11 compared to 8.0.6.

3) I had managed to get both nodes to come up with DRBD 8.0.6 and mount 
GFS correctly, but soon afterward GFS corruption was detected (without 
any significant concurrent disk load). I had to fsck the GFS volume to 
fix it before it was usable again.
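
(With the volume unmounted on both nodes, that was just the GFS1 fsck;
the device path is whatever your DRBD device is, /dev/drbd0 here:)

gfs_fsck -y /dev/drbd0    # answer yes to all repair prompts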

I can forward you my drbd.conf if that's of interest.

I suspect the underlying cause is that DRBD, when operating in 
primary/primary mode, demotes one node to secondary while it resyncs 
if, upon connecting to the resource, it detects that that mirror isn't 
up to date. Is it safe under such circumstances to run:

drbdadm primary <resource>

on the node that is catching up, and mount the FS while the resync is in 
progress? Or is it necessary to wait until resync is complete before 
mounting the volume on the syncing node?
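
If waiting is the answer, the conservative sequence I have in mind is
something like this (a sketch; r0 and the mount point are
placeholders):

# poll until the resync has finished, i.e. the connection state has
# gone from SyncTarget back to Connected, then promote and mount
while [ "$(drbdadm cstate r0)" != "Connected" ]; do
        sleep 5
done
drbdadm primary r0
mount -t gfs /dev/drbd0 /mnt/gfs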

Gordan

