[DRBD-user] Starting up in degraded mode

drbd at bobich.net
Fri Feb 8 17:10:12 CET 2008



Hi,

I'm building a 2-node cluster, and I only have 1 node built at the moment. 
I'm planning to use DRBD as a primary-primary shared storage device. When 
I reboot the machine, the DRBD service starts, but it doesn't activate the 
resource and create the /dev/drbd1 device node.

My config file is below:

global { usage-count no; }
common { syncer { rate 10M; } }

resource r0
{
         protocol C;
         net
         {
                 cram-hmac-alg sha1;
                 shared-secret "password";
         }

         on server1
         {
                 device          /dev/drbd1;
                 disk            /dev/sda6;
                 address         10.0.0.1:7789;
                 meta-disk       internal;
         }

         on server2
         {
                 device          /dev/drbd1;
                 disk            /dev/sda6;
                 address         10.0.0.2:7789;
                 meta-disk       internal;
         }
}

I am using GFS.
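For primary-primary operation I assume I will also need allow-two-primaries 
in the net section once the second node exists - something like this (based 
on my reading of the docs, so corrections welcome):

         net
         {
                 allow-two-primaries;
                 cram-hmac-alg sha1;
                 shared-secret "password";
         }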

1) Would it come up correctly if both peers were available and accessible?

2) Is there a standard way of ensuring that if the 2nd node is not 
accessible after some timeout (e.g. 10-30 seconds), the current node sets 
the local instance as primary and creates the device node?

3) Is there a sane way to handle the condition where both nodes come up 
individually and only then the connection is restored? Obviously, the 
disks would not be consistent, but they would both be working by that 
point. Resyncing the block device underneath GFS would probably trash 
whichever node's data is being overwritten. Is there a method available 
to prevent this split-brain condition? One option I can see is to not 
sync. GFS would try to mount, notice the other node up but not using its 
journal, and the cluster would end up fencing one node. It'd be a race on 
which one gets fenced, but that isn't a huge problem.
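From the man page I gather there are automatic split-brain recovery 
policies in the net section. If I've understood them correctly, something 
like the following would resolve the cases where zero or one side has 
changes, and simply disconnect when both do (whether any automatic policy 
is sane underneath GFS is another question):

         net
         {
                 after-sb-0pri   discard-zero-changes;
                 after-sb-1pri   discard-secondary;
                 after-sb-2pri   disconnect;
         }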

But first I need to ensure that the local DRBD powers up even if the peer 
isn't around. Is there a config option for that?
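I'm guessing it's something in the startup section along these lines, with 
wfc-timeout bounding the wait for the peer connection and degr-wfc-timeout 
covering the case where the cluster was already degraded (the timeout 
values below are made up):

         startup
         {
                 wfc-timeout       30;
                 degr-wfc-timeout  30;
         }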

Thanks.

Gordan


