Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
Hi Gordan,
Just use "drbdadm primary <res>".
When your 2nd node is connected and configured, the sync will start
instantly.
You only need to include the special syntax "-- --overwrite-data-of-peer"
once, to tell DRBD that you are sure of what you are doing.
This status is "remembered", even across reboots.
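
For example (a minimal sketch, assuming the resource is named r0 as in the
config you posted below):

  # First-time promotion on the node whose data should win
  # (needed only once; DRBD remembers it across reboots):
  drbdadm -- --overwrite-data-of-peer primary r0

  # On later boots a plain promote is enough:
  drbdadm primary r0
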
Greetz, Nico
अनुज Anuj Singh wrote:
> On Feb 8, 2008 9:40 PM, <drbd at bobich.net> wrote:
>
>> Hi,
>>
>> I'm building a 2-node cluster, and I only have 1 node built at the moment.
>> I'm planning to use DRBD as a primary-primary shared storage device. When
>> I reboot the machine, the DRBD service starts, but it doesn't activate the
>> resource and create the /dev/drbd1 device node.
>>
>> My config file is below:
>>
>> global { usage-count no; }
>> common { syncer { rate 10M; } }
>>
>> resource r0
>> {
>> protocol C;
>> net
>> {
>> cram-hmac-alg sha1;
>> shared-secret "password";
>> }
>>
>> on server1
>> {
>> device /dev/drbd1;
>> disk /dev/sda6;
>> address 10.0.0.1:7789;
>> meta-disk internal;
>> }
>>
>> on server2
>> {
>> device /dev/drbd1;
>> disk /dev/sda6;
>> address 10.0.0.2:7789;
>> meta-disk internal;
>> }
>>
>> I am using GFS.
>>
>> 1) Would it come up correctly if both peers were available and accessible?
>>
>
> allow-two-primaries;
>
>
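To be explicit: that option goes in the net section of the resource,
roughly like this (a sketch based on the config you posted):

  net
  {
    allow-two-primaries;
    cram-hmac-alg sha1;
    shared-secret "password";
  }
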
>> 2) Is there a standard way of ensuring that if the 2nd node is not
>> accessible after some timeout (e.g. 10-30 seconds), the current node sets
>> the local instance as primary and create the device node?
>>
>>
>
> Yes. Use Heartbeat:
>
> http://www.linux-ha.org/GettingStarted
>
>
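For reference, a classic Heartbeat v1 haresources entry looks roughly like
this (a sketch with a made-up mount point; a dual-primary GFS setup would
normally be driven by the cluster manager rather than plain failover):

  # /etc/ha.d/haresources (Heartbeat v1 style)
  server1 drbddisk::r0 Filesystem::/dev/drbd1::/data::gfs
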
>> 3) Is there a sane way to handle the condition where both nodes come up
>> individually and only then the connection is restored? Obviously, the
>> disks would not be consistent, but they would both be working by that
>> point. Resyncing the block device underneath GFS would probably trash
>> whichever node's data is being overwritten. Is there a method available to
>> prevent this split-brain condition? One option I can see is to not sync.
>> GFS would try to mount, notice the other node up but not using its
>> journal, and the cluster would end up fencing one node. It'd be a race on
>> which one gets fenced, but that isn't a huge problem.
>>
>>
>
> after-sb-0pri discard-younger-primary;
> after-sb-1pri discard-secondary;
> after-sb-2pri call-pri-lost-after-sb;
>
> Go through the man pages.
>
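These also belong in the net section of the resource; roughly (a sketch,
see the drbd.conf man page for what each policy does):

  net
  {
    ...
    after-sb-0pri discard-younger-primary;
    after-sb-1pri discard-secondary;
    after-sb-2pri call-pri-lost-after-sb;
  }
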
>> But first I need to ensure that the local DRBD powers up even if the peer
>> isn't around. Is there a config option for that?
>>
>> Thanks.
>>
>> Gordan
--
Behandeld door / Handled by: N.J. van der Horn (Nico)
---
ICT Support Vanderhorn IT-works, www.vanderhorn.nl,
Voorstraat 55, 3135 HW Vlaardingen, The Netherlands,
Tel +31 10 2486060, Fax +31 10 2486061