[DRBD-user] Setting primary on reboot

Cameron Smith velvetpixel at gmail.com
Tue Mar 23 18:48:50 CET 2010


My issue turned out to be entirely a DNS problem.

DRBD is working as expected now.
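Since the root cause was name resolution, a quick check that the hostnames used in drbd.conf actually resolve can save a lot of debugging time. A minimal sketch, assuming the node names used elsewhere in this thread (node1/node2 under example.com are placeholders; substitute the names from your own "on <host> { ... }" sections):

```shell
# Sketch: verify that the peer hostnames from drbd.conf resolve.
# node1.example.com / node2.example.com are placeholders for this thread.
for host in node1.example.com node2.example.com; do
    if getent hosts "$host" > /dev/null; then
        echo "$host resolves to $(getent hosts "$host" | awk '{print $1}')"
    else
        echo "$host does NOT resolve -- the DRBD peers cannot connect"
    fi
done
```

If a name resolves differently on the two nodes (or not at all), DRBD will sit in WFConnection or fall back to StandAlone exactly as shown later in this thread.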

On Tue, Mar 23, 2010 at 8:24 AM, Cameron Smith <velvetpixel at gmail.com> wrote:

> From the user guide:
>
> "Dealing with temporary primary node failure
>
> From DRBD's standpoint, failure of the primary node is almost identical to
> a failure of the secondary node. The surviving node detects the peer node's
> failure, and switches to disconnected mode. DRBD does *not* promote the
> surviving node to the primary role; it is the cluster management
> application's responsibility to do so.
>
> When the failed node is repaired and returns to the cluster, it does so in
> the secondary role, thus, as outlined in the previous section, no further
> manual intervention is necessary. Again, DRBD does not change the resource
> role back, it is up to the cluster manager to do so (if so configured).
>
> DRBD ensures block device consistency in case of a primary node failure by
> way of a special mechanism. For a detailed discussion, refer to the
> section called “The Activity Log”<http://www.drbd.org/users-guide/s-activity-log.html>
> ."
>
>
> There are no instructions on what to tell the cluster manager to do.
>
> When node1 (primary) goes down how do I get heartbeat to promote secondary
> to primary so that the filesystem at /dev/drbd1 on node2 can become
> mountable?
>
> When node1 comes back up how do I configure heartbeat to tell node1 to
> become primary after it has been updated (synced) with the data from node2
> that was created during node1's downtime?
>
>
>
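With a heartbeat v1-style (haresources) setup such as the one shown later in this thread, promotion is the job of the drbddisk resource script: heartbeat calls it with "start" on the node taking over the group and "stop" on the node releasing it. A minimal sketch of that behavior, assuming the stock script's start/stop semantics (the real script ships as /etc/ha.d/resource.d/drbddisk; this is an illustration, not the script itself):

```shell
# Simplified model of heartbeat's drbddisk resource script. On failover,
# heartbeat effectively runs "drbddisk r0 start" on the surviving node,
# which promotes the DRBD resource; "stop" demotes it again.
drbddisk_action() {
    res=$1; op=$2
    case "$op" in
        start) echo "drbdadm primary $res" ;;    # promote on this node
        stop)  echo "drbdadm secondary $res" ;;  # demote when releasing
        *)     echo "unknown op: $op" >&2; return 1 ;;
    esac
}
drbddisk_action r0 start   # what heartbeat effectively runs on takeover
```

If auto_failback is set to on in ha.cf, heartbeat will also move the group (and with it the primary role) back to node1 once it rejoins; the data written on node2 during the outage is resynced automatically when the DRBD connection is re-established, as the user-guide excerpt above notes.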
> On Tue, Mar 23, 2010 at 8:07 AM, Cameron Smith <velvetpixel at gmail.com> wrote:
>
>> Thank you for the response!
>> Here is the output of cat /proc/drbd for both node1 and node2 before and
>> after node1(primary) gets rebooted.
>>
>>
>> Before reboot:
>> [root at node1 ~]# cat /proc/drbd
>> version: 8.0.16 (api:86/proto:86)
>> GIT-hash: d30881451c988619e243d6294a899139eed1183d build by
>> mockbuild at v20z-x86-64.home.local, 2009-08-22 13:26:57
>>
>>  1: cs:Connected st:Primary/Secondary ds:UpToDate/UpToDate C r---
>>     ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0
>> resync: used:0/61 hits:0 misses:0 starving:0 dirty:0 changed:0
>> act_log: used:0/127 hits:0 misses:0 starving:0 dirty:0 changed:0
>>
>> [root at node2 ~]# cat /proc/drbd
>> version: 8.0.16 (api:86/proto:86)
>> GIT-hash: d30881451c988619e243d6294a899139eed1183d build by
>> mockbuild at v20z-x86-64.home.local, 2009-08-22 13:26:57
>>
>>  1: cs:Connected st:Secondary/Primary ds:UpToDate/UpToDate C r---
>>     ns:0 nr:4493035 dw:4493035 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0
>> resync: used:0/61 hits:0 misses:0 starving:0 dirty:0 changed:0
>>  act_log: used:0/127 hits:0 misses:0 starving:0 dirty:0 changed:0
>>
>>
>> After reboot of node1:
>> [root at node1 ~]# cat /proc/drbd
>> version: 8.0.16 (api:86/proto:86)
>> GIT-hash: d30881451c988619e243d6294a899139eed1183d build by
>> mockbuild at v20z-x86-64.home.local, 2009-08-22 13:26:57
>>
>>  1: cs:WFConnection st:Secondary/Unknown ds:UpToDate/DUnknown C r---
>>     ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0
>> resync: used:0/61 hits:0 misses:0 starving:0 dirty:0 changed:0
>> act_log: used:0/127 hits:0 misses:0 starving:0 dirty:0 changed:0
>>
>> [root at node2 ~]# cat /proc/drbd
>> version: 8.0.16 (api:86/proto:86)
>> GIT-hash: d30881451c988619e243d6294a899139eed1183d build by
>> mockbuild at v20z-x86-64.home.local, 2009-08-22 13:26:57
>>
>>  1: cs:StandAlone st:Secondary/Unknown ds:UpToDate/DUnknown   r---
>>     ns:0 nr:4493035 dw:4493043 dr:85 al:1 bm:1 lo:0 pe:0 ua:0 ap:0
>> resync: used:0/61 hits:0 misses:0 starving:0 dirty:0 changed:0
>>  act_log: used:0/127 hits:1 misses:1 starving:0 dirty:0 changed:1
>>
>>
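The node2 output is the telling one: cs:StandAlone means node2 has given up trying to reconnect (node1's WFConnection means it is still waiting for the peer). A small sketch of reading the state field and the usual remedy; the sample line is copied from the node2 output above, and drbdadm connect is the standard way out of StandAlone:

```shell
# Extract the connection state from a /proc/drbd status line (sample taken
# from the node2 output above) and print the usual remedy.
line=' 1: cs:StandAlone st:Secondary/Unknown ds:UpToDate/DUnknown   r---'
state=$(echo "$line" | sed -n 's/.*cs:\([A-Za-z]*\).*/\1/p')
case "$state" in
    StandAlone)   echo "run: drbdadm connect r0" ;;  # rejoin the peer
    WFConnection) echo "already waiting for the peer to come back" ;;
    Connected)    echo "nothing to do" ;;
    *)            echo "state: $state" ;;
esac
```

In this thread the StandAlone state was a symptom of the DNS problem mentioned at the top: node2 could not reach its peer by name, so reconnect attempts failed.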
>> I am using heartbeat and this is my haresources contents:
>> node1.example.com drbddisk::r0 Filesystem::/dev/drbd1::/data::ext3
>> 10.10.2.21 mysqld httpd
>>
>> Should there be anything else in the haresources file that forces node1 to
>> primary on reboot?
>>
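For reference, the fields of that haresources line read left to right as follows (an annotation of the existing line, not new configuration):

```text
node1.example.com                      preferred node for the resource group
drbddisk::r0                           promote/demote DRBD resource r0
Filesystem::/dev/drbd1::/data::ext3    mount /dev/drbd1 on /data as ext3
10.10.2.21                             cluster IP taken over with the group
mysqld httpd                           services started once the rest is up
```

Note that haresources itself only names the preferred node; whether node1 actually takes the group (and with it the DRBD primary role) back after a reboot is controlled by the auto_failback setting in ha.cf, not by anything in haresources.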
>> Thank you,
>> Cameron
>>
>>
>>
>>
>> On Sun, Mar 21, 2010 at 6:14 PM, Reindy <reindy at gmail.com> wrote:
>>
>>> Hi, what is your current status? Please give us the details so that we
>>> can help you.
>>>
>>> For example, send the output of "cat /proc/drbd".
>>>
>>> You need to make sure that both of your nodes are in sync and in
>>> WFConnection status. I still don't have a clear picture of your current
>>> situation, so please let us know and we will be able to help you.
>>>
>>> Thanks!
>>>
>>> On Fri, Mar 19, 2010 at 12:50 AM, Cameron Smith <velvetpixel at gmail.com> wrote:
>>>
>>>> I have two nodes.
>>>>
>>>> Node one I set to primary with:
>>>> drbdadm -- --overwrite-data-of-peer primary r0
>>>>
>>>> That command does not survive a reboot of node1, so how do I get node1
>>>> to regain primary status after it comes back online?
>>>>
>>>> When node1 comes back online will node2 write the data to node1 that was
>>>> written during the downtime for node1?
>>>>
>>>> Thanks!
>>>> Cameron
>>>>
>>>> _______________________________________________
>>>> drbd-user mailing list
>>>> drbd-user at lists.linbit.com
>>>> http://lists.linbit.com/mailman/listinfo/drbd-user
>>>>
>>>>
>>>
>>
>

