[DRBD-user] DRBD with RHCS

Chan Ching Yu, Patrick cychan at clustertech.com
Tue Jun 4 16:52:35 CEST 2013

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


I've tried the built-in DRBD resource before.
When I set it up with Luci, only one line is generated in cluster.conf:

<drbd name="mdisk" resource="ha"/>

However, nothing happens when I relocate the service manually with
clusvcadm -r drbd_service
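
I suspect the drbd resource also needs to be referenced inside the
<service> block for rgmanager to act on it, i.e. something like this
untested guess:

<service domain="drbd_domain" name="mdisk_svc" recovery="relocate">
  <ip ref="192.168.129.190"/>
  <drbd ref="mdisk"/>
</service>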

Since the built-in resource did nothing, I switched to a "script"
resource pointing at
/usr/local/drbd/share/cluster/drbd.sh

However, running drbd.sh manually gives the error mentioned below.

Is there any detailed web site that covers integrating RHCS and DRBD on
RHEL6?


-----Original Message----- 
From: Digimer
Sent: Tuesday, June 04, 2013 8:57 PM
To: Chan Ching Yu, Patrick
Cc: drbd-user at lists.linbit.com
Subject: Re: [DRBD-user] DRBD with RHCS

On 06/03/2013 11:54 PM, Chan Ching Yu, Patrick wrote:
> Hi all,
> I’ve configured DRBD on two nodes running CentOS 6.3, and it works
> well. The disks on the two nodes synchronize successfully.
> master1# cat /proc/drbd
> 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
> ns:204840 nr:2087836 dw:2292676 dr:1037 al:50 bm:128 lo:0 pe:0 ua:0 ap:0
> ep:1 wo:f oos:0
> master2# cat /proc/drbd
> 0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
> ns:2087836 nr:204840 dw:204840 dr:2087836 al:0 bm:128 lo:0 pe:0 ua:0
> ap:0 ep:1 wo:f oos:0
> However, I came across a problem when integrating RHCS (Red Hat
> Cluster Suite) with DRBD.
> I’ve added the DRBD-provided script
> /usr/local/drbd/share/cluster/drbd.sh as an RHCS resource.
> <rm>
>   <resources>
>     <ip address="192.168.129.190" sleeptime="10"/>
>     <drbd name="mdisk" resource="ha"/>
>     <script file="/usr/local/drbd/share/cluster/drbd.sh" name="drbd_script"/>
>   </resources>
>   <service domain="drbd_domain" name="mdisk_svc" recovery="relocate">
>     <ip ref="192.168.129.190"/>
>     <script ref="drbd_script"/>
>   </service>
> </rm>
> However, when I relocate the service to another node, the script has
> no effect.
> Then I tried to run the script manually. I guess the reason is that
> the script does not know which resource to promote/demote.
> [root at master2 ~]# /usr/local/drbd/share/cluster/drbd.sh start
> USAGE: drbdadm primary [OPTION...] {all|RESOURCE...}
> GENERAL OPTIONS:
>    --stacked, -S
>    --dry-run, -d
>    --verbose, -v
>    --config-file=..., -c ...
>    --config-to-test=..., -t ...
>    --drbdsetup=..., -s ...
>    --drbdmeta=..., -m ...
>    --drbd-proxy-ctl=..., -p ...
>    --sh-varname=..., -n ...
>    --peer=..., -P ...
>    --version, -V
>    --setup-option=..., -W ...
>    --help, -h
> OPTIONS FOR primary:
>    --force[=...]
> Version: 8.4.3 (api:1)
> GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by
> root at master2.local, 2013-06-01 07:29:10
> No resource names specified
> How do I integrate this script (drbd.sh) with RHCS?  Should I feed
> the resource name to it via RHCS?
> Thanks very much.
> Regards,
> CY

The <script...> resource agent is designed for init.d-style scripts.
The agent will pass start, stop and status to that script and will look
at the exit code to determine success or failure. There is a
DRBD-specific resource agent, called 'drbd', which can handle promoting
a node's DRBD resource to primary when needed, among other such actions.
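
Roughly, the idea is to reference that drbd resource inside your
service and hang whatever depends on it (a file system, for example)
off it as a child. An untested sketch, where the device, mount point
and fstype are placeholders for whatever your setup actually uses:

<rm>
  <resources>
    <ip address="192.168.129.190" sleeptime="10"/>
    <drbd name="mdisk" resource="ha"/>
  </resources>
  <service domain="drbd_domain" name="mdisk_svc" recovery="relocate">
    <drbd ref="mdisk">
      <fs name="mdisk_fs" device="/dev/drbd0" mountpoint="/mnt/drbd" fstype="ext4"/>
    </drbd>
    <ip ref="192.168.129.190"/>
  </service>
</rm>

Also note that rgmanager can only use the <drbd> agent if drbd.sh and
its drbd.metadata file are in the directory rgmanager searches for
resource agents (normally /usr/share/cluster/); with a from-source
install under /usr/local/drbd, they may need to be copied or symlinked
there.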

As an aside, are you using actual fencing? Without it, you will find
that the cluster locks up when you test failover.
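
For reference, a minimal per-node fencing setup in cluster.conf looks
roughly like the sketch below. fence_ipmilan and all of the names,
addresses and credentials here are placeholders; use whichever fence
device your hardware actually provides:

<clusternode name="master1.local" nodeid="1">
  <fence>
    <method name="ipmi">
      <device name="fence_m1"/>
    </method>
  </fence>
</clusternode>
<!-- master2.local gets an equivalent block -->
<fencedevices>
  <fencedevice agent="fence_ipmilan" name="fence_m1"
               ipaddr="192.168.129.201" login="admin" passwd="secret"
               lanplus="1"/>
</fencedevices>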

Cheers

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education? 



