[DRBD-user] operation monitor with OCF script from release 8.3

Димитър Бойн dboyn at postpath.com
Tue Aug 4 06:14:51 CEST 2009



Hi,
I am using Pacemaker packages from the SuSE repository with CentOS 5.3 x86_64.
I recently compiled and rpmbuilt the newest DRBD release available and upgraded my cluster.
My initial attempt was to use the original OCF script, but its logic seems to be completely out of step with what DRBD 8.3 expects to be asked to do.
So I replaced the OCF script with the one provided with release 8.3 as well.
 
However, the OCF monitor operation was then for some reason reporting that my resource was not running, and crm_mon was constantly resetting it, going in circles:
 showing the resource as Slave->Master->Slave and so on.
Note that this was not actually the case: using "watch cat /proc/drbd" I could see that the resource was in a stable Primary state, and I had in fact mounted the file system and was running a database-type application on top of it.
 
Here is my original DRBD Resource:
<master id="ms-drbd0">
      <meta_attributes id="ma-ms-drbd0">
        <nvpair value="1" id="ma-ms-drbd0-1" name="clone-max"/>
        <nvpair value="1" id="ma-ms-drbd0-2" name="clone-node-max"/>
        <nvpair value="1" id="ma-ms-drbd0-3" name="master-max"/>
        <nvpair value="1" id="ma-ms-drbd0-4" name="master-node-max"/>
        <nvpair id="ma-ms-drbd0-5" name="notify" value="true"/>
        <nvpair value="false" id="ma-ms-drbd0-6" name="globally-unique"/>
        <nvpair value="stopped" id="ma-ms-drbd0-7" name="target-role"/>
      </meta_attributes>
      <primitive class="ocf" provider="heartbeat" type="drbd" id="drbd0">
        <instance_attributes id="ia-drbd0">
          <nvpair id="ia-drbd0-1" name="drbd_resource" value="drbd0"/>
          <nvpair name="clone_overrides_hostname" id="ia-drbd0-2" value="no"/>
          <nvpair name="drbdconf" id="ia-drbd0-3" value="/etc/drbd0.conf"/>
        </instance_attributes>
        <operations>
          <op id="op-drbd0-1" name="monitor" interval="59s" timeout="10s" role="Master"/>
          <op id="op-drbd0-2" name="monitor" interval="60s" timeout="10s" role="Slave"/>
        </operations>
      </primitive>
    </master>
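For comparison, here is a minimal sketch of how the same resource might be pointed at the agent that ships with DRBD 8.3, which installs under the "linbit" provider rather than "heartbeat" (the ids below are illustrative and made up by me; I kept the same drbd_resource name and monitor intervals as in my original resource):

```xml
<!-- Illustrative sketch only: same master/slave wrapper, but using the
     OCF agent shipped with DRBD 8.3 (provider "linbit"). All ids here
     are invented for the example. -->
<master id="ms-drbd0-83">
  <primitive class="ocf" provider="linbit" type="drbd" id="drbd0-83">
    <instance_attributes id="ia-drbd0-83">
      <nvpair id="ia-drbd0-83-1" name="drbd_resource" value="drbd0"/>
    </instance_attributes>
    <operations>
      <!-- distinct per-role monitor intervals, as in my original resource -->
      <op id="op-drbd0-83-1" name="monitor" interval="59s" timeout="10s" role="Master"/>
      <op id="op-drbd0-83-2" name="monitor" interval="60s" timeout="10s" role="Slave"/>
    </operations>
  </primitive>
</master>
```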
 
 
In order to "calm" crm_mon I nuked the following from the Resource:
        <operations>
          <op id="op-drbd0-1" name="monitor" interval="59s" timeout="10s" role="Master"/>
          <op id="op-drbd0-2" name="monitor" interval="60s" timeout="10s" role="Slave"/>
        </operations>
 
I am now concerned that the Resource is not monitored at all. :-(
 
Also going through the OCF drbd script I noticed:
"
drbd_status() {
    role=$(drbdadm role $OCF_RESKEY_resource)
    case $role in
        Primary/*)
            return $OCF_RUNNING
            ;;
        Secondary/*)
            return $OCF_NOT_RUNNING
            ;;
    esac
    return $OCF_ERR_GENERIC
}"
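To see concretely what that case statement does during a Slave-role monitor, here is a small standalone sketch. The numeric values are the standard OCF exit codes (I use OCF_SUCCESS=0 where the quoted script says $OCF_RUNNING), and map_role is a made-up stand-in for drbd_status that is fed the string "drbdadm role <res>" would print, instead of calling drbdadm:

```shell
#!/bin/sh
# Standard OCF exit codes (the real agent sources these from the OCF
# shell functions; the values below are the spec-defined ones).
OCF_SUCCESS=0
OCF_ERR_GENERIC=1
OCF_NOT_RUNNING=7

# map_role: hypothetical stand-in for drbd_status, taking the role
# string as an argument rather than running "drbdadm role".
map_role() {
    case "$1" in
        Primary/*)   return $OCF_SUCCESS ;;
        Secondary/*) return $OCF_NOT_RUNNING ;;
        *)           return $OCF_ERR_GENERIC ;;
    esac
}

map_role "Primary/Secondary";  echo "Primary:   rc=$?"  # rc=0
map_role "Secondary/Primary";  echo "Secondary: rc=$?"  # rc=7, i.e. "not running"
map_role "Unconfigured";       echo "Other:     rc=$?"  # rc=1
```

So a node that is a healthy Secondary reports OCF_NOT_RUNNING to the cluster, which would explain why crm_mon keeps trying to restart the resource.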
 
Does this mean that Slave status is now considered "not running/stopped" ?
 
I need to have only one side of the DRBD peer controlled by this cluster, with the other side in a different cluster; the old script allowed me to run a single clone per site with target-role=Master||Slave.
 
I guess I might have missed a whole new concept about Pacemaker and DRBD peers that you are trying to introduce in 8.3 and later?
 
 
Thanks!
 
./Dimitar Boyn


