I am running DRBD on RH Cluster Suite 5 presently. So far it behaves
fairly well for me, although I have run across a vexing issue which I
will post in another message. I set up individual resources for my DRBD
partitions and use a script to manage the start, stop and status
operations. I cobbled one together from the ones distributed with
Heartbeat. The one for version 2 is much more complex, and I doubt RH
Cluster can even make use of most of it.
I have Linux bring up the kernel module and start all resources as
Secondary, and have the cluster manager make them Primary (start) and
Secondary (stop) as necessary.
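To spell that flow out (the resource name r0 here is just an example, and
these are the standard drbdadm commands rather than lines from my script):

# At boot, outside the cluster manager: load the module and bring every
# resource up. They all come up in the Secondary role.
modprobe drbd
drbdadm up all

# The cluster manager then "starts" a service on a node by promoting the
# resource, and "stops" it by demoting back to Secondary before the peer
# takes over.
drbdadm primary r0
drbdadm secondary r0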
I'm attaching my primary script in case anyone finds it useful.
Disclaimer: I am not an experienced shell programmer. No warranties
etc. Would love improvements if anyone has suggestions. For each
resource, I have an additional script called drbd-res.sh which includes
the attached script, sets the resource name, and provides the start, stop
and status operations like so:
. "$(dirname "$0")/drbd.sh"
RESOURCE="res"

# $CMD and the drbd_start/drbd_stop/drbd_status functions are presumably
# set up by the sourced drbd.sh.
case "$CMD" in
    start)
        drbd_start
        ;;
    stop)
        drbd_stop
        ;;
    status)
        drbd_status
        ;;
    *)
        echo "Usage: $0 {start|stop|status}"
        exit 1
        ;;
esac
exit 0
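In case the attachment doesn't come through, a minimal drbd.sh along these
lines would be enough to drive the wrapper above. This is only a sketch,
not the attached file: the function names and the RESOURCE/CMD variables
match what the wrapper uses, but the drbdadm role call (DRBD 8 syntax;
older releases use drbdadm state) and the exit codes are just one way to
do it.

# Sketch of a drbd.sh the wrapper could source -- illustrative only.
# Sourcing keeps the caller's positional parameters, so $1 here is the
# argument given to the per-resource wrapper (start, stop or status).
CMD="$1"

drbd_start() {
    # Promote this node to Primary for the resource set by the wrapper.
    # Exit non-zero on failure so the wrapper's final "exit 0" cannot
    # mask it from the cluster manager.
    drbdadm primary "$RESOURCE" || exit 1
}

drbd_stop() {
    # Demote back to Secondary so the peer can take over.
    drbdadm secondary "$RESOURCE" || exit 1
}

drbd_status() {
    # "drbdadm role" prints something like Primary/Secondary; report
    # running only when this node is the Primary.
    case "$(drbdadm role "$RESOURCE")" in
        Primary/*) echo "running (Primary)"; exit 0 ;;
        *)         echo "stopped";           exit 3 ;;
    esac
}

The per-resource wrapper is what gets handed to rgmanager as an ordinary
script resource, and it can be driven by hand for a quick sanity check:
run it with start, then status, then stop on one node and watch the roles
change.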
Chris
> Hey All,
> Just wondering if someone could shed some light on my situation? I
> currently have one cluster with 3 servers in it and I want to use drbd
> between 2 of them. But instead of using Heartbeat I want to use the
> failover option in Cluster Suite 5. This is what I have so far. My
> question is once I have drbd up what do I do next? My next question is
> will this work, or do I have to have Heartbeat?
> 1. Cluster is up and operating fine.
> 2. rgmanager is up and monitoring the services
> 3. clvmd is running
> 4. gnbd_export is running on the 2 nodes using drbd, and gnbd_import is
> running on the other node, pointing to the VIP of the 2 drbd nodes.
> 5. I have a 10 GB test partition created on both of the drbd hosts.
> 6. drbd is running on 2 of the nodes, meaning I have successfully issued
> the drbdadm up all command with no errors.