[DRBD-user] Setting Up Red Hat Cluster..

Brian Candler B.Candler at pobox.com
Thu Jul 3 09:39:00 CEST 2008


On Thu, Jul 03, 2008 at 09:06:30AM +0530, Singh Raina, Ajeet wrote:
>    We are now attempting to set up a cluster, controlled through
>    Opforce. So say I have two nodes, Node-1 and Node-2, and a
>    management station (with Opforce).
>    I don't have shared storage yet (we ordered an MS1500, but it will
>    arrive in a few months). What I want to try is an alternative so
>    that I can set something up in the meantime.
>    I have a RHEL 4.0 machine with 40 GB. Will that do?

I think perhaps you have missed the point of what DRBD is.

DRBD mirrors data between a *pair* of servers. One side is primary, the
other side is secondary. The primary side has read/write access, and any
changes made there are replicated, in real time, to the secondary. (*)
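To make the pairing concrete, here is a minimal sketch of what a DRBD resource definition looks like in /etc/drbd.conf (DRBD 8 syntax). The hostnames, disk device, and IP addresses are placeholders for illustration, not taken from the original question:

```
resource r0 {
  protocol C;               # synchronous replication: writes confirmed on both nodes

  on node1 {                # must match `uname -n` on the first server
    device    /dev/drbd0;   # the replicated block device applications use
    disk      /dev/sda7;    # the backing partition being mirrored
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/sda7;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
```

The point to notice is that there are exactly two `on <host>` sections: DRBD replicates between a pair of servers, each contributing its own local disk.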

If you want to do this on a single server, then you can put two disks in a
single server and configure "mirroring", e.g. with md. However the server
itself becomes a single point of failure in that case. DRBD allows you to
build a solution without this point of failure, since there are two servers.
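For the single-server case mentioned above, a mirror with md boils down to a few commands. This is a hedged sketch; /dev/sda1 and /dev/sdb1 are example partitions on two separate physical disks, and the commands need root:

```shell
# Build a RAID-1 (mirror) array from two partitions on different disks
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# Put a filesystem on the mirror and mount it as usual
mkfs.ext3 /dev/md0
mount /dev/md0 /data

# Watch the initial sync and the ongoing mirror status
cat /proc/mdstat
```

Either disk can then fail without losing data, but if the server itself dies, so does access to the data; that is the single point of failure DRBD's two-server design removes.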

So if you're looking for a solution which involves a single server, then
you're looking in the wrong place. Go search for information on mirroring
with md (Linux software RAID; its status shows up in /proc/mdstat). If you
want to provide block-level access to this data to
several other servers, search for information on iSCSI. If you want to
provide filesystem-level access then look for NFS. None of this is anything
to do with DRBD.
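For the NFS route, the whole of the server-side configuration is one line per export. A sketch (the path and subnet are made-up examples):

```
# /etc/exports -- share /data read-write with one local subnet
/data  192.168.1.0/24(rw,sync,no_subtree_check)
```

After editing the file, `exportfs -ra` re-reads it, and clients mount the share with something like `mount server:/data /mnt`. Again: this gives several machines access to one server's data, which is a different problem from the one DRBD solves.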

If you do decide to use DRBD, then you need to read the user guide. This is
one of the best examples of open-source documentation I've seen. Once you
have a *specific* problem, that is, something which doesn't work in the way
you expect when you try it out on a test system, then ask again. There's
some helpful advice on how to get the best results from mailing lists at



(*) Normally the data is not available for use at the secondary side; it's
there just in case the primary fails. The exception is an advanced setup
with a cluster filesystem like GFS or OCFS2, in which case you can configure
primary-primary operation; the cluster filesystem takes care of locking, so
the two sides don't stomp on each other's data.
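In DRBD 8 that primary-primary mode is switched on in the resource's `net` section. A sketch of just the relevant fragment (resource name as before is an example):

```
resource r0 {
  net {
    allow-two-primaries;   # both nodes may be Primary at once -- ONLY safe
                           # with a cluster filesystem (GFS, OCFS2) on top
  }
  # ... the rest of the resource definition as usual ...
}
```

Without a cluster filesystem coordinating locks, mounting an ordinary filesystem like ext3 on both primaries simultaneously will corrupt it.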
