[DRBD-user] DRBD + OCFS2 Active Active

Dan Frincu df.cluster at gmail.com
Thu Sep 8 11:09:12 CEST 2011



On Wed, Sep 7, 2011 at 10:28 PM, Nick Khamis <symack at gmail.com> wrote:
> Hello Everyone,
> We are looking to set up write-intensive services using database
> technologies. Doing some research, I found the attached document.
> Is there an issue in terms of performance using DRBD in an
> active/active setup with, say, a MySQL database? That being said,
> what is the best combination for clustering using DRBD:
> OCFS2 active/active
> EXT3 active/passive

The second one.
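For reference, a minimal DRBD resource definition for such an active/passive setup could look like the sketch below. All names, devices and IP addresses here are placeholders (not from this thread); adapt them to your environment.

```
# /etc/drbd.d/mysql.res -- hypothetical resource definition;
# hostnames, devices and addresses are examples only
resource mysql {
  protocol C;            # synchronous replication
  device    /dev/drbd0;
  disk      /dev/sdb1;
  meta-disk internal;
  on node1 {
    address 10.0.0.1:7789;
  }
  on node2 {
    address 10.0.0.2:7789;
  }
}
```

Only the Primary node mounts the EXT3 filesystem and runs MySQL; the cluster manager promotes the other node on failover.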

> On top of HA, load balancing is also important to us.

The document also detailed MySQL Cluster using the ndb engine. There
are benefits to such a solution, but also downsides, so it is best to
evaluate all of your requirements and see which fits best.

For the MySQL Cluster with ndb approach, you have to assess the size
of the database(s), estimate the growth per week, month and year, and
plan your hardware requirements and future expansion accordingly,
because ndb is an in-memory database. MySQL Cluster scales to multiple
nodes by partitioning the database: the primary copy of a partition
lives on one node, and one or more (at minimum one) backup copies of
the same partition live on other nodes, all stored in RAM. As more
nodes are added to the MySQL Cluster, the partitions are split further
and replicated onto the new nodes as well, which allows roughly linear
scaling, IIRC.
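The primary/backup partition placement described above can be sketched as follows. This is purely illustrative (the function and layout are my own invention, not NDB's actual placement algorithm): each partition gets a primary on one node and its backup copy on a different node.

```python
# Illustrative sketch of MySQL Cluster style partition placement.
# This is NOT NDB's real algorithm -- names and layout are assumptions.
def place_partitions(num_partitions, nodes):
    """Assign each partition a primary node and one backup copy on a
    different node (the minimum-one-backup scheme described above).
    Requires at least two nodes."""
    placement = {}
    for p in range(num_partitions):
        primary = nodes[p % len(nodes)]
        backup = nodes[(p + 1) % len(nodes)]  # always a different node
        placement[p] = {"primary": primary, "backup": backup}
    return placement

layout = place_partitions(4, ["node1", "node2"])
```

Adding nodes to the `nodes` list spreads the partitions (and their backups) across more machines, which is where the near-linear scaling comes from.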

Every node maintains a transaction log on disk, which allows a node to
be recovered from that log. However, a node failure does not lead to a
service interruption, as there is always at least one other node
holding a backup copy of the failed node's partition in memory. When
the failure is detected, the node keeping the backup copy promotes its
copy to primary and assigns a new node to keep a backup copy. Also,
all transactions are performed atomically via a two-phase commit
protocol.
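The two-phase commit just mentioned can be sketched like this. It is a generic, minimal model of the protocol (not NDB's implementation): a coordinator first asks every participant to prepare, and commits only if all of them vote yes; otherwise everyone aborts.

```python
# Minimal, generic two-phase commit sketch (illustrative only;
# not the actual NDB implementation).
class Participant:
    def __init__(self, name, can_commit=True):
        self.name = name
        self.can_commit = can_commit
        self.state = "idle"

    def prepare(self):
        # Phase 1: vote yes/no and hold the transaction pending
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit

    def finish(self, commit):
        # Phase 2: apply the coordinator's global decision
        self.state = "committed" if commit else "aborted"

def two_phase_commit(participants):
    votes = [p.prepare() for p in participants]  # phase 1: collect votes
    decision = all(votes)                        # commit only if unanimous
    for p in participants:                       # phase 2: tell everyone
        p.finish(decision)
    return decision
```

A single "no" vote (e.g. `Participant("b", can_commit=False)`) makes the whole transaction abort on every node, which is what gives the atomicity.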

MySQL Cluster usually does not imply the use of another clustering
technology on the same nodes, and given its high memory consumption,
it is usually best not to mix things where it isn't needed. One
possible scenario would be to have all writes performed on the MySQL
Cluster, and from it have N frontends set up as replication slaves for
read requests. Load balancing writes can be done by having a frontend
issue requests to each data node, but it is recommended that requests
are sent to the DC (IIRC), which will (based on which node holds the
writeable copy of a partition) forward the request to the data node
storing it; that node then sends the reply to the frontend.

There are multiple possible scenarios, but they usually involve
performing writes on the MySQL Cluster (it supports simultaneous
read/write access on every node via the two-phase commit protocol) and
serving read requests either from the cluster or from replication
slaves, the second option being the recommended one.
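The write-to-cluster / read-from-slaves split above amounts to a simple routing rule. The sketch below is hypothetical (the `Router` class and node names are mine, not from any MySQL library): writes go to the cluster endpoint, reads are round-robined across the replication slaves.

```python
# Hypothetical sketch of the read/write split described above.
import itertools

class Router:
    def __init__(self, cluster, replicas):
        self.cluster = cluster                      # MySQL Cluster endpoint
        self._replicas = itertools.cycle(replicas)  # round-robin the reads

    def route(self, query):
        # Writes go to the cluster; reads are load-balanced
        # across the replication slaves.
        if query.strip().upper().startswith(("INSERT", "UPDATE", "DELETE")):
            return self.cluster
        return next(self._replicas)

router = Router("ndb-cluster", ["slave1", "slave2"])
```

In practice this logic usually lives in a proxy or in the application's connection layer rather than in hand-rolled code.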

MySQL Cluster holds all of the databases in memory, so it is very
fast; it has self-healing capabilities and built-in high availability,
it can use all of the CPU cores in a system, and it relies on network
transport for communication between nodes, so one can upgrade the
interconnects to InfiniBand or some other solution for maximum
performance.

The only use case for DRBD in a MySQL Cluster would be to also
replicate the logs that it flushes to disk. In the event of a node
failure the cluster is still fine, as explained above, but restoring
a node might take some time: get the log from the failed node, copy it
to a new node, load it into RAM, join the node to the cluster, and let
the node update its data to match the cluster state (or fix the failed
node, restore the logs or not, load them into RAM, and so on). By
keeping the logs on a DRBD partition replicated to another server, the
data remains available: if you have a spare server, it is just a
matter of promoting the DRBD partition to Primary, mounting it,
exporting it via NFS or whatever, mounting it on the spare server,
loading the logs into RAM, and joining the node to the cluster. This
reduces the MTTR.
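The recovery sequence on the spare server could look roughly like this (resource and mount point names are placeholders, not from the original post; adapt to your setup):

```
# Hypothetical recovery steps on the spare server
drbdadm primary r0               # promote the DRBD resource
mount /dev/drbd0 /mnt/ndb-logs   # mount the replicated log partition
# export the logs (e.g. via NFS) or load them locally, then rejoin
# the node to the cluster so it can catch up from the logs
```

In a Pacemaker-managed setup the promote/mount steps would normally be driven by the cluster manager rather than run by hand.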

Hope it sheds some light onto the picture.


> Thanks in Advance,
> Nick
> _______________________________________________
> drbd-user mailing list
> drbd-user at lists.linbit.com
> http://lists.linbit.com/mailman/listinfo/drbd-user

Dan Frincu
