[DRBD-user] best way to achieve a load balancer with drbd + ocfs2 + openais

unni krishnan unnikrishnan.a at gmail.com
Fri Dec 11 14:55:41 CET 2009

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hello,

I have gone through the documentation, but I am not able to understand
the role of the DLM in the OCFS2-on-DRBD case.

My setup is a load balancer + failover cluster for OpenVZ VPSs.

1. There are two physical servers.
2. The DRBD device drbd1, formatted with OCFS2, is mounted at /vz on
both nodes.
3. The DRBD devices on the two nodes are in dual-primary mode (rough
config sketch after this list).
4. One VPS runs on each node, and if a node fails, openais +
Pacemaker fails over the VPS running there to the surviving node.
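
For reference, the DRBD resource is configured for dual-primary
roughly like this (disk paths, hostnames and addresses below are
placeholders, not my real values):

    resource r0 {
        protocol C;
        net {
            allow-two-primaries;      # both nodes Primary at once, needed for OCFS2
        }
        startup {
            become-primary-on both;   # promote both nodes at boot
        }
        on node1 {
            device    /dev/drbd1;
            disk      /dev/sdb1;            # placeholder backing device
            address   192.168.10.1:7789;    # replication over bond0
            meta-disk internal;
        }
        on node2 {
            device    /dev/drbd1;
            disk      /dev/sdb1;
            address   192.168.10.2:7789;
            meta-disk internal;
        }
    }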

1. openais communicates through two interfaces, eth0 and bond0 (see
the openais.conf sketch below).
2. DRBD communicates only through bond0, which is the bonding master
of eth1 and eth2.
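
The relevant totem section of /etc/ais/openais.conf looks roughly
like this (network addresses are placeholders); rrp_mode is what
makes totem use both rings:

    totem {
        version: 2
        rrp_mode: passive                # redundant ring protocol over both interfaces
        interface {
            ringnumber: 0
            bindnetaddr: 192.168.1.0     # eth0 network (placeholder)
            mcastaddr: 226.94.1.1
            mcastport: 5405
        }
        interface {
            ringnumber: 1
            bindnetaddr: 192.168.10.0    # bond0 network (placeholder)
            mcastaddr: 226.94.1.2
            mcastport: 5405
        }
    }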

openais manages the two nodes and the two VPSs (one VPS on each
node). OCFS2 uses its own heartbeat.

3. DRBD and OCFS2 are not added to the CRM.

Here I am confused and not sure how to add DRBD + OCFS2 to the CRM,
given that OCFS2 uses its own heartbeat.
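
What I had in mind, going by the Pacemaker docs, is roughly the
following crm configuration, but I do not know whether it is valid
without a userspace DLM (resource names are mine, untested):

    primitive p_drbd ocf:linbit:drbd \
        params drbd_resource="r0" \
        op monitor interval="20" role="Master" \
        op monitor interval="30" role="Slave"
    ms ms_drbd p_drbd \
        meta master-max="2" clone-max="2" notify="true" interleave="true"
    primitive p_fs ocf:heartbeat:Filesystem \
        params device="/dev/drbd1" directory="/vz" fstype="ocfs2" \
        op monitor interval="20"
    clone cl_fs p_fs meta interleave="true"
    order o_drbd_before_fs inf: ms_drbd:promote cl_fs:start
    colocation c_fs_on_drbd inf: cl_fs ms_drbd:Master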

PROBLEM :

If there is any problem with the bonding, the DRBD devices will
disconnect. But openais will keep communicating through eth0, and
since DRBD + OCFS2 are not added to the CRM, the CRM will not fail
over the VPSs to another node; the two VPSs will keep running as they
are, each writing to its now-disconnected local replica.

After DRBD reconnects, it will detect the split brain and ask me to
discard the data on one of the nodes. I want to keep the data on
both, so I need a solution to this problem.
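
From my reading of the DRBD User's Guide, a net/fencing setup along
these lines should at least stop DRBD from auto-discarding data after
a split brain, but I am not sure it actually solves my problem
(untested sketch; the fence scripts ship with DRBD 8.3 and assume
DRBD is managed by the CRM, which mine currently is not):

    net {
        allow-two-primaries;
        after-sb-0pri discard-zero-changes;  # auto-resolve only if one side has no new data
        after-sb-1pri consensus;
        after-sb-2pri disconnect;            # never auto-discard when both were Primary
    }
    disk {
        fencing resource-and-stonith;        # freeze I/O and fence the peer on disconnect
    }
    handlers {
        fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
        after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
    }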

NOTE: I am running CentOS 5, so there is no userspace DLM available
as there is in SUSE.

-- 
Regards,
Unni


