[DRBD-user] RE: Food for thoughts please - DRBD GFS2 CLVM etc

Theophanis Kontogiannis theophanis_kontogiannis at yahoo.gr
Fri Mar 14 21:51:28 CET 2008

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hello again,

After giving it some more thought, it looks to me that my major unknown is how
to handle a split-brain situation via the cluster manager.
I have been googling for days but cannot find anything apart from Heartbeat
references.
To resolve split brain, can I only use Heartbeat? Is there no way to use dopd
and the related scripts via cman?
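
For reference, what I have in mind at the DRBD level is something like the
following drbd.conf fragment (a sketch assuming DRBD 8 syntax; the policies
shown and the notification script path are only illustrative examples):

    resource r0 {
      net {
        allow-two-primaries;                 # needed for the primary/primary setup
        after-sb-0pri discard-zero-changes;  # no primaries: keep the node that has changes
        after-sb-1pri discard-secondary;     # one primary: discard the secondary's changes
        after-sb-2pri disconnect;            # two primaries: do not auto-resolve, stay disconnected
      }
      handlers {
        # placeholder notification script, mails root when a split brain is detected
        split-brain "/usr/local/sbin/notify-split-brain.sh root";
      }
    }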

Thank you again for your time.
Theophanis Kontogiannis



		_____________________________________________
		From: Theophanis Kontogiannis 
		Sent: Wednesday, March 12, 2008 5:17 PM
		To: drbd-user at lists.linbit.com
		Subject: Food for thoughts please - DRBD GFS2 CLVM etc

		Hello All,

		I am sending this e-mail to the list to ask for some food for
thought.

		I have two identical servers based on 64-bit AMD X2 CPUs,
loaded with CentOS 5.
		Memory and resources are not a problem; I began with small
systems and each can expand up to 8 SATA disks, 2 IDE disks and 16 GB of RAM.

		Right now both have the same hard disk setup: one 80 GB and
one 320 GB disk, both IDE, holding /boot and / on RAID-1.
		The leftover space on both disks will be used as DRBD devices
(that is, /dev/hda4 --> /dev/drbd0 and /dev/hdb4 --> /dev/drbd1) in a
primary/primary configuration.
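
		Roughly, I have in mind DRBD resources like the following (a
sketch in DRBD 8 drbd.conf syntax; the node names, addresses and ports are
just placeholders):

    resource r0 {
      protocol C;
      net     { allow-two-primaries; }      # both nodes primary at the same time
      startup { become-primary-on both; }
      on node1 {
        device    /dev/drbd0;
        disk      /dev/hda4;
        address   10.0.0.1:7788;
        meta-disk internal;
      }
      on node2 {
        device    /dev/drbd0;
        disk      /dev/hda4;
        address   10.0.0.2:7788;
        meta-disk internal;
      }
    }
    # r1 would look the same, mapping /dev/hdb4 to /dev/drbd1 on another port (e.g. 7789)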

		The active applications I plan to run are:

1.	Oracle
2.	MySQL production
3.	MySQL development
4.	Apache
5.	document storage
6.	and, in the future, compilation and execution of MPI-based code
			
		I have thought about possible implementations and ended up
with two scenarios.

		FIRST:

		/dev/drbd0 and /dev/drbd1 become physical volumes of one
volume group. Then, running cman, gfs2 and clvmd, I create one large logical
volume out of them and format it as GFS2. All the disk space is
simultaneously available and mounted as GFS2 on both servers. I create the
directories, with the proper permissions, that the applications will use.
Using system-config-cluster or Conga, I configure the services I want and
"run" them on the nodes I select.
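
		The LVM/GFS2 part would be roughly along these lines (a sketch
only; the volume group, cluster and mount point names are placeholders):

    pvcreate /dev/drbd0 /dev/drbd1
    vgcreate -c y vg_data /dev/drbd0 /dev/drbd1    # -c y marks the VG as clustered (clvmd)
    lvcreate -l 100%FREE -n lv_data vg_data
    # lock_dlm locking; "mycluster" must match the cluster name in cluster.conf; -j 2 = one journal per node
    mkfs.gfs2 -p lock_dlm -t mycluster:data -j 2 /dev/vg_data/lv_data
    mount -t gfs2 /dev/vg_data/lv_data /data       # mounted on both nodes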

		This scenario looks like it gives me the nice option of live
backup from either node (since the whole file system is mounted on both nodes
at the same time), simple application migration if a node fails, and easy
expansion of the file system (I add the new /dev/drbd device to the volume
group when I add a new disk and then grow GFS2). That way I keep a coherent
image of my files and at the same time I can keep adding disks, applications
or whatever else easily.
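
		Expansion would then be something like this (again a sketch,
reusing the placeholder names from above):

    pvcreate /dev/drbd2                          # the new DRBD device on the new disk
    vgextend vg_data /dev/drbd2
    lvextend -l +100%FREE /dev/vg_data/lv_data
    gfs2_grow /data                              # grows the mounted GFS2 file system in place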

		The problem with this setup, as it seems to me, is how to use
the cluster manager (RHEL 5) in a way that will automatically migrate the
applications if DRBD fails on one node. Is there any way to use cman instead
of Heartbeat to manage, first of all, the underlying DRBD devices? And then,
how will cman know that it must not start the application because the file
system could not be mounted? Maybe by using a custom script as a resource?
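
		The custom-script idea would be something like the following
cluster.conf fragment (a sketch of rgmanager syntax; the check script itself
is hypothetical, something whose "start" fails unless the DRBD resource is
UpToDate):

    <rm>
      <resources>
        <script name="drbd0-check" file="/usr/local/sbin/check-drbd0.sh"/>  <!-- hypothetical check script -->
        <script name="httpd" file="/etc/init.d/httpd"/>
      </resources>
      <service autostart="1" name="web" recovery="relocate">
        <script ref="drbd0-check">
          <script ref="httpd"/>  <!-- nested: httpd is started only if the DRBD check succeeds -->
        </script>
      </service>
    </rm>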


		SECOND: 

		Again, the /dev/drbdX devices are physical volumes of the same
volume group.

		This time, however, I create separate LVs formatted with ext3,
one for every application that will run. Then, with cman, I also create a
resource for the LV so that it is mounted before the service that uses it is
started. This way, the question from the previous scenario, how to control
the DRBD devices via cman, is somehow solved: if the LV cannot be mounted,
that means there is a problem with DRBD, and since the service depends on the
respective LV being mounted first, it will not start.
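
		In cluster.conf terms I imagine something like this per
application (a sketch of rgmanager syntax; the device, mount point and
service names are placeholders):

    <rm>
      <resources>
        <fs name="mysql-fs" device="/dev/vg_data/lv_mysql" mountpoint="/var/lib/mysql"
            fstype="ext3" force_unmount="1"/>
        <script name="mysqld" file="/etc/init.d/mysqld"/>
      </resources>
      <service autostart="1" name="mysql-prod" recovery="relocate">
        <fs ref="mysql-fs">
          <script ref="mysqld"/>  <!-- mysqld starts only after its ext3 LV is mounted -->
        </fs>
      </service>
    </rm>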

		However, I think this does not scale well, and it also looks
like I will have an issue with backups.

		For backup I will use an external USB disk connected to one of
the servers, to which the already-mounted file systems will be backed up.
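
		The backup itself would be nothing fancier than mounting the
USB disk and copying (the device and paths here are placeholders):

    mount /dev/sdc1 /mnt/usb-backup
    rsync -a --delete /data/ /mnt/usb-backup/data/
    umount /mnt/usb-backup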

		I would appreciate your comments and any experiences you can
share.


		Sincerely,

		Theophanis Kontogiannis

		

