[DRBD-user] Best practice advice needed

Andreas Heinlein aheinlein at gmx.com
Fri Dec 28 22:49:04 CET 2012



Hello,

I am currently working on converting our single-server system to a 
2-node failover cluster. I need some advice from more experienced people 
and hope you can help or point me to where I can get it.

We currently have multiple services running on that single server (Web, 
Mail, Storage via CIFS and NFS...). Each has its data in a directory 
under /srv, e.g. /srv/www for Web, /srv/mail for Mail, /srv/data for 
storage and so on. The whole of /srv is one LV on an LVM, the only 
other LV being used for /home, which is also exported via NFS. The 
whole LVM resides on an encrypted partition.

I currently have a test setup working where I simply made one large 
DRBD device, encrypted it, and used the encrypted device as the PV for 
the LVM. But I just realized that this only allows me to fail over all 
services at once, even if only a single service fails, because only one 
node can write to the DRBD device.
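For reference, the current test setup corresponds to a single DRBD resource roughly like the following (node names, IPs and device paths are placeholders, not my real values):

```
# /etc/drbd.d/r0.res -- one big resource, everything stacked on top
resource r0 {
  on nodeA {
    device    /dev/drbd0;
    disk      /dev/sda3;       # whole backing partition
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on nodeB {
    device    /dev/drbd0;
    disk      /dev/sda3;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
# Stacking: /dev/drbd0 -> dm-crypt (cryptsetup) -> PV -> VG -> LVs (/srv, /home)
```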

What would be the best way to allow failing over individual services? I 
can currently think of several different approaches:
* Split it up into multiple DRBD devices, one for each service, and put 
the LVM underneath the DRBD layer to be able to resize when needed. 
Would mean several devices would have to be encrypted individually, 
which adds complexity, but it's not impossible.
* Convert the filesystem to GFS or OCFS and run DRBD in dual-primary 
mode. Would allow for a load-balancing active/active-cluster in the 
future, but I'm afraid of side effects. For example, I read that GFS 
does not fully support inotify/dnotify, which our mail service 
currently makes use of.
* Export the whole /srv via NFS, then remount subdirectories of it e.g. 
under /mnt/mail, /mnt/www... In the case of a service failure, the 
service itself could be moved from A to B, while accessing its data via 
NFS from A. Probably a weird idea not worth looking at.
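To make the first option concrete: with LVM below DRBD, each service's LV would back its own DRBD resource, and each resource could then be promoted to Primary independently on either node. A sketch, with hypothetical names, ports and minor numbers:

```
# /etc/drbd.d/mail.res -- one resource per service (hypothetical example)
resource mail {
  on nodeA {
    device    /dev/drbd1;
    disk      /dev/vg0/mail;   # LV below DRBD, so it can be resized later
    address   10.0.0.1:7789;   # each resource needs its own port
    meta-disk internal;
  }
  on nodeB {
    device    /dev/drbd1;
    disk      /dev/vg0/mail;
    address   10.0.0.2:7789;
    meta-disk internal;
  }
}
# Analogous resources www, data, ... each on its own minor and port.
# The per-service encryption layer would then sit on each /dev/drbdX.
# Failing over a single service: demote on the old node, promote on the
# new one, e.g. "drbdadm primary mail".
```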

Thank you very much,

Andreas


