Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
/ 2004-07-16 12:10:35 +0800
\ Federico Sevilla III:
> Hi Jean-Guillaume,
> (cc DRBD mailing list)
>
> On Thu, Jul 15, 2004 at 05:17:03PM +0200, Jean-Guillaume LALANNE wrote:
> > I have seen your post on the DRBD mailing list dated Friday,
> > 7 May 2004. Because I have seen that you were successful in running
> > DRBD and PostgreSQL in a production environment, I am contacting you
> > to ask whether this solution is still acceptable. Have you had any
> > problems since your post? Have you also tried installing Heartbeat to
> > manage the fail-over mechanism?
> >
> > I am currently trying to work out a high-availability architecture
> > for a PostgreSQL box, and I am wondering if DRBD could be the
> > replication solution I am looking for.
>
> We have a solution using Debian GNU/Linux, DRBD, Heartbeat, PostgreSQL
> and Mon, and continue to be happy with its performance in a production
> environment (one of the biggest hypermarts in the Philippines). I
> continue to recommend this solution to clients who need highly
> available PostgreSQL servers. My only qualm with this setup is that one
> of the two nodes will always be idle, and that's a waste in my opinion.
>
> What would be great is if PostgreSQL had an active/active clustering
> solution that does load balancing and failover on its own. I don't see
> any acceptable implementation of this at the moment, though, so
> active/inactive using Heartbeat and DRBD is still the way to go.
>
> --> Jijo

Yes. If you have some APP you very much like, and now you want to make it
load-balanced with two nodes and shared storage: make sure that you can
have two instances of APP running on the same set of backing storage
files _ON ONE NODE_ first.

If APP can NOT do that, replicating its files over to some other node can
only possibly solve this by magic. We may sometimes be close, but we are
only craftsmen :)

If APP IS DESIGNED to run on several nodes with shared backing storage,
ok... then to add some availability to it you just have to wait until we
decide that DRBD is ready for concurrent access ...

Of course, a specialized, application-specific clustering and replication
solution is probably the best way (if one exists).

What you can do anyway (depending on your data set) is to serve separate
parts of the data with two different servers: two different sets of
backing-storage database files, two instances of the daemon, probably two
different service ports. You then have two DRBD resources, and a service
stack on top of each, which you can fail over independently, and which
during normal operation will reside on different nodes, so neither is
idle. Of course, for this to be useful you need two or more logically
(mostly) independent data sets.

hth,
Lars Ellenberg
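
For reference, the active/passive stack Federico describes might look
roughly like the sketch below. This is only an illustration: the node
names, devices, addresses and mount point are made up, and the exact
syntax varies between DRBD releases of that era and later ones.

    # /etc/drbd.conf -- one resource backing the PostgreSQL data directory
    resource r0 {
      protocol C;
      on nodea {
        device    /dev/drbd0;
        disk      /dev/sda7;
        address   192.168.10.1:7788;
        meta-disk internal;
      }
      on nodeb {
        device    /dev/drbd0;
        disk      /dev/sda7;
        address   192.168.10.2:7788;
        meta-disk internal;
      }
    }

    # /etc/ha.d/haresources -- Heartbeat (v1) starts these left to right
    # on the active node: promote DRBD, mount the file system, take the
    # service IP, start PostgreSQL.
    nodea drbddisk::r0 Filesystem::/dev/drbd0::/var/lib/postgres::ext3 IPaddr::192.168.10.100 postgresql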
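
And a rough sketch of the split layout Lars suggests, again with made-up
names: two DRBD resources, each with its own file system, service IP and
PostgreSQL instance (separate data directory and port), each preferring a
different node during normal operation. The postgresql-pg0/postgresql-pg1
init scripts are hypothetical.

    # /etc/drbd.conf would gain a second resource, e.g. r1 on /dev/drbd1,
    # backed by a second partition on both nodes.

    # /etc/ha.d/haresources -- one line per resource group; the first
    # field names the node that runs the group while both nodes are up,
    # and either group fails over to the survivor if its node dies.
    nodea drbddisk::r0 Filesystem::/dev/drbd0::/srv/pg0::ext3 IPaddr::192.168.10.100 postgresql-pg0
    nodeb drbddisk::r1 Filesystem::/dev/drbd1::/srv/pg1::ext3 IPaddr::192.168.10.101 postgresql-pg1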