[DRBD-user] Two NFS servers in passive-active

Christian Balzer chibi at gol.com
Tue Apr 6 04:51:34 CEST 2010



On Mon, 5 Apr 2010 08:52:21 -0500 Alex Dean wrote:
> On Apr 5, 2010, at 5:16 AM, Olivier Le Cam wrote:
[active/active but with 2 separate resources DRBD cluster]
> HA means redundant hardware.  Full utilization of all hardware  
> (including your secondary nodes) means you don't really have HA.
> Can a single server handle the load of serving both shares at an  
> acceptable level of performance?  If the answer is yes, why not just  
> put them on your current single primary?  If the answer is no, you  
> could be setting yourself up for problems.
All the DRBD/HA clusters I run here in production are of the same basic
design as Olivier laid out. But no NFS servers, so I can't really comment
on that particular bit. Other than that, NFS seems to be quite a can of
worms in general.
As for Alex's comment, if both machines are 100% busy all the time then
his argument stands very well. But machines in that state would be
upgraded or augmented with another cluster pair in any real-life situation
with sufficient funding anyway. Our rule of thumb here is that 60%
utilization (of all/any resources) will result in the next pair being
rolled out. What that means is that in a failure case the surviving node
would get busy, but not likely in a debilitating fashion. Meanwhile, in
normal operations (which make up about 99.9% of our cluster lifetime, and
most of the downtime consists of planned maintenance in non-busy times)
this "active-active" approach means that peak loads are handled in a MUCH
improved fashion over an active-passive solution with just one node doing
all the heavy lifting, ALL the time.
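The 60% rule of thumb above can be sketched as a trivial capacity check. This is just an illustration of the reasoning, not anything from DRBD itself; the function names and the way utilization is modeled (utilizations of the two resources simply add on the survivor) are my own assumptions:

```python
# Sketch of the capacity-planning rule of thumb described above:
# in a two-resource active/active pair, each node should stay under
# ~60% utilization so that after a failover the surviving node can
# absorb its peer's load without being hopelessly overloaded.
# All names here are illustrative, not from DRBD or the original post.

ROLLOUT_THRESHOLD = 0.60  # roll out the next cluster pair beyond this


def needs_new_pair(util_a: float, util_b: float) -> bool:
    """True if either node exceeds the rule-of-thumb threshold."""
    return max(util_a, util_b) > ROLLOUT_THRESHOLD


def post_failover_utilization(util_a: float, util_b: float) -> float:
    """Rough load on the surviving node if its peer fails: it must
    serve both DRBD resources, so the utilizations simply add."""
    return util_a + util_b


# Example: two nodes at 50% and 25% -- no new pair needed yet,
# and a failover leaves the survivor at roughly 75%, busy but alive.
print(needs_new_pair(0.50, 0.25))             # False
print(post_failover_utilization(0.50, 0.25))  # 0.75
```

Of course real utilization is per-resource (CPU, disk, network) and loads don't add perfectly linearly, but the headroom argument is the same.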

Very good remark about the third server, something that many people
overlook when planning DRBD clusters.


Christian Balzer        Network/Systems Engineer                
chibi at gol.com   	Global OnLine Japan/Fusion Communications
