[DRBD-user] DRBD Recovery actions without Pacemaker

Digimer lists at alteeve.ca
Fri Jul 8 05:40:44 CEST 2016



On 07/07/16 11:14 PM, Klint Gore wrote:
> *From:*drbd-user-bounces at lists.linbit.com
> [mailto:drbd-user-bounces at lists.linbit.com] *On Behalf Of *James Ault
> *Sent:* Friday, 8 July 2016 2:02 AM
> *To:* James Ault; drbd-user at lists.linbit.com
> *Subject:* Re: [DRBD-user] DRBD Recovery actions without Pacemaker
> 
>  
> 
> I see the Manual Failover section of the DRBD 8.4.x manual, and I see
> that it requires that the file system be umounted before attempting to
> promote and mount the file system on the secondary.
> 
> What I meant by "those status flags" in my first message is that when a
> node mounts a file system, that file system is marked as mounted
> somewhere on that device.   The "mounted" status flag is what I'm trying
> to describe, and I'm not sure if I have the correct name for it.
> 
> Does pacemaker or manual failover handle the case where a file server
> experiences a hard failure where the umount operation is impossible?   
> How can the secondary copy of the file system be mounted if the umount
> operation never occurred and cannot occur on server1?
> 
>  
> 
> The basic manual failover described in the manual is for when you choose
> to manually switch over to the other machine.  You would use that if you
> wanted to do maintenance on the primary.
> 
> If the primary dies by itself, you don’t need to unmount it - that’s
> where fencing comes into play.  You need to make sure that the “dead”
> node is well and truly dead and going to stay that way.

"and going to stay that way."

A bit of nuance:

It's perfectly fine for the fenced node to reboot, and that is the most
common configuration. All that really matters from a fence/stonith
action's perspective is that the node was "off". When it boots back up
(assuming it can), the software (pacemaker, drbd, etc.) comes up in a
"clean" state. It will not try to do anything related to the cluster
until it rejoins.
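For reference, DRBD itself can invoke fencing through the resource
configuration, independent of pacemaker. A minimal sketch for DRBD 8.4
follows; the resource name 'r0' and the handler path are placeholders,
and the handler itself would be a site-specific script (e.g. driving
IPMI) that you write, not something shipped with DRBD:

  # /etc/drbd.d/r0.res -- fencing sketch; 'r0' and paths are examples
  resource r0 {
    disk {
      # Freeze I/O and call the fence-peer handler when the peer
      # becomes unreachable, resuming only once fencing succeeds.
      fencing resource-and-stonith;
    }
    handlers {
      # Site-specific script that powers off the peer and exits
      # with a code DRBD understands (7 = peer was stonith'ed).
      fence-peer "/usr/local/sbin/fence-peer.sh";
    }
  }

With this in place, a surviving Secondary will refuse to promote while
the peer's state is unknown, which is exactly the protection the
"make sure the dead node stays dead" point above is about.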

> What you’re trying to achieve is the same as what my setup is.  On both
> servers, nothing starts at boot and only the root file system mounts
> from fstab.  A script is run to make one of them primary, mount the file
> systems of the drbd devices, add the external ip,  start the network
> services.  The other node just becomes drbd secondary and that’s it.  At
> failover, the dead machine is pulled from the rack and taken away, then
> the secondary becomes primary, mounts the file systems of the drbd
> devices, adds the external ip, starts the network services.  If I’m
> doing manual failover for maintenance, then on the primary, the network
> services are stopped, the external ip address is removed, the file
> systems are unmounted, drbd is demoted to secondary.  The other machine
> is promoted just like hardware failover.
> 
> Klint.
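The promote-side script Klint describes can be sketched roughly as
below. All names (resource r0, device, mount point, IP, NIC, nfs-server)
are examples, not his actual setup; it defaults to printing the commands
so you can review the sequence, and runs them only with DRY_RUN=0:

```shell
#!/bin/sh
# Rough sketch of a manual promote/failover script (not Klint's actual
# script). Resource, device, mountpoint, IP and service are examples.
set -e

RES=r0
DEV=/dev/drbd0
MNT=/srv/data
VIP=192.168.1.10/24
NIC=eth0

# Print the command in dry-run mode (the default), execute otherwise.
run() { [ "${DRY_RUN:-1}" = 1 ] && echo "$@" || "$@"; }

run drbdadm primary "$RES"          # promote the local DRBD resource
run mount "$DEV" "$MNT"             # mount the replicated file system
run ip addr add "$VIP" dev "$NIC"   # bring up the floating service IP
run systemctl start nfs-server      # start the network services
```

The maintenance-time demote path is the same steps inverted and in
reverse order: stop the services, remove the IP, umount, then
`drbdadm secondary r0`.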


-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?


