[DRBD-user] Split brain auto recovery with DRBD9

Digimer lists at alteeve.ca
Wed Mar 2 05:10:41 CET 2016



On 01/03/16 08:56 PM, Mark Wu wrote:
> Hi Digimer,
> 
> Thanks for your reply! 
> 
> Yes, I understand fencing can prevent split-brains, but in my case it
> may not be sufficient. Let me clarify my use case. You can see the
> architecture at http://ibin.co/2YoKo5VOmRYn . The app on each client
> server does I/O via the block interface directly instead of using a
> cluster file system. Every client server needs write access to all drbd
> volumes to build a VG (volume group). But the application can guarantee
> that no two servers write to the same data area at the same time,
> because each writes only to the data belonging to itself, even when a
> split-brain happens. So "merging data", as I mentioned before, just
> means pulling all the newest data from the different volumes together.
> 
> For the suggestion of fencing, I think it can still cause some
> out-of-sync data, because fencing is invoked by the cluster management
> software, but on the data plane the I/O could complete before the
> fencing happens.
> 
> My understanding is that fencing is invoked by the cluster management
> software asynchronously.

First: my experience is with 8.4, not 9, so caveat emptor.

DRBD has its own concept of fencing, set in the resource configuration:
'fencing resource-and-stonith;' plus a 'fence-peer
"/path/to/some/script";' handler. If DRBD detects that the peer has
failed, it blocks I/O and calls the fence handler immediately. Most
often, this handler passes the request up to the cluster manager and
waits for a success message before I/O is resumed.
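The wiring described above looks roughly like the following in an
8.4-style resource file. This is a sketch, not a verified config: the
crm-fence-peer.sh / crm-unfence-peer.sh helpers are the stock Pacemaker
integration scripts shipped with DRBD, but the resource name, device
paths, and script paths here are illustrative and may differ on your
system.

    resource r0 {
      disk {
        # Freeze I/O and fence the peer when replication is lost
        fencing resource-and-stonith;
      }
      handlers {
        # Called while I/O is frozen; should only report success
        # once the peer is confirmed fenced
        fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
        # Lifts the fencing constraint once the peer has resynced
        after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
      }
    }

With this in place, writes that arrive after the peer is lost are held
until the handler returns, which is what closes the data-plane window
you are worried about.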

So no, DRBD will not fall out of sync: I/O is held while the fence
handler runs, so no writes land before the fence completes.

As for your use case; I honestly don't understand it. It looks rather
complicated.

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?


