[DRBD-user] Drbd/pacemaker active/passive san failover

Marco Marino marino.mrc at gmail.com
Tue Sep 20 17:17:53 CEST 2016

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


As Lars Ellenberg pointed out, a first problem with the configuration
http://pastebin.com/r3N1gzwx
is that on-io-error should be

    on-io-error call-local-io-error;

and not detach. Furthermore, there is another error in the configuration:
fencing should be

    fencing resource-and-stonith;

and not resource-only.
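
For reference, a minimal sketch of how those two corrections could look in
the resource configuration (DRBD 8.4 style, where both options live in the
disk section; the resource name and the exact handler commands are
assumptions, not the actual pastebin contents):

    resource r0 {                       # "r0" is a hypothetical resource name
      disk {
        on-io-error call-local-io-error;   # instead of: on-io-error detach;
        fencing     resource-and-stonith;  # instead of: fencing resource-only;
      }
      handlers {
        # run on the node that hit the local I/O error
        local-io-error "/usr/lib/drbd/notify-io-error.sh; halt -f";
        # fence/unfence the peer through Pacemaker constraints
        fence-peer          "/usr/lib/drbd/crm-fence-peer.sh";
        after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
      }
    }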

But I still don't understand why the secondary node becomes diskless
(UpToDate -> Failed, and then Failed -> Diskless).
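
(To observe this, I'm checking the disk states with the standard tools;
"r0" below is a placeholder for my actual resource name:)

    drbdadm dstate r0    # e.g. "UpToDate/UpToDate", local/peer disk states
    cat /proc/drbd       # full status: cs: connection, ro: roles, ds: disks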

Let me give a (perhaps naive) example: if I have two nodes, each with one
disk used as the backing device for a DRBD resource, and one of those
disks fails, nothing should happen on the secondary node.

Igor Cicimov: why does removing the write-back cache drive on the primary
node cause problems on the secondary node as well? What are the dynamics
involved?

However, the root file system is not part of the CacheCade virtual drive,
and yes, one possible solution could be to create a mirror of SSD drives
for CacheCade. But I'm using DRBD/Pacemaker precisely because in a
situation like this I need resources to switch automatically to the other
node.
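
For completeness, my Pacemaker setup is roughly equivalent to this sketch
(pcs syntax; the names drbd_r0 and drbd_r0_clone are placeholders, not my
real configuration):

    # DRBD resource managed by Pacemaker, one master (primary) at a time
    pcs resource create drbd_r0 ocf:linbit:drbd drbd_resource=r0 \
        op monitor interval=30s role=Slave \
        op monitor interval=15s role=Master
    pcs resource master drbd_r0_clone drbd_r0 \
        master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 \
        notify=true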




2016-09-20 13:12 GMT+02:00 Igor Cicimov <igorc at encompasscorporation.com>:

> On Tue, Sep 20, 2016 at 7:13 PM, Marco Marino <marino.mrc at gmail.com>
> wrote:
>
>> Hmm... this means that I did not understand this policy. I thought that
>> an I/O error would happen only on the primary node, but it seems that
>> all nodes become diskless in this case. Why? Basically, I had an I/O
>> error on the primary node because I wrongly removed the SSD (CacheCade)
>> disk. Why is the secondary node affected as well?
>>
>
> The problem, as I see it, is that when the I/O error happened on the
> secondary, the disk was no longer UpToDate:
>
> Sep  7 19:55:19 iscsi2 kernel: block drbd1: disk( *UpToDate -> Failed* )
>
> in which case it cannot be promoted to primary. I don't think whatever
> policy you had in those handlers would have made any difference in your
> case. By removing the write-back cache drive in the middle of operation,
> you caused damage on both ends. Even if you could force it, would you
> really want to promote a secondary that has corrupt data to primary at
> this point?
>
> You might try the call-local-io-error option as suggested by Lars, or
> even pass_on and let the file system handle it. You should also take
> Digimer's suggestion and let Pacemaker take care of everything; since you
> already have it installed, why not use it? You do need properly
> functioning fencing in that case, though, for example as sketched below.
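>
> A minimal fencing sketch with pcs and IPMI (the device names, addresses
> and credentials below are placeholders for whatever your hardware
> provides):
>
>     # one fence device per node; values here are placeholders
>     pcs stonith create fence_iscsi1 fence_ipmilan pcmk_host_list=iscsi1 \
>         ipaddr=10.0.0.1 login=admin passwd=secret action=reboot
>     pcs stonith create fence_iscsi2 fence_ipmilan pcmk_host_list=iscsi2 \
>         ipaddr=10.0.0.2 login=admin passwd=secret action=reboot
>     pcs property set stonith-enabled=true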
>
> As someone else suggested, you should also remove the root file system
> from the CacheCade virtual drive (just an assumption, but it looks like
> that is the case). Creating a mirror of SSD drives for CacheCade is also
> an option to avoid similar accidents in the future (what is the chance
> that someone removes two drives at the same time??). And finally, putting
> a "DON'T REMOVE" sticker on the drive might work if nothing else does :-D
>
>
>> And furthermore, using
>>
>> local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
>>
>> will that shut down both nodes? And again, should I remove on-io-error detach; if I use local-io-error?
>>
>> Thank you
>>
>>

