[DRBD-user] small patch for drbddisk

Junko IKEDA tsukishima.ha at gmail.com
Fri Jul 9 10:15:26 CEST 2010

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hi,

I am running DRBD 8.3.8 + Heartbeat 2.1.4,
and I am using the drbddisk script as an LSB RA.
I know that Heartbeat 2.1.4 is too old and we shouldn't use this version,
so this is a rather limited case.

My disk setup is as follows:
- Array1 (RAID1+0) : /dev/cciss/c0d0 : RHEL 5.5 installation (OS area)
- Array2 (RAID1+0) : /dev/cciss/c0d1 : data and meta-data area for DRBD

Heartbeat setup:
- ACT/SBY cluster using drbddisk (a rough haresources sketch follows this list)
     node N1 = ACT
     node N2 = SBY
- diskd RA (a custom-made RA that checks the disk status)
- STONITH via HP iLO2
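
For reference, a Heartbeat v1-style haresources line for this kind of
setup looks roughly like this (the resource name "r0", device, and
mount point below are made-up examples, not our real values):

    # /etc/ha.d/haresources -- sketch with hypothetical values
    N1 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext3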

When I remove all Array1 disks (the OS area) from node N1,
the diskd RA notices this disk failure and Heartbeat starts the fail-over procedure.
drbddisk then calls "drbdadm secondary <resource-name>".
Unfortunately, this command fails, but drbddisk still exits with 0.
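
To illustrate, on node N1 (the resource name "r0" is just an example):

    # the demote itself fails with DRBD's exit status 20 ...
    drbdadm secondary r0 ; echo $?
    20
    # ... but the drbddisk wrapper still reports success to Heartbeat
    /etc/ha.d/resource.d/drbddisk r0 stop ; echo $?
    0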

At this point, the DRBD status on node N1 is still Primary,
and Heartbeat tries to promote node N2 from Secondary to Primary,
but dual Primary is forbidden by drbd.conf,
so node N2 cannot start the service.
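
The relevant part of drbd.conf here is simply the absence of
"allow-two-primaries" in the net section; a minimal sketch
(resource name assumed):

    resource r0 {
      net {
        # allow-two-primaries;  # not set, so DRBD refuses a second Primary
      }
      # disk/device/address sections omitted
    }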

We have a STONITH device (iLO2),
so if drbddisk exited with 1 when it fails to demote the device (drbdadm secondary),
the following flow would become available:
stop NG -> STONITH (shut down N1) -> N2 starts the service

It seems that print_config_error() in drbdsetup.c returns 20 in this case,
so I just added one condition to call "exit 1".
Please see the attached patch.
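
In rough shell terms the change amounts to something like this
(a sketch of the idea only, not the attached patch itself; $DRBDADM
and $RES are the variables drbddisk already uses):

    # in drbddisk's "stop" action
    $DRBDADM secondary $RES
    ret=$?
    if [ $ret -eq 20 ]; then
        # demotion refused; report failure so Heartbeat can
        # escalate (STONITH N1) instead of believing the stop worked
        exit 1
    fi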

Thanks,
Junko IKEDA
NTT DATA INTELLILINK CORPORATION
-------------- next part --------------
A non-text attachment was scrubbed...
Name: drbddisk.patch
Type: text/x-patch
Size: 512 bytes
Desc: not available
URL: <http://lists.linbit.com/pipermail/drbd-user/attachments/20100709/265d1d8c/attachment.bin>
