Note: "permalinks" may not be as permanent as we would like;
direct links to old messages may well be a few messages off.
On Fri, Jul 09, 2010 at 05:15:26PM +0900, Junko IKEDA wrote:
> Hi,
>
> I am running DRBD 8.3.8 + Heartbeat 2.1.4,
> and using the drbddisk script as an LSB RA.
> I know that Heartbeat 2.1.4 is too old and we shouldn't use this version,
> so this is a really limited case.
>
> My disk setup is:
> - Array1 (RAID1+0) : /dev/cciss/c0d0 : RHEL 5.5 installation
> - Array2 (RAID1+0) : /dev/cciss/c0d1 : data and meta-data area for DRBD
>
> Heartbeat setup:
> - ACT/SBY cluster using drbddisk
>   node N1 = ACT
>   node N2 = SBY
> - diskd RA (a custom-made RA that checks the disk status)
> - STONITH via HP iLO2
>
> When I remove all Array1 disks (the OS area) from node N1,
> the diskd RA notices the disk failure and Heartbeat starts the fail-over procedure.
> drbddisk then calls "drbdadm secondary <resource-name>".
> Unfortunately this command fails, but drbddisk still exits with "0".
>
> At this point, the DRBD status on node N1 is still Primary,
> and Heartbeat tries to promote node N2 from Secondary to Primary,
> but dual Primary is forbidden by drbd.conf,
> so node N2 cannot start the service.
>
> We have a STONITH device (iLO2),
> so if drbddisk could exit 1 when it fails to demote the disk ("drbdadm secondary" fails),
> the following flow would be possible:
> stop NG -> STONITH (shut down N1) -> N2 starts the service
>
> It seems that print_config_error() in drbdsetup.c returns "20" in this case.

Can you please provide kernel and resource agent (Heartbeat) logs for such an incident?

-- 
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
__
please don't Cc me, but send to list -- I'm subscribed