[DRBD-user] Little help (right replacing procedure) with failed hard disk on DRBD 0.7.25

Lars Ellenberg lars.ellenberg at linbit.com
Thu Feb 5 12:02:45 CET 2009

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


On Thu, Feb 05, 2009 at 12:14:25AM +0100, Franco Cristini wrote:
> Can anyone help me?
>
> I am re-sending my request:
> I have a BIG doubt about how to restore DRBD to normal
> operation after a failed HD on node 1 (the primary).
>
> The 2 nodes are two identical RedHat ES3 machines with DRBD 0.7.25,
> with a single drbd resource named vm1 (configured with internal
> metadata; on both servers the local backing storage is /dev/sdc2).
>
> The situation is this:
> 1) Node 1 has a problem on /dev/sdc (a HW RAID 5 in which 2
> hard drives failed within 10 seconds of each other!).
> 2) DRBD automatically detached the backing storage.
> 3) So the node 1 state became:
>    cs:DiskLessClient st:Primary/Secondary ld:Inconsistent
>    And on node 2 the state became:
>    cs:ServerForDLess st:Secondary/Primary ld:Consistent
>
> The applications (VMware Server virtual machines) on node1 still
> continue to work as usual, with the virtual disk (.vmdk) files on
> /dev/drbd0.
>
> So now, to get back to a "safe" situation, I need to reboot
> node 1 to replace the HDs and reconfigure the RAID, with the minimum
> downtime possible and, of course, without losing the data that is
> still present on node 2.
>

so currently the "node1" gets its data from "node2".

I suggest you simply do a switchover to node2 now
(commands sketched after these steps), i.e.

 stop vms on the "diskless" node,
 make drbd Primary on the still good node,
 start the vms on the still good node.
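
in commands, that could look roughly like this (assuming the resource
is named vm1 as in your config, that /dev/drbd0 carries a filesystem
holding the .vmdk files, and a mount point of /vmware, which is only
an example):

 node1# <stop the vms>             # however you normally stop them
 node1# umount /dev/drbd0          # release the drbd device
 node1# drbdadm secondary vm1      # give up the Primary role
 node2# drbdadm primary vm1        # take over on the good node
 node2# mount /dev/drbd0 /vmware   # example mount point
 node2# <start the vms>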

then do whatever is necessary to replace the bad parts in the broken
node.

once that is done,
reconnect DRBD (vms still running on node2, the still good node),
let it resync,
then start whatever cluster manager you use.
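
roughly, assuming the rebuilt array shows up as /dev/sdc2 again with
the same or larger size; with internal meta data on 0.7, bringing the
device up is all that is needed, the meta data area gets re-initialized
implicitly:

 node1# drbdadm up vm1       # attach the new /dev/sdc2 and connect
 node1# cat /proc/drbd       # expect cs:SyncTarget ... ld:Inconsistent
 node2# cat /proc/drbd       # expect cs:SyncSource while it resyncs

if it does not start a full sync by itself, "drbdadm invalidate vm1"
on node1 would force one.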

downtime: no more than whatever it takes you to cleanly shut down the
vms, and restart them on the still healthy node.

> But my version 0.7.25 doesn't have this option in drbdadm,

meta data gets created implicitly with drbd 0.7.

> 8) During the synchronization process, I can mount and use /dev/drbd on
> node1, right? :)

you may, after stopping services on the healthy node
and making the sync target Primary, roughly like this:
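
(a sync target may become Primary; reads of blocks that have not been
resynced yet are served over the network by the peer)

 node2# <stop the vms>
 node2# umount /dev/drbd0
 node2# drbdadm secondary vm1
 node1# drbdadm primary vm1        # works even while cs:SyncTarget
 node1# mount /dev/drbd0 /vmware   # same example mount point as above
 node1# <start the vms>            # resync continues in the background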

but why would you?
your vms run on the healthy node just fine, I think...

-- 
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
__
please don't Cc me, but send to list   --   I'm subscribed


