Can anyone help me? I'm re-sending my request: I have a BIG doubt about how to restore DRBD to normal operation after a failed hard disk on node 1 (the primary).

The two nodes are identical RedHat ES3 machines running DRBD 0.7.25, with only one DRBD resource, named vm1 (configured with internal metadata; on both servers the local backing storage is /dev/sdc2).

The situation is this:

1) Node 1 had a problem on /dev/sdc (it is a HW RAID 5, on which 2 hard drives failed 10 seconds apart!).
2) DRBD automatically detached the backing storage.
3) So the state on node 1 became:
   cs:DiskLessClient st:Primary/Secondary ld:Inconsistent
   and on node 2:
   cs:ServerForDLess st:Secondary/Primary ld:Consistent

The applications (VMware Server virtual machines) on node 1 still continue to work as usual, with the virtual disk (.vmdk) files on /dev/drbd0.

So now, to get back to a "safe" situation, I need to reboot node 1 to replace the disk and reconfigure the RAID, with the minimum downtime possible and, of course, without losing the data that is still present on node 2. What is the right procedure? Is it correct to:

1) Stop the VMware machines on node 1.
2) Stop DRBD on node 1 with: drbdadm disconnect
3) Stop DRBD on node 2 with: drbdadm down
4) Shut down node 1 and replace the disk.
5) After node 1 boots up, is it correct to run "drbdadm up" on node 2, and on node 1 ONLY "drbdadm connect", so that I don't attach an empty /dev/sdc2 device that could destroy the data on node 2? In effect, node 1 would start in diskless mode.
6) After the connect on node 1, can I promote node 1 to primary even without the local backing storage attached, with:
   drbdsetup /dev/drbd0 primary --do-what-I-say
   Don't I risk destroying the data on node 2 when promoting node 1 to primary? Or do I risk destroying something?
7) After that, to go back to using the local backing storage on node 1 as well, is it correct to run, ON NODE 1:
   drbdadm attach vm1
   drbdadm invalidate vm1
   ?
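To summarize, the complete command sequence I am thinking of is the following (as far as I understand it for 0.7.25; please correct me if any step is wrong or dangerous):

```shell
# --- Before the shutdown ---
# node 1: stop the VMware virtual machines, then disconnect from the peer:
drbdadm disconnect vm1
# node 2: stop the resource entirely:
drbdadm down vm1
# node 1: shut down, replace the failed disks, rebuild the RAID 5

# --- After node 1 boots again ---
# node 2: bring the resource back up:
drbdadm up vm1
# node 1: connect ONLY -- do NOT attach the empty /dev/sdc2 yet:
drbdadm connect vm1
# node 1: promote while still diskless (is this safe for node 2's data?):
drbdsetup /dev/drbd0 primary --do-what-I-say
# node 1: restart the VMware virtual machines

# --- Re-attach the local storage on node 1 ---
drbdadm attach vm1        # attach the new backing device
drbdadm invalidate vm1    # force a full resync from node 2
```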
In the latest online manual, for version 0.8.x, it says that the first step when replacing a disk with a new one is to create the metadata on it with:

drbdadm create-md resource

But my version, 0.7.25, doesn't have this option in drbdadm for creating the metadata, and anyway, the first time I used DRBD I don't remember issuing any command to create the metadata; I think it was created automatically.

8) During the synchronization process, I can mount and use /dev/drbd0 on node 1, right? :)

Many many thanks in advance for any help!
Regards.
Franco
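For reference, this is what I understand the 0.8.x manual to describe for a replaced disk (not applicable to me, since my drbdadm 0.7.25 has no create-md subcommand):

```shell
# DRBD 0.8.x only -- write fresh internal metadata on the new backing device,
# then attach it and let the resync from the peer run:
drbdadm create-md vm1
drbdadm attach vm1
```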