[DRBD-user] can i clone primary after failure of secondary?

Arnold Krille arnold at arnoldarts.de
Sun Jan 8 20:38:56 CET 2012

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


On Sunday 08 January 2012 15:34:29 Maurizio Marini Gmail wrote:
> Hello,
> 
> We have two Dell 1425 servers running CentOS 5.4, each with a RAID5
> array of 3 disks, and DRBD 8.3 between them.
> 
> We have had a disk failure on _all_ 3 disks of the secondary node.
> Dell has sent us new replacement disks.
> 
> At this point we could clone the 3 disks of the primary node using
> the RAID controller, without booting CentOS (we are not sure this
> will work, though).
> 
> We could then put the 3 cloned disks into the secondary node, power
> on the primary first, and then power on the secondary while keeping
> it disconnected.
> 
> We could change the network configuration on the secondary node
> before reconnecting it to the network.
> 
> We are very worried about the DRBD partition metadata being identical
> on both nodes: does this method have any chance of success, or are we
> wasting our time?

This is essentially truck-based replication, so it should be solvable by
"just" looking at the relevant part of the docs...

Have a nice week,

Arnold

