[DRBD-user] Failing to migrate two DRBD nodes

Marc Richter drbd at zoosau.de
Wed Jan 12 13:48:29 CET 2011


Hi Felix,

Am 12.01.2011 13:36, schrieb Felix Frank:
> On 01/12/2011 01:26 PM, Marc Richter wrote:
>> Hi There.
>> I'm still failing at replacing an HA node and hope someone can help me.
>> I'm trying the following:
>> I have two nodes which serve as an HA NAS and are connected by DRBD. We
>> have bought new hardware and installed a new version of the Linux
>> distribution onto these new devices. Since the complete initial sync
>> takes a long time and makes the devices quite unusable for about 5
>> hours, I'm planning to do the following:
>> 1) Remove the secondary node from the cluster by issuing "drbdadm
>> disconnect r0".
>> 2) Connect the first new node to this removed node and have the
>> initial resync done without affecting the live node.
> Do you make the SyncSource Primary? Because you shouldn't.
> Instead, it should stay Secondary and the new node should connect with
> --discard-my-data.
> Not sure if that's enough. If in doubt, don't bother syncing at all.
> Just disconnect your secondary and dd its backing device to the new
> node's backing device.
> Regards,
> Felix
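For reference, Felix's suggested approach could be sketched roughly as follows. This is only an illustration, not a tested procedure: the resource name "r0" is taken from the thread, but the exact invocations may need adapting to your DRBD version and configuration.

```shell
# Sketch of the suggested resync approach (resource name "r0" from the
# thread; everything else is an assumption).

# On the old secondary (the SyncSource): it stays Secondary; just make
# sure the resource is up and check its state:
cat /proc/drbd

# On the new node (the SyncTarget): connect while discarding any local
# data, so the peer becomes the sync source:
drbdadm -- --discard-my-data connect r0
```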

The SyncSource was secondary in the productive cluster and stayed
secondary during sync.
Also, the new node (SyncTarget) stayed secondary during the whole sync
with this old node, and later when connected to the productive node.
"--discard-my-data" wasn't necessary for the sync between the old
productive SyncSource and the new node, since DRBD managed to detect
that the SyncTarget was a completely new backing device.
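The steps described above could be reconstructed roughly like this. Again, this is a hedged sketch: the resource name "r0" comes from the thread, and the metadata step is an assumption about how the fresh backing device was prepared.

```shell
# Rough reconstruction of the migration steps (resource "r0" from the
# thread; the create-md step is an assumption for a fresh device).

# On the old secondary, detach it from the productive peer:
drbdadm disconnect r0

# On the new node, initialize DRBD metadata on the empty backing device:
drbdadm create-md r0

# Bring the resource up on the new node; with fresh metadata, DRBD
# should recognize it as a full-sync target when it connects:
drbdadm up r0

# Watch sync progress on either node:
cat /proc/drbd
```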

Your suggestion to dd the currently productive node's disk might
technically work, but I'd prefer not to go this way, for two reasons:

First, this would mean taking the node down for several hours, which I
cannot do, since it is required in production, and such a long downtime
would be very expensive.
Second, as I wrote in the initial mail, I managed this scenario between
4 virtual machines without running into these problems. So I'd like to
first understand what the problem is here before deciding to initiate
such an expensive downtime.
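For completeness, the dd alternative from Felix's mail would look something like the sketch below. The backing device path and hostname are pure assumptions, and the resource must be down on both sides while copying, which is exactly the downtime being discussed.

```shell
# Sketch of the dd alternative (device path /dev/sdb1 and hostname
# "new-node" are assumptions; adapt to the real configuration).

# Take the resource down on the old secondary so the backing device is
# quiescent:
drbdadm down r0

# Copy the backing device block-for-block to the new node over ssh:
dd if=/dev/sdb1 bs=1M | ssh new-node 'dd of=/dev/sdb1 bs=1M'
```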

But thanks for your suggestion anyway! Such hints are very welcome at
the moment! :)

Best regards,
