Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.
On Mon, Jun 19, 2017 at 07:46:29AM +0000, Dominic Pratt wrote:
> Hi there guys,
>
> I haven't found a bugtracker, so I hope this is the right place to submit a bug...
>
> We're using DRBD9 in a PVE4 cluster with 3 nodes, one of them being a no-storage node. We're unable to clone disks/VMs between nodes, because the created disk needs a few seconds to become available.
>
> Package:
> drbdmanage-proxmox 1.0-1
> File:
> DRBDPlugin.pm
> Function:
> alloc_image
> Error-Message:
> create full clone of drive scsi0 (VG1:vm-113-disk-1)
> drive mirror is starting for drive-scsi0
> drive-scsi0: Cancelling block job
> drive-scsi0: Done.
> drbd error: Could not forward data to leader

Looks like the node was not able to communicate with the leader node via TCP port 6996.

> TASK ERROR: storage migration failed: mirroring error: VM 113 qmp command 'drive-mirror' failed - Could not open '/dev/drbd/by-res/vm-113-disk-1/0': No such file or directory
>
> Workaround/Bugfix:
>     ($rc, $res) = $hdl->auto_deploy($name, $redundancy, 0, 0);
>     check_drbd_res($rc);
>
> +   warn "sleeping 10s";
> +   sleep(10);
>
>     return $name;
> }

Hm. The solution is to have enough nodes up and running to reach quorum. How someone ensures that is IMO out of scope for drbdmanage itself. Basically it can only fail in one way or the other, which it did.

Regards, rck
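The workaround quoted above hard-codes a 10-second sleep after auto_deploy. A minimal sketch of a less fragile variant of that same workaround, in Perl, would poll for the device node instead of sleeping a fixed time. The names $hdl, $name, $redundancy and check_drbd_res come from the quoted patch; the device path follows the pattern shown in the error message above; the 30-second timeout and the wait loop itself are assumptions, not part of drbdmanage-proxmox.

    # Sketch only: wait for the DRBD device node instead of a fixed sleep(10).
    ($rc, $res) = $hdl->auto_deploy($name, $redundancy, 0, 0);
    check_drbd_res($rc);

    my $dev     = "/dev/drbd/by-res/$name/0";   # path pattern from the error message above
    my $timeout = 30;                           # assumed upper bound, in seconds

    for (my $waited = 0; $waited < $timeout; $waited++) {
        last if -b $dev;                        # block device node exists, resource is usable
        sleep(1);
    }
    warn "DRBD device $dev still missing after ${timeout}s\n" unless -b $dev;

    return $name;

This only papers over the timing issue on the calling side; as noted in the reply, the underlying requirement is that enough nodes are up to reach quorum.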