[DRBD-user] high io when diskless node added to the storage pool
Robert Altnoeder
robert.altnoeder at linbit.com
Thu Sep 5 16:15:18 CEST 2019
On 9/3/19 2:01 PM, Alex Kolesnik wrote:
> moving a drive to drbdpool increases nodes' IO enormously while nothing seems to
> be going on (well, the disk seems to be moving but VERY slow).
Does writing anything else to the volume show normal performance, or is
the performance degraded as well?
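To put a number on that, a rough sequential-write test with dd can be run on the affected node. This is only a sketch: /mnt/drbd-test is an assumed mount point for the DRBD-backed volume (substitute your own, and make sure there is enough free space for the test file).

```shell
#!/bin/sh
# Rough sequential-write check against the suspect volume (a sketch;
# /mnt/drbd-test is an assumed mount point -- adjust to your setup).
TARGET_DIR=${TARGET_DIR:-/mnt/drbd-test}
TESTFILE="$TARGET_DIR/ddtest.bin"

# conv=fsync forces the data to stable storage before dd reports a
# rate, so the printed MB/s reflects the volume, not the page cache.
dd if=/dev/zero of="$TESTFILE" bs=1M count=256 conv=fsync 2>&1 | tail -n 1

# Clean up the test file afterwards.
rm -f "$TESTFILE"
```

If this shows throughput in the expected range for the backing disks, the slowdown is likely specific to the clone operation rather than to DRBD replication itself.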
> The log displays
> just this w/o any progress, so I had to stop the disk moving:
> create full clone of drive scsi0 (LVM-Storage:126/vm-126-disk-0.qcow2)
> trying to acquire cfs lock 'storage-drbdpool' ...
> transferred: 0 bytes remaining: 10739277824 bytes total: 10739277824 bytes progression: 0.00 %
I cannot provide much help with those messages, since they originate
neither from LINSTOR nor from DRBD.
The "trying to acquire cfs lock" message appears to be issued by
Proxmox and may point to communication problems on Corosync's
cluster link.
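One way to check that hypothesis is to look at the link and quorum state directly. A sketch, assuming the standard corosync-cfgtool and corosync-quorumtool utilities that ship with a Proxmox VE / Corosync install; the guards keep it runnable on machines where the tools are absent:

```shell
#!/bin/sh
# Read-only health checks for the Corosync cluster link (a sketch;
# assumes the standard Corosync command-line tools are installed).
check_link() {
    if command -v corosync-cfgtool >/dev/null 2>&1; then
        corosync-cfgtool -s          # link/ring status per node
    else
        echo "corosync-cfgtool not found"
    fi
    if command -v corosync-quorumtool >/dev/null 2>&1; then
        corosync-quorumtool -s       # quorum and membership state
    else
        echo "corosync-quorumtool not found"
    fi
}
check_link
```

A link that is marked faulty, or nodes missing from the membership, would explain a cluster-wide lock that never gets acquired.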
br,
Robert