[DRBD-user] linstor-proxmox : online grow of resources

Julien Escario julien.escario at altinea.fr
Tue Sep 25 09:50:42 CEST 2018


On 24/09/2018 at 13:19, Robert Altnoeder wrote:
> On 09/24/2018 01:03 PM, Julien Escario wrote:
>> Hello,
>> When trying to resize a disk (grow only) from the Proxmox interface for a
>> linstor-backed device, this error is thrown:
>> VM 2000 qmp command 'block_resize' failed - Cannot grow device files (500)
>>
>> BUT the resource is effectively grown in linstor and the out-of-sync data is synced.
>> drbdpool/vm-2000-disk-1_00000  27,2G  6,91T  27,2G  -
> 
> You can check the size of the DRBD device, /dev/drbd1003 according to
> information below, to ensure that the size change was completed by the
> LINSTOR & DRBD layers. If the size of the DRBD device has also been
> updated, then the problem is somewhere outside of LINSTOR & DRBD.
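For reference, the raw DRBD device size can also be read directly on the
hypervisor, e.g.:

# blockdev --getsize64 /dev/drbd1003

which should report the same value as the fdisk check further down.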

I really don't think it's a problem with Linstor or DRBD; it looks more like
something specific to the linstor-proxmox plugin.

For an unknown reason, the plugin returns an error code to the Proxmox backend
when resizing, so the Proxmox config isn't updated with the new size and the VM
isn't informed of the change (there should be a KVM-specific call for this).
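For what it's worth, that call is presumably QEMU's QMP 'block_resize' - the very
command reported as failing. A rough, untested sketch of issuing it by hand over
the VM's QMP socket (the socket path and the 'drive-virtio1' device name are
assumptions on my side; size is in bytes, here the 31 GiB target from the example
below):

<on hypervisor>
# echo '{"execute":"qmp_capabilities"} {"execute":"block_resize","arguments":{"device":"drive-virtio1","size":33285996544}}' \
    | socat - UNIX-CONNECT:/var/run/qemu-server/2000.qmp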

For example, with the VM running:

<on hypervisor>
# qm resize 2000 virtio1 +5G
SUCCESS:
Description:
    Volume definition with number '0' of resource definition 'vm-2000-disk-2' modified.
Details:
    Volume definition with number '0' of resource definition 'vm-2000-disk-2' UUID is: dea1ca6b-a2af-445a-8005-65a12974779e
VM 2000 qmp command 'block_resize' failed - Cannot grow device files

<inside VM>
% fdisk -l /dev/vdb
Disk /dev/vdb: 26 GiB, 27917287424 bytes, 54525952 sectors

<on hypervisor>
# fdisk -l /dev/drbd1003
Disk /dev/drbd1003: 31 GiB, 33285996544 bytes, 65011712 sectors

# qm rescan
rescan volumes...
VM 2000: update disk 'virtio1' information.

But it's still the same size inside the VM. I can't see how Proxmox informs the
VM of the size change.
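If the disk is virtio-blk, the guest normally only learns about a new capacity
when QEMU pushes it via that same block_resize call, so as long as that call
fails the VM keeps seeing the old size. A possible manual workaround (untested
sketch; 'drive-virtio1' is a guess, 'info block' lists the real drive ids) would
be to retry it through the monitor once the DRBD device has grown:

<on hypervisor>
# qm monitor 2000
qm> info block
qm> block_resize drive-virtio1 31G
qm> quit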


>> One last remark, still for the same resource: ZFS shows a much larger volume:
>> ╭──────────────────────────────────────────────────────────╮
>> ┊ ResourceName   ┊ VolumeNr ┊ VolumeMinor ┊ Size   ┊ State ┊
>> ╞┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╡
>> ┊ vm-2000-disk-2 ┊ 0        ┊ 1003        ┊ 26 GiB ┊ ok    ┊
>> ╰──────────────────────────────────────────────────────────╯
>>
>> # zfs list drbdpool/vm-2000-disk-2_00000
>> NAME                            USED  AVAIL  REFER  MOUNTPOINT
>> drbdpool/vm-2000-disk-2_00000  41,6G  6,87T  41,6G  -
>>
>> This is just after full resync (resource delete/create on this node).
>>
>> Isn't 41 GB used for a 26 GB volume a bit much?
>> Using zpool history, I can find the creation line for this resource:
>> 2018-09-24.12:14:43 zfs create -s -V 27268840KB drbdpool/vm-2000-disk-2_00000
> 
> 27,268,840 kiB is consistent with a 26 GiB DRBD 9 device for 8 peers, so
> the reason for the effective size of ~42 GiB would probably be outside
> of LINSTOR, unless there was some resize operation in progress that did
> not finish.

Probably, yes. I'll investigate on the ZFS side.
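A first thing to check might be whether snapshots, a refreservation or
raidz/volblocksize padding account for the difference (standard ZFS properties,
dataset name as above):

# zfs get volsize,volblocksize,used,referenced,usedbysnapshots,usedbyrefreservation,refreservation drbdpool/vm-2000-disk-2_00000
# zfs list -t snapshot -r drbdpool/vm-2000-disk-2_00000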

Best regards,
Julien Escario

