[DRBD-user] DRBD9, lvmthin and free space (was: DRBD9 - drbdmanage wrong free pool size)

Roberto Resoli roberto at resolutions.it
Mon May 2 13:34:45 CEST 2016

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


Hello,

I'm interested in using DRBD9 with single-pool thin provisioning (the
lvm_thinlv.LvmThinLv storage plugin), which is the default on the
latest Proxmox Virtual Environment (PVE).
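
For context, if I remember the setup steps correctly, the storage
plugin is selected through drbdmanage's cluster configuration, roughly
like this (key name and plugin path quoted from memory, so please
double-check against the docs):

# drbdmanage modify-config

and then, in the editor, under the [GLOBAL] section:

storage-plugin = drbdmanage.storage.lvm_thinlv.LvmThinLv

The plugin carves thin volumes out of the "drbdthinpool" thin pool,
which can be inspected with plain LVM tools, e.g.:

# lvs -o lv_name,lv_size,data_percent,metadata_percent drbdpool/drbdthinpool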

I am running the latest version of drbd/drbdmanage from the
pve-no-subscription repository:

# uname -r
4.4.6-1-pve

# cat /proc/drbd
version: 9.0.2-1 (api:2/proto:86-111)
GIT-hash: bdcc2a765a9a80c8b263c011a6508cb6e0c3e4d2 build by root@elsa, 2016-04-21 11:31:10
Transports (api:14): tcp (1.0.0)

# drbdmanage -v
drbdmanage 0.95; GIT-hash: UNKNOWN

# dpkg -l drbdmanage
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name                Version        Architecture   Description
+++-===================-==============-==============-============================================
ii  drbdmanage          0.95-1         amd64          Distributed configuration management for DRB


I saw the thread the subject refers to; sorry for not replying to it
directly, but I have only just subscribed to this list.

I'm facing exactly the same issue: the free space calculation reports
different (and lower than the actual) values on different nodes, even
though all resources are deployed on all nodes (three in my case).

# drbdmanage list-nodes
+-------------------------------------------+
| Name | Pool Size | Pool Free |    | State |
|-------------------------------------------|
| pve1 |   1638180 |     83874 |    |    ok |
| pve2 |   1638180 |    422322 |    |    ok |
| pve3 |   1638188 |     45541 |    |    ok |
+-------------------------------------------+

the "Pool Free" value appears to be correct only for pve2 node, while
having "45541" as minimum on pve3 binds maximum space allocation with
redundancy 3 to this value.

Strangely, though, I can still create a resource with a volume of,
say, 100GB, assign it to pve2, and then manually assign it to pve1 and
pve3 without problems.
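
For the record, the test above was done with plain drbdmanage
commands, more or less like this (vm-999-disk-1 is just a placeholder
name, and the command syntax is from memory):

# drbdmanage add-volume vm-999-disk-1 100GB
# drbdmanage assign-resource vm-999-disk-1 pve2
# drbdmanage assign-resource vm-999-disk-1 pve1
# drbdmanage assign-resource vm-999-disk-1 pve3
# drbdmanage list-assignments

All three assignments come up fine, even though pve1 and pve3 report
far less than 100GB free.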

The reported "Pool Free" on pve1 and pve3 will eventually become 0, as
reported in the original thread.

I've tried adding and removing resources and volumes, and also running

# drbdmanage update-pool

on all three nodes.
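
To compare drbdmanage's idea of free space with what LVM itself says
about the thin pool, I've also been doing roughly this on each node (a
naive calculation of the pool's unallocated data space; I don't know
whether this is how drbdmanage actually computes "Pool Free"):

# drbdmanage update-pool && drbdmanage list-nodes
# LC_ALL=C lvs --noheadings --units m --nosuffix \
    -o lv_size,data_percent drbdpool/drbdthinpool | \
  awk '{ printf "thin pool free (data): %.0f MiB\n", $1 * (100 - $2) / 100 }'

The idea is just to get a locale-independent number in MiB to put next
to the "Pool Free" column.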

I'm aware of the "Free space reporting" section in the drbdmanage
chapter of the DRBD9 User's Guide:

https://www.drbd.org/en/doc/users-guide-90/ch-drbdmanage-more#s-drbdmanage-free-space

It seems that currently free space is not correctly reported when a
resource is unassigned.
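
My guess (and it is only a guess) is that the space of the thin
backing volume is not accounted back when a resource gets unassigned
from a node. The kind of check I have been doing, again with a
placeholder resource name and command syntax from memory:

# lvs drbdpool
# drbdmanage unassign-resource vm-999-disk-1 pve1
# drbdmanage update-pool
# drbdmanage list-nodes
# lvs drbdpool

i.e. look at the thin pool's Data% and the backing LVs before and
after the unassignment, and check whether "Pool Free" on that node
grows back accordingly; here it does not seem to.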

Here is what lvs reports on the three nodes:

============
root@pve1:~# lvs drbdpool
  LV               VG       Attr       LSize   Pool         Origin Data%  Meta%  Move Log Cpy%Sync Convert
  .drbdctrl_0      drbdpool -wi-ao----   4,00m
  .drbdctrl_1      drbdpool -wi-ao----   4,00m
  drbdthinpool     drbdpool twi-aotz--   1,56t                     63,38  31,50
  vm-100-disk-1_00 drbdpool Vwi-aotz--  10,00g drbdthinpool        99,99
  vm-101-disk-1_00 drbdpool Vwi-aotz--   4,00g drbdthinpool        99,93
  vm-101-disk-2_00 drbdpool Vwi-aotz--  60,02g drbdthinpool       100,00
  vm-102-disk-1_00 drbdpool Vwi-aotz--  10,00g drbdthinpool        99,99
  vm-103-disk-1_00 drbdpool Vwi-aotz--  10,00g drbdthinpool        99,99
  vm-104-disk-1_00 drbdpool Vwi-aotz--  10,00g drbdthinpool        47,25
  vm-104-disk-2_00 drbdpool Vwi-aotz-- 900,20g drbdthinpool       100,00
  vm-120-disk-1_00 drbdpool Vwi-aotz--  10,00g drbdthinpool        99,99
  vm-121-disk-1_00 drbdpool Vwi-aotz--  10,00g drbdthinpool        50,02

============
root@pve2:~# lvs drbdpool
  LV               VG       Attr       LSize   Pool         Origin Data%  Meta%  Move Log Cpy%Sync Convert
  .drbdctrl_0      drbdpool -wi-ao----   4,00m
  .drbdctrl_1      drbdpool -wi-ao----   4,00m
  drbdthinpool     drbdpool twi-aotz--   1,56t                     49,03  25,19
  vm-100-disk-1_00 drbdpool Vwi-aotz--  10,00g drbdthinpool        99,99
  vm-101-disk-1_00 drbdpool Vwi-aotz--   4,00g drbdthinpool        99,93
  vm-101-disk-2_00 drbdpool Vwi-aotz--  60,02g drbdthinpool       100,00
  vm-102-disk-1_00 drbdpool Vwi-aotz--  10,00g drbdthinpool        99,99
  vm-103-disk-1_00 drbdpool Vwi-aotz--  10,00g drbdthinpool        99,99
  vm-104-disk-1_00 drbdpool Vwi-aotz--  10,00g drbdthinpool        99,99
  vm-104-disk-2_00 drbdpool Vwi-aotz-- 900,20g drbdthinpool        73,36
  vm-120-disk-1_00 drbdpool Vwi-aotz--  10,00g drbdthinpool        99,99
  vm-121-disk-1_00 drbdpool Vwi-aotz--  10,00g drbdthinpool        99,99

============
root@pve3:~# lvs drbdpool
  LV               VG       Attr       LSize   Pool         Origin Data%  Meta%  Move Log Cpy%Sync Convert
  .drbdctrl_0      drbdpool -wi-ao----   4,00m
  .drbdctrl_1      drbdpool -wi-ao----   4,00m
  drbdthinpool     drbdpool twi-aotz--   1,56t                     64,02  33,20
  vm-100-disk-1_00 drbdpool Vwi-aotz--  10,00g drbdthinpool        99,99
  vm-101-disk-1_00 drbdpool Vwi-aotz--   4,00g drbdthinpool        99,93
  vm-101-disk-2_00 drbdpool Vwi-aotz--  60,02g drbdthinpool       100,00
  vm-102-disk-1_00 drbdpool Vwi-aotz--  10,00g drbdthinpool        99,99
  vm-103-disk-1_00 drbdpool Vwi-aotz--  10,00g drbdthinpool        99,99
  vm-104-disk-1_00 drbdpool Vwi-aotz--  10,00g drbdthinpool        99,99
  vm-104-disk-2_00 drbdpool Vwi-aotz-- 900,20g drbdthinpool       100,00
  vm-120-disk-1_00 drbdpool Vwi-aotz--  10,00g drbdthinpool        99,99
  vm-121-disk-1_00 drbdpool Vwi-aotz--  10,00g drbdthinpool        99,99


Any hint?

Thanks,
rob


