[DRBD-user] DRBD9 - No free space on pool

Roberto Resoli roberto at resolutions.it
Thu Aug 31 19:25:46 CEST 2017

Note: "permalinks" may not be as permanent as we would like,
direct links of old sources may well be a few messages off.


On 31/08/2017 08:13, Roberto Resoli wrote:
> On 30/08/2017 18:39, Roberto Resoli wrote:
>> On 30/08/2017 18:28, Roberto Resoli wrote:
>>> Hello,
>>>
>>> I am unable to create new volumes or enlarge existing ones on a
>>> three-node DRBD9 cluster.
>>>
>>> The DRBD9 packages are the latest from the proxmox-4 linbit repo.
>> In detail:
>>
>> drbd-dkms 9.0.8+linbit-1
>> drbd-utils 9.0.0+linbit-1
>> drbdmanage-proxmox 1.0-1
>> python-drbdmanage 0.99.9-1
> 
> More details:
> 
> Kernel 4.10.17-2-pve, nodes recently upgraded to PVE v.5

...

>>> The sum of the individual volume sizes is 1093664768 KiB; so
>>> 583831552 KiB are missing.
>>>
>>> I have tried issuing
>>>
>>> drbdmanage update-pool
>>>
>>> without success.

OK, I have looked at the code of the lvmthin plugin, and I found two
interesting things:

1) The lvs command that is run to capture the actual size of the pool
specifies a comma (",") separator, and the actual output is as follows:

# lvs --noheadings --nosuffix --units k --separator "," --options size,data_percent,snap_percent drbdpool/drbdthinpool
  1677496320,00,61,15,61,15

The parsing code uses "," to split the string, so IMHO it cannot
discriminate between the decimal comma and the field separator. This may
be an issue only for Italian-localized installations, where "," (not ".")
is the decimal separator.

I changed the separator to ";", and now:
# lvs --noheadings --nosuffix --units k --separator ";" --options size,data_percent,snap_percent drbdpool/drbdthinpool
  1677496320,00;61,15;61,15
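
To make the ambiguity concrete, here is a minimal Python sketch; the
sample strings are the two outputs above, and the three-way unpack mirrors
what the plugin does:

    # Sample lvs outputs copied from above (Italian locale: "," is the
    # decimal mark).
    comma_output = "1677496320,00,61,15,61,15"
    semicolon_output = "1677496320,00;61,15;61,15"

    # Splitting on "," returns six fields instead of three, so the
    # plugin's three-way unpack
    # (size_data, data_part, snap_part = pool_data.split(","))
    # raises ValueError.
    print(comma_output.split(","))
    # ['1677496320', '00', '61', '15', '61', '15']

    # With ";" as the field separator the three values come back cleanly.
    print(semicolon_output.split(";"))
    # ['1677496320,00', '61,15', '61,15']

Forcing the C locale (e.g. LC_ALL=C) when invoking lvs should also avoid
the decimal comma, but changing the separator is the smaller change.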

2) For some reason the snapshot percentage is not null, even though I have
no snapshots in the pool. I suspect a bug in the lvs code, because:
a) the snapshot percentage is identical to the data occupation, for all
volumes;
b) on another machine with the previous kernel there is no value for the
snapshot percentage.
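
That spurious snap_percent is what actually exhausted the pool: with data
and snapshot usage both reported as 61.15%, the plugin's
"space_used = data_used + snap_used" exceeds the pool size, the computed
free space goes negative, and (per the "if space_free < 0" clamp visible
in the patch below) it ends up reported as zero. A minimal sketch of the
arithmetic, using the pool values above:

    # Pool values from the lvs output above (KiB and percent).
    space_size = 1677496320.0
    data_perc = 61.15 / 100
    snap_perc = 61.15 / 100  # spurious: should be 0, there are no snapshots

    data_used = data_perc * space_size
    snap_used = snap_perc * space_size

    # Original plugin logic: the bogus snapshot usage pushes the total to
    # ~122% of the pool, so the free space goes negative and gets clamped.
    print(int(space_size - (data_used + snap_used)))   # -374081679

    # Counting data usage alone leaves the real free space (it matches the
    # corrected list-nodes output below).
    print(int(space_size - data_used))                 # 651707320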

I restarted drbdmanage after applying this patch:

--- drbdmanage/storage/lvm_thinlv.py	2017-05-24 14:11:36.000000000 +0200
+++ /usr/lib/python2.7/dist-packages/drbdmanage/storage/lvm_thinlv.py	2017-08-31 16:11:20.539325893 +0200
@@ -233,7 +233,7 @@
         try:
             exec_args = [
                 self._cmd_lvs, "--noheadings", "--nosuffix",
-                "--units", "k", "--separator", ",",
+                "--units", "k", "--separator", ";",
                 "--options",
                 "size,data_percent,snap_percent",
                 self._conf[consts.KEY_VG_NAME] + "/" +
@@ -250,7 +250,7 @@
                 pool_data.strip()
                 try:
                     size_data, data_part, snap_part = (
-                        pool_data.split(",")
+                        pool_data.split(";")
                     )
                     size_data = self.discard_fraction(size_data)
                     space_size = long(size_data)
@@ -275,7 +275,8 @@
                     data_used = data_perc * space_size
                     snap_used = snap_perc * space_size

-                    space_used = data_used + snap_used
+                    #space_used = data_used + snap_used
+                    space_used = data_used

                     space_free = int(space_size - space_used)
                     if space_free < 0:

and reran

drbdmanage update-pool

on each node.

Now the free space is correct:

# drbdmanage list-nodes -m
pve1,4,10.1.1.1,1677496320,660933550,drbdctrl|storage,N/A
pve2,4,10.1.1.2,1677496320,651707320,drbdctrl|storage,N/A
pve3,4,10.1.1.3,1677504512,648858745,drbdctrl|storage,N/A
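
For reference, a minimal sketch of reading that machine-readable output,
assuming (from the lines above, not from documentation) that the fields
are name, id, address, pool size, pool free, roles:

    # Parse one line of "drbdmanage list-nodes -m" output as shown above.
    line = "pve2,4,10.1.1.2,1677496320,651707320,drbdctrl|storage,N/A"
    fields = line.split(",")
    name = fields[0]
    pool_size_kib = int(fields[3])   # matches the lvs --units k figure
    pool_free_kib = int(fields[4])
    print("%s: %d KiB free of %d" % (name, pool_free_kib, pool_size_kib))
    # pve2: 651707320 KiB free of 1677496320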

bye,
rob



