[DRBD-user] Linstor | Peer's disk size too small

Robert Altnoeder robert.altnoeder at linbit.com
Fri Aug 24 10:35:16 CEST 2018


On 08/24/2018 08:54 AM, Yannis Milios wrote:
> "[81307.730199] drbd test/0 drbd1000 pve3: The peer's disk size is too
> small! (20971520 < 20975152 sectors)
>
> This is reproducible in all my attempts to create any random resource
> on the 3rd node. Could this be happening due to the alignment issues
> that Robert mentioned on the previous post?

Most probably, that is the reason. If the LVM volumes turn out bigger
than the ZFS volumes, because LVM rounds the volume size up to a
multiple of its extent size, then DRBD uses the additional space, and
when the ZFS volumes are added later, they are too small for the
device size that DRBD has already established.
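
For illustration, the numbers in the kernel message above fit that
explanation. A minimal sketch of the arithmetic (assuming 512-byte
sectors; the variable names are mine):

    # Sector math from the error message (1 sector = 512 bytes).
    zfs_sectors = 20971520   # the ZFS-backed peer volume: exactly 10 GiB
    lvm_sectors = 20975152   # net size DRBD established on the LVM nodes

    print(zfs_sectors * 512 / 2**30)          # 10.0 -> requested size in GiB
    print((lvm_sectors - zfs_sectors) * 512)  # 1859584 bytes, ~1.8 MiB surplus

In other words, DRBD on the LVM-backed nodes presumably grew into
roughly 1.8 MiB of extent-rounding surplus that the ZFS-backed volume
does not have.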

It should be possible to avoid this by setting an explicit size for the
DRBD volume in the resource configuration file, so that DRBD only uses
that much space even if more is available. We will probably have to
integrate this into LINSTOR to support mixed backends properly;
however, implementing it will take some time, because it makes other
transitions (such as resize) more complicated.
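
As a rough sketch of such a resource file (the resource name, minor
number, and backing device path here are made up; the "size" option in
the disk section caps the usable size, and the 's' suffix means
512-byte sectors):

    resource test {
        disk {
            size 20971520s;   # explicit net size cap
        }
        volume 0 {
            device    /dev/drbd1000;
            disk      /dev/vg_pve/test_00000;
            meta-disk internal;
        }
        # ... node and connection sections as usual ...
    }
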
We may also need to make this configurable, because it could otherwise
break migration of existing volumes (e.g. from drbdmanage), where
mismatches in the peer count or activity log (AL) size lead the DRBD
meta data size calculation module in LINSTOR to compute a different
exact size.
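
To illustrate why those parameters change the exact number, here is a
rough model of internal meta data sizing (my own simplification for
illustration, not LINSTOR's actual calculation module):

    import math

    def md_size_sectors(data_sectors, peers, al_stripes=1,
                        al_stripe_size_kib=32):
        # Bitmap: one dirty bit per 4 KiB of data, per peer, rounded up
        # to 4 KiB (8-sector) blocks; one such block covers 2**18 data
        # sectors (128 MiB).
        bitmap = math.ceil(data_sectors / 2**18) * 8 * peers
        al = al_stripes * al_stripe_size_kib * 2  # activity log, sectors
        superblock = 8                            # 4 KiB superblock
        return bitmap + al + superblock

    # The same gross device yields different net sizes depending on the
    # peer count the meta data was sized for:
    print(md_size_sectors(20971520, peers=3))  # 1992 sectors
    print(md_size_sectors(20971520, peers=7))  # 4552 sectors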

> When I was using drbdmanage for managing the same cluster, I never had
> this problem.

drbdmanage set the "size" option in the resource configuration files
it generated, much like the sketch above.

It was dropped in LINSTOR to simplify resizing, but apparently that
was a bad decision that we need to rethink.

br,
Robert


