[DRBD-user] Not enough free bitmap slots when assigning a resource on an additional node

Yannis Milios yannis.milios at gmail.com
Wed Apr 11 18:47:24 CEST 2018


After digging a bit more into both the user's guide and the ML archives, I
managed to assign the resource on the 3rd node.
It required manually dumping, modifying and restoring the metadata:
I had to increase the value of the 'max-peers' option, which for this
particular resource was set to '0' for some reason.

All other resources seem to have the correct values.
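For the archives, here is a sketch of the dump/modify/restore cycle. The
resource name, backing device path, minor number and target max-peers value
are examples for my setup (adjust for yours), and you should keep a copy of
the original dump before restoring anything:

```shell
# Sketch of the metadata fix, assuming internal metadata on a ZFS zvol.
# RES/DEV/MINOR and the new peer count are illustrative values, not
# taken from a real run.
RES=vm-122-disk-1
DEV=/dev/zvol/rpool/vm-122-disk-1   # backing device (example path)
MINOR=103                           # DRBD minor of the volume

drbdadm down "$RES"                 # metadata must not be in use
drbdmeta "$MINOR" v09 "$DEV" internal dump-md > /tmp/md.txt
cp /tmp/md.txt /tmp/md.txt.bak      # keep the original dump

# Raise max-peers from 0; 7 is just an example value here. The dump
# is plain text, so sed (or an editor) is enough.
sed -i 's/^max-peers 0;/max-peers 7;/' /tmp/md.txt

drbdmeta --force "$MINOR" v09 "$DEV" internal restore-md /tmp/md.txt
drbdadm up "$RES"
```

Since max-peers determines how many peer bitmap slots are allocated in the
metadata, a value of 0 explains the "Not enough free bitmap slots" error as
soon as a second peer tries to connect.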


On Mon, Apr 9, 2018 at 2:44 PM, Yannis Milios <yannis.milios at gmail.com>
wrote:

> Hello,
>
> On a 3 node/zfs backed drbd9 cluster, while trying to assign-resource on
> an additional node, I'm getting "Not enough free bitmap slots" and the
> resync does not start.
>
> Removing/reassigning the resource does not help either. I couldn't find
> enough information about this error when searching ML archives.
>
> Any ideas what is causing this?
>
> Thanks
>
> Some logs:
> ========
> [137368.869743] drbd vm-122-disk-1: Preparing cluster-wide state change 3649068569 (0->-1 3/1)
> [137368.870076] drbd vm-122-disk-1: State change 3649068569: primary_nodes=5, weak_nodes=FFFFFFFFFFFFFFF8
> [137368.870078] drbd vm-122-disk-1: Committing cluster-wide state change 3649068569 (0ms)
> [137368.870082] drbd vm-122-disk-1: role( Secondary -> Primary )
>
> [142124.120066] drbd vm-122-disk-1 pve1: Preparing remote state change 152764597 (primary_nodes=1, weak_nodes=FFFFFFFFFFFFFFFC)
> [142124.120284] drbd vm-122-disk-1 pve1: Committing remote state change 152764597
> [142124.120289] drbd vm-122-disk-1: State change failed: Refusing to be Outdated while Connected
> [142124.120399] drbd vm-122-disk-1/0 drbd103: Failed: disk( UpToDate -> Outdated )
> [142124.120410] drbd vm-122-disk-1: FATAL: Local commit of prepared 152764597 failed!
> [142124.350767] drbd vm-122-disk-1: Preparing cluster-wide state change 3667315219 (0->1 496/16)
> [142124.350948] drbd vm-122-disk-1: State change 3667315219: primary_nodes=5, weak_nodes=FFFFFFFFFFFFFFFB
> [142124.350949] drbd vm-122-disk-1 pve2: Cluster is now split
> [142124.350950] drbd vm-122-disk-1: Committing cluster-wide state change 3667315219 (0ms)
> [142124.350970] drbd vm-122-disk-1 pve2: conn( Connected -> Disconnecting ) peer( Secondary -> Unknown )
> [142124.350973] drbd vm-122-disk-1/0 drbd103 pve2: pdsk( Diskless -> DUnknown ) repl( Established -> Off )
> [142124.350994] drbd vm-122-disk-1 pve2: ack_receiver terminated
> [142124.350996] drbd vm-122-disk-1 pve2: Terminating ack_recv thread
> [142124.467248] drbd vm-122-disk-1 pve2: Connection closed
> [142124.467428] drbd vm-122-disk-1 pve2: conn( Disconnecting -> StandAlone )
> [142124.467432] drbd vm-122-disk-1 pve2: Terminating receiver thread
> [142124.467504] drbd vm-122-disk-1 pve2: Terminating sender thread
>
> [142133.966538] drbd vm-122-disk-1 pve2: Starting sender thread (from drbdsetup [1502468])
> [142133.968349] drbd vm-122-disk-1/0 drbd103 pve2: Not enough free bitmap slots
> [142135.077014] drbd vm-122-disk-1/0 drbd103 pve2: Not enough free bitmap slots
> [142227.488145] drbd vm-122-disk-1/0 drbd103 pve2: Not enough free bitmap slots
> [142232.509729] drbd vm-122-disk-1/0 drbd103 pve2: Not enough free bitmap slots
> [142272.526580] drbd vm-122-disk-1/0 drbd103 pve2: Not enough free bitmap slots
> [143361.840005] drbd vm-122-disk-1 pve2: Terminating sender thread
> [143505.002551] drbd vm-122-disk-1 pve2: Starting sender thread (from drbdsetup [3396832])
> [143505.004330] drbd vm-122-disk-1/0 drbd103 pve2: Not enough free bitmap slots
> [143506.090369] drbd vm-122-disk-1/0 drbd103 pve2: Not enough free bitmap slots
> [143633.980319] drbd vm-122-disk-1 pve2: Terminating sender thread
