[DRBD-user] [DRBD-announce] linstor-server 1.10.0 release

Rene Peinthor rene.peinthor at linbit.com
Thu Nov 12 08:59:42 CET 2020


Forwarded from our auto-evict dev:

> In the case you described, if node1 and node2 had normal replicas whereas
> node3 had a diskless one, replacing the resource will turn the diskless
> replica into a diskful one.
> When node1 comes back online, it will still be EVICTED. Additionally,
> linstor will then actually delete the previously moved DRBD-resources.
> However, everything else (such as storage pools, net-interfaces, etc.) is
> still there. The node can be restored (that is, moved out of the EVICTED
> status so that it can be used again) by using linstor node restore
> [nodename]
>
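
For example, using the node1 from your scenario:

    linstor node restore node1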

Cheers,
Rene

On Tue, Nov 10, 2020 at 9:30 PM Yannis Milios <yannis.milios at gmail.com>
wrote:

> Hello,
>
> Quick question: just wondering how "auto-evict" will affect a 3-node
> linstor cluster with a replica count of 2. Say node1 goes down for more
> than 1 hour; linstor will try to move its drbd resources to either node2
> or node3, given that the redundancy level falls below 2 and there's
> enough free space in the backing device on the remaining nodes (will
> diskless clients count in this case?).
>
> How will linstor respond when node1 comes back online? Will it just
> restore the drbd resources on it, or will it reject that node from being
> part of the cluster, in which case the node will have to be rejoined?
>
> Thank you,
> Yannis
>
> On Mon, 9 Nov 2020 at 11:16, Rene Peinthor <rene.peinthor at linbit.com>
> wrote:
>
>> Hi!
>>
>> This release brings two new features: auto-evict and configurable
>> ETCD prefixes.
>>
>> Auto-Evict:
>> If a satellite has no connection to the controller for more than an hour,
>> the controller will mark that node as EVICTED and remove all its
>> DRBD-resources. Should this cause the total number of replicas for those
>> resources to fall below a user-set minimum, it will then try to place new
>> replicas on other satellites to keep enough replicas available.
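>>
>> As a quick sketch (property names as listed in the LINSTOR user guide;
>> the values here are only examples), both the timeout and the minimum
>> replica count can be tuned via controller properties:
>>
>>   # minutes a satellite may stay disconnected before eviction (default: 60)
>>   linstor controller set-property DrbdOptions/AutoEvictAfterTime 120
>>   # minimum number of replicas to keep when replacing evicted resources
>>   linstor controller set-property DrbdOptions/AutoEvictMinReplicaCount 2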
>>
>> ETCD-prefixes:
>> You can now configure the ETCD prefix used by LINSTOR within the
>> linstor.toml file; of course, this needs to be done before the first
>> start of the controller. As a small drawback (cleanup) of this change,
>> it is no longer possible to directly upgrade an ETCD-backed
>> Linstor-Controller installation from a version prior to 1.4.3. If you
>> are in that situation, upgrade to 1.9.0 first and then to 1.10.0.
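>>
>> A minimal sketch of the relevant linstor.toml section (the ETCD
>> connection URL and the prefix value are placeholders; check the shipped
>> example config for your version):
>>
>>   [db]
>>     connection_url = "etcd://etcd-host:2379"
>>
>>     [db.etcd]
>>       prefix = "/LINSTOR/"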
>>
>> linstor-server 1.10.0
>> ---------------------
>>  * Added auto-evict feature
>>  * ETCD prefix is now configurable (migration now only works starting
>> from version 1.4.3)
>>  * Block IO can now also be throttled by IOPS
>>  * Fixed REST-API single snapshot filtering
>>  * Fixed drbd-events2 parsing race condition
>>  * Fixed toggle-disk not working when an unrelated node is offline
>>  * Fixed race-condition in auto-tiebreaker
>>  * Fixed usage of wait for snapshot-shipping
>>  * REST-API version 1.5.0
>>
>> https://www.linbit.com/downloads/linstor/linstor-server-1.10.0.tar.gz
>>
>> Linstor PPA:
>> https://launchpad.net/~linbit/+archive/ubuntu/linbit-drbd9-stack
>>
>> Cheers,
>> Rene
>> _______________________________________________
>> drbd-announce mailing list
>> drbd-announce at lists.linbit.com
>> https://lists.linbit.com/mailman/listinfo/drbd-announce
>>
> --
> Sent from Gmail Mobile
> _______________________________________________
> Star us on GITHUB: https://github.com/LINBIT
> drbd-user mailing list
> drbd-user at lists.linbit.com
> https://lists.linbit.com/mailman/listinfo/drbd-user
>