>> Error message: Received unknown storage resource from satellite

According to the error above, there seems to be an issue with the "storage
resource" on san7.

Have you checked if the storage pool (pool_hdd) has enough space and is
healthy?
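
For example, something along these lines should show free/total capacity for
pool_hdd and whether the satellite itself is online (standard linstor client
commands; exact column names may vary by version):

linstor storage-pool list
linstor node list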

On Tue, 29 Sep 2020 at 16:15, Adam Goryachev <mailinglists@websitemanagers.com.au> wrote:

I'm still progressively testing linstor on a group of test servers, and have
now come across a new problem.

I had everything working nicely with 4 nodes: "linstor n l" showed all 4
working, machines would automatically come back after a reboot, and it all
looked good. I then deleted all the storage pools and re-created them (same
name across all servers), followed the docs to create my first
resource-definition and volume-definition, and then used auto-placement on
3 nodes.
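
For the record, that sequence was along these lines (standard linstor client
commands; the resource name and volume size here are illustrative, not exact):

linstor resource-definition create testvm1
linstor volume-definition create testvm1 100M
linstor resource create testvm1 --auto-place 3 --storage-pool pool_hdd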

I then decided to get clever, and started creating another 9 tests with
auto-placement on 4 nodes. All of this worked well, and I got a status like
this:

╭──────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ Node   ┊ Resource   ┊ StoragePool ┊ VolNr ┊ MinorNr ┊ DeviceName    ┊  Allocated ┊ InUse  ┊    State ┊
╞══════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ castle ┊ testvm1    ┊ pool_hdd    ┊     0 ┊    1000 ┊ /dev/drbd1000 ┊ 102.42 MiB ┊ Unused ┊ UpToDate ┊
┊ san5   ┊ testvm1    ┊ pool_hdd    ┊     0 ┊    1000 ┊ /dev/drbd1000 ┊ 102.42 MiB ┊ Unused ┊ UpToDate ┊
┊ san6   ┊ testvm1    ┊ pool_hdd    ┊     0 ┊    1000 ┊ /dev/drbd1000 ┊ 102.42 MiB ┊ Unused ┊ UpToDate ┊
┊ san7   ┊ testvm1    ┊ pool_hdd    ┊     0 ┊    1000 ┊ /dev/drbd1000 ┊ 102.42 MiB ┊ Unused ┊ UpToDate ┊
┊ castle ┊ testvm2    ┊ pool_hdd    ┊     0 ┊    1002 ┊ /dev/drbd1002 ┊ 102.42 MiB ┊ Unused ┊ UpToDate ┊
┊ san5   ┊ testvm2    ┊ pool_hdd    ┊     0 ┊    1002 ┊ /dev/drbd1002 ┊ 102.42 MiB ┊ Unused ┊ UpToDate ┊
┊ san6   ┊ testvm2    ┊ pool_hdd    ┊     0 ┊    1002 ┊ /dev/drbd1002 ┊ 102.42 MiB ┊ Unused ┊ UpToDate ┊
┊ san7   ┊ testvm2    ┊ pool_hdd    ┊     0 ┊    1002 ┊ /dev/drbd1002 ┊ 102.42 MiB ┊ Unused ┊ UpToDate ┊
┊ castle ┊ testvm3    ┊ pool_hdd    ┊     0 ┊    1003 ┊ /dev/drbd1003 ┊ 102.42 MiB ┊ Unused ┊ UpToDate ┊
┊ san5   ┊ testvm3    ┊ pool_hdd    ┊     0 ┊    1003 ┊ /dev/drbd1003 ┊ 102.42 MiB ┊ Unused ┊ UpToDate ┊
┊ san6   ┊ testvm3    ┊ pool_hdd    ┊     0 ┊    1003 ┊ /dev/drbd1003 ┊ 102.42 MiB ┊ Unused ┊ UpToDate ┊
┊ san7   ┊ testvm3    ┊ pool_hdd    ┊     0 ┊    1003 ┊ /dev/drbd1003 ┊ 102.42 MiB ┊ Unused ┊ UpToDate ┊
┊ castle ┊ testvm4    ┊ pool_hdd    ┊     0 ┊    1004 ┊ /dev/drbd1004 ┊ 102.42 MiB ┊ Unused ┊ UpToDate ┊
┊ san5   ┊ testvm4    ┊ pool_hdd    ┊     0 ┊    1004 ┊ /dev/drbd1004 ┊ 102.42 MiB ┊ Unused ┊ UpToDate ┊
┊ san6   ┊ testvm4    ┊ pool_hdd    ┊     0 ┊    1004 ┊ /dev/drbd1004 ┊ 102.42 MiB ┊ Unused ┊ UpToDate ┊
┊ san7   ┊ testvm4    ┊ pool_hdd    ┊     0 ┊    1004 ┊ /dev/drbd1004 ┊ 102.42 MiB ┊ Unused ┊ UpToDate ┊
┊ castle ┊ testvm5    ┊ pool_hdd    ┊     0 ┊    1005 ┊ /dev/drbd1005 ┊ 102.42 MiB ┊ Unused ┊ UpToDate ┊
┊ san5   ┊ testvm5    ┊ pool_hdd    ┊     0 ┊    1005 ┊ /dev/drbd1005 ┊ 102.42 MiB ┊ Unused ┊ UpToDate ┊
┊ san6   ┊ testvm5    ┊ pool_hdd    ┊     0 ┊    1005 ┊ /dev/drbd1005 ┊ 102.42 MiB ┊ Unused ┊ UpToDate ┊
┊ san7   ┊ testvm5    ┊ pool_hdd    ┊     0 ┊    1005 ┊ /dev/drbd1005 ┊ 102.42 MiB ┊ Unused ┊ UpToDate ┊
┊ castle ┊ testvm6    ┊ pool_hdd    ┊     0 ┊    1006 ┊ /dev/drbd1006 ┊ 102.42 MiB ┊ Unused ┊ UpToDate ┊
┊ san5   ┊ testvm6    ┊ pool_hdd    ┊     0 ┊    1006 ┊ /dev/drbd1006 ┊ 102.42 MiB ┊ Unused ┊ UpToDate ┊
┊ san6   ┊ testvm6    ┊ pool_hdd    ┊     0 ┊    1006 ┊ /dev/drbd1006 ┊ 102.42 MiB ┊ Unused ┊ UpToDate ┊
┊ san7   ┊ testvm6    ┊ pool_hdd    ┊     0 ┊    1006 ┊ /dev/drbd1006 ┊ 102.42 MiB ┊ Unused ┊ UpToDate ┊
┊ castle ┊ testvm7    ┊ pool_hdd    ┊     0 ┊    1007 ┊ /dev/drbd1007 ┊ 102.42 MiB ┊ Unused ┊ UpToDate ┊
┊ san5   ┊ testvm7    ┊ pool_hdd    ┊     0 ┊    1007 ┊ /dev/drbd1007 ┊ 102.42 MiB ┊ Unused ┊ UpToDate ┊
┊ san6   ┊ testvm7    ┊ pool_hdd    ┊     0 ┊    1007 ┊ /dev/drbd1007 ┊ 102.42 MiB ┊ Unused ┊ UpToDate ┊
┊ san7   ┊ testvm7    ┊ pool_hdd    ┊     0 ┊    1007 ┊ /dev/drbd1007 ┊ 102.42 MiB ┊ Unused ┊ UpToDate ┊
┊ castle ┊ testvm8    ┊ pool_hdd    ┊     0 ┊    1008 ┊ /dev/drbd1008 ┊ 102.42 MiB ┊ Unused ┊ UpToDate ┊
┊ san5   ┊ testvm8    ┊ pool_hdd    ┊     0 ┊    1008 ┊ /dev/drbd1008 ┊ 102.42 MiB ┊ Unused ┊ UpToDate ┊
┊ san6   ┊ testvm8    ┊ pool_hdd    ┊     0 ┊    1008 ┊ /dev/drbd1008 ┊ 102.42 MiB ┊ Unused ┊ UpToDate ┊
┊ san7   ┊ testvm8    ┊ pool_hdd    ┊     0 ┊    1008 ┊ /dev/drbd1008 ┊ 102.42 MiB ┊ Unused ┊ UpToDate ┊
┊ castle ┊ testvm9    ┊ pool_hdd    ┊     0 ┊    1009 ┊ /dev/drbd1009 ┊ 102.42 MiB ┊ Unused ┊ UpToDate ┊
┊ san5   ┊ testvm9    ┊ pool_hdd    ┊     0 ┊    1009 ┊ /dev/drbd1009 ┊ 102.42 MiB ┊ Unused ┊ UpToDate ┊
┊ san6   ┊ testvm9    ┊ pool_hdd    ┊     0 ┊    1009 ┊ /dev/drbd1009 ┊ 102.42 MiB ┊ Unused ┊ UpToDate ┊
┊ san7   ┊ testvm9    ┊ pool_hdd    ┊     0 ┊    1009 ┊ /dev/drbd1009 ┊ 102.42 MiB ┊ Unused ┊ UpToDate ┊
┊ castle ┊ windows-wm ┊ pool_hdd    ┊     0 ┊    1001 ┊ /dev/drbd1001 ┊  49.16 MiB ┊ Unused ┊ UpToDate ┊
┊ san5   ┊ windows-wm ┊ pool_hdd    ┊     0 ┊    1001 ┊ /dev/drbd1001 ┊  49.16 MiB ┊ Unused ┊ UpToDate ┊
┊ san6   ┊ windows-wm ┊ pool_hdd    ┊     0 ┊    1001 ┊ /dev/drbd1001 ┊  49.16 MiB ┊ Unused ┊ UpToDate ┊
╰──────────────────────────────────────────────────────────────────────────────────────────────────────╯

You can see at the end that windows-wm is only on 3 nodes, while the other
testvm resources are on 4. So I ran this command to add windows-wm to the
4th node:

linstor resource create san7 windows-wm --storage-pool pool_hdd
SUCCESS:
Successfully set property key(s): StorPoolName
SUCCESS:
Description:
New resource 'windows-wm' on node 'san7' registered.
Details:
Resource 'windows-wm' on node 'san7' UUID is: 50e34cac-6702-45c4-b242-b644415a7644
SUCCESS:
Description:
Volume with number '0' on resource 'windows-wm' on node 'san7' successfully registered
Details:
Volume UUID is: f6529463-03ac-4b31-af69-67b3153de355
SUCCESS:
Added peer(s) 'san7' to resource 'windows-wm' on 'san6'
SUCCESS:
Added peer(s) 'san7' to resource 'windows-wm' on 'castle'
SUCCESS:
Added peer(s) 'san7' to resource 'windows-wm' on 'san5'
SUCCESS:
Created resource 'windows-wm' on 'san7'
SUCCESS:
Description:
Resource 'windows-wm' on 'san7' ready
Details:
Node(s): 'san7', Resource: 'windows-wm'

This looked promising, and seemed to work, so I ran another "linstor volume
list" but got an error:

linstor volume list
ERROR:
Show reports:
linstor error-reports show 5F733CD9-00000-000004
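
For reference, the full set of stored reports can be listed with the same
subcommand:

linstor error-reports list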

The contents of report 5F733CD9-00000-000004 are below:
ERROR REPORT 5F733CD9-00000-000004

============================================================

Application: LINBIT® LINSTOR
Module: Controller
Version: 1.9.0
Build ID: 678acd24a8b9b73a735407cd79ca33a5e95eb2e2
Build time: 2020-09-23T10:27:49+00:00
Error time: 2020-09-30 00:13:34
Node: castle.websitemanagers.com.au

============================================================

Reported error:
===============

Category: RuntimeException
Class name: NullPointerException
Class canonical name: java.lang.NullPointerException
Generated at: Method 'deviceProviderKindAsString', Source file 'Json.java', Line #73

Call backtrace:

Method Native Class:Line number
deviceProviderKindAsString N com.linbit.linstor.api.rest.v1.serializer.Json:73
apiToVolume N com.linbit.linstor.api.rest.v1.serializer.Json:664
lambda$apiToResourceWithVolumes$2 N com.linbit.linstor.api.rest.v1.serializer.Json:477
accept N java.util.stream.ReferencePipeline$3$1:195
forEachRemaining N java.util.ArrayList$ArrayListSpliterator:1655
copyInto N java.util.stream.AbstractPipeline:484
wrapAndCopyInto N java.util.stream.AbstractPipeline:474
evaluateSequential N java.util.stream.ReduceOps$ReduceOp:913
evaluate N java.util.stream.AbstractPipeline:234
collect N java.util.stream.ReferencePipeline:578
apiToResourceWithVolumes N com.linbit.linstor.api.rest.v1.serializer.Json:506
lambda$listVolumesApiCallRcWithToResponse$1 N com.linbit.linstor.api.rest.v1.View:112
accept N java.util.stream.ReferencePipeline$3$1:195
forEachRemaining N java.util.ArrayList$ArrayListSpliterator:1655
copyInto N java.util.stream.AbstractPipeline:484
wrapAndCopyInto N java.util.stream.AbstractPipeline:474
evaluateSequential N java.util.stream.ReduceOps$ReduceOp:913
evaluate N java.util.stream.AbstractPipeline:234
collect N java.util.stream.ReferencePipeline:578
lambda$listVolumesApiCallRcWithToResponse$2 N com.linbit.linstor.api.rest.v1.View:113
onNext N reactor.core.publisher.FluxFlatMap$FlatMapMain:378
onNext N reactor.core.publisher.FluxContextStart$ContextStartSubscriber:96
onNext N reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner:242
onNext N reactor.core.publisher.FluxOnAssembly$OnAssemblySubscriber:385
onNext N reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner:242
request N reactor.core.publisher.Operators$ScalarSubscription:2317
onSubscribeInner N reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:143
onSubscribe N reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner:237
trySubscribeScalarMap N reactor.core.publisher.FluxFlatMap:191
subscribeOrReturn N reactor.core.publisher.MonoFlatMapMany:49
subscribe N reactor.core.publisher.Flux:8311
onNext N reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:188
request N reactor.core.publisher.Operators$ScalarSubscription:2317
onSubscribe N reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:134
subscribe N reactor.core.publisher.MonoCurrentContext:35
subscribe N reactor.core.publisher.Flux:8325
onNext N reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:188
onNext N reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber:121
complete N reactor.core.publisher.Operators$MonoSubscriber:1755
onComplete N reactor.core.publisher.MonoStreamCollector$StreamCollectorSubscriber:167
onComplete N reactor.core.publisher.FluxOnAssembly$OnAssemblySubscriber:395
onComplete N reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner:252
checkTerminated N reactor.core.publisher.FluxFlatMap$FlatMapMain:838
drainLoop N reactor.core.publisher.FluxFlatMap$FlatMapMain:600
innerComplete N reactor.core.publisher.FluxFlatMap$FlatMapMain:909
onComplete N reactor.core.publisher.FluxFlatMap$FlatMapInner:1013
onComplete N reactor.core.publisher.FluxMap$MapSubscriber:136
onComplete N reactor.core.publisher.Operators$MultiSubscriptionSubscriber:1989
onComplete N reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber:78
complete N reactor.core.publisher.FluxCreate$BaseSink:438
drain N reactor.core.publisher.FluxCreate$BufferAsyncSink:784
complete N reactor.core.publisher.FluxCreate$BufferAsyncSink:732
drainLoop N reactor.core.publisher.FluxCreate$SerializedSink:239
drain N reactor.core.publisher.FluxCreate$SerializedSink:205
complete N reactor.core.publisher.FluxCreate$SerializedSink:196
apiCallComplete N com.linbit.linstor.netcom.TcpConnectorPeer:455
handleComplete N com.linbit.linstor.proto.CommonMessageProcessor:363
handleDataMessage N com.linbit.linstor.proto.CommonMessageProcessor:287
doProcessInOrderMessage N com.linbit.linstor.proto.CommonMessageProcessor:235
lambda$doProcessMessage$3 N com.linbit.linstor.proto.CommonMessageProcessor:220
subscribe N reactor.core.publisher.FluxDefer:46
subscribe N reactor.core.publisher.Flux:8325
onNext N reactor.core.publisher.FluxFlatMap$FlatMapMain:418
drainAsync N reactor.core.publisher.FluxFlattenIterable$FlattenIterableSubscriber:414
drain N reactor.core.publisher.FluxFlattenIterable$FlattenIterableSubscriber:679
onNext N reactor.core.publisher.FluxFlattenIterable$FlattenIterableSubscriber:243
drainFused N reactor.core.publisher.UnicastProcessor:286
drain N reactor.core.publisher.UnicastProcessor:322
onNext N reactor.core.publisher.UnicastProcessor:401
next N reactor.core.publisher.FluxCreate$IgnoreSink:618
next N reactor.core.publisher.FluxCreate$SerializedSink:153
processInOrder N com.linbit.linstor.netcom.TcpConnectorPeer:373
doProcessMessage N com.linbit.linstor.proto.CommonMessageProcessor:218
lambda$processMessage$2 N com.linbit.linstor.proto.CommonMessageProcessor:164
onNext N reactor.core.publisher.FluxPeek$PeekSubscriber:177
runAsync N reactor.core.publisher.FluxPublishOn$PublishOnSubscriber:439
run N reactor.core.publisher.FluxPublishOn$PublishOnSubscriber:526
call N reactor.core.scheduler.WorkerTask:84
call N reactor.core.scheduler.WorkerTask:37
run N java.util.concurrent.FutureTask:264
run N java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask:304
runWorker N java.util.concurrent.ThreadPoolExecutor:1128
run N java.util.concurrent.ThreadPoolExecutor$Worker:628
run N java.lang.Thread:834


END OF ERROR REPORT.

Since it looks relevant: error reports 1, 2 and 3 are all similar, for nodes
castle, san5 and san6 (note that san7 was the 4th/newest node I tried to add
the resource to). That error report is:
ERROR REPORT 5F733CD9-00000-000001

============================================================

Application: LINBIT® LINSTOR
Module: Controller
Version: 1.9.0
Build ID: 678acd24a8b9b73a735407cd79ca33a5e95eb2e2
Build time: 2020-09-23T10:27:49+00:00
Error time: 2020-09-30 00:13:01
Node: castle.websitemanagers.com.au
Peer: Node: 'san6'

============================================================

Reported error:
===============

Category: Error
Class name: ImplementationError
Class canonical name: com.linbit.ImplementationError
Generated at: Method 'createStorageRscData', Source file 'CtrlRscLayerDataMerger.java', Line #214

Error message: Received unknown storage resource from satellite

Asynchronous stage backtrace:

Error has been observed at the following site(s):
|_ checkpoint ⇢ Execute single-stage API NotifyRscApplied
|_ checkpoint ⇢ Fallback error handling wrapper
Stack trace:

Call backtrace:

Method Native Class:Line number
createStorageRscData N com.linbit.linstor.core.apicallhandler.CtrlRscLayerDataMerger:214

Suppressed exception 1 of 1:
===============
Category: RuntimeException
Class name: OnAssemblyException
Class canonical name: reactor.core.publisher.FluxOnAssembly.OnAssemblyException
Generated at: Method 'createStorageRscData', Source file 'CtrlRscLayerDataMerger.java', Line #214

Error message:
Error has been observed at the following site(s):
|_ checkpoint ⇢ Execute single-stage API NotifyRscApplied
|_ checkpoint ⇢ Fallback error handling wrapper
Stack trace:

Call backtrace:

Method Native Class:Line number
createStorageRscData N com.linbit.linstor.core.apicallhandler.CtrlRscLayerDataMerger:214
createStorageRscData N com.linbit.linstor.core.apicallhandler.CtrlRscLayerDataMerger:65
mergeStorageRscData N com.linbit.linstor.core.apicallhandler.AbsLayerRscDataMerger:285
merge N com.linbit.linstor.core.apicallhandler.AbsLayerRscDataMerger:138
merge N com.linbit.linstor.core.apicallhandler.AbsLayerRscDataMerger:148
mergeLayerData N com.linbit.linstor.core.apicallhandler.AbsLayerRscDataMerger:93
mergeLayerData N com.linbit.linstor.core.apicallhandler.CtrlRscLayerDataMerger:82
updateVolume N com.linbit.linstor.core.apicallhandler.controller.internal.RscInternalCallHandler:228
execute N com.linbit.linstor.api.protobuf.internal.NotifyResourceApplied:45
executeNonReactive N com.linbit.linstor.proto.CommonMessageProcessor:525
lambda$execute$13 N com.linbit.linstor.proto.CommonMessageProcessor:500
doInScope N com.linbit.linstor.core.apicallhandler.ScopeRunner:147
lambda$fluxInScope$0 N com.linbit.linstor.core.apicallhandler.ScopeRunner:75
call N reactor.core.publisher.MonoCallable:91
trySubscribeScalarMap N reactor.core.publisher.FluxFlatMap:126
subscribeOrReturn N reactor.core.publisher.MonoFlatMapMany:49
subscribe N reactor.core.publisher.Flux:8311
onNext N reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:188
request N reactor.core.publisher.Operators$ScalarSubscription:2317
onSubscribe N reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:134
subscribe N reactor.core.publisher.MonoCurrentContext:35
subscribe N reactor.core.publisher.InternalFluxOperator:62
subscribe N reactor.core.publisher.FluxDefer:54
subscribe N reactor.core.publisher.Flux:8325
onNext N reactor.core.publisher.FluxFlatMap$FlatMapMain:418
drainAsync N reactor.core.publisher.FluxFlattenIterable$FlattenIterableSubscriber:414
drain N reactor.core.publisher.FluxFlattenIterable$FlattenIterableSubscriber:679
onNext N reactor.core.publisher.FluxFlattenIterable$FlattenIterableSubscriber:243
drainFused N reactor.core.publisher.UnicastProcessor:286
drain N reactor.core.publisher.UnicastProcessor:322
onNext N reactor.core.publisher.UnicastProcessor:401
next N reactor.core.publisher.FluxCreate$IgnoreSink:618
next N reactor.core.publisher.FluxCreate$SerializedSink:153
processInOrder N com.linbit.linstor.netcom.TcpConnectorPeer:373
doProcessMessage N com.linbit.linstor.proto.CommonMessageProcessor:218
lambda$processMessage$2 N com.linbit.linstor.proto.CommonMessageProcessor:164
onNext N reactor.core.publisher.FluxPeek$PeekSubscriber:177
runAsync N reactor.core.publisher.FluxPublishOn$PublishOnSubscriber:439
run N reactor.core.publisher.FluxPublishOn$PublishOnSubscriber:526
call N reactor.core.scheduler.WorkerTask:84
call N reactor.core.scheduler.WorkerTask:37
run N java.util.concurrent.FutureTask:264
run N java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask:304
runWorker N java.util.concurrent.ThreadPoolExecutor:1128
run N java.util.concurrent.ThreadPoolExecutor$Worker:628
run N java.lang.Thread:834


END OF ERROR REPORT.

So, questions:

1) Why did I end up in this state? I assume something was configured on
castle/san5/san6 but not on san7.

2) How can I fix it?

Thanks,
Adam

_______________________________________________
Star us on GITHUB: https://github.com/LINBIT
drbd-user mailing list
drbd-user@lists.linbit.com
https://lists.linbit.com/mailman/listinfo/drbd-user