Description
I have noticed that my app keeps using more and more memory as I map more data. Eventually it hits a heap space error and crashes.
```
reactor.core.Exceptions : throwIfFatal detected a jvm fatal exception, which is thrown and logged below:
java.lang.OutOfMemoryError: Java heap space
at java.base/java.util.HashMap.resize(Unknown Source) ~[na:na]
at java.base/java.util.HashMap.putVal(Unknown Source) ~[na:na]
at java.base/java.util.HashMap.put(Unknown Source) ~[na:na]
at java.base/java.util.HashSet.add(Unknown Source) ~[na:na]
at java.base/java.util.AbstractCollection.addAll(Unknown Source) ~[na:na]
at java.base/java.util.HashSet.<init>(Unknown Source) ~[na:na]
at org.springframework.data.neo4j.core.ReactiveNeo4jTemplate.lambda$iterateAndMapNextLevel$59(ReactiveNeo4jTemplate.java:859) ~[spring-data-neo4j-7.3.3.jar!/:7.3.3]
at org.springframework.data.neo4j.core.ReactiveNeo4jTemplate$$Lambda$1985/0x00000098018f6528.apply(Unknown Source) ~[na:na]
at reactor.core.publisher.FluxDeferContextual.subscribe(FluxDeferContextual.java:49) ~[reactor-core-3.6.9.jar!/:3.6.9]
at reactor.core.publisher.Flux.subscribe(Flux.java:8848) ~[reactor-core-3.6.9.jar!/:3.6.9]
at reactor.core.publisher.FluxExpand$ExpandBreathSubscriber.drainQueue(FluxExpand.java:179) ~[reactor-core-3.6.9.jar!/:3.6.9]
at reactor.core.publisher.FluxExpand$ExpandBreathSubscriber.onComplete(FluxExpand.java:147) ~[reactor-core-3.6.9.jar!/:3.6.9]
at reactor.core.publisher.FluxMap$MapSubscriber.onComplete(FluxMap.java:144) ~[reactor-core-3.6.9.jar!/:3.6.9]
at reactor.core.publisher.MonoUsingWhen$MonoUsingWhenSubscriber.deferredComplete(MonoUsingWhen.java:270) ~[reactor-core-3.6.9.jar!/:3.6.9]
at reactor.core.publisher.FluxUsingWhen$CommitInner.onComplete(FluxUsingWhen.java:532) ~[reactor-core-3.6.9.jar!/:3.6.9]
at reactor.core.publisher.MonoPeekTerminal$MonoTerminalPeekSubscriber.onComplete(MonoPeekTerminal.java:299) ~[reactor-core-3.6.9.jar!/:3.6.9]
at reactor.core.publisher.MonoIgnoreElements$IgnoreElementsSubscriber.onComplete(MonoIgnoreElements.java:89) ~[reactor-core-3.6.9.jar!/:3.6.9]
at reactor.core.publisher.MonoIgnoreElements$IgnoreElementsSubscriber.onComplete(MonoIgnoreElements.java:89) ~[reactor-core-3.6.9.jar!/:3.6.9]
at reactor.core.publisher.FluxContextWriteRestoringThreadLocals$ContextWriteRestoringThreadLocalsSubscriber.onComplete(FluxContextWriteRestoringThreadLocals.java:149) ~[reactor-core-3.6.9.jar!/:3.6.9]
at reactor.core.publisher.MonoCreate$DefaultMonoSink.success(MonoCreate.java:144) ~[reactor-core-3.6.9.jar!/:3.6.9]
at org.neo4j.driver.internal.reactive.RxUtils.lambda$createEmptyPublisher$0(RxUtils.java:42) ~[neo4j-java-driver-5.23.0.jar!/:5.23.0-9b266bcb3c88c01e72d7c925b7c9647b45f5027d]
at org.neo4j.driver.internal.reactive.RxUtils$$Lambda$1975/0x00000098018f1b00.accept(Unknown Source) ~[na:na]
at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(Unknown Source) ~[na:na]
at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(Unknown Source) ~[na:na]
at java.base/java.util.concurrent.CompletableFuture.postComplete(Unknown Source) ~[na:na]
at java.base/java.util.concurrent.CompletableFuture.complete(Unknown Source) ~[na:na]
at org.neo4j.driver.internal.handlers.ChannelReleasingResetResponseHandler.lambda$resetCompleted$2(ChannelReleasingResetResponseHandler.java:61) ~[neo4j-java-driver-5.23.0.jar!/:5.23.0-9b266bcb3c88c01e72d7c925b7c9647b45f5027d]
at org.neo4j.driver.internal.handlers.ChannelReleasingResetResponseHandler$$Lambda$1980/0x00000098018f59e0.accept(Unknown Source) ~[na:na]
at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(Unknown Source) ~[na:na]
at java.base/java.util.concurrent.CompletableFuture.uniWhenCompleteStage(Unknown Source) ~[na:na]
at java.base/java.util.concurrent.CompletableFuture.whenComplete(Unknown Source) ~[na:na]
at java.base/java.util.concurrent.CompletableFuture.whenComplete(Unknown Source) ~[na:na]
```
I looked at the `iterateAndMapNextLevel` method and I think the allocation occurred on:

```java
Map<String, Set<String>> relatedNodesVisited = new HashMap<>(relationshipsToRelatedNodeIds);
```
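To illustrate my reading of that line (this is only my mental model of the template's traversal, not its actual code): if a fresh copy of the visited map is made at each level of the traversal, the total number of retained entries grows with depth times the number of relationships seen so far. A minimal, self-contained sketch:

```java
import java.util.*;

// Hypothetical illustration of per-level copying of a visited map, analogous
// to new HashMap<>(relationshipsToRelatedNodeIds) being run once per level.
public class VisitedCopyDemo {

    // Returns the total number of entries retained across all per-level copies.
    static int countRetainedEntries(int levels) {
        Map<String, Set<String>> visited = new HashMap<>();
        List<Map<String, Set<String>>> perLevelCopies = new ArrayList<>();
        for (int level = 0; level < levels; level++) {
            // each level discovers one more relationship
            visited.put("rel-" + level, new HashSet<>(Set.of("node-" + level)));
            // the copy made at this level keeps all entries seen so far alive
            perLevelCopies.add(new HashMap<>(visited));
        }
        return perLevelCopies.stream().mapToInt(Map::size).sum();
    }

    public static void main(String[] args) {
        // 5 levels retain 1 + 2 + 3 + 4 + 5 = 15 entries across the copies
        System.out.println(countRetainedEntries(5));
    }
}
```

So even if each level only adds a handful of relationships, the copies are quadratic in the number of levels, which would match the steady growth I'm seeing.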
Is it possible that some data accumulates in the context and isn't freed up?
Is there some kind of workaround I can employ to avoid running out of memory?
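One workaround I've been considering (just a sketch; `findShallowByText` and its Cypher are hypothetical names I made up) is to bypass the derived finder with a custom query that returns only the node itself, so the template never has to traverse and map the related-node graph:

```java
import org.springframework.data.neo4j.repository.ReactiveNeo4jRepository;
import org.springframework.data.neo4j.repository.query.Query;
import reactor.core.publisher.Mono;

public interface NodeRepository extends ReactiveNeo4jRepository<NodeEntity, Long> {

    // Hypothetical shallow lookup: RETURN only the node, no relationships,
    // so no visited-node bookkeeping is needed during mapping.
    @Query("MATCH (n:Node {text: $text}) RETURN n")
    Mono<NodeEntity> findShallowByText(String text);
}
```

I don't know whether that actually sidesteps the allocation in `iterateAndMapNextLevel`, but it would at least bound how much of the graph gets mapped per call.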
I'm using a `ReactiveCrudRepository` to fetch a list of relationships for a node, and then processing each of those relationships by creating even more relationships. I'm working with a single node at a time:
```java
nodeRepository
    .findOneByText(itemsJsonNode.get(0).get("text").asText())
    .switchIfEmpty(nodeEntityFlux.take(1).next())
    .flatMap(nodeEntity -> {
        if (!nodeEntity.isChildrenFetched()) {
            return nodeEntityFlux
                .skip(1)
                .flatMap(child -> nodeRepository
                    .findOneByText(child.getText())
                    .defaultIfEmpty(child))
                .map(child -> child
                    .toBuilder()
                    .children(List.of(nodeEntity))
                    .build())
                .collectList()
                .flatMapMany(nodeRepository::saveAll)
                .then(Mono.defer(() -> {
                    nodeEntity.setChildrenFetched(true);
                    return nodeRepository.save(nodeEntity);
```
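In the meantime, a mitigation I'm experimenting with (a sketch only; the concurrency and batch sizes are guesses) is to bound how much of this pipeline is in flight at once, since an unbounded `flatMap` plus `collectList` can keep every mapped entity live simultaneously:

```java
// Sketch: same inner pipeline as above, but with bounded concurrency and
// batched saves so only a limited number of entities is retained at a time.
nodeEntityFlux
    .skip(1)
    .flatMap(child -> nodeRepository
            .findOneByText(child.getText())
            .defaultIfEmpty(child),
        4)                                    // at most 4 lookups in flight
    .map(child -> child.toBuilder().children(List.of(nodeEntity)).build())
    .buffer(100)                              // save in batches of 100
    .concatMap(nodeRepository::saveAll)       // one batch at a time
    .then(Mono.defer(() -> {
        nodeEntity.setChildrenFetched(true);
        return nodeRepository.save(nodeEntity);
    }));
```

This doesn't remove whatever accumulates inside `iterateAndMapNextLevel`, but it should at least flatten the peak heap usage on my side of the pipeline.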