Crash after redistributing the mesh - interFlow #19

@Ananth-Narayan-IITM

Description

Hello Henning,

I recently came across the interFlow solver in two repositories (VoFLibrary and TwoPhaseFlow). I am running a two-phase problem (the Hysing benchmark) with AMR, and the solver crashes while redistributing the mesh. This happens with the interFlow solver from both repositories. I am using OpenFOAM v1812; the case files are attached:

e0R16.zip

The error I got is,

PIMPLE: iteration 1
Selected 117653 cells for refinement out of 130305.
Refined from 130305 to 483264 cells.
Selected 0 split edges out of a possible 117653.
Maximum imbalance = 12.5430406569 %

**Solver hold for redistribution at time = 0.00862961 s
Selecting decompositionMethod ptscotch [8]
Selecting decompositionConstraint refinementHistoryMultiDim
refinementHistoryMultiDim : setting constraints to refinement history
Distributing the mesh ...
Mapping the fields ...
Distribute the map ...
Successfully distributed mesh
New distribution: 8(60056 59900 61816 59584 59992 59814 60391 61711)
Max deviation: 2.33081711032 %
GAMG:  Solving for pcorr, Initial residual = 1, Final residual = 7.98231788597e-11, No Iterations 96
GAMG:  Solving for pcorr, Initial residual = 0.00135507326358, Final residual = 9.47914525177e-11, No Iterations 60
time step continuity errors : sum local = 2.58210776801e-07, global = -3.057057071e-22, cumulative = 5.61541775845e-20
[6] #0  Foam::error::printStack(Foam::Ostream&)[3] #0  Foam::error::printStack(Foam::Ostream&) at ??:?
[3] #1  Foam::sigSegv::sigHandler(int) at ??:?
[6] #1  Foam::sigSegv::sigHandler(int) at ??:?
[3] #2  ? at ??:?
.
.
.
--------------------------------------------------------------------------
Primary job  terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun noticed that process rank 3 with PID 0 on node user exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------

Is there anything we need to modify in the solver for dynamic mesh updates? My feeling is that the error occurs when Uf is recomputed, since its size no longer matches the redistributed mesh, which causes the crash. Please let me know if I am missing anything.
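For reference, dynamic-mesh variants of interFoam typically re-establish the flux/Uf consistency after a topology change with a pattern like the one below. This is only a sketch of the usual OpenFOAM idiom (as in interDyMFoam-style solvers), not the actual interFlow code; `mesh`, `Uf`, `phi`, `U`, and `correctPhi` are the standard solver fields/switches:

```cpp
// After mesh.update() (or a redistribution), Uf has been mapped onto the
// new mesh. Rebuild the absolute face flux from the mapped surface
// velocity before the pressure correction, so phi and Uf agree with the
// new mesh topology.
if (mesh.changing())
{
    if (correctPhi)
    {
        // Calculate absolute flux from the mapped surface velocity
        phi = mesh.Sf() & Uf();

        #include "correctPhi.H"

        // Make the flux relative to the mesh motion
        fvc::makeRelative(phi, U);
    }
}
```

If the redistribution step does not map or resize Uf along with the mesh, the `mesh.Sf() & Uf()` product here would operate on mismatched field sizes, which would be consistent with the segfault you are seeing.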
