CFD Online Discussion Forums

CFD Online Discussion Forums (https://www.cfd-online.com/Forums/)
-   OpenFOAM Running, Solving & CFD (https://www.cfd-online.com/Forums/openfoam-solving/)
-   -   Foam::error::PrintStack (https://www.cfd-online.com/Forums/openfoam-solving/89644-foam-error-printstack.html)

almir June 18, 2011 10:00

Foam::error::PrintStack
 
hi,
I have the following error message in OpenFOAM; as solver I use buoyantSimpleFoam. I don't understand this error.

Maybe someone can help me?


almir@ubuntu:~/OpenFOAM/zylinder$ buoyantSimpleFoam
/*---------------------------------------------------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  1.7.x                                 |
|   \\  /    A nd           | Web:      www.OpenFOAM.com                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
Build : 1.7.x-3776603e4c6c
Exec : buoyantSimpleFoam
Date : Jun 15 2011
Time : 12:26:48
Host : ubuntu
PID : 5430
Case : /home/almir/OpenFOAM/zylinder
nProcs : 1
SigFpe : Enabling floating point exception trapping (FOAM_SIGFPE).

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time

Create mesh for time = 0


Reading g
Reading thermophysical properties

Selecting thermodynamics package hPsiThermo<pureMixture<constTransport<specieThermo<hConstThermo<perfectGas>>>>>
Reading field U

Reading/calculating face flux field phi

Creating turbulence model

Selecting RAS turbulence model kOmegaSST
kOmegaSSTCoeffs
{
alphaK1 0.85034;
alphaK2 1;
alphaOmega1 0.5;
alphaOmega2 0.85616;
Prt 1;
gamma1 0.5532;
gamma2 0.4403;
beta1 0.075;
beta2 0.0828;
betaStar 0.09;
a1 0.31;
c1 10;
}

Calculating field g.h

Reading field p_rgh


Starting time loop

Time = 1

DILUPBiCG: Solving for Ux, Initial residual = 1, Final residual = 0.00987294, No Iterations 1
DILUPBiCG: Solving for Uy, Initial residual = 1, Final residual = 0.0157, No Iterations 1
DILUPBiCG: Solving for Uz, Initial residual = 1, Final residual = 0.00987846, No Iterations 1
DILUPBiCG: Solving for h, Initial residual = 1, Final residual = 0.0105804, No Iterations 1
GAMG: Solving for p_rgh, Initial residual = 0.899378, Final residual = 0.00305647, No Iterations 4
time step continuity errors : sum local = 18.9686, global = -1.40909e-15, cumulative = -1.40909e-15
rho max/min : 1.22108 1.13449
DILUPBiCG: Solving for omega, Initial residual = 0.999913, Final residual = 0.0105502, No Iterations 2
bounding omega, min: -902.694 max: 24331.5 average: 741.476
DILUPBiCG: Solving for k, Initial residual = 1, Final residual = 0.0973743, No Iterations 1
bounding k, min: -0.000335494 max: 0.0031317 average: 0.000848312
ExecutionTime = 0.11 s ClockTime = 0 s

Time = 2

DILUPBiCG: Solving for Ux, Initial residual = 0.11425, Final residual = 0.00308422, No Iterations 1
DILUPBiCG: Solving for Uy, Initial residual = 0.0715711, Final residual = 2.95363e-05, No Iterations 2
DILUPBiCG: Solving for Uz, Initial residual = 0.109402, Final residual = 0.00196119, No Iterations 1
DILUPBiCG: Solving for h, Initial residual = 0.177633, Final residual = 0.00377685, No Iterations 1
GAMG: Solving for p_rgh, Initial residual = 0.997056, Final residual = 0.00735437, No Iterations 4
time step continuity errors : sum local = 11.1294, global = 4.01181e-15, cumulative = 2.60272e-15
rho max/min : 309747 -321318
DILUPBiCG: Solving for omega, Initial residual = 0.594095, Final residual = 0.0340323, No Iterations 1
bounding omega, min: -7.04999e+17 max: 1.99134e+09 average: -1.78979e+14
DILUPBiCG: Solving for k, Initial residual = 0.999984, Final residual = 0.0558935, No Iterations 2
bounding k, min: -7.30668e+08 max: 1.51545e+08 average: -1.226e+06
ExecutionTime = 0.15 s ClockTime = 0 s

Time = 3

DILUPBiCG: Solving for Ux, Initial residual = 0.125933, Final residual = 0.0012217, No Iterations 1
DILUPBiCG: Solving for Uy, Initial residual = 0.097672, Final residual = 0.000856571, No Iterations 1
DILUPBiCG: Solving for Uz, Initial residual = 0.130559, Final residual = 0.00122656, No Iterations 1
DILUPBiCG: Solving for h, Initial residual = 0.467341, Final residual = 0.00933279, No Iterations 1
#0 Foam::error::PrintStack(Foam::Ostream&) in "/opt/openfoam171/lib/linux64GccDPOpt/libOpenFOAM.so"
#1 Foam::sigFpe::sigFpeHandler(int) in "/opt/openfoam171/lib/linux64GccDPOpt/libOpenFOAM.so"
#2 in "/lib/libc.so.6"
#3 Foam::GAMGSolver::scalingFactor(Foam::Field<double>&, Foam::Field<double> const&, Foam::Field<double> const&, Foam::Field<double> const&) const in "/opt/openfoam171/lib/linux64GccDPOpt/libOpenFOAM.so"
#4 Foam::GAMGSolver::scalingFactor(Foam::Field<double>&, Foam::lduMatrix const&, Foam::Field<double>&, Foam::FieldField<Foam::Field, double> const&, Foam::UPtrList<Foam::lduInterfaceField const> const&, Foam::Field<double> const&, unsigned char) const in "/opt/openfoam171/lib/linux64GccDPOpt/libOpenFOAM.so"
#5 Foam::GAMGSolver::Vcycle(Foam::PtrList<Foam::lduMatrix::smoother> const&, Foam::Field<double>&, Foam::Field<double> const&, Foam::Field<double>&, Foam::Field<double>&, Foam::Field<double>&, Foam::PtrList<Foam::Field<double> >&, Foam::PtrList<Foam::Field<double> >&, unsigned char) const in "/opt/openfoam171/lib/linux64GccDPOpt/libOpenFOAM.so"
#6 Foam::GAMGSolver::solve(Foam::Field<double>&, Foam::Field<double> const&, unsigned char) const in "/opt/openfoam171/lib/linux64GccDPOpt/libOpenFOAM.so"
#7 Foam::fvMatrix<double>::solve(Foam::dictionary const&) in "/opt/openfoam171/lib/linux64GccDPOpt/libfiniteVolume.so"
#8
in "/opt/openfoam171/applications/bin/linux64GccDPOpt/buoyantSimpleFoam"
#9 __libc_start_main in "/lib/libc.so.6"
#10
in "/opt/openfoam171/applications/bin/linux64GccDPOpt/buoyantSimpleFoam"
Gleitkomma-Ausnahme ("Floating point exception")
almir@ubuntu:~/OpenFOAM/zylinder$


greets

almir

wyldckat June 18, 2011 16:53

Greetings Almir,

At the risk of sending you off in the wrong direction, you can try this answer: My program stops with an output that starts with #0 Foam::error::printStack(Foam::Ostream&)

But in an attempt to send you in the right direction:
  1. You should pay closer attention to the output. For example:
    Code:

    Build  : 1.7.x-3776603e4c6c
    Exec  : buoyantSimpleFoam
    Date  : Jun 15 2011
    Time  : 12:26:48
    Host  : ubuntu
    PID    : 5430
    Case  : /home/almir/OpenFOAM/zylinder
    nProcs : 1
    SigFpe : Enabling floating point exception trapping (FOAM_SIGFPE).

    The last line tells you what SigFpe is: Floating point exception @wikipedia
  2. The second line in the print stack says this:
    Code:

    #1  Foam::sigFpe::sigFpeHandler(int) in "/opt/openfoam171/lib/linux64GccDPOpt/libOpenFOAM.so"
    Which means that some bad math happened... in other words, a division by zero, an invalid operation (e.g. the square root of a negative number), or an overflow.
  3. Examining the iteration outputs, you will see the following breadcrumbs about the impending doom that looms on the solver's horizon:
    Code:

    Time = 1

    DILUPBiCG:  Solving for Ux, Initial residual = 1, Final residual = 0.00987294, No Iterations 1
    DILUPBiCG:  Solving for Uy, Initial residual = 1, Final residual = 0.0157, No Iterations 1
    DILUPBiCG:  Solving for Uz, Initial residual = 1, Final residual = 0.00987846, No Iterations 1
    DILUPBiCG:  Solving for h, Initial residual = 1, Final residual = 0.0105804, No Iterations 1
    GAMG:  Solving for p_rgh, Initial residual = 0.899378, Final residual = 0.00305647, No Iterations 4
    time step continuity errors : sum local = 18.9686, global = -1.40909e-15, cumulative = -1.40909e-15
    rho max/min : 1.22108 1.13449
    DILUPBiCG:  Solving for omega, Initial residual = 0.999913, Final residual = 0.0105502, No Iterations 2
    bounding omega, min: -902.694 max: 24331.5 average: 741.476
    DILUPBiCG:  Solving for k, Initial residual = 1, Final residual = 0.0973743, No Iterations 1
    bounding k, min: -0.000335494 max: 0.0031317 average: 0.000848312
    ExecutionTime = 0.11 s  ClockTime = 0 s

    Time = 2

    DILUPBiCG:  Solving for Ux, Initial residual = 0.11425, Final residual = 0.00308422, No Iterations 1
    DILUPBiCG:  Solving for Uy, Initial residual = 0.0715711, Final residual = 2.95363e-05, No Iterations 2
    DILUPBiCG:  Solving for Uz, Initial residual = 0.109402, Final residual = 0.00196119, No Iterations 1
    DILUPBiCG:  Solving for h, Initial residual = 0.177633, Final residual = 0.00377685, No Iterations 1
    GAMG:  Solving for p_rgh, Initial residual = 0.997056, Final residual = 0.00735437, No Iterations 4
    time step continuity errors : sum local = 11.1294, global = 4.01181e-15, cumulative = 2.60272e-15
    rho max/min : 309747 -321318
    DILUPBiCG:  Solving for omega, Initial residual = 0.594095, Final residual = 0.0340323, No Iterations 1
    bounding omega, min: -7.04999e+17 max: 1.99134e+09 average: -1.78979e+14
    DILUPBiCG:  Solving for k, Initial residual = 0.999984, Final residual = 0.0558935, No Iterations 2
    bounding k, min: -7.30668e+08 max: 1.51545e+08 average: -1.226e+06
    ExecutionTime = 0.15 s  ClockTime = 0 s

    As the highlighted values show: rho, omega, k and the continuity errors indicate that something very unphysical is happening!! Expressions like über compression and super turbulence come to mind! ;)
So, to sum up: you are giving very bad boundary conditions to your case!
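Reading those breadcrumbs can even be mechanized. Below is a minimal, hypothetical Python sketch (not an OpenFOAM tool; the helper name and thresholds are my own arbitrary choices) that scans solver output for the three signs discussed above: negative density, exploding `bounding` extremes, and large local continuity errors.

```python
import re

def log_warning_signs(log_text, bound_limit=1e6, continuity_limit=1.0):
    """Collect divergence breadcrumbs from OpenFOAM-style solver output:
    negative density, exploding 'bounding' extremes, and large local
    continuity errors. Thresholds are arbitrary illustrative guesses."""
    signs = []
    # e.g. "rho max/min : 309747 -321318" -> a negative minimum density
    for m in re.finditer(r"rho max/min : (\S+) (\S+)", log_text):
        if float(m.group(2)) < 0:
            signs.append("negative density")
    # e.g. "bounding omega, min: -7.04999e+17 max: 1.99134e+09 average: ..."
    for m in re.finditer(r"bounding (\w+), min: (\S+) max: (\S+)", log_text):
        if max(abs(float(m.group(2))), abs(float(m.group(3)))) > bound_limit:
            signs.append("%s blowing up" % m.group(1))
    # e.g. "time step continuity errors : sum local = 11.1294, global = ..."
    for m in re.finditer(r"sum local = (\S+),", log_text):
        if float(m.group(1)) > continuity_limit:
            signs.append("large continuity error")
    return signs
```

Feeding it the Time = 2 output above would flag all three problems at once, long before the solver actually crashes at Time = 3.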

Best regards,
Bruno

tfuwa August 3, 2011 11:00

Awesome analysis. Also solved my problem. Thanks.

Kanarya February 9, 2012 07:22

Hi Foamers,

I am running twoPhaseEulerFoam and I have increased the mesh size from 6000 cells (as in the bed2 tutorial) to 24000, and I am getting the following error. I tried 12000 as well, with the same result. In blockMeshDict the resolution was (30 200 1); I first changed it to (30 400 1), then (60 400 2), and so on. Another problem is that I have to change the 0/alpha file every time. Is there a more practical solution for that?


Courant Number mean: 0.263255 max: 12.2832
Max Ur Courant Number = 3.77181e+06
Time = 0.071

DILUPBiCG: Solving for alpha, Initial residual = 1.1014e-05, Final residual = 6.17658e-11, No Iterations 33
Dispersed phase volume fraction = 0.3 Min(alpha) = -1.92847 Max(alpha) = 2.81043
DILUPBiCG: Solving for alpha, Initial residual = 0.00010103, Final residual = 5.2889e-11, No Iterations 8
Dispersed phase volume fraction = 0.3 Min(alpha) = -0.247369 Max(alpha) = 1.92082
kinTheory: max(Theta) = 1000
kinTheory: min(nua) = 1.3774e-12, max(nua) = 0.0231854
kinTheory: min(pa) = -9295.94, max(pa) = 1.14803e+09
GAMG: Solving for p, Initial residual = 0.996287, Final residual = 0.0429939, No Iterations 1
time step continuity errors : sum local = 201567, global = 28.235, cumulative = 28.235
#0 Foam::error::printStack(Foam::Ostream&) in "/opt/openfoam210/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
#1 Foam::sigFpe::sigHandler(int) in "/opt/openfoam210/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
#2 in "/lib/x86_64-linux-gnu/libc.so.6"
#3 Foam::GAMGSolver::scalingFactor(Foam::Field<double>&, Foam::Field<double> const&, Foam::Field<double> const&, Foam::Field<double> const&) const in "/opt/openfoam210/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
#4 Foam::GAMGSolver::scalingFactor(Foam::Field<double>&, Foam::lduMatrix const&, Foam::Field<double>&, Foam::FieldField<Foam::Field, double> const&, Foam::UPtrList<Foam::lduInterfaceField const> const&, Foam::Field<double> const&, unsigned char) const in "/opt/openfoam210/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
#5 Foam::GAMGSolver::Vcycle(Foam::PtrList<Foam::lduMatrix::smoother> const&, Foam::Field<double>&, Foam::Field<double> const&, Foam::Field<double>&, Foam::Field<double>&, Foam::Field<double>&, Foam::PtrList<Foam::Field<double> >&, Foam::PtrList<Foam::Field<double> >&, unsigned char) const in "/opt/openfoam210/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
#6 Foam::GAMGSolver::solve(Foam::Field<double>&, Foam::Field<double> const&, unsigned char) const in "/opt/openfoam210/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
#7 Foam::fvMatrix<double>::solve(Foam::dictionary const&) in "/opt/openfoam210/platforms/linux64GccDPOpt/lib/libfiniteVolume.so"
#8
in "/opt/openfoam210/platforms/linux64GccDPOpt/bin/twoPhaseEulerFoam"
#9 __libc_start_main in "/lib/x86_64-linux-gnu/libc.so.6"
#10
in "/opt/openfoam210/platforms/linux64GccDPOpt/bin/twoPhaseEulerFoam"
Floating point exception

Please help me... and sorry for the stupid questions.

thanksss a lot!!!

Kanarya February 9, 2012 09:17

Hi all,

thanks I did it alone...

Thanks...

ebah6 February 17, 2012 21:33

Hello all,

I am having a similar error in using pimpleDyMFoam.
Below is the error output:

------------------------------------
Courant Number mean: 0.00997899 max: 0.807965
deltaT = 1.32295e-104
--> FOAM Warning :
From function Time::operator++()
in file db/Time/Time.C at line 982
Increased the timePrecision from 267 to 268 to distinguish between timeNames at time 1.97982e-05
Time = 1.97982348633786190360804579935205538276932202279567718505859375e-05

solidBodyMotionFunctions::rotatingMotion::transformation(): Time = 1.97982e-05 transformation: ((0 0 0) (1 (0 0 0.000103663)))
AMI: Creating addressing and weights between 16 source faces and 16 target faces
AMI: Patch source weights min/max/average = 1, 1.0007, 1.00035
AMI: Patch target weights min/max/average = 0.986951, 0.987248, 0.987099
smoothSolver: Solving for Ux, Initial residual = 0.140328, Final residual = 5.75579e-08, No Iterations 3
smoothSolver: Solving for Uy, Initial residual = 0.140962, Final residual = 5.70666e-08, No Iterations 3
GAMG: Solving for p, Initial residual = 0.814891, Final residual = 0.00591061, No Iterations 3
time step continuity errors : sum local = 0.00031744, global = 5.68686e-06, cumulative = 0.00082973
#0 Foam::error::printStack(Foam::Ostream&) in "/home/alpha/OpenFOAM/OpenFOAM-2.1.0/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
#1 Foam::sigFpe::sigHandler(int) in "/home/alpha/OpenFOAM/OpenFOAM-2.1.0/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
#2 in "/lib/x86_64-linux-gnu/libc.so.6"
#3 Foam::GAMGSolver::scalingFactor(Foam::Field<double>&, Foam::Field<double> const&, Foam::Field<double> const&, Foam::Field<double> const&) const in "/home/alpha/OpenFOAM/OpenFOAM-2.1.0/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
#4 Foam::GAMGSolver::scalingFactor(Foam::Field<double>&, Foam::lduMatrix const&, Foam::Field<double>&, Foam::FieldField<Foam::Field, double> const&, Foam::UPtrList<Foam::lduInterfaceField const> const&, Foam::Field<double> const&, unsigned char) const in "/home/alpha/OpenFOAM/OpenFOAM-2.1.0/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
#5 Foam::GAMGSolver::Vcycle(Foam::PtrList<Foam::lduMatrix::smoother> const&, Foam::Field<double>&, Foam::Field<double> const&, Foam::Field<double>&, Foam::Field<double>&, Foam::Field<double>&, Foam::PtrList<Foam::Field<double> >&, Foam::PtrList<Foam::Field<double> >&, unsigned char) const in "/home/alpha/OpenFOAM/OpenFOAM-2.1.0/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
#6 Foam::GAMGSolver::solve(Foam::Field<double>&, Foam::Field<double> const&, unsigned char) const in "/home/alpha/OpenFOAM/OpenFOAM-2.1.0/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
#7 Foam::fvMatrix<double>::solve(Foam::dictionary const&) in "/home/alpha/OpenFOAM/OpenFOAM-2.1.0/platforms/linux64GccDPOpt/lib/libfiniteVolume.so"
#8
in "/home/alpha/OpenFOAM/OpenFOAM-2.1.0/platforms/linux64GccDPOpt/bin/pimpleDyMFoam"
#9 __libc_start_main in "/lib/x86_64-linux-gnu/libc.so.6"
#10
in "/home/alpha/OpenFOAM/OpenFOAM-2.1.0/platforms/linux64GccDPOpt/bin/pimpleDyMFoam"
--------------------------

Can you tell how you went along in solving the issue?

Thank you for your help.

wyldckat February 18, 2012 05:56

Greetings ebah6,

Quote:

Originally Posted by ebah6 (Post 345031)
Courant Number mean: 0.00997899 max: 0.807965
deltaT = 1.32295e-104
--> FOAM Warning :
From function Time::operator++()
in file db/Time/Time.C at line 982
Increased the timePrecision from 267 to 268 to distinguish between timeNames at time 1.97982e-05
Time = 1.97982348633786190360804579935205538276932202279567718505859375e-05


solidBodyMotionFunctions::rotatingMotion::transformation(): Time = 1.97982e-05 transformation: ((0 0 0) (1 (0 0 0.000103663)))
AMI: Creating addressing and weights between 16 source faces and 16 target faces
AMI: Patch source weights min/max/average = 1, 1.0007, 1.00035
AMI: Patch target weights min/max/average = 0.986951, 0.987248, 0.987099
smoothSolver: Solving for Ux, Initial residual = 0.140328, Final residual = 5.75579e-08, No Iterations 3
smoothSolver: Solving for Uy, Initial residual = 0.140962, Final residual = 5.70666e-08, No Iterations 3
GAMG: Solving for p, Initial residual = 0.814891, Final residual = 0.00591061, No Iterations 3
time step continuity errors : sum local = 0.00031744, global = 5.68686e-06, cumulative = 0.00082973
#0 Foam::error::printStack(Foam::Ostream&) in "/home/alpha/OpenFOAM/OpenFOAM-2.1.0/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
#1 Foam::sigFpe::sigHandler(int) in "/home/alpha/OpenFOAM/OpenFOAM-2.1.0/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
#2 in "/lib/x86_64-linux-gnu/libc.so.6"
#3 Foam::GAMGSolver::scalingFactor(Foam::Field<double>&, Foam::Field<double> const&, Foam::Field<double> const&, Foam::Field<double> const&) const in "/home/alpha/OpenFOAM/OpenFOAM-2.1.0/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
#4 Foam::GAMGSolver::scalingFactor(Foam::Field<double>&, Foam::lduMatrix const&, Foam::Field<double>&, Foam::FieldField<Foam::Field, double> const&, Foam::UPtrList<Foam::lduInterfaceField const> const&, Foam::Field<double> const&, unsigned char) const in "/home/alpha/OpenFOAM/OpenFOAM-2.1.0/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
#5 Foam::GAMGSolver::Vcycle(Foam::PtrList<Foam::lduMatrix::smoother> const&, Foam::Field<double>&, Foam::Field<double> const&, Foam::Field<double>&, Foam::Field<double>&, Foam::Field<double>&, Foam::PtrList<Foam::Field<double> >&, Foam::PtrList<Foam::Field<double> >&, unsigned char) const in "/home/alpha/OpenFOAM/OpenFOAM-2.1.0/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
#6 Foam::GAMGSolver::solve(Foam::Field<double>&, Foam::Field<double> const&, unsigned char) const in "/home/alpha/OpenFOAM/OpenFOAM-2.1.0/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
#7 Foam::fvMatrix<double>::solve(Foam::dictionary const&) in "/home/alpha/OpenFOAM/OpenFOAM-2.1.0/platforms/linux64GccDPOpt/lib/libfiniteVolume.so"
#8
in "/home/alpha/OpenFOAM/OpenFOAM-2.1.0/platforms/linux64GccDPOpt/bin/pimpleDyMFoam"
#9 __libc_start_main in "/lib/x86_64-linux-gnu/libc.so.6"
#10
in "/home/alpha/OpenFOAM/OpenFOAM-2.1.0/platforms/linux64GccDPOpt/bin/pimpleDyMFoam"
--------------------------

Can you tell how you went along in solving the issue?

You've got a whole other set of problems on your case. In bold and underlined are the major indicators of what might be wrong:
  • Courant number seems just fine, or at least isn't going overboard...
  • ... at the cost of having a deltaT at a magnitude of 1e-104!? I don't even know what kind of level of simulation interaction this might be, but possibly at the quark level (http://en.wikipedia.org/wiki/Quark)?
  • Time precision at 268 digits... 64-bit floating point only carries about 15 to 17 significant decimal digits, so those extra digits are meaningless!
  • AMI... and using 2.1.0... mmm... you're trying to use a new feature that was released on OpenFOAM 2.1.0, so bugs are bound to pop up anywhere!
So, the diagnosis is this:
  1. Using an adaptive deltaT based on Courant, the solver was forced to virtually go into sub-atomic simulations, which isn't its natural operational zone. The "adaptive deltaT based on Courant" feature is usually based on the smallest cell your mesh has got. Did you run checkMesh to verify the quality of the mesh? Do you see the "minimum volume" value? Is it something like 1e-40?
  2. Also, go see the first tutorial on the user guide: Increasing the mesh resolution - read that section and you will see how the cell size relates to the deltaT needed for a stable Courant number.
  3. AMI, that's a new feature. If you are going to use new features, you better be ready to use the bleeding edge ... pardon, the bug fix version of OpenFOAM, namely 2.1.x, not 2.1.0 ;)
  4. If it's something new to you, always start small!!
    You didn't specify anything about the case you are trying to simulate, so I'm assuming from the results that you are trying to simulate something that moves really fast and has a lot of refined zones in the mesh.
    Therefore, to avoid these kinds of issues, you should always start with something simpler. For example, if you want to simulate a fast-moving F-16 going at Mach 2.1 (probably doing a full afterburner free fall? :rolleyes:), you first have to start by simulating a simple trapezoid that barely resembles an airplane (Ahmed body?), using a very coarse mesh and going at 1 m/s. Then gradually work up to a more complex mesh and geometry, so you can see the gradual need for smaller deltaT and so on...
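The relation behind points 1 and 2 is just the Courant number, Co = |U| * deltaT / deltaX. A tiny back-of-the-envelope sketch (the function name and the maxCo default are my own illustrative choices, not OpenFOAM code) shows why one degenerate cell forces the adaptive time step through the floor:

```python
def max_stable_deltaT(u_mag, dx_min, max_co=0.5):
    """Courant number: Co = |U| * deltaT / deltaX, so for a target
    maxCo the usable time step is bounded by the SMALLEST cell:
    deltaT <= maxCo * dx_min / |U|."""
    return max_co * dx_min / u_mag

# 1 m/s through 1 mm cells tolerates deltaT = 5e-4 s, but a single
# degenerate 1e-12 m sliver cell drags deltaT down to 5e-13 s.
```

So if checkMesh reports an absurdly small minimum volume, that one cell alone explains a deltaT of 1e-104.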
Best regards and good luck!
Bruno

ebah6 February 18, 2012 16:54

Thank you Bruno.

I appreciate. Let me go through this and to see how I can correct my mistakes.
I will probably get back to you for more help.

My best regards.

ebah6 April 3, 2012 00:20

1 Attachment(s)
Hello Bruno and everyone else,

Allow me to follow up on this thread, for I am experiencing issues similar to those for which the thread was initiated.
My log file is as follows:
Code:

Courant Number mean: 0.00401268 max: 0.141369
deltaT = 1e-05
Time = 6e-05

solidBodyMotionFunctions::rotatingMotion::transformation(): Time = 6e-05 transformation: ((0 0 -8.67362e-19) ((0 0 0.0006111)))
AMI: Creating addressing and weights between 152 source faces and 152 target faces
AMI: Patch source weights min/max/average = 1.00046, 1.00059, 1.00051
AMI: Patch target weights min/max/average = 1.00027, 1.00041, 1.00034
PIMPLE: iteration 1
smoothSolver:  Solving for Ux, Initial residual = 0.556334, Final residual = 0.0528731, No Iterations 2
smoothSolver:  Solving for Uy, Initial residual = 0.548617, Final residual = 0.0461752, No Iterations 2
smoothSolver:  Solving for Uz, Initial residual = 0.815939, Final residual = 0.0320452, No Iterations 3
GAMG:  Solving for p, Initial residual = 0.921675, Final residual = 0.00854805, No Iterations 2
time step continuity errors : sum local = 0.000398218, global = -2.4468e-06, cumulative = -4.27843e-06
PIMPLE: iteration 2
smoothSolver:  Solving for Ux, Initial residual = 0.355747, Final residual = 0.0276016, No Iterations 3
smoothSolver:  Solving for Uy, Initial residual = 0.383216, Final residual = 0.034923, No Iterations 2
smoothSolver:  Solving for Uz, Initial residual = 0.532627, Final residual = 0.0249993, No Iterations 3
GAMG:  Solving for p, Initial residual = 0.873111, Final residual = 0.00495616, No Iterations 3
time step continuity errors : sum local = 0.000579107, global = 3.73469e-05, cumulative = 3.30685e-05
PIMPLE: iteration 3
DILUPBiCG:  Solving for Ux, Initial residual = 0.464469, Final residual = 7.77744e-07, No Iterations 24
DILUPBiCG:  Solving for Uy, Initial residual = 0.5108, Final residual = 5.63977e-07, No Iterations 23
DILUPBiCG:  Solving for Uz, Initial residual = 0.472435, Final residual = 1.84551e-07, No Iterations 7
GAMG:  Solving for p, Initial residual = 0.960559, Final residual = 6.37384e-07, No Iterations 22
time step continuity errors : sum local = 1.31687e-07, global = -1.43223e-08, cumulative = 3.30542e-05
DILUPBiCG:  Solving for epsilon, Initial residual = 0.984696, Final residual = 7.74724e-07, No Iterations 25
DILUPBiCG:  Solving for k, Initial residual = 0.971516, Final residual = 7.10152e-07, No Iterations 25
ExecutionTime = 5.57 s  ClockTime = 6 s

Courant Number mean: 0.0553216 max: 1.49585
deltaT = 1e-05
Time = 7e-05

solidBodyMotionFunctions::rotatingMotion::transformation(): Time = 7e-05 transformation: ((0 0 0) ((0 0 0.00071295)))
AMI: Creating addressing and weights between 152 source faces and 152 target faces
AMI: Patch source weights min/max/average = 1.00045, 1.00059, 1.00051
AMI: Patch target weights min/max/average = 1.00027, 1.00041, 1.00034
PIMPLE: iteration 1
smoothSolver:  Solving for Ux, Initial residual = 0.821499, Final residual = 0.0529954, No Iterations 2
smoothSolver:  Solving for Uy, Initial residual = 0.772528, Final residual = 0.0435714, No Iterations 2
smoothSolver:  Solving for Uz, Initial residual = 0.889394, Final residual = 0.0573016, No Iterations 3
GAMG:  Solving for p, Initial residual = 0.927555, Final residual = 0.00793319, No Iterations 3
time step continuity errors : sum local = 0.00232858, global = 1.09112e-05, cumulative = 4.39653e-05
PIMPLE: iteration 2
smoothSolver:  Solving for Ux, Initial residual = 0.624835, Final residual = 0.0579445, No Iterations 8
smoothSolver:  Solving for Uy, Initial residual = 0.528869, Final residual = 0.0500303, No Iterations 7
smoothSolver:  Solving for Uz, Initial residual = 0.667562, Final residual = 0.0371676, No Iterations 4
GAMG:  Solving for p, Initial residual = 0.921425, Final residual = 0.00829498, No Iterations 2
time step continuity errors : sum local = 0.0183302, global = 0.000417751, cumulative = 0.000461716
PIMPLE: iteration 3
DILUPBiCG:  Solving for Ux, Initial residual = 0.469402, Final residual = 0.00155559, No Iterations 1001
DILUPBiCG:  Solving for Uy, Initial residual = 0.456055, Final residual = 0.00192389, No Iterations 1001
DILUPBiCG:  Solving for Uz, Initial residual = 0.657918, Final residual = 7.2394e-07, No Iterations 26
GAMG:  Solving for p, Initial residual = 0.940662, Final residual = 9.79822e-07, No Iterations 23
time step continuity errors : sum local = 5.83896e-06, global = -5.30525e-07, cumulative = 0.000461185
DILUPBiCG:  Solving for epsilon, Initial residual = 0.989478, Final residual = 10537.6, No Iterations 1001
bounding epsilon, min: -1.28949e+18 max: 9.22403e+17 average: -3.22331e+14
DILUPBiCG:  Solving for k, Initial residual = 1.42853e-05, Final residual = 6.87541e-07, No Iterations 30
bounding k, min: -1.84393e+10 max: 2.85172e+12 average: 1.06974e+10
ExecutionTime = 12.22 s  ClockTime = 13 s

Courant Number mean: 4.69634 max: 8810.84
deltaT = 2.26963e-09
Time = 7.00023e-05

solidBodyMotionFunctions::rotatingMotion::transformation(): Time = 7.00023e-05 transformation: ((0 0 -8.67362e-19) ((0 0 0.000712973)))
AMI: Creating addressing and weights between 152 source faces and 152 target faces
AMI: Patch source weights min/max/average = 1.00045, 1.00059, 1.00051
AMI: Patch target weights min/max/average = 1.00027, 1.00041, 1.00034
PIMPLE: iteration 1
smoothSolver:  Solving for Ux, Initial residual = 0.924264, Final residual = 0.0530127, No Iterations 2
smoothSolver:  Solving for Uy, Initial residual = 0.641598, Final residual = 0.0386788, No Iterations 2
smoothSolver:  Solving for Uz, Initial residual = 0.921792, Final residual = 0.0461483, No Iterations 2
GAMG:  Solving for p, Initial residual = 1, Final residual = 5705.13, No Iterations 50
time step continuity errors : sum local = 9.40062e+08, global = 3.78949e+07, cumulative = 3.78949e+07
PIMPLE: iteration 2
smoothSolver:  Solving for Ux, Initial residual = 1, Final residual = 0.0224152, No Iterations 1
smoothSolver:  Solving for Uy, Initial residual = 1, Final residual = 0.0293676, No Iterations 1
smoothSolver:  Solving for Uz, Initial residual = 0.00299007, Final residual = 6.31039e-05, No Iterations 1
GAMG:  Solving for p, Initial residual = 0.595319, Final residual = 9.86458e-05, No Iterations 1
time step continuity errors : sum local = 3.27194e+16, global = 4.33629e+15, cumulative = 4.33629e+15
PIMPLE: iteration 3
DILUPBiCG:  Solving for Ux, Initial residual = 0.999996, Final residual = 5.86893e-07, No Iterations 28
DILUPBiCG:  Solving for Uy, Initial residual = 0.999998, Final residual = 7.80082e-07, No Iterations 20
DILUPBiCG:  Solving for Uz, Initial residual = 0.000201204, Final residual = 9.98882e-07, No Iterations 5
GAMG:  Solving for p, Initial residual = 0.879029, Final residual = 7.05858e-07, No Iterations 47
time step continuity errors : sum local = 4.76118e+19, global = -8.01772e+18, cumulative = -8.01339e+18
DILUPBiCG:  Solving for epsilon, Initial residual = 1, Final residual = 48876.6, No Iterations 1001
bounding epsilon, min: -1.7324e+51 max: 2.47572e+51 average: 5.46302e+48
DILUPBiCG:  Solving for k, Initial residual = 1, Final residual = 6.79817e-07, No Iterations 96
bounding k, min: -1.36796e+48 max: 7.09554e+59 average: 6.62561e+56
ExecutionTime = 15.82 s  ClockTime = 16 s

Courant Number mean: 2.99449e+25 max: 3.21959e+28
deltaT = 1.40989e-37
--> FOAM Warning :
    From function Time::operator++()
    in file db/Time/Time.C at line 1010
    Increased the timePrecision from 6 to 7 to distinguish between timeNames at time 7.00023e-05
Time = 7.000227e-05

solidBodyMotionFunctions::rotatingMotion::transformation(): Time = 7.00023e-05 transformation: ((0 0 -8.67362e-19) ((0 0 0.000712973)))
AMI: Creating addressing and weights between 152 source faces and 152 target faces
AMI: Patch source weights min/max/average = 1.00045, 1.00059, 1.00051
AMI: Patch target weights min/max/average = 1.00027, 1.00041, 1.00034
PIMPLE: iteration 1
smoothSolver:  Solving for Ux, Initial residual = 0.904795, Final residual = 0.0198399, No Iterations 2
smoothSolver:  Solving for Uy, Initial residual = 0.920089, Final residual = 0.00357272, No Iterations 1
smoothSolver:  Solving for Uz, Initial residual = 0.770617, Final residual = 0.0112283, No Iterations 1
GAMG:  Solving for p, Initial residual = 1, Final residual = 3.34459e+74, No Iterations 50
time step continuity errors : sum local = 2.15853e+83, global = 2.27242e+82, cumulative = 2.27242e+82
PIMPLE: iteration 2
smoothSolver:  Solving for Ux, Initial residual = 0.998805, Final residual = 1.80506, No Iterations 1000
smoothSolver:  Solving for Uy, Initial residual = 1, Final residual = 1.93583e-05, No Iterations 1
smoothSolver:  Solving for Uz, Initial residual = 0.000531627, Final residual = 1.42642e-07, No Iterations 1
#0  Foam::error::printStack(Foam::Ostream&) in "/home/alpha/OpenFOAM/OpenFOAM-2.1.x/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
#1  Foam::sigFpe::sigHandler(int) in "/home/alpha/OpenFOAM/OpenFOAM-2.1.x/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
#2   in "/lib/x86_64-linux-gnu/libc.so.6"
#3  Foam::DICPreconditioner::calcReciprocalD(Foam::Field<double>&, Foam::lduMatrix const&) in "/home/alpha/OpenFOAM/OpenFOAM-2.1.x/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
#4  Foam::DICSmoother::DICSmoother(Foam::word const&, Foam::lduMatrix const&, Foam::FieldField<Foam::Field, double> const&, Foam::FieldField<Foam::Field, double> const&, Foam::UPtrList<Foam::lduInterfaceField const> const&) in "/home/alpha/OpenFOAM/OpenFOAM-2.1.x/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
#5  Foam::DICGaussSeidelSmoother::DICGaussSeidelSmoother(Foam::word const&, Foam::lduMatrix const&, Foam::FieldField<Foam::Field, double> const&, Foam::FieldField<Foam::Field, double> const&, Foam::UPtrList<Foam::lduInterfaceField const> const&) in "/home/alpha/OpenFOAM/OpenFOAM-2.1.x/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
#6  Foam::lduMatrix::smoother::addsymMatrixConstructorToTable<Foam::DICGaussSeidelSmoother>::New(Foam::word const&, Foam::lduMatrix const&, Foam::FieldField<Foam::Field, double> const&, Foam::FieldField<Foam::Field, double> const&, Foam::UPtrList<Foam::lduInterfaceField const> const&) in "/home/alpha/OpenFOAM/OpenFOAM-2.1.x/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
#7  Foam::lduMatrix::smoother::New(Foam::word const&, Foam::lduMatrix const&, Foam::FieldField<Foam::Field, double> const&, Foam::FieldField<Foam::Field, double> const&, Foam::UPtrList<Foam::lduInterfaceField const> const&, Foam::dictionary const&) in "/home/alpha/OpenFOAM/OpenFOAM-2.1.x/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
#8  Foam::GAMGSolver::initVcycle(Foam::PtrList<Foam::Field<double> >&, Foam::PtrList<Foam::Field<double> >&, Foam::PtrList<Foam::lduMatrix::smoother>&) const in "/home/alpha/OpenFOAM/OpenFOAM-2.1.x/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
#9  Foam::GAMGSolver::solve(Foam::Field<double>&, Foam::Field<double> const&, unsigned char) const in "/home/alpha/OpenFOAM/OpenFOAM-2.1.x/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
#10  Foam::fvMatrix<double>::solve(Foam::dictionary const&) in "/home/alpha/OpenFOAM/OpenFOAM-2.1.x/platforms/linux64GccDPOpt/lib/libfiniteVolume.so"
#11
in "/home/alpha/OpenFOAM/OpenFOAM-2.1.x/platforms/linux64GccDPOpt/bin/pimpleDyMFoam"
#12  __libc_start_main in "/lib/x86_64-linux-gnu/libc.so.6"
#13
in "/home/alpha/OpenFOAM/OpenFOAM-2.1.x/platforms/linux64GccDPOpt/bin/pimpleDyMFoam"

I also attached my initial condition files in 0.zip.

Could you please have a look at this issue?

Thanks in advance.

samiam1000 April 3, 2012 03:32

Look at the Courant number: it's increasing a lot. I think you should reduce the `maxCo' value in the system/controlDict file.

The solution will be slower, but I think it'll work.
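For example, something along these lines in system/controlDict (the values are illustrative; how large a maxCo you can afford depends on the case):

```cpp
// system/controlDict fragment -- illustrative values only
adjustTimeStep  yes;     // let the solver shrink deltaT as needed
maxCo           0.5;     // cap on the Courant number; lower it if the run crashes
maxDeltaT       0.01;    // optional upper bound on the time step [s]
```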

wyldckat April 3, 2012 05:45

Greetings to all!

To add to samiam1000's answer:
edit: I based my answer on Samuele's answer... but apparently the problem is something else, simply because the time step is being automatically adjusted! So, if the time step is automatically adjusted and the solver still crashes, then it's either: a very bad mesh; or boundary conditions; or wrong fvSchemes... Either way, please create a small example case that reproduces your problem and post it here. Otherwise, it'll just be a long and tedious guessing game :(

Best regards,
Bruno

samiam1000 April 3, 2012 05:55

Dear Bruno, Dear All,

thanks for the links that you added.

I think they are very useful.

Also, I do have a problem with buoyantPimpleFoam.

I am trying to impose either temperature or velocity in some cells.

I did the same with buoyantSimpleFoam, but I have problems with the steady state. Could you check the folder I modified, please?

I am attaching the latest version of my solver, here.

Thanks a lot,

Samuele

ebah6 April 4, 2012 19:14

4 Attachment(s)
Quote:

Originally Posted by wyldckat (Post 352865)
Greetings to all!

To add to samiam1000's answer:
edit: I based my answer on Samuele's answer... but apparently the problem is something else, simply because the time step is being automatically adjusted! So, if the time step is automatically adjusted and the solver still crashes, then it's either: a very bad mesh; or boundary conditions; or wrong fvSchemes... Either way, please create a small example case that reproduces your problem and post it here. Otherwise, it'll just be a long and tedious guessing game :(

Best regards,
Bruno

Hello Bruno,

I did some dummy test cases:
1) a square box that rotates with the AMI; structured mesh
2) same thing with unstructured mesh.
Both these cases don't seem to give any error.
3) I did my learning case with an unstructured mesh; it consists of two cylindrical rotors (Darrieus). But here I run into the trouble described above.
Attached are some pictures showing what the meshes look like.
For the latter case, you can see the velocity field becomes a mess in the outer domain, where no rotation is happening.

Another question I had: how do I export a hybrid mesh from Pointwise to OpenFOAM?
By hybrid I mean unstructured in the x-y plane, then extruded in the z-direction, which is structured.
I tried that, but only the structured boundary faces are exported, not the unstructured ones.

Thank you for your help, and my best regards.

wyldckat April 5, 2012 09:03

Hi ebah6,

Mmm... I'm not an expert on this subject, but this is what I can see that might be the source of the problems:
  • The cylindrical paddles are very thin, which usually leads to the requirement of additional resolution.
  • The sliding interface seems to be also lacking resolution, but again I'm no expert on this.
  • Still related to the previous points: why doesn't your Darrieus structure have more cells from the tips of the cylindrical surfaces to the sliding interface, just like you have in the square-box experiments?
  • Replace the square version with a single thin linear paddle (basically a squished square :)), with the same space you've got on the square experiments. If it works well, then there are two kinds of tests to be done:
    1. Try making the paddle thinner, to see how thickness might affect the solver.
    2. Make the paddle longer, so that the number of cells between the tips of the paddle and the sliding interface is reduced.
My bet is that as soon as the paddle gets too close to the sliding interface, in terms of cell count, the solver will start having serious problems.
The other theory is that the thickness of the paddles has a very big effect on the development of vortices... and if these are not properly resolved, it's only natural that some seriously crazy "fluid pressure shocks" (not a very technical term) will occur.

Another issue might be the speed at which the rotor is running. Proper field initialization might be required to induce the solver to start with good starting values; otherwise, you probably will have to simulate starting with the rotation speed at 0 RPM.

I'm not very familiar with these solvers, but my guess is that if you only want an "averaged" result, then one of the LTS solvers might come in handy... although you would have to create one that combines LTS with AMI...

Best regards,
Bruno

ebah6 April 10, 2012 18:02

Hello,

Yes Bruno, some of the possible issues are as you mentioned; thanks for your insight.
In particular, as the body becomes thinner, I run into problems.
However, I am only encountering problems when using a turbulence model: the laminar case runs fine.
For the cases with a turbulent flow, I refined the mesh again and again but it still crashes with a skyrocketing Courant Number.
My pressing issue is that I need to deal with thin bodies, so I need a work-around.

Also, you suggested using an STL with snappyHexMesh. I did that in the recent past, but the sliding interfaces show a step-like shape despite the refinement.

Thanks for your help.

wyldckat April 11, 2012 04:38

Hi ebah6,

Mmm, if it's not the mesh, then you've got to start tuning the "fvSchemes" file, and possibly the "fvSolution" one as well. Unfortunately I don't know much about how to configure them properly for each scenario, so I suggest that you check all of the relevant tutorials in OpenFOAM, as well as the User Guide.

Good luck!
Bruno

iamed18 May 31, 2012 12:11

Similar Quandary
 
Good Afternoon, Everyone!

I come to this place with a similar issue, and upon reading the above comments and filtering through the User Guide for more information about initial conditions for k-epsilon and about the Courant number, I'm still having a heck of a time performing a run.

Let me explain the situation to you (I can't post the 0/ files for various reasons):

A 7 m long blunt object is situated in a 10 m/s wind-tunnel, with the floor of the tunnel moving with it (so we're in the blunt object's reference frame). The "ground" is of species 1 (alpha1), the wind-tunnel (or atmosphere) is of species 2 (alpha2), and the blunt object is spewing species 3 out of its side at 40 m/s (alpha3).

So, in my back-of-the-envelope calculations (inspired by the User Guide), I set my initial values to k = 2.5 and epsilon = 0.25. I also set the initial time step to 0.005 s, and for the sake of early testing I turned off "adjustTimeStep". With all of that said and done, it only completes one iteration, the output for which is here:

Code:

Courant Number mean: 0.256274 max: 3.73333

PIMPLE: Operating solver in PISO mode

time step continuity errors : sum local = 0.000307692, global = -1.74165e-05, cumulative = -1.74165e-05
DICPCG:  Solving for pcorr, Initial residual = 1, Final residual = 9.01496e-11, No Iterations 589
time step continuity errors : sum local = 4.60233e-10, global = -4.49286e-16, cumulative = -1.74165e-05

Starting time loop

Courant Number mean: 0.27796 max: 9.09506
Interface Courant Number mean: 0 max: 0
Time = 0.005

diagonal:  Solving for alpha1, Initial residual = 0, Final residual = 0, No Iterations 0
smoothSolver:  Solving for alpha2, Initial residual = 1, Final residual = 4.83778e-07, No Iterations 1
Air phase volume fraction = 0  Min(alpha1) = 0  Max(alpha1) = 1
Liquid phase volume fraction = 1  Min(alpha2) = 1  Max(alpha2) = 1
diagonal:  Solving for alpha1, Initial residual = 0, Final residual = 0, No Iterations 0
smoothSolver:  Solving for alpha2, Initial residual = 0.354522, Final residual = 1.43335e-07, No Iterations 1
Air phase volume fraction = 0  Min(alpha1) = 0  Max(alpha1) = 1
Liquid phase volume fraction = 1  Min(alpha2) = 1  Max(alpha2) = 1
DICPCG:  Solving for p_rgh, Initial residual = 1, Final residual = 0.0479761, No Iterations 305
time step continuity errors : sum local = 2.33578, global = 2.22882, cumulative = 2.2288
[kaleva:14255] 5 more processes have sent help message help-mpi-common-sm.txt / mmap on nfs
[kaleva:14255] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
DICPCG:  Solving for p_rgh, Initial residual = 0.683683, Final residual = 0.0337928, No Iterations 320
time step continuity errors : sum local = 3.34528, global = 2.22882, cumulative = 4.45762
DICPCG:  Solving for p_rgh, Initial residual = 0.473874, Final residual = 9.50935e-08, No Iterations 467
time step continuity errors : sum local = 2.35844, global = 2.22882, cumulative = 6.68644
DILUPBiCG:  Solving for epsilon, Initial residual = 0.999999, Final residual = 25.758, No Iterations 1001
bounding epsilon, min: -7.75298e+11 max: 1.02524e+12 average: -256836
DILUPBiCG:  Solving for k, Initial residual = 1, Final residual = 89.7085, No Iterations 1001
bounding k, min: -7.39749e+11 max: 6.80932e+11 average: -9.18358e+06
time step continuity errors : sum local = 2.35844, global = 2.22882, cumulative = 8.91526
ExecutionTime = 22.39 s  ClockTime = 23 s

Courant Number mean: 611.356 max: 4.60701e+06
Interface Courant Number mean: 0 max: 0
Time = 0.01

diagonal:  Solving for alpha1, Initial residual = 0, Final residual = 0, No Iterations 0
smoothSolver:  Solving for alpha2, Initial residual = 1, Final residual = 1.1291e-10, No Iterations 1
Air phase volume fraction = -1.11442  Min(alpha1) = -339.921  Max(alpha1) = 1
Liquid phase volume fraction = -0.114296  Min(alpha2) = -4.60701e+06  Max(alpha2) = 614.595
[0] #0  Foam::error::printStack(Foam::Ostream&) in "/home/leonard/OpenFOAM/OpenFOAM-2.1.0/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[0] #1  Foam::sigFpe::sigHandler(int) in "/home/leonard/OpenFOAM/OpenFOAM-2.1.0/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[0] #2  __restore_rt at sigaction.c:0
[0] #3  void Foam::MULES::limiter<Foam::geometricOneField, Foam::zeroField, Foam::zeroField>(Foam::Field<double>&, Foam::geometricOneField const&, Foam::GeometricField<double,Foam::fvPatchField, Foam::volMesh> const&, Foam::GeometricField<double, Foam::fvsPatchField, Foam::surfaceMesh> const&, Foam::GeometricField<double, Foam::fvsPatchField, Foam::surfaceMesh> const&, Foam::zeroField const&, Foam::zeroField const&, double, double, int) in "/home/leonard/OpenFOAM/OpenFOAM-2.1.0/platforms/linux64GccDPOpt/bin/interMixingFoam"

I can see where it blows up, I just don't know how to fix it!
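(For reference, my back-of-the-envelope k and epsilon values come from the usual inlet estimates; here is a rough sketch of that kind of calculation, with an assumed turbulence intensity and length scale rather than my exact inputs:)

```python
# Standard k-epsilon inlet estimates. The intensity I and the length
# scale L below are illustrative assumptions, not the case values.
U = 10.0            # free-stream speed [m/s]
I = 0.05            # turbulence intensity (assumed 5%)
L = 0.07 * 7.0      # mixing length ~7% of the 7 m body (assumption)

k = 1.5 * (U * I) ** 2                  # turbulent kinetic energy [m^2/s^2]
Cmu = 0.09
epsilon = Cmu ** 0.75 * k ** 1.5 / L    # dissipation rate [m^2/s^3]
print(k, epsilon)
```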

I would much appreciate any input anyone has on this matter.
Thanks!

wyldckat May 31, 2012 14:32

Greetings Edward,

OK, if you've read about the Courant number, then you should know to check the smallest cell size you've got:
  1. Run:
    Code:

    checkMesh
  2. Search for the "Minimum volume" value.
That smallest cell is the one limiting everything!
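The reason is visible straight from the definition Co = |U|·Δt/Δx; a quick sketch with made-up numbers:

```python
# Co = |U| * dt / dx: for a fixed dt, the smallest cell produces the
# largest Courant number. All numbers here are illustrative.
U = 10.0      # local speed [m/s]
dt = 0.005    # time step [s]
for dx in (0.05, 0.005, 0.0005):   # progressively smaller cells [m]
    print(f"dx = {dx}: Co = {U * dt / dx:g}")
```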

Oh, and if checkMesh tells you that you've got bad cells or faces, then that's another source of your problems ;)

Best regards,
Bruno

iamed18 June 1, 2012 09:40

Quote:

Originally Posted by wyldckat (Post 364133)
...check the smallest cell size you've got:

...That smallest cell is the one limiting everything!

I had forgotten about a set of cells I had that were two orders of magnitude smaller than the rest!

However, I've since decided to go a different route, because this takes a painful amount of time to process. I had interFoam running on this large mesh (see checkMesh output below) on 8 CPUs, left it overnight, and it had only gotten to 0.08 s by the following morning. Since my goal is a steady-state solution, I think what I want to try is to add the phase mixing of interFoam to the SIMPLE solver of simpleFoam. I took a quick look at it yesterday, and it seems like it will be a formidable task.

Any insight on the matter before I hit the ground running?
Thanks!
~Ed

wyldckat June 1, 2012 09:50

Hi Edward,

Mmm, you forgot to attach your checkMesh log. ;)

Anyway, if you want the steady state solution with interFoam, then probably this is what you want: http://www.openfoam.org/version2.0.0/steady-vof.php

Best regards,
Bruno

yash.aesi August 27, 2013 06:17

Regarding an error while running rhoSimplecFoam
 
Greetings Bruno and everyone,

I am trying to simulate my cold-flow case using the rhoSimplecFoam solver, but after running for some time it ends with the following error.

Can you please help me understand where I am going wrong? Thanks in advance :)


Code:

Time = 43

GAMG:  Solving for Ux, Initial residual = 0.19957, Final residual = 0.0130805, No Iterations 1
GAMG:  Solving for Uy, Initial residual = 0.432992, Final residual = 0.030986, No Iterations 1
GAMG:  Solving for e, Initial residual = 0.00150254, Final residual = 0.000111488, No Iterations 1
GAMG:  Solving for p, Initial residual = 0.164657, Final residual = 0.122731, No Iterations 1000
time step continuity errors : sum local = 0.0390443, global = -0.0148295, cumulative = -0.639136
rho max/min : 1 0.382345
GAMG:  Solving for epsilon, Initial residual = 0.000488404, Final residual = 2.53703e-08, No Iterations 1
bounding epsilon, min: -4.33742e-06 max: 857.902 average: 1.75282
GAMG:  Solving for k, Initial residual = 0.128439, Final residual = 0.0118675, No Iterations 1
bounding k, min: 1e-16 max: 5005.77 average: 1.11456
ExecutionTime = 1274.69 s  ClockTime = 1275 s

Time = 44

GAMG:  Solving for Ux, Initial residual = 0.00778892, Final residual = 0.000427613, No Iterations 1
GAMG:  Solving for Uy, Initial residual = 0.262673, Final residual = 0.0165556, No Iterations 1
GAMG:  Solving for e, Initial residual = 1.08917e-05, Final residual = 6.74763e-07, No Iterations 1
GAMG:  Solving for p, Initial residual = 0.198508, Final residual = 0.14558, No Iterations 1000
time step continuity errors : sum local = 0.0445931, global = -0.0148271, cumulative = -0.653963
rho max/min : 1 0.382345
GAMG:  Solving for epsilon, Initial residual = 0.000403613, Final residual = 1.28867e-07, No Iterations 1
bounding epsilon, min: -3.4861e-08 max: 857.872 average: 1.76686
GAMG:  Solving for k, Initial residual = 0.0963778, Final residual = 0.00239046, No Iterations 2
bounding k, min: -2.2769e-08 max: 18855.3 average: 2.6373
ExecutionTime = 1302.92 s  ClockTime = 1303 s

Time = 45

GAMG:  Solving for Ux, Initial residual = 0.823156, Final residual = 0.0483317, No Iterations 1
GAMG:  Solving for Uy, Initial residual = 0.673439, Final residual = 0.034677, No Iterations 1
GAMG:  Solving for e, Initial residual = 0.00301749, Final residual = 0.000137252, No Iterations 1
#0  Foam::error::printStack(Foam::Ostream&) at ??:?
#1  Foam::sigFpe::sigHandler(int) at ??:?
#2  in "/lib/x86_64-linux-gnu/libc.so.6"
#3  double Foam::sumProd<double>(Foam::UList<double> const&, Foam::UList<double> const&) at ??:?
#4  Foam::PBiCG::solve(Foam::Field<double>&, Foam::Field<double> const&, unsigned char) const at ??:?
#5  Foam::GAMGSolver::solveCoarsestLevel(Foam::Field<double>&, Foam::Field<double> const&) const at ??:?
#6  Foam::GAMGSolver::Vcycle(Foam::PtrList<Foam::lduMatrix::smoother> const&, Foam::Field<double>&, Foam::Field<double> const&, Foam::Field<double>&, Foam::Field<double>&, Foam::Field<double>&, Foam::PtrList<Foam::Field<double> >&, Foam::PtrList<Foam::Field<double> >&, unsigned char) const at ??:?
#7  Foam::GAMGSolver::solve(Foam::Field<double>&, Foam::Field<double> const&, unsigned char) const at ??:?
#8  Foam::fvMatrix<double>::solveSegregated(Foam::dictionary const&) at ??:?
#9  Foam::fvMatrix<double>::solve(Foam::dictionary const&) at ??:?
#10  Foam::fvMatrix<double>::solve() at ??:?
#11 
 at ??:?
#12  __libc_start_main in "/lib/x86_64-linux-gnu/libc.so.6"
#13 
 at ??:?
Floating point exception (core dumped)


wyldckat August 27, 2013 17:37

Greetings yash.aesi,

Not much information to work with, but from the output you've shown:
Quote:

Originally Posted by yash.aesi (Post 448280)
Code:

Time = 43

GAMG:  Solving for Ux, Initial residual = 0.19957, Final residual = 0.0130805, No Iterations 1
GAMG:  Solving for Uy, Initial residual = 0.432992, Final residual = 0.030986, No Iterations 1
GAMG:  Solving for e, Initial residual = 0.00150254, Final residual = 0.000111488, No Iterations 1
GAMG:  Solving for p, Initial residual = 0.164657, Final residual = 0.122731, No Iterations 1000
time step continuity errors : sum local = 0.0390443, global = -0.0148295, cumulative = -0.639136
rho max/min : 1 0.382345
GAMG:  Solving for epsilon, Initial residual = 0.000488404, Final residual = 2.53703e-08, No Iterations 1
bounding epsilon, min: -4.33742e-06 max: 857.902 average: 1.75282
GAMG:  Solving for k, Initial residual = 0.128439, Final residual = 0.0118675, No Iterations 1
bounding k, min: 1e-16 max: 5005.77 average: 1.11456
ExecutionTime = 1274.69 s  ClockTime = 1275 s

Time = 44

GAMG:  Solving for Ux, Initial residual = 0.00778892, Final residual = 0.000427613, No Iterations 1
GAMG:  Solving for Uy, Initial residual = 0.262673, Final residual = 0.0165556, No Iterations 1
GAMG:  Solving for e, Initial residual = 1.08917e-05, Final residual = 6.74763e-07, No Iterations 1
GAMG:  Solving for p, Initial residual = 0.198508, Final residual = 0.14558, No Iterations 1000
time step continuity errors : sum local = 0.0445931, global = -0.0148271, cumulative = -0.653963
rho max/min : 1 0.382345
GAMG:  Solving for epsilon, Initial residual = 0.000403613, Final residual = 1.28867e-07, No Iterations 1
bounding epsilon, min: -3.4861e-08 max: 857.872 average: 1.76686
GAMG:  Solving for k, Initial residual = 0.0963778, Final residual = 0.00239046, No Iterations 2
bounding k, min: -2.2769e-08 max: 18855.3 average: 2.6373
ExecutionTime = 1302.92 s  ClockTime = 1303 s

Time = 45

GAMG:  Solving for Ux, Initial residual = 0.823156, Final residual = 0.0483317, No Iterations 1
GAMG:  Solving for Uy, Initial residual = 0.673439, Final residual = 0.034677, No Iterations 1
GAMG:  Solving for e, Initial residual = 0.00301749, Final residual = 0.000137252, No Iterations 1
#0  Foam::error::printStack(Foam::Ostream&) at ??:?
#1  Foam::sigFpe::sigHandler(int) at ??:?
#2  in "/lib/x86_64-linux-gnu/libc.so.6"
#3  double Foam::sumProd<double>(Foam::UList<double> const&, Foam::UList<double> const&) at ??:?
#4  Foam::PBiCG::solve(Foam::Field<double>&, Foam::Field<double> const&, unsigned char) const at ??:?
#5  Foam::GAMGSolver::solveCoarsestLevel(Foam::Field<double>&, Foam::Field<double> const&) const at ??:?
#6  Foam::GAMGSolver::Vcycle(Foam::PtrList<Foam::lduMatrix::smoother> const&, Foam::Field<double>&, Foam::Field<double> const&, Foam::Field<double>&, Foam::Field<double>&, Foam::Field<double>&, Foam::PtrList<Foam::Field<double> >&, Foam::PtrList<Foam::Field<double> >&, unsigned char) const at ??:?
#7  Foam::GAMGSolver::solve(Foam::Field<double>&, Foam::Field<double> const&, unsigned char) const at ??:?
#8  Foam::fvMatrix<double>::solveSegregated(Foam::dictionary const&) at ??:?
#9  Foam::fvMatrix<double>::solve(Foam::dictionary const&) at ??:?
#10  Foam::fvMatrix<double>::solve() at ??:?
#11 
 at ??:?
#12  __libc_start_main in "/lib/x86_64-linux-gnu/libc.so.6"
#13 
 at ??:?
Floating point exception (core dumped)


The key things to take into account:
  1. The pressure field/equation is not being solved properly: after 1000 iterations it still does not have a solution.
  2. The minimum "epsilon" is negative. If this "epsilon" is the one from the turbulence model, then its minimum value should be strictly positive.
  3. SIGFPE was triggered at the sumProd method or function. In other words, it tried to do a summation and product of 2 or more values and it went wrong, probably because one of the values was NaN or Inf.
Everything indicates that you have incorrectly defined the boundary conditions.
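(A tiny sketch of why a single bad value kills the sumProd residual calculation; purely illustrative:)

```python
import math

# One NaN anywhere in a field poisons the whole dot product, so the
# residual becomes NaN and the floating-point exception trap fires.
a = [1.0, 2.0, float("nan")]   # field with one corrupted value
b = [1.0, 1.0, 1.0]
s = sum(x * y for x, y in zip(a, b))
print(math.isnan(s))
```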

Best regards
Bruno

yash.aesi August 28, 2013 03:58

Thanks Bruno,
I will try to check the BCs :)

yash.aesi August 28, 2013 08:00

1 Attachment(s)
Hello Bruno,

I tried checking my BCs. Now the run goes fine, but I think the problem is not solved yet: the pressure issue you mentioned in your last post is still there. The output (without giving an error) is:

Code:

Time = 298

GAMG:  Solving for Ux, Initial residual = 0.00248195, Final residual = 2.77057e-05, No Iterations 1
GAMG:  Solving for Uy, Initial residual = 0.00686025, Final residual = 0.000236565, No Iterations 1
GAMG:  Solving for e, Initial residual = 0.00343494, Final residual = 5.85371e-05, No Iterations 1
GAMG:  Solving for p, Initial residual = 0.20715, Final residual = 0.152599, No Iterations 1000
time step continuity errors : sum local = 0.0140403, global = -0.011032, cumulative = -0.311197
rho max/min : 1 0.352192
GAMG:  Solving for epsilon, Initial residual = 0.000197142, Final residual = 7.86035e-06, No Iterations 1
GAMG:  Solving for k, Initial residual = 0.00474871, Final residual = 0.000184764, No Iterations 1
ExecutionTime = 860.06 s  ClockTime = 860 s

Time = 299

GAMG:  Solving for Ux, Initial residual = 0.00247746, Final residual = 2.75936e-05, No Iterations 1
GAMG:  Solving for Uy, Initial residual = 0.0068694, Final residual = 0.000236605, No Iterations 1
GAMG:  Solving for e, Initial residual = 0.00342726, Final residual = 5.82198e-05, No Iterations 1
GAMG:  Solving for p, Initial residual = 0.209186, Final residual = 0.153251, No Iterations 1000
time step continuity errors : sum local = 0.0140198, global = -0.0110261, cumulative = -0.322224
rho max/min : 1 0.352192
GAMG:  Solving for epsilon, Initial residual = 0.000196554, Final residual = 7.82037e-06, No Iterations 1
GAMG:  Solving for k, Initial residual = 0.00473606, Final residual = 0.000183578, No Iterations 1
ExecutionTime = 889.64 s  ClockTime = 890 s

I am attaching the file with my BCs. Can you please have a look at it and tell me where I am going wrong?

Thanks a lot in advance :)

wyldckat August 31, 2013 15:05

Hi Sonu,

From what I can see, the pressure at the outlet should not be defined as type "calculated": that leaves the outlet under-defined, since you say the velocity there is of type "zeroGradient".

For ideas on what the boundary conditions should be, see the tutorials on OpenFOAM and see the link "Boundary Conditions" on this page: http://foam.sourceforge.net/docs/cpp/index.html
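For example, a common compressible-outlet pairing looks something like this (illustrative; adapt the value and patch name to your case):

```cpp
// 0/p, outlet patch -- fix the static (back) pressure
Outlet
{
    type            fixedValue;
    value           uniform 101325;
}

// 0/U, outlet patch -- zero-gradient outflow that blocks backflow
Outlet
{
    type            inletOutlet;
    inletValue      uniform (0 0 0);
    value           uniform (0 0 0);
}
```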

Best regards,
Bruno

yash.aesi August 31, 2013 15:19

Greetings Bruno,
Thanks for suggesting those links, which are useful for my understanding.
However, I already changed the outlet BC to zeroGradient; the simulation then keeps running but does not converge. Right now I don't have output to show, but I will post it another day.


Regards,
Sonu

yash.aesi August 31, 2013 15:26

Hello,
here is what is shown in the output:

Code:

Time = 1499

GAMG:  Solving for Ux, Initial residual = 0.000370436, Final residual = 1.78681e-06, No Iterations 1
GAMG:  Solving for Uy, Initial residual = 0.000186964, Final residual = 4.67884e-06, No Iterations 1
GAMG:  Solving for e, Initial residual = 0.000322874, Final residual = 4.85704e-06, No Iterations 1
GAMG:  Solving for p, Initial residual = 0.0030172, Final residual = 0.000281682, No Iterations 8
time step continuity errors : sum local = 4.1905e-08, global = -8.58901e-19, cumulative = 2.08053e-17
rho max/min : 0.1 0.1
GAMG:  Solving for epsilon, Initial residual = 1.37661e-06, Final residual = 4.21716e-08, No Iterations 1
GAMG:  Solving for k, Initial residual = 0.000106335, Final residual = 3.04613e-06, No Iterations 1
ExecutionTime = 1238.57 s  ClockTime = 1404 s

Time = 1500

GAMG:  Solving for Ux, Initial residual = 0.000370261, Final residual = 1.78544e-06, No Iterations 1
GAMG:  Solving for Uy, Initial residual = 0.000187208, Final residual = 4.67032e-06, No Iterations 1
GAMG:  Solving for e, Initial residual = 0.000322533, Final residual = 4.85232e-06, No Iterations 1
GAMG:  Solving for p, Initial residual = 0.00300666, Final residual = 0.000292086, No Iterations 8
time step continuity errors : sum local = 4.34695e-08, global = -1.09357e-18, cumulative = 1.97117e-17
rho max/min : 0.1 0.1
GAMG:  Solving for epsilon, Initial residual = 1.37571e-06, Final residual = 4.21285e-08, No Iterations 1
GAMG:  Solving for k, Initial residual = 0.000106287, Final residual = 3.04439e-06, No Iterations 1
ExecutionTime = 1241.65 s  ClockTime = 1407 s.

These are the modified pressure BCs:
Code:

dimensions      [1 -1 -2 0 0 0 0];

internalField   uniform 101325;

boundaryField
{
    fuel_inlet
    {
        type            zeroGradient;

        //type            mixed;
        refValue        uniform 101325;
        refGradient     uniform 0;
        valueFraction   uniform 0.3;
    }

    coflow_inlet
    {
        type            zeroGradient;

        //type            mixed;
        refValue        uniform 101325;
        refGradient     uniform 0;
        valueFraction   uniform 0.3;
    }

    Outlet
    {
        type            zeroGradient;

        //type            mixed;
        //refValue        uniform 101325;
        //refGradient     uniform 0;
        //valueFraction   uniform 1;
        //type            transonicOutletPressure;
        //U               U;
        //phi             phi;
        //gamma           1.4;
        //psi             psi;
        //pInf            uniform 101325;
    }

    Axis
    {
        type            symmetryPlane;
    }

    Upper_wall
    {
        type            zeroGradient;
    }

    frontAndBack
    {
        type            empty;
    }
}

Thanks in advance for your concern.
Regards,
Sonu

immortality August 31, 2013 15:41

The outlet BC for pressure should be fixedValue; assign a back pressure there.

yash.aesi September 1, 2013 06:31

1 Attachment(s)
Greetings Ehsan and Bruno,

I changed my pressure outlet BC from zeroGradient to fixedValue, but now it is giving an error again:

Code:

Time = 19

GAMG:  Solving for Ux, Initial residual = 0.140082, Final residual = 0.0129345, No Iterations 1
GAMG:  Solving for Uy, Initial residual = 0.111632, Final residual = 0.00749058, No Iterations 1
GAMG:  Solving for e, Initial residual = 0.111638, Final residual = 0.00888687, No Iterations 1
GAMG:  Solving for p, Initial residual = 0.0145399, Final residual = 0.00133764, No Iterations 14
time step continuity errors : sum local = 0.000262946, global = 2.02579e-05, cumulative = -3.88151e-05
rho max/min : 1.35725 0.657245
GAMG:  Solving for epsilon, Initial residual = 0.0506423, Final residual = 0.000552153, No Iterations 1
bounding epsilon, min: -0.164911 max: 1392.55 average: 1.11572
GAMG:  Solving for k, Initial residual = 0.0791398, Final residual = 0.000638985, No Iterations 1
ExecutionTime = 112.35 s  ClockTime = 115 s

Time = 20

GAMG:  Solving for Ux, Initial residual = 0.163277, Final residual = 0.00968811, No Iterations 1
GAMG:  Solving for Uy, Initial residual = 0.102144, Final residual = 0.0040134, No Iterations 1
GAMG:  Solving for e, Initial residual = 0.999875, Final residual = 0.0148186, No Iterations 2
#0  Foam::error::printStack(Foam::Ostream&) at ??:?
#1  Foam::sigFpe::sigHandler(int) at ??:?
#2  in "/lib/x86_64-linux-gnu/libc.so.6"
#3  Foam::hePsiThermo<Foam::psiThermo, Foam::pureMixture<Foam::sutherlandTransport<Foam::species::thermo<Foam::hConstThermo<Foam::perfectGas<Foam::specie> >, Foam::sensibleInternalEnergy> > > >::calculate() at ??:?
#4  Foam::hePsiThermo<Foam::psiThermo, Foam::pureMixture<Foam::sutherlandTransport<Foam::species::thermo<Foam::hConstThermo<Foam::perfectGas<Foam::specie> >, Foam::sensibleInternalEnergy> > > >::correct() at ??:?
#5 
 at ??:?
#6  __libc_start_main in "/lib/x86_64-linux-gnu/libc.so.6"
#7 
 at ??:?
Floating point exception (core dumped)

Now I am totally clueless about what is going wrong. Can you suggest something?

Thanks

wyldckat September 1, 2013 07:46

Hi Sonu,

First, please follow the instructions on my second signature link, for when you need to post output or code, namely: How to post code using [CODE]

Second, you still have epsilon values that are outside of the normal zone of operations, namely:
Code:

epsilon, min: -0.164911
Third, I'm guessing that by injecting fluid at 35 m/s and 5 m/s, you have Reynolds numbers that are extremely high. You should first study the characteristics of your geometry and flow, to assess whether you are trying to simulate in the subsonic, transonic or supersonic regime. In addition, you should check whether your mesh is appropriate for the simulation you are trying to perform.
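(A rough sketch of such a check; the length scale below is a made-up assumption, not taken from your case:)

```python
# Quick Reynolds-number estimate, Re = U * L / nu.
# The inlet diameter L is an illustrative assumption.
U = 35.0        # jet speed [m/s]
L = 0.01        # inlet diameter [m] (assumed)
nu = 1.5e-5     # kinematic viscosity of air [m^2/s]
Re = U * L / nu
print(f"Re = {Re:.0f}")   # well beyond the laminar regime for a jet
```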

Last but not least, you should not jump directly to such high flow rates. With OpenFOAM, as well as with anything you don't know well enough, the approach is not to jump directly into the final case set-up, because it's very unlikely that you will succeed in getting a working simulation. And in that situation, you'll have too many possible reasons for the simulation not to work, making it nearly impossible to fix all of the problems in a single step. Therefore, you should gradually evolve from the simplest form of your problem, increasing the level of complexity one detail at a time.

Best regards,
Bruno

guilha September 26, 2013 14:11

Some errors and doubts
 
I was running a compressible LES simulation when it stopped with a similar or the same error, displayed just below.

Code:

Mean and max Courant Numbers = 0.0327681 0.0831581
Time = 0.00128178

diagonal:  Solving for rho, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUx, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUy, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUz, Initial residual = 0, Final residual = 0, No Iterations 0
smoothSolver:  Solving for Ux, Initial residual = 1.9902e-06, Final residual = 2.04476e-14, No Iterations 3
smoothSolver:  Solving for Uy, Initial residual = 3.69969e-06, Final residual = 3.95952e-14, No Iterations 3
smoothSolver:  Solving for Uz, Initial residual = 2.22835e-05, Final residual = 4.25565e-13, No Iterations 3
diagonal:  Solving for rhoE, Initial residual = 0, Final residual = 0, No Iterations 0
smoothSolver:  Solving for e, Initial residual = 9.02514e-06, Final residual = 8.30103e-13, No Iterations 3
ExecutionTime = 40520.5 s  ClockTime = 40657 s

Mean and max Courant Numbers = 0.0327679 0.083161
Time = 0.00128182

diagonal:  Solving for rho, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUx, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUy, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUz, Initial residual = 0, Final residual = 0, No Iterations 0
smoothSolver:  Solving for Ux, Initial residual = 2.00094e-06, Final residual = 2.06337e-14, No Iterations 3
smoothSolver:  Solving for Uy, Initial residual = 3.70838e-06, Final residual = 4.11045e-14, No Iterations 3
smoothSolver:  Solving for Uz, Initial residual = 2.23139e-05, Final residual = 4.33116e-13, No Iterations 3
diagonal:  Solving for rhoE, Initial residual = 0, Final residual = 0, No Iterations 0
[24] #0  Foam::error::printStack(Foam::Ostream&)[56] #0  Foam::error::printStack(Foam::Ostream&)--------------------------------------------------------------------------
An MPI process has executed an operation involving a call to the
"fork()" system call to create a child process.  Open MPI is currently
operating in a condition that could result in memory corruption or
other system errors; your MPI job may hang, crash, or produce silent
data corruption.  The use of fork() (or system() or other calls that
create child processes) is strongly discouraged. 

The process that invoked fork was:

  Local host:          g03 (PID 33539)
  MPI_COMM_WORLD rank: 24

If you are *absolutely sure* that your application will successfully
and correctly survive a call to fork(), you may disable this warning
by setting the mpi_warn_on_fork MCA parameter to 0.
--------------------------------------------------------------------------
[40] #0  Foam::error::printStack(Foam::Ostream&)[8] #0  Foam::error::printStack(Foam::Ostream&) in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[24] #1  Foam::sigFpe::sigHandler(int) in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[8] #1  Foam::sigFpe::sigHandler(int) in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[56] #1  Foam::sigFpe::sigHandler(int) in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[40] #1  Foam::sigFpe::sigHandler(int) in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[24] #2  in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[8] #2  in "/lib/x86_64-linux-gnu/libc.so.6"
[8] #3  Foam::ePsiThermo<Foam::pureMixture<Foam::sutherlandTransport<Foam::specieThermo<Foam::hConstThermo<Foam::perfectGas> > > > >::calculate() in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[56] #2  in "/lib/x86_64-linux-gnu/libc.so.6"
[24] #3  Foam::ePsiThermo<Foam::pureMixture<Foam::sutherlandTransport<Foam::specieThermo<Foam::hConstThermo<Foam::perfectGas> > > > >::calculate() in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so"
[8] #4  Foam::ePsiThermo<Foam::pureMixture<Foam::sutherlandTransport<Foam::specieThermo<Foam::hConstThermo<Foam::perfectGas> > > > >::correct() in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so"
[8] #5  in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so"
[24] #4  Foam::ePsiThermo<Foam::pureMixture<Foam::sutherlandTransport<Foam::specieThermo<Foam::hConstThermo<Foam::perfectGas> > > > >::correct() in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[40] #2  in "/lib/x86_64-linux-gnu/libc.so.6"
[56] #3  Foam::ePsiThermo<Foam::pureMixture<Foam::sutherlandTransport<Foam::specieThermo<Foam::hConstThermo<Foam::perfectGas> > > > >::calculate()
 in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so"
[24] #5  in "/lib/x86_64-linux-gnu/libc.so.6"
[40] #3  Foam::ePsiThermo<Foam::pureMixture<Foam::sutherlandTransport<Foam::specieThermo<Foam::hConstThermo<Foam::perfectGas> > > > >::calculate() in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so"
[56] #4  Foam::ePsiThermo<Foam::pureMixture<Foam::sutherlandTransport<Foam::specieThermo<Foam::hConstThermo<Foam::perfectGas> > > > >::correct()
[8]  in "/opt/openfoam201/platforms/linux64GccDPOpt/bin/rhoCentralFoam"
[8] #6  __libc_start_main in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so"
[56] #5  [24]  in "/opt/openfoam201/platforms/linux64GccDPOpt/bin/rhoCentralFoam"
[24] #6  __libc_start_main in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so"
[40] #4  Foam::ePsiThermo<Foam::pureMixture<Foam::sutherlandTransport<Foam::specieThermo<Foam::hConstThermo<Foam::perfectGas> > > > >::correct() in "/lib/x86_64-linux-gnu/libc.so.6"
[8] #7 
 in "/lib/x86_64-linux-gnu/libc.so.6"
[24] #7 

 in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so"
[40] #5  [56]  in "/opt/openfoam201/platforms/linux64GccDPOpt/bin/rhoCentralFoam"
[56] #6  __libc_start_main[8]  in "/opt/openfoam201/platforms/linux64GccDPOpt/bin/rhoCentralFoam"
[g03:33523] *** Process received signal ***
[g03:33523] Signal: Floating point exception (8)
[g03:33523] Signal code:  (-6)
[g03:33523] Failing at address: 0x3f8000082f3
[g03:33523] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x32480) [0x2b8d0a824480]
[g03:33523] [ 1] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x35) [0x2b8d0a824405]
[g03:33523] [ 2] /lib/x86_64-linux-gnu/libc.so.6(+0x32480) [0x2b8d0a824480]
[g03:33523] [ 3] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so(_ZN4Foam10ePsiThermoINS_11pureMixtureINS_19sutherlandTransportINS_12specieThermoINS_12hConstThermoINS_10perfectGasEEEEEEEEEE9calculateEv+0x3f9) [0x2b8d081471e9]
[g03:33523] [ 4] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so(_ZN4Foam10ePsiThermoINS_11pureMixtureINS_19sutherlandTransportINS_12specieThermoINS_12hConstThermoINS_10perfectGasEEEEEEEEEE7correctEv+0x32) [0x2b8d0814d262]
[g03:33523] [ 5] rhoCentralFoam() [0x4236cb]
[g03:33523] [ 6] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xfd) [0x2b8d0a810ead]
[g03:33523] [ 7] rhoCentralFoam() [0x41c709]
[g03:33523] *** End of error message ***
[24]  in "/opt/openfoam201/platforms/linux64GccDPOpt/bin/rhoCentralFoam"
[g03:33539] *** Process received signal ***
[g03:33539] Signal: Floating point exception (8)
[g03:33539] Signal code:  (-6)
[g03:33539] Failing at address: 0x3f800008303
[g03:33539] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x32480) [0x2af004454480]
[g03:33539] [ 1] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x35) [0x2af004454405]
[g03:33539] [ 2] /lib/x86_64-linux-gnu/libc.so.6(+0x32480) [0x2af004454480]
[g03:33539] [ 3] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so(_ZN4Foam10ePsiThermoINS_11pureMixtureINS_19sutherlandTransportINS_12specieThermoINS_12hConstThermoINS_10perfectGasEEEEEEEEEE9calculateEv+0x3f9) [0x2af001d771e9]
[g03:33539] [ 4] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so(_ZN4Foam10ePsiThermoINS_11pureMixtureINS_19sutherlandTransportINS_12specieThermoINS_12hConstThermoINS_10perfectGasEEEEEEEEEE7correctEv+0x32) [0x2af001d7d262]
[g03:33539] [ 5] rhoCentralFoam() [0x4236cb]
[g03:33539] [ 6] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xfd) [0x2af004440ead]
[g03:33539] [ 7] rhoCentralFoam() [0x41c709]
[g03:33539] *** End of error message ***
 in "/lib/x86_64-linux-gnu/libc.so.6"
[56] #7 

[40]  in "/opt/openfoam201/platforms/linux64GccDPOpt/bin/rhoCentralFoam"
[40] #6  __libc_start_main[56]  in "/opt/openfoam201/platforms/linux64GccDPOpt/bin/rhoCentralFoam"
[g02:10983] *** Process received signal ***
[g02:10983] Signal: Floating point exception (8)
[g02:10983] Signal code:  (-6)
[g02:10983] Failing at address: 0x3f800002ae7
[g02:10983] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x32480) [0x2b7f3585c480]
[g02:10983] [ 1] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x35) [0x2b7f3585c405]
[g02:10983] [ 2] /lib/x86_64-linux-gnu/libc.so.6(+0x32480) [0x2b7f3585c480]
[g02:10983] [ 3] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so(_ZN4Foam10ePsiThermoINS_11pureMixtureINS_19sutherlandTransportINS_12specieThermoINS_12hConstThermoINS_10perfectGasEEEEEEEEEE9calculateEv+0x3f9) [0x2b7f3317f1e9]
[g02:10983] [ 4] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so(_ZN4Foam10ePsiThermoINS_11pureMixtureINS_19sutherlandTransportINS_12specieThermoINS_12hConstThermoINS_10perfectGasEEEEEEEEEE7correctEv+0x32) [0x2b7f33185262]
[g02:10983] [ 5] rhoCentralFoam() [0x4236cb]
[g02:10983] [ 6] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xfd) [0x2b7f35848ead]
[g02:10983] [ 7] rhoCentralFoam() [0x41c709]
[g02:10983] *** End of error message ***
 in "/lib/x86_64-linux-gnu/libc.so.6"
[40] #7 
[40]  in "/opt/openfoam201/platforms/linux64GccDPOpt/bin/rhoCentralFoam"
[g02:10967] *** Process received signal ***
[g02:10967] Signal: Floating point exception (8)
[g02:10967] Signal code:  (-6)
[g02:10967] Failing at address: 0x3f800002ad7
[g02:10967] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x32480) [0x2ae44b8f5480]
[g02:10967] [ 1] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x35) [0x2ae44b8f5405]
[g02:10967] [ 2] /lib/x86_64-linux-gnu/libc.so.6(+0x32480) [0x2ae44b8f5480]
[g02:10967] [ 3] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so(_ZN4Foam10ePsiThermoINS_11pureMixtureINS_19sutherlandTransportINS_12specieThermoINS_12hConstThermoINS_10perfectGasEEEEEEEEEE9calculateEv+0x3f9) [0x2ae4492181e9]
[g02:10967] [ 4] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so(_ZN4Foam10ePsiThermoINS_11pureMixtureINS_19sutherlandTransportINS_12specieThermoINS_12hConstThermoINS_10perfectGasEEEEEEEEEE7correctEv+0x32) [0x2ae44921e262]
[g02:10967] [ 5] rhoCentralFoam() [0x4236cb]
[g02:10967] [ 6] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xfd) [0x2ae44b8e1ead]
[g02:10967] [ 7] rhoCentralFoam() [0x41c709]
[g02:10967] *** End of error message ***
[g03:33510] 3 more processes have sent help message help-mpi-runtime.txt / mpi_init:warn-fork
[g03:33510] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
--------------------------------------------------------------------------
mpirun noticed that process rank 56 with PID 10983 on node g02 exited on signal 8 (Floating point exception).
--------------------------------------------------------------------------

After reading through the thread, I still could not figure out why I get the error. Regarding the SigFpe, the moderator wyldckat said it may be a maths problem. But my simulation stopped at 0.00128182 s while running on 64 processors across 2 machines (each machine with 32 processors). The same simulation running on 32 processors (on a single machine) has already passed the time at which the previous one failed, with no errors so far (0.0018 s). And the same case with a coarser mesh, but the same Courant number, gave me a similar error on 64 processors; after changing from 64 to 32 processors it ran fine until the end.
I also checked my mesh with the checkMesh command; the output was

Code:

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time

Create polyMesh for time = 0

Time = 0

Mesh stats
    points:          5440550
    faces:            15827778
    internal faces:  15337566
    cells:            5194224
    boundary patches: 6
    point zones:      0
    face zones:      0
    cell zones:      0

Overall number of cells of each type:
    hexahedra:    5194224
    prisms:        0
    wedges:        0
    pyramids:      0
    tet wedges:    0
    tetrahedra:    0
    polyhedra:    0

Checking topology...
    Boundary definition OK.
    Cell to face addressing OK.
    Point usage OK.
    Upper triangular ordering OK.
    Face vertices OK.
    Number of regions: 1 (OK).

Checking patch topology for multiply connected surfaces ...
    Patch              Faces    Points  Surface topology                 
    entrada            6840    7150    ok (non-closed singly connected) 
    topo                14784    15425    ok (non-closed singly connected) 
    saida              6840    7150    ok (non-closed singly connected) 
    parede              28896    30125    ok (non-closed singly connected) 
    tras                216426  217622  ok (non-closed singly connected) 
    frente              216426  217622  ok (non-closed singly connected) 

Checking geometry...
    Overall domain bounding box (-0.05 0 -0.06) (0.2 0.22 0.06)
    Mesh (non-empty, non-wedge) directions (1 1 1)
    Mesh (non-empty) directions (1 1 1)
    Boundary openness (-2.66602e-15 1.51299e-14 4.75939e-14) OK.
    Max cell openness = 2.21102e-16 OK.
    Max aspect ratio = 15.8197 OK.
    Minumum face area = 1.00095e-07. Maximum face area = 2.88462e-06.  Face area magnitudes OK.
    Min volume = 5.00473e-10. Max volume = 1.2275e-09.  Total volume = 0.00372.  Cell volumes OK.
    Mesh non-orthogonality Max: 0 average: 0
    Non-orthogonality check OK.
    Face pyramids OK.
    Max skewness = 4.99527e-06 OK.

Mesh OK.

End

It seems ok...

And, as I see from the residuals and Courant number, all of them seem alright.

Summing up: this error happens when I run simulations across two computers. So could there be a problem when parallel computing is performed with processors of two machines? I mean the communication between them?

wyldckat September 27, 2013 18:35

Greetings guilha,

To assess whether the problem is related to the communication between the two machines, try running with 16 cores on each machine, i.e. 32 cores in total. This way you can isolate whether the problem is due to using too many cores, a bad decomposition, or the communication itself.
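For reference, with Open MPI that split can be forced through a hostfile that limits the slots per node; a minimal sketch, assuming the machines are named g01 and g02 as in the logs above:

```shell
# Hostfile restricting the job to 16 slots per machine
# (g01/g02 are the node names from the logs; replace with your own).
cat > hostfile <<'EOF'
g01 slots=16
g02 slots=16
EOF

# The run itself would then be launched as (not executed here):
echo 'mpirun -np 32 --hostfile hostfile rhoCentralFoam -parallel'
```

The case would of course have to be re-decomposed for 32 subdomains first.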

In addition, there are a few other things that can affect this:
  1. If your case has any cyclic patches, then those might be introducing the problem here.
  2. Check what parameter is being used in the file "$WM_PROJECT_DIR/etc/controlDict" for the keyword "commsType".
  3. What file sharing system is being used?
    1. Is it NFS? If it is, which version is it using? NFS 3 or 4?
  4. What kind of connection is being used for communicating between machines? In other words: is it Ethernet or Infiniband?
  5. Run a full check on the mesh:
    Code:

    checkMesh -allGeometry -allTopology
Best regards,
Bruno

guilha September 30, 2013 10:02

Hello Bruno and all other FOAMers, thanks for your help and patience

I did not have time to test the communication between the machines.

Now, following your list:

1 - Yes, I have cyclic patches, my case is almost two dimensional and LES;

2 - The "commsType" is set to nonBlocking. If I change it, do I have to recompile anything?

3 - I do not know; I talked to the administrator and neither of us knows. Probably that is because I do not really understand what file sharing means here, but it seems not to be NFS, as she said we do not use it;

4 - The communication between the machines is Ethernet.

5 - It is ok, and the output is just below (checkMesh with more options).

Code:

/*---------------------------------------------------------------------------*\
| =========                |                                                |
| \\      /  F ield        | OpenFOAM: The Open Source CFD Toolbox          |
|  \\    /  O peration    | Version:  2.0.1                                |
|  \\  /    A nd          | Web:      www.OpenFOAM.com                      |
|    \\/    M anipulation  |                                                |
\*---------------------------------------------------------------------------*/
Build  : 2.0.1-51f1de99a4bc
Exec  : checkMesh -allGeometry -allTopology
Date  : Sep 30 2013
Time  : 11:24:57
Host  : g01
PID    : 47387
Case  : /home/guilha/cavidade_216kx24_les_smagorinsky_galego_euler
nProcs : 1
sigFpe : Enabling floating point exception trapping (FOAM_SIGFPE).
fileModificationChecking : Monitoring run-time modified files using timeStampMaster
allowSystemOperations : Disallowing user-supplied system call operations

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time

Create polyMesh for time = 0

Time = 0

Mesh stats
    points:          5440550
    faces:            15827778
    internal faces:  15337566
    cells:            5194224
    boundary patches: 6
    point zones:      0
    face zones:      0
    cell zones:      0

Overall number of cells of each type:
    hexahedra:    5194224
    prisms:        0
    wedges:        0
    pyramids:      0
    tet wedges:    0
    tetrahedra:    0
    polyhedra:    0

Checking topology...
    Boundary definition OK.
    Cell to face addressing OK.
    Point usage OK.
    Upper triangular ordering OK.
    Face vertices OK.
    Topological cell zip-up check OK.
    Face-face connectivity OK.
    Number of regions: 1 (OK).

Checking patch topology for multiply connected surfaces ...
    Patch              Faces    Points  Surface topology                  Bounding box
    entrada            6840    7150    ok (non-closed singly connected)  (-0.05 0.12 -0.06) (-0.05 0.22 0.06)
    topo                14784    15425    ok (non-closed singly connected)  (-0.05 0.22 -0.06) (0.2 0.22 0.06)
    saida              6840    7150    ok (non-closed singly connected)  (0.2 0.12 -0.06) (0.2 0.22 0.06)
    parede              28896    30125    ok (non-closed singly connected)  (-0.05 0 -0.06) (0.2 0.12 0.06)
    tras                216426  217622  ok (non-closed singly connected)  (-0.05 0 -0.06) (0.2 0.22 -0.06)
    frente              216426  217622  ok (non-closed singly connected)  (-0.05 0 0.06) (0.2 0.22 0.06)

Checking geometry...
    Overall domain bounding box (-0.05 0 -0.06) (0.2 0.22 0.06)
    Mesh (non-empty, non-wedge) directions (1 1 1)
    Mesh (non-empty) directions (1 1 1)
    Boundary openness (-2.66602e-15 1.51299e-14 4.75939e-14) OK.
    Max cell openness = 2.21102e-16 OK.
    Max aspect ratio = 15.8197 OK.
    Minumum face area = 1.00095e-07. Maximum face area = 2.88462e-06.  Face area magnitudes OK.
    Min volume = 5.00473e-10. Max volume = 1.2275e-09.  Total volume = 0.00372.  Cell volumes OK.
    Mesh non-orthogonality Max: 0 average: 0
    Non-orthogonality check OK.
    Face pyramids OK.
    Max skewness = 4.99527e-06 OK.
    Face tets OK.
    Min/max edge length = 0.000316061 0.005 OK.
    All angles in faces OK.
    Face flatness (1 = flat, 0 = butterfly) : average = 1  min = 1
    All face flatness OK.
    Cell determinant (wellposedness) : minimum: 0.031003 average: 0.371808
    Cell determinant check OK.
    Concave cell check OK.

Mesh OK.

End

After talking to the administrator, I got some other views of the problem. It could be in the decomposition: she told me that if I decompose the case with a script (in order to use 64 processors of two machines), maybe the machines or the MPI would be prepared and it would run without problems. While browsing thread titles I saw this one:
http://www.cfd-online.com/Forums/ope...ple-cores.html I do not need to do the decomposition in parallel, but it made me wonder: could the problem be in my decomposition? I have been doing it by simply typing decomposePar -force; if I want to decompose for more processors, do I have to change anything in the command? I ask because I thought I could simply write the number of processors in a script, although I am not using scripts for the decomposition.
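For the record, re-decomposing for a different processor count only means editing numberOfSubdomains in system/decomposeParDict before running decomposePar -force again. A minimal sketch (the dictionary below is a stub created just for the demonstration, not a real case file):

```shell
# Stand-in decomposeParDict so the edit can be shown; in a real case the
# file already exists with the full FoamFile header and method settings.
mkdir -p system
cat > system/decomposeParDict <<'EOF'
numberOfSubdomains 64;
method             scotch;
EOF

# Switch from 64 to 32 subdomains...
sed -i 's/^numberOfSubdomains .*/numberOfSubdomains 32;/' system/decomposeParDict

# ...then re-run the decomposition (requires an OpenFOAM environment):
echo 'decomposePar -force'
```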

And to finish: of the two cases I was running, now on SINGLE MACHINES, one stopped with almost the same errors (almost, because I compared the messages and they differ only slightly). Remember that the two cases are the same, one just has a more refined mesh, and it is the refined one that failed. The simulation time at which the error appeared is roughly the time a particle travelling at U0 needs to cross the whole domain twice. The other case, which I am running to collect more samples for statistics, has had no problems so far. So I am completely puzzled: nothing looks unphysical, I mean the residuals and Courant number seem fine (and the coarser mesh gave great results). The new error message is this one:

Code:

Mean and max Courant Numbers = 0.0354678 0.0909748
Time = 0.00225237

diagonal:  Solving for rho, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUx, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUy, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUz, Initial residual = 0, Final residual = 0, No Iterations 0
smoothSolver:  Solving for Ux, Initial residual = 1.52504e-06, Final residual = 2.11393e-14, No Iterations 3
smoothSolver:  Solving for Uy, Initial residual = 2.81461e-06, Final residual = 2.76122e-14, No Iterations 3
smoothSolver:  Solving for Uz, Initial residual = 1.2395e-05, Final residual = 1.6969e-13, No Iterations 3
diagonal:  Solving for rhoE, Initial residual = 0, Final residual = 0, No Iterations 0
smoothSolver:  Solving for e, Initial residual = 5.03938e-06, Final residual = 4.94582e-13, No Iterations 3
ExecutionTime = 214443 s  ClockTime = 216156 s

Mean and max Courant Numbers = 0.0354677 0.0909748
Time = 0.00225242

diagonal:  Solving for rho, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUx, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUy, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUz, Initial residual = 0, Final residual = 0, No Iterations 0
smoothSolver:  Solving for Ux, Initial residual = 1.52931e-06, Final residual = 2.19558e-14, No Iterations 3
smoothSolver:  Solving for Uy, Initial residual = 2.82524e-06, Final residual = 2.91866e-14, No Iterations 3
smoothSolver:  Solving for Uz, Initial residual = 1.24457e-05, Final residual = 1.75215e-13, No Iterations 3
diagonal:  Solving for rhoE, Initial residual = 0, Final residual = 0, No Iterations 0
[16] #0  Foam::error::printStack(Foam::Ostream&)[0] #0  Foam::error::printStack(Foam::Ostream&)--------------------------------------------------------------------------
An MPI process has executed an operation involving a call to the
"fork()" system call to create a child process.  Open MPI is currently
operating in a condition that could result in memory corruption or
other system errors; your MPI job may hang, crash, or produce silent
data corruption.  The use of fork() (or system() or other calls that
create child processes) is strongly discouraged. 

The process that invoked fork was:

  Local host:          g01 (PID 38756)
  MPI_COMM_WORLD rank: 16

If you are *absolutely sure* that your application will successfully
and correctly survive a call to fork(), you may disable this warning
by setting the mpi_warn_on_fork MCA parameter to 0.
--------------------------------------------------------------------------
 in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[16] #1  Foam::sigFpe::sigHandler(int) in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[0] #1  Foam::sigFpe::sigHandler(int) in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[16] #2  in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[0] #2  in "/lib/x86_64-linux-gnu/libc.so.6"
[16] #3  Foam::ePsiThermo<Foam::pureMixture<Foam::sutherlandTransport<Foam::specieThermo<Foam::hConstThermo<Foam::perfectGas> > > > >::calculate() in "/lib/x86_64-linux-gnu/libc.so.6"
[0] #3  Foam::ePsiThermo<Foam::pureMixture<Foam::sutherlandTransport<Foam::specieThermo<Foam::hConstThermo<Foam::perfectGas> > > > >::calculate() in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so"
[16] #4  Foam::ePsiThermo<Foam::pureMixture<Foam::sutherlandTransport<Foam::specieThermo<Foam::hConstThermo<Foam::perfectGas> > > > >::correct() in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so"
[0] #4  Foam::ePsiThermo<Foam::pureMixture<Foam::sutherlandTransport<Foam::specieThermo<Foam::hConstThermo<Foam::perfectGas> > > > >::correct() in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so"
[0] #5  in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so"
[16] #5 

[16]  in "/opt/openfoam201/platforms/linux64GccDPOpt/bin/rhoCentralFoam"
[16] #6  __libc_start_main[0]  in "/opt/openfoam201/platforms/linux64GccDPOpt/bin/rhoCentralFoam"
[0] #6  __libc_start_main in "/lib/x86_64-linux-gnu/libc.so.6"
[16] #7  in "/lib/x86_64-linux-gnu/libc.so.6"
[0] #7 

[0]  in "/opt/openfoam201/platforms/linux64GccDPOpt/bin/rhoCentralFoam"
[g01:38740] *** Process received signal ***
[g01:38740] Signal: Floating point exception (8)
[g01:38740] Signal code:  (-6)
[g01:38740] Failing at address: 0x3f800009754
[g01:38740] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x32480) [0x2b4c82bf3480]
[g01:38740] [ 1] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x35) [0x2b4c82bf3405]
[g01:38740] [ 2] /lib/x86_64-linux-gnu/libc.so.6(+0x32480) [0x2b4c82bf3480]
[g01:38740] [ 3] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so(_ZN4Foam10ePsiThermoINS_11pureMixtureINS_19sutherlandTransportINS_12specieThermoINS_12hConstThermoINS_10perfectGasEEEEEEEEEE9calculateEv+0x3f9) [0x2b4c805161e9]
[g01:38740] [ 4] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so(_ZN4Foam10ePsiThermoINS_11pureMixtureINS_19sutherlandTransportINS_12specieThermoINS_12hConstThermoINS_10perfectGasEEEEEEEEEE7correctEv+0x32) [0x2b4c8051c262]
[g01:38740] [ 5] rhoCentralFoam() [0x4236cb]
[g01:38740] [ 6] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xfd) [0x2b4c82bdfead]
[g01:38740] [ 7] rhoCentralFoam() [0x41c709]
[g01:38740] *** End of error message ***
[16]  in "/opt/openfoam201/platforms/linux64GccDPOpt/bin/rhoCentralFoam"
[g01:38756] *** Process received signal ***
[g01:38756] Signal: Floating point exception (8)
[g01:38756] Signal code:  (-6)
[g01:38756] Failing at address: 0x3f800009764
[g01:38756] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x32480) [0x2add71707480]
[g01:38756] [ 1] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x35) [0x2add71707405]
[g01:38756] [ 2] /lib/x86_64-linux-gnu/libc.so.6(+0x32480) [0x2add71707480]
[g01:38756] [ 3] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so(_ZN4Foam10ePsiThermoINS_11pureMixtureINS_19sutherlandTransportINS_12specieThermoINS_12hConstThermoINS_10perfectGasEEEEEEEEEE9calculateEv+0x3f9) [0x2add6f02a1e9]
[g01:38756] [ 4] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so(_ZN4Foam10ePsiThermoINS_11pureMixtureINS_19sutherlandTransportINS_12specieThermoINS_12hConstThermoINS_10perfectGasEEEEEEEEEE7correctEv+0x32) [0x2add6f030262]
[g01:38756] [ 5] rhoCentralFoam() [0x4236cb]
[g01:38756] [ 6] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xfd) [0x2add716f3ead]
[g01:38756] [ 7] rhoCentralFoam() [0x41c709]
[g01:38756] *** End of error message ***
[g01:38739] 1 more process has sent help message help-mpi-runtime.txt / mpi_init:warn-fork
[g01:38739] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
--------------------------------------------------------------------------
mpirun noticed that process rank 16 with PID 38756 on node g01 exited on signal 8 (Floating point exception).
--------------------------------------------------------------------------


wyldckat October 6, 2013 11:58

Hi guilha,

Quote:

Originally Posted by guilha (Post 454283)
I did not have time to test the communication between the machines.

Have you managed to do the tests I suggested?

Quote:

Originally Posted by guilha (Post 454283)
1 - Yes, I have cyclic patches, my case is almost two dimensional and LES;

OK, special caution is necessary for cyclic patches, as explained here: Cyclic patches and parallel postprocessing problems #8

Either your case is 2D or it isn't. In OpenFOAM, "2D" is when we use front and back patches defined as "empty" and there is only one cell thickness in the Z direction.

As for LES in 2D... I vaguely remember that it's not exactly a good idea... because the turbulence is actually 3D. But then again, I vaguely remember that OpenFOAM has got one or two tutorials working with LES in 2D.

Quote:

Originally Posted by guilha (Post 454283)
2 - The "commsType" is set to nonBlocking. If I change it, do I have to recompile anything?

No need to rebuild. This controls how the parallel processes will handle the waiting period between data exchanges. And that's the default option, as shown here: https://github.com/OpenFOAM/OpenFOAM...ontrolDict#L58 - so there shouldn't be any problems here.
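If you ever do want to experiment with it anyway, the global value can be overridden per user without touching the installation, since OpenFOAM reads a per-user controlDict after the installation's etc/controlDict. A sketch, assuming the version directory matches your install (here 2.0.1):

```shell
# Per-user override of the global controlDict; OpenFOAM picks up
# ~/.OpenFOAM/<version>/controlDict in addition to $WM_PROJECT_DIR/etc/controlDict.
mkdir -p "$HOME/.OpenFOAM/2.0.1"
cat > "$HOME/.OpenFOAM/2.0.1/controlDict" <<'EOF'
OptimisationSwitches
{
    commsType       blocking;   // default is nonBlocking
}
EOF
```

No recompilation is needed; the switch is read at solver start-up.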

Quote:

Originally Posted by guilha (Post 454283)
3 - I do not know; I talked to the administrator and neither of us knows. Probably that is because I do not really understand what file sharing means here, but it seems not to be NFS, as she said we do not use it;

So, how are you able to share the files between the two machines while the simulation is running? I ask because OpenFOAM does not do this automatically, unless you use the strategy described here: Running OpenFOAM in parallel with different locations for each process


Quote:

Originally Posted by guilha (Post 454283)
4 - The communication between the machines is Ethernet.

Since it's only two machines, I guess that's enough.

Quote:

Originally Posted by guilha (Post 454283)
5 - It is ok, and the output is just below (checkMesh with more options).

Yes, the mesh seems to be perfectly fine.

Quote:

Originally Posted by guilha (Post 454283)
After talking to the administrator, I got some other views of the problem. It could be in the decomposition: she told me that if I decompose the case with a script (in order to use 64 processors of two machines), maybe the machines or the MPI would be prepared and it would run without problems. While browsing thread titles I saw this one:
http://www.cfd-online.com/Forums/ope...ple-cores.html I do not need to do the decomposition in parallel, but it made me wonder: could the problem be in my decomposition? I have been doing it by simply typing decomposePar -force; if I want to decompose for more processors, do I have to change anything in the command? I ask because I thought I could simply write the number of processors in a script, although I am not using scripts for the decomposition.

I don't think that's the problem. The only few possible problems could be:
  • Problems with the cyclic patches, as indicated in the first point.
  • Comparing scotch vs simple vs hierarchical might be a good exercise, to make sure whether the problem concerns the decomposition method itself.
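Switching methods is just a matter of changing the method keyword; a sketch of the relevant part of system/decomposeParDict (the subdomain counts below are placeholders for a 32-way decomposition, written to a scratch file here):

```shell
# Example decomposeParDict fragment showing the method alternatives;
# hierarchicalCoeffs is only read when method is hierarchical.
cat > decomposeParDict.example <<'EOF'
numberOfSubdomains 32;

method          scotch;        // alternatives: simple, hierarchical

hierarchicalCoeffs
{
    n           (4 4 2);       // must multiply to numberOfSubdomains
    delta       0.001;
    order       xyz;
}
EOF
```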

Quote:

Originally Posted by guilha (Post 454283)
And to finish: of the two cases I was running, now on SINGLE MACHINES, one stopped with almost the same errors (almost, because I compared the messages and they differ only slightly). Remember that the two cases are the same, one just has a more refined mesh, and it is the refined one that failed. The simulation time at which the error appeared is roughly the time a particle travelling at U0 needs to cross the whole domain twice. The other case, which I am running to collect more samples for statistics, has had no problems so far. So I am completely puzzled: nothing looks unphysical, I mean the residuals and Courant number seem fine (and the coarser mesh gave great results). The new error message is this one:

Code:

[...]

[16] #0  Foam::error::printStack(Foam::Ostream&)[0] #0  Foam::error::printStack(Foam::Ostream&)--------------------------------------------------------------------------

[...]

[16] #1  Foam::sigFpe::sigHandler(int) in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[0] #1  Foam::sigFpe::sigHandler(int) in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[16] #2  in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[0] #2  in "/lib/x86_64-linux-gnu/libc.so.6"
[16] #3  Foam::ePsiThermo<Foam::pureMixture<Foam::sutherlandTransport<Foam::specieThermo<Foam::hConstThermo<Foam::perfectGas> > > > >::calculate() in "/lib/x86_64-linux-gnu/libc.so.6"
[0] #3  Foam::ePsiThermo<Foam::pureMixture<Foam::sutherlandTransport<Foam::specieThermo<Foam::hConstThermo<Foam::perfectGas> > > > >::calculate() in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so"
[16] #4  Foam::ePsiThermo<Foam::pureMixture<Foam::sutherlandTransport<Foam::specieThermo<Foam::hConstThermo<Foam::perfectGas> > > > >::correct() in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so"
[0] #4  Foam::ePsiThermo<Foam::pureMixture<Foam::sutherlandTransport<Foam::specieThermo<Foam::hConstThermo<Foam::perfectGas> > > > >::correct() in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so"
[0] #5  in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so"
[16] #5 

[16]  in "/opt/openfoam201/platforms/linux64GccDPOpt/bin/rhoCentralFoam"
[16] #6  __libc_start_main[0]  in "/opt/openfoam201/platforms/linux64GccDPOpt/bin/rhoCentralFoam"
[0] #6  __libc_start_main in "/lib/x86_64-linux-gnu/libc.so.6"
[16] #7  in "/lib/x86_64-linux-gnu/libc.so.6"
[0] #7 

[0]  in "/opt/openfoam201/platforms/linux64GccDPOpt/bin/rhoCentralFoam"
[g01:38740] *** Process received signal ***
[g01:38740] Signal: Floating point exception (8)
[g01:38740] Signal code:  (-6)

[...]


The last occurrence of "Signal: Floating point exception" relates to the first few error messages that say "Foam::sigFpe::sigHandler(int)". This means that there was a division by zero somewhere. More specifically, it happened when:
Code:

Foam::ePsiThermo<Foam::pureMixture<Foam::sutherlandTransport<Foam::specieThermo<Foam::hConstThermo<Foam::perfectGas>  > > > >::calculate()
was called, namely this method: https://github.com/OpenFOAM/OpenFOAM...siThermo.C#L33
The problem here is that there is no clear indication of which operation might have produced the division by zero.
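If I remember correctly, one way to get that indication is to use a Debug build of OpenFOAM, whose stack traces include file and line information. A rough sketch, assuming a source installation (variable names and paths may differ between versions, so treat this as a hypothetical recipe):

```shell
# Hypothetical sketch: rebuild OpenFOAM with debug symbols, so that
# Foam::error::printStack can report file:line for each stack frame.
export WM_COMPILE_OPTION=Debug   # instead of the default Opt
cd $WM_PROJECT_DIR
./Allwmake                       # full rebuild; this takes a long time
```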


I would write out frequent time snapshots near the crash time and then visually inspect where the fields reach unusually high or low values.
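For the record, a minimal sketch of the controlDict entries that would give such frequent snapshots; the values here are placeholders to adjust to the actual crash time:

```
// system/controlDict (fragment)
writeControl    timeStep;   // write based on the time-step counter
writeInterval   1;          // write every single time step
purgeWrite      10;         // keep only the 10 most recent time directories
```

purgeWrite keeps disk usage bounded while still leaving the last few snapshots before the crash.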

Best regards,
Bruno

guilha October 14, 2013 12:37

Hello Bruno, thank you for your analysis. I have been very busy lately.

I did not do the test between the machines, because I cannot use x processors of one machine and y processors of the other. However, I did run a test (the same case, only with a bigger time step) on a single processor, and it failed here:
Code:

Mean and max Courant Numbers = 0.200668 0.571628
Time = 2.12e-05

diagonal:  Solving for rho, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUx, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUy, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUz, Initial residual = 0, Final residual = 0, No Iterations 0
smoothSolver:  Solving for Ux, Initial residual = 0.000425058, Final residual = 9.90843e-09, No Iterations 3
smoothSolver:  Solving for Uy, Initial residual = 0.000855807, Final residual = 9.09303e-09, No Iterations 3
smoothSolver:  Solving for Uz, Initial residual = 0.00843877, Final residual = 1.09127e-11, No Iterations 6
diagonal:  Solving for rhoE, Initial residual = 0, Final residual = 0, No Iterations 0
smoothSolver:  Solving for e, Initial residual = 0.00155423, Final residual = 9.73893e-12, No Iterations 6
ExecutionTime = 1511.9 s  ClockTime = 1513 s

Mean and max Courant Numbers = 0.200544 0.571627
Time = 2.16e-05

diagonal:  Solving for rho, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUx, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUy, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUz, Initial residual = 0, Final residual = 0, No Iterations 0
smoothSolver:  Solving for Ux, Initial residual = 0.000381028, Final residual = 9.74391e-09, No Iterations 3
smoothSolver:  Solving for Uy, Initial residual = 0.000849587, Final residual = 8.09615e-09, No Iterations 3
smoothSolver:  Solving for Uz, Initial residual = 0.00710211, Final residual = 8.182e-12, No Iterations 6
diagonal:  Solving for rhoE, Initial residual = 0, Final residual = 0, No Iterations 0
smoothSolver:  Solving for e, Initial residual = 0.00131698, Final residual = 1.03579e-11, No Iterations 6
ExecutionTime = 1547.3 s  ClockTime = 1549 s

Mean and max Courant Numbers = 0.200412 0.593619
Time = 2.2e-05

diagonal:  Solving for rho, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUx, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUy, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUz, Initial residual = 0, Final residual = 0, No Iterations 0
smoothSolver:  Solving for Ux, Initial residual = 0.000346998, Final residual = 8.89842e-09, No Iterations 3
smoothSolver:  Solving for Uy, Initial residual = 0.000814717, Final residual = 7.78361e-09, No Iterations 3
smoothSolver:  Solving for Uz, Initial residual = 0.00670942, Final residual = 6.52104e-12, No Iterations 6
diagonal:  Solving for rhoE, Initial residual = 0, Final residual = 0, No Iterations 0
smoothSolver:  Solving for e, Initial residual = 0.00145455, Final residual = 9.90931e-08, No Iterations 3
ExecutionTime = 1567.72 s  ClockTime = 1569 s

Mean and max Courant Numbers = 0.200281 0.711964
Time = 2.24e-05

diagonal:  Solving for rho, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUx, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUy, Initial residual = 0, Final residual = 0, No Iterations 0
diagonal:  Solving for rhoUz, Initial residual = 0, Final residual = 0, No Iterations 0
smoothSolver:  Solving for Ux, Initial residual = 0.000390677, Final residual = 8.75177e-09, No Iterations 3
smoothSolver:  Solving for Uy, Initial residual = 0.000795014, Final residual = 7.67302e-09, No Iterations 3
smoothSolver:  Solving for Uz, Initial residual = 0.0075196, Final residual = 8.23954e-12, No Iterations 6
diagonal:  Solving for rhoE, Initial residual = 0, Final residual = 0, No Iterations 0
#0  Foam::error::printStack(Foam::Ostream&) in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
#1  Foam::sigFpe::sigHandler(int) in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
#2  in "/lib/libc.so.6"
#3  Foam::ePsiThermo<Foam::pureMixture<Foam::sutherlandTransport<Foam::specieThermo<Foam::hConstThermo<Foam::perfectGas> > > > >::calculate() in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so"
#4  Foam::ePsiThermo<Foam::pureMixture<Foam::sutherlandTransport<Foam::specieThermo<Foam::hConstThermo<Foam::perfectGas> > > > >::correct() in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libbasicThermophysicalModels.so"
#5 
 in "/opt/openfoam201/platforms/linux64GccDPOpt/bin/rhoCentralFoam"
#6  __libc_start_main in "/lib/libc.so.6"
#7 
 in "/opt/openfoam201/platforms/linux64GccDPOpt/bin/rhoCentralFoam"
Floating point exception

I included the previous time steps on purpose, because the Courant number was growing too much. Something tells me this error is due to the time step. After running this case with a smaller time step there are no problems, at least up to where I stopped (3e-05 s).

About my simulations: they are 3D LES, of course. What I meant by almost 2D was the type of flow, which is quasi-2D.

For the cases where my time step is smaller than the one I posted in the code, running in parallel, the errors are random. And I cannot keep the results stored, since that leads to a lot of memory usage.
But for the last test I did (with a relatively big time step, on a single processor), the error is not random; I ran the case twice and confirmed it. From the post-processing, my velocity grows at a sharp corner, and this is why the Courant number increases, but I think it is consistent with the perfect-fluid solution. In this simulation, however, I think it becomes unstable due to the Courant increase; that is, with a smaller time step it might stay within the stability limit.

I also looked at the function you told me about, ePsiThermo, where there is an alpha. For certain boundary conditions (alphaSGS, muSGS and muTilda) I used standard ones which I saw in an OpenFOAM tutorial, as I could not find any values for these variables in the literature, and they are essentially 0.

Regarding the cyclic patch, following this link http://www.cfd-online.com/Forums/ope...tml#post241413, I think I have this on the computer:
Code:

//- Keep owner and neighbour on same processor for faces in patches:
//  (makes sense only for cyclic patches)
//preservePatches (cyclic_half0 cyclic_half1);

And I use the scotch method. Are you suggesting that I should uncomment the last line?

wyldckat October 14, 2013 17:02

Hi guilha,

Quote:

Originally Posted by guilha (Post 456884)
I included the previous time steps on purpose, because the Courant number was growing too much. Something tells me this error is due to the time step. After running this case with a smaller time step there are no problems, at least up to where I stopped (3e-05 s).

The rule of thumb is to allow a maximum Courant number of no more than 0.5; if it still diverges, try lower values.
Higher values should only be used if you know what you are doing ;)
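If it helps, that limit can also be enforced automatically; a minimal controlDict sketch, assuming your rhoCentralFoam build honours the adjustable time-step entries (the values are placeholders):

```
// system/controlDict (fragment)
adjustTimeStep  yes;     // let the solver recompute deltaT each step
maxCo           0.5;     // the rule-of-thumb ceiling mentioned above
maxDeltaT       1e-5;    // hard upper bound for deltaT
```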

Quote:

Originally Posted by guilha (Post 456884)
For the cases where my time step is smaller than the one I posted in the code, running in parallel, the errors are random. And I cannot keep the results stored, since that leads to a lot of memory usage.

Honestly, I suspect that this is one of those bugs that have already been fixed in the more recent versions of OpenFOAM. If you were able to create a small test case that replicates the errors you are observing, I could test it with the more recent versions of OpenFOAM.

Quote:

Originally Posted by guilha (Post 456884)
But for the last test I did (with a relatively big time step, on a single processor), the error is not random; I ran the case twice and confirmed it. From the post-processing, my velocity grows at a sharp corner, and this is why the Courant number increases, but I think it is consistent with the perfect-fluid solution. In this simulation, however, I think it becomes unstable due to the Courant increase; that is, with a smaller time step it might stay within the stability limit.

Like I wrote above: the maximum Courant number that should be allowed is 0.5 or lower, depending on your simulation.

Quote:

Originally Posted by guilha (Post 456884)
I also looked at the function you told me about, ePsiThermo, where there is an alpha. For certain boundary conditions (alphaSGS, muSGS and muTilda) I used standard ones which I saw in an OpenFOAM tutorial, as I could not find any values for these variables in the literature, and they are essentially 0.

Yeah, about that... don't trust the name "alpha". It tends to appear in many shapes and forms and many of them aren't even related. If I'm not mistaken, there are at least 3 kinds of alpha in OpenFOAM: as turbulence, as heat and as phase (multiphase).

Quote:

Originally Posted by guilha (Post 456884)
Regarding the cyclic patch, following this link http://www.cfd-online.com/Forums/ope...tml#post241413, I think I have this on the computer:
Code:

//- Keep owner and neighbour on same processor for faces in patches:
//  (makes sense only for cyclic patches)
//preservePatches (cyclic_half0 cyclic_half1);

And I use the scotch method. Are you suggesting that I should uncomment the last line?

Yes, uncomment that before running decomposePar. And don't forget to replace the names therein with yours, for example:
Code:

preservePatches (
    up_patch
    downPatch
    patch_left
    thenRight
);

Best regards,
Bruno

guilha February 15, 2014 16:32

Good evening,

I am back in this thread because I have recently had a weird error. When I run my case on 16 or 24 processors (the cases tested), no problems appear; however, with more processors, like 30, 32 or 64 (the cases tested), this error appears:

Code:

/*---------------------------------------------------------------------------*\
| =========                |                                                |
| \\      /  F ield        | OpenFOAM: The Open Source CFD Toolbox          |
|  \\    /  O peration    | Version:  2.0.1                                |
|  \\  /    A nd          | Web:      www.OpenFOAM.com                      |
|    \\/    M anipulation  |                                                |
\*---------------------------------------------------------------------------*/
Build  : 2.0.1-51f1de99a4bc
Exec  : rhoCentralFoam -parallel
Date  : Feb 15 2014
Time  : 21:23:14
Host  : g01
PID    : 35702
Case  : /home/guilha/cavidade_LES_130kx24_smagorinsky_v1_perfil_power_law_v4
nProcs : 32
Slaves :
31
(
g01.35703
g01.35704
g01.35705
g01.35706
g01.35707
g01.35708
g01.35709
g01.35710
g01.35711
g01.35712
g01.35713
g01.35714
g01.35715
g01.35716
g01.35717
g01.35718
g01.35719
g01.35720
g01.35721
g01.35722
g01.35723
g01.35724
g01.35725
g01.35726
g01.35727
g01.35728
g01.35729
g01.35730
g01.35731
g01.35732
g01.35733
)

Pstream initialized with:
    floatTransfer    : 0
    nProcsSimpleSum  : 0
    commsType        : nonBlocking
sigFpe : Enabling floating point exception trapping (FOAM_SIGFPE).
fileModificationChecking : Monitoring run-time modified files using timeStampMaster
allowSystemOperations : Disallowing user-supplied system call operations

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time

Create mesh for time = 0

[g01:35706] *** Process received signal ***
[g01:35706] Signal: Segmentation fault (11)
[g01:35706] Signal code: Address not mapped (1)
[g01:35706] Failing at address: 0xfffffffe03990ad8
[g01:35706] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x32480) [0x2ad84978a480]
[g01:35706] [ 1] /lib/x86_64-linux-gnu/libc.so.6(+0x728fa) [0x2ad8497ca8fa]
[g01:35706] [ 2] /lib/x86_64-linux-gnu/libc.so.6(+0x74d64) [0x2ad8497ccd64]
[g01:35706] [ 3] /lib/x86_64-linux-gnu/libc.so.6(__libc_malloc+0x70) [0x2ad8497cf420]
[g01:35706] [ 4] /usr/lib/x86_64-linux-gnu/libstdc++.so.6(_Znwm+0x1d) [0x2ad84907268d]
[g01:35706] [ 5] /usr/lib/x86_64-linux-gnu/libstdc++.so.6(_Znam+0x9) [0x2ad8490727a9]
[g01:35706] [ 6] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so(_ZN4Foam5error10printStackERNS_7OstreamE+0x128b) [0x2ad848ad81db]
[g01:35706] [ 7] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so(_ZN4Foam7sigSegv10sigHandlerEi+0x30) [0x2ad848acaec0]
[g01:35706] [ 8] /lib/x86_64-linux-gnu/libc.so.6(+0x32480) [0x2ad84978a480]
[g01:35706] [ 9] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so(_ZN4Foam18processorPolyPatch10updateMeshERNS_14PstreamBuffersE+0x2da) [0x2ad84894af3a]
[g01:35706] [10] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so(_ZN4Foam16polyBoundaryMesh10updateMeshEv+0x2b1) [0x2ad848951fc1]
[g01:35706] [11] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so(_ZN4Foam8polyMeshC2ERKNS_8IOobjectE+0x10ea) [0x2ad8489a316a]
[g01:35706] [12] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libfiniteVolume.so(_ZN4Foam6fvMeshC1ERKNS_8IOobjectE+0x19) [0x2ad8462d25f9]
[g01:35706] [13] rhoCentralFoam() [0x41f624]
[g01:35706] [14] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xfd) [0x2ad849776ead]
[g01:35706] [15] rhoCentralFoam() [0x41c709]
[5] #0  Foam::error::printStack(Foam::Ostream&)[g01:35706] *** End of error message ***
--------------------------------------------------------------------------
An MPI process has executed an operation involving a call to the
"fork()" system call to create a child process.  Open MPI is currently
operating in a condition that could result in memory corruption or
other system errors; your MPI job may hang, crash, or produce silent
data corruption.  The use of fork() (or system() or other calls that
create child processes) is strongly discouraged. 

The process that invoked fork was:

  Local host:          g01 (PID 35707)
  MPI_COMM_WORLD rank: 5

If you are *absolutely sure* that your application will successfully
and correctly survive a call to fork(), you may disable this warning
by setting the mpi_warn_on_fork MCA parameter to 0.
--------------------------------------------------------------------------
 in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[5] #1  Foam::sigSegv::sigHandler(int) in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[5] #2  in "/lib/x86_64-linux-gnu/libc.so.6"
[5] #3  Foam::processorPolyPatch::updateMesh(Foam::PstreamBuffers&) in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[5] #4  Foam::polyBoundaryMesh::updateMesh()--------------------------------------------------------------------------
mpirun noticed that process rank 4 with PID 35706 on node g01 exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------

I ran the checkMesh utility with the options -allGeometry and -allTopology and everything is fine.

wyldckat February 15, 2014 16:50

Hi guilha,

It could be a problem in the installation of OpenFOAM on one of the machines. Try running checkMesh in parallel, the same way you run rhoCentralFoam.
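For reference, a sketch of the command sequence, assuming a decomposeParDict set up for 32 subdomains; checkMesh -parallel needs the processor* directories, so the case must be decomposed first:

```shell
# From the case directory (hypothetical 32-way decomposition):
decomposePar                       # reads system/decomposeParDict
mpirun -np 32 checkMesh -parallel  # same launch line as for rhoCentralFoam
```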

And a few questions (I don't remember the details):
  1. How many cells does the mesh have?
  2. Does it have any special patches/boundary conditions? Such as cyclics, mapped or wedges?
  3. How many machines are being used for each processor distribution?
  4. What does decomposePar tell you for the 32 and 64 decompositions?
Best regards,
Bruno

guilha February 15, 2014 17:37

Bruno thanks a lot for your replies and all the support.

Running checkMesh in parallel does give an error, yes:

Code:

/*---------------------------------------------------------------------------*\
| =========                |                                                |
| \\      /  F ield        | OpenFOAM: The Open Source CFD Toolbox          |
|  \\    /  O peration    | Version:  2.0.1                                |
|  \\  /    A nd          | Web:      www.OpenFOAM.com                      |
|    \\/    M anipulation  |                                                |
\*---------------------------------------------------------------------------*/
Build  : 2.0.1-51f1de99a4bc
Exec  : checkMesh -parallel
Date  : Feb 15 2014
Time  : 22:29:05
Host  : g04
PID    : 3313
Case  : /home/guilha/testes
nProcs : 32
Slaves :
31
(
g04.3314
g04.3315
g04.3316
g04.3317
g04.3318
g04.3319
g04.3320
g04.3321
g04.3322
g04.3323
g04.3324
g04.3325
g04.3326
g04.3327
g04.3328
g04.3329
g04.3330
g04.3331
g04.3332
g04.3333
g04.3334
g04.3335
g04.3336
g04.3337
g04.3338
g04.3339
g04.3340
g04.3341
g04.3342
g04.3343
g04.3344
)

Pstream initialized with:
    floatTransfer    : 0
    nProcsSimpleSum  : 0
    commsType        : nonBlocking
sigFpe : Enabling floating point exception trapping (FOAM_SIGFPE).
fileModificationChecking : Monitoring run-time modified files using timeStampMaster
allowSystemOperations : Disallowing user-supplied system call operations

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
[0]
[0]
[0] --> FOAM FATAL ERROR:
[0] checkMesh: cannot open case directory "/home/guilha/testes/processor0"
[0]
[0]
FOAM parallel run exiting
[0]
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun has exited due to process rank 0 with PID 3313 on
node g04 exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------

My mesh has 3,136,272 cells.
I have cyclic boundary conditions.
As for decomposePar, I think it works perfectly fine; this is the output for 32 processors:
Code:

Processor 0
    Number of cells = 97416
    Number of faces shared with processor 1 = 2736
    Number of faces shared with processor 2 = 1243
    Number of faces shared with processor 3 = 1205
    Number of faces shared with processor 30 = 720
    Number of processor patches = 4
    Number of processor faces = 5904
    Number of boundary faces = 9750

Processor 1
    Number of cells = 98736
    Number of faces shared with processor 0 = 2736
    Number of faces shared with processor 12 = 1632
    Number of faces shared with processor 13 = 864
    Number of faces shared with processor 14 = 768
    Number of faces shared with processor 30 = 1560
    Number of processor patches = 5
    Number of processor faces = 7560
    Number of boundary faces = 8444

Processor 2
    Number of cells = 97046
    Number of faces shared with processor 0 = 1243
    Number of faces shared with processor 3 = 2014
    Number of faces shared with processor 3 = 3
    Number of faces shared with processor 7 = 1272
    Number of processor patches = 4
    Number of processor faces = 4532
    Number of boundary faces = 10102

Processor 3
    Number of cells = 97426
    Number of faces shared with processor 0 = 1205
    Number of faces shared with processor 2 = 2014
    Number of faces shared with processor 2 = 3
    Number of faces shared with processor 6 = 1289
    Number of faces shared with processor 7 = 103
    Number of processor patches = 5
    Number of processor faces = 4614
    Number of boundary faces = 9914

Processor 4
    Number of cells = 97976
    Number of faces shared with processor 5 = 2142
    Number of faces shared with processor 5 = 3
    Number of faces shared with processor 5 = 1
    Number of faces shared with processor 6 = 1222
    Number of processor patches = 4
    Number of processor faces = 3368
    Number of boundary faces = 11280

Processor 5
    Number of cells = 97168
    Number of faces shared with processor 4 = 2142
    Number of faces shared with processor 4 = 1
    Number of faces shared with processor 4 = 3
    Number of faces shared with processor 6 = 74
    Number of faces shared with processor 7 = 1224
    Number of processor patches = 5
    Number of processor faces = 3444
    Number of boundary faces = 11238

Processor 6
    Number of cells = 98412
    Number of faces shared with processor 3 = 1289
    Number of faces shared with processor 4 = 1222
    Number of faces shared with processor 5 = 74
    Number of faces shared with processor 7 = 2178
    Number of faces shared with processor 7 = 3
    Number of processor patches = 5
    Number of processor faces = 4766
    Number of boundary faces = 10228

Processor 7
    Number of cells = 97044
    Number of faces shared with processor 2 = 1272
    Number of faces shared with processor 3 = 103
    Number of faces shared with processor 5 = 1224
    Number of faces shared with processor 6 = 2178
    Number of faces shared with processor 6 = 3
    Number of processor patches = 5
    Number of processor faces = 4780
    Number of boundary faces = 9894

Processor 8
    Number of cells = 99552
    Number of faces shared with processor 9 = 2040
    Number of faces shared with processor 10 = 216
    Number of faces shared with processor 11 = 1392
    Number of faces shared with processor 24 = 1392
    Number of processor patches = 4
    Number of processor faces = 5040
    Number of boundary faces = 9880

Processor 9
    Number of cells = 98677
    Number of faces shared with processor 8 = 2040
    Number of faces shared with processor 10 = 1152
    Number of faces shared with processor 12 = 1480
    Number of faces shared with processor 13 = 750
    Number of faces shared with processor 24 = 1008
    Number of faces shared with processor 29 = 600
    Number of processor patches = 6
    Number of processor faces = 7030
    Number of boundary faces = 8224

Processor 10
    Number of cells = 98044
    Number of faces shared with processor 8 = 216
    Number of faces shared with processor 9 = 1152
    Number of faces shared with processor 11 = 1728
    Number of faces shared with processor 13 = 528
    Number of faces shared with processor 15 = 1260
    Number of processor patches = 5
    Number of processor faces = 4884
    Number of boundary faces = 9534

Processor 11
    Number of cells = 98040
    Number of faces shared with processor 8 = 1392
    Number of faces shared with processor 10 = 1728
    Number of processor patches = 2
    Number of processor faces = 3120
    Number of boundary faces = 11242

Processor 12
    Number of cells = 99813
    Number of faces shared with processor 1 = 1632
    Number of faces shared with processor 9 = 1480
    Number of faces shared with processor 13 = 2268
    Number of faces shared with processor 29 = 1224
    Number of faces shared with processor 30 = 576
    Number of processor patches = 5
    Number of processor faces = 7180
    Number of boundary faces = 8322

Processor 13
    Number of cells = 100166
    Number of faces shared with processor 1 = 864
    Number of faces shared with processor 9 = 750
    Number of faces shared with processor 10 = 528
    Number of faces shared with processor 12 = 2268
    Number of faces shared with processor 14 = 1512
    Number of faces shared with processor 15 = 1968
    Number of processor patches = 6
    Number of processor faces = 7890
    Number of boundary faces = 8342

Processor 14
    Number of cells = 100368
    Number of faces shared with processor 1 = 768
    Number of faces shared with processor 13 = 1512
    Number of faces shared with processor 15 = 1944
    Number of processor patches = 3
    Number of processor faces = 4224
    Number of boundary faces = 11580

Processor 15
    Number of cells = 100364
    Number of faces shared with processor 10 = 1260
    Number of faces shared with processor 13 = 1968
    Number of faces shared with processor 14 = 1944
    Number of processor patches = 3
    Number of processor faces = 5172
    Number of boundary faces = 10312

Processor 16
    Number of cells = 96459
    Number of faces shared with processor 17 = 2438
    Number of faces shared with processor 17 = 2
    Number of faces shared with processor 18 = 1392
    Number of faces shared with processor 19 = 96
    Number of processor patches = 4
    Number of processor faces = 3928
    Number of boundary faces = 11084

Processor 17
    Number of cells = 97625
    Number of faces shared with processor 16 = 2438
    Number of faces shared with processor 16 = 2
    Number of faces shared with processor 19 = 1248
    Number of faces shared with processor 22 = 336
    Number of faces shared with processor 23 = 1707
    Number of faces shared with processor 23 = 2
    Number of processor patches = 6
    Number of processor faces = 5733
    Number of boundary faces = 9469

Processor 18
    Number of cells = 97056
    Number of faces shared with processor 16 = 1392
    Number of faces shared with processor 19 = 1944
    Number of faces shared with processor 31 = 1416
    Number of processor patches = 3
    Number of processor faces = 4752
    Number of boundary faces = 9816

Processor 19
    Number of cells = 97107
    Number of faces shared with processor 16 = 96
    Number of faces shared with processor 17 = 1248
    Number of faces shared with processor 18 = 1944
    Number of faces shared with processor 22 = 1986
    Number of faces shared with processor 28 = 1512
    Number of faces shared with processor 31 = 528
    Number of processor patches = 6
    Number of processor faces = 7314
    Number of boundary faces = 8090

Processor 20
    Number of cells = 95424
    Number of faces shared with processor 21 = 1824
    Number of faces shared with processor 23 = 1320
    Number of processor patches = 2
    Number of processor faces = 3144
    Number of boundary faces = 11048

Processor 21
    Number of cells = 96816
    Number of faces shared with processor 20 = 1824
    Number of faces shared with processor 22 = 1560
    Number of faces shared with processor 23 = 192
    Number of faces shared with processor 26 = 96
    Number of faces shared with processor 27 = 1512
    Number of processor patches = 5
    Number of processor faces = 5184
    Number of boundary faces = 9412

Processor 22
    Number of cells = 96477
    Number of faces shared with processor 17 = 336
    Number of faces shared with processor 19 = 1986
    Number of faces shared with processor 21 = 1560
    Number of faces shared with processor 23 = 1584
    Number of faces shared with processor 26 = 1848
    Number of faces shared with processor 28 = 48
    Number of processor patches = 6
    Number of processor faces = 7362
    Number of boundary faces = 8042

Processor 23
    Number of cells = 96844
    Number of faces shared with processor 17 = 1707
    Number of faces shared with processor 17 = 2
    Number of faces shared with processor 20 = 1320
    Number of faces shared with processor 21 = 192
    Number of faces shared with processor 22 = 1584
    Number of processor patches = 5
    Number of processor faces = 4805
    Number of boundary faces = 9659

Processor 24
    Number of cells = 98250
    Number of faces shared with processor 8 = 1392
    Number of faces shared with processor 9 = 1008
    Number of faces shared with processor 25 = 2225
    Number of faces shared with processor 29 = 1176
    Number of processor patches = 4
    Number of processor faces = 5801
    Number of boundary faces = 9339

Processor 25
    Number of cells = 96462
    Number of faces shared with processor 24 = 2225
    Number of faces shared with processor 26 = 1224
    Number of faces shared with processor 27 = 1176
    Number of faces shared with processor 28 = 216
    Number of faces shared with processor 29 = 744
    Number of processor patches = 5
    Number of processor faces = 5585
    Number of boundary faces = 9407

Processor 26
    Number of cells = 97678
    Number of faces shared with processor 21 = 96
    Number of faces shared with processor 22 = 1848
    Number of faces shared with processor 25 = 1224
    Number of faces shared with processor 27 = 2250
    Number of faces shared with processor 28 = 2232
    Number of processor patches = 5
    Number of processor faces = 7650
    Number of boundary faces = 8150

Processor 27
    Number of cells = 98402
    Number of faces shared with processor 21 = 1512
    Number of faces shared with processor 25 = 1176
    Number of faces shared with processor 26 = 2250
    Number of processor patches = 3
    Number of processor faces = 4938
    Number of boundary faces = 10182

Processor 28
    Number of cells = 99528
    Number of faces shared with processor 19 = 1512
    Number of faces shared with processor 22 = 48
    Number of faces shared with processor 25 = 216
    Number of faces shared with processor 26 = 2232
    Number of faces shared with processor 29 = 1656
    Number of faces shared with processor 30 = 192
    Number of faces shared with processor 31 = 1536
    Number of processor patches = 7
    Number of processor faces = 7392
    Number of boundary faces = 8294

Processor 29
    Number of cells = 98184
    Number of faces shared with processor 9 = 600
    Number of faces shared with processor 12 = 1224
    Number of faces shared with processor 24 = 1176
    Number of faces shared with processor 25 = 744
    Number of faces shared with processor 28 = 1656
    Number of faces shared with processor 30 = 1320
    Number of processor patches = 6
    Number of processor faces = 6720
    Number of boundary faces = 8182

Processor 30
    Number of cells = 98856
    Number of faces shared with processor 0 = 720
    Number of faces shared with processor 1 = 1560
    Number of faces shared with processor 12 = 576
    Number of faces shared with processor 28 = 192
    Number of faces shared with processor 29 = 1320
    Number of faces shared with processor 31 = 2136
    Number of processor patches = 6
    Number of processor faces = 6504
    Number of boundary faces = 8838

Processor 31
    Number of cells = 98856
    Number of faces shared with processor 18 = 1416
    Number of faces shared with processor 19 = 528
    Number of faces shared with processor 28 = 1536
    Number of faces shared with processor 30 = 2136
    Number of processor patches = 4
    Number of processor faces = 5616
    Number of boundary faces = 9486

Number of processor faces = 87968
Max number of cells = 100368 (2.407444% above average 98008.5)
Max number of processor patches = 7 (51.35135% above average 4.625)
Max number of faces between processors = 7890 (43.50673% above average 5498)


Processor 0: field transfer
Processor 1: field transfer
Processor 2: field transfer
Processor 3: field transfer
Processor 4: field transfer
Processor 5: field transfer
Processor 6: field transfer
Processor 7: field transfer
Processor 8: field transfer
Processor 9: field transfer
Processor 10: field transfer
Processor 11: field transfer
Processor 12: field transfer
Processor 13: field transfer
Processor 14: field transfer
Processor 15: field transfer
Processor 16: field transfer
Processor 17: field transfer
Processor 18: field transfer
Processor 19: field transfer
Processor 20: field transfer
Processor 21: field transfer
Processor 22: field transfer
Processor 23: field transfer
Processor 24: field transfer
Processor 25: field transfer
Processor 26: field transfer
Processor 27: field transfer
Processor 28: field transfer
Processor 29: field transfer
Processor 30: field transfer
Processor 31: field transfer

End.

These are 32 processors on the same machine.
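As a side note, the imbalance percentages decomposePar reports above are simply the maximum over the average. They can be reproduced from the per-processor counts (a quick sketch using awk for the floating-point arithmetic; the face average is 2 × 87968 / 32 = 5498, since each inter-processor face appears on both sides):

```shell
# Recompute decomposePar's load-balance figures from the log above.
awk 'BEGIN {
  printf "%.6f\n", 100*(100368-98008.5)/98008.5;  # max cells vs. average      -> 2.407444
  printf "%.5f\n", 100*(7-4.625)/4.625;           # max processor patches      -> 51.35135
  printf "%.5f\n", 100*(7890-5498)/5498;          # max inter-processor faces  -> 43.50673
}'
```

A few percent of cell imbalance over 32 subdomains, as here, is generally considered a good decomposition.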

guilha February 15, 2014 17:41

The checkMesh output in my previous post was from before decomposition; after decomposing the case, running checkMesh in parallel gives the following error:

Code:

/*---------------------------------------------------------------------------*\
| =========                |                                                |
| \\      /  F ield        | OpenFOAM: The Open Source CFD Toolbox          |
|  \\    /  O peration    | Version:  2.0.1                                |
|  \\  /    A nd          | Web:      www.OpenFOAM.com                      |
|    \\/    M anipulation  |                                                |
\*---------------------------------------------------------------------------*/
Build  : 2.0.1-51f1de99a4bc
Exec  : checkMesh -parallel
Date  : Feb 15 2014
Time  : 22:38:21
Host  : g04
PID    : 3429
Case  : /home/guilha/testes
nProcs : 32
Slaves :
31
(
g04.3430
g04.3431
g04.3432
g04.3433
g04.3434
g04.3435
g04.3436
g04.3437
g04.3438
g04.3439
g04.3440
g04.3441
g04.3442
g04.3443
g04.3444
g04.3445
g04.3446
g04.3447
g04.3448
g04.3449
g04.3450
g04.3451
g04.3452
g04.3453
g04.3454
g04.3455
g04.3456
g04.3457
g04.3458
g04.3459
g04.3460
)

Pstream initialized with:
    floatTransfer    : 0
    nProcsSimpleSum  : 0
    commsType        : nonBlocking
sigFpe : Enabling floating point exception trapping (FOAM_SIGFPE).
fileModificationChecking : Monitoring run-time modified files using timeStampMaster
allowSystemOperations : Disallowing user-supplied system call operations

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time

Create polyMesh for time = 0

[g04:03433] *** Process received signal ***
[g04:03433] Signal: Segmentation fault (11)
[g04:03433] Signal code: Address not mapped (1)
[g04:03433] Failing at address: 0xfffffffe0351be78
[g04:03433] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x32480) [0x2b6814c14480]
[g04:03433] [ 1] /lib/x86_64-linux-gnu/libc.so.6(+0x728fa) [0x2b6814c548fa]
[g04:03433] [ 2] /lib/x86_64-linux-gnu/libc.so.6(+0x74d64) [0x2b6814c56d64]
[g04:03433] [ 3] /lib/x86_64-linux-gnu/libc.so.6(__libc_malloc+0x70) [0x2b6814c59420]
[g04:03433] [ 4] /usr/lib/x86_64-linux-gnu/libstdc++.so.6(_Znwm+0x1d) [0x2b68144fc68d]
[g04:03433] [ 5] /usr/lib/x86_64-linux-gnu/libstdc++.so.6(_Znam+0x9) [0x2b68144fc7a9]
[g04:03433] [ 6] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so(_ZN4Foam5error10printStackERNS_7OstreamE+0x128b) [0x2b6813f631db]
[g04:03433] [ 7] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so(_ZN4Foam7sigSegv10sigHandlerEi+0x30) [0x2b6813f55ec0]
[g04:03433] [ 8] /lib/x86_64-linux-gnu/libc.so.6(+0x32480) [0x2b6814c14480]
[g04:03433] [ 9] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so(_ZN4Foam18processorPolyPatch10updateMeshERNS_14PstreamBuffersE+0x2da) [0x2b6813dd5f3a]
[g04:03433] [10] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so(_ZN4Foam16polyBoundaryMesh10updateMeshEv+0x2b1) [0x2b6813ddcfc1]
[g04:03433] [11] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so(_ZN4Foam8polyMeshC1ERKNS_8IOobjectE+0xd0b) [0x2b6813e2b8bb]
[g04:03433] [12] checkMesh() [0x41b1d4]
[g04:03433] [13] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xfd) [0x2b6814c00ead]
[g04:03433] [14] checkMesh() [0x407f79]
[g04:03433] *** End of error message ***
--------------------------------------------------------------------------
An MPI process has executed an operation involving a call to the
"fork()" system call to create a child process.  Open MPI is currently
operating in a condition that could result in memory corruption or
other system errors; your MPI job may hang, crash, or produce silent
data corruption.  The use of fork() (or system() or other calls that
create child processes) is strongly discouraged.

The process that invoked fork was:

  Local host:          g04 (PID 3434)
  MPI_COMM_WORLD rank: 5

If you are *absolutely sure* that your application will successfully
and correctly survive a call to fork(), you may disable this warning
by setting the mpi_warn_on_fork MCA parameter to 0.
--------------------------------------------------------------------------
[5] #0  Foam::error::printStack(Foam::Ostream&) in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[5] #1  Foam::sigSegv::sigHandler(int) in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[5] #2  in "/lib/x86_64-linux-gnu/libc.so.6"
[5] #3  Foam::processorPolyPatch::updateMesh(Foam::PstreamBuffers&) in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[5] #4  Foam::polyBoundaryMesh::updateMesh() in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
[5] #5  Foam::polyMesh::polyMesh(Foam::IOobject const&)
--------------------------------------------------------------------------
mpirun noticed that process rank 4 with PID 3433 on node g04 exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------
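For what it is worth, the `_ZN4Foam...` names in the raw `[g04:...]` trace are just C++-mangled symbols; they can be decoded with `c++filt` (from binutils, assuming it is installed) and match the readable frames that OpenFOAM's own printStack printed for rank 5:

```shell
# Demangle the C++ symbol names appearing in the raw backtrace.
printf '%s\n' \
  _ZN4Foam5error10printStackERNS_7OstreamE \
  _ZN4Foam18processorPolyPatch10updateMeshERNS_14PstreamBuffersE \
  | c++filt
# prints:
#   Foam::error::printStack(Foam::Ostream&)
#   Foam::processorPolyPatch::updateMesh(Foam::PstreamBuffers&)
```

Note also that the Open MPI fork() warning is a side effect of the crash handler itself (printStack shells out to collect the backtrace), not the root cause; as the message itself says, it can be silenced with `mpirun --mca mpi_warn_on_fork 0 ...`. The crash to investigate is the segmentation fault in Foam::processorPolyPatch::updateMesh while constructing the polyMesh.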


