
PimpleFoam: Pressure-driven flow with RAS crashing due to high continuity error

October 19, 2023, 08:32   #1
PimpleFoam: Pressure-driven flow with RAS crashing due to high continuity error
atul1018
Senior Member
Join Date: Jun 2020
Posts: 100
Hello, CFD/OpenFOAM community

I have been struggling to run a pressure-driven flow with the RANS approach. The case has pressure-defined boundary conditions at both inlet and outlet (as I have no data about velocity/discharge): I am defining totalPressure (kinematic pressure, p = rho*g*h/rho) at the inlet, and a value of 0 at the outlet since it is open to the atmosphere. The velocity at the inlet and outlet is calculated from the pressure, so the pressureInletOutletVelocity bc is used. To generate the mesh I used snappyHexMesh, and checkMesh shows the mesh is decent enough (max. aspect ratio ~5, max. non-orthogonality ~47, max. skewness ~1.3).
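As a quick sanity check on that inlet value: for incompressible OpenFOAM solvers the kinematic total pressure from a hydrostatic head is just g*h. A minimal sketch (the head h = 2.8 m is my assumption, chosen only because it reproduces the posted 27.468 m²/s²):

```python
# Kinematic total pressure from a pressure head, as used in 0/p.
# In incompressible solvers p is pressure divided by density, so a
# hydrostatic head h gives p0 = rho*g*h/rho = g*h  [m^2/s^2].

g = 9.81   # gravitational acceleration [m/s^2]
h = 2.8    # assumed pressure head [m] (hypothetical, not stated in the post)

p0 = g * h
print(round(p0, 3))  # 27.468
```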

First, I ran the case as laminar to check whether my pressure-based bcs were causing any problem. The laminar simulation runs successfully and the results look physical, so the pressure-based bcs should not be a problem for a RANS simulation either.

Later on, I activated RAS (RANS with kEpsilon) in turbulenceProperties and accordingly provided k, nut and epsilon files in my 0 folder. At the walls I am using wall functions, and for the initial estimates of k and epsilon I use the standard formulae based on turbulence intensity, velocity, and length scale (https://www.cfd-online.com/Tools/turbulence.php). During the initial trials the Courant number was exploding, with extremely small time steps, at the very first time steps. I tuned fvSchemes and fvSolution to fit the case, and now the simulation runs a bit further (until 0.28 s). However, after some time I suddenly get a very high continuity error and the simulation crashes. Interestingly, when it crashes the Courant number and time step are still low.
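For reference, the standard estimates behind that calculator can be sketched in a few lines of Python (the velocity, intensity and length scale below are placeholder values, not the ones from this case):

```python
# Standard inlet-turbulence estimates for the k-epsilon model:
#   k       = 3/2 * (U*I)^2
#   epsilon = C_mu^(3/4) * k^(3/2) / l
# U, I and l below are illustrative assumptions only.

C_mu = 0.09  # standard k-epsilon model constant

def k_inlet(U, I):
    """Turbulent kinetic energy [m^2/s^2] from velocity U and intensity I."""
    return 1.5 * (U * I) ** 2

def epsilon_inlet(k, l):
    """Dissipation rate [m^2/s^3] from k and turbulence length scale l."""
    return C_mu ** 0.75 * k ** 1.5 / l

k = k_inlet(U=5.0, I=0.05)      # 5 m/s, 5 % intensity -> k ≈ 0.094
eps = epsilon_inlet(k, l=0.07)  # 7 cm length scale   -> eps ≈ 0.067
print(k, eps)
```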

I have gone through several posts here and tried the suggestions (tightening the pressure tolerance, relaxation factors, etc.), but I still could not get rid of this error.

For reference, here are the important files:

0/U
Code:
dimensions      [0 1 -1 0 0 0 0];

internalField   uniform (0 0 0);

boundaryField
{
    inlet
    {
        type            pressureInletOutletVelocity;
        value           uniform (0 0 0);
    }

    outlet
    {
        type            pressureInletOutletVelocity;
        value           uniform (0 0 0);
    }

    top
    {
        type            noSlip; //symmetry;
    }

    domain_patch24311
    {
        type            noSlip;
    }

}
0/p
Code:
dimensions      [0 2 -2 0 0 0 0];

internalField   uniform 0;

boundaryField
{
    inlet
    {
        type            totalPressure; //p is kinematic pressure, p=P/rho
        p0              uniform 27.468;
        value           uniform 27.468;
    }

    outlet
    {
        type            totalPressure;
        p0              uniform 0;
        value           uniform 0;
    }

    top
    {
        type            fixedFluxPressure;
        value           uniform 0; //symmetry; //zeroGradient;
    }

    domain_patch24311
    {
        type            fixedFluxPressure;
        value           uniform 0;
    }

}
0/k

Code:
dimensions      [0 2 -2 0 0 0 0];

internalField   uniform 37.5; //3.75e-03;

boundaryField
{
    inlet
    {
        type            fixedValue;
        value           uniform 37.5; //3.75e-03;
    }
    outlet
    {
        type            zeroGradient;
    }
    top
    {
        type            kqRWallFunction;
        value           uniform 37.5; //3.75e-03; //zeroGradient; //symmetry;
    }
    domain_patch24311
    {
        type            kqRWallFunction;
        value           uniform 37.5; //3.75e-03;
    }
    
}
0/epsilon
Code:
dimensions      [0 2 -3 0 0 0 0];

internalField   uniform 94.3; // 9.4e-05;

boundaryField
{
    inlet
    {
        type            fixedValue;
        value           uniform 94.3; // 9.4e-05;
    }
    outlet
    {
        type            zeroGradient;
    }
    top
    {
        type            epsilonWallFunction;
        value           uniform 94.3; // 9.4e-05; //zeroGradient;//symmetry;
    }
    domain_patch24311
    {
        type            epsilonWallFunction;
        value           uniform 94.3; // 9.4e-05;
    }
   
}
0/nut
Code:
dimensions      [0 2 -1 0 0 0 0];

internalField   uniform 0;

boundaryField
{
    inlet
    {
        type            calculated;
        value           uniform 0;
    }
    outlet
    {
        type            calculated;
        value           uniform 0;
    }
    top
    {
        type            nutkWallFunction;
        value           uniform 0; //zeroGradient; //symmetry;
    }
    domain_patch24311
    {
        type            nutkWallFunction;
        value           uniform 0;
    }
    
}
system/fvSchemes
Code:
ddtSchemes
{
    default         backward; //Euler;
}

gradSchemes
{
    default         leastSquares; //Gauss linear;
}

divSchemes
{
    default         none;
    div(phi,U)      Gauss linear;//bounded Gauss linearUpwind grad(U);
    div(phi,k)      Gauss linear;//bounded Gauss upwind;
    div(phi,epsilon) Gauss linear;//bounded Gauss upwind;
    div(phi,R)      bounded Gauss upwind;
    div(R)          Gauss linear;
    div(phi,nuTilda) bounded Gauss upwind;
    div((nuEff*dev2(T(grad(U))))) Gauss linear;
}

laplacianSchemes
{
    default         Gauss linear corrected;
}

interpolationSchemes
{
    default         linear;
}

snGradSchemes
{
    default         corrected;
}

// kOmegaSST
wallDist
{
    method meshWave;
}
system/fvSolution
Code:
solvers
{
    p
    {
        /*solver          PCG;
        preconditioner  DIC;
        tolerance       1e-06;
        relTol          1e-05;*/
        
        solver           GAMG;
        tolerance        1e-6;
        relTol           1e-05;
        smoother         DICGaussSeidel;

    }

    pFinal
    {
        /*solver          PCG;
        preconditioner  DIC;
        tolerance       1e-08;
        relTol          0;*/
        
        $p;
        smoother        DICGaussSeidel;
        tolerance       1e-08;
        relTol          0;
    }

    "(U|k|epsilon)"
    {
        /*solver          PBiCG;
        preconditioner  DILU;
        tolerance       1e-5;
        relTol          0.01;
        maxIter 25000;*/
        solver          smoothSolver;
        smoother        symGaussSeidel;
        tolerance       1e-05;
        relTol          0.1;
    }

    "(U|k|epsilon)Final"
    {
        $U;
        relTol          0;
    }
}

PIMPLE
{
    nOuterCorrectors 2; //1 means running in PISO mode
    nCorrectors     3;
    nNonOrthogonalCorrectors 1;
    pRefCell        1001;
    pRefValue       0;
    
    /*residualControl //not significant effect on solution convergence, remains same
    {
        U
        {
                tolerance  1e-3;
                relTol      0;
        }
        p
        {
                tolerance  1e-3;
                relTol      0;
        }
     }*/
}
/*relaxationFactors //when turned on, the Courant number explodes
{
    fields
    {
	    p	1.0;
    }
    equations
    {
        "U.*"           1.;
        "k.*"           1.;
        "epsilon.*"     1.;
    }

}*/
The last two time steps are as follows (before it crashed):
Code:
Courant Number mean: 0.00207104 max: 0.977617
deltaT = 0.000714286
Time = 0.280714

PIMPLE: iteration 1
smoothSolver:  Solving for Ux, Initial residual = 0.000407351, Final residual = 1.12122e-05, No Iterations 1
smoothSolver:  Solving for Uy, Initial residual = 0.00188658, Final residual = 1.93263e-05, No Iterations 1
smoothSolver:  Solving for Uz, Initial residual = 0.000493951, Final residual = 6.30378e-06, No Iterations 1
GAMG:  Solving for p, Initial residual = 0.0101273, Final residual = 5.20742e-07, No Iterations 8
GAMG:  Solving for p, Initial residual = 0.000923531, Final residual = 6.78503e-07, No Iterations 4
time step continuity errors : sum local = 3.94284e-11, global = -1.06258e-11, cumulative = -1.06258e-11
GAMG:  Solving for p, Initial residual = 0.00090325, Final residual = 6.86228e-07, No Iterations 7
GAMG:  Solving for p, Initial residual = 0.000291614, Final residual = 5.30832e-07, No Iterations 3
time step continuity errors : sum local = 3.07364e-11, global = 1.34748e-11, cumulative = 2.84907e-12
GAMG:  Solving for p, Initial residual = 0.000165717, Final residual = 5.96008e-07, No Iterations 5
GAMG:  Solving for p, Initial residual = 3.29811e-05, Final residual = 8.02269e-09, No Iterations 6
time step continuity errors : sum local = 4.64515e-13, global = -1.144e-13, cumulative = 2.73467e-12
PIMPLE: iteration 2
smoothSolver:  Solving for Ux, Initial residual = 7.20935e-05, Final residual = 9.63959e-06, No Iterations 1
smoothSolver:  Solving for Uy, Initial residual = 8.84436e-05, Final residual = 5.15508e-06, No Iterations 1
smoothSolver:  Solving for Uz, Initial residual = 6.63605e-05, Final residual = 7.29735e-06, No Iterations 1
GAMG:  Solving for p, Initial residual = 0.00439264, Final residual = 9.08655e-07, No Iterations 8
GAMG:  Solving for p, Initial residual = 0.00111425, Final residual = 3.92066e-07, No Iterations 5
time step continuity errors : sum local = 2.29258e-11, global = -1.05907e-12, cumulative = 1.67559e-12
GAMG:  Solving for p, Initial residual = 0.00138089, Final residual = 4.78288e-07, No Iterations 8
GAMG:  Solving for p, Initial residual = 0.000524477, Final residual = 4.24057e-07, No Iterations 4
time step continuity errors : sum local = 2.46553e-11, global = 2.00002e-12, cumulative = 3.67561e-12
GAMG:  Solving for p, Initial residual = 0.000266858, Final residual = 3.73802e-07, No Iterations 6
GAMG:  Solving for p, Initial residual = 6.07105e-05, Final residual = 5.661e-09, No Iterations 6
time step continuity errors : sum local = 3.29044e-13, global = -4.05711e-14, cumulative = 3.63504e-12
smoothSolver:  Solving for epsilon, Initial residual = 0.000321257, Final residual = 4.76178e-06, No Iterations 1
bounding epsilon, min: -0.111739 max: 5519.29 average: 115.472
smoothSolver:  Solving for k, Initial residual = 0.0003002, Final residual = 7.32572e-06, No Iterations 1
bounding k, min: -0.00444426 max: 41.3147 average: 14.1269
PIMPLE: not converged within 2 iterations
ExecutionTime = 11.5 s  ClockTime = 12 s

Courant Number mean: 0.00207455 max: 0.979355
deltaT = 0.000714286
Time = 0.281429

PIMPLE: iteration 1
smoothSolver:  Solving for Ux, Initial residual = 1, Final residual = 0.00740171, No Iterations 1
smoothSolver:  Solving for Uy, Initial residual = 1, Final residual = 0.00740171, No Iterations 1
smoothSolver:  Solving for Uz, Initial residual = 1, Final residual = 0.00740171, No Iterations 1
GAMG:  Solving for p, Initial residual = 1, Final residual = 5.35013e-06, No Iterations 14
GAMG:  Solving for p, Initial residual = 0.0583631, Final residual = 4.17531e-07, No Iterations 9
time step continuity errors : sum local = 0.270499, global = 0.0353781, cumulative = 0.0353781
GAMG:  Solving for p, Initial residual = 1, Final residual = 6.683e-06, No Iterations 14
GAMG:  Solving for p, Initial residual = 0.0726393, Final residual = 4.31113e-07, No Iterations 10
time step continuity errors : sum local = 1.81789e+08, global = -4.1165e+07, cumulative = -4.1165e+07
GAMG:  Solving for p, Initial residual = 1, Final residual = 5.52159e-06, No Iterations 13
GAMG:  Solving for p, Initial residual = 0.0763192, Final residual = 6.27906e-09, No Iterations 14
time step continuity errors : sum local = 5.24242e+23, global = 9.78423e+22, cumulative = 9.78423e+22
PIMPLE: iteration 2
#0  Foam::error::printStack(Foam::Ostream&) at ??:?
#1  Foam::sigFpe::sigHandler(int) at ??:?
#2  ? in "/lib/x86_64-linux-gnu/libc.so.6"
#3  Foam::symGaussSeidelSmoother::smooth(Foam::word const&, Foam::Field<double>&, Foam::lduMatrix const&, Foam::Field<double> const&, Foam::FieldField<Foam::Field, double> const&, Foam::UPtrList<Foam::lduInterfaceField const> const&, unsigned char, int) at ??:?
#4  Foam::symGaussSeidelSmoother::smooth(Foam::Field<double>&, Foam::Field<double> const&, unsigned char, int) const at ??:?
#5  Foam::smoothSolver::solve(Foam::Field<double>&, Foam::Field<double> const&, unsigned char) const at ??:?
#6  ? in "/home/generic/OpenFOAM/OpenFOAM-5.x/platforms/linux64GccDPInt32Opt/bin/pimpleFoam"
#7  ? in "/home/generic/OpenFOAM/OpenFOAM-5.x/platforms/linux64GccDPInt32Opt/bin/pimpleFoam"
#8  ? in "/home/generic/OpenFOAM/OpenFOAM-5.x/platforms/linux64GccDPInt32Opt/bin/pimpleFoam"
#9  ? in "/home/generic/OpenFOAM/OpenFOAM-5.x/platforms/linux64GccDPInt32Opt/bin/pimpleFoam"
#10  __libc_start_main in "/lib/x86_64-linux-gnu/libc.so.6"
#11  ? in "/home/generic/OpenFOAM/OpenFOAM-5.x/platforms/linux64GccDPInt32Opt/bin/pimpleFoam"
Floating point exception (core dumped)
Please let me know if these high continuity errors can be avoided by any means. I am almost out of ideas at the moment.

Thank you and best regards
Atul

October 19, 2023, 11:29   #2
atul1018
Update:

I changed the time scheme back to the Euler scheme, and now it is not crashing anymore.

November 2, 2023, 10:25   #3
Problem persists with different mesh
atul1018
Dear OpenFOAM-community

I have to come here again, as I am struggling with crashes of my RANS simulations on a different mesh. Last time, with the previous mesh and the RANS approach (on the same geometry), the case ran once I changed the ddt scheme to Euler. However, the results look strange and unphysical: some high-velocity patches appear near the walls (see figure RANS_test_1). I thought the unphysical results might be due to a "not so good" mesh.

I tried to improve the mesh quality by adjusting the refinement levels in snappyHexMesh, and now I have a somewhat finer mesh near the walls, edges, and curved surfaces of the domain. With the new mesh I ran the laminar case again, and the results look exactly like those on the previous mesh. However, when I tried RANS (k-epsilon) on the new mesh, it initially crashed with a high Courant number, small time steps, and time step continuity errors. I managed to avoid most of these errors and run the case a bit longer by adjusting fvSchemes (especially the grad and div schemes). However, the simulation again crashes at 0.28 s. Interestingly, the Courant number remains low but the time step continuity error suddenly blows up. I tried several schemes for all the terms in fvSchemes, but with no success. Looking at the results, with the current mesh I no longer see the high-velocity patches near the walls (better results: see figure RANS_test_2). But I need to simulate longer than 0.28 s, as the case has not yet reached a steady-state (URANS) solution like the laminar case. With both meshes I get exactly the same laminar velocity fields (see figure Laminar).

Now I am wondering how I can run the current mesh case for longer. The results look better, but it keeps crashing. Please help.

PS: I am using much higher values (e.g. 100 times) of k and epsilon as initial conditions than suggested by the calculator (https://www.cfd-online.com/Tools/turbulence.php). With the values suggested by the calculator, it crashed at the very beginning.

I am wondering what causes the simulation to crash or give unphysical results when RANS is activated. It must be something related to k and epsilon and how they are solved, since the laminar case runs without crashing.

Best Regards
Atul Jaiswal
Attached Images
File Type: png RANS_test_1.PNG (55.2 KB, 11 views)
File Type: png RANS_test_2.PNG (74.0 KB, 10 views)
File Type: png Laminar.PNG (51.1 KB, 9 views)

Last edited by atul1018; November 3, 2023 at 05:44.

November 2, 2023, 12:25   #4
Tobermory
Senior Member
Join Date: Apr 2020
Location: UK
Posts: 668
Why are you running with such a small number of pimple iterations (2)?

Quote:
nOuterCorrectors 2; //1 means running in PISO mode
I suggest making the solver work harder on each time step, to get a better, more converged solution at each time step. For example, try using 20 pimple iterations - the simulation will probably start off using all 20, which will increase the computation, but eventually it should drop as the rate of change of the solution drops ... providing that you set sensible residualControl parameters.
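As an illustration only (a sketch with indicative values, not settings tuned for this case), such a set-up in system/fvSolution might look like:

```
PIMPLE
{
    nOuterCorrectors         20;   // generous ceiling on outer iterations
    nCorrectors              2;
    nNonOrthogonalCorrectors 1;

    // Exit the outer loop early once the initial residual of an outer
    // iteration falls below "tolerance" for all controlled fields.
    residualControl
    {
        p
        {
            tolerance  1e-4;
            relTol     0;
        }
        U
        {
            tolerance  1e-4;
            relTol     0;
        }
    }
}
```

With this, the solver only burns through all 20 iterations while the fields are changing rapidly; once the flow settles, most time steps exit after a few outer corrections.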

Or alternatively, run it in steady-state mode (or use simpleFoam) and under-relax ... your boundary conditions seem steady, so it should be fine?

November 6, 2023, 10:28   #5
atul1018
Thanks for going through the post and suggestions.

Quote:
Why are you running with such a small number of pimple iterations (2)?
I took the reference from Wolf Dynamics. Given the orthogonality my mesh has, 2-3 should be fine. Also, the residuals are very small with 2-3 nOuterCorrectors, until it suddenly crashes with a high time step continuity error.


Quote:
I suggest making the solver work harder on each time step, to get a better, more converged solution at each time step. For example, try using 20 pimple iterations - the simulation will probably start off using all 20, which will increase the computation, but eventually it should drop as the rate of change of the solution drops ... providing that you set sensible residualControl parameters.
Following your suggestion, I tried 20 nOuterCorrectors. It ran a bit further, but eventually crashed at 0.29 s instead of 0.28 s (with 2 nOuterCorrectors). I tried playing with residualControl, but the run gets killed regardless of the values I choose.


Quote:
Or alternatively, run it in steady state mode (or used simpleFoam) and underrelax ... your boundary conditions seem steady, so it should be fine?
I will try steady-state mode (also with simpleFoam). But I am still wondering where this time step continuity error suddenly comes from. Everything runs smoothly until it gives that error, very abruptly. Any other suggestions/ideas/explanations?

Best Regards
Atul

November 7, 2023, 03:11   #6
Domenico Lahaye (dlahaye)
Senior Member
Join Date: Dec 2013
Posts: 723
Quote:
Originally Posted by Tobermory View Post
... For example, try using 20 pimple iterations ...
Thanks for your input as always and ever.

I am wondering:

1/ Is using as many as 20 outer corrections realistic, or is 20 here instead intended as a means to test matters?

2/ When running with 3 (three) outer corrections, I see that the first two corrections are computed fast (2 linear-solve iterations for the velocity components and 7 to 8 for the pressure), while the last (third) takes much longer to compute (350 to 400 linear-solve iterations for the velocity components, with 7 to 8 pressure iterations as before). The fact that the pressure iteration count remains low can most likely be explained by the use of the GAMG solver.

2.1/ Is the above scenario typical for PIMPLE runs?

2.2/ Is the above scenario typical for initial stage of convergence of PIMPLE runs?

2.3/ Does the above scenario imply that when using as many as 20 outer iterations, the first 19 come for "free" and all computational effort goes into the last correction?

Cheers. Domenico.

Edit: My bad. The increase in linear-solve iterations for velocity in the last PIMPLE iteration is due to leaving out relaxation factors for the final iteration (UFinal and pFinal). My question about the number of outer iterations to use remains relevant.

Last edited by dlahaye; November 7, 2023 at 03:51. Reason: correct mistake

November 7, 2023, 03:47   #7
Tobermory
Great questions, Domenico. I should confess early on that I hardly ever use the PIMPLE algorithm, since I prefer to solve steady-state problems with relaxation (i.e. SIMPLE) rather than with time stepping (i.e. PISO or PIMPLE). I have played around with PIMPLE for steady and transient problems, but I could never obtain better or more efficient behaviour than with the SIMPLE and PISO algorithms ... perhaps due to the type of cases I have used it on, or maybe it's just my lack of experience/competence with PIMPLE.

Let me add my thoughts on a few of your questions though:

Quote:
1/ Is using as much as 20 outer corrections realistic or is 20 here instead intended as a means to test matters;
It's quite common that the initial field is poorly defined, and that the pressure solver has a lot of work to do in the first few iterations, to get the field to be in balance with the boundary conditions.

With the PIMPLE approach and transient simulations, the philosophy is that you should try to have the solution suitably converged at each time step before moving on to the next one ... and that is done by the outer/PIMPLE iterations. The best approach is therefore to set a large value for nOuterCorrectors and then use the convergence criteria to let the solver cut this down to a reasonably smaller number once the hard work at the start of the run (or at whatever time large changes in the flow fields occur) is done.

I simply don't know what the correct/robust philosophy is when using PIMPLE for steady state solutions ... where you don't want to solve the whole field fully in one time step ... for that, presumably one needs to accept partially converged solutions at the end of each time step?

Quote:
2/ When running with 3 (three) outer corrections, I see that the first two corrections are computed fast (2 linear-solve iterations for the velocity components and 7 to 8 for the pressure), while the last (third) takes much longer to compute (350 to 400 linear-solve iterations for the velocity components, with 7 to 8 pressure iterations as before). The fact that the pressure iteration count remains low can most likely be explained by the use of the GAMG solver.

2.1/ Is the above scenario typical for PIMPLE runs?
2.2/ Is the above scenario typical for initial stage of convergence of PIMPLE runs?
This behaviour (i.e. so many velocity sweeps and only a small number of pressure sweeps) looks very odd to me ... and is driven, I think, by the fvSolution setting for UFinal, where relTol is set to 0 and tolerance is inherited from U as 1e-5.

As per the previous point, maybe the tolerance on UFinal needs adjusting to allow the solution to move on ... I can't believe that doing 400 sweeps on velocity is efficient - plotting the inner iteration convergence traces would quickly show if there's any benefit at all from sweeps 10-400! Perhaps reinstating a value for relTol would help?

Quote:
2.3/ Does the above scenario imply that when using as many as 20 outer iterations, the first 19 come for "free" and all computational effort goes into the last correction?
Yes, that seems to be the conclusion here ... again, it feels like a flawed approach to me.

I hope that's intelligible/useful? Let me know your thoughts. All the best, T.

November 7, 2023, 05:11   #8
atul1018
Hi Tobermory

Following your suggestion, I tried the case with simpleFoam. Similar to the pimpleFoam (URANS) case, the steady-state case crashes after 289 iterations with a floating point exception.

Looking into the log files, I can see that the time step continuity error blows up. Another interesting observation is that the log reports that k and epsilon are being bounded because they get too big. I guess this is the problem: the laminar case works fine, and it blows up only when turbulence is activated. So it definitely has something to do with k/epsilon (e.g. its initial conditions, its discretization, and the solvers used for it).


PS: I am using 100 times higher values of k and epsilon than suggested by the calculator here. With the suggested values of k and epsilon, even the steady-state simulation crashes at the very beginning.
For reference, here are the last two time steps:

Code:
Time = 288

smoothSolver:  Solving for Ux, Initial residual = 0.727367, Final residual = 0.052098, No Iterations 1
smoothSolver:  Solving for Uy, Initial residual = 0.499341, Final residual = 0.0439679, No Iterations 1
smoothSolver:  Solving for Uz, Initial residual = 0.348234, Final residual = 0.0289803, No Iterations 1
GAMG:  Solving for p, Initial residual = 0.88418, Final residual = 0.00102086, No Iterations 1
time step continuity errors : sum local = 7.0474e+61, global = 2.19658e+59, cumulative = 2.19659e+59
smoothSolver:  Solving for epsilon, Initial residual = 2.17778e-07, Final residual = 2.17778e-07, No Iterations 0
bounding epsilon, min: 2.7616e-16 max: 1.67435e+120 average: 1.467e+115
smoothSolver:  Solving for k, Initial residual = 0.285767, Final residual = 0.0140403, No Iterations 1
bounding k, min: -1.98644e+77 max: 7.43413e+122 average: 1.48437e+118
ExecutionTime = 269.92 s  ClockTime = 270 s

Time = 289

smoothSolver:  Solving for Ux, Initial residual = 2.86254e-05, Final residual = 9.0209e-07, No Iterations 1
smoothSolver:  Solving for Uy, Initial residual = 0.000897426, Final residual = 4.99061e-05, No Iterations 1
smoothSolver:  Solving for Uz, Initial residual = 0.00101645, Final residual = 2.82106e-05, No Iterations 1
[0] #0  Foam::error::printStack(Foam::Ostream&) at ??:?
[0] #1  Foam::sigFpe::sigHandler(int) at ??:?
[0] #2  ? in "/lib/x86_64-linux-gnu/libc.so.6"
[0] #3  Foam::GAMGSolver::scale(Foam::Field<double>&, Foam::Field<double>&, Foam::lduMatrix const&, Foam::FieldField<Foam::Field, double> const&, Foam::UPtrList<Foam::lduInterfaceField const> const&, Foam::Field<double> const&, unsigned char) const at ??:?
[0] #4  Foam::GAMGSolver::Vcycle(Foam::PtrList<Foam::lduMatrix::smoother> const&, Foam::Field<double>&, Foam::Field<double> const&, Foam::Field<double>&, Foam::Field<double>&, Foam::Field<double>&, Foam::Field<double>&, Foam::Field<double>&, Foam::PtrList<Foam::Field<double> >&, Foam::PtrList<Foam::Field<double> >&, unsigned char) const at ??:?
[0] #5  Foam::GAMGSolver::solve(Foam::Field<double>&, Foam::Field<double> const&, unsigned char) const at ??:?
[0] #6  Foam::fvMatrix<double>::solveSegregated(Foam::dictionary const&) at ??:?
[0] #7  Foam::fvMatrix<double>::solve(Foam::dictionary const&) in "/home/generic/OpenFOAM/OpenFOAM-5.x/platforms/linux64GccDPInt32Opt/bin/simpleFoam"
[0] #8  Foam::fvMatrix<double>::solve() in "/home/generic/OpenFOAM/OpenFOAM-5.x/platforms/linux64GccDPInt32Opt/bin/simpleFoam"
[0] #9  ? in "/home/generic/OpenFOAM/OpenFOAM-5.x/platforms/linux64GccDPInt32Opt/bin/simpleFoam"
[0] #10  __libc_start_main in "/lib/x86_64-linux-gnu/libc.so.6"
[0] #11  ? in "/home/generic/OpenFOAM/OpenFOAM-5.x/platforms/linux64GccDPInt32Opt/bin/simpleFoam"
[generic:07572] *** Process received signal ***
[generic:07572] Signal: Floating point exception (8)
[generic:07572] Signal code:  (-6)
[generic:07572] Failing at address: 0x3e800001d94
[generic:07572] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x43090)[0x7efd3c38c090]
[generic:07572] [ 1] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0xcb)[0x7efd3c38c00b]
[generic:07572] [ 2] /lib/x86_64-linux-gnu/libc.so.6(+0x43090)[0x7efd3c38c090]
[generic:07572] [ 3] /home/generic/OpenFOAM/OpenFOAM-5.x/platforms/linux64GccDPInt32Opt/lib/libOpenFOAM.so(_ZNK4Foam10GAMGSolver5scaleERNS_5FieldIdEES3_RKNS_9lduMatrixERKNS_10FieldFieldIS1_dEERKNS_8UPtrListIKNS_17lduInterfaceFieldEEERKS2_h+0x280)[0x7efd3cd8ebc0]
[generic:07572] [ 4] /home/generic/OpenFOAM/OpenFOAM-5.x/platforms/linux64GccDPInt32Opt/lib/libOpenFOAM.so(_ZNK4Foam10GAMGSolver6VcycleERKNS_7PtrListINS_9lduMatrix8smootherEEERNS_5FieldIdEERKS8_S9_S9_S9_S9_S9_RNS1_IS8_EESD_h+0x885)[0x7efd3cd91245]
[generic:07572] [ 5] /home/generic/OpenFOAM/OpenFOAM-5.x/platforms/linux64GccDPInt32Opt/lib/libOpenFOAM.so(_ZNK4Foam10GAMGSolver5solveERNS_5FieldIdEERKS2_h+0x524)[0x7efd3cd93564]
[generic:07572] [ 6] /home/generic/OpenFOAM/OpenFOAM-5.x/platforms/linux64GccDPInt32Opt/lib/libfiniteVolume.so(_ZN4Foam8fvMatrixIdE15solveSegregatedERKNS_10dictionaryE+0x187)[0x7efd3e854927]
[generic:07572] [ 7] simpleFoam(_ZN4Foam8fvMatrixIdE5solveERKNS_10dictionaryE+0x1d8)[0x55a98c55add8]
[generic:07572] [ 8] simpleFoam(_ZN4Foam8fvMatrixIdE5solveEv+0x19a)[0x55a98c55b0ea]
[generic:07572] [ 9] simpleFoam(+0x2b483)[0x55a98c50a483]
[generic:07572] [10] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf3)[0x7efd3c36d083]
[generic:07572] [11] simpleFoam(+0x2d56e)[0x55a98c50c56e]
[generic:07572] *** End of error message ***
--------------------------------------------------------------------------
Primary job  terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun noticed that process rank 0 with PID 0 on node generic exited on signal 8 (Floating point exception).
--------------------------------------------------------------------------
Any suggestions on how to avoid this error and get the solution to converge?

Best Regards
Atul Jaiswal

November 7, 2023, 06:38   #9
dlahaye
@atul:

You refer to blow-up in k and epsilon. We fixed similar issues in the past by setting minIter to 1 in the system/fvSolution dictionary for the k and epsilon solvers:

Code:
"(U|h|k|epsilon|R)"
{
    ...
    minIter 1;
}

Setting minIter to 1 forces the k and epsilon fields to be updated even when the residual is already small.

Not sure whether it solves your case; please try it, at least to rule that option out. We can look further afterwards.

@Tobermory:

Thanks! We are in the same boat trying to understand PIMPLE. I typically run reactingFoam and find myself needing to turn reactingFoam into a steady-state solver. More later.

November 7, 2023, 09:39   #10
Alczem
Senior Member
Join Date: Dec 2021
Posts: 207
Hey,


Regarding your fvSchemes file, if you have not already, try more robust settings. Your mesh is not perfectly orthogonal, and using linear for the divergence schemes has proven difficult on such meshes (in my experience at least), so I would suggest sticking to upwind until you have a working case. Using cellLimited for the gradients and uncorrected for the laplacian and snGrad schemes can also improve stability.
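A sketch of such a "robust first" fvSchemes, as an indicative starting point rather than settings tested on this case:

```
gradSchemes
{
    default         cellLimited Gauss linear 1;  // limited gradients for stability
}

divSchemes
{
    default          none;
    div(phi,U)       bounded Gauss upwind;       // first order, very robust
    div(phi,k)       bounded Gauss upwind;
    div(phi,epsilon) bounded Gauss upwind;
    div((nuEff*dev2(T(grad(U))))) Gauss linear;
}

laplacianSchemes
{
    default         Gauss linear uncorrected;    // no explicit non-orthogonal correction
}

snGradSchemes
{
    default         uncorrected;
}
```

Once the case runs stably, the divergence schemes can be switched back one by one towards linearUpwind/linear to recover accuracy.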


Regarding PIMPLE, I use several outer loops whenever I want to run at a Courant number over 1. I believe that is its main purpose (although it can also help stability for multiphase cases or when phase changes are involved). I achieved a Courant number of 20 with decent results using pimpleFoam for a wind turbine, using residualControl with relTol to stop the outer loops before reaching nOuterCorrectors.


As for the relTol and tolerance entries in the solvers of fvSolution, I would also be grateful for more information about the proper way to tune them. All I know is that the solver stops once the residual reaches the level set by tolerance, or once the ratio of the current residual to the initial residual of that solver call falls below relTol (setting relTol or tolerance to 0 disables the associated check, and the solver relies only on the other value).
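In fvSolution terms, that understanding corresponds to something like the sketch below (solver choice and values are illustrative):

```cpp
// system/fvSolution -- illustrative linear-solver entry
p
{
    solver          GAMG;
    smoother        GaussSeidel;
    tolerance       1e-7;   // stop when the residual drops below this level...
    relTol          0.01;   // ...or below 1% of the initial residual of this solve;
                            // set either entry to 0 to disable that check
}
```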
hogsonik and atul1018 like this.

Old   November 7, 2023, 12:00
Default
  #11
Senior Member
 
Domenico Lahaye
Join Date: Dec 2013
Posts: 723
Blog Entries: 1
dlahaye
Quote:
Originally Posted by Alczem View Post
Hey,

As for the relTol and tolerance entries in the solvers of fvSolution, I would also be grateful for more information about the proper way to tune them. All I know is that the solver stops once the residual reaches the level set by tolerance, or once the ratio of the current residual to the initial residual of that solver call falls below relTol (setting relTol or tolerance to 0 disables the associated check, and the solver relies only on the other value).
Thank you for your input.

There are at least two ways to address the issue on linear solvers that you correctly raise.

1/ The first way is to argue that the linear solvers should be adapted to the problem thrown at them. This is the idea of my earlier post in the thread "Regarding PETSc4FOAM" by openfoam.com/committees/hpc.

2/ The second way is to argue the inverse: the problem should be adapted to the linear solvers available. Experience shows that a small relaxation factor (in the range of 0.1 to 0.3) leads to linear systems that are easier to solve, i.e., that require fewer iterations to reach convergence. This is especially true in the initial stages of convergence (as Tobermory points out).
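For reference, such under-relaxation is set in fvSolution; the sketch below uses factors in the 0.1 to 0.3 range mentioned above (the field/equation split and exact values are illustrative):

```cpp
// system/fvSolution -- illustrative under-relaxation in the 0.1-0.3 range
relaxationFactors
{
    fields
    {
        p               0.2;
    }
    equations
    {
        U               0.3;
        "(k|epsilon)"   0.3;
    }
}
```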

Possibly the best approach is a blend of the two.

What do you think?

Old   November 8, 2023, 03:27
Default
  #12
Senior Member
 
Join Date: Dec 2021
Posts: 207
Alczem
Quote:
There are at least two ways to address the issue on linear solvers that you correctly raise.

1/ The first way is to argue that the linear solvers should be adapted to the problem thrown at them. This is the idea of my earlier post in the thread "Regarding PETSc4FOAM" by openfoam.com/committees/hpc.

2/ The second way is to argue the inverse: the problem should be adapted to the linear solvers available. Experience shows that a small relaxation factor (in the range of 0.1 to 0.3) leads to linear systems that are easier to solve, i.e., that require fewer iterations to reach convergence. This is especially true in the initial stages of convergence (as Tobermory points out).

Possibly the best approach is a blend of the two.
The points you raise in the linked thread are beyond my limited knowledge of what is happening under the hood, but I agree with a blended approach of selecting the correct solver while relaxing adequately. As you said, I tend to use high values for relTol and tolerance as well as low relaxation factors during the first iterations, and then tighten everything as the simulation progresses. But the values are set according to my experience and "feelings" rather than strict criteria. It usually works out for my current use, but I wish we had some kind of rule of thumb for this. Thanks for all the details!

Old   November 8, 2023, 04:51
Default
  #13
Senior Member
 
Join Date: Jun 2020
Posts: 100
atul1018
Hello Community


Quote:
Regarding your fvSchemes file, if you have not already, try more robust settings. Your mesh is not perfectly orthogonal, and using linear for the divergence schemes has proven difficult on such meshes (in my experience at least), so I would suggest sticking to upwind until you have a working case. Using cellLimited for the gradients and uncorrected for the laplacian and snGrad schemes can also improve stability.
Thanks for sharing your knowledge and experience. As mentioned here, I was able to run (at least for now, with the current mesh) both steady-state (simpleFoam) and transient (pimpleFoam) simulations by changing the divergence scheme for k and epsilon: for both, I am using bounded Gauss upwind.

Thanks again.

One question that I have: why does the simulation crash with the initial turbulence values from the calculator linked above? It runs only when I use values about 100 times larger than the suggested ones. I must admit that I don't have much information on the velocity, characteristic length, and turbulence intensity: for the velocity I take the laminar solution as a reference, the pipe diameter (near the outlet) as the characteristic length, and 2% turbulence intensity. All of it is just a rough guess.
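For reference, the usual estimates behind such calculators are the ones below; the velocity, intensity, and length-scale numbers in the comments are the rough guesses described above, not validated values:

```cpp
// Standard free-stream estimates (Cmu = 0.09, l ~ 0.07*D for pipe flow):
//   k       = 1.5*(I*|U|)^2
//   epsilon = pow(Cmu, 0.75)*pow(k, 1.5)/l
// e.g. with |U| = 0.5 m/s (laminar reference) and I = 0.02:
//   k = 1.5*(0.02*0.5)^2 = 1.5e-4 m^2/s^2
internalField   uniform 1.5e-4;   // in 0/k; value purely illustrative
```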

Best Regards
Atul Jaiswal


Tags
continuity error, pimplefoam, pressure driven flow

