CFD Online Discussion Forums (https://www.cfd-online.com/Forums/)
-   OpenFOAM Running, Solving & CFD (https://www.cfd-online.com/Forums/openfoam-solving/)
-   -   pressure eq. "converges" after few time steps (https://www.cfd-online.com/Forums/openfoam-solving/81461-pressure-eq-converges-after-few-time-steps.html)

maddalena October 27, 2010 11:34

pressure eq. "converges" after few time steps
 
1 Attachment(s)
Hi everybody,
weird simpleFoam convergence over here, need your help!
I have a complex pipe geometry, similar to what is sketched in the attached geom.png. The two main pipes are connected by fans to the outside, which is represented by a spherical domain. The Reynolds number is around 21000 in the smallest pipe, so a launderSharmaKE model is applied, using wall functions to keep the cell count low. In any case, the mesh is not really fine, since I first want to evaluate my setup. The BCs are standard (a sketch of the corresponding 0/ entries follows the list):
  • external domain:
    • U, epsilon, k inletOutlet;
    • p fixedValue 0;
  • pipes:
    • U: fixedValue;
    • epsilon, k wallFunction;
    • p zeroGradient;
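
For reference, a minimal sketch of how a couple of these entries could look in the 0/ field files - the patch name external is hypothetical and the values are illustrative, not taken from this case:
Code:

    // 0/U, external spherical boundary
    external
    {
        type            inletOutlet;
        inletValue      uniform (0 0 0);
        value           uniform (0 0 0);
    }

    // 0/p, external spherical boundary
    external
    {
        type            fixedValue;
        value           uniform 0;
    }

The k and epsilon entries on the external patch would use inletOutlet in the same way.
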
fvSchemes is as follows:
Code:

grad        faceMDLimited Gauss linear 0.5;
div        Gauss linearUpwind cellLimited Gauss linear 1;
laplacian  Gauss linear limited 0.5;

while I tried different combinations for fvSolution:
Code:

this is the first one:
    p
    {
        solver          GAMG;
        tolerance      1e-06;
        relTol          0;
        smoother        GaussSeidel;
        nPreSweeps      0;
        nPostSweeps    2;
        cacheAgglomeration true;
        nCellsInCoarsestLevel 10;
        agglomerator    faceAreaPair;
        mergeLevels    1;
    }

    U epsilon k
    {
        solver          smoothSolver;
        smoother        GaussSeidel;
        tolerance      1e-04;
        relTol          0;
    }

and the second one:
Code:

  p
    {
        solver          GAMG;
        tolerance      1e-6;
        relTol          1e-3;
        smoother        GaussSeidel;
        nPreSweeps      0;
        nPostSweeps    2;
        cacheAgglomeration true;
        nCellsInCoarsestLevel 10;
        agglomerator    faceAreaPair;
        mergeLevels    1;
    }

    U
    {
        solver          PBiCG;
        preconditioner  DILU;
        tolerance      1e-4;
        relTol          1e-3;
    }
   
    k epsilon
    {
        solver          smoothSolver;
        smoother    GaussSeidel;
        tolerance      1e-4;
        relTol          1e-3;
    }

I ended up with this log file:
Code:

Time = 1

smoothSolver:  Solving for Ux, Initial residual = 0, Final residual = 0, No Iterations 0    //correct, I applied two fans!
smoothSolver:  Solving for Uy, Initial residual = 0, Final residual = 0, No Iterations 0
smoothSolver:  Solving for Uz, Initial residual = 0, Final residual = 0, No Iterations 0
GAMG:  Solving for p, Initial residual = 1, Final residual = 9.59218e-07, No Iterations 393
GAMG:  Solving for p, Initial residual = 9.29457e-08, Final residual = 9.29457e-08, No Iterations 0
GAMG:  Solving for p, Initial residual = 9.29457e-08, Final residual = 9.29457e-08, No Iterations 0
time step continuity errors : sum local = 0.00122424, global = -1.32307e-12, cumulative = -1.32307e-12
smoothSolver:  Solving for epsilon, Initial residual = 0.454767, Final residual = 9.2459e-06, No Iterations 1
smoothSolver:  Solving for k, Initial residual = 1, Final residual = 7.38315e-05, No Iterations 1
ExecutionTime = 174.05 s  ClockTime = 175 s

Time = 2

smoothSolver:  Solving for Ux, Initial residual = 0.141374, Final residual = 5.00021e-05, No Iterations 5
smoothSolver:  Solving for Uy, Initial residual = 0.272753, Final residual = 7.82825e-05, No Iterations 5
smoothSolver:  Solving for Uz, Initial residual = 0.202637, Final residual = 8.15781e-05, No Iterations 5
GAMG:  Solving for p, Initial residual = 8.8497e-08, Final residual = 8.8497e-08, No Iterations 0
GAMG:  Solving for p, Initial residual = 8.8497e-08, Final residual = 8.8497e-08, No Iterations 0
GAMG:  Solving for p, Initial residual = 8.8497e-08, Final residual = 8.8497e-08, No Iterations 0
time step continuity errors : sum local = 0.000634968, global = 7.45987e-07, cumulative = 7.45986e-07
smoothSolver:  Solving for epsilon, Initial residual = 0.164053, Final residual = 2.52305e-06, No Iterations 1
smoothSolver:  Solving for k, Initial residual = 0.24006, Final residual = 1.351e-05, No Iterations 1
ExecutionTime = 192.39 s  ClockTime = 194 s

Does this imply that the pressure is already converged at the second time step??? :eek:
This does not change when:
- reducing the nNonOrthogonalCorrectors;
- using the second fvSolution file;
- switching off the turbulence.

Any explanation for this? I really have no idea...

thank you!

maddalena

FelixL October 27, 2010 12:00

Hi,


try lowering the tolerances of the linear solvers. Something in the order of 1e-12 might be appropriate.

Remember, your simulation is not converged just because the residuals of one equation fall below a certain tolerance. The pressure field depends on the velocity field and vice versa - since your velocity field clearly hasn't converged after the second iteration, your whole problem hasn't (it would be very surprising if it had), and the pressure equation might - and probably will - be re-solved at some point.


Greetings,
Felix.

francescomarra October 28, 2010 06:48

Dear Maddalena,

I would also try changing the initial conditions: I am not familiar with your problem, but maybe you can force pressure gradients to develop by assigning non-zero velocity values in the pipes and zero outside.

Regards,

Franco

maddalena November 2, 2010 09:28

Thanks Felix and Francesco for your suggestions.
Some observations after some more tries:
Quote:

Originally Posted by FelixL (Post 281036)
try lowering the tolerances of the linear solvers. Something in the order of 1e-12 might be appropriate.

That helped a lot. Now I have p and U equations that are solved at every time step.
Quote:

Originally Posted by FelixL (Post 281036)
The pressure field depends on the velocity field and vice versa - since your velocity field clearly hasn't converged after the second iteration, your whole problem hasn't (it would be very surprising if it had), and the pressure equation might - and probably will - be re-solved at some point.

Yes, of course I have a feeling of this. But is it correct to first solve the U equation and then, after some time, solve p? Doesn't this take longer than solving all the equations at the same time?
Quote:

Originally Posted by francescomarra (Post 281159)
I would also try changing the initial conditions: I am not familiar with your problem, but maybe you can force pressure gradients to develop by assigning non-zero velocity values in the pipes and zero outside.

About the boundary conditions, I am almost sure that everything is fine, since I have already used this setup in the past. However, you suggest initializing the field in the pipes with different velocity values, is that correct? If so, how can I do that?
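
For what it's worth, the standard tool for this kind of initialization in OpenFOAM is the setFields utility. A minimal system/setFieldsDict sketch - the box coordinates and the velocity value are purely illustrative, not taken from this case:
Code:

    defaultFieldValues
    (
        volVectorFieldValue U (0 0 0)       // zero velocity everywhere...
    );

    regions
    (
        boxToCell
        {
            box (0 0 0) (2 0.5 0.5);        // hypothetical box enclosing one pipe
            fieldValues
            (
                volVectorFieldValue U (5 0 0)   // ...except inside the box
            );
        }
    );

Running setFields from the case directory then overwrites 0/U inside the box.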

What I have got up to now is a converging but unstable solution: I have bounding epsilon and k warnings, but a convergent solution until time step 295. After that, the maximum epsilon and k rise suddenly and the solution blows up:
Code:

Time = 295

DILUPBiCG:  Solving for Ux, Initial residual = 0.11505, Final residual = 8.32225e-11, No Iterations 13
DILUPBiCG:  Solving for Uy, Initial residual = 0.934182, Final residual = 7.36365e-11, No Iterations 14
DILUPBiCG:  Solving for Uz, Initial residual = 0.713126, Final residual = 4.13259e-11, No Iterations 13
GAMG:  Solving for p, Initial residual = 1.86515e-09, Final residual = 9.94925e-13, No Iterations 206
GAMG:  Solving for p, Initial residual = 6.88289e-10, Final residual = 7.66889e-13, No Iterations 9
time step continuity errors : sum local = 4.67002e-09, global = 1.51568e-10, cumulative = -2.05894e-08
smoothSolver:  Solving for epsilon, Initial residual = 0.00158922, Final residual = 7.46827e-11, No Iterations 24
bounding epsilon, min: 1.30954e-18 max: 5812.76 average: 93.9024
smoothSolver:  Solving for k, Initial residual = 7.82377e-09, Final residual = 4.92151e-11, No Iterations 5
bounding k, min: 6.82659e-17 max: 14.436 average: 0.424139
ExecutionTime = 65389.8 s  ClockTime = 65515 s

Time = 296

DILUPBiCG:  Solving for Ux, Initial residual = 0.328206, Final residual = 5.57292e-11, No Iterations 33
DILUPBiCG:  Solving for Uy, Initial residual = 0.904646, Final residual = 3.81614e-11, No Iterations 30
DILUPBiCG:  Solving for Uz, Initial residual = 0.678265, Final residual = 3.27509e-11, No Iterations 29
GAMG:  Solving for p, Initial residual = 0.000198723, Final residual = 9.87374e-13, No Iterations 723
GAMG:  Solving for p, Initial residual = 5.88726e-07, Final residual = 9.89924e-13, No Iterations 273
time step continuity errors : sum local = 2.72317e-07, global = -1.44975e-09, cumulative = -2.20392e-08
smoothSolver:  Solving for epsilon, Initial residual = 0.998411, Final residual = 5.92556e-11, No Iterations 35
bounding epsilon, min: 1.30845e-18 max: 7.92429e+07 average: 1496.69
smoothSolver:  Solving for k, Initial residual = 1.39098e-07, Final residual = 5.71729e-11, No Iterations 5
bounding k, min: -2.01645e-14 max: 449863 average: 2.44943
ExecutionTime = 65807.2 s  ClockTime = 65932 s

Time = 297

DILUPBiCG:  Solving for Ux, Initial residual = 0.693738, Final residual = 2.49235e-11, No Iterations 28
DILUPBiCG:  Solving for Uy, Initial residual = 0.745261, Final residual = 2.71405e-11, No Iterations 28
DILUPBiCG:  Solving for Uz, Initial residual = 0.715263, Final residual = 2.6154e-11, No Iterations 28
GAMG:  Solving for p, Initial residual = 1.06952e-05, Final residual = 9.67823e-13, No Iterations 738
GAMG:  Solving for p, Initial residual = 3.03894e-09, Final residual = 9.87472e-13, No Iterations 31
time step continuity errors : sum local = 0.00097431, global = 7.21803e-06, cumulative = 7.196e-06
smoothSolver:  Solving for epsilon, Initial residual = 0.587168, Final residual = 7.92945e-11, No Iterations 18
bounding epsilon, min: 1.56953e-17 max: 9.66762e+12 average: 6.49312e+07
smoothSolver:  Solving for k, Initial residual = 0.00708516, Final residual = 1.51465e-11, No Iterations 11
bounding k, min: -3.00985e-15 max: 3.33247e+10 average: 154254
ExecutionTime = 66136 s  ClockTime = 66261 s

Time = 298

DILUPBiCG:  Solving for Ux, Initial residual = 0.652017, Final residual = 5.22231e-11, No Iterations 23
DILUPBiCG:  Solving for Uy, Initial residual = 0.523133, Final residual = 6.68451e-11, No Iterations 21
DILUPBiCG:  Solving for Uz, Initial residual = 0.677302, Final residual = 8.86501e-11, No Iterations 22
GAMG:  Solving for p, Initial residual = 4.62234e-08, Final residual = 9.77244e-13, No Iterations 566
GAMG:  Solving for p, Initial residual = 2.08294e-10, Final residual = 7.37324e-13, No Iterations 4
time step continuity errors : sum local = 0.0110342, global = 0.000335115, cumulative = 0.000342311
smoothSolver:  Solving for epsilon, Initial residual = 0.00963571, Final residual = 9.99631e-11, No Iterations 26
bounding epsilon, min: -2.55095e-15 max: 2.70603e+18 average: 2.26696e+13
smoothSolver:  Solving for k, Initial residual = 0.00019889, Final residual = 6.88818e-11, No Iterations 21
bounding k, min: -2.62403e-13 max: 6.83816e+11 average: 6.84336e+06
ExecutionTime = 66386.5 s  ClockTime = 66512 s

I tried reducing the relaxation factors (down to 0.3 on U, epsilon and k and 0.15 on p) and switching to upwind. No luck. The weird thing is that the solution looks good at time 295!
Ideas?

maddalena

l_r_mcglashan November 2, 2010 09:34

You could look at the individual cell residuals? That might help you figure out what's causing the instability.

maddalena November 2, 2010 09:40

Quote:

Originally Posted by l_r_mcglashan (Post 281833)
You could look at the individual cell residuals?

That sounds interesting. How can I do that?

l_r_mcglashan November 2, 2010 09:43

I need to do it myself and am just about to look into it :D

You can jump ahead here:
http://www.cfd-online.com/Forums/ope...residuals.html

maddalena November 2, 2010 09:53

Quote:

Originally Posted by l_r_mcglashan (Post 281842)
I need to do it myself and am just about to look into it

Ok, so you need a workmate! I will look into it as well.
Stay in touch.

mad

maddalena November 2, 2010 11:04

Quote:

Originally Posted by l_r_mcglashan (Post 281833)
You could look at the individual cell residuals

Done. I modified the simpleFoamResidual utility that hrvoje posted in order to match the new turbulence definition. What I get is a plot showing that uResidual is higher at the corners of my pipe geometry, i.e. at the points where the flow is more difficult to model. Does this mean that my mesh is not fine enough at those points?


mad

l_r_mcglashan November 2, 2010 11:09

Same. Popped it up on github:

http://github.com/lrm29/OpenFOAM.loc...dualCalculator

OpenFOAM-1.7.x has some errorEstimation libraries which would be nice to use. The residual that simpleFoamResidual calculates needs to be normalised (by the initial state possibly?) so that the changes from step to step can be seen more clearly.

l_r_mcglashan November 2, 2010 12:08

Quote:

Originally Posted by maddalena (Post 281857)
What I get is a plot showing that uResidual is higher at the corners of my pipe geometry, i.e. at the points where the flow is more difficult to model. Does this mean that my mesh is not fine enough at those points?

Were the residuals increasing in those areas? It's difficult to say, there are a number of problems it could be related to. Versteeg and Malalasekera's book has a nice chapter on possible causes of problems.

Another thing that may be worth checking is that the mass flow into and out of your domain matches.
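
One quick way to check this in the OpenFOAM 1.x-era versions should be the patchIntegrate utility, which integrates a field over a patch; applied to the flux field phi it returns the volumetric flow rate. The patch names below are hypothetical:
Code:

    patchIntegrate phi fanInlet     # net volume flux through the fan inlet patch
    patchIntegrate phi fanOutlet    # should balance the inlet value at convergence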

maddalena December 2, 2010 10:11

Why is my solution not converging?
 
Hello,
still here with the same problem on a different case: the pressure and velocity fields look good and their residual trends are "converging"; however, the local residual values (from simpleFoamResidual) do not look good, and I have weird oscillations in the initial residuals at every time step.
My mesh (a tet mesh with no boundary layer) is nice. I have tried tons of different BC-fvSchemes-fvSolution combinations. No luck.
May this be due to the high number of cyclic patches I am using? Two fan BCs + cyclic sides.
I do not know what to think now. Suggestions?

mad

maddalena February 7, 2011 03:33

Reversed problem!
 
Ok, maybe this is not the right place to post, but I hope to get some answers on a problem that is similar to what was posted above...
I started this thread speaking of a pressure equation that converges too soon... and I am writing now about a pressure equation that is never solved within the 1000 iterations of a time step! The geometry and BCs are similar to what was described above, only the pipe geometry is a little more complex than sketched. checkMesh does not complain about it:
Code:

    Overall domain bounding box (-37.4532 -6.70564 -3.99289e-17) (42.605 6.70578 27.2094)
    Mesh (non-empty, non-wedge) directions (1 1 1)
    Mesh (non-empty) directions (1 1 1)
    Boundary openness (-2.78883e-18 -1.17153e-15 -2.36782e-14) OK.
    Max cell openness = 3.29759e-16 OK.
    Max aspect ratio = 42.4261 OK.
    Minumum face area = 1.27273e-06. Maximum face area = 9.60387.  Face area magnitudes OK.
    Min volume = 1.12921e-09. Max volume = 8.07969.  Total volume = 9723.47.  Cell volumes OK.
    Mesh non-orthogonality Max: 69.699 average: 18.046
    Non-orthogonality check OK.
    Face pyramids OK.
    Max skewness = 0.956692 OK

fvSchemes and fvSolution are:
Code:

grad        faceMDLimited Gauss linear 0.5;
div        Gauss linearUpwind cellLimited Gauss linear 1;
laplacian  Gauss linear limited 0.5;

Code:

p
    {
        solver          GAMG;
        tolerance      1e-10;
        relTol          0;
        smoother        GaussSeidel;
        nPreSweeps      0;
        nPostSweeps    2;
        cacheAgglomeration true;
        nCellsInCoarsestLevel 10;
        agglomerator    faceAreaPair;
        mergeLevels    1;
    }

    U
    {
        solver          PBiCG;
        preconditioner  DILU;
        tolerance      1e-08;
        relTol          0;
    }
   
    k epsilon
    {
        solver          smoothSolver;
        smoother    GaussSeidel;
        tolerance      1e-08;
        relTol          0;
    }
nNonOrthogonalCorrectors 3;

relaxationFactors
{
    p           0.15;
    U           0.3;
    k           0.3;
    epsilon     0.3;
}

and this is my weird log.simpleFoam file:
Code:

DILUPBiCG:  Solving for Ux, Initial residual = 0.0028965, Final residual = 2.16561e-11, No Iterations 7
DILUPBiCG:  Solving for Uy, Initial residual = 0.00286544, Final residual = 2.35329e-11, No Iterations 7
DILUPBiCG:  Solving for Uz, Initial residual = 0.00271231, Final residual = 2.42359e-11, No Iterations 7
GAMG:  Solving for p, Initial residual = 0.127338, Final residual = 7.19827e-06, No Iterations 1000
GAMG:  Solving for p, Initial residual = 0.0408166, Final residual = 2.54205e-06, No Iterations 1000
GAMG:  Solving for p, Initial residual = 0.0144267, Final residual = 1.11529e-06, No Iterations 1000
GAMG:  Solving for p, Initial residual = 0.00831105, Final residual = 1.09388e-07, No Iterations 1000
time step continuity errors : sum local = 8.4358e-08, global = -1.12046e-09, cumulative = 7.57121e-10
smoothSolver:  Solving for epsilon, Initial residual = 0.0201266, Final residual = 4.78163e-11, No Iterations 10
smoothSolver:  Solving for k, Initial residual = 0.00307404, Final residual = 3.2731e-11, No Iterations 10

I am using 3 nonOrthogonalCorrectors (so the pressure equation is solved 4 times per time step) in order to try to lower the p residuals. However, as you can see, the pressure equation does not reach convergence within its 1000-iteration cap in any of those 4 solves. Of course, the velocity, turbulence and pressure fields are far from what I expect.
What should I do? What should I change? I really need some help from you!

mad

makaveli_lcf February 7, 2011 03:40

maddalena

Are you doing some internal loops?

maddalena February 7, 2011 03:42

Hello,
fastest answer ever!
Quote:

Originally Posted by makaveli_lcf (Post 293912)
Are you doing some internal loops?

No, that is standard simpleFoam, so no internal loops.

mad

makaveli_lcf February 7, 2011 03:44

:cool:

Could you please post more of your log?

maddalena February 7, 2011 03:55

1 Attachment(s)
Yes, sure. Here it is.
As you can see, something strange happens around time step 17 on Uz. Meanwhile, the pressure does what I explained above. :confused:

mad

makaveli_lcf February 7, 2011 03:58

... and your fvSchemes and fvSolution in the studio please)))

maddalena February 7, 2011 04:00

2 Attachment(s)
Quote:

Originally Posted by makaveli_lcf (Post 293916)
... and your fvSchemes and fvSolution in the studio please)))

Written above. However, I tried different combinations of them. The log file posted above refers to these two files.

mad

maddalena February 7, 2011 04:15

Maybe it is worth adding that my mesh is mainly tetrahedral. I used prisms in the proximity of the fans, for the reason explained here. Also, the answer I got here is a bit worrying. Any experience on the subject?

mad

makaveli_lcf February 7, 2011 04:20

First:
Code:

    div((nuEff*dev(grad(U).T())))    Gauss linear corrected; // Mistyped

Second:
Code:

    p
    {
        solver          GAMG;
        tolerance       1e-12;
        relTol          0;  // Not efficient for steady state solution, try ~0.05
        smoother        GaussSeidel;
        nPreSweeps      0;
        nPostSweeps     2;
        cacheAgglomeration true;
        nCellsInCoarsestLevel 10;
        agglomerator    faceAreaPair;
        mergeLevels     1;
    }

My thought was that you are using some pseudo-2nd-order schemes like linearUpwind with limiters, which cause problems with convergence. Try tightening the velocity tolerance in fvSolution. In my experience, the pressure residuals will improve if you improve the accuracy of the velocity field calculation.

To judge more correctly, first try to run without corrections and limiting, using first-order upwind (see the sketch below). I also have not had really good experience with the MDLimited schemes(((
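
A minimal first-order test along these lines might use a divSchemes block like the following - assuming the standard field names:
Code:

    divSchemes
    {
        default                         none;
        div(phi,U)                      Gauss upwind;   // first-order convection
        div(phi,k)                      Gauss upwind;
        div(phi,epsilon)                Gauss upwind;
        div((nuEff*dev(grad(U).T())))   Gauss linear;
    }

with plain Gauss linear for the gradients and no limiters anywhere.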

I am currently studying accuracy/convergence issues for different schemes (I hope it will result in a paper soon). So far, all limiting, and raising convection schemes to 2nd order and beyond, brings problems with convergence.

makaveli_lcf February 7, 2011 04:28

Also, an important comment from Prof. Jasak regarding the need for non-orthogonal corrections in your case: http://www.cfd-online.com/Forums/ope...tml#post233066

makaveli_lcf February 7, 2011 04:42

1 Attachment(s)
BTW, your solution has not converged to the steady state yet.
Attachment 6377

maddalena February 7, 2011 04:47

thank you, however:
Quote:

Originally Posted by makaveli_lcf (Post 293922)
div((nuEff*dev(grad(U).T()))) Gauss linear;

corrected is extra, isn't it?
Quote:

Originally Posted by makaveli_lcf (Post 293922)
relTol 0; // Not efficient for steady state solution, try ~0.05

This will not improve the initial convergence, since the solver will stop when the relative residual reaches 0.05: it will not run down to 1e-12 but stop at 0.05. And the pressure equation will not converge smoothly, I guess. What is your experience on the subject? What do you mean by "not efficient"?
Quote:

Originally Posted by makaveli_lcf (Post 293922)
Try tightening the velocity tolerance in fvSolution. In my experience, the pressure residuals will improve if you improve the accuracy of the velocity field calculation.

Somewhere else it has been suggested to use a pressure tolerance two orders of magnitude lower than the velocity tolerance. Therefore, as I lower the velocity tolerance, I must lower the pressure tolerance as well. This was suggested as a "remedy" for the greater difficulty the pressure equation has in reaching convergence.
Quote:

Originally Posted by makaveli_lcf (Post 293922)
To judge more correctly, first try to run without corrections and limiting, using first-order upwind. I also have not had really good experience with the MDLimited schemes(((

Contrary to what is reported above, the fvSchemes attached to the last post used a linearUpwind scheme...
Quote:

Originally Posted by makaveli_lcf (Post 293922)
So far, all limiting, and raising convection schemes to 2nd order and beyond, brings problems with convergence.

Does this apply to tet meshes, or to hexa as well?
Quote:

Originally Posted by makaveli_lcf (Post 293922)
I hope it will result in a paper soon

Hope to get it when published!

Therefore, my next steps are:
  1. use relTol 0.05 on p -> BTW, why is relTol 0 not efficient?
  2. lower the U tolerance;
  3. use first order everywhere.
One more question: is this setup convergent, but not accurate?

mad

vkrastev February 7, 2011 04:48

Quote:

Originally Posted by maddalena (Post 293910)
Ok, maybe this is not the right place to post, but I hope to get some answers on a problem that is similar to what was posted above... I started this thread speaking of a pressure equation that converges too soon... and I am writing now about a pressure equation that is never solved within the 1000 iterations of a time step! [...] What should I do? What should I change? I really need some help from you!

1) Why such a severe convergence criterion within the single time step? I have a bit of experience with simpleFoam on tetra/prism meshes, and in my runs I prefer to set a relative convergence criterion of 10^-05 for the pressure and of 10^-03 for the other quantities: generally speaking, this should be more than enough to reach a satisfactory final convergence.

2) Why only 10 cells in the coarsest level for the GAMG solver? How big is your mesh? I'm not a matrix-solver expert, but I've read somewhere here in the forum that the number of cells in the coarsest level should be roughly equal to the square root of the number of cells in the domain (and this setting has been found to be appropriate for my runs as well).

3) What kind of k-epsilon model are you using? In my experience, among the linear high-Re models implemented in OpenFOAM, the standard k-epsilon is the most stable, then comes the realizable k-epsilon and finally the RNG, which is less dissipative than the others and thus more difficult to bring to convergence.

4) Why limit both the velocity and pressure gradients? The pressure gradient is used at each iteration loop inside the pressure-velocity correction procedure, so imposing a limiter on it could maybe improve numerical convergence, but it can also lead to unphysical results (this is indeed what happened in my runs).

5) Finally, try to set Gauss linearUpwindV cellMDLimited Gauss linear 1 only for div(phi,U), and simply Gauss upwind for the other convective terms (a sketch follows below): in my runs with the realizable k-epsilon model and standard wall functions, setting a higher-order interpolation scheme for the turbulent quantities too does not have a significant influence on the accuracy of the results.
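
As a sketch, point 5 would translate into a divSchemes block like this - field names assumed:
Code:

    divSchemes
    {
        default             none;
        div(phi,U)          Gauss linearUpwindV cellMDLimited Gauss linear 1;
        div(phi,k)          Gauss upwind;
        div(phi,epsilon)    Gauss upwind;
    }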

Best Regards

V.

maddalena February 7, 2011 04:50

Quote:

Originally Posted by makaveli_lcf (Post 293930)
BTW, your solution has not converged to the steady state yet.
Attachment 6377

Yes, I know! That is exactly the point!

maddalena February 7, 2011 04:58

Hello V, and thank you for joining the discussion
Quote:

Originally Posted by vkrastev (Post 293933)
1) Why such a severe convergence criterion within the single time step?

This has been suggested here
Quote:

Originally Posted by vkrastev (Post 293933)
2) [...] the number of cells in the coarsest level should be roughly equal to the square root of the number of cells in the domain (and this setting has been found to be appropriate for my runs as well)

Good to know. I will set it up properly.
Quote:

Originally Posted by vkrastev (Post 293933)
3) What kind of k-epsilon model are you using?

Launder-Sharma KE + Low Re wall function.
Quote:

Originally Posted by vkrastev (Post 293933)
4) Why limit both the velocity and pressure gradients? The pressure gradient is used at each iteration loop inside the pressure-velocity correction procedure, so imposing a limiter on it could maybe improve numerical convergence, but it can also lead to unphysical results (this is indeed what happened in my runs)

I wanted to improve the numerical convergence...
Quote:

Originally Posted by vkrastev (Post 293933)
5) Finally, try to set Gauss linearUpwindV cellMDLimited Gauss linear 1 only for div(phi,U), and simply Gauss upwind for the other convective terms

Ok, I only now recall your suggestion...

Let us keep in touch.

mad

makaveli_lcf February 7, 2011 04:58

You are running steady state; there is no point in converging all your equations to the absolute tolerance value, since you will reach it as the steady state is approached. Thus 0.05 for p and 0.1 for U (with the smoothSolver instead of CG) is a good decision.
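
A sketch of what such relTol-driven settings could look like in fvSolution, reusing the GAMG entries from the posts above:
Code:

    p
    {
        solver          GAMG;
        tolerance       1e-12;  // low absolute tolerance only as a safety net
        relTol          0.05;   // this is what limits the work per outer iteration
        smoother        GaussSeidel;
        nPreSweeps      0;
        nPostSweeps     2;
        cacheAgglomeration true;
        nCellsInCoarsestLevel 10;
        agglomerator    faceAreaPair;
        mergeLevels     1;
    }

    U
    {
        solver          smoothSolver;
        smoother        GaussSeidel;
        tolerance       1e-12;
        relTol          0.1;
    }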

Did you reach the state where all your residuals (apart from pressure) become flat?

maddalena February 7, 2011 05:03

Quote:

Originally Posted by makaveli_lcf (Post 293923)
Also an important comment from Prof. Jasak regarding the need in nonOrtho corrections in your case http://www.cfd-online.com/Forums/ope...tml#post233066

and
Quote:

Originally Posted by makaveli_lcf (Post 293938)
Thus 0.05 for p and 0.1 for U (with the smoothSolver instead of CG) is a good decision.

Mmm... these go in the opposite direction of what was suggested by Alberto here.

Quote:

Originally Posted by makaveli_lcf (Post 293938)
Did you reach the state where all your residuals (apart from pressure) become flat?

Never reached it... :o The simulation crashed before I could get there.

vkrastev February 7, 2011 05:28

Quote:

Originally Posted by maddalena (Post 293937)
Hello V, and thank you for joining the discussion

You are welcome ;)


Quote:

Originally Posted by maddalena (Post 293937)
This has been suggested here

I think there are some issues about the tolerances that have to be clarified: lowering the absolute convergence criterion a lot ensures that if the residual of one of the variables you are solving for eventually falls to a much lower value than the others (in OF this happens, for instance, with omega in the k-omega turbulence model), the code will keep solving for every variable, thus avoiding the "danger" of an only partially resolved global problem. However, it is also my opinion (I know that other CFD users and developers have different ideas about it) that forcing the solver to push the residuals down to extremely low values at every time step is not as useful as it seems to be (especially when combined with under-relaxed solution practices, such as SIMPLE): therefore, I prefer to set the absolute tolerances to very low values (10^-11/10^-12), but also to keep control over the relative convergence criterion (as I wrote in my previous post), and so far this kind of practice has proved to be quite effective for my cases...

Good luck with your run

V.

PS - Improving numerical convergence is meaningful only if you reach satisfactory physical behavior of your system: if not, it simply becomes a mathematical exercise...

maddalena February 7, 2011 05:36

Quote:

Originally Posted by vkrastev (Post 293943)
I think there are some issues about the tolerances that have to be clarified: lowering the absolute convergence criterion a lot ensures that if the residual of one of the variables you are solving for eventually falls to a much lower value than the others (in OF this happens, for instance, with omega in the k-omega turbulence model), the code will keep solving for every variable, thus avoiding the "danger" of an only partially resolved global problem. However, it is also my opinion (I know that other CFD users and developers have different ideas about it) that forcing the solver to push the residuals down to extremely low values at every time step is not as useful as it seems to be (especially when combined with under-relaxed solution practices, such as SIMPLE).

That is the best explanation I have had on the subject up to now.
I will try to put it in practice and report my (hopefully good) results.

mad

makaveli_lcf February 7, 2011 05:48

My question regarding convergence and residuals here:
why do the k-epsilon equations sometimes stop being solved with a tolerance of 10^-5???
Velocity and pressure have tolerance settings of the order of, let's say, 10^-5 and 10^-8 respectively at the same time. The k-epsilon fields then seem to be frozen when observing the results. If the tolerance is set to 10^-14, the k-epsilon pattern changes dramatically and "looks" more physical.

francescomarra February 7, 2011 06:30

Dear Maddalena,

convergence issues are always really frustrating! My post is probably not a real help, just an attempt to shine some different light on the problem.

The residuals on pressure seem to be reduced by several orders of magnitude after 1000 iterations. Did you try reducing the maximum number of iterations to see how much the residuals are diminished after, say, 300 iterations? If you get the same order of magnitude, this could indicate that the solver has been trapped in a false solution.
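
If memory serves, the linear solvers in fvSolution accept an optional maxIter entry for exactly this kind of experiment (the default cap is 1000); a sketch:
Code:

    p
    {
        solver          GAMG;
        maxIter         300;    // stop after at most 300 inner iterations
        tolerance       1e-10;
        relTol          0;
        // ... remaining GAMG entries as in the posts above
    }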

Maybe you can try to solve on a coarser grid, then interpolate the coarse solution onto your final grid.
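
The interpolation step can be done with the mapFields utility, run from the fine case - assuming, for illustration, that the coarse case lives in ../coarseCase and both cases cover the same geometry with the same boundary conditions:
Code:

    mapFields ../coarseCase -consistent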

Another attempt could be to adopt a transient solver, to see where the solution blows up (if this happens again) and try to identify the critical issue.

I am sorry I do not have a real solution.

Regards,

Franco

arjun February 7, 2011 06:36

Quote:

Originally Posted by maddalena (Post 293937)

Good to know. I will set up it properly.

Coarse level cells set to 10 is fine.

I will explain this issue. OpenFOAM uses additive corrective multigrid. Here, a coarse level is generated by merging two cells at a time into one cell. Since it is an algebraic multigrid, no physical cells are actually created: two equations are merged and become one.
So if you start with, say, N equations, the next coarse level should ideally have N/2 cells. I say ideally because some cells escape this agglomeration process and are then merged with neighbours that are already part of the coarse level, so some coarse equations may consist of, say, 3 or 4 or more equations.

So the multigrid levels fall like this:
N, N/2, N/4, N/8 ....

The parameter nCellsInCoarsestLevel, which you set to 10, says that below this number of equations a direct solver will be used (instead of an iterative solver as on the other levels).
For this reason the number should be small enough to be solved efficiently by a direct method. Imagine the direct method is of order O(N^3): then you may be in trouble if you choose this parameter large.

Take for example a mesh with 1 million cells: sqrt(1000000) = 1000.
If you set that, you will have to wait a long time for the direct solver to finish. It will kill the efficiency of your solver.

Usually this parameter is set to 50 to 100. I personally use 1 in my multigrid solver (but that is a different multigrid).
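
To put rough numbers on this for a 1-million-cell mesh: with nCellsInCoarsestLevel 10 the hierarchy is about log2(10^6/10) ≈ 17 levels deep, and the direct solve at the bottom handles only ~10 equations, which costs next to nothing. Setting it to sqrt(10^6) = 1000 removes only about 7 of those levels, while an O(N^3) direct solve on 1000 equations costs on the order of 10^9 operations every time a cycle reaches the bottom.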

maddalena February 7, 2011 07:37

Quote:

Originally Posted by francescomarra (Post 293949)
The residuals on pressure seem to be reduced by several orders of magnitude after 1000 iterations. Did you try reducing the maximum number of iterations to see how much the residuals are diminished after, say, 300 iterations? If you get the same order of magnitude, this could indicate that the solver has been trapped in a false solution.

Hi Franco, and thanks for your comment. How can I set the maximum iteration number? Is there a parameter missing from my fvSolution where I can set it?
Quote:

Originally Posted by arjun (Post 293951)
Coarse level cells set to 10 is fine. [...] Usually this parameter is set to 50 to 100. [...]

Wow, what a detailed answer on the subject, thanks.
Quote:

Originally Posted by vkrastev (Post 293933)
Why only 10 cells in the coarsest level for the GAMG solver? How big is your mesh? I'm not a matrix-solver expert, but I've read somewhere here in the forum that the number of cells in the coarsest level should be roughly equal to the square root of the number of cells in the domain (and this setting has been found to be appropriate for my runs as well)

V, can you please comment on that? Can you share your experience on the subject?

Any other comments on the subject? This thread is becoming really interesting...

mad

FelixL February 7, 2011 08:21

Hello, everybody,


Quote:

Originally Posted by vkrastev (Post 293943)
therefore, I do prefer to set the absolute tolerances to very low values (10^-11/10^-12), but also to keep control over the relative convergence criterion (as I wrote in my previous post), and till now this kind of practice has found to be quite effective for my cases...


this is exactly the same experience I've had with my simpleFoam simulations. My best practice is usually to set the tolerance for every quantity to 1e-9 or lower and relTol to 0.1. The quite low tolerance values ensure the governing equations keep being solved during the whole simulation process (avoiding e.g. p not being solved for anymore), and the big relative tolerances reduce the number of inner iterations for each quantity, thus (usually) avoiding >1000 iterations per outer iteration.

This usually works like a charm for me. I manually check for convergence and stop the simulation when all the initial residuals have fallen below a certain tolerance (1e-4, for example). In the next version of OF this can be accomplished automatically by using the new residualControl function.
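
For reference, in the OpenFOAM versions that have it, residualControl sits in the SIMPLE dictionary of fvSolution; a sketch with illustrative values:
Code:

    SIMPLE
    {
        nNonOrthogonalCorrectors 1;
        residualControl
        {
            p               1e-4;
            U               1e-4;
            "(k|epsilon)"   1e-4;
        }
    }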

Just remember: the low tolerance value isn't imposed for accuracy, but to make sure that the equations keep being solved. OF handles residual control a bit differently than e.g. FLUENT.


Another thing: have you tried a different solver for p than GAMG? I recently found that PCG can be a lot faster than GAMG for certain cases. This may sound a bit weird given all the benefits of multigrid methods, but it's just my experience. Give it a shot if you haven't already!
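
For reference, a drop-in PCG alternative for the p solver could look like this - DIC being the usual preconditioner for the symmetric pressure matrix, tolerances illustrative:
Code:

    p
    {
        solver          PCG;
        preconditioner  DIC;
        tolerance       1e-9;
        relTol          0.1;
    }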


Greetings,
Felix.

vkrastev February 7, 2011 08:42

Quote:

Originally Posted by maddalena (Post 293958)
V, can you please comment on that? Can you share your experience on the subject?

As I said in the previous post, my knowledge of the GAMG (and, in general, of the algebraic solvers employed for CFD problems) is far from exhaustive... I found here in the forum a post (unfortunately I don't remember the name of the discussion) where a kind of criterion for the choice of the number of cells in the coarsest level was suggested (the square root of the number of cells in the domain), and then I tried it in my runs: with domains of a few million cells (1.5 up to 5 million) I can tell you that passing from 1000 to 50 cells does not have any effect on the solver efficiency, but this doesn't prove that such a criterion is universally correct or not (probably the differences in efficiency will come out in smaller domains, where the solver will reach the coarsest level faster and then start to solve directly, as said in the comment posted above). However, all of this is to remark that, differently from what I said concerning the tolerances and the SIMPLE procedure, about GAMG I can talk only from personal experience, and therefore for your case the solver's behavior could be different.

V.

vkrastev February 7, 2011 08:47

Quote:

Originally Posted by FelixL (Post 293969)
[...] Another thing: have you tried a different solver for p than GAMG? I recently found that PCG can be a lot faster than GAMG for certain cases. [...]

Hi Felix,
I've tried both PCG and GAMG as solvers for p, but my experience goes in a different direction from yours... In particular, when the relTol parameter for p starts to be quite low (I usually use something like 10^-04/10^-05), the GAMG solver is much faster than the PCG one. Can you explain for which cases you have found the opposite behavior?
Thanks

V.

maddalena February 7, 2011 08:51

Quote:

Originally Posted by vkrastev (Post 293972)
with domains of a few million cells (1.5 up to 5 million) I can tell you that passing from 1000 to 50 cells does not have any effect on the solver efficiency, but this doesn't prove that such a criterion is universally correct or not

Ok, that is fine. Seen from a different point of view, this says that using a lower cell number will save me from a long simulation with no significant improvement in the results.
Quote:

Originally Posted by vkrastev (Post 293974)
Can you explain for which cases you have found the opposite behavior?

That is interesting for me as well.


There is a simulation running with the setup suggested above. Hopefully in a few hours I can get a decent result from it.
Stay tuned.

mad

PS: btw, you adopted V as your official signature? ;) that sounds good!

vkrastev February 7, 2011 08:54

Quote:

Originally Posted by maddalena (Post 293976)
Ok, that is fine. Seen from a different point of view, this says that using a lower cell number will save me from a long simulation with no significant improvement in the results.

Yes, that's my understanding too.


Quote:

Originally Posted by maddalena (Post 293976)
PS: btw, you adopted V as your official signature? ;) that sounds good!

I simply like informal signatures...;)

