
pressure eq. "converges" after few time steps

February 7, 2011, 04:20   #21
Dr. Alexander Vakhrushev (makaveli_lcf), Senior Member
First:
Code:
div((nuEff*dev(grad(U).T())))    Gauss linear corrected; // Mistyped
Second:

p
{
    solver          GAMG;
    tolerance       1e-12;
    relTol          0; // Not efficient for steady state solution, try ~0.05
    smoother        GaussSeidel;
    nPreSweeps      0;
    nPostSweeps     2;
    cacheAgglomeration true;
    nCellsInCoarsestLevel 10;
    agglomerator    faceAreaPair;
    mergeLevels     1;
}

My concern was that you are using pseudo-second-order schemes like linearUpwind with limiters, which cause convergence problems. Try tightening the velocity tolerance in fvSolution. In my experience, pressure residuals improve if you improve the accuracy of the velocity field calculation.
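As a minimal sketch of what I mean (the solver choice is just an example, adapt it to your own setup):
Code:
U
{
    solver          PBiCG;   // or smoothSolver with a GaussSeidel smoother
    preconditioner  DILU;
    tolerance       1e-08;   // tight absolute tolerance for the momentum equation
    relTol          0.1;
}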

To judge more reliably, first try running without corrections or limiting, using first-order upwind. I also have not had really good experience with the MDLimited schemes(((
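For such a first-order test, a minimal fvSchemes fragment could look like this (a sketch; adapt the field names to your case):
Code:
gradSchemes
{
    default         Gauss linear;        // no limiters
}

divSchemes
{
    div(phi,U)      Gauss upwind;        // pure first-order convection
    div(phi,k)      Gauss upwind;
    div(phi,epsilon) Gauss upwind;
}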

I am currently studying accuracy/convergence issues for different schemes (I hope it will result in a paper soon). So far, all limiting, and raising the order of the convection schemes to second and beyond, brings problems with convergence.
__________________
Best regards,

Dr. Alexander VAKHRUSHEV

Christian Doppler Laboratory for "Metallurgical Applications of Magnetohydrodynamics"

Simulation and Modelling of Metallurgical Processes
Department of Metallurgy
University of Leoben

http://smmp.unileoben.ac.at

February 7, 2011, 04:28   #22
Dr. Alexander Vakhrushev (makaveli_lcf), Senior Member
Also an important comment from Prof. Jasak regarding the need for non-orthogonal corrections in your case: http://www.cfd-online.com/Forums/ope...tml#post233066

February 7, 2011, 04:42   #23
Dr. Alexander Vakhrushev (makaveli_lcf), Senior Member
BTW, your solution has not converged to the steady state yet.
[Attachment: Resid.png]

February 7, 2011, 04:47   #24
maddalena, Senior Member
thank you, however:
Quote:
Originally Posted by makaveli_lcf View Post
div((nuEff*dev(grad(U).T()))) Gauss linear;
corrected is extra, isn't it?
Quote:
Originally Posted by makaveli_lcf View Post
relTol 0; // Not efficient for steady state solution, try ~0.05
This will not improve the initial convergence, since the solver will stop once relTol reaches 0.05: it will not iterate down to 1e-12 but stop at 0.05. And the pressure equation will not converge smoothly, I guess. What is your experience on the subject? What do you mean by not efficient?
Quote:
Originally Posted by makaveli_lcf View Post
Try tightening the velocity tolerance in fvSolution. In my experience, pressure residuals improve if you improve the accuracy of the velocity field calculation.
Somewhere else it has been suggested to use a pressure tolerance two orders of magnitude lower than the velocity tolerance; therefore, as I lower the velocity tolerance, I must lower the pressure tolerance as well. This was suggested as a "remedy" for the greater difficulty the pressure equation has in reaching convergence.
Quote:
Originally Posted by makaveli_lcf View Post
To judge more reliably, first try running without corrections or limiting, using first-order upwind. I also have not had really good experience with the MDLimited schemes(((
Contrary to what is reported above, the fvSchemes attached to the last post used a linear upwind scheme...
Quote:
Originally Posted by makaveli_lcf View Post
So far, all limiting, and raising the order of the convection schemes to second and beyond, brings problems with convergence.
Does that apply to tet meshes, or to hex meshes as well?
Quote:
Originally Posted by makaveli_lcf View Post
I hope it will result in a paper soon
Hope to get it when published!

Therefore, my next steps are:
  1. use relTol 0.05 on p (see the sketch below) -> BTW, why is it not efficient?
  2. lower the U tolerance
  3. use first order everywhere.
One more question: is this setup convergent, but not accurate?
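For step 1, the p dictionary posted above would then read (a sketch, with only relTol changed):
Code:
p
{
    solver          GAMG;
    tolerance       1e-12;
    relTol          0.05; // instead of 0
    smoother        GaussSeidel;
    nPreSweeps      0;
    nPostSweeps     2;
    cacheAgglomeration true;
    nCellsInCoarsestLevel 10;
    agglomerator    faceAreaPair;
    mergeLevels     1;
}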

mad

February 7, 2011, 04:48   #25
Vesselin Krastev (vkrastev), Senior Member
Quote:
Originally Posted by maddalena View Post
Ok, maybe this is not the right place to post, but I hope to get some answers on a problem that is similar to the one posted above...
I started this thread speaking of a pressure equation that converges too soon... and now I am writing about a pressure equation that is never solved within the 1000 iterations of a time step! The geometry and boundary conditions are similar to those described above, only the pipe geometry is a little more complex than sketched. checkMesh does not complain:
Code:
    Overall domain bounding box (-37.4532 -6.70564 -3.99289e-17) (42.605 6.70578 27.2094)
    Mesh (non-empty, non-wedge) directions (1 1 1)
    Mesh (non-empty) directions (1 1 1)
    Boundary openness (-2.78883e-18 -1.17153e-15 -2.36782e-14) OK.
    Max cell openness = 3.29759e-16 OK.
    Max aspect ratio = 42.4261 OK.
    Minumum face area = 1.27273e-06. Maximum face area = 9.60387.  Face area magnitudes OK.
    Min volume = 1.12921e-09. Max volume = 8.07969.  Total volume = 9723.47.  Cell volumes OK.
    Mesh non-orthogonality Max: 69.699 average: 18.046
    Non-orthogonality check OK.
    Face pyramids OK.
    Max skewness = 0.956692 OK.
fvSchemes and fvSolution are:
Code:
grad         faceMDLimited Gauss linear 0.5;
div         Gauss linearUpwind cellLimited Gauss linear 1;
laplacian   Gauss linear limited 0.5;
Code:
p
    {
        solver          GAMG;
        tolerance       1e-10;
        relTol          0;
        smoother        GaussSeidel;
        nPreSweeps      0;
        nPostSweeps     2;
        cacheAgglomeration true;
        nCellsInCoarsestLevel 10;
        agglomerator    faceAreaPair;
        mergeLevels     1;
    }

    U
    {
        solver          PBiCG;
        preconditioner  DILU;
        tolerance       1e-08;
        relTol          0;
    }
    
    k epsilon
    {
        solver          smoothSolver;
        smoother    GaussSeidel;
        tolerance       1e-08;
        relTol          0;
    }
nNonOrthogonalCorrectors 3;

relaxationFactors
{
    p       0.15;
    U       0.3;
    k       0.3;
    epsilon 0.3;
}
and this is my weird log.simpleFoam file:
Code:
DILUPBiCG:  Solving for Ux, Initial residual = 0.0028965, Final residual = 2.16561e-11, No Iterations 7
DILUPBiCG:  Solving for Uy, Initial residual = 0.00286544, Final residual = 2.35329e-11, No Iterations 7
DILUPBiCG:  Solving for Uz, Initial residual = 0.00271231, Final residual = 2.42359e-11, No Iterations 7
GAMG:  Solving for p, Initial residual = 0.127338, Final residual = 7.19827e-06, No Iterations 1000
GAMG:  Solving for p, Initial residual = 0.0408166, Final residual = 2.54205e-06, No Iterations 1000
GAMG:  Solving for p, Initial residual = 0.0144267, Final residual = 1.11529e-06, No Iterations 1000
GAMG:  Solving for p, Initial residual = 0.00831105, Final residual = 1.09388e-07, No Iterations 1000
time step continuity errors : sum local = 8.4358e-08, global = -1.12046e-09, cumulative = 7.57121e-10
smoothSolver:  Solving for epsilon, Initial residual = 0.0201266, Final residual = 4.78163e-11, No Iterations 10
smoothSolver:  Solving for k, Initial residual = 0.00307404, Final residual = 3.2731e-11, No Iterations 10
I am using 4 non-orthogonal correctors to try to lower the p residuals. However, as you can see, the pressure equation does not reach convergence within the 1000 iterations x 4 of each time step. Of course, the velocity, turbulence and pressure fields are far from what is expected.
What should I do? What should I change? I really need your help!

mad

1) Why such a severe convergence criterion within a single time step? I have a bit of experience with simpleFoam on tetra/prism meshes, and in my runs I prefer to set a relative convergence criterion of 10^-05 for the pressure and 10^-03 for the other quantities: generally speaking, this should be more than enough to reach satisfactory final convergence

2) Why only 10 cells in the coarsest level for the GAMG solver? How big is your mesh? I'm not a matrix-solver expert, but I've read somewhere here in the forum that the number of cells in the coarsest level should be roughly equal to the square root of the number of cells in the domain (and this setting has been found appropriate for my runs as well)

3) What kind of k-epsilon model are you using? In my experience, among the linear high-Re models implemented in OpenFOAM, the standard k-epsilon is the most stable, then comes the realizable k-epsilon and finally the RNG, which is less dissipative than the others and thus harder to bring to convergence

4) Why limit both the velocity and pressure gradients? The pressure gradient is used at each iteration loop inside the pressure-velocity correction procedure, so imposing a limiter on it could perhaps improve numerical convergence, but it can also lead to unphysical results (which is indeed what happened in my runs)

5) Finally, try to set Gauss linearUpwindV cellMDLimited Gauss linear 1 only for div(phi,U), and simply Gauss upwind for the other convective terms: in my runs with the realizable k-epsilon model and standard wall functions, using a higher-order interpolation scheme for the turbulent quantities as well does not have a significant influence on the accuracy of the results.
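Written as an fvSchemes fragment, point 5 would read something like this (a sketch assuming a standard k-epsilon setup):
Code:
divSchemes
{
    default          none;
    div(phi,U)       Gauss linearUpwindV cellMDLimited Gauss linear 1;
    div(phi,k)       Gauss upwind;
    div(phi,epsilon) Gauss upwind;
}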

Best Regards

V.

February 7, 2011, 04:50   #26
maddalena, Senior Member
Quote:
Originally Posted by makaveli_lcf View Post
BTW, your solution has not converged to the steady state yet.
Attachment 6377
Yes, I know! That is exactly the point!

February 7, 2011, 04:58   #27
maddalena, Senior Member
Hello V, and thank you for joining the discussion.
Quote:
Originally Posted by vkrastev View Post
1) Why such a severe convergence criterion within a single time step?
This has been suggested here
Quote:
Originally Posted by vkrastev View Post
2) [...] the number of cells in the coarsest level should be roughly equal to the square root of the number of cells in the domain (and this setting has been found appropriate for my runs as well)
Good to know. I will set it up properly.
Quote:
Originally Posted by vkrastev View Post
3) What kind of k-epsilon model are you using?
Launder-Sharma KE + Low Re wall function.
Quote:
Originally Posted by vkrastev View Post
4) Why limit both the velocity and pressure gradients? The pressure gradient is used at each iteration loop inside the pressure-velocity correction procedure, so imposing a limiter on it could perhaps improve numerical convergence, but it can also lead to unphysical results (which is indeed what happened in my runs)
I wanted to improve the numerical convergence...
Quote:
Originally Posted by vkrastev View Post
5) Finally, try to set Gauss linearUpwindV cellMDLimited Gauss linear 1 only for div(phi,U), and simply Gauss upwind for the other convective terms
Ok, I only now recall your suggestion...

Let us keep in touch.

mad

February 7, 2011, 04:58   #28
Dr. Alexander Vakhrushev (makaveli_lcf), Senior Member
You are running a steady-state case, so there is no point in converging all your equations to the absolute tolerance at every iteration; you will reach it with the steady state. Thus relTol 0.05 for p and 0.1 for U (with smoothSolver instead of CG) is a good choice.
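A sketch of what I mean, starting from your dictionaries above (only the relative tolerances and the U solver changed):
Code:
p
{
    solver          GAMG;
    tolerance       1e-10;
    relTol          0.05;          // instead of 0
    smoother        GaussSeidel;
    nPreSweeps      0;
    nPostSweeps     2;
    cacheAgglomeration true;
    nCellsInCoarsestLevel 10;
    agglomerator    faceAreaPair;
    mergeLevels     1;
}

U
{
    solver          smoothSolver;  // instead of PBiCG
    smoother        GaussSeidel;
    tolerance       1e-08;
    relTol          0.1;           // instead of 0
}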

Did you reach the state where all your residuals (apart from pressure) become flat?

February 7, 2011, 05:03   #29
maddalena, Senior Member
Quote:
Originally Posted by makaveli_lcf View Post
Also an important comment from Prof. Jasak regarding the need for non-orthogonal corrections in your case: http://www.cfd-online.com/Forums/ope...tml#post233066
and
Quote:
Originally Posted by makaveli_lcf View Post
Thus relTol 0.05 for p and 0.1 for U (with smoothSolver instead of CG) is a good choice.
Mmm... these go in the opposite direction of what was suggested by Alberto here.

Quote:
Originally Posted by makaveli_lcf View Post
Did you reach the state where all your residuals (apart from pressure) become flat?
Never reached it... Simulation crashed before I could get to it.

February 7, 2011, 05:28   #30
Vesselin Krastev (vkrastev), Senior Member
Quote:
Originally Posted by maddalena View Post
Hello V, and thank you to join the discussion
You are welcome


Quote:
Originally Posted by maddalena View Post
This has been suggested here
I think that there are some issues about the tolerances that have to be clarified: lowering the absolute convergence criterion a lot ensures that if the residual of one of the variables you are solving for eventually falls to a much lower value than the others (in OF this happens, for instance, with omega in the k-omega turbulence model), the code will keep solving for every variable, thus avoiding the "danger" of an only partially resolved global problem. However, it is also my opinion (I know that other CFD users and developers have different ideas about it) that forcing the solver to push the residuals down to extremely low values at every time step is not as useful as it seems to be (especially if combined with under-relaxed solution practices, such as SIMPLE). Therefore, I prefer to set the absolute tolerances to very low values (10^-11/10^-12), but also to keep control over the relative convergence criterion (as I wrote in my previous post), and so far this practice has proven quite effective for my cases...

Good luck with your run

V.

PS - Improving numerical convergence is meaningful only if you reach satisfactory physical behavior of your system: if not, it simply becomes a mathematical exercise...

February 7, 2011, 05:36   #31
maddalena, Senior Member
Quote:
Originally Posted by vkrastev View Post
I think that there are some issues about the tolerances that have to be clarified: [...] forcing the solver to push the residuals down to extremely low values at every time step is not as useful as it seems to be (especially if combined with under-relaxed solution practices, such as SIMPLE).
That is the best explanation I have had on the subject up to now.
I will try to put it in practice and report my (hopefully good) results.

mad

February 7, 2011, 05:48   #32
Dr. Alexander Vakhrushev (makaveli_lcf), Senior Member
My question regarding convergence and residuals here: why do the k-epsilon equations sometimes stop being solved with a tolerance of 10^-5???
Velocity and pressure have tolerances of the order of, let's say, 10^-5 and 10^-8 respectively at the same time. The k-epsilon fields then seem to be frozen when observing the results. If the tolerance is set to 10^-14, the k-epsilon pattern changes dramatically and "looks" more physical.
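For illustration, the kind of entry I mean (a sketch, reusing the smoothSolver settings posted earlier in this thread):
Code:
"(k|epsilon)"
{
    solver      smoothSolver;
    smoother    GaussSeidel;
    tolerance   1e-14;   // instead of 1e-05, so the equations keep being solved
    relTol      0;
}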

February 7, 2011, 06:30   #33
Franco Marra (francescomarra), Member
Dear Maddalena,

convergence issues are always really frustrating! My post is probably not real help, just an attempt to shed some different light.

The pressure residuals seem to be reduced by 7 orders of magnitude after 1000 iterations. Did you try reducing the maximum number of iterations to see how much the residuals diminish after, say, 300 iterations? If you get the same order of magnitude, this could indicate that the solver has been trapped in a false solution.
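If I remember correctly, the linear-solver dictionaries in fvSolution accept an optional maxIter entry (its default of 1000 matches the cap seen in the log); a sketch, to be verified against your OpenFOAM version:
Code:
p
{
    solver          GAMG;
    tolerance       1e-10;
    relTol          0;
    maxIter         300;   // cap the inner iterations
    // remaining GAMG entries as in your fvSolution
}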

Maybe you can try to solve on a coarser grid, then interpolate the coarse solution onto your final grid.

Another attempt could be to adopt a transient solver, to see where the solution blows up (if this happens again) and try to identify the critical issue.

I am sorry I do not have a real solution.

Regards,

Franco

February 7, 2011, 06:36   #34
Arjun (arjun), Senior Member
Quote:
Originally Posted by maddalena View Post

Good to know. I will set up it properly.
A coarsest-level cell count of 10 is fine.

I will explain this issue. OpenFOAM uses additive corrective multigrid, in which a coarse level is generated by merging two cells at a time into one. Since it is an algebraic multigrid, no physical cells are generated; two equations simply merge and become one.
So if you started with, say, N equations, the next coarse level should ideally have N/2 cells. I say ideally because some cells escape this agglomeration process and are merged with neighbours that are already part of the coarse level, so some coarse equations may consist of 3, 4 or more equations.

So the multigrid levels would fall like this:
N, N/2, N/4, N/8 ...

The coarsest-level parameter that you set to 10 says that below this number of equations a direct solver is used (instead of an iterative solver as on the other levels).
For this reason the number should be small enough to be solved efficiently by the direct method. If the direct method is of order O(N^3), you may be in trouble if you choose this parameter large.

For example, if your mesh has 1 million cells, then sqrt(1000000) = 1000,
so if you set that, you will have to wait a long time for the direct solver to finish. It will kill the efficiency of your solver.

Usually this parameter is set to 50 to 100. I personally use 1 in my multigrid solver (but that is a different multigrid).
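To connect this with the fvSolution syntax, a sketch (the values are illustrative only):
Code:
p
{
    solver                GAMG;
    smoother              GaussSeidel;
    agglomerator          faceAreaPair;
    mergeLevels           1;
    cacheAgglomeration    true;
    nCellsInCoarsestLevel 50;   // levels shrink roughly N, N/2, N/4, ...;
                                // below ~50 equations a direct solve is used
    tolerance             1e-10;
    relTol                0.05;
}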

February 7, 2011, 07:37   #35
maddalena, Senior Member
Quote:
Originally Posted by francescomarra View Post
The pressure residuals seem to be reduced by 7 orders of magnitude after 1000 iterations. Did you try reducing the maximum number of iterations to see how much the residuals diminish after, say, 300 iterations? If you get the same order of magnitude, this could indicate that the solver has been trapped in a false solution.
Hi Franco, and thanks for your comment. How can I set the maximum iteration number? Is there a parameter missing from my fvSolution where I can set it?
Quote:
Originally Posted by arjun View Post
A coarsest-level cell count of 10 is fine. [...] Usually this parameter is set to 50 to 100. I personally use 1 in my multigrid solver (but that is a different multigrid).
Wow, detailed answer on the subject, thanks.
Quote:
Originally Posted by vkrastev View Post
Why only 10 cells in the coarsest level for the GAMG solver? How big is your mesh? I'm not a matrix-solver expert, but I've read somewhere here in the forum that the number of cells in the coarsest level should be roughly equal to the square root of the number of cells in the domain (and this setting has been found appropriate for my runs as well)
V, can you please comment on that? Can you share your experience on the subject?

Any other comment on the subject? This thread is becoming really interesting...

mad

February 7, 2011, 08:21   #36
Felix L. (FelixL), Senior Member
Hello, everybody,


Quote:
Originally Posted by vkrastev View Post
therefore, I prefer to set the absolute tolerances to very low values (10^-11/10^-12), but also to keep control over the relative convergence criterion (as I wrote in my previous post), and so far this practice has proven quite effective for my cases...

this is exactly the same experience I've had with my simpleFoam simulations. My best practice is usually to set the tolerance for every quantity to 1e-9 or lower and relTol to 0.1. The quite low tolerance values ensure the governing equations keep being solved during the whole simulation (avoiding, e.g., p no longer being solved for), and the large relative tolerances reduce the number of inner iterations for each quantity, thus (usually) avoiding >1000 iterations per outer iteration.

This usually works like a charm for me. I manually check for convergence and stop the simulation when all the initial residuals have fallen below a certain tolerance (1e-4, for example). In the next version of OF this can be automatically accomplished by using the new residualControl function.
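A sketch of this combination (the residualControl block is the new feature mentioned above, so treat its exact syntax as an assumption until the release):
Code:
p
{
    solver          GAMG;
    smoother        GaussSeidel;
    cacheAgglomeration true;
    nCellsInCoarsestLevel 10;
    agglomerator    faceAreaPair;
    mergeLevels     1;
    tolerance       1e-09;   // low, so p keeps being solved
    relTol          0.1;     // limits the inner iterations per outer iteration
}

SIMPLE
{
    residualControl   // stop when the initial residuals fall below these
    {
        p               1e-4;
        U               1e-4;
        "(k|epsilon)"   1e-4;
    }
}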

Just remember: the low tolerance value isn't imposed for accuracy, but to make sure that the equations keep being solved. OF handles residual control a bit differently than e.g. FLUENT.


Another thing: have you tried a different solver for p than GAMG? I recently found that PCG can be a lot faster than GAMG for certain cases, which may sound a bit weird given all the benefits of multigrid methods, but it's just my experience. Give it a shot if you haven't already!
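If you want to try it, a minimal sketch of a PCG entry for p (the preconditioner choice is mine):
Code:
p
{
    solver          PCG;
    preconditioner  DIC;    // the p matrix is symmetric
    tolerance       1e-09;
    relTol          0.1;
}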


Greetings,
Felix.

February 7, 2011, 08:42   #37
Vesselin Krastev (vkrastev), Senior Member
Quote:
Originally Posted by maddalena View Post
V, can you please comment on that? Can you share your experience on the subject?
As I said in the previous post, my knowledge about GAMG (and, in general, the algebraic solvers employed for CFD problems) is far from exhaustive... I found a post here in the forum (unfortunately I don't remember the name of the discussion) where a kind of criterion for choosing the number of cells in the coarsest level was suggested (the square root of the number of cells in the domain), and then I tried it for my runs: with domains of a few millions of cells (1.5 up to 5 millions) I can tell you that passing from 1000 to 50 cells does not have any effect on the solver efficiency, but this doesn't prove whether such a criterion is universally correct or not (probably the differences in efficiency will come out on smaller domains, where the solver will reach the coarsest level faster and then start to solve directly, as said in the comment posted above). However, all of this is to remark that, differently from what I said concerning the tolerances and the SIMPLE procedure, about GAMG I can speak only from personal experience, and therefore for your case the solver's behavior could be different.

V.

February 7, 2011, 08:47   #38
Vesselin Krastev (vkrastev), Senior Member
Quote:
Originally Posted by FelixL View Post
[...] have you tried a different solver for p than GAMG? I recently found that PCG can be a lot faster than GAMG for certain cases. [...]
Hi Felix,
I've tried both PCG and GAMG as solvers for p, but my experience goes in a different direction than yours... In particular, when the relTol parameter for p is quite low (I usually use something like 10^-04/10^-05), the GAMG solver is much faster than PCG. Can you explain for which cases you have found the opposite behavior?
Thanks

V.

February 7, 2011, 08:51   #39
maddalena, Senior Member
Quote:
Originally Posted by vkrastev View Post
with domains of a few millions of cells (1.5 up to 5 millions) I can tell you that passing from 1000 to 50 cells does not have any effect on the solver efficiency, but this doesn't prove whether such a criterion is universally correct or not
Ok, that is fine. Seen from a different point of view, this says that using a lower cell count will spare me a long simulation without significant improvement in the results.
Quote:
Originally Posted by vkrastev View Post
Can you explain for which cases you have found the opposite behavior?
That is interesting for me as well.


There is a simulation running with the setup suggested above; I hope to get a decent result from it in a few hours.
Stay tuned.

mad

PS: btw, you adopted V as your official signature? that sounds good!

February 7, 2011, 08:54   #40
Vesselin Krastev (vkrastev), Senior Member
Quote:
Originally Posted by maddalena View Post
Ok, that is fine. Seen from a different point of view, this says that using a lower cell count will spare me a long simulation without significant improvement in the results.
Yes, that's my understanding too


Quote:
Originally Posted by maddalena View Post
PS: btw, you adopted V as your official signature? that sounds good!
I simply like informal signatures...

Tags
convergence issues, pipe flow, simplefoam