
pressure eq. "converges" after few time steps 

February 7, 2011, 05:20 

#21 
Senior Member

First
Code:
    div((nuEff*dev(grad(U).T())))    Gauss linear corrected; // Mistyped

    p
    {
        solver                GAMG;
        tolerance             1e-12;
        relTol                0;    // Not efficient for a steady-state solution, try ~0.05
        smoother              GaussSeidel;
        nPreSweeps            0;
        nPostSweeps           2;
        cacheAgglomeration    true;
        nCellsInCoarsestLevel 10;
        agglomerator          faceAreaPair;
        mergeLevels           1;
    }
My thought was that you are using some pseudo-second-order schemes such as linearUpwind with limiters, which cause the problems with convergence. Try to tighten the velocity tolerance in fvSolution: in my experience, the pressure residuals improve if you improve the accuracy of the velocity field calculation. To judge more reliably, first try a run without corrections and limiting, using first-order upwind. I have also had rather poor experience with the MDLimited schemes. I am currently studying accuracy/convergence issues for different schemes (I hope it will result in a paper soon). So far, all limiting, as well as raising the convection schemes to second order and beyond, brings problems with convergence.
__________________
Best regards, Dr. Alexander VAKHRUSHEV Christian Doppler Laboratory for "Metallurgical Applications of Magnetohydrodynamics" Simulation and Modelling of Metallurgical Processes Department of Metallurgy University of Leoben http://smmp.unileoben.ac.at 
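A minimal first-order, correction-free fvSchemes baseline for the test suggested above might look like this (a sketch only; the uncorrected laplacian/snGrad entries are one common way to switch off non-orthogonality correction, and the exact entries may need adapting to your case):
Code:
    gradSchemes      { default Gauss linear; }
    divSchemes       { default Gauss upwind; }
    laplacianSchemes { default Gauss linear uncorrected; }
    snGradSchemes    { default uncorrected; }
Once this baseline converges cleanly, the limiters and higher-order schemes can be reintroduced one at a time to see which one breaks the convergence.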

February 7, 2011, 05:28 

#22 
Senior Member

Also an important comment from Prof. Jasak regarding the need for non-orthogonality corrections in your case: http://www.cfdonline.com/Forums/ope...tml#post233066

February 7, 2011, 05:42 

#23 
Senior Member

BTW, your solution has not converged to the steady state yet.
Resid.png

February 7, 2011, 05:47 

#24  
Senior Member
maddalena
Join Date: Mar 2009
Posts: 436
Rep Power: 18 
thank you, however:
"corrected" is extra, isn't it?
Hope to get it when published! Therefore, my next steps are:
mad

February 7, 2011, 05:48 

#25 
Senior Member
Vesselin Krastev
Join Date: Jan 2010
Location: University of Tor Vergata, Rome
Posts: 368
Rep Power: 16 
Quote:
Originally Posted by maddalena
Ok, maybe this is not the right place to post, but I hope to get some answers on a problem that is similar to what was posted above... I started this thread speaking of a pressure equation that converges too soon... and now I am writing about a pressure equation that is never solved within the 1000 iterations of a time step! The geometry and boundary conditions are similar to those described above, only the pipe geometry is a little more complex than sketched. checkMesh does not complain about it:
Code:
    Overall domain bounding box (37.4532 6.70564 3.99289e-17) (42.605 6.70578 27.2094)
    Mesh (non-empty, non-wedge) directions (1 1 1)
    Mesh (non-empty) directions (1 1 1)
    Boundary openness (2.78883e-18 1.17153e-15 2.36782e-14) OK.
    Max cell openness = 3.29759e-16 OK.
    Max aspect ratio = 42.4261 OK.
    Minimum face area = 1.27273e-06. Maximum face area = 9.60387. Face area magnitudes OK.
    Min volume = 1.12921e-09. Max volume = 8.07969. Total volume = 9723.47. Cell volumes OK.
    Mesh non-orthogonality Max: 69.699 average: 18.046
    Non-orthogonality check OK.
    Face pyramids OK.
    Max skewness = 0.956692 OK
fvSchemes and fvSolution are:
Code:
    grad        faceMDLimited Gauss linear 0.5;
    div         Gauss linearUpwind cellLimited Gauss linear 1;
    laplacian   Gauss linear limited 0.5;
Code:
    p
    {
        solver                GAMG;
        tolerance             1e-10;
        relTol                0;
        smoother              GaussSeidel;
        nPreSweeps            0;
        nPostSweeps           2;
        cacheAgglomeration    true;
        nCellsInCoarsestLevel 10;
        agglomerator          faceAreaPair;
        mergeLevels           1;
    }
    U
    {
        solver          PBiCG;
        preconditioner  DILU;
        tolerance       1e-08;
        relTol          0;
    }
    k
    {
        solver          smoothSolver;
        smoother        GaussSeidel;
        tolerance       1e-08;
        relTol          0;
    }
    epsilon
    {
        solver          smoothSolver;
        smoother        GaussSeidel;
        tolerance       1e-08;
        relTol          0;
    }
    nNonOrthogonalCorrectors 3;
    relaxationFactors
    {
        p               0.15;
        U               0.3;
        k               0.3;
        epsilon         0.3;
    }
Code:
    DILUPBiCG:  Solving for Ux, Initial residual = 0.0028965, Final residual = 2.16561e-11, No Iterations 7
    DILUPBiCG:  Solving for Uy, Initial residual = 0.00286544, Final residual = 2.35329e-11, No Iterations 7
    DILUPBiCG:  Solving for Uz, Initial residual = 0.00271231, Final residual = 2.42359e-11, No Iterations 7
    GAMG:  Solving for p, Initial residual = 0.127338, Final residual = 7.19827e-06, No Iterations 1000
    GAMG:  Solving for p, Initial residual = 0.0408166, Final residual = 2.54205e-06, No Iterations 1000
    GAMG:  Solving for p, Initial residual = 0.0144267, Final residual = 1.11529e-06, No Iterations 1000
    GAMG:  Solving for p, Initial residual = 0.00831105, Final residual = 1.09388e-07, No Iterations 1000
    time step continuity errors : sum local = 8.4358e-08, global = 1.12046e-09, cumulative = 7.57121e-10
    smoothSolver:  Solving for epsilon, Initial residual = 0.0201266, Final residual = 4.78163e-11, No Iterations 10
    smoothSolver:  Solving for k, Initial residual = 0.00307404, Final residual = 3.2731e-11, No Iterations 10
What should I do? What should I change? I really need help from you!
mad

1) Why such a severe convergence criterion within a single time step? I have some experience with simpleFoam on tetra/prism meshes, and in my runs I prefer a relative convergence criterion of 10^-05 for the pressure and 10^-03 for the other quantities: generally speaking, this should be more than enough to reach a satisfactory final convergence.
2) Why only 10 cells in the coarsest level for the GAMG solver? How big is your mesh? I'm not a matrix-solver expert, but I've read somewhere on this forum that the number of cells in the coarsest level should be roughly equal to the square root of the number of cells in the domain (and that setting has proved appropriate for my runs as well).
3) What kind of k-epsilon model are you using? In my experience, among the linear high-Re models implemented in OpenFOAM, the standard k-epsilon is the most stable, then comes the realizable k-epsilon, and finally the RNG, which is less dissipative than the others and thus more difficult to bring to convergence.
4) Why limit both the velocity and pressure gradients? The pressure gradient is used at each iteration loop inside the pressure-velocity correction procedure, so imposing a limiter on it might improve numerical convergence, but it can also lead to unphysical results (this is indeed what happened in my runs).
5) Finally, try setting Gauss linearUpwindV cellMDLimited Gauss linear 1 only for div(phi,U), and simply Gauss upwind for the other convective terms: in my runs with the realizable k-epsilon model and standard wall functions, using a higher-order interpolation scheme for the turbulent quantities too does not have a significant influence on the accuracy of the results.
Best Regards
V.
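Point 5 above could be sketched in fvSchemes roughly as follows (an illustration only; the phi flux name and the exact divSchemes keys follow the usual OpenFOAM conventions and may differ in your case):
Code:
    divSchemes
    {
        default                       none;
        // second order, V-limited, only for momentum
        div(phi,U)                    Gauss linearUpwindV cellMDLimited Gauss linear 1;
        // plain first-order upwind for the turbulent quantities
        div(phi,k)                    Gauss upwind;
        div(phi,epsilon)              Gauss upwind;
        div((nuEff*dev(grad(U).T()))) Gauss linear;
    }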

February 7, 2011, 05:50 

#26  
Senior Member
maddalena
Join Date: Mar 2009
Posts: 436
Rep Power: 18 


February 7, 2011, 05:58 

#27  
Senior Member
maddalena
Join Date: Mar 2009
Posts: 436
Rep Power: 18 
Hello V, and thank you for joining the discussion.
Launder-Sharma k-epsilon + low-Re wall functions.
Let us keep in touch.
mad

February 7, 2011, 05:58 

#28 
Senior Member

You are running steady state; there is no point in converging all your equations to the absolute tolerance value within each time step, since you will reach it together with the steady state. Thus 0.05 for p and 0.1 for U (with the smoothSolver instead of CG) is a good decision.
Did you reach the state where all your residuals (apart from pressure) become flat?
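As a sketch, the relaxed per-time-step tolerances suggested above could look like this in fvSolution (only the relTol values and solver choices named in this thread; the absolute tolerances are placeholder values, not a validated setup):
Code:
    p
    {
        solver          GAMG;
        tolerance       1e-07;  // placeholder: still tight in absolute terms
        relTol          0.05;   // per-iteration relative drop, as suggested
        smoother        GaussSeidel;
        nCellsInCoarsestLevel 10;
        agglomerator    faceAreaPair;
        mergeLevels     1;
    }
    U
    {
        solver          smoothSolver;  // suggested instead of a CG-type solver
        smoother        GaussSeidel;
        tolerance       1e-07;  // placeholder
        relTol          0.1;
    }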

February 7, 2011, 06:03 

#29  
Senior Member
maddalena
Join Date: Mar 2009
Posts: 436
Rep Power: 18 
Never reached it... The simulation crashed before I could get to it.

February 7, 2011, 06:28 

#30  
Senior Member
Vesselin Krastev
Join Date: Jan 2010
Location: University of Tor Vergata, Rome
Posts: 368
Rep Power: 16 
You are welcome.
Good luck with your run.
V.
PS: Improving numerical convergence is meaningful only if you reach satisfactory physical behavior of your system; if not, it simply becomes a mathematical exercise...

February 7, 2011, 06:36 

#31  
Senior Member
maddalena
Join Date: Mar 2009
Posts: 436
Rep Power: 18 
I will try to put it into practice and report my (hopefully good) results.
mad

February 7, 2011, 06:48 

#32 
Senior Member

My question regarding convergence and residuals here:
why does the k-epsilon equation sometimes stop being solved once its tolerance of 10^-5 is met, while velocity and pressure are still iterating with tolerances of order, say, 10^-5 and 10^-8 respectively? The k-epsilon fields then seem frozen when observing the results. If the tolerance is set to 10^-14, the k-epsilon pattern changes dramatically and "looks" more physical.

February 7, 2011, 07:30 

#33 
Member
Franco Marra
Join Date: Mar 2009
Location: Napoli  Italy
Posts: 58
Rep Power: 12 
Dear Maddalena,
convergence issues are always really frustrating! My post is probably not a real help, just an attempt to shed some different light. The residuals on pressure seem to be reduced by 7 orders of magnitude after 1000 iterations. Did you try reducing the maximum number of iterations to see how much the residuals diminish after, say, 300 iterations? If you get the same order of magnitude, this could indicate that the solver has been trapped in a false solution. Maybe you can try to solve on a coarser grid, then interpolate the coarse solution onto your final grid. Another attempt could be to adopt a transient solver, to see where the solution blows up (if this happens again) and try to identify the critical issue. I am sorry I do not have a real solution. Regards, Franco
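Franco's experiment of capping the inner pressure iterations can be done with the linear solver's maxIter keyword (a sketch; only maxIter is added relative to the settings posted earlier in the thread):
Code:
    p
    {
        solver          GAMG;
        tolerance       1e-10;
        relTol          0;
        maxIter         300;   // stop after 300 inner iterations and compare the residual drop
        smoother        GaussSeidel;
        nCellsInCoarsestLevel 10;
        agglomerator    faceAreaPair;
        mergeLevels     1;
    }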

February 7, 2011, 07:36 

#34 
Senior Member
Arjun
Join Date: Mar 2009
Location: Nurenberg, Germany
Posts: 830
Rep Power: 22 
A coarsest-level cell count of 10 is fine.
I will explain this issue. OpenFOAM uses additive correction multigrid. Here a coarse level is generated by merging two cells at a time into one. Since it is algebraic multigrid, no physical cells are generated; two equations simply merge and become one. So if you start with, say, N equations, the next coarse level should ideally have N/2 cells. I say ideally because some cells escape this agglomeration process and are then merged with neighbours that are already part of the coarse level, so some coarse equations may consist of 3, 4 or more fine equations. The multigrid level sizes therefore fall like N, N/2, N/4, N/8, ... The parameter you set to 10 (nCellsInCoarsestLevel) says that below this number of equations a direct solver is used (instead of an iterative solver as on the other levels). For this reason the number should be small enough to be solved efficiently by the direct method. Imagine the direct method is of order O(N^3); then you may be in trouble if you choose this parameter too large. For example, if your mesh has 1 million cells, then sqrt(1000000) = 1000, and if you set that, you will wait a long time for the direct solver to finish, which kills the efficiency of the whole solver. Usually this parameter is set to 50 to 100. I personally use 1 in my own multigrid solver (but that is a different kind of multigrid).
Last edited by arjun; February 7, 2011 at 07:54.
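The level-size progression described above can be sketched numerically (an illustration only: real agglomeration merges cells by face-area weights, and some cells fold into neighbouring clusters, so actual level sizes deviate from exact halving):

```python
def gamg_level_sizes(n_cells, n_coarsest=10):
    """Ideal pairwise-agglomeration level sizes: N, N/2, N/4, ...

    Coarsening stops once a level is at or below n_coarsest,
    where a direct solve takes over."""
    sizes = [n_cells]
    while sizes[-1] > n_coarsest:
        sizes.append((sizes[-1] + 1) // 2)  # merge two equations into one
    return sizes

# A 1M-cell mesh needs about 17 coarsening steps to get at or below 10 cells.
print(gamg_level_sizes(1_000_000))
```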

February 7, 2011, 08:37 

#35  
Senior Member
maddalena
Join Date: Mar 2009
Posts: 436
Rep Power: 18 
Any other comment on the subject? This thread is becoming really interesting...
mad

February 7, 2011, 09:21 

#36  
Senior Member
Felix L.
Join Date: Feb 2010
Location: Hamburg
Posts: 165
Rep Power: 14 
Hello, everybody,
this is exactly the same experience I've had with my simpleFoam simulations. My best practice is usually to set the tolerance for every quantity to 1e-9 or lower and relTol to 0.1. The quite low absolute tolerances ensure that the governing equations keep being solved during the whole simulation process (avoiding, e.g., p no longer being solved for), and the large relative tolerances reduce the number of inner iterations for each quantity, thus (usually) avoiding >1000 iterations per outer iteration. This usually works like a charm for me. I manually check for convergence and stop the simulation when all the initial residuals have fallen below a certain tolerance (1e-4, for example). In the next version of OF this can be accomplished automatically using the new residualControl function. Just remember: the low tolerance value isn't imposed for accuracy, but to make sure the equations keep being solved. OF handles residual control a bit differently than, e.g., FLUENT. Another thing: have you tried a different solver for p than GAMG? I recently found that PCG can be a lot faster than GAMG for certain cases. That may sound a bit weird given all the benefits of multigrid methods, but it's just my experience. Give it a shot, if you haven't already! Greetings, Felix.
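The automatic stopping criterion Felix mentions can be sketched like this in system/fvSolution (syntax as in later OpenFOAM releases that ship residualControl; the 1e-4 thresholds are just the example value from the post):
Code:
    SIMPLE
    {
        nNonOrthogonalCorrectors 2;
        residualControl
        {
            p               1e-4;
            U               1e-4;
            "(k|epsilon)"   1e-4;
        }
    }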

February 7, 2011, 09:42 

#37  
Senior Member
Vesselin Krastev
Join Date: Jan 2010
Location: University of Tor Vergata, Rome
Posts: 368
Rep Power: 16 
V. 

February 7, 2011, 09:47 

#38  
Senior Member
Vesselin Krastev
Join Date: Jan 2010
Location: University of Tor Vergata, Rome
Posts: 368
Rep Power: 16 
I've tried both PCG and GAMG as solvers for p, but my experience goes in a different direction than yours... In particular, when the relTol parameter for p starts to be quite low (I usually use something like 10^-04/10^-05), the GAMG solver is much faster than the PCG one. Can you explain for which cases you found the opposite behavior?
Thanks
V.

February 7, 2011, 09:51 

#39  
Senior Member
maddalena
Join Date: Mar 2009
Posts: 436
Rep Power: 18 
There is a simulation running with the setup suggested above; I hope to get a decent result from it in a few hours. Stay tuned.
mad
PS: btw, you adopted V as your official signature? That sounds good!

February 7, 2011, 09:54 

#40  
Senior Member
Vesselin Krastev
Join Date: Jan 2010
Location: University of Tor Vergata, Rome
Posts: 368
Rep Power: 16 
I simply like informal signatures... 
