Residuals start increasing after decreasing to a very low value
March 21, 2021, 17:41 | #1
Member
Join Date: Feb 2020
Posts: 31
Rep Power: 6
Hi
I am running a case in Ansys Fluent using k-omega SST. The mesh is structured and the quality is good. The residuals drop to a satisfactory level of 10^-8, but then they start rising. I would appreciate it if someone who has faced similar issues could let me know what the possible reason might be.
March 21, 2021, 21:52 | #2
Senior Member
Lucky
Join Date: Apr 2011
Location: Orlando, FL USA
Posts: 5,746
Rep Power: 66
Hard to say since you don't mention anything about the setup.
But also I don't see anything unusual. Have you never seen residuals go up before?
March 22, 2021, 02:10 | #3
Senior Member
Arjun
Join Date: Mar 2009
Location: Nurenberg, Germany
Posts: 1,285
Rep Power: 34
You are working with a turbulence model (k-omega in this case).
What your charts show is that, as the flow initially tries to converge, it converges toward a solution whose gradients act as a source of turbulence and, through the turbulent viscosity, produce a different flow field. That raises the residuals again. It is normal behaviour that is noticed from time to time.
March 29, 2021, 14:25 | #4
Member
Join Date: Feb 2020
Posts: 31
Rep Power: 6
Thank you very much for replying.
I figured out the problem. The y+ on one of the walls was messed up and was too high for k-omega SST. I refined the mesh on all the walls and got convergence. Thank you.
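For anyone landing here with the same y+ issue, below is a minimal sketch of the usual back-of-the-envelope estimate for the first-cell height that a given target y+ requires. It is plain Python; the 0.026/Re^(1/7) flat-plate skin-friction correlation and all of the names are my own illustrative assumptions, not anything taken from Fluent.
Code:
# Rough first-cell height for a target y+, from a flat-plate correlation.
def first_cell_height(u_inf, rho, mu, length, y_plus_target=1.0):
    re_x = rho * u_inf * length / mu            # Reynolds number based on 'length'
    cf = 0.026 / re_x ** (1.0 / 7.0)            # empirical skin-friction estimate
    tau_w = 0.5 * cf * rho * u_inf ** 2         # wall shear stress
    u_tau = (tau_w / rho) ** 0.5                # friction velocity
    return y_plus_target * mu / (rho * u_tau)   # y = y+ * nu / u_tau

# Example: air at 20 m/s over a 1 m reference length, aiming at y+ ~ 1
print(first_cell_height(u_inf=20.0, rho=1.2, mu=1.8e-5, length=1.0))
For wall-resolved k-omega SST one usually aims at y+ around 1, while with wall functions a y+ of roughly 30-300 is the more common target.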
June 13, 2021, 02:28 | #5
Member
Join Date: Feb 2020
Posts: 31
Rep Power: 6
Hello Arjun,
I had solved the residual problem above by fixing the y+ values. I have recently been working on steady, incompressible, laminar flows and, strangely, have seen the same residual behaviour. I am aware that residuals alone are not the best criterion for accuracy, but can such a solution be acceptable if the residuals start rising and reach the order of 10^-3 or 10^-2? I am not able to find any discussion on this topic and don't know whether others observe such behaviour. I am thinking that maybe I should switch to an unsteady solver to resolve this. I would appreciate it if anyone could share their experience with this issue.
June 13, 2021, 13:29 | #6
Senior Member
Arjun
Join Date: Mar 2009
Location: Nurenberg, Germany
Posts: 1,285
Rep Power: 34
The residual behaviour is not a problem in itself.
The difference between the experimental drag and the solver-predicted drag is less than 3 percent.
April 13, 2023, 00:25 | #7
Senior Member
Farzad Faraji
Join Date: Nov 2019
Posts: 206
Rep Power: 7
Dear Arjun,
What you are saying is that residuals, even on the order of 0.1, are not important, and that we should instead look at physical values like drag or temperature? Am I correct? I have attached my residuals for a laminar simulation; they rise again, but my drag coefficient stays constant from iteration 200 to the end.
I also have another question. My mesh works fine with the laminar solver, but when I use RANS (kOmegaSST) the code blows up in the first iterations because omega becomes too large. I have 37 million cells, some as small as 2 microns. I am using OpenFOAM; checkMesh shows no errors in my mesh, but at the end of the snappyHexMesh procedure it reports illegal faces (which do not appear in checkMesh). I would be thankful if you could visit my post below:
illegal faces generated using snappyHexMesh
Thanks,
Farzad
April 18, 2023, 01:22 | #8
Senior Member
Arjun
Join Date: Mar 2009
Location: Nurenberg, Germany
Posts: 1,285
Rep Power: 34
Yes.
The issue is that people misunderstand convergence. Convergence is not the reduction of residuals (which is what you also seem to think) BUT the convergence of the solution to a value that does not change any further. This produces a solution that carries a certain amount of error. It is then up to the user to decide whether that solution is acceptable or not; usually it is, if the solver is benchmarked and verified.
Now, the residual behaviour in question. When you start the iterations there is no solution, or a constant one, which produces a very low (or zero) residual. As the iterations proceed the solution converges and the residuals drop. In a turbulent flow, this converging flow field produces turbulence that was not there before, so the residuals often start to increase, as you see here. The solution then converges to a different solution (it is like going from laminar to turbulent and finally converging to the turbulent solution).
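A minimal sketch of what "convergence of the solution" can look like in practice, assuming you export a monitor (drag, a point probe, a mass flow) every iteration; the function name and the thresholds below are illustrative assumptions, not a feature of any particular solver.
Code:
# Declare convergence when a monitored quantity stops changing over a window.
def monitor_converged(history, window=100, rel_tol=1e-4):
    """True if the last 'window' samples vary by less than rel_tol (relative)."""
    if len(history) < window:
        return False
    recent = history[-window:]
    spread = max(recent) - min(recent)
    scale = max(abs(max(recent)), abs(min(recent)), 1e-30)
    return spread / scale < rel_tol

# Pseudo-usage inside the run loop:
# cd_history.append(current_cd)
# if monitor_converged(cd_history):
#     stop iterating: the integral quantity has flattened out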
April 18, 2023, 05:18 | #9
Senior Member
I don't think I completely agree here.
Assuming the code is largely verified and validated, the issue is on the user/case side. But, independently of the cause, a residual drop that stalls away from machine precision is what it is: it means that, at least for one cell, the solution keeps changing because something does not allow the equations there to be satisfied. We then decide that we are fine with certain cases, like a wall function that, for certain cells, keeps switching from one branch to the other, or maybe a limiter, if not some small separation that no turbulence model seems able to suppress. And we use some other means to gain confidence in the result (like other monitors of global/local quantities). But this, in my opinion, does not change the fact that the case is not converged. If you were using an unsteady solver, you would possibly be getting a non-zero time derivative. In my experience, it is possible to get partial convergence to completely wrong solutions, for example in closed cavities.
April 18, 2023, 05:24 | #10
Senior Member
Arjun
Join Date: Mar 2009
Location: Nurenberg, Germany
Posts: 1,285
Rep Power: 34
In higher-order finite volume cases, the residuals most of the time cannot drop to machine precision. The reason is that the face values are constructed using gradients, and only in perfect cases will the reconstructions from the two sides of a face lead to the same interpolated value. This delta stops the solver from dropping to machine precision.
Even though the residuals do not drop to machine precision, the solution is much more accurate than a first-order solution whose residuals do drop to machine precision.
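A toy illustration of that delta (my own construction in plain Python, not solver code): reconstruct the value at a face from the cells on either side, each using its own gradient; for a non-linear field the two reconstructions do not coincide.
Code:
import numpy as np

# Face value reconstructed from the left and right cells with each cell's
# own gradient; for a non-linear field the two estimates do not match,
# and that mismatch is the 'delta' that keeps the residual from vanishing.
x = np.array([0.0, 0.3, 0.7, 1.2])         # cell-centre positions (non-uniform)
phi = x ** 3                               # a smooth but non-linear field
grad = np.gradient(phi, x)                 # cell-centred gradient estimates

i = 1                                      # face between cells i and i+1
x_face = 0.5 * (x[i] + x[i + 1])
phi_left = phi[i] + grad[i] * (x_face - x[i])
phi_right = phi[i + 1] + grad[i + 1] * (x_face - x[i + 1])

print(phi_left, phi_right)                 # close, but not equal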
April 18, 2023, 08:52 | #11
Senior Member
Filippo Maria Denaro
Join Date: Jul 2010
Posts: 6,849
Rep Power: 73
From a practical point of view, you simply decide to accept a non-converged solution because you are not able to get convergence. If your residuals were decreasing, no one would stop the iterations based only on the analysis of an integral variable.
April 18, 2023, 08:56 | #12
Senior Member
Arjun
Join Date: Mar 2009
Location: Nurenberg, Germany
Posts: 1,285
Rep Power: 34
Which part do you not agree with? It seems you define convergence as the reduction of residuals, which it is not. Convergence is the convergence of the solution; the residual is just an outcome.
April 18, 2023, 09:09 | #13
Senior Member
Filippo Maria Denaro
Join Date: Jul 2010
Posts: 6,849
Rep Power: 73
That's exactly the point: your definition of convergence is correct only when you see the direct error ||x_ex - x_k|| -> 0, but not at all when ||x_k+1 - x_k|| -> 0! The only computable quantity available is the residual A.x_k+1 - q = r_k+1. The properties of consistency and convergence ensure that you have a vanishing direct error when the residual vanishes, but not when x_k+1 - x_k is small. That is easy to show for linear problems, and you can extend it to non-linear problems (if you have a unique solution).
If you stop on a threshold linked to some field variable, you simply admit "ok, I cannot get a converged solution, I use the solution as it is". Specifically, in RANS the residuals can be seen as the time derivatives of the variables, so you accept that your solution is not statistically steady, that is, you should think of moving to a URANS framework.
April 18, 2023, 09:29 | #14
Senior Member
Arjun
Join Date: Mar 2009
Location: Nurenberg, Germany
Posts: 1,285
Rep Power: 34
Sorry, but this does not address how a higher-order solution with higher residuals can be more accurate than a lower-order solution with lower residuals. Based on what you seem to suggest, the lower-residual solution should be better, which most often is not the case.
Still, convergence is the convergence of the solution: the point where the solution does not change. The residual may or may not be a measure of the quality of the solution; lower residuals are no guarantee of a better solution (which is the crux of this thread).
April 18, 2023, 09:44 | #15
Senior Member
Filippo Maria Denaro
Join Date: Jul 2010
Posts: 6,849
Rep Power: 73
Dear Arjun, this topic is basic linear algebra theory and can be extended to more complex problems.
The exact solution x_ex satisfies the problem
A.x_ex - q = 0
while the k-th iteration produces
A.x_k - q = r_k
Therefore
x_ex - x_k = -A^-1.r_k
||x_ex - x_k|| = ||A^-1.r_k|| <= S(A^-1) ||r_k||
That is sufficient to show that a norm of the residual is a control on the direct error. A spectral norm S on the operator A will take you to the analysis of the spectral radius. I let you manipulate the relations to express the difference x_k+1 - x_k.
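A quick numerical sketch of that inequality (my own toy, not something from the thread): build an ill-conditioned A and take an iterate whose error lies along the weakest singular direction; the residual then looks "converged" while the direct error is still large, and the bound ||x_ex - x_k|| <= ||A^-1|| ||r_k|| is what connects the two.
Code:
import numpy as np

# Verify ||x_ex - x_k|| = ||A^-1 r_k|| <= ||A^-1|| ||r_k|| on an ill-conditioned A.
rng = np.random.default_rng(0)
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigma = np.logspace(0, -6, n)                  # singular values from 1 down to 1e-6
A = U @ np.diag(sigma) @ V.T                   # cond(A) ~ 1e6

x_ex = rng.standard_normal(n)
q = A @ x_ex
x_k = x_ex + 1e-2 * V[:, -1]                   # error along the weakest direction
r_k = A @ x_k - q

err = np.linalg.norm(x_ex - x_k)               # direct error ~ 1e-2
res = np.linalg.norm(r_k)                      # residual ~ 1e-8
bound = np.linalg.norm(np.linalg.inv(A), 2) * res
print(err, res, err <= bound * (1 + 1e-12))    # bound holds, but res << err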
April 18, 2023, 09:54 | #16
Senior Member
Arjun
Join Date: Mar 2009
Location: Nurenberg, Germany
Posts: 1,285
Rep Power: 34
This is where the problem lies. The residual defined in the solver does not involve the exact solution, so the solver is not even reporting that residual.
When you compare a higher-order solution with a higher (solver-reported) residual to a lower-order solution with a lower (solver-reported) residual, the higher-order one will show the lower residual with respect to the exact solution and hence be the more accurate solution. You have a different residual in mind; this is why the confusion.
April 18, 2023, 10:39 | #17
Senior Member
Filippo Maria Denaro
Join Date: Jul 2010
Posts: 6,849
Rep Power: 73
The residual is the residual. It has a unique definition; I would not consider any other definition.
https://www.sciencedirect.com/scienc...67610521003366
April 18, 2023, 10:54 | #18
Senior Member
Lucky
Join Date: Apr 2011
Location: Orlando, FL USA
Posts: 5,746
Rep Power: 66
Within the realm of linear algebra, the difference between x_k (or x_k+1) and x_ex imparts a bias onto the solution and hence onto the residual. Whether you know the exact solution or not, the norm of the residual remains a measure, assuming the difference between x_k and x_k+1 is small (i.e. the solution is converging to something). Here is where higher-order schemes can impart a bias that gives a more accurate solution despite a higher absolute level of residuals. A lower residual does not guarantee a better or worse solution, but lowering the residual is quite necessary. The exceptions are the null cases: 1) you aren't converging, 2) the initial condition/guess was already the converged solution, or 3) it is a case that requires non-linear estimators.
So I tend to agree with the philosophy that, in essence, you are choosing to accept a less-converged solution. This is also apparent since you can manipulate the spectral radius to get an arbitrarily small x_k+1 - x_k (i.e. set the urf's to 0). That being said, there have of course been many cases where I have chosen to do exactly that and accept the solution as-is, because there is indeed some type of error in the system that prevents it from converging better: one bad cell in the mesh, etc.
Of course I also understand the point that solvers obfuscate the residual a bit by normalizing it about the solution. But the impact of this is mostly to make it hard to compare convergence quality between two different cases. Although you can manipulate solver-reported residuals via the initial guess or normalization options, most people have habits that keep them consistent, e.g. the initial velocity is always U=0 every time they run a case.
Last edited by LuckyTran; April 18, 2023 at 12:14.
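A small sketch of the urf point above (a toy under-relaxed Jacobi loop of my own, not any solver's actual implementation): with a tiny relaxation factor the step ||x_k+1 - x_k|| looks "converged" long before either the residual or the error is.
Code:
import numpy as np

# Under-relaxation shrinks the iterate difference regardless of convergence.
rng = np.random.default_rng(1)
n = 20
A = np.diag(np.full(n, 4.0)) + rng.standard_normal((n, n)) * 0.1
q = rng.standard_normal(n)
x_ex = np.linalg.solve(A, q)

def relaxed_jacobi(urf, iters=50):
    x = np.zeros(n)
    D = np.diag(A)
    for _ in range(iters):
        x_new = x + urf * (q - A @ x) / D      # under-relaxed Jacobi update
        step, x = np.linalg.norm(x_new - x), x_new
    return step, np.linalg.norm(A @ x - q), np.linalg.norm(x - x_ex)

for urf in (1.0, 0.01):
    # prints (last step, residual norm, direct error); urf=0.01 gives a tiny
    # step while residual and error are still large.
    print(urf, relaxed_jacobi(urf))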
April 18, 2023, 11:10 | #19
Senior Member
Filippo Maria Denaro
Join Date: Jul 2010
Posts: 6,849
Rep Power: 73
And, indeed, it is easy to show that you can get a small x_k+1 - x_k term just because you get a small difference r_k+1 - r_k, not because x_ex - x_k+1 is small.
The key is in the nature of the operator A, whether it is well conditioned or not. The direct error and the residual can have a different order of magnitude when S(A^-1) is not O(1). On the other hand, in practical computations, the convergence is generally accepted when ||r_k+1|| = eps ||r_0||, with eps = O(10^-4)-O(10^-5). That is not the same as considering only the iteration difference.
April 18, 2023, 12:28 | #20
Senior Member
There is an important aspect of the exact convergence of the system that is related to the finite volume discretization: your implicit, steady solution won't be conservative to machine precision if the system isn't solved to machine precision. For any equation of any cell that doesn't converge to machine precision, you get an imbalance.
Depending on the severity of the problem, this might be largely acceptable or largely not, but the point I want to make is that it is not, per se, nice to accept such solutions. I've done this dozens of times, but I accept the fact that I might have something wrong somewhere. Checking the solution overall then typically gives me more confidence, but I still take those solutions with a grain of salt. I'm pretty sure we all know our stuff here, so my concern is mostly about drawing a clear line between acceptable and correct, which are two different things, and this should be made clear, in my opinion.
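A toy illustration of that imbalance argument (my own construction, not OpenFOAM or Fluent internals): for a 1-D steady diffusion problem the per-cell residual of the linear system is exactly the net flux imbalance of that cell, so whatever residual is left when you stop iterating is conservation error left in the field.
Code:
import numpy as np

# 1-D steady diffusion, solved with a few Gauss-Seidel sweeps and stopped early:
# the leftover residual b - A@x is the per-cell flux imbalance.
n = 20
dx = 1.0 / n
A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / dx      # standard 1-D diffusion stencil
b = np.ones(n) * dx                           # uniform source, zero-value ends

x = np.zeros(n)
for _ in range(200):                          # partially converged on purpose
    for i in range(n):
        x[i] = (b[i] - A[i, :] @ x + A[i, i] * x[i]) / A[i, i]

imbalance = b - A @ x                         # per-cell flux imbalance = residual
print(np.abs(imbalance).max())                # nonzero until solved to machine precision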