CFD Online Discussion Forums

CFD Online Discussion Forums (https://www.cfd-online.com/Forums/)
-   Main CFD Forum (https://www.cfd-online.com/Forums/main/)
-   -   Residuals start increasing after decreasing to a very low value (https://www.cfd-online.com/Forums/main/234856-residuals-start-increasing-after-decreasing-very-low-value.html)

The_seeker March 21, 2021 16:41

Residuals start increasing after decreasing to a very low value
 
1 Attachment(s)
Hi
I am running a case in ANSYS Fluent using the k-omega SST model. The mesh is structured and the quality is good. The residuals drop to a satisfactory level of 10^-8, but then they start rising. I would appreciate hearing from anyone who has faced a similar issue and could tell me the possible reason.

LuckyTran March 21, 2021 20:52

Hard to say since you don't mention anything about the setup.

But also I don't see anything unusual. Have you never seen residuals go up before?

arjun March 22, 2021 01:10

You are working with a turbulence model (k-omega in this case).

What your charts show is that as the flow initially tries to converge, it converges to a solution whose gradients give rise to a different flow field (via the turbulent viscosity), since they act as a source of turbulence.
This raises the residuals again. It is normal behaviour that is noticed from time to time.

The_seeker March 29, 2021 13:25

Thank you very much for replying.
I figured out the problem. The y-plus on one of the walls was messed up and was too high for k-omega SST. I refined the mesh on all the walls and got convergence.

Thank you.

The_seeker June 13, 2021 01:28

Hello Arjun,
I had solved the above residual problem by fixing the y-plus values. Recently I have been working on steady, incompressible, laminar flows and, strangely, have seen the same residual behavior. I am aware that residuals alone are not the best criterion of accuracy, but can such a solution be acceptable if the residuals start rising and reach the order of 10^-3 or 10^-2? I cannot find any discussion on this topic and don't know whether others observe such behavior, although I am thinking I should maybe switch to an unsteady solver to resolve the issue.
I would appreciate it if anyone can share their experience with this issue.

arjun June 13, 2021 12:29

1 Attachment(s)
Quote:

Originally Posted by The_seeker (Post 805933)
Hello Arjun,
I had solved the above residual problem by fixing the y-plus values.

By changing the y-plus value via the grid, you just changed the solver behaviour.

The residual behaviour is not a problem in itself.


Quote:

Originally Posted by The_seeker (Post 805933)
I am aware that residuals alone are not the best criterion of accuracy, but can such a solution be acceptable if the residuals start rising and reach the order of 10^-3 or 10^-2?

I am attaching a residual plot from the Wildkatze solver for a prediction of car drag; it shows the "problem" behaviour of the residuals. The mesh size is roughly 5 million cells.

The difference between the experimental drag and the solver-predicted drag is less than 3 percent.

farzadmech April 12, 2023 23:25

1 Attachment(s)
Dear Arjun
What you are saying is that residuals, even of order 0.1, are not important and we should look instead at physical values like drag or temperature; am I correct? I have attached my residuals for a laminar simulation; they rise again, but my drag coefficient remains constant from iteration 200 to the end.

I have also another question;
My mesh works fine with the laminar solver, but when I use RANS (kOmegaSST) the code blows up in the first iterations because omega becomes too large. I have 37 million cells, some as small as 2 microns. I am using OpenFOAM; checkMesh shows no errors in my mesh, but at the end of the snappyHexMesh procedure it reports illegal faces (which do not appear in checkMesh). I would be thankful if you could visit my post below:

https://www.cfd-online.com/Forums/op...pyhexmesh.html


Thanks,
Farzad


Quote:

Originally Posted by arjun (Post 805955)
By changing the y-plus value via the grid, you just changed the solver behaviour.

The residual behaviour is not a problem in itself.




I am attaching a residual plot from the Wildkatze solver for a prediction of car drag; it shows the "problem" behaviour of the residuals. The mesh size is roughly 5 million cells.

The difference between the experimental drag and the solver-predicted drag is less than 3 percent.


arjun April 18, 2023 00:22

Yes.


The issue is that people misunderstand convergence. Convergence is not the reduction of residuals (which is what you also seem to think) BUT the convergence of the solution to a value that does not change further. This produces a solution with a certain amount of error, and it is then up to the user to decide whether that solution is acceptable or not. Usually it is, if the solver is benchmarked and verified.


Now for the residual behaviour in question. When you start the iterations there is no solution, or a constant one. That produces a very low (or zero) residual.
As the iterations proceed, the solution converges and the residuals drop.

In the case of turbulent flow, this converging flow field produces turbulence that wasn't there before. So, as you see here, the residuals often start to increase again, and the solution then converges to a different solution (in a sense going from laminar to turbulent, and finally converging to the turbulent solution).



Quote:

Originally Posted by farzadmech (Post 848062)
Dear Arjun
What you are saying is that residuals, even of order 0.1, are not important and we should look instead at physical values like drag or temperature; am I correct?
Farzad


sbaffini April 18, 2023 04:18

I don't think I completely agree here.

Assuming the code is largely verified and validated, the issue is on the user/case side. But, independently of the cause, a residual drop that stops short of machine precision is what it is. It means that, at least for one cell, the solution keeps changing because something doesn't allow the equations there to be satisfied.

We then decide that we are fine with certain cases, like a wall function that, for certain cells, keeps switching from one branch to the other, or maybe a limiter, if not some small separation which no turbulence model seems able to suppress. And we use some other means to actually gain confidence in it (like other monitors of global/local quantities).

But this, in my opinion, doesn't change the fact that the case is not converged. If you were using an unsteady solver, you would possibly be getting a non-zero time derivative.

In my experience, it is possible to get partial convergence to completely wrong solutions, for example in closed cavities.

arjun April 18, 2023 04:24

Quote:

Originally Posted by sbaffini (Post 848329)
I don't think I completely agree here.

Assuming the code is largely verified and validated, the issue is on the user/case side. But, independently of the cause, a residual drop that stops short of machine precision is what it is. It means that, at least for one cell, the solution keeps changing because something doesn't allow the equations there to be satisfied.



In higher-order finite-volume cases, the residuals most of the time cannot drop to machine precision; it simply does not happen in most cases.

The reason is that the face values are constructed using gradients, and only in perfect cases will the solutions from both sides of a face lead to the same interpolated values. This delta stops the solver from dropping to machine precision.

Even though the residuals do not drop to machine precision, the solution is much more accurate than a first-order solution whose residuals do drop to machine precision.

FMDenaro April 18, 2023 07:52

Quote:

Originally Posted by arjun (Post 848319)
Yes.


The issue is that people misunderstand convergence. Convergence is not the reduction of residuals (which is what you also seem to think) BUT the convergence of the solution to a value that does not change further. This produces a solution with a certain amount of error, and it is then up to the user to decide whether that solution is acceptable or not. Usually it is, if the solver is benchmarked and verified.

Now for the residual behaviour in question. When you start the iterations there is no solution, or a constant one. That produces a very low (or zero) residual.
As the iterations proceed, the solution converges and the residuals drop.

In the case of turbulent flow, this converging flow field produces turbulence that wasn't there before. So, as you see here, the residuals often start to increase again, and the solution then converges to a different solution (in a sense going from laminar to turbulent, and finally converging to the turbulent solution).

I don't agree at all.
From a practical point of view, you simply decide to accept a non-converged solution because you are not able to get convergence.
When your residuals are diminishing, no one would stop the iteration based only on the analysis of an integral variable.

arjun April 18, 2023 07:56

Quote:

Originally Posted by FMDenaro (Post 848345)
I don't agree at all.
From a practical point of view, you simply decide to accept a non-converged solution because you are not able to get convergence.
When your residuals are diminishing, no one would stop the iteration based only on the analysis of an integral variable.



Which part do you not agree with?

It seems you define convergence as the reduction of residuals, which it is not. Convergence is convergence of the solution; the residual is just an outcome.

FMDenaro April 18, 2023 08:09

Quote:

Originally Posted by arjun (Post 848346)
Which part do you not agree with?

It seems you define convergence as the reduction of residuals, which it is not. Convergence is convergence of the solution; the residual is just an outcome.




That's exactly the point: your definition of convergence is correct only when you see the direct error ||x_ex - x_k|| -> 0, but not at all when ||x_k+1 - x_k|| -> 0!

The only computable quantity available is the residual A.x_k+1 - q = r_k+1.

The properties of consistency and convergence ensure that you have a vanishing direct error when the residual vanishes, but not when x_k+1 - x_k is small.

It is easy to show this for linear problems, and you can extend it to non-linear problems (if you have a unique solution).

If you stop on a threshold linked to some field variable, you simply admit: "ok, I cannot get a converged solution, I use the solution as it is".

Specifically, in RANS the residuals can be seen as the time derivatives of the variables; thus you accept that your solution is not statistically steady, that is, you should think of moving to a URANS framework.

arjun April 18, 2023 08:29

Sorry, but this does not address how a higher-order solution with higher residuals can be more accurate than a lower-order solution with lower residuals. Based on what you seem to suggest, the lower-residual solution should be better, which most often is not the case.

Still, convergence is convergence of the solution: the point where the solution does not change. The residual may or may not be a measure of the quality of the solution, and lower residuals are no guarantee of a better solution (which is the crux of this thread).


Quote:

Originally Posted by FMDenaro (Post 848348)
That's exactly the point: your definition of convergence is correct only when you see the direct error ||x_ex - x_k|| -> 0, but not at all when ||x_k+1 - x_k|| -> 0!

The only computable quantity available is the residual A.x_k+1 - q = r_k+1.

The properties of consistency and convergence ensure that you have a vanishing direct error when the residual vanishes, but not when x_k+1 - x_k is small.

It is easy to show this for linear problems, and you can extend it to non-linear problems (if you have a unique solution).

If you stop on a threshold linked to some field variable, you simply admit: "ok, I cannot get a converged solution, I use the solution as it is".

Specifically, in RANS the residuals can be seen as the time derivatives of the variables; thus you accept that your solution is not statistically steady, that is, you should think of moving to a URANS framework.


FMDenaro April 18, 2023 08:44

Quote:

Originally Posted by arjun (Post 848351)
Sorry, but this does not address how a higher-order solution with higher residuals can be more accurate than a lower-order solution with lower residuals. Based on what you seem to suggest, the lower-residual solution should be better, which most often is not the case.

Still, convergence is convergence of the solution: the point where the solution does not change. The residual may or may not be a measure of the quality of the solution, and lower residuals are no guarantee of a better solution (which is the crux of this thread).




Dear arjun,

this topic is basic linear-algebra theory and can be extended to more complex problems.

The exact solution x_ex satisfies the problem

A.x_ex - q = 0

while the k-th iteration produces

A.x_k - q = r_k

Therefore

x_ex - x_k = -A^-1.r_k

||x_ex - x_k|| = ||A^-1.r_k|| <= S(A^-1) ||r_k||

That is sufficient to show that a norm on the residual is a control on the direct error. A spectral norm S on the operator A takes you to the analysis of the spectral radius.

I leave it to you to manipulate the relations to express the difference x_k+1 - x_k.
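The bound above can be checked numerically. The sketch below is my own illustration (a hand-picked 2x2 system, Jacobi iteration, and the induced infinity norm standing in for the spectral norm S): at every iteration the direct error stays below S(A^-1) times the residual norm.

```python
# Illustrative sketch (not from the thread): verify the bound
# ||x_ex - x_k|| <= S(A^-1) ||r_k|| on a hand-picked 2x2 system,
# using Jacobi iteration and the infinity norm for S.

def inf_norm(v):
    return max(abs(c) for c in v)

A = [[4.0, 1.0],
     [1.0, 3.0]]
q = [1.0, 2.0]

# exact solution and explicit inverse via Cramer's rule
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
x_ex = [(q[0] * A[1][1] - q[1] * A[0][1]) / det,
        (q[1] * A[0][0] - q[0] * A[1][0]) / det]
A_inv = [[ A[1][1] / det, -A[0][1] / det],
         [-A[1][0] / det,  A[0][0] / det]]
S = max(sum(abs(a) for a in row) for row in A_inv)  # induced inf-norm of A^-1

x = [0.0, 0.0]                                      # initial guess
for k in range(25):
    # Jacobi update: x_i <- (q_i - sum_{j != i} A_ij x_j) / A_ii
    x = [(q[0] - A[0][1] * x[1]) / A[0][0],
         (q[1] - A[1][0] * x[0]) / A[1][1]]
    r = [A[i][0] * x[0] + A[i][1] * x[1] - q[i] for i in range(2)]
    err = [x_ex[i] - x[i] for i in range(2)]
    # the direct error is always controlled by the residual norm
    assert inf_norm(err) <= S * inf_norm(r) + 1e-14
```

Since x_ex - x_k = -A^-1.r_k holds exactly, the assertion can never fire; the interesting part is watching how much slack the bound has, which is precisely the conditioning question.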

arjun April 18, 2023 08:54

This is where the problem lies: the residual defined in the solver does not involve the exact solution.

So the solver is not even reporting this residual. When you compare a higher-order solution with higher (solver-reported) residuals to a lower-order solution with lower (solver-reported) residuals, the higher-order one will show the lower residual with respect to the exact solution, and hence the more accurate solution.

You have a different residual in mind; that is the source of the confusion.


Quote:

Originally Posted by FMDenaro (Post 848353)
Dear arjun,

this topic is basic linear-algebra theory and can be extended to more complex problems.

The exact solution x_ex satisfies the problem

A.x_ex - q = 0

while the k-th iteration produces

A.x_k - q = r_k

Therefore

x_ex - x_k = -A^-1.r_k

||x_ex - x_k|| = ||A^-1.r_k|| <= S(A^-1) ||r_k||

That is sufficient to show that a norm on the residual is a control on the direct error. A spectral norm S on the operator A takes you to the analysis of the spectral radius.

I leave it to you to manipulate the relations to express the difference x_k+1 - x_k.


FMDenaro April 18, 2023 09:39

Quote:

Originally Posted by arjun (Post 848354)
This is where the problem lies. The residual defined in the solver does not involve the exact solution.


So the solver is not even reporting this residual. When you compare the higher order solution with higher residual (solver reported) to lower order solution with lower residual (solver reported), the higher order will show lower residuals with respect to exact solution hence more accurate solution.


You have different residual in mind, this is why the confusion.






The residual is the residual. It has a unique definition; I would not consider any other.



https://www.sciencedirect.com/scienc...67610521003366

LuckyTran April 18, 2023 09:54

Within the realm of linear algebra, the difference between x_k (or x_k+1) and x_ex imparts a bias onto the solution and hence onto the residual. Whether you know the exact solution or not, the norm of the residual remains a meaningful measure provided the difference between x_k and x_k+1 is small (i.e. the solution is converging to something). This is where higher-order schemes can impart a bias that gives a more accurate solution despite a higher absolute residual level. A lower residual does not guarantee a better (or worse) solution, but lowering the residual is quite necessary. The exceptions are the null cases: 1) you aren't converging, 2) the initial condition/guess was already the converged solution, or 3) it is a case that requires non-linear estimators.

So I tend to agree with the philosophy that, in essence, you are choosing to accept a less-converged solution. This is also apparent since you can manipulate the spectral radius to get an arbitrarily small x_k+1 - x_k (i.e. set the under-relaxation factors to 0). That being said, there have of course been many examples where I have chosen to do exactly that: accept the solution as-is because there is indeed some type of error in the system that prevents it from converging better (one bad cell in the mesh, etc.).


Of course, I also understand the point that solvers obfuscate the residual a bit by normalizing it. But the impact of this is mostly to make it hard to compare convergence quality between two different cases. Although you can manipulate solver-reported residuals through the initial guess or the normalization options, most people have habits that keep them consistent, e.g. the initial velocity is always U = 0 every time they run a case.
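The normalization point can be made concrete with a toy linear system (my own illustration; the matrix, iterate, and "cold/warm start" values are invented): the same iterate reports very different scaled residuals depending on the initial guess used for r_0, while its absolute residual is unchanged.

```python
# Illustrative sketch (not from the thread): solver-style scaled residuals
# ||r_k|| / ||r_0|| depend on the initial guess, while the absolute
# residual of a given iterate does not.

def residual_norm(A, x, q):
    r = [sum(A[i][j] * x[j] for j in range(2)) - q[i] for i in range(2)]
    return max(abs(c) for c in r)

A = [[4.0, 1.0],
     [1.0, 3.0]]
q = [1.0, 2.0]                              # exact solution is (1/11, 7/11)

x_k = [0.09, 0.64]                          # some intermediate iterate
r_abs = residual_norm(A, x_k, q)            # absolute residual: guess-independent

r0_cold = residual_norm(A, [0.0, 0.0], q)   # cold start (e.g. U = 0)
r0_warm = residual_norm(A, [0.1, 0.6], q)   # warm start near the solution

scaled_cold = r_abs / r0_cold
scaled_warm = r_abs / r0_warm
# same iterate, but the warm start reports a much higher scaled residual
assert scaled_warm > scaled_cold
```

This is why comparing reported residual levels across cases (or across habits of initialization) says little by itself.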

FMDenaro April 18, 2023 10:10

and, indeed, it is easy to show that you can have a small x_k+1 - x_k just because you get a small difference r_k+1 - r_k, not because x_ex - x_k+1 is small.
The key is the nature of the operator A: whether it is well conditioned or not. The direct error and the residual can have different orders of magnitude when S(A^-1) is not O(1).

On the other hand, in practical computation, convergence is generally accepted when ||r_k+1|| = eps ||r_0||, with eps = O(10^-4) to O(10^-5). That is not the same as considering only the iteration difference.
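A minimal numerical illustration of the conditioning point (hand-picked numbers, not from the thread): for a nearly singular A, an iterate can have a residual four orders of magnitude smaller than its direct error.

```python
# Illustrative sketch (not from the thread): with an ill-conditioned A,
# a tiny residual does not imply a tiny direct error.

A = [[1.0, 1.0],
     [1.0, 1.0001]]     # nearly singular: S(A^-1) is huge
q = [2.0, 2.0001]       # exact solution is x_ex = [1, 1]
x_ex = [1.0, 1.0]

x_k = [2.0, 0.0]        # candidate iterate that "looks" converged

r_k = [A[i][0] * x_k[0] + A[i][1] * x_k[1] - q[i] for i in range(2)]
err = [x_ex[i] - x_k[i] for i in range(2)]

res_norm = max(abs(c) for c in r_k)   # about 1e-4: looks converged
err_norm = max(abs(c) for c in err)   # exactly 1: far from the solution
assert res_norm < 1e-3 < err_norm
```

The gap between the two norms is exactly the factor S(A^-1) in the bound ||x_ex - x_k|| <= S(A^-1) ||r_k||.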

sbaffini April 18, 2023 11:28

There is an important aspect of the exact convergence of the system related to the finite-volume discretization: your implicit, steady solution won't be conservative to machine precision if your system isn't solved to machine precision. For any equation of any cell that doesn't converge to machine precision, you get an imbalance.

Depending on the severity of the problem, this might be largely acceptable or largely not, but the point I want to make is that it is not, per se, nice to accept such solutions. I've done this dozens of times, but I accept the fact that I might have something wrong somewhere. Checking the solution overall then typically gives me more confidence, but I still take those solutions with a grain of salt.

I'm pretty sure we all know our stuff here, so my concern is mostly about drawing a clear line between acceptable and correct, which are two different things, and this should be made clear, in my opinion.
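The conservation remark can be seen directly in a conservative finite-volume discretization: the steady cell residual is exactly the net flux into the cell, so every unconverged cell carries an imbalance, and the cell residuals telescope to the net boundary flux. The 1D diffusion toy below is my own sketch (unit diffusivity and spacing, invented field values):

```python
# Illustrative sketch (not from the thread): in a conservative FV scheme the
# steady cell residual IS the local flux imbalance. 1D diffusion toy problem
# with unit diffusivity and spacing, Dirichlet boundaries.

n = 8
T = [float(i * i) for i in range(n)]   # some not-yet-converged cell values
TL, TR = 0.0, 10.0                     # boundary values

def flux(Ta, Tb):
    # diffusive face flux from the left state to the right state
    return -(Tb - Ta)

# faces: boundary | cell 0 | cell 1 | ... | cell n-1 | boundary
F = ([flux(TL, T[0])]
     + [flux(T[i], T[i + 1]) for i in range(n - 1)]
     + [flux(T[-1], TR)])

# steady residual of cell i: flux in minus flux out
r = [F[i] - F[i + 1] for i in range(n)]

# internal fluxes telescope: the total imbalance equals the net boundary
# flux, and it is nonzero exactly because the cell residuals are nonzero
total_imbalance = sum(r)
assert abs(total_imbalance - (F[0] - F[-1])) < 1e-12
```

Driving every r[i] to machine zero is what makes the scheme globally conservative; stopping earlier means accepting exactly this imbalance.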

