General question about discretisation and iteration errors

February 22, 2024, 10:23 | #1
New Member
Pierre LECHEVALIER
Join Date: Mar 2023
Posts: 26
Rep Power: 3 |
Dear all,
These days I'm diving back into the theory underlying CFD, and I realize there is a point I understand quite poorly: so-called iteration errors. So that we agree on terms, I define the iteration error as the difference between the exact solution of the discrete problem (deduced from the classical transport equations) and the solution recovered after N iterations of the solver (with N usually quite large). I define the discretization error as the difference between the analytical solution of the transport equation and the exact solution of the discretized problem.

The question that bothers me concerns the iteration error. Why can't we make it tend towards zero every time, regardless of the mesh size? Since we are solving (independently of the underlying physics) a problem of N equations in N unknowns, we should in theory be able to reach an exact solution every time. Why is this system of N equations so difficult to solve? Why, in all the simulations I run, do the residuals (which measure how far I am from solving the equation system) stay very far from the machine rounding error?

I understand that with a coarse mesh I cannot hope to make my discretized solution tend towards the analytical solution: discretization will always leave a difference between the two solutions, a difference that decreases as the mesh is refined. What bothers me is therefore the iteration error (and ultimately the residuals), which seems to me to have no good reason to remain non-zero after enough iterations.

I hope I am clear enough, and maybe someone will be able to help!

Thank you,
Pierre
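The two error definitions above can be separated numerically on a toy problem. Below is a minimal sketch (my own illustration, not from the thread) using a 1D Poisson problem -u'' = f with exact solution u = sin(pi*x): a direct solve gives the exact solution of the discrete problem, Jacobi iterations give the iterated solution, and the two errors behave very differently. The iteration error can be driven to machine precision, while the discretization error plateaus at O(h^2).

```python
import numpy as np

n = 20                                  # interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
f = np.pi**2 * np.sin(np.pi * x)        # -u'' = f has solution u = sin(pi x)

# Exact solution of the DISCRETE problem: direct solve of A u = h^2 f
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
u_h = np.linalg.solve(A, h**2 * f)

# Iterated solution of the SAME discrete problem: Jacobi sweeps
u = np.zeros(n)
for _ in range(20000):
    up = np.pad(u, 1)                   # zero Dirichlet boundary values
    u = (h**2 * f + up[:-2] + up[2:]) / 2

iter_err = np.max(np.abs(u - u_h))                  # iteration error
disc_err = np.max(np.abs(u_h - np.sin(np.pi * x)))  # discretization error
print(iter_err, disc_err)
```

With n = 20, iter_err ends up near machine precision while disc_err stays around 2e-3; refining the mesh reduces disc_err but has no effect on the attainable iter_err.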

February 22, 2024, 13:33 | #2
Senior Member
Filippo Maria Denaro
Join Date: Jul 2010
Posts: 6,782
Rep Power: 71 |
Could you clarify whether you are talking about the iteration error of a method for solving a linear system, or more generally about the solver for the coupled PDEs? I mean, e_k = (x - x_k) is the iteration error for a linear system A.x = q, where A is the matrix that appears after discretization of the PDE. Thus, even for an exact inversion of the matrix A, you cannot think that x is the exact solution of the PDE. But for vanishing computational steps it should tend to it.

February 22, 2024, 14:27 | #3
New Member
Pierre LECHEVALIER
Join Date: Mar 2023
Posts: 26
Rep Power: 3 |
Dear FMDenaro,
Thank you for your reply! I am a bit confused; I am not sure I understand your question. Following your formalism, my question is indeed: why is the solver not able to solve the discretized problem A.x = Q exactly (up to round-off error), such that e_k = 0 (or something close to the round-off error) after enough iterations? In any simulation I run, the residuals never go down to the order of magnitude of the round-off error. So why can the solver not resolve the discretized problem very accurately?

I think (but I am not sure) it is because there is not just one equation to solve (i.e. not just A.x = Q) but several (one for momentum conservation, another for the pressure, and a last one for continuity), and these equations are not completely compatible. That is why the solver ends up in a state where it oscillates between the different solution corrections needed to satisfy all the equations.

Thank you for your help!

February 22, 2024, 14:43 | #4
Senior Member
Filippo Maria Denaro
Join Date: Jul 2010
Posts: 6,782
Rep Power: 71 |
If you are thinking of an algebraic linear system, your idea is wrong. By definition the iterative process must converge, e_k -> 0 for k -> infinity, no matter the initial guess. That is the theoretical basis of any stationary iterative method, requiring that the spectral radius of the iteration matrix be less than one. You can reach convergence at single, double or quadruple precision depending on your settings. Thus, you are talking about some other issue.
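The spectral-radius condition can be verified directly. A minimal sketch (my own illustration, with an arbitrary strictly diagonally dominant matrix, for which Jacobi is guaranteed to converge): the spectral radius of the iteration matrix is below one, and the residual falls to near machine precision.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n))
A += np.diag(np.abs(A).sum(axis=1) + 1.0)   # enforce strict diagonal dominance
b = rng.standard_normal(n)

# Jacobi iteration matrix T = I - D^{-1} A; convergence requires rho(T) < 1
T = np.eye(n) - A / np.diag(A)[:, None]
rho = max(abs(np.linalg.eigvals(T)))

x = np.zeros(n)
for _ in range(2000):
    x = x + (b - A @ x) / np.diag(A)        # one Jacobi sweep
res = np.linalg.norm(b - A @ x)
print(rho, res)                             # rho < 1, res near machine precision
```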

February 22, 2024, 15:33 | #5
New Member
Pierre LECHEVALIER
Join Date: Mar 2023
Posts: 26
Rep Power: 3 |
Dear FMDenaro,
I will try to use an example to gather the ideas. Consider a steady-state simulation of an incompressible flow. We then have two PDEs to solve:
1) the continuity equation;
2) momentum conservation (Navier-Stokes).
These two PDEs cannot be solved analytically, so we solve a discretized form of them with an iterative algorithm. This change to the discretized form of course introduces the discretization error, which quantifies the difference between the analytical solution I cannot find and the discretized one I will try to find.

Consider the SIMPLE algorithm, which seems relevant for such a case. I will have:
1) a matrix equation M.U = -grad(P), which gives a first estimate of the velocity field;
2) an equation to find the pressure;
3) a last equation to correct the velocity with the pressure just found, so that the velocity satisfies the continuity equation.
All three steps are solved iteratively.

My question is: why does this three-step algorithm not converge very well (i.e. with residuals reaching almost the computer round-off error) for all well-posed cases, if I perform enough iterations? Where do the non-zero residuals come from? I don't know if I'm expressing myself clearly: I'm trying to understand why residuals remain non-zero on the solution of the discretized problem. Does it mean that the solution of the discretized equations does not exist, or that the solver is not able to reach it?

Thank you VERY MUCH for your help!
Best regards,
Pierre

February 22, 2024, 16:39 | #6
Senior Member
Filippo Maria Denaro
Join Date: Jul 2010
Posts: 6,782
Rep Power: 71 |
Well, iterative methods for solving a non-linear system of PDEs are a different topic. But if, as an example, I use an implicit method for the diffusive terms in the momentum equation and solve the resulting linear system with an iterative method, I can drive the residual down to machine precision. The same happens for the iterative solution of the pressure equation.
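The distinction between inner (linear) and outer (non-linear) iterations can be sketched with a toy Picard loop (my own illustration; the matrix and the cubic nonlinearity are arbitrary): every outer step performs an exact linear solve, and what remains is the residual of the non-linear system, tracked across outer iterations.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10
A = 4.0 * np.eye(n) + 0.5 * rng.standard_normal((n, n))  # dominant linear part
b = rng.standard_normal(n)
eps = 0.1                                                # nonlinearity strength

# Nonlinear problem A x + eps*x^3 = b, solved by Picard (outer) iterations;
# every outer step inverts the linear part exactly (a perfect "inner solve").
x = np.zeros(n)
outer_res = []
for _ in range(60):
    x = np.linalg.solve(A, b - eps * x**3)               # exact inner solve
    outer_res.append(np.linalg.norm(A @ x + eps * x**3 - b))
print(outer_res[0], outer_res[-1])
```

Here the outer residual also converges, because the nonlinearity is mild; for a Navier-Stokes-like coupling there is no comparable guarantee, which is the crux of the rest of the thread.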

February 23, 2024, 04:29 | #7
New Member
Pierre LECHEVALIER
Join Date: Mar 2023
Posts: 26
Rep Power: 3 |
Dear Filippo,
Thank you once again for your help! OK, so basically we agree that for a linear system, iterative methods will find the solution exactly (modulo the round-off error). Regarding non-linear systems (because unfortunately that's what it's all about): does the solver's inability to find the exact solution (in the same way as for the linear case) stem from a mathematical problem, or does it reveal an underlying physical problem?

The question at the heart of my interrogation concerns the physical interpretation of residuals. I think I've understood that they reflect a conservation imbalance at cell level. However, I'm wondering whether this imbalance necessarily reflects a discretization problem (the mesh is unsuitable for respecting the conservation principle and therefore for capturing the velocity and pressure fields), or whether it could be a matter of the solver's "mathematical" inability to find a solution that might exist. Put another way: are residuals necessarily a marker of areas where the mesh is ill-suited to the physics, or could they simply reflect a solver-related resolution problem that says nothing about the quality of the mesh?

Thank you VERY MUCH for your help!
Pierre

February 23, 2024, 05:19 | #8
Senior Member
Filippo Maria Denaro
Join Date: Jul 2010
Posts: 6,782
Rep Power: 71 |
If you look at the residuals in a steady-state solution, they correspond to non-vanishing time derivatives. But you have to check what type of residuals your software actually reports.

February 23, 2024, 05:36 | #9
New Member
Pierre LECHEVALIER
Join Date: Mar 2023
Posts: 26
Rep Power: 3 |
Dear Filippo,
Thank you for your pugnacity in trying to help me. I am using Fluent, but I think the question is more general: does the fact that residuals do not fall below a certain value necessarily reflect an underlying discretization problem, so that cells with high residuals must be refined because no "correct" solution exists as-is? Or can these non-zero residuals simply be due to the non-linear solver's inability to find the solution of the discretized problem (without the mesh necessarily being too coarse, for example)?

Thanks!

February 23, 2024, 06:13 | #10
Senior Member
Filippo Maria Denaro
Join Date: Jul 2010
Posts: 6,782
Rep Power: 71 |
Have a look at what the "residual" monitored by Fluent actually is; it is not exactly the physical time derivative. I remember an old post about that.

February 23, 2024, 07:19 | #11
New Member
Pierre LECHEVALIER
Join Date: Mar 2023
Posts: 26
Rep Power: 3 |
Dear Filippo,
I am very surprised by what you say; I think that residuals that never fall below, say, 10^-5 (more or less) are a common phenomenon in all software. Here's an example from a simulation I'm currently running in Fluent (sorry for the bad quality of the picture).

My question is: what do these "big" residuals (compared to round-off error) reveal? By definition, residuals quantify how well the numerical scheme has converged. The question is then: why can the numerical scheme not converge further? Is it due to the solver's incapacity to reach a solution that exists? Or does the solution of the discretized problem not exist, because the discretization is "bad" (typically a bad mesh)?

Thank you for your help,
Pierre

February 23, 2024, 07:37 | #12
Senior Member
Filippo Maria Denaro
Join Date: Jul 2010
Posts: 6,782
Rep Power: 71 |
To be clear: if I use my own code, I consider that the residuals in my solver must represent the time derivative I want to reduce. This is a physical interpretation of my residual, but other software can use a different residual.
When you discretize the system of equations, you get a modified system of PDEs. Now the general question is: does this modified system have a solution for the steady state?

February 23, 2024, 08:30 | #13
New Member
Pierre LECHEVALIER
Join Date: Mar 2023
Posts: 26
Rep Power: 3 |
Dear Filippo,
OK, I'm going to bounce off the general question you raise, because I think it's the one I am wondering about! Consider a situation for which we assume that a steady-state solution exists; we will obtain a discretized version of the corresponding PDEs. The question is then: will the iteration errors tend to zero (modulo round-off error) whatever the discretization (i.e. the mesh)? Or does the mesh quality influence the convergence of the residuals to zero? In that case, it would mean there is a link between iteration errors and discretization error.

Thank you VERY MUCH for your help,
Pierre

February 23, 2024, 09:08 | #14
Senior Member
Filippo Maria Denaro
Join Date: Jul 2010
Posts: 6,782
Rep Power: 71 |
On the other hand, if you perform a time-dependent URANS, will your solution tend to a steady state? I don't think so.

February 23, 2024, 10:06 | #15
Senior Member
Lucky
Join Date: Apr 2011
Location: Orlando, FL USA
Posts: 5,680
Rep Power: 66 |
If you are using somebody else's code, then most likely you are using an accelerator, which means that at each iteration you solve a relaxed set of equations (look up AMG, preconditioners, etc.). Accelerators amplify errors to help them propagate faster, at the expense of stalling the residual reduction rate.
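The stalling effect of relaxation can be seen even in a toy linear setting. A minimal sketch (my own illustration; plain under-relaxed Jacobi, not an AMG or preconditioned solver like those mentioned above): the same system, the same number of sweeps, with full versus strongly under-relaxed updates.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40
A = rng.standard_normal((n, n))
A += np.diag(np.abs(A).sum(axis=1) + 1.0)   # enforce strict diagonal dominance
b = rng.standard_normal(n)
d = np.diag(A)

def jacobi_residual(omega, sweeps=60):
    """Residual norm after under-relaxed Jacobi: x <- x + omega*D^-1*(b - A x)."""
    x = np.zeros(n)
    for _ in range(sweeps):
        x = x + omega * (b - A @ x) / d
    return np.linalg.norm(b - A @ x)

res_full = jacobi_residual(1.0)     # full updates: residual near machine precision
res_relaxed = jacobi_residual(0.1)  # same sweeps, heavy relaxation: residual stalls
print(res_full, res_relaxed)
```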

February 23, 2024, 13:48 | #16
Senior Member
Arjun
Join Date: Mar 2009
Location: Nurenberg, Germany
Posts: 1,278
Rep Power: 34 |
In Fluent, STAR-CCM+ and other software of that ilk, the reason why residuals often do not fall to the levels you expect is that the solver is finite volume. The variables are interpolated to the face from both sides (from right and left), and for the convection term from one side (based on the flux direction).
The values from the two sides do not smoothly meet at the control-volume faces; there is a delta between them, and this difference does not allow residuals to fall to machine precision. You will notice that first-order schemes almost always converge to much lower residuals; the reason is that in that case this difference mostly becomes very small. The higher the order you move to, the larger the difference, and thus the residuals (this does not mean that the solution is less accurate with higher orders, just that the residual does not fall as far). Hope this explains it. Try the same case with a first-order scheme.

February 23, 2024, 13:55 | #17
Senior Member
Filippo Maria Denaro
Join Date: Jul 2010
Posts: 6,782
Rep Power: 71 |
If I remember correctly, we discussed some time ago what Fluent monitors as "residuals", and they are not the residual as theoretically defined for iterative methods.

February 23, 2024, 14:03 | #18
Senior Member
Arjun
Join Date: Mar 2009
Location: Nurenberg, Germany
Posts: 1,278
Rep Power: 34 |
That too. Different solvers have different definitions; their residual is (mostly) the absolute sum of the residuals remaining after the discretization is applied. One of the major sources is the Rhie-Chow term, which is basically a function of the difference between the pressure values interpolated to the face (using gradients). If the mesh is not uniform, or there is high variation for some other reason, this does not fall to zero and leaves a residual in continuity. In momentum it is second-order upwind, which only interpolates from one side.

February 23, 2024, 18:20 | #19
Senior Member
Your example picture is from a RANS computation which, besides any other reason for lack of convergence, requires a dedicated explanation. But let's step back a little (assuming an implicitly discretized steady problem, unless otherwise stated):

1) For linear problems the matter is simple: any lack of convergence (if not divergence) is strictly related to a problem in the discretization propagating into the system matrix, or to inconsistent BCs; that is, something that makes the linear system unsolvable.

2) A correct FV discretization has one and only one flux per face, so there is no jump between the two sides of the face. This is independent of the discretization method and order. The discretization could be unstable, but that would not impair conservation for a properly coded FV method.

3) What is left is, probably, what your question refers to: lack of convergence for nonlinear problems. Let me first say that what Fluent plots as a residual is, indeed, a measure of the current imbalance of a given discretized equation summed over all the cells. Second, let me point out that the theory for nonlinear equations is very limited even for simpler problems, even more so for the NS equations. Does a strong solution to converge to exist in the first place? If not, do weak solutions exist, and to which one should we converge?

4) In addition, and in relation to the previous point, the specific algorithm and initial conditions you use to converge a nonlinear problem matter enormously, in the same way a root-finding algorithm can never find the zero of a function if it starts from the wrong point, or runs into trouble at a stationary point. Initialization, implicit and explicit under-relaxation factors, etc. all affect how, and whether, you converge. This is especially relevant considering that, very differently from root-finding algorithms, none of the published methods comes with any proof of convergence. We are so trained to properly initialize and set up a case that we rarely appreciate the infinite ways we could get it wrong and not converge to what we expect, or at all.

5) Inviscid flows are different from laminar ones, which are different from RANS, especially in relation to point 3. And flows with complex domains and physics are different from canonical flows in simple domains. For RANS in particular, there is no reason why a given turbulence model should admit any solution at all, if not for the stabilizing effect of the eddy viscosity. So the actual question is: why do you instead expect convergence (to an actually existing, correct solution)?

Still, actual practice is much simpler than all of this, and most simple cases would fully converge, given a proper number of iterations. Going back to your picture, however, there might be any combination of more mundane reasons for the convergence being so slow or completely missing.
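The root-finding analogy in point 4 is easy to make concrete. A minimal sketch (my own illustration): Newton's method on f(x) = atan(x) converges from a starting point near the root but diverges from one only slightly farther out (the threshold is around |x0| = 1.39).

```python
import math

def newton_atan(x0, iters=5):
    """Newton's method for f(x) = atan(x): x <- x - f(x)/f'(x), f'(x) = 1/(1+x^2)."""
    x = x0
    for _ in range(iters):
        x = x - math.atan(x) * (1.0 + x * x)
    return x

x_near = newton_atan(1.0)   # starts close enough: converges to the root x = 0
x_far = newton_atan(2.0)    # starts too far: the iterates blow up
print(x_near, x_far)
```

Same equation, same algorithm; only the initial guess changes the outcome.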

February 24, 2024, 02:08 | #20
Senior Member
Arjun
Join Date: Mar 2009
Location: Nurenberg, Germany
Posts: 1,278
Rep Power: 34 |
This part, specifically for Fluent, is mostly not correct. In fact, at DAGA 2023 (an acoustics conference) I presented results on suppressing spurious noise reflections from grid coarsening; the whole method was about reducing this jump of the pressure when interpolated from both sides. If you were right, I would not have been able to reduce the jump. The Delta-V dissipation option in STAR-CCM+ is based on this jump and actually tries to reduce it (the "Delta" in Delta-V refers to this jump).

PS: Note that there is a single flux between the faces, but there is a jump in the variables used to compute this flux.