
Convergence to 0.001?

March 13, 2000, 14:30   #1
Convergence to 0.001?
Sheng
Hi, experts,

Some papers state that the solution is converged to 0.001. What does that mean? Is it general for all equations, or just for mass?

When using a CFD package for a real industrial case, the run seems to stop mostly because it reaches the prescribed number of iterations. Looking at the convergence curves on the screen, they only settle to a stable level (sometimes quite a high one), and it is very difficult to reach the convergence criterion that was set. Does that count as convergence?

Thanks in advance!

Sheng

March 13, 2000, 15:04   #2
Re: Convergence to 0.001?
John C. Chien
(1). If you monitor the important variables at several key locations and print these values vs. iteration number, say with single-precision output, you will see that these variables eventually remain unchanged. That is, you get identical printed values regardless of any additional iterations. I have been doing this from time to time for almost 30 years. (2). There is a name for that; sometimes it is called "machine zero" convergence. At that point the solution remains the same, up to the precision of the machine. (3). So that is done on a routine basis. (4). For 3-D real-world applications, I have been setting the normalized residual limit to 1.0E-08. In most cases, all of the residuals get below 1.0E-06 and stay flat at that level. Normally I take those solutions as converged solutions. (5). Sometimes, if I do not pay attention to the mesh distribution (large size ratios between neighboring cells, or highly skewed, flat cells), the residuals stay flat at around 1.0E-04. Then I know that somewhere in the flow field the solution is still oscillating. In that case, it depends on where the oscillation is: if the region of interest is still not stable, the mesh has to be improved first. It is also possible that there are problems with the handling of the boundary conditions. (6). If it is very hard to lower the residuals, one needs to compare contour plots between iterations to see whether the solution is acceptable or not. For academic purposes, those contour plots should lie identically on top of each other. (7). This is very important, because even with very good agreement in the pressure field, the wall properties can be miles off. (8). So, when the process is converging, the solution converges to machine zero. It is done on a routine basis.
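A rough sketch of the monitoring idea in (1): watch a value at one key location every iteration and stop once its single-precision printout stops changing. The update rule below is a made-up stand-in for a real solver sweep, purely for illustration.

```python
# Toy sketch: watch one "key location" each iteration and stop when its
# single-precision printout no longer changes. sweep() is an invented
# stand-in for one pass of a real solver, not anything from this thread.
import numpy as np

target = np.sin(np.linspace(0.0, np.pi, 51))      # invented "converged" field

def sweep(u, relax=0.1):
    # toy under-relaxed update that approaches `target` geometrically
    return u + relax * (target - u)

u = np.zeros_like(target)
monitor = 25                                      # index of the key location
prev = None
for it in range(1, 5001):
    u = sweep(u)
    printed = np.float32(u[monitor])              # what a single-precision printout would show
    if prev is not None and printed == prev:
        print(f"monitored value frozen at iteration {it}: {printed:.7f}")
        break
    prev = printed
```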

March 13, 2000, 22:12   #3
Re: Convergence to 0.001?
R.Sureshkumar
Dear John, do you mean that if the contours overlap then convergence is achieved, even for residuals of, say, less than 0.01? In my two-phase flow simulations (academic), I found that 0.001 was too strict, and 0.01 gave results comparable to the experimental observations. Any advice?

March 14, 2000, 00:01   #4
Re: Convergence to 0.001?
John C. Chien
(1). The convergence of the CFD results is a purely numerical issue. (2). The converged solution has nothing to do with the experimental observation at all. (3). If for some practical reason it is hard to reach a strict convergence criterion, then you will have to make sure that the solution is always repeatable within certain error bands. (4). I think it is problem-dependent. But if you relax the requirement in the first place, then no one else can improve the quality of the solution for you. So you really have to monitor the development of the flow parameters vs. iteration number, even if the residual is already below 0.001. (5). As long as you are happy with your solutions, you are a happy person.

March 14, 2000, 01:26   #5
Re: Convergence to 0.001?
narayan
When you run a steady-state calculation, you start with an initial guess and iterate on the solution. After a certain number of iterations you will see that the difference in the solution between the (n-1)th and the nth iteration is negligible. How small that difference has to be is dictated by the convergence limit you set.

Most CFD packages have a way to set the convergence limit. It is also possible to monitor convergence by plotting the residual vs. the number of iterations; from this plot you should be able to tell clearly whether your solution has converged or not.

The appropriate limit depends on the particular type of problem and on how accurate you want the solution to be. I usually set the convergence limit to 1.0E-06, which seems to work fine for most cases.

A convergence of 0.001 probably means that, for that particular problem, it is a sufficient limit and/or the solution settles down once that limit has been reached.
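A minimal illustration of that stopping rule; the iteration itself is only a placeholder, it is the convergence check that matters here:

```python
# Toy stopping rule: quit when the relative change between iteration n-1
# and n drops below a user-set convergence limit (1.0E-06 here).
import numpy as np

def one_iteration(u):
    # invented stand-in for a single pass of a steady-state solver
    return 0.8 * u + 0.2 * np.cos(np.arange(u.size))

tol = 1.0e-6
u = np.zeros(100)
for n in range(1, 10001):
    u_new = one_iteration(u)
    change = np.linalg.norm(u_new - u) / max(np.linalg.norm(u_new), 1e-30)
    u = u_new
    if change < tol:
        print(f"converged to {tol:g} after {n} iterations")
        break
```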

March 14, 2000, 02:02   #6
Re: Convergence to 0.001?
RobertHeidmann
The problem is that there are different ways to calculate the "residuals", so you cannot simply compare numbers like 1.0E-06. In Fluent with the default settings, for example (sorry, wrong forum), you only get numbers like that for perfectly meshed tubes without any separation. If you restart from a good guess, you will also fail to reach 1.0E-06. Robert
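To make that concrete: the same iterate gives very different "residual" numbers depending on the definition. The scalings below are generic examples on an invented toy system, not any particular code's convention:

```python
# One iterate, three different "residual" numbers. Which of these a code
# reports as its 1.0E-06 depends entirely on its own definition, so the raw
# number is not comparable between codes. Toy 1-D Laplacian, invented data.
import numpy as np

n = 50
A = (np.diag(np.full(n, 2.0))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))
b = np.full(n, 1.0e4)                          # deliberately large source term
x = np.linalg.solve(A, b) + 1e-3 * np.random.default_rng(0).standard_normal(n)

r = b - A @ x
print("absolute residual (L2)    :", np.linalg.norm(r))
print("scaled by ||b||           :", np.linalg.norm(r) / np.linalg.norm(b))
print("scaled by sum|a_ii * x_i| :", np.abs(r).sum() / np.abs(np.diag(A) * x).sum())
```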

March 14, 2000, 03:20   #7
Re: Convergence to 0.001?
Sheng
Dear John,

Did you use your own code or commercial CFD packages? With my own code it is possible to reach the 1.0E-08 level (depending on the case). With a package it is hard to reach that level; for most cases the run is stopped after seeing the 'machine flat line'.

In addition, the different equations (P, U, V, W, k, epsilon, T) settle at quite different residual levels. So, when reading some reference papers, I wondered what 'convergence to 0.001' means: is it for all equations or just one of them? For example, in a combustion case solving for enthalpy, the value of H is of order 1e+06, so in terms of the absolute residual it is very difficult to converge to 0.001. Maybe the papers mean the 'relative residual'.
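As a quick illustration of that last point (all numbers invented):

```python
# For an enthalpy field of order 1e+06 J/kg, an *absolute* residual of 0.001
# is an extreme target, while the same imbalance expressed *relative* to a
# representative scale is ordinary. Values are made up for illustration.
H_scale = 1.0e6        # J/kg, typical enthalpy magnitude in the case
imbalance = 250.0      # J/kg of absolute residual (invented)

print("absolute residual:", imbalance)            # nowhere near 0.001
print("relative residual:", imbalance / H_scale)  # 2.5e-04, below 0.001
```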

Thanks for your time!

Sheng


March 14, 2000, 04:28   #8
Re: Convergence to 0.001?
Dr. Hrvoje Jasak
Hi guys,

A nice question!! The thing I particularly like is how everybody is throwing numbers around, all of which mean absolutely NOTHING AT ALL: 1) 0.001 (or 1e-6) of what units??? Bananas? Furlongs per fortnight? The fact is that without reading the manual very carefully you'll never find out.

In pressure-based segregated algorithms (which I know best), the iterative matrix solver returns a residual with the dimensions of the source term in the equation: for pressure it is the mass imbalance, for momentum the force imbalance, etc. This, in absolute terms, means nothing - it depends on whether you're solving for a waterfall or the cooling of a microprocessor chip. Here is the first potential for "cheating": is the residual intensive or extensive (i.e. did I divide by the total volume)... which brings me to the second point:

2) Normalisation. I had this described to me as "a bit of witchcraft". It is sort of traditional (and intuitive) to normalise the residual based on the inlet flux (mass, momentum...), but you don't always have an inlet (or, worse, it is tiny and unrepresentative)! The other possibility is to create an appropriate combination of the matrix diagonal and source, which will always work but is less intuitive. Not to mention that I can liberally sprinkle the whole thing with under-relaxation factors and similar for further loss of clarity. (A small numerical sketch of these conventions follows at the end of this post.)

But even now the thing is not clear:

3) Equation behaviour is different: some equations are source-dominated (a typical example is the epsilon equation, which sometimes reports "huge" residuals, and people exclude it from convergence checks) and some are sourceless. The difference between a deferred-correction and an implicit implementation of convection will also influence the matter.

4) What about unsteadiness? It is fairly typical for a user to expect a "steady-state" solution from a CFD code for a problem which physically is not steady, and then complain that the code "does not converge". If you're particularly unlucky, you may even kill off the physical unsteadiness with a suitably diffusive discretisation and/or massive under-relaxation.

So, the conclusion is: Forget about 0.001 or anything else like it, as this is not comparable over different codes (or, indeed, simulations) and learn what the residual level means in practical terms in the code you're using. My personal favourite would be to follow the change in the error estimate through iterations (this is a very different animal from the solver residual!). When the error estimate stabilises, you are "converged"; its distribution will probably tell you what's going on.
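Here is the small numerical sketch of points 1) and 2) promised above: the same continuity imbalance reported extensively, intensively and normalised by the inlet flux. All values are invented and the conventions are generic, not any particular code's.

```python
# One toy continuity imbalance, three reporting conventions. Same solution
# state, very different-looking "residuals". All values are invented.
import numpy as np

rng = np.random.default_rng(1)
n_cells = 1000
cell_volume = np.full(n_cells, 1.0e-6)                    # m^3, uniform toy mesh
mass_imbalance = 1.0e-7 * rng.standard_normal(n_cells)    # kg/s per cell (raw residual)
inlet_flux = 0.05                                          # kg/s through the inlet

extensive  = np.abs(mass_imbalance).sum()        # kg/s, summed over the domain
intensive  = extensive / cell_volume.sum()       # kg/(s m^3), divided by total volume
normalised = extensive / inlet_flux              # dimensionless, relative to inlet flow

print(f"extensive : {extensive:.3e} kg/s")
print(f"intensive : {intensive:.3e} kg/(s m^3)")
print(f"normalised: {normalised:.3e} (fraction of the inlet mass flux)")
```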

Hrv

March 14, 2000, 08:09   #9
Re: Convergence to 0.001?
John C. Chien
(1). Both: my own codes and current commercial codes. (2). The only reason to set the normalized residual limit to 1.0E-08 is to make sure that the residuals get below the level of 1.0E-06. Sometimes you will not see much of this in the display, because the curves fall outside the graphical scale. (3). The best check of convergence is to look at the computed flow variables directly, at several locations.

March 14, 2000, 10:24   #10
Re: Convergence to 0.001?
Wolfgang Schmidt
Perhaps a stupid comment, as I'm no real expert. But is it always meaningful to iterate until the change in the velocities, pressure or whatever, (OldValue - NewValue)/OldValue, is in the range 1e-6 to 1e-8 or below? In my opinion it is only necessary to iterate until you reach the range of the discretization error, or a little below it. E.g. a second-order discretization scheme on a 100*100 grid will yield an error of order 0.01^2 times the reference length. I think it is not useful to have a solution like the following for a normalized pressure p at grid point (i,j): p(i,j)_true := p(i,j) +/- 1e-8*p(i,j) +/- p(i,j)*0.01^2. Here "true" means computational truth, not physical truth, and "normalized" means a dimensionless formulation. Maybe I'm wrong and someone can enlighten me.
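In symbols, the same order-of-magnitude argument (with h the mesh spacing, L the domain length and C an unknown constant) reads roughly:

```latex
% Order-of-magnitude version of the argument above: a second-order scheme on
% an N x N grid (mesh spacing h = L/N) has a relative discretisation error of
\[
  \frac{\varepsilon_{\mathrm{disc}}}{\phi_{\mathrm{ref}}}
  \sim C \left(\frac{h}{L}\right)^{2}
  = \frac{C}{N^{2}}
  \approx 10^{-4}\,C \qquad (N = 100),
\]
% so driving the relative iteration-to-iteration change,
% (OldValue - NewValue)/OldValue, much below about 1e-5 only polishes a
% number that is already uncertain at the 1e-4 level.
```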

March 14, 2000, 10:32   #11
Re: Convergence to 0.001?
Sheng
Dear Dr. Hrvoje Jasak,

Thanks for your time. I have the same feeling as you. However, as you know, when writing a paper one always gets the question from the reviewers: what is your convergence? Is this a question that forces someone into 'cheating'? How can I give a frank and scientific answer?

Best regards, Sheng

March 14, 2000, 10:42   #12
Re: Convergence to 0.001?
Amadou Sowe
You said that your favourite would be to follow the change in the error estimate through iterations. What kind of error estimate are you talking about?

March 14, 2000, 11:41   #13
Re: Convergence to 0.001?
narayan
I totally agree with you; it certainly depends on the problem. There are other ways to track convergence, like the L2 error norm, or tracking individual variables (e.g. pressure). I never meant 1.0E-06 as a universal limit, and in many cases it may even be difficult to achieve. Thanks.

March 14, 2000, 15:27   #14
Re: Convergence to 0.001? btw
John C. Chien
(1). By the way, many years ago some people used to claim that the coarse-mesh solution of the inviscid transonic flow agreed better with the experimental data than the fine-mesh viscous solution did. (2). As a design tool, that is acceptable only if the tool is calibrated and validated for the specified range of applications.

March 15, 2000, 03:48   #15
Re: Convergence to 0.001?
Dr. Hrvoje Jasak
Hi,

Well, I suppose I'm a bit weird when I set up my CFD code. I always use double precision and always iterate on the problem until the solver stops solving, i.e. my global convergence tolerance is the cut-off point for the solver, which is 1e-6 or 1e-7 in "my" residual level. This is partly because I'm under no pressure to produce converged results within the hour, and partly because, as a numerics person, I am interested in precisely those cases where "it is impossible to converge the solution to an arbitrarily tight tolerance". When you're doing runs for publication it is always useful to be careful about tolerances, and if you are, you have every right to reply to the reviewers with a meaningless number like 1e-6 (provided you're happy that the solution is indeed converged). I have never had anyone come back to me and say "1e-6 whats?", and from my previous message it is obvious that we should have kept talking for at least 10 minutes after that initial "answer". Just to re-iterate: be careful with convergence, because published papers are read by clever people and any fudging of the data will be found out; at the end of the day it is about being careful with your work.

Have fun,

Hrv

March 15, 2000, 04:04   #16
Re: Convergence to 0.001?
Dr. Hrvoje Jasak
Now, this is going to confuse you, but let's just be careful about the terminology here:

a) When you have a matrix and an iterative solver, you can calculate the "solution residual" by putting the current solution into the algebraic system of equations and calculating the imbalance, right? This residual can be reduced to zero (mathematically), or to machine tolerance, EVERY TIME. In fact, if you used a direct solver, this is what you'd get.

b) Now consider the finite elements. Here, the matrix is constructed by weighting the approximation (shape functions) with a weighting function (and you get a system of equations, see above). BUT, if you just take the current solution (i.e. shape functions with current nodal values) and substitute it into the governing equation (NOT THE MATRIX), you will find that the governing equation is not satisfied over the finite element. This "residual" is not reducible to zero, as it represents the discretisation error (in fact, it will be zero only if your numerical solution hits the analytical solution in every single point of the domain, which is fair enough).

The point is that in a) you can reduce the solution residual arbitrarily (solving the system of algebraic equations) irrespective of everything else, whereas in b) you typically can't: the residual represents the discretisation error!

This means that if you follow the b) residual, it will bottom out, and once it goes "flat" there is no point solving any more: you are just shuffling the discretisation error around, or the flow is genuinely unsteady (look at the distribution of the residual; it will tell you what's going on!). If for the same case you look at the a) residual, it will jump about, and you can't be quite sure (read: it takes experience to know) what's going on.
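A toy numerical version of the a)-vs-b) distinction (a 1-D Poisson problem invented for illustration, nothing to do with the paper mentioned below): the matrix residual drops to round-off, while the error against the exact solution of the governing equation bottoms out at the discretisation level.

```python
# a) vs b) on -u'' = pi^2 sin(pi x), u(0)=u(1)=0, exact solution u = sin(pi x).
# The matrix residual is essentially round-off (a direct solver is used),
# while the error against the exact solution stays at O(h^2) however hard
# the algebraic system is solved.
import numpy as np

def solve_poisson(n):
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    A = (np.diag(np.full(n, 2.0))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2        # standard 2nd-order discretisation
    b = np.pi**2 * np.sin(np.pi * x)
    u_h = np.linalg.solve(A, b)                       # "solve to machine tolerance"
    matrix_residual = np.linalg.norm(b - A @ u_h)               # case a)
    error_vs_exact = np.max(np.abs(u_h - np.sin(np.pi * x)))    # case b): discretisation floor
    return matrix_residual, error_vs_exact

for n in (50, 100, 200):
    r, e = solve_poisson(n)
    print(f"n={n:4d}   matrix residual = {r:.1e}   error vs exact = {e:.1e}")
```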

For us poor souls using the Finite Volume Method I have derived an equivalent to b) and called it (traditionally from FEM) the Residual Error Estimate. It's properly normalised (!), i.e. it's got the same units as the field you're solving for and thus is easy to understand. The paper is under review at the moment, but if you send me an E-mail, I can provide a pre-print.

Hrv


March 15, 2000, 05:37   #17
Re: Convergence to 0.001? btw
Sheng
I did a simulation using k-epsilon on a very fine grid; however, looking at the flow pattern, it is very hard to get good agreement with the LDV measurements. Then one day I found a reference (from 15 years ago). They also used the k-epsilon model, but with a very coarse grid (and the same boundary conditions), and their results fit the experimental data extremely well.

This leads to thinking about the grid, the turbulence model, the wall functions... But anyway, they were very lucky. I will have to investigate the wall functions, grid size, different turbulence models... in order to get a solution that fits the experiment.

Regards, Sheng


March 20, 2000, 11:00   #18
Re: Convergence to 0.001?
Amadou Sowe
I think you are right: I am very confused by your comparison between (a) and (b). To me, these are related but very different objects. On the one hand, in (a), you are talking about solving a matrix equation via residual reduction, irrespective of the scheme used to generate that matrix equation; on the other hand, in (b), you are discussing a numerical scheme (finite elements) that can generate a matrix equation. I hope I am making the reason for my confusion clear.

In part (a) of your comments, you said that, given a matrix and an iterative method, one can reduce the 'solution residual' to zero mathematically. Is it not true that the scheme leading to the equation being solved has to satisfy some form of a maximum principle? This maximum principle is typically inferred when the diagonal elements are positive and dominant over the off-diagonal elements. When this condition is not satisfied, you are not guaranteed the "EVERY TIME" convergence you referred to.

In part (b) of your comment, the reason the governing equation is not satisfied over an element is similar to the reason why the iterative solution in (a) may not satisfy the governing equation used to generate it (through finite differences, finite volumes, etc.). But your governing equation becomes better represented over the elements as you refine your solution through 'p' and 'h' refinement.

Your last comment seems to put a distance between the Finite Volume Method (FVM) and FEM. Is it really not the case that the FVM is a subset of the Petrov-Galerkin FEM? I have not done much work in this area in years, but I think you can confirm this for me. So, if FVM is indeed an FEM, then it seems reasonable to derive some form of Residual Error Estimate in an appropriate normed space. I think your work in this area can be very helpful to CFD. By the way, I would be more than glad to get a pre-print copy. My e-mail address should accompany this posting.

March 21, 2000, 04:40   #19
Re: Convergence to 0.001?
Dr. Hrvoje Jasak
Well, you obviously know the subject, so we can get into a bit more detail.

In a) I am looking at a system of linear algebraic equations with a unique solution, and its origin (i.e. the discretisation scheme) does not interest me at the moment. The diagonal dominance condition you mentioned is indeed necessary for iterative solvers, but not in general - I can just as well use a direct solver and produce a solution irrespective of how "nasty" the matrix is (let's leave computer round-off errors out for the moment). What I am saying is that I can look at my solution procedure in two parts: "here's a matrix" and "here's a solution".

Let us also neglect the "cost" of using a direct solver and do a "mind experiment". Consider trying to solve a clearly unsteady problem (e.g. vortex shedding behind a cylinder) to a steady state, using your favourite discretisation scheme and a direct solver. We shall postulate that the problem is non-linear (it needs some sort of relaxation, which, for convenience, I will call time-stepping!) and that a steady-state residual is calculated by throwing away the ddt term (read: under-relaxation), as this should be zero at steady state. Looking at the residual vs. time-step graph, we should get a clear harmonic curve, telling us that the problem is unsteady (fine!).

Unfortunately, this is not what people do: we use a "nice" matrix (see above) and an iterative solver, and instead of assembling the "steady-state residual" we look at the solver residual (in a segregated solver, not quite the same thing!). What you see now is not a "nice" sinusoid but a jagged curve, which is just inviting you to start fiddling with the solver parameters.

Why is this the case? Because, in principle, we are looking at a number which should be reducible to zero and is directly linked to our solver tolerances. In my opinion, it would be better to look at a number which "stabilises" to a value: when we see it has stopped changing (or, indeed, that it is a harmonic), we can stop solving. The evidence for this (it is a characteristic of how our brains work) is above: people report that they look at a "monitoring location", a "characteristic variable", etc.
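A toy version of the "mind experiment", with a scalar relaxation chasing a moving target standing in for the shedding flow; the steady-state imbalance settles into a clean harmonic instead of decaying. Everything in the sketch is invented for illustration.

```python
# Under-relaxed pseudo-steady iteration on an inherently unsteady toy problem.
# The "steady-state residual" (imbalance of the steady equation at the current
# iterate) does not decay: it settles into a harmonic, the signature of genuine
# unsteadiness rather than a solver problem.
import numpy as np

omega = 0.3                        # under-relaxation ("time-stepping") factor
phi = 0.0
for n in range(1, 201):
    target = np.sin(0.2 * n)       # the "physics" keeps moving (shedding stand-in)
    residual = target - phi        # steady-state imbalance at the current iterate
    phi += omega * residual        # relaxed update
    if n % 20 == 0:
        print(f"iteration {n:3d}   steady-state residual = {residual:+.3f}")
```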

Now, the problem with "monitoring locations" is that you need to know a hell of a lot about the flow to set one up properly: some people want a global parameter (drag), some a local pressure, some the overall mass continuity; the whole thing is problem-dependent and difficult to explain to the user. What I argue is that an error estimate is a good and more general choice.

For the second part of my answer I have to go into the fundamentals of numerics; if you see a problem, please contact me directly.

1) Difference between FVM and FEM: In FEM, we formulate the solution in terms of element shape functions, which, when assembled, produce a spatial variation. Now we take this solution and put it into the governing equation, and it does not balance (to be more precise, we get a residual). So how do we create a system of algebraic equations? Well, we take the above equation, multiply it by a weighting (test) function and look for a minimum of the weighted residuals over the finite elements; the rest is just routine. As you can see, once we have a solution, we can assemble the residual, the same one whose weighted sum we minimised - the principle of minimisation of the residual (variational principle) is at the heart of the matter!

In FVM, we split the domain into control volumes (CVs) and write the governing equation in integral form over each CV. We convert the volume integrals of the div-terms into surface integrals and prescribe the "interpolation practice" from the cell centres onto the faces. Given these values, we assemble the balance equation for each CV, which directly produces the algebraic system. Once we solve the system, we can calculate the face values using the same interpolation, and our governing equations will be satisfied over each CV. Here there is no variational principle AND we get something I call "conservative fluxes" - for each face I can calculate the unique flux which satisfies the governing equation to SOLVER TOLERANCE (and not to discretisation error).
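A minimal 1-D finite-volume sketch of the "conservative fluxes" point (steady diffusion with a source and zero boundary values; the set-up is invented): after the algebraic system is solved, face fluxes recomputed with the same interpolation close every control-volume balance to solver tolerance, here round-off.

```python
# 1-D FV diffusion: -d/dx(du/dx) = s, u = 0 at both ends, uniform mesh.
# After solving, recompute the face fluxes with the same interpolation and
# check that each control-volume balance closes to round-off.
import numpy as np

n, L = 40, 1.0
dx = L / n
xc = (np.arange(n) + 0.5) * dx
source = np.sin(np.pi * xc) * dx               # source integrated over each CV

A = np.zeros((n, n))
for i in range(n):
    aW = 1.0 / dx if i > 0 else 2.0 / dx       # boundary faces sit at half a cell
    aE = 1.0 / dx if i < n - 1 else 2.0 / dx
    A[i, i] = aW + aE
    if i > 0:
        A[i, i - 1] = -1.0 / dx
    if i < n - 1:
        A[i, i + 1] = -1.0 / dx
u = np.linalg.solve(A, source)

flux = np.empty(n + 1)                          # diffusive flux q = -du/dx at each face
flux[0] = -(u[0] - 0.0) / (dx / 2)
flux[-1] = -(0.0 - u[-1]) / (dx / 2)
flux[1:-1] = -(u[1:] - u[:-1]) / dx

balance = flux[1:] - flux[:-1] - source         # q_east - q_west - S for every CV
print("worst control-volume imbalance:", np.max(np.abs(balance)))
```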

I can show that there is a way from FVM to FEM and back, but it's not as simple as it seems (the problem is how to get conservative fluxes from an FEM solution; it has been solved by Kelly, although he didn't realise it), and because of the conservation property of FVM I am a bit reluctant to call the methods the same unless the audience is an absolute expert! However, in the process of FVM discretisation we "lose" the FVM equivalent of such a residual - that's what my paper is about, so let's leave it there.

I'd like to come back to my error estimate again. What I need is a property which, even for unsteady flow, will flatten out and tell me not to bother solving any more. Also, if I have an incurable numerical instability (i.e. my numerics won't let me solve any tighter), the property should stay constant. How? Well, in a situation similar to the test case above, vortices are moving through the domain BUT the total sum of the "rate of change" in the domain stays approximately the same (it just changes location!), so the error estimate will flatten out. I would still like to play with this, but current experience is very good.

Finally, I'd like to comment on your reference to an "appropriate normed space". I had a lot of trouble with this, mainly with the mathematicians who happen to review my papers. My audience is engineers in industry, and I am trying to build a simple, robust and intuitive tool for everyday use. The error estimate is therefore dimensioned to be the same as the field you're solving for, i.e. for velocity you get the error in m/s, for temperature in Kelvin, etc. I believe that in practice "appropriately normed spaces" are very dangerous (and best left to mathematicians) because they will mislead your average engineer: consider what happens when you tell the aerodynamics guy that his calculation is "out by 1500 watts" (i.e. the magnitude of the error norm for the momentum equation, in terms of the local minimum of total dissipation in the domain, is out by 1500). At best you will be ignored and at worst misunderstood ("does this mean I can save 1500 watts on engine power?"). So, even if I lose properties like "Galerkin orthogonality of the error estimate to the test space" (Angermann 1998) (which in fact I don't!), I will stick to the simplest forms of error estimate I can produce.

You can get some of my papers from prof. Gosman's CFD group web site:

http://monet.me.ic.ac.uk

and I would be grateful for any comments you may want to make.

Regards,

Hrvoje Jasak

March 28, 2000, 09:31   #20
Re: Convergence to 0.001?
Jack Keays
Hi!

Could anyone tell me the names of the paper(s) which talk about this 0.001 convergence criterion? I would be interested to understand it better.

Thanks, Jack.
