CFD Online Discussion Forums


aerosayan January 26, 2021 03:51

Accurate solutions on badly skewed meshes
 
How would you get accurate solutions on a badly skewed mesh as shown in the picture?
Share your best hypothesis if you don't know the exact answer.



Image credit: senior member arjun

FMDenaro January 26, 2021 05:09

Quote:

Originally Posted by aerosayan (Post 794455)
How would you get accurate solutions on a badly skewed mesh as shown in the picture?
Share your best hypothesis if you don't know the exact answer.



Image credit: senior member arjun




I understand the reason for your question, but it cannot be answered without first addressing what you mean in this post by "accurate solution".

A first-order scheme provides an accurate solution if the local truncation error vanishes as O(h). But the solution needs to be smooth, so that all derivatives are at least O(1). Furthermore, testing the correct scaling of the error requires that the grid is sufficiently fine. If you get that, you have an accurate solution.
The same reasoning is valid for any scheme of formal order p.




Thus, what do you want to assess on a single skewed grid?

You can get a stable solution, especially if an integral-based FV method is used and a proper flux reconstruction is introduced. If you have an exact solution you can also test the discretization error (but remember that the volume-averaged cell value is a second-order approximation of the pointwise value only if it is evaluated at the centroid; otherwise you have a first-order accurate representation).

But a single value of the discretization error on one grid says almost nothing.
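The scaling test described above can be illustrated with a minimal sketch (the one-sided difference and the sample function here are hypothetical stand-ins for a full discretization): refine the grid once, and check that the observed order approaches the formal order.

```python
import math

def one_sided_derivative(f, x, h):
    # First-order one-sided difference: truncation error vanishes as O(h),
    # provided f is smooth so that its derivatives are O(1).
    return (f(x + h) - f(x)) / h

exact = math.cos(1.0)  # d/dx sin(x) at x = 1

e_coarse = abs(one_sided_derivative(math.sin, 1.0, 1.0e-2) - exact)
e_fine = abs(one_sided_derivative(math.sin, 1.0, 5.0e-3) - exact)

# Observed order from two grid levels: p = log(e_coarse/e_fine) / log(2).
# For a formally first-order scheme on a sufficiently fine grid, p -> 1.
p = math.log(e_coarse / e_fine) / math.log(2.0)
print(round(p, 2))
```

As FMDenaro notes, the check is only meaningful once the grid is fine enough that the leading error term dominates.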

aerosayan January 26, 2021 05:50

Quote:

Originally Posted by FMDenaro (Post 794461)
I understand the reason for your question, but it cannot be answered without first addressing what you mean in this post by "accurate solution".

A first-order scheme provides an accurate solution if the local truncation error vanishes as O(h). But the solution needs to be smooth, so that all derivatives are at least O(1). Furthermore, testing the correct scaling of the error requires that the grid is sufficiently fine. If you get that, you have an accurate solution.
The same reasoning is valid for any scheme of formal order p.




Thus, what do you want to assess on a single skewed grid?

You can get a stable solution, especially if an integral-based FV method is used and a proper flux reconstruction is introduced. If you have an exact solution you can also test the discretization error (but remember that the volume-averaged cell value is a second-order approximation of the pointwise value only if it is evaluated at the centroid; otherwise you have a first-order accurate representation).

But a single value of the discretization error on one grid says almost nothing.


I only got into solver development about a year ago, so I don't know many of the details. But I had this issue in a closed-source solver I was working with, where a C grid was being used by an FVM solver. The flow across the thin cells in the wake region just after the trailing edge was causing a lot of error in the solution. The C grid was originally produced for an FDM solver, so there were a lot of thin cells in the boundary layer and wake regions.

Maybe after I know more about this topic, I will come back here to clear my doubts.

For now, anything you shared is helpful.

Thanks.

FMDenaro January 26, 2021 05:59

Quote:

Originally Posted by aerosayan (Post 794464)
I only got into solver development about a year ago, so I don't know many of the details. But I had this issue in a closed-source solver I was working with, where a C grid was being used by an FVM solver. The flow across the thin cells in the wake region just after the trailing edge was causing a lot of error in the solution. The C grid was originally produced for an FDM solver, so there were a lot of thin cells in the boundary layer and wake regions.

Maybe after I know more about this topic, I will come back here to clear my doubts.

For now, anything you shared is helpful.

Thanks.




It is difficult to say anything about your specific problem without knowing the "error" you are addressing and the solver you used. A C grid produced for FDM should be quite regular in the computational plane, so you should see the problems in the metrics.

praveen January 26, 2021 07:32

Just ask https://twitter.com/HiroNishikawa on Twitter. I am sure he will be happy to answer your question.

arjun January 26, 2021 08:31

1. Accuracy of discretization

This is the main factor in stability. If what you are doing is not correct, then sometimes it will work and sometimes it won't.

2. Consistency

Here, after every iteration of the solve, the solution should move in one direction. If the corrections produced are rough or change too much, the solver might not recover and may diverge.

3. Limiting

To achieve (2), one often has to limit the corrections or the solution. Gradient limiters are mainly used for this purpose.

In all, do not add any extra term to achieve stability. The final solution should not depend on the path taken to arrive at it. Stability makes sure that you arrive at the final solution.

These are the main factors. I cannot write out the details, but you should think over these things and how to achieve them. (I have spent 15 years thinking about this by now.)
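To make point 3 concrete, here is a minimal sketch of a Barth-Jespersen-style gradient limiter applied to a single cell. The function name, cell geometry, and values are made up for illustration; this is one common limiter choice, not necessarily the scheme arjun has in mind.

```python
def barth_jespersen_limiter(u_c, u_neighbors, grad_u, d_faces):
    """Scalar limiter phi in [0, 1] for one 2D cell.

    u_c         : cell-centre value
    u_neighbors : values in the face neighbours
    grad_u      : unlimited cell gradient (gx, gy)
    d_faces     : vectors from the cell centre to each face centre
    """
    u_max = max([u_c] + u_neighbors)
    u_min = min([u_c] + u_neighbors)
    phi = 1.0
    for dx, dy in d_faces:
        # Unlimited linear reconstruction increment at the face centre.
        delta = grad_u[0] * dx + grad_u[1] * dy
        if delta > 0.0:
            phi = min(phi, (u_max - u_c) / delta)
        elif delta < 0.0:
            phi = min(phi, (u_min - u_c) / delta)
    # Scale the gradient so reconstructed face values stay within the
    # range of neighbouring cell values (no new extrema).
    return max(0.0, min(1.0, phi))

phi = barth_jespersen_limiter(
    u_c=1.0,
    u_neighbors=[0.9, 1.2],
    grad_u=(0.3, 0.0),
    d_faces=[(-0.5, 0.0), (0.5, 0.0)],
)
print(phi)  # < 1: the gradient is clipped to avoid undershooting 0.9
```

On a smooth monotone field the limiter stays near 1; it clips the gradient only where the reconstruction would create a new extremum.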

arjun January 26, 2021 08:36

Quote:

Originally Posted by aerosayan (Post 794464)
I only got into solver development about a year ago, so I don't know many of the details. But I had this issue in a closed-source solver I was working with, where a C grid was being used by an FVM solver. The flow across the thin cells in the wake region just after the trailing edge was causing a lot of error in the solution. The C grid was originally produced for an FDM solver, so there were a lot of thin cells in the boundary layer and wake regions.

Maybe after I know more about this topic, I will come back here to clear my doubts.

For now, anything you shared is helpful.

Thanks.




Gradient limiting in thin cells is tough, and many times the algorithm fails to limit them. This causes problems in many cases.

aerosayan January 27, 2021 06:23

Quote:

Originally Posted by arjun (Post 794481)
1. Accuracy of discretization

This is the main factor in stability. If what you are doing is not correct, then sometimes it will work and sometimes it won't.

2. Consistency

Here, after every iteration of the solve, the solution should move in one direction. If the corrections produced are rough or change too much, the solver might not recover and may diverge.

3. Limiting

To achieve (2), one often has to limit the corrections or the solution. Gradient limiters are mainly used for this purpose.

In all, do not add any extra term to achieve stability. The final solution should not depend on the path taken to arrive at it. Stability makes sure that you arrive at the final solution.

These are the main factors. I cannot write out the details, but you should think over these things and how to achieve them. (I have spent 15 years thinking about this by now.)


Thanks Arjun,


Although this research isn't the highest priority for me right now (and I'm probably not experienced enough to do it yet), your explanation will be helpful in the future.


Since I would like to use this method without requiring 10 years of research, this is how I plan to approach the problem (at least for designing the gradient limiter):


1. DATA1 = Record the correct flow solution in a simple domain, using good quality cells.
2. DATA2 = Record the wrong flow solution in a simple domain, using badly skewed cells.
3. Use Machine Learning to design a gradient limiter that minimizes the error = ABS(DATA1-DATA2)
4. Fit the gradient limiter curve with an equation, and use that equation as the limiter when building other solvers.
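Step 4 of the plan above could be sketched as a simple curve fit. In this minimal illustration the "learned" limiter data is synthesized from a van Albada-shaped curve, and a one-parameter candidate family is fitted by a coarse parameter scan; everything here (data, model form, scan) is a made-up stand-in for the actual ML pipeline.

```python
def van_albada(r):
    # Stand-in for the limiter values that steps 1-3 would "learn"
    # from comparing DATA1 and DATA2 (here generated synthetically).
    return (r * r + r) / (r * r + 1.0)

samples = [(0.1 * k, van_albada(0.1 * k)) for k in range(1, 31)]

def model(r, a):
    # Candidate one-parameter limiter family (van Albada-like),
    # to be fitted against the sampled data.
    return (r * r + a * r) / (r * r + a)

def sse(a):
    # Sum of squared errors between the model and the samples.
    return sum((model(r, a) - phi) ** 2 for r, phi in samples)

# Coarse parameter scan in place of a full ML optimizer; the fit
# should recover a = 1, since the data came from that curve.
best_a = min((0.1 * k for k in range(1, 51)), key=sse)
print(best_a)
```

A real version would of course fit against solver output rather than a known curve, and would need to check that the fitted limiter still preserves monotonicity.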



Maybe I'm wrong here; once I get more experience I will try other things. :)


Thanks and regards
~sayan

sbaffini January 27, 2021 07:46

Quote:

Originally Posted by aerosayan (Post 794585)
Since I would like to use this method without requiring 10 years of research...

... 3. Use Machine Learning to ...

I see a pattern here ;)

arjun January 27, 2021 10:01

In 2005 I wrote my first Navier-Stokes solver on an unstructured mesh. It took me a year to understand it all first. I found that on anything other than simple hex meshes the solver was very unstable. At that time I used Fluent to compare, so I could see that Fluent was able to run the same cases without any issues.

So I wanted to learn how to make it stable, and ever since I have been experimenting. From that time on I have always created very bad, even worst-case, meshes to see whether the solution is stable there.

Once I am done with Wildkatze I intend to write all of that up in a small book. The reason is that there is a lot I have learned along the way, and no book or paper teaches this aspect. I feel one of the major reasons is that most of this writing is done by people in academia, who most often work with simple meshes or test cases. The most they say about it is to create a good mesh.

With Wildkatze the aim is to provide a solver for industry, so obtaining solutions on tough meshes is a priority. So we actually have a reason to learn this aspect of numerical methods too.

PS: Over the last 2 years I have also created a good interface-tracking method, and the thinking behind the scheme is very different from how it is traditionally done. The formulation does not involve the Courant number or blending based on surface angle. It removes two major issues of VOF for implicit schemes. This also needs to be exposed to other people so that it can be improved further.

arjun January 27, 2021 10:06

Quote:

Originally Posted by sbaffini (Post 794587)
I see a pattern here ;)


In 1999 my final-year project was predicting particle size distribution using neural nets (which I wrote). Now it feels like going back in time, seeing how they are trying AI everywhere.

aero_head January 27, 2021 10:35

Hello Arjun,

Have you published a paper about this? I am involved in some research involving particle tracking, and am always interested to see AI/machine learning in the fields I study.

FMDenaro January 27, 2021 11:20

Quote:

Originally Posted by arjun (Post 794598)
Once I am done with Wildkatze I intend to write all of that up in a small book. The reason is that there is a lot I have learned along the way, and no book or paper teaches this aspect. I feel one of the major reasons is that most of this writing is done by people in academia, who most often work with simple meshes or test cases. The most they say about it is to create a good mesh.


Actually, in academia too the use of unstructured grids was of real interest. When I did my PhD thesis in 1995 I also worked on developing an in-house NS code for compressible/incompressible flows, testing the resulting accuracy on several types of unstructured grids. At that time I was also at the VKI, and people like T. Barth and H. Deconinck were already heavily involved in practice. During those years the first LES applications on unstructured grids, by K. Jansen at the CTR, also appeared.

I agree that it is only by working personally on such problems that one acquires some specific experience.

arjun January 28, 2021 00:33

Quote:

Originally Posted by FMDenaro (Post 794605)
Actually, in academia too the use of unstructured grids was of real interest. When I did my PhD thesis in 1995 I also worked on developing an in-house NS code for compressible/incompressible flows, testing the resulting accuracy on several types of unstructured grids. At that time I was also at the VKI, and people like T. Barth and H. Deconinck were already heavily involved in practice. During those years the first LES applications on unstructured grids, by K. Jansen at the CTR, also appeared.

I agree that it is only by working personally on such problems that one acquires some specific experience.



Most research and development can be attributed to academia. For example, we see work on gradient limiters for non-uniform meshes, and now there is a lot of work on higher-order methods.
Still, the original point stands that most of the work in academia is done on simple meshes, and when the solver does not converge the usual advice is to improve the mesh.


My opinion is that the people who develop the solver should also try, and not always leave it to the user to improve the mesh. Also, commercial solvers, for example Fluent and STAR-CCM+, are very robust. So we have indeed made some progress here.

arjun January 28, 2021 01:14

Quote:

Originally Posted by aero_head (Post 794602)
Hello Arjun,

Have you published a paper about this? I am involved in some research involving particle tracking, and am always interested to see AI/machine learning in the fields I study.




You mean the particle size distribution from my final-year thesis?

Then the answer is no. I had no interest in chemical engineering back then. All I wanted to do was join some software company and have nothing to do with chemical engineering, which was my area of graduation.

I actually ended up in CFD by accident, and until I was assigned to work on CFD I had not even heard the name.


PS: I personally have never published. I have my name on some patents, but all the writing there was done by someone else too.

aero_head January 28, 2021 09:18

Quote:

Originally Posted by arjun (Post 794639)
You mean the particle size distribution from my final-year thesis?

Then the answer is no. I had no interest in chemical engineering back then. All I wanted to do was join some software company and have nothing to do with chemical engineering, which was my area of graduation.

I actually ended up in CFD by accident, and until I was assigned to work on CFD I had not even heard the name.


PS: I personally have never published. I have my name on some patents, but all the writing there was done by someone else too.

Hello Arjun,

Yes, that was what I was asking about.

Interesting. I guess the moral of the story is that we never know where we are going to end up in our professional lives.

FMDenaro January 28, 2021 11:29

Quote:

Originally Posted by arjun (Post 794635)
Most research and development can be attributed to academia. For example, we see work on gradient limiters for non-uniform meshes, and now there is a lot of work on higher-order methods.
Still, the original point stands that most of the work in academia is done on simple meshes, and when the solver does not converge the usual advice is to improve the mesh.


My opinion is that the people who develop the solver should also try, and not always leave it to the user to improve the mesh. Also, commercial solvers, for example Fluent and STAR-CCM+, are very robust. So we have indeed made some progress here.




I think that the discussion about a bad grid is too general. Speaking about convergence is also general: is it convergence towards a steady state, or convergence of an iterative method within a time-dependent algorithm? The nature of the problem lies not only in the grid but in its coupling with the chosen discretization of the equations.
For example, what happens if we use a bad unstructured grid on a simple geometry for which we have enough data to compare the results?

arjun January 28, 2021 13:15

Quote:

Originally Posted by FMDenaro (Post 794703)
I think that the discussion about a bad grid is too general. Speaking about convergence is also general: is it convergence towards a steady state, or convergence of an iterative method within a time-dependent algorithm? The nature of the problem lies not only in the grid but in its coupling with the chosen discretization of the equations.
For example, what happens if we use a bad unstructured grid on a simple geometry for which we have enough data to compare the results?




That's why I said that when I get time I wish to write a small note (book!) about it. It's too wide a topic, with multiple aspects.

Quote:

Originally Posted by FMDenaro (Post 794703)
For example, what happens if we use a bad unstructured grid on a simple geometry for which we have enough data to compare the results?

I think that if the end results are not of acceptable accuracy, then convergence has no meaning. So when I think of convergence, I think of results that should not be too different from what one would get with good meshes.

arjun January 28, 2021 13:18

Quote:

Originally Posted by aero_head (Post 794684)
Hello Arjun,

Yes, that was what I was asking about.

Interesting, I guess the moral of the story is that we never know where we are going to end up in our professional lives.


Yes, pretty much. But thanks to your comment I have now started thinking about it again for particle size distribution. I spent 2 months on population balance and could not find a reliable method to offer, so I put it aside until I could come up with one. Maybe neural nets might help here. My problem is a reliable moment-inversion methodology. So far I do not like what I have.

FMDenaro January 28, 2021 13:29

Quote:

Originally Posted by arjun (Post 794714)
That's why I said that when I get time I wish to write a small note (book!) about it. It's too wide a topic, with multiple aspects.



I think that if the end results are not of acceptable accuracy, then convergence has no meaning. So when I think of convergence, I think of results that should not be too different from what one would get with good meshes.




OK, you are talking about physically relevant solutions. I think that the key is in understanding the specific problems arising from the formulation. Using FEM, FVM, or SEM could be different, and even more so if, for a given formulation, one considers different orders of accuracy.



Have you ever tried to generate a bad mesh for solving a flow in a simple geometry test case, like a backward-facing step or pipe/channel flow?
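One quick way to set up such a test is to deliberately skew a uniform quad mesh by randomly displacing its interior nodes. A minimal sketch (the function name and perturbation amplitude are arbitrary choices for illustration):

```python
import random

def skewed_grid(nx, ny, amp=0.45, seed=0):
    """Nodes of an (nx+1) x (ny+1) grid on [0,1]^2, with interior
    nodes randomly displaced by up to `amp` of the cell size.
    This produces skewed, non-orthogonal quadrilateral cells while
    keeping the boundary nodes fixed on the domain boundary."""
    rng = random.Random(seed)
    hx, hy = 1.0 / nx, 1.0 / ny
    nodes = []
    for j in range(ny + 1):
        for i in range(nx + 1):
            x, y = i * hx, j * hy
            if 0 < i < nx and 0 < j < ny:  # perturb interior nodes only
                x += amp * hx * rng.uniform(-1.0, 1.0)
                y += amp * hy * rng.uniform(-1.0, 1.0)
            nodes.append((x, y))
    return nodes

nodes = skewed_grid(8, 8)
print(len(nodes))  # 81 nodes for an 8 x 8 cell grid
```

Running the same case on the unperturbed and perturbed grids then isolates the effect of skewness, since the geometry and resolution are otherwise identical.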

