CFD Online Discussion Forums

Thread: Artificial Intelligence in CFD (https://www.cfd-online.com/Forums/main/205618-artificial-intelligence-cfd.html)

Naresh yathuru August 22, 2018 05:04

Artificial Intelligence in CFD
 
Hello Everyone,

I recently came across an article about using an Artificial Neural Network (ANN) with backpropagation to predict the heat transfer of heat sinks.
One of Many:
https://www.researchgate.net/profile...l-material.pdf

The oldest article I found dates back quite far (to around 1996). I see the following advantages and disadvantages:
Advantages:
1. An ANN can recognise patterns from previous work and predict new values.
2. It can be more accurate than correlations.
3. It could eventually also help in optimization.

Disadvantages:
1. Extensive work has to be done to generate the training data.
2. Predicting turbulence using patterns is very difficult.
3. Depending on the number of parameters, the number of CFD simulations needed to generate the training data can be enormous.

Having said that, the question is whether it is worth training a neural network on thousands of CFD simulations, or better to invest the computational power in directly simulating the required case.
I would like to know if there is any ongoing work in this field and to hear your ideas on this topic.

Best regards,
Naresh

vesp August 22, 2018 18:05

Dear Naresh,
you are right about the challenges and open issues with ANNs. Here is one example that answers some of the questions you raised and IMO shows the potential of ML: https://www.researchgate.net/publication/325737916_Deep_Neural_Networks_for_Data-Driven_Turbulence_Models

best,
Vesparian

LuckyTran August 23, 2018 12:54

I would like to point out that an ANN is not a solver like Fluent/OpenFOAM/Star-CCM. An ANN is more like a brain / decision maker. You have a history of results and you would like to run more cases; an ANN can help you decide which cases to run next in a smart way. Your CFD doesn't suddenly become more accurate because you are using a neural network (you should be able to run the exact same case without a neural network and get the same solution). I would never say that an ANN predicts a solution; that's what the actual CFD does.



I don't think ANNs are any godsend, but they are a computerized version of what we do as humans, so I see a lot of value in solving repetitive problems with an ANN approach in order to cut out the human behind it. Much of that human thinking can be digitized in an ANN framework because it is not a binary decision tree (e.g. fuzzy logic works well with ANNs).

Quote:

Originally Posted by Naresh yathuru (Post 703482)

1. An ANN can recognise patterns from previous work and predict new values.
2. It can be more accurate than correlations.
3. It could eventually also help in optimization.


1. You have to train the ANN to recognize the pattern, so I don't see it as an advantage. It's a requirement of using an ANN.

2. I don't see an ANN as providing any accuracy benefit, or accuracy as being an advantage of ANNs. It's not a solver.
3. This is quite redundant. Of course an ANN can be applied here; it's like saying calculators will help in optimization one day.


Quote:

Originally Posted by Naresh yathuru (Post 703482)

Disadvantages:
1. Extensive work has to be done to generate the training data.
2. Predicting turbulence using patterns is very difficult.
3. Depending on the number of parameters, the number of CFD simulations needed to generate the training data can be enormous.


1. Yes, the point of an ANN is that it can be trained, and training takes a long time if you don't know what you are doing. But look at AlphaZero and how straightforward it can be to make a very powerful chess engine using simple training rules. If it works, do it! If it doesn't, don't. The difficulty in CFD is that it is not so simple to judge the quality of a result you get from it. And a lot of the time we are optimizing for multiple objectives and we don't even know how we want to balance or weigh those objectives (unlike chess, where the outcome is win/draw/lose).

2. This is not a drawback specific to ANNs. Humans have the same difficulty. And this tends to happen in fields where your predictive models are not very predictive. If turbulence models worked all the time, it would be a cakewalk. It's not the ANN's fault that CFD is not as predictive as it needs to be. It's also not the ANN's fault that we still don't know how to model turbulence well.
3. If you want a complex neural network, yes, it will take a lot. But this training takes place in computer time and not human time.



Quote:

Originally Posted by Naresh yathuru (Post 703482)

Having said that, the question is whether it is worth training a neural network on thousands of CFD simulations, or better to invest the computational power in directly simulating the required case.
I would like to know if there is any ongoing work in this field and to hear your ideas on this topic.

Best regards,
Naresh


I guess it's not so obvious that we should consider economies of scale. The number of cases that you can run is exactly proportional to your processing power. The number of cases that you don't need to run because you have a trained neural network can vastly exceed that. I.e. you can train your neural network using 1000 cases to avoid running 1 million simulations. Maybe you find the optimal solution on the 1001st or 1002nd run, I don't know. But if you only need to run 10 cases to find your answer, then there's no point in training a neural network. Use the correct tool for the correct problem.
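
To make that economics concrete, here is a minimal Python/numpy sketch of the idea: fit a cheap surrogate to a modest number of expensive runs, then scan a huge candidate space with the surrogate and spend the next real simulation only on the most promising design. Everything in it (the toy "expensive_cfd" objective, the quadratic response surface, the parameter ranges) is an illustrative assumption, not something taken from this thread or any particular paper.

Code:

import numpy as np

# Toy stand-in for an expensive CFD run: returns one scalar objective
# (e.g. thermal resistance of a heat sink) for a 2-parameter design.
def expensive_cfd(x):
    return (x[0] - 0.3) ** 2 + 2.0 * (x[1] - 0.7) ** 2 + 0.1 * np.sin(8 * x[0])

rng = np.random.default_rng(0)

# The "1000 training cases" (here only 50, to keep the sketch cheap).
X_train = rng.uniform(0.0, 1.0, size=(50, 2))
y_train = np.array([expensive_cfd(x) for x in X_train])

# Cheap surrogate: a quadratic response surface fitted by least squares.
def features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

coef, *_ = np.linalg.lstsq(features(X_train), y_train, rcond=None)

# Scan a large candidate set with the surrogate instead of the solver...
X_cand = rng.uniform(0.0, 1.0, size=(100_000, 2))
y_pred = features(X_cand) @ coef
best = X_cand[np.argmin(y_pred)]

# ...and spend the next real CFD run only on the most promising design.
print("surrogate suggests running the case at", best)
print("true objective there:", expensive_cfd(best))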


Not discussed is what happens if you do not use a neural network but a different optimization technique, like basic gradient-based searching or genetic algorithms. Well, an ANN is a supplement to those approaches. You don't gain anything by not using an ANN, but you also don't lose or break anything either.

vesp August 25, 2018 15:16

I believe that the strength of ANNs in CFD can be in model deduction, not in replacing the solver itself. While that is also an intriguing thought, enforcing the governing equations indirectly through an ANN seems dicey to me.

Simbelmynė August 26, 2018 08:12

This YouTube channel covers lots of AI papers that involve CFD. For example:

https://www.youtube.com/watch?v=iOWamCtnwTc

sbaffini August 26, 2018 10:30

Neural Networks are just an interpolation method, whose main advantage comes in those fields where setting up the interpolation problem is a problem in itself.

Their use in optimization is a very old topic, which, however, has hardly met great acceptance in practical cases because of the availability of techniques that are more efficient and mathematically better grounded (i.e., with known error bounds, etc.). Still, they have their share of use cases.

Unfortunately, today, the unaware reader has to filter all the craziness that has exploded around this field in the last 10 years (mostly because of the flattened risk curve due to quantitative easing, which has made every business appear viable, especially niche nerdy ones... which might also be good, but we are going off topic here). Reading an authoritative and comprehensive book on Neural Networks is the only defense for such readers; and this is doable in a week for engineers dealing with CFD (i.e., those already equipped with the mathematical tools).

The use of neural networks as a surrogate for turbulence models (as opposed to the previous use case, where they are used to predict global integral quantities) is yet another case which is both old and misleading to the unaware reader. An NN can only extract information that is present in the input and map it onto its training set. Using an NN as a turbulence model is never going to fill the gap between what we know (i.e., what is currently resolved on your grid) and what we don't (i.e., all the unresolved scales). This is even more so if you consider that the number of degrees of freedom of a turbulent flow increases with the Re number. If the unresolved scales were predictable from the few resolved ones, then turbulence would not be a problem at all.

Finally, using an NN as a full solver (as per the last link) is yet another use case, which, however, is relevant to the entertainment industry. Still, it is similar to the previous one. Honestly, as a scientist, I won't consider any numerical simulation technique without a clear, proven method to control the resulting error and a way to eventually reduce it to zero. Funny and intriguing? Maybe. But please do not confuse science with something else.

Unfortunately, these days are also plagued by a lot of automatic, real-time, user-centric tools, but they all forget about science. It seems that nobody cares anymore about convergence and accuracy, only stability and speed.

vesp August 26, 2018 11:12

Quote:

Originally Posted by sbaffini (Post 704019)
Neural Networks are just an interpolation method, whose main advantage comes in those fields where setting up the interpolation problem is a problem in itself.

Well, they are highly non-linear interpolation methods with trainable coefficients. So calling them "just an interpolation" is maybe too simplistic. I would also argue that turbulence modelling is exactly such a non-linear interpolation task. Given the (few) coarse grid data points, a reconstruction of the sub grid information is sought.
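
To illustrate what "non-linear interpolation with trainable coefficients" means in its most stripped-down form, here is a minimal numpy sketch of a single-hidden-layer network fitted to 1D data by plain gradient descent. The data, the network size, the learning rate and the number of steps are all arbitrary illustrative choices; a real subgrid model would of course look nothing like this.

Code:

import numpy as np

rng = np.random.default_rng(1)

# Data to "interpolate": a smooth 1D function sampled at a few points.
x = np.linspace(-1.0, 1.0, 20).reshape(-1, 1)
y = np.sin(3.0 * x)

# One hidden layer of tanh units: y_hat = W2 @ tanh(W1 x + b1) + b2.
# The weights are the "trainable coefficients" of the interpolant.
n_hidden = 16
W1 = rng.normal(scale=1.0, size=(1, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, 1))
b2 = np.zeros(1)

lr = 0.05
for _ in range(10000):
    h = np.tanh(x @ W1 + b1)
    y_hat = h @ W2 + b2
    err = y_hat - y
    # Gradients of the mean-squared error by backpropagation.
    grad_y = 2.0 * err / len(x)
    gW2 = h.T @ grad_y
    gb2 = grad_y.sum(axis=0)
    grad_h = grad_y @ W2.T * (1.0 - h**2)
    gW1 = x.T @ grad_h
    gb1 = grad_h.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Rough check between training points (plain gradient descent, so expect
# an approximate match, not an exact one).
x_new = np.array([[0.123]])
pred = (np.tanh(x_new @ W1 + b1) @ W2 + b2).item()
print("network:", pred, " target:", np.sin(3.0 * 0.123))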



Quote:

Originally Posted by sbaffini (Post 704019)
The use of neural networks as a surrogate for turbulence models (as opposed to the previous use case, where they are used to predict global integral quantities) is yet another case which is both old and misleading to the unaware reader.

It seems rather new to me - do you happen to know any old publications regarding this?


Quote:

Originally Posted by sbaffini (Post 704019)
An NN can only extract information that is present in the input and map it onto its training set. Using an NN as a turbulence model is never going to fill the gap between what we know (i.e., what is currently resolved on your grid) and what we don't (i.e., all the unresolved scales).


I have to disagree here; please see also the link provided above. Of course, NNs cannot create something from nothing, but they can be trained to approximate the mapping from, e.g., coarse-grid data to DNS-grid data. If you are familiar with the deconvolution approaches to turbulence modelling, this is just another method of doing that. NNs cannot fill the gap with 100% accuracy, but they can learn a darn good approximation.



Quote:

Originally Posted by sbaffini (Post 704019)
This is even more so if you consider that the number of degrees of freedom of a turbulent flow increases with the Re number. If the unresolved scales were predictable from the few resolved ones, then turbulence would not be a problem at all.


I agree, how large a net has to be to achieve some form of universal closure model is an open issue.



Quote:

Originally Posted by sbaffini (Post 704019)
Finally, using an NN as a full solver (as per the last link) is yet another use case, which, however, is relevant to the entertainment industry. Still, it is similar to the previous one. Honestly, as a scientist, I won't consider any numerical simulation technique without a clear, proven method to control the resulting error and a way to eventually reduce it to zero. Funny and intriguing? Maybe. But please do not confuse science with something else.

This is a little harsh. There is much research currently going on that aims at providing error bounds and incorporating, e.g., conservation into NNs. So let us wait and see, and keep an open mind.

sbaffini August 28, 2018 07:02

First of all, of course, I didn't want to be harsh (maybe just to sound imperative for educational purposes), so don't take any of this personally...

...unless you are the author of the paper at the link you posted.

In that case, I'm sorry, but that doesn't seem like JFM material at all to me. Not to mention the considerable confusion about when commutation holds in LES.

However, just for the sake of the argument...

Quote:

Originally Posted by vesp (Post 704021)
Well, they are highly non-linear interpolation methods with trainable coefficients. So calling them "just an interpolation" is maybe too simplistic. I would also argue that turbulence modelling is exactly such a non-linear interpolation task. Given the (few) coarse grid data points, a reconstruction of the sub grid information is sought.

I think that having "trainable" coefficients might be the very idea underlying interpolation. Also, Radial Basis Functions existed before their application in NNs.
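
For comparison, here is what classic Radial Basis Function interpolation looks like, with no neural network involved. The kernel, its width and the test function are arbitrary illustrative choices.

Code:

import numpy as np

# Classic RBF interpolation: the interpolant is s(x) = sum_j w_j * phi(|x - x_j|),
# and the weights w_j are found by requiring s to pass exactly through the data.
def gaussian_rbf(r, eps=5.0):
    return np.exp(-(eps * r) ** 2)

x_data = np.linspace(0.0, 1.0, 10)
y_data = np.sin(2.0 * np.pi * x_data)

# Interpolation matrix Phi[i, j] = phi(|x_i - x_j|) and the resulting weights.
Phi = gaussian_rbf(np.abs(x_data[:, None] - x_data[None, :]))
w = np.linalg.solve(Phi, y_data)

# Evaluate the interpolant at a new point and compare with the true function.
x_new = 0.37
s_new = gaussian_rbf(np.abs(x_new - x_data)) @ w
print(s_new, np.sin(2.0 * np.pi * x_new))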

And no, turbulence modeling is not a non-linear interpolation task. It refers to the fact that, on a certain grid, if not a DNS one, you have missing information and, more importantly, truncated dynamics. Turbulence modeling, in its usual sense, refers to the development of a surrogate model for the missing dynamics (i.e., the role of the missing scales in the overall flow dynamics). More on this later.


Quote:

Originally Posted by vesp (Post 704021)
It seems rather new to me - do you happen to know any old publications regarding this?

https://s3.amazonaws.com/academia.ed...scale_mode.pdf

Does this qualify as old? Note the submission date, please.

We could argue about the differences between the specific NNs adopted, but let's face it: people today are doing this stuff just because the money is there and the whole world is actually going there (just like for GPUs). There's nothing really new under the hood, except: money, availability of software, availability of hardware (in the sense of Amazon, not real hardware; that already existed).

Admittedly, I haven't followed this field a lot, and probably there are also some interesting ideas (maybe in RANS), but the bulk was already there.

Honestly, it is a shame that the work above was cited last in the work you cited. As it is a shame that, in the end, the supposedly working SGS model in the work you cited is just an eddy viscosity (really?) performing worse than static Smagorinsky on HIT.

It would be much more interesting if such NNs were used to analyze DNS data, which today are more abundant and larger than back then.

But the plague today is that nobody is using CPU power to gain knowledge, just to prove what they already know. We have increased both the availability and the power of computing systems by orders of magnitude over the years, yet no significant discovery has changed our lives with respect to 25 years ago.

Even if such an NN eventually provided a perfect turbulence model, would you expect it to provide any significant gain, by itself, to the engineering community?

Quote:

Originally Posted by vesp (Post 704021)
I have to disagree here; please see also the link provided above. Of course, NNs cannot create something from nothing, but they can be trained to approximate the mapping from, e.g., coarse-grid data to DNS-grid data. If you are familiar with the deconvolution approaches to turbulence modelling, this is just another method of doing that. NNs cannot fill the gap with 100% accuracy, but they can learn a darn good approximation.

I happen to be slightly familiar with the deconvolution concept in LES, and I have to reinforce the disagreement.

In LES we have sub-grid scales (SGS) and sub-filter scales (SFS).

The former (SGS) refer to scales that are not representable on your grid. As such, there is no trackable information associated with them that you can store somewhere in any form. In LES we talk about functional modeling in relation to the common turbulence modeling practice (as previously explained) of providing a model for such scales. It is called functional because, considering the total lack of information related to them, you can only hope to provide a model that dynamically works like them, that has the same functional role. Typically, for several reasons, we rely on a dissipative model (e.g., through an eddy viscosity). We are also obliged, for practical reasons, to use only the available information to formulate such models, but, it goes without saying, this has no sound reasoning behind it except that, by and large, it actually works.
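
As a heavily simplified illustration of such a dissipative, eddy-viscosity-type functional model, here is a minimal numpy sketch of the static Smagorinsky eddy viscosity evaluated from a resolved 2D velocity field. The synthetic velocity field and the constant C_s = 0.17 are illustrative assumptions only.

Code:

import numpy as np

# Static Smagorinsky eddy viscosity on a 2D grid:
# nu_t = (C_s * Delta)^2 * |S|,  with |S| = sqrt(2 S_ij S_ij),
# built only from the resolved velocity field (the "available information").
n = 64
L = 2.0 * np.pi
dx = L / n
x = np.linspace(0.0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")

# Synthetic resolved velocity field (Taylor-Green-like vortex, just for illustration).
u = np.cos(X) * np.sin(Y)
v = -np.sin(X) * np.cos(Y)

# Resolved strain-rate tensor from finite differences
# (np.gradient uses one-sided differences at the boundaries; fine for a sketch).
dudx, dudy = np.gradient(u, dx, dx)
dvdx, dvdy = np.gradient(v, dx, dx)
S11, S22 = dudx, dvdy
S12 = 0.5 * (dudy + dvdx)
S_mag = np.sqrt(2.0 * (S11**2 + S22**2 + 2.0 * S12**2))

C_s = 0.17          # typical textbook value of the static Smagorinsky constant
Delta = dx          # filter width taken as the grid spacing
nu_t = (C_s * Delta) ** 2 * S_mag

print("mean eddy viscosity:", nu_t.mean())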

The latter (SFS) refer instead to scales that are actually representable on your grid but, for some reason, have been altered. The information is there, but has been transformed.

Deconvolution is one among several techniques aimed at recovering those represented scales. It is not even always applicable. It requires the LES to be explicitly filtered, so that you know what actually altered those scales (the explicit filter) and can attempt an approximate, well-conditioned recovery (filter inversion).
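
A minimal numpy sketch of that idea, assuming a simple 3-point explicit filter on a periodic 1D field and a truncated van Cittert series for the approximate inversion (both choices are illustrative, not taken from any of the papers discussed here):

Code:

import numpy as np

# Explicit, invertible discrete filter on a periodic 1D field:
# a simple 3-point weighted average (the grid resolution is unchanged).
def explicit_filter(u):
    return 0.25 * np.roll(u, 1) + 0.5 * u + 0.25 * np.roll(u, -1)

# Approximate deconvolution by a truncated Neumann (van Cittert) series:
# u ~= sum_{k=0..N} (I - G)^k applied to the filtered field u_bar.
def approximate_deconvolution(u_bar, n_terms=5):
    u = np.zeros_like(u_bar)
    residual = u_bar.copy()
    for _ in range(n_terms + 1):
        u += residual
        residual = residual - explicit_filter(residual)
    return u

n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
u_exact = np.sin(x) + 0.3 * np.sin(5.0 * x)

u_bar = explicit_filter(u_exact)              # what an explicitly filtered LES "sees"
u_rec = approximate_deconvolution(u_bar)      # approximate filter inversion

print("filtered error   :", np.max(np.abs(u_bar - u_exact)))
print("deconvolved error:", np.max(np.abs(u_rec - u_exact)))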

You should also note that not all explicit filters are usable in this context. Projection filters are not, as they lead to a total loss of information which cannot be recovered (something that the authors of the paper you cited also seem not to know).

This has nothing to do with turbulence modeling, and indeed is inspired by other fields.

Let's make an example. You have a detailed picture of a conference with all the participants, with each pixel representing a square millimetre. Then you do two different operations on that picture.

In the first one, you blur it with a filter but leave the resolution unaltered.
In the second one, you just cut the pixel resolution to 50 cm.

By deconvolution, you can recover the first image but not the second one. You can teach an NN to do a deconvolution, but that wouldn't change the matter.

Now, you can also actually teach an NN to recover the full picture from the second, coarse one, but would it work for any conference with any combination of people?

You can train the NN with all the conference pictures ever taken, yet they all look the same at 50 cm resolution.

Quote:

Originally Posted by vesp (Post 704021)
I agree, how large a net has to be to achieve some form of universal closure model is an open issue.

I think we are no longer on the same page here. Let me go back to the previous example.

Imagine you trained your NN to work with 50 cm resolution pictures of DLES conferences (order 100 participants). Now you use your NN with 50 cm resolution pictures of AIAA conferences (order 1000 participants). Does it work the same?

That's how turbulence works as a function of the Re number. You get more and more people in the picture. You can maybe teach an NN how these people typically sit for such a picture, but not how they arrange themselves if, for a given venue, they had to sit differently because there were too many of them.


Turbulence is maybe not that drastic as a function of the Re number, but the universality concept implies that you know everything about every flow in every condition... and that this information is actually already contained in the resolved scales alone. It is like saying that the 50 cm resolution pictures can represent all the conference pictures ever made and that ever will be made. At this point I would also ask whether 50 cm is a special number or whether it applies to other resolutions (if you know what I mean here, in turbulence terms).

Quote:

Originally Posted by vesp (Post 704021)
This is a little harsh. There is much research currently going on that aims at providing error bounds and incorporating, e.g., conservation into NNs. So let us wait and see, and keep an open mind.

Yet not a single graph showing error reduction is shown in any paper. Quite strange, if such results actually exist.

And conservation has nothing to do with accuracy and convergence. Finite difference methods do not typically conserve stuff, yet they are accurate and convergent.

FMDenaro August 28, 2018 09:54

Quote:

Originally Posted by vesp (Post 704021)
It seems rather new to me - do you happen to know any old publications regarding this?

[...]

Of course, NNs cannot create something from nothing, but they can be trained to approximate the mapping from, e.g., coarse-grid data to DNS-grid data. If you are familiar with the deconvolution approaches to turbulence modelling, this is just another method of doing that. NNs cannot fill the gap with 100% accuracy, but they can learn a darn good approximation.

SGS models for LES based on NNs are quite old; see for example


https://www.sciencedirect.com/scienc...45793001000986


Then, let me state that the deconvolution method can NEVER reconstruct a DNS field. When you apply a deconvolution technique to a discrete field that extends up to the Nyquist frequency, you get a deconvolved field that still extends only up to the Nyquist frequency! The content between the Nyquist and Kolmogorov frequencies is not reconstructed by the deconvolution.
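
A small numpy sketch of exactly this point, using the same illustrative 3-point filter and truncated-series deconvolution as in the earlier sketch: the attenuated resolved modes are recovered, but the deconvolved field still lives on the coarse grid and its spectrum ends at that grid's Nyquist wavenumber.

Code:

import numpy as np

# "DNS-like" field on a fine grid, with content up to wavenumber 20.
n_fine, n_coarse = 256, 32                 # coarse-grid Nyquist wavenumber = 16
x = np.linspace(0.0, 2.0 * np.pi, n_fine, endpoint=False)
u_dns = np.sin(3 * x) + 0.5 * np.sin(8 * x) + 0.3 * np.sin(20 * x)

# Restriction to the coarse grid: keep only the modes that grid can represent.
u_hat = np.fft.rfft(u_dns) / n_fine
u_hat_c = u_hat[: n_coarse // 2 + 1].copy()
u_hat_c[-1] = 0.0                          # drop the coarse Nyquist mode itself
u_coarse = np.fft.irfft(u_hat_c * n_coarse, n=n_coarse)

# Same explicit filter and truncated-series deconvolution as in the earlier sketch.
def explicit_filter(u):
    return 0.25 * np.roll(u, 1) + 0.5 * u + 0.25 * np.roll(u, -1)

def approximate_deconvolution(u_bar, n_terms=5):
    u, residual = np.zeros_like(u_bar), u_bar.copy()
    for _ in range(n_terms + 1):
        u += residual
        residual = residual - explicit_filter(residual)
    return u

u_rec = approximate_deconvolution(explicit_filter(u_coarse))

# Deconvolution recovers the attenuated *resolved* modes (k = 3 and 8), but the
# deconvolved field still lives on the coarse grid: its spectrum simply ends at
# wavenumber 16, so the k = 20 content of the original field is gone for good.
spec = np.abs(np.fft.rfft(u_rec)) / n_coarse
print("recovered amplitude of k=3 mode  :", 2.0 * spec[3])   # ~1.0
print("recovered amplitude of k=8 mode  :", 2.0 * spec[8])   # ~0.5
print("highest wavenumber on coarse grid:", n_coarse // 2)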

vesp August 28, 2018 10:10

Quote:

Originally Posted by FMDenaro (Post 704316)
SGS models for LES based on NNs are quite old; see for example


https://www.sciencedirect.com/scienc...45793001000986

Yes, although I would not consider this very old. However, in the paper you cite, the coefficients of existing models are adjusted; the closure terms themselves are not sought (if I remember correctly).

Quote:

Originally Posted by FMDenaro (Post 704316)
Then, let me state that the deconvolution method can NEVER reconstruct a DNS field. When you apply a deconvolution technique to a discrete field that extends up to the Nyquist frequency, you get a deconvolved field that still extends only up to the Nyquist frequency! The content between the Nyquist and Kolmogorov frequencies is not reconstructed by the deconvolution.

Yes, my analogy was incomplete and caused confusion. The idea is not to deconvolve the field (something done here: https://www.cambridge.org/core/journ...C5695136E84DE4 ), but to make the general point of approximating closure terms from coarse-scale data. I see how my comments may have been misleading; thank you for pointing that out.



Best
Vesparian

sbaffini August 28, 2018 10:53

Quote:

Originally Posted by vesp (Post 704318)

May I just point out that this work doesn't even cite the one linked by Filippo, and that this, to me, is highly disturbing, to say the least?

Peer review is not only flawed, it has actually crashed, but nobody has noticed yet.

I am now seriously scared of how people do things in medical research.

praveen August 28, 2018 12:14

I would just like to point out a recent review paper on ML applications to turbulence modeling

https://arxiv.org/abs/1804.00183

ML has had great and real success in other fields. Here is an article on its role in image processing

https://sinews.siam.org/Details-Page/deep-deep-trouble

FMDenaro August 28, 2018 12:28

Quote:

Originally Posted by praveen (Post 704341)
I would just like to point out a recent review paper on applications to turbulence modeling

https://arxiv.org/abs/1804.00183

ML has had great and real success in other fields. Here is an article on its role in image processing

https://sinews.siam.org/Details-Page/deep-deep-trouble




Yes, ML in image processing can be very useful, but that problem is quite far from the definition of a new model closure. Generally, image reconstruction techniques are governed by parabolic PDEs, and they do not involve fractal-like pictures as happens in turbulence.


The real issue that has not been highlighted is that we already know from DNS studies that extracting the unresolved fields from those data and inserting them into practical computations still produces unsatisfactory solutions.
Thus, I don't think an ML algorithm can change this framework.

vesp August 28, 2018 12:33

Quote:

Originally Posted by FMDenaro (Post 704343)

The real issue that has not been highlighted is that we already know from DNS studies that extracting the unresolved fields from those data and inserting them into practical computations still produces unsatisfactory solutions.
Thus, I don't think an ML algorithm can change this framework.


Could you please elaborate on this? Do you mean akin to a perfect LES approach, where the exact closure terms are generated from the DNS?

FMDenaro August 28, 2018 12:47

Quote:

Originally Posted by vesp (Post 704344)
Could you please elaborate on this? Do you mean akin to a perfect LES approach, where the exact closure terms are generated from the DNS?


Yes, some studies have tried to do this in both LES and RANS formulations.
Have a look at the discussion in this recent article: https://www.researchgate.net/publica...ll-Conditioned



Argyris August 28, 2018 13:19

I am a postgraduate student, so I don't have any experience with AI in CFD, but I see some serious research regarding data-driven turbulence modeling from NASA, the University of Michigan, ONERA, etc.

http://turbgate.engin.umich.edu/symp.../Duraisamy.pdf


http://turbgate.engin.umich.edu/symp...2/Fabbiane.pdf


From what I understand, they use data from DNS and experiments to optimize the coefficients in the Spalart-Allmaras model. They also suggest a similar procedure can be implemented for Reynolds stress models, which are more difficult to calibrate so that they are applicable to a wide range of flows.
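
In spirit, the procedure looks something like this minimal numpy sketch: a single tunable coefficient of a toy log-law-like "model" is fitted to reference data by gradient descent on the squared mismatch. The toy model, the reference profile and all numbers are illustrative assumptions only; the actual Spalart-Allmaras calibrations in the linked slides are far more involved.

Code:

import numpy as np

# Toy stand-in for a turbulence-model prediction that depends on one tunable
# coefficient c (only the shape of the calibration procedure, not a real model):
# a log-law-like velocity profile whose slope is set by c.
def model_prediction(c, y_plus):
    return (1.0 / c) * np.log(y_plus) + 5.0

# "Reference data" playing the role of DNS / experiment
# (von Karman-like constant 0.41, plus a little noise).
rng = np.random.default_rng(2)
y_plus = np.linspace(30.0, 300.0, 40)
u_ref = (1.0 / 0.41) * np.log(y_plus) + 5.0 + 0.05 * rng.normal(size=y_plus.size)

# Calibrate c by minimizing the squared mismatch with plain gradient descent.
c = 0.30                                   # initial guess
lr = 1e-5
for _ in range(5000):
    err = model_prediction(c, y_plus) - u_ref
    # d/dc of (1/c) * log(y+) is -(1/c^2) * log(y+)
    grad = np.sum(2.0 * err * (-(1.0 / c**2) * np.log(y_plus)))
    c -= lr * grad

print("calibrated coefficient:", c)        # should land close to 0.41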


I am really interested in doing some research in this area, but I have doubts over its future (good funding or will it be abandoned?).

FMDenaro August 28, 2018 13:23

Quote:

Originally Posted by Argyris (Post 704353)
[...] I am really interested in doing some research in this area, but I have doubts over its future (good funding or will it be abandoned?).






Who can tell you the answer? AI and ML are just new, fashionable names, and it seems that this is sufficient at present to get funding (and publications).

vesp August 28, 2018 13:23

Quote:

Originally Posted by FMDenaro (Post 704345)
Yes, some studies have tried to do this in both LES and RANS formulations.
Have a look at the discussion in this recent article: https://www.researchgate.net/publica...ll-Conditioned

Interesting, thanks a lot - I will have a look. Is this transferable to LES, though? Here is an example of such a perfect LES, and I have seen others - so is there any theory on whether this happens in LES as well?


https://journals.aps.org/pre/abstrac...RevE.75.046303

FMDenaro August 28, 2018 13:27

For example, you can read about the approach for LES:


https://www.researchgate.net/publica...ddy_simulation


https://www.researchgate.net/publica...ddy_simulation

Argyris August 28, 2018 13:33

Quote:

Originally Posted by FMDenaro (Post 704354)
Who can tell you the answer? AI and ML are just new, fashionable names, and it seems that this is sufficient at present to get funding (and publications).

Yeah, that's exactly what bothers me. Are AI and ML being used because they are fashionable, or can they actually lead to improvements in the field of turbulence modeling, which has been "stagnant" for quite some time now? (I am not looking for answers, just opinions.)

