LES and DNS
Hello all,
I have a question and I hope somebody can answer it. Let's say we have an LES code (Fluent, for example). If we run a simulation on a very fine grid, does it become a DNS? Thanks. Regards
Roughly speaking, there are two possible approaches in LES:
explicit filtering: a numerical filtering procedure is applied to some terms of the equations at each time step. This procedure is characterized by a parameter delta which, in this case, is somewhat larger than the grid spacing h.

implicit filtering: no explicit filtering procedure is applied, but the action of the grid and of the numerical method is seen as an implicit, unknown filter with delta = h (as, in any case, the grid acts like a spectral cutoff filter removing all the scales with wavenumber k > pi/h).(*)

In addition, it is worth mentioning that, whatever the approach, the SGS models in LES are usually proportional to delta^2 (a particular method within the implicit approach, known as implicit modeling, doesn't even use an SGS model, arguing that the low-order part of the truncation error of certain convective schemes resembles the SGS models usually used).

Now, this is a very simplified view but, as you can imagine, the limit h → 0 means two different things depending on the approach:

explicit approach: as delta is different from h, in the limit h → 0 you are still filtering the equations (even if, for such a small h, you could actually perform a DNS and would not need the LES approach); hence there is numerical convergence toward the true filtered LES equations (as usually happens under grid refinement in the RANS approach). Also, the SGS model remains the same (as it is proportional to delta^2).

implicit approach: as delta = h, the limit h → 0 leads to a DNS computation, because the filter cutoff length is reduced (its cutoff wavenumber pi/h goes to infinity as h → 0) and the SGS model is reduced accordingly (as it is proportional to h^2).

Most people consider the explicit approach the more correct one because it can be set up so as to exclude the numerical errors.
In contrast, the implicit approach is, by definition, corrupted by numerical errors in the most important part of the resolved spectrum. In practice, as the explicit approach requires at least 8 times more grid points than the implicit one to obtain the same nominal resolution delta (when delta = 2h) or, on the same grid, corresponds to disregarding 87.5% of the grid-resolved degrees of freedom, it is almost never used in practical computations. To my knowledge it is currently used only at Stanford (CTR), and mostly to assess the accuracy of the SGS models or of the method itself rather than in practical computations.

The vast majority of CFD solvers (including Fluent) use the implicit approach; hence, to answer your question: yes, in the limit h → 0 the vast majority of LES computations become DNS. However, it is important to stress that I used the limit terminology because in any practical computation h is not zero, so the numerical errors are still present, as are the SGS model (proportional to h^2) and the implicit filtering effect of the grid. Hence, to obtain a real DNS, the grid must be fine enough that the wavenumber pi/h is well inside the dissipative range of the turbulent spectrum, so that the previous errors are somehow negligible. Still, a DNS is usually performed with, for example, the laminar viscous model in Fluent (that is, no model at all) to avoid a useless SGS model.

A well-known exception to the previous reasoning is LES performed with a spectral method. In this case there are no space-discretization errors, and the filter and the grid are not different concepts but a single one; hence the numerical filtering procedure is not explicitly required but is correctly implied by the method itself.

(*) Note that when the Germano dynamic procedure is used with the implicit approach, the test filter still has to be applied explicitly with a specific numerical procedure.
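As a toy illustration of the difference between the filter width delta and the grid spacing h (my own 1-D sketch, not taken from any solver), here is a periodic signal filtered with a discrete top-hat filter of width delta = 2h: the large scale passes almost unchanged, while a scale near the grid cutoff pi/h is strongly damped; those are exactly the scales one "disregards" in the explicit approach.

```python
import numpy as np

N = 256                      # grid points
L = 2 * np.pi                # periodic domain length
h = L / N                    # grid spacing
x = np.arange(N) * h

# A signal with a large scale (k = 2) and a scale near the grid cutoff (k = 100)
u = np.sin(2 * x) + 0.5 * np.sin(100 * x)

# Discrete top-hat filter of width delta = 2h (trapezoidal weights 1/4, 1/2, 1/4)
u_filt = 0.25 * np.roll(u, 1) + 0.5 * u + 0.25 * np.roll(u, -1)

# Compare spectral amplitudes before and after filtering
uh = np.fft.rfft(u) / N
uf = np.fft.rfft(u_filt) / N
print(abs(uf[2]) / abs(uh[2]))      # large scale: nearly untouched
print(abs(uf[100]) / abs(uh[100]))  # near-cutoff scale: strongly damped
```

The transfer function of this filter is G(k) = 0.5 (1 + cos kh), so the attenuation grows smoothly toward the grid cutoff rather than being a sharp spectral cutoff.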
But this is a feature of a specific kind of SGS model.
Thank you
Thank you very much for the detailed explanation.

Dear Paolo,
Thank you very much for all these explanations. I still have some questions and comments, and sorry for bothering you again.

1) In Fluent the filter is implicit, as you said, but Fluent uses the wall-functions approach for the near-wall cells, so even with a very fine grid we don't get a real DNS.

2) I didn't understand why explicit filters induce fewer errors than the implicit one.

3) "corresponds to disregarding 87.5% of the grid-resolved degrees of freedom": I didn't understand what you mean here.

4) I didn't understand why the Germano dynamic model is applied explicitly. (In fact, I am using this model now, and in the Fluent user's guide they don't mention anything about this.)

5) A final question: in LES we solve the transport equations for the filtered variables, so how can we have the instantaneous u values? I need to calculate the turbulent kinetic energy, so how can I get it? I am using the formula 1/2 (Urms^2 + Vrms^2 + Wrms^2) but I am not sure it is right. Must I add the subgrid kinetic energy too?

Thank you very much, and sorry for my bad English.
1) In Fluent (6.3.26) there are two kinds of wall functions: the one based on the Werner-Wengle model, which can only be activated through the TUI, and the basic approach, which is effectively always active. However, this basic approach uses an interpolation based on the y+ value at the first cell off the wall, and when this value is below 1 the model acts as if it were turned off. Hence, yes, the wall functions are always active, but they recover the DNS behavior when the grid is adequate.
2) This is a little complex to explain here. However, when you apply explicit filtering with, say, delta = 4h (with h the grid spacing), it turns out that most of the scales where the truncation error is effective are also affected by the filter and filtered out. Also, a variable filtered with delta = 4h is likely to be smooth enough to be correctly described by a grid of spacing h (which is not the case when delta = h). What value of delta is necessary to effectively reduce the error obviously depends on the specific filter (the same delta can give different results for different filters) and on the numerical method (the higher the order of the scheme, the fewer the scales affected by the error in a non-negligible way). In contrast, the implicit filter is not actually applied, so it cannot help here: the filter usually implied is the cutoff of the grid, which simply removes the scales smaller than 2h, but, by definition, the errors are effective on scales down to 2h, which are not cut away by the grid.

3) Imagine having a grid of spacing h and applying an explicit filter with delta = 2h. This means that the scales down to 2*delta are (in theory) correctly resolved, while those between 2*delta and 2h are strongly affected by the filter and have to be disregarded (which is exactly what you want from the filtering). For delta = 2h this means that only half of the grid points in each space direction are effectively used to describe the flow. Hence, in 3D, only 1/8 of the total number of grid points is effectively used, which means that 7/8 (= 87.5%) of the grid points used in the explicit filtering approach do not effectively contribute to the description of the flow.

4) The Germano dynamic procedure needs an additional filter (also known as the test filter) to extract some information from the already (implicitly or explicitly) filtered velocity field.
In the implicit approach you don't have to apply the basic filter level, as it is implied; but to extract the information from the resolved velocity field you do have to apply the test filter. It's simpler than it sounds.

5) As you don't have a sufficiently fine grid, by definition you can't recover u. In the explicit filtering context there are techniques to recover the information lost in the filtering, but they are still limited by the grid. I mean, if you had a DNS-like grid you would not perform an LES, so the grid is such that you can't recover the real u on it. With the aid of the SGS model you can, in theory, recover the approximate Reynolds stresses and the approximate turbulent kinetic energy. In practice this is never done, and these values are computed from the resolved velocity field only (like you did). However, in incompressible flows the SGS kinetic energy is absorbed in the pressure term and cannot be calculated by the model. There are specific techniques for this (as well as for the Reynolds stresses); they are mentioned in Sagaut's book. Actually, I don't think the SGS kinetic energy is available in Fluent (unless you use the Dynamic SGS Kinetic Energy model). The quantity available with the dynamic Smagorinsky model is the subtest kinetic energy, which is different and is already contained in the resolved kinetic energy.
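A minimal sketch (my own, on synthetic data) of computing the resolved turbulent kinetic energy from the RMS of the resolved fluctuations, as discussed above; the SGS contribution is simply absent from this estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
# Synthetic "resolved" velocity samples at a probe point (made-up statistics)
u = 1.0 + 0.1 * rng.standard_normal(n)
v = 0.05 * rng.standard_normal(n)
w = 0.05 * rng.standard_normal(n)

def resolved_tke(u, v, w):
    # Fluctuations about the time mean, then k = 0.5 (u'^2 + v'^2 + w'^2)
    up, vp, wp = u - u.mean(), v - v.mean(), w - w.mean()
    return 0.5 * (np.mean(up**2) + np.mean(vp**2) + np.mean(wp**2))

k_res = resolved_tke(u, v, w)
# Note: this is the resolved part only; with an implicit-filter
# incompressible LES the SGS kinetic energy is not directly available.
```

For the synthetic statistics above (component variances 0.01, 0.0025, 0.0025) the exact value is k = 0.0075, and the sample estimate lands close to it.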
Dear Mr.Lampitella,
I really thank you for all your explanations and your time. You clarified a lot of things for me... You must be the world champion of simulations :) I used LES in order to predict flow instabilities that occur at high Re (because RANS models are not at all effective for this class of flows), but my LES failed... In fact, it predicts a lower turbulent kinetic energy than the experiments, and it must be because of this that the instability does not occur numerically. I hope you still have some time to answer these questions:

1) In DNS, at the near-wall cells do you use the formula u+ = y+? This is what I concluded from your answer, point 1. If so, does this formula hold for any flow (including non-Newtonian fluids)?

2) Can you please tell me what the subtest k is? When I calculate my k from the resolved scales only, can you tell me the order (the magnitude) of my error? And why is the test filter used with the dynamic Smagorinsky model applied explicitly? Do you have any idea about its size?

3) In Fluent they advise the use of the central differencing scheme with LES. Can you explain why? (They say it has low numerical diffusion... but why not QUICK?) You also told me "the higher the order of the scheme, the fewer the scales affected by the error in a non-negligible way": I didn't understand why.

4) A final question, please: in DNS, if we use a grid size smaller than half the Kolmogorov scale, and a time step smaller than half the Kolmogorov time scale, do we obtain the REAL flow field? If not, which errors remain? Only the truncation errors?

One more time, thank you. Best regards and respects
Well, before answering your questions: to my knowledge the prediction of flow instabilities is a very complicated task. Roughly speaking, you have to be sure that the scale at which the instability is first generated (and all the scales involved in the "path to turbulence") are correctly resolved. Hence there could be several reasons for the mismatched prediction. But this is all I know about it.
Coming back to the questions:

1) Actually, in Fluent 6.3.26 the single law of the wall used is:

u+ = exp(G) * u+_laminar + exp(1/G) * u+_loglaw, with G = -[a * (y+)^4] / (1 + b * y+); a = 0.01; b = 5

which is then "applicable" whatever the y+ of the first cell at the wall is. I don't know about its applicability to non-Newtonian flows, but it is obviously not generally valid (i.e., it is not valid for strong pressure gradients or separated flows). Also, wall laws in LES do not usually perform well even where they are applicable.

2) As I told you, the test filter in the dynamic model is just a procedure to obtain additional information, which is in turn used to compute the dynamic constant. As you need this information for the model and it is available only by explicit filtering, you have to apply it. There are several suggestions for the size of delta in the explicit filtering. However, in Fluent it is an approximate top-hat filter involving the neighboring cells only. That is, for a given cell i the test-filtered variable is computed by:

U_test_i = Sum_j (U_j * V_j) / Sum_j (V_j)

where Sum_j extends over the neighboring cells only (with respect to cell i). The subtest k is just the kinetic energy based on the scales between the test filter and the basic filter. I'm not sure, but it should be:

subtest k = 0.5 * rho * (U_i - U_test_i) * (U_i - U_test_i)

or something similar. The error in using k from the resolved scales only is actually problem dependent, and anything like an error estimate falls back on the previously mentioned techniques (in Sagaut's book) to compute the SGS kinetic energy.

3) The difference between central schemes and the others (including QUICK) is that central schemes do not introduce any dissipation error (at least on uniform grids) but only dispersive ones. Why this matters in LES is not that easy to explain without referring to several other things.
Mostly this has to do with the effect of the numerical error on the resolved spectrum, and the same is true for the order of the scheme. In LES you don't want the resolved spectrum to be strongly affected by the error because, by definition, the scales down to the inertial range should be correctly resolved. It turns out that upwind-biased schemes are always too dissipative to be effective in LES. Even the bounded central scheme in Fluent is not actually suitable, and its use is suggested without any SGS model (according to the ILES approach). I can't be more specific.

4) Unless you use a spectral method (whose spatial discretization error is spectrally vanishing: in practice there is no error), any other method introduces a truncation error which affects a range of scales, usually between 2h (the smallest resolvable on a grid of spacing h) and roughly 2-6 times 2h (according to the specific scheme and its order). In addition, the grid also acts like an implicit filter by truncating the scales smaller than 2h. Hence, to perform a "correct" DNS you should make sure that these errors do not influence the result. If the scales affected by the error are well inside the dissipative range, then the physical dissipation is the dominant effect at those scales and the errors are negligible. In theory you should perform a grid refinement study (as in RANS) to prove it; in practice this is never done. Once you have done a DNS whose truncation error (spatial and temporal) is concentrated in the dissipative range and totally overwhelmed by the physical dissipation (and you have somehow proved it), the most you can say is that you have a solution of the mathematical problem posed by the Navier-Stokes equations with the imposed boundary and initial conditions. Is it the only solution to the problem? It is not known (in 3D, where the whole discussion makes sense), but probably it isn't. Does it represent reality?
Mostly, but the NS equations are themselves based on approximations (i.e., the Newtonian fluid assumption, the incompressibility assumption, the Stokes hypothesis and so on...). Here the question becomes philosophical, because it is practically impossible to compare DNS and experiment without some uncertainty in the boundary and initial conditions (which in the experiment cannot be fixed as in DNS). Best regards
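The blended law of the wall quoted above can be sketched as follows. This is my own reading of it: the minus sign in G is an assumption (the blending must recover u+ = y+ for small y+ and the log law for large y+, which only works with G negative), and the log-law constants kappa and E are assumed typical defaults, not taken from the thread.

```python
import numpy as np

kappa, E = 0.418, 9.793   # assumed log-law constants (typical defaults)
a, b = 0.01, 5.0          # blending constants as quoted in the post

def u_plus(y_plus):
    """Blended u+ valid from the viscous sublayer through the log region."""
    G = -a * y_plus**4 / (1.0 + b * y_plus)   # minus sign: my assumption
    u_lam = y_plus                             # viscous sublayer: u+ = y+
    u_log = np.log(E * y_plus) / kappa         # log law
    return np.exp(G) * u_lam + np.exp(1.0 / G) * u_log

print(u_plus(0.5))    # small y+: essentially u+ = y+
print(u_plus(100.0))  # large y+: essentially the log law
```

At y+ = 0.5 the exponential weights make the log-law contribution vanish, and at y+ = 100 the laminar contribution vanishes, so the single formula covers both regimes smoothly.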
Dear Mr.Lampitella,
Thank you a lot for your time. About my first question, you didn't understand me well (because of my bad English, maybe). The law of the wall does not hold in the presence of transpiration at the wall or of adverse pressure gradients, that is OK. But I was asking whether the law of the viscous sublayer (u+ = y+) holds. I know that in the presence of roughness it will not necessarily hold, so in DNS, how can we evaluate the friction at the wall? Do we always use u+ = y+? One more time, thank you, comrade. Best regards
Well, I'm not sure, but I presume it depends on the roughness (or the specific case). In general, the roughness will introduce some disturbance in the flow, hence the laminar sublayer will be influenced by it; however, as with the errors, it is possible that for specific flow conditions the disturbance is totally absorbed by the local dissipative features of the flow without major consequences. If this is not the case, I think the main consequence is that the laminar sublayer still exists, but the local Re number is modified so that it is valid on a smaller scale (e.g., up to y+ = 0.5 instead of 5). In theory, if the roughness has such an influence, a correct DNS should use a finer grid, such that the law of the wall is still valid, but I don't know whether this is effectively done in practical computations.
However, these are just suppositions; I don't actually know the answer. You should refer to some good reference (but I'm not aware of one), or I'm sure there is someone else here who can give you the correct advice.
Dear Mr.Lampitella,
I did a little research on the subject. In fact, with roughness the law of the viscous sublayer holds at all points of the wall, but if we want to simulate this we must reproduce the roughness geometry (on the grid) and use a mesh with near-wall cells smaller than the roughness scale... and all this is not possible. That is why we try to reproduce the sublayer effect in a more macroscopic way: we define h+ = h * uf / (mu/rho), with h the roughness height. The roughness height is different from the mean roughness height epsilon: epsilon/h = 0.25-0.6 depending on the roughness nature (Blanchard: thesis, University of Poitiers, 1977). Then:

If h+ < 5: the regime is hydrodynamically smooth: we can use u+ = y+. The viscous sublayer is not affected by the roughness: all roughness elements are contained in it. The friction is the same as with a smooth wall.

If 5 < h+ < 70: the regime is intermediate: some of the roughness elements protrude from the viscous sublayer, which is locally destroyed (as a structure parallel to the wall). We obtain small wakes behind those elements. Friction becomes stronger. The logarithmic law is shifted: u+ = 2.5 ln y+ + 5.5 - e, where e depends on h+.

If h+ > 70: we proceed as if there were no viscous sublayer. Most of the roughness heights are larger than the viscous sublayer height. The term e becomes more important.

I hope that I was helpful. Sorry for my bad English. In a previous answer, you told me that the truncation error in LES affects scales in the range 2-6 times 2h (with h the grid resolution), depending on the discretization scheme. Could you please give me a little explanation of this point? Thank you. Best regards
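The three roughness regimes above reduce to a simple classification on h+; here is a trivial sketch of it (my own helper, with the thresholds exactly as quoted in the post; uf is the friction velocity and nu = mu/rho the kinematic viscosity):

```python
def roughness_regime(h, uf, nu):
    """Classify the wall-roughness regime from h+ = h * uf / nu."""
    h_plus = h * uf / nu
    if h_plus < 5:
        return "hydrodynamically smooth"   # u+ = y+ still valid at the wall
    elif h_plus < 70:
        return "intermediate"              # shifted log law, stronger friction
    else:
        return "fully rough"               # viscous sublayer effectively gone

# Example: sand-grain height 0.1 mm, uf = 0.05 m/s, nu = 1e-6 m^2/s -> h+ = 5
print(roughness_regime(1e-5, 0.05, 1e-6))   # h+ = 0.5: smooth
print(roughness_regime(1e-3, 0.05, 1e-6))   # h+ = 50: intermediate
```

This is just the bookkeeping of the classification, of course; the physics (the wakes behind the elements and the log-law shift e) is what the post describes.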
Don't worry about the English, as mine is even worse... and, by the way, we can also go back to Paolo, instead of Mr. ;)
It is interesting to know what you wrote, as I expected the ranges you mention to be much more compressed toward the wall, i.e., I'd taken for granted that h+ > 5 would be enough to give a totally different near-wall flow behavior. So, good to know.

About the question: it is not a feature of the LES approach, it is just how numerical schemes work. That is, when you approximate the continuous derivative with discrete operators (e.g., finite differences), you can't obtain the exact derivative but only an approximate one. When you analyze the approximation with the Taylor series you can see that it has a truncation error. When you analyze this truncation error in spectral space, it turns out that, for consistency reasons, it is not uniformly spread over all the scales resolved by the grid but concentrated on the smallest resolved ones, and it affects a range of scales whose extension (toward the largest ones) depends on the order of the scheme (e.g., 2h, 4h, 6h etc.). To understand this, think about any finite difference scheme. All of them have to represent constant values correctly, hence, by definition, the truncation error cannot affect the resolution of constant values. Now think about any two-point scheme (the least accurate ones): by definition they are capable of representing exactly any linear variation. Hence, at least constant and linear variations over the domain are correctly resolved by any numerical scheme. It follows that the truncation error does not affect these scales but only the smaller ones. By extension, it is understandable that increasing the number of points in the scheme (which usually raises its order) means more scales will be correctly resolved (e.g., any three-point scheme can represent exactly any parabolic profile, and so on with four points etc.). As a consequence, the truncation error is confined to ever smaller scales.
Spectral methods, when applicable, employ all the points of the domain in a given direction to discretize the derivatives; hence, when there are "enough points", the error becomes spectrally vanishing (in practice the truncation error vanishes and all the scales represented by the grid are correctly resolved). This is definitely not the rigorous way to explain this, but it probably gives you a taste of it. You should consider reading a textbook for more precise information (the second edition of Hirsch, "Numerical Computation of Internal and External Flows", is a very complete one).
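The argument above can be made quantitative with the standard modified-wavenumber analysis (my own sketch, not from the thread): for central finite differences, the effective wavenumber k' returned by the scheme matches the exact k at large scales and departs from it only near the grid cutoff kh = pi, over a range that shrinks as the order increases.

```python
import numpy as np

def k_mod2(kh):
    """Modified wavenumber (times h) of the 2nd-order central difference."""
    return np.sin(kh)

def k_mod4(kh):
    """Modified wavenumber (times h) of the 4th-order central difference."""
    return (8.0 * np.sin(kh) - np.sin(2.0 * kh)) / 6.0

# Large scales (small kh): both schemes nearly exact; near the grid
# cutoff (kh -> pi) both fail, but the 4th-order scheme stays accurate
# over a wider range of scales, which is exactly the "2h vs 4h vs 6h" point.
for kh in (0.2, 1.0, 2.0, 3.0):
    print(kh, k_mod2(kh), k_mod4(kh))
```

Plotting |k' - k|/k against kh reproduces the "by eye" graph mentioned later in the thread: the error curve hugs zero up to some scale that grows with the scheme's order, then rises sharply toward the cutoff.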
Hello again :)
In fact I called you Paolo the first time since I didn't know you; then, when I saw that you have a lot of knowledge, I did a search on Google and found that you are a professor... so it is not nice to call you Paolo, since you must be much older than me. So I call you Mr., like your students do, no? :) I understood everything you wrote for me except one thing: I still don't understand how you got the values 2h, 4h and 6h... Can you please help me again?

By the way, I would be pleased if you could answer another question, related to the finite volume method (sorry for being a very bothersome Frenchman :) ). When I examine in Fluent the instantaneous flow rate at the inlets and outlets, I see that they are slightly different (the error is about 10^-5). The FV method is conservative, so where does that difference come from? Is it from the hardware accuracy (will it vanish if I use double precision?), or is it because the FV method is not necessarily conservative for nonlinear equations? Thank you one more time and sorry for bothering you, but it seems that CFD teachers in France don't know a lot. Thank God he created Italian CFD teachers :)
Well, actually I'm still a graduate student (even if a pretty old one, I'm 26 :o) and I'm going to defend my graduation thesis next month. The professor you googled is definitely my grandfather, but he was a Medicinae Doctor (indeed his publications date to the 50s and 60s, which suggests you must be thinking I'm 70-80 years old :eek:... sorry, but I definitely wouldn't be here in that case :D).
Coming back to your questions: the values 2h, 4h etc. are just examples. However, when the truncation error in spectral space is plotted as a function of the scale, you can clearly see that it is nearly zero over a range of scales which usually grows with the order of the method, and non-negligible for the rest of the resolved scales. From this graph, by eye, you can (more or less) determine the smallest scale where the error is still negligible and say: "OK, it is good down to 6h" (or 5h, or 5.3h etc.; it is just a value, there are no formulas).

The finite volume method is, by construction, conservative; that is, the fluxes are computed once per face. However, they will of course contain the round-off error due to the limited precision (in LES and DNS double precision is mandatory). To my knowledge single precision is limited to about 1E-6, so it is possible that there is round-off error accumulation (which is very subtle in single-precision computations). Moreover, when the Rhie-Chow interpolation method is used, the continuity equation is no longer solved to machine accuracy but has an h^2 error, which can also cause your mismatch. The first thing to do is definitely to switch to double precision.

P.S. It seems a little unfair to say that French CFD teachers don't know a lot; many of them are leading experts. However, I had a lot of courses on CFD and fluid dynamics (3 on numerics, 4 on aerodynamics, 1 on fluid dynamics, 2 on turbulence, plus the whole set of purely math courses, maybe 4 or 5... you know, in Italy we say "going with the lame, you learn to limp"), but still a consistent part of what I know I learned by myself, as university classes can't provide a full view of all the topics (like LES, which was almost totally on me).
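A toy illustration of the round-off accumulation mentioned above (my own sketch; this is not Fluent's flux algorithm): sequentially accumulating many "face fluxes" in single precision drifts by far more than the same accumulation in double precision, which is the kind of imbalance one can see in a single-precision run.

```python
import numpy as np

n = 100_000
flux = 0.1  # a constant face flux; 0.1 is not exactly representable in binary

f32 = np.float32(flux)
acc32 = np.float32(0.0)   # single-precision running total
acc64 = 0.0               # double-precision running total
for _ in range(n):
    acc32 += f32
    acc64 += flux

exact = n * flux
rel_err32 = abs(float(acc32) - exact) / exact
rel_err64 = abs(acc64 - exact) / exact
print(rel_err32)  # noticeable drift in single precision
print(rel_err64)  # essentially exact in double precision
```

The single-precision drift here comes from the same mechanism as in a solver: once the running total is large, each new small contribution is rounded, and the rounding errors do not cancel.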
Hello again Paolo
I hope you are fine. You are 26 years old but you know a lot, hehe; if you continue like that you'll get a Nobel Prize, maybe... :) Is your thesis about LES? Did you write it in Italian or in English?

About the mass imbalance:
1) OK. The imbalance is due to the residuals, as I supposed.
2) I don't think you must use double precision. Double precision is only important when you have, for example, very thin tubes: there the precision on the cell location is very important, since you have large pressure losses... If we use double precision, maybe this imbalance will vanish as we supposed, but anyway, it is so small that it does not affect the results. It will cost you more time and you will get the same results (with a smaller imbalance, however).

I would like to ask you one more question (sorry, hehe, but every time I talk to you I remember more questions, because you can answer all of them :) ). In unsteady RANS, how can we evaluate the RMS of the velocities? I am using Fluent. Do we add k to Urms?

About French CFD teachers, I know there are a lot of good ones... But as you know, most people now just want to publish results; they use CFD only for this. Most of them don't know the physics behind it; they just try to get a good result to publish. It is the same problem in Italy too, I think. I have seen people using Fluent without even knowing what the finite volume method is... I tried to find the book you mentioned, but I think it is no longer available in bookstores... However, I have a small CFD course... Thank you a lot, dear Paolo, and I hope to hear from you soon.
Hi
Yes, my thesis is about LES, more precisely on "The Quality and Reliability of Large Eddy Simulation in a Commercial CFD Solver" and, as the title suggests, it is in English. I suppose that double precision is needed wherever single precision is not enough; this can be due to geometric reasons but also to numerical errors. In any case, if you can prove the correct error convergence rate then there should be no problem. In theory, URANS computations should have their RMS in the Reynolds stress model, in the sense that any contribution from the resolved unsteadiness should be zero. I don't know how the statistics in Fluent are computed when the viscous model is of the URANS type, but they should not include the RMS, as it would be related to the non-turbulent unsteadiness, which is wrong. You can surely find the book on Amazon (the 2007 edition), but it is probably better to first take the course.
Hi again dear Paolo,
Can you please send me a copy of your thesis? Or is it confidential? If it is confidential, can you give me your publications? I will look at how Fluent calculates it; anyway, in the unsteady statistics menu we can choose Urms, Vrms, Wrms... The problem is that these quantities depend on the time step (as in LES)...

I think your statement that the contribution from the unsteadiness must be zero is wrong:
1) For a laminar solution (basic solution of NS), if the boundary conditions are steady, the flow must be steady too, so the unsteadiness in the simulation is due to the turbulence... (or to a non-laminar solution such as the von Karman vortex street).
2) RANS models, by formulation, are the average of a SERIES of realizations of the same experiment, not the average of ONE experiment. The two averages are the same only if the turbulent flow is statistically steady. So the k for URANS comes from the fact that U is the mean of several realizations... Try a simulation with a big time step: if it does not diverge, you will get a very small k and a very small mu_t, hence big velocities...

I think the better way to account for k in URANS is to choose a small time step and calculate it from the RMS of the mean values, just as you told me to do with LES... (with a good filter, u filtered = u mean, no?)
QUICK scheme
Hi,
I wanted to know if anyone has used the QUICK scheme in LES, despite its being too dissipative. According to Mittal and Moin (1997), QUICK is too dissipative for reactive and noise-generating flows, and there central schemes should be used; but for ordinary flows it is nearly sufficient. I would appreciate it very much if anyone could point me to recent journal articles in which QUICK (or BQUICK) is used. Sincerely, Maani
Dear friends, thank you for your constructive discussion; it was very useful for me.
I have a little experience in LES simulation. For the underestimation of the turbulence intensity (turbulent kinetic energy) in LES with respect to DNS, you can see the discussions by Celik in the Journal of Fluids Engineering (2005-2009). It seems that it occurs because of low grid resolution in the spanwise direction of your flow domain.
Hi
I really enjoyed the discussion between Paolo and Karine. Bravo, Paolo! Does anybody have an educational LES code written in Fortran (not OpenFOAM or Saturne)?
Hi Maani,
I'm not very confident with such schemes in LES. In general, you can always find a grid which is fine enough to perform a good LES with a certain scheme, but it could be much more costly than a simulation with a non-dissipative scheme. You can find additional information on upwind schemes in LES in the following paper: http://dx.doi.org/10.1016/j.jcp.2003.09.027

As for the underestimation of the turbulent kinetic energy, I definitely agree on the influence of the spanwise resolution, but its effects are not that obvious. In my experience there is a strong influence of the specific numerical method, and a certain under-resolution can also lead to a strong overestimation of the fluctuations (or even of the mean velocity profile!). In general, a certain grid can be sufficiently fine to properly resolve some features of a certain flow but not others (e.g., the outer and inner parts of a boundary layer, respectively); in this case, the whole flow dynamics will depend on how all the parts are coupled and on how much of them is properly resolved. A paper shedding some light on this aspect is: http://link.aip.org/link/PHFLE6/v19/i4/p048105/s1 while a more recent one (somewhat less pertinent, being focused on the model rather than the grid) is: http://link.aip.org/link/PHFLE6/v22/i2/p021303/s1 Best regards