Dynamic Smagorinsky Model: Filtering Concept
Hello there,
I am going to add dynamic LES models to my code, but at the very beginning I faced a conceptual problem regarding the filtering process. I am fine with the simple Smagorinsky model, since it does not involve any explicit filtering. But for the dynamic model (which needs a test filter): I know that I should first compute the test-filtered velocity field and then calculate alpha. Then I have to calculate the filtered beta. Now, my question is: I do not understand how it is possible to apply the filter to a vector (the velocity) and at the same time to a tensor (beta). In other words, my problem is how to implement the filtering routine; I have a conceptual problem with it. As long as I do not understand the filtering process clearly, I cannot write a routine that works for vectors or tensors. I appreciate your time.
Hi Paolo,
That conceptually makes sense. One more question: practically, is there any difference between different filters, like the box or Gaussian filter? (Do they make any change in the final result?) Thank you for the useful info.
I was too lazy in the previous post to specify that:
but you probably got it (however, just in case...). What is usually less clear is that this kind of term usually (but not necessarily) implies commutation between the spatial filter and the spatial derivative operator (which is almost never exactly the case).

Going to your question, there are two answers. On the theoretical side, the SGS model depends on the basic filter level, which directly determines which scales, and in which amount, are subgrid, represented (just according to the Nyquist criterion) and resolved (correctly, which is different from being merely represented). Now, the so-called subfilter scales are those scales that are represented but not correctly resolved, and they are strongly dependent on the basic filter type. In my experience this is a sort of "Terra Incognita" because they strongly depend on everything (numerical method, shape and extension of the basic filter, SGS model). In turn, the influence of these scales on the basic flow has more or less importance according to their role in the whole fluid flow and their spectral extension (which becomes increasingly small as the basic filter becomes more and more spectral-like). Two important papers on this topic are (there are many more; these are the first that come to my mind):

De Stefano, Vasilyev: Sharp cutoff versus smooth filtering in large eddy simulation, Physics of Fluids (some volume, some year)

Carati, Winckelmans, Jeanmart: On the modelling of the subgrid-scale and filtered-scale stress tensors in large-eddy simulation, Journal of Fluid Mechanics (some volume, some year)

For what concerns the test filter, still in theory, it is a little more subtle. First, the test filter always affects your results through an SGS model; hence you can't, in theory, draw general conclusions. For example, in a dynamic Smagorinsky model the filter effect is no stronger than that of the model constant.
In a structural model (e.g., a scale-similarity model) the test filter is directly involved in the determination of the modeled SGS tensor, including anisotropies etc., hence its effect is presumably stronger. In general, as the dynamic procedure assumes a sort of scale similarity between the subtest scales (those "below" the test filter) and the subfilter scales (those "below" the basic filter) to build the SGS model, the test filter is in theory important and should be selected to properly separate a certain representative amount of scales. Probably (but this idea just hit my mind, so handle it with care) you could even artificially introduce anisotropies in an otherwise isotropic subgrid-scale model, just through a proper selection of the test filter. Again, the test filter is very important.

The second answer to your question is that, again, in LES almost everything depends on everything else, especially the numerical method. To be clear, if you do LES with a 1st-order upwind scheme and an implicit basic filter, there is no way you can get any difference in the solutions by acting on the test filter alone (I guess; obviously I never tried it!!!). My experience is mostly with Fluent (hence implicit basic filter, 2nd-order method, approximate box test filter, dynamic Smagorinsky), and the numerical part has a huge effect. Actually, in several cases, very few differences (or none at all) are appreciable with or without a dynamic Smagorinsky model, hence no such big effect is expected from the test filter alone.
Hi Paolo,
Great answers. It looks like you have a lot of experience with LES. I have done some DNS with our in-house code (second-order, energy-conservative, based on a 4-level implicit fractional-step method) and I am at the beginning of adding LES models. BTW, your comments are highly valuable to me. Thank you so much.
Hi,
Paolo was very complete in many details, so I can just add a little information:

1) Be careful about the way alpha must be prescribed. The only lengths you really know are dx, dy, dz, that is, the computational grid sizes. For implicit-based filtering, the measure of the primary filter length depends on the numerical scheme used for solving the NS equations, just as the actual test-filter length depends on the discrete scheme you use for the explicit filtering. Thus, alpha depends on the numerics.

2) Dynamic Smagorinsky modelling is quite good, but the idea of modelling the contribution of the unresolved scales only by an eddy viscosity can be improved... I suggest also studying one- and two-parameter dynamic mixed modelling.

3) Your results will depend strongly on the built-in shape of the transfer function induced by your numerical scheme... a smooth filter or a sharp cutoff filter can produce different results with the same SGS modelling. Which one is the best? It depends on whether you assume that LES is physical or numerical (a paper by Pope described such concepts).

Good luck with LES
Ok,
Thank you. About 1): I just thought that generally we assume the grid filter length and the test filter length to be the computational grid length scale and twice that, respectively. But you mean it should be more sophisticated than that? Could you explain more?
Then, using some formula for practically computing the test filtering, the test-filter length is Delta_test/h = P. Therefore: alpha = Delta_test/Delta = P/Q. For example, using FV methods (of some accuracy order) or spectral methods gives different Q values. Using a formula for top-hat test filtering or using the cutoff test filter gives different values of P. Setting a value of alpha congruent with both your scheme and your test filtering is somewhat of an "artistic touch"...
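To make this bookkeeping concrete, here is a tiny Python example of one possible convention (the second-moment definition of the effective filter width and the value Q = 2 are assumptions for illustration, not the only possible choices):

```python
import math

h = 1.0                          # grid spacing (normalized)
# Discrete trapezoidal test-filter weights at the nodes (-h, 0, +h):
w = [0.25, 0.5, 0.25]
nodes = [-h, 0.0, h]

# One common convention (moment matching): equate the second moment of the
# discrete filter to that of a continuous top-hat of width Delta_test, for
# which Int x^2 G(x) dx = Delta_test^2 / 12.
m2 = sum(wi * xi ** 2 for wi, xi in zip(w, nodes))
P = math.sqrt(12.0 * m2) / h     # effective test-filter width in grid units

Q = 2.0                          # assumed primary-filter width: Delta = 2h
alpha = P / Q
print(P, alpha)                  # ~2.449 (= sqrt(6)), ~1.225
```

With a different quadrature (Simpson), a different definition of the effective width, or a different Q, you get different P and alpha, which is exactly the point above.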
Hi all!
I would like to ask a naïve question regarding the test filter used in the dynamic Smagorinsky model. In most cases, for the dynamic Smagorinsky model with a top-hat filter, the test filter is taken as 2 or almost 3 times the filter width. What will happen if the filter width is explicitly taken as, say, 5 times the grid size, or set to an arbitrary value (larger than the test filter, like 0.5 or 1 meter, etc.), meaning that I am creating a test filter smaller than the filter width? What is the theoretical picture in that case? Thanks in advance.
Assume h is the computational grid size and Delta the primary explicit filter width. Then you can use an explicit filtering procedure such that Delta = Q*h, with Q > 1 chosen as you want.
Now, the test filter is Delta_t = P*Delta = P*Q*h and must be explicitly computed with P >= 2. That means that in the dynamic procedure the test-filter width Delta_t must always be greater than the primary filter width, even when the latter is explicitly computed. However, you have to fulfill the constraint that Delta_t and Delta both lie within the inertial range of the energy spectrum.
If I can add my two cents, recalling what I wrote some posts above, the basic idea of the test filter is to provide a "test" velocity field from which to extract information useful for the calibration of the specific model in use. In the case of the Smagorinsky model, the information is used to estimate the energy flux toward the cutoff length of your basic filter (either implicit or explicit), and that estimate is used to adapt the SGS model dissipation.
To be more specific, in the Smagorinsky framework the test filter only affects the computed constant, hence you can turn your question into: for a given numerical method, basic/test filter combination and instantaneous spectral energy distribution, how much higher/lower will the Smagorinsky constant be if the test/basic filter width ratio is higher/lower than the mainstream value? I can remember some studies on this matter but I can't recall the details (you will certainly be lucky in finding something on the Stanford CTR site, early years). A relevant one is certainly: http://pof.aip.org/resource/1/phfle6/v8/i4/p1076_s1

However, as you can see, the answer is much less obvious than you might expect, because:

1) For a low-order numerical method and an implicit filtering approach, you might need a larger test-filter width (than usual) in order to make the small-scale numerical error less influential in the determination of the dynamic constant (which means that you make the error-affected part of the test field a small percentage of the total test field).

2) If your basic filter width is already large with respect to the beginning of the inertial range, any larger test-filter width might not give you a proper contribution. In this case, Kuerten et al. have developed an inverse dynamic procedure, where the test-filter width is actually smaller than the basic one.

3) You can have a non-inertial-range spectrum. In this case it is difficult to say what to expect.

In general terms, I expect the Leonard term in the dynamic procedure to be the most relevant factor in the determination of the constant. To have the correct constant, this term should ideally be equal to the true SGS stress tensor (actually this might not be exactly true). Hence it is clear that going much above a test/basic filter width ratio of 2 might be wrong even with full scale similarity in place.
The constant value resulting from this approach might not be that obvious since, even if the test field now contains much more energy, it might not give a proper dissipative contribution. One might argue that the large-scale part of the spectrum gives a random-like contribution while the small-scale part is still effective. However, this might not be the case locally in time or space, and the dynamic evolution of the system may be strongly affected by this. On the other side, if you reduce the filter width ratio below some threshold, your test velocity field starts becoming more and more equal to the basic one. As a consequence, the Leonard term in the numerator starts dropping and the resulting constant should be lower. However, in this case the denominator also becomes lower and, while the proper mathematical behavior to expect is a zero constant in the limit of a null Leonard term, this might again not be the case. In conclusion, I am aware that this is not the answer to your question, but you can probably see why a specific, definitive answer is hard to come by, as too many details affect the behavior of the numerically simulated system. Numerical methods and practices are obviously among the prominent factors affecting the result (e.g., if you only consider the constant-clipping practice, you can go nuts).
Yes, the topic is quite complex... In a few words, one should think of the standard dynamic procedure as a "first-order extrapolation" of the unresolved part of the field. For this reason, to have good chances for the estimation, both filters should lie in the inertial region of the spectrum in order for the constant to be accurately evaluated.
In principle, multi-parameter dynamic filtering can provide good results also for higher-order extrapolation, that is, with some test filtering outside the inertial range...
Thanks for your detailed reply. It is a great help. I have another question regarding this matter. If I go for dynamic Smagorinsky, I think I can control it via the model parameter (alpha) that you mentioned above. Please correct me if I am wrong. Now I would like to know: if I add a top-hat filter to the constant Smagorinsky model, the top-hat filter will be larger than the filter width (Delta) obtained by implicit filtering (taking the cube root of the cell volume). But if I choose the filter width Delta arbitrarily (like 50 mm, 100 mm, etc., according to the computational domain), independent of the grid size, instead of taking Delta = Q*h with Q > 1 as you mentioned, then how will I know that my filter width Delta is less than the top-hat filter width? Please kindly correct me wherever my concept is wrong. In addition, I am wondering whether the top-hat filter has any correlation with the grid size or the filter width Delta in the case of the standard Smagorinsky model.
Be careful...

1) The computational grid always introduces a spectral cutoff filtering at Kc = pi/h. This is the Nyquist limit.

2) A computational method "implicitly" introduces a further filtering. For example, a spectral method has the same grid cutoff filtering Kc, but finite difference and finite volume methods do not! They "implicitly" introduce a smoothing of the resolved spectral content (k < Kc), depending on the accuracy of the method.

3) The top-hat filtering (transfer function G(k) = sin(k*Delta/2)/(k*Delta/2)) is only approximated by the implicit filtering introduced by FD and FV methods.

4) If you want to use an explicit top-hat filtering, you have two possible strategies: the first is working in spectral space and applying G explicitly; the second is performing a volume average of the resolved convective flux using a measure Delta > h (for example Delta = 4*h).

5) The test filtering can also be chosen as a top-hat filtering, now performing the volume averaging using a measure Delta_t > Delta.
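As an illustration of strategy 4 above, here is a 1-D sketch (Python/NumPy; the periodic boundaries and the trapezoidal quadrature weights are my assumptions, so treat it as a sketch, not a prescription):

```python
import numpy as np

def box_filter_4h(f):
    # Explicit top-hat filter of width Delta = 4h along a periodic direction.
    # Trapezoidal quadrature over the 5 nodes in [x - 2h, x + 2h] gives the
    # weights (1/8, 1/4, 1/4, 1/4, 1/8), which sum to 1.
    return (np.roll(f, 2) / 8 + np.roll(f, 1) / 4 + f / 4
            + np.roll(f, -1) / 4 + np.roll(f, -2) / 8)

i = np.arange(32)
nyquist = (-1.0) ** i  # the highest mode representable on the grid
print(np.max(np.abs(box_filter_4h(nyquist))))  # 0.0: the Nyquist mode is removed
```

The wider stencil is what makes the explicit filter dominate the implicit one near the grid cutoff; in 3-D you would apply it along each direction in turn.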
Very briefly: look at the transfer function of your filter. Whatever method you implement, you want a filter with a monotonically decreasing transfer function with F(k_max) = 0, where F(k) is the transfer function.
The top hat does not do this and tends to introduce weird artifacts in just about any context.
Let me ask another naïve question. When we are talking about top-hat filtering, what is its physical significance? I mean, if we consider a flow variable like the velocity, how does this top-hat filter act on the velocity field? As we know, the top-hat filter in physical space is

G(x - r) = 1/DELTA if |x - r| <= DELTA/2, and 0 otherwise.

But when I use the top-hat filter subroutine on the velocity, it filters the U field to obtain U_bar in the following way:

U_bar(i) = 0.5*U(i) + 0.25*(U(i-1) + U(i+1)),

which represents a trapezoidal-rule quadrature (for Simpson's rule U_bar would be different). I am a little bit confused about the kernel G(x - r): I would like to know how the trapezoidal rule produces the value U_bar and how this is linked to the term G(x - r). Up to now my understanding is that the top-hat filter only involves an arbitrary value like the filter width DELTA.
Dear Mahfuz,
I'm trying to understand your question but I might have some difficulty, so let me give you this first shot. In LES you have:

- Basic "grid" filter: it is always present and is due to the fact that your numerical simulation is using a limited number of degrees of freedom with respect to those required by a DNS. The moment you define your grid, the number of Fourier terms in a spectral code, or whatever is related to the grid in any other given approach, you are defining this filter. By its own nature it is implicit. Whether or not it is a true spectral cutoff depends, I think, on the specific approach (in spectral element methods you usually have basis functions which are not trigonometric, hence their Fourier transform might not be an exact cutoff, I suppose), but let us assume it is.

- Basic "numerical" filter: it is due to the fact that for some methods you can't compute the numerical derivatives exactly on the given grid. Its specific form greatly depends on the numerical method, of course. Usually, spectral methods are assumed to be (exponentially) error free but, if you consider the requirement for anti-aliasing, this might not be exactly the case. For the most common methods (FD, FV) it is a sort of top-hat filter. The FV method itself also has an additional filtering level due to the fact that it involves the evolution of integral quantities; hence, even if the derivatives are computed exactly, a top-hat filter is still embedded in the method itself.

- Any other additional, explicitly applied, filter: for some reason you decide to apply an additional filter to your equations. As said by Filippo, there is no theoretical reason (as far as I know) to choose a specific filter instead of another. For practical reasons, this is a spectral cutoff in most spectral codes (or, more generally, a truncation in the chosen basis representation), and a compact low-order filter for codes working in physical space.
The main reason to apply this filter is to introduce a dominant, known filter with respect to the previous ones, in order to remove the numerical error from the most important part of the resolved spectrum, where SGS models are most effective. The anti-aliasing in spectral codes is an example of such filters; a quadrature rule applied to your convective term is another example; a finite volume integration of the convective term is, as a matter of fact, still another example.

- SGS model filter: as a matter of fact (as clearly explained in the book of Pope and some work of Sagaut in POF), an effective eddy viscosity SGS model is such that it realizes a modification of the flow behavior whereby the Kolmogorov scale now depends on the model length scale (C * delta in the Smagorinsky model). This is an additional filter whose width is equal to the model length scale (approximately or exactly, I don't remember).

- Test filter for dynamic procedures: this is just a filter used to extract information to be used in SGS models. The assumptions underlying the dynamic procedure are usually such that this test filter, as discussed before, has to respect some similarity rules with respect to the previous filters (when present).

Now, if you consider the filters above, you understand that you are in charge of defining all of your filters (either implicitly or explicitly). When you define a grid (in the most general sense) and a numerical method, you have already defined two of the above filters and an overall filter width, which is k*grid_step, k being dependent on your numerical method (these are the only two filters where a direct, explicit specification of delta is missing, i.e., there is no point in your code where you write down delta = something; the remaining filter applications, if any, are all such that, at some point in your code, you do write down delta = something).
At this point, you can go explicit (i.e., add an additional explicit filter) or implicit (be fine with what you have):

- If you go explicit, you can't do it at random; you have to know something about your basic method before the filter in order to apply the explicit filter effectively. Usually, you have an estimate of k (see above) and you use delta = m * k * grid_step, m being usually in the range 1-6, depending on the requirements (higher-order methods usually require lower m).

- If you go implicit, you still have to remember that, approximately, delta = k * grid_step, and you still need to know your k. A typical approach in Finite Volume (FV) methods is to use delta = cubic root of the cell volume (sometimes a factor 2 appears, as in Code_Saturne).

Then, you have to pick up your SGS model and insert your delta. Here, a few possibilities still arise. If you went explicit, the only approach known to date is to use delta as defined above in your SGS model. If you went implicit, you can either still use delta as defined above (in the implicit case, of course) or pick up a larger value in order to introduce a sort of explicit filtering based just on the SGS model. Again, as before, you are in charge of everything, and you can't pick a random value for delta in the SGS model.

Finally, if you use a dynamic procedure, you want it to extract useful information from the resolved field in order for your model to "work at best". Again, at this stage you should know, at least approximately, how far you went with your previous filters and how much you should remove to extract useful information. I don't know of any automatic procedure to select all the filter lengths; the best you can do is to perform some preliminary RANS computation, try to extract information on the turbulent length scales of the flow, and then start building your filters (starting from the grid).
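To put rough numbers on the bookkeeping above (the dx, dy, dz, k and m values here are purely illustrative assumptions, not recommendations):

```python
# Illustrative computation of the filter widths discussed above.
dx, dy, dz = 0.02, 0.01, 0.015           # cell sizes (made-up values)
h = (dx * dy * dz) ** (1.0 / 3.0)        # common FV choice: cube root of cell volume

k = 2.0   # assumed effective-width factor of the numerical method
m = 2.0   # assumed explicit-filter multiplier (usually in the range 1-6)

delta_implicit = k * h                   # width if you "go implicit"
delta_explicit = m * k * h               # width if you "go explicit"
print(h, delta_implicit, delta_explicit)
```

The point is only that every delta that enters the SGS model traces back to h through factors (k, m, Q, P) you have to know or estimate for your own scheme.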
U_bar(x) = (1/Delta^3) * Int_{V(x)} U(x') dx', where V(x) is the box of side Delta centered at x; that is, the top-hat filtered field is just the local volume average of U.
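The link with the trapezoidal stencil is just quadrature: in 1-D, take Delta = 2h and approximate (1/Delta) * Int of U over [x - h, x + h] with the trapezoidal rule on the three nodes x - h, x, x + h; the (1/4, 1/2, 1/4) weights drop out. A quick numerical check in Python (the nodal values are arbitrary):

```python
import numpy as np

h = 0.1                        # grid spacing (arbitrary)
Delta = 2.0 * h                # top-hat width taken equal to 2h
U = np.random.rand(3)          # nodal values of U at x - h, x, x + h

# Trapezoidal quadrature weights on the nodes (-h, 0, +h):
w = h * np.array([0.5, 1.0, 0.5])
U_bar_quad = np.dot(w, U) / Delta                       # (1/Delta) * Int U dr
U_bar_stencil = 0.25 * U[0] + 0.5 * U[1] + 0.25 * U[2]  # the familiar stencil

print(np.isclose(U_bar_quad, U_bar_stencil))  # True
```

So G(x - r) defines *which* average you take (a uniform average over a box of width Delta), while the quadrature rule (trapezoid, Simpson, ...) defines *how* that average is approximated on the grid; with Simpson's rule you get different discrete weights for the same kernel.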
The problem is that "scale" isn't a mathematically precise term. It only becomes precise when you express your solution field in terms of a Fourier series (or any kind of series with basis functions defined in terms of a length, actually). So if your turbulence model is built on a scale-similarity assumption and compares scales (like Germano's), you should use a filter that does a good job of extracting chunks of the spectrum. The top hat doesn't and, in general, all filters of the form (u_{i-1} + B*u_i + u_{i+1})/(2 + B) do a pretty bad job of separating scales. Hence they're not going to do a particularly good job of doing what you want in that sort of turbulence model.
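To see this, look at the transfer function of that 3-point family, G(k) = (B + 2*cos(k*h))/(B + 2), which you can verify by applying the stencil to u_j = exp(i*k*j*h). A quick Python check (the B values are just two common members of the family):

```python
import math

def transfer(B, kh):
    # Transfer function of the symmetric 3-point filter
    # u_bar_i = (u_{i-1} + B*u_i + u_{i+1}) / (2 + B)
    return (B + 2.0 * math.cos(kh)) / (B + 2.0)

# Trapezoidal top-hat (B = 2): rolls off smoothly, so it already attenuates
# resolved scales strongly at mid wavenumbers instead of cleanly cutting.
for kh in (0.0, math.pi / 2, math.pi):
    print(transfer(2.0, kh))   # 1.0, 0.5, 0.0

# A Simpson-like filter (B = 4) does not even vanish at the grid cutoff:
print(transfer(4.0, math.pi))  # ~0.333
```

Compare this slow, smooth roll-off with a spectral cutoff (G = 1 below the cutoff, 0 above): the 3-point filters blur the boundary between "resolved" and "test-filtered" scales, which is exactly the poor scale separation described above.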