Invoking commutation in the LES equations, or not?
The commutation error in LES is still highlighted as a potentially relevant source of error in the solution. Many papers have proposed methods to deal with this issue but, apart from a few studies, why do people still commute filtering and derivatives in the equations? Historical habit aside, I wonder what other reasons there are to consider at present...
|
Quote:
And then there is "LES" in practice. These use stretched/unstructured meshes, play fast and loose with the definition of the filter ("The mesh is the filter!"), and give little or no attention to the spectral characteristics of the numerical methods being employed when mesh stretching, skewness, or general unstructured treatments are involved. I think this group of LES work is not nearly as rigorous as the first, but is steeped in engineering pragmatism, as with the Reynolds-averaging approach before it. I think the vagaries of the spectral characteristics of the "central differencing" scheme being applied to a general unstructured mesh, and of its matching "filter", are likely much more a source of error than commutation of the filter and derivatives. But I may be wrong. So, I honestly don't know where we are now. LES done with OpenFOAM or Fluent is a tool, and perhaps the benevolent gods of Turbulence have willed that it is representative of engineering reality for those who truly believe. Rigorous LES may be pushed into engineering flow configurations with discontinuous Galerkin or spectral elements, getting good orthogonal basis functions and clean definitions of per-element filters. In such cases, commutation should not be assumed. This is an interesting topic for me. I studied it extensively in grad school, but moved on to "normal" CFD and multiphase later. But I am anxious to hear current trends and get caught up. Perhaps my true-LES and "LES" perspective is wrong or incomplete now. |
What is somewhat strange is the great effort many studies put into reducing the error rather than eliminating it by simply not interchanging the operators.
And upgrades to both open-source and commercial codes seem to focus more on modelling the unresolved terms than on addressing this issue. Is that just laziness??? |
Quote:
Numerical calculations are never exact; there is always some error. The question is whether it is bounded. To me, "eliminating errors" is paradoxical in numerics. Ghosal & Moin already showed that one can construct an SOCF filter such that the commutation error is smaller than the discretization error (plus some conditions). From a practical standpoint, if the errors can be controlled, then that is good enough. The same goes for discretization schemes... We truncate higher-order terms, which always introduces some error, but we can control it. Of course, adding corrections always involves introducing higher-order derivatives (which then require more boundary conditions). That doesn't mean there isn't any progress to be made. I can imagine there are many folks out there who would like super-accurate LES, and folks right now designing very accurate LES codes. But from a practical standpoint, what exists works, so I can see that there is not a pressing need. |
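As a minimal 1D sketch of that boundedness argument (assuming a normalized, symmetric kernel G with finite second moment and a slowly varying width \Delta(x); \bar\phi denotes the filtered variable, i.e. the _f used elsewhere in the thread):

\[
\bar\phi(x) = \int G(\eta)\,\phi\big(x - \Delta(x)\,\eta\big)\,d\eta
\quad\Rightarrow\quad
\frac{d\bar\phi}{dx} - \overline{\frac{d\phi}{dx}}
= \frac{M_2}{2}\,\frac{d\Delta^2}{dx}\,\frac{d^2\bar\phi}{dx^2} + \text{h.o.t.},
\qquad M_2 = \int \eta^2\,G(\eta)\,d\eta .
\]

So the commutation error scales as \Delta\, d\Delta/dx: it vanishes identically where the width is uniform and grows with the stretching, which is why weighing it against the discretization error, in the spirit of the SOCF construction, is the natural way to control it.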
Here is my small addition to the topic (I am not sure if I am really adding anything).
From the ideal case of a Cartesian mesh with equal-sized control volumes to the real-world industrial case of polyhedral meshes, the things that change most are the introduction of aspect ratio and skewness. Both of these cause inaccuracies in the solution. The most frustrating part is that the gradients we use to interpolate various values are limited (sometimes correctly and sometimes incorrectly), and gradients are control-volume-shape dependent. Since gradients play a cardinal role in the accuracy and stability of the solution, improving them should bring improvements to various aspects of CFD (not only LES). The impact of gradients on LES is very high because subgrid models derive the turbulent viscosity from them, and pretty much everything used in LES is gradient based. Another place where I think gradients have the greatest impact is viscoelastic flows.

I personally have another gradient calculation algorithm in mind which I believe should improve gradient quality. I briefly implemented this algorithm in STAR-CCM+ during my time at CD-adapco, tested it against the built-in algorithms, and found it to be better. Because this algorithm uses an extended stencil for the gradient computation, only a serial version was done at that time. A parallel version needs a lot of effort and framework changes, so it is not yet available in FVUS. (Some day it shall be, if it shows an increase in accuracy.) Now that Prof. Denaro has suggested a test case for LES, expect the same case to be tested on unstructured meshes with this algorithm (even in serial, to start things off). We shall see if it really brings any improvement to the table.

The second part is gradient limiting, which needs to account for aspect-ratio changes. Current versions of gradient limiting are based on equally sized control volumes. In an unstructured framework, this limiting kicks in where it should not and does not limit where it should. |
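For readers less familiar with how cell gradients enter, here is a minimal sketch of a plain weighted least-squares reconstruction (not arjun's extended-stencil algorithm; the function name, the inverse-distance weights and the toy stencil are just illustrative assumptions). It shows where the control-volume shape enters: skew and high aspect ratio show up directly in the conditioning of the small system being solved.

Code:
# Hypothetical sketch: plain weighted least-squares cell gradient from
# compact neighbor data. Weights and stencil are illustrative choices.
import numpy as np

def wls_gradient(xc, phic, neighbor_centers, neighbor_values):
    """Gradient of phi at cell center xc from neighbor cell data.

    xc               : (d,) coordinates of the cell center
    phic             : value of phi at the cell center
    neighbor_centers : (n, d) coordinates of the neighbor cell centers
    neighbor_values  : (n,) values of phi at the neighbors
    """
    d = np.asarray(neighbor_centers) - np.asarray(xc)   # displacement vectors
    b = np.asarray(neighbor_values) - phic               # value differences
    w = 1.0 / np.linalg.norm(d, axis=1)                  # inverse-distance weights
    A = d * w[:, None]                                   # weighted coefficient matrix
    rhs = b * w
    # Solve the weighted least-squares system; high aspect ratio or skew
    # shows up as a large condition number of A and degrades the gradient.
    grad, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return grad

# Tiny usage example on a skewed, high-aspect-ratio 2D stencil
center = np.array([0.0, 0.0])
nbrs = np.array([[1.0, 0.05], [-1.0, -0.05], [0.1, 0.02], [-0.1, -0.02]])
phi = lambda p: 2.0 * p[0] + 3.0 * p[1]   # linear field, exact gradient (2, 3)
print(wls_gradient(center, phi(center), nbrs, np.array([phi(p) for p in nbrs])))

For a linear field the reconstruction is exact regardless of the weights; the differences between algorithms only appear on non-linear fields and distorted stencils, which is exactly the regime discussed above.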
Quote:
That's not true; you can write the LES equations both commuting and not commuting. The difference is in the use of an explicit filtering of the divergence of the fluxes and in the definition of the residual terms to be modelled. And I am not sure that what exists works fine for complex geometries and unstructured grids. At least I know that, on unstructured grids, Fluent provides poor solutions for simple flows. |
Quote:
Thinking about the problem of gradient reconstruction, consider that if you commute, you discretize the divergence of the resolved fluxes (the filtering remains implicitly defined by the discretization and contains errors), but if you do not commute, you really apply a filtering to the divergence of the resolved fluxes, which can cut away numerical errors
I give some details about the LES equations. Consider the unfiltered equation,
dv/dt + Div F(v) = 0

and apply the filtering to each term.

1) Commuting:
dv_f/dt + Div F(v_f) = Div[ F(v_f) - [F(v)]_f ]

2) Without commuting:
dv_f/dt + [Div F(v_f)]_f = [Div[ F(v_f) - F(v) ]]_f |
Quote:
Jokes apart, I spent almost my whole Ph.D. (http://scholar.google.it/citations?v...J:Tyk-4Ss8FVUC) on this, starting from the work of Filippo (https://www.researchgate.net/publica...ied_turbulence) and Bert Vreman (http://www.vremanresearch.nl/etc9.pdf), which both somehow relate to the very beginning of the LES era (http://www.google.it/url?sa=t&rct=j&...4YTMlS5J8m1UqA). If I can synthesize my two major concerns on the topic of High Order Commuting Filters (HOCF):

1) The resulting commutation errors and the divergence of the resulting SGS stress tensor have the same scaling with respect to the filter cutoff length (http://aip.scitation.org/doi/10.1063/1.1852579), which, roughly, means that they both need modeling. Moreover, HOCF also imply that the required order of the models (both SGS and commutation) increases with the filter order. It is just a shame that second-order scale-similar models have been used when the filter in use required much higher-order SGS models (and commutation error models!).

2) HOCF tend, with increasing order, toward spectral cut-off filters. As soon as you promote HOCF as the answer to the commutation error issue, you are just saying that LES equations filtered by, say, a Gaussian filter are not possible without commutation errors (and that those which are commutation-error free are only amenable to a spectral discretization, to keep pace with the high-order moment constraints on the filter). Which is just not true (as you can easily see by simply putting a bar over the Navier-Stokes equations, no matter what that bar means); it is actually the form of the equations that you want to use (actually, have been taught to use) that does not allow you to do otherwise. Moreover, HOCF also have some undesirable properties (non-realizable, non-symmetric, etc.).

However, as all of you might have realized, this is a very niche problem... it never got mainstream in the 90's (somehow the golden era of LES) and I strongly doubt it could today, and there are several reasons for this, which would be an equally interesting topic.

EDIT: I just want to add that I have strong evidence that: a) commutation error modeling (or a sort of it) especially improves the spectral content near the cut-off, in hybrid spectral/high-order finite difference settings (a fully spectral one doesn't need it) as well as in unstructured finite volume frameworks; b) the near-wall dynamics can be better reproduced as well (especially because of the typical grid stretching found there), but at those scales the numerics necessarily comes into play (in LES, streamwise vortices are, by definition, under-represented) and properly tuning the near-wall behavior of the SGS viscosity model to make it the dominant term (even abandoning the celebrated y^3 scaling) might give better overall results than anything else. |
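For concreteness, in the same hedged 1D sketch as above, the defining property of a HOCF is that the kernel moments M_1, ..., M_{n-1} vanish while M_n does not, so that

\[
\bar\phi - \phi = \frac{(-1)^n M_n}{n!}\,\Delta^n\,\frac{d^n\phi}{dx^n} + \text{h.o.t.},
\qquad
\frac{d\bar\phi}{dx} - \overline{\frac{d\phi}{dx}}
= \frac{(-1)^n M_n}{n!}\,\frac{d\Delta^n}{dx}\,\frac{d^n\phi}{dx^n} + \text{h.o.t.}
\]

which is exactly concern 1) above: raising the filter order pushes the commutation error to higher powers of \Delta, but it pushes the leading resolved/unresolved difference \bar\phi - \phi (what scale-similar models represent) to the same order, so low-order models paired with high-order filters are mismatched by construction.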
I think that this could be a new era where we can perform LES for quite complex engineering problems using unstructured grids. But we should go back and ask what filtered equations we should write for such cases...
I assume that on unstructured grids the commutation error becomes much more relevant than on a simple non-uniform structured grid (Paolo, what about your experience in solving the channel flow using Fluent with an unstructured grid?). A higher-order error means nothing if you cannot ensure that the derivatives are O(1). And do not forget that, when commuting, you have no filtering other than the one induced by the numerics (both in shape and width); top-hat, Gaussian, etc. remain a mere theory. At present, all LES codes have the dynamic procedure implemented, so an explicit filtering subroutine is already available. Why not use it in the equations?? The LES equations are also described in Sec. 4: https://www.researchgate.net/publica...-uniform_grids
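A minimal numerical illustration of how a non-uniform width alone produces a commutation error (everything here is an assumption chosen for the example: u = sin x, a top-hat filter whose width grows linearly, stretched sample points, numpy's nonuniform-spacing gradient):

Code:
# 1D commutation-error demo: d/dx of the filtered field vs. the filtered
# derivative, for a top-hat filter of spatially varying width Delta(x).
import numpy as np

# Stretched sample points (finer near x = 0, coarser downstream)
x = np.linspace(0.0, 1.0, 200) ** 1.5 * 2.0 * np.pi

# Smoothly growing filter width (hypothetical choice): Delta(x) = a + b*x
a, b = 0.2, 0.15
delta = a + b * x
ddelta_dx = np.full_like(x, b)

# Top-hat filter of sin over [x - Delta/2, x + Delta/2], evaluated analytically
u_f = (np.cos(x - 0.5 * delta) - np.cos(x + 0.5 * delta)) / delta      # filtered u
dudx_f = (np.sin(x + 0.5 * delta) - np.sin(x - 0.5 * delta)) / delta   # filtered du/dx

# Derivative of the filtered field, taken numerically on the stretched points
d_uf_dx = np.gradient(u_f, x, edge_order=2)

# Commutation error: d(u_f)/dx - (du/dx)_f
comm_err = d_uf_dx - dudx_f

# Leading-order estimate (M2/2) d(Delta^2)/dx u'' with M2 = 1/12 for the top-hat
estimate = (1.0 / 24.0) * 2.0 * delta * ddelta_dx * (-np.sin(x))

print("max |commutation error|      :", np.max(np.abs(comm_err)))
print("max |leading-order estimate| :", np.max(np.abs(estimate)))
print("max |difference|             :", np.max(np.abs(comm_err - estimate)))

With b = 0 (uniform width) the error drops to the level of the finite-difference error; with b > 0 it tracks the leading-order estimate written earlier in the thread, i.e. it is set by the stretching, not by the resolution.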
Sorry, I just don't follow.
If the unfiltered equation is

dv/dt + Div F(v) = 0

then applying the filtering operation without commuting you should have

[dv/dt]_f + [Div F(v)]_f = 0

and since you do not commute, you must stop here. But that's what I meant by filtered derivatives of the original (unfiltered) variables and the closure problem: you have, quite literally, a filtered Navier-Stokes and not the LES equations. I don't see right away how you go from here to what you finally wrote. You have unclosed terms from filtering the flux operator F, but this is not the same as not commuting the divergence and time-derivative operators.
|
Quote:
You just have to complete the manipulation of the equation [dv/dt]_f + [Div F(v)]_f = 0 by decomposing the flux function into resolved, F(v_f), and unresolved, F(v) - F(v_f), parts. Follow Section 4 in the paper I linked before.
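Spelled out, with the overbar standing for the [ ]_f above and no commutation invoked on the spatial operators, the manipulation is simply

\[
\overline{\frac{\partial v}{\partial t}} + \overline{\nabla\cdot F(v)} = 0,
\qquad
F(v) = F(\bar v) + \big[F(v) - F(\bar v)\big]
\]
\[
\Rightarrow\quad
\frac{\partial \bar v}{\partial t} + \overline{\nabla\cdot F(\bar v)}
= \overline{\nabla\cdot\big[F(\bar v) - F(v)\big]},
\]

where only the filter and the time derivative are assumed to commute; the right-hand side is the filtered divergence of the unresolved flux, to be modelled, and no interchange of filter and divergence is ever needed.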
No more ideas? I really wonder why no code adopts the explicit filtering in the equations, when they all have already implemented the required subroutine for the dynamic modelling
|
Quote:
Still, some formulations that take the commutation error into account automagically (because not even the author typically realizes it, as actually happened to me) also take that into account. My experience with commutation error modeling (or SGS modeling in a commutation-error-free framework) is that some terms, those arising from the linear parts of the momentum flux, have very clear identities (when modeled scale-similarly):
- the pressure part strongly resembles a Rhie-Chow-like term
- the viscous part, analogously, provides a hyperviscous term
but the analysis of the convective part is certainly more complex. While these have certain merits in general, and I found some advantages in the resulting SGS model used even on unstructured grids, I can't give general rules of thumb.

For what concerns Fluent (or any similar competitor, for that matter), however, I would put it differently, also because I've been on the other side. We are talking about a piece of software that has to work in the most desperate conditions. Honestly, it is a miracle that they managed to put a fractional step, an unbounded 2nd-order central scheme and a dynamic SGS model into their code and made it work in most reasonable cases. The point is, how far should they have gone in exploiting LES? As far as marketing and results would justify, I guess. There are two issues here:
- the whole task of implementing LES in a commercial code, when all this started, was certainly mostly marketing. In such a case you don't want some fancy stuff, you want the same LES other people are doing.
- besides the marketing, the person in charge of a new implementation in a commercial code is not necessarily an expert in that field (otherwise, this would require tens of developers just for models), and what he can do is just follow the mainstream.

Obviously, as soon as anybody else is using such LES in their code, Ansys Fluent will certainly evaluate whether, at not much bigger cost, to also propose an explicitly filtered alternative as a new marketing addition. But that also opens up a lot of issues, among which:
- as we just said, there is no overall agreement on how explicitly filtering the equations should be implemented (not even on the form of the equations);
- the clear advantages of explicitly filtered LES or, to be more provocative, of the true LES, are somehow still missing from the table; the degrees of freedom which you would filter out are probably worth more than several SGS models;
- how to specify the filter type and width is far from being systematically studied, especially for unstructured grids. Implementers would just be on their own or do it a la OpenFOAM (implement whatever passes through your mind on that single day), neither of which is actually a feasible route in a commercial product. |
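One hedged way to read those two "identities" (my own sketch, assuming a scale-similar closure and writing the overbar for the explicit filter): the pressure part of the unclosed term and its model look like

\[
\nabla\bar p - \overline{\nabla p} \;\approx\; \nabla\bar p - \overline{\nabla\bar p},
\]

i.e. the difference between two evaluations of the pressure gradient at different filter levels, which is structurally the kind of term Rhie-Chow interpolation introduces; the viscous analogue, \nu\,(\nabla^2\bar v - \overline{\nabla^2\bar v}), behaves to leading order like a fourth-derivative (hyperviscous) correction.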
Both of you put the focus in the right place... So do you think that, somehow, academic research has to address this issue first in order to push code developers to take a step ahead?
|
Quote:
Yes, in a commercial framework, along with accuracy, stability is an equally big concern, because the meshes we end up with can be very bad in some places. To make things worse, sometimes they are not avoidable. When I write anything for FVUS, the first concern is to make sure that it is accurate, but as soon as this part is confirmed, the second is to see how stable the implementation is. I try to run the cases with bad cells too.
Having said this, if someone is interested and suggests something advanced for me to implement, I would certainly work on it and add it (given that it can be done with user code straight away). OpenFOAM is also a platform where someone can quickly add a model and develop their research. |
Quote:
Well, the question is whether one can gain in the quality of the LES solution on unstructured grids if the equations do not contain the commutation error. As you see from the comments above, at present everyone seems to accept LES on unstructured grids as it is commonly used. And, yes, the main reason is that no one has assessed that the commutation error is responsible for failed solutions. And, yes, many people consider that the dynamic modelling is "per se" able to adapt its response to this error. |
I guess we can summarize the matter with:
1) Commutation error modeling is very niche, even within the LES community (probably fewer than 10 relevant works have it as a main/side topic).

2) Explicit filtering is somehow more popular, but more in theory than in practice (I still count the number of relevant works below 20).

3) Managing both points above somehow requires acquaintance with a large body of work, most of which didn't get the relevance it deserved, even if published in mainstream journals. So, even among LES researchers, there is no agreed-upon position (I'm pretending this is because not everybody has the will/time to delve deep enough into such works).

4) There is a very specific form of LES which got mainstream. No surprise, it is exactly the same as the URANS approach, with the only exception of a factor multiplying the turbulent viscosity. Good or bad, that's it.

5) If you add up the previous points, there is not much a developer can actually do to fix this in a production code. Probably, whoever decided for LES doesn't even know anything about it beyond what you can learn from, say, Versteeg and Malalasekera.

6) But even in the case of, say, arjun here, who is working, I guess, independently, and hasn't spent his whole Ph.D. on commutation error, the best bet is still to do the classical stuff. Also because there are many more things relevant to a production code and worth developing than a specific LES module (adjoint solver, harmonic balance, MHD, species transport, combustion - premixed/non-premixed - and related stuff, VOF, particles, Eulerian multiphase, acoustics, radiation, FSI, dynamic meshes, etc. etc.), each one probably having its own "commutation error" issue in some aspect we will probably never understand or care about.

In practice, at the industrial level, a clear recipe is needed which is proven stable and accurate enough to be pursued. But this must already be available (which is not the case for explicit filtering and/or commutation error modeling). Typically, the insight put in from the developer side "only" regards:
- picking the best method among a set of possible choices (more important and much less trivial than it might look, especially because most people actually end up not being good at this);
- developing an implementation strategy which is computationally efficient and possibly elegant, avoiding the epsilons at denominators which most people tend to use when they have no more time to work on the implementation;
- trying to make it 10x faster with 10x less code than a trivial implementation |
Ok, I will spend my time at the beach :D:D
|