
Introduction to turbulence/Homogeneous turbulence


A first look at decaying turbulence

Look, for example, at the decay of turbulence which has already been generated. If this turbulence is homogeneous and there is no mean velocity gradient to generate new turbulence, the kinetic energy equation reduces to simply:

 
\frac{d}{dt} k = - \epsilon
(1)

This is often written (especially for isotropic turbulence) as:

 
\frac{d}{dt} \left[ \frac{3}{2} u^{2} \right] = - \epsilon
(2)

where

 
k \equiv \frac{3}{2} u^{2}
(3)

Now you can't get any simpler than this. Yet unbelievably we still don't have enough information to solve it. Let's try. Suppose we use the extended ideas of Kolmogorov we introduced in Chapter 3 to relate the dissipation to the turbulence energy, say:

 
\epsilon = f \left( Re \right) \frac{u^{3}}{l}
(4)

Already you can see we have two problems: what is  f \left( Re \right) , and what is the time dependence of  l ? Now there is practically a different answer to these questions for every investigator in turbulence - most of whom will assure you their choice is the only reasonable one.

Figure 6.1 shows an attempt to correlate some of the grid turbulence data using the longitudinal integral scale for  l , i.e.,  l = L^{(1)}_{11} , or simply  L . The first thing you notice is the problem at low Reynolds number. The second is the possible asymptote at the higher Reynolds numbers. And the third is the scatter in the data, which is characteristic of most turbulence experiments, especially if you try to compare the results of one experiment with another.

Let's try to use the apparent asymptote at high Reynolds number to our advantage by arguing that  f \left( Re \right) \rightarrow A , where  A  is a constant. Note that this limit is consistent with the Kolmogorov argument we made back when we were talking about the dissipation earlier, so we might feel on pretty firm ground here, at least at high turbulent Reynolds numbers. But before we feel too comfortable about this, let's look at another curve shown in Figure 6.2. This one is also due to Sreenivasan, but compiled a decade later and based on large-scale computer simulations of turbulence. There is less scatter, but it appears that the asymptote depends on the details of how the experiment was forced at the large scales of motion. This is not good, since it means that the answer depends on the particular flow - exactly what we wanted to avoid by modelling in the first place.

Nonetheless, let's proceed by assuming, in spite of the evidence, that  A \approx 1  and  L  is the integral scale. Now how does  L  vary with time? Figure 6.3 shows the ratio of the integral scale to the Taylor microscale from the famous Comte-Bellot/Corrsin (1971) experiment. One might assume, with some theoretical justification, that  L / \lambda \rightarrow const . This would be nice, since you can show that if the turbulence decays as a power law in time, say  u^{2} \sim t^{n} , then  \lambda \sim t^{1/2} . But as shown in Figure 6.4 from Wang et al. (2000), this is not a very good assumption for the DNS data available at this time. Now I believe this is because of problems in the simulations, mostly having to do with the fact that turbulence in a box is not a very good approximation for truly homogeneous turbulence unless the size of the box is much larger than the energetic scales. Figure 6.5 shows what happens if you try to correct for the finite box size, and now the results look pretty good.

So the bottom line is that we don't really know yet for sure how  L  behaves with time, or even whether we should have confidence in the experimental and DNS attempts to determine it. Regardless, most assume that  L  varies as a power of time, say  L = Bt^{p} . There are various justifications for this and everyone has his own choice for  p , but the truth is that the main justification is that it allows us to solve the equation. In fact, it is easy to show by substitution that this implies directly that the energy decays as a power law in time:

 
u^{2} \sim t^{2(p-1)}
(5)
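
To spell out the substitution: with  \epsilon = A u^{3} / L ,  L = B t^{p} , and  k = \frac{3}{2} u^{2}  from equation (3), try  u^{2} = C t^{n}  in equation (2):

 
\frac{d}{dt} \left[ \frac{3}{2} C t^{n} \right] = - A \frac{\left( C t^{n} \right)^{3/2}}{B t^{p}}

Matching the powers of  t  on the two sides requires  n - 1 = \frac{3n}{2} - p , i.e.,  n = 2(p-1) .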

You can see immediately that if I am right and  L \sim \lambda \sim t^{1/2} , then  u^{2} \sim t^{-1} . Now any careful study of the data will convince you that the energy indeed decays as a power law in time, but there is no question that  n \neq -1 ; in fact  n < - 1 , at least for most of the experiments. Most people have tried to fix this problem by changing  p . But I say the problem is in  f \left( Re \right)  and the assumption that  \epsilon \sim u^{3} / L  at finite Reynolds numbers. I would argue that  n \rightarrow -1  only in the limit of infinite Reynolds number.
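
If you prefer to see the power law emerge numerically, here is a minimal sketch that simply integrates equation (1) with the model dissipation of equation (4), taking  f \left( Re \right) = A  and  L = B t^{p} . The constants A = B = 1, p = 1/2 and the initial condition are arbitrary illustrative choices, not values taken from any of the experiments discussed above.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative constants only (not fitted to any of the data discussed in the text):
A, B, p = 1.0, 1.0, 0.5          # epsilon = A u^3 / L,  L = B t^p

def dkdt(t, k):
    u2 = 2.0 * k[0] / 3.0        # from k = (3/2) u^2
    L = B * t**p                 # assumed power-law growth of the integral scale
    eps = A * u2**1.5 / L        # model dissipation, equation (4) with f(Re) = A
    return [-eps]                # equation (1): dk/dt = -epsilon

t = np.logspace(0.0, 4.0, 400)
sol = solve_ivp(dkdt, (t[0], t[-1]), [1.0], t_eval=t, rtol=1e-9, atol=1e-12)

# Local decay exponent n(t) = d ln k / d ln t; it should approach 2(p-1) = -1.
n_local = np.gradient(np.log(sol.y[0]), np.log(t))
print(n_local[-1])
```

The printed local exponent creeps toward 2(p-1) = -1 at large times; choosing a different  p  shifts the asymptotic exponent accordingly.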

To see why I believe this, try doing the problem another way. We know for sure that if the turbulence decays as a power law, then the Taylor microscale,  \lambda_{g} , must be exactly proportional to  t^{1/2} . Thus we must have (assuming isotropy):

 
\frac{dk}{dt} = - 10 \nu \frac{k}{\lambda^{2}_{g}} \propto \frac{k}{t}
(6)

It is easy to show that  k \propto t^{n}  where  n  is given by:

 
\frac{d \lambda^{2}_{g}}{dt} = - \frac{10 \nu}{n}
(7)

and any value of  n \leq -1  is acceptable. Obviously the difference lies in the use of the relation  \epsilon \propto u^{3} / L  at finite Reynolds numbers.
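
To fill in the "easy to show" step, substitute  k = K t^{n}  into equation (6). The left-hand side becomes  n k / t , so

 
n \frac{k}{t} = - 10 \nu \frac{k}{\lambda^{2}_{g}} \quad \Rightarrow \quad \lambda^{2}_{g} = - \frac{10 \nu}{n} t

Differentiating the last expression gives equation (7), and  \lambda_{g} \propto t^{1/2}  for any  n < 0 , exactly as claimed. It is also worth seeing explicitly where the two routes part company (a side calculation, not part of the original argument). Equate the exact isotropic relation  \epsilon = 10 \nu k / \lambda^{2}_{g} = 15 \nu u^{2} / \lambda^{2}_{g}  with the model of equation (4):

 
f \left( Re \right) = 15 \frac{\nu L}{u \lambda^{2}_{g}} = \frac{15}{Re_{\lambda}} \frac{L}{\lambda_{g}} , \quad Re_{\lambda} \equiv \frac{u \lambda_{g}}{\nu}

So  f  can approach a constant only if  L / \lambda_{g}  grows in proportion to  Re_{\lambda} ; at any finite Reynolds number it retains a Reynolds-number dependence, which is the sense in which the difference between the two routes comes from using  \epsilon \propto u^{3} / L  with a constant coefficient at finite Reynolds number.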

Believe it or not, this whole subject is one of the really hot debates of the last decade, and may well be for the next as well. Who knows, maybe some of you will be involved in resolving it, since it really is one of the most fundamental questions in turbulence.

The dissipation equation and turbulence modelling

If you are really more inclined toward engineering than physics, you might be wondering whether the ambiguities above make any difference. They might. And in fact they might lie at the core of the reasons why we can't do things better with our existing single-point turbulence models. To see this, let's consider the dissipation equation.

The derivation of this equation begins by taking the gradient of the equation for the fluctuating velocity, then interchanging the order of the time and space derivatives to get it into an equation for the fluctuating strain-rate, then averaging and multiplying by twice the kinematic viscosity to obtain an equation for the dissipation of kinetic energy per unit mass due to the fluctuating velocity field. After some rearrangement the result is:
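
In the simplest special case of decaying homogeneous turbulence with no mean velocity gradients, all of the mean-gradient and transport terms drop out and the procedure above leads to the standard form (quoted here only for orientation; the general result contains additional production terms involving the mean velocity gradients):

 
\frac{d \epsilon}{dt} = - 2 \nu \left\langle \frac{\partial u_{i}}{\partial x_{k}} \frac{\partial u_{i}}{\partial x_{m}} \frac{\partial u_{k}}{\partial x_{m}} \right\rangle - 2 \nu^{2} \left\langle \frac{\partial^{2} u_{i}}{\partial x_{k} \partial x_{m}} \frac{\partial^{2} u_{i}}{\partial x_{k} \partial x_{m}} \right\rangle

The first term on the right is usually interpreted as production of dissipation by the stretching of the fluctuating velocity gradients; the second as its viscous destruction.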

A second look at simple shear flow turbulence

Let's consider another homogeneous flow that seems pretty simple at first sight: homogeneous shear flow turbulence with constant mean shear. We already considered this flow when we were talking about the role of the pressure-strain rate terms. Now we will only worry, for the moment, about the kinetic energy equation, which reduces to:

 
\frac{\partial k}{\partial t} = - \left\langle uv \right\rangle \frac{d U}{d y} - \epsilon
(8)

Now turbulence modellers (and most experimentalists as well) would love for the left-hand side to be exactly zero so that the production and dissipation exactly balance. Unfortunately Mother Nature, to this point at least, has not allowed such a flow to be generated. In every experiment to date, the energy increases with time (or equivalently, down the tunnel).
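
Here is a minimal sketch of one way the growth can be rationalized, assuming (and these are assumed closures for illustration only, not results from the text) that both the shear stress and the dissipation scale with the energy itself, say  - \left\langle uv \right\rangle = a k  and  \epsilon = b k \frac{dU}{dy}  with  a  and  b  constant. Then equation (8) becomes

 
\frac{dk}{dt} = \left( a - b \right) \frac{dU}{dy} k

so the energy grows (or decays) exponentially rather than sitting at the constant value modellers would prefer; only the razor's-edge case  a = b  gives a stationary balance.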

Let's make a few simple assumptions and see if we can figure out what is going on. Suppose we assume that the correlation coefficient  \left\langle uv  \right\rangle / u^{2} = C is a constant. Now, we could
