Introduction to turbulence/Stationarity and homogeneity


Processes statistically stationary in time

Many random processes have the characteristic that their statistical properties do not appear to depend directly on time, even though the random variables themselves are time-dependent. For example, consider the signals shown in Figures 2.2 and 2.5.

When the statistical properties of a random process are independent of time, the random process is said to be stationary. For such a process all the moments are time-independent, e.g.,  \left\langle \tilde{u} \left( t \right) \right\rangle = U , etc. In fact, the probability density itself is time-independent, as should be obvious from the fact that the moments are time-independent.

An alternative way of looking at stationarity is to note that the statistics of the process are independent of the origin in time. It is obvious from the above, for example, that if the statistics of a process are time-independent, then  \left\langle  u^{n} \left( t \right) \right\rangle = \left\langle u^{n} \left( t + T \right) \right\rangle , etc., where  T  is some arbitrary translation of the origin in time. Less obvious, but equally true, is that the product  \left\langle u \left( t \right) u \left( t' \right) \right\rangle  depends only on the time difference  t'-t  and not on  t  (or  t' ) directly. This consequence of stationarity can be extended to any product moment. For example,  \left\langle u \left( t \right) v \left( t' \right) \right\rangle  can depend only on the time difference  t'-t . And  \left\langle u \left( t \right) v \left( t' \right) w \left( t'' \right)\right\rangle  can depend only on the two time differences  t'- t  and  t'' - t  (or  t'' - t' ) and not on  t ,  t'  or  t''  directly.

The autocorrelation

One of the most useful statistical moments in the study of stationary random processes (and turbulence, in particular) is the autocorrelation defined as the average of the product of the random variable evaluated at two times, i.e.  \left\langle u \left( t \right) u \left( t' \right)\right\rangle . Since the process is assumed stationary, this product can depend only on the time difference  \tau = t' - t . Therefore the autocorrelation can be written as:

 
C \left( \tau \right) \equiv \left\langle u \left( t \right) u \left( t + \tau \right)  \right\rangle
(1)

The importance of the autocorrelation lies in the fact that it indicates the "memory" of the process; that is, the time over which the process is correlated with itself. Contrast the two autocorrelations shown in the figure: the autocorrelation of a deterministic sine wave is simply a cosine, as can easily be shown (see the worked example below). Note that there is no time beyond which the cosine can be guaranteed to be arbitrarily small, since the sine wave always "remembers" when it began and thus always remains correlated with itself. By contrast, a stationary random process like the one illustrated in the figure will eventually lose all correlation and go to zero. In other words it has a "finite memory" and "forgets" how it was. Note that one must be careful to make sure that a correlation really both goes to zero and stays down before drawing conclusions, since even the sine wave is zero at some points. Stationary random processes always have two-time correlation functions which eventually go to zero and stay there.
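As a worked illustration of the sine-wave claim (this example is added here and is not part of the original text), consider a sine wave of fixed amplitude  A  and frequency  \omega  whose phase  \phi  is a random variable distributed uniformly over  \left( 0, 2\pi \right) , which makes the ensemble stationary. Then

C \left( \tau \right) = \left\langle A \sin \left( \omega t + \phi \right) A \sin \left( \omega \left( t + \tau \right) + \phi \right) \right\rangle = \frac{A^{2}}{2} \left[ \cos \left( \omega \tau \right) - \left\langle \cos \left( 2 \omega t + \omega \tau + 2 \phi \right) \right\rangle \right] = \frac{A^{2}}{2} \cos \left( \omega \tau \right)

since the average of the last cosine over the uniformly distributed phase is zero. The result is a pure cosine in  \tau  which never decays, exactly the "infinite memory" described above.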

Example 1.

Consider the motion of an automobile responding to the movement of the wheels over a rough surface. In the usual case where the road roughness is randomly distributed, the motion of the car will be a weighted history of the road's roughness with the most recent bumps having the most influence and with distant bumps eventually forgotten. On the other hand, if the car is travelling down a railroad track, the periodic crossing of the railroad ties represents a deterministic input and the motion will remain correlated with itself indefinitely, a very bad thing if the tie crossing rate corresponds to a natural resonance of the suspension system of the vehicle.

Since a random process can never be more than perfectly correlated, it can never achieve a correlation greater than its value at the origin. Thus

 
\left| C \left( \tau \right) \right| \leq C\left( 0 \right)
(2)
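One quick way to see this bound (an argument added here for completeness; it is not in the original text) is to expand the non-negative quantity  \left\langle \left[ u \left( t \right) \pm u \left( t + \tau \right) \right]^{2} \right\rangle  using stationarity:

\left\langle \left[ u \left( t \right) \pm u \left( t + \tau \right) \right]^{2} \right\rangle = 2 C \left( 0 \right) \pm 2 C \left( \tau \right) \geq 0

so that  C \left( 0 \right) \geq \mp C \left( \tau \right) , i.e.,  \left| C \left( \tau \right) \right| \leq C \left( 0 \right) .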

An important consequence of stationarity is that the autocorrelation is symmetric in the time difference  \tau = t' - t . To see this simply shift the origin in time backwards by an amount  \tau  and note that independence of origin implies:

 
\left\langle u \left( t \right) u \left( t + \tau \right) \right\rangle  = \left\langle u \left( t - \tau \right)  u \left( t \right) \right\rangle
(3)

Since the right hand side is simply  C \left( - \tau \right)   , it follows immediately that:

 
C \left( \tau \right) = C \left( - \tau \right)
(4)

The autocorrelation coefficient

It is convenient to define the autocorrelation coefficient as:


\rho \left( \tau \right) \equiv \frac{ C \left( \tau \right)}{ C \left( 0 \right)} = \frac{\left\langle u \left( t \right) u \left( t + \tau \right) \right\rangle}{ \left\langle  u^{2} \right\rangle }
(5)

where


\left\langle u^{2} \right\rangle = \left\langle u \left( t \right) u \left( t \right) \right\rangle = C \left( 0 \right) = var \left[ u \right]
(6)

Since the autocorrelation is symmetric, so is its coefficient, i.e.,


\rho \left( \tau \right) = \rho  \left( - \tau \right)
(7)

It is also obvious from the fact that the autocorrelation is maximal at the origin that the autocorrelation coefficient must also be maximal there. In fact from the definition it follows that


\rho \left( 0 \right) = 1
(8)

and


\left| \rho \left( \tau \right) \right| \leq 1
(9)

for all values of  \tau .

The integral scale

One of the most useful measures of the length of time over which a process is correlated with itself is the integral scale, defined by


T_{int} \equiv \int^{\infty}_{0} \rho \left( \tau \right) d \tau
(10)

It is easy to see why this works by looking at Figure 5.2. In effect we have replaced the area under the correlation coefficient by a rectangle of height unity and width  T_{int} .
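As a practical aside (a minimal numerical sketch added here, not part of the original text), the integral scale can be estimated from a uniformly sampled record of a stationary signal by forming the sample autocorrelation coefficient and integrating it. The sampling interval dt, the cutoff at the first zero crossing of the sample correlation, and the exponentially correlated test signal are all assumptions of this sketch, written in Python:

import numpy as np

def integral_scale(u, dt, max_lag=2000):
    # estimate T_int = integral of rho(tau) d tau, integrating only up to
    # the first zero crossing of the sample autocorrelation coefficient
    f = np.asarray(u, dtype=float)
    f = f - f.mean()                                  # fluctuation about the time mean
    n = f.size
    lags = range(min(max_lag, n))
    c = np.array([np.dot(f[:n - k], f[k:]) / (n - k) for k in lags])
    rho = c / c[0]                                    # autocorrelation coefficient
    cut = np.argmax(rho <= 0.0) if np.any(rho <= 0.0) else len(rho)
    return np.trapz(rho[:cut], dx=dt)

# usage: an exponentially correlated (AR(1)) test signal whose true
# integral scale is T = 1.0
rng = np.random.default_rng(0)
dt, T = 0.01, 1.0
a = np.exp(-dt / T)
u = np.zeros(50000)
for i in range(1, u.size):
    u[i] = a * u[i - 1] + np.sqrt(1.0 - a * a) * rng.standard_normal()
print(integral_scale(u, dt))                          # should come out close to 1.0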

The temporal Taylor microscale

The autocorrelation can be expanded about the origin in a Maclaurin series; i.e.,


C \left( \tau \right) = C \left( 0 \right) + \tau \frac{ d C }{ d \tau }|_{\tau = 0} + \frac{1}{2} \tau^{2} \frac{d^{2} C}{d \tau^{2} }|_{\tau = 0} + \frac{1}{3!} \tau^{3} \frac{d^{3} C}{d \tau^{3} }|_{\tau = 0} + \cdots
(11)

But we know the autocorrelation is symmetric in  \tau , hence the odd terms in  \tau  must be identically zero (i.e.,  dC / d\tau |_{\tau = 0} = 0 ,  d^{3}C / d\tau^{3} |_{\tau = 0} = 0 , etc.). Therefore the expansion of the autocorrelation near the origin reduces to:


C \left( \tau \right) = C \left( 0 \right) + \frac{1}{2} \tau^{2} \frac{d^{2} C}{d \tau^{2} }|_{\tau = 0} + \cdots
(12)

Similarly, the autocorrelation coefficient near the origin can be expanded as:


\rho \left( \tau \right) = 1 + \frac{1}{2}\frac{d^{2}\rho}{d \tau^{2}}|_{\tau = 0} \tau^{2}+ \cdots
(13)

where we have used the fact that  \rho \left( 0 \right) = 1 . If we define  ' = d / d\tau  we can write this compactly as:


\rho \left( \tau \right) = 1 + \frac{1}{2} \rho '' \left( 0 \right) \tau^{2} + \cdots
(14)

Since  \rho \left( \tau \right) has its maximum at the origin, obviously  \rho'' \left( 0 \right) must be negative.

We can use the correlation and its second derivative at the origin to define a special time scale,  \lambda_{\tau} (called the Taylor microscale) by:


\lambda^{2}_{\tau} \equiv - \frac{2}{\rho'' \left( 0 \right)}
(15)

Using this in equation 14 yields the expansion for the correlation coefficient near the origin as:


\rho \left( \tau \right) = 1 - \frac{\tau^{2}}{\lambda^{2}_{\tau}} + \cdots
(16)

Thus very near the origin the correlation coefficient (and the autocorrelation as well) simply rolls off parabolically; i.e.,


\rho \left( \tau \right) \approx 1 - \frac{\tau^{2}}{\lambda^{2}_{\tau}}
(17)

This parabolic curve is shown in Figure 3 as the osculating (or 'kissing') parabola which approaches zero exactly as the autocorrelation coefficient does. The intercept of this osculating parabola with the  \tau -axis is the Taylor microscale,  \lambda_{\tau} .

The Taylor microscale is significant for a number of reasons. First, for many random processes (e.g., Gaussian), the Taylor microscale can be proven to be the average distance between zero-crossings of a random variable in time. This is approximately true for turbulence as well. Thus one can quickly estimate the Taylor microscale by simply observing the zero-crossings on an oscilloscope trace.

The Taylor microscale also has a special relationship to the mean square time derivative of the signal,  \left\langle  \left[ d u / d t \right]^{2} \right\rangle . This is easiest to derive if we consider two stationary random signals at two different times, say  u = u \left( t \right)  and  u' = u' \left( t' \right) . The derivative of the first signal is  d u / d t  and that of the second is  d u' / d t' . Now let's multiply these together and rewrite them as:


\frac{du'}{dt'} \frac{du}{dt} = \frac{d^{2}}{dtdt'} u \left( t \right) u' \left( t' \right)
(18)

where the right-hand side follows from our assumption that  u is not a function of  t' nor  u' a function of  t .

Now if we average and interchange the operations of differentiation and averaging we obtain:


\left\langle \frac{du'}{dt'} \frac{du}{dt} \right\rangle = \frac{d^{2}}{dtdt'} \left\langle u \left( t \right) u' \left( t' \right) \right\rangle
(19)

Here comes the first trick: we simply take  u' to be exactly  u but evaluated at time  t' . So  u \left( t \right) u' \left( t' \right) simply becomes  u \left( t \right) u  \left( t' \right) and its average is just the autocorrelation,  C \left( \tau \right) . Thus we are left with:


\left\langle \frac{du'}{dt'} \frac{du}{dt} \right\rangle =  \frac{d^{2}}{dtdt'} C \left( t' - t \right)
(20)

Now we simply need to use the chain-rule. We have already defined  \tau = t' - t . Let's also define  \xi = t' + t and transform the derivatives involving  t and  t' to derivatives involving  \tau and  \xi . The result is:


\frac{d^{2}}{dtdt'} = \frac{d^{2}}{d \xi^{2}} - \frac{d^{2}}{d \tau^{2}}
(21)
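For readers who want the intermediate step behind equation 21 (added here; it is not in the original text), the change of variables works out as follows. Since  \tau = t' - t  and  \xi = t' + t ,

\frac{d}{dt} = \frac{d}{d \xi} - \frac{d}{d \tau} , \quad \frac{d}{dt'} = \frac{d}{d \xi} + \frac{d}{d \tau}

and multiplying these two operators together gives

\frac{d^{2}}{dtdt'} = \left( \frac{d}{d \xi} - \frac{d}{d \tau} \right) \left( \frac{d}{d \xi} + \frac{d}{d \tau} \right) = \frac{d^{2}}{d \xi^{2}} - \frac{d^{2}}{d \tau^{2}}

which is exactly equation 21.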

So equation 20 becomes


\left\langle \frac{du'}{dt'} \frac{du}{dt} \right\rangle = \frac{d^{2}}{d \xi^{2}}C \left( \tau \right) - \frac{d^{2}}{d \tau^{2}} C \left( \tau \right)
(22)

But since  C is a function only of  \tau , the derivative of it with respect to  \xi is identically zero. Thus we are left with:


\left\langle \frac{du'}{dt'} \frac{du}{dt} \right\rangle = - \frac{d^{2}}{d \tau^{2}} C \left( \tau \right)
(23)

And finally we need the second trick. Let's evaluate both sides at  t = t' (or   \tau = 0 ) to obtain the mean square derivative as:


\left\langle \left( \frac{du}{dt} \right)^{2} \right\rangle = - \frac{d^{2}}{d \tau^{2}} C \left( \tau \right)|_{ \tau = 0}
(24)

But from our definition of the Taylor microscale and the facts that  C \left( 0 \right) = \left\langle u^{2} \right\rangle and  C \left( \tau \right) = \left\langle u^{2} \right\rangle \rho \left( \tau \right) , this is exactly the same as:


\left\langle \left( \frac{du}{dt} \right)^{2} \right\rangle = 2 \frac{ \left\langle u^{2} \right\rangle}{\lambda^{2}_{\tau}}
(25)
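Written out as a single chain (an intermediate step added here for clarity; it is not in the original text), the substitutions are:

\left\langle \left( \frac{du}{dt} \right)^{2} \right\rangle = - \frac{d^{2} C}{d \tau^{2}}|_{\tau = 0} = - \left\langle u^{2} \right\rangle \rho'' \left( 0 \right) = \left\langle u^{2} \right\rangle \frac{2}{\lambda^{2}_{\tau}}

where the last step uses the definition of the Taylor microscale in equation 15.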

This amazingly simple result is very important in the study of turbulence, especially after we extend it to spatial derivatives.
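As a numerical aside (a minimal sketch added here, not taken from the original text), equation 25 gives a direct way to estimate the Taylor microscale from a sampled signal using a finite-difference time derivative. The sampling interval dt, the central-difference approximation, and the sinusoidal test signal are assumptions of this sketch, written in Python:

import numpy as np

def taylor_microscale(u, dt):
    # eq. (25): lambda_tau^2 = 2 <u^2> / <(du/dt)^2>
    f = np.asarray(u, dtype=float)
    f = f - f.mean()                       # work with the fluctuation
    dudt = np.gradient(f, dt)              # central-difference time derivative
    return np.sqrt(2.0 * np.mean(f**2) / np.mean(dudt**2))

# usage: for u(t) = sin(w t + phi), rho(tau) = cos(w tau), so the exact
# microscale is sqrt(2)/w; check numerically with w = 5
dt = 1.0e-3
t = np.arange(0.0, 200.0, dt)
u = np.sin(5.0 * t + 1.3)
print(taylor_microscale(u, dt), np.sqrt(2.0) / 5.0)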

Time averages of stationary processes

It is common practice in many scientific disciplines to define a time average by integrating the random variable over a fixed time interval, i.e.,


U_{T} \equiv \frac{1}{T} \int^{T_{2}}_{T_{1}} u \left( t \right) dt
(26)

For the stationary random processes we are considering here, we can define  T_{1} to be the origin in time and simply write:


U_{T} \equiv \frac{1}{T} \int^{T}_{0} u \left( t \right) dt
(27)

where  T = T_{2} - T_{1} is the integration time.

Figure 5.4 shows a portion of a stationary random signal over which such an integration might be performed. The time integral of  u \left( t \right)  over the interval  \left( 0, T \right)  corresponds to the shaded area under the curve. Now since  u \left( t \right)  is random and since it forms the upper boundary of the shaded area, it is clear that the time average,  U_{T} , is a lot like the estimator for the mean based on a finite number of independent realizations,  X_{N} , which we encountered earlier in the section Estimation from a finite number of realizations (see Elements of statistical analysis).

It will be shown in the analysis presented below that if the signal is stationary, the time average defined by equation 27 is an unbiased estimator of the true average  U . Moreover, the estimator converges to  U as the time becomes infinite; i.e., for stationary random processes


U = \lim_{T \rightarrow \infty} \frac{1}{T} \int^{T}_{0} u \left( t \right) dt
(28)

Thus the time and ensemble averages are equivalent in the limit as  T \rightarrow \infty , but only for a stationary random process.
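To make the convergence concrete (a minimal numerical sketch added here, not part of the original text), the finite-time average of equation 27 can be computed for a synthetic stationary signal whose true ensemble mean is known. The exponentially correlated test process and its parameters are assumptions used only for illustration; the sketch is in Python:

import numpy as np

def time_average(u, dt, T):
    # U_T = (1/T) * integral_0^T u(t) dt, approximated by a Riemann sum
    n = int(round(T / dt))
    return np.sum(u[:n]) * dt / T

# usage: an exponentially correlated signal with true (ensemble) mean
# U = 2.0; the estimate approaches U as the averaging time T grows (eq. 28)
rng = np.random.default_rng(1)
dt, tau_c, U = 0.01, 1.0, 2.0
a = np.exp(-dt / tau_c)
x = np.zeros(120000)
for i in range(1, x.size):
    x[i] = a * x[i - 1] + np.sqrt(1.0 - a * a) * rng.standard_normal()
u = U + x
for T in (10.0, 100.0, 1000.0):
    print(T, time_average(u, dt, T))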

Bias and variability of time estimators

It is easy to show that the estimator,  U_{T} , is unbiased by taking its ensemble average; i.e.,


\left\langle U_{T} \right\rangle = \left\langle \frac{1}{T}  \int^{T}_{0} u \left( t \right) dt \right\rangle = \frac{1}{T} \int^{T}_{0} \left\langle u \left( t \right) \right\rangle dt
(29)

Since the process has been assumed stationary,   \left\langle u \left( t \right) \right\rangle is independent of time. It follows that:


\left\langle U_{T} \right\rangle = \frac{1}{T} \left\langle u \left( t \right) \right\rangle T = U
(30)