
Introduction to turbulence/Stationarity and homogeneity

From CFD-Wiki


Revision as of 20:27, 21 January 2008


Processes statistically stationary in time

Many random processes have the characteristic that their statistical properties do not appear to depend directly on time, even though the random variables themselves are time-dependent. For example, consider the signals shown in Figures 2.2 and 2.5.

When the statistical properties of a random process are independent of time, the random process is said to be stationary. For such a process all the moments are time-independent, e.g.,  \left\langle \tilde{u} \left( t \right) \right\rangle = U , etc. In fact, the probability density itself is time-independent, as should be obvious from the fact that the moments are time-independent.

An alternative way of looking at stationarity is to note that the statistics of the process are independent of the origin in time. It is obvious from the above, for example, that if the statistics of a process are time-independent, then  \left\langle  u^{n} \left( t \right) \right\rangle = \left\langle u^{n} \left( t + T \right) \right\rangle , etc., where  T is some arbitrary translation of the origin in time. Less obvious, but equally true, is that the product  \left\langle u \left( t \right) u \left( t' \right) \right\rangle depends only on the time difference  t'-t and not on  t (or  t' ) directly. This consequence of stationarity can be extended to any product moment. For example,  \left\langle u \left( t \right) v \left( t' \right) \right\rangle can depend only on the time difference  t'-t . And  \left\langle u \left( t \right) v \left( t' \right) w \left( t'' \right)\right\rangle can depend only on the two time differences  t'- t and  t'' - t (or  t'' - t' ) and not on  t ,  t' or  t'' directly.
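The origin-independence of the moments can be checked by brute force. The sketch below is our own illustration, not part of the original text: it uses a first-order autoregressive (AR(1)) process as an assumed stand-in for a stationary random signal, with illustrative parameters and sample sizes, and confirms that the ensemble mean and the two-time product depend only on the time separation, not the origin.

```python
# Illustration (assumed AR(1) surrogate): stationary statistics are origin-independent.
import math
import random

random.seed(1)

def ar1_realization(n, a=0.9, sigma=1.0):
    """One realization of u[k+1] = a*u[k] + noise, started from the stationary state."""
    std_stat = sigma / math.sqrt(1.0 - a * a)   # stationary standard deviation
    u = [random.gauss(0.0, std_stat)]
    for _ in range(n - 1):
        u.append(a * u[-1] + random.gauss(0.0, sigma))
    return u

# Ensemble of realizations: statistics at two different origins should agree.
N, n = 20000, 60
ens = [ar1_realization(n) for _ in range(N)]

mean_t10 = sum(r[10] for r in ens) / N
mean_t50 = sum(r[50] for r in ens) / N
# Two-time product <u(t)u(t')> should depend only on the separation t'-t:
corr_a = sum(r[10] * r[15] for r in ens) / N   # separation 5, origin at 10
corr_b = sum(r[40] * r[45] for r in ens) / N   # separation 5, origin at 40

print(mean_t10, mean_t50)   # both near 0
print(corr_a, corr_b)       # nearly equal
```

Within the sampling scatter of a 20000-member ensemble, the two means agree and the two lag-5 products agree, even though the time origins differ.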

The autocorrelation

One of the most useful statistical moments in the study of stationary random processes (and turbulence, in particular) is the autocorrelation defined as the average of the product of the random variable evaluated at two times, i.e.  \left\langle u \left( t \right) u \left( t' \right)\right\rangle . Since the process is assumed stationary, this product can depend only on the time difference  \tau = t' - t . Therefore the autocorrelation can be written as:

C \left( \tau \right) \equiv \left\langle u \left( t \right) u \left( t + \tau \right)  \right\rangle

The importance of the autocorrelation lies in the fact that it indicates the "memory" of the process; that is, the time over which the process is correlated with itself. Contrast the two cases: the autocorrelation of a deterministic sine wave is simply a cosine, as can easily be proven. Note that there is no time beyond which it can be guaranteed to be arbitrarily small, since the sine wave always "remembers" when it began, and thus always remains correlated with itself. By contrast, a stationary random process like the one illustrated in the figure will eventually lose all correlation and go to zero. In other words it has a "finite memory" and "forgets" how it began. Note that one must be careful to make sure that a correlation really both goes to zero and stays there before drawing conclusions, since even the sine wave correlation is zero at some points. Stationary random processes always have two-time correlation functions which eventually go to zero and stay there.
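The contrast between infinite and finite memory can be seen numerically. The sketch below is our own (the AR(1) random signal and all parameters are assumptions, not from the text): it estimates the autocorrelation coefficient at a lag of one full sine period, where the sine is still perfectly correlated with itself but the random signal has forgotten its past.

```python
# Illustration (assumed signals): sine wave vs. AR(1) noise autocorrelation memory.
import math
import random

random.seed(2)

def rho(u, lag):
    """Sample autocorrelation coefficient at a given lag (biased estimator)."""
    n = len(u)
    m = sum(u) / n
    c0 = sum((x - m) ** 2 for x in u) / n
    c = sum((u[i] - m) * (u[i + lag] - m) for i in range(n - lag)) / n
    return c / c0

n = 20000
sine = [math.sin(2 * math.pi * i / 100) for i in range(n)]   # period = 100 samples

a = 0.9
noise = [random.gauss(0.0, 1.0)]
for _ in range(n - 1):
    noise.append(a * noise[-1] + random.gauss(0.0, 1.0))

print(rho(sine, 100))    # one full period later: still ~1 (infinite memory)
print(rho(noise, 100))   # ~a**100, essentially zero (finite memory)
```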

Example 1.

Consider the motion of an automobile responding to the movement of the wheels over a rough surface. In the usual case where the road roughness is randomly distributed, the motion of the car will be a weighted history of the road's roughness, with the most recent bumps having the most influence and with distant bumps eventually forgotten. On the other hand, if the car is travelling down a railroad track, the periodic crossing of the railroad ties represents a deterministic input, and the motion will remain correlated with itself indefinitely; a very bad thing if the tie crossing rate corresponds to a natural resonance of the suspension system of the vehicle.

Since a random process can never be more than perfectly correlated, it can never achieve a correlation greater than its value at the origin. Thus

\left| C \left( \tau \right) \right| \leq C\left( 0 \right)

An important consequence of stationarity is that the autocorrelation is symmetric in the time difference  \tau = t' - t . To see this simply shift the origin in time backwards by an amount  \tau  and note that independence of origin implies:

\left\langle u \left( t \right) u \left( t + \tau \right) \right\rangle  = \left\langle u \left( t - \tau \right)  u \left( t \right) \right\rangle

Since the right hand side is simply  C \left( - \tau \right)   , it follows immediately that:

C \left( \tau \right) = C \left( - \tau \right)

The autocorrelation coefficient

It is convenient to define the autocorrelation coefficient as:

\rho \left( \tau \right) \equiv \frac{ C \left( \tau \right)}{ C \left( 0 \right)} = \frac{\left\langle u \left( t \right) u \left( t + \tau \right) \right\rangle}{ \left\langle  u^{2} \right\rangle }

where

\left\langle u^{2} \right\rangle = \left\langle u \left( t \right) u \left( t \right) \right\rangle = C \left( 0 \right) = var \left[ u \right]

Since the autocorrelation is symmetric, so is its coefficient, i.e.,

\rho \left( \tau \right) = \rho  \left( - \tau \right)

It is also obvious from the fact that the autocorrelation is maximal at the origin that the autocorrelation coefficient must also be maximal there. In fact from the definition it follows that

\rho \left( 0 \right) = 1

and

\rho \left( \tau \right) \leq 1

for all values of  \tau .

The integral scale

One of the most useful measures of the length of time a process is correlated with itself is the integral scale defined by

T_{int} \equiv \int^{\infty}_{0} \rho \left( \tau \right) d \tau

It is easy to see why this works by looking at Figure 5.2. In effect we have replaced the area under the correlation coefficient by a rectangle of height unity and width  T_{int} .
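As a concrete (and assumed) illustration, suppose the correlation coefficient were exponential,  \rho \left( \tau \right) = e^{-\tau / T_{0}} — a form we choose for this sketch, not one derived in the text. Then the area under  \rho is exactly  T_{0} , and the unit-height rectangle of width  T_{int} reproduces that area, as a direct numerical integration confirms.

```python
# Illustration (assumed exponential correlation): T_int equals the area under rho.
import math

T0 = 2.5                          # assumed correlation time
def rho(tau):
    return math.exp(-tau / T0)

# Trapezoidal integration of rho out to many correlation times:
dtau, upper = 1e-3, 20 * T0
n = int(upper / dtau)
T_int = dtau * (0.5 * rho(0.0)
                + sum(rho(k * dtau) for k in range(1, n))
                + 0.5 * rho(upper))
print(T_int)   # ~2.5 = T0
```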

The temporal Taylor microscale

The autocorrelation can be expanded about the origin in a Maclaurin series; i.e.,

C \left( \tau \right) = C \left( 0 \right) + \tau \frac{ d C }{ d \tau }|_{\tau = 0} + \frac{1}{2} \tau^{2} \frac{d^{2} C}{d \tau^{2} }|_{\tau = 0} + \frac{1}{3!} \tau^{3} \frac{d^{3} C}{d \tau^{3} }|_{\tau = 0} + \cdots

But we know the autocorrelation is symmetric in  \tau , hence the odd terms in  \tau must be identically zero (i.e.,  dC / d\tau |_{\tau = 0} = 0 ,  d^{3}C / d\tau^{3} |_{\tau = 0} = 0  , etc.). Therefore the expansion of the autocorrelation near the origin reduces to:

C \left( \tau \right) = C \left( 0 \right) + \frac{1}{2} \tau^{2} \frac{d^{2} C}{d \tau^{2} }|_{\tau = 0} + \cdots

Similarly, the autocorrelation coefficient near the origin can be expanded as:

\rho \left( \tau \right) = 1 + \frac{1}{2}\frac{d^{2}\rho}{d \tau^{2}}|_{\tau = 0} \tau^{2}+ \cdots

where we have used the fact that  \rho \left( 0 \right) = 1 . If we define  ' = d / d\tau  we can write this compactly as:

\rho \left( \tau \right) = 1 + \frac{1}{2} \rho '' \left( 0 \right) \tau^{2} + \cdots

Since  \rho \left( \tau \right) has its maximum at the origin, obviously  \rho'' \left( 0 \right) must be negative.

We can use the correlation and its second derivative at the origin to define a special time scale,  \lambda_{\tau} (called the Taylor microscale) by:

\lambda^{2}_{\tau} \equiv - \frac{2}{\rho'' \left( 0 \right)}

Using this in equation 14 yields the expansion for the correlation coefficient near the origin as:

\rho \left( \tau \right) = 1 - \frac{\tau^{2}}{\lambda^{2}_{\tau}} + \cdots

Thus very near the origin the correlation coefficient (and the autocorrelation as well) simply rolls off parabolically; i.e.,

\rho \left( \tau \right) \approx 1 - \frac{\tau^{2}}{\lambda^{2}_{\tau}}

This parabolic curve is shown in Figure 3 as the osculating (or 'kissing') parabola which approaches zero exactly as the autocorrelation coefficient does. The intercept of this osculating parabola with the  \tau -axis is the Taylor microscale,  \lambda_{\tau} .
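A minimal sketch of extracting  \lambda_{\tau} from the curvature at the origin, assuming a Gaussian-shaped correlation coefficient  \rho \left( \tau \right) = e^{- \tau^{2} / a^{2}} (our assumption, chosen because for this shape  \rho'' \left( 0 \right) = -2/a^{2} , so  \lambda_{\tau} = a exactly):

```python
# Illustration (assumed Gaussian-shaped rho): Taylor microscale from rho''(0).
import math

a = 0.3                                     # assumed correlation-shape parameter
def rho(tau):
    return math.exp(-(tau / a) ** 2)

# Second derivative at the origin by central finite difference:
h = 1e-4
rho_pp0 = (rho(h) - 2.0 * rho(0.0) + rho(-h)) / h ** 2
lam = math.sqrt(-2.0 / rho_pp0)             # lambda_tau^2 = -2 / rho''(0)
print(lam)                                  # ~0.3 = a

# The osculating parabola 1 - tau^2/lam^2 hugs rho near the origin:
print(rho(0.05), 1.0 - (0.05 / lam) ** 2)
```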

The Taylor microscale is significant for a number of reasons. First, for many random processes (e.g., Gaussian), the Taylor microscale can be proven to be the average distance between zero-crossings of a random variable in time. This is approximately true for turbulence as well. Thus one can quickly estimate the Taylor microscale by simply observing the zero-crossings on an oscilloscope trace.

The Taylor microscale also has a special relationship to the mean square time derivative of the signal,  \left\langle  \left[ d u / d t \right]^{2} \right\rangle . This is easiest to derive if we consider two stationary random signals at two different times, say  u = u \left( t \right) and  u' = u' \left( t' \right) . The derivative of the first signal is  d u / d t and that of the second is  d u' / d t' . Now let's multiply these together and rewrite them as:

\frac{du'}{dt'} \frac{du}{dt} = \frac{d^{2}}{dtdt'} u \left( t \right) u' \left( t' \right)

where the right-hand side follows from our assumption that  u is not a function of  t' nor  u' a function of  t .

Now if we average and interchange the operations of differentiation and averaging we obtain:

\left\langle \frac{du'}{dt'} \frac{du}{dt} \right\rangle = \frac{d^{2}}{dtdt'} \left\langle u \left( t \right) u' \left( t' \right) \right\rangle

Here comes the first trick: we simply take  u' to be exactly  u but evaluated at time  t' . So  u \left( t \right) u' \left( t' \right) simply becomes  u \left( t \right) u  \left( t' \right) and its average is just the autocorrelation,  C \left( \tau \right) . Thus we are left with:

\left\langle \frac{du'}{dt'} \frac{du}{dt} \right\rangle =  \frac{d^{2}}{dtdt'} C \left( t' - t \right)

Now we simply need to use the chain rule. We have already defined  \tau = t' - t . Let's also define  \xi = t' + t and transform the derivatives involving  t and  t' to derivatives involving  \tau and  \xi . The result is:

\frac{d^{2}}{dtdt'} = \frac{d^{2}}{d \xi^{2}} - \frac{d^{2}}{d \tau^{2}}

So equation 20 becomes

\left\langle \frac{du'}{dt'} \frac{du}{dt} \right\rangle = \frac{d^{2}}{d \xi^{2}}C \left( \tau \right) - \frac{d^{2}}{d \tau^{2}} C \left( \tau \right)

But since  C is a function only of  \tau , the derivative of it with respect to  \xi is identically zero. Thus we are left with:

\left\langle \frac{du'}{dt'} \frac{du}{dt} \right\rangle = - \frac{d^{2}}{d \tau^{2}} C \left( \tau \right)

And finally we need the second trick. Let's evaluate both sides at  t = t' (or   \tau = 0 ) to obtain the mean square derivative as:

\left\langle \left( \frac{du}{dt} \right)^{2} \right\rangle = - \frac{d^{2}}{d \tau^{2}} C \left( \tau \right)|_{ \tau = 0}

But from our definition of the Taylor microscale and the facts that  C \left( 0 \right) = \left\langle u^{2} \right\rangle and  C \left( \tau \right) = \left\langle u^{2} \right\rangle \rho \left( \tau \right) , this is exactly the same as:

\left\langle \left( \frac{du}{dt} \right)^{2} \right\rangle = 2 \frac{ \left\langle u^{2} \right\rangle}{\lambda^{2}_{\tau}}
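This identity can be checked exactly for a signal built from a few cosines,  u \left( t \right) = \sum_{k} a_{k} \cos \left( \omega_{k} t + \phi_{k} \right) — an assumed test signal of our own, not one used in the text — since then  C \left( \tau \right) = \sum_{k} \left( a^{2}_{k} / 2 \right) \cos \left( \omega_{k} \tau \right) is known in closed form:

```python
# Illustration (assumed multi-cosine signal): <(du/dt)^2> = 2 <u^2> / lambda_tau^2.
import math

amps  = [1.0, 0.5, 0.25]          # assumed amplitudes a_k
freqs = [1.0, 2.3, 5.1]           # assumed (incommensurate) frequencies w_k

u2  = sum(a * a / 2.0 for a in amps)                         # <u^2>
du2 = sum((a * w) ** 2 / 2.0 for a, w in zip(amps, freqs))   # <(du/dt)^2>

# Taylor microscale from the curvature of rho(tau) = C(tau)/C(0) at the origin:
def C(tau):
    return sum((a * a / 2.0) * math.cos(w * tau) for a, w in zip(amps, freqs))

h = 1e-5
rho_pp0 = (C(h) - 2.0 * C(0.0) + C(-h)) / (h ** 2 * C(0.0))
lam2 = -2.0 / rho_pp0                      # lambda_tau^2

print(du2, 2.0 * u2 / lam2)   # the two sides of the identity agree
```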

This amazingly simple result is very important in the study of turbulence, especially after we extend it to spatial derivatives.

Time averages of stationary processes

It is common practice in many scientific disciplines to define a time average by integrating the random variable over a fixed time interval, i.e.,

U_{T} \equiv \frac{1}{T} \int^{T_{2}}_{T_{1}} u \left( t \right) dt

For the stationary random processes we are considering here, we can define  T_{1} to be the origin in time and simply write:

U_{T} \equiv \frac{1}{T} \int^{T}_{0} u \left( t \right) dt

where  T = T_{2} - T_{1} is the integration time.

Figure 5.4 shows a portion of a stationary random signal over which such an integration might be performed. The time integral of  u \left( t \right) over the interval  \left( 0, T \right) corresponds to the shaded area under the curve. Now since  u \left( t \right) is random and since it forms the upper boundary of the shaded area, it is clear that the time average,  U_{T} , is a lot like the estimator for the mean based on a finite number of independent realizations,  X_{N} , we encountered earlier in section Estimation from a finite number of realizations (see Elements of statistical analysis).

It will be shown in the analysis presented below that if the signal is stationary, the time average defined by equation 27 is an unbiased estimator of the true average  U . Moreover, the estimator converges to  U as the time becomes infinite; i.e., for stationary random processes

U = \lim_{T \rightarrow \infty} \frac{1}{T} \int^{T}_{0} u \left( t \right) dt

Thus the time and ensemble averages are equivalent in the limit as  T \rightarrow \infty , but only for a stationary random process.

Bias and variability of time estimators

It is easy to show that the estimator,  U_{T} , is unbiased by taking its ensemble average; i.e.,

\left\langle U_{T} \right\rangle = \left\langle \frac{1}{T}  \int^{T}_{0} u \left( t \right) dt \right\rangle = \frac{1}{T} \int^{T}_{0} \left\langle u \left( t \right) \right\rangle dt

Since the process has been assumed stationary,   \left\langle u \left( t \right) \right\rangle is independent of time. It follows that:

\left\langle U_{T} \right\rangle = \frac{1}{T} \left\langle u \left( t \right) \right\rangle T = U

To see whether the estimate improves as  T increases, the variability of  U_{T} must be examined, exactly as we did for  X_{N} earlier in section Bias and convergence of estimators (see chapter The elements of statistical analysis). To do this we need the variance of  U_{T} given by:

var \left[ U_{T} \right] = \left\langle \left[ U_{T} - \left\langle U_{T} \right\rangle \right]^{2} \right\rangle = \left\langle \left[ U_{T} - U \right]^{2} \right\rangle
= \frac{1}{T^{2}} \left\langle \left\{ \int^{T}_{0} \left[ u \left( t \right) - U \right] dt \right\}^{2} \right\rangle
= \frac{1}{T^{2}} \left\langle \int^{T}_{0} \int^{T}_{0} \left[ u \left( t \right) - U \right] \left[ u \left( t' \right) - U \right] dtdt' \right\rangle
= \frac{1}{T^{2}} \int^{T}_{0} \int^{T}_{0} \left\langle u'\left( t \right) u'\left( t' \right) \right\rangle dtdt'

But since the process is assumed stationary,  \left\langle u' \left( t \right) u' \left( t' \right)  \right\rangle = C \left( t' - t \right) , where  C \left( t' - t \right) = \left\langle u^{2} \right\rangle \rho \left( t'-t \right) and  \rho \left( t'-t \right) is the autocorrelation coefficient. Therefore the integral can be rewritten as:

var \left[ U_{T} \right] = \frac{1}{T^{2}} \int^{T}_{0} \int^{T}_{0} C \left( t' - t \right) dtdt'
= \frac{ \left\langle u^{2} \right\rangle }{ T^{2} } \int^{T}_{0} \int^{T}_{0} \rho \left( t' - t \right) dtdt'

Now we need to apply some fancy calculus. If new variables  \tau= t'-t  and  \xi= t'+t are defined, the double integral can be transformed to (see Figure 5.5):

var \left[ U_{T} \right] = \frac{var \left[ u \right]}{2 T^{2}} \left[ \int^{T}_{0} d \tau \, \rho \left( \tau \right) \int^{2T-\tau}_{\tau} d \xi + \int^{0}_{-T} d \tau \, \rho \left( \tau \right) \int^{2T+\tau}_{-\tau} d \xi \right]

where the factor of  1/2 arises from the Jacobian of the transformation. The integrals over  d \xi can be evaluated directly to yield:

var \left[ U_{T} \right] = \frac{var \left[ u \right]}{T^{2}} \left\{ \int^{T}_{0} \rho \left( \tau \right) \left[ T - \tau \right] d \tau  + \int^{0}_{-T} \rho \left( \tau \right) \left[ T + \tau \right] d \tau \right\}

By noting that the autocorrelation is symmetric, the second integral can be transformed and added to the first to yield at last the result we seek as:

var \left[ U_{T} \right] = \frac{var \left[ u \right]}{T} \int^{T}_{-T} \rho \left( \tau \right) \left[ 1 - \frac{ \left| \tau \right| }{T} \right] d \tau

Now if our averaging time,  T , is chosen so large that  \left| \tau \right| / T \ll 1 over the range for which  \rho \left( \tau \right) is non-zero, the integral reduces to:

var \left[ U_{T} \right] \approx \frac{2 var \left[ u \right]}{T} \int^{T}_{0} \rho \left( \tau \right) d \tau = \frac{2 T_{int}}{T} var \left[ u \right]

where  T_{int} is the integral scale defined by equation 10. Thus the variability of our estimator is given by:

\epsilon^{2}_{U_{T}} = \frac{2T_{int}}{T}
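As a closing sanity check of our own (assuming, as in earlier sketches, an AR(1) process as a surrogate for the stationary signal, with illustrative parameters), the predicted variability  var \left[ U_{T} \right] \approx 2 T_{int} \, var \left[ u \right] / T can be compared against the scatter of time averages over a brute-force ensemble:

```python
# Illustration (assumed AR(1) surrogate): var[U_T] vs. the 2*T_int*var[u]/T prediction.
import math
import random

random.seed(3)

a, n, M = 0.8, 500, 4000        # AR coefficient, record length, ensemble size
var_u = 1.0 / (1.0 - a * a)     # stationary variance of the AR(1) process
T_int = 1.0 / (1.0 - a) - 0.5   # area under rho(k) = a**|k| (trapezoidal, unit spacing)

def time_average():
    """Time average U_T of one record of n samples, started in the stationary state."""
    u = random.gauss(0.0, math.sqrt(var_u))
    s = u
    for _ in range(n - 1):
        u = a * u + random.gauss(0.0, 1.0)
        s += u
    return s / n

means = [time_average() for _ in range(M)]
var_UT = sum(m * m for m in means) / M - (sum(means) / M) ** 2

print(var_UT, 2.0 * T_int * var_u / n)   # measured vs. predicted variability
```

Within the sampling scatter of the 4000-member ensemble, the measured variance of the time averages matches the  2 T_{int} \, var \left[ u \right] / T estimate.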