Week 7

INTERMITTENCY AND EXTREME EVENTS

The PDF of a R.V. is flatter, with a wider “tail”, if there is some nontrivial probability of very large fluctuations (normalized by the r.m.s. fluctuation).

In turbulence, velocity gradients, and hence strain rates and vorticity, have this property. The effect is even more pronounced for the PDFs of squares of the velocity gradients, and it strengthens as the Reynolds number increases.

We may think of the dissipation rate as an R.V., without averaging: evidence from DNS shows that ϵ can take sample values hundreds or thousands of times larger than the mean (now written ⟨ϵ⟩).

Such “extreme events” are of low probability, and localized in time and space. But their intensity, and consequences, make understanding and prediction important. (e.g. think disaster preparedness)

Intermittency: the property of intense fluctuations localized in time and space. A key puzzle with a central role in turbulence theory.
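One simple way to quantify the “wide tails” described above is the flatness factor ⟨x⁴⟩/⟨x²⟩², which equals 3 for a Gaussian and grows as the tails widen. Below is a minimal numpy sketch; the synthetic data and the lognormal variance modulation are illustrative assumptions standing in for DNS samples, not a turbulence model:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10**6

# Gaussian sample: flatness ~ 3.
g = rng.standard_normal(N)

# Crude heavy-tailed surrogate: Gaussian with randomly modulated
# variance (lognormal), mimicking intermittent fluctuations.
sigma = np.exp(0.5 * rng.standard_normal(N))
h = sigma * rng.standard_normal(N)

def flatness(x):
    """Flatness <x^4>/<x^2>^2 of a zero-mean sample."""
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2)**2

print(f"Gaussian flatness:     {flatness(g):.2f}")  # ~3
print(f"heavy-tailed flatness: {flatness(h):.2f}")  # well above 3
```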

JOINTLY DISTRIBUTED RANDOM VARIABLES

Consider 2 random variables, X and Y, and joint events of the type
{X ≤ x, Y ≤ y}
(A natural question: are X and Y related, in some sense?)

Define the joint (cumulative) distribution function

$$F_{XY}(x, y) = P[X \le x,\ Y \le y]$$

As for the single-r.v. version, $F_{XY}(x, y)$ lies within [0, 1] and is a non-decreasing function of both arguments. We also have

$$F_{XY}(x, \infty) = P[X \le x] = F_X(x), \qquad F_{XY}(x, -\infty) = 0$$

The Joint PDF

$$f_{XY}(x, y) = \frac{\partial^2 F_{XY}(x, y)}{\partial x\,\partial y}$$

Non-negative, and integrates to 1 when taken over all x, y.

Integrate over the entire range of one variable:

$$\int_{-\infty}^{\infty} \frac{\partial^2 F_{XY}(x, y)}{\partial x\,\partial y}\, dy = \frac{\partial}{\partial x}\int_{-\infty}^{\infty} \frac{\partial F_{XY}(x, y)}{\partial y}\, dy = \frac{\partial}{\partial x}\Big[F_{XY}(x, \infty) - F_{XY}(x, -\infty)\Big] = f_X(x)$$

This recovers the single-variable PDF, here called the “marginal PDF”.

If the 2 r.v.’s are independent:

$$P[X \le x,\ Y \le y] = P[X \le x]\, P[Y \le y]$$

This implies

$$f_{XY}(x, y) = f_X(x)\, f_Y(y)$$

i.e. the joint PDF is the product of the 2 individual marginal PDFs.

Covariance and correlation coefficient:

$$\mathrm{Cov}(X, Y) = E[(X - \mu_X)(Y - \mu_Y)]$$

$$\rho(X, Y) = \frac{\mathrm{Cov}(X, Y)}{\sqrt{\mathrm{Var}(X)\,\mathrm{Var}(Y)}}$$
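As a quick numerical check of these definitions (a sketch; the coefficients 0.6 and 0.8 are arbitrary choices that make the true correlation exactly 0.6):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(10000)
y = 0.6 * x + 0.8 * rng.standard_normal(10000)  # correlated with x by construction

# Covariance from the definition E[(X - mu_X)(Y - mu_Y)].
cov = np.mean((x - x.mean()) * (y - y.mean()))
rho = cov / np.sqrt(x.var() * y.var())

print(f"rho from definition: {rho:.3f}")            # ~0.6
print(f"numpy cross-check:   {np.corrcoef(x, y)[0, 1]:.3f}")
```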
Independent versus uncorrelated

Independent implies uncorrelated


$$E[(X - \mu_X)(Y - \mu_Y)] = \iint_{-\infty}^{\infty} (x - \mu_X)(y - \mu_Y)\, f_{XY}(x, y)\, dx\, dy$$

If X and Y are independent, the double integral becomes

$$\iint (x - \mu_X)(y - \mu_Y)\, f_X(x)\, f_Y(y)\, dx\, dy = \int (x - \mu_X)\, f_X(x)\, dx \int (y - \mu_Y)\, f_Y(y)\, dy = E[X - \mu_X]\, E[Y - \mu_Y] = 0 \cdot 0 = 0$$

But the converse does not hold

For a simple counter-example, let an angle θ be uniformly distributed in [0, 2π]. Both cos θ and sin θ are random variables with zero mean. They are uncorrelated, but obviously not independent.
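A minimal numpy check of this counter-example:

```python
import numpy as np

rng = np.random.default_rng(2)
theta = rng.uniform(0.0, 2.0 * np.pi, 10**6)
x, y = np.cos(theta), np.sin(theta)

# Correlation coefficient: close to zero (uncorrelated).
print(f"rho = {np.corrcoef(x, y)[0, 1]:.4f}")

# But clearly dependent: x^2 + y^2 = 1 exactly, so knowing x
# constrains y to just two possible values.
print(f"max |x^2 + y^2 - 1| = {np.max(np.abs(x**2 + y**2 - 1)):.2e}")
```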

Special case: Joint Gaussian variables

If X and Y are jointly Gaussian, their joint PDF is

$$f(x, y) = \frac{1}{2\pi \sigma_X \sigma_Y \sqrt{1 - \rho^2}}\, \exp\big[-Q(x, y)/2\big]$$

where

$$Q(x, y) = \frac{1}{1 - \rho^2}\left[\left(\frac{x - \mu_X}{\sigma_X}\right)^2 - 2\rho\,\frac{(x - \mu_X)(y - \mu_Y)}{\sigma_X \sigma_Y} + \left(\frac{y - \mu_Y}{\sigma_Y}\right)^2\right]$$
If ρ = 0 this joint PDF can be readily factorized into the product of
the two marginal PDFs.
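One common way to sample such a pair (a sketch, not from the notes): combine two independent standard normals so that the result has correlation ρ. Setting ρ = 0 recovers independence, consistent with the factorization above.

```python
import numpy as np

rng = np.random.default_rng(3)
rho, n = 0.7, 10**6

# Y = rho*X + sqrt(1 - rho^2)*Z gives a jointly Gaussian pair
# with Corr(X, Y) = rho (X, Z independent standard normals).
x = rng.standard_normal(n)
z = rng.standard_normal(n)
y = rho * x + np.sqrt(1.0 - rho**2) * z

print(f"sample rho = {np.corrcoef(x, y)[0, 1]:.3f}")  # ~0.7
# With rho = 0, y reduces to z, independent of x.
```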
FUNCTIONS OF A RANDOM VARIABLE

If Y(X) is a deterministic function of a random variable X, then Y is also a r.v. and has its own PDF fY(y).

If the functional relationship is one-to-one:

$$P[y \le Y < y + \Delta y] = P[x \le X < x + \Delta x]$$

Let both Δx and Δy go to zero:

$$f_Y(y)\, dy = f_X(x)\, dx$$

This gives the PDF of Y as

$$f_Y(y) = f_X(x)\, |dx/dy|$$

where the derivative on the RHS is evaluated at x = x(y). The absolute value sign is needed to ensure fY(y) is non-negative.

There are situations where X → Y is many-to-one, such that multiple inverses exist, i.e. multiple values of X may correspond to a specific value of Y. A simple example is Y = X².

In that case, a summation over these multiple inverses is required on the R.H.S. of the three equations above.
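A sketch verifying the many-to-one formula for Y = X² with X standard normal (the sample size and bin edges are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.standard_normal(10**6)
y = x**2  # many-to-one: both +sqrt(y) and -sqrt(y) map to y

# PDF of Y from the change-of-variables formula, summed over the
# two inverses x = +/- sqrt(y), each with |dx/dy| = 1/(2 sqrt(y)).
def f_Y(y):
    fx = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
    return (fx(np.sqrt(y)) + fx(-np.sqrt(y))) / (2 * np.sqrt(y))

# Compare with an empirical histogram estimate.
edges = np.linspace(0.05, 4.0, 40)
hist, _ = np.histogram(y, bins=edges, density=True)
mids = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(hist - f_Y(mids))))  # small (sampling + binning error)
```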

RANDOM PROCESSES

Let a random variable X be a function of time (or some other parameter). The random process or stochastic process {X(t)} is characterized by (1) how the statistics of X(t) vary with time; and (2) the properties of the joint distribution of X(t) at different times, such as the multivariate joint PDF of {X(t1), X(t2), ..., X(tN)}.
Many types of stochastic processes are known, and studied in MATH
or ISYE courses.

Stochastic forecasting: given the value of X at a given time t, can we predict (in the mean) the value of X at later times?

The hope for an accurate prediction depends on how closely related X(t) and X(t + τ) are, for some reasonably small τ. This information is given by the “autocorrelation function”: basically the correlation function between X(t) and X(t + τ), as a function of τ.

Sometimes we want to use information from past history, i.e. {X(t′) : t0 ≤ t′ ≤ t}, rather than just the current data point X(t). This gives rise to a “non-Markovian” model.

STATISTICS AND SAMPLING

Statistics as a subject: the estimation of mean values, variances, and probabilities using a finite number N of samples. We also want to quantify uncertainties as a function of N, and be assured that convergence comes steadily at large N.

The Law of Large Numbers

Let X1, X2, ..., XN be “iid” (independent, identically distributed) samples, each of mean µ and variance σ². The sample mean

$$\overline{X}_N = \frac{1}{N}\sum_{i=1}^{N} X_i$$

is itself a random variable.

Hopefully, $\overline{X}_N$ would, in some sense, converge to µ as N → ∞.

We can show: $E[\overline{X}_N] = \mu$, and $E[(\overline{X}_N - \mu)^2] = \sigma^2/N$. This says the r.m.s. statistical error in the estimate of the mean decreases as $N^{-1/2}$.
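A small numerical illustration of the $N^{-1/2}$ error decay (Gaussian samples are an arbitrary choice here; any distribution with finite variance behaves the same way):

```python
import numpy as np

rng = np.random.default_rng(5)
mu, sigma, reps = 1.0, 2.0, 2000

for N in (100, 1000, 10000):
    # reps independent realizations of the sample mean X_bar_N.
    means = rng.normal(mu, sigma, size=(reps, N)).mean(axis=1)
    rms_err = np.sqrt(np.mean((means - mu)**2))
    print(f"N = {N:>6}: rms error = {rms_err:.4f}, "
          f"sigma/sqrt(N) = {sigma / np.sqrt(N):.4f}")
```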

The Central Limit Theorem

$$\overline{X}_N \sim \mathcal{N}(\mu,\ \sigma^2/N) \quad \text{for large } N$$

(even if the individual X1, X2, etc. may not be Gaussian on their own).

The result holds exactly for all N if X1, X2, etc. are iid Gaussian.
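A sketch illustrating the CLT with deliberately non-Gaussian (exponential) samples, whose skewness is 2 and flatness 9; the standardized sample mean nevertheless looks nearly Gaussian:

```python
import numpy as np

rng = np.random.default_rng(6)
N, reps = 1000, 20_000

# Exponential samples are strongly non-Gaussian (skewness 2, flatness 9).
samples = rng.exponential(scale=1.0, size=(reps, N))
means = samples.mean(axis=1)

# Standardize the sample means and check how Gaussian they look.
z = (means - means.mean()) / means.std()
print(f"skewness of X_bar: {np.mean(z**3):+.3f}   (Gaussian: 0)")
print(f"flatness of X_bar: {np.mean(z**4):.3f}   (Gaussian: 3)")
```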

Point vs Interval Estimation

Instead of (or in addition to) point estimates of µ and σ², we may wish to specify the level of uncertainty around a statistical estimate. For example, find some value of ∆ such that

$$P[\overline{X}_N - \Delta \le \mu < \overline{X}_N + \Delta] = 90\%$$

where $[\overline{X}_N - \Delta,\ \overline{X}_N + \Delta]$ is called a 90% “confidence interval” (CI). Normally, as N increases, the size of the CI decreases.
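A minimal sketch of a large-N confidence interval; the factor 1.645 is the standard two-sided 90% Gaussian quantile (an assumption beyond the notes, which do not specify how ∆ is chosen):

```python
import numpy as np

rng = np.random.default_rng(7)
mu, sigma, N = 5.0, 2.0, 400

x = rng.normal(mu, sigma, N)
xbar = x.mean()

# 90% CI half-width: Delta = 1.645 * s / sqrt(N), using the
# large-N Gaussian approximation for the sample mean.
delta = 1.645 * x.std(ddof=1) / np.sqrt(N)
print(f"90% CI: [{xbar - delta:.3f}, {xbar + delta:.3f}]  (true mu = {mu})")
```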

CORRELATION FUNCTIONS IN TURBULENCE

Consider the velocity fluctuation at a fixed location in space, at two different times. Define the velocity autocorrelation:

$$\rho(t, t') = \frac{\langle u(t)\, u(t')\rangle}{\big(\langle u^2(t)\rangle\, \langle u^2(t')\rangle\big)^{1/2}}$$

If the turbulence is statistically stationary, this should be a function of the time lag τ = |t − t′| only. We then write, independent of t,

$$\rho(\tau) = \frac{\langle u(t)\, u(t + \tau)\rangle}{\langle u^2\rangle} = \rho(-\tau) = \rho(|\tau|)$$

We have ρ(0) = 1 always; expect ρ(τ) → 0 at suitably “large” τ.
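In practice, ρ(τ) is estimated from a finite record. A sketch using a synthetic AR(1) signal, whose true autocorrelation is exp(−τ/T) by construction (the process and its parameters are illustrative assumptions, not turbulence data):

```python
import numpy as np

rng = np.random.default_rng(8)
n, dt, T_true = 200_000, 0.01, 0.5

# AR(1) process: stationary, unit variance, rho(tau) = exp(-tau/T_true).
a = np.exp(-dt / T_true)
u = np.zeros(n)
noise = np.sqrt(1.0 - a**2) * rng.standard_normal(n)
for i in range(1, n):
    u[i] = a * u[i - 1] + noise[i]

def rho(u, lag):
    """Estimate rho(tau) at integer lag from a stationary record."""
    return np.mean(u[:-lag] * u[lag:]) / np.mean(u**2) if lag else 1.0

for lag in (0, 10, 50, 100):
    print(f"tau = {lag * dt:.2f}: rho = {rho(u, lag):+.3f}, "
          f"exact = {np.exp(-lag * dt / T_true):.3f}")
```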
Functional form of the autocorrelation function

(For stationary turbulence) ρ(τ) must be an even function of τ. Its Taylor series expansion can only contain even-order powers:

$$\rho(\tau) = 1 + \frac{1}{2}\left.\frac{\partial^2 \rho}{\partial \tau^2}\right|_{\tau=0} \tau^2 + O(\tau^4)$$

Small time lags

Since ρ(τ) < 1 for any nonzero τ, we know the second derivative above (evaluated at τ = 0) is negative. Let

$$\frac{1}{2}\left.\frac{\partial^2 \rho}{\partial \tau^2}\right|_{\tau=0} = -\frac{1}{\lambda_t^2}$$

If τ is small (say, on the order of a couple of τη) we have a parabolic decrease:

$$\rho(\tau) = 1 - (\tau/\lambda_t)^2 + O(\tau^4)$$

It can be shown (by expanding u(t + τ) as a Taylor series before forming the autocorrelation) that

$$\lambda_t^2 = \frac{2\,\langle u^2\rangle}{\langle (\partial u/\partial t)^2\rangle}$$

λt is called the Taylor time scale. It is analogous to the Taylor (length) scale, which is defined based on the spatial derivative ∂u/∂x.
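For completeness, the expansion referred to above can be spelled out (a standard manipulation for a statistically stationary signal):

$$u(t+\tau) = u + \tau\,\dot u + \tfrac{1}{2}\tau^2\,\ddot u + \dots \quad\Rightarrow\quad \langle u(t)\,u(t+\tau)\rangle = \langle u^2\rangle + \tau\,\langle u\,\dot u\rangle + \tfrac{1}{2}\tau^2\,\langle u\,\ddot u\rangle + \dots$$

Stationarity gives $\frac{d}{dt}\langle u^2\rangle = 2\langle u\,\dot u\rangle = 0$ and $\frac{d}{dt}\langle u\,\dot u\rangle = \langle \dot u^2\rangle + \langle u\,\ddot u\rangle = 0$, hence $\langle u\,\ddot u\rangle = -\langle \dot u^2\rangle$. Dividing by $\langle u^2\rangle$,

$$\rho(\tau) = 1 - \frac{\tau^2}{2}\,\frac{\langle \dot u^2\rangle}{\langle u^2\rangle} + O(\tau^4)$$

and matching with $\rho(\tau) = 1 - (\tau/\lambda_t)^2$ gives the stated $\lambda_t^2 = 2\langle u^2\rangle/\langle \dot u^2\rangle$.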

Large and intermediate time lags

If τ is large enough, we expect u(t) and u(t + τ) to become statistically independent (turbulence has a finite memory): ρ(τ) → 0 at large τ.
Usually this decay of ρ(τ) is fast enough that the integral

$$\mathcal{T} = \int_0^{\infty} \rho(\tau)\, d\tau$$

converges. This is called the “integral time scale”; it equals the area under the autocorrelation curve for τ ≥ 0.

Experimental and numerical evidence indicates that, except at small τ (quadratic, as discussed earlier), the autocorrelation is close to exponential, of the form

$$\rho(\tau) \approx \exp(-\tau/\mathcal{T})$$

In practice, the data records available may not be long enough to determine $\mathcal{T}$ with high precision. Sometimes, $\mathcal{T}$ is approximated as the value of τ where ρ(τ) = exp(−1) ≈ 0.368.
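A sketch estimating $\mathcal{T}$ from the same kind of synthetic AR(1) record as above, comparing direct integration of the estimated ρ(τ) with the exp(−1)-crossing shortcut (the truncation lag and parameters are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(9)
n, dt, T_true = 400_000, 0.01, 0.5

# Synthetic AR(1) record with true autocorrelation exp(-tau/T_true).
a = np.exp(-dt / T_true)
u = np.zeros(n)
noise = np.sqrt(1.0 - a**2) * rng.standard_normal(n)
for i in range(1, n):
    u[i] = a * u[i - 1] + noise[i]

max_lag = 600  # truncate: long-lag estimates are noisy
var = np.mean(u**2)
rho = np.array([1.0] + [np.mean(u[:-k] * u[k:]) / var
                        for k in range(1, max_lag + 1)])
taus = np.arange(max_lag + 1) * dt

# Integral time scale via the trapezoidal rule over the estimated rho.
T_int = dt * (np.sum(rho) - 0.5 * (rho[0] + rho[-1]))
# Rough alternative: first crossing of exp(-1).
T_cross = taus[np.argmax(rho < np.exp(-1))]
print(f"T (integration) = {T_int:.3f}, "
      f"T (e^-1 crossing) = {T_cross:.3f}, true T = {T_true}")
```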

Time averaging

Question: how long do we have to keep taking samples in order to obtain accurate averages for a statistically stationary turbulent flow?

1. Strictly speaking, statistically independent samples are required.

2. The averaging time period $T_{avg}$ should be much larger than the integral time scale $\mathcal{T}$: e.g., at a separation of $3\mathcal{T}$ we have ρ ≈ exp(−3) ≈ 0.0498, so samples that far apart are nearly independent, ensuring reliable sampling.
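A standard result (not stated in the notes, but consistent with point 2) quantifies this: for an averaging period $T_{avg} \gg \mathcal{T}$, the variance of the time average of a stationary signal with variance σ² is approximately

$$\mathrm{Var}\big(\overline{u}_{T_{avg}}\big) \approx \frac{2\,\mathcal{T}\,\sigma^2}{T_{avg}}$$

so the record behaves as if it contained $N_{\mathrm{eff}} \approx T_{avg}/(2\mathcal{T})$ independent samples, and the error again decays as $N_{\mathrm{eff}}^{-1/2}$.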
