10 Random field theory

Thus far we have considered descriptive summaries of spatial variability in soil and
rock formations. We have seen that spatial variability can be described by a mean trend
or trend surface, about which individual measurements exhibit residual variation. This
might also be thought of as separating the spatial variability into large-scale and small-scale increments, and then analyzing the two separately. We have seen that residual
variations themselves usually exhibit some spatial structure, which can be described by
an autocorrelation function. In this section, we consider more mathematical approaches
to modeling spatial variation, specifically random field theory. Random field theory is
important for two reasons: first, it provides powerful statistical results which can be used
to draw inferences from field observations and plan spatial sampling strategies; secondly, it
provides a vehicle for incorporating spatial variation in engineering and reliability models.
Random field theory is part of the larger subject of stochastic processes, of which we will
only touch a small part. For more detailed treatment, see Adler (1981), Christakos (1992;
2000), Christakos and Hristopoulos (1998), Parzen (1964), or Vanmarcke (1983).

10.1 Stationary Processes

The application of random field theory to geotechnical issues is based on the assumption
that the spatial variable of concern, z(x), is the realization of a random process. When
this process is defined over the space x ∈ S, the variable z(x) is said to be a stochastic
process. In this book, when S has dimension greater than one, and especially when S is
a spatial domain, z(x) is said to be a random field. This usage is more or less consistent
across civil engineering, although the geostatistics literature uses a vocabulary all of its
own, to which we will occasionally refer.
A random field is defined as the joint probability distribution

F_{x_1,...,x_n}(z_1, ..., z_n) = P{z(x_1) ≤ z_1, ..., z(x_n) ≤ z_n}   (10.1)

This joint probability distribution describes the simultaneous variation of the variables z
within the space x ∈ S. Let

E[z(x)] = μ(x)   (10.2)

Reliability and Statistics in Geotechnical Engineering, Gregory B. Baecher and John T. Christian.
© 2003 John Wiley & Sons, Ltd. ISBN 0-471-49833-5

for all x ∈ S, be the mean or trend of z(x), presumed to exist for all x; and let

Var[z(x)] = σ²(x)   (10.3)

be the variance, also assumed to exist for all x. The covariances of z(x_1), ..., z(x_n) are
defined as

Cov[z(x_i), z(x_j)] = E[(z(x_i) − μ(x_i)) · (z(x_j) − μ(x_j))]   (10.4)

10.1.1 Stationarity
The random field is said to be second-order stationary (sometimes referred to as weakly or
wide-sense stationary) if E[z(x)] = μ for all x, and Cov[z(x_i), z(x_j)] depends only upon the
vector separation of x_i and x_j, not on absolute location:

Cov[z(x_i), z(x_j)] = C_z(x_i − x_j)   (10.5)

in which C_z(x_i − x_j) is the autocovariance function. The random field is said to be
stationary (sometimes referred to as strongly or strictly stationary) if the complete probability
distribution, F_{x_1,...,x_n}(z_1, ..., z_n), is independent of absolute location, depending only on the
vector separations among x_1, ..., x_n (Table 10.1). Obviously, strong stationarity implies
second-order stationarity. In the geotechnical literature, stationarity is sometimes referred
to loosely as statistical homogeneity. If the autocovariance function depends only on the
absolute separation distance of x_i and x_j, not on vector separation (i.e. direction), the random
field is said to be isotropic.
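Second-order stationarity has a concrete matrix signature: on a regular 1-D grid, a covariance matrix built from C_z(δ) = σ² exp(−|δ|/δ₀) is constant along its diagonals (Toeplitz), because covariance depends only on separation. A minimal sketch; the values of σ² and δ₀ are illustrative, not from the text:

```python
import numpy as np

# Exponential autocovariance: C(d) = sigma^2 * exp(-|d|/d0)  (illustrative parameters)
sigma2, d0 = 1.0, 2.0
x = np.arange(0.0, 10.0, 1.0)          # regular 1-D grid of locations

D = np.abs(x[:, None] - x[None, :])    # pairwise separation distances
C = sigma2 * np.exp(-D / d0)           # covariance matrix of z at the grid points

# Second-order stationarity: Cov[z(x_i), z(x_j)] depends only on x_i - x_j,
# so the matrix is constant along each diagonal (Toeplitz structure).
for k in range(len(x)):
    diag = np.diag(C, k)
    assert np.allclose(diag, diag[0])

print(C[0, 1], C[3, 4])  # same separation -> same covariance
```

Any pair of locations one grid step apart carries the same covariance, which is exactly what Equation (10.5) asserts.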

10.1.2 Ergodicity
A final, but important and often poorly understood property is ergodicity. Ergodicity is
a concept originating in the study of time series, in which one observes individual time
series of data, and wishes to infer properties of an ensemble of all possible time series.
The meaning and practical importance of ergodicity to the inference of unique realizations
of spatial random fields is less well defined and is much debated in the literature.
Simply, but inelegantly, stated, ergodicity means that the probabilistic properties of a
random process (field) can be completely estimated from observing one realization of that

Table 10.1 Summary properties of random fields

Property                  Meaning
Homogeneous, stationary   Joint probability distribution functions are invariant to translation; the joint pdf depends on relative, not absolute, locations.
Isotropic                 Joint probability distribution functions are invariant to rotation.
Ergodic                   All information on the joint pdf can be obtained from a single realization of the random field.
Second-order stationary   E[z(x)] = μ, for all x ∈ S; Cov[z(x_1), z(x_2)] = C_z(x_1 − x_2), for all x_1, x_2 ∈ S

process. For example, the stochastic time series z(t) = v + ε(t), in which v is a discrete
random variable and ε(t) is an autocorrelated random process of time t, is non-ergodic.
In one realization of the process there is but one value of v, and thus the probability
distribution of v cannot be estimated. One would need to observe many realizations
of z(t) to have sufficiently many observations of v to estimate F_v(v). Another non-ergodic
process of more relevance to the spatial processes in geotechnical engineering is
z(x) = μx + ε(x), in which the mean of z(x) varies linearly with location x. In this case,
z(x) is non-stationary; Var[z(x)] increases without limit as the window within which
z(x) is observed increases, and the mean m_z of the sample of z(x) is a function of the
location of the window.
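The non-ergodicity of z(t) = v + ε(t) is easy to see in simulation: within one realization the sample mean converges to that realization's v, not to E[v], no matter how long the record. A sketch with illustrative distributions (v standard normal, ε small-amplitude white noise):

```python
import numpy as np

rng = np.random.default_rng(0)

def one_realization(n):
    """One record of z(t) = v + eps(t): v is drawn once and fixed within the record."""
    v = rng.normal(0.0, 1.0)              # random level, one value per realization
    eps = rng.normal(0.0, 0.1, size=n)    # small-amplitude noise along the record
    return v, (v + eps).mean()            # true level and the record's sample mean

# Within each realization the sample mean locks onto that realization's v ...
levels, means = zip(*(one_realization(10_000) for _ in range(2_000)))
levels, means = np.array(levels), np.array(means)
assert np.max(np.abs(means - levels)) < 0.05   # each mean is near its own v

# ... so across realizations the sample means scatter with Var[v] close to 1,
# and no single record, however long, can reveal the distribution of v.
print(means.var())   # close to 1.0, not to 0
```

Lengthening the record shrinks the noise term but never the scatter of v itself, which is why the distribution F_v cannot be estimated from one realization.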
It is important to note that ergodicity of a stochastic process (random field) implies
strong stationarity. Thus, if an assumed second-order stationary field is also assumed
to be ergodic, the latter assumption dominates, and the second-order restriction can be
relaxed (Cressie, 1991).
The meaning of ergodicity for spatial fields of the sort encountered in geotechnical
engineering is less clear, and has not been widely discussed in the literature.¹ An assumption
weaker than full ergodicity, which nonetheless should apply for spatial fields, is
that the observed sample mean m_z and sample autocovariance function Ĉ_z(δ) converge
in mean squared error to the respective random field mean and autocovariance function
as the volume of space within which they are sampled increases (Cressie, 1991; Christakos,
1992). This means that, as the volume of space increases, E[(m_z − μ)²] → 0 and
E[(Ĉ_z(δ) − C_z(δ))²] → 0. In Adler (1981), the notion of a continuous process in space,
observed continuously within intervals and then compared to the limiting case of the
interval tending to infinity, is introduced.
When the joint probability distribution F_{x_1,...,x_n}(z_1, ..., z_n) is multivariate Normal
(Gaussian), the process z(x) is said to be a Gaussian random field. A sufficient
condition for ergodicity of a Gaussian random field is that lim_{|δ|→∞} C_z(δ) = 0. This
can be checked empirically by inspecting the sample moments of the autocovariance
function to ensure they converge to 0. Cressie (1991) notes philosophical limitations
of this procedure. Essentially all the analytical autocovariance functions common in
the geotechnical literature obey this condition, and few practitioners - or even most
theoreticians - are concerned about verifying ergodicity. Christakos (1992) suggests that,
in practical situations, it is difficult or impossible to verify ergodicity for spatial fields
and, thus, ergodicity must be considered only a falsifiable hypothesis, in the sense that it
is judged by the successful application of the random field model.

10.1.3 Nonstationarity
A random field that does not meet the conditions of Section 10.1.1 is said to be non-stationary.
Loosely speaking, a non-stationary field is statistically heterogeneous. It can
be heterogeneous in a number of ways. In the simplest case, the mean may be a function
of location, for example, if there is a spatial trend that has not been removed. In a more
complex case, the variance or autocovariance function may vary in space. Depending
on the way in which the random field is non-stationary, sometimes a transformation of

¹ Cressie cites Adler (1981) and Rosenblatt (1985) with respect to ergodicity of spatial fields.

variables can convert a non-stationary field to a stationary or nearly stationary field. For
example, if the mean varies with location, perhaps a trend can be removed.
In the field of geostatistics, a weaker assumption is made on stationarity than that
described in Section 10.1.1. Geostatisticians usually assume only that increments of a
spatial process are stationary (i.e. differences z_1 − z_2) and then operate on the probabilistic
properties of those increments. This leads to the use of the variogram (Chapter 9)
rather than the autocovariance function as a vehicle for organizing empirical data. The
variogram describes the expected value of squared differences of the random field, whereas
the autocovariance describes the expected values of products. Stationarity of the latter
implies stationarity of the former, but not the reverse.
Like most things in the natural sciences, stationarity is an assumption of the model,
and may only be approximately true in the world. Also, stationarity usually depends upon
scale. Within a small region, such as a construction site, soil properties may behave as if
drawn from a stationary process; whereas, the same properties over a larger region may
not be so well behaved.
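For a second-order stationary field the two descriptions are linked by γ(δ) = C_z(0) − C_z(δ), where γ here denotes the semivariogram. A Monte Carlo sketch with an illustrative exponential autocovariance:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stationary Gaussian field at two points separated by lag h,
# exponential autocovariance C(d) = exp(-|d|) (sigma^2 = 1, d0 = 1, illustrative)
h = 1.0
C0, Ch = 1.0, np.exp(-h)
cov = np.array([[C0, Ch],
                [Ch, C0]])

z = rng.multivariate_normal([0.0, 0.0], cov, size=200_000)

# Empirical semivariogram at lag h: 0.5 * E[(z(x) - z(x+h))^2]
gamma_hat = 0.5 * np.mean((z[:, 0] - z[:, 1]) ** 2)

# For a second-order stationary field, gamma(h) = C(0) - C(h)
print(gamma_hat, C0 - Ch)   # both near 1 - exp(-1)
```

The converse direction fails in general: the increments can be stationary (so the variogram exists) even when C_z itself is not defined, which is why geostatistics prefers the variogram.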

10.2 Mathematical Properties of Autocovariance Functions

By definition, the autocovariance and autocorrelation functions are symmetric, meaning

C_z(δ) = C_z(−δ)  and  R_z(δ) = R_z(−δ)   (10.6)

and they are bounded, meaning

C_z(δ) ≤ C_z(0) = σ²  and  |R_z(δ)| ≤ 1   (10.7)

In the limit, as the absolute distance |δ| becomes large:

(10.8)

10.2.1 Valid autocovariance functions


In general, for C_z(δ) to be a permissible autocovariance function, it is necessary and
sufficient that it be a continuous, non-negative-definite mathematical expression, that is,

Σ_{i=1}^{m} Σ_{j=1}^{m} k_i k_j C_z(x_i − x_j) ≥ 0   (10.9)

for all integers m, scalar coefficients k_1, ..., k_m, and vectors of separation δ = x_i − x_j. This
condition follows from the requirement that variances of linear combinations of the z(x_i), of
the form

Σ_{i=1}^{m} k_i z(x_i)   (10.10)

be non-negative (Cressie 1991). The argument for this condition can be based on spectral
representations of C_z(δ), following from Bochner (1955), but is beyond the present scope
(see also Yaglom 1962). Christakos (1992) discusses the mathematical implications of
this condition on selecting permissible forms for the autocovariance. Suffice it to say that
analytical models of autocovariance common in the geotechnical literature usually satisfy
the condition.
Autocovariance functions valid in R^d, the space of dimension d, are valid in spaces of
lower dimension, but the reverse is not necessarily true. That is, a valid autocovariance
function in 1-D is not necessarily valid in 2-D or 3-D. Christakos (1992) gives the example
of the linearly declining autocovariance:

C_z(δ) = σ²(1 − |δ|/δ₀)  for |δ| ≤ δ₀;  C_z(δ) = 0 otherwise   (10.11)

which is valid in 1-D, but not in higher dimensions. This can be demonstrated by
considering a 2 × 2 square grid of spacing δ₀/√2 and constants a_{i,1} = a_{i,2} = a_{j,1} = 1
and a_{j,2} = −1.1, which yields a negative value of the variance.
Linear sums of valid autocovariance functions are also valid. This means that if C_{z1}(δ)
and C_{z2}(δ) are valid, then the sum C_{z1}(δ) + C_{z2}(δ) is also a valid autocovariance function.
Similarly, if C_z(δ) is valid, then the product with a non-negative scalar, aC_z(δ), is also valid.
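The non-negative-definiteness condition can be probed numerically: assemble the covariance matrix of a candidate model at an arbitrary set of points and inspect its smallest eigenvalue, which should not be (meaningfully) negative for a valid model. A sketch using exponential models, which are valid in 1-D, and their sum; the scales are illustrative:

```python
import numpy as np

def cov_matrix(points, cov_fn):
    """Covariance matrix C_ij = C(|x_i - x_j|) at a set of 1-D points."""
    d = np.abs(points[:, None] - points[None, :])
    return cov_fn(d)

x = np.linspace(0.0, 5.0, 40)

exp_model  = lambda d: np.exp(-d / 1.0)          # exponential, delta_0 = 1
exp_model2 = lambda d: 0.5 * np.exp(-d / 3.0)    # scaled exponential, delta_0 = 3

# Valid models give (numerically) non-negative eigenvalues ...
for fn in (exp_model, exp_model2):
    assert np.linalg.eigvalsh(cov_matrix(x, fn)).min() > -1e-10

# ... and so does their sum, since sums of valid autocovariances are valid.
combo = lambda d: exp_model(d) + exp_model2(d)
print(np.linalg.eigvalsh(cov_matrix(x, combo)).min())  # >= 0 up to round-off
```

A negative eigenvalue at any point configuration is a certificate that a candidate function violates Equation (10.9) and is not a permissible autocovariance.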

10.2.2 Separable autocovariance functions


An autocovariance function in d-dimensional space R^d is separable if

C_z(δ) = Π_{i=1}^{d} C_i(δ_i)   (10.12)

in which δ is the d-dimensional vector of orthogonal separation distances (δ_1, ..., δ_d),
and C_i(δ_i) is the one-dimensional autocovariance function in direction i. For example,
the autocovariance function in R^d,

C_z(δ) = σ² exp{−(|δ_1|/δ_{0,1} + ... + |δ_d|/δ_{0,d})}   (10.13)

is separable into its d one-dimensional components.


The function is partially separable if

C_z(δ) = C_z(δ_i) C_z(δ_{j≠i})   (10.14)

in which C_z(δ_{j≠i}) is a (d − 1)-dimensional autocovariance function, implying that the
function can be expressed as a product of autocovariance functions of lower-dimension
fields. The importance of partial separability to geotechnical applications, as noted
by Vanmarcke (1983), is the 3-D case of separating autocorrelation in the horizontal plane
from that with depth:

C_z(δ_1, δ_2, δ_3) = C_z(δ_1, δ_2) C_z(δ_3)   (10.15)

in which δ_1, δ_2 are horizontal distances, and δ_3 is depth.
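Partial separability as in Equation (10.15) can be sketched directly: a 3-D model built as the product of a 2-D horizontal component and a 1-D vertical component. The squared-exponential forms and the length scales here are illustrative (a long horizontal scale and a short vertical one, as is typical of sedimentary deposits):

```python
import math

def c_horizontal(d1, d2, dh=10.0):
    """2-D horizontal autocovariance component (squared exponential, scale dh)."""
    return math.exp(-(d1**2 + d2**2) / dh**2)

def c_vertical(d3, dv=1.0):
    """1-D vertical autocovariance component (squared exponential, scale dv)."""
    return math.exp(-(d3**2) / dv**2)

def c_3d(d1, d2, d3, sigma2=1.0):
    """Partially separable 3-D model: C(d1, d2, d3) = sigma^2 * Ch(d1, d2) * Cv(d3)."""
    return sigma2 * c_horizontal(d1, d2) * c_vertical(d3)

# Anisotropy falls out naturally: 5 m horizontally decorrelates far less
# than 5 m vertically.
print(c_3d(5.0, 0.0, 0.0))   # horizontal separation: correlation still high
print(c_3d(0.0, 0.0, 5.0))   # same distance vertically: essentially zero
```

Because the model factors, horizontal and vertical correlation structures can be fitted and manipulated independently, which is the practical appeal of Equation (10.15).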

10.3 Multivariate (vector) random fields

Thus far, we have considered random fields of the scalar variable z(x), in which x is
a vector of location coordinates. By direct analogy, we can define a random field of
the vector variable z(x), in which z is an ordered n-tuple of variables. For example,
water content in a clay stratum can be modeled as a scalar random field. The joint pair
of variables water content and shear strength might be modeled as a vector (bi-variate)
random field. Note that in this case each of the variables, water content and shear strength,
would have its own autocorrelated properties, but the pair of variables would also be
cross-correlated with one another, as a function of spatial separation.
In analogy to Equation (10.1), a vector random field is defined by the joint probability
distribution

F_{x_1,...,x_n}(z_1, ..., z_n) = P{z(x_1) ≤ z_1, ..., z(x_n) ≤ z_n}   (10.16)

in which z_i is a vector of properties, and x is spatial location, x ∈ S. Let

E[z(x)] = μ(x)   (10.17)

for all x ∈ S, be the mean or trend of z(x), presumed to exist for all x; and let

Var[z(x)] = Σ_z(x)   (10.18)

be the covariance matrix of the components of z at x, such that

Σ_z(x) = E[(z(x) − μ(x))(z(x) − μ(x))ᵀ]   (10.19)

is the matrix comprising the variances and covariances of the z_i, z_j, also assumed to exist
for all x. The cross-covariances of z(x_1), ..., z(x_n) as functions of space are defined as

C_z(x_i, x_j) = E[(z(x_i) − μ(x_i))(z(x_j) − μ(x_j))ᵀ]   (10.20)

in which the (h, k)th element, Cov[z_h(x_i), z_k(x_j)], is the covariance of the h-th and k-th
components of z at locations x_i and x_j. Again, by analogy, the spatial cross-correlation
function is

R_{hk}(x_i, x_j) = Cov[z_h(x_i), z_k(x_j)] / (σ_{z_h} σ_{z_k})   (10.21)

The matrix C_z(δ) is non-negative definite and symmetric, as a covariance matrix must be.

10.4 Gaussian random fields

The Gaussian random field is an important special case because it is widely applicable
due to the Central Limit Theorem (Chapter 2), has mathematically convenient properties,
and is widely used in practice. The probability density function of the Gaussian or
Normal variable is

f(z) = (1/(√(2π) σ)) exp{−(z − μ)²/2σ²}   (10.22)

for −∞ ≤ z ≤ ∞. The mean is E[z] = μ, and the variance is Var[z] = σ². For the multivariate
case of vector z, of dimension n, the corresponding pdf is

f(z) = (2π)^{−n/2} |Σ|^{−1/2} exp{−(1/2)(z − μ)ᵀ Σ^{−1} (z − μ)}   (10.23)

in which μ is the mean vector, and Σ the covariance matrix,

Σ = E[(z − μ)(z − μ)ᵀ]   (10.24)
Gaussian random fields have the following convenient properties (Adler 1981): (1) they
are completely characterized by the first- and second-order moments: the mean and autocovariance
function for the univariate case, and the mean vector and autocovariance matrix
(function) for the multivariate case; (2) any subset of variables of the vector is also jointly
Gaussian; (3) the conditional probability distributions of any two variables or vectors are
also Gaussian; (4) if two variables, z_1 and z_2, are bivariate Gaussian and
their covariance Cov[z_1, z_2] is zero, then the variables are independent.
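Property (3) can be illustrated with the standard bivariate-normal conditioning formulas: z_1 | z_2 is Gaussian with mean μ_1 + (C_12/C_22)(z_2 − μ_2) and variance C_11 − C_12²/C_22. A Monte Carlo sketch; the means and covariance below are illustrative values, not from the text:

```python
import numpy as np

rng = np.random.default_rng(2)

mu = np.array([1.0, 2.0])
cov = np.array([[1.0, 0.6],
                [0.6, 2.0]])

z = rng.multivariate_normal(mu, cov, size=400_000)

# Condition on z2 falling in a narrow band around z2* = 3.0
z2_star = 3.0
band = np.abs(z[:, 1] - z2_star) < 0.05
z1_given = z[band, 0]

# Closed-form conditional moments of z1 | z2 = z2*
m_cond = mu[0] + cov[0, 1] / cov[1, 1] * (z2_star - mu[1])
v_cond = cov[0, 0] - cov[0, 1] ** 2 / cov[1, 1]

print(z1_given.mean(), m_cond)   # empirical vs closed-form conditional mean
print(z1_given.var(), v_cond)    # empirical vs closed-form conditional variance
```

These conditioning formulas are what make Gaussian fields so convenient for updating predictions from observations (and underlie kriging-type estimators).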

10.5 Functions of random fields

Thus far, we have considered the properties of random fields themselves. In this section,
we consider the extension to properties of functions of random fields.

10.5.1 Stochastic integration and averaging


Spatial averaging of random fields is among the most important considerations for geotechnical
engineering. Limiting equilibrium failures of slopes depend upon the average strength
across the failure surface. Settlements beneath foundations depend on the average compressibility
of subsurface soils. Indeed, many modes of geotechnical performance of
interest to the engineer involve spatial averages - or differences among spatial averages
- of soil and rock properties. Spatial averages also play a significant role in mining
geostatistics, where average ore grades within blocks of rock have important implications
for planning. As a result, there is a rich literature on the subject of averages of random
fields, only a small part of which can be reviewed here.
Consider the one-dimensional case of a continuous, scalar stochastic process (1-D random
field), z(x), in which x is location, and z(x) is a stochastic variable with mean
μ_z, assumed to be constant, and autocovariance function C_z(r), in which r is the separation
distance, r = x_1 − x_2. The spatial average or mean of the process within the interval
[0, X] is

M_X{z(x)} = (1/X) ∫₀^X z(x) dx   (10.25)

The integral is defined in the common way, as a limiting sum of z(x) values within
infinitesimal intervals of x, as the number of intervals increases. We assume that z(x)
converges in a mean square sense, implying the existence of the first two moments of z(x).
The weaker assumption of convergence in probability, which does not imply existence
of the moments, could be made if necessary (see Parzen 1964, 1992, for more detailed
discussion).
If we think of M_X{z(x)} as a sample observation within one interval of the process z(x),
then over the set of possible intervals that we might observe, M_X{z(x)} becomes a random
variable with mean, variance, and possibly other moments. Consider first the integral of
z(x) within intervals of length X. Parzen (1964) shows that the first two moments of
∫₀^X z(x) dx are

E[∫₀^X z(x) dx] = ∫₀^X μ(x) dx = μX   (10.26)

Var[∫₀^X z(x) dx] = ∫₀^X ∫₀^X C_z(x_i − x_j) dx_i dx_j = 2 ∫₀^X (X − r) C_z(r) dr   (10.27)

and that the autocovariance function of the integral ∫₀^X z(x) dx, as the interval [0, X] is
allowed to translate along dimension x, is (Vanmarcke 1983)

C_{∫z}(r) = ∫₀^X ∫₀^X C_z(r + x_i − x_j) dx_i dx_j   (10.28)

The corresponding moments of the spatial mean M_X{z(x)} are

E[M_X{z(x)}] = μ   (10.29)

Var[M_X{z(x)}] = (2/X²) ∫₀^X (X − r) C_z(r) dr   (10.30)

C_{M_X}(r) = (1/X²) ∫₀^X ∫₀^X C_z(r + x_i − x_j) dx_i dx_j   (10.31)
The effect of spatial averaging is to smooth the process. The variance of the averaged
process is smaller than that of the original process z(x), and the autocorrelation of the
averaged process is wider. Indeed, averaging is sometimes referred to as smoothing (Gelb
et al. 1974).
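Equation (10.30) can be checked against a direct discretization of the averaging window: the variance of the mean of z at n closely spaced points is the average of all entries of their covariance matrix, which should approach the closed-form result. A sketch for an exponential autocovariance with σ² = 1 and illustrative X and δ₀:

```python
import numpy as np

X, d0 = 4.0, 1.0
n = 2000
x = np.linspace(0.0, X, n)

# Covariance matrix of z at the discretization points: C(r) = exp(-|r|/d0)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / d0)

# Variance of the discrete average (1/n) * sum z(x_i) is the mean of all entries
var_avg_discrete = C.mean()

# Closed-form Var[M_X] = (2/X^2) * integral_0^X (X - r) C(r) dr, which for the
# exponential model evaluates to (2 d0^2 / X^2) * (X/d0 - 1 + exp(-X/d0))
var_avg_exact = 2 * d0**2 / X**2 * (X / d0 - 1 + np.exp(-X / d0))

print(var_avg_discrete, var_avg_exact)  # close; both well below Var[z] = 1
```

The averaged variance is far below the point variance of 1, which is the smoothing effect described above.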
FUNCTIONS OF RANDOM FIELDS 251

The reduction in variance from z(x) to the averaged process M_X{z(x)} can be represented
by a variance reduction function, γ(X):

γ(X) = Var[M_X{z(x)}] / Var[z(x)]   (10.32)

The variance reduction function is 1.0 for X = 0 and decays to zero as X becomes large.
Using Equation (10.30), γ(X) can be calculated from the autocovariance function of
z(x) as

γ(X) = (2/X²) ∫₀^X (X − r) R_z(r) dr   (10.33)

in which R_z(r) is the autocorrelation function of z(x). Note that the square root of γ(X)
gives the corresponding reduction of the standard deviation of z(x). Table 10.2 gives 1-D
variance reduction functions for common autocovariance functions. It is interesting to
note that each of these functions is asymptotically proportional to 1/X. Based on this
observation, Vanmarcke (1983) proposed a scale of fluctuation, θ, such that

θ = lim_{X→∞} X γ(X)   (10.34)

or γ(X) = θ/X as X → ∞; that is, θ/X is the asymptote of the variance reduction
function as the averaging window expands. The function γ(X) converges rapidly to this
asymptote as X increases. For θ to exist, it is necessary that R_z(r) → 0 as r → ∞, that
is, that the autocorrelation function decrease faster than 1/r. In this case, θ can be found
from the integral of the autocorrelation function (the moment of R_z(r) about the origin):

θ = 2 ∫₀^∞ R_z(r) dr   (10.35)
Table 10.2 Variance reduction functions for common 1-D autocovariances (after Vanmarcke, 1983)

Model                Autocorrelation                  Variance reduction function                                Scale of fluctuation, θ
White noise          R_z(δ) = 1 if δ = 0,             γ(X) = 1 if X = 0,                                         0
                     0 otherwise                      0 otherwise
Exponential          R_z(δ) = exp(−|δ|/δ₀)            γ(X) = 2(δ₀/X)² [X/δ₀ − 1 + exp(−X/δ₀)]                    2δ₀
Squared exponential  R_z(δ) = exp[−(δ/δ₀)²]           γ(X) = (δ₀/X)² [√π (X/δ₀) Erf(X/δ₀) + exp(−(X/δ₀)²) − 1],  √π δ₀
(Gaussian)                                            in which Erf(·) is the error function

(It should be noted that this variance reduction function is not to be confused with the
variance reduction schemes described in Chapter 17.)
This concept of summarizing the spatial or temporal scale of autocorrelation in a single
number, typically the first moment of R_z(r), is used by a variety of other workers, and
in many fields. Taylor (1921) in hydrodynamics called it the diffusion constant (Papoulis
and Pillai 2002), Christakos (1992) in geoscience calls θ/2 the correlation radius, and
Gelhar (1993) in groundwater hydrology calls θ the integral scale.
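For the exponential autocorrelation model R_z(δ) = exp(−|δ|/δ₀), the scale of fluctuation is θ = 2δ₀, and the limit in Equation (10.34) can be checked directly. A short sketch; δ₀ = 1.5 is an illustrative value:

```python
import math

d0 = 1.5

def gamma(X, d0):
    """Variance reduction function for the exponential model (see Table 10.2)."""
    return 2.0 * (d0 / X) ** 2 * (X / d0 - 1.0 + math.exp(-X / d0))

theta = 2.0 * d0   # scale of fluctuation for the exponential model

for X in (1.0, 10.0, 100.0, 1000.0):
    print(X, X * gamma(X, d0), theta)   # X * gamma(X) approaches theta as X grows
```

The product X·γ(X) closes in on θ quickly, illustrating how rapidly the asymptote θ/X becomes an adequate approximation of the variance reduction function.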
In two dimensions, the expressions equivalent to Equations (10.25) and (10.27) for the
mean and variance of the planar integral ∫₀^{X_1} ∫₀^{X_2} z(x) dx_1 dx_2 are

E[∫₀^{X_1} ∫₀^{X_2} z(x) dx_1 dx_2] = μ X_1 X_2   (10.36)

Var[∫₀^{X_1} ∫₀^{X_2} z(x) dx_1 dx_2] = 4 ∫₀^{X_1} ∫₀^{X_2} (X_1 − r_1)(X_2 − r_2) C_z(r_1, r_2) dr_1 dr_2   (10.37)

Papoulis (2002) discusses averaging in higher dimensions, as do Elishakoff (1999) and
Vanmarcke (1983).

10.5.2 Stochastic differentiation


The issue of continuity and differentiability of a random field depends on the convergence
of sequences of random variables {z(x_a), z(x_b)}, in which x_a, x_b are two locations, with
(vector) separation r = x_a − x_b. The random field is said to be continuous in mean
square at x_a if, for every sequence {z(x_a), z(x_b)}, E[(z(x_a) − z(x_b))²] → 0 as |r| → 0.
The random field is said to be continuous in mean square throughout if it is continuous
in mean square at every x_a. Given this condition, the random field z(x) is mean square
differentiable, with partial derivative

∂z(x)/∂x_i = lim_{h→0} [z(x + h δ_i) − z(x)] / h   (10.38)

in which the vector δ_i has all components zero except the i-th, which is unity.
While stronger, or at least different, convergence properties could be invoked, mean
square convergence is often the most natural form in practice, because we usually wish
to use a second-moment representation of the autocovariance function as the vehicle for
determining differentiability.² A random field is mean square continuous if and only if
its autocovariance function, C_z(r), is continuous at |r| = 0. For this to be true, the first
derivatives of the autocovariance function at |r| = 0 must vanish:

∂C_z(r)/∂x_i = 0,  for all i   (10.39)

² Other definitions of convergence sometimes applied to continuity and differentiability include convergence
with probability one, convergence in probability, and weak or in-distribution convergence. For further discussion
and definitions of these convergence criteria, see Adler (1981).

If the second derivative of the autocovariance function exists and is finite at |r| = 0, then
the field is mean square differentiable, and the autocovariance function of the derivative
field is

C_{∂z/∂x_i}(r) = −∂²C_z(r)/∂x_i²   (10.40)

and the variance of the derivative field can be found by evaluating the autocovariance
C_{∂z/∂x_i}(r) at |r| = 0. Similarly, the autocovariance of the second derivative field
∂²z(x)/∂x_i∂x_j is

C_{∂²z/∂x_i∂x_j}(r) = ∂⁴C_z(r)/∂x_i²∂x_j²   (10.41)

The cross-covariance function of the derivatives ∂z(x)/∂x_i and ∂z(x)/∂x_j in separate
directions is

C_{∂z/∂x_i, ∂z/∂x_j}(r) = −∂²C_z(r)/∂x_i∂x_j   (10.42)

Importantly, for the case of homogeneous random fields, the field itself, z(x), and its
derivative field, ∂z(x)/∂x_i, are uncorrelated (Vanmarcke 1983).
So, the behavior of the autocovariance function in the neighborhood of the origin is
the determining factor for the mean-square local properties of the field, such as continuity
and differentiability (Cramér and Leadbetter 1967). Unfortunately, the properties of the
derivative fields are sensitive to this behavior of C_z(r) near the origin, which in turn is
sensitive to the choice of autocovariance model. Empirical verification of the behavior of
C_z(r) near the origin is exceptionally difficult.
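The sensitivity to origin behavior can be sketched with finite differences: the curvature estimate 2[R(0) − R(h)]/h² settles at −R″(0) for a squared exponential model, whose field is mean square differentiable, but diverges like 2/h for the exponential model, whose autocorrelation has a kink at the origin (δ₀ = 1 in both models here, an illustrative choice):

```python
import math

R_exp   = lambda d: math.exp(-abs(d))    # exponential model: kink at the origin
R_gauss = lambda d: math.exp(-d * d)     # squared exponential: smooth at the origin

def curvature(R, h):
    """Finite-difference estimate of -R''(0): 2 * [R(0) - R(h)] / h^2."""
    return 2.0 * (R(0.0) - R(h)) / h**2

for h in (1e-1, 1e-2, 1e-3):
    # Squared exponential: estimate settles near -R''(0) = 2 -> differentiable field.
    # Exponential: estimate grows like 2/h -> no finite second derivative at 0.
    print(h, curvature(R_gauss, h), curvature(R_exp, h))
```

Two models that look almost identical at separations beyond a fraction of δ₀ thus imply radically different derivative fields, which is why the choice of autocovariance model matters so much here.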

10.5.3 Linear functions of random fields


Assume that the random field z(x) is transformed by a deterministic function g(·),
such that

y(x) = g[z(x)]   (10.43)

Here g[z(x_0)] is a function of z alone, that is, not of x_0, and not of the value of z(x) at
any x other than x_0. Also, we assume that the transformation does not depend on the
value of x; that is, the transformation is space- or time-invariant, y(x + δ) = g[z(x + δ)].
Thus, the random variable y(x) is a deterministic transformation of the random
variable z(x), and its probability distribution can be obtained from the derived distribution
methods of Chapter 3. Similarly, the joint distribution of the sequence of random variables
{y(x_1), ..., y(x_n)} can be determined from the joint distribution of the sequence of random
variables {z(x_1), ..., z(x_n)}. The mean of y(x) is then

E[y(x)] = ∫_{−∞}^{∞} g(z) f_z(z(x)) dz   (10.44)

and the autocorrelation function is

R_y(x_1, x_2) = E[y(x_1) y(x_2)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} g(z_1) g(z_2) f_z(z(x_1), z(x_2)) dz_1 dz_2   (10.45)

Papoulis (2002) shows that the process y(x) is (strictly) stationary if z(x) is
(strictly) stationary.

The solution to Equations (10.44) and (10.45) for non-linear transformations may be
difficult, but for linear functions general results are available. Denoting the linear
transformation by L, the mean of y(x) is found by transforming the expected value of z(x)
through the function:

E[y(x)] = L_x{E[z(x)]}   (10.46)

The autocorrelation of y(x) is found in a two-step process:

R_y(x_1, x_2) = L_{x_1}{L_{x_2}{R_z(x_1, x_2)}}   (10.47)

in which L_{x_1} is the transformation applied with respect to the first variable z(x_1) with the
second variable treated as a parameter, and L_{x_2} is the transformation applied with respect
to the second variable z(x_2) with the first variable treated as a parameter.
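For the simplest linear transformation, y(x) = a·z(x) + b, these results reduce to E[y(x)] = a·E[z(x)] + b and C_y(δ) = a²C_z(δ), which a short simulation confirms (the constants and the exponential model are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Stationary Gaussian z at 5 locations, exponential autocovariance, mean 2
x = np.arange(5.0)
Cz = np.exp(-np.abs(x[:, None] - x[None, :]))
z = rng.multivariate_normal(np.full(5, 2.0), Cz, size=300_000)

a, b = 3.0, -1.0
y = a * z + b                     # linear, space-invariant transformation

Cy_hat = np.cov(y, rowvar=False)  # empirical covariance matrix of y

print(y.mean(axis=0))                  # ≈ a*2 + b = 5 at every location
print(Cy_hat[0, 1], a**2 * Cz[0, 1])   # empirical vs a^2 * C_z at lag 1
```

Note that the additive constant b shifts the mean but drops out of the covariance entirely, while the scale factor a enters the covariance squared.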

10.5.4 Excursions (level crossings)


A number of applications arise in geotechnical practice for which one is interested not
in the integrals (averages) or differentials of a stochastic process, but in the probability
that the process exceeds some threshold, either positive or negative. For example, we
might be interested in the probability that a stochastically varying water inflow into a
reservoir exceeds some rate or in the weakest interval or seam in a spatially varying soil
mass. Such problems are said to involve excursions or level-crossings of a stochastic
process. The following discussion follows Cramér and Leadbetter (1967), Parzen (1964),
and Papoulis (2002).
To begin, consider the zero-crossings of a random process: the points x_i at which
z(x_i) = 0. For the general case, this turns out to be a surprisingly difficult problem. Yet,
for the continuous Normal case, a number of statements or approximations are possible.
Consider a process z(x) with zero mean and variance σ². For the interval [x, x + δ], if
the product

z(x) z(x + δ) < 0   (10.48)

then there must be an odd number of zero-crossings within the interval, for if this product
is negative, one of the values must lie above zero and the other beneath. Papoulis (2002)
demonstrates that, if the two (zero-mean) variables z(x) and z(x + δ) are jointly normal
with correlation coefficient

r = E[z(x) z(x + δ)] / σ²   (10.49)

then

P(z(x) z(x + δ) < 0) = 1/2 − arcsin(r)/π = arccos(r)/π

P(z(x) z(x + δ) > 0) = 1/2 + arcsin(r)/π = [π − arccos(r)]/π   (10.50)

The correlation coefficient, of course, can be taken from the autocorrelation function,
R_z(δ); thus

cos[π P(z(x) z(x + δ) < 0)] = R_z(δ)/R_z(0)   (10.51)

and the probability of an even number of zero-crossings is just the complement
of this.
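Equation (10.50) is easy to verify by simulation: draw bivariate normal pairs with correlation r and count sign changes. A sketch with r = 0.5, for which arccos(0.5)/π = 1/3:

```python
import numpy as np

rng = np.random.default_rng(4)

r = 0.5                       # correlation between z(x) and z(x + delta)
cov = np.array([[1.0, r],
                [r, 1.0]])

z = rng.multivariate_normal([0.0, 0.0], cov, size=500_000)

# Fraction of pairs with opposite signs, i.e. an odd number of zero-crossings
p_hat = np.mean(z[:, 0] * z[:, 1] < 0.0)
p_exact = np.arccos(r) / np.pi

print(p_hat, p_exact)   # both near 1/3
```

As r → 1 (small δ) the probability of a sign change shrinks toward zero, which is what makes the small-interval approximations that follow possible.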
For simplicity, let the probability of an odd number of zero-crossings within the interval
[x, x + δ] be p₀(δ). If δ is small, then this probability is itself small compared to 1.0, and
the probability of two or more zero-crossings is negligible. Therefore, the probability
of exactly one zero-crossing, p₁(δ), is approximately p₁(δ) = p₀(δ), and thus, from
Equation (10.51), expanding the cosine in a Taylor series and truncating to two terms,

1 − [π p₁(δ)]²/2 ≈ R_z(δ)/R_z(0)   (10.52)

or

p₁(δ) ≈ (1/π) √(2[1 − R_z(δ)/R_z(0)])   (10.53)

In the case of a regular autocorrelation function, for which the derivative dR_z(δ)/dδ exists
and is zero at the origin, the probability of a zero-crossing is approximately

p₁(δ) ≈ (δ/π) √(−R_z″(0)/R_z(0))   (10.54)

The non-regular case, for which the derivative at the origin is not zero (e.g. R_z(δ) =
exp(−|δ|/δ₀)), is discussed by Parzen (1964). Elishakoff (1999) and Vanmarcke (1983) treat
higher-dimensional results.
The related probability of the process crossing an arbitrary level, z*, can be approximated
by noting that, for small δ and thus r → 1, from Equation (10.50),

(10.55)

For small δ, the correlation coefficient R_z(δ)/R_z(0) is approximately 1, and the variances of
z(x) and z(x + δ) are approximately R_z(0); thus

p₁(δ, z*) ≈ p₁(δ) exp{−z*²/2R_z(0)}   (10.56)

and for the regular case

p₁(δ, z*) ≈ (δ/π) √(−R_z″(0)/R_z(0)) exp{−z*²/2R_z(0)}   (10.57)

Many other results can be found for continuous Normal processes, e.g. the average
density of the number of crossings within an interval, the probability of no crossings (i.e.
drought) within an interval, and so on. A rich literature is available on these and related
results (Adler 1981; Christakos 1992, 2000; Christakos and Hristopulos 1998; Cliff and
Ord 1981; Cramér and Leadbetter 1967; Cressie 1991; Gelb et al. 1974; Papoulis and
Pillai 2002; Parzen 1964; Yaglom 1962).
