
10: THE SPECTRAL REPRESENTATION

FOR WEAKLY STATIONARY PROCESSES

Weak and Strict Stationarity; The Spectral Distribution Function

Let {X (t ) : t ∈ T } be a stochastic process on a probability space (Ω , S , P ). If T is the set of all

real numbers, X (t ) is a continuous time process. If T is the set of integers (or some other set of

equally spaced numbers) then X (t ) is a discrete time process. The process X (t ) is strictly stationary

if its finite-dimensional distributions do not change with time:

P (X (t 1 + τ) ∈ S 1 , . . . , X (tn + τ) ∈ Sn ) = P (X (t 1) ∈ S 1 , . . . , X (tn ) ∈ Sn )

for all real τ, where t 1 , . . . , tn are arbitrary points in T , and S 1 , . . . , Sn are arbitrary events. Here,

the finite-dimensional distributions depend on the relative time separations of the indices, but not on

their absolute time locations. If the X (t ) have finite means and variances, stationarity implies that

E [X (t )] = E [X (0)] = m < ∞ , all t ,

E [X (t + τ) X (t )] = E [X (τ) X (0)] = C (τ) < ∞ , all t .

The process is said to be weakly stationary if the above conditions are met. Every strictly stationary process with finite variance is weakly stationary. Weak stationarity does not in general imply strict stationarity. For Gaussian processes, however, weak stationarity and strict stationarity coincide.

In the rest of this section, we will assume that X (t ) is a continuous time weakly stationary process with

zero mean.

The covariance function C (τ) is nonnegative definite:

n n
Σ Σ a j āk C (t j − tk ) ≥ 0
j =1 k =1

for any sequences {a 1 , . . . , an } , {t 1 , . . . , tn }. By Bochner’s theorem, there exists a nondecreasing

bounded function F (λ) defined on (−∞ , ∞) such that


C (τ) = ∫_{−∞}^{∞} exp(i λτ) F (d λ) .

The function F (λ) is called the spectral distribution function. F (λ) determines a distribution, denoted

(with an abuse of notation) by F (A ) for measurable sets A . F (A ) is called the spectral distribution.
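As a quick numerical illustration (not part of the original notes), nonnegative definiteness can be checked directly for a familiar covariance function; here C (τ) = exp(−|τ|) is used, whose spectral density under Bochner's theorem is 1/(π(1 + λ2)).

```python
import numpy as np

# Check (numerically) that C(tau) = exp(-|tau|) is nonnegative definite,
# as Bochner's theorem guarantees: its F has density 1/(pi*(1 + lambda^2)).
rng = np.random.default_rng(0)

t = np.sort(rng.uniform(0.0, 10.0, size=25))    # arbitrary points t_1, ..., t_n
C = np.exp(-np.abs(t[:, None] - t[None, :]))    # matrix of C(t_j - t_k)

# Eigenvalues of the covariance matrix should all be >= 0 (up to roundoff).
eigvals = np.linalg.eigvalsh(C)
print(eigvals.min() >= -1e-10)                  # True

# The quadratic form sum_{j,k} a_j conj(a_k) C(t_j - t_k) is >= 0 for any
# complex coefficients a_1, ..., a_n.
a = rng.normal(size=25) + 1j * rng.normal(size=25)
q = a @ C @ a.conj()
print(q.real >= 0 and abs(q.imag) < 1e-10)      # True
```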

Hilbert Spaces

A vector space is said to be an inner product space if there exists an inner product (x , y ) which

assigns scalar values to pairs of vectors such that

(x , y ) = (y , x )* ,

(αx + βy , z ) = α(x , z ) + β(y , z ) ,

(x , x ) ≥ 0 ,   (x , x ) = 0 <==> x = 0 ,

where * denotes complex conjugation. The quantity ||x || = (x , x )^{1/2} is called the norm of x . x and y are said to be orthogonal if (x , y ) = 0. The Schwarz inequality states that |(x , y )| ≤ ||x || ||y ||. If x 1 , . . . , xn are orthogonal, have norm 1

and form a basis for the space, they are said to be an orthonormal basis. Any vector z can be

represented as

z = β 1x 1 + . . . + β n x n ,

where βi = (z , xi ). The βi are called the Fourier coefficients of z . The Parseval relation is

||z ||2 = |(z , x 1)|2 + . . . + |(z , xn )|2 .
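A small finite-dimensional sketch of the expansion and the Parseval relation (illustrative, with a randomly generated orthonormal basis of complex n-space):

```python
import numpy as np

# Verify z = beta_1 x_1 + ... + beta_n x_n and the Parseval relation
# ||z||^2 = sum |(z, x_i)|^2 for a random orthonormal basis of C^n,
# with inner product (x, y) = sum_i x_i * conj(y_i).
rng = np.random.default_rng(1)
n = 6

# QR factorization of a random complex matrix gives an orthonormal basis
# (the columns of Q).
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
Q, _ = np.linalg.qr(M)
z = rng.normal(size=n) + 1j * rng.normal(size=n)

beta = Q.conj().T @ z                # Fourier coefficients beta_i = (z, x_i)
recon = Q @ beta                     # z rebuilt from its coefficients
print(np.allclose(recon, z))         # True: the expansion recovers z

parseval = np.sum(np.abs(beta) ** 2)
print(np.isclose(parseval, np.linalg.norm(z) ** 2))   # True: Parseval
```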

A sequence {xn } converges to a vector x in the space (and we write xn → x ) if ||xn − x || → 0 as n → ∞. A sequence for which ||xm − xn || → 0 as m , n → ∞ is called a Cauchy sequence. It can be shown that every convergent sequence is a Cauchy sequence. If, conversely, every Cauchy sequence converges to an element of the space, the space is said to be complete. A complete inner product space is called a Hilbert space. All finite-dimensional inner product spaces are Hilbert spaces. Some specific Hilbert

spaces of importance to time series analysis are discussed in Koopmans, pp. 17-25. A Hilbert space

needed for the spectral representation is L 2(P ): the space of zero-mean complex-valued random variables on (Ω , S , P ) such that E |X |2 < ∞, with inner product (X , Y ) = E [X Y *]. A sequence of random variables Yn is said to converge to Y in mean square if ||Yn − Y ||2 → 0. Since L 2(P ) is complete, a

mean square Cauchy sequence will always have a limit in L 2(P ). The other Hilbert space needed for

the spectral representation is L 2(F ) (where F is the spectral distribution of the weakly stationary stochastic process {Xt }), the space of complex-valued functions g (λ) of a real variable such that

∫ |g (λ)|2 dF (λ) < ∞ ,

with inner product

(g (λ) , h (λ))F = ∫ g (λ) h *(λ) dF (λ) .
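For a purely discrete spectral distribution, the L 2(F ) inner product is just a weighted sum over the atoms. A minimal sketch, with made-up frequencies and masses:

```python
import numpy as np

# For a purely discrete F placing mass p_j at frequency lambda_j, the
# L2(F) inner product reduces to a weighted sum:
#   (g, h)_F = sum_j g(lambda_j) * conj(h(lambda_j)) * p_j.
# (Illustrative numbers, not from the notes.)
lam = np.array([-1.0, -0.3, 0.3, 1.0])        # frequencies lambda_j
p = np.array([0.2, 0.3, 0.3, 0.2])            # masses F({lambda_j}), total 1

def inner_F(g, h):
    return np.sum(g(lam) * np.conj(h(lam)) * p)

t, u = 2.0, 0.5
g = lambda w: np.exp(1j * w * t)
h = lambda w: np.exp(1j * w * u)

# (exp(i*lam*t), exp(i*lam*u))_F equals the integral of exp(i*lam*(t-u)) dF.
lhs = inner_F(g, h)
rhs = np.sum(np.exp(1j * lam * (t - u)) * p)
print(np.isclose(lhs, rhs))                    # True

# ||exp(i*lam*t)||_F^2 is the total spectral mass, here 1.
print(np.isclose(inner_F(g, g).real, 1.0))     # True
```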

The Spectral Representation

The spectral representation for {Xt } has the form


Xt = ∫_{−∞}^{∞} exp(i λt ) dZ (λ) ,

where the increments dZ (λ) are uncorrelated random variables with mean zero and variance dF (λ). We

now derive and interpret this formula.
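Before the derivation, a Monte Carlo sanity check of the formula in the simplest case, where F is concentrated on finitely many frequencies (all numbers below are illustrative):

```python
import numpy as np

# Harmonic process X_t = sum_k exp(i*lam_k*t) Z_k, with uncorrelated
# zero-mean complex amplitudes Z_k satisfying E|Z_k|^2 = F_k. Then
# E[X_t conj(X_u)] should equal C(t-u) = sum_k exp(i*lam_k*(t-u)) * F_k.
rng = np.random.default_rng(2)
lam = np.array([-0.8, 0.2, 1.5])               # frequencies lam_k
F = np.array([0.5, 1.0, 0.25])                 # spectral masses E|Z_k|^2
nrep = 200_000

# Uncorrelated complex Gaussian amplitudes with variance F_k.
Z = (rng.normal(size=(nrep, 3)) + 1j * rng.normal(size=(nrep, 3))) * np.sqrt(F / 2)

t, u = 3.0, 1.0
Xt = (np.exp(1j * lam * t) * Z).sum(axis=1)
Xu = (np.exp(1j * lam * u) * Z).sum(axis=1)

emp = np.mean(Xt * np.conj(Xu))                # sample E[X_t conj(X_u)]
theory = np.sum(np.exp(1j * lam * (t - u)) * F)
print(np.allclose(emp, theory, atol=0.05))     # True (Monte Carlo accuracy)
```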

Let MX denote the subspace of L 2(P ) generated by the linear combinations of the components of

the time series {Xt }. It can be shown that the functions

ht (λ) = exp(i λt ) (−∞ < t < ∞)

generate the space L 2(F ). Thus, we can define a correspondence between MX and L 2(F ) by the rule

Xt ←→ exp(i λt ) ,

and then extending by linearity to include all elements in the two spaces. This correspondence

preserves the inner product (and hence the norm), since


(Xt , Xu )P = E [Xt Xu *] = C (t − u ) = ∫_{−∞}^{∞} exp(i λ(t − u )) dF (λ)

= ∫_{−∞}^{∞} exp(i λt ) exp(i λu )* dF (λ) = (exp(i λt ) , exp(i λu ))F .

It follows that for any elements g (λ) , h (λ) in L 2(F ), if G and H are the random variables in MX such that G ←→ g (λ) and H ←→ h (λ), then

E [G H *] = ∫_{−∞}^{∞} g (λ) h *(λ) dF (λ) .

If A is a Borel set, its characteristic (indicator) function IA (λ) is a member of L 2(F ). Thus, there must

be a random variable Z (A ) in MX such that

Z (A ) ←→ IA (λ) .

Note that Z (A ) is a set function whose values are random variables. Z (A ) is called the (random) spectral measure of the process, and it indeed has the properties of a measure, since Z (∅) = 0 (almost surely) and Z (∪_{i =1}^{∞} Ai ) = Σ_{i =1}^{∞} Z (Ai ) (almost surely) for disjoint measurable sets Ai . Note that E [Z (A )] = 0, and since the process is real we have F (−A ) = F (A ) and hence Z (−A ) = Z (A )*. Furthermore,

E [Z (A ) Z (B )*] = ∫_{−∞}^{∞} IA (λ) IB (λ) dF (λ) = F (A ∩ B ) .

Thus if A and B are disjoint, E [Z (A ) Z (B )*] = 0. Also, F (A ) = E |Z (A )|2.

Any function g (λ) in L 2(F ) can be written as the limit of simple functions of the form Σk ak IAk (λ), where the Ak are disjoint. Now, Σk ak IAk (λ) ←→ Σk ak Z (Ak ), a sum of uncorrelated random variables which converges in mean square (as Σk ak IAk (λ) → g (λ)) to a random variable in MX denoted by

∫_{−∞}^{∞} g (λ) dZ (λ) .

The notation is analogous to the notation used for the Lebesgue integral. The increments dZ (λ) are

uncorrelated random variables with variance dF (λ). That is,

E [dZ (λ) dZ (µ)*] = dF (λ) if µ = λ ,

E [dZ (λ) dZ (µ)*] = 0 if µ ≠ λ .


We have proved that g (λ) ←→ ∫_{−∞}^{∞} g (λ) dZ (λ). As a special case, then,

exp(i λt ) ←→ ∫_{−∞}^{∞} exp(i λt ) dZ (λ) .

But by the original definition of the correspondence, exp(i λt ) ←→ Xt . It must be, then, that

Xt = ∫_{−∞}^{∞} exp(i λt ) dZ (λ) ,

and the spectral representation is proved.

To interpret the spectral representation, approximate the function exp(i λt ) by the simple function Σk exp(i λk t ) IAk (λ), where λk = 2πk /n , n is a large integer, and Ak = (λk −1 , λk ]. The spectral representation says that Xt can be approximated arbitrarily well in mean square by the sum

Σk exp(i λk t ) Z (Ak ) .

This sum is a superposition of complex exponentials with random (complex) amplitudes Z (Ak ) which

are uncorrelated with mean zero and variances E |Z (Ak )|2 = F (Ak ). Note that F (Ak ) is the contribution

to the total variance of Xt from the frequency interval (λk −1 , λk ]. Passing to the limit (n → ∞), we

obtain


Xt = ∫_{−∞}^{∞} exp(i λt ) dZ (λ) ,

so that Xt is a (generalized) sum of complex exponentials with uncorrelated random amplitudes dZ (λ).
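The limiting construction above can be imitated numerically: partition the frequency axis into small bins Ak , draw uncorrelated amplitudes Z (Ak ) with E |Z (Ak )|2 = F (Ak ), and form the finite sum. A sketch assuming the (illustrative) spectral density f (λ) = exp(−|λ|)/2, for which C (τ) = 1/(1 + τ2):

```python
import numpy as np

# Imitate the limiting construction: frequency bins A_k of width dlam,
# uncorrelated amplitudes with E|Z(A_k)|^2 = F(A_k) ~ f(lam_k)*dlam.
# Assumed (illustrative) density: f(lam) = exp(-|lam|)/2, total power 1.
rng = np.random.default_rng(4)
dlam = 0.1
lam = np.arange(-20.0, 20.0, dlam)               # bin frequencies lam_k
F_bin = 0.5 * np.exp(-np.abs(lam)) * dlam        # F(A_k)
nrep = 10_000

shape = (nrep, lam.size)
Z = (rng.normal(size=shape) + 1j * rng.normal(size=shape)) * np.sqrt(F_bin / 2)

t, u = 1.5, 0.5
Xt = (np.exp(1j * lam * t) * Z).sum(axis=1)      # sum_k exp(i lam_k t) Z(A_k)
Xu = (np.exp(1j * lam * u) * Z).sum(axis=1)

emp = np.mean(Xt * np.conj(Xu))                  # sample E[X_t conj(X_u)]
theory = np.sum(np.exp(1j * lam * (t - u)) * F_bin)   # discretized C(t - u)
print(np.allclose(emp, theory, atol=0.05))       # True (Monte Carlo accuracy)
print(np.isclose(np.mean(np.abs(Xt) ** 2), F_bin.sum(), atol=0.05))
```

The discretized covariance `theory` approximates C (1) = 1/(1 + 1) = 0.5, and the sample power of Xt approximates the total spectral mass Σk F (Ak ) ≈ 1.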

The key interpretation of the spectrum is that dF (λ), the increment in the spectrum at λ, is the

contribution to the total power (variance) of Xt due to the component at frequency λ. (A more rigorous

interpretation of this statement follows.) Thus, the spectrum provides a decomposition of the total variance into components which are attributable to specific frequencies.

The Discrete and Continuous Components of the Process

The spectral distribution F may be decomposed (by the Lebesgue decomposition) into discrete and continuous components. If d λ is a small interval containing the frequency λ, we can write

F (d λ) = p (λ) + f (λ)d λ .

p (λ) = F ({λ}) is called the spectral function and represents the discrete power (if any) at λ, and f (λ)

is the spectral density. Thus, the total power in d λ consists of the discrete power at λ plus the continuous power f (λ)d λ. The random spectral measure Z (A ) may also be decomposed into discrete and continuous parts: Z (A ) = Zd (A ) + Zc (A ) with E [Zd (A ) Zc (B )*] = 0 for all A , B : the discrete and continuous

components are uncorrelated. We can write

X (t ) = Xd (t ) + Xc (t ) ,

where


Xd (t ) = ∫_{−∞}^{∞} exp(i λt ) Zd (d λ) ,

and


Xc (t ) = ∫_{−∞}^{∞} exp(i λt ) Zc (d λ)

are the discrete and continuous components of the process. It can be shown that

E |Zc (d λ)|2 = f (λ) d λ ,

so that the contribution to the total power of Xc (t ) from the interval d λ is f (λ)d λ. Strictly speaking,

Xc (t ) does not have a component at λ, and it is only meaningful to talk about the contribution to the

total power from sets (e.g., intervals) of frequencies. The total amount of continuous power or continuous spectral mass in a set of frequencies A is ∫_{A} f (µ) d µ. On the other hand, the discrete component can

be written as

Xd (t ) = Σj exp(i λ j t )Z j ,

where the λ j are the points for which the spectral function p (λ) is positive (there can only be a countable number of such points) and Z j = Z ({λ j }). Thus, Xd (t ) is (exactly) a sum of complex exponentials at frequencies λ j . Here, there really is a component at the exact frequency λ j and the contribution to

the total power from λ j is given by the discrete power p (λ j ). Further, the contribution to the power in

Xd (t ) from the set A is

E |Zd (A )|2 = Σ_{λ j ∈A} p (λ j ) .
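Finally, a simulated example of the decomposition X (t ) = Xd (t ) + Xc (t ): one random harmonic with discrete mass p 0 plus complex white noise with flat continuous spectrum. The total power splits into p 0 plus the noise variance (parameters are illustrative):

```python
import numpy as np

# A process with both components: a random harmonic at lam0 (discrete
# spectral mass p0 = E|Z0|^2) plus complex white noise (continuous, flat
# spectral density). The total power should split as Var X(t) = p0 + sigma2,
# and the two parts should be uncorrelated.
rng = np.random.default_rng(5)
p0, sigma2, lam0 = 2.0, 1.0, 0.7
nrep, t = 200_000, 3.0

Z0 = (rng.normal(size=nrep) + 1j * rng.normal(size=nrep)) * np.sqrt(p0 / 2)
Xd = Z0 * np.exp(1j * lam0 * t)                          # discrete component
Xc = (rng.normal(size=nrep) + 1j * rng.normal(size=nrep)) * np.sqrt(sigma2 / 2)

X = Xd + Xc
print(np.isclose(np.mean(np.abs(X) ** 2), p0 + sigma2, atol=0.05))
print(abs(np.mean(Xd * np.conj(Xc))) < 0.02)             # uncorrelated parts
```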
