

Random Processes and Stochastic Control – Part 1 of 2

Exercises

by

Maciej Niedźwiecki

Gdańsk University of Technology

Department of Automatic Control

Gdańsk, Poland
Exercise 1

Using MATLAB generate and plot the sequence of 100 consecutive realizations of 3 random variables: A with uniform PDF, B with Gaussian PDF and C with Laplace PDF. All random variables should have the same mean:

mA = mB = mC = 2

and the same variance:

σ²A = σ²B = σ²C = 2

Use the same scale on the x and y axes:

x ∈ [1, 100], y ∈ [−3, 7]
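A minimal MATLAB sketch of one possible solution (the uniform support and the Laplace generator follow from matching the required mean and variance; the inverse-CDF construction for the Laplace variable is one choice among several):

N = 100;
m = 2; v = 2;                               % common mean and variance

A = m + sqrt(3*v)*(2*rand(1,N) - 1);        % uniform on [m - sqrt(3v), m + sqrt(3v)], variance v

B = m + sqrt(v)*randn(1,N);                 % Gaussian with mean m and variance v

b = sqrt(v/2);                              % Laplace scale: variance = 2*b^2 = v
u = rand(1,N) - 0.5;
C = m - b*sign(u).*log(1 - 2*abs(u));       % inverse-CDF method for the Laplace distribution

subplot(3,1,1); plot(1:N, A); axis([1 100 -3 7]); title('A (uniform)');
subplot(3,1,2); plot(1:N, B); axis([1 100 -3 7]); title('B (Gaussian)');
subplot(3,1,3); plot(1:N, C); axis([1 100 -3 7]); title('C (Laplace)');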


Exercise 2

Using MATLAB generate and plot the sequence of 100 consecutive realizations of the Cauchy random variable D ∼ C(2, 1).
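A minimal sketch, assuming C(2, 1) denotes a Cauchy distribution with location 2 and scale 1 (inverse-CDF method):

N = 100;
x0 = 2; g = 1;                              % location and scale of C(2, 1)
D = x0 + g*tan(pi*(rand(1,N) - 0.5));       % quantile transform of a uniform sample
plot(1:N, D); xlabel('sample index'); ylabel('D');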
Exercise 3

The mean and variance of a random variable X can be estimated using the following relationships

m̂X(N) = (1/N) ∑_{i=1}^{N} xi

σ̂²X(N) = (1/N) ∑_{i=1}^{N} [xi − m̂X(N)]²

Estimate the mean and the variance of the random variables A, B, C and D defined in Exercises 1 and 2, based on N = 100, 1,000, 10,000 and 100,000 samples.
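A minimal sketch for the Gaussian variable B (the same pattern applies to A, C and D, with the generators from Exercises 1 and 2):

m = 2; v = 2;
for N = [100 1000 10000 100000]
    x = m + sqrt(v)*randn(1,N);             % N samples of B
    m_hat = sum(x)/N;                       % estimated mean
    v_hat = sum((x - m_hat).^2)/N;          % estimated variance (1/N normalization)
    fprintf('N = %6d: mean = %.4f, variance = %.4f\n', N, m_hat, v_hat);
end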
Exercise 4

Using MATLAB plot normalized histograms obtained for 10,000 samples of random variables C and D (defined in Exercises 1 and 2) and compare them with the corresponding PDF plots. In both cases divide the interval [−3, 7] (subset of the observation space) into 20 bins of equal width 0.5.
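A minimal sketch for the Laplace variable C (the Cauchy case is analogous); the histogram is normalized by the number of samples and the bin width so that it approximates the PDF:

Nsamp = 10000;
m = 2; b = 1;                               % Laplace with mean 2 and variance 2*b^2 = 2
u = rand(1,Nsamp) - 0.5;
C = m - b*sign(u).*log(1 - 2*abs(u));

centers = -2.75:0.5:6.75;                   % 20 bin centers, width 0.5, covering [-3, 7]
counts = hist(C, centers);
bar(centers, counts/(Nsamp*0.5), 1); hold on;
e = -3:0.01:7;
plot(e, (1/(2*b))*exp(-abs(e - m)/b), 'r'); % theoretical Laplace PDF
hold off;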
Exercise 5

Write a MATLAB procedure generating samples of a random variable E with the following PDF

a) pE(e) = 0.50 for e ∈ [−1, 0], 0.25 for e ∈ (0, 2], 0 elsewhere

b) pE(e) = −e for e ∈ [−1, 0], 0.5 for e ∈ (0, 1], 0 elsewhere

Evaluate the mean, median and standard deviation of E.


For PDF a):

∫_{−1}^{0} e pE(e) de = 0.5 · [e²/2]_{−1}^{0} = −1/4

∫_{0}^{2} e pE(e) de = 0.25 · [e²/2]_{0}^{2} = 1/2

∫_{−1}^{0} e² pE(e) de = 0.5 · [e³/3]_{−1}^{0} = 1/6

∫_{0}^{2} e² pE(e) de = 0.25 · [e³/3]_{0}^{2} = 2/3

mE = 1/2 − 1/4 = 1/4

σ²E = 1/6 + 2/3 − (1/4)² = 37/48

median: αE = 0
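A minimal sketch of a generator for PDF a), using the inverse-CDF method (the function name is illustrative; case b) can be handled the same way with its own CDF inverse):

function e = rand_piecewise_a(n)
% Samples of E with PDF 0.50 on [-1,0], 0.25 on (0,2], 0 elsewhere.
% CDF: F(e) = 0.5*(e+1) on [-1,0], F(e) = 0.5 + 0.25*e on (0,2].
u = rand(1,n);
e = zeros(1,n);
idx = (u <= 0.5);
e(idx)  = 2*u(idx) - 1;                     % invert F on [-1, 0]
e(~idx) = 4*u(~idx) - 2;                    % invert F on (0, 2]
end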
Exercise 6

Denote by E1, . . . , E16 the sequence of 16 independent random variables with PDF specified in Exercise 5. Let

F = (1/4) ∑_{i=1}^{16} (Ei − mE)/σE

Using MATLAB plot the normalized histogram obtained for 10,000 samples of F. Compare it with an appropriately scaled plot of a standard Gaussian PDF N(0, 1). Divide the interval [−5, 5] (subset of the observation space) into 20 bins of equal width 0.5.
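A minimal sketch, assuming PDF a) from Exercise 5, the generator sketched there, and the values mE = 1/4 and σE = sqrt(37/48) derived above:

Nsamp = 10000;
mE = 1/4; sE = sqrt(37/48);
F = zeros(1,Nsamp);
for k = 1:Nsamp
    E = rand_piecewise_a(16);               % 16 independent samples of E
    F(k) = sum((E - mE)/sE)/4;
end
centers = -4.75:0.5:4.75;                   % 20 bins of width 0.5 on [-5, 5]
counts = hist(F, centers);
bar(centers, counts/(Nsamp*0.5), 1); hold on;
x = -5:0.01:5;
plot(x, exp(-x.^2/2)/sqrt(2*pi), 'r');      % standard Gaussian PDF N(0,1)
hold off;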
Exercise 7

Let X be a random variable that is uniformly distributed over the interval (1, 100). Form a new random variable Y by rounding X to the nearest integer. In MATLAB code, this could be represented by Y = round(X). Finally, form the roundoff error variable according to Z = X − Y. Using MATLAB generate 10,000 realizations of Z and estimate its mean and variance. Based on the available data create a normalized histogram. Find the analytical model that seems to fit your estimated PDF.
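A minimal sketch (the bin layout is illustrative, since the exercise does not prescribe one):

N = 10000;
X = 1 + 99*rand(1,N);                       % uniform on (1, 100)
Z = X - round(X);                           % roundoff error Z = X - Y
m_hat = mean(Z)
v_hat = var(Z, 1)                           % variance with 1/N normalization
centers = -0.475:0.05:0.475;                % bins covering [-0.5, 0.5]
counts = hist(Z, centers);
bar(centers, counts/(N*0.05), 1);           % normalized histogram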
Exercise 8
Which of the following mathematical functions could be the
cumulative distribution function of some random variable?

1. FX(x) = 1/2 + (1/π) tan⁻¹(x)

2. FX(x) = [1 − e^(−x)] 1(x)

3. FX(x) = e^(−x²)

4. FX(x) = x² 1(x)

Note: 1(x) is the unit step function.

Exercise 9
Suppose a random variable has a cumulative distribution function given by FX(x) = [1 − e^(−x)] 1(x). Find the following quantities:

1. P (X > 5)

2. P (X < 5)

3. P (3 < X < 7)

4. P (X > 5|X < 7)


Exercise 10

Suppose a random variable has a cumulative distribution function given by

FX(x) = { 0,            x ≤ 0
          0.04x,        0 < x ≤ 5
          0.6 + 0.04x,  5 < x ≤ 10
          1,            x > 10

Find the following quantities:

1. P (X > 5)

2. P (X < 5)

3. P (3 < X < 7)

4. P (X > 5|X < 7)


Exercise 11

Which of the following are valid probability density functions?

1. pX(x) = e^(−x) 1(x)

2. pX(x) = e^(−|x|)

3. pX(x) = (3/4)(x² − 1) for |x| < 2, 0 otherwise

4. pX(x) = 2x e^(−x²) 1(x)

Exercise 12

The conditional cumulative distribution function of a random variable X, conditioned on the event A having occurred, is

FX|A(x) = P(X ≤ x | A) = P(X ≤ x, A) / P(A)

(assuming that P(A) ≠ 0).

Suppose that X ∼ U(0, 1). Find the mathematical expression for FX|X<1/2(x).
Exercise 13

Consider the Laplace random variable with a PDF given by

pX(x) = (b/2) exp(−b|x|)

Evaluate the mean, the variance, the coefficient of skewness and the coefficient of kurtosis of X.

Note:

∫ xⁿ e^(ax) dx = (1/a) xⁿ e^(ax) − (n/a) ∫ x^(n−1) e^(ax) dx
Exercise 14

Consider two independent normally distributed random variables

X ∼ N(0, 1), Z ∼ N(0, 1)

and let

Y = aX + bZ, a, b ∈ R

Determine the coefficients a and b in such a way that Y ∼ N(0, 1) and ρXY takes any prescribed value from [−1, 1].

Using MATLAB generate 100 realizations of the pair (X, Y) for ρXY = 0, ρXY = 0.5, ρXY = −0.5, ρXY = 0.9, ρXY = −0.9, ρXY = 1 and ρXY = −1.

Display the obtained results using scatter plots.
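A minimal sketch, assuming the standard choice a = ρXY and b = sqrt(1 − ρXY²), which gives unit variance and the prescribed correlation:

N = 100;
rhos = [0 0.5 -0.5 0.9 -0.9 1 -1];
for k = 1:numel(rhos)
    rho = rhos(k);
    X = randn(1,N);
    Z = randn(1,N);
    Y = rho*X + sqrt(1 - rho^2)*Z;          % Y ~ N(0,1) with correlation rho to X
    subplot(2,4,k); plot(X, Y, '.'); axis equal;
    title(sprintf('rho = %.1f', rho));
end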


Facts about matrices

Fact 1

Given a square matrix An×n, an eigenvalue λ and its associated eigenvector v are a pair obeying the relation

Av = λv

The eigenvalues λ1, . . . , λn of A are the roots of the characteristic polynomial of A

det(A − λI)

For a positive definite matrix A it holds that λ1, . . . , λn > 0, i.e., all eigenvalues are positive real.

Fact 2

Let A be a matrix with linearly independent eigenvectors v1, . . . , vn. Then A can be factorized as follows

A = VΛV⁻¹

where Λn×n = diag{λ1, . . . , λn} and Vn×n = [v1| . . . |vn].


Fact 3

Let A = Aᵀ be a real symmetric matrix. Such a matrix has n linearly independent real eigenvectors. Moreover, these eigenvectors can be chosen such that they are orthogonal to each other and have norm one:

wiᵀwj = 0 for all i ≠ j
wiᵀwi = ‖wi‖² = 1 for all i

A real symmetric matrix A can be decomposed as

A = WΛWᵀ

where W is an orthogonal matrix: Wn×n = [w1| . . . |wn], WWᵀ = WᵀW = I.

The eigenvectors wi can be obtained by normalizing the eigenvectors vi:

wi = vi / ‖vi‖, i = 1, . . . , n
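A short MATLAB illustration of Facts 1-3 (the matrix is arbitrary; it is the same one that appears as ΣX in Exercise 15):

A = [5 4; 4 5];                             % real symmetric, positive definite
[W, Lambda] = eig(A);                       % columns of W: orthonormal eigenvectors
disp(diag(Lambda)');                        % eigenvalues (here 1 and 9)
disp(W*Lambda*W' - A);                      % reconstruction error (numerically zero)
disp(W'*W);                                 % identity, so W is orthogonal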
Exercise 15

Consider a vector random variable X = [X1, X2]ᵀ with the mean and covariance matrix given by

mX = [3; 4],   ΣX = [5 4; 4 5]

1. Perform eigendecomposition of the matrix ΣX

ΣX = WΛWᵀ

where Λ is a diagonal matrix made up of eigenvalues of ΣX and W is an orthogonal matrix.

Sketch the covariance ellipse of X. What is the correlation coefficient of X1 and X2?

2. Suppose that Z = [Z1, Z2]ᵀ is made up of two uncorrelated random variables with zero mean and unit variance. Consider the following linear transformation

Y = WΛ^(1/2) Z

where W and Λ are the matrices determined in point 1 above. Show that ΣY = ΣX.

3. Use the linear transformation defined in point 2 to generate a two-dimensional Gaussian random variable X with the mean and covariance matrix specified above. Create a scatter plot based on 200 realizations of X.
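A minimal sketch of point 3 (adding the mean mX after the transformation is assumed, since Y in point 2 is zero-mean):

N = 200;
mX = [3; 4];
SigmaX = [5 4; 4 5];
[W, Lambda] = eig(SigmaX);
Z = randn(2, N);                            % uncorrelated N(0,1) components
X = W*sqrt(Lambda)*Z + repmat(mX, 1, N);    % W*Lambda^(1/2)*Z shifted by the mean
plot(X(1,:), X(2,:), '.'); axis equal;
xlabel('X_1'); ylabel('X_2');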

Exercise 16

Consider a zero-mean vector random variable X with covariance matrix

ΣX = E[XXᵀ] = WΛWᵀ

where Λ is a diagonal matrix made up of eigenvalues of ΣX and W is an orthogonal matrix.

Let

Y = WΛ^(−1/2)WᵀX

Show that

ΣY = I

i.e., Y is the vector random variable made up of uncorrelated components.
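A quick numerical check of this whitening transformation in MATLAB (the covariance values are illustrative):

SigmaX = [5 4; 4 5];
[W, Lambda] = eig(SigmaX);
T = W*diag(1./sqrt(diag(Lambda)))*W';       % T = W*Lambda^(-1/2)*W'
disp(T*SigmaX*T');                          % covariance of Y = T*X: the identity matrix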
Project 1

Record and mix signals obtained from two independent speech sources (30 second long recordings, 2 linear mixtures with different mixing coefficients).

Perform blind source separation using the FastICA algorithm:
1. Center the data to make its mean zero.
2. Whiten the data to obtain uncorrelated mixture signals.
3. Initialize w0 = [w1,0, w2,0]T, ||w0|| = 1 (e.g. randomly)
4. Perform an iteration of a one-source extraction algorithm

ŵi = avg{ z(t) [wi−1ᵀ z(t)]³ } − 3 wi−1

where z(t) = [z1(t), z2(t)]ᵀ denotes the vector of whitened mixture signals and avg(·) denotes time averaging

avg{x(t)} = (1/N) ∑_{t=1}^{N} x(t)

5. Normalize ŵi by dividing it by its norm

wi = ŵi / ‖ŵi‖
6. Determine a unit-norm vector vi orthogonal to wi:

wiᵀvi = 0, ‖vi‖ = 1

7. Display and listen to the current results of source separation

x̂1(t) = wiᵀ z(t), x̂2(t) = viᵀ z(t), t = 1, . . . , N

8. If wiᵀwi−1 is not close enough to 1, go back to step 4. Otherwise stop. (A MATLAB sketch of the whole iteration is given after this project description.)

Literature: A. Hyvärinen, E. Oja, “A fast fixed-point algorithm for independent component analysis”, Neural Computation, vol. 9, pp. 1483-1492, 1997.

A special bonus will be granted for writing a procedure that extracts n > 2 source signals from n mixtures.
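A minimal sketch of steps 3-8 for two mixtures (z is assumed to be a 2 x N matrix of centered and whitened mixture signals; the iteration limit and tolerance are illustrative):

N = size(z, 2);
w = randn(2, 1); w = w/norm(w);             % step 3: random unit-norm initialization
for iter = 1:100
    w_old = w;
    y = w_old'*z;                           % projections w'*z(t), 1 x N
    w = (z*(y.^3)')/N - 3*w_old;            % step 4: one FastICA iteration
    w = w/norm(w);                          % step 5: renormalization
    if abs(w'*w_old) > 1 - 1e-6             % step 8: |w'*w_old| close to 1
        break;
    end
end
v = [-w(2); w(1)];                          % step 6: unit-norm vector orthogonal to w
x1_hat = w'*z;                              % step 7: extracted source estimates
x2_hat = v'*z;

The absolute value in the convergence test guards against the sign indeterminacy of the extracted component.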
Project 2

Create a noisy audio recording (add artificially generated noise to a clean music or speech signal). The first second of the recording should contain noise only. Then denoise the recording using the method of spectral subtraction.
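A minimal magnitude spectral-subtraction sketch (the frame length, hop and the names x, a column vector holding the noisy signal, and fs, its sampling rate, are assumptions; the noisy phase is kept and the frames are recombined by overlap-add):

frame = 512; hop = frame/2;
win = 0.5*(1 - cos(2*pi*(0:frame-1)'/frame));        % Hann window
nNoise = floor((fs - frame)/hop) + 1;                % frames inside the noise-only first second
P_noise = zeros(frame, 1);
for k = 1:nNoise
    P_noise = P_noise + abs(fft(x((k-1)*hop + (1:frame)).*win));
end
P_noise = P_noise/nNoise;                            % average noise magnitude spectrum

y = zeros(size(x));
nFrames = floor((length(x) - frame)/hop) + 1;
for k = 1:nFrames
    idx = (k-1)*hop + (1:frame);
    S = fft(x(idx).*win);
    mag = max(abs(S) - P_noise, 0);                  % subtract noise magnitude, clip at zero
    y(idx) = y(idx) + real(ifft(mag.*exp(1j*angle(S))));   % overlap-add with noisy phase
end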
Exercise 17

Consider a sinusoid with a random frequency

X(t) = cos(2πf t)

where f is a random variable uniformly distributed over the interval [0, f0].

1. Show that the theoretical mean function of X(t) is given by

mX(t) = E[X(t)] = sin(2πf0 t) / (2πf0 t)

2. Estimate the mean function of X(t) using ensemble averaging in the case where f0 = 0.01

m̂X(t) = (1/1000) ∑_{i=1}^{1000} X(t, ξi), t = −300, . . . , 300

3. Plot the obtained estimates and compare them with the theoretical mean function.
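A minimal sketch of points 2 and 3 (each row of X is one realization X(t, ξi)):

f0 = 0.01;
t = -300:300;
M = 1000;                                   % number of realizations
f = f0*rand(M, 1);                          % frequencies uniform on [0, f0]
X = cos(2*pi*f*t);                          % M x length(t) matrix of realizations
m_hat = mean(X, 1);                         % ensemble average
m_theo = sin(2*pi*f0*t)./(2*pi*f0*t);       % theoretical mean function
m_theo(t == 0) = 1;                         % limit value at t = 0
plot(t, m_hat, t, m_theo);
legend('ensemble average', 'theoretical mean');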
Exercise 18

Generate 400 samples of the first-order autoregressive process governed by

Y(t) = 0.99 Y(t − 1) + X(t), t = 1, 2, . . .

where Y(0) = 0 and X(t) denotes Gaussian white noise with mean 0 and variance σ²X = 1.

1. Depict on one plot 100 realizations of Y(t), t ∈ [1, 400]. Starting from what time instant can the process be regarded as wide-sense stationary?

2. Determine analytically the mean function of Y(t) for any initial condition Y(0) = y0.

3. Estimate the steady-state autocorrelation function of Y(t) using time averaging

R̂Y(τ) = (1/100) ∑_{t=201}^{300} Y(t) Y(t + τ), τ = 0, . . . , 100

4. Estimate the steady-state autocorrelation function of Y(t) using ensemble averaging

R̂Y(τ) = (1/100) ∑_{i=1}^{100} Y(t, ξi) Y(t + τ, ξi), t = 201, τ = 0, . . . , 100

5. Plot the obtained estimates and compare them with the theoretical autocorrelation function

RY(τ) = (0.99)^|τ| / (1 − (0.99)²)
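A minimal sketch of points 1, 3, 4 and 5 (each row of Y is one realization; time averaging uses the first realization only):

a = 0.99; T = 400; M = 100;
Y = zeros(M, T);                            % 100 realizations of Y(t), t = 1..400
Y(:,1) = randn(M, 1);                       % Y(0) = 0, so Y(1) = X(1)
for t = 2:T
    Y(:,t) = a*Y(:,t-1) + randn(M, 1);      % AR(1) recursion driven by white noise
end
plot(1:T, Y');                              % point 1: all realizations on one plot

tau = 0:100;
R_time = zeros(size(tau));                  % point 3: time averaging (one realization)
for k = 1:numel(tau)
    R_time(k) = mean(Y(1, 201:300).*Y(1, 201+tau(k):300+tau(k)));
end
R_ens = mean(repmat(Y(:,201), 1, numel(tau)).*Y(:, 201+tau), 1);   % point 4
R_theo = a.^tau/(1 - a^2);                  % point 5: theoretical autocorrelation
figure; plot(tau, R_time, tau, R_ens, tau, R_theo);
legend('time average', 'ensemble average', 'theoretical');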

Exercise 19

Consider a random telegraph signal governed by

X(t) ∈ {−1, 1}

P(X(t + 1) = 1 | X(t) = 1) = 0.99
P(X(t + 1) = −1 | X(t) = 1) = 0.01
P(X(t + 1) = 1 | X(t) = −1) = 0.01
P(X(t + 1) = −1 | X(t) = −1) = 0.99

t = 1, . . . , 200

P(X(0) = 1) = P(X(0) = −1) = 0.5

Estimate and plot RX(τ), τ = 0, . . . , 100. To obtain the estimates use a) time averaging, b) ensemble averaging. Plot and compare the obtained results.
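A minimal sketch (the number of realizations is an assumption; each row of X is one realization, with column 1 corresponding to t = 0):

T = 200; M = 500; p_stay = 0.99;
X = zeros(M, T+1);
X(:,1) = 2*(rand(M,1) < 0.5) - 1;           % P(X(0) = 1) = P(X(0) = -1) = 0.5
for t = 1:T
    stay = rand(M,1) < p_stay;
    X(:,t+1) = X(:,t).*(2*stay - 1);        % keep the value with prob p_stay, flip otherwise
end

tau = 0:100;
R_time = zeros(size(tau));                  % a) time averaging (one realization)
for k = 1:numel(tau)
    R_time(k) = mean(X(1, 1:T+1-tau(k)).*X(1, 1+tau(k):T+1));
end
R_ens = mean(repmat(X(:,1), 1, numel(tau)).*X(:, 1+tau), 1);   % b) ensemble averaging at t = 0
plot(tau, R_time, tau, R_ens);
legend('time average', 'ensemble average');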
Exercise 20

Consider a zero-mean Gaussian white noise X(t) with variance σ²X = 1.

1. Evaluate the spectral density function SX(ω) of this process.

2. Compute and plot periodogram-based estimates of SX(ω) using datasets consisting of N = 100, 1000 and 10000 samples

P̂X(ωi, N) = (1/N) | ∑_{t=1}^{N} X(t) e^(−jωi t) |²

ωi = 2πi/100, i = −50, . . . , 50

Compare the obtained results with the true spectral density function of X(t).

3. Compute time-averaged periodogram-based estimates

P̄X(ωi, N) = (1/M) ∑_{j=1}^{M} P̂X^(j)(ωi, N)

ωi = 2πi/100, i = −50, . . . , 50

where P̂X^(j)(ωi, N) denotes the periodogram obtained for the j-th segment of X(t) of length N. Plot the results obtained for N = 100.
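A minimal sketch of points 2 and 3 (the number of averaged segments M is an assumption; for unit-variance white noise the true spectral density is SX(ω) = 1):

i = -50:50;
omega = 2*pi*i/100;                         % frequency grid
N = 100;                                    % repeat for N = 1000 and N = 10000
t = 1:N;
E = exp(-1j*omega.'*t);                     % matrix of complex exponentials
X = randn(1, N);
P = (abs(E*X.').^2/N).';                    % periodogram at the grid frequencies
plot(omega, P, omega, ones(size(omega)));
legend('periodogram', 'true spectral density');

M = 50;                                     % point 3: number of segments (illustrative)
Pbar = zeros(size(omega));
for j = 1:M
    X = randn(1, N);
    Pbar = Pbar + (abs(E*X.').^2/N).';
end
Pbar = Pbar/M;
figure; plot(omega, Pbar, omega, ones(size(omega)));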
Exercise 21

Y(t) is a random process observed at the output of the second-order filter

H(z⁻¹) = 1 / (1 + a1 z⁻¹ + a2 z⁻²)

excited with white Gaussian noise X(t) ∼ N(0, 1) under zero initial conditions. The poles of the filter are given in the form

z1 = r e^(jφ), z2 = r e^(−jφ)

where

A) r = 0.9, φ = π/100
B) r = 0.9, φ = π/10
C) r = 0.99, φ = π/100
D) r = 0.99, φ = π/10

1. Compute the coefficients a1 and a2 in each of the four cases indicated above.

2. Plot the amplitude characteristics of H(z⁻¹)

A(ωi) = |H(e^(−jωi))|, ωi = πi/1000, i = 1, . . . , 1000

What is the influence of r and φ on the shape of the amplitude characteristic of H(z⁻¹)?
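A minimal sketch: the poles z1,2 = r e^(±jφ) give a1 = −2r cos(φ) and a2 = r², and the amplitude characteristic is evaluated directly on the frequency grid:

cases = [0.9 pi/100; 0.9 pi/10; 0.99 pi/100; 0.99 pi/10];
omega = pi*(1:1000)/1000;
figure; hold on;
for k = 1:size(cases, 1)
    r = cases(k,1); phi = cases(k,2);
    a1 = -2*r*cos(phi);                     % from (1 - z1*z^-1)*(1 - z2*z^-1)
    a2 = r^2;
    H = 1./(1 + a1*exp(-1j*omega) + a2*exp(-2j*omega));
    plot(omega, abs(H));
end
hold off;
legend('A: r=0.9, phi=pi/100', 'B: r=0.9, phi=pi/10', ...
       'C: r=0.99, phi=pi/100', 'D: r=0.99, phi=pi/10');
xlabel('omega'); ylabel('A(omega)');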
