
Department of Electronics and Communication Engineering

Linear Algebra and Random Processes
Subject Code - ECN-511

Random Variables
Course Content
1. Vector spaces, subspaces, bases and dimensions, linear dependence and independence, vector products, orthogonal bases and orthogonal projections
2. Linear operators and matrices: eigenvalues and eigenvectors, characteristic polynomial, diagonalization, Hermitian and unitary matrices, singular value decomposition
3. Discrete and continuous random variables: distribution and density functions, conditional distributions and expectations, functions of random variables, moments, sequences of random variables
4. Random processes: probabilistic structure; mean, autocorrelation and auto-covariance functions; strict-sense and wide-sense stationary processes; power spectral density; LTI systems with a WSS process as the input; examples of random processes - white noise, Gaussian, Poisson and Markov processes
Marks Distribution
• Assignments + Quizzes: 10% + 15%
• Mid-term Exam: 35%
• End-term Exam: 40%
Recommended Books
1. S. Axler, "Linear Algebra Done Right", 3rd Edn., Springer International Publishing, 2015.
2. G. Strang, "Linear Algebra and Its Applications", 4th Edn., Cengage Learning, 2007.
3. K.M. Hoffman and R. Kunze, "Linear Algebra", 2nd Edn., Prentice Hall India, 2015.
4. A. Papoulis and S. Pillai, "Probability, Random Variables and Stochastic Processes", 4th Edn., McGraw Hill, 2017.
5. H. Stark and J.W. Woods, "Probability and Random Processes with Applications to Signal Processing", 3rd Edn., Pearson India, 2001.
Random Variable

• A random variable is a number x(ζ) assigned by a function to every outcome ζ of an experiment.
• In the die experiment, we assign numbers to the six outcomes:
  x(f1)=1, x(f2)=2, x(f3)=3, x(f4)=4, x(f5)=5, x(f6)=6
• If we assign the number 1 to every even outcome and the number 0 to every odd outcome:
  x(f1)=x(f3)=x(f5)=0, x(f2)=x(f4)=x(f6)=1
Definition
• A random variable X is an assignment of a number X(ζ) to every outcome ζ. The function must satisfy the following conditions:
  - The set {X ≤ x} is an event for every x.
  - The probabilities of the events {X = +∞} and {X = −∞} equal 0.
• A complex random variable Z is a sum Z = X + jY of two real random variables.
• Random variables produce probability distributions based on experimentation, observation, or some other data-generating process. They allow us to understand the world from a sample of data, by quantifying the likelihood that a specific value will occur in the real world or at some point in the future.
• There are two types of random variables:
  – Continuous
  – Discrete
Discrete random variable
• Takes a countable number of distinct values such as 0, 1, 2, 3, 4, ....
• Discrete random variables are usually (but not necessarily) counts.
• If a random variable can take only a finite number of distinct values, then it must be discrete.
• Examples:
  – Number of children in a family,
  – Count of people in a cinema,
  – The number of patients in a doctor's surgery,
  – The number of defective light bulbs in a box of ten.
(Definitions: Valerie J. Easton and John H. McColl's Statistics Glossary v1.1)
Continuous random variable
• Takes an infinite number of possible values.
• Examples:
  – Height,
  – Weight,
  – The amount of sugar in an orange,
  – The time required to run a mile.
• Not defined at specific values. Instead, it is defined over an interval of values, and probability is represented by the area under a curve (in advanced mathematics, this is known as an integral).
Probability Distribution Function
• The distribution function of the random variable X is the function
  F_X(x) = P(X ≤ x)
  defined for every x from −∞ to ∞.
• The distribution functions of the random variables X, Y and Z are designated F_X(x), F_Y(y) and F_Z(z).
• The argument itself could be any other variable, e.g.,
  F_X(w) = P(X ≤ w)
  is still the distribution function of the random variable X. When there is no ambiguity, the distribution functions are written simply as F(x), F(y) and F(z).
Probability Density Function
• Suppose a random variable X may take all values over an interval of real numbers. Then the probability that X lies in a set of outcomes A, P(A), is defined to be the area above A and under a curve. The curve, which represents a function p(x), must satisfy the following:
  1. The curve has no negative values (p(x) ≥ 0 for all x).
  2. The total area under the curve is equal to 1.
A curve meeting these requirements is known as a density curve.
Example of a discrete random variable
• Suppose a variable X can take the values 1, 2, 3, or 4.
• The probabilities associated with each outcome are described by the following table:

  Outcome      1    2    3    4
  Probability  0.1  0.3  0.4  0.2

• The probability that X is equal to 2 or 3 is the sum of the two probabilities: P(X = 2 or X = 3) = P(X = 2) + P(X = 3) = 0.3 + 0.4 = 0.7.
• Similarly, the probability that X is greater than 1 is 1 − P(X = 1) = 1 − 0.1 = 0.9.
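These two calculations can be verified in a few lines of Python (a minimal sketch; the PMF is stored in a plain dict):

```python
# PMF of X from the table above
pmf = {1: 0.1, 2: 0.3, 3: 0.4, 4: 0.2}

# P(X = 2 or X = 3): disjoint events, so the point probabilities add
p_2_or_3 = pmf[2] + pmf[3]   # 0.7

# P(X > 1) via the complement rule
p_gt_1 = 1 - pmf[1]          # 0.9

print(p_2_or_3, p_gt_1)
```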
Probability histogram
• This distribution may also be described by a probability histogram.

[Figure: probability histogram with bars of height 0.1, 0.3, 0.4, 0.2 at x = 1, 2, 3, 4.]
Cumulative Distribution Function
• The cumulative distribution function for the above probability histogram is calculated as follows:
  - The probability that X is less than or equal to 1 is 0.1,
  - The probability that X is less than or equal to 2 is 0.1 + 0.3 = 0.4,
  - The probability that X is less than or equal to 3 is 0.1 + 0.3 + 0.4 = 0.8, and
  - The probability that X is less than or equal to 4 is 0.1 + 0.3 + 0.4 + 0.2 = 1.
[Figure: histogram of the cumulative distribution, also called the probability distribution function, with steps 0.1, 0.4, 0.8, 1.0 at x = 1, 2, 3, 4.]
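The staircase values are just a running sum over the PMF; a short Python check (itertools.accumulate does the cumulative sum):

```python
from itertools import accumulate

values = [1, 2, 3, 4]
probs = [0.1, 0.3, 0.4, 0.2]

# Running sum of the PMF gives the CDF at each support point
cdf = list(accumulate(probs))   # [0.1, 0.4, 0.8, 1.0]

for x, F in zip(values, cdf):
    print(f"P(X <= {x}) = {F:.1f}")
```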
Action of a random variable

Ω is the sample description space.
ζ is a random outcome of the experiment.
X(ζ) is a real number assigned to the outcome.
We establish a correspondence rule between ζ and R using X(ζ).
Practically, a random variable is a function whose domain is Ω and whose range is some subset of the real line.
- Being a function, every ζ generates a specific X(ζ), but the same value X(ζ) can be assigned to more than one outcome.
- The event {ζ : X(ζ) ≤ x} is abbreviated to {X ≤ x}. It is of unique importance, and we would like to assign a probability to it.
- This probability P[X ≤ x] = F_X(x) is called the probability distribution function.
- F_X(x) is a staircase for discrete X, and continuous for continuous X.

It is shown in advanced books that X can be a random variable only if the inverse images under X of all Borel subsets of R, which make up the Borel field, are events.
Inverse image: Consider an arbitrary Borel set of real numbers B. The set of points E_B in Ω for which X(ζ) assumes values in B is called the inverse image of the set B under the mapping X.
- Sets of practical importance are the sets {x = a}, {x : a ≤ x ≤ b}, {x : a < x ≤ b}, {x : a ≤ x < b}, {x : a < x < b} and their unions and intersections.
- The above five sets are generally abbreviated as [a], [a, b], (a, b], [a, b) and (a, b).
- Intervals which include their end points are called closed, and intervals that exclude them are called open.
Example: We ask a random person whether he or she has a daughter.

The underlying experiment has sample description space Ω = {no, yes}.
The complete set of events is (ϕ, no, yes, Ω).
P[ϕ] = 0, P[Ω] = 1, P[yes] = any value between 0 and 1, P[no] = 1 − P[yes].

• One way to associate probabilities by assigning values to the responses:
  P[ϕ] = 0,
  P[X ≤ ∞] = 1,
  P[X = 0] = 1/4 (assumed), with 0 assigned to "no",
  P[X = 1] = 3/4 (assumed), with 1 assigned to "yes".
• The probabilities that X lies in various closed or open sets are
  P[3 ≤ X ≤ 4] = P[ϕ] = 0
  P[0 ≤ X < 1] = P[no] = 1/4
  P[0 ≤ X ≤ 2] = P[Ω] = 1
  P[0 < X ≤ 1] = P[yes] = 3/4
Every such set is related to an event defined on Ω, and hence X is a random variable. Lower-case x denotes the values that X can take, for example 0, 1, 3, 4 in the above example.
Example
• A wheel of fortune can take any value between 0 and 99.
• A prize is won if the needle lands on an odd number.
• What will be Ω?
• If we assign the exact value under the needle to X, is that X (having values 1, 2, ..., 99) a random variable for the event here?

X is not a valid random variable for this probability space, because it is not a function on Ω.
Properties of F_X(x)
• F_X(∞) = 1, F_X(−∞) = 0. ({ζ : X(ζ) ≤ x} and X(ζ) ∈ [−∞, x] are equivalent statements.)
• x1 ≤ x2 ⟹ F_X(x1) ≤ F_X(x2). (F_X(x) is a non-decreasing function of x.)
Proof: if x1 ≤ x2, then {X ≤ x1} ⊆ {X ≤ x2}, so P[X ≤ x1] ≤ P[X ≤ x2]. Similarly we can prove the following relation:
• F_X(x) is continuous from the right, that is, F_X(x⁺) = F_X(x).
Examples 2.3-1, 2.3-2 and 2.3-4 from reference book 5.

Example 2.3-1
Discontinuous F_X(x)
• If F_X(x) is a continuous function of x, then
  F_X(x) = F_X(x⁻).
• However, if F_X(x) is discontinuous at the point x,
  F_X(x) − F_X(x⁻) = P[x⁻ < X ≤ x] = lim_{ε→0} P[x − ε < X ≤ x] = P[X = x] ≠ 0.
• Thus P[X = x], viewed as a function of x, is zero wherever F_X(x) is continuous, and is non-zero only at discontinuities of F_X(x).
Example 2.3-4

(Use slide 20.)
Probability Density Function
• If F_X(x) is continuous and differentiable, the pdf is given as
  f_X(x) = dF_X(x)/dx
• Interpretation of f_X(x):
  P[x < X ≤ x + Δx] = F_X(x + Δx) − F_X(x)

[Figure: a probability distribution function and the corresponding probability histogram.]

Note that in the case of a staircase PDF, the pdf is 0 within each step.
• If F_X(x) is continuous in its first derivative, then for sufficiently small Δx
  F_X(x + Δx) − F_X(x) ≈ f_X(x) Δx
• For small Δx
  P[x < X ≤ x + Δx] ≈ f_X(x) Δx
• If f_X(x) exists, then F_X(x) is continuous, and therefore P[X = x] = 0.
Properties
1. f_X(x) ≥ 0

2. ∫_{−∞}^{∞} f_X(ξ) dξ = F_X(∞) − F_X(−∞) = 1

3. F_X(x) = ∫_{−∞}^{x} f_X(ξ) dξ = P[X ≤ x]

4. F_X(x2) − F_X(x1) = ∫_{−∞}^{x2} f_X(ξ) dξ − ∫_{−∞}^{x1} f_X(ξ) dξ = ∫_{x1}^{x2} f_X(ξ) dξ = P[x1 < X ≤ x2]
Some important pdfs
1) The univariate normal (Gaussian) pdf

  f_X(x) = (1/√(2πσ²)) e^{−(1/2)((x−μ)/σ)²}

[Figure: plot of f_X(x).]
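A quick numerical sanity check that this density integrates to 1 (a minimal sketch using numpy/scipy; the values of μ and σ are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

mu, sigma = 1.0, 2.0   # arbitrary example parameters

def gaussian_pdf(x):
    """Univariate normal density with mean mu and standard deviation sigma."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / np.sqrt(2 * np.pi * sigma**2)

area, _ = quad(gaussian_pdf, -np.inf, np.inf)
print(area)   # ~1.0, as required of any pdf
```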
Mean and Variance

• Mean: μ = ∫_{−∞}^{∞} x f_X(x) dx

• Variance: σ² = ∫_{−∞}^{∞} (x − μ)² f_X(x) dx

• For random variables having discrete values, such as Bernoulli, Poisson, etc.,
  μ = Σ_i x_i P[X = x_i]
  σ² = Σ_i (x_i − μ)² P[X = x_i]
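Applying the discrete formulas to the four-point distribution tabulated earlier (a minimal sketch):

```python
xs    = [1, 2, 3, 4]
probs = [0.1, 0.3, 0.4, 0.2]

mean = sum(x * p for x, p in zip(xs, probs))                # 2.7
var  = sum((x - mean) ** 2 * p for x, p in zip(xs, probs))  # 0.81

print(mean, var)
```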
Examples
• Find mean and variance.

(a)

(b)
Gaussian pdf to standard normal
Given X: N(μ, σ²), we need to evaluate P[a < X ≤ b]. Substituting ξ = (x − μ)/σ, with a′ = (a − μ)/σ and b′ = (b − μ)/σ:

  P[a < X ≤ b] = (1/√(2π)) ∫_{a′}^{b′} e^{−ξ²/2} dξ
               = (1/√(2π)) ∫_{0}^{b′} e^{−ξ²/2} dξ − (1/√(2π)) ∫_{0}^{a′} e^{−ξ²/2} dξ
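This standardization is what lets a single table (or a single library routine) serve every normal distribution; a quick cross-check with scipy (the values of μ, σ, a, b are arbitrary):

```python
from scipy.stats import norm

mu, sigma, a, b = 1.0, 2.0, 0.0, 3.0   # arbitrary example values

# Direct computation with location/scale parameters
p_direct = norm.cdf(b, loc=mu, scale=sigma) - norm.cdf(a, loc=mu, scale=sigma)

# Same probability via the standardized variable xi = (x - mu) / sigma
a_p, b_p = (a - mu) / sigma, (b - mu) / sigma
p_standard = norm.cdf(b_p) - norm.cdf(a_p)

print(p_direct, p_standard)   # identical values
```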
Error function
In the literature the error function appears with more than one normalization; the most common convention is erf(x) = (2/√π) ∫_{0}^{x} e^{−t²} dt, while some probability texts define it directly in terms of the standard normal integral above.
Commonly used Density Functions
2) Rayleigh Distribution

Examples: rocket landing errors, the radial distribution of misses around the target on a rifle range, random fluctuations in certain waveforms.
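The slide's displayed formula did not survive extraction; for reference, the standard Rayleigh density with scale parameter σ is

$$f_X(x) = \frac{x}{\sigma^2}\, e^{-x^2/(2\sigma^2)}, \qquad x \ge 0,$$

and f_X(x) = 0 for x < 0.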
Commonly used Density Functions
3) Exponential Distribution

Examples: lifetime of machinery, intensity variation in incoherent light, waiting-time problems. (Here λ = 1/μ, the reciprocal of the mean.)
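Again the displayed formula was lost in extraction; the standard exponential density with rate λ = 1/μ is

$$f_X(x) = \lambda e^{-\lambda x}, \qquad x \ge 0,$$

and f_X(x) = 0 for x < 0.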
Commonly used Density Functions
4) Uniform Distribution

• The density function of the uniform distribution with a constant success rate on the interval a ≤ x ≤ b, here with a = 2 and b = 7, is shown in the figure.

• E.g. queuing models, when we have no prior knowledge apart from the end points.
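The density is constant over the interval and zero outside it; with the slide's end points a = 2 and b = 7:

$$f_X(x) = \frac{1}{b-a} = \frac{1}{5}, \qquad a \le x \le b,$$

and f_X(x) = 0 elsewhere.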
Commonly used Density Functions
5) Chi-Square

Gamma function (α > 0):
  Γ(α) = ∫_{0}^{∞} x^{α−1} e^{−x} dx

If we integrate by parts, Γ(α + 1) = α Γ(α).

If α is an integer n, Γ(n) = (n − 1)!.
Exercise: Finding and .
Exercise: Finding
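The two gamma-function identities above are easy to spot-check numerically with the standard library (a small sketch, independent of the slide's exercises):

```python
import math

alpha = 3.7
# Recurrence from integration by parts: Gamma(alpha + 1) = alpha * Gamma(alpha)
print(math.gamma(alpha + 1), alpha * math.gamma(alpha))   # equal

# Integer case: Gamma(n) = (n - 1)!
n = 6
print(math.gamma(n), math.factorial(n - 1))               # 120.0 and 120
```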
Commonly used Density Functions
6) Gamma (b > 0, c > 0)

7) Student-t

8) Laplacian

• Example: image and speech signal processing.
9) Rician distribution

• Also called the Rice or Nakagami-n distribution.
• I0(z) is the modified Bessel function of the first kind with order zero.
• The distribution is valid for real positive numbers.
• Two shape parameters define the Rician distribution: ν and σ.
- The Rice distribution is related to the normal distribution: it is essentially the distribution of the norm of two normally distributed variables.
- The pdf of the Rice distribution with ν = 0 and scale σ is the same as the pdf of the Rayleigh distribution with scale σ.

Gudbjartsson, H. and Patz, S., "The Rician Distribution of Noisy MRI Data", Magn. Reson. Med. 1995 Dec; 34(6): 910–914.
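The ν = 0 reduction can be checked directly with scipy, whose Rice distribution is parameterized by the shape b = ν/σ (a minimal sketch):

```python
import numpy as np
from scipy.stats import rice, rayleigh

sigma = 2.0
x = np.linspace(0.01, 10, 5)

# Rice with nu = 0  =>  shape parameter b = nu/sigma = 0
rice_pdf = rice.pdf(x, b=0, scale=sigma)
rayl_pdf = rayleigh.pdf(x, scale=sigma)

print(np.allclose(rice_pdf, rayl_pdf))   # True: Rice(0, sigma) == Rayleigh(sigma)
```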
Background on the Bessel function
Solution to the Bessel equation:

I0(x) = J0(jx) is the form that appears in the Rician pdf expression.

https://mathworld.wolfram.com/BesselFunctionoftheFirstKind.html
• As an example, consider the case where there are multiple events involving a continuous R.V.:
Probability Mass Function
• The probability measure for a discrete random variable is the probability mass function (PMF).

• The probability distribution function for a discrete R.V. is a staircase: a sum of steps at the support points.

• For any event B, P[B] is obtained by summing the PMF over the points in B.

• P_X(x) is 0 where F_X(x) is continuous, and is finite wherever there is a discontinuity.

• For a countable output, the PMF can be given as P_X(x_i) = P[X = x_i].

• It is also called the frequency function.
Probability Mass Functions
• 1.

• 2.

• 3.
Mixed distribution function
Referring to slide 41:

Example: Find the pdf of the discrete R.V. given as follows.

Example: The PDF of a mixed R.V. is given as follows:
(a) Find the value of K.
(b) Compute
(c) Compute
Conditional PDF
Distribution function as a weighted sum of conditional distributions
- The probability of event B in terms of n mutually exclusive and exhaustive events {A_i}, defined on the same probability space as B.

- This expresses F_X(x) as a weighted sum of conditional distribution functions.
Example
Bayes' formula for probability density functions
• Consider that the event B and {X = x} are defined on the same probability space.
• Start from the definition of conditional probability.

• If X is a continuous R.V., P[X = x] = 0, so we apply limits

  P[B | X = x] = lim_{Δx→0} P[B | x < X ≤ x + Δx]

and refine the definition accordingly.

• This is also known as the a posteriori probability, or posterior density, of B given X = x.
• The above equation provides another important relation as follows:

• It is called the average probability of B.
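The displayed equations on this slide did not survive extraction; the two standard relations the text describes (the posterior probability of B given X = x, and the average, or total, probability of B) are

$$P[B \mid X = x] = \frac{f_X(x \mid B)\, P[B]}{f_X(x)}, \qquad P[B] = \int_{-\infty}^{\infty} P[B \mid X = x]\, f_X(x)\, dx.$$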


Example
• First find the probability of any switch being thrown.
• Then

• We wish to compute the maximum among the posterior probabilities.

• Calculating for the source A:

The same can be calculated similarly for B and C.

• The maximum a posteriori probability of any source will suggest that that particular source is the cause of the failure.
Functions of random variables
• Functions are relations/rules/correspondences which map inputs to outputs.
• If we know the PDF, PMF or pdf of the input R.V., can we compute that of the output R.V.?
• Sometimes the computation is too complex, and we settle for descriptors of the output which carry less information than the PDF/pdf, for example the expectation or 'average'.
• When the system has memory (the output is a function of previous inputs), such calculations are even more complex.
• The problem statement is: given a rule g(·) and a random variable X with pdf f_X(x), what is the pdf f_Y(y) of the random variable Y = g(X)?
Simple Example of a two-level decoder
• A two-level waveform becomes analog because of the effect of additive Gaussian noise.
• The decoder samples the analog waveform x(t) at t0 and decodes according to the following rule.

The input signal has noise with a Gaussian pdf. Here we assume X: N(1, 1).

• Looking at the input and output, we can define the following events, where X ≈ x(t0):
  {Y = 0} = {X < 0.5}
  {Y = 1} = {X ≥ 0.5}
• We can calculate the PMF of the output since the pdf of the input R.V. is known:
  P{Y = 0} = P{X < 0.5}
           = (1/√(2π)) ∫_{−∞}^{0.5} e^{−0.5(x−1)²} dx
           = 0.31
• Similarly, P{Y = 1} = 0.69.
• What would be P{Y = y} for y ≠ 1, 0? Zero.
• We can write the pdf of Y as a pair of impulses:
  f_Y(y) = 0.31 δ(y) + 0.69 δ(y − 1)
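The 0.31 figure is just Φ(−0.5), the standard normal CDF at the standardized threshold; a quick check using only the Python standard library (threshold 0.5 and X: N(1, 1) as on the slide):

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu, sigma, threshold = 1.0, 1.0, 0.5

p_y0 = phi((threshold - mu) / sigma)    # P{Y=0} = P{X < 0.5} ~ 0.3085
p_y1 = 1.0 - p_y0                       # P{Y=1} ~ 0.6915

print(round(p_y0, 2), round(p_y1, 2))   # 0.31 0.69
```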
Solving Problems of Y = g(X)

Example:
Generalization
General formula for determining the pdf of Y = g(X):

  f_Y(y) = Σ_i f_X(x_i) / |g′(x_i)|,

where the x_i are the real roots of y = g(x). This equation is applicable where y = g(x) has several real roots. If the equation has no real roots, then f_Y(y) = 0.
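A concrete instance of the formula (my choice of g, not the slide's): for Y = X² with X standard normal, the roots of y = x² are ±√y and g′(x) = 2x, so f_Y(y) = [f_X(√y) + f_X(−√y)] / (2√y). A Monte Carlo sketch comparing this against simulated samples:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
y = x**2                                  # Y = g(X) = X^2

def f_X(v):
    return np.exp(-v**2 / 2) / np.sqrt(2 * np.pi)

def f_Y(v):
    # Two real roots x = +sqrt(y) and -sqrt(y); |g'(x)| = 2*sqrt(y)
    r = np.sqrt(v)
    return (f_X(r) + f_X(-r)) / (2 * r)

# Empirical density of Y on (0.2, 4.0), normalized by ALL samples
counts, edges = np.histogram(y, bins=50, range=(0.2, 4.0))
width = edges[1] - edges[0]
hist = counts / (len(y) * width)
centers = 0.5 * (edges[:-1] + edges[1:])

print(np.max(np.abs(hist - f_Y(centers))))   # small, typically < 0.02
```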
Example 3.2-7
H. Stark and J.W. Woods, "Probability and Random Processes with Applications to Signal Processing", 3rd Edn., Pearson India.

We will solve the above problem (i) by using step-by-step calculations and (ii) by using the relation from slide 66.
Case: when g(x) is a constant value

• When g(x) takes a constant value over some interval, it is difficult to apply the formula of slide 66 directly because g′(x) = 0 there. In that case Y takes the constant value with non-zero probability, which appears as an impulse in f_Y(y).
Expectation of a Random Variable
- The expected value or mean of X is given by

  E[X] = ∫_{−∞}^{∞} x f(x) dx

- Note that E[X] is indeed the center of gravity of the mass distribution described by the function f:
Example
1. Let X be uniform U(a, b). Then f(x) = 1/(b − a) for x in [a, b] and zero outside this interval.

  E(X) = ∫_{−∞}^{∞} x f(x) dx = ∫_a^b x · (1/(b − a)) dx = (1/(b − a)) [x²/2]_a^b = (b² − a²)/(2(b − a)) = (a + b)/2

2. ?

• For any R.V. Y:


Example

Example
Example

(a) Expected value of the number of seizures

(b)
Conditional Expectations
• In some situations, we want to know about only a subset of the population: e.g., the average lifespan of people still alive at an age of more than 70, or the average BP of long-distance runners.

• Example: Marks obtained by students in ECN-511 are 12, 35, 45, 67, 77, 88, 99.
• What is the average score? 60.43
• If the passing score is 47, then what is the average passing score?
82.75
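The two averages in the marks example, checked in a few lines of Python:

```python
marks = [12, 35, 45, 67, 77, 88, 99]
passing = 47

print(sum(marks) / len(marks))       # ~60.43, the unconditional average

passed = [m for m in marks if m >= passing]
print(sum(passed) / len(passed))     # 82.75, the average given "passed"
```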
• The conditional expectation of X given that event B has occurred is
  (a) E[X | B] = ∫_{−∞}^{∞} x f_X(x | B) dx
• If X is discrete, then the above equation can be replaced with
  (b) E[X | B] = Σ_i x_i P[X = x_i | B]
• Let B = {X = x} for a pair of discrete R.V.s X and Y; then
  (c) E[Y | X = x] = Σ_j y_j P[Y = y_j | X = x]
  (d) E[Y] = Σ_x E[Y | X = x] P[X = x]  (total expectation)
Example: What is E(Y | X = 1), the conditional expectation of Y given that X = 1?

Ans: Using formula (c): 3.19

• What would be E(Y)?
Using formula (d), the answer would be 2.9753.
Joint Distribution and Densities
Properties of F_XY(x, y)
• Example: (answer: ≈ 0.19)
Marginal Distributions

Marginal Densities
Independent Random Variables
For independent events:

Please note that this relation is different from the one on slide 81. We can derive it from slide 81 by applying the independence property to the PDF.
Example: Are the R.V.s X and Y independent?
Example containing non-independent R.V.s

(a) Find A.
(b) What are the marginal pdfs?
(c) .

Example 2:
Do X and Y represent independent variables?

No! Only for ρ = 0 do the R.V.s become independent.
Expectation of a function of two variables
Example

Computation of E(X) and E(Y):

Marginal PMFs
Example
Moments
• Two samples can have a similar mean, but completely different deviations from the mean value.

From the definition, the n-th central moment is m_n = E[(X − μ)^n]. The most frequently used moment is m_2, called the variance, Var[X], with Var[X] = E[(X − μ)²] = E[X²] − μ². The result can be generalized to higher orders.

• Example:
Joint Moments
• A measure to describe the state of one R.V. by observing another R.V.

Joint Central Moments

• The order is given as (i + j).

• Examples of second-order joint central moments are the variances σ_X², σ_Y² and the covariance Cov[X, Y] = E[(X − μ_X)(Y − μ_Y)].
Properties of uncorrelated R.V.s

1. E[XY] = E[X] E[Y]

2. Var[X + Y] = Var[X] + Var[Y], since the cross term (the covariance) is 0.
Example

A) Are X and Y correlated?

B) Are X and Y independent?
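The slide's own example did not survive extraction; as a stand-in illustration of why these two questions can have different answers, a classic pair that is uncorrelated yet dependent is X uniform on [−1, 1] with Y = X² (a Monte Carlo sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, 1_000_000)
y = x**2                          # Y is a deterministic function of X: dependent

# Covariance ~ 0: E[XY] = E[X^3] = 0 by symmetry, and E[X] = 0
print(np.cov(x, y)[0, 1])         # ~0  ->  uncorrelated

# Dependence shows up through conditioning: given |X| > 0.9, Y exceeds 0.81
print(y[np.abs(x) > 0.9].min())   # > 0.81
```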


Example of Jointly Gaussian pdf
Random Process

• A random process is a collection of random variables X_t indexed by time. Each realization of the process is a function of t. For every fixed time t, X_t is a random variable.
• Random processes are classified as continuous-time or discrete-time, depending on whether time is continuous or discrete. We typically notate continuous-time random processes as {X(t)} and discrete-time processes as {X[n]}.
• Example: a sine wave whose amplitude is a R.V.
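A sketch of that example: realizations X(t) = A sin(2πft) with random amplitude A. Each draw of A gives one deterministic waveform; each fixed t gives a random variable. (The frequency and the distribution of A are arbitrary choices here.)

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 500)    # time axis, seconds
f = 3.0                           # arbitrary frequency, Hz

# Each row is one realization X(t) = A * sin(2*pi*f*t), with A ~ U(0, 1)
A = rng.uniform(0.0, 1.0, size=(1000, 1))
X = A * np.sin(2 * np.pi * f * t)

# Mean function: E[X(t)] = E[A] * sin(2*pi*f*t) = 0.5 * sin(2*pi*f*t)
empirical_mean = X.mean(axis=0)
print(np.allclose(empirical_mean, 0.5 * np.sin(2 * np.pi * f * t), atol=0.05))
```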
Basic Definitions
1.

2.

3.
Hermitian Symmetry of the correlation and covariance functions
