
Lecture Set 3

DNSC 6311
Stochastic Foundations: Probability

Korel Gundem

The George Washington University


Department of Decision Sciences

Lecture Outline

▶ Random Variables

▶ Types of Random Variables

▶ Probability Distributions

▶ Joint Probability Distributions

▶ Marginal and Conditional Distributions

▶ Expectation of a Random Variable

▶ Higher Moments and Variance of a Random Variable

Random Variable: Basic Ideas
▶ A random variable X is a function (rule) that assigns
numerical values (real numbers) to the outcomes in the
sample space X : S → R.

[Figure: outcomes (∗) in the sample space S are mapped by X to points on the real line.]
▶ For example, consider the experiment of flipping two coins:

S = {HH, TH, HT, TT}

We can define X = number of tails observed; its realizations are
x = 0, 1, 2.
Random Variable Types
▶ We have discrete and continuous random variables (RVs).
▶ A discrete RV has a countable number of values: either finite
or countably infinite.
For example, we can observe the stock market for 10
consecutive days and define X as the number of times the
market is up during these 10 days; then X takes finitely many
values.
Alternatively, starting tomorrow we can observe the market
until it is up (the first time) and define X as the number of
days until the market is up.
In this case X takes countably infinitely many values.
▶ A continuous RV has an uncountable number of values.

For example, X can be defined as the value of the DJ index at
the end of a trading day, or it can represent the market share
of a company during a specific period.
An Example
▶ You wager on any number from 1 to 6, say 4, and then three
dice are rolled.
If any one of the three dice comes up with a 4, you win the
amount of your wager and you also get back your original
stake.

▶ It could happen that more than one die comes up with a 4.

For example, if you had a $1 bet on a 4, and each of the three
dice came up with a 4, you would win $3.
If two of the three dice came up with a 4, you would win $2.

▶ Assume that you bet $1 on a number and let X be the net
amount you win in one play of the game.

▶ Possible values (realizations) of X are (−1, 1, 2, 3).
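A minimal R sketch (object names rolls, matches, and x are illustrative) that enumerates all 216 equally likely rolls of three fair dice and tabulates the distribution of X:

# All 6^3 equally likely outcomes of rolling three fair dice
rolls <- expand.grid(d1 = 1:6, d2 = 1:6, d3 = 1:6)

# Number of dice matching the number we bet on (here, 4)
matches <- rowSums(rolls == 4)

# Net winnings: lose the $1 stake if no die matches, otherwise win $1 per match
x <- ifelse(matches == 0, -1, matches)

# Probability of each value of X (each outcome has probability 1/216);
# this reproduces the 125/216, 75/216, 15/216, 1/216 derived on the next slides
table(x) / nrow(rolls)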

Probability Distribution of RVs: Discrete RVs
▶ We are interested in computing probabilities of random
variables.
▶ A probability distribution is a rule that assigns probabilities to
the outcomes in the sample space.
Therefore, it assigns a probability to every possible value of a
random variable.

▶ For a discrete RV X we denote the probability distribution of
X as

P_X(x) = P(X = x),

the probability assigned to x, a specific value of X.
▶ A probability distribution can be represented via

▶ Tabular form
▶ Graphical form
▶ Mathematical formula.
Probability Distribution: Tabular Representation
▶ Consider the example of betting on a number from 1 to 6 in
the three-dice game, where you bet on 4.

The RV X has possible values x = (−1, 1, 2, 3).

▶ Probability distribution of X

What is P(X = −1)? Any assumptions? Note that (assuming the
three dice are fair and independent) this can only happen when
all three dice come up with a number different from the one you
wagered on.
Thus, P(X = −1) = P_X(−1) = 5/6 × 5/6 × 5/6 = 125/216.

▶ What is P(X = 2)? Here we need 2 of the 3 dice to come up
with the number on which we bet.
Thus, P(X = 2) = P_X(2) = 3 × 1/6 × 1/6 × 5/6 = 15/216.

Probability Distribution: Tabular Representation

▶ For a discrete RV the probability distribution P_X(x) is also
referred to as the probability mass function.

x     P_X(x)
−1    (5/6)(5/6)(5/6)    = 125/216
 1    3(1/6)(5/6)(5/6)   =  75/216
 2    3(1/6)(1/6)(5/6)   =  15/216
 3    (1/6)(1/6)(1/6)    =   1/216
      Σ_x P_X(x) = 1

▶ Probability distribution properties

▶ 0 ≤ P_X(x) ≤ 1, for all x
▶ Σ_x P_X(x) = 1
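A quick R check of the two properties for this table (the vectors x and px are illustrative names):

x  <- c(-1, 1, 2, 3)
px <- c(125, 75, 15, 1) / 216

all(px >= 0 & px <= 1)   # property 1: TRUE
sum(px)                  # property 2: equals 1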

Probability Distribution: Graphical Representation

[Figure: Probability Distribution for X — bar plot of P(X = x) against x = −1, 1, 2, 3.]

Cumulative Distribution Function

▶ The cumulative distribution function (CDF) of a random
variable X is denoted F_X(x).

▶ The CDF gives the probability that X is less than or equal to
x, for all values of x.

▶ Thus, we need to evaluate F_X(x) = P(X ≤ x).

x     P_X(x)    F_X(x)
−1    125/216   125/216
 1     75/216   200/216
 2     15/216   215/216
 3      1/216   216/216
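Since F_X(x) accumulates P_X over all values up to x, the CDF column is a cumulative sum; a minimal R sketch:

px <- c(125, 75, 15, 1) / 216   # P_X(x) at x = -1, 1, 2, 3
Fx <- cumsum(px)                # F_X(x) at the same points
Fx * 216                        # 125 200 215 216, matching the table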

Cumulative Distribution Function Plot

[Figure: Cumulative Distribution Function of X — step plot of F(x) against x.]

The CDF is a nondecreasing function of x and is always a
right-continuous function (for all RVs). In the case of discrete
random variables it is a step function with jumps at the possible
values of X.
Expectations and Moments of RVs
Summarizing probability distributions: mode, median, moments
▶ The expected value or the mean of a discrete random variable
X is defined as

µ_X = E[X] = Σ_x x P(X = x)

What is the value of E[X] in our case example? (See the R
sketch below.)

▶ It can be interpreted as the long run average.

▶ It can also be thought of as the center of gravity of the
probability distribution.

▶ Sensitive to the high and low values of the RV.

▶ Note that for discrete RVs the mean does not have to be a
possible value of the RV.
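A minimal R sketch answering the question above for the dice game:

x  <- c(-1, 1, 2, 3)
px <- c(125, 75, 15, 1) / 216

# Expected net winnings per play of the game
sum(x * px)   # -17/216, roughly -0.079 dollars per $1 bet

So in the long run the bettor loses about 8 cents per dollar wagered, and the mean is not one of the possible values of X.
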
Joint Probability Distributions
▶ The joint probability distribution for two discrete random
variables X and Y, denoted by P_XY(x, y), lists the possible
values of (x, y) together with their joint probabilities.
It is also referred to as the bivariate distribution of X and Y.

▶ Notation:
P_XY(x, y) = P(X = x, Y = y)
denotes the joint probability that the random variable X takes
the value x and, at the same time, the random variable Y
takes the value y.
▶ Properties
▶ 0 ≤ P_XY(x, y) ≤ 1, for all x and y
▶ Σ_y Σ_x P_XY(x, y) = 1

Joint Probability Distributions - An Example
▶ Assume RVs X and Y are the percent returns of two stocks
with the joint probability distribution given by
           Y = 5%   Y = 10%   Y = 15%
X = −20%    0.05     0.05      0.05
X = 10%     0.10     0.10      0.05
X = 30%     0.30     0.10      0.05
X = 50%     0.10     0.05      0.00
▶ Note that here we assume that stock returns are discrete
random variables for illustrative purposes.

▶ From the table, the probability that the return from the first
stock (X) is 10 percent and from the second stock (Y) is 15
percent in a given period is 0.05, that is,

P(X = 10%, Y = 15%) = 0.05.
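A minimal R sketch (the matrix name pxy and its dimnames are arbitrary choices) that stores the joint table and reads off a joint probability:

# Joint distribution of the two stock returns; rows index x, columns index y
pxy <- matrix(c(0.05, 0.05, 0.05,
                0.10, 0.10, 0.05,
                0.30, 0.10, 0.05,
                0.10, 0.05, 0.00),
              nrow = 4, byrow = TRUE,
              dimnames = list(x = c("-20", "10", "30", "50"),
                              y = c("5", "10", "15")))

sum(pxy)          # joint probabilities sum to 1
pxy["10", "15"]   # P(X = 10%, Y = 15%) = 0.05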

Marginal Probability Distributions
▶ The marginal probability distribution of X, denoted by P_X(x),
is the probability distribution of X by itself.

▶ We can obtain the marginal distribution of X as

P_X(x) = Σ_y P_XY(x, y)

for each value of X; that is, we fix the value of X at x and
sum the joint probabilities over all values of Y.
This is the marginalization rule.

▶ Similarly,

P_Y(y) = Σ_x P_XY(x, y)

for each value of Y.

Marginal Probability Distributions - An Example

▶ For example, we can obtain the marginal distribution of X as

x       P_X(x)
−20%    0.05 + 0.05 + 0.05 = 0.15
 10%    0.10 + 0.10 + 0.05 = 0.25
 30%    0.30 + 0.10 + 0.05 = 0.45
 50%    0.10 + 0.05 + 0.00 = 0.15
        Σ_x P_X(x) = 1

▶ P(X = −20%) = Σ_y P_XY(X = −20%, Y = y) = 0.15

▶ P(X = 10%) = Σ_y P_XY(X = 10%, Y = y) = 0.25
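In R, marginalization is just summing the joint table over rows or columns; a short sketch reusing the pxy matrix defined above:

rowSums(pxy)   # marginal P_X(x): 0.15 0.25 0.45 0.15
colSums(pxy)   # marginal P_Y(y): 0.55 0.30 0.15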

Conditional Probability Distributions

▶ The conditional probability distribution of X given that Y = y
is denoted P_X|Y(x|y).

▶ We evaluate P_X|Y(x|y) as

P_X|Y(x|y) = P_XY(x, y) / P_Y(y)

for each value of X, provided P_Y(y) > 0.

▶ Similarly, the conditional probability distribution of Y, given
that X = x, is

P_Y|X(y|x) = P_XY(x, y) / P_X(x)

for each value of Y, provided P_X(x) > 0.

Conditional Distribution Table P_X|Y(x|y)

Using the joint probability distribution table for P_XY(x, y) and the
derived marginal distributions of X and Y, the conditional
probability distribution of X given Y can be obtained from the
following table.

           Y = 5%   Y = 10%   Y = 15%
X = −20%    0.091    0.167     0.334
X = 10%     0.182    0.333     0.333
X = 30%     0.545    0.333     0.333
X = 50%     0.182    0.167     0.000

What do the columns of the table represent?
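The columns can be reproduced by dividing each column of the joint table by the corresponding marginal P_Y(y); a sketch reusing the pxy matrix from before:

# Conditional distribution of X given Y = y: each column of pxy divided by P(Y = y)
px_given_y <- sweep(pxy, 2, colSums(pxy), "/")
round(px_given_y, 3)      # reproduces the table above
colSums(px_given_y)       # each column is a distribution over x and sums to 1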

Conditional Distributions P_X|Y(x|y)

[Figure: side-by-side bar plots of the conditional distributions P(X = x | Y = 5),
P(X = x | Y = 10), and P(X = x | Y = 15) against x.]
What can you say about the relationship between Y and X based
on the plots?
Are they (in)dependent?
Conditional Probability Distributions and Independence
▶ Random variables X and Y are independent if and only if

P_XY(x, y) = P_X(x) P_Y(y)

for all (x, y).

▶ Equivalently, X and Y are independent if and only if

P_X|Y(x|y) = P_X(x)

and

P_Y|X(y|x) = P_Y(y)

for all (x, y).

▶ Are the returns X and Y from the two securities independent?
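One way to check, as a sketch reusing pxy: compare the joint table with the product of the marginals.

# Under independence the joint table equals the outer product of the marginals
outer(rowSums(pxy), colSums(pxy))

# Compare with pxy itself: for instance P(X = 50%, Y = 15%) = 0 while
# P(X = 50%) * P(Y = 15%) = 0.15 * 0.15 = 0.0225, so X and Y are not independent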

Expected Value of Stock Returns

▶ Using the joint probability distribution

           Y = 5%   Y = 10%   Y = 15%
X = −20%    0.05     0.05      0.05
X = 10%     0.10     0.10      0.05
X = 30%     0.30     0.10      0.05
X = 50%     0.10     0.05      0.00
and the implied marginals, we can compute the means of X
and Y .

▶ E[X] = −20% × 0.15 + . . . + 50% × 0.15 = 20.5%, which
implies an expected return of 20.5 percent.

▶ Similarly, we can obtain E [Y ] = 8%.
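A quick R check of these means, using the marginals of the pxy matrix from earlier:

x <- c(-20, 10, 30, 50)   # possible returns of X, in percent
y <- c(5, 10, 15)         # possible returns of Y, in percent

sum(x * rowSums(pxy))     # E[X] = 20.5
sum(y * colSums(pxy))     # E[Y] = 8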

Expectation of a Function of a RV
▶ For any function r(X) of RV X, we can obtain the expected
value of r(X) as

E[r(X)] = Σ_x r(x) P(X = x)

▶ If we have r(X) = X^k for any positive integer k, then

E[X^k] = Σ_x x^k P(X = x)

is known as the kth moment of X (around the origin).

▶ For example, for k = 2 we have E[X^2], the second moment.

▶ We can also define moments around the mean as

E[(X − µ_X)^k] = Σ_x (x − µ_X)^k P(X = x).
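A minimal R sketch computing the first two moments of the dice-game winnings X around the origin:

x  <- c(-1, 1, 2, 3)
px <- c(125, 75, 15, 1) / 216

sum(x * px)     # first moment, E[X] = -17/216
sum(x^2 * px)   # second moment, E[X^2] = 269/216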

Variance of a RV
▶ The second moment around the mean is the variance of the
RV.
▶ For RV X the variance is defined as

Var(X) = E[(X − µ_X)^2] = Σ_x (x − µ_X)^2 P(X = x).

Typically, the variance is denoted by σ_X^2.

▶ An alternative expression can be obtained using the second
moment E[X^2] as

σ_X^2 = Var(X) = E[X^2] − µ_X^2.

▶ The standard deviation σ_X is the positive square root of the
variance, i.e., σ_X = √(σ_X^2), which measures the dispersion of X
around the mean.
Example: Back to Stock Returns X and Y
▶ We can compute the variance of returns X and Y as

Var[X] = (−20 − 20.5)^2 × 0.15 + . . . + (50 − 20.5)^2 × 0.15 = 444.75

and Var[Y] = 13.5 (See R Code).
This gives the standard deviations σ_X = 21.089 and
σ_Y = 3.674.

▶ Which stock is better? We have µ_X = 20.5 > µ_Y = 8, but
σ_X = 21.089 > σ_Y = 3.674.

▶ Define the coefficient of variation as cv_X = σ_X / |µ_X|, whose
reciprocal is known as Sharpe’s ratio in the finance literature.

▶ We can obtain cv_X = 1.03 > cv_Y = 0.46.
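The referenced R code is not reproduced in these notes; a minimal sketch along the following lines (reusing pxy, with illustrative variable names) gives the same numbers:

x <- c(-20, 10, 30, 50);  px <- rowSums(pxy)
y <- c(5, 10, 15);        py <- colSums(pxy)

mux <- sum(x * px);  varx <- sum((x - mux)^2 * px)   # 20.5 and 444.75
muy <- sum(y * py);  vary <- sum((y - muy)^2 * py)   # 8 and 13.5

c(sd_x = sqrt(varx), cv_x = sqrt(varx) / abs(mux))   # 21.089 and 1.03
c(sd_y = sqrt(vary), cv_y = sqrt(vary) / abs(muy))   # 3.674 and 0.46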

Properties of Means
For RVs X and Y and constants c and b, we can show

1. E[c] = c

2. E[cX] = cE[X]

3. E[X + Y] = E[X] + E[Y]

4. E[cX + bY] = cE[X] + bE[Y]

5. E[XY] = E[X] E[Y] if X and Y are independent RVs.

A good application of these properties is the expected return of a
portfolio: we can talk about the expected return of a portfolio
defined as P = wX + (1 − w)Y, where X and Y are the returns from
two stocks and 0 < w < 1 (see the sketch below).
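As an illustration, with an arbitrary weight w = 0.5, linearity of expectation gives the portfolio's expected return directly:

EX <- 20.5; EY <- 8     # expected returns of the two stocks, in percent
w  <- 0.5               # example portfolio weight, 0 < w < 1

w * EX + (1 - w) * EY   # E[P] = w E[X] + (1 - w) E[Y] = 14.25 percent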

Conditional Mean of X given Y = y
▶ We define the conditional mean of X given Y = y for discrete
random variables as

E[X | Y = y] = Σ_x x P_X|Y(x|y)

           Y = 5%   Y = 10%   Y = 15%
X = −20%    0.091    0.167     0.334
X = 10%     0.182    0.333     0.333
X = 30%     0.545    0.333     0.333
X = 50%     0.182    0.167     0.000

We can obtain E[X | Y = 5] as

−20% × 0.091 + 10% × 0.182 + 30% × 0.545 + 50% × 0.182 ≈ 25.45%.
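A quick R check, reusing px_given_y from the conditional-distribution sketch above:

x <- c(-20, 10, 30, 50)
sum(x * px_given_y[, "5"])   # E[X | Y = 5] = 280/11, about 25.45 percent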

