
1 Probability theory

1.1 Basics
Consider a finite sample space Ω

Ω = {ω1 , ω2 , . . . , ωM }, M < ∞.

Define a probability measure P on Ω such that


P({ωi}) = P(ωi) = pi > 0,  i = 1, . . . , M,

Σ_{i=1}^{M} pi = 1.

For every subset A of Ω, A ⊆ Ω, we have that


P(A) = Σ_{ωi∈A} P(ωi).

A random variable X on Ω is a mapping

X : Ω −→ R.

The expectation of X is defined as


E[X] = Σ_{i=1}^{M} X(ωi)P(ωi).
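These objects are easy to experiment with on a computer. Below is a minimal sketch in Python of a finite probability space; the outcome labels, probabilities and the random variable X are illustrative choices, not data from the notes.

```python
# A finite sample space Omega with probabilities p_i > 0 summing to 1.
omega = ["w1", "w2", "w3"]
p = {"w1": 0.5, "w2": 0.3, "w3": 0.2}
assert abs(sum(p.values()) - 1.0) < 1e-12

def prob(A):
    """P(A) = sum of P(w) over w in A."""
    return sum(p[w] for w in A)

# A random variable is simply a mapping Omega -> R.
X = {"w1": 1.0, "w2": -1.0, "w3": 4.0}

def expectation(X):
    """E[X] = sum over Omega of X(w) * P(w)."""
    return sum(X[w] * p[w] for w in omega)

print(prob({"w1", "w2"}))  # 0.8
print(expectation(X))      # 0.5 - 0.3 + 0.8 = 1.0
```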

1.2 Sigma-algebras and information


It is important to know which information is available to investors. This is formalized using
σ-algebras and filtrations.

Definition 1 A collection F of subsets of Ω is called a σ-algebra (or σ-field) if the following hold.

1. ∅ ∈ F.

2. If A ∈ F then A^c ∈ F.

3. If An ∈ F, n = 1, 2, . . ., then ⋃_{n=1}^{∞} An ∈ F.

Remark 1 When working on a finite sample space Ω, condition 3 reduces to

3'. If A ∈ F and B ∈ F then A ∪ B ∈ F.

Example 1 The following are examples of σ-algebras.

1. F = 2^Ω = {A | A ⊆ Ω}, the power set of Ω.
2. F = {∅, Ω}, the trivial σ-algebra.
3. F = {∅, A, A^c, Ω} for some A ⊆ Ω.

Definition 2 A set P = {A1 , . . . , An } of nonempty subsets of the sample space Ω is called a
(finite) partition of Ω if
1. ⋃_{i=1}^{n} Ai = Ω,

2. Ai ∩ Aj = ∅ for i ≠ j.

The σ-algebra consisting of all possible unions of the Ai's (including the empty set) is called the σ-algebra generated by P and is denoted by σ(P).

Remark 2 On a finite sample space every σ-algebra is generated by a partition.
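On a finite Ω the generated σ-algebra can be enumerated directly. The sketch below (Python, with an illustrative two-block partition) builds σ(P) as the set of all unions of blocks of P, including the empty union.

```python
from itertools import combinations

# Blocks of the partition P; frozensets so they can sit inside a set.
partition = [frozenset({"w1", "w2"}), frozenset({"w3"})]

# sigma(P): every union of a subcollection of blocks, empty union included.
sigma_P = set()
for r in range(len(partition) + 1):
    for blocks in combinations(partition, r):
        sigma_P.add(frozenset().union(*blocks))

# For two blocks this yields {}, {w1,w2}, {w3} and {w1,w2,w3}.
for A in sorted(sigma_P, key=len):
    print(set(A) or "{}")
```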

When making decisions, investors may only use the information available to them. This is
formalized by measurability requirements.

Definition 3 A function X : Ω −→ {x1 , . . . , xK } is F-measurable if

X^{-1}(xi) = {ω ∈ Ω | X(ω) = xi} ∈ F for all xi.

If X is F-measurable we write X ∈ F.

Remark 3 Let F = σ(P), where P = {A1, . . . , An}. Then a function X : Ω −→ R is F-measurable if and only if X is constant on each set Ai, i = 1, . . . , n.

This captures the idea that based on the available information we should be able to determine
the value of X.
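For a finite sample space this criterion is a one-line check. A small Python sketch (illustrative names, using the partition representation of a σ-algebra from above):

```python
# X is sigma(P)-measurable iff X takes a single value on every block of P.
def is_measurable(X, partition):
    """X: dict outcome -> value; partition: iterable of sets of outcomes."""
    return all(len({X[w] for w in block}) == 1 for block in partition)

partition = [{"w1", "w2"}, {"w3"}]
print(is_measurable({"w1": 1, "w2": 1, "w3": 0}, partition))  # True
print(is_measurable({"w1": 1, "w2": 0, "w3": 0}, partition))  # False
```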
Measurability is preserved under many operations, which is the content of the next proposition.

Proposition 1 Assume that X and Y are F-measurable. Then the following hold:
1. For all real numbers α and β the functions

αX + βY,  X · Y

are F-measurable.

2. If Y(ω) ≠ 0 for all ω, then


X/Y

is F-measurable.

3. If {Xn}_{n=1}^{∞} is a (countable) sequence of measurable functions, then the functions

sup_n Xn,  inf_n Xn,  lim sup_n Xn,  lim inf_n Xn

are F-measurable.

Definition 4 Let X be a function X : Ω −→ R. Then F = σ(X) is the smallest σ-algebra such that X is F-measurable.
If X1 , . . . , Xn are functions such that Xi : Ω −→ R, then G = σ(X1 , . . . , Xn ) is the smallest
σ-algebra such that X1 , . . . , Xn are G-measurable.

The next proposition formalizes the idea that if Z is measurable with respect to a certain
σ-algebra, then “the value of Z is completely determined by the information in the σ-algebra”.

Proposition 2 Let X1, . . . , Xn be mappings such that Xi : Ω −→ R. Assume that the mapping Z : Ω −→ R is σ(X1, . . . , Xn)-measurable. Then there exists a function f : R^n −→ R such that

Z(ω) = f(X1(ω), . . . , Xn(ω)).
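On a finite Ω this f can be constructed explicitly as a lookup table: since Z is constant wherever (X1, . . . , Xn) is constant, the map from observed values to Z-values is well defined. A Python sketch with illustrative data:

```python
omega = ["w1", "w2", "w3"]
X1 = {"w1": 0, "w2": 0, "w3": 1}
X2 = {"w1": 5, "w2": 5, "w3": 7}
Z  = {"w1": 2, "w2": 2, "w3": 9}  # constant where (X1, X2) is constant

# Build f as a table keyed by the observed values (X1(w), X2(w)).
f = {}
for w in omega:
    key = (X1[w], X2[w])
    # The assert fails iff Z is NOT sigma(X1, X2)-measurable.
    assert f.setdefault(key, Z[w]) == Z[w]

print(all(f[(X1[w], X2[w])] == Z[w] for w in omega))  # True
```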

We also need to know what is meant by independence. Recall that two events A and B on a
probability space (Ω, F, P ) are independent if

P (A ∩ B) = P (A) · P (B).

For σ-algebras and random variables on (Ω, F, P ) we have the following definition.

Definition 5 The σ-algebras F1, . . . , Fn are independent if

P(⋂_{i=1}^{n} Ai) = ∏_{i=1}^{n} P(Ai) whenever Ai ∈ Fi, i = 1, . . . , n.

Random variables X1 , . . . , Xn are independent if σ(X1 ), . . . , σ(Xn ) are independent.
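On a finite space, independence of two random variables can be verified by brute force over their values, since the events {X = x} generate σ(X). A sketch, using two fair coin tosses as the (illustrative) example:

```python
from itertools import product

# Omega = two coin tosses, uniform probabilities.
p = {w: 0.25 for w in ["HH", "HT", "TH", "TT"]}
X = {w: w[0] for w in p}  # outcome of the first toss
Y = {w: w[1] for w in p}  # outcome of the second toss

def independent(X, Y, tol=1e-12):
    """Check P(X = x, Y = y) = P(X = x) * P(Y = y) for all x, y."""
    for x, y in product(set(X.values()), set(Y.values())):
        pxy = sum(p[w] for w in p if X[w] == x and Y[w] == y)
        px = sum(p[w] for w in p if X[w] == x)
        py = sum(p[w] for w in p if Y[w] == y)
        if abs(pxy - px * py) > tol:
            return False
    return True

print(independent(X, Y))  # True: the two tosses are independent
print(independent(X, X))  # False: X is not independent of itself
```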

1.3 Stochastic processes and filtrations


Let N = {0, 1, 2, 3, . . .}.

Definition 6 A stochastic process {Sn}_{n=0}^{∞} on the probability space (Ω, F, P) is a mapping

S : N × Ω −→ R

such that for each n ∈ N

Sn(·) : Ω −→ R

is F-measurable.

Note that Sn(ω) = S(n, ω). We have that for a fixed n

ω −→ S(n, ω)

is a random variable. For a fixed ω

n −→ S(n, ω)

is a deterministic function of time, called the realization or sample path of S for the outcome
ω.
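A simple random walk illustrates the two views: fixing ω and letting n run produces one deterministic path. A Python sketch (the walk itself is an illustrative choice, not an example from the notes):

```python
import random

random.seed(0)  # fix omega's source of randomness for reproducibility

def sample_path(T):
    """One realization n -> S(n, omega): a +/-1 random walk from 0."""
    path = [0]
    for _ in range(T):
        path.append(path[-1] + random.choice([-1, 1]))
    return path

# Three outcomes omega give three different deterministic paths.
for _ in range(3):
    print(sample_path(10))
```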

Remark 4 In this course we will mostly be looking at a fixed time horizon, so the process will only live up until time T; that is, we will be looking at processes {Sn}_{n=0}^{T}.

A stochastic process generates information, and as before this is formalized in terms of σ-algebras; only now there is a time dimension as well.

Definition 7 Let {Sn}_{n=0}^{∞} be a random process on (Ω, F, P). The σ-algebra generated by S over [0, t] is defined by

F_t^S = σ{Sn ; n ≤ t}.

We interpret F_t^S as the information generated by observing S over the time interval [0, t].
More generally, information developing over time is formalized by filtrations, which are increasing families of σ-algebras.

Definition 8 A filtration F = {Fn}_{n≥0} on (Ω, F, P) is an indexed family of σ-algebras on Ω such that
1. Fn ⊆ F, n ≥ 0,

2. if m ≤ n then Fm ⊆ Fn .

Remark 5 As stated before, we will mostly be looking at a fixed time horizon in this course, so the filtration will only live up until time T; that is, we will be looking at filtrations F = {Fn}_{n=0}^{T}.

For stochastic processes the following measurability conditions are relevant.

Definition 9 Given a filtration F and a random process S on (Ω, F, P), we say that S is adapted to F if

Sn ∈ Fn for all n ≥ 0,
and S is predictable with respect to F if

Sn ∈ Fn−1 for all n ≥ 1.

1.4 Conditional expectation


Let X be a random variable on (Ω, F, P ) and G a σ-algebra such that G ⊆ F. In this section
we aim to define the expectation of X given the information in G, or conditional on G, E[X|G].
We will do this in three steps.
1. First we will define the expectation of X given a set B ∈ F such that P(B) ≠ 0, i.e. E[X|B]. Recall that

E[X] = Σ_{i=1}^{M} X(ωi)P(ωi) = Σ_{ω∈Ω} X(ω)P(ω).

Now it would seem natural to use the normalized probabilities

P(ωi)/P(B) on B.

We thus define

E[X|B] = Σ_{ωi∈B} X(ωi) P(ωi)/P(B) = (1/P(B)) Σ_{ω∈B} X(ω)P(ω).

Example 2 Consider the finite sample space Ω = {ω1, ω2, ω3} endowed with the power σ-algebra F = 2^Ω, and a probability measure P such that P(ωi) = 1/3, i = 1, 2, 3. Furthermore let B1 = {ω1, ω2}, B2 = {ω3}, P = {B1, B2}, and G = σ(P). Finally, let

X(ω) = I_{{ω1}}(ω) = 1 if ω = ω1, and 0 otherwise.

Then we have that

E[X] = Σ_{ω∈Ω} X(ω)P(ω) = 1 · 1/3 + 0 · 1/3 + 0 · 1/3 = 1/3

and that

E[X|B1] = (1/P(B1)) Σ_{ω∈B1} X(ω)P(ω) = (1/(1/3 + 1/3)) · (1 · 1/3 + 0 · 1/3) = 1/2,

whereas

E[X|B2] = (1/P(B2)) Σ_{ω∈B2} X(ω)P(ω) = (1/(1/3)) · (0 · 1/3) = 0.
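These numbers are easy to check in code. A minimal Python sketch of E[X|B] for this example:

```python
# Omega, P and X from Example 2, encoded as plain dictionaries.
p = {"w1": 1/3, "w2": 1/3, "w3": 1/3}
X = {"w1": 1.0, "w2": 0.0, "w3": 0.0}

def cond_expectation(X, B):
    """E[X | B] = (1 / P(B)) * sum over B of X(w) P(w)."""
    PB = sum(p[w] for w in B)
    return sum(X[w] * p[w] for w in B) / PB

print(cond_expectation(X, {"w1", "w2"}))  # E[X | B1] = 0.5
print(cond_expectation(X, {"w3"}))        # E[X | B2] = 0.0
```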

2. Next we will define the expectation of X conditional on a partition P of Ω. Suppose that P = {B1, . . . , BK} and that P(Bi) ≠ 0, i = 1, . . . , K. Note that for any random variable Y measurable with respect to σ(P) we have that if ωi ∈ Bj, then

Y(ωi) = E[Y|Bj],

since Y is constant on each Bi. This means that

Y(ω) = Σ_{i=1}^{K} E[Y|Bi] I_{Bi}(ω),

where I_{Bi} denotes the indicator function of Bi, i.e.

I_{Bi}(ω) = 1 if ω ∈ Bi, and 0 otherwise.

We now define

E[X|P](ω) = Σ_{i=1}^{K} E[X|Bi] I_{Bi}(ω).

Note that this means that E[X|P] is a random variable Z such that

1. Z ∈ σ(P), and

2. for all B ∈ σ(P) we have that

Σ_{ω∈B} Z(ω)P(ω) = Σ_{ω∈B} X(ω)P(ω).

Example 3 Continuing Example 2 we can compute

E[X|P](ω) = Σ_{i=1}^{2} E[X|Bi] I_{Bi}(ω) = (1/2) · I_{B1}(ω) + 0 · I_{B2}(ω).
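Viewed as a random variable, E[X|P] can be built block by block, exactly as in the definition. A self-contained Python sketch reusing the data of Example 2:

```python
# Omega, P and X from Example 2.
p = {"w1": 1/3, "w2": 1/3, "w3": 1/3}
X = {"w1": 1.0, "w2": 0.0, "w3": 0.0}

def cond_expectation_partition(X, partition):
    """E[X | P] as a dict omega -> value, constant on each block."""
    Z = {}
    for B in partition:
        value = sum(X[w] * p[w] for w in B) / sum(p[w] for w in B)
        for w in B:  # E[X|B_i] * I_{B_i}: assign that value on the block
            Z[w] = value
    return Z

Z = cond_expectation_partition(X, [{"w1", "w2"}, {"w3"}])
print(Z)  # {'w1': 0.5, 'w2': 0.5, 'w3': 0.0}
```

By construction Z is constant on each block (so Z ∈ σ(P)) and sums against P exactly as X does on every B ∈ σ(P).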

3. Now we are ready to give the general definition of E[X|G].

Definition 10 Consider a random variable X on (Ω, F, P) and a σ-algebra G such that G ⊆ F. The conditional expectation of X given G, denoted E[X|G], is any random variable Z such that

1. Z ∈ G, and

2. for all A ∈ G we have that

Σ_{ω∈A} Z(ω)P(ω) = Σ_{ω∈A} X(ω)P(ω).

The proposition below states some properties of the conditional expectation.

Proposition 3 The conditional expectation has the following properties. Suppose that X and
Y are random variables on (Ω, F, P ) and that α, β ∈ R. Let G be a σ-algebra such that G ⊆ F.
Then the following hold.
1. Linearity.
E[αX + βY |G] = αE[X|G] + βE[Y |G].

2. Monotonicity. If X ≤ Y then
E[X|G] ≤ E[Y |G].
3.

E[E[X|G]] = E[X].

4. If H is a σ-algebra such that H ⊆ G ⊆ F then

(i) E[E[X|H]|G] = E[X|H],


(ii) E[E[X|G]|H] = E[X|H].

Thus “the smallest σ-algebra always wins”.

5. Jensen’s inequality. If ϕ is a convex function, then

ϕ(E[X|G]) ≤ E[ϕ(X)|G].

6. If X is independent of G then
E[X|G] = E[X].

7. Taking out what is known. If X ∈ G then

E[XY |G] = X · E[Y |G].
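Properties such as 3 and 4 are easy to verify numerically on a finite space. A sketch checking E[E[X|G]] = E[X] with the data of Example 2, where G = σ(P):

```python
# Omega, P, X and the partition generating G, as in Example 2.
p = {"w1": 1/3, "w2": 1/3, "w3": 1/3}
X = {"w1": 1.0, "w2": 0.0, "w3": 0.0}
partition = [{"w1", "w2"}, {"w3"}]

def E(Y):
    """Plain expectation over the finite space."""
    return sum(Y[w] * p[w] for w in p)

def E_cond(Y, partition):
    """E[Y | sigma(partition)] as a random variable (dict)."""
    Z = {}
    for B in partition:
        value = sum(Y[w] * p[w] for w in B) / sum(p[w] for w in B)
        for w in B:
            Z[w] = value
    return Z

Z = E_cond(X, partition)
print(E(Z), E(X))  # both 1/3: E[E[X|G]] = E[X]
```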
