MFIN 305: Quantitative Methods of Finance
Mathematical and Statistical Foundations
Summations, Products, Logarithms and Differentiation
1. Logarithms
• A logarithm is the power to which the base must be raised to obtain a given
  number
• 2^3 = 8 can be written as log_2(8) = 3
• We will work with natural logarithms (log with base e)
1. Logarithms
For two variables x and y and a constant c:
• ln(xy) = ln(x) + ln(y)
• ln(x/y) = ln(x) − ln(y)
• ln(y^c) = c ln(y)
• ln(1/y) = −ln(y)
• ln(e^x) = x and e^(ln x) = x
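A minimal numerical check of these rules (a sketch in Python, assuming NumPy is available; the values of x, y and c are arbitrary):

```python
import numpy as np

# Quick numerical check of the natural-log rules above.
x, y, c = 2.0, 5.0, 3.0

assert np.isclose(np.log(x * y), np.log(x) + np.log(y))   # ln(xy) = ln x + ln y
assert np.isclose(np.log(x / y), np.log(x) - np.log(y))   # ln(x/y) = ln x - ln y
assert np.isclose(np.log(y ** c), c * np.log(y))          # ln(y^c) = c ln y
assert np.isclose(np.log(1 / y), -np.log(y))              # ln(1/y) = -ln y
assert np.isclose(np.log(np.exp(x)), x)                   # ln(e^x) = x
assert np.isclose(np.exp(np.log(x)), x)                   # e^(ln x) = x
```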
2. Sigma Notation (Summation)
• The symbol Σ denotes a sum: for example, 1 + 2 + 3 = 6
• Σ_{i=1}^{4} x_i = x_1 + x_2 + x_3 + x_4
• Σ_{i=1}^{n} x_i + Σ_{i=1}^{n} z_i = Σ_{i=1}^{n} (x_i + z_i)
• Σ_{i=1}^{n} c x_i = c Σ_{i=1}^{n} x_i, where c is a constant
• Σ_{i=1}^{n} x_i z_i ≠ Σ_{i=1}^{n} x_i × Σ_{i=1}^{n} z_i

In fact, Σ_{i=1}^{n} x_i z_i = x_1 z_1 + x_2 z_2 + … + x_n z_n, while
Σ_{i=1}^{n} x_i × Σ_{i=1}^{n} z_i = (x_1 + x_2 + … + x_n)(z_1 + z_2 + … + z_n)
2. Sigma Notation
• The sum of n identical numbers x: Σ_{i=1}^{n} x = n x
• Σ_{i=1}^{n} x_i = x_1 + x_2 + … + x_n = n x̄, where x̄ is the sample mean
• A double sum is written Σ_{i=1}^{n} Σ_{j=1}^{m} x_ij
3. Pi Notation
• Π_{i=1}^{n} x_i = x_1 x_2 … x_n
• Π_{i=1}^{n} c x_i = c^n Π_{i=1}^{n} x_i
(Both the summation and the product rules are checked numerically in the sketch below.)
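A short sketch of the summation and product rules above, assuming NumPy and arbitrary example vectors:

```python
import numpy as np

# Illustration of the sigma and pi rules with arbitrary vectors.
x = np.array([1.0, 2.0, 3.0, 4.0])
z = np.array([0.5, 1.5, 2.5, 3.5])
c = 2.0
n = len(x)

assert np.isclose(x.sum() + z.sum(), (x + z).sum())   # sum x_i + sum z_i = sum (x_i + z_i)
assert np.isclose((c * x).sum(), c * x.sum())         # sum c x_i = c * sum x_i
assert np.isclose(x.sum(), n * x.mean())              # sum x_i = n * x-bar
print((x * z).sum(), x.sum() * z.sum())               # generally NOT equal
assert np.isclose((c * x).prod(), c**n * x.prod())    # prod c x_i = c^n * prod x_i
```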
4. Differentiation
• If y(x) = 4x^5 + 3x^3 + 2x + 6:
  dy/dx = 20x^4 + 9x^2 + 2
  d²y/dx² = 80x^3 + 18x
• y = 4x^3 ⇒ dy/dx = 12x^2
• y = 3/x = 3x^(−1) ⇒ dy/dx = −3/x^2
• y = log(x) ⇒ dy/dx = 1/x
4. Differentiation
• y = e^x ⇒ dy/dx = e^x
• In general, y = e^(f(x)) ⇒ dy/dx = f'(x) e^(f(x))
  Thus, y = e^(3x^2) ⇒ dy/dx = 6x e^(3x^2)
• y = f(x) ± g(x) ⇒ dy/dx = f'(x) ± g'(x)
• y = log(f(x)) ⇒ dy/dx = f'(x)/f(x)
  Thus, y = log(x^3 + 2x − 1) ⇒ dy/dx = (3x^2 + 2)/(x^3 + 2x − 1)
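The derivative rules above can be verified symbolically; a minimal sketch assuming SymPy is installed:

```python
import sympy as sp

# Symbolic check of the differentiation examples above.
x = sp.symbols('x')

print(sp.diff(4*x**5 + 3*x**3 + 2*x + 6, x))      # 20*x**4 + 9*x**2 + 2
print(sp.diff(4*x**5 + 3*x**3 + 2*x + 6, x, 2))   # 80*x**3 + 18*x (second derivative)
print(sp.diff(3/x, x))                            # -3/x**2
print(sp.diff(sp.exp(3*x**2), x))                 # 6*x*exp(3*x**2)
print(sp.diff(sp.log(x**3 + 2*x - 1), x))         # (3*x**2 + 2)/(x**3 + 2*x - 1)
```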
4. Differentiation and Optimization
Given y(x) = 5x^2 + 3x − 6:
• To find an optimum x* (a minimum or a maximum), set the first derivative to
  zero:
  dy/dx = 10x + 3 = 0 ⇒ x* = −3/10
• To check whether the optimum is a minimum or a maximum, compute the second
  derivative:
  d²y/dx² = 10 > 0
  A positive (negative) second derivative indicates a minimum (maximum).
  (A numerical cross-check of this optimum follows below.)
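A quick numerical cross-check of this optimum, a sketch assuming SciPy is available (minimize_scalar minimizes a function of one variable):

```python
from scipy.optimize import minimize_scalar

# Numerically minimise y(x) = 5x^2 + 3x - 6 and compare with x* = -3/10.
y = lambda x: 5*x**2 + 3*x - 6

res = minimize_scalar(y)
print(res.x)      # approximately -0.3 (= -3/10)
print(res.fun)    # the minimum value, y(-0.3) = -6.45
```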
5. Partial Differentiation
• Given a multivariate function y(x_1, …, x_n) = f(x_1, …, x_n)
• We can define and compute partial (or directional) derivatives, also called
  gradients: ∂y/∂x_1, ∂y/∂x_2, …, ∂y/∂x_n
Example:
y(x_1, x_2) = 3x_1^3 + 4x_1 − 2x_2^4 + 2x_2^2
• ∂y/∂x_1 = 9x_1^2 + 4
• ∂y/∂x_2 = −8x_2^3 + 4x_2
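These partial derivatives can be cross-checked numerically with central finite differences; a sketch with arbitrary evaluation points x1 = 1.5 and x2 = 0.5:

```python
import numpy as np

# Finite-difference check of the partial derivatives in the example above.
def y(x1, x2):
    return 3*x1**3 + 4*x1 - 2*x2**4 + 2*x2**2

x1, x2, h = 1.5, 0.5, 1e-6
dy_dx1 = (y(x1 + h, x2) - y(x1 - h, x2)) / (2 * h)   # should be close to 9*x1**2 + 4
dy_dx2 = (y(x1, x2 + h) - y(x1, x2 - h)) / (2 * h)   # should be close to -8*x2**3 + 4*x2

print(dy_dx1, 9*x1**2 + 4)       # both about 24.25
print(dy_dx2, -8*x2**3 + 4*x2)   # both about 1.0
```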
Matrix Algebra
6. Matrices
• A scalar is a number. It is the simplest matrix. It is a (1 × 1) matrix.
• A vector is a one-dimensional array of numbers
• A matrix is a two-dimensional collection or array of numbers.
• The size of a matrix is given by its numbers of rows and columns
• A (2×4) matrix M is given by:

  M = [ m_11  m_12  m_13  m_14 ]
      [ m_21  m_22  m_23  m_24 ]
6. Matrices
• A row vector of dimension (1 × C), where C is the number of columns, is given
  by a single row:
  (2.7  3.0  −1.5  0.3)
• A column vector of dimension (R × 1), where R is the number of rows, is given
  by a single column:
  [  0.3 ]
  [ −0.1 ]
  [  0.0 ]
6. Matrices
• When the number of rows equals the number of columns (R = C), the matrix is
  said to be square:

  [  0.3  0.6 ]
  [ −0.1  0.7 ]

• A matrix in which all elements are zero is known as a zero matrix:

  [ 0  0  0 ]
  [ 0  0  0 ]
6. Matrices
• A symmetric matrix is a special type of square matrix that is symmetric about
  the main diagonal: m_ij = m_ji for all i, j

  [ 1   2   4   7 ]
  [ 2  −3   6   9 ]
  [ 4   6   2  −8 ]
  [ 7   9  −8   0 ]
6. Matrices
• A diagonal matrix is a square matrix which has non-zero terms on the leading
  diagonal and zeros everywhere else:

  [ −3  0  0   0 ]
  [  0  1  0   0 ]
  [  0  0  2   0 ]
  [  0  0  0  −1 ]
6. Matrices
• A diagonal matrix with 1s on the main diagonal and zeros everywhere else is
  known as the identity matrix. It is a special type of diagonal matrix, denoted
  here by I_4:

  [ 1  0  0  0 ]
  [ 0  1  0  0 ]
  [ 0  0  1  0 ]
  [ 0  0  0  1 ]
• The identity matrix is essentially the matrix equivalent of the number one:
  MI = IM = M
7. Operations with Matrices: Multiplication
• Generally, in order to perform operations with matrices, they must be
conformable.
• To perform matrix multiplication, the number of columns of matrix A must equal
  the number of rows of matrix B.
• Suppose that matrix A is of dimension (R × C) and matrix B is of dimension
  (C × D). Their product, AB, will be of dimension (R × D).
7. Operations with Matrices: Matrix Addition and Subtraction
If A = [  0.3  0.6 ]   and   B = [ 0.2  −0.1 ]
       [ −0.1  0.7 ]             [ 0     0.3 ]

• A + B = [ 0.3 + 0.2   0.6 − 0.1 ] = [  0.5  0.5 ]
          [ −0.1 + 0    0.7 + 0.3 ]   [ −0.1  1.0 ]

• A − B = [ 0.3 − 0.2   0.6 − (−0.1) ] = [  0.1  0.7 ]
          [ −0.1 − 0    0.7 − 0.3    ]   [ −0.1  0.4 ]
7. Operations with Matrices
• Multiplying or dividing a matrix by a scalar means that every element of the
  matrix is multiplied (or divided) by that number:

  2A = 2 [  0.3  0.6 ] = [  0.6  1.2 ]
         [ −0.1  0.7 ]   [ −0.2  1.4 ]

More generally, where c is a scalar:
• A + B = B + A
• A + 0 = A
• cA = Ac
• c(A + B) = cA + cB
• A0 = 0A = 0
7. Operations with Matrices
• Multiplying two matrices requires that the number of columns of the first
  matrix equal the number of rows of the second matrix; in general AB ≠ BA.

Example:
      [ 1  2 ]
  A = [ 7  3 ]      B = [ 0  2  4  9 ]
      [ 1  6 ]          [ 6  3  0  2 ]
      (3 × 2)              (2 × 4)

       [ (1×0 + 2×6)  (1×2 + 2×3)  (1×4 + 2×0)  (1×9 + 2×2) ]
  AB = [ (7×0 + 3×6)  (7×2 + 3×3)  (7×4 + 3×0)  (7×9 + 3×2) ]
       [ (1×0 + 6×6)  (1×2 + 6×3)  (1×4 + 6×0)  (1×9 + 6×2) ]
      (3 × 4)

• AB ≠ BA: the order of the matrices cannot be reversed here, because in the
  reverse order they would be non-conformable.
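The same multiplication in NumPy, as a sketch (the `@` operator performs matrix multiplication):

```python
import numpy as np

# The (3x2) x (2x4) product worked out above.
A = np.array([[1, 2],
              [7, 3],
              [1, 6]])
B = np.array([[0, 2, 4, 9],
              [6, 3, 0, 2]])

print(A @ B)          # the (3x4) product
print((A @ B).shape)  # (3, 4)
# B @ A would raise an error: in that order the matrices are non-conformable.
```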
7. Operations with Matrices
• Matrices cannot be divided. Instead, we multiply by the inverse
• The transpose of a matrix, written 𝐴′ or 𝐴𝑇 , is the matrix obtained by
transposing (switching) the rows and columns of the matrix
      [ 1  2 ]
  A = [ 7  3 ]  ⇒  A' = [ 1  7  1 ]
      [ 1  6 ]          [ 2  3  6 ]
      (3 × 2)              (2 × 3)

• If A is of dimension R × C, A' will be of dimension C × R
8. Rank of a matrix
• The rank of a matrix is the maximum number of linearly independent rows (or
  columns) in the matrix.

• rank of [ 3  4 ] is 2 ⇒ a full-rank matrix, since the rank equals the number
          [ 7  9 ]   of columns

• rank of [ 3  6 ] is 1 ⇒ a rank-deficient / short-rank matrix (singular)
          [ 2  4 ]

• A rank-deficient (singular) matrix cannot be inverted.
8. Rank of a matrix

• Rank(A) = Rank(A')
• Rank(AB) ≤ min(Rank(A), Rank(B))
• Rank(A'A) = Rank(AA') = Rank(A)


9. The Trace of a matrix
• The trace of a square matrix is the sum of the terms on its leading diagonal:

  A = [ 3  4 ]  ⇒  Tr(A) = 3 + 9 = 12
      [ 7  9 ]
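A short NumPy sketch of transpose, rank and trace for the small matrices used in this section:

```python
import numpy as np

# Transpose, rank and trace of the example matrices above.
A = np.array([[1, 2],
              [7, 3],
              [1, 6]])
print(A.T)                          # the (2x3) transpose

F = np.array([[3, 4], [7, 9]])
S = np.array([[3, 6], [2, 4]])
print(np.linalg.matrix_rank(F))     # 2: full rank, invertible
print(np.linalg.matrix_rank(S))     # 1: rank deficient (singular)
print(np.trace(F))                  # 3 + 9 = 12
```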
Probability and Statistics
10. Probability and Probability Distributions
• A random variable is one that can take on any value from a given set.
This value is determined in part by chance
• The mean of a random variable is also known as its expected value
written as E(𝑦)
• E(c) = c (The expected value of a constant is the constant)
• E(c𝑦) = c E(𝑦)
• E(c𝑦 + 𝑑) = c E(𝑦) + 𝑑
• For two independent random variables:
E(𝑦1 𝑦2 ) = E(𝑦1 ) E(𝑦2 )
10. Probability and Probability Distributions
• For a discrete random variable: E(y) = Σ y p(y)

Example: the roll of a die, where each value 1, 2, …, 6 occurs with
probability 1/6:

  E(y) = 1 × 1/6 + 2 × 1/6 + 3 × 1/6 + 4 × 1/6 + 5 × 1/6 + 6 × 1/6 = 3.5

• Note that in any one roll of a die, you will never see 3.5
• So what does this value mean?
• If you roll the die many (many) times and average the realizations, you will
  get a number that is very close to 3.5
10. Probability and Probability Distributions
• For a continuous random variable:
  E(y) = ∫_{−∞}^{+∞} y f(y) dy
• The variance of a random variable y is usually written var(y):
  var(y) = E[(y − E(y))^2]
• The variance of a constant is zero: var(c) = 0
• For constants c and d: var(cy + d) = c^2 var(y)
• For two independent random variables y_1 and y_2:
  var(c y_1 + d y_2) = c^2 var(y_1) + d^2 var(y_2)
10. Probability and Probability Distributions
• The covariance between two random variables y_1 and y_2 is given by:
  cov(y_1, y_2) = E[(y_1 − E(y_1))(y_2 − E(y_2))]
• For two independent random variables y_1 and y_2: cov(y_1, y_2) = 0
• For four constants c, d, e and f:
  cov(c + d y_1, e + f y_2) = d f cov(y_1, y_2)
• For two non-independent random variables y_1 and y_2:
  var(c y_1 + d y_2) = c^2 var(y_1) + d^2 var(y_2) + 2 c d cov(y_1, y_2)
10. Probability and Probability Distributions
Example: going back to the roll of a die, how do we calculate its variance?

  var(y) = (1 − 3.5)^2 × 1/6 + (2 − 3.5)^2 × 1/6 + … + (6 − 3.5)^2 × 1/6 ≈ 2.92

(Both the mean and the variance of the die are checked by simulation below.)
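A sketch that computes the die's mean and variance exactly and also by simulation (the one million simulated rolls is an arbitrary choice):

```python
import numpy as np

# Exact and simulated mean/variance for the die-roll example above.
values = np.arange(1, 7)
probs = np.full(6, 1/6)

mean = np.sum(values * probs)                # 3.5
var = np.sum((values - mean)**2 * probs)     # about 2.92

rng = np.random.default_rng(0)
rolls = rng.integers(1, 7, size=1_000_000)   # many simulated rolls
print(mean, rolls.mean())                    # both close to 3.5
print(var, rolls.var())                      # both close to 2.92
```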
10. Probability and Probability Distributions
• The expected return on an asset is an expected value.
• For example, the Capital Asset Pricing Model (CAPM) states that the
expected return on an asset is a function of the asset return’s
covariance (we’ll come to that) with a single factor which is the
expected return on the market
  E(R_i) = R_f + β_i [E(R_m) − R_f],
  where R_i is the return on the asset (or portfolio), R_m is the return on the
  market and R_f is the risk-free rate.
• The risk of an asset or portfolio is commonly measured using the
variance (or more precisely, the standard deviation/volatility) of its
return.
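A minimal CAPM calculation; the inputs below are made-up, purely illustrative values rather than figures from the slides:

```python
# CAPM expected return for illustrative inputs.
r_f = 0.03       # risk-free rate (assumed)
beta_i = 1.2     # asset beta (assumed)
e_r_m = 0.08     # expected market return (assumed)

e_r_i = r_f + beta_i * (e_r_m - r_f)
print(e_r_i)     # 0.03 + 1.2 * 0.05 = 0.09, i.e. an expected return of 9%
```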
10. Probability and Probability Distributions
• A random variable can be either continuous or discrete
• A probability density function (pdf) describes the distribution of a
  continuous random variable
• The most commonly used distribution is the normal or Gaussian
• If y ~ N(μ, σ^2), then any linear transformation of y is also normally
  distributed:
  a + by ~ N(a + bμ, b^2 σ^2)
10. Probability and Probability Distributions
• The probability density function (pdf) of y is given by:

  f(y) = (1 / √(2πσ^2)) e^(−(y − μ)^2 / (2σ^2))

• The standard normally distributed random variable is:
  z = (y − μ) / σ ~ N(0, 1)
• Stock returns are thought to be normally distributed (well, big caveat here)
• Normally distributed returns imply that stock prices are log-normally
  distributed
10. Probability and Probability Distributions
• The normal distribution emerges commonly.
Example: Roll of two dice
10. Probability and Probability Distributions
• The cumulative distribution function (cdf):
  P(y ≤ y_0) = F(y_0)

[Figure: the standard normal pdf together with its cdf, which rises from 0 to 1
and equals 0.5 at z = 0]
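A small sketch of the standard normal pdf and cdf, assuming SciPy is available; the values of mu, sigma and y used for standardising are arbitrary:

```python
from scipy.stats import norm

# pdf and cdf of the standard normal.
print(norm.pdf(0))       # about 0.3989, the peak of the density
print(norm.cdf(0))       # 0.5: half the probability mass lies below zero
print(norm.cdf(1.96))    # about 0.975

# Standardising y ~ N(mu, sigma^2): z = (y - mu) / sigma ~ N(0, 1)
mu, sigma, y = 0.05, 0.2, 0.15
print((y - mu) / sigma)  # z = 0.5
```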
11. Central Limit Theorem
• For a random sample of size N, (y_1, y_2, …, y_N), drawn from a population
  that is normally distributed with mean μ and variance σ^2:

  ȳ ~ N(μ, σ^2 / N)
• CLT says that the sampling distribution of the mean of any random
sample of observations will tend to the normal distribution with mean
𝜇 as N → ∞
• The sample mean 𝑦ത will follow a normal distribution even if the
original observations ( 𝑦1 , 𝑦2 , …, 𝑦𝑛 ) did not follow a normal
distribution
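A simulation sketch of the CLT using a uniform (clearly non-normal) population; the sample size N and the number of samples are arbitrary choices:

```python
import numpy as np

# CLT illustration: sample means of a uniform population look approximately
# normal, with variance sigma^2 / N.
rng = np.random.default_rng(1)
N, n_samples = 100, 10_000
population_var = 1 / 12                         # variance of Uniform(0, 1)

sample_means = rng.uniform(0, 1, size=(n_samples, N)).mean(axis=1)
print(sample_means.mean())                      # close to mu = 0.5
print(sample_means.var(), population_var / N)   # both close to sigma^2 / N
```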
12. Other Statistical Distributions
• The sum of squared standard normally distributed random variables follows a
  χ^2 distribution:
  z_1^2 + z_2^2 + … + z_n^2 ~ χ^2(n)
• n: degrees of freedom, which controls the shape of the distribution
• The z's are independent standard normal variables
• The χ^2 distribution takes only positive values
12. Other Statistical Distributions
• Suppose y_1 ~ χ^2(n_1) and y_2 ~ χ^2(n_2) are two independent chi-squared
  random variables with n_1 and n_2 degrees of freedom
• Then, the following ratio follows an F distribution:

  (y_1 / n_1) / (y_2 / n_2) ~ F(n_1, n_2)

• The t-distribution is given by:

  z / √(y_1 / n_1) ~ t(n_1)

• The normal distribution is a special case of the t distribution: the t
  distribution with infinite degrees of freedom → the normal distribution
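A simulation sketch of these relationships, building chi-squared, F and t draws from standard normals (the degrees of freedom n1 and n2 and the number of draws are arbitrary):

```python
import numpy as np

# Construct chi-squared, F and t draws from standard normals, mirroring the
# definitions above.
rng = np.random.default_rng(2)
n1, n2, size = 5, 10, 100_000

chi2_n1 = (rng.standard_normal((size, n1))**2).sum(axis=1)   # chi^2 with n1 df
chi2_n2 = (rng.standard_normal((size, n2))**2).sum(axis=1)   # chi^2 with n2 df

f_draws = (chi2_n1 / n1) / (chi2_n2 / n2)                    # F(n1, n2)
t_draws = rng.standard_normal(size) / np.sqrt(chi2_n1 / n1)  # t(n1)

print(chi2_n1.mean())   # close to n1: the mean of a chi-squared is its df
print(t_draws.mean())   # close to 0
```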
12. Other Statistical Distributions: t-distribution
12. Other Statistical Distributions: F-distribution
12. Other Statistical Distributions: Chi-Squared Distribution
13. Measures of Central Tendency
• Arithmetic mean: r_A = (1/N) Σ_{i=1}^{N} r_i, where r_i is a series of length N
• Mode: the most frequently occurring value in a series. It is the easiest to
  compute but is not suitable for continuous, non-integer data
• Median: the middle value in a series when the elements are arranged in
  ascending order (i.e., it is the 50th percentile). It is robust to outliers
• Geometric mean (compared with the arithmetic mean in the sketch below):

  R_G = [(1 + R_1)(1 + R_2) … (1 + R_N)]^(1/N) − 1

  R_G ≈ r_A − (1/2) σ^2
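A short sketch comparing the arithmetic and geometric means for a made-up return series:

```python
import numpy as np

# Arithmetic vs geometric mean for an illustrative series of returns.
r = np.array([0.10, -0.05, 0.08, 0.02])

r_A = r.mean()                            # arithmetic mean
R_G = np.prod(1 + r)**(1 / len(r)) - 1    # geometric mean
print(r_A, R_G)                           # R_G is slightly below r_A
print(r_A - 0.5 * r.var(ddof=1))          # the approximation r_A - sigma^2 / 2
```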
13. Measures of Central Tendency
Examples:
• Consider the following numbers: {2, 4, −6, 7, 1, 0, 20}
  The mean is r_A = (2 + 4 − 6 + 7 + 1 + 0 + 20) / 7 = 4
• Consider the following numbers: {3, 7, 11, 15, 22, 24}. With an even number of
  observations, the two middle values are 11 and 15, so the median is their
  mean: (11 + 15)/2 = 13
• Consider the following numbers: {3,7,11,15,15,22,24}. The mode is
15.
14. Measures of Spread
• The (sample) variance of a random variable is given by:

  σ^2 = Σ(y_i − ȳ)^2 / (T − 1)

• where T is the sample size

• The standard deviation of a random variable is the square root of its
  variance:

  σ = √[ Σ(y_i − ȳ)^2 / (T − 1) ]
15. Higher Moments
• Skewness = [ (1/(T − 1)) Σ(y_i − ȳ)^3 ] / (σ^2)^(3/2)
• It is the third moment; it measures the symmetry of the distribution
• Kurtosis = [ (1/(T − 1)) Σ(y_i − ȳ)^4 ] / (σ^2)^2
• It is the fourth moment; it measures the fatness of the tails (or, more
  loosely, the "peakedness" of the distribution)
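A sketch that applies these formulas to an arbitrary simulated series; the formulas are coded directly, to match the definitions above, rather than taken from a library:

```python
import numpy as np

# Sample variance, skewness and kurtosis using the formulas above.
rng = np.random.default_rng(3)
y = rng.standard_normal(100_000)   # arbitrary example data
T = len(y)

var = ((y - y.mean())**2).sum() / (T - 1)
skew = ((y - y.mean())**3).sum() / (T - 1) / var**1.5
kurt = ((y - y.mean())**4).sum() / (T - 1) / var**2

print(var, skew, kurt)   # close to 1, 0 and 3 for normally distributed data
```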
15. Higher Moments
[Figure: two density plots f(x) — one symmetric and mesokurtic with
skewness = 0, and one positively skewed, where mean > median > mode]
15. Higher Moments
• Negatively skewed: mode > median > mean
• A normal distribution has kurtosis = 3 and skewness = 0, so its excess
  kurtosis = 0
• One can fully characterize the normal distribution by its first two
moments
15. Higher Moments
• A distribution with a kurtosis larger than 3 (i.e., which exhibits a
positive excess kurtosis) is referred to as leptokurtic or fat tailed
• A distribution with a kurtosis lower than 3 (i.e., which exhibits a
negative excess kurtosis) is referred to as platykurtic
• A distribution with a kurtosis that is exactly equal to 3 (i.e., which
exhibits zero excess kurtosis) is referred to as mesokurtic
15. Higher Moments
[Figure: two densities compared — the red distribution has fat tails
(leptokurtic)]
16. Measures of Association
• The sample covariance of two random variables is given by:

  σ_xy = Σ(x_i − x̄)(y_i − ȳ) / (T − 1)

• The numerical value of the covariance is hard to interpret on its own, because
  it depends on the units of measurement of x and y
• The correlation coefficient (sample correlation) is unit-free (see the sketch
  below):

  ρ_xy = Σ(x_i − x̄)(y_i − ȳ) / [(T − 1) σ_x σ_y] = σ_xy / (σ_x σ_y)

• σ_x = standard deviation of x, σ_y = standard deviation of y
• −1 (perfect negative association) ≤ ρ_xy ≤ +1 (perfect positive association)