Quantitative Methods of Finance
Mathematical and Statistical Foundations
Summations, Products, Logarithms and Differentiation
1. Logarithms
• Log is the power to which the base must be raised to obtain a given number
• $2^3 = 8$ can be written as $\log_2 8 = 3$
• We will work with natural logarithms (log with base e)
1. Logarithms
For two variables x and y:
• $\ln(xy) = \ln(x) + \ln(y)$
• $\ln(x/y) = \ln(x) - \ln(y)$
• $\ln(y^c) = c\,\ln(y)$
• $\ln(1/y) = -\ln(y)$
• $\ln(e^x) = e^{\ln x} = x$
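These rules are easy to verify numerically. A minimal sketch using NumPy (not part of the original slides; the values of x, y and c are arbitrary):

```python
import numpy as np

# Arbitrary positive example values (illustrative only)
x, y, c = 2.5, 4.0, 3.0

assert np.isclose(np.log(x * y), np.log(x) + np.log(y))  # ln(xy) = ln(x) + ln(y)
assert np.isclose(np.log(x / y), np.log(x) - np.log(y))  # ln(x/y) = ln(x) - ln(y)
assert np.isclose(np.log(y ** c), c * np.log(y))         # ln(y^c) = c ln(y)
assert np.isclose(np.log(1 / y), -np.log(y))             # ln(1/y) = -ln(y)
assert np.isclose(np.log(np.exp(x)), x)                  # ln(e^x) = x
```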
2. Sigma Notation (Summation)
• The summation (sigma) operator adds terms, e.g. $1 + 2 + 3 = 6$
• $\sum_{i=1}^{4} x_i = x_1 + x_2 + x_3 + x_4$
• $\sum_{i=1}^{n} x_i + \sum_{i=1}^{n} z_i = \sum_{i=1}^{n} (x_i + z_i)$
• $\sum_{i=1}^{n} c\,x_i = c \sum_{i=1}^{n} x_i$, where c is a constant
• $\sum_{i=1}^{n} x_i z_i \neq \sum_{i=1}^{n} x_i \sum_{i=1}^{n} z_i$
In fact, $\sum_{i=1}^{n} x_i z_i = x_1 z_1 + x_2 z_2 + \dots + x_n z_n$, whereas
$\sum_{i=1}^{n} x_i \sum_{i=1}^{n} z_i = (x_1 + x_2 + \dots + x_n)(z_1 + z_2 + \dots + z_n)$
2. Sigma Notation
• The sum of $n$ identical numbers: $\sum_{i=1}^{n} x = nx$
• $\sum_{i=1}^{n} x_i = x_1 + x_2 + \dots + x_n = n\bar{x}$, where $\bar{x}$ is the sample mean
• The double summation $\sum_{i=1}^{n} \sum_{j=1}^{m} x_{ij}$ adds up $x_{ij}$ over both indices
3. Pi Notation
• $\prod_{i=1}^{n} x_i = x_1 x_2 \cdots x_n$
• $\prod_{i=1}^{n} c\,x_i = c^n \prod_{i=1}^{n} x_i$
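The summation and product rules can be checked the same way. A sketch using NumPy (not from the slides; the vectors x and z and the constant c are arbitrary examples):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
z = np.array([0.5, -1.0, 2.0, 1.5])
c, n = 3.0, 4

assert np.isclose(np.sum(x) + np.sum(z), np.sum(x + z))   # sum x_i + sum z_i = sum (x_i + z_i)
assert np.isclose(np.sum(c * x), c * np.sum(x))           # sum c x_i = c sum x_i
print(np.sum(x * z), np.sum(x) * np.sum(z))               # generally NOT equal
assert np.isclose(np.prod(c * x), c ** n * np.prod(x))    # prod c x_i = c^n prod x_i
```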
4. Differentiation
• If $y(x) = 4x^5 + 3x^3 + 2x + 6$, then
$\frac{dy}{dx} = 20x^4 + 9x^2 + 2$ and $\frac{d^2 y}{dx^2} = 80x^3 + 18x$
• $y = 4x^3 \Rightarrow \frac{dy}{dx} = 12x^2$
• $y = 3/x = 3x^{-1} \Rightarrow \frac{dy}{dx} = -\frac{3}{x^2}$
• $y = \log(x) \Rightarrow \frac{dy}{dx} = \frac{1}{x}$
4. Differentiation
• $y = e^x \Rightarrow \frac{dy}{dx} = e^x$
• In general, $y = e^{f(x)} \Rightarrow \frac{dy}{dx} = f'(x)\,e^{f(x)}$
Thus, $y = e^{3x^2} \Rightarrow \frac{dy}{dx} = 6x\,e^{3x^2}$
• $y = f(x) \pm g(x) \Rightarrow \frac{dy}{dx} = f'(x) \pm g'(x)$
• $y = \log(f(x)) \Rightarrow \frac{d(\log f(x))}{dx} = \frac{f'(x)}{f(x)}$
Thus, $y = \log(x^3 + 2x - 1) \Rightarrow \frac{d(\log f(x))}{dx} = \frac{3x^2 + 2}{x^3 + 2x - 1}$
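Each of these derivative rules can be reproduced symbolically. A minimal sketch using SymPy (not part of the slides):

```python
import sympy as sp

x = sp.symbols('x')

print(sp.diff(4*x**5 + 3*x**3 + 2*x + 6, x))             # 20*x**4 + 9*x**2 + 2
print(sp.diff(4*x**5 + 3*x**3 + 2*x + 6, x, 2))          # 80*x**3 + 18*x
print(sp.diff(sp.exp(3*x**2), x))                        # 6*x*exp(3*x**2)
print(sp.simplify(sp.diff(sp.log(x**3 + 2*x - 1), x)))   # (3*x**2 + 2)/(x**3 + 2*x - 1)
```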
4. Differentiation and Optimization
Given $y(x) = 5x^2 + 3x - 6$:
• In order to find an optimum, $x^*$ (minimum or maximum), the first derivative should be set to zero:
$\frac{dy}{dx} = 10x + 3 = 0 \Rightarrow x^* = -\frac{3}{10}$
• To check whether the optimum is a minimum or a maximum, compute the second derivative:
$\frac{d^2 y}{dx^2} = 10 > 0$
• A positive (negative) second derivative indicates a minimum (maximum)
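The first- and second-order conditions can also be obtained symbolically. A sketch using SymPy (not from the slides):

```python
import sympy as sp

x = sp.symbols('x')
y = 5*x**2 + 3*x - 6

first = sp.diff(y, x)             # 10*x + 3
x_star = sp.solve(first, x)[0]    # -3/10
second = sp.diff(y, x, 2)         # 10 > 0, so x* is a minimum
print(x_star, second)
```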
5. Partial Differentiation
• Given a multivariate function $y(x_1, \dots, x_n) = f(x_1, \dots, x_n)$
• We can define and compute partial derivatives (the vector collecting all of them is the gradient): $\frac{\partial y}{\partial x_1}, \frac{\partial y}{\partial x_2}, \dots, \frac{\partial y}{\partial x_n}$
Example:
$y(x_1, x_2) = 3x_1^3 + 4x_1 - 2x_2^4 + 2x_2^2$
• $\frac{\partial y}{\partial x_1} = 9x_1^2 + 4$
• $\frac{\partial y}{\partial x_2} = -8x_2^3 + 4x_2$
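A SymPy sketch (not part of the slides) reproducing the two partial derivatives of the example:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
y = 3*x1**3 + 4*x1 - 2*x2**4 + 2*x2**2

print(sp.diff(y, x1))   # 9*x1**2 + 4
print(sp.diff(y, x2))   # -8*x2**3 + 4*x2
```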
Matrix Algebra
6. Matrices
• A scalar is a number. It is the simplest matrix. It is a (1 × 1) matrix.
• A vector is a one-dimensional array of numbers
• A matrix is a two-dimensional collection or array of numbers.
• The size of a matrix is given by its numbers of rows and columns
• A (2⨯4) matrix has 2 rows and 4 columns; a square (4⨯4) matrix M, for example, is given by:
$M = \begin{pmatrix} 1 & 2 & 4 & 7 \\ 2 & -3 & 6 & 9 \\ 4 & 6 & 2 & -8 \\ 7 & 9 & -8 & 0 \end{pmatrix}$
6. Matrices
• A diagonal matrix is a square matrix which has non-zero terms on the
leading diagonal and zeros everywhere else
$\begin{pmatrix} -3 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix}$
6. Matrices
• A diagonal matrix with 1s on the main diagonal and zeros everywhere else is known as the identity matrix; it is a special type of diagonal matrix, and the (4⨯4) identity matrix is denoted $I_4$:
$I_4 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$
• The identity matrix is essentially the matrix equivalent of the number
one
$MI = IM = M$
7. Operations with Matrices: Multiplication
• Generally, in order to perform operations with matrices, they must be
conformable.
• To form the matrix product AB, the number of columns of matrix A must equal the number of rows of matrix B.
• Suppose that matrix A is of dimension $(R \times C)$ and matrix B is of dimension $(C \times D)$. Their product, AB, will be of dimension $(R \times D)$.
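A short NumPy illustration of conformability (the dimensions below are arbitrary examples, not from the slides):

```python
import numpy as np

A = np.ones((2, 4))    # R x C = 2 x 4
B = np.ones((4, 3))    # C x D = 4 x 3

print((A @ B).shape)   # (2, 3): the product AB is R x D
# B @ A would raise an error: in that order the matrices are not conformable
```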
7. Operations with Matrices: Matrix Addition and Subtraction
If $A = \begin{pmatrix} 0.3 & 0.6 \\ -0.1 & 0.7 \end{pmatrix}$ and $B = \begin{pmatrix} 0.2 & -0.1 \\ 0 & 0.3 \end{pmatrix}$, addition and subtraction are carried out element by element, e.g. $A + B = \begin{pmatrix} 0.5 & 0.5 \\ -0.1 & 1.0 \end{pmatrix}$
• Rank($A$) = Rank($A'$)
• $A = \begin{pmatrix} 3 & 4 \\ 7 & 9 \end{pmatrix} \Rightarrow \mathrm{Tr}(A) = 3 + 9 = 12$
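A NumPy sketch (not from the slides) illustrating element-by-element addition of A and B above, plus the trace and rank facts; the trace-example matrix is named C here simply to avoid reusing the name A:

```python
import numpy as np

A = np.array([[0.3, 0.6], [-0.1, 0.7]])
B = np.array([[0.2, -0.1], [0.0, 0.3]])
print(A + B)                              # element-by-element addition

C = np.array([[3, 4], [7, 9]])
print(np.trace(C))                        # 12 = 3 + 9
print(np.linalg.matrix_rank(C),
      np.linalg.matrix_rank(C.T))         # Rank(A) = Rank(A')
```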
Probability and Statistics
10. Probability and Probability Distributions
• A random variable is one that can take on any value from a given set.
This value is determined in part by chance
• The mean of a random variable is also known as its expected value
written as E(𝑦)
• E(c) = c (The expected value of a constant is the constant)
• E(c𝑦) = c E(𝑦)
• E(c𝑦 + 𝑑) = c E(𝑦) + 𝑑
• For two independent random variables:
E(𝑦1 𝑦2 ) = E(𝑦1 ) E(𝑦2 )
10. Probability and Probability Distributions
• For a discrete random variable: $E(y) = \sum y\,p(y)$
Example: roll of a die, where each of the values 1 to 6 has probability $\tfrac{1}{6}$:
$E(y) = 1 \times \tfrac{1}{6} + 2 \times \tfrac{1}{6} + 3 \times \tfrac{1}{6} + 4 \times \tfrac{1}{6} + 5 \times \tfrac{1}{6} + 6 \times \tfrac{1}{6} = 3.5$
• Note that in any one roll of a die, you will never see 3.5
• So what does this value mean?
• If you roll the die many (many) times and average the realizations, you will get a number that is very close to 3.5, as in the simulation sketch below
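A sketch of that simulation using NumPy (not part of the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
rolls = rng.integers(1, 7, size=1_000_000)  # one million fair die rolls (values 1 to 6)
print(rolls.mean())                          # very close to 3.5
```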
10. Probability and Probability Distributions
• For a continuous random variable:
$E(y) = \int_{-\infty}^{\infty} y\,f(y)\,dy$
• The variance of a random variable $y$ is usually written var($y$):
$\mathrm{var}(y) = E\left[(y - E(y))^2\right]$
• The variance of a constant is zero: var(c) = 0
• For c and d constants, $\mathrm{var}(cy + d) = c^2\,\mathrm{var}(y)$
• For two independent random variables ($y_1$ and $y_2$):
$\mathrm{var}(cy_1 + dy_2) = c^2\,\mathrm{var}(y_1) + d^2\,\mathrm{var}(y_2)$
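The rule $\mathrm{var}(cy + d) = c^2\,\mathrm{var}(y)$ can be checked by simulation. A sketch (not from the slides; c, d and the distribution of y are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(size=500_000)   # arbitrary random variable
c, d = 3.0, 7.0

print(np.var(c * y + d), c**2 * np.var(y))   # the two values are (almost) identical
```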
10. Probability and Probability Distributions
• The covariance between two random variables, $y_1$ and $y_2$, is given by
$\mathrm{cov}(y_1, y_2) = E\left[(y_1 - E(y_1))(y_2 - E(y_2))\right]$
• For two independent random variables, $y_1$ and $y_2$, $\mathrm{cov}(y_1, y_2) = 0$
10. Probability and Probability Distributions
• The expected return on an asset is an expected value.
• For example, the Capital Asset Pricing Model (CAPM) states that the expected return on an asset is a function of the covariance (we'll come to that) of the asset's return with a single factor, the return on the market:
$E(R_i) = R_f + \beta_i\left[E(R_m) - R_f\right]$,
where $R_i$ is the return on the asset (or portfolio), $R_m$ is the return on the market and $R_f$ is the risk-free rate.
• The risk of an asset or portfolio is commonly measured using the
variance (or more precisely, the standard deviation/volatility) of its
return.
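As a numerical illustration of the CAPM relation (the inputs below are purely hypothetical, not taken from the slides):

```python
# Hypothetical inputs: risk-free rate, asset beta, expected market return
r_f = 0.02
beta_i = 1.2
e_r_m = 0.08

e_r_i = r_f + beta_i * (e_r_m - r_f)   # E(R_i) = R_f + beta_i [E(R_m) - R_f]
print(e_r_i)                           # 0.092, i.e. an expected return of 9.2%
```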
10. Probability and Probability Distributions
• A random variable may be either continuous or discrete
• The probability density function (pdf) describes the distribution of a continuous random variable
• The most commonly used distribution is the normal or Gaussian distribution
• If $y \sim N(\mu, \sigma^2)$, then any linear transformation of $y$ is also normally distributed:
$a + by \sim N(b\mu + a,\ b^2\sigma^2)$
10. Probability and Probability Distributions
• The pdf of a normally distributed random variable $y$ is given by:
$f(y) = \frac{1}{\sqrt{2\pi\sigma^2}}\,e^{-\frac{(y-\mu)^2}{2\sigma^2}}$
• The standard normally distributed random variable is:
$z = \frac{y - \mu}{\sigma} \sim N(0, 1)$
• Stock returns are thought to be normally distributed (well, big caveat here)
• Normally distributed returns imply that stock prices are log-normally distributed
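A brief check with SciPy (a sketch, not from the slides; the values of μ, σ and y are arbitrary):

```python
import numpy as np
from scipy.stats import norm

mu, sigma, y = 0.05, 0.2, 0.15   # arbitrary mean, standard deviation and evaluation point

# pdf from the formula above vs. the library implementation
pdf_formula = np.exp(-(y - mu)**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
print(pdf_formula, norm.pdf(y, loc=mu, scale=sigma))     # identical

z = (y - mu) / sigma                                     # standardization
print(norm.cdf(y, loc=mu, scale=sigma), norm.cdf(z))     # identical: P(Y <= y) = P(Z <= z)
```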
10. Probability and Probability Distributions
• The normal distribution emerges commonly.
Example: Roll of two dice
10. Probability and Probability Distributions
• Cumulative distribution function (cdf): $P(y \leq y_0) = F(y_0)$
[Figure: pdf and cdf of the standard normal distribution, plotted for $z$ between −3 and 3; the cdf rises from 0 towards 1, passing through 0.5 at $z = 0$]
11. Central Limit Theorem
• For a random sample of size N, $(y_1, y_2, \dots, y_N)$, drawn from a population that is normally distributed with mean $\mu$ and variance $\sigma^2$:
$\bar{y} \sim N\!\left(\mu, \frac{\sigma^2}{N}\right)$
• The CLT says that the sampling distribution of the mean of any random sample of observations will tend to the normal distribution with mean $\mu$ as $N \to \infty$
• The sample mean $\bar{y}$ will follow a normal distribution even if the original observations $(y_1, y_2, \dots, y_N)$ did not follow a normal distribution
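A small CLT simulation (a sketch, not from the slides): even when the underlying observations are clearly non-normal (here exponential, with population mean 1 and variance 1), the sample mean behaves like a normal variable with variance σ²/N:

```python
import numpy as np

rng = np.random.default_rng(2)
N, n_samples = 50, 10_000

# 10,000 sample means, each computed from N = 50 exponential observations
means = rng.exponential(scale=1.0, size=(n_samples, N)).mean(axis=1)

print(means.mean())   # close to the population mean, 1.0
print(means.var())    # close to sigma^2 / N = 1/50 = 0.02
```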
12. Other Statistical Distributions
• The sum of squared standard normally distributed random variables follows a $\chi^2$ distribution:
$z_1^2 + z_2^2 + \dots + z_n^2 \sim \chi^2(n)$
• $n$: degrees of freedom, controls the shape of the distribution
• The z's are independent standard normal variables
• $\chi^2$ takes only positive values
12. Other Statistical Distributions
• Suppose $y_1 \sim \chi^2(n_1)$ and $y_2 \sim \chi^2(n_2)$ are two independent chi-squared random variables with $n_1$ and $n_2$ degrees of freedom
• Then the ratio of each variable divided by its degrees of freedom will follow an F distribution:
$\frac{y_1 / n_1}{y_2 / n_2} \sim F(n_1, n_2)$
• The t-distribution is given by the ratio of a standard normal variable to the square root of an independent chi-squared variable divided by its degrees of freedom:
$\frac{Z}{\sqrt{y_1 / n_1}} \sim t(n_1)$
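The chi-squared building block can be illustrated by simulation. A sketch using NumPy (not from the slides; n = 5 is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5

# Sum of n squared independent standard normals: should behave like chi-squared(n),
# which has mean n and variance 2n
chi2_draws = (rng.normal(size=(100_000, n)) ** 2).sum(axis=1)
print(chi2_draws.mean(), chi2_draws.var())   # close to 5 and 10
```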
• The sample variance is given by:
$\sigma^2 = \frac{\sum (y_i - \bar{y})^2}{T - 1}$
• The standard deviation of a random variable is the square root of its variance:
$\sigma = \sqrt{\frac{\sum (y_i - \bar{y})^2}{T - 1}}$
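In NumPy, the T − 1 divisor corresponds to the ddof=1 argument. A sketch (not from the slides; the data vector is an arbitrary example):

```python
import numpy as np

y = np.array([0.02, -0.01, 0.03, 0.00, 0.05])   # arbitrary example data

var_hat = np.var(y, ddof=1)   # sum of squared deviations divided by T - 1
std_hat = np.std(y, ddof=1)   # its square root
print(var_hat, std_hat)
```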
15. Higher Moments
• Skewness $= \dfrac{\frac{1}{T-1}\sum (y_i - \bar{y})^3}{(\sigma^2)^{3/2}}$
• It is the (standardized) third moment; it measures the degree of asymmetry of a distribution
• Kurtosis $= \dfrac{\frac{1}{T-1}\sum (y_i - \bar{y})^4}{(\sigma^2)^2}$
• It is the (standardized) fourth moment; it measures the fatness of the tails (or, more loosely, the "peakedness" of the distribution)
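SciPy reports these moments directly; note that scipy.stats.kurtosis returns excess kurtosis (kurtosis − 3) by default. A sketch (not from the slides) using simulated normal data:

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(4)
y = rng.normal(size=100_000)      # normal data: skewness ~ 0, kurtosis ~ 3

print(skew(y))                    # close to 0
print(kurtosis(y, fisher=False))  # close to 3 (fisher=True would give excess kurtosis)
```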
15. Higher Moments
[Figure: two densities. Left: a symmetric, mesokurtic distribution (skewness = 0). Right: a positively skewed distribution, for which mean > median > mode.]
15. Higher Moments
• Negatively skewed: mode > median > mean
• A normal distribution has kurtosis = 3 and skewness = 0, so its excess kurtosis (kurtosis − 3) is equal to 0
• One can fully characterize the normal distribution by its first two moments
15. Higher Moments
• A distribution with a kurtosis larger than 3 (i.e., which exhibits a
positive excess kurtosis) is referred to as leptokurtic or fat tailed
• A distribution with a kurtosis lower than 3 (i.e., which exhibits a
negative excess kurtosis) is referred to as platykurtic
• A distribution with a kurtosis that is exactly equal to 3 (i.e., which
exhibits zero excess kurtosis) is referred to as mesokurtic
15. Higher Moments
[Figure: the red distribution has fat tails (leptokurtic)]
16. Measures of Association
• The sample covariance of two random variables is given by:
$\sigma_{xy} = \frac{\sum (x_i - \bar{x})(y_i - \bar{y})}{T - 1}$
• The numerical value of the covariance has no direct interpretation (its magnitude depends on the units of $x$ and $y$)
• Correlation coefficient (sample correlation):
$\rho_{xy} = \frac{\sum (x_i - \bar{x})(y_i - \bar{y})}{(T-1)\,\sigma_x \sigma_y} = \frac{\sigma_{xy}}{\sigma_x \sigma_y}$
• $\sigma_x$ = standard deviation of $x$, $\sigma_y$ = standard deviation of $y$
• $-1$ (perfect negative association) $\leq \rho_{xy} \leq +1$ (perfect positive association)
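A NumPy sketch (not from the slides) computing the sample covariance and correlation for two arbitrary example return series:

```python
import numpy as np

x = np.array([0.01, 0.03, -0.02, 0.04, 0.00])   # arbitrary example returns
y = np.array([0.02, 0.01, -0.01, 0.05, 0.01])

cov_xy = np.cov(x, y, ddof=1)[0, 1]   # sample covariance with the T - 1 divisor
rho_xy = np.corrcoef(x, y)[0, 1]      # correlation: covariance scaled by the two std devs
print(cov_xy, rho_xy)                 # rho_xy always lies between -1 and +1
```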