
AMATH 460: Mathematical Methods for Quantitative Finance


6. Linear Algebra II
Kjell Konis
Acting Assistant Professor, Applied Mathematics
University of Washington

Outline

1. Transposes and Permutations
2. Vector Spaces and Subspaces
3. Variance-Covariance Matrices
4. Computing Covariance Matrices
5. Orthogonal Matrices
6. Singular Value Factorization
7. Eigenvalues and Eigenvectors
8. Solving Least Squares Problems


Transposes
Let A be an m × n matrix. The transpose of A is denoted by A^T.
The columns of A^T are the rows of A.
If the dimension of A is m × n, then the dimension of A^T is n × m.

A = [ 1  2  3 ]        A^T = [ 1  4 ]
    [ 4  5  6 ]              [ 2  5 ]
                             [ 3  6 ]

Properties
  (A^T)^T = A
  The transpose of A + B is A^T + B^T
  The transpose of AB is (AB)^T = B^T A^T
  The transpose of A^{-1} is (A^{-1})^T = (A^T)^{-1}
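A quick R sketch (not part of the original slides; the matrices are made up for illustration) that checks these rules numerically:

A <- matrix(1:6, nrow = 2, byrow = TRUE)        ## the 2 x 3 example above
B <- matrix(rnorm(12), nrow = 3, ncol = 4)
C <- matrix(c(2, 1, 0, 3), nrow = 2, ncol = 2)  ## an invertible 2 x 2 matrix
all.equal(t(t(A)), A)                           ## (A^T)^T = A
all.equal(t(A %*% B), t(B) %*% t(A))            ## (AB)^T = B^T A^T
all.equal(t(solve(C)), solve(t(C)))             ## (A^-1)^T = (A^T)^-1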

Symmetric Matrices
A symmetric matrix satisfies A = A^T
This implies that a_ij = a_ji

A = [ 1  2  3 ]
    [ 2  5  4 ]  =  A^T
    [ 3  4  9 ]

A diagonal matrix is automatically symmetric

D = [ 1  0  0 ]
    [ 0  5  0 ]  =  D^T
    [ 0  0  9 ]


Products R^T R, RR^T and LDL^T

Let R be any m × n matrix
The matrix A = R^T R is a symmetric matrix:

A^T = (R^T R)^T = R^T (R^T)^T = R^T R = A

The matrix A = RR^T is also a symmetric matrix:

A^T = (RR^T)^T = (R^T)^T R^T = RR^T = A

Many problems that start with a rectangular matrix R end up with
R^T R or RR^T or both!
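A quick R sketch (not part of the original slides; R here is a made-up 4 × 3 matrix) confirming that both products are symmetric:

set.seed(1)
R <- matrix(rnorm(12), nrow = 4, ncol = 3)
isSymmetric(crossprod(R))      ## t(R) %*% R, a 3 x 3 matrix
isSymmetric(tcrossprod(R))     ## R %*% t(R), a 4 x 4 matrix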

Permutation Matrices
An n × n permutation matrix P has the rows of I in any order
There are 6 possible 3 × 3 permutation matrices:

I   = [ 1 0 0 ]   P21 = [ 0 1 0 ]   P31 = [ 0 0 1 ]
      [ 0 1 0 ]         [ 1 0 0 ]         [ 0 1 0 ]
      [ 0 0 1 ]         [ 0 0 1 ]         [ 1 0 0 ]

P32 = [ 1 0 0 ]   P32 P21 = [ 0 1 0 ]   P21 P32 = [ 0 0 1 ]
      [ 0 0 1 ]             [ 0 0 1 ]             [ 1 0 0 ]
      [ 0 1 0 ]             [ 1 0 0 ]             [ 0 1 0 ]

P^{-1} is the same as P^T

Example

P32 [ 1 ]   [ 1 0 0 ] [ 1 ]   [ 1 ]        P32 [ 1 ]   [ 1 ]
    [ 2 ] = [ 0 0 1 ] [ 2 ] = [ 3 ]            [ 3 ] = [ 2 ]
    [ 3 ]   [ 0 1 0 ] [ 3 ]   [ 2 ]            [ 2 ]   [ 3 ]
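A quick R sketch (not part of the original slides) of the permutation matrix P32 used in the example:

P32 <- matrix(c(1, 0, 0,
                0, 0, 1,
                0, 1, 0), nrow = 3, byrow = TRUE)
P32 %*% c(1, 2, 3)               ## swaps the last two components: (1, 3, 2)
all.equal(solve(P32), t(P32))    ## P^-1 = P^T
all.equal(P32 %*% P32, diag(3))  ## swapping twice restores the original order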


Spaces of Vectors
The space R^n consists of all column vectors with n components
For example, R^3 consists of all column vectors with 3 components and is called
3-dimensional space
The space R^2 is the xy plane: the two components of the vector are
the x and y coordinates, and the tail starts at the origin (0, 0)
Two essential vector operations go on inside the vector space:
  Add two vectors in R^n
  Multiply a vector in R^n by a scalar
The result lies in the same vector space R^n

Subspaces
A subspace of a vector space is a set of vectors (including the zero
vector) that satisfies two requirements:
If u and w are vectors in the subspace and c is any scalar, then
(i) u + w is in the subspace
(ii) cw is in the subspace

Some subspaces of R^3:
  L    Any line through (0, 0, 0), e.g., the x axis
  P    Any plane through (0, 0, 0), e.g., the xy plane
  R^3  The whole space
       The zero vector


The Column Space of A


The column space of a matrix A consists of all linear combinations of
its columns
The linear combinations are the vectors that can be written as Ax
Ax = b is solvable if and only if b is in the column space of A
Let A be an m × n matrix:
  The columns of A have m components
  The columns of A live in R^m
  The column space of A is a subspace of R^m
The column space of A is denoted by R(A)
  R stands for range

The Nullspace of A
The nullspace of A consists of all solutions to Ax = 0
The nullspace of A is denoted by N(A)
Example: elimination

[ 1  2   3 ]      [ 1  2  3 ]      [ 1  2  3 ]
[ 2  4   8 ]  ->  [ 0  0  2 ]  ->  [ 0  0  2 ]
[ 3  6  11 ]      [ 0  0  2 ]      [ 0  0  0 ]

The pivot variables are x1 and x3
The free variable is x2 (column 2 has no pivot)
The number of pivots is called the rank of A


Linear Independence
A sequence of vectors v1, v2, ..., vn is linearly independent if the only
linear combination that gives the zero vector is 0·v1 + 0·v2 + ... + 0·vn
The columns of A are independent if the only solution to Ax = 0 is
the zero vector:
  Elimination produces no free variables
  The rank of A is equal to n
  The nullspace N(A) contains only the zero vector
A set of vectors spans a space if their linear combinations fill the space
A basis for a vector space is a sequence of vectors that
(i) are linearly independent
(ii) span the space
The dimension of a vector space is the number of vectors in every
basis


Sample Variance and Covariance


Sample mean of the elements of a vector x of length m:

x̄ = (1/m) Σ_{i=1}^m x_i

Sample variance of the elements of x:

Var(x) = 1/(m-1) Σ_{i=1}^m (x_i - x̄)²

Let y be a vector of length m. Sample covariance of x and y:

Cov(x, y) = 1/(m-1) Σ_{i=1}^m (x_i - x̄)(y_i - ȳ)
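A quick R sketch (not part of the original slides; x and y are made-up data) comparing these formulas with R's built-in functions:

x <- c(1.2, -0.4, 0.7, 2.1)
y <- c(0.3, 0.9, 1.4, -0.2)
m <- length(x)
sum(x) / m                                       ## sample mean, same as mean(x)
sum((x - mean(x))^2) / (m - 1)                   ## sample variance, same as var(x)
sum((x - mean(x)) * (y - mean(y))) / (m - 1)     ## sample covariance, same as cov(x, y)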

Sample Variance
Let e be a column vector of m ones

e^T x / m = (1/m) Σ_{i=1}^m 1·x_i = (1/m) Σ_{i=1}^m x_i = x̄

x̄ is a scalar
Let x̄e be a column vector repeating x̄ m times:

e (e^T x / m) = x̄ e

Let x̃ = x - x̄e. The i-th element of x̃ is x̃_i = x_i - x̄
Take another look at the sample variance:

Var(x) = 1/(m-1) Σ_{i=1}^m (x_i - x̄)² = 1/(m-1) Σ_{i=1}^m x̃_i² = x̃^T x̃ / (m-1) = ‖x̃‖² / (m-1)

Sample Covariance
A similar result holds for the sample covariance
Let ỹ = y - ȳe, where

e (e^T y / m) = ȳ e

The sample covariance becomes

Cov(x, y) = 1/(m-1) Σ_{i=1}^m (x_i - x̄)(y_i - ȳ) = 1/(m-1) Σ_{i=1}^m x̃_i ỹ_i = x̃^T ỹ / (m-1)

Observe that Var(x) = Cov(x, x)
Proceed with Cov(x, y) and treat Var(x) as a special case


Variance-Covariance Matrix
Suppose x and y are the columns of a matrix R:

R = [ x  y ]          R̃ = [ x̃  ỹ ]

The sample variance-covariance matrix is

Cov(R) = [ Cov(x, x)  Cov(x, y) ]  =  1/(m-1) [ x̃^T x̃   x̃^T ỹ ]  =  R̃^T R̃ / (m-1)
         [ Cov(y, x)  Cov(y, y) ]             [ ỹ^T x̃   ỹ^T ỹ ]

The sample variance-covariance matrix is symmetric


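A quick R sketch (not part of the original slides; the return series are simulated) verifying that R̃^T R̃ / (m - 1) matches R's built-in covariance matrix:

set.seed(1)
x <- rnorm(10)
y <- rnorm(10)
R <- cbind(x, y)
m <- nrow(R)
Rtilde <- sweep(R, 2, colMeans(R))                    ## subtract each column mean
all.equal(t(Rtilde) %*% Rtilde / (m - 1), var(R))     ## TRUE: matches var(R)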


Computing Covariance Matrices


Take a closer look at how x̃ was computed:

x̃ = x - x̄e = x - e (e^T x / m) = (I - ee^T/m) x = A x

What is A?
  The outer product ee^T is an m × m matrix
  I is the m × m identity matrix
  A is an m × m matrix
Premultiplication by A turns x into x̃
Can think of matrix multiplication as one matrix acting on another
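A quick R sketch (not part of the original slides; x is made up) showing that the centering matrix A = I - ee^T/m demeans x:

x <- c(4, 7, 1, 8)
m <- length(x)
e <- rep(1, m)
A <- diag(m) - e %*% t(e) / m          ## the m x m centering matrix
all.equal(drop(A %*% x), x - mean(x))  ## premultiplying by A subtracts the mean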

Computing Covariance Matrices


Next, consider what happens when we premultiply R by A
Think of R in block structure where each column is a block:

AR = A [ x  y ] = [ Ax  Ay ] = [ x̃  ỹ ] = R̃

The expression for the variance-covariance matrix no longer needs R̃:

Cov(R) = R̃^T R̃ / (m-1) = (AR)^T (AR) / (m-1)

Since R has 2 columns, Cov(R) is a 2 × 2 matrix
In general, R may have n columns  =>  Cov(R) is an n × n matrix
Still use the same m × m matrix A

Computing Covariance Matrices


Take another look at the formula for the variance-covariance matrix.
Using the rule for the transpose of a product:

Cov(R) = (AR)^T (AR) / (m-1) = R^T A^T A R / (m-1)

Consider the product A^T A:

A^T A = (I - ee^T/m)^T (I - ee^T/m)
      = (I^T - (ee^T/m)^T) (I - ee^T/m)
      = (I - ee^T/m) (I - ee^T/m)

Note that A^T = A, so A is symmetric

Computing Covariance Matrices


Continuing ...

A^T A = A² = (I - ee^T/m)(I - ee^T/m)
           = I - 2 ee^T/m + (ee^T/m)(ee^T/m)
           = I - 2 ee^T/m + e (e^T e) e^T / m²
           = I - 2 ee^T/m + e m e^T / m²
           = I - ee^T/m
           = A

A matrix satisfying A² = A is called idempotent

Computing Covariance Matrices


Can simplify the expression for the sample variance-covariance matrix:

Cov(R) = 1/(m-1) R^T A^T A R = 1/(m-1) R^T A R
         scalar × (n × m)(m × m)(m × n)

How to order the operations ...

R^T A R = R^T (I - ee^T/m) R = R^T ( R - (1/m) e (e^T R) )

Compute M1 = e^T R first (a 1 × n row vector), then
M2 = R^T ( R - (1/m) e M1 ) (an n × n matrix), and finally Cov(R) = M2 / (m-1)

Dimensions: R^T is n × m, e is m × 1, e^T R is 1 × n, R and e M1 are m × n,
M2 and Cov(R) are n × n
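A quick R sketch (not part of the original slides; R is a simulated m × n matrix) showing one way to organize the computation so that the m × m matrix A is never formed; the intermediate names are chosen for illustration and need not match the slide's M1 and M2:

set.seed(1)
m <- 100
n <- 3
R <- matrix(rnorm(m * n), m, n)
cm <- drop(t(R) %*% rep(1, m)) / m                   ## column means, (1/m) R^T e
C <- (crossprod(R) - m * outer(cm, cm)) / (m - 1)    ## (R^T R - m * outer product of means) / (m - 1)
all.equal(C, var(R), check.attributes = FALSE)       ## TRUE: matches var(R)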


Orthogonal Matrices
Two vectors q, w ∈ R^m are orthogonal if their inner product is zero:
⟨q, w⟩ = 0
Consider a set of m vectors {q1, ..., qm} where qj ∈ R^m \ {0}
Assume that the vectors {q1, ..., qm} are pairwise orthogonal
Let q̂j = qj / ‖qj‖ be a unit vector in the same direction as qj
The vectors {q̂1, ..., q̂m} are orthonormal
Let Q be a matrix with columns {q̂j} and consider the product Q^T Q:

Q^T Q = [ q̂1^T ] [ q̂1  ...  q̂m ]  =  [ q̂i^T q̂j ]  =  I
        [  ...  ]
        [ q̂m^T ]

since q̂i^T q̂j = 1 when i = j and 0 when i ≠ j

Orthogonal Matrices
A square matrix Q is orthogonal if Q^T Q = I and QQ^T = I
Orthogonal matrices represent rotations and reflections
Example:

Q = [ cos(θ)  -sin(θ) ]
    [ sin(θ)   cos(θ) ]

rotates a vector in the xy plane through the angle θ

Q^T Q = [  cos(θ)  sin(θ) ] [ cos(θ)  -sin(θ) ]
        [ -sin(θ)  cos(θ) ] [ sin(θ)   cos(θ) ]

      = [ cos²(θ) + sin²(θ)               -cos(θ)sin(θ) + sin(θ)cos(θ) ]  =  I
        [ -sin(θ)cos(θ) + cos(θ)sin(θ)     sin²(θ) + cos²(θ)           ]

Properties of Orthogonal Matrices


The definition Q^T Q = QQ^T = I implies Q^{-1} = Q^T

Q^T = [  cos(θ)  sin(θ) ]  =  [ cos(-θ)  -sin(-θ) ]
      [ -sin(θ)  cos(θ) ]     [ sin(-θ)   cos(-θ) ]

since cosine is an even function and sine is an odd function
(Q^T rotates through the angle -θ)

Multiplication by an orthogonal matrix Q preserves dot products:

(Qx) · (Qy) = (Qx)^T (Qy) = x^T Q^T Q y = x^T I y = x^T y = x · y

Multiplication by an orthogonal matrix Q leaves lengths unchanged:

‖Qx‖ = ‖x‖
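A quick R sketch (not part of the original slides; θ and x are arbitrary choices) checking the rotation-matrix properties:

theta <- pi / 6
Q <- matrix(c(cos(theta), sin(theta),
              -sin(theta), cos(theta)), nrow = 2, ncol = 2)
all.equal(t(Q) %*% Q, diag(2))                        ## Q^T Q = I
x <- c(3, -1)
all.equal(sqrt(sum((Q %*% x)^2)), sqrt(sum(x^2)))     ## ||Qx|| = ||x||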


Singular Value Factorization


So far ...
  If Q is orthogonal then Q^{-1} = Q^T
  If D is diagonal then D^{-1} is also diagonal:

D = [ d1        0 ]        D^{-1} = [ 1/d1         0 ]
    [    ...      ]                 [      ...       ]
    [ 0        dm ]                 [ 0         1/dm ]

Orthogonal and diagonal matrices have nice properties
Wouldn't it be nice if any matrix could be expressed as a product of
diagonal and orthogonal matrices ...

Singular Value Factorization


Every m × n matrix A can be factored into A = UΣV^T where
  U is an m × m orthogonal matrix whose columns are the left singular
  vectors of A
  Σ is an m × n diagonal matrix containing the singular values of A
    Convention: σ1 ≥ σ2 ≥ ... ≥ σn ≥ 0
  V is an n × n orthogonal matrix whose columns contain the right
  singular vectors of A

(Picture for m > n omitted.)

Multiplication
Every invertible 2 × 2 matrix transforms the unit circle into an ellipse
(figure omitted)
U and V are rotations and reflections; Σ is a stretching matrix

A [ v1  v2 ] = U Σ = [ u1  u2 ] [ σ1   0 ]
                                [  0  σ2 ]

so A vj = σj uj: each right singular vector vj is mapped to σj times the
corresponding left singular vector uj

Multiplication
Visualize: Ax = UΣV^T x
  rotate right 45°
  stretch the x coordinate by 1.5, stretch the y coordinate by 2
  rotate right 22.5°

Applied in sequence: x := V^T x, then x := Σ x, then x := U x

(figure showing the unit circle transformed at each step omitted)

Example

> load("R.RData")
> library(MASS)
> eqscplot(R)

5
BAC Returns (%)
0

svdR
U <S <V <-

<- svd(R)
svdR$u
diag(svdR$d)
svdR$v

>
>
>
>

> all.equal(U %*% S %*% t(V), R)


[1] TRUE

0
C Returns (%)

> arrows(0, 0, V[1,1], V[2,1])


> arrows(0, 0, V[1,2], V[2,2])
Kjell Konis (Copyright 2013)

6. Linear Algebra II

34 / 53

Example (continued)

> w <- V[, 2] * S[2, 2]
> w <- w / sqrt(m - 1)
> arrows(0, 0, w[1], w[2])

> u <- V[, 1] * S[1, 1]
> u <- u / sqrt(m - 1)
> arrows(0, 0, u[1], u[2])

(the arrows are drawn on the scatter plot of C Returns (%) vs. BAC Returns (%))


Eigenvalues and Eigenvectors

Let R be an m × n matrix and let R̃ = (I - ee^T/m) R
Let R̃ = UΣV^T be the singular value factorization of R̃
Recall that

Cov(R) = 1/(m-1) R̃^T R̃
       = 1/(m-1) (UΣV^T)^T (UΣV^T)
       = 1/(m-1) V Σ^T U^T U Σ V^T
       = 1/(m-1) V Σ^T Σ V^T

Eigenvalues and Eigenvectors


Remember that Σ is a diagonal m × n matrix, so Σ^T Σ is a diagonal
n × n matrix with σ1², ..., σn² along the diagonal

Let

Λ = Σ^T Σ / (m-1),   i.e.,   λ1 = σ1² / (m-1), ..., λn = σn² / (m-1)

Eigenvalues and Eigenvectors


Substitute into the expression for the covariance matrix of R:

Cov(R) = 1/(m-1) V Σ^T Σ V^T = V Λ V^T

Let ej be a unit vector in the j-th coordinate direction
Multiply a right singular vector vj by Cov(R):

Cov(R) vj = V Λ V^T vj = V Λ ej      (since V^T vj = ej)

Recall that a matrix times a vector is a linear combination of the
columns:

A v = v1 a1 + ... + vn an

so Λ ej picks out the j-th column of Λ:   Λ ej = λj ej

Eigenvalues and Eigenvectors


Substituting Λ ej = λj ej ...

Cov(R) vj = V Λ V^T vj = V Λ ej = V λj ej = λj V ej = λj vj

In summary
  vj is a right singular vector of R̃
  Cov(R) = R̃^T R̃ / (m-1)
  Cov(R) vj = λj vj
  Cov(R) vj points in the same direction as vj, with its length scaled by the factor λj

In general: let A be a square matrix and consider the product Ax
  Certain special vectors x are in the same direction as Ax
  These vectors are called eigenvectors
  Equation: Ax = λx; the number λ is the eigenvalue
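A quick R sketch (not part of the original slides; R is simulated) confirming that the eigenvalues of Cov(R) equal σj² / (m - 1) computed from the SVD of the demeaned matrix:

set.seed(1)
m <- 50
R <- matrix(rnorm(m * 2), m, 2)
Rtilde <- sweep(R, 2, colMeans(R))              ## demeaned columns
sv <- svd(Rtilde)$d                             ## singular values sigma_j
all.equal(sort(sv^2 / (m - 1)), sort(eigen(var(R))$values))   ## TRUE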

Diagonalizing a Matrix
Suppose an n × n matrix A has n linearly independent eigenvectors
Let S be a matrix whose columns are the n eigenvectors of A

S^{-1} A S = Λ = [ λ1         ]
                 [    ...     ]
                 [         λn ]

The matrix A is diagonalized
Useful representations of a diagonalized matrix:

AS = SΛ        S^{-1} A S = Λ        A = S Λ S^{-1}

Diagonalization requires that A have n eigenvectors
Side note: invertibility requires nonzero eigenvalues
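A quick R sketch (not part of the original slides; A is a small made-up symmetric matrix) illustrating the diagonalization identities:

A <- matrix(c(2, 1,
              1, 3), nrow = 2, ncol = 2)
eig <- eigen(A)
S <- eig$vectors                          ## columns are eigenvectors of A
Lambda <- diag(eig$values)
all.equal(solve(S) %*% A %*% S, Lambda)   ## S^-1 A S = Lambda
all.equal(S %*% Lambda %*% solve(S), A)   ## A = S Lambda S^-1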

The Spectral Theorem


Returning to the motivating example ...
Let A = R̃^T R̃ where R̃ is an m × n matrix
A is symmetric

Spectral Theorem: Every symmetric matrix A = A^T has the
factorization QΛQ^T with real diagonal Λ and orthogonal matrix Q:

A = QΛQ^{-1} = QΛQ^T     with Q^{-1} = Q^T

Caveat: a nonsymmetric matrix can easily produce λ and x that are
complex

Positive Definite Matrices


The symmetric matrix A is positive definite if x^T A x > 0 for every
nonzero vector x

2 × 2 case:   x^T A x = [ x1  x2 ] [ a  b ] [ x1 ]  =  a x1² + 2b x1 x2 + c x2² > 0
                                   [ b  c ] [ x2 ]

The scalar value x^T A x is a quadratic function of x1 and x2:

f(x1, x2) = a x1² + 2b x1 x2 + c x2²

f has a minimum of 0 at (0, 0) and is positive everywhere else
  1 × 1 case: a is a positive number
  2 × 2 case: A is a positive definite matrix
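A quick R sketch (not part of the original slides; a, b, c and x are made-up numbers) evaluating the quadratic form; checking that all eigenvalues are positive is an equivalent test not stated on this slide:

a <- 2; b <- 1; c <- 3
A <- matrix(c(a, b, b, c), nrow = 2, ncol = 2)
x <- c(0.5, -1.2)                          ## an arbitrary nonzero vector
drop(t(x) %*% A %*% x)                     ## a*x1^2 + 2*b*x1*x2 + c*x2^2 = 3.62 > 0
all(eigen(A)$values > 0)                   ## TRUE for this positive definite A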

R Example

> eigR <- eigen(var(R))
> S <- eigR$vectors
> lambda <- eigR$values

> w <- sqrt(lambda[2]) * S[,2]
> arrows(0, 0, w[1], w[2])

> u <- sqrt(lambda[1]) * S[,1]
> arrows(0, 0, u[1], u[2])

(the arrows are drawn on the scatter plot of C Returns (%) vs. BAC Returns (%))


Least Squares

(scatter plot: Citigroup Returns vs. S&P 500 Returns, Monthly - 2010, with the
fitted line ŷ = α̂ + β̂x)

Set of m points (xi, yi)
Want to find the best-fit line

ŷ = α + βx

Criterion:

Σ_{i=1}^m (yi - ŷi)²   should be minimum

Choose α̂ and β̂ so that

Σ_{i=1}^m (yi - (α + β xi))²

is minimized when α = α̂ and β = β̂

Least Squares
What does the column picture look like?
Let y = (y1, y2, ..., ym)
Let x = (x1, x2, ..., xm)
Let e be a column vector of m ones
Can write ŷ as a linear combination:

ŷ = [ ŷ1 ]  =  α e + β x  =  [ 1  x1 ] [ α ]  =  X b,   where b = (α, β)^T
    [ ⋮  ]                   [ ⋮   ⋮ ] [ β ]
    [ ŷm ]                   [ 1  xm ]

Want to minimize

Σ_{i=1}^m (yi - ŷi)²  =  ‖y - ŷ‖²  =  ‖y - X b‖²

QR Factorization
Let A be an m × n matrix with linearly independent columns
Full QR Factorization: A can be written as the product of
  an m × m orthogonal matrix Q
  an m × n upper triangular matrix R
  (upper triangular means rij = 0 when i > j)

A = QR

Want to minimize

‖y - X b‖² = ‖y - QR b‖²

Recall: an orthogonal transformation leaves vector lengths unchanged:

‖y - QR b‖² = ‖Q^T (y - QR b)‖² = ‖Q^T y - R b‖²

Least Squares
Let u = Q^T y

u - R b = [ u1 ]   [ r11  r12 ]           [ u1 - (r11 α + r12 β) ]
          [ u2 ]   [  0   r22 ] [ α ]     [ u2 - r22 β           ]
          [ u3 ] - [  0    0  ] [ β ]  =  [ u3                   ]
          [ ⋮  ]   [  ⋮    ⋮  ]           [ ⋮                    ]
          [ um ]   [  0    0  ]           [ um                   ]

α and β affect only the first n elements of the vector

Want to minimize

‖u - R b‖² = ( u1 - (r11 α + r12 β) )² + ( u2 - r22 β )² + Σ_{i=n+1}^m ui²

Least Squares

Can find α̂ and β̂ by solving the linear system R̂ b = û:

[ r11  r12 ] [ α ]   [ u1 ]
[  0   r22 ] [ β ] = [ u2 ]

R̂ = first n rows of R,   û = first n elements of u
The system is already upper triangular, solve using back substitution

R Example
First, get the data

> library(quantmod)
> getSymbols(c("C", "GSPC"))
> citi <- c(coredata(monthlyReturn(C["2010"])))
> sp500 <- c(coredata(monthlyReturn(GSPC["2010"])))

The x variable is sp500, bind a column of ones to get the matrix X

> X <- cbind(1, sp500)

Compute the QR factorization of X and extract the Q and R matrices

> qrX <- qr(X)
> Q <- qr.Q(qrX, complete = TRUE)
> R <- qr.R(qrX, complete = TRUE)

R Example

Compute u = Q^T y

> u <- t(Q) %*% citi

Solve for α̂ and β̂

> backsolve(R[1:2,1:2], u[1:2])
[1] 0.01708494 1.33208984

Compare with the built-in least squares fitting function

> coef(lsfit(sp500, citi))
 Intercept          X
0.01708494 1.33208984

(scatter plot of Citigroup Returns vs. S&P 500 Returns with the fitted
line ŷ = α̂ + β̂x)

http://computational-finance.uw.edu
