
AMATH 460: Mathematical Methods
for Quantitative Finance

7.1 Lagrange's Method
Kjell Konis
Acting Assistant Professor, Applied Mathematics
University of Washington

Kjell Konis (Copyright 2013)

7.1 Lagrange's Method

1 / 29

Outline

Optimal Investment Portfolios

Relative Extrema of Functions of Several Variables

Lagrange's Method

Example

Minimum Variance Portfolio



Investment Portfolios
Portfolio of n assets
Let w_i be the proportion of the portfolio invested in asset i
Have the constraint

\sum_{i=1}^{n} w_i = 1

Can take long and short positions ⇒ no constraints on the individual w_i
Let \mu_i be the expected rate of return on asset i
Let \sigma_i^2 be the risk of asset i
Let \rho_{ij} be the correlation between assets i and j
Expected rate of return and risk of the portfolio:

\text{Expected Return} = \sum_{i=1}^{n} w_i \mu_i

\text{Risk} = \sum_{i=1}^{n} w_i^2 \sigma_i^2 + 2 \sum_{1 \le i < j \le n} w_i w_j \sigma_i \sigma_j \rho_{ij}

Investment Portfolios: Matrix Notation
Let w = (w_1, \ldots, w_n) and \mu = (\mu_1, \ldots, \mu_n)
The expected rate of return can be written in matrix notation as

\text{Return} = \sum_{i=1}^{n} w_i \mu_i = w^T \mu

The risk can be written as

\text{Risk} = w^T \Sigma w

\Sigma is the covariance matrix of the n assets:

\Sigma = \begin{bmatrix}
\sigma_1^2 & \sigma_1 \sigma_2 \rho_{12} & \cdots & \sigma_1 \sigma_n \rho_{1n} \\
\sigma_2 \sigma_1 \rho_{21} & \sigma_2^2 & \cdots & \sigma_2 \sigma_n \rho_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
\sigma_n \sigma_1 \rho_{n1} & \sigma_n \sigma_2 \rho_{n2} & \cdots & \sigma_n^2
\end{bmatrix}
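The matrix formulas above are easy to check numerically. A minimal sketch in Python with NumPy (the slides use R for later computations; the weights, returns, volatilities, and correlation below are made-up illustrative values, not from the slides):

```python
import numpy as np

# Hypothetical two-asset portfolio (all numbers are illustrative)
w = np.array([0.6, 0.4])          # weights, sum to 1
mu = np.array([0.08, 0.12])       # expected rates of return
sigma = np.array([0.15, 0.25])    # volatilities
rho = 0.3                         # correlation between the two assets

# Covariance matrix: off-diagonal entries are sigma_i * sigma_j * rho_ij
Sigma = np.array([[sigma[0]**2,               rho * sigma[0] * sigma[1]],
                  [rho * sigma[0] * sigma[1], sigma[1]**2]])

expected_return = w @ mu          # w^T mu,      ~ 0.096
risk = w @ Sigma @ w              # w^T Sigma w (portfolio variance), ~ 0.0235
print(expected_return, risk)
```

Note that `w @ Sigma @ w` expands to exactly the double sum on the previous slide: the diagonal terms give w_i^2 sigma_i^2 and the off-diagonal terms give the 2 w_i w_j sigma_i sigma_j rho_ij cross terms.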

Optimal Investment Portfolios
Given \mu, \Sigma, and an investor-selected w, can compute:
portfolio return
portfolio risk
Two notions of optimality:
For a target expected return, choose w to minimize portfolio risk
For a target level of risk, choose w to maximize expected return
Both notions are constrained optimization problems that can be solved using Lagrange multipliers

Optimal Investment Portfolios
Minimum variance optimization

n asset case:
minimize: w^T \Sigma w
subject to: e^T w = 1
            \mu^T w = \mu_P

2 asset case:
minimize: \sigma_1^2 w_1^2 + 2 \rho_{12} \sigma_1 \sigma_2 w_1 w_2 + \sigma_2^2 w_2^2
subject to: w_1 + w_2 = 1
            \mu_1 w_1 + \mu_2 w_2 = \mu_P

Maximum expected return optimization

n asset case:
maximize: \mu^T w
subject to: e^T w = 1
            w^T \Sigma w = \sigma_P^2

2 asset case:
maximize: \mu_1 w_1 + \mu_2 w_2
subject to: w_1 + w_2 = 1
            \sigma_1^2 w_1^2 + 2 \rho_{12} \sigma_1 \sigma_2 w_1 w_2 + \sigma_2^2 w_2^2 = \sigma_P^2


Relative Extrema of Single Variable Functions
A local minimum (maximum) of a function f is a point x_0 where

f(x_0) \le (\ge) f(x) \quad \forall x \in (x_0 - \epsilon, x_0 + \epsilon)

for some \epsilon > 0
A local extremum is a point that is a local minimum or maximum
If f is twice differentiable and f'' is continuous:
Any local extremum is a critical point of f: f'(x_0) = 0
Can classify critical points using the second derivative test:
f''(x_0) < 0: local maximum
f''(x_0) > 0: local minimum
f''(x_0) = 0: anything possible

Relative Extrema of Functions of n Variables
A local minimum (maximum) of a function f: \mathbb{R}^n \to \mathbb{R} is a point x_0 \in \mathbb{R}^n where

f(x_0) \le (\ge) f(x) \quad \forall x : \|x - x_0\| < \epsilon

for some \epsilon > 0
Every local extremum is a critical point: Df(x_0) = 0
If f is twice differentiable and has continuous second order partial derivatives:
D^2 f(x_0) is a symmetric matrix with real eigenvalues
Second order conditions:
All eigenvalues of D^2 f(x_0) > 0: local minimum
All eigenvalues of D^2 f(x_0) < 0: local maximum
D^2 f(x_0) has eigenvalues of both signs: saddle point
D^2 f(x_0) singular: anything can happen

Finding Extrema: Functions of 2 Variables
Find the local extrema of f(x, y) = x^2 + xy + y^2

Df(x, y) = \begin{bmatrix} 2x + y & x + 2y \end{bmatrix}

Df(0, 0) = \begin{bmatrix} 0 & 0 \end{bmatrix} ⇒ (0, 0) is a critical point

D^2 f(x, y) = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}

Can use R to compute the eigenvalues:

> A <- matrix(c(2, 1, 1, 2), 2, 2)
> eigen(A)$values
[1] 3 1

Since both eigenvalues are greater than 0 ⇒ (0, 0) is a local minimum

Finding Extrema: Functions of 2 Variables (Take 2)
Find the local extrema of f(x, y) = -x^2 - xy - y^2

Df(x, y) = \begin{bmatrix} -2x - y & -x - 2y \end{bmatrix}

Df(0, 0) = \begin{bmatrix} 0 & 0 \end{bmatrix} ⇒ (0, 0) is a critical point

D^2 f(x, y) = \begin{bmatrix} -2 & -1 \\ -1 & -2 \end{bmatrix}

Can use R to compute the eigenvalues:

> A <- matrix(-c(2, 1, 1, 2), 2, 2)
> eigen(A)$values
[1] -1 -3

Since both eigenvalues are less than 0 ⇒ (0, 0) is a local maximum

Finding Extrema: Functions of 2 Variables
Find the local extrema of f(x, y) = x^2 + 3xy + y^2

Df(x, y) = \begin{bmatrix} 2x + 3y & 3x + 2y \end{bmatrix}

Df(0, 0) = \begin{bmatrix} 0 & 0 \end{bmatrix} ⇒ (0, 0) is a critical point

D^2 f(x, y) = \begin{bmatrix} 2 & 3 \\ 3 & 2 \end{bmatrix}

Can use R to compute the eigenvalues:

> A <- matrix(c(2, 3, 3, 2), 2, 2)
> eigen(A)$values
[1]  5 -1

One positive and one negative eigenvalue ⇒ (0, 0) is a saddle point

Finding Extrema: Functions of 2 Variables
Find the local extrema of f(x, y) = 2xy - (1 - y^2)^{3/2}

First order condition:

Df(x, y) = \begin{bmatrix} 2y & 2x + 3y(1 - y^2)^{1/2} \end{bmatrix}

Df(0, 0) = \begin{bmatrix} 0 & 0 \end{bmatrix} ⇒ (0, 0) is a critical point

Second order condition:

D^2 f(x, y) = \begin{bmatrix} 0 & 2 \\ 2 & \dfrac{3 - 6y^2}{(1 - y^2)^{1/2}} \end{bmatrix}

D^2 f(0, 0) = \begin{bmatrix} 0 & 2 \\ 2 & 3 \end{bmatrix}

Compute the eigenvalues of the Hessian at the critical point:

> eigen(matrix(c(0, 2, 2, 3), 2, 2))$values
[1]  4 -1

One positive and one negative eigenvalue ⇒ (0, 0) is a saddle point


Lagrange's Method
Problem:

maximize: f(x_1, x_2, \ldots, x_n)
subject to: g_1(x_1, x_2, \ldots, x_n) = 0
            g_2(x_1, x_2, \ldots, x_n) = 0
            \vdots
            g_m(x_1, x_2, \ldots, x_n) = 0        (1)

The 18th-century mathematician Joseph Louis Lagrange proposed the following method for the solution
Form the function

F(x_1, \ldots, x_n, \lambda_1, \ldots, \lambda_m) = f(x_1, \ldots, x_n) + \sum_{i=1}^{m} \lambda_i g_i(x_1, x_2, \ldots, x_n)

The optimal value for problem (1) occurs at one of the critical points of F

Lagrange's Method
Terminology:
The function F(x_1, \ldots, x_n, \lambda_1, \ldots, \lambda_m) is called the Lagrangian
The column vector \lambda = (\lambda_1, \ldots, \lambda_m) is called the vector of Lagrange multipliers
Necessary Condition:
Let x = (x_1, x_2, \ldots, x_n)
Let g(x) = (g_1(x), g_2(x), \ldots, g_m(x)) be the vector-valued function of the constraints
The gradient Dg(x) must have full rank at any point where the constraint g(x) = 0 is satisfied, that is,

\operatorname{rank} Dg(x) = m \quad \forall x \text{ where } g(x) = 0
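The rank condition is straightforward to check numerically. A small sketch in Python with NumPy, using the constraints from the worked example later in this section (g_1 = 2x_1 - x_2 - x_3, g_2 = x_1^2 + x_2^2 - 13); the feasible point is taken from that example:

```python
import numpy as np

def Dg(x):
    # Jacobian of g(x) = (2*x1 - x2 - x3, x1^2 + x2^2 - 13)
    return np.array([[2.0,       -1.0,      -1.0],
                     [2 * x[0],  2 * x[1],  0.0]])

x = np.array([2.0, -3.0, 7.0])   # a feasible point: g(x) = 0
m = 2                            # number of constraints
print(np.linalg.matrix_rank(Dg(x)) == m)   # True: full rank, condition holds
```

Here the second row of Dg(x) vanishes only when x_1 = x_2 = 0, which cannot satisfy x_1^2 + x_2^2 = 13, so the rank condition holds at every feasible point.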

Partial Derivatives of the Lagrangian
F(x, \lambda) has n + m variables; compute the gradient in 2 parts:

DF(x, \lambda) = \begin{bmatrix} D_x F(x, \lambda) & D_\lambda F(x, \lambda) \end{bmatrix}

Recall the Lagrangian:

F(x, \lambda) = f(x_1, \ldots, x_n) + \sum_{i=1}^{m} \lambda_i g_i(x_1, x_2, \ldots, x_n)

The partial derivatives are

\frac{\partial F}{\partial x_j} = \frac{\partial f}{\partial x_j} + \sum_{i=1}^{m} \lambda_i \frac{\partial g_i}{\partial x_j}
\qquad
\frac{\partial F}{\partial \lambda_i} = g_i(x)

Gradient of f: Df(x) = \begin{bmatrix} \dfrac{\partial f}{\partial x_1} & \cdots & \dfrac{\partial f}{\partial x_n} \end{bmatrix}

Partial Derivatives of the Lagrangian
Gradient of g(x):

Dg(x) = \begin{bmatrix}
\frac{\partial g_1}{\partial x_1} & \frac{\partial g_1}{\partial x_2} & \cdots & \frac{\partial g_1}{\partial x_n} \\
\frac{\partial g_2}{\partial x_1} & \frac{\partial g_2}{\partial x_2} & \cdots & \frac{\partial g_2}{\partial x_n} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{\partial g_m}{\partial x_1} & \frac{\partial g_m}{\partial x_2} & \cdots & \frac{\partial g_m}{\partial x_n}
\end{bmatrix}

Can express the sum in the second term in matrix notation:

\sum_{i=1}^{m} \lambda_i \frac{\partial g_i}{\partial x_j} = \left[ \lambda^T Dg(x) \right]_j

It follows that

DF(x, \lambda) = \begin{bmatrix} Df(x) + \lambda^T Dg(x) & g(x)^T \end{bmatrix}


Example
Want to

max/min: 4x_2 - 2x_3
subject to: 2x_1 - x_2 - x_3 = 0
            x_1^2 + x_2^2 - 13 = 0

Start by writing down the Lagrangian:

F(x, \lambda) = f(x) + \lambda_1 g_1(x) + \lambda_2 g_2(x)
             = 4x_2 - 2x_3 + \lambda_1 (2x_1 - x_2 - x_3) + \lambda_2 (x_1^2 + x_2^2 - 13)

Check necessary condition:

Dg(x) = \begin{bmatrix} 2 & -1 & -1 \\ 2x_1 & 2x_2 & 0 \end{bmatrix}

which has rank 2 at every feasible point, since x_1^2 + x_2^2 = 13 rules out x_1 = x_2 = 0

Derivatives of the Lagrangian
The Lagrangian:

F(x, \lambda) = 4x_2 - 2x_3 + \lambda_1 (2x_1 - x_2 - x_3) + \lambda_2 (x_1^2 + x_2^2 - 13)

Gradient of the Lagrangian:

DF(x, \lambda) = \begin{bmatrix} 2\lambda_1 + 2\lambda_2 x_1 \\ 4 - \lambda_1 + 2\lambda_2 x_2 \\ -2 - \lambda_1 \\ 2x_1 - x_2 - x_3 \\ x_1^2 + x_2^2 - 13 \end{bmatrix}^T

The third component gives \lambda_1 = -2 for free
Set DF(x, \lambda) = 0 and solve for x and \lambda:

2\lambda_1 + 2\lambda_2 x_1 = 0
4 - \lambda_1 + 2\lambda_2 x_2 = 0
2x_1 - x_2 - x_3 = 0
x_1^2 + x_2^2 - 13 = 0

Example (continued)
A little algebra gives

x_1 = \frac{2}{\lambda_2} \qquad x_2 = -\frac{3}{\lambda_2} \qquad x_3 = \frac{7}{\lambda_2}

Also know that

x_1^2 + x_2^2 = 13: \quad \left(\frac{2}{\lambda_2}\right)^2 + \left(-\frac{3}{\lambda_2}\right)^2 = \frac{13}{\lambda_2^2} = 13 \quad ⇒ \quad \lambda_2^2 = 1

The critical points are

\lambda = (-2, 1), \quad x = (2, -3, 7), \quad f(x) = -26
\lambda = (-2, -1), \quad x = (-2, 3, -7), \quad f(x) = 26
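The two critical points can be verified numerically: each should satisfy both constraints and make the x-gradient of the Lagrangian vanish. A sketch in Python with NumPy:

```python
import numpy as np

def f(x):
    return 4 * x[1] - 2 * x[2]

def g(x):
    return np.array([2 * x[0] - x[1] - x[2],
                     x[0]**2 + x[1]**2 - 13])

def grad_x_F(x, lam):
    # partial derivatives of F = f + lam1*g1 + lam2*g2 with respect to x
    return np.array([2 * lam[0] + 2 * lam[1] * x[0],
                     4 - lam[0] + 2 * lam[1] * x[1],
                     -2 - lam[0]])

for lam, x in [((-2, 1), (2, -3, 7)), ((-2, -1), (-2, 3, -7))]:
    x = np.array(x, dtype=float)
    assert np.allclose(g(x), 0)              # both constraints hold
    assert np.allclose(grad_x_F(x, lam), 0)  # stationarity holds
    print(f(x))                              # -26.0, then 26.0
```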


Minimum Variance Portfolio
Recall: minimum variance portfolio optimization

minimize: w^T \Sigma w
subject to: e^T w = 1
            \mu^T w = \mu_P

Lagrange's method setup:

f(w) = w^T \Sigma w

g(w) = \begin{bmatrix} g_1(w) \\ g_2(w) \end{bmatrix} = \begin{bmatrix} \mu^T w - \mu_P \\ e^T w - 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}

First, check necessary condition:

Dg(w) = \begin{bmatrix} \mu^T \\ e^T \end{bmatrix}

Derivative of a Quadratic Form
Let A = \begin{bmatrix} a & b \\ b & c \end{bmatrix}
Let f(x) = x^T A x = a x_1^2 + 2b x_1 x_2 + c x_2^2
Then

Df(x) = \begin{bmatrix} 2a x_1 + 2b x_2 & 2b x_1 + 2c x_2 \end{bmatrix} = 2 x^T A

In general, let A be an n \times n symmetric matrix
The derivative (gradient) of the quadratic form f(x) = x^T A x is

Df(x) = 2 x^T A
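The identity Df(x) = 2x^T A can be spot-checked against finite differences. A minimal sketch in Python with NumPy (the symmetric matrix and the point are arbitrary illustrative choices):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])               # symmetric matrix (illustrative)
x = np.array([0.5, -1.5])

f = lambda x: x @ A @ x                  # quadratic form x^T A x
grad = 2 * x @ A                         # claimed gradient 2 x^T A

# central finite differences for each partial derivative
h = 1e-6
fd = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h) for e in np.eye(2)])
print(np.allclose(grad, fd, atol=1e-4))  # True
```

Note the symmetry of A matters: for a general A the gradient is x^T (A + A^T), which reduces to 2 x^T A only when A = A^T.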

Minimum Variance Portfolio
The Lagrangian:

F(w, \lambda) = w^T \Sigma w + \lambda_1 (e^T w - 1) + \lambda_2 (\mu^T w - \mu_P)

Gradient of the Lagrangian:

DF(w, \lambda) = \begin{bmatrix} Df(w) + \lambda^T Dg(w) & g(w)^T \end{bmatrix}
             = \begin{bmatrix} 2 w^T \Sigma + \lambda_1 e^T + \lambda_2 \mu^T & e^T w - 1 & \mu^T w - \mu_P \end{bmatrix}

Find the critical point by solving the linear system:

\begin{bmatrix} 2\Sigma & e & \mu \\ e^T & 0 & 0 \\ \mu^T & 0 & 0 \end{bmatrix}
\begin{bmatrix} w \\ \lambda_1 \\ \lambda_2 \end{bmatrix}
=
\begin{bmatrix} 0 \\ 1 \\ \mu_P \end{bmatrix}
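The block system can be assembled and solved directly. A sketch in Python with NumPy for a hypothetical three-asset problem (\mu, \Sigma, and the target \mu_P are made-up illustrative values, not from the slides):

```python
import numpy as np

mu = np.array([0.08, 0.10, 0.12])          # expected returns (illustrative)
Sigma = np.array([[0.040, 0.006, 0.000],   # covariance matrix (illustrative)
                  [0.006, 0.090, 0.012],
                  [0.000, 0.012, 0.160]])
e = np.ones(3)
mu_P = 0.10                                # target expected return

# Assemble [2*Sigma e mu; e^T 0 0; mu^T 0 0] [w; l1; l2] = [0; 1; mu_P]
n = 3
M = np.zeros((n + 2, n + 2))
M[:n, :n] = 2 * Sigma
M[:n, n], M[n, :n] = e, e
M[:n, n + 1], M[n + 1, :n] = mu, mu
b = np.concatenate([np.zeros(n), [1.0, mu_P]])

sol = np.linalg.solve(M, b)
w, l1, l2 = sol[:n], sol[n], sol[n + 1]
print(w.sum())    # ~ 1.0: the budget constraint holds
print(w @ mu)     # ~ 0.10: the target expected return is met
```

The system is nonsingular whenever \Sigma is positive definite and \mu is not a multiple of e, so `np.linalg.solve` recovers both the weights and the multipliers in one step.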

Minimum Variance Portfolio

Further reading:
Second order conditions, e.g., Theorem 9.2 and Corollary 9.1 in PFME


http://computational-finance.uw.edu
