Session 9 2018

1) The document discusses the chain rule and Lagrange optimization. 2) The chain rule allows derivatives of composite functions to be found by multiplying derivatives along the chain of dependencies. Lagrange optimization finds maxima and minima of functions subject to constraints. 3) An example applies the chain rule to find partial derivatives involving intermediate variables, and Lagrange optimization to maximize a function subject to a constraint.

Uploaded by

Mehroze Elahi

The chain rule Lagrange optimization

Introduction to Mathematics
of Finance

Session 9

February 18, 2018



Outline

1 The chain rule

2 Lagrange optimization

Differentiable function

A function z = f(x, y) is differentiable at (x0, y0) if fx(x0, y0) and fy(x0, y0) exist and ∆z satisfies an equation of the form

∆z = fx(x0, y0)∆x + fy(x0, y0)∆y + ε1∆x + ε2∆y,

in which each of ε1, ε2 → 0 as both ∆x, ∆y → 0.

Here

∆z = f(x0 + ∆x, y0 + ∆y) − f(x0, y0).

We call f differentiable if it is differentiable at every point in its domain.

Chain rule

Chain rule for functions of two independent variables
If w = f (x, y ) has continuous partial derivatives
fx and fy and if x = x(t), y = y (t) are
differentiable functions of t, then the composite
w = f (x(t), y (t)) is a differentiable function of t
and
df/dt = fx(x(t), y(t)) · x′(t) + fy(x(t), y(t)) · y′(t),

or

dw/dt = (∂f/∂x)(dx/dt) + (∂f/∂y)(dy/dt)
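The formula can be sanity-checked numerically: differentiate w(t) = f(x(t), y(t)) with a finite difference and compare with the chain-rule sum. A minimal sketch in Python; the particular f, x(t) and y(t) below are illustrative choices, not from the slides:

```python
import math

# Illustrative choices (not from the slides): f(x, y) = x*y + sin(x),
# with x(t) = cos(t) and y(t) = t**2
def f(x, y): return x * y + math.sin(x)
def fx(x, y): return y + math.cos(x)   # partial derivative f_x
def fy(x, y): return x                 # partial derivative f_y

def xt(t): return math.cos(t)
def yt(t): return t ** 2
def dxt(t): return -math.sin(t)        # x'(t)
def dyt(t): return 2 * t               # y'(t)

t, h = 0.7, 1e-6

# dw/dt by a central finite difference...
numeric = (f(xt(t + h), yt(t + h)) - f(xt(t - h), yt(t - h))) / (2 * h)
# ...versus the chain-rule sum fx * x'(t) + fy * y'(t)
chain = fx(xt(t), yt(t)) * dxt(t) + fy(xt(t), yt(t)) * dyt(t)
print(numeric, chain)  # the two agree to about six decimal places
```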
[Tree diagram: the dependent variable w = f(x, y) sits at the top, the intermediate variables x and y in the middle, and the independent variable t at the bottom. To find dw/dt, multiply the derivatives along each branch of the tree, then add the products:

dw/dt = (∂w/∂x)(dx/dt) + (∂w/∂y)(dy/dt).]

Chain rule for two independent and three intermediate variables

If w = f(x, y, z) is differentiable and x = g(r, s), y = h(r, s) and z = k(r, s) are all differentiable, then w has partial derivatives with respect to r and s, given by the formulas

∂w/∂r = (∂w/∂x)(∂x/∂r) + (∂w/∂y)(∂y/∂r) + (∂w/∂z)(∂z/∂r)

∂w/∂s = (∂w/∂x)(∂x/∂s) + (∂w/∂y)(∂y/∂s) + (∂w/∂z)(∂z/∂s)
The first of these equations can be derived from the chain rule in Theorem 6 by holding s fixed and treating r as t. The second can be derived in the same way, holding r fixed and treating s as t. The tree diagrams for both equations are shown in Figure 14.19.

[Tree diagrams for Theorem 7: w = f(x, y, z) at the top, intermediate variables x, y, z in the middle, and the independent variables r and s at the bottom. Each path from w down to r (or s) contributes the product of the derivatives along its branches, and the products are added:

∂w/∂r = (∂w/∂x)(∂x/∂r) + (∂w/∂y)(∂y/∂r) + (∂w/∂z)(∂z/∂r)
∂w/∂s = (∂w/∂x)(∂x/∂s) + (∂w/∂y)(∂y/∂s) + (∂w/∂z)(∂z/∂s)]


Example

Express ∂w/∂r and ∂w/∂s in terms of r and s if

w = x + 2y + z², x = r/s, y = r² + ln s, z = 2r
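Since all the intermediate variables are given explicitly, one way to check a chain-rule computation of these partials is to substitute x, y, z into w and differentiate directly. A sketch using SymPy (assuming it is available):

```python
import sympy as sp

r, s = sp.symbols("r s", positive=True)

# Intermediate variables from the example
x = r / s
y = r**2 + sp.log(s)
z = 2 * r
w = x + 2 * y + z**2

# Differentiating after substitution agrees with the chain-rule computation
dw_dr = sp.simplify(sp.diff(w, r))   # 12*r + 1/s
dw_ds = sp.simplify(sp.diff(w, s))   # 2/s - r/s**2
print(dw_dr, dw_ds)
```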

Lagrange optimization

Lagrange optimization is a technique that lets us find the maximum and minimum of a multivariate function f(x1, x2, · · · , xn) when there is some constraint on the independent variables.

The constraint looks like

g(x1, x2, · · · , xn) ≤ c

Given PKR 5000, how should you split spending across products x, y and z so that the bundle is most satisfactory to you?
For a given level of risk, maximize return.
For a given expected return, minimize risk.

Lagrange Method

The problem at hand is to

max f (x, y )
subject to
g(x, y ) ≤ c

Note that we have two unknown variables and one constraint.

Steps to follow

1. Formulate the Lagrangian as follows:

L(x, y, λ) := f(x, y) − λ (g(x, y) − c)

2. Then (x∗, y∗, λ∗) is a solution to the original problem if

∂L/∂x (x∗, y∗, λ∗) = 0
∂L/∂y (x∗, y∗, λ∗) = 0
λ∗ · [g(x∗, y∗) − c] = 0
λ∗ ≥ 0
g(x∗, y∗) ≤ c

Example

Maximize F(x, y) = 2y + x
subject to g(x, y) = y² + xy − 1 = 0
Formulate the Lagrangian

L(x, y, λ) = 2y + x − λ (y² + xy − 1)

Find ∂L/∂x, ∂L/∂y and ∂L/∂λ, and set each of them equal to zero.

We get

1 − λy = 0 (1)
2 − 2y λ − xλ = 0 (2)
y 2 + xy − 1 = 0 (3)

along with λ ≥ 0 and λ · [y² + xy − 1] = 0



Let's try to solve the equations to find x, y and λ.

Equation (1) gives λ = 1/y.
Plugging that into equation (2) gives x = 0 (for λ ≠ 0).
Plugging the two results into equation (3) gives λ = ±1.
Since y = 1/λ, we get y = ±1.
The stationary points are therefore x = 0, y = ±1, λ = ±1; the condition λ ≥ 0 selects λ = 1, so the maximum F = 2 is attained at (x, y) = (0, 1).
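The same stationarity system can also be handed to a computer algebra system; a sketch with SymPy (assuming it is available), reproducing the solutions above:

```python
import sympy as sp

x, y, lam = sp.symbols("x y lam", real=True)
L = 2 * y + x - lam * (y**2 + x * y - 1)  # the Lagrangian from the example

# Stationarity in x and y, plus the binding constraint g(x, y) = 0
eqs = [sp.diff(L, x), sp.diff(L, y), y**2 + x * y - 1]
solutions = sp.solve(eqs, [x, y, lam], dict=True)
print(solutions)  # x = 0, y = ±1, lam = ±1
```

Of the two stationary points, (0, 1) with λ = 1 satisfies λ ≥ 0 and gives the maximum F = 2.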

Example 2

Maximize f (x, y ) = 2x + y subject to x 2 + y 2 = 1
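A sketch of the same recipe for this example with SymPy (assuming it is available): form the Lagrangian, set the partials to zero, solve, and compare f at the stationary points.

```python
import sympy as sp

x, y, lam = sp.symbols("x y lam", real=True)
L = 2 * x + y - lam * (x**2 + y**2 - 1)  # Lagrangian for Example 2

# Stationarity in x and y, plus the constraint x^2 + y^2 = 1
eqs = [sp.diff(L, x), sp.diff(L, y), x**2 + y**2 - 1]
solutions = sp.solve(eqs, [x, y, lam], dict=True)

# Evaluate f = 2x + y at each stationary point and keep the largest
best = max(solutions, key=lambda s_: 2 * s_[x] + s_[y])
print(best, sp.simplify(2 * best[x] + best[y]))  # maximum value sqrt(5)
```

The stationary points are ±(2/√5, 1/√5), and the plus sign gives the maximum f = √5.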



Minimization problem

Problem:

min f (x, y )
subject to
g(x, y ) ≥ b

You can use any of the following for the minimization problem:

1. Change g(x, y) ≥ b to −g(x, y) ≤ −b and proceed as in the maximization case above.
2. Replace f by −f, keep everything else in the same form, and proceed as before.
3. Put the multiplier into the Lagrangian with a plus sign instead of a minus sign.
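As an illustration of option 2, the minimization counterpart of Example 2 (minimize 2x + y on the circle x² + y² = 1) can be handled by maximizing −f. A SymPy sketch, assuming it is available:

```python
import sympy as sp

x, y, lam = sp.symbols("x y lam", real=True)

# Option 2: replace f = 2x + y by -f and maximize as before
L = -(2 * x + y) - lam * (x**2 + y**2 - 1)

eqs = [sp.diff(L, x), sp.diff(L, y), x**2 + y**2 - 1]
solutions = sp.solve(eqs, [x, y, lam], dict=True)

# The maximizer of -f is the minimizer of f
best = max(solutions, key=lambda s_: -(2 * s_[x] + s_[y]))
print(best[x], best[y], 2 * best[x] + best[y])  # minimum value -sqrt(5)
```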
