Introduction to Optimization

The document discusses financial engineering and risk management, focusing on linear and nonlinear optimization techniques. It covers the hedging problem, linear programming duality, and the Lagrangian method for constrained optimization. Key concepts include optimization problems, constraints, and methods for finding local and global minima in various contexts.

Financial Engineering and Risk Management

Review of linear optimization

Martin Haugh Garud Iyengar


Columbia University
Industrial Engineering and Operations Research
Hedging problem

$d$ assets
Prices at time $t = 0$: $p \in \mathbb{R}^d$

Market in $m$ possible states at time $t = 1$

Price of asset $j$ in state $i$ is $S_{ij}$:

$$
S_j = \begin{bmatrix} S_{1j} \\ S_{2j} \\ \vdots \\ S_{mj} \end{bmatrix},
\qquad
S = \begin{bmatrix} S_1 & S_2 & \cdots & S_d \end{bmatrix}
  = \begin{bmatrix}
      S_{11} & S_{12} & \cdots & S_{1d} \\
      S_{21} & S_{22} & \cdots & S_{2d} \\
      \vdots & \vdots & \ddots & \vdots \\
      S_{m1} & S_{m2} & \cdots & S_{md}
    \end{bmatrix} \in \mathbb{R}^{m \times d}
$$

Hedge an obligation $X \in \mathbb{R}^m$
Have to pay $X_i$ if state $i$ occurs
Buy/short-sell $\theta = (\theta_1, \ldots, \theta_d)^\top$ shares to cover the obligation
Hedging problem (contd)

Position $\theta \in \mathbb{R}^d$ purchased at time $t = 0$
$\theta_j$ = number of shares of asset $j$ purchased, $j = 1, \ldots, d$
Cost of the position: $\sum_{j=1}^{d} p_j \theta_j = p^\top \theta$

Payoff from liquidating the position at time $t = 1$
Payoff $y_i$ in state $i$: $y_i = \sum_{j=1}^{d} S_{ij} \theta_j$
Stacking the payoffs for all states: $y = S\theta$
The payoff vector lies in the range of $S$:

$$
y = \begin{bmatrix} S_1 & S_2 & \cdots & S_d \end{bmatrix}
    \begin{bmatrix} \theta_1 \\ \theta_2 \\ \vdots \\ \theta_d \end{bmatrix}
  = \sum_{j=1}^{d} \theta_j S_j \in \operatorname{range}(S)
$$

The payoff $y$ hedges $X$ if $y \geq X$ (componentwise).
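As a quick numerical illustration (the market data below are made up, not from the lecture), the stacked payoff $y = S\theta$ and the componentwise hedging check can be computed directly with NumPy:

```python
import numpy as np

# Hypothetical market: m = 3 states, d = 2 assets.
# S[i, j] = price of asset j in state i at time t = 1.
S = np.array([[1.0, 0.0],
              [1.0, 2.0],
              [1.0, 4.0]])

X = np.array([4.0, 6.0, 9.0])   # obligation: pay X[i] if state i occurs
theta = np.array([5.0, 1.0])    # position: shares held of each asset

y = S @ theta                    # stacked payoffs, one entry per state
print(y)                         # [5. 7. 9.]
print(bool(np.all(y >= X)))      # True: this position hedges X
```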

Hedging problem (contd)

Optimization problem:

$$
\begin{array}{ll}
\min_\theta & \sum_{j=1}^{d} p_j \theta_j \quad (\equiv p^\top \theta) \\
\text{subject to} & \sum_{j=1}^{d} S_{ij} \theta_j \geq X_i, \; i = 1, \ldots, m \quad (\equiv S\theta \geq X)
\end{array}
$$

Features of this optimization problem:
Linear objective function: $p^\top \theta$
Linear inequality constraints: $S\theta \geq X$

This is an example of a linear program:
Linear objective function, either minimized or maximized
Linear inequality and equality constraints

$$
\begin{array}{ll}
\max/\min_x & c^\top x \\
\text{subject to} & A_{eq} x = b_{eq} \\
& A_{in} x \leq b_{in}
\end{array}
$$
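The hedging LP can be handed to an off-the-shelf solver. Below is a minimal sketch with SciPy's `linprog` on a hypothetical two-asset, two-state market; since short selling is allowed, the variables must be explicitly left unbounded (`linprog` defaults to $x \geq 0$), and the constraint $S\theta \geq X$ is negated to fit the solver's $A_{ub} x \leq b_{ub}$ form:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical market: 2 states, 2 assets.
p = np.array([1.0, 2.0])            # asset prices at t = 0
S = np.array([[1.0, 0.0],
              [1.0, 3.0]])          # state-contingent prices at t = 1
X = np.array([10.0, 16.0])          # obligation per state

# min p^T theta  s.t.  S theta >= X.  Rewrite as -S theta <= -X;
# bounds=(None, None) allows short positions (theta_j < 0).
res = linprog(c=p, A_ub=-S, b_ub=-X, bounds=[(None, None)] * 2)

print(res.x)     # optimal hedge, approximately [10, 2]
print(res.fun)   # minimum hedging cost p^T theta = 14
```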
Linear programming duality

Linear program:

$$
P = \min_x \; c^\top x \quad \text{subject to} \quad Ax \geq b
$$

Dual linear program:

$$
D = \max_u \; b^\top u \quad \text{subject to} \quad A^\top u = c, \quad u \geq 0
$$

Theorem.
Weak duality: $P \geq D$.
Bound: if $x$ is feasible for the primal and $u$ is feasible for the dual, then $c^\top x \geq P \geq D \geq b^\top u$.
Strong duality: if $P$ or $D$ is finite, then $P = D$.
The dual of the dual is the primal (original) problem.
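Weak and strong duality can be checked numerically by solving both problems. A sketch with SciPy's `linprog` on made-up data (`linprog` only minimizes, so the dual objective is negated):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data for the primal  P = min c^T x  s.t.  Ax >= b.
c = np.array([2.0, 3.0])
A = np.array([[1.0, 1.0],
              [1.0, 2.0]])
b = np.array([4.0, 6.0])

# Primal: rewrite Ax >= b as -Ax <= -b; x unbounded.
primal = linprog(c=c, A_ub=-A, b_ub=-b, bounds=[(None, None)] * 2)

# Dual:  D = max b^T u  s.t.  A^T u = c, u >= 0  (minimize -b^T u).
dual = linprog(c=-b, A_eq=A.T, b_eq=c, bounds=[(0, None)] * 2)

P, D = primal.fun, -dual.fun
print(P, D)   # both equal (approximately 10): strong duality holds
```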
More duality results

Here is another primal-dual pair:

$$
\min_x \{ c^\top x : Ax = b \} \;=\; \max_u \{ b^\top u : A^\top u = c \}
$$

General idea for constructing duals: for any $u \geq 0$,

$$
\begin{aligned}
P &= \min \{ c^\top x : Ax \geq b \} \\
  &\geq \min \{ c^\top x - u^\top (Ax - b) : Ax \geq b \} \\
  &\geq b^\top u + \min \{ (c - A^\top u)^\top x : x \in \mathbb{R}^n \} \\
  &= \begin{cases} b^\top u & A^\top u = c \\ -\infty & \text{otherwise} \end{cases}
\end{aligned}
$$

Therefore $P \geq \max \{ b^\top u : A^\top u = c, \; u \geq 0 \}$.

Lagrangian relaxation: dualize the constraints and relax them!
Financial Engineering and Risk Management
Review of nonlinear optimization

Martin Haugh Garud Iyengar


Columbia University
Industrial Engineering and Operations Research
Unconstrained nonlinear optimization

Optimization problem:

$$
\min_{x \in \mathbb{R}^n} f(x)
$$

Categorization of minimum points:
$x^*$ is a global minimum if $f(y) \geq f(x^*)$ for all $y$
$x^*_{\mathrm{loc}}$ is a local minimum if $f(y) \geq f(x^*_{\mathrm{loc}})$ for all $y$ such that $\|y - x^*_{\mathrm{loc}}\| \leq r$, for some $r > 0$

Necessary conditions for a local minimum:

Gradient $\nabla f(x) = \begin{bmatrix} \frac{\partial f}{\partial x_1} \\ \vdots \\ \frac{\partial f}{\partial x_n} \end{bmatrix} = 0$: local stationarity

Hessian $\nabla^2 f(x) = \begin{bmatrix} \frac{\partial^2 f}{\partial x_1^2} & \cdots & \frac{\partial^2 f}{\partial x_1 \partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial^2 f}{\partial x_n \partial x_1} & \cdots & \frac{\partial^2 f}{\partial x_n^2} \end{bmatrix}$ positive semidefinite

If the gradient vanishes and the Hessian is positive definite, these conditions are also sufficient for a local minimum.

The gradient condition alone is sufficient for a global minimum if the function $f$ is convex.
Unconstrained nonlinear optimization

Optimization problem:

$$
\min_{x \in \mathbb{R}^2} \; x_1^2 + 3x_1 x_2 + x_2^3
$$

Gradient:

$$
\nabla f(x) = \begin{bmatrix} 2x_1 + 3x_2 \\ 3x_1 + 3x_2^2 \end{bmatrix} = 0
\;\Rightarrow\;
x = 0 \;\text{ or }\; x = \begin{bmatrix} -\tfrac{9}{4} \\ \tfrac{3}{2} \end{bmatrix}
$$

Hessian at $x$:

$$
H = \begin{bmatrix} 2 & 3 \\ 3 & 6x_2 \end{bmatrix}
$$

At $x = 0$: $H = \begin{bmatrix} 2 & 3 \\ 3 & 0 \end{bmatrix}$. Not positive semidefinite, so $x = 0$ is not a local minimum (it is a saddle point).

At $x = \begin{bmatrix} -\tfrac{9}{4} \\ \tfrac{3}{2} \end{bmatrix}$: $H = \begin{bmatrix} 2 & 3 \\ 3 & 9 \end{bmatrix}$. Positive definite, so this is a local minimum.
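The stationarity and curvature checks above can be reproduced numerically; a small sketch with NumPy, where `eigvalsh` returns the eigenvalues of a symmetric matrix in ascending order:

```python
import numpy as np

# f(x) = x1^2 + 3*x1*x2 + x2^3, the example above.
def grad(x1, x2):
    return np.array([2 * x1 + 3 * x2, 3 * x1 + 3 * x2 ** 2])

def hessian(x1, x2):
    return np.array([[2.0, 3.0],
                     [3.0, 6.0 * x2]])

for point in [(0.0, 0.0), (-9 / 4, 3 / 2)]:
    g = grad(*point)
    eig = np.linalg.eigvalsh(hessian(*point))
    print(point, g, eig)

# At (0, 0) the gradient vanishes but the Hessian has a negative
# eigenvalue: a saddle point, not a local minimum.  At (-9/4, 3/2)
# the gradient vanishes and both eigenvalues are positive: local min.
```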
Lagrangian method

Constrained optimization problem:

$$
\max_{x \in \mathbb{R}^2} \; 2\ln(1 + x_1) + 4\ln(1 + x_2)
\quad \text{s.t.} \quad x_1 + x_2 = 12
$$

A convex problem, but the constraint makes it harder to solve directly.

Form the Lagrangian function:

$$
L(x, v) = 2\ln(1 + x_1) + 4\ln(1 + x_2) - v(x_1 + x_2 - 12)
$$

Compute the stationary points of the Lagrangian as a function of $v$:

$$
\nabla_x L(x, v) = \begin{bmatrix} \frac{2}{1 + x_1} - v \\[4pt] \frac{4}{1 + x_2} - v \end{bmatrix} = 0
\;\Rightarrow\;
x_1 = \frac{2}{v} - 1, \quad x_2 = \frac{4}{v} - 1
$$

Substituting into the constraint $x_1 + x_2 = 12$, we get

$$
\frac{6}{v} - 2 = 12
\;\Rightarrow\;
v = \frac{3}{7}
\;\Rightarrow\;
x = \frac{1}{3}\begin{bmatrix} 11 \\ 25 \end{bmatrix}
$$
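The closed-form solution can be verified in a few lines of Python; the comparison point $(6, 6)$ is just an arbitrary feasible alternative, not from the lecture:

```python
import math

# Stationarity of the Lagrangian gives x1 = 2/v - 1, x2 = 4/v - 1;
# the constraint x1 + x2 = 12 then pins down the multiplier v.
v = 6.0 / 14.0          # from 6/v - 2 = 12, i.e. v = 3/7
x1 = 2.0 / v - 1.0      # = 11/3
x2 = 4.0 / v - 1.0      # = 25/3

f = lambda a, b: 2 * math.log(1 + a) + 4 * math.log(1 + b)

print(abs(x1 + x2 - 12) < 1e-9)   # True: constraint satisfied
print(f(x1, x2) > f(6, 6))        # True: beats another feasible point
```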
Portfolio Selection

Optimization problem:

$$
\max_x \; \mu^\top x - \lambda x^\top V x
\quad \text{s.t.} \quad \mathbf{1}^\top x = 1
$$

The constraint makes the problem hard!

Lagrangian function:

$$
L(x, v) = \mu^\top x - \lambda x^\top V x - v(\mathbf{1}^\top x - 1)
$$

Maximize over $x$ with no constraints:

$$
\nabla_x L(x, v) = \mu - 2\lambda V x - v\mathbf{1} = 0
\;\Rightarrow\;
x = \frac{1}{2\lambda} V^{-1}(\mu - v\mathbf{1})
$$

Solve for $v$ from the constraint:

$$
\mathbf{1}^\top x = 1
\;\Rightarrow\;
\mathbf{1}^\top V^{-1}(\mu - v\mathbf{1}) = 2\lambda
\;\Rightarrow\;
v = \frac{\mathbf{1}^\top V^{-1}\mu - 2\lambda}{\mathbf{1}^\top V^{-1}\mathbf{1}}
$$

Substitute back into the expression for $x$.
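A numerical sketch of this two-step recipe, on made-up inputs $\mu$, $V$, and $\lambda$ (any symmetric positive-definite $V$ works):

```python
import numpy as np

# Hypothetical mean returns, covariance matrix, and risk aversion.
mu = np.array([0.10, 0.08, 0.12])
V = np.array([[0.04, 0.01, 0.00],
              [0.01, 0.03, 0.01],
              [0.00, 0.01, 0.05]])
lam = 2.0
ones = np.ones(3)

Vinv = np.linalg.inv(V)

# Step 1: multiplier v from the budget constraint.
v = (ones @ Vinv @ mu - 2 * lam) / (ones @ Vinv @ ones)

# Step 2: substitute back to get the optimal portfolio.
x = Vinv @ (mu - v * ones) / (2 * lam)

print(np.isclose(x.sum(), 1.0))                          # True: budget holds
print(np.allclose(mu - 2 * lam * V @ x - v * ones, 0))   # True: stationarity
```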
