Math Chapter 7
Notation
• Let $K$ be a subset of $\mathbb{R}$ and $f : K \to \mathbb{R}$. We write $\min_K f = \min_{x \in K} f(x)$ (or, more precisely, $\inf_K f = \inf_{x \in K} f(x)$) for the minimization program but also for its optimal value. And $\arg\min_K f = \arg\min_{x \in K} f(x)$ is the (possibly empty) set of global minima, or simply solutions.
Continuous functions
Assume: $K$ is a compact subset of the metric space $(E, d)$ (for $K \subseteq \mathbb{R}^n$, compact means closed and bounded) and $f$ is continuous on $K$. Then $\arg\min_K f$ is non-empty and compact (and so is $\arg\max_K f$). This is the extreme value theorem.
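As a minimal numerical sketch (assuming SciPy is available; the particular $f$ below is an arbitrary continuous example), a continuous function attains its minimum on the compact set $K = [0, 1]$:

from scipy.optimize import minimize_scalar

# Extreme value theorem illustration: f is continuous and K = [0, 1]
# is compact, so argmin_K f is non-empty and the minimum is attained.
f = lambda x: (x - 0.3) ** 2 + 0.1 * x ** 3  # arbitrary continuous example

res = minimize_scalar(f, bounds=(0.0, 1.0), method="bounded")
print(res.x, res.fun)  # a minimizer in K and the optimal value min_K f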
Convex functions
• Assume that $K$ is a convex subset of $\mathbb{R}$ and $f : K \to \mathbb{R}$ is convex. If $x \in \operatorname{int} K$ (an interior point) is a local minimum, then $x$ is a global minimum as well. Similarly, if $f$ is concave, an (interior) local maximum is a global maximum.
• For any convex program ($K$ convex, $f$ convex) the set of solutions $\arg\min_K f$ is convex (and contains at most a single point if $f$ is strictly convex).
Differentiable functions
Some general ideas that are important in optimization.
• Necessary conditions: if $a$ is an interior local minimum of a twice-differentiable $f$, then $f'(a) = 0$ (first-order condition) and $f''(a)$ is positive semidefinite (second-order condition). However, these are necessary but not sufficient conditions. E.g. $f(x) = x^3$ at $a = 0$ satisfies both ($f'(0) = f''(0) = 0$) and shows that the combination of the first and second order conditions is not sufficient to guarantee a local minimum.
• The following is a sufficient condition that guarantees a local minimum: if $f'(a) = 0$ and $f''(a)$ is positive definite ($f''(a)(x, x) > 0$ for $x \neq 0$), then $a$ is a local minimum of $f$.
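Both conditions are easy to check symbolically. A minimal sketch with SymPy (the test functions $x^2$ and $x^3$ are chosen to contrast the two cases):

import sympy as sp

x = sp.symbols("x")
for f in (x**2, x**3):
    df = sp.diff(f, x)      # first derivative
    d2f = sp.diff(f, x, 2)  # second derivative
    print(f, df.subs(x, 0), d2f.subs(x, 0))
# x**2: f'(0) = 0 and f''(0) = 2 > 0, so the sufficient condition
#       certifies a local minimum at a = 0.
# x**3: f'(0) = 0 but f''(0) = 0; the test is inconclusive, and
#       indeed a = 0 is not a local minimum.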
Unconstrained optimization
Note: For this and the remaining sections, we will assume all functions are twice
continuously differentiable.
$$\max_x f(x)$$
FOC: $f_{x_i}(x^*) = 0$ for $i = 1, \dots, n$.
SOC: the Hessian matrix of $f$ at $x^*$ is negative semidefinite.
• Suppose we want to find the set of values $x_1, x_2, \dots, x_n$ that minimize the function $g(x_1, x_2, \dots, x_n)$. We can write this as follows:
$$\min_x g(x) \quad \Longleftrightarrow \quad \max_x -g(x)$$
and use the same FOC as above. The SOC now requires the Hessian matrix to be positive semidefinite.
• If the Hessian matrix is neither positive semidefinite nor negative semidefinite (i.e., it is indefinite), then the critical point obtained is a saddle point.
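As a sketch of the FOC/SOC machinery (SymPy assumed; the two bivariate polynomials are arbitrary examples): solve the gradient equations, then read off the definiteness of the Hessian from its eigenvalues.

import sympy as sp

x, y = sp.symbols("x y")
examples = (x**2 + y**2,   # Hessian positive definite -> minimum
            x**2 - y**2)   # Hessian indefinite -> saddle point
for f in examples:
    grad = [sp.diff(f, v) for v in (x, y)]
    crit = sp.solve(grad, [x, y], dict=True)  # FOC: gradient = 0
    H = sp.hessian(f, (x, y))
    for pt in crit:
        print(f, pt, H.subs(pt).eigenvals())  # eigenvalue signs = SOC verdict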
Constrained optimization
$$\max_{x_1, \dots, x_n} f(x_1, \dots, x_n)$$
subject to
$$g^1(x_1, \dots, x_n) = 0, \quad g^2(x_1, \dots, x_n) = 0, \quad \dots, \quad g^m(x_1, \dots, x_n) = 0,$$
or, in compact form,
$$\max_x f(x) \quad \text{subject to} \quad g(x) = 0.$$
Form the Lagrangian $L(x, \lambda) = f(x) + \sum_{j=1}^{m} \lambda_j g^j(x)$. The FOC are
$$\frac{\partial L}{\partial x_i} = 0, \quad i = 1, \dots, n, \qquad \frac{\partial L}{\partial \lambda_j} = g^j(x^*) = 0, \quad j = 1, \dots, m.$$
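A minimal SymPy sketch of these conditions (the problem max $xy$ subject to $x + y - 1 = 0$ is an arbitrary example): form the Lagrangian, take the partials, and solve the resulting system.

import sympy as sp

x, y, lam = sp.symbols("x y lam")
f = x * y        # objective
g = x + y - 1    # equality constraint g(x) = 0
L = f + lam * g  # Lagrangian

foc = [sp.diff(L, v) for v in (x, y, lam)]  # dL/dx, dL/dy, dL/dlam
print(sp.solve(foc, [x, y, lam], dict=True))
# -> x* = y* = 1/2, lam* = -1/2 (the sign of lam* depends on how g is written)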
• The Lagrange multipliers have an economic interpretation and are usually called shadow prices.
Let $M(a)$ denote the optimal value of the problem as a function of a vector of parameters $a = (a_1, \dots, a_m)$, and let $x^*, \lambda^*$ be the corresponding optimal solution and multipliers. Then (the envelope theorem),
$$\frac{\partial M(a)}{\partial a_j} = \frac{\partial L(x^*, a, \lambda^*)}{\partial a_j} \quad \text{for } j = 1, \dots, m.$$
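A sketch verifying this identity on the same example with the constraint relaxed to $x + y = a$ (SymPy assumed): here $M(a) = a^2/4$, and $\partial M/\partial a$ coincides with $\partial L/\partial a$ at the optimum, which is exactly the multiplier, i.e. the shadow price.

import sympy as sp

x, y, lam, a = sp.symbols("x y lam a")
L = x * y + lam * (a - x - y)  # Lagrangian with parameter a
sol = sp.solve([sp.diff(L, v) for v in (x, y, lam)], [x, y, lam], dict=True)[0]
M = (x * y).subs(sol)          # optimal value M(a) = a**2 / 4
print(sp.diff(M, a))            # dM/da = a/2
print(sp.diff(L, a).subs(sol))  # dL/da at the optimum = lam* = a/2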
Inequality constraints
$$\max_{x \in \mathbb{R}^n} \underbrace{f(x)}_{\text{objective function}}$$
$$\text{subject to} \quad \underbrace{g^i(x) \geq 0, \quad i = 1, \dots, I}_{\text{inequality constraints}}$$
$$\underbrace{h^j(x) = 0, \quad j = 1, \dots, J}_{\text{equality constraints}}$$
• We can use the Lagrange multipliers as previously discussed for the equality constraints. But for the inequality constraints, we use the so-called Kuhn-Tucker conditions.
• Let $\mu_i$ be the multipliers for the inequality constraints and $\lambda_j$ the multipliers for the equality constraints. The FOC for all constraints are as follows:
$$\nabla f(x^*) + \sum_{i=1}^{I} \mu_i \nabla g^i(x^*) + \sum_{j=1}^{J} \lambda_j \nabla h^j(x^*) = 0,$$
$$\mu_i \geq 0, \quad g^i(x^*) \geq 0, \quad \mu_i g^i(x^*) = 0 \quad \text{(complementary slackness)},$$
while the multipliers $\lambda_j$ on the equality constraints are unrestricted in sign.
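A minimal numerical sketch (SciPy assumed; the quadratic objective and the single inequality constraint are arbitrary choices): maximize $f(x, y) = -(x-2)^2 - (y-2)^2$ subject to $g(x, y) = 1 - x - y \geq 0$. The constraint binds at the solution, so complementary slackness allows a strictly positive multiplier.

import numpy as np
from scipy.optimize import minimize

# SciPy minimizes, so we minimize -f; "ineq" constraints mean fun(v) >= 0.
neg_f = lambda v: (v[0] - 2) ** 2 + (v[1] - 2) ** 2
g = lambda v: 1.0 - v[0] - v[1]

res = minimize(neg_f, x0=np.array([0.0, 0.0]), method="SLSQP",
               constraints=[{"type": "ineq", "fun": g}])
print(res.x)     # approx [0.5, 0.5]
print(g(res.x))  # approx 0: the constraint is active
# At x* = (1/2, 1/2): grad f = (3, 3) and grad g = (-1, -1), so
# stationarity grad f + mu * grad g = 0 holds with mu = 3 >= 0,
# and mu * g(x*) = 0 (complementary slackness) since g(x*) = 0.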