Exercise Part I
borrowed from
Further Mathematics for Economic Analysis
by Knut Sydsaeter, Peter Hammond, Atle Seierstad and Arne Strom,
with additional ones from the lecture notes of
Jean-Marc Bonnisseau
Exercise 1 (SHSS 13.1, 5) Sketch the set S = {(x, y) ∈ R² | x > 0, y ≥ 1/x} in
the plane. Is S closed?
Exercise 3 (SHSS 13.1, 7) Consider the following three subsets of R²:
A = {(x, y) : y = 1, x ∈ ∪_{n=1}^{∞} (2n, 2n + 1)}
B = {(x, y) : y ∈ (0, 1), x ∈ ∪_{n=1}^{∞} (2n, 2n + 1)}
C = {(x, y) : y = 1, x ∈ ∪_{n=1}^{∞} [2n, 2n + 1]}
For each of these sets determine whether it is open, closed, or neither.
Exercise 5 (SHSS 13.1, 11) Show by an example that the union of infinitely
many closed sets need not be closed.
Exercise 6 (harder) (SHSS 13.1, 14) Prove that the empty set ∅ and the whole
space Rn are the only sets in Rn that are both open and closed.
Exercise 7 (SHSS 2.2, 2) Determine which of the following sets are convex by
drawing each in the plane.
(a) {(x, y) | x² + y² < 2};
(b) {(x, y) | x ≥ 0, y ≥ 0};
(c) {(x, y) | x² + y² > 8};
(d) {(x, y) | x ≥ 0, y ≥ 0, xy ≥ 1};
(e) {(x, y) | xy ≤ 1};
(f) {(x, y) | √x + √y ≤ 2}.
Exercise 8 (SHSS 2.2, 3) Let S be the set of all points (x₁, . . ., xₙ) in Rⁿ that
satisfy all the m inequalities
a₁₁x₁ + a₁₂x₂ + . . . + a₁ₙxₙ ≤ b₁
a₂₁x₁ + a₂₂x₂ + . . . + a₂ₙxₙ ≤ b₂
. . . . . .
aₘ₁x₁ + aₘ₂x₂ + . . . + aₘₙxₙ ≤ bₘ
Prove that S is convex.
Exercise 9 (SHSS 2.2, 4) If S and T are two sets in Rn and a and b are scalars,
let W = aS + bT denote the set of all points of the form ax + by, where x ∈ S and
y ∈ T . (Then W is called a linear combination of the two sets.) Prove that if S
and T are both convex, then so is W = aS + bT .
Exercise 11 (harder) (SHSS 2.2, 7) (a) Let S be a set of real numbers with
the property that if x₁, x₂ ∈ S, then the midpoint ½(x₁ + x₂) also belongs to S.
Show by an example that S is not necessarily convex.
(b) Does it make any difference if S is closed?
Exercise 14 (harder) (SHSS 13.3, 1) Prove that the set S = {(x, y) | 2x − y <
2 and x − 3y < 5} is open in R2 .
Exercise 17 (harder) (SHSS 13.3, 4) For a fixed a in Rⁿ, prove that the function f : Rⁿ → R defined by f(x) = d(x, a) is continuous.
(a) Put F(x, y) = f(x² + y²). Find the gradient ∇F at an arbitrary point
and show that it is parallel to the straight line segment joining the point and the
origin.
(b) Put G(x, y) = f(y/x). Find ∇G at an arbitrary point where x ≠ 0, and
show that it is orthogonal to the straight line segment joining the point and the
origin.
Exercise 20 (harder) (SHSS 2.1, 6) Suppose that f(x, y) has continuous partial
derivatives. Suppose too that the maximum directional derivative of f at
(0, 0) is equal to 4, and that it is attained in the direction given by the vector
from the origin to the point (1, 3). Find ∇f (0, 0).
Exercise 23 (SHSS 1.5, 1) For the following matrices, find the eigenvalues and
also those eigenvectors that correspond to the real eigenvalues:
(a)   2  −7
      3  −8
(b)   2   4
     −2   6
(c)   1   4
      6  −1
(d)   2  0  0
      0  3  0
      0  0  4
(e)   2  1  −1
      0  1   1
      2  0  −2
(f)   1  −1   0
     −1   2  −1
      0  −1   1
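For matrices of this size the eigenvalue computation is routine to check numerically. The following sketch is purely illustrative (it is not part of the original exercise) and uses NumPy to confirm a hand calculation for matrix (a), whose characteristic polynomial is λ² + 6λ + 5.

# Illustrative numerical check for matrix (a); not part of the exercise itself.
import numpy as np

A = np.array([[2.0, -7.0],
              [3.0, -8.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)    # roots of lambda^2 + 6*lambda + 5, i.e. -1 and -5
print(eigenvectors)   # column i is an eigenvector for eigenvalues[i]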
Exercise 24 (SHSS 1.5, 2) (a) Compute X′AX, A², and A³ when
A =  a  a  0
     a  a  0
     0  0  b
and X = (x, y, z)′.
(b) Find all the eigenvalues of A.
(c) The characteristic polynomial p(λ) of A is a cubic function of λ. Show
that if we replace λ by A, then p(A) is the zero matrix. (This is a special case
of the Cayley-Hamilton theorem.)
Exercise 25 (SHSS 1.5, 5) Let
A =  −2  −1   4
      2   1  −2
     −1  −1   3
x₁ = (1, 0, 1)′, x₂ = (1, −1, 0)′, x₃ = (1, 1, 1)′.
(a) Verify that x₁, x₂, and x₃ are eigenvectors of A, and find the associated
eigenvalues.
(b) Let B = AA. Show that Bx₂ = x₂ and Bx₃ = x₃. Is Bx₁ = x₁?
(c) Let C be an arbitrary n × n matrix such that C³ = C² + C. Prove that if
λ is an eigenvalue of C, then λ³ = λ² + λ. Show that C + Iₙ has an inverse.
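A quick numerical cross-check of part (a) is sketched below (illustrative only; the exercise expects the verification to be done by hand): for each candidate vector, the product Ax should be a scalar multiple of x.

# Illustrative cross-check of Exercise 25(a): A @ x should be proportional to x.
import numpy as np

A = np.array([[-2.0, -1.0,  4.0],
              [ 2.0,  1.0, -2.0],
              [-1.0, -1.0,  3.0]])
for x in (np.array([1.0, 0.0, 1.0]),
          np.array([1.0, -1.0, 0.0]),
          np.array([1.0, 1.0, 1.0])):
    print(A @ x, x)   # the first vector printed is a multiple of the second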
Find the characteristic equation of Aₖ and determine the values of k that make
all the eigenvalues real. What are the eigenvalues if k = 3?
(b) Show that the columns of P are eigenvectors of A₃, and compute the matrix
product P′A₃P. What do you see?
Exercise 27 (SHSS 1.6, 3) (a) Prove that if A = PDP⁻¹, where P and D are
n × n matrices, then A² = PD²P⁻¹.
(b) Show by induction that Aᵐ = PDᵐP⁻¹ for every positive integer m.
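The identity in (b) is easy to sanity-check numerically. The sketch below uses an assumed example matrix (not taken from the text); NumPy's eigendecomposition supplies P and D, and Aᵐ is compared with PDᵐP⁻¹.

# Illustrative check that A^m = P D^m P^{-1} for a diagonalizable matrix.
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])          # assumed example with distinct eigenvalues
eigvals, P = np.linalg.eig(A)       # the columns of P are eigenvectors of A
D = np.diag(eigvals)
m = 5
lhs = np.linalg.matrix_power(A, m)
rhs = P @ np.linalg.matrix_power(D, m) @ np.linalg.inv(P)
print(np.allclose(lhs, rhs))        # True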
Exercise 28 (SHSS 1.7, 5) Using a result of the course, determine the definiteness of
(a) Q = x₁² + 8x₂²
(b) Q = 5x₁² + 2x₁x₃ + 2x₂² + 2x₂x₃ + 4x₃²
(c) Q = −(x₁ − x₂)²
(d) Q = −3x₁² + 2x₁x₂ − x₂² + 4x₂x₃ − 8x₃²
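The "result of the course" referred to is the usual test on the symmetric coefficient matrix of the form (signs of its eigenvalues, or of its leading principal minors). Purely as an illustration, the sketch below runs both tests on the form in (b), written as Q = x′Ax.

# Illustrative definiteness check for (b): Q = 5x1^2 + 2x1x3 + 2x2^2 + 2x2x3 + 4x3^2.
import numpy as np

A = np.array([[5.0, 0.0, 1.0],
              [0.0, 2.0, 1.0],
              [1.0, 1.0, 4.0]])     # symmetric coefficient matrix of Q
print(np.linalg.eigvalsh(A))        # all eigenvalues positive => positive definite
print([np.linalg.det(A[:k, :k]) for k in range(1, 4)])   # leading principal minors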
Exercise 29 (SHSS 1.7, 6) Let A = (aᵢⱼ)ₙ×ₙ be symmetric and positive semidefinite. Prove that A is positive definite if and only if |A| ≠ 0.
Exercise 30 (SHSS 1.7, 7) (a) For what values of c is the quadratic form
Exercise 34 (SHSS 13.6, 5) Some books in economics have suggested the following generalisation of Minkowski's separating hyperplane theorem: two
convex sets in Rⁿ with only one point in common can be separated by a hyperplane. Is this statement correct? What about the assertion that two convex sets
in Rⁿ with disjoint interiors can be separated by a hyperplane?
Exercise 35 (SHSS 2.3, 1) Which of the functions whose graphs are shown in
the figure are (presumably) convex/concave, strictly concave/strictly convex?
Exercise 37 (SHSS 2.3, 3) (a) Show that f(x, y) = ax² + 2bxy + cy² + px + qy + r is
strictly concave if ac − b² > 0 and a < 0, whereas it is strictly convex if ac − b² > 0
and a > 0.
(b) Find necessary and sufficient conditions for f(x, y) to be concave/convex.
Exercise 38 (SHSS 2.3, 4) For what values of the constant a is the following
function concave/convex?
Exercise 41 (SHSS 2.3, 7) Let f be defined for all x in Rⁿ by f(x) = ‖x‖ =
√(x₁² + . . . + xₙ²). Prove that f is convex. Is f strictly convex? (Hint: Use the
triangle inequality for the norm.)
Exercise 42 (SHSS 2.3, 8) Show that the CES function f defined for v₁ > 0,
v₂ > 0 by
f(v₁, v₂) = A(δ₁v₁^(−ρ) + δ₂v₂^(−ρ))^(−1/ρ)    (A > 0, ρ ≠ 0, δ₁ > 0, δ₂ > 0)
is concave for ρ ≥ −1 and convex for ρ ≤ −1, and that it is strictly concave if
ρ > −1.
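A rough numerical sanity check of the concavity claim can be run for one assumed parameter choice with ρ > −1 (below: A = 1, ρ = 1, δ₁ = 0.4, δ₂ = 0.6). The sketch only tests midpoint concavity at randomly drawn points and is no substitute for the proof the exercise asks for.

# Illustrative midpoint-concavity check of the CES function for assumed parameters.
import numpy as np

A, rho, d1, d2 = 1.0, 1.0, 0.4, 0.6   # assumed values with rho > -1

def f(v1, v2):
    return A * (d1 * v1**(-rho) + d2 * v2**(-rho))**(-1.0 / rho)

rng = np.random.default_rng(1)
u = rng.uniform(0.5, 5.0, size=2)
w = rng.uniform(0.5, 5.0, size=2)
mid = 0.5 * (u + w)
print(f(*mid) >= 0.5 * f(*u) + 0.5 * f(*w))   # True when midpoint concavity holds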
Exercise 43 (SHSS 2.3, 9) (a) The Cobb-Douglas function z = f(x) = x₁^{a₁} x₂^{a₂} · · · xₙ^{aₙ}
(a₁ > 0, . . ., aₙ > 0) is defined for all x₁ > 0, . . ., xₙ > 0. Prove that the kth
leading principal minor of the Hessian f''(x) is
Dₖ = [a₁a₂ · · · aₖ / (x₁x₂ · · · xₖ)²] zᵏ det(Mₖ),
where Mₖ is the k × k matrix whose diagonal entries are a₁ − 1, a₂ − 1, . . ., aₖ − 1
and whose ith row has all off-diagonal entries equal to aᵢ.
Exercise 45 (SHSS 2.4, 2) Apply Jensen's inequality to f(x) = ln(x), with
λ₁ = . . . = λₙ = 1/n, to prove that
(x₁x₂ · · · xₙ)^(1/n) ≤ (x₁ + x₂ + . . . + xₙ)/n    for x₁ > 0, . . ., xₙ > 0.
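The inequality to be derived is the arithmetic-geometric mean inequality; a quick numerical spot-check (illustrative only) is sketched below.

# Illustrative spot-check of the AM-GM inequality for arbitrary positive numbers.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.1, 10.0, size=6)
geometric_mean = x.prod() ** (1.0 / x.size)
arithmetic_mean = x.mean()
print(geometric_mean <= arithmetic_mean)   # True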
Exercise 46 (SHSS 2.4, 6) Prove that f(x, y) = x⁴ + y⁴ defined on R² is strictly
convex by showing that the gradient is a subgradient.
Exercise 48 (B chap 2, 15) Let (uν) and (vν) be two sequences. We assume
that (uν) is convergent. Show that if the set {ν ∈ N | uν ≠ vν} is finite, then
(vν) is convergent and has the same limit as (uν).
We assume that (uν) is not convergent. Show that if the set {ν ∈ N | uν ≠ vν}
is finite, then (vν) is not convergent.
Exercise 49 (B chap 2, 18) Give the closure and the interior of the following
subsets of Rⁿ:
Rⁿ;
Rⁿ₊;
Rⁿ₊₊;
in R², {(x, y) ∈ R² | x + y ≥ 0, x² + y² ≤ 1};
Exercise 51 (B chap 2, 27) We consider the linear space L(Rⁿ, Rᵖ) with the
norm NL, and f an element of L(Rⁿ, Rⁿ). We consider the mapping Φ from
L(Rⁿ, Rⁿ) to itself defined by Φ(g) = g ◦ f.
1) Show that Φ is a linear mapping. Show that it is Lipschitz continuous with
Lipschitz constant NL(f).
Same question with Ψ defined by Ψ(g) = f ◦ g.
2 On optimization
Optimization in Economics: examples. Existence result: the Weierstrass theorem.
Exercise 58 (B chap 1, 3) Let f be a function defined on C. Suppose that
ϕ : X ⊂ R → R is an increasing function and that f(c) ∈ X for all c ∈ C.
Consider the three problems
(P₁) max f(x) subject to x ∈ C,   (P₂) max ϕ(f(x)) subject to x ∈ C,   (P₃) min −ϕ(f(x)) subject to x ∈ C.
1) Prove that these three problems are equivalent, that is, that their sets
of solutions are the same.
2) Prove that if ϕ is continuous and val(P₁) ∈ X, then val(P₂) = ϕ(val(P₁)).
3) Show that if there exists a solution, then val(P₂) = ϕ(val(P₁)).
4) Consider f(x) = x, C = ]0, 1[ and ϕ equal to the ceiling function, that
is, ϕ(x) is the smallest element of Z greater than or equal to x: ϕ(x) = min{z ∈ Z |
x ≤ z}. Compute val(P₁), val(P₂) and ϕ(val(P₁)).
Exercise 59 (B chap 1, 5)
1) Prove that the function x → ax² + bx + c with a > 0 is coercive.
2) Prove that the function x → ax³ + bx² + cx + d with a ≠ 0 is not coercive.
For which values of (a, b, c) does this problem have a solution? For which values of
(a, b, c) does this problem have a finite value?
When a solution exists, compute the solution and give the value of the problem.
Show that the problem
(P(a))    Maximise ax − eˣ subject to x ∈ R,
where a is a positive real number, has a solution.
3 Unconstrained optimisation
Looking for unconstrained optima: FOC; SOC.
Exercise 67 (SHSS 3.1, 4) Find the functions x∗ (r) and y ∗ (r) such that x =
x∗ (r) and y = y ∗ (r) solve the problem
Exercise 68 (SHSS 3.1, 5) Find the solutions x∗(r, s) and y∗(r, s) of the problem
max_{x,y} f(x, y, r, s) = r²x² + 3s²y − x² − 8y²

defined on R³ has only one stationary point. Show that it is a local minimum
point.
Exercise 70 (SHSS 3.2, 2) (a) Let f be defined for all (x, y) by f(x, y) = x³ +
y³ − 3xy. Show that (0, 0) and (1, 1) are the only stationary points, and compute
the quadratic form associated with the Hessian matrix of f at the stationary points.
(b) Check the definiteness of the quadratic form at the stationary points.
(c) Classify the stationary points: local minimum, local maximum, or saddle
point.
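As an illustration of the mechanics (not a replacement for the argument the exercise asks for), the sketch below uses SymPy to recover the stationary points of f and to evaluate the Hessian at them; checking the definiteness of the associated quadratic forms is left to the exercise.

# Illustrative SymPy sketch for Exercise 70: stationary points and Hessians of f.
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x**3 + y**3 - 3*x*y
gradient = [sp.diff(f, v) for v in (x, y)]
print(sp.solve(gradient, (x, y)))   # stationary points
H = sp.hessian(f, (x, y))
print(H.subs({x: 0, y: 0}))         # Hessian at (0, 0)
print(H.subs({x: 1, y: 1}))         # Hessian at (1, 1)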
Exercise 71 (SHSS 3.2, 3) Classify the stationary points of
(a) f(x, y, z) = x² + x²y + y²z + z² − 4z
(b) f(x₁, x₂, x₃, x₄) = 20x₂ + 48x₃ + 6x₄ + 8x₁x₂ − 4x₁² − 12x₃² − x₄² − 4x₂³
Exercise 72 (SHSS 3.2, 4) Suppose f (x, y) has only one stationary point
(x∗ , y ∗ ) which is a local minimum point. Is (x∗ , y ∗ ) necessarily a global mini-
mum point? It may be surprising that the answer is no. Prove this by examining
the function defined for all (x, y) by f(x, y) = (1 + y)³x² + y². (Hint: Look at
f (x, −2) as x → ∞.)
Exercise 73 (B chap 3, 48) For the following functions, find the critical points,
that is, the points where the gradient vanishes.
1) f(x, y) = ln(1 + xy) on {(x, y) ∈ R² | xy > −1}
2) f(x, y) = xy² + xy − 2x − 12y
3) f(x, y, z) = −2x² − 2xy − xz − ½y² + 2xz − 2z² + x − 2y − z
4) f(x, y) = x²y² − 4x² − y²
5) f(x, y) = 2x⁴ + 2x²y + y² − 2x² + 1
6) f(x, y) = 1/√(x² + y²) + 1/√((x − 1)² + y²) on R² \ {(0, 0), (1, 0)}
4 Optimisation with equality constraints
Optimization problem with equality constraints. Necessary conditions for optimality: Theorem of Lagrange. The Lagrangian function: interpretation of the
Lagrange multipliers.
Equality constraints: Second order conditions. Sufficient conditions for local
optimality. Sufficient conditions for global optimality.
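To illustrate the Lagrange first-order conditions described above, here is a minimal SymPy sketch on an assumed toy problem (maximise −x² − y² subject to x + y = 1), which is not one of the exercises below.

# Illustrative Lagrange first-order conditions for an assumed toy problem.
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)
f = -x**2 - y**2
g = x + y - 1                       # constraint written as g(x, y) = 0
L = f - lam * g                     # the Lagrangian
foc = [sp.diff(L, v) for v in (x, y, lam)]
print(sp.solve(foc, (x, y, lam)))   # stationary point of the Lagrangian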
Exercise 76 (SHSS 3.3, 1) (a) Solve the problem max −x² − y² − z² subject
to x + 2y + z = a.
(b) Compute the optimal value function f ∗ (a) and verify that the derivative
of the value function is equal to the multiplier.
Exercise 79 (SHSS 3.3, 4) (a) Solve the utility maximizing problem (assuming
m ≥ 4)
max U(x₁, x₂) = ½ ln(1 + x₁) + ¼ ln(1 + x₂) subject to 2x₁ + 3x₂ = m
(b) With U∗(m) as the indirect utility function, show that dU∗/dm = λ.
Exercise 80 (SHSS 3.3, 5) (a) Solve the problem max 1 − rx² − y² subject to
x + y = m, with r > 0.
(b) Find the value function f∗(r, m), compute ∂f∗/∂r and ∂f∗/∂m, and
verify that they are equal to the partial derivatives of the Lagrangian computed
at the solution.
then the expenditure on good j is the following linear function of prices and
income:
pⱼxⱼ∗ = αⱼm + pⱼaⱼ − αⱼ(p₁a₁ + . . . + pₙaₙ),   j = 1, 2, . . ., n
(b) Let U∗(p, m) = U(x∗) denote the indirect utility function. Verify Roy's
identity:
∂U∗/∂pᵢ = ∂L/∂pᵢ = −λxᵢ∗,   i = 1, . . ., n
Exercise 83 (SHSS 3.3, 8) (a) Find the solution of the following problem by
solving the constraints for x and y:
minimize x² + (y − 1)² + z² subject to x + y = √2 and x² + y² = 1
(b) Note that there are three variables and two constraints (z does not appear
in the constraints). Show that the condition on the matrix of the partial
derivatives of the constraints is not satisfied, and that there are no Lagrange
multipliers for which the Lagrangian is stationary at the solution point.
Assume that the coefficient matrix A = (aᵢⱼ) of the quadratic form Q is symmetric,
and prove that Q attains maximum and minimum values over the set S which are
equal to the largest and smallest eigenvalues of A. (Hint: Consider first the case
n = 2. Write Q(x) as Q(x) = x′Ax. The first-order conditions give Ax = λx.)
Exercise 87 (SHSS 3.4, 2) Compute B₂ and B₃, the determinants of the
bordered Hessians of order 2 and 3, for the problem
max(min) x² + y² + z² subject to x + y + z = 1
Show that the second-order conditions for a local minimum are satisfied.
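How a bordered Hessian can be assembled in practice is sketched below on an assumed toy problem (optimise xy subject to x + 2y = 3), deliberately different from the problem of the exercise; the determinant printed at the end is the one entering the second-order conditions.

# Illustrative bordered Hessian for a toy problem: optimise xy subject to x + 2y = 3.
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)
f = x * y
g = x + 2*y - 3
L = f - lam * g
B = sp.Matrix([
    [0,             sp.diff(g, x),    sp.diff(g, y)],
    [sp.diff(g, x), sp.diff(L, x, 2), sp.diff(L, x, y)],
    [sp.diff(g, y), sp.diff(L, x, y), sp.diff(L, y, 2)],
])
print(B)        # the bordered Hessian
print(B.det())  # its determinant, used in the second-order conditions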
Exercise 88 (SHSS 3.4, 3) Use the second order sufficient conditions to classify
the candidates for optimality in the problem
local max(min) x + y + z subject to x² + y² + z² = 1 and x − y − z = 1
Compute the unique point satisfying the first order necessary condition. Are
the second order necessary conditions satisfied at this point?
Show that there exists a unique point satisfying the first order necessary condition.
Exercise 93 (B chap 4, 60) For the following problems, find the points satisfying
the first order necessary conditions (minimum or maximum):
Optimise (1/3)x − (1/4)y
    x² − 2x + y² = 0
Optimise ln x + ln y + ln z
    x² + y² + z² = 3
    x > 0, y > 0, z > 0
Optimise 4x² + y²
    xy + 2 = 0
Optimise xy
    x² + 4y² − 8 = 0
Optimise 2y⁴ − 2xy² + x² − 4y² + 2x + 2
    −x + y² − 2 = 0
Optimise x + 3y − z
    x² + 3y² + z² − 2√(x² + 3y²) − 4 = 0
Optimise x² − (3/2)x + y² − (3/2)y
    x² + y² − 2xy − x − y = 0
Optimise 4x + y + 2
    ln x + 2 ln y = 0
    x > 0, y > 0
Optimise −(3/2)xy + (5/2)y + (8/3)x − 11/6
    x² + y − 1 = 0
Exercise 94 (B chap 4, 61) For the above optimisation problems, write explicitly
the associated Lagrangian mapping and check if the second order necessary
condition is satisfied or not at the points satisfying the first order necessary
condition.