
Mathematics for Economics and Finance

Answer Key to Final Exam


Instructor: Norman Schürhoff

Date: January 19, 2013

Question I (25 points)
1. Are the following statements true or false? Just state TRUE or FALSE. A correct
answer receives a 1 point bonus while a wrong answer receives a 1 point deduction.
State nothing to avoid any deduction.

(a) The determinant of a square matrix A of order n is greater than zero if and only if A is invertible.
(b) The row rank of a matrix A is equal to the column rank if and only if the matrix A is square.
(c) Consider a homogeneous system of equations Ax = b (A is a matrix, b is a vector). The solution is trivial if and only if $\det(A) \neq 0$.
(d) If a matrix A is square and invertible, then $(A')^{-1} = (A^{-1})'$.
(e) Consider two continuous functions f(x) and g(x). The function f(x)/g(x) is continuous.
(f) Consider two concave functions f(x) and g(x). The composition of the functions, g(f(x)), is concave.
(g) If a function f(x) has at least one local minimum, then it also has a global minimum.
(h) A square matrix A is orthogonal if and only if $A'A = I'I$, where I is the identity matrix with the same dimensions as A.
(i) A random variable is a measurable function from the sample space $\Omega$ to the real line $\mathbb{R}$.

2. Are the following statements true or false? Just state TRUE or FALSE. A correct
answer receives a 4 point bonus while a wrong answer receives a 4 point deduction.
State nothing to avoid any deduction.

(a)
\[ \lim_{x\to\infty} \sin\left(\left(\frac{3x^2+\pi}{3x^2-2}\right)^{x^2}\right) = \sin\left(e^{\frac{\pi+3}{2}}\right). \]
(b)
\[ \lim_{x\to 0} x^2 \sin\frac{1}{x} = 0. \]
(c)
\[ \int_{-1}^{1} \frac{dx}{(e^x+1)(x^2+1)} = \frac{\pi}{2}. \]
(d)
\[ \int_{0}^{1} x\sqrt{1+x}\,dx = \frac{4}{15}\left(\sqrt{2}+1\right). \]

Question I, Solution:

1. (a) False, the determinant of an invertible matrix can be negative.
(b) False, the row rank is equal to the column rank for any matrix, square or not (Proposition 2.5).
(c) True, the solution is the zero vector (Algorithm 2.5).
(d) True (Proposition 2.2).
(e) False, g(x) can be equal to zero.
(f) False, only when g(x) is also increasing (Proposition 3.17).
(g) False, consider this counterexample: $f(x) = x - x^3$ has a local minimum at $x = -\frac{1}{\sqrt{3}}$ but no global minimum, since $\lim_{x\to\infty} f(x) = -\infty$.
(h) True (see the definition on p. 60 of the lecture notes).
(i) True (Definition 4.6).
2. (a) False, first one may note that
\[ \lim_{x\to\infty}\left(\frac{3x^2+\pi}{3x^2-2}\right)^{x^2} = \lim_{x\to\infty}\frac{\left(1+\frac{\pi}{3x^2}\right)^{x^2}}{\left(1-\frac{2}{3x^2}\right)^{x^2}} = \lim_{t\to 0}\frac{\left(1+\frac{\pi}{3}t\right)^{1/t}}{\left(1-\frac{2}{3}t\right)^{1/t}} = \frac{e^{\pi/3}}{e^{-2/3}} = e^{\frac{\pi+2}{3}}, \]
where $t = 1/x^2$. Therefore, since the sine function is continuous,
\[ \lim_{x\to\infty}\sin\left(\left(\frac{3x^2+\pi}{3x^2-2}\right)^{x^2}\right) = \sin\left(\lim_{x\to\infty}\left(\frac{3x^2+\pi}{3x^2-2}\right)^{x^2}\right) = \sin\left(e^{\frac{\pi+2}{3}}\right). \]
(b) True, using the inequality $-x^2 \le x^2 \sin\frac{1}{x} \le x^2$ and the Sandwich theorem (see p. 75, Theorem 3.2) one obtains
\[ \lim_{x\to 0}\left(-x^2\right) = \lim_{x\to 0} x^2 \sin\frac{1}{x} = \lim_{x\to 0} x^2 = 0. \]

(c) False, splitting the integration into the two segments $[-1, 0]$ and $[0, 1]$ and introducing the substitution $t = -x$ on the first segment, one obtains
\[ \int_{-1}^{1}\frac{dx}{(e^x+1)(x^2+1)} = \int_{-1}^{0}\frac{dx}{(e^x+1)(x^2+1)} + \int_{0}^{1}\frac{dx}{(e^x+1)(x^2+1)} = \int_{0}^{1}\frac{dt}{(e^{-t}+1)(t^2+1)} + \int_{0}^{1}\frac{dx}{(e^x+1)(x^2+1)} = \int_{0}^{1}\left(\frac{1}{e^{-x}+1}+\frac{1}{e^{x}+1}\right)\frac{dx}{x^2+1}. \]
Since $\frac{1}{e^{-x}+1}+\frac{1}{e^{x}+1} \equiv 1$, thus
\[ \int_{-1}^{1}\frac{dx}{(e^x+1)(x^2+1)} = \int_{0}^{1}\frac{dx}{1+x^2} = \arctan x\,\Big|_{0}^{1} = \frac{\pi}{4}. \]
(d) True, introducing the substitution $t = g(x) = \sqrt{1+x}$, $t \ge 1$, we obtain $x = t^2 - 1$ and $dx = 2t\,dt$. The new integration bounds are $g(0) = 1$ and $g(1) = \sqrt{2}$. Thus
\[ \int_{0}^{1} x\sqrt{1+x}\,dx = \int_{1}^{\sqrt{2}} \left(t^2-1\right) t \cdot 2t\,dt = 2\int_{1}^{\sqrt{2}}\left(t^4 - t^2\right)dt = 2\left(\frac{t^5}{5}-\frac{t^3}{3}\right)\Big|_{1}^{\sqrt{2}} = \frac{4}{15}\left(\sqrt{2}+1\right). \]
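As a cross-check (not part of the original key), the four claims in part 2 can be verified with a computer algebra system. A minimal sketch, assuming Python with sympy is available:

```python
# Sketch: symbolic/numeric verification of the Question I, part 2 answers.
import sympy as sp

x = sp.symbols('x', real=True)

# (a) The log of the inner limit equals (pi + 2)/3, so the power tends to e^((pi+2)/3),
#     not e^((pi+3)/2); the stated equality is therefore FALSE.
log_limit = sp.limit(x**2 * sp.log((3*x**2 + sp.pi) / (3*x**2 - 2)), x, sp.oo)
print(sp.simplify(log_limit - (sp.pi + 2)/3))          # 0

# (b) lim_{x->0} x^2 sin(1/x) = 0: TRUE.
print(sp.limit(x**2 * sp.sin(1/x), x, 0))              # 0

# (c) Numerical quadrature gives ~0.7854 = pi/4, not pi/2: FALSE.
print(sp.Integral(1/((sp.exp(x) + 1)*(x**2 + 1)), (x, -1, 1)).evalf(), (sp.pi/4).evalf())

# (d) int_0^1 x*sqrt(1+x) dx = 4(sqrt(2)+1)/15: TRUE.
lhs = sp.integrate(x*sp.sqrt(1 + x), (x, 0, 1))
print(sp.simplify(lhs - sp.Rational(4, 15)*(sp.sqrt(2) + 1)))   # 0
```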

Question II (25 points)
You are a portfolio manager for the secretive hedge fund VMIN. You are the only one in the fund to have generated positive alpha since your arrival three years ago, and it appears you are the only one who knows how to use quantitative techniques.
You are about to implement a new strategy in which you experiment with a combination of three asset classes. Each asset class 1, 2, and 3 yields the same expected return, but the return variances are different. You now decide to analyse the properties of the portfolio variance, which is given by
\[ V(x_1, x_2, x_3) = 2.5x_1^2 + 2.5x_2^2 + x_3^2 + x_1 x_2, \]
where $x_1$, $x_2$, and $x_3$ are the portfolio weights.

1. What is the variance-covariance matrix A of the asset class returns? Is A symmetric?

2. Suppose you want to minimize the portfolio variance V under the constraint $x_1 + x_2 + x_3 = 1$ and you are allowed to invest in only one asset class. Which asset do you choose (that is, what are the optimal portfolio weights $x_1$, $x_2$, and $x_3$)? What is the portfolio variance in this case?

3. Determine the eigenvalues of A. Arrange them in a diagonal matrix $\Lambda$. Find the eigenvectors corresponding to the eigenvalues of A and arrange them in a matrix C. That is, perform a spectral decomposition of A.

4. Compute $A^{-1}$.

5. Determine the definiteness of the portfolio variance V .

6. Using the results from the previous questions, find the portfolio weights $x_1$, $x_2$, and $x_3$ that minimize the portfolio variance V under the constraint $x_1 + x_2 + x_3 = 1$. What is the portfolio variance in this case? How does your answer differ from 2? What do you conclude?

Hint: $\frac{\partial (x'Ax)}{\partial x} = 2Ax$.
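The hint can be illustrated numerically (this check is not part of the original exam); a small sketch comparing a finite-difference gradient of $x'Ax$ with $2Ax$ for an arbitrary symmetric matrix:

```python
# Sketch: finite-difference check of d(x'Ax)/dx = 2Ax, which holds for symmetric A.
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3))
A = (M + M.T) / 2                        # an arbitrary symmetric matrix
x = rng.normal(size=3)

quad = lambda v: v @ A @ v               # the quadratic form x'Ax
eps = 1e-6
grad_fd = np.array([(quad(x + eps*e) - quad(x - eps*e)) / (2*eps) for e in np.eye(3)])
print(np.allclose(grad_fd, 2 * A @ x, atol=1e-5))   # True
```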

Question II Solution:

1. In matrix notation, $V = x'Ax$, so
\[ A \equiv \begin{pmatrix} \frac{5}{2} & \frac{1}{2} & 0 \\ \frac{1}{2} & \frac{5}{2} & 0 \\ 0 & 0 & 1 \end{pmatrix}. \]
A is symmetric.

2. Without diversification, you invest only in the asset with the lowest variance: $x_3 = 1$. The portfolio variance is then simply the variance of asset 3, which equals 1.
3.
\[ \det(A - \lambda I) = \det\begin{pmatrix} \frac{5}{2}-\lambda & \frac{1}{2} & 0 \\ \frac{1}{2} & \frac{5}{2}-\lambda & 0 \\ 0 & 0 & 1-\lambda \end{pmatrix} = (1-\lambda)\left[\left(\frac{5}{2}-\lambda\right)^2 - \frac{1}{4}\right] = (1-\lambda)(2-\lambda)(3-\lambda) = 0, \]
so the eigenvalues are 1, 2 and 3.

The corresponding eigenvector matrix is:


\[ C = \begin{pmatrix} 0 & 0.7071 & 0.7071 \\ 0 & -0.7071 & 0.7071 \\ 1 & 0 & 0 \end{pmatrix}. \]
By Definition 2.22, $A = C \Lambda C'$ and $A^{-1} = C \Lambda^{-1} C'$.

4. Using the previous result:
\[ A^{-1} = \begin{pmatrix} 0.4167 & -0.0833 & 0 \\ -0.0833 & 0.4167 & 0 \\ 0 & 0 & 1 \end{pmatrix}. \]
5. We can compute the leading principal minors of A:
$D_1 = \frac{5}{2} > 0$,
$D_2 = \det\begin{pmatrix} \frac{5}{2} & \frac{1}{2} \\ \frac{1}{2} & \frac{5}{2} \end{pmatrix} = 6 > 0$,
$D_3 = 1 \cdot D_2 = 6 > 0$.
So we can conclude that the matrix is positive definite (Proposition 2.9, p. 49).
Remark: since the eigenvalues are all positive, this also allows us to conclude that the quadratic form is positive definite.

6. The covariance matrix A is positive definite, and standard optimization leads to the following result for x: $x = \frac{A^{-1}\mathbf{1}}{\mathbf{1}'A^{-1}\mathbf{1}}$, where $\mathbf{1} = (1, 1, 1)'$. Since $\mathbf{1}'A^{-1}\mathbf{1} = 1.667$ and $A^{-1}\mathbf{1} = (0.3333, 0.3333, 1)'$, we get $x \simeq (0.2, 0.2, 0.6)'$. The portfolio variance is then $x'Ax = 1/(\mathbf{1}'A^{-1}\mathbf{1}) = 0.6$, lower than in part 2. Diversification thus changes the composition of the minimum-variance portfolio and reduces its variance.
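The numbers above can be reproduced numerically (this check is not part of the original key); a sketch using numpy, with the matrix A from part 1:

```python
# Sketch: numerical check of the eigenvalues, the inverse, and the minimum-variance weights.
import numpy as np

A = np.array([[2.5, 0.5, 0.0],
              [0.5, 2.5, 0.0],
              [0.0, 0.0, 1.0]])

# Part 3: eigenvalues 1, 2, 3 and spectral decomposition A = C Lambda C'.
lam, C = np.linalg.eigh(A)
print(lam)                                            # [1. 2. 3.]
print(np.allclose(C @ np.diag(lam) @ C.T, A))         # True

# Part 4: A^{-1} = C Lambda^{-1} C'.
A_inv = C @ np.diag(1/lam) @ C.T
print(np.round(A_inv, 4))                             # matches the matrix in part 4

# Part 5: all leading principal minors are positive, so A is positive definite.
print([round(np.linalg.det(A[:k, :k]), 4) for k in (1, 2, 3)])   # [2.5, 6.0, 6.0]

# Part 6: minimum-variance weights x = A^{-1} 1 / (1' A^{-1} 1) and the resulting variance.
one = np.ones(3)
x = A_inv @ one / (one @ A_inv @ one)
print(np.round(x, 4), round(x @ A @ x, 4))            # [0.2 0.2 0.6] 0.6
```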

Question III (25 points)
Consider the production problem faced by widget maker LINU. LINU produces widgets using inputs $(x, y) \in \mathbb{R}_+^2$:
\[ \max_{x,\,y}\ x^a y^b \]
subject to
\[ \text{I}: x \le c \]
and
\[ \text{II}: x + y \le d. \]
The coefficients $(a, b, c, d) \in \mathbb{R}_+^4$ are constant parameters.

1. Set up the optimization problem in standard notation. Simplify the problem if possible.

2. What is a production function of the form $x^a y^b$ called? Under which condition is the objective function homogeneous of degree c? How can one interpret the constraints I and II?

3. Explain under which conditions on the parameters c and d LINU's problem attains a non-trivial solution.

4. Is constraint qualification satisfied? What does this imply?

5. Write down the Lagrangian for this problem.

6. What are the necessary conditions for optimality of $(x^*, y^*)$?

7. Are these conditions sufficient for an optimum? What happens if you assume that a and b are chosen so that f is concave?

8. Assume that f is concave and find the optimal $(x^*, y^*)$ as functions of the parameters a, b, c, d.

9. How does the solution $(x^*, y^*)$ change when the parameter d increases by a small amount $\epsilon > 0$ and at the same time c drops by the same $\epsilon$?

Question III Solution:

1. Students are allowed to drop the positivity constraints on the grounds that realistic problems involve positive quantities, even though, depending on a and b, including the positivity constraints can change the solution, in particular when the optimum can be attained at negative values (for example when a = b = 2). Optimization program:
\[ \max_{x,\,y}\ x^a y^b \quad \text{s.t.} \quad x \le c;\ x + y \le d;\ x \ge 0;\ y \ge 0. \]

2. It is a Cobb-Douglas function. The function is homogeneous of degree c whenever $a + b = c$: denoting the objective function by z, z is homogeneous of degree c if $z(\lambda x, \lambda y) = \lambda^c z(x, y)$, and applying this definition the conclusion is immediate. The constraints represent the fact that the production factors are limited.

3. (x, y) belongs to $\mathbb{R}_+^2$. The objective function is increasing in x and y as long as $c \le d$, and the feasible set is bounded as soon as c and d are finite numbers. Using the Weierstrass theorem, the objective function then attains a maximum on this set.

4. The constraints are linear, so constraint qualification is satisfied. This means that the Kuhn-Tucker conditions are necessary at a maximum.

5. Lagrangian: $\mathcal{L}(x, y, \lambda_1, \lambda_2, \mu_1, \mu_2) = x^a y^b + \lambda_1 (c - x) + \lambda_2 (d - x - y) + \mu_1 x + \mu_2 y$.

Please note that answers without the conditions $x \ge 0$, $y \ge 0$ are also accepted. These conditions are not relevant as long as the objective function is increasing in x and y; in this case the positivity constraints bind only if $c = d = 0$.

6. Necessary conditions at the optimum $(x_*, y_*)$:

Please note that students may disregard the conditions associated with the positivity constraints.
FOC (Lagrange conditions):
\[ \frac{a x_*^a y_*^b}{x_*} - \lambda_1 - \lambda_2 + \mu_1 = 0 \]
\[ \frac{b x_*^a y_*^b}{y_*} - \lambda_2 + \mu_2 = 0 \]

SC:
\[ \lambda_1 (c - x_*) = 0 \]
\[ \lambda_2 (d - x_* - y_*) = 0 \]
\[ \mu_1 x_* = 0 \]
\[ \mu_2 y_* = 0 \]
\[ \lambda_1 \ge 0;\ \lambda_2 \ge 0;\ \mu_1 \ge 0;\ \mu_2 \ge 0 \]
\[ c - x_* \ge 0;\ d - x_* - y_* \ge 0;\ x_* \ge 0;\ y_* \ge 0 \]

7. The conditions are not sufficient as long as we cannot conclude that the objective function is concave. In particular, the necessary conditions alone cannot rule out inflexion points, and we would also have to check corner solutions. If we assume that the objective function is concave, then the conditions become sufficient for a global maximum (Theorems 6.22, 6.23).

8. Using the necessary conditions, four cases are possible; which case applies depends on which constraints are binding, and this in turn depends on the values of c and d. Remember also that we are assuming $c \le d$ in order to consider realistic production problems:

(a) $\lambda_1 = 0$, $\lambda_2 = 0$
(b) $\lambda_1 = 0$, $d - x_* - y_* = 0$
(c) $c - x_* = 0$, $\lambda_2 = 0$
(d) $c - x_* = 0$, $d - x_* - y_* = 0$

(a) Neither constraint is binding. This is the trivial case in which we would choose $x_*$ and $y_*$ as high as possible. It is not admissible, since c and d are finite positive real numbers.

(b) The second constraint is binding. Then $x_* = d - y_*$ and $\frac{a z}{x_*} = \frac{b z}{y_*}$, so $y_* = \frac{b x_*}{a}$. Eventually $x_* = \frac{ad}{a+b}$ and $y_* = \frac{bd}{a+b}$ (a numerical check of this case appears in the sketch at the end of this solution).

(c) The first constraint is binding: $x_* = c$, but then $y_*$ could be chosen as high as possible, because d would be so high that $d - c > y_*$. Again this case is not admissible.

(d) Both constraints are binding: $x_* = c$, $y_* = d - c$.

9. Only two cases are relevant; in the other cases the analysis is trivial, because the solution is not sensitive to the change in the parameters. The total effect is the sum of the partial effects. Considering both relevant cases, denote by S the change in the solution:

(b) $S_{d+} = \left(\frac{a}{a+b};\ \frac{b}{a+b}\right) > (0;\ 0)$ and $S_{c-} = (0;\ 0)$, so the solution and the value function are increasing in this scenario.

(d) $S_{d+} = (0;\ 1)$ and $S_{c-} = (-1;\ 1)$, so the solution is decreasing with respect to $x_*$ but increasing with respect to $y_*$. The total effect on the value function depends on the parameters a and b.
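The closed-form solution in case (b) can be sanity-checked numerically (not part of the original key). A sketch using scipy, with one hypothetical parameter set chosen so that $a + b < 1$ (so f is concave) and constraint I does not bind:

```python
# Sketch: numerical check of x* = a*d/(a+b), y* = b*d/(a+b) when only x + y <= d binds.
# The parameter values are hypothetical and satisfy a + b < 1 and c >= d.
import numpy as np
from scipy.optimize import minimize

a, b, c, d = 0.3, 0.5, 10.0, 4.0

neg_obj = lambda z: -(z[0]**a * z[1]**b)       # maximize x^a y^b by minimizing its negative
cons = [{'type': 'ineq', 'fun': lambda z: c - z[0]},
        {'type': 'ineq', 'fun': lambda z: d - z[0] - z[1]}]
res = minimize(neg_obj, x0=[1.0, 1.0], bounds=[(0, None), (0, None)], constraints=cons)

print(np.round(res.x, 4))                              # ~ [1.5 2.5]
print(round(a*d/(a + b), 4), round(b*d/(a + b), 4))    # closed form: 1.5 2.5
```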

Question IV (25 points)
Let X and Y be two normal random variables with distribution $(X, Y) \sim N(0, \Sigma)$, where
\[ \Sigma = \begin{pmatrix} 1 & \theta \\ \theta & 1 \end{pmatrix}, \]
with $-1 < \theta < 1$. Let $X = (X_1, X_2, \ldots, X_n)$ and $Y = (Y_1, Y_2, \ldots, Y_n)$ be the observations from an iid sample of size n. Let $S_X^2$ be the sample variance of X and $S_Y^2$ the sample variance of Y. Let $S_{XY}$ be the sample covariance between X and Y.

1. What is the variance of X and, respectively, Y? What is the correlation between X and Y?

2. Define the sample covariance $S_{XY}$ in terms of $(X_1, X_2, \ldots, X_n)$ and $(Y_1, Y_2, \ldots, Y_n)$.

3. Which of the following statements are true (explain):

(a) $\bar{X}$ and $S_X^2$ are independent.
(b) $\bar{X}$ and $S_Y^2$ are independent.
(c) $\bar{X}$ and $S_{XY}$ are independent.

4. Compute the conditional expectation $E(Y | X = x)$ and conditional variance $\mathrm{Var}(Y | X = x)$.


Hint: Notice that the joint probability density function of two standard normally distributed variables X and Y is
\[ f_{X,Y}(x, y) = \frac{1}{2\pi\sqrt{1-\theta^2}} \exp\left\{-\frac{1}{2(1-\theta^2)}\left(x^2 - 2\theta xy + y^2\right)\right\}. \]

5. Construct an unbiased estimator for $\theta$. Hint: Find $E(S_{XY})$.

6. Determine the asymptotic distribution of the maximum likelihood estimator $\hat{\theta}^{MLE}$ of $\theta$. Hint: You do not need to compute the estimator itself.

Question IV Solution:

1. The variances of X and Y both equal 1, so the correlation equals the covariance, which equals $\theta$.

2. The sample covariance can be defined as $S_{XY} = \frac{1}{n}\sum_{i=1}^{n} (X_i - \bar{X})(Y_i - \bar{Y})$.
Remark: it is not a mistake to define the sample covariance as $S_{XY} = \frac{1}{n-1}\sum_{i=1}^{n} (X_i - \bar{X})(Y_i - \bar{Y})$ instead.

3. (a) True. First one may note that $\bar{X}$ and $X_1 - \bar{X}, X_2 - \bar{X}, \ldots, X_n - \bar{X}$ are independent, since $\mathrm{Cov}(\bar{X}, X_i - \bar{X}) = 0$ (direct computation) and $\bar{X}$ and $X_i - \bar{X}$ are normally distributed statistics (as linear combinations of the normally distributed observations $X_1, X_2, \ldots, X_n$). Therefore, $\bar{X}$ and the $X_i - \bar{X}$ are independent, and finally $\bar{X}$ and $S_X^2$ are independent.
(b) True. The proof proceeds in the same way as in (a).
(c) True. The proof proceeds in the same way as in (a).

4. Recall that
\[ f_{Y|X}(y|x) = \frac{f_{X,Y}(x, y)}{f_X(x)}, \]
so by direct computation one obtains
\[ f_{Y|X}(y|x) = \frac{1}{\sqrt{2\pi}\sqrt{1-\theta^2}} \exp\left\{-\frac{(y - \theta x)^2}{2(1-\theta^2)}\right\}. \]
One may immediately note that this is the density of a conditional normal distribution,
\[ Y | X = x \sim N(\theta x,\ 1 - \theta^2). \]
Thus the answer is obtained immediately: $E(Y | X = x) = \theta x$, $\mathrm{Var}(Y | X = x) = 1 - \theta^2$.
5. It is known that the sample variances $S_X^2 = \frac{\sum_{i=1}^{n} (X_i - \bar{X})^2}{n-1}$ and $S_Y^2 = \frac{\sum_{i=1}^{n} (Y_i - \bar{Y})^2}{n-1}$ are unbiased estimators of $\mathrm{Var}(X)$ and $\mathrm{Var}(Y)$. By the same token, the sample variance $S_{X+Y}^2 = \frac{\sum_{i=1}^{n} (X_i + Y_i - \bar{X} - \bar{Y})^2}{n-1}$ is an unbiased estimator of $\mathrm{Var}(X + Y)$.
Since
\[ \frac{\sum_{i=1}^{n} (X_i + Y_i - \bar{X} - \bar{Y})^2}{n-1} = \frac{\sum_{i=1}^{n} \left((X_i - \bar{X}) + (Y_i - \bar{Y})\right)^2}{n-1} = \frac{\sum_{i=1}^{n} (X_i - \bar{X})^2}{n-1} + \frac{\sum_{i=1}^{n} (Y_i - \bar{Y})^2}{n-1} + 2\,\frac{\sum_{i=1}^{n} (X_i - \bar{X})(Y_i - \bar{Y})}{n-1} \]
and $\mathrm{Var}(X + Y) = \mathrm{Var}(X) + \mathrm{Var}(Y) + 2\,\mathrm{Cov}(X, Y)$,
we conclude that $\frac{1}{n-1}\sum_{i=1}^{n} (X_i - \bar{X})(Y_i - \bar{Y})$ is an unbiased estimator of $\mathrm{Cov}(X, Y)$.
Therefore, $\hat{\theta} = \frac{n}{n-1} S_{XY}$ (this is checked by simulation in the sketch at the end of this solution).

6. The likelihood function L follows directly from the joint probability density function $f_{X,Y}(x, y)$; taking its logarithm gives:
\[ \ln(L) = -n \ln(2\pi) - \frac{n}{2}\ln\left(1 - \theta^2\right) - \frac{n}{2(1-\theta^2)}\left(\frac{\sum_{i=1}^{n} X_i^2}{n} - 2\theta\,\frac{\sum_{i=1}^{n} X_i Y_i}{n} + \frac{\sum_{i=1}^{n} Y_i^2}{n}\right). \]
And thus
\[ \frac{\partial \ln(L)}{\partial \theta} = \frac{n\theta}{1-\theta^2} - \frac{n\theta}{(1-\theta^2)^2}\left(\frac{\sum_{i=1}^{n} X_i^2}{n} - 2\theta\,\frac{\sum_{i=1}^{n} X_i Y_i}{n} + \frac{\sum_{i=1}^{n} Y_i^2}{n}\right) + \frac{n}{1-\theta^2}\,\frac{\sum_{i=1}^{n} X_i Y_i}{n} = 0, \]
which yields the following maximum likelihood equation:
\[ \theta\left(1 - \theta^2\right) + \left(1 + \theta^2\right)\frac{\sum_{i=1}^{n} X_i Y_i}{n} - \theta\left(\frac{\sum_{i=1}^{n} X_i^2}{n} + \frac{\sum_{i=1}^{n} Y_i^2}{n}\right) = 0. \]
This is a cubic equation that does not have a simple solution. It can be shown that for n large enough it has only one real root (and two complex ones), which is the maximum likelihood estimator.
However, we know that $\hat{\theta}^{MLE}$ converges almost surely to $\theta$ (so $E(\hat{\theta}^{MLE}) \to \theta$), and the variance of $\hat{\theta}^{MLE}$ asymptotically approaches $I_n^{-1}(\theta)$. Thus,
\[ I_n(\theta) = -E\left(\frac{\partial^2 \ln L}{\partial \theta^2}\right) = -nE\left(\frac{1+\theta^2}{(1-\theta^2)^2} + \frac{4\theta}{(1-\theta^2)^2}\,\frac{\sum_{i=1}^{n} X_i Y_i}{n} - \frac{1+3\theta^2}{(1-\theta^2)^3}\left(\frac{\sum_{i=1}^{n} X_i^2}{n} - 2\theta\,\frac{\sum_{i=1}^{n} X_i Y_i}{n} + \frac{\sum_{i=1}^{n} Y_i^2}{n}\right)\right) \]
\[ = -n\left(\frac{1+\theta^2}{(1-\theta^2)^2} + \frac{4\theta}{(1-\theta^2)^2}\,\theta - \frac{1+3\theta^2}{(1-\theta^2)^3}\,(1 - 2\theta\cdot\theta + 1)\right) = -n\,\frac{1 + \theta^2 + 4\theta^2 - 2 - 6\theta^2}{(1-\theta^2)^2} = n\,\frac{1+\theta^2}{(1-\theta^2)^2}. \]
Therefore, for $n \to \infty$ we have
\[ \sqrt{n}\left(\hat{\theta}^{MLE} - \theta\right) \to N\left(0,\ \frac{(1-\theta^2)^2}{1+\theta^2}\right). \]
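Parts 5 and 6 can be cross-checked by simulation (not part of the original key). A sketch that verifies the unbiasedness of $\frac{n}{n-1}S_{XY}$ and compares the sampling variance of the MLE with the asymptotic formula; the numerical likelihood maximization is an illustrative stand-in for solving the cubic likelihood equation, and the values of theta, n and reps are arbitrary choices:

```python
# Sketch: Monte Carlo check of the unbiased estimator (part 5) and the MLE's
# asymptotic variance (part 6).
import numpy as np
from scipy.optimize import minimize_scalar

theta, n, reps = 0.4, 200, 2000
rng = np.random.default_rng(1)
cov = np.array([[1.0, theta], [theta, 1.0]])

def neg_loglik(t, x, y):
    # average negative log-likelihood of the bivariate standard normal with
    # correlation t (additive constants dropped)
    q = np.mean(x**2 - 2*t*x*y + y**2)
    return 0.5*np.log(1 - t**2) + q / (2*(1 - t**2))

est_unbiased, est_mle = [], []
for _ in range(reps):
    x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    s_xy = np.mean((x - x.mean()) * (y - y.mean()))        # S_XY with divisor n
    est_unbiased.append(n/(n - 1) * s_xy)
    res = minimize_scalar(neg_loglik, bounds=(-0.99, 0.99), args=(x, y), method='bounded')
    est_mle.append(res.x)

print(np.mean(est_unbiased))                               # ~ 0.4 (unbiasedness)
print(n * np.var(est_mle))                                 # ~ 0.6
print((1 - theta**2)**2 / (1 + theta**2))                  # asymptotic value: 0.608...
```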
