Math Econ Lecture 3


Matrix Algebra

Mathematical Economics

Lecture 3

© Hui Xiao. All rights reserved.

1 / 33
Matrix Algebra

It is clear that the Algebra above for solving the Equilibrium Model becomes cumbersome and unwieldy quickly, even as n increases only slightly.

We resort to Matrix Algebra for handling large systems of simultaneous Equations.

In a way, Matrix Algebra offers a higher-dimensional perspective on Modeling in general, both figuratively and literally.


A Two-Commodity Market Model

Recall the Two-Commodity Market Model:

Qd1 = a0 + a1 P1 + a2 P2 ,
Qs1 = b0 + b1 P1 + b2 P2 ,
Qd1 = Qs1 ,                                  (1)
Qd2 = α0 + α1 P1 + α2 P2 ,
Qs2 = β0 + β1 P1 + β2 P2 ,
Qd2 = Qs2 .

Qd1 = Qs1 ⟹ a0 + a1 P1 + a2 P2 = b0 + b1 P1 + b2 P2
          ⟹ a0 − b0 + (a1 − b1)P1 + (a2 − b2)P2 = 0,       (2)
Qd2 = Qs2 ⟹ α0 + α1 P1 + α2 P2 = β0 + β1 P1 + β2 P2
          ⟹ α0 − β0 + (α1 − β1)P1 + (α2 − β2)P2 = 0.


A Two-Commodity Market Model


To further simplify,

a0 − b0 + (a1 − b1)P1 + (a2 − b2)P2 = 0 ⟺ c0 + c1 P1 + c2 P2 = 0,
α0 − β0 + (α1 − β1)P1 + (α2 − β2)P2 = 0 ⟺ γ0 + γ1 P1 + γ2 P2 = 0,       (3)

where ci = ai − bi , γi = αi − βi , i = 0, 1, 2. Then

c1 P1 + c2 P2 = −c0 ⟹ P2 = −(c0 + c1 P1)/c2 ,
γ1 P1 + γ2 P2 = −γ0 .                                                   (4)

Thus, eliminating P2 leaves one Equation in one Endogenous Variable P1:

γ1 P1 + γ2 P2 = −γ0 ⟹ γ1 P1 − γ2 (c0 + c1 P1)/c2 = −γ0 ⟹ P1∗ = (c2 γ0 − c0 γ2)/(c1 γ2 − c2 γ1).    (5)

Then solve for P2∗ by substituting P1∗ = (c2 γ0 − c0 γ2)/(c1 γ2 − c2 γ1):

c1 P1∗ + c2 P2∗ = −c0 ⟹ c1 (c2 γ0 − c0 γ2)/(c1 γ2 − c2 γ1) + c2 P2∗ = −c0
⟹ c2 P2∗ = −c0 − c1 (c2 γ0 − c0 γ2)/(c1 γ2 − c2 γ1) ⟹ P2∗ = (c0 γ1 − c1 γ0)/(c1 γ2 − c2 γ1).       (6)
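The elimination above can be checked symbolically. A minimal sketch using Python's sympy library (the library choice is mine, not part of the lecture): we solve the two linearized equilibrium conditions and confirm the solver's closed forms match (5) and (6).

```python
import sympy as sp

c0, c1, c2, g0, g1, g2, P1, P2 = sp.symbols("c0 c1 c2 gamma0 gamma1 gamma2 P1 P2")

# The two linearized equilibrium conditions written as expressions equal to 0:
# c0 + c1*P1 + c2*P2 = 0 and gamma0 + gamma1*P1 + gamma2*P2 = 0.
sol = sp.solve([c0 + c1 * P1 + c2 * P2, g0 + g1 * P1 + g2 * P2], [P1, P2])

# Differences against the closed forms (5) and (6) should simplify to zero.
diff_P1 = sp.simplify(sol[P1] - (c2 * g0 - c0 * g2) / (c1 * g2 - c2 * g1))
diff_P2 = sp.simplify(sol[P2] - (c0 * g1 - c1 * g0) / (c1 * g2 - c2 * g1))
print(diff_P1, diff_P2)  # both 0
```

The zero differences confirm the hand-derived P1∗ and P2∗.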

Stochasticness

Restrictions Needed: c1 γ2 − c2 γ1 ≠ 0 for P1∗ and P2∗ to exist. For P1∗ > 0 and P2∗ > 0,
c2 γ0 − c0 γ2 and c1 γ2 − c2 γ1 must have the same sign, and c0 γ1 − c1 γ0 and c1 γ2 − c2 γ1 must have the same sign.

Economically Meaningful: with a stochastic price P1∗ = (c2 γ0 − c0 γ2)/(c1 γ2 − c2 γ1) + ε > 0,
we need ε > −(c2 γ0 − c0 γ2)/(c1 γ2 − c2 γ1). With ε ∼ N(µ, σ²), requiring

µ − 4σ > −(c2 γ0 − c0 γ2)/(c1 γ2 − c2 γ1) ⟹ (c2 γ0 − c0 γ2)/(c1 γ2 − c2 γ1) + µ − 4σ > 0

gives a four-standard-deviation lower bound on P1∗.

Given a sample of equilibrium (wheat) prices {P1i∗}, i = 1, . . . , n, estimate the population µ, σ² with the sample mean and standard deviation:

µ̂ = (1/n) Σᵢ P1i∗ ,    σ̂ = sqrt( Σᵢ (P1i∗ − µ̂)² / (n − 1) ),

so the bound becomes µ̂ − 4σ̂ > −(c2 γ0 − c0 γ2)/(c1 γ2 − c2 γ1) ⟹ (c2 γ0 − c0 γ2)/(c1 γ2 − c2 γ1) > 4σ̂ − µ̂.

Alternatively, standardize the equilibrium wheat price first, then estimate µ, σ². Since
P1∗ = (c2 γ0 − c0 γ2)/(c1 γ2 − c2 γ1) + ε, the stochastic P1∗'s statistical properties are

P1∗ ∼ N( (c2 γ0 − c0 γ2)/(c1 γ2 − c2 γ1) + µ, σ² )
⟹ ( P1∗ − ((c2 γ0 − c0 γ2)/(c1 γ2 − c2 γ1) + µ) ) / σ ∼ N(0, 1),

and the bound 0 − 4 × 1 on the standardized price is enforced by restricting the parameters.
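The sample estimation of µ and σ can be illustrated by simulation. A minimal numpy sketch with illustrative numbers of my own choosing (the deterministic part and the shock moments are hypothetical, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reduced-form values: det_part stands in for
# (c2*g0 - c0*g2)/(c1*g2 - c2*g1); mu, sigma are the assumed moments of epsilon.
det_part = 5.0
mu, sigma = 1.0, 0.5

# Sample of n stochastic equilibrium prices P1i* = det_part + eps_i.
n = 10_000
p1 = det_part + rng.normal(mu, sigma, size=n)

mu_hat = p1.mean() - det_part   # sample estimate of mu
sigma_hat = p1.std(ddof=1)      # sample standard deviation, n-1 denominator

# Four-standard-deviation lower bound on P1*; positive means the stochastic
# price is economically meaningful at that confidence.
bound = det_part + mu_hat - 4 * sigma_hat
print(bound > 0)
```

With these numbers the bound is roughly 5 + 1 − 2 = 4 > 0, so the restriction is satisfied.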

Matrix Algebra

Instead of eliminating by brute force, what if we rewrite the following into Matrix Form:

a0 − b0 + (a1 − b1)P1 + (a2 − b2)P2 = 0,
α0 − β0 + (α1 − β1)P1 + (α2 − β2)P2 = 0,       (7)

⟹

[ a1 − b1   a2 − b2 ] [ P1 ]   [ b0 − a0 ]
[ α1 − β1   α2 − β2 ] [ P2 ] = [ β0 − α0 ],     (8)

⟹

[ P1∗ ]   [ a1 − b1   a2 − b2 ]⁻¹ [ b0 − a0 ]
[ P2∗ ] = [ α1 − β1   α2 − β2 ]   [ β0 − α0 ],   (9)

which is much more convenient and elegant than brute-force elimination.


Then Q1∗ and Q2∗ can be obtained by plugging P1∗ and P2∗ into the respective commodities' Demand or Supply Equations.
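System (8) can be solved numerically. A minimal numpy sketch with hypothetical demand/supply parameters (the numbers are illustrative only, chosen so the coefficient matrix is nonsingular and the solution prices are positive):

```python
import numpy as np

# Hypothetical parameters (a0,a1,a2), (b0,b1,b2), (alpha...), (beta...).
a = np.array([10.0, -2.0, 1.0])
b = np.array([-2.0, 3.0, -1.0])
alpha = np.array([15.0, 1.0, -1.0])
beta = np.array([-1.0, -1.0, 4.0])

# Coefficient matrix and constant vector of (8).
A = np.array([[a[1] - b[1], a[2] - b[2]],
              [alpha[1] - beta[1], alpha[2] - beta[2]]])
d = np.array([b[0] - a[0], beta[0] - alpha[0]])

P = np.linalg.solve(A, d)       # [P1*, P2*]
assert np.allclose(A @ P, d)    # the prices indeed satisfy both equations
print(P)
```

`np.linalg.solve` handles the inversion in (9) internally, which generalizes painlessly as the number of commodities grows.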


Matrix Algebra

Naturally, the focus of the problem then becomes to explicitly solve

[ a1 − b1   a2 − b2 ]⁻¹
[ α1 − β1   α2 − β2 ]  .     (10)

To do this, we need to dive further into Matrix Algebra.



Matrix Algebra

As more commodities or Variables enter an increasingly large General Equilibrium Model, solving for the Equilibrium becomes increasingly cumbersome.

We resort to Matrix Algebra because it:

Gives a shorthand way of writing a large system of Equations compactly.

Allows convenient testing for the existence of solutions to large simultaneous systems.

Detects Inconsistency (a fallacy, not your Tesla) and Functional Dependence (redundancy, the same Tesla).

Allows solving large simultaneous systems conveniently and concisely.

In outer space, the problems on Earth seem so small (a Higher-Dimensional Perspective).


Drawback

Minor Drawback:

Matrix Algebra only applies to Linear Equation systems, which places restrictions on the Economic Model's Functional Forms.

The Assumption of Linearity costs the Economic Model some of its ability to translate the actual Economy into Mathematics accurately.

However,

Economic Models are all meant as rough representations of the actual Economy,

and Linear Models often provide sufficiently close approximations of actual Nonlinear relationships.

We trade some degree of modelling accuracy for modelling tractability (easier access to models' solutions).


Transformation of Functional Forms


Besides, log-linear transformations can be applied to convert Nonlinear systems to Linear ones. Example:
Consider a Nonlinear Partial Equilibrium Model:

Qd1 = P1^a1 · P0^a2 ,  a1 < 0,  Demand downward-sloping (normal good),
Qs1 = P1^b1 · P0^b2 ,  b1 > 0,  Supply upward-sloping (normal good),     (11)
Qd1 = Qs1 .

Recall that a1 < 0, e.g. P1^(−3) = 1/P1³ = (1/P1)³, gives a downward-sloping Demand (normal good), and P1 moves along Qd1.
and P1 moves along Qd1 .
Taking the logarithm of both sides of every Equation in the Model, which preserves the economic supply and demand relationships, gives

log Qd1 = a1 log P1 + a2 log P0 ,  Qd1 > 0, P1 > 0, P0 > 0,
log Qs1 = b1 log P1 + b2 log P0 ,  Qs1 > 0,                     (12)
log Qd1 = log Qs1 ,

which is a system of Equations Linear in (the logarithms of) the Endogenous Variables Qd1, Qs1, and P1, with P0 as an Exogenous Variable.

Transformation of Functional Forms

Solving the Model:

log P1∗ = ((b2 − a2)/(a1 − b1)) log P0 ⟹ P1∗ = e^(log P1∗) = P0^((b2 − a2)/(a1 − b1)) > 0.     (13)

Plugging P1∗ into the Supply Equation:

log Q∗ = b1 log P1∗ + b2 log P0 = ( b1 (b2 − a2)/(a1 − b1) + b2 ) log P0 = ( (a1 b2 − a2 b1)/(a1 − b1) ) log P0

⟹ Q∗ = e^(log Q∗) = P0^((a1 b2 − a2 b1)/(a1 − b1)) > 0.

Make the solutions Economically Meaningful: a1 − b1 ̸= 0.
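The closed forms for P1∗ and Q∗ can be checked numerically. A minimal numpy sketch with hypothetical elasticities (a1 < 0, b1 > 0, and a1 − b1 ≠ 0; the numbers are mine, not the lecture's):

```python
import numpy as np

# Hypothetical elasticities and exogenous price.
a1, a2 = -2.0, 0.5
b1, b2 = 1.0, 0.3
P0 = 4.0

# Closed forms from (13) and the Supply substitution.
P1_star = P0 ** ((b2 - a2) / (a1 - b1))
Q_star = P0 ** ((a1 * b2 - a2 * b1) / (a1 - b1))

# Check: at P1*, demand equals supply (the market clears) and both equal Q*.
Qd = P1_star ** a1 * P0 ** a2
Qs = P1_star ** b1 * P0 ** b2
assert np.isclose(Qd, Qs) and np.isclose(Qd, Q_star)
print(P1_star, Q_star)
```

The assertion confirms that the log-linear solution indeed clears the original nonlinear market.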


[Figure: nonlinear Supply (one cannot produce infinitely much even if someone pays infinitely much) and Demand (if the price is too high, quantity demanded is 0) curves.]

Matrix
Matrix Algebra Example:

Consider the following system of Equations from our previous Two-Commodity Market
Linear Model,
a0 − b0 + (a1 − b1)P1 + (a2 − b2)P2 = 0,
α0 − β0 + (α1 − β1)P1 + (α2 − β2)P2 = 0,     (14)

⟺

(a1 − b1)P1 + (a2 − b2)P2 = b0 − a0 ,
(α1 − β1)P1 + (α2 − β2)P2 = β0 − α0 ,        (15)

where the Endogenous Variables P1 and P2 are to be solved for.

Rewriting in Matrix Form gives

[ a1 − b1   a2 − b2 ] [ P1 ]   [ b0 − a0 ]
[ α1 − β1   α2 − β2 ] [ P2 ] = [ β0 − α0 ],     (16)

(2×2 Coefficient Matrix, 2×1 Vector of Endogenous Variables, 2×1 Vector of Constants)

where a Vector is a special Matrix that has only 1 column.



Matrix

Generally, a system of m Linear Equations in n Endogenous Variables (x1 , x2 , . . . , xn ) can be expressed as

a11 x1 + a12 x2 + . . . + a1n xn = d1 ,
a21 x1 + a22 x2 + . . . + a2n xn = d2 ,
. . . . . . . . . . . . . . . . . . . . . . .     (17)
am1 x1 + am2 x2 + . . . + amn xn = dm ,

where aij represents the Coefficient of the j-th Variable in the i-th Equation; each subscript thus corresponds to the specific location of an Endogenous Variable or Parameter in the system of Equations.

Example: a21 is the Coefficient for the Variable x1 in the second Equation.
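In code, this subscript convention maps directly onto array indexing. A minimal numpy sketch with a hypothetical 2×3 Coefficient Matrix (zero-based indexing shifts each subscript down by one):

```python
import numpy as np

# A hypothetical 2x3 coefficient matrix, for illustration only.
A = np.array([[4.0, 7.0, 1.0],
              [9.0, 2.0, 5.0]])

# a21, the coefficient of x1 in the second equation, sits at the zero-based
# position (row 1, column 0).
a21 = A[1, 0]
print(a21)  # 9.0
```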


Matrix

Rewritten in Matrix Form gives

[ a11  a12  ...  a1n ] [ x1 ]   [ d1 ]
[ a21  a22  ...  a2n ] [ x2 ] = [ d2 ]
[  :    :   ...   :  ] [ :  ]   [ :  ]     (18)
[ am1  am2  ...  amn ] [ xn ]   [ dm ]

(m×n Coefficient Matrix = A, n×1 Vector of Endogenous Variables = X, m×1 Vector of Constants = D)

or equivalently, AX = D, where the m × n Coefficient Matrix A can also be denoted as

A = [aij ],  i = 1, 2, . . . , m,  j = 1, 2, . . . , n,     (19)

which suggests that each element's location inside the Matrix A is unequivocally fixed by its subscripts ⟹ a Matrix's information is ordered.


Matrix Operations

Matrix Addition and Subtraction:

Matrix Addition and Subtraction work only if the two matrices' dimensions conform (both are m × n).

The corresponding elements of the two Matrices are then added or subtracted.

Example:

     
[ a  b ]   [ 1  2 ]   [ a+1  b+2 ]
[ c  d ] + [ 3  4 ] = [ c+3  d+4 ],  each of which is 3×2,     (20)
[ e  f ]   [ 5  6 ]   [ e+5  f+6 ]

[ a  b ]   [ 1  2 ]   [ a−1  b−2 ]
[ c  d ] − [ 3  4 ] = [ c−3  d−4 ],  each of which is 2×2.     (21)
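A minimal numpy sketch of element-wise Addition and Subtraction, including what happens when dimensions do not conform (the numbers are illustrative):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])     # 3x2
B = np.ones((3, 2))            # 3x2: dimensions conform

S = A + B                      # element-by-element addition
D = A - B                      # element-by-element subtraction
assert S.shape == (3, 2)

# A 3x2 matrix does not conform with a 2x2 matrix:
try:
    A + np.ones((2, 2))
except ValueError:
    print("dimensions do not conform")
```

Note that numpy would silently broadcast some mismatched shapes (e.g. 3×2 plus 1×2), which goes beyond the strict conformity rule of these slides.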


Scalar Multiplication

Scalar Multiplication:

In Matrix Algebra, a Scalar is a single number such as 5.

A Scalar is multiplied into a Matrix by multiplying every element of that Matrix by the Scalar:

    [ a  b ]   [ 5a  5b ]
5 · [ c  d ] = [ 5c  5d ],  both Matrices 3×2.     (22)
    [ e  f ]   [ 5e  5f ]


Matrix Multiplication
Matrix Multiplication:

Matrix Multiplication requires the two matrices' dimensions to conform such that the column dimension of the "Lead" Matrix equals the row dimension of the "Lag" Matrix.

Generally, given A = [aij ], i = 1, 2, . . . , m, j = 1, 2, . . . , n, and B = [bij ], i = 1, 2, . . . , n, j = 1, 2, . . . , q,

A (the m×n "Lead" Matrix) × B (the n×q "Lag" Matrix) = AB (the m×q Resulting Matrix).     (23)

Example:

   
[ a  b ]          [ aβ0 + bβ1 ]
[ c  d ] [ β0 ] = [ cβ0 + dβ1 ],     (24)
[ e  f ] [ β1 ]   [ eβ0 + f β1 ]

where the "Lead" Matrix is 3×2, the "Lag" Matrix is 2×1, and the Resulting Matrix is 3×1.
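A minimal numpy sketch of (24) with illustrative numbers, showing the (3×2)(2×1) → 3×1 dimension rule:

```python
import numpy as np

lead = np.array([[1.0, 2.0],
                 [3.0, 4.0],
                 [5.0, 6.0]])   # "Lead" matrix, 3x2
lag = np.array([[10.0],
                [20.0]])        # "Lag" matrix, 2x1

product = lead @ lag            # (3x2)(2x1) -> 3x1
assert product.shape == (3, 1)
print(product.ravel())          # 50, 110, 170
```

Reversing the order, `lag @ lead`, would raise an error here: (2×1)(3×2) does not conform, which is exactly why "Lead" versus "Lag" matters.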


Transposing a Matrix

Transposing a Matrix:
If A is an m × n Matrix, then its Transpose AT is an n × m Matrix.

Example:
 
    [ a  b ]
A = [ c  d ],  which is 3×2,     (25)
    [ e  f ]

A = [aij ],  i = 1, 2, . . . , m,  j = 1, 2, . . . , n.     (26)

then

A^T = [ a  c  e ]
      [ b  d  f ],  which is 2×3,     (27)

A^T = [aji ],  j = 1, 2, . . . , n,  i = 1, 2, . . . , m.     (28)
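A minimal numpy sketch of transposition with illustrative numbers:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])   # 3x2

At = A.T                     # transpose: 2x3
assert At.shape == (2, 3)
assert At[0, 2] == A[2, 0]   # the (j,i) entry of A^T equals the (i,j) entry of A
assert np.allclose(At.T, A)  # transposing twice recovers A
```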


Matrix Inversion

Matrix Inversion:

Note that it is not possible to divide one Matrix by another per se, which is why you do not see A/B (or the fraction A over B).

Instead,

It is possible to define the Inverse of a certain Matrix B as B⁻¹, if such an Inverse exists.

If B⁻¹'s dimension conforms with Matrix A, we may further define AB⁻¹.

However, even if both are defined, AB⁻¹ and B⁻¹A often do not represent the same quantity.

Therefore, the ordering of Matrices, or which matrix "Leads" and which "Lags", is important in Matrix Algebra.


Matrix Inversion

So, not all Matrices have Inverses.

For a Matrix to have an Inverse, it must satisfy

Necessary Condition:

Be a Square Matrix, n × n in dimension, where n is an arbitrary positive integer such as 2.

Sufficient Condition:

Be a Nonsingular Matrix, in which no Linear Dependence (redundancy) exists among the rows or columns.

A is Square and Nonsingular ⟺ ∃ A⁻¹.     (29)
(Necessary and Sufficient Condition)


Nonsingularity

Conditions for Nonsingularity:


For Matrix A to be Nonsingular, we need its determinant |A| ≠ 0 (a determinant is essentially a number).
Example:

A = [ 1  −1 ; 10  1 ].  Then |A| = |1  −1; 10  1| = 1 × 1 − 10 × (−1) = 1 + 10 = 11 ≠ 0.     (30)

For Matrix A, |A| ≠ 0 is equivalent to each of the following statements:

Matrix A is nonsingular.

Matrix A’s columns and rows are linearly independent (no redundancy).

∃A−1 or Matrix A has an Inverse.

For AX = d =⇒ X = A−1 d, which is unique.

Matrix A’s column (or row) vectors span the vector space.
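A minimal numpy sketch of these equivalences, reusing the matrix from (30) and adding a singular counterexample of my own:

```python
import numpy as np

A = np.array([[1.0, -1.0],
              [10.0, 1.0]])

# |A| = 1*1 - 10*(-1) = 11 != 0, so A is nonsingular and invertible.
det = np.linalg.det(A)
assert np.isclose(det, 11.0)

A_inv = np.linalg.inv(A)
assert np.allclose(A @ A_inv, np.eye(2))   # A A^{-1} = I

# A singular counterexample: the second row is twice the first
# (linear dependence / redundancy), so the determinant is 0.
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])
assert np.isclose(np.linalg.det(B), 0.0)
```

Calling `np.linalg.inv(B)` would raise `LinAlgError`, matching the statement that a singular system has no unique solution.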


National-Income Model
Matrix Inversion Example:
Consider the following simple National-Income Model defined as

Y = C + I0 + G0 ,   I0 ≥ 0, G0 ≥ 0,
C = a + bY ,        a > 0, 0 < b < 1,     (31)

where

Income Y and Consumption C are the Endogenous Variables.

Investment I0 and Government Expenditure G0 are Exogenous Variables, treated the same as the Parameters a and b.

The marginal propensity to consume b ≠ 1: if b = 1, then C = a + Y > Y as a > 0, contradicting Y = C + I0 + G0 ≥ C.

So C ≤ Y, and Y and C are interrelated by the system above. Rearranging,

Y − C = I0 + G0 ,
−bY + C = a ,     (32)

⟹

[ 1   −1 ] [ Y ]   [ I0 + G0 ]
[ −b   1 ] [ C ] = [    a    ],     (33)


Cramer’s Rule

    
[ 1   −1 ] [ Y ]   [ I0 + G0 ]
[ −b   1 ] [ C ] = [    a    ],     (34)

then by Cramer's Rule,

Y = |I0 + G0  −1; a  1| / |1  −1; −b  1| = ( (I0 + G0) × 1 − a × (−1) ) / ( 1 × 1 − (−b) × (−1) ) = (I0 + G0 + a)/(1 − b),     (35)

where 1 − b > 0 as 0 < b < 1, and | · | represents a determinant, which is in essence a number. ⟹

Y∗ = (I0 + G0 + a)/(1 − b) > 0,  because I0 + G0 + a > 0 as a > 0, I0 ≥ 0, G0 ≥ 0,     (36)

as Income Y∗ should be.
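Cramer's Rule is mechanical enough to code directly. A minimal numpy sketch with hypothetical National-Income parameters (a > 0, 0 < b < 1; the numbers are illustrative):

```python
import numpy as np

def cramer_2x2(A, d):
    """Solve a 2x2 system by Cramer's Rule: replace each column by d in turn."""
    det_A = np.linalg.det(A)
    A1 = A.copy(); A1[:, 0] = d   # d replaces column 1 -> numerator for x1
    A2 = A.copy(); A2[:, 1] = d   # d replaces column 2 -> numerator for x2
    return np.array([np.linalg.det(A1), np.linalg.det(A2)]) / det_A

# Hypothetical parameters, for illustration only.
a, b, I0, G0 = 50.0, 0.8, 100.0, 150.0
A = np.array([[1.0, -1.0],
              [-b, 1.0]])
d = np.array([I0 + G0, a])

Y_star, C_star = cramer_2x2(A, d)
assert np.isclose(Y_star, (I0 + G0 + a) / (1 - b))       # matches (36)
assert np.isclose(C_star, (a + b * (I0 + G0)) / (1 - b))  # matches (39)
```

With these numbers Y∗ = 300/0.2 = 1500 and C∗ = 250/0.2 = 1250, both positive as required.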


Cramer’s Rule

Similarly, given

[ 1   −1 ] [ Y ]   [ I0 + G0 ]
[ −b   1 ] [ C ] = [    a    ],     (37)

by Cramer's Rule,

C = |1  I0 + G0; −b  a| / |1  −1; −b  1| = ( a × 1 − (I0 + G0) × (−b) ) / ( 1 × 1 − (−b) × (−1) ) = (a + b(I0 + G0))/(1 − b),     (38)

where 1 − b > 0 as 0 < b < 1, and | · | represents a determinant, which is in essence a number. ⟹

C∗ = (a + b(I0 + G0))/(1 − b) > 0,  as a > 0, I0 ≥ 0, G0 ≥ 0,     (39)

as Consumption C∗ should be.


Matrix Inversion

Alternatively, given

[ 1   −1 ] [ Y ]   [ I0 + G0 ]        [ Y∗ ]   [ 1   −1 ]⁻¹ [ I0 + G0 ]
[ −b   1 ] [ C ] = [    a    ]   ⟹   [ C∗ ] = [ −b   1 ]   [    a    ],     (40)

we can directly invert the Coefficient Matrix

A = [ 1  −1 ; −b  1 ].     (41)


Matrix Inversion
Step 1: Find A’s Cofactor Matrix:
   
C = [ |C11|  |C12| ]   [ (−1)^(1+1)|M11|  (−1)^(1+2)|M12| ]
    [ |C21|  |C22| ] = [ (−1)^(2+1)|M21|  (−1)^(2+2)|M22| ],     (42)

where Cij = (−1)^(i+j) |Mij |, with |Mij | being the subdeterminant or Minor from deleting the i-th row and j-th column of Matrix A's determinant |A|.

Matrix A's determinant |A| is

|A| = |1  −1; −b  1| = 1 × 1 − (−1) × (−b) = 1 − b.     (43)

Then

C = [ (−1)² × 1       (−1)³ × (−b) ]   [ 1  b ]
    [ (−1)³ × (−1)    (−1)⁴ × 1    ] = [ 1  1 ].     (44)

Matrix Inversion
Step 2:
Find A’s adjoint Matrix adj A:

   
C = [ 1  b ; 1  1 ],  then  adj A = C^T = [ 1  1 ; b  1 ].     (45)

(Recall transposition from (25)–(28): A^T swaps Matrix A's rows and columns.)

Matrix Inversion

Step 3:
Find A’s Inverse A−1 :
     
A⁻¹ = (1/|A|) adj A = ( 1 / |1  −1; −b  1| ) [ 1  1 ; b  1 ] = ( 1 / (1 × 1 − (−1) × (−b)) ) [ 1  1 ; b  1 ] = (1/(1 − b)) [ 1  1 ; b  1 ],     (50)

Therefore,

[ Y∗ ; C∗ ] = A⁻¹ [ I0 + G0 ; a ] = (1/(1 − b)) [ 1  1 ; b  1 ] [ I0 + G0 ; a ] = (1/(1 − b)) [ 1 × (I0 + G0) + 1 × a ; b × (I0 + G0) + 1 × a ],     (51)

⟹

[ Y∗ ; C∗ ] = [ (I0 + G0 + a)/(1 − b) ; (a + b(I0 + G0))/(1 − b) ],     (52)

which is the same as Cramer's Rule's results: Y∗ = (I0 + G0 + a)/(1 − b) > 0 and C∗ = (a + b(I0 + G0))/(1 − b) > 0, respectively.
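Steps 1–3 can be coded for the 2×2 case and checked against a library inverse. A minimal numpy sketch (the function name and the value b = 0.8 are mine, for illustration):

```python
import numpy as np

def inverse_via_adjoint(A):
    """2x2 inverse via the cofactor/adjoint route of Steps 1-3 (sketch)."""
    # Cofactor matrix of a 2x2: each minor is a single entry.
    C = np.array([[ A[1, 1], -A[1, 0]],
                  [-A[0, 1],  A[0, 0]]])
    adj = C.T                                      # adjoint = C transposed
    det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]    # |A|
    return adj / det

b = 0.8
A = np.array([[1.0, -1.0],
              [-b, 1.0]])

A_inv = inverse_via_adjoint(A)
assert np.allclose(A_inv, np.linalg.inv(A))                              # library check
assert np.allclose(A_inv, np.array([[1.0, 1.0], [b, 1.0]]) / (1 - b))    # matches (50)
```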


Matrix Inverses’ Properties

Note:
For inverting higher-dimensional Matrices (above we simply inverted a 2 × 2 Matrix), it is much more efficient to use statistical software such as R or Stata than to work by hand.

Familiarize yourself with at least one statistical software package for upper-level courses.

Properties of Matrix Inverses:

If Matrix A is n × n, then A⁻¹ must also be an n × n Matrix, and

A A⁻¹ = In ,     (53)

where In is an n × n Identity Matrix.

If Matrix A has an Inverse A⁻¹, then A⁻¹ is unique.


Special Matrices

Null Matrix:
All the elements are 0.

Example:

[ 0  0 ]
[ 0  0 ] = 0,  which is 2×2.     (54)

Thus, in general,
A + 0 = A, (55)

and
A × 0 = 0, (56)

assuming the Matrix A and the Null Matrix 0’s dimensions conform so that the above
operations are permitted.


Special Matrices
Identity Matrix: All the diagonal elements are 1 and all off-diagonal elements are 0:

[ 1  0  ...  0 ]
[ 0  1  ...  : ]
[ :  :  ...  : ] = In ,     (57)
[ 0  0  ...  1 ]

where In is an n × n Identity Matrix, and A In = In A = A for an n × n Matrix A, so that dimensions conform.
Diagonal Matrix: Only the diagonal elements are Nonzero:

    [ a11   0   ...   0  ]
    [  0   a22  ...   :  ]
A = [  :    :   ...   :  ],     (58)
    [  0    0   ...  ann ]

where the diagonal elements a11 , a22 , . . . , ann are all Nonzero.

Special Matrices
Lower Triangular Matrix: Square Matrix A is lower triangular if aij = 0 for all i < j, i.e. all elements above the diagonal are 0:

    [ a11   0   ...   0  ]
    [ a21  a22  ...   :  ]
A = [ a31  a32  a33   :  ],     (59)
    [ an1  an2  ...  ann ]

where the elements on and below the diagonal may be Nonzero.

Upper Triangular Matrix: Square Matrix A is upper triangular if aij = 0 for all i > j, i.e. all elements below the diagonal are 0:

    [ a11  a12  ...  a1n ]
    [  0   a22  ...  a2n ]
A = [  0    0   a33   :  ].     (60)
    [  0    0   ...  ann ]
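All of these special Matrices have one-line constructors in numpy; a minimal sketch tying them back to (54)–(60) with illustrative numbers:

```python
import numpy as np

n = 4
null_m = np.zeros((2, 2))                  # Null Matrix, as in (54)
I_n = np.eye(n)                            # Identity Matrix, as in (57)
D = np.diag([1.0, 2.0, 3.0, 4.0])          # Diagonal Matrix, as in (58)

A = np.arange(1.0, 17.0).reshape(n, n)
L = np.tril(A)                             # Lower Triangular, as in (59)
U = np.triu(A)                             # Upper Triangular, as in (60)

assert np.allclose(A + np.zeros((n, n)), A)                  # A + 0 = A, (55)
assert np.allclose(A @ I_n, A) and np.allclose(I_n @ A, A)   # A I = I A = A
assert np.allclose(L[np.triu_indices(n, k=1)], 0)            # zeros above diagonal
assert np.allclose(U[np.tril_indices(n, k=-1)], 0)           # zeros below diagonal
```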