Chapter 1. Linear Algebra Final
Topics
• 1. Linear Algebra
• 2. Differential and integral calculus
• 3. Optimization
• 4. Differential and difference equations
• 5. Linear and non-linear programming
Mathematics? Economics?
Several questions:
• What does the indifference curve look like? Why?
• How do we get demand functions from utility functions?
• Is there a utility function?
• Does the maximum exist?
• Is it unique?
• How do we obtain the solution?
• What properties does a well-behaved demand function possess?
Chapter 1. Linear Algebra
• Matrix
• System of equations
• Applications
• Vectors
• Orthonormal Basis
• Operations on matrices
• Determinant of a matrix
• Inverse of a matrix
• Applications
1.1. Matrix
• A matrix is a rectangular array of elements, organized into rows and columns:

    A = [ a  b ]
        [ c  d ]

• a and d are the diagonal elements.
• b and c are the off-diagonal elements.
• Matrices are like plain numbers in many ways: they can be added, subtracted, and, in some cases, multiplied and inverted (divided).
1.1. Matrix
• Examples:

    A = [ 1  b ]        b = [ b1  b2  b3 ]
        [ 1  d ]

• Dimensions of a matrix: number of rows by number of columns. The matrix A is a 2x2 matrix; b is a 1x3 matrix.
• A matrix with only one column or only one row is called a vector.
• If a matrix has an equal number of rows and columns, it is called a square matrix. Matrix A, above, is a square matrix.
• Usual notation: upper case letters => matrices; lower case letters => vectors.
Basic Operations of Matrices

    [ a  b ] + [ e  f ] = [ a+e  b+f ]      Just add elements
    [ c  d ]   [ g  h ]   [ c+g  d+h ]

    [ a  b ] - [ e  f ] = [ a-e  b-f ]      Just subtract elements
    [ c  d ]   [ g  h ]   [ c-g  d-h ]

    [ a  b ] [ e  f ] = [ ae+bg  af+bh ]    Multiply each row by each column
    [ c  d ] [ g  h ]   [ ce+dg  cf+dh ]

    k [ a  b ] = [ ka  kb ]                 Multiply each element by the scalar
      [ c  d ]   [ kc  kd ]
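The four operations above can be checked numerically. A minimal sketch in numpy (the library choice is an assumption; any linear-algebra package works, and the matrices are arbitrary examples):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])

add = A + B          # element-by-element addition
sub = A - B          # element-by-element subtraction
prod = A @ B         # each row of A times each column of B
scaled = 2 * A       # every element multiplied by the scalar
```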
Vector multiplication: Geometric interpretation
• Think of a vector as a point in the plane: an arrow from the origin to its coordinates (x1, x2).
(Figure: a vector u plotted in the x1-x2 plane.)
Matrix multiplication: Details
• Multiplication of matrices requires a conformability condition.
• The conformability condition for multiplication is that the column dimension of the lead matrix A must be equal to the row dimension of the lag matrix B.
• What are the dimensions of the vector, matrix, and result in each example?
• Matrix subtraction:

    [ 2  1 ] - [ 1  0 ] = [ 1  1 ]
    [ 7  9 ]   [ 2  3 ]   [ 5  6 ]

• Matrix multiplication:

    [ 2  1 ] x [ 1  0 ] = [  4   3 ]
    [ 7  9 ]   [ 2  3 ]   [ 25  27 ]

    A(2x2) x B(2x2) = C(2x2)

• Scalar multiplication:

    (1/8) [ 2  4 ] = [ 1/4  1/2 ]
          [ 6  1 ]   [ 3/4  1/8 ]
Vector Addition: Geometric interpretation
• v' = [2 3]
• u' = [3 2]
• w' = v' + u' = [5 5]
• Note that w is the diagonal of the parallelogram formed by v and u; two independent vectors, together with addition and scalar multiplication, span the whole plane.
(Figure: v, u, and w = v + u plotted in the x1-x2 plane.)
Example: transpose of a matrix

    A = [ 3  8  9 ]        A' = [ 3  1 ]
        [ 1  0  4 ]             [ 8  0 ]
                                [ 9  4 ]
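The transpose example can be verified directly in numpy:

```python
import numpy as np

A = np.array([[3, 8, 9],
              [1, 0, 4]])   # 2x3 matrix

At = A.T                    # 3x2: rows become columns
```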
Inverse of a Matrix
• Identity matrix: AI = A

    I = [ 1  0  0 ]
        [ 0  1  0 ]
        [ 0  0  1 ]

• Some matrices have an inverse, such that: AA-1 = I
• Inversion is tricky: (ABC)-1 = C-1B-1A-1
• More on this topic later
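The reversal rule (ABC)-1 = C-1B-1A-1 can be illustrated numerically; the three invertible matrices below are arbitrary choices:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 1.0]])
B = np.array([[1.0, 2.0], [0.0, 1.0]])
C = np.array([[3.0, 0.0], [1.0, 1.0]])

# inverse of the product...
lhs = np.linalg.inv(A @ B @ C)
# ...equals the product of the inverses in reverse order
rhs = np.linalg.inv(C) @ np.linalg.inv(B) @ np.linalg.inv(A)
```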
1.2. System of Equations: Matrices and Vectors
• Assume an economic model given as a system of linear equations in which
  aij are parameters, where i = 1..n rows, j = 1..m columns, and n = m,
  xi are endogenous variables,
  di are exogenous variables and constants.

    a11 x1 + a12 x2 + ... + a1n xn = d1
    a21 x1 + a22 x2 + ... + a2n xn = d2
    ...
    an1 x1 + an2 x2 + ... + ann xn = dn
1.2. System of equations: Matrices and Vectors
• A general form matrix of a system of linear equations:
  Ax = d, where
  A = matrix of parameters
  x = column vector of endogenous variables
  d = column vector of exogenous variables and constants
• Solve for x*:

    [ a11  a12  ...  a1n ] [ x1 ]   [ d1 ]
    [ a21  a22  ...  a2n ] [ x2 ] = [ d2 ]
    [ ...                ] [ .. ]   [ .. ]
    [ an1  an2  ...  ann ] [ xn ]   [ dn ]

    Ax = d
    x* = A-1 d
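In practice x* is computed by solving the system directly rather than forming A-1 explicitly. A sketch with an arbitrary 2x2 example:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
d = np.array([5.0, 10.0])

x_inv = np.linalg.inv(A) @ d     # x* = A^{-1} d, as on the slide
x_solve = np.linalg.solve(A, d)  # numerically preferred: no explicit inverse
```

Both routes give the same x*, but `solve` is faster and more accurate for large systems.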
Solution of a General-equation System
Linear Dependence
• A set of vectors is linearly dependent if any one of them can be expressed as a linear combination of the remaining vectors; otherwise, it is linearly independent.
• Example: v1' = [5 12], v2' = [10 24]. Then 2v1' - v2' = 0', so the set is dependent.
• Example: v1 = [2 7]', v2 = [1 8]', v3 = [4 5]'. Then 3v1 - 2v2 = [6 21]' - [2 16]' = [4 5]' = v3, so 3v1 - 2v2 - v3 = 0.
• Dependence prevents solving a system of equations: there are more unknowns than independent equations.
• The number of linearly independent rows or columns in a matrix is the rank of the matrix (rank(A)).
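Both dependence examples above can be confirmed by computing the rank:

```python
import numpy as np

# v1' = [5 12], v2' = [10 24]: the second row is twice the first
M1 = np.array([[5, 12],
               [10, 24]])

# columns v1 = [2 7]', v2 = [1 8]', v3 = [4 5]': 3*v1 - 2*v2 = v3
M2 = np.column_stack(([2, 7], [1, 8], [4, 5]))

r1 = np.linalg.matrix_rank(M1)   # 1, not 2: rows are dependent
r2 = np.linalg.matrix_rank(M2)   # 2, not 3: columns are dependent
```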
Application I: One Commodity Market Model (2x2 matrix)
• Economic model:
  1) Qd = Qs
  2) Qd = a - bP   (a, b > 0)
  3) Qs = -c + dP  (c, d > 0)
• Find P* and Q*.
• Scalar algebra:
  4) 1Q + bP = a
  5) 1Q - dP = -c
• Matrix algebra:

    [ 1   b ] [ Q ]   [  a ]
    [ 1  -d ] [ P ] = [ -c ]

    Ax = d
    x* = A-1 d

• Solution:

    P* = (a + c) / (b + d)
    Q* = (ad - bc) / (b + d)
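The closed-form solution can be checked against a numerical solve. The parameter values a=10, b=2, c=2, d=3 are illustrative choices only:

```python
import numpy as np

a, b, c, d = 10.0, 2.0, 2.0, 3.0

A = np.array([[1.0,  b],
              [1.0, -d]])
rhs = np.array([a, -c])

Q, P = np.linalg.solve(A, rhs)        # numerical equilibrium

P_closed = (a + c) / (b + d)          # P* = (a+c)/(b+d)
Q_closed = (a * d - b * c) / (b + d)  # Q* = (ad-bc)/(b+d)
```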
General form of a 3x3 linear system
• Scalar algebra form (parameters and endogenous variables on the left; exogenous variables and constants on the right):

    a11x + a12y + a13z = d1
    a21x + a22y + a23z = d2
    a31x + a32y + a33z = d3

• Matrix algebra form (parameters | endogenous vars | exogenous vars & constants):

    [ a11  a12  a13 ] [ x ]   [ d1 ]
    [ a21  a22  a23 ] [ y ] = [ d2 ]
    [ a31  a32  a33 ] [ z ]   [ d3 ]
Application II: Three Equation National Income
Model (3x3 matrix)
• Let
  Y = C + I0 + G0
  C = a + b(Y - T)  (a > 0, 0 < b < 1)
  T = d + tY        (d > 0, 0 < t < 1)
• Endogenous variables?
• Exogenous variables?
• Constants?
• Parameters?
• Why restrictions on the parameters?
Three Equation National Income Model
• Endogenous: Y, C, T: Income (GNP), Consumption,
and Taxes.
• Exogenous: I0 and G0: autonomous Investment &
Government spending.
• Constants a & d: autonomous consumption and
taxes.
• Parameter t is the marginal propensity to tax gross income, 0 < t < 1.
• Parameter b is the marginal propensity to consume private goods and services from gross income, 0 < b < 1.
Three Equation National Income Model
• Given
  Y = C + I0 + G0
  C = a + b(Y - T)
  T = d + tY
• Rearranged with the endogenous variables (Y, C, T) on the left and the exogenous variables and constants on the right:

    1Y - 1C + 0T = I0 + G0
    -bY + 1C + bT = a
    -tY + 0C + 1T = d

• Find Y*, C*, T*:

    [  1  -1  0 ] [ Y ]   [ I0 + G0 ]
    [ -b   1  b ] [ C ] = [    a    ]
    [ -t   0  1 ] [ T ]   [    d    ]

    Ax = d
    x* = A-1 d
Three Equation National Income Model

    [  1  -1  0 ] [ Y ]   [ I0 + G0 ]
    [ -b   1  b ] [ C ] = [    a    ]
    [ -t   0  1 ] [ T ]   [    d    ]

    Ax = d

    [ Y* ]   [  1  -1  0 ]-1 [ I0 + G0 ]
    [ C* ] = [ -b   1  b ]   [    a    ]
    [ T* ]   [ -t   0  1 ]   [    d    ]

    x* = A-1 d
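A numerical check of the model. The parameter values (b=0.8, t=0.2, a=50, d=10, I0=100, G0=200) are assumptions for the example, chosen to satisfy the sign restrictions:

```python
import numpy as np

b, t = 0.8, 0.2          # marginal propensities to consume and to tax
a, d = 50.0, 10.0        # autonomous consumption and taxes
I0, G0 = 100.0, 200.0    # exogenous investment and government spending

A = np.array([[1.0, -1.0, 0.0],
              [-b,   1.0,  b ],
              [-t,   0.0, 1.0]])
rhs = np.array([I0 + G0, a, d])

Y, C, T = np.linalg.solve(A, rhs)   # equilibrium income, consumption, taxes
```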
Application III: Two Commodity Market Equilibrium (4x4 matrix)
• Given
  Qdi = Qsi, i = 1, 2
  Qd1 = 10 - 2P1 + P2
  Qs1 = -2 + 3P1
  Qd2 = 15 + P1 - P2
  Qs2 = -1 + 2P2
• Scalar algebra:
  1Q1 + 0Q2 + 2P1 - 1P2 = 10
  1Q1 + 0Q2 - 3P1 + 0P2 = -2
  0Q1 + 1Q2 - 1P1 + 1P2 = 15
  0Q1 + 1Q2 + 0P1 - 2P2 = -1
• Find Q1*, Q2*, P1*, P2*:

    [ 1  0   2  -1 ] [ Q1 ]   [ 10 ]
    [ 1  0  -3   0 ] [ Q2 ] = [ -2 ]
    [ 0  1  -1   1 ] [ P1 ]   [ 15 ]
    [ 0  1   0  -2 ] [ P2 ]   [ -1 ]

    Ax = d
    x* = A-1 d
Two Commodity Market Equilibrium

    [ 1  0   2  -1 ] [ Q1 ]   [ 10 ]
    [ 1  0  -3   0 ] [ Q2 ] = [ -2 ]
    [ 0  1  -1   1 ] [ P1 ]   [ 15 ]
    [ 0  1   0  -2 ] [ P2 ]   [ -1 ]

    Ax = d

    [ Q1* ]   [ 1  0   2  -1 ]-1 [ 10 ]
    [ Q2* ] = [ 1  0  -3   0 ]   [ -2 ]
    [ P1* ]   [ 0  1  -1   1 ]   [ 15 ]
    [ P2* ]   [ 0  1   0  -2 ]   [ -1 ]

    x* = A-1 d
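The 4x4 system is solved the same way; the computed equilibrium must satisfy the four original supply and demand equations:

```python
import numpy as np

A = np.array([[1, 0,  2, -1],
              [1, 0, -3,  0],
              [0, 1, -1,  1],
              [0, 1,  0, -2]], dtype=float)
d = np.array([10, -2, 15, -1], dtype=float)

Q1, Q2, P1, P2 = np.linalg.solve(A, d)   # x* = A^{-1} d
```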
1.3. Vector Operations
• An [m x 1] column vector u and a [1 x n] row vector v yield a product matrix uv of dimension [m x n].

    u = [ 3 ]  (2x1)       v = [ 1  4  5 ]  (1x3)
        [ 2 ]

    uv = [ 3 ] [ 1  4  5 ] = [ 3  12  15 ]   (2x3)
         [ 2 ]               [ 2   8  10 ]
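The outer product above in numpy:

```python
import numpy as np

u = np.array([3, 2])      # treated as the 2x1 column vector
v = np.array([1, 4, 5])   # treated as the 1x3 row vector

uv = np.outer(u, v)       # 2x3 product matrix
```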
1.3. Vector multiplication: Dot (inner) and cross product

    y = c1z1 + c2z2 + c3z3 + c4z4

    y = Σ (i = 1 to 4) ci zi

                               [ z1 ]
    y = [ c1  c2  c3  c4 ]     [ z2 ]  = c'z
                               [ z3 ]
                               [ z4 ]
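The inner product y = c'z as code; the particular coefficients are arbitrary:

```python
import numpy as np

c = np.array([1.0, 2.0, 3.0, 4.0])   # illustrative coefficients
z = np.array([5.0, 6.0, 7.0, 8.0])

y = c @ z                            # c'z, the dot product
y_loop = sum(ci * zi for ci, zi in zip(c, z))   # same sum, element by element
```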
Alternate representations of a 2D vector v:
• Polar coordinates: (||v||, θ), where ||v|| is the length and θ the angle ("phase")
• Complex numbers: ||v|| e^(jθ)
(Figure: v drawn in the x-y plane with length ||v|| and phase angle θ.)
1.3. Vectors: Cross Product

    ||a x b|| = ||a|| ||b|| sin(θ)
1.3. Vectors: Cross Product: Right hand rule
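The magnitude identity ||a x b|| = ||a|| ||b|| sin(θ) can be verified numerically; the two vectors below are arbitrary:

```python
import numpy as np

a = np.array([1.0, 2.0, 0.0])
b = np.array([0.0, 1.0, 3.0])

cross = np.cross(a, b)               # perpendicular to both a and b

# angle between a and b, recovered from the dot product
cos_theta = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
theta = np.arccos(cos_theta)

lhs = np.linalg.norm(cross)
rhs = np.linalg.norm(a) * np.linalg.norm(b) * np.sin(theta)
```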
1.3. Vectors: Norm
• Given a vector space V, the function g: V → R is called a norm if and only if:
  1) g(x) ≥ 0, for all x ∈ V
  2) g(x) = 0 iff x = 0 (the zero vector)
  3) g(αx) = |α| g(x) for all α ∈ R, x ∈ V
  4) g(x + y) ≤ g(x) + g(y) ("triangle inequality") for all x, y ∈ V
• Example: the standard basis vectors of R3 are mutually orthogonal unit vectors:

    x = [1 0 0]'    y = [0 1 0]'    z = [0 0 1]'

    x'y = 0,  x'z = 0,  y'z = 0

• X, Y, Z is an orthonormal basis. We can describe any 3D point as a linear combination of these vectors.
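The four norm axioms, checked for the Euclidean norm on a couple of arbitrary vectors:

```python
import numpy as np

g = np.linalg.norm          # the Euclidean norm plays the role of g

x = np.array([1.0, -2.0, 2.0])
y = np.array([3.0, 0.0, -4.0])
alpha = -2.5
```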
1.3. Orthonormal Basis
• How do we express any point as a combination of a new orthonormal basis U, V, N, given X, Y, Z? Project the point p = (a, b, c) onto each new basis vector:

    [ u1  u2  u3 ] [ a ]   [ a u1 + b u2 + c u3 ]   [ p . u ]
    [ v1  v2  v3 ] [ b ] = [ a v1 + b v2 + c v3 ] = [ p . v ]
    [ n1  n2  n3 ] [ c ]   [ a n1 + b n2 + c n3 ]   [ p . n ]
• Commutative law: A + B = B + A
1.4. Matrix Multiplication

    A = [ 1  2 ] ,   B = [ 0  1 ]
        [ 3  4 ]         [ 6  7 ]

    AB = [ 1·0 + 2·6   1·1 + 2·7 ] = [ 12  15 ]
         [ 3·0 + 4·6   3·1 + 4·7 ]   [ 24  31 ]
1.5. Identity and Null Matrices
• The identity matrix is a square, diagonal matrix with 1s along the diagonal. It plays the role of the scalar "1":

    [ 1  0 ]       [ 1  0  0 ]
    [ 0  1 ]   or  [ 0  1  0 ]   etc.
                   [ 0  0  1 ]

• A null matrix is one in which all elements are zero. It plays the role of the scalar "0":

    [ 0  0  0 ]
    [ 0  0  0 ]
    [ 0  0  0 ]

• Both are diagonal matrices.
• Both are symmetric and idempotent matrices:
  A = AT and A = A2 = A3 = ...
1.6. Inverse matrix
• AA-1 = I and A-1A = I
• A matrix must be square to have a unique inverse.
• If an inverse exists for a square matrix, it is unique.
• (A')-1 = (A-1)'
• Application to solving a system:
  Ax = d
  A-1Ax = A-1d
  Ix = A-1d
  x = A-1d
• The solution depends on A-1 existing, which requires linear independence of the rows/columns of A. Determinant test!
1.6. Inverse of a Matrix
1) Form the augmented matrix [A I].
2) Transform the augmented matrix by row operations:

    [A I] → [U H]   Gauss elimination (U: upper triangular)
          → [I K]   Gauss-Jordan elimination (further row operations)

Then IX = K, so X = K = A-1.
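The [A I] → [I A-1] procedure can be sketched directly. This is a bare-bones Gauss-Jordan with partial pivoting, written for illustration; np.linalg.inv is the production route:

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Row-reduce the augmented matrix [A I] to [I K]; K = A^{-1}."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    aug = np.hstack([A, np.eye(n)])                      # augmented matrix [A I]
    for col in range(n):
        pivot = col + np.argmax(np.abs(aug[col:, col]))  # partial pivoting
        aug[[col, pivot]] = aug[[pivot, col]]            # swap pivot row up
        aug[col] /= aug[col, col]                        # scale pivot to 1
        for row in range(n):
            if row != col:
                aug[row] -= aug[row, col] * aug[col]     # zero out the column
    return aug[:, n:]                                    # right block is A^{-1}

A = np.array([[2.0, 1.0],
              [7.0, 4.0]])
K = gauss_jordan_inverse(A)
```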
1.6. Determinant of a Matrix

    A = [ a  b ]      |A| = det(A) = ad - bc
        [ c  d ]

    A-1 = 1/(ad - bc) [  d  -b ]
                      [ -c   a ]

The matrix [d -b; -c a] is called the adjugate of A (adj(A)), so A-1 = adj(A)/|A|.
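The 2x2 adjugate formula as code, compared against numpy's general inverse:

```python
import numpy as np

def inv2x2(A):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    a, b = A[0]
    c, d = A[1]
    det = a * d - b * c              # |A| = ad - bc
    adj = np.array([[d, -b],
                    [-c, a]])        # adjugate of A
    return adj / det                 # A^{-1} = adj(A)/|A|

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])           # |A| = 10, so invertible
```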
1.6. Determinant of a Matrix (3x3)

    | a  b  c |
    | d  e  f | = aei + bfg + cdh - afh - bdi - ceg
    | g  h  i |

Sarrus' Rule: copy the first two columns to the right of the matrix; sum the diagonal products running from left to right, then subtract the diagonal products running from right to left:

    | a  b  c | a  b
    | d  e  f | d  e
    | g  h  i | g  h

Note: the determinant of an nxn matrix has n! terms.
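Sarrus' rule as code (valid only for 3x3 matrices), checked against np.linalg.det on an arbitrary example:

```python
import numpy as np

def sarrus(M):
    """3x3 determinant by Sarrus' rule."""
    (a, b, c), (d, e, f), (g, h, i) = M
    # left-to-right diagonals minus right-to-left diagonals
    return (a*e*i + b*f*g + c*d*h) - (a*f*h + b*d*i + c*e*g)

M = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])
```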
1.6. Determinants: Laplace formula
• The determinant of a matrix of arbitrary size can be defined by the Leibniz formula or the Laplace formula.
• The Laplace formula (or expansion) expresses the determinant |A| as a sum of n determinants of (n-1) x (n-1) sub-matrices of A. There are 2n such expansions, one along each row and each column of A.
• Define the i,j minor Mij (usually written as |Mij|) of A as the determinant of the (n-1) x (n-1) matrix that results from deleting the i-th row and the j-th column of A.
1.6. Determinants: Laplace formula
• Define Ci,j, the (i,j) cofactor of A, as:

    Ci,j = (-1)^(i+j) |Mi,j|

• The cofactor matrix of A, denoted by C, is defined as the nxn matrix whose (i,j) entry is the (i,j) cofactor of A. The transpose of C is called the adjugate or adjoint of A (adj(A)).
• Example:

    A = [ 1  2  3 ]
        [ 0  1  0 ]
        [ 2  4  6 ]

Expanding along the first row:

    |A| = 1·C11 + 2·C12 + 3·C13
        = 1·(1·6 - 0·4) + 2·(-1)·(0·6 - 0·2) + 3·(0·4 - 1·2)
        = 6 + 0 - 6 = 0

Expanding along the second column gives the same result:

    |A| = 2·C12 + 1·C22 + 4·C32 = 2·0 + 1·(1·6 - 3·2) + 4·0 = 0
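A generic Laplace (cofactor) expansion along the first row, applied to the singular example above:

```python
import numpy as np

def det_laplace(A):
    """Determinant by cofactor expansion along the first row (recursive)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # |M_1j|: delete row 1 and column j+1 (0-indexed: row 0, column j)
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det_laplace(minor)  # sign = (-1)^(1+j)
    return total

A = np.array([[1, 2, 3],
              [0, 1, 0],
              [2, 4, 6]])
```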
1.6. Matrix inversion
Matrix Algebra: Summary
Notation and Definitions: Summary
• A (upper case letters) = matrix
• b (lower case letters) = vector
• nxm = n rows, m columns
• rank(A) = number of linearly independent rows or columns of A
• trace(A) = tr(A) = sum of the diagonal elements of A
• Null matrix = all elements equal to zero.
• Diagonal matrix = all non-zero elements are on the diagonal.
• I = identity matrix (diagonal elements: 1, off-diagonal: 0)
• |A| = det(A) = determinant of A
• A-1 = inverse of A
• A' = AT = transpose of A
• A = AT: symmetric matrix
• AT = A-1: orthogonal matrix
• |Mij| = (i,j) minor of A