Chapter 1. Linear Algebra Final

The course 'Mathematics for Economists' aims to equip students with mathematical skills for economic analysis, covering topics such as linear algebra, calculus, optimization, and programming. It emphasizes the application of mathematical techniques to understand economic structures and perform quantitative analyses. Key concepts include matrix operations, systems of equations, and their relevance in economic models.


Mathematics for Economists (CAEC 502)

Jema Haji (Associate Professor)


Haramaya University
Objectives of the Course
• To equip students with the knowledge and skills
to enable them to apply mathematics in economic
analysis;
• To enable students to use mathematics to
understand the structure of economics;
• To enable students to carry out quantitative
analysis of economic systems;
• To provide a framework for seeing the application
of mathematical techniques to economics through
examples.

Topics
• 1. Linear Algebra
• 2. Differential and integral calculus
• 3. Optimization
• 4. Differential and difference equations
• 5. Linear and non-linear programming
Mathematics? Economics?

Several questions:
What does the indifference curve look like? Why?
How do we obtain demand functions from the utility
functions?
Is there a utility function?
Does the maximum exist?
Is it unique?
How do we obtain the solution?
What properties does a well-behaved demand
function possess?
Chapter 1. Linear Algebra
• Matrix
• System of equations
• Applications
• Vectors
• Orthonormal Basis
• Operation of matrices
• Determinants of a matrix
• Inverse of a matrix
• Applications
1.1. Matrix
• A matrix is a rectangular array of elements, organized into rows and columns:

  A = [a  b]
      [c  d]

• a and d are the diagonal elements.
• b and c are the off-diagonal elements.
• Matrices are like plain numbers in many ways: they can be added,
subtracted, and, in some cases, multiplied and inverted (divided).
1.1. Matrix
• Examples:

  A = [1  1]        b = [b]
      [1  1]            [d]

• Dimensions of a matrix: number of rows by number of columns.
The matrix A is a 2x2 matrix; b is a 2x1 matrix.
• A matrix with only one column or only one row is called a
vector.
• If a matrix has an equal number of rows and columns, it is
called a square matrix. Matrix A, above, is a square matrix.
• Usual notation: upper-case letters => matrices
lower-case letters => vectors
Basic Operations of Matrices

• Addition, Subtraction, Multiplication

  [a  b]   [e  f]   [a+e  b+f]
  [c  d] + [g  h] = [c+g  d+h]      Just add elements

  [a  b]   [e  f]   [a-e  b-f]
  [c  d] - [g  h] = [c-g  d-h]      Just subtract elements

  [a  b]   [e  f]   [ae+bg  af+bh]
  [c  d] x [g  h] = [ce+dg  cf+dh]  Multiply each row by each column

    [a  b]   [ka  kb]
  k [c  d] = [kc  kd]               Multiply each element by the scalar
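The four operations above can be checked numerically. A minimal sketch using NumPy (the matrices here are arbitrary examples, not taken from the slides):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

S = A + B   # element-wise addition
D = A - B   # element-wise subtraction
P = A @ B   # matrix product: each row of A times each column of B
K = 3 * A   # scalar multiplication: every element times 3
```

Note that `*` in NumPy is element-wise multiplication; the matrix product from the slide is the `@` operator.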
Vector multiplication: Geometric interpretation

• Think of a vector as a directed line segment in
N dimensions: it has a "length" and a "direction".
• Scalar multiplication "scales" the vector, i.e., it
changes its length (and reverses its direction if the
scalar is negative). For example, with u' = [3  2]:

  2u' = [6  4]        (-1)u' = [-3  -2]

(The original slide plots u, 2u, and -u in the (x1, x2) plane.)
Matrix multiplication: Details
• Multiplication of matrices requires a conformability
condition.
• The conformability condition for multiplication is that the
column dimension of the lead matrix A must be equal to
the row dimension of the lag matrix B.
• What are the dimensions of the vector, matrix, and result?

  aB = [a11  a12] [b11  b12  b13] = [c11  c12  c13] = c
                  [b21  b22  b23]

     = [a11b11 + a12b21   a11b12 + a12b22   a11b13 + a12b23]

• Dimensions: a(1x2), B(2x3), c(1x3)
Basic Matrix Operations: Examples

• Matrix addition

  [2  1]   [3  1]   [5   2]
  [7  9] + [0  2] = [7  11]
  A(2x2)   B(2x2)   C(2x2)

• Matrix subtraction

  [2  1]   [1  0]   [1  1]
  [7  9] - [2  3] = [5  6]

• Matrix multiplication

  [2  1]   [1  0]   [ 4   3]
  [7  9] x [2  3] = [25  27]
  A(2x2) x B(2x2) = C(2x2)

• Scalar multiplication

  1 [2  4]   [1/4  1/2]
  - [6  1] = [3/4  1/8]
  8
Vector Addition: Geometric interpretation

• v' = [2  3]
• u' = [3  2]
• w' = v' + u' = [5  5]
• Note that two vectors plus the concepts of
addition and scalar multiplication can create
a two-dimensional space.
(The original slide plots u, v, and w = u + v in the (x1, x2) plane.)

A vector space is a mathematical structure formed by a
collection of vectors, which may be added together and
multiplied by scalars. (It is closed under addition and
scalar multiplication.)
Transpose Matrix
• The transpose of a matrix A is another matrix AT (also written
A′) created by any one of the following equivalent actions:
- write the rows (columns) of A as the columns (rows) of AT
- reflect A by its main diagonal to obtain AT
• Formally, the (i,j) element of AT is the (j,i) element of A:
[AT]ij = [A]ji
• If A is an m × n matrix => AT is an n × m matrix.
• (A')' = A
• Conformability changes unless the matrix is square.

• Example:

  A = [3  8  -9]        AT = [ 3  1]
      [1  0   4]             [ 8  0]
                             [-9  4]
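The example can be reproduced with NumPy's `.T` attribute, a quick sketch:

```python
import numpy as np

A = np.array([[3, 8, -9],
              [1, 0,  4]])   # a 2 x 3 matrix

AT = A.T                     # rows of A become columns of AT: 3 x 2
```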
Inverse of a Matrix

• Identity matrix: AI = A

  I = [1  0  0]
      [0  1  0]
      [0  0  1]

• Some matrices have an inverse, such that: AA-1 = I
• Inversion is tricky: (ABC)-1 = C-1B-1A-1
• More on this topic later
1.2. System of Equations: Matrices and Vectors
• Assume an economic model as a system of linear equations in
which
aij are parameters, where i = 1..n rows, j = 1..m columns, and
n = m,
xi are endogenous variables, and
di are exogenous variables and constants:

  a11x1 + a12x2 + ... + a1nxn = d1
  a21x1 + a22x2 + ... + a2nxn = d2
  ...
  an1x1 + an2x2 + ... + annxn = dn
1.2. System of equations: Matrices and Vectors
• A general form matrix of a system of linear
equations:
Ax = d where
A = matrix of parameters
x = column vector of endogenous variables
d = column vector of exogenous variables and
constants

• Solve for x*:

  [a11  a12  ...  a1m] [x1]   [d1]
  [a21  a22  ...  a2m] [x2] = [d2]
  [ ...  ...  ...  ...] [...]   [...]
  [an1  an2  ...  anm] [xn]   [dn]

  Ax = d
  x* = A-1d
Solution of a General-equation System

• Assume the 2x2 model:
  2x + y = 12
  4x + 2y = 24

• Find x*, y* by substitution:
  y = 12 - 2x
  4x + 2(12 - 2x) = 24
  4x + 24 - 4x = 24
  0 = 0 ? Indeterminate!

• Why? 4x + 2y = 24 is just 2(2x + y) = 2(12): we really have
one equation with two unknowns.
• Conclusion: not all simultaneous-equation models have
solutions.
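NumPy makes the problem visible: the coefficient matrix of this system is singular (zero determinant), so no unique solution exists. A small sketch:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [4.0, 2.0]])   # second row = 2 x first row
d = np.array([12.0, 24.0])

det = np.linalg.det(A)       # 2*2 - 1*4 = 0 -> singular

try:
    np.linalg.solve(A, d)
except np.linalg.LinAlgError:
    solvable = False         # solve() refuses a singular matrix
```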
Linear Dependence
• A set of vectors is linearly dependent if any one of them can
be expressed as a linear combination of the remaining
vectors; otherwise, it is linearly independent.
• Example: v1' = [5  12], v2' = [10  24]. Then 2v1' - v2' = 0',
so the set is linearly dependent:

  [ 5  10]   [v1']
  [12  24] = [v2']

• Example:

  v1 = [2]    v2 = [1]    v3 = [4]
       [7]         [8]         [5]

  3v1 - 2v2 = [ 6 -  2] = [4] = v3,  so  3v1 - 2v2 - v3 = 0
              [21 - 16]   [5]

• Dependence prevents solving a system of equations: there are
more unknowns than independent equations.
• The number of linearly independent rows or columns in a
matrix is the rank of the matrix (rank(A)).
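Both dependence examples can be confirmed with NumPy's matrix-rank routine, a brief sketch:

```python
import numpy as np

# v2' = 2 * v1': rows are dependent, so the rank is 1, not 2
V = np.array([[5, 12],
              [10, 24]])
rank_V = np.linalg.matrix_rank(V)

# columns v1, v2, v3 with 3*v1 - 2*v2 - v3 = 0: rank is 2, not 3
M = np.array([[2, 1, 4],
              [7, 8, 5]])
rank_M = np.linalg.matrix_rank(M)
```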
Application 1: One Commodity Market Model
(2x2 matrix)
• Economic model:
1) Qd = Qs
2) Qd = a - bP    (a, b > 0)
3) Qs = -c + dP   (c, d > 0)
• Find P* and Q*

Scalar algebra:
4) 1Q + bP = a
5) 1Q - dP = -c

Matrix algebra:

  [1   b] [Q]   [ a]
  [1  -d] [P] = [-c]

  Ax = d
  x* = A-1d

Solution:

  P* = (a + c)/(b + d)
  Q* = (ad - bc)/(b + d)
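The matrix solution can be checked against the closed-form P* and Q*; a sketch with assumed example parameter values a = 10, b = 2, c = 2, d = 3:

```python
import numpy as np

a, b, c, d = 10.0, 2.0, 2.0, 3.0   # assumed example values, all > 0

A = np.array([[1.0,  b],
              [1.0, -d]])
rhs = np.array([a, -c])

Q, P = np.linalg.solve(A, rhs)     # x* = A^-1 d

# closed-form solution from the slide
P_star = (a + c) / (b + d)
Q_star = (a * d - b * c) / (b + d)
```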
General form of a 3x3 linear system

Scalar algebra form (parameters and endogenous variables
on the left; exogenous variables and constants on the right):

  a11x + a12y + a13z = d1
  a21x + a22y + a23z = d2
  a31x + a32y + a33z = d3

Matrix algebra form:

  [a11  a12  a13] [x]   [d1]
  [a21  a22  a23] [y] = [d2]
  [a31  a32  a33] [z]   [d3]
Application II: Three-Equation National Income
Model (3x3 matrix)
• Let
  Y = C + I0 + G0
  C = a + b(Y - T)    (a > 0, 0 < b < 1)
  T = d + tY          (d > 0, 0 < t < 1)
• Endogenous variables?
• Exogenous variables?
• Constants?
• Parameters?
• Why restrictions on the parameters?
Three-Equation National Income Model
• Endogenous: Y, C, T: income (GNP), consumption,
and taxes.
• Exogenous: I0 and G0: autonomous investment and
government spending.
• Constants: a and d: autonomous consumption and
taxes.
• Parameter t is the marginal propensity to tax gross
income, 0 < t < 1.
• Parameter b is the marginal propensity to consume
private goods and services out of gross income,
0 < b < 1.
Three-Equation National Income Model
• Given
  Y = C + I0 + G0
  C = a + b(Y - T)
  T = d + tY

rearrange so the endogenous variables Y, C, T are on the left
and the exogenous variables and constants are on the right:

   1Y - 1C + 0T = I0 + G0
  -bY + 1C + bT = a
  -tY + 0C + 1T = d

• Find Y*, C*, T*:

  [ 1  -1  0] [Y]   [I0 + G0]
  [-b   1  b] [C] = [   a   ]
  [-t   0  1] [T]   [   d   ]

  Ax = d
  x* = A-1d
Three-Equation National Income Model

  [ 1  -1  0] [Y]   [I0 + G0]
  [-b   1  b] [C] = [   a   ]
  [-t   0  1] [T]   [   d   ]

  Ax = d

  [Y*]   [ 1  -1  0]-1 [I0 + G0]
  [C*] = [-b   1  b]   [   a   ]
  [T*]   [-t   0  1]   [   d   ]

  x* = A-1d
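The 3x3 system can be solved numerically once parameter values are chosen. A sketch with assumed values (b = 0.8, t = 0.25, a = 100, d = 20, I0 = G0 = 50, all hypothetical and satisfying the slide's restrictions):

```python
import numpy as np

b, t = 0.8, 0.25        # assumed behavioural parameters, 0 < b, t < 1
a, d = 100.0, 20.0      # assumed autonomous consumption and taxes
I0, G0 = 50.0, 50.0     # assumed exogenous investment and gov. spending

A = np.array([[1.0, -1.0, 0.0],
              [-b,   1.0,  b ],
              [-t,   0.0, 1.0]])
rhs = np.array([I0 + G0, a, d])

Y, C, T = np.linalg.solve(A, rhs)   # x* = A^-1 d
```

With these numbers the equilibrium is Y* = 460, C* = 360, T* = 135, which satisfies all three original equations.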
Application III: Two Commodity Market
Equilibrium (4x4 matrix)
• Given
  Qdi = Qsi, i = 1, 2
  Qd1 = 10 - 2P1 + P2
  Qs1 = -2 + 3P1
  Qd2 = 15 + P1 - P2
  Qs2 = -1 + 2P2

• Scalar algebra:
  1Q1 + 0Q2 + 2P1 - 1P2 = 10
  1Q1 + 0Q2 - 3P1 + 0P2 = -2
  0Q1 + 1Q2 - 1P1 + 1P2 = 15
  0Q1 + 1Q2 + 0P1 - 2P2 = -1

• Find Q1*, Q2*, P1*, P2*:

  [1  0   2  -1] [Q1]   [10]
  [1  0  -3   0] [Q2]   [-2]
  [0  1  -1   1] [P1] = [15]
  [0  1   0  -2] [P2]   [-1]

  Ax = d
  x* = A-1d
Two Commodity Market Equilibrium

  [1  0   2  -1] [Q1]   [10]
  [1  0  -3   0] [Q2]   [-2]
  [0  1  -1   1] [P1] = [15]
  [0  1   0  -2] [P2]   [-1]

  Ax = d

  [Q1*]   [1  0   2  -1]-1 [10]
  [Q2*]   [1  0  -3   0]   [-2]
  [P1*] = [0  1  -1   1]   [15]
  [P2*]   [0  1   0  -2]   [-1]

  x* = A-1d
1.3. Vector Operations
• An [m x 1] column vector u and a [1 x n] row vector v
yield a product matrix uv of dimension [m x n]. Example:

  u = [3]  (2x1)        v = [1  4  5]  (1x3)
      [2]

  uv = [3] [1  4  5] = [3  12  15]  (2x3)
       [2]            [2   8  10]
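This column-times-row product is exactly NumPy's outer product; a quick sketch of the slide's example:

```python
import numpy as np

u = np.array([3, 2])       # treated as the 2x1 column vector
v = np.array([1, 4, 5])    # treated as the 1x3 row vector

uv = np.outer(u, v)        # (2x1)(1x3) -> a 2x3 matrix
```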
1.3. Vector multiplication: Dot (inner) and
cross product

  y = c1z1 + c2z2 + c3z3 + c4z4

  y = sum(i=1..4) ci zi

                         [z1]
  y = [c1  c2  c3  c4]   [z2] = c'z
                         [z3]
                         [z4]

• The dot product produces a scalar! c'z = (1x4)(4x1) = 1x1 = z'c
1.3. Vectors: Dot Product

                   [d]
  u'v = [a  b  c]  [e] = ad + be + cf
                   [f]
Think of the dot product as a matrix multiplication.

  ||u|| = (u'u)^(1/2) = sqrt(aa + bb + cc)
The magnitude (length) is the square root of the dot
product of a vector with itself.

  u'v = ||u|| ||v|| cos(θ)
The dot product is also related to the angle between
the two vectors, but it does not by itself tell us the angle.

Note: As cos(90°) is zero, the dot product of two orthogonal
vectors is zero.
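All three facts can be checked numerically: the dot product as a matrix multiplication, the norm as the square root of a self-dot-product, and the angle recovered from u'v = ||u|| ||v|| cos(θ). A sketch with assumed example vectors:

```python
import numpy as np

u = np.array([3.0, 0.0])
v = np.array([2.0, 2.0])

dot = u @ v                              # dot product = 6
lu = np.sqrt(u @ u)                      # magnitude via self dot product
lv = np.linalg.norm(v)                   # same thing, built in
theta = np.arccos(dot / (lu * lv))       # angle between u and v

perp = u @ np.array([0.0, 1.0])          # orthogonal vectors -> dot = 0
```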
1.3. Vectors: Magnitude and Phase (direction)

  v = (x1, x2, ..., xn)'

  ||v|| = sqrt( sum(i=1..n) xi^2 )   (magnitude, or "2-norm")

• If ||v|| = 1, v is a unit vector
(unit vector => pure direction)

• Alternate representations:
Polar coords: (||v||, θ)
Complex numbers: ||v||e^(jθ)

(The original slide plots v in the plane, with ||v|| as its
length and θ as its "phase".)
1.3. Vectors: Cross Product

• The cross product of vectors A and B is a vector C
which is perpendicular to both A and B.
• The magnitude of C is proportional to the sine of the
angle between A and B:

  ||A x B|| = ||A|| ||B|| sin(θ)

• The direction of C follows the right-hand rule; this is
why we call it a "right-handed coordinate system".
1.3. Vectors: Cross Product: Right hand rule
1.3. Vectors: Norm
• Given a vector space V, the function g: V → R is called a norm if
and only if:
1) g(x) ≥ 0, for all x ∈ V
2) g(x) = 0 iff x = θ (the zero vector)
3) g(αx) = |α|g(x) for all α ∈ R, x ∈ V
4) g(x+y) ≤ g(x) + g(y) ("triangle inequality") for all x, y ∈ V

• The norm is a generalization of the notion of size or length of a
vector.
• An infinite number of functions can be shown to qualify as
norms. For vectors in Rn, we have the following examples:
g(x) = maxi |xi|,  g(x) = ∑i |xi|,  g(x) = [∑i (xi)^4]^(1/4)
• Given a norm on a vector space, we can define a measure of
"how far apart" two vectors are using the concept of a metric.
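The three example norms are easy to compute, and the triangle inequality can be spot-checked; a short sketch with an assumed example vector:

```python
import numpy as np

x = np.array([3.0, -4.0])
y = np.array([1.0, 2.0])

max_norm = np.max(np.abs(x))               # max_i |x_i|  = 4
one_norm = np.sum(np.abs(x))               # sum_i |x_i|  = 7
two_norm = np.linalg.norm(x)               # sqrt(9 + 16) = 5
four_norm = np.sum(np.abs(x) ** 4) ** 0.25 # the fourth-power example

# triangle inequality: g(x + y) <= g(x) + g(y)
triangle_ok = np.linalg.norm(x + y) <= two_norm + np.linalg.norm(y)
```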
1.3. Vectors: Metric
• Given a vector space V, the function d: VxV → R is called a metric
if and only if:
1) d(x,y) ≥ 0, for all x, y ∈ V
2) d(x,y) = 0 iff x = y
3) d(x,y) = d(y,x) for all x, y ∈ V
4) d(x,y) ≤ d(x,z) + d(z,y) ("triangle inequality") for all x, y, z ∈ V

• Given a norm g(.), we can define a metric by the equation:
d(x,y) = g(x - y).

• The metric induced by the Euclidean norm (the 2-norm) is the
Euclidean distance metric.
1.3. Orthonormal Basis
• Basis: a space is totally defined by a set of vectors: any point
is a linear combination of the basis.
• Orthonormal: orthogonal + normal
• Orthogonal: dot product is zero
• Normal: magnitude is one
• Example: X, Y, Z (but don't have to be!)

  x = [1  0  0]'      x'y = 0
  y = [0  1  0]'      x'z = 0
  z = [0  0  1]'      y'z = 0

• X, Y, Z is an orthonormal basis. We can describe any 3D point as
a linear combination of these vectors.
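Both defining properties, and the "any point is a linear combination" claim, can be verified directly; a minimal sketch:

```python
import numpy as np

x = np.array([1.0, 0.0, 0.0])
y = np.array([0.0, 1.0, 0.0])
z = np.array([0.0, 0.0, 1.0])

# orthogonal: all pairwise dot products are zero
orthogonal = (x @ y == 0) and (x @ z == 0) and (y @ z == 0)

# normal: each basis vector has magnitude one
normal = all(np.linalg.norm(v) == 1 for v in (x, y, z))

# an arbitrary 3D point as a linear combination of the basis
p = np.array([2.0, -1.0, 3.0])
recombined = p[0] * x + p[1] * y + p[2] * z
```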
1.3. Orthonormal Basis
• How do we express any point, given in X, Y, Z coordinates, as a
combination of a new basis U, V, N?
• To change a point from one coordinate system to another,
compute the dot product of the point with each of the new basis
vectors: for an orthonormal basis U, V, N, a point p becomes
(p·u, p·v, p·n) in the new coordinates.
1.4. Laws of Matrix Addition & Multiplication

• Commutative law: A + B = B + A

  A + B = [a11  a12] + [b11  b12] = [a11+b11  a12+b12]
          [a21  a22]   [b21  b22]   [a21+b21  a22+b22]

  B + A = [b11  b12] + [a11  a12] = [b11+a11  b12+a12]
          [b21  b22]   [a21  a22]   [b21+a21  b22+a22]
1.4. Matrix Multiplication

• Matrix multiplication is generally not commutative. That is,
AB ≠ BA even if BA is conformable
(because the dot products of the rows and columns of A and B differ).

  A = [1  2],   B = [0  -1]
      [3  4]        [6   7]

  AB = [1·0 + 2·6   1·(-1) + 2·7] = [12  13]
       [3·0 + 4·6   3·(-1) + 4·7]   [24  25]

  BA = [0·1 + (-1)·3   0·2 + (-1)·4] = [-3  -4]
       [6·1 + 7·3      6·2 + 7·4   ]   [27  40]
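The non-commutativity example can be confirmed in a couple of lines:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, -1], [6, 7]])

AB = A @ B
BA = B @ A
commutes = (AB == BA).all()   # False: AB != BA
```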
1.4. Matrix multiplication

• Exceptions to the non-commutative law:
AB = BA holds, for example, if
B is a scalar,
B is the identity matrix I, or
B is the inverse of A, i.e., A-1

1.5. Identity and Null Matrices
• An identity matrix is a square matrix, and also a diagonal
matrix, with 1s along the diagonal. It is similar to the
scalar "1":

  I = [1  0]   or   [1  0  0]
      [0  1]        [0  1  0]   etc.
                    [0  0  1]

• A null matrix is one in which all elements are zero. It is
similar to the scalar "0":

  [0  0  0]
  [0  0  0]
  [0  0  0]

• Both are diagonal matrices.
• Both are symmetric and idempotent matrices:
A = AT and A = A^2 = A^3 = ...
1.6. Inverse matrix

• AA-1 = I and A-1A = I.
• A matrix must be square to have a unique inverse.
• If an inverse exists for a square matrix, it is unique.
• (A')-1 = (A-1)'
• Application to a system of equations Ax = d:
  A-1Ax = A-1d
  Ix = A-1d
  x = A-1d
• The solution depends on A-1 existing, i.e., on linear
independence of the rows/columns of A. Determinant test!
1.6. Inverse of a Matrix

1. Append the identity matrix to A:

  [a  b  c | 1  0  0]
  [d  e  f | 0  1  0]
  [g  h  i | 0  0  1]

2. Use row operations (scale rows, subtract multiples of other
rows) to reduce each diagonal element of A to 1 and each
off-diagonal element to 0.
3. Apply the same row operations to the appended identity
matrix as you go.
4. When the original matrix has become the identity, the
appended identity has become the inverse!
1.6. Determination of the Inverse
(Gauss-Jordan Elimination)

• AX = I, where A, X, and I are all (nxn) square matrices;
then X = A-1.

1) Form the augmented matrix [A I].
2) Transform the augmented matrix:
   [A I] -> [U H]   (Gauss elimination; U: upper triangular)
   [U H] -> [I K]   (further row operations: Gauss-Jordan
                     elimination)
• Then IX = K, so X = A-1 = K.
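The procedure above can be sketched directly in code. This is a minimal teaching implementation of Gauss-Jordan elimination (the function name is ours; a production code would use `np.linalg.inv`):

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert A by row-reducing the augmented matrix [A | I] to [I | A^-1]."""
    n = A.shape[0]
    aug = np.hstack([A.astype(float), np.eye(n)])   # step 1: [A I]
    for col in range(n):
        # partial pivoting: swap the largest available pivot into place
        pivot = col + np.argmax(np.abs(aug[col:, col]))
        aug[[col, pivot]] = aug[[pivot, col]]
        aug[col] /= aug[col, col]                   # scale pivot row to 1
        for row in range(n):                        # clear the rest of the column
            if row != col:
                aug[row] -= aug[row, col] * aug[col]
    return aug[:, n:]                               # right block is now A^-1

A = np.array([[2.0, 1.0],
              [7.0, 9.0]])
Ainv = gauss_jordan_inverse(A)
```

The sketch assumes A is invertible; a singular A would produce a zero pivot.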
1.6. Determinant of a Matrix

• The determinant is a number associated with any square
matrix.
• If A is an nxn matrix, the determinant is written |A| or
det(A).
• Determinants are used to characterize invertible matrices:
a matrix is invertible (non-singular) if and only if it has a
non-zero determinant.
• That is, if |A| ≠ 0 → A is invertible.
• Determinants are used to describe the solution to a system
of linear equations with Cramer's rule.
• Determinants can be found using factorials (the Leibniz
formula), pivots, and cofactors! More on this later.
• Lots of interpretations.
1.6. Determinant of a Matrix

• Used for inversion. Example: inverse of a 2x2 matrix:

  A = [a  b],   |A| = det(A) = ad - bc
      [c  d]

  A-1 = 1/(ad - bc) [ d  -b]
                    [-c   a]

The matrix [d  -b; -c  a] is called the adjugate of A (or adj(A)):

  A-1 = adj(A)/|A|
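The 2x2 adjugate formula can be checked against NumPy's general-purpose inverse; a sketch with assumed example entries:

```python
import numpy as np

a, b, c, d = 2.0, 1.0, 7.0, 9.0          # assumed example entries
A = np.array([[a, b], [c, d]])

det = a * d - b * c                       # |A| = ad - bc = 11
adj = np.array([[d, -b], [-c, a]])        # adjugate of A
Ainv = adj / det                          # A^-1 = adj(A)/|A|
```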
1.6. Determinant of a Matrix (3x3)

  |a  b  c|
  |d  e  f| = aei + bfg + cdh - afh - bdi - ceg
  |g  h  i|

Sarrus' Rule: copy the first two columns to the right of
the matrix:

  a  b  c  a  b
  d  e  f  d  e
  g  h  i  g  h

then sum the products of the diagonals running from left to
right, and subtract the products of the diagonals running
from right to left.
Note: in general, an nxn determinant has n! terms.
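Sarrus' rule is easy to code for the 3x3 case and to check against NumPy's determinant (the helper name is ours):

```python
import numpy as np

def sarrus_det(M):
    """3x3 determinant by Sarrus' rule."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return (a * e * i + b * f * g + c * d * h
            - a * f * h - b * d * i - c * e * g)

M = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])   # assumed example matrix
det = sarrus_det(M)
```

Note Sarrus' rule works only for 3x3 matrices; larger determinants need the Laplace or Leibniz formulas discussed next.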
1.6. Determinants: Laplace formula
• The determinant of a matrix of arbitrary size can
be defined by the Leibniz formula or the Laplace
formula.
• The Laplace formula (or expansion) expresses the
determinant |A| as a sum of n determinants of
(n-1) × (n-1) sub-matrices of A. There are 2n such
expansions, one for each row and each column of A.
• Define the (i,j) minor Mij (usually written as |Mij|) of
A as the determinant of the (n-1) × (n-1) matrix
that results from deleting the i-th row and the j-th
column of A.
1.6. Determinants: Laplace formula
• Define Cij, the (i,j) cofactor of A, as:

  Cij = (-1)^(i+j) |Mij|

• The cofactor matrix of A, denoted by C, is defined as
the nxn matrix whose (i,j) entry is the (i,j) cofactor of A.
The transpose of C is called the adjugate or adjoint of A
(adj(A)).

• Theorem (Determinant as a Laplace expansion)
Suppose A = [aij] is an nxn matrix and i, j ∈ {1, 2, ..., n}.
Then the determinant

  |A| = ai1Ci1 + ai2Ci2 + ... + ainCin
      = a1jC1j + a2jC2j + ... + anjCnj
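The theorem translates directly into a recursive routine (a teaching sketch; the function name is ours, and real code would use `np.linalg.det`, since cofactor expansion costs O(n!)):

```python
import numpy as np

def laplace_det(A):
    """Determinant by cofactor expansion along the first row."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # |M_1j|: delete row 1 and column j+1 (0-based: row 0, column j)
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        cofactor = (-1) ** j * laplace_det(minor)   # C_1j = (-1)^(1+j)|M_1j|
        total += A[0, j] * cofactor
    return total

A = np.array([[1.0,  2.0, 3.0],
              [0.0, -1.0, 0.0],
              [2.0,  4.0, 6.0]])
det = laplace_det(A)
```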
1.6. Determinants: Laplace formula

• Example:

  A = [1   2  3]
      [0  -1  0]
      [2   4  6]

Expanding along the first row:
  |A| = 1·C11 + 2·C12 + 3·C13
      = 1·(-6) + 2·(0) + 3·(2) = 0

Expanding along the second column:
  |A| = 2·C12 + (-1)·C22 + 4·C32
      = 2·(0) + (-1)·(1·6 - 3·2) + 4·(0) = 0

|A| is zero => the matrix is singular. (Check: row 3 = 2 x row 1.)
1.6. Determinants: Properties

• Interchanging the rows and columns does not
affect |A|: |A| = |A'|.
• |kA| = k^n |A|, where k is a scalar and A is nxn.
• |I| = 1, where I is the identity matrix.
• |AB| = |A||B|.
• |A-1| = 1/|A|.
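All five properties can be spot-checked on random matrices; a brief sketch (the seed and dimension are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
k = 2.5

p1 = np.isclose(np.linalg.det(A), np.linalg.det(A.T))            # |A| = |A'|
p2 = np.isclose(np.linalg.det(k * A), k**n * np.linalg.det(A))   # |kA| = k^n|A|
p3 = np.isclose(np.linalg.det(np.eye(n)), 1.0)                   # |I| = 1
p4 = np.isclose(np.linalg.det(A @ B),
                np.linalg.det(A) * np.linalg.det(B))             # |AB| = |A||B|
p5 = np.isclose(np.linalg.det(np.linalg.inv(A)),
                1.0 / np.linalg.det(A))                          # |A^-1| = 1/|A|
```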
1.6. Matrix inversion

• It is not possible to divide one matrix by another; that
is, we cannot write A/B. For two matrices A and B, the
quotient can be written as AB-1 or B-1A.
• In general, in matrix algebra AB-1 ≠ B-1A. Thus, writing
A/B does not clearly identify whether it represents AB-1
or B-1A. We say B-1 post-multiplies A (for AB-1) and
B-1 pre-multiplies A (for B-1A).
• Matrix division is, in effect, matrix inversion.
Matrix Algebra: Summary

• Matrix algebra can be used:
a. to express a system of equations in compact notation;
b. to find out whether a solution to a system of equations
exists; and
c. to obtain the solution if it exists. We need to invert the
matrix to find the solution for x*:

  Ax = d
  x* = A-1d
  A-1 = adj A / det A
  x* = (adj A / |A|) d
Notation and Definitions: Summary
• A (upper case letters) = matrix
• b (lower case letters) = vector
• nxm = n rows, m columns
• rank(A) = number of linearly independent rows or columns of A
• trace(A) = tr(A) = sum of diagonal elements of A
• Null matrix: all elements equal to zero.
• Diagonal matrix: all non-zero elements are on the diagonal.
• I = identity matrix (diagonal elements: 1, off-diagonal: 0)
• |A| = det(A) = determinant of A
• A-1 = inverse of A
• A' = AT = transpose of A
• A = AT: symmetric matrix
• A' = A-1 (i.e., ATA = I): orthogonal matrix
• |Mij| = minor of A
