
GM0742/GM1040 Mathematics

Lectures 1-6
MATRIX ALGEBRA

Charles Nadeau
E-mail: [email protected]
Office: E-506
Office Hours: by appointment

Department of Economics, University of Göteborg

Autumn 2023
Modeling Systems in Matrix Form
Simple linear input-output model:
x1 – βx2 = d1
x2 – γx3 = d2
x3 - αx1 = 0

where (x1 , x2 , x3) denote endogenous variables, (β, γ, α) denote coefficients and (d1 , d2) denote
constant terms.

Matrix form: Ax = d

where
A = [  1  −β   0 ]      x = [ x1 ]      d = [ d1 ]
    [  0   1  −γ ]          [ x2 ]          [ d2 ]
    [ −α   0   1 ]          [ x3 ]          [ 0  ]

Solving:

x1* = (d1 + βd2)/(1 − αβγ)
x2* = (αγd1 + d2)/(1 − αβγ)
x3* = (αd1 + αβd2)/(1 − αβγ)
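The closed-form solution can be verified by substituting it back into the three original equations. A minimal Python sketch (the specific values chosen for β, γ, α, d1 and d2 are illustrative assumptions, not from the lecture):

```python
# Illustrative parameter values (assumptions, not from the lecture):
beta, gamma, alpha = 0.5, 0.4, 0.3
d1, d2 = 10.0, 6.0

denom = 1 - alpha * beta * gamma
x1 = (d1 + beta * d2) / denom            # x1* = (d1 + β d2)/(1 - αβγ)
x2 = (alpha * gamma * d1 + d2) / denom   # x2* = (αγ d1 + d2)/(1 - αβγ)
x3 = (alpha * d1 + alpha * beta * d2) / denom   # x3* = (α d1 + αβ d2)/(1 - αβγ)

# Substitute back into the original system:
assert abs((x1 - beta * x2) - d1) < 1e-12    # x1 - βx2 = d1
assert abs((x2 - gamma * x3) - d2) < 1e-12   # x2 - γx3 = d2
assert abs(x3 - alpha * x1) < 1e-12          # x3 - αx1 = 0
```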
Matrix Modeling

Linear models can be written in (compact) matrix form Ax = d where:

A denotes the matrix of coefficients (coefficient matrix) (A33)

matrix ≡ group of mathematical terms arranged in rows and columns ([ ])


dimension ≡ denotes the size of a matrix in terms of # of rows and columns
elements ≡ individual terms in a matrix

x denotes the (column) vector of variables (x31)

vector ≡ special matrix with only 1 row or 1 column (row vector, column vector)

d denotes the (column) vector of constant terms (d31)

Note: If the coefficient matrix (A) is a non-singular, square matrix then the model can be
solved (x1*, x2*, x3*, ….) and the solution set will be unique.
Modeling Systems in Matrix Form (cont)

Suppose we have a system of m simultaneous, linear equations:

a11x1 + a12x2 + …. + a1nxn = d1


a21x1 + a22x2 + …. + a2nxn = d2
…..
am1x1 + am2x2 + …. + amnxn = dm

This system can be written as:


Ax = d
where

A = [ a11 a12 …. a1n ]      x = [ x1 ]      d = [ d1 ]
    [ a21 a22 …. a2n ]          [ x2 ]          [ d2 ]
    [  …   …  ….  …  ]          [ …  ]          [ …  ]
    [ am1 am2 …. amn ]          [ xn ]          [ dm ]
Modeling Systems in Matrix Form (cont)

Suppose:
2x1 - x2 = 0
-x1 + x2 = 4

Matrix form:
Ax = d

where
A = [  2  −1 ]      x = [ x1 ]      d = [ 0 ]
    [ −1   1 ]          [ x2 ]          [ 4 ]

First equation:   [ 2  −1 ] [ x1 ]  =  0
                            [ x2 ]

Second equation:  [ −1  1 ] [ x1 ]  =  4
                            [ x2 ]
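Each equation is the inner product of a row of A with the variable vector x. A small Python check (the solution values x1 = 4, x2 = 8 are found by substitution):

```python
A = [[2, -1],
     [-1, 1]]
d = [0, 4]
x = [4, 8]   # solution of the system, found by substitution

# Equation i is the inner product of row i of A with x:
for i in range(2):
    lhs = sum(A[i][j] * x[j] for j in range(2))
    assert lhs == d[i]
```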
Vectors

Vectors are ordered arrays of elements (e.g. numbers, variables, etc.):

Two-dimensional vectors:

x´ = [x1 x2]

y = [ y1 ]
    [ y2 ]

which can be represented as points or directed segments on a 2-


dimensional coordinate plane (see textbook discussion).
Vectors (cont)

Three-dimensional vectors:

x´ = [x1 x2 x3]

y = [ y1 ]
    [ y2 ]
    [ y3 ]

which can be represented as points or directed segments on a 3-


dimensional coordinate space (see textbook discussion).
Vectors (cont)

n-dimensional vectors:

x´ = [x1 x2 x3 … xn]

y = [ y1 ]
    [ y2 ]
    [ …  ]
    [ yn ]

which cannot be represented in graphical form.

Equality of vectors: x´ = z´ iff xj = zj ∀ j


y = z iff yi = zi ∀ i
Matrices

Matrices are rectangular arrays of elements consisting of m rows and n


columns:

m x n matrix:

Amn = [ a11 a12 …. a1n ]
      [ a21 a22 …. a2n ]
      [  …   …  ….  …  ]
      [ am1 am2 …. amn ]

where aij denotes the element of the matrix on the intersection of row i and
column j, considered as a single entity (e.g. number, parameter, variable).

Equality of matrices: A = B iff aij = bij ∀ i, j


Matrices (cont)

Special matrices:

Identity matrix:
I2 = [ 1 0 ]          I3 = [ 1 0 0 ]
     [ 0 1 ]               [ 0 1 0 ]
                           [ 0 0 1 ]

There also exist I4 , I5 , I6 , etc., all necessarily square matrices.


Matrix counterpart to scalar 1: IA = AI = A
Matrices (cont)

Special matrices:

Null matrix:
022 = [ 0 0 ]          032 = [ 0 0 ]
      [ 0 0 ]                [ 0 0 ]
                             [ 0 0 ]

Null matrices are not necessarily square matrices.


Matrix counterpart to scalar 0: (A + 0) = (0 + A) = A
(A0) = (0A) = 0
Vector Operations (conformability conditions)

Addition/Subtraction:
Vectors must have the same dimension

Scalar Multiplication (α):

Multiply every element of the vector by the scalar (α)

Multiplication:

Column dimension of the lead vector must be same as the row dimension of lag vector
(e.g. 1x2 lead vector and 2x1 lag vector)

Division:
Impossible
Vector Addition and Subtraction
Suppose:
x´ = [8 9 -3]
z´ = [4 -5 6]
Then:
x´ + z´ = [12 4 3]
x´ - z´ = [4 14 -9]
Suppose:
y = [ −2 ]          z = [ 0  ]
    [  7 ]              [ 13 ]
    [ −1 ]              [ 4  ]

Then:

y + z = [ −2 ]     and     y − z = [ −2 ]
        [ 20 ]                     [ −6 ]
        [  3 ]                     [ −5 ]
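The element-wise rules translate directly into code. A minimal Python sketch of the examples above (the helper names vec_add and vec_sub are illustrative):

```python
def vec_add(u, v):
    assert len(u) == len(v)   # conformability: same dimension
    return [a + b for a, b in zip(u, v)]

def vec_sub(u, v):
    assert len(u) == len(v)
    return [a - b for a, b in zip(u, v)]

# Row-vector example:
assert vec_add([8, 9, -3], [4, -5, 6]) == [12, 4, 3]
assert vec_sub([8, 9, -3], [4, -5, 6]) == [4, 14, -9]

# Column-vector example:
assert vec_add([-2, 7, -1], [0, 13, 4]) == [-2, 20, 3]
assert vec_sub([-2, 7, -1], [0, 13, 4]) == [-2, -6, -5]
```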
Vector Addition and Subtraction (cont)

Generically:
x´ = [x1 x2 x3]
z´ = [z1 z2 z3]

x´ + z´ = [x1 + z1 x2 + z2 x3 + z3]
x´ - z´ = [x1 - z1 x2 - z2 x3 - z3]

y = [ y1 ]      z = [ z1 ]
    [ y2 ]          [ z2 ]
    [ y3 ]          [ z3 ]

y + z = [ y1 + z1 ]      y − z = [ y1 − z1 ]
        [ y2 + z2 ]              [ y2 − z2 ]
        [ y3 + z3 ]              [ y3 − z3 ]
Scalar Multiplication (α)

Suppose:
x´ = [x1 x2 x3]

y = [ y1 ]
    [ y2 ]
    [ y3 ]

Then:

αx´ = [αx1 αx2 αx3]

αy = [ αy1 ]
     [ αy2 ]
     [ αy3 ]
Vector Multiplication

Generically:
x´ = [x1 x2 x3]

y = [ y1 ]
    [ y2 ]
    [ y3 ]

Then:

x´y = [x1 x2 x3] [ y1 ]  =  x1y1 + x2y2 + x3y3
                 [ y2 ]
                 [ y3 ]

Requirement: (row vector) x (column vector) with equal number of elements


Result: scalar (calculated as the inner product of the vector elements)
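The inner product is one line of Python (the helper name inner is illustrative):

```python
def inner(x, y):
    assert len(x) == len(y)   # conformability: equal number of elements
    return sum(a * b for a, b in zip(x, y))   # the result is a scalar

assert inner([1, 2, 3], [4, 5, 6]) == 1*4 + 2*5 + 3*6   # = 32
```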
Linear Dependence

A system of vectors (x, y, z) is linearly dependent if some non-trivial linear combination of them is
equal to the null vector:
α1x + α2y + α3z = 0

where the scalars (α1 , α2 , α3) are not all equal to zero.

Examples:

(1) Two vectors (x, y) are linearly dependent if they are proportional: x = αy
(2) Three vectors (x, y, z) are linearly dependent if one of the vectors is a linear combination of the
other two vectors: z = α1x + α2y

Textbook concepts: vector space, spanning vectors


Linear Dependence (cont)

Example:
v1´ = [5 12] v2´ = [10 24]

Row vectors v1´ and v2´ are linearly dependent because

2v1´ = v2´
2v1´ - v2´ = 0
Example:
v1 = [ 2 ]      v2 = [ 1 ]      v3 = [ 4 ]
     [ 7 ]           [ 8 ]           [ 5 ]

Column vectors v1 , v2 and v3 are linearly dependent because

3v1 – 2v2 = v3
3v1 – 2v2 – v3 = 0

Question: Is it even possible for v1 , v2 and v3 to be linearly independent?
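A Python check of the combination 3v1 − 2v2 − v3 = 0 (the answer to the question is no: vectors with only two elements live in 2-dimensional space, so at most two of them can be linearly independent):

```python
v1, v2, v3 = [2, 7], [1, 8], [4, 5]

# Verify the stated non-trivial linear combination element-wise:
combo = [3*a - 2*b - c for a, b, c in zip(v1, v2, v3)]
assert combo == [0, 0]   # the three vectors are linearly dependent

# v1 and v2 alone are independent: the 2x2 determinant of [v1 v2] is nonzero
det = v1[0]*v2[1] - v2[0]*v1[1]   # 2*8 - 1*7 = 9
assert det != 0
```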


Matrix Operations (conformability conditions)
Addition/Subtraction:
Matrices must have the same dimension
Commutative Law: A + B = B + A
Associative Law: (A + B) + C = A + (B + C)

Scalar Multiplication (α):

Multiply every element of the matrix by the scalar (α)


Commutative Law: αA = Aα

Multiplication:

Column dimension of the lead matrix must be the same as the row dimension of lag matrix
Associative Law: (AB)C = A(BC) = ABC
Distributive Law: A(B + C) = AB + AC

Division:
Impossible
Matrix Addition

Requirement: conformable for addition (i.e. same dimension)

A = [ a11 a12 ]      and      B = [ b11 b12 ]
    [ a21 a22 ]                   [ b21 b22 ]

A + B = [ a11 + b11   a12 + b12 ]
        [ a21 + b21   a22 + b22 ]

Rules:
(1) Commutative: A + B = B + A
(2) Associative: (A + B) + C = A + (B + C)
(3) Null matrix (0): A + 0 = A
(4) Opposite matrix (-A): A + (-A) = 0
Matrix Addition/Subtraction

Example:
A = [ 4 9 ]      B = [ 2 0 ]
    [ 2 1 ]          [ 0 7 ]

A + B = [ 6 9 ]
        [ 2 8 ]

Example:
A = [ 19 3 ]      B = [ 6 8 ]
    [  2 0 ]          [ 1 3 ]

A − B = [ 13 −5 ]
        [  1 −3 ]
Scalar Multiplication

αA = [ αa11  αa12 ]
     [ αa21  αa22 ]

Rules:
(1) α1(α2 A) = (α1 α2) A
(2) α (A + B) = αA + αB
(3) (α1 + α2) A = α1 A + α2 A
Matrix Multiplication

Conformability Requirement:
Amk Bkn = Cmn
(column dimension of lead matrix) = (row dimension of lag matrix) = k
Dimension of product matrix: (m x n)

Multiplication Rule:
cij = Σp=1..k aip bpj
(cij is calculated as the inner product of the ith row of A and the jth column of B)
Result: each element cij is a scalar; the product C = AB is an m x n matrix

Rules: (1) AB ≠ BA (i.e. not commutative in general)


(2) Associative: (AB)C = A(BC) (assuming conformability holds)
(3) Distributive: A(B+C) = AB + AC
(B+C)A = BA + CA (assuming …)
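The multiplication rule and the failure of commutativity can be illustrated in Python (mat_mul is an illustrative helper implementing the conformability check and inner-product rule above):

```python
def mat_mul(A, B):
    m, k, n = len(A), len(B), len(B[0])
    assert len(A[0]) == k   # conformability: cols of lead = rows of lag
    # c_ij = sum over p of a_ip * b_pj  (inner product of row i and column j)
    return [[sum(A[i][p] * B[p][j] for p in range(k)) for j in range(n)]
            for i in range(m)]

A = [[1, 2],
     [3, 4]]
B = [[0, 1],   # exchange matrix
     [1, 0]]

assert mat_mul(A, B) == [[2, 1], [4, 3]]   # AB swaps the columns of A
assert mat_mul(B, A) == [[3, 4], [1, 2]]   # BA swaps the rows of A
assert mat_mul(A, B) != mat_mul(B, A)      # not commutative in general
```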
Transpose of a Matrix
In order to transpose a matrix you interchange the rows and columns of the matrix:

Suppose:
A = [ 1 2 3 ]
    [ 4 5 6 ]
    [ 7 8 9 ]
Then:
A´ = [ 1 4 7 ]
     [ 2 5 8 ]
     [ 3 6 9 ]

Properties: (A´)´ = A
(A + B)´ = A´ + B´
(AB)´ = B´A´
Transpose of a Matrix (cont)

Suppose:
A = [ 1 2 3 4 ]
    [ 5 6 7 8 ]
Then:

A´ = [ 1 5 ]
     [ 2 6 ]
     [ 3 7 ]
     [ 4 8 ]
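Transposition in Python (the helper name transpose is illustrative):

```python
def transpose(A):
    # Interchange rows and columns: entry (i, j) moves to (j, i)
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

A = [[1, 2, 3, 4],
     [5, 6, 7, 8]]

assert transpose(A) == [[1, 5], [2, 6], [3, 7], [4, 8]]
assert transpose(transpose(A)) == A   # (A')' = A
```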
Matrix Inversion

The inverse of square matrix A is denoted as

A-1
and is defined such that

AA-1 = A-1 A = I

Note: (1) unlike a transpose matrix, an inverse matrix may not exist.
(2) if square matrix A has an inverse, then A is called nonsingular.
(3) if square matrix A has no inverse, then A is called singular.
(4) A and A-1 will always have the same dimension.
(5) if A-1 exists, then it is unique.

Issue: What does singularity or nonsingularity imply about a square matrix?


Matrix Inversion (cont)

Suppose matrix A and matrix B are both non-singular nxn matrices.

Properties:
(A-1)-1 = A
(AB)-1 = B-1 A-1
(A´)-1 = (A-1)´
Determinant of a Matrix

The determinant of a square matrix is a unique scalar computed in a particular way:

Suppose:
A = [ a11 a12 ]
    [ a21 a22 ]
Then:

|A| ≡ det A = | a11 a12 |  =  (a11a22 − a12a21)     (scalar)
              | a21 a22 |

where the value of |A| is a crucial test for the singularity of the matrix (discussed earlier)
and is useful for calculating A-1 (also discussed earlier).
Determinant of a Matrix (cont)

Suppose:
A = [ a11 a12 a13 ]
    [ a21 a22 a23 ]
    [ a31 a32 a33 ]
Then:

|A| = a11 | a22 a23 |  −  a12 | a21 a23 |  +  a13 | a21 a22 |
          | a32 a33 |         | a31 a33 |         | a31 a32 |

    = a11 M11 − a12 M12 + a13 M13

where the subdeterminant Mij denotes the minor of element aij. The cofactor ( Cij ) of
element aij is defined as:

Cij ≡ (-1)i+j Mij

Note: In the example above, |A| was calculated via expansion along the first row …
Determinant of a Matrix (cont)

The value of the determinant of a matrix can be calculated by expansion using any row i
or column j in the matrix; thus, the formula for the value of the determinant of matrix A
can be written as either:

det A = Σj=1..n aij Cij

if expanding using row i or, equivalently

det A = Σi=1..n aij Cij

if expanding using column j.
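The expansion formula can be implemented recursively. A Python sketch expanding along the first row (det is an illustrative helper):

```python
def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    # Expand along the first row: det A = sum over j of a_1j * C_1j
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in A[1:]]   # delete row 1, col j
        cofactor = (-1) ** j * det(minor)   # (-1)^(i+j) sign, 0-based indices
        total += A[0][j] * cofactor
    return total

assert det([[2, 0], [0, 2]]) == 4
assert det([[-1, 5], [2, -10]]) == 0   # linearly dependent rows
assert det([[1, 2, 3], [4, 5, 6], [7, 8, 10]]) == -3
```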


Determinant of a Matrix (cont)

Properties:

(1) If matrix A is singular (i.e. linearly dependent rows/columns) then:

|A| = 0

(2) |A| = |A′|
(3) If matrix A has a 0 row or column, then |A| = 0
(4) Interchanging two rows (or columns) in matrix A changes the sign of |A|
(5) Multiplying 1 row/column in matrix A by scalar α yields: α|A|

Textbook concepts: rank, full rank


Determinant of a Matrix (cont)

Given:
A = [ 2 0 ]
    [ 0 2 ]
Calculating:
|A| = (2)(2) − (0)(0) = +4

Given:
A = [ −1    5 ]
    [  2  −10 ]
Calculating:
|A| = (−1)(−10) − (5)(2) = 0

Note: −2v1´ = v2´ (i.e. linearly dependent rows)
      −5v1 = v2 (i.e. linearly dependent columns)
Determinant of a Matrix (cont)

Suppose:

A =

B =

Question: Is there an easy way to calculate determinants here?

Verify: |A| = +72 and |B| = −81


Calculating the Inverse of a Matrix

Given square matrix A, the inverse of A (i.e. A-1) is calculated as follows:


Steps:

(1) calculate |A| (assume: |A| ≠ 0)

(2) construct the cofactor matrix (C) for A

(3) take the transpose of C (i.e. C´ ≡ adjoint of A)

Calculating:

A-1 = adj A / |A|
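The three steps collapse nicely for the 2x2 case. A Python sketch (the test matrix entries are illustrative):

```python
def inverse_2x2(A):
    a, b = A[0]
    c, d = A[1]
    det = a * d - b * c
    assert det != 0   # A must be non-singular
    adj = [[d, -b], [-c, a]]   # adjoint: transpose of the cofactor matrix
    return [[adj[i][j] / det for j in range(2)] for i in range(2)]

# Illustrative example: det = 4*6 - 7*2 = 10
A = [[4, 7],
     [2, 6]]
Ainv = inverse_2x2(A)

# Check A * Ainv == I (within floating-point tolerance):
for i in range(2):
    for j in range(2):
        prod = sum(A[i][p] * Ainv[p][j] for p in range(2))
        assert abs(prod - (1 if i == j else 0)) < 1e-12
```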
Constructing a Cofactor Matrix

To construct a cofactor matrix (C) for matrix A we simply replace each element in matrix A
(aij) with its cofactor ( Cij ), where

Cij ≡ (-1)i+j Mij

Example:
A = [ a11 a12 ]
    [ a21 a22 ]

C = [ C11 C12 ]  =  [  a22  −a21 ]
    [ C21 C22 ]     [ −a12   a11 ]

adj A ≡ C´ = [  a22  −a12 ]
             [ −a21   a11 ]
Constructing a Cofactor Matrix (cont)

Example:
A = [ a11 a12 a13 ]
    [ a21 a22 a23 ]
    [ a31 a32 a33 ]

C = [ C11 C12 C13 ]
    [ C21 C22 C23 ]
    [ C31 C32 C33 ]

adj A ≡ C´ = [ C11 C21 C31 ]
             [ C12 C22 C32 ]
             [ C13 C23 C33 ]
Calculating the Inverse of a Matrix (cont)

Example:
I = [ 1 0 0 ]
    [ 0 1 0 ]
    [ 0 0 1 ]

Calculating: |I| = (1)(1) + (0)(0) + (0)(0) = +1

C = [ 1 0 0 ]      →      adj I = [ 1 0 0 ]
    [ 0 1 0 ]                     [ 0 1 0 ]
    [ 0 0 1 ]                     [ 0 0 1 ]

Thus: I-1 = adj I / |I| = [ 1 0 0 ]
                          [ 0 1 0 ]
                          [ 0 0 1 ]

Note: Identity matrix (I) is its own inverse (i.e. I = I-1)


Solving a Square System

Suppose:
Ax = d

where the coefficient matrix A is an nxn square matrix and d is an nx1 column vector
where d ≠ 0. If |A| ≠ 0 (i.e. matrix A is non-singular and thus full rank) then A-1 exists and
the unique solution to the system is:

x* = A-1 d
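A Python sketch of x* = A-1 d using the earlier 2x2 example (2x1 − x2 = 0, −x1 + x2 = 4):

```python
A = [[2, -1],
     [-1, 1]]
d = [0, 4]

det = A[0][0]*A[1][1] - A[0][1]*A[1][0]   # = 1, so A is non-singular
# 2x2 inverse via the adjoint formula:
Ainv = [[ A[1][1]/det, -A[0][1]/det],
        [-A[1][0]/det,  A[0][0]/det]]

# x* = A^{-1} d
x = [sum(Ainv[i][j] * d[j] for j in range(2)) for i in range(2)]
assert x == [4.0, 8.0]
```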
Solutions of a Square System

Suppose:
Ax = d

where the coefficient matrix A is an nxn square matrix (i.e. Ax = d denotes a square
system with n-equations and n-unknown variables).

|A| = 0:  (1) if d = 0 then many solutions exist, including: x* = 0

          (2) if d ≠ 0 then either no solution exists or many solutions exist

|A| ≠ 0:  (1) if d = 0 then a unique solution exists where: x* = 0

          (2) if d ≠ 0 then a unique solution exists where: x* = A-1 d
Solutions of a Non-Square System

Suppose:
Ax = d

where the coefficient matrix A is an mxn non-square matrix (i.e. Ax = d denotes a non-
square system with m-equations and n-unknown variables).

m < n: underdetermined system; generally many solutions possible

m > n: overdetermined system; generally no solution possible


Cramer’s Rule

Given:
Ax = d

As we have already seen, if |A| ≠ 0 and d ≠ 0 then a unique solution exists where

x* = A-1 d

According to Cramer’s rule


xj* = |Aj| / |A|

where Aj denotes the matrix A with the jth column replaced by the column vector d.
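Cramer's rule for the same 2x2 example used earlier (2x1 − x2 = 0, −x1 + x2 = 4); helper names are illustrative:

```python
def det2(A):
    return A[0][0]*A[1][1] - A[0][1]*A[1][0]

def cramer_2x2(A, d):
    D = det2(A)
    assert D != 0   # unique solution requires |A| != 0
    # A_j: matrix A with the jth column replaced by the column vector d
    A1 = [[d[0], A[0][1]], [d[1], A[1][1]]]
    A2 = [[A[0][0], d[0]], [A[1][0], d[1]]]
    return [det2(A1) / D, det2(A2) / D]

# |A1| = 4, |A2| = 8, |A| = 1, so x* = (4, 8) as before:
assert cramer_2x2([[2, -1], [-1, 1]], [0, 4]) == [4.0, 8.0]
```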
