Unit 1 Class Notes: Mathematical Foundation in Computer Science 2


Matrices and Their Applications

Introduction to Matrices
A matrix is a rectangular array of numbers, symbols, or expressions arranged
in rows and columns. Matrices are a key concept in mathematics, particularly
in linear algebra, and are used to represent linear transformations, solve sys-
tems of linear equations, and perform various operations in multiple disciplines.
Mathematically, a matrix is denoted by capital letters (e.g., A, B, C), and its
elements are typically enclosed in square or round brackets.

Structure of a Matrix
• Order (Dimension): A matrix with m rows and n columns is called an
m × n matrix.
• Elements: The individual items or numbers in a matrix are called elements
or entries, often denoted as $a_{ij}$, where i represents the row number
and j the column number.

For example, a 2 × 3 matrix can be represented as:

$$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{bmatrix}$$

Matrix Notation
A general matrix A can be represented as:

$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}$$

where $a_{ij}$ represents the element in the i-th row and j-th column.

Applications of Matrices
Matrices have numerous applications in various fields, including:

1. Computer Graphics
Matrices are used to perform transformations such as rotation, scaling, and
translation in computer graphics. They allow for efficient manipulation of points
and vectors in 2D and 3D space.
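As a concrete illustration (my addition, not part of the original notes), the NumPy sketch below builds a 2 × 2 rotation matrix and a scaling matrix and applies them to a point; the angle and scale factors are arbitrary example values.

```python
import numpy as np

# Rotate a 2D point 90 degrees counter-clockwise, then scale it by 2 along x
# and 3 along y, using matrix-vector products.
theta = np.pi / 2
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
scaling = np.array([[2.0, 0.0],
                    [0.0, 3.0]])

point = np.array([1.0, 0.0])
rotated = rotation @ point          # approximately [0, 1]
transformed = scaling @ rotated     # approximately [0, 3]
print(rotated, transformed)
```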

2. Solving Systems of Linear Equations


Matrices provide a compact way to represent and solve systems of linear equa-
tions. Techniques such as Gaussian elimination, LU decomposition, and matrix
inversion are commonly used.
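In practice such systems are usually handed to a library solver. The sketch below (my addition) uses NumPy's `numpy.linalg.solve` on the small 2 × 2 system that reappears later in these notes:

```python
import numpy as np

# Solve the 2x2 system  2x + 3y = 5,  4x - y = 1  written as Ax = b.
A = np.array([[2.0, 3.0],
              [4.0, -1.0]])
b = np.array([5.0, 1.0])

x = np.linalg.solve(A, b)   # direct solver (LU-based internally)
print(x)                    # [0.57142857 1.28571429], i.e. x = 4/7, y = 9/7
```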

3. Data Science and Machine Learning


In data science, matrices are used to represent datasets, perform operations
like covariance computation, and facilitate various algorithms including linear
regression, PCA, and neural networks.
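For instance, a dataset whose rows are samples and whose columns are features is itself a matrix, and its sample covariance matrix is a single matrix product. The NumPy sketch below (my addition, with made-up numbers) illustrates this and checks the result against `numpy.cov`:

```python
import numpy as np

# Hypothetical 4-sample, 3-feature data matrix (rows = samples, columns = features).
X = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, 2.0],
              [4.0, 1.0, 0.0],
              [3.0, 2.0, 3.0]])

X_centered = X - X.mean(axis=0)                      # subtract the column means
cov = (X_centered.T @ X_centered) / (X.shape[0] - 1)  # sample covariance matrix
print(np.allclose(cov, np.cov(X, rowvar=False)))      # True: matches NumPy's builtin
```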

4. Economics and Finance


Matrices are used in economics and finance for input-output models, optimiza-
tion problems, and to model complex systems with multiple variables and con-
straints.

5. Physics and Engineering


Matrices are employed in physics and engineering to model physical systems,
solve differential equations, and in areas such as quantum mechanics, structural
analysis, and control systems.

6. Markov Chains
In probability and statistics, matrices are used to represent transition matrices
in Markov chains, which model the behavior of stochastic processes over time.
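A minimal sketch (my addition, with made-up transition probabilities) of how a transition matrix propagates a probability distribution over states:

```python
import numpy as np

# Hypothetical two-state weather model: state 0 = sunny, state 1 = rainy.
# Row i holds the probabilities of moving from state i to each state.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

state = np.array([1.0, 0.0])                        # start sunny with probability 1
after_three_days = state @ np.linalg.matrix_power(P, 3)
print(after_three_days)                             # distribution over states after 3 steps
```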

7. Network Theory
Matrices are used to represent graphs and networks in network theory, enabling
the analysis of connectivity, flow, and other properties of complex networks.

Types of Matrices with Examples


1. Row Matrix
A row matrix has only one row and multiple columns. Its order is 1 × n, where
n is the number of columns.

Examples:
A = [4, 7, 9] (Order: 1 × 3)
B = [2, −3, 5, 8] (Order: 1 × 4)

2. Column Matrix
A column matrix has only one column and multiple rows. Its order is m × 1,
where m is the number of rows.
Examples:

$$C = \begin{bmatrix} 6 \\ -2 \\ 3 \end{bmatrix} \quad (\text{Order: } 3 \times 1)$$

$$D = \begin{bmatrix} 4 \\ 7 \end{bmatrix} \quad (\text{Order: } 2 \times 1)$$

3. Square Matrix
A square matrix has the same number of rows and columns (m = n).
Examples:

$$E = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \quad (\text{Order: } 2 \times 2)$$

$$F = \begin{bmatrix} 5 & -1 & 2 \\ 0 & 3 & 4 \\ -7 & 8 & 6 \end{bmatrix} \quad (\text{Order: } 3 \times 3)$$

4. Diagonal Matrix
A diagonal matrix is a square matrix where all elements outside the main diag-
onal (top-left to bottom-right) are zero.
Examples:

$$G = \begin{bmatrix} 5 & 0 \\ 0 & 3 \end{bmatrix} \quad (\text{Order: } 2 \times 2)$$

$$H = \begin{bmatrix} 7 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & -4 \end{bmatrix} \quad (\text{Order: } 3 \times 3)$$

5. Identity Matrix
An identity matrix is a diagonal matrix where all diagonal elements are 1. It
acts as the multiplicative identity in matrix multiplication.
Examples:

$$I_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \quad (\text{Order: } 2 \times 2)$$

$$I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \quad (\text{Order: } 3 \times 3)$$

6. Zero Matrix
A zero matrix has all its elements equal to zero. It can be of any order (m × n).
Examples:

$$J = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \quad (\text{Order: } 2 \times 3)$$

$$K = \begin{bmatrix} 0 & 0 \\ 0 & 0 \\ 0 & 0 \end{bmatrix} \quad (\text{Order: } 3 \times 2)$$

7. Symmetric Matrix
A symmetric matrix is a square matrix that is equal to its transpose, i.e., $A = A^T$.
Examples:

$$L = \begin{bmatrix} 1 & 2 & 3 \\ 2 & 4 & 5 \\ 3 & 5 & 6 \end{bmatrix} \quad (\text{Order: } 3 \times 3)$$

$$M = \begin{bmatrix} 7 & -1 \\ -1 & 5 \end{bmatrix} \quad (\text{Order: } 2 \times 2)$$

8. Skew-Symmetric Matrix
A skew-symmetric matrix is a square matrix that is equal to the negative of its
transpose, i.e., $A = -A^T$. The diagonal elements of a skew-symmetric matrix
are always zero.
Examples:

$$N = \begin{bmatrix} 0 & 3 & -2 \\ -3 & 0 & 4 \\ 2 & -4 & 0 \end{bmatrix} \quad (\text{Order: } 3 \times 3)$$

$$O = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \quad (\text{Order: } 2 \times 2)$$

9. Orthogonal Matrix
An orthogonal matrix is a square matrix whose rows and columns are orthogonal
unit vectors. Mathematically, $AA^T = I$.
Examples:

$$P = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \quad (\text{Order: } 2 \times 2), \text{ an orthogonal matrix since } PP^T = I.$$

$$Q = \begin{bmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{bmatrix} \quad (\text{Order: } 2 \times 2), \text{ an orthogonal matrix since } QQ^T = I.$$

10. Triangular Matrix


A triangular matrix is a square matrix where all the entries either above (upper
triangular) or below (lower triangular) the main diagonal are zero.
Upper Triangular Matrix: All elements below the main diagonal are
zero.
Examples:

$$R = \begin{bmatrix} 2 & 3 & 1 \\ 0 & -5 & 4 \\ 0 & 0 & 6 \end{bmatrix} \quad (\text{Order: } 3 \times 3)$$

$$S = \begin{bmatrix} 7 & -1 \\ 0 & 4 \end{bmatrix} \quad (\text{Order: } 2 \times 2)$$

Lower Triangular Matrix: All elements above the main diagonal are zero.
Examples:

$$T = \begin{bmatrix} 3 & 0 & 0 \\ 5 & 2 & 0 \\ 4 & 1 & -3 \end{bmatrix} \quad (\text{Order: } 3 \times 3)$$

$$U = \begin{bmatrix} 1 & 0 \\ 9 & 7 \end{bmatrix} \quad (\text{Order: } 2 \times 2)$$
These examples showcase the diverse nature of matrices and highlight their
structural variations, each serving specific purposes in mathematical and applied
contexts.
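As a quick illustration (my addition), the NumPy snippet below tests a square matrix against several of the definitions above; the `describe` helper is a hypothetical name, and the sample matrix reuses the orthogonal matrix Q from above.

```python
import numpy as np

def describe(A: np.ndarray) -> list[str]:
    """Report which of the structural types above a square matrix satisfies."""
    tags = []
    if np.allclose(A, A.T):
        tags.append("symmetric")
    if np.allclose(A, -A.T):
        tags.append("skew-symmetric")
    if np.allclose(A @ A.T, np.eye(A.shape[0])):
        tags.append("orthogonal")
    if np.allclose(A, np.triu(A)):
        tags.append("upper triangular")
    if np.allclose(A, np.tril(A)):
        tags.append("lower triangular")
    return tags

Q = np.array([[1.0, 1.0], [-1.0, 1.0]]) / np.sqrt(2)
print(describe(Q))          # ['orthogonal']
print(describe(np.eye(3)))  # identity: symmetric, orthogonal, and triangular both ways
```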

Operations on Matrices
Matrices are versatile tools in mathematics, and several operations can be per-
formed on them. Here, we provide a detailed description of the basic operations
on matrices along with examples.

1. Addition of Matrices
Matrix addition is defined for two matrices of the same order (i.e., having the
same number of rows and columns). The sum of two matrices A and B, both
of order m × n, is a matrix C of the same order, where each element cij is the
sum of the corresponding elements from A and B:

$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}, \quad B = \begin{bmatrix} b_{11} & b_{12} & \cdots & b_{1n} \\ b_{21} & b_{22} & \cdots & b_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ b_{m1} & b_{m2} & \cdots & b_{mn} \end{bmatrix}$$

$$A + B = \begin{bmatrix} a_{11}+b_{11} & a_{12}+b_{12} & \cdots & a_{1n}+b_{1n} \\ a_{21}+b_{21} & a_{22}+b_{22} & \cdots & a_{2n}+b_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1}+b_{m1} & a_{m2}+b_{m2} & \cdots & a_{mn}+b_{mn} \end{bmatrix}$$
For matrix addition, the following rules apply:
• Commutative Property:

A+B=B+A

• Associative Property:

(A + B) + C = A + (B + C)

• Additive Identity:
A+0=A
where 0 is the zero matrix of the same order as A.
• Additive Inverse:
A + (−A) = 0
where −A is the matrix with elements that are the negations of the cor-
responding elements of A.

2. Scalar Multiplication of a Matrix


Scalar multiplication involves multiplying each element of a matrix by a scalar
(a constant number). If A is a matrix of order m × n and k is a scalar, then the
scalar multiplication k · A is defined as:

$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}, \quad k \cdot A = \begin{bmatrix} k \cdot a_{11} & k \cdot a_{12} & \cdots & k \cdot a_{1n} \\ k \cdot a_{21} & k \cdot a_{22} & \cdots & k \cdot a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ k \cdot a_{m1} & k \cdot a_{m2} & \cdots & k \cdot a_{mn} \end{bmatrix}$$
For scalar multiplication, the following rules apply:

• Distributive Property over Matrix Addition:

k · (A + B) = k · A + k · B

• Distributive Property over Scalar Addition:

(k + l) · A = k · A + l · A

• Associative Property of Scalar Multiplication:

k · (l · A) = (k · l) · A

• Multiplicative Identity:
1·A=A
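The properties listed in this and the previous subsection are easy to confirm numerically. The short NumPy sketch below (my addition, with arbitrary example values) does exactly that:

```python
import numpy as np

# Element-wise addition and scalar multiplication, with two of the listed
# properties checked numerically on small example matrices.
A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
k, l = 3, -2

print(A + B)                                        # element-wise sum
print(k * A)                                        # every entry multiplied by k
print(np.array_equal(A + B, B + A))                 # commutativity of addition
print(np.array_equal((k + l) * A, k * A + l * A))   # distributivity over scalar addition
```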

3. Matrix Multiplication
Matrix multiplication is defined for two matrices where the number of columns
in the first matrix is equal to the number of rows in the second matrix. If A is
an m × n matrix and B is an n × p matrix, their product C = A · B is an m × p
matrix.
The element $c_{ij}$ of matrix C is obtained by taking the dot product of the
i-th row of A and the j-th column of B:

$$c_{ij} = \sum_{k=1}^{n} a_{ik} \cdot b_{kj}$$

where $a_{ik}$ is the element from the i-th row and k-th column of A, and $b_{kj}$
is the element from the k-th row and j-th column of B.

Example of Matrix Multiplication

Let $A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}$ (a 2 × 3 matrix) and $B = \begin{bmatrix} 7 & 8 \\ 9 & 10 \\ 11 & 12 \end{bmatrix}$ (a 3 × 2 matrix).

The product C = A · B will be a 2 × 2 matrix:

$$C = \begin{bmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{bmatrix} = \begin{bmatrix} (1 \cdot 7 + 2 \cdot 9 + 3 \cdot 11) & (1 \cdot 8 + 2 \cdot 10 + 3 \cdot 12) \\ (4 \cdot 7 + 5 \cdot 9 + 6 \cdot 11) & (4 \cdot 8 + 5 \cdot 10 + 6 \cdot 12) \end{bmatrix}$$

Calculating each element:

$$c_{11} = 1 \cdot 7 + 2 \cdot 9 + 3 \cdot 11 = 7 + 18 + 33 = 58$$

$$c_{12} = 1 \cdot 8 + 2 \cdot 10 + 3 \cdot 12 = 8 + 20 + 36 = 64$$

$$c_{21} = 4 \cdot 7 + 5 \cdot 9 + 6 \cdot 11 = 28 + 45 + 66 = 139$$

$$c_{22} = 4 \cdot 8 + 5 \cdot 10 + 6 \cdot 12 = 32 + 50 + 72 = 154$$

Thus, the resulting matrix C is:

$$C = \begin{bmatrix} 58 & 64 \\ 139 & 154 \end{bmatrix}$$
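As a quick cross-check (my addition), the same product can be computed with NumPy's `@` operator, which applies the row-by-column rule above:

```python
import numpy as np

# Reproduce the worked 2x3 times 3x2 product above.
A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[7, 8],
              [9, 10],
              [11, 12]])
print(A @ B)   # [[ 58  64]
               #  [139 154]]
```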
For matrix multiplication, the following rules apply:

• Associative Property:

(A · B) · C = A · (B · C)

• Distributive Property:

A · (B + C) = A · B + A · C

(A + B) · C = A · C + B · C

• Multiplicative Identity:
A·I=A
where I is the identity matrix of appropriate order.
• Non-Commutativity:
$A \cdot B \neq B \cdot A$
in general.

4. Transpose of a Matrix
The transpose of a matrix is an operation that flips a matrix over its diagonal,
swapping the row and column indices of the elements. If A is an m × n matrix,
its transpose, denoted by $A^T$, is an n × m matrix.
Mathematically, the transpose of a matrix A is defined as:

$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}$$

The transpose of A, denoted by $A^T$, is:

$$A^T = \begin{bmatrix} a_{11} & a_{21} & \cdots & a_{m1} \\ a_{12} & a_{22} & \cdots & a_{m2} \\ \vdots & \vdots & \ddots & \vdots \\ a_{1n} & a_{2n} & \cdots & a_{mn} \end{bmatrix}$$

Thus, the element in the i-th row and j-th column of A becomes the element
in the j-th row and i-th column of $A^T$.

Example
Let A be a 2 × 3 matrix:

$$A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}$$

The transpose of A, $A^T$, will be a 3 × 2 matrix:

$$A^T = \begin{bmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{bmatrix}$$

Here, the first row of A becomes the first column of $A^T$, the second row of
A becomes the second column of $A^T$, and so on.

For the transpose of a matrix, the following rules apply:

• Transpose of a Sum:

$(A + B)^T = A^T + B^T$

• Transpose of a Scalar Multiple:

$(k \cdot A)^T = k \cdot A^T$

• Transpose of a Product:

$(A \cdot B)^T = B^T \cdot A^T$

• Transpose of Transpose:

$(A^T)^T = A$
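A short NumPy sketch (my addition) that spot-checks these four rules on randomly chosen integer matrices:

```python
import numpy as np

# Numerically spot-check the transpose rules on small random matrices.
rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(2, 3))
B = rng.integers(-5, 5, size=(2, 3))
C = rng.integers(-5, 5, size=(3, 2))
k = 4

print(np.array_equal((A + B).T, A.T + B.T))   # transpose of a sum
print(np.array_equal((k * A).T, k * A.T))     # transpose of a scalar multiple
print(np.array_equal((A @ C).T, C.T @ A.T))   # transpose of a product (note the order swap)
print(np.array_equal(A.T.T, A))               # transpose of transpose
```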

Introduction to Linear Systems of Equations


A linear system of equations is a collection of one or more linear equations
involving the same set of variables. These systems can be represented in ma-
trix form and are foundational in various fields such as mathematics, physics,
engineering, and economics. The goal is to find the values of the variables that
satisfy all the equations simultaneously.

Terminology
1. Linear Equation
A linear equation is an equation of the form:

$$a_1 x_1 + a_2 x_2 + \cdots + a_n x_n = b$$

where $a_1, a_2, \ldots, a_n$ and b are constants, and $x_1, x_2, \ldots, x_n$ are variables. Each
term is either a constant or the product of a constant and a variable. The graph
of a linear equation in two variables is a straight line, and in three variables, it
is a plane.
Example:
The equation 2x1 + 3x2 = 5 is a linear equation in two variables x1 and x2 .

2. System of Equations
A system of equations is a set of one or more equations involving the same
set of variables. The system can be linear or nonlinear, but here we focus on
linear systems.
Example:
Consider the system:

$$\begin{cases} 2x_1 + 3x_2 = 5 \\ 4x_1 - x_2 = 1 \end{cases}$$

This system consists of two linear equations with two variables, $x_1$ and $x_2$.

3. Solution
A solution to a system of equations is a set of values for the variables that
satisfies all the equations in the system. If (x1 , x2 ) is a solution to the above
system, substituting these values into both equations should satisfy them.
Example:
For the system

$$\begin{cases} 2x_1 + 3x_2 = 5 \\ 4x_1 - x_2 = 3 \end{cases}$$

the pair $(x_1, x_2) = (2, 1)$ is not a solution, because substituting $x_1 = 2$ and
$x_2 = 1$ into the first equation does not satisfy it:

$$2(2) + 3(1) = 4 + 3 = 7 \neq 5$$

However, the values $(x_1, x_2) = (1, 1)$ do satisfy both equations:

$$2(1) + 3(1) = 5 \quad \text{and} \quad 4(1) - 1 = 3$$

4. Consistency
A system is consistent if it has at least one solution. If there are no solutions,
the system is inconsistent.
Example:
The system

$$\begin{cases} x_1 + x_2 = 3 \\ 2x_1 + 2x_2 = 7 \end{cases}$$

is inconsistent: the left-hand side of the second equation is twice that of the first,
so the first equation forces $2x_1 + 2x_2 = 6$, which contradicts the right-hand side
of 7. No values of $x_1$ and $x_2$ satisfy both equations.

5. Homogeneous System
A system of linear equations is homogeneous if all constant terms are zero.
Such systems always have at least one solution: the trivial solution where all
variables are zero.
Example:
The system

$$\begin{cases} x_1 + 2x_2 - x_3 = 0 \\ 3x_1 - x_2 + 4x_3 = 0 \end{cases}$$

is homogeneous because all constants on the right-hand side are zero.

6. Inhomogeneous/Non-homogeneous System
A system of linear equations is inhomogeneous if at least one constant term
is non-zero. Inhomogeneous systems may or may not have solutions.
Example:
The system

$$\begin{cases} x_1 + x_2 = 4 \\ 2x_1 - x_2 = 1 \end{cases}$$

is inhomogeneous because the constants on the right-hand side are not all zero.

7. Matrix Representation
A system of linear equations can be represented using matrices. For instance,
the system

$$\begin{cases} a_{11} x_1 + a_{12} x_2 = b_1 \\ a_{21} x_1 + a_{22} x_2 = b_2 \end{cases}$$

can be represented as

$$Ax = b$$

where

$$A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}, \quad x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}, \quad b = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}$$

1. Coefficient Matrix
The coefficient matrix is the matrix consisting of only the coefficients of the
variables in a system of linear equations. It is used to represent the system in
matrix form.
Example:
For the system

$$\begin{cases} 2x_1 + 3x_2 = 5 \\ 4x_1 - x_2 = 1 \end{cases}$$

the coefficient matrix A is:

$$A = \begin{bmatrix} 2 & 3 \\ 4 & -1 \end{bmatrix}$$

2. Augmented Matrix
The augmented matrix is formed by appending the column of constants (from
the right-hand side of the equations) to the coefficient matrix. This matrix is
used to perform various operations to solve the system of equations.
Example:
For the system

$$\begin{cases} 2x_1 + 3x_2 = 5 \\ 4x_1 - x_2 = 1 \end{cases}$$

the augmented matrix [A | b] is:

$$[A \mid b] = \left[\begin{array}{cc|c} 2 & 3 & 5 \\ 4 & -1 & 1 \end{array}\right]$$

Gauss Elimination Method


The Gauss Elimination Method is a systematic procedure used to solve
systems of linear equations. It transforms a given system into an equivalent
upper triangular system, which can then be solved by back-substitution. The
method involves using elementary row operations to simplify the augmented
matrix of the system.

Steps Involved in Gauss Elimination Method


1. Form the Augmented Matrix:
   - Write the augmented matrix for the system of linear equations.

2. Forward Elimination:
   - Use elementary row operations to transform the matrix into an upper
     triangular form (all zeros below the main diagonal).
   - This involves creating zeros below each pivot (the leading entry in a row).

3. Back-Substitution:
   - Solve for the variables starting from the bottom row and working upwards.
   - Use the values obtained to substitute into the rows above, progressively
     solving for all variables.

Elementary Row Operations


Elementary row operations are fundamental manipulations that can be applied
to the rows of a matrix. These operations are used to simplify matrices and
are crucial in methods like Gaussian Elimination for solving systems of linear
equations. There are three types of elementary row operations:

1. Row Swapping (Ri ↔ Rj )


This operation involves swapping two rows of a matrix. It is used when a pivot
element is zero or when rearranging rows to simplify calculations.
Example: Given matrix A:

$$A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}$$

Swapping row 1 ($R_1$) and row 3 ($R_3$):

$$R_1 \leftrightarrow R_3 \implies \begin{bmatrix} 7 & 8 & 9 \\ 4 & 5 & 6 \\ 1 & 2 & 3 \end{bmatrix}$$

2. Row Multiplication (Ri → kRi )


This operation multiplies every element of a row by a non-zero scalar k. It is
used to create pivots or to simplify row elements.
Example: Given matrix A:

$$A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}$$

Multiplying row 2 ($R_2$) by 2:

$$R_2 \to 2R_2 \implies \begin{bmatrix} 1 & 2 & 3 \\ 8 & 10 & 12 \\ 7 & 8 & 9 \end{bmatrix}$$

3. Row Addition/Subtraction (Ri → Ri + kRj )


This operation adds or subtracts a multiple of one row to another row. It is
used to create zeros in specific positions of the matrix, typically below pivot
elements.
Example: Given matrix A:

$$A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}$$

Adding −4 times row 1 ($R_1$) to row 2 ($R_2$):

$$R_2 \to R_2 - 4R_1 \implies \begin{bmatrix} 1 & 2 & 3 \\ 0 & -3 & -6 \\ 7 & 8 & 9 \end{bmatrix}$$
These operations are fundamental tools in matrix manipulation, allowing for
simplification and solution of linear systems.
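To tie the three row operations together, here is a rough teaching sketch (my addition; `gauss_eliminate` is a hypothetical helper name) of the Gauss elimination procedure described above. It adds partial pivoting, i.e. a row swap to select a larger pivot, which the hand-worked examples below do not require:

```python
import numpy as np

def gauss_eliminate(A, b):
    """Solve Ax = b by forward elimination (with partial pivoting) and back-substitution.

    A teaching sketch of the method described above, not a production solver.
    """
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)

    # Forward elimination: create zeros below each pivot using row operations.
    for i in range(n):
        pivot_row = i + np.argmax(np.abs(A[i:, i]))   # row swap for a better pivot
        if np.isclose(A[pivot_row, i], 0.0):
            raise ValueError("matrix is singular or nearly singular")
        A[[i, pivot_row]] = A[[pivot_row, i]]
        b[[i, pivot_row]] = b[[pivot_row, i]]
        for j in range(i + 1, n):
            factor = A[j, i] / A[i, i]                # R_j -> R_j - factor * R_i
            A[j, i:] -= factor * A[i, i:]
            b[j] -= factor * b[i]

    # Back-substitution: solve from the last row upwards.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 3.0, 1.0], [4.0, 1.0, -2.0], [-2.0, 5.0, 3.0]])
b = np.array([9.0, 2.0, 7.0])
print(gauss_eliminate(A, b))   # [2.5 0.  4. ], as in Example 1 below
```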

Example 1: Solving a System Using Gauss Elimination


Solve the following system of equations:

2x + 3y + z = 9

4x + y − 2z = 2

−2x + 5y + 3z = 7

Step 1: Form the Augmented Matrix

$$\left[\begin{array}{ccc|c} 2 & 3 & 1 & 9 \\ 4 & 1 & -2 & 2 \\ -2 & 5 & 3 & 7 \end{array}\right]$$

Step 2: Forward Elimination
- Make the pivot in the first column of the first row equal to 1 by dividing the first row by 2:

$$R_1 \to \tfrac{1}{2} R_1: \quad \left[\begin{array}{ccc|c} 1 & \tfrac{3}{2} & \tfrac{1}{2} & \tfrac{9}{2} \\ 4 & 1 & -2 & 2 \\ -2 & 5 & 3 & 7 \end{array}\right]$$

- Eliminate the first element of the second and third rows:

$$R_2 \to R_2 - 4R_1: \quad \left[\begin{array}{ccc|c} 1 & \tfrac{3}{2} & \tfrac{1}{2} & \tfrac{9}{2} \\ 0 & -5 & -4 & -16 \\ -2 & 5 & 3 & 7 \end{array}\right]$$

$$R_3 \to R_3 + 2R_1: \quad \left[\begin{array}{ccc|c} 1 & \tfrac{3}{2} & \tfrac{1}{2} & \tfrac{9}{2} \\ 0 & -5 & -4 & -16 \\ 0 & 8 & 4 & 16 \end{array}\right]$$

- Make the pivot in the second row, second column equal to 1 by dividing the second row by −5:

$$R_2 \to -\tfrac{1}{5} R_2: \quad \left[\begin{array}{ccc|c} 1 & \tfrac{3}{2} & \tfrac{1}{2} & \tfrac{9}{2} \\ 0 & 1 & \tfrac{4}{5} & \tfrac{16}{5} \\ 0 & 8 & 4 & 16 \end{array}\right]$$

- Eliminate the element in the second column of the third row:

$$R_3 \to R_3 - 8R_2: \quad \left[\begin{array}{ccc|c} 1 & \tfrac{3}{2} & \tfrac{1}{2} & \tfrac{9}{2} \\ 0 & 1 & \tfrac{4}{5} & \tfrac{16}{5} \\ 0 & 0 & -\tfrac{12}{5} & -\tfrac{48}{5} \end{array}\right]$$

Step 3: Back-Substitution
- Solve for z from the third row:

$$-\tfrac{12}{5} z = -\tfrac{48}{5} \implies z = 4$$

- Substitute z = 4 into the second row:

$$y + \tfrac{4}{5} \cdot 4 = \tfrac{16}{5} \implies y = \tfrac{16}{5} - \tfrac{16}{5} = 0$$

- Substitute y = 0 and z = 4 into the first row:

$$x + \tfrac{3}{2} \cdot 0 + \tfrac{1}{2} \cdot 4 = \tfrac{9}{2} \implies x = \tfrac{9}{2} - 2 = \tfrac{5}{2}$$

Solution: $x = \tfrac{5}{2}$, $y = 0$, $z = 4$.
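A one-line cross-check of this result with NumPy (my addition):

```python
import numpy as np

# Cross-check Example 1: the solution should be x = 5/2, y = 0, z = 4.
A = np.array([[2.0, 3.0, 1.0],
              [4.0, 1.0, -2.0],
              [-2.0, 5.0, 3.0]])
b = np.array([9.0, 2.0, 7.0])
print(np.linalg.solve(A, b))   # [2.5 0.  4. ]
```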

Example 2:
Solve the following system of equations:

x + y + z = 6

2x + 3y + 7z = 23

x − y + 2z = 5

Step 1: Form the Augmented Matrix

$$\left[\begin{array}{ccc|c} 1 & 1 & 1 & 6 \\ 2 & 3 & 7 & 23 \\ 1 & -1 & 2 & 5 \end{array}\right]$$

Step 2: Forward Elimination
- Eliminate the first element of the second and third rows:

$$R_2 \to R_2 - 2R_1: \quad \left[\begin{array}{ccc|c} 1 & 1 & 1 & 6 \\ 0 & 1 & 5 & 11 \\ 1 & -1 & 2 & 5 \end{array}\right]$$

$$R_3 \to R_3 - R_1: \quad \left[\begin{array}{ccc|c} 1 & 1 & 1 & 6 \\ 0 & 1 & 5 & 11 \\ 0 & -2 & 1 & -1 \end{array}\right]$$

- Eliminate the element in the second column of the third row:

$$R_3 \to R_3 + 2R_2: \quad \left[\begin{array}{ccc|c} 1 & 1 & 1 & 6 \\ 0 & 1 & 5 & 11 \\ 0 & 0 & 11 & 21 \end{array}\right]$$

Step 3: Back-Substitution
- Solve for z from the third row:

$$11z = 21 \implies z = \tfrac{21}{11}$$

- Substitute z = 21/11 into the second row:

$$y + 5 \cdot \tfrac{21}{11} = 11 \implies y = 11 - \tfrac{105}{11} = \tfrac{16}{11}$$

- Substitute y = 16/11 and z = 21/11 into the first row:

$$x + \tfrac{16}{11} + \tfrac{21}{11} = 6 \implies x = 6 - \tfrac{37}{11} = \tfrac{29}{11}$$

Solution: $x = \tfrac{29}{11}$, $y = \tfrac{16}{11}$, $z = \tfrac{21}{11}$.
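An exact cross-check of the back-substitution with Python's `fractions` module (my addition):

```python
from fractions import Fraction

# Exact rational arithmetic for the back-substitution in Example 2.
z = Fraction(21, 11)
y = 11 - 5 * z            # from the second row: y + 5z = 11  ->  16/11
x = 6 - y - z             # from the first row:  x + y + z = 6 ->  29/11
print(x, y, z)            # 29/11 16/11 21/11
```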

Inverse of a Matrix
The inverse of a square matrix A is another matrix, denoted by $A^{-1}$, such that
when A is multiplied by $A^{-1}$, the result is the identity matrix. Mathematically,
if A is an n × n matrix, then:

$$A \cdot A^{-1} = A^{-1} \cdot A = I_n$$

where $I_n$ is the n × n identity matrix. A matrix A has an inverse if and only
if it is non-singular, meaning its determinant is not zero (det(A) ≠ 0).

Methods to Find the Inverse of a Matrix


There are several methods to find the inverse of a matrix:

1. Adjoint Method
To find the inverse of a matrix using the adjoint method:
$$A^{-1} = \frac{1}{\det(A)} \cdot \text{adj}(A)$$

where adj(A) is the adjoint (or adjugate) of A, and det(A) is the determinant
of A.
The adjoint (or adjugate) of a square matrix A is the transpose of the cofactor
matrix of A. If A is an n × n matrix, then its adjoint, denoted by adj(A), is
obtained as follows:

1. Find the cofactor of each element in the matrix A. The cofactor $C_{ij}$ of
an element $a_{ij}$ is given by:

$$C_{ij} = (-1)^{i+j} \det(M_{ij})$$

where $M_{ij}$ is the minor of element $a_{ij}$, obtained by deleting the i-th row and
j-th column from A.

2. Construct the cofactor matrix using the cofactors $C_{ij}$.

3. Transpose the cofactor matrix to obtain the adjoint matrix:

$$\text{adj}(A) = (\text{Cofactor Matrix})^T$$

2. Gauss-Jordan Elimination
This method involves augmenting the matrix A with the identity matrix and
applying row operations until the original matrix is reduced to the identity
matrix. The augmented part then becomes the inverse of A.

3. Matrix Decomposition (LU Decomposition)


The inverse of a matrix can also be found using decomposition methods such
as LU decomposition, where A is decomposed into a lower triangular matrix L
and an upper triangular matrix U. The inverses of these factors are found
separately and multiplied in reverse order, $A^{-1} = U^{-1} L^{-1}$, to obtain $A^{-1}$.
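In practice these methods are wrapped in library routines. The sketch below (my addition) applies NumPy's `numpy.linalg.inv` to the 2 × 2 matrix used in Example 1 below and verifies the defining property:

```python
import numpy as np

# Invert a small matrix with NumPy and confirm A @ A_inv equals the identity.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
A_inv = np.linalg.inv(A)
print(A_inv)                                 # [[-2.   1. ]
                                             #  [ 1.5 -0.5]]
print(np.allclose(A @ A_inv, np.eye(2)))     # True
```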

Solved Examples
Example 1: Using Adjoint Method
Find the inverse of matrix A:

$$A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$$

Step 1: Find the Determinant of A

$$\det(A) = (1)(4) - (2)(3) = 4 - 6 = -2$$

Step 2: Find the Adjoint of A

$$\text{adj}(A) = \begin{bmatrix} 4 & -2 \\ -3 & 1 \end{bmatrix}$$

Step 3: Find the Inverse

$$A^{-1} = \frac{1}{\det(A)} \cdot \text{adj}(A) = \frac{1}{-2} \begin{bmatrix} 4 & -2 \\ -3 & 1 \end{bmatrix} = \begin{bmatrix} -2 & 1 \\ \tfrac{3}{2} & -\tfrac{1}{2} \end{bmatrix}$$

Example 2: Using Gauss-Jordan Elimination
Find the inverse of matrix B:

$$B = \begin{bmatrix} 2 & 1 & 1 \\ 1 & 3 & 2 \\ 1 & 0 & 0 \end{bmatrix}$$

Step 1: Form the Augmented Matrix

$$[B \mid I] = \left[\begin{array}{ccc|ccc} 2 & 1 & 1 & 1 & 0 & 0 \\ 1 & 3 & 2 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 & 0 & 1 \end{array}\right]$$

Step 2: Apply Row Operations to Reduce B to the Identity Matrix
- $R_1 \to R_1 - R_3$:

$$\left[\begin{array}{ccc|ccc} 1 & 1 & 1 & 1 & 0 & -1 \\ 1 & 3 & 2 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 & 0 & 1 \end{array}\right]$$

- $R_2 \to R_2 - R_1$:

$$\left[\begin{array}{ccc|ccc} 1 & 1 & 1 & 1 & 0 & -1 \\ 0 & 2 & 1 & -1 & 1 & 1 \\ 1 & 0 & 0 & 0 & 0 & 1 \end{array}\right]$$

- $R_2 \to \tfrac{1}{2} R_2$:

$$\left[\begin{array}{ccc|ccc} 1 & 1 & 1 & 1 & 0 & -1 \\ 0 & 1 & \tfrac{1}{2} & -\tfrac{1}{2} & \tfrac{1}{2} & \tfrac{1}{2} \\ 1 & 0 & 0 & 0 & 0 & 1 \end{array}\right]$$

- $R_1 \to R_1 - R_2$:

$$\left[\begin{array}{ccc|ccc} 1 & 0 & \tfrac{1}{2} & \tfrac{3}{2} & -\tfrac{1}{2} & -\tfrac{3}{2} \\ 0 & 1 & \tfrac{1}{2} & -\tfrac{1}{2} & \tfrac{1}{2} & \tfrac{1}{2} \\ 1 & 0 & 0 & 0 & 0 & 1 \end{array}\right]$$

- $R_3 \to R_3 - R_1$:

$$\left[\begin{array}{ccc|ccc} 1 & 0 & \tfrac{1}{2} & \tfrac{3}{2} & -\tfrac{1}{2} & -\tfrac{3}{2} \\ 0 & 1 & \tfrac{1}{2} & -\tfrac{1}{2} & \tfrac{1}{2} & \tfrac{1}{2} \\ 0 & 0 & -\tfrac{1}{2} & -\tfrac{3}{2} & \tfrac{1}{2} & \tfrac{5}{2} \end{array}\right]$$

- $R_3 \to -2R_3$:

$$\left[\begin{array}{ccc|ccc} 1 & 0 & \tfrac{1}{2} & \tfrac{3}{2} & -\tfrac{1}{2} & -\tfrac{3}{2} \\ 0 & 1 & \tfrac{1}{2} & -\tfrac{1}{2} & \tfrac{1}{2} & \tfrac{1}{2} \\ 0 & 0 & 1 & 3 & -1 & -5 \end{array}\right]$$

- $R_1 \to R_1 - \tfrac{1}{2} R_3$:

$$\left[\begin{array}{ccc|ccc} 1 & 0 & 0 & 0 & 0 & 1 \\ 0 & 1 & \tfrac{1}{2} & -\tfrac{1}{2} & \tfrac{1}{2} & \tfrac{1}{2} \\ 0 & 0 & 1 & 3 & -1 & -5 \end{array}\right]$$

- $R_2 \to R_2 - \tfrac{1}{2} R_3$:

$$\left[\begin{array}{ccc|ccc} 1 & 0 & 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & -2 & 1 & 3 \\ 0 & 0 & 1 & 3 & -1 & -5 \end{array}\right]$$

Therefore, the inverse of B is:

$$B^{-1} = \begin{bmatrix} 0 & 0 & 1 \\ -2 & 1 & 3 \\ 3 & -1 & -5 \end{bmatrix}$$
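A NumPy cross-check of this result (my addition):

```python
import numpy as np

# Cross-check the Gauss-Jordan result for B.
B = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 2.0],
              [1.0, 0.0, 0.0]])
print(np.linalg.inv(B))       # approximately [[0, 0, 1], [-2, 1, 3], [3, -1, -5]]
print(np.allclose(B @ np.linalg.inv(B), np.eye(3)))   # True
```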

Example: Finding the Inverse of a 3 × 3 Matrix Using the Adjoint Method
Find the inverse of matrix A using the adjoint method:

$$A = \begin{bmatrix} 2 & -1 & 0 \\ 1 & 2 & 1 \\ 1 & 1 & 1 \end{bmatrix}$$

Step 1: Find the Determinant of A

$$\det(A) = 2 \times \begin{vmatrix} 2 & 1 \\ 1 & 1 \end{vmatrix} - (-1) \times \begin{vmatrix} 1 & 1 \\ 1 & 1 \end{vmatrix} + 0 \times \begin{vmatrix} 1 & 2 \\ 1 & 1 \end{vmatrix}$$

$$= 2 \times (2 \times 1 - 1 \times 1) - (-1) \times (1 \times 1 - 1 \times 1)$$

$$= 2 \times (2 - 1) + 0 = 2 \times 1 = 2$$
Step 2: Find the Cofactors of Each Element of A

- Cofactor of $a_{11}$: $C_{11} = \det \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix} = (2)(1) - (1)(1) = 1$

- Cofactor of $a_{12}$: $C_{12} = -\det \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix} = -(1 \times 1 - 1 \times 1) = 0$

- Cofactor of $a_{13}$: $C_{13} = \det \begin{bmatrix} 1 & 2 \\ 1 & 1 \end{bmatrix} = (1)(1) - (2)(1) = -1$

- Cofactor of $a_{21}$: $C_{21} = -\det \begin{bmatrix} -1 & 0 \\ 1 & 1 \end{bmatrix} = -((-1)(1) - (0)(1)) = 1$

- Cofactor of $a_{22}$: $C_{22} = \det \begin{bmatrix} 2 & 0 \\ 1 & 1 \end{bmatrix} = (2)(1) - (0)(1) = 2$

- Cofactor of $a_{23}$: $C_{23} = -\det \begin{bmatrix} 2 & -1 \\ 1 & 1 \end{bmatrix} = -((2)(1) - (-1)(1)) = -3$

- Cofactor of $a_{31}$: $C_{31} = \det \begin{bmatrix} -1 & 0 \\ 2 & 1 \end{bmatrix} = (-1)(1) - (0)(2) = -1$

- Cofactor of $a_{32}$: $C_{32} = -\det \begin{bmatrix} 2 & 0 \\ 1 & 1 \end{bmatrix} = -((2)(1) - (0)(1)) = -2$

- Cofactor of $a_{33}$: $C_{33} = \det \begin{bmatrix} 2 & -1 \\ 1 & 2 \end{bmatrix} = (2)(2) - (-1)(1) = 5$
Step 3: Form the Cofactor Matrix and Find the Adjoint

$$\text{Cofactor Matrix} = \begin{bmatrix} 1 & 0 & -1 \\ 1 & 2 & -3 \\ -1 & -2 & 5 \end{bmatrix}$$

$$\text{adj}(A) = \begin{bmatrix} 1 & 1 & -1 \\ 0 & 2 & -2 \\ -1 & -3 & 5 \end{bmatrix}$$

Step 4: Find the Inverse of A

$$A^{-1} = \frac{1}{\det(A)} \cdot \text{adj}(A) = \frac{1}{2} \begin{bmatrix} 1 & 1 & -1 \\ 0 & 2 & -2 \\ -1 & -3 & 5 \end{bmatrix} = \begin{bmatrix} \tfrac{1}{2} & \tfrac{1}{2} & -\tfrac{1}{2} \\ 0 & 1 & -1 \\ -\tfrac{1}{2} & -\tfrac{3}{2} & \tfrac{5}{2} \end{bmatrix}$$
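A final NumPy cross-check (my addition) that A multiplied by the computed inverse gives the identity:

```python
import numpy as np

# Verify the adjoint-method result: A times its inverse should give I.
A = np.array([[2.0, -1.0, 0.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 1.0]])
A_inv = np.array([[0.5, 0.5, -0.5],
                  [0.0, 1.0, -1.0],
                  [-0.5, -1.5, 2.5]])
print(np.allclose(A @ A_inv, np.eye(3)))   # True
```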
