Zero matrix: a matrix that consists of all zero entries is called a zero matrix and is denoted by 0 or O. For example, the 2×2 zero matrix is [0 0; 0 0] (rows separated by semicolons).
The numbers or the variables in a matrix are called entries or elements of the matrix. If a matrix has m rows and n columns, then we say that its size is m by n, and call it an m×n matrix. An n×n matrix is called a square matrix or a matrix of order n. A 1×1 matrix is simply a real number. Matrices will be denoted by capital bold-faced letters A, B, C, etc., or by A = (a_ij) or B = (b_ij), where a_ij denotes the entry in row i and column j.
ii. Square matrix: a matrix is called a square matrix if the number of rows equals the number of columns (m = n). The order of a square matrix with n rows is n×n, or simply n.
Example: A = [1 2; 3 4] is a square matrix of order 2.
iii. Diagonal matrix: a square matrix A = (a_ij) is said to be a diagonal matrix if all its entries except those on the main diagonal are zeros. Symbolically, A is a diagonal matrix if a_ij = 0 for i ≠ j. The matrix is thus given by diag(a_11, a_22, ..., a_nn), with zeros everywhere off the main diagonal.
Page | 1
Set by Abebaye Assefa
vi. Scalar matrix: A diagonal matrix in which all diagonal entries are equal is called a scalar matrix.
vii. Identity matrix (unit matrix): a diagonal matrix in which all diagonal elements are 1 (a_ij = 1 for i = j and a_ij = 0 for i ≠ j).
Notation: I_n denotes the n×n identity matrix.
Example 3.2.2: I_3 = [1 0 0; 0 1 0; 0 0 1]
viii. Triangular matrix: A square matrix A = (a_ij) is said to be a triangular matrix if all its entries below the main diagonal are zeros or if all its entries above the main diagonal are zeros; in other words, a square matrix is triangular if a_ij = 0 for i > j, or a_ij = 0 for i < j. More specifically, in the first case the matrix is called upper triangular and in the second case the matrix is called lower triangular. The following are triangular matrices.
Example 3.2.3: Upper triangular matrix: [1 2 3; 0 4 5; 0 0 6]
Example 3.2.4: Lower triangular matrix: [1 0 0; 2 4 0; 3 5 6]
Note:
a. The transpose of a lower triangular matrix is upper triangular, and the transpose of an upper
triangular matrix is lower triangular.
b. The product of lower triangular matrices is lower triangular, and the product of upper
triangular matrices is upper triangular.
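Note (b) can be checked numerically. The sketch below is not from the source; the upper triangular matrices U1 and U2 are assumed for illustration, and the product is confirmed to be upper triangular again.

```python
# Sketch (assumed matrices): the product of two upper triangular
# matrices is again upper triangular.

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def is_upper_triangular(A):
    """True when every entry below the main diagonal is zero."""
    return all(A[i][j] == 0 for i in range(len(A)) for j in range(i))

U1 = [[1, 2, 3],
      [0, 4, 5],
      [0, 0, 6]]
U2 = [[7, 0, 1],
      [0, 8, 2],
      [0, 0, 9]]

P = matmul(U1, U2)
print(P)                       # [[7, 16, 32], [0, 32, 53], [0, 0, 54]]
print(is_upper_triangular(P))  # True
```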
3.3. Algebra of matrices
Definition 3.3.1: Equality of Matrices
Two matrices A = (a_ij) and B = (b_ij) are equal if a_ij = b_ij for each i and j. In other words, two matrices are equal if and only if they have the same size and their corresponding entries are equal.
Matrix Addition
When two matrices A and B are of the same size we can add them by adding their corresponding
entries.
Definition 3.3.2: If A = (a_ij) and B = (b_ij) are m×n matrices, then their sum is A + B = (a_ij + b_ij).
Example 3.3.1: If A = [1 2; 3 4] and B = [5 6; 7 8], then find A + B.
Solution: A + B = [1+5 2+6; 3+7 4+8] = [6 8; 10 12]
Definition 3.3.3: Scalar multiplication of a matrix.
If c is a real number, then the scalar multiple of a matrix A = (a_ij) is the matrix cA = (c·a_ij); that is, each entry of A is multiplied by c.
Example: If A = [1 2; 3 4], then 3A = [3 6; 9 12].
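The entries of the source's worked examples did not survive extraction; as a hedged illustration (the matrices A and B below are assumed), entrywise addition and scalar multiplication can be sketched as:

```python
# Sketch (assumed matrices): entrywise sum A + B and scalar multiple cA.

def add(A, B):
    """Entrywise sum; A and B must have the same size."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scalar_mul(c, A):
    """Multiply every entry of A by the real number c."""
    return [[c * a for a in row] for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

print(add(A, B))         # [[6, 8], [10, 12]]
print(scalar_mul(3, A))  # [[3, 6], [9, 12]]
```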
Let A be an m×n matrix and B an n×p matrix. The (i, j) entry of the product AB is obtained by summing the products of the elements in the ith row of A with the corresponding elements in the jth column of B. If A has rows r_1, ..., r_m, and B has columns c_1, ..., c_p, then the product AB can be given by the formula AB = (r_i · c_j), the matrix whose (i, j) entry is the dot product of row i of A with column j of B.
Definition 3.3.4: Let the number of columns in matrix A be the same as the number of rows in matrix B. Then the matrix product AB exists, and the element in row i and column j of AB is obtained by multiplying the corresponding elements of row i of A and column j of B and adding the products. In other words, if A = (a_ij) is an m×n matrix and B = (b_ij) is an n×p matrix, then AB = C = (c_ij), where c_ij = a_i1 b_1j + a_i2 b_2j + ... + a_in b_nj.
Note: The product of two matrices A and B is defined only if the number of columns in A and the number of rows in B are equal.
We note here that if the size of A is m×n and the size of B is n×p, then consequently the size of AB is m×p.
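A minimal sketch of the row-by-column rule, with assumed matrices (not the source's lost examples): a 2×3 matrix times a 3×2 matrix gives a 2×2 product, and each entry c_ij sums products along row i of A and column j of B.

```python
# Sketch (assumed matrices): matrix product with the size rule
# (m x n)(n x p) -> (m x p).

def matmul(A, B):
    assert len(A[0]) == len(B), "columns of A must equal rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2, 3],
     [4, 5, 6]]    # 2 x 3
B = [[7, 8],
     [9, 10],
     [11, 12]]     # 3 x 2

C = matmul(A, B)   # 2 x 2
print(C)           # [[58, 64], [139, 154]]
```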
Properties of Matrix Multiplication
In defining the properties of matrix multiplication below, the matrices A, B, and C are assumed to be of compatible dimensions for the operations in which they appear.
a) Matrix multiplication is, in general, not commutative. That is, AB ≠ BA in general.
Observe that in Example 1 of this section, BA is not even defined, because the first matrix in this case, B, does not have the same number of columns as the number of rows of the second matrix A.
b) From AB = 0, it does not follow that either A = 0 or B = 0. Here the 0's are null matrices of appropriate order.
Example 3.3.4: For the matrices A and B given by A = [0 1; 0 0] and B = [1 0; 0 0], we have AB = [0 0; 0 0], a null matrix, even though neither A nor B is a null matrix. Likewise, the relation AB = AC does not imply that B = C; the cancellation law does not hold in general, as it does in the real numbers.
Example 3.3.5: For the matrices A = [1 0; 0 0], B = [1 2; 3 4], and C = [1 2; 9 9], we have AB = [1 2; 0 0] = AC, but B ≠ C.
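The warnings above can be illustrated concretely; this sketch uses assumed 2×2 matrices (not the source's lost examples) to show AB ≠ BA, AB = 0 with nonzero factors, and the failure of cancellation.

```python
# Sketch (assumed 2x2 matrices): AB != BA, AB = 0 with A, B nonzero,
# and AB = AC with B != C.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

ZERO = [[0, 0], [0, 0]]

A = [[0, 1], [0, 0]]
B = [[1, 0], [0, 0]]
print(matmul(A, B) == ZERO)   # True: AB is the null matrix
print(matmul(B, A) == ZERO)   # False: BA = [[0, 1], [0, 0]], so AB != BA

M = [[1, 0], [0, 0]]
C1 = [[1, 2], [3, 4]]
C2 = [[1, 2], [9, 9]]         # C1 != C2 ...
print(matmul(M, C1) == matmul(M, C2))  # True: M C1 = M C2, no cancellation
```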
Generally, the properties of matrix multiplication are given below.
Assume all products and additions are defined, and let A, B, C be matrices and k a scalar. Then
i. A(BC) = (AB)C (associative law)
ii. A(B + C) = AB + AC and (A + B)C = AC + BC (distributive laws)
iii. k(AB) = (kA)B = A(kB)
Transpose of a Matrix
Definition 3.3.5: The transpose of a matrix A, denoted A^T, is the matrix whose columns are the rows of the given matrix A.
Symbolically, the transpose of an m×n matrix A = (a_ij) is the n×m matrix A^T = (a_ji);
i.e. the (i, j) entry of A^T is the (j, i) entry of A.
Definition 3.3.6: A square matrix A = (a_ij) is said to be:
a. Symmetric if a_ij = a_ji for all i and j, that is, if A^T = A.
b. Skew-symmetric if a_ij = -a_ji for all i and j, that is, A^T = -A.
For any square matrix A:
(i) A + A^T is symmetric ((A + A^T)^T = A^T + A)
(ii) A - A^T is skew-symmetric ((A - A^T)^T = -(A - A^T))
(iii) A can be written as a sum of symmetric and skew-symmetric matrices: A = (1/2)(A + A^T) + (1/2)(A - A^T).
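The decomposition in (iii) can be verified numerically. The matrix A below is assumed for illustration; exact Fraction arithmetic keeps the halves exact.

```python
# Sketch (assumed matrix A): A = S + K with S = (A + A^T)/2 symmetric
# and K = (A - A^T)/2 skew-symmetric.

from fractions import Fraction

def transpose(A):
    return [list(row) for row in zip(*A)]

def combine(c1, A, c2, B):
    """Entrywise c1*A + c2*B."""
    return [[c1 * a + c2 * b for a, b in zip(ra, rb)]
            for ra, rb in zip(A, B)]

A = [[Fraction(x) for x in row]
     for row in [[1, 2, 3], [4, 5, 6], [7, 8, 10]]]
S = combine(Fraction(1, 2), A, Fraction(1, 2), transpose(A))   # symmetric part
K = combine(Fraction(1, 2), A, Fraction(-1, 2), transpose(A))  # skew part

print(S == transpose(S))                                 # True
print(K == [[-x for x in row] for row in transpose(K)])  # True: K^T = -K
print(combine(1, S, 1, K) == A)                          # True: A = S + K
```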
Exercise 3.3.1:
1. Show that (a) A + A^T is symmetric and (b) A - A^T is skew-symmetric for any square matrix A.
2. Find all values of the unknown entry for which the given matrix is symmetric.
Definition 3.3.7: If A is a square matrix, then the trace of A, denoted by tr(A), is defined to be the sum of the entries on the main diagonal of A. The trace of A is undefined if A is not a square matrix.
For example, tr([1 2; 3 4]) = 1 + 4 = 5.
Note:
1. If A is a square matrix of order n, then the powers of A are given by:
i. A^0 = I_n
ii. A^k = A·A·...·A (k factors, k > 0)
iii. A^r A^s = A^(r+s)
iv. (A^r)^s = A^(rs)
2. For any n-square matrix A and for any polynomial f(x) = a_0 + a_1 x + ... + a_k x^k,
where the a_i for i = 0, 1, ..., k are scalars, we define f(A) = a_0 I + a_1 A + ... + a_k A^k. If
f(A) = 0, then A is a zero/root of the polynomial.
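The power and polynomial rules above can be sketched as follows. The matrix A and the polynomial f(x) = x^2 - 5x - 2 are assumed for illustration (f happens to be the characteristic polynomial of this A, so f(A) = 0).

```python
# Sketch (assumed matrix and polynomial): matrix powers A^0 = I,
# A^n = A...A, and evaluating f(A) = a0*I + a1*A + ... + ak*A^k.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def identity(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def matpow(A, n):
    P = identity(len(A))
    for _ in range(n):
        P = matmul(P, A)
    return P

def poly_at(coeffs, A):
    """coeffs = [a0, a1, ..., ak] for f(x) = a0 + a1 x + ... + ak x^k."""
    n = len(A)
    F = [[0] * n for _ in range(n)]
    for k, c in enumerate(coeffs):
        Pk = matpow(A, k)
        F = [[f + c * p for f, p in zip(rf, rp)] for rf, rp in zip(F, Pk)]
    return F

A = [[1, 2], [3, 4]]
print(matpow(A, 2))             # [[7, 10], [15, 22]]
print(poly_at([-2, -5, 1], A))  # f(A) = A^2 - 5A - 2I = [[0, 0], [0, 0]]
```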
Example 3.3.8: Let A = [1 2; 3 4]. Then compute:
i. A^2 and A^3
ii. f(A), where f(x) = x^2 - 5x - 2
Solution (i): A^2 = A·A = [1 2; 3 4][1 2; 3 4] = [7 10; 15 22]
A^3 = A^2·A = [7 10; 15 22][1 2; 3 4] = [37 54; 81 118]
Solution (ii): f(A) = A^2 - 5A - 2I = [7 10; 15 22] - [5 10; 15 20] - [2 0; 0 2] = [0 0; 0 0]
Example 3.4.1: Given a matrix A, we may apply elementary row operations, for instance
replacing the second row by 2 times the third row plus the second row of matrix A
(i.e. R2 → 2R3 + R2).
Definition 3.4.1: A matrix A is called row equivalent to a matrix B, written A ~ B, if B can be obtained from A by a finite sequence of elementary row operations. For example, each time we apply a row operation to a matrix, the resulting matrix is row equivalent to the original one.
In order to find the rank, or to compute the inverse of a matrix, or to solve a linear system, we
usually write the matrix either in its row echelon form or reduced row echelon form.
Definition 3.4.2 A pivot position in a matrix A is a location in A that corresponds to a leading 1 in the
reduced row echelon form of A. A pivot column is a column of A that contains a pivot position. A
pivot element is a nonzero number in a pivot position that is used as needed to create zeros via row
operations.
Definition 3.4.3: An m×n matrix is said to be in echelon form (or row echelon form) if the following
conditions are satisfied:
1. All nonzero rows are above any rows of all zeros.
2. Each leading entry of a row is in a column to the right of the leading entry of the row above it.
(A leading entry refers to the left most nonzero entry in a nonzero row)
3. All entries in a column below a leading entry are zeros.
If a matrix in row echelon form satisfies the following additional conditions, then it is in reduced
echelon form (or reduced row echelon form)
4. The leading entry in each nonzero row is 1.
5. Each leading 1 is the only nonzero entry in its column.
A matrix in row echelon form is said to be in reduced row echelon form when every column that has a leading 1 has zeros in every position above and below that leading entry.
Example 3.4.3:
The matrix [1 2 3; 0 1 5; 0 0 1] is in row echelon form but not in reduced row echelon form, since it has the first three properties but the entries above the leading 1 in the third column are not all 0.
A matrix in which the leading 1 in the second row is not to the left of the leading 1 in the third row is not in row echelon form (and hence also not in reduced row echelon form). Any non-zero matrix can be reduced to echelon form by applying some elementary row operations on the given matrix.
Exercise: Reduce each of the given matrices to echelon form.
Solution: Applying elementary row operations step by step (interchanging two rows, multiplying a row by a nonzero constant such as R_i → (1/k)R_i, and adding a multiple of one row to another), each matrix is reduced first to row echelon form and then, by clearing the entries above each leading 1, to reduced row echelon form.
If AB = I and BA = I, then matrix B is the inverse of matrix A.
NB:
a) The inverse of a matrix is only defined for square matrices.
b) A matrix may not be invertible even if it is a square matrix.
Definition 3.5.2: An n×n matrix A is said to be singular if it does not have a multiplicative inverse.
Notation: Let A be an invertible matrix. We denote its inverse by A^(-1).
Gauss-Jordan Elimination for finding the inverse of a matrix
Let A be an n×n matrix.
1. Adjoin the identity matrix I_n to A to form the augmented matrix [A : I_n].
2. Compute the reduced echelon form of [A : I_n]. If the reduced echelon form is of the
type [I_n : B], then B is the inverse of A. If the reduced echelon form is not of the type
[I_n : B], in that the first submatrix is not I_n, then A has no inverse.
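The two steps above can be sketched in code; the input matrices below are assumed, and Fraction arithmetic keeps the row reduction exact.

```python
# Sketch (assumed inputs): adjoin I to A, row-reduce [A : I] to [I : B],
# and read off B = A^(-1); return None when A is singular.

from fractions import Fraction

def inverse(A):
    n = len(A)
    # Step 1: form the augmented matrix [A : I].
    M = [[Fraction(A[i][j]) for j in range(n)] +
         [Fraction(1 if i == j else 0) for j in range(n)] for i in range(n)]
    # Step 2: reduce to reduced row echelon form.
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return None          # left block cannot become I_n: singular
        M[col], M[pivot] = M[pivot], M[col]
        M[col] = [x / M[col][col] for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                factor = M[r][col]
                M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]

A = [[2, 1], [5, 3]]
print(inverse(A))                 # equals [[3, -1], [-5, 2]] since det(A) = 1
print(inverse([[1, 2], [2, 4]]))  # None: a singular matrix has no inverse
```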
Example 3.5.1: Determine the inverse of the matrix A.
Solution: Form the augmented matrix [A : I] and apply elementary row operations until the left block becomes the identity matrix; the right block is then the inverse. Thus [A : I] ~ ... ~ [I : A^(-1)], and A^(-1) is read off from the right block.
Exercise: Determine the inverse of each of the following matrices, if it exists.
Solution: Reducing the given matrix to echelon form by elementary row operations and counting the nonzero rows:
Hence rank(A) = 2.
Exercise: Find the rank of each of the following matrices.
Example: Compute det(A) by cofactor expansion along the first row:
det(A) = a_11 C_11 + a_12 C_12 + a_13 C_13, where C_ij = (-1)^(i+j) M_ij and M_ij is the 2×2 determinant obtained by deleting row i and column j, with |a b; c d| = ad - bc.
For an n×n matrix A and a scalar k, det(kA) = k^n det(A); in particular, for a 3×3 matrix, det(2A) = 8 det(A).
Exercise: Evaluate the determinant of each of the given matrices a, b, and c.
Solution: In each case, expand along a convenient row or column, reducing the determinant to a combination of 2×2 determinants |a b; c d| = ad - bc.
Exercise:
1. Matrix A has m rows and m-8 columns. Matrix B has n rows and 26-n columns. Find the values of
m and n such that the products AB and BA both exist. Ans. m = 17 and n = 9 (AB exists when m - 8 = n, and BA exists when 26 - n = m).
2. Let A be a 4×6 matrix and B be a 5×2n matrix. If the product exists, then find the value of n.
Ans. n = 2 (only BA can exist, requiring 2n = 4).
3. Let A, B be 2×2 non-singular matrices. Then find det(2A). Ans. 2500
Definition 3.7.2.1 (Definition of rank using determinants). Let A be an m×n matrix. Then
rank(A) = r, where r is the largest number such that some r×r submatrix of A has a nonzero
determinant.
Example 3.7.2.4: Compute the rank of the matrix A = [1 2 0 0; 2 5 0 0] using determinants.
Solution: Observe that the largest possible size of any square submatrix of A is 2 × 2. We have (say)
the submatrix [1 2; 2 5] (which is obtained by deleting the last two columns of A) with
determinant 1·5 - 2·2 = 1 ≠ 0.
Therefore, rank(A) = 2.
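Definition 3.7.2.1 can be turned into a (brute-force) computation; the example matrix below is assumed, with proportional rows so that every 2×2 subdeterminant vanishes.

```python
# Sketch (assumed matrix): rank(A) is the largest r such that some
# r x r submatrix of A has a nonzero determinant.

from itertools import combinations

def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def rank_by_determinants(A):
    m, n = len(A), len(A[0])
    for r in range(min(m, n), 0, -1):
        for rows in combinations(range(m), r):
            for cols in combinations(range(n), r):
                sub = [[A[i][j] for j in cols] for i in rows]
                if det(sub) != 0:
                    return r
    return 0

A = [[1, 2, 3, 4],
     [2, 4, 6, 8]]    # second row is twice the first
print(rank_by_determinants(A))   # 1: every 2x2 subdeterminant is zero
```

This is exponential in the matrix size and only suitable for small examples; row reduction gives the same rank far more efficiently.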
Definition 3.7.2.2: Let A be a square matrix. Then A is said to be
i. a non-singular matrix if det(A) ≠ 0
ii. a singular matrix if det(A) = 0
Exercise: Find the value of x for which the given matrix is singular (i.e. for which det(A) = 0).
Solution: Expand det(A) in terms of minors along a row, set the resulting expression equal to 0, and solve for x. Recall that the minor M_ij of the entry a_ij is the determinant of the submatrix obtained by deleting row i and column j.
Example 3.8.4: Compute the matrix of cofactors for each of the given matrices:
a) a 2×2 matrix A    b) a 3×3 matrix B
b) The minors of B are the nine 2×2 determinants M_ij obtained by deleting row i and column j of B. The cofactors are then C_ij = (-1)^(i+j) M_ij, and the adjoint adj(B) is the transpose of the cofactor matrix (C_ij).
Example 3.8.5: Find the inverse of A, if it exists, for each of the given matrices.
Solution:
a) Compute det(A); since det(A) ≠ 0, the inverse exists. Compute the nine cofactors C_ij = (-1)^(i+j) M_ij, form the cofactor matrix (C_ij), take its transpose to obtain adj(A), and then A^(-1) = (1/det(A)) adj(A).
b) Proceed in the same way: compute det(B), the cofactor matrix, adj(B), and B^(-1) = (1/det(B)) adj(B).
Exercise:
4. Find the inverse of each of the given matrices, if it exists.
5. Find the values of the parameter for which the given matrix is invertible, and compute the inverse.
Matrix A is called the coefficient matrix of the system, and the matrix obtained by adjoining the column of constants to A is called the augmented matrix of the system, written (A | b).
1) It admits a unique solution: there is one and only one vector x that
satisfies all the m equations simultaneously (the system is consistent).
2) It has infinitely many solutions: there are infinitely many different vectors x that satisfy all
the m equations simultaneously (the system is said to be consistent).
3) It has no solution: there is no vector x that satisfies all equations simultaneously, i.e. the solution
set is empty (the system is said to be inconsistent).
Remark: For a homogeneous system of equations Ax = 0, x = 0 is called the trivial solution. Any other
solution of a homogeneous system is called a nontrivial solution.
If A and b are the matrix of coefficients and the column vector of constants, respectively
(i.e. Ax = b), then the following statements are true.
i. If rank(A) = rank(A | b) = number of unknowns, then the linear system has only one
solution.
ii. If rank(A) = rank(A | b) < number of unknowns, then the linear system has infinitely many
solutions.
iii. If rank(A) ≠ rank(A | b), then the linear system has no solution.
Remark:
a) From the theorem above, we observe that the linear system has no solution if an echelon form of
the augmented matrix has a row of the form [0 0 ... 0 : b] with b nonzero.
b) A linear system has a unique solution when there are no free variables, and it has infinitely many
solutions when there is at least one free variable.
The two methods for finding the complete solution set for a given linear system are Gaussian
elimination and Cramer's rule.
3.9.1. Cramer's rule
Consider a system Ax = b of three linear equations in three variables.
Let A = (a_ij) be the 3×3 matrix whose elements are arranged in the same order as they occur as coefficients in the
equations, and let A_1, A_2, A_3 be obtained from A by replacing the 1st, 2nd, and 3rd columns, respectively, by the column vector b (for instance, A_2 is obtained by replacing the 2nd column of A by b).
Now, write D = |A|, D_1 = |A_1|, D_2 = |A_2|, D_3 = |A_3|.
i. If |A| ≠ 0, then the system has the unique solution x_1 = D_1/D, x_2 = D_2/D, x_3 = D_3/D.
ii. If |A| = 0, and at least one of the determinants D_1, D_2, D_3 is nonzero, then the system is
inconsistent (i.e. it has no solution).
iii. If |A| = 0 and each determinant D_1 = D_2 = D_3 = 0, then the system has infinitely many solutions.
Example 3.9.1.1 and Example 3.9.1.2: Use Cramer's rule to solve the given systems of linear equations.
Solution: In each case, form the coefficient matrix A and the matrices A_i obtained by replacing the ith column of A with the right-hand-side vector b, and compute D = |A| and the determinants D_i. Since D ≠ 0, each unknown is given by x_i = D_i/D.
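Cramer's rule for a 3×3 system can be sketched as follows; the example system below is assumed (its exact solution is x_1 = x_2 = x_3 = 1).

```python
# Sketch (assumed system): x_i = det(A_i) / det(A), where A_i replaces
# column i of A with the right-hand-side vector b.

from fractions import Fraction

def det3(M):
    """3x3 determinant by cofactor expansion along the first row."""
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def cramer(A, b):
    d = det3(A)
    if d == 0:
        raise ValueError("det(A) = 0: Cramer's rule does not apply")
    xs = []
    for i in range(3):
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][i] = b[r]          # replace column i by b
        xs.append(Fraction(det3(Ai), d))
    return xs

A = [[2, -1, 0],
     [-1, 2, -1],
     [0, -1, 2]]
b = [1, 0, 1]
print(cramer(A, b))   # [Fraction(1, 1), Fraction(1, 1), Fraction(1, 1)]
```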
3.9.2. Gaussian elimination method
A finite set of linear equations in the variables x_1, x_2, ..., x_n is called a linear system of equations.
An arbitrary system of m linear equations in n unknowns can be written as:
a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
...
a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = b_m
where x_1, ..., x_n are the unknowns and the subscripted a's and b's denote constants. In the above
linear system of equations, the matrix A = (a_ij) is the coefficient matrix.
Definition 3.9.2.1: The process of using row operations to transform the augmented matrix of a linear system
into one that is in row echelon form is called Gaussian elimination. The main
advantage of this method is that it is applicable to any system of equations.
Or
Gaussian elimination, also known as row reduction, is used for solving a system of linear equations.
The Gaussian elimination method involves the following steps:
1) Write down the augmented matrix (A | B) of the system of linear equations.
2) Find an echelon form of the augmented matrix using elementary row operations.
3) Form the new equations corresponding to the echelon form and solve for the variables using back substitution.
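The three steps above can be sketched for an assumed 3×3 system (not the source's example), using exact Fraction arithmetic; the sketch assumes a unique solution exists.

```python
# Sketch (assumed system): augment, forward-eliminate to echelon form,
# then back-substitute from the last equation upward.

from fractions import Fraction

def gaussian_solve(A, b):
    n = len(A)
    # Step 1: augmented matrix (A | b).
    M = [[Fraction(x) for x in row] + [Fraction(c)] for row, c in zip(A, b)]
    # Step 2: forward elimination to an echelon form.
    for col in range(n):
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    # Step 3: back substitution.
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# Assumed system:  x + y + z = 6,  2x - y + z = 3,  x + 2y - z = 2
A = [[1, 1, 1], [2, -1, 1], [1, 2, -1]]
b = [6, 3, 2]
print(gaussian_solve(A, b))   # solution (1, 2, 3)
```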
Example 3.9.2.1: Solve the system of linear equations using Gaussian elimination method.
Solution: Write the augmented matrix of the system and apply elementary row operations to bring it to echelon form (U | C), where U is upper triangular. Then apply back substitution: solve the last equation for the last unknown, substitute into the equation above it, and continue upward to obtain the full solution.
Exercise: Solve each of the given systems of linear equations.
The process of solving a system of linear equations by first changing the augmented matrix of the
system into RREF and then solving for the variables is known as the Gauss-Jordan elimination method.
Exercise 3.9.3.1: Solve the system of linear equations using the Gauss-Jordan elimination method.
3.9.4 Inverse matrix method
Theorem 3.9.3.1: If A is an invertible n×n matrix, then for each n×1 matrix b, the system Ax = b has
exactly one solution, namely x = A^(-1)b.
Proof: exercise
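The theorem can be sketched for an assumed 2×2 system, computing the inverse via the 2×2 adjugate formula A^(-1) = (1/det A)[d -b; -c a] and then forming x = A^(-1)b.

```python
# Sketch (assumed system): when A is invertible, the unique solution
# of Ax = b is x = A^(-1) b.

from fractions import Fraction

def inverse2(A):
    a, b_ = A[0]
    c, d = A[1]
    det = a * d - b_ * c
    if det == 0:
        raise ValueError("A is singular")
    # 2x2 adjugate formula.
    return [[Fraction(d, det), Fraction(-b_, det)],
            [Fraction(-c, det), Fraction(a, det)]]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[3, 1], [5, 2]]   # det = 1
b = [9, 16]
x = matvec(inverse2(A), b)
print(x)               # the unique solution (2, 3): 3*2+3 = 9, 5*2+2*3 = 16
```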
Example 3.9.3.2: Find the solution of the given system of linear equations using the inverse matrix method.
Solution: Write the system as Ax = b. Compute |A| and verify that it is nonzero, compute the cofactors of A, form adj(A) as the transpose of the cofactor matrix, and obtain A^(-1) = (1/|A|) adj(A). The solution is then x = A^(-1)b.
WORK SHEET
1. (a), (b): Perform the indicated operations on the given matrices.
2. (a) Find x and y from the given matrix equation.
4. a. Given the matrix A, find B such that AB equals the given matrix.
b. Find x, y, z and w if 3 times the given matrix equals the sum of the other two given matrices.
Evaluate each of the given expressions i.-iv.
7. Let A be a scalar matrix of size 3×3 with det(2A) = 216. Find the matrix A.
8. Evaluate each of the given determinants i., ii., iii.
9. Let A be the given matrix. Then find
i. The determinant of A
ii. All minors of A
iii. The cofactors of A
iv. The adjoint of A
v. The inverse of A
10. Let A be the given matrix. Find the set of values of t for which the homogeneous system of linear
equations Ax = 0 has
a) i) no solution
ii) exactly one solution
iii) infinitely many solutions
b) In case the system has a solution, determine the solution.
12. Solve the following systems of equations using either Gauss's method or Cramer's rule.
A. { B. { C. {
D. { E. {
i. { ii. {
14. Using Cramer's rule, find the values of the parameter for which the following systems have a unique solution, infinitely
many solutions, or no solution.
i. { ii. {
a. ( ) b. ( )