Session 1 DSP
College of Education
School of Continuing and Distance Education
2017/2018 – 2018/2019 ACADEMIC YEAR
Session Overview
Session Outline
Determinant Definition
Matrix Algebra
Systems of Linear Algebraic Equations
Direct Elimination Methods
LU Factorization
Learning Objectives
Topic One
PROPERTIES OF MATRICES
AND DETERMINANTS
Matrix Definitions
A matrix is a rectangular array of elements arranged in orderly rows and columns.
Elements of a matrix are generally identified by a double-subscripted lowercase letter such as a_ij, where i identifies the row and j identifies the column of the matrix.
The size of a matrix is defined by the number of rows times the number of columns (n x m).
Matrices are generally represented by either a boldface capital letter, say A, or the full array of elements.
The left-to-right downward-sloping line of elements from a_11 to a_nn is called the major diagonal of the matrix.
A diagonal matrix D is a square matrix with all elements equal to zero except the elements on the major diagonal.
The identity matrix I is a diagonal matrix with unity
diagonal elements.
A lower triangular matrix L has all zeros above the
major diagonal.
A banded matrix B has all zeros except along particular diagonals.
A symmetric square matrix has identical corresponding elements on either side of the major diagonal. That is, a_ij = a_ji, in which case A = A^T.
A sparse matrix is one in which most of the elements are zeros.
A matrix is diagonally dominant if the absolute value of each element on the major diagonal is equal to or larger than the sum of the absolute values of all the other elements in that row, with the diagonal element being strictly larger than the corresponding sum of the other elements for at least one row.
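The diagonal-dominance test above can be sketched in plain Python (the 3 x 3 example matrix below is made up for illustration):

```python
def is_diagonally_dominant(A):
    """|a_ii| >= sum of |a_ij| (j != i) for every row,
    with strict inequality in at least one row."""
    strict = False
    for i, row in enumerate(A):
        off = sum(abs(v) for j, v in enumerate(row) if j != i)
        if abs(row[i]) < off:
            return False          # a row violates dominance
        if abs(row[i]) > off:
            strict = True         # at least one strictly dominant row
    return strict

A = [[4, 1, 2],
     [1, 5, 3],
     [2, 1, 6]]
print(is_diagonally_dominant(A))  # True: each |a_ii| exceeds its row's off-diagonal sum
```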
Matrix Algebra
Matrix algebra consists of matrix addition, matrix subtraction, and matrix multiplication. Matrix division is not defined; instead, the matrix inverse is used.
Matrix addition and subtraction consist of adding or subtracting the corresponding elements of two matrices of equal size.
Two matrices can be multiplied, C = AB, only when the number of columns of A equals the number of rows of B. Matrices that satisfy this condition are called conformable in the order AB.
If A is n x m and B is m x p, the size of the product C is n x p. Matrices that are not conformable cannot be multiplied.
Multiplication of the matrix A by a scalar k consists of multiplying each element of A by k.
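A minimal plain-Python sketch of matrix addition and multiplication under the conformability rule (the example matrices are made up):

```python
def mat_add(A, B):
    # element-by-element addition of two equal-size matrices
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_mul(A, B):
    # A (n x m) times B (m x p) gives C (n x p)
    n, m, p = len(A), len(B), len(B[0])
    assert all(len(row) == m for row in A), "not conformable in the order AB"
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_add(A, B))  # [[6, 8], [10, 12]]
print(mat_mul(A, B))  # [[19, 22], [43, 50]]
```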
Matrices that are suitably conformable are associative on multiplication: (AB)C = A(BC).
Square matrices are conformable in either order. Thus, if A and B are n x n matrices, AB = C and BA = D, where C and D are also n x n matrices.
However, square matrices in general are NOT commutative on multiplication: AB is not equal to BA.
Matrix multiplication is distributive over addition, A(B + C) = AB + AC, when B and C are the same size and A is conformable to B and C.
Consider two square matrices A and B. If AB = I, then B is the inverse of A, which is denoted by A^-1.
A matrix and its inverse commute on multiplication: AA^-1 = A^-1A = I, or equivalently, AB = BA = I when B = A^-1.
Systems of Linear Algebraic
Equations
There are three row operations that are useful when solving systems of linear algebraic equations. They are:
1. Any row may be multiplied by a constant (a process called scaling).
2. The order of the rows may be interchanged (a process called pivoting).
3. Any row can be replaced by a weighted linear combination of that row with any other row (a process called elimination).
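The three row operations can be sketched as small helpers acting on an augmented matrix [A | b], stored as a list of rows (the 2 x 2 system below is a made-up example):

```python
def scale(rows, i, c):            # scaling: multiply row i by constant c
    rows[i] = [c * v for v in rows[i]]

def swap(rows, i, j):             # pivoting: interchange rows i and j
    rows[i], rows[j] = rows[j], rows[i]

def eliminate(rows, i, j, m):     # elimination: row i <- row i - m * row j
    rows[i] = [a - m * b for a, b in zip(rows[i], rows[j])]

rows = [[2.0, 1.0, 5.0],   # 2x + y = 5
        [4.0, 3.0, 11.0]]  # 4x + 3y = 11
eliminate(rows, 1, 0, rows[1][0] / rows[0][0])  # zero out the 4
print(rows[1])  # [0.0, 1.0, 1.0] -> y = 1
```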
Determinants
The term determinant of a square matrix A, denoted by det(A) or |A|, refers both to the collection of elements of the square matrix enclosed in vertical lines and to the scalar value represented by that array.
The scalar determinant of a 3 x 3 matrix is composed of the sum of six triple products, which can be obtained from the augmented determinant (the determinant with its first two columns repeated to its right). The three products along the downward-sloping diagonals are added, and the three products along the upward-sloping diagonals are subtracted.
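Assuming the standard six-triple-product rule for a 3 x 3 determinant, a small sketch in plain Python (the example matrix is made up):

```python
def det3(A):
    # unpack the nine elements of the 3 x 3 matrix
    (a, b, c), (d, e, f), (g, h, i) = A
    # three downward-sloping products minus three upward-sloping products
    return (a*e*i + b*f*g + c*d*h) - (g*e*c + h*f*a + i*d*b)

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
print(det3(A))  # -3
```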
One formal procedure for evaluating determinants is called expansion by minors, or the method of cofactors.
In this procedure there are n! products to be summed, where each product has n elements.
Thus, the expansion of a 10 x 10 determinant requires the summation of 10! products, where each product involves 9 multiplications.
Consequently, the evaluation of determinants by the method of cofactors is impractical except for very small determinants, such as 3 x 3.
The minor M_ij is the determinant of the (n-1) x (n-1) submatrix of the n x n matrix A obtained by deleting the ith row and the jth column.
The cofactor A_ij associated with the minor M_ij is defined as A_ij = (-1)^(i+j) M_ij.
The determinant can be evaluated by expanding along any fixed row i: det(A) = sum over j of a_ij A_ij. Alternatively, expanding down any fixed column j yields det(A) = sum over i of a_ij A_ij.
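A sketch of expansion by minors along the first row, in plain Python (the example matrix is made up); the recursion makes the n! growth in work easy to see:

```python
def minor(A, i, j):
    # submatrix of A with row i and column j deleted
    return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

def det(A):
    if len(A) == 1:
        return A[0][0]
    # expand along the first row: sum of a_0j times its cofactor
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j))
               for j in range(len(A)))

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
print(det(A))  # -3
```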
Elimination Methods-Simple
The method solves a system of n linear algebraic equations by solving one equation, say the first equation, for one of the unknowns, say x_1, in terms of the remaining unknowns x_2 to x_n, then substituting the expression into the remaining equations to determine n-1 equations involving x_2 to x_n.
This elimination procedure is performed n-1 times until the last step yields an equation involving only x_n.
This process is called elimination.
The value of x_n can be calculated from the final equation in the elimination procedure. Then x_(n-1) can be calculated from modified equation n-1, which contains only x_(n-1) and x_n.
This procedure is performed n-1 times to calculate x_(n-1) back to x_1. This process is called back substitution.
Elimination involves normalizing the equation containing the pivot element (the element immediately above the element to be eliminated) by the pivot element, multiplying the normalized equation by the element to be eliminated, and subtracting the result from the equation containing the element to be eliminated.
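The elimination and back-substitution steps above can be sketched as follows (simple elimination without pivoting; the 2 x 2 system is a made-up example, and zero pivots are assumed not to arise):

```python
def gauss_solve(A, b):
    n = len(A)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    # forward elimination: zero out elements below each pivot
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]        # elimination multiplier
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # back substitution: solve for x_n, then x_(n-1), ..., x_1
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[2.0, 1.0], [4.0, 3.0]]
b = [5.0, 11.0]
print(gauss_solve(A, b))  # [2.0, 1.0] -> x = 2, y = 1
```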
Elimination Methods-Pivoting
The element on the major diagonal is called the pivot
element.
The elimination procedure described so far fails
immediately if the first pivot element is zero.
The procedure also fails if any subsequent pivot element
is zero.
Even though there may be no zeros on the major
diagonal in the original matrix, the elimination process
may create zeros on the major diagonal.
The zeros on the major diagonal can be avoided by rearranging the equations, interchanging the rows or columns before each elimination step.
This process is called pivoting.
Interchanging both rows and columns is called full pivoting. Interchanging only rows is called partial pivoting.
Pivoting also reduces round-off errors: since the pivot element is a divisor during the elimination process, dividing by the largest available pivot keeps the elimination multipliers small and introduces smaller round-off errors.
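A sketch of partial pivoting added to the elimination loop; the made-up example system has a zero first pivot, so simple elimination would fail on it immediately:

```python
def gauss_solve_pivot(A, b):
    n = len(A)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n - 1):
        # partial pivoting: pick the row with the largest pivot magnitude
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[0.0, 1.0], [2.0, 1.0]]   # zero first pivot: simple elimination fails here
b = [1.0, 5.0]
print(gauss_solve_pivot(A, b))  # [2.0, 1.0]
```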
Elimination Methods-Scaling
Simple elimination also incurs significant round-off errors when the magnitudes of the pivot elements are smaller than the magnitudes of the other elements in the equations containing the pivot elements.
In such cases, scaling is employed to select the pivot elements. Scaling is employed only to select the pivot elements.
After pivoting, elimination is applied to the original (unscaled) equations.
The elimination procedure described so far is commonly called Gauss elimination.
Gauss-Jordan Elimination
Gauss-Jordan elimination is a variation of Gauss
elimination in which the elements above the major
diagonal are eliminated as well as the elements below
the major diagonal.
The A matrix is transformed to a diagonal matrix.
The rows are usually scaled to yield unity diagonal
elements which transforms the A matrix to the identity
matrix I.
The transformed b vector is then the solution vector x.
The inverse of a square matrix A is the matrix A^-1 such that AA^-1 = A^-1A = I.
Gauss-Jordan elimination can be used to evaluate the inverse of matrix A by augmenting A with the identity matrix I and applying the Gauss-Jordan algorithm.
The transformed A matrix is the identity matrix, and the transformed identity matrix is the matrix inverse. Gauss-Jordan elimination yields [A | I] -> [I | A^-1].
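The augmentation procedure [A | I] -> [I | A^-1] can be sketched as follows (partial pivoting is added for safety; the 2 x 2 example matrix is made up):

```python
def gj_inverse(A):
    n = len(A)
    # augment A with the identity matrix: [A | I]
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for k in range(n):
        # partial pivoting, then scale the pivot row to a unity pivot
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        piv = M[k][k]
        M[k] = [v / piv for v in M[k]]
        # eliminate the pivot column both above and below the pivot
        for i in range(n):
            if i != k:
                m = M[i][k]
                M[i] = [a - m * b for a, b in zip(M[i], M[k])]
    # the right half is now A^-1
    return [row[n:] for row in M]

A = [[4.0, 7.0], [2.0, 6.0]]
print(gj_inverse(A))  # approx [[0.6, -0.7], [-0.2, 0.4]]
```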
The Matrix Inverse Method
Systems of linear algebraic equations can be solved using the matrix inverse A^-1.
Consider the general system of linear algebraic equations Ax = b.
Multiplying this system by A^-1 yields x = A^-1 b.
LU Factorization
Matrices, like scalars, can be factored into the product of two other matrices in an infinite number of ways.
Thus A = BC; when B and C are lower and upper triangular matrices, respectively, this becomes A = LU.
Specifying the diagonal elements of either L or U makes the factoring unique.
The procedure based on unity elements on the major diagonal of L is called the Doolittle method.
The procedure based on unity elements on the major diagonal of U is called the Crout method.
Matrix factoring can be used to reduce the work involved in Gauss elimination when multiple b vectors are to be considered.
In the Doolittle LU method, this is accomplished by defining the elimination multipliers determined in the elimination step of Gauss elimination as the elements of the L matrix.
The U matrix is defined as the upper triangular matrix determined by the elimination step of Gauss elimination.
Consider the linear system Ax = b. Let A be factored into the product LU.
The linear system becomes LUx = b. Multiplying by L^-1 gives L^-1 LUx = Ux = L^-1 b.
Define the vector y = Ux. Multiplying by L gives Ly = b.
This results in a two-step solve: first solve Ly = b for y by forward substitution, then solve Ux = y for x by back substitution.
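A sketch of the Doolittle factorization and the two-step solve; reusing L and U for a second b vector is where the savings over repeated Gauss elimination come from (example values are made up):

```python
def lu_doolittle(A):
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]     # elimination multiplier becomes L[i][k]
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    y = [0.0] * n
    for i in range(n):                # forward substitution: Ly = b
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):    # back substitution: Ux = y
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (y[i] - s) / U[i][i]
    return x

L, U = lu_doolittle([[2.0, 1.0], [4.0, 3.0]])
print(lu_solve(L, U, [5.0, 11.0]))   # [2.0, 1.0]
print(lu_solve(L, U, [3.0, 7.0]))    # reuse L and U for a second b vector
```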
Session 1 - Assignment
Consider the following system of linear algebraic equations:
The End