Session 1 DSP

The document outlines a session on Digital Signal Processing focusing on numerical analysis tools, particularly matrices and their properties, matrix algebra, and methods for solving systems of linear algebraic equations. Key topics include determinants, direct elimination methods, LU factorization, and various techniques such as Cramer's Rule and Gauss elimination. Learning objectives emphasize understanding matrix properties, performing matrix operations, and solving equations using direct methods and LU factorization.

Uploaded by

williamsmareh
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PPTX, PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
8 views43 pages

Session 1 DSP

The document outlines a session on Digital Signal Processing focusing on numerical analysis tools, particularly matrices and their properties, matrix algebra, and methods for solving systems of linear algebraic equations. Key topics include determinants, direct elimination methods, LU factorization, and various techniques such as Cramer's Rule and Gauss elimination. Learning objectives emphasize understanding matrix properties, performing matrix operations, and solving equations using direct methods and LU factorization.

Uploaded by

williamsmareh
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PPTX, PDF, TXT or read online on Scribd
You are on page 1/ 43

DCIT 416

DIGITAL SIGNAL PROCESSING

College of Education
School of Continuing and Distance Education
2017/2018 – 2018/2019 ACADEMIC YEAR
Session Overview

This session explores the basic tools for numerical analysis. We look at how matrices are added and multiplied, as well as how to find their determinants. We then look at how to use direct elimination methods and LU factorization to solve systems of linear algebraic equations.
Session Outline

 Determinant Definition
 Matrix Algebra
 Systems of Linear Algebraic Equations
 Direct Elimination Methods
 LU Factorization

Learning Objectives

After completing this session, you will be able to:


 Understand the properties of matrices and
determinants.
 Perform matrix algebra.
 Solve systems of linear algebraic equations with direct
methods.
 Solve systems of linear algebraic equations with LU
factorization.
Topic One
PROPERTIES OF MATRICES
AND DETERMINANTS
Matrix Definitions
 A matrix is a rectangular array of elements which are
arranged in orderly rows and columns.
 Elements of a matrix are generally identified by a double-
subscripted lowercase letter such as aij, where i identifies
the row and j identifies the column of the matrix.
 The size of a matrix is defined by the number of rows
times the number of columns; a matrix with n rows and
m columns is an n × m matrix.
 Matrices are generally represented by either a boldface
capital letter, say A, or the full array of elements as
shown on the next slide.
Matrix Definitions

 Vectors are a special type of matrix which has only one
column or one row.
 Vectors are represented by either a boldface lowercase
letter, say x, or the full column or row of elements.
 A column vector is an n × 1 matrix, as shown on the next
slide.
Matrix Definitions

 A row vector is a 1 × n matrix.
 Unit vectors, denoted i, are special vectors which have a
magnitude of unity.
 There are several special matrices of interest. A square
matrix S is a matrix which has the same number of rows
and columns.
Matrix Definitions
 The left-to-right downward-sloping line of elements
from a11 to ann is called the major diagonal of the matrix.
 A diagonal matrix D is a square matrix with all elements
equal to zero except the elements on the major
diagonal.
Matrix Definitions
 The identity matrix I is a diagonal matrix with unity
diagonal elements.
 A triangular matrix is a square matrix in which all of the
elements on one side of the major diagonal are zero.
An upper triangular matrix U has all zeros below the
major diagonal.
Matrix Definitions
 A lower triangular matrix L has all zeros above the
major diagonal.
 A tridiagonal matrix T is a square matrix in which all of
the elements are zero except those on the major
diagonal and the two diagonals adjacent to it.
Matrix Definitions
 A banded matrix B has all zeros except along particular
diagonals.
 The transpose of an n × m matrix A is the m × n matrix
A^T which has elements (A^T)ij = aji. The transpose of a
column vector is a row vector, and vice versa.
Matrix Definitions
 Symmetric square matrices have identical corresponding
elements on either side of the major diagonal. That is,
aij = aji; in that case A = A^T.
 A sparse matrix is one in which most of the elements are
zero.
 A matrix is diagonally dominant if the absolute value of
each element on the major diagonal is equal to, or larger
than, the sum of the absolute values of all the other
elements in that row, with the diagonal element strictly
larger than that sum for at least one row.
Matrix Algebra
 Matrix algebra consists of matrix addition, matrix
subtraction, and matrix multiplication. Matrix division
is not defined; the matrix inverse is used instead.
 Matrix addition and subtraction consist of adding or
subtracting the corresponding elements of two matrices
of equal size: C = A + B means cij = aij + bij.
 Matrices of the same size are associative on addition:
(A + B) + C = A + (B + C).
 Matrices of the same size are commutative on addition:
A + B = B + A.
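The elementwise rules above can be sketched in a few lines of Python. This is an illustrative sketch, not from the slides; the matrices A and B are made-up examples:

```python
# Elementwise addition of two matrices of equal size: c_ij = a_ij + b_ij.
def mat_add(A, B):
    assert len(A) == len(B) and len(A[0]) == len(B[0]), "matrices must be the same size"
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_add(A, B))  # [[6, 8], [10, 12]]
print(mat_add(B, A))  # same result: matrix addition is commutative
```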
Matrix Algebra
 Matrix multiplication consists of row-element to
column-element multiplication and summation of the
resulting products: for C = AB, cij = Σk aik bkj.
 Multiplication of two matrices, say A and B, is only
defined when the number of columns of matrix A is the
same as the number of rows of matrix B.
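The row-by-column rule can be sketched directly from the summation formula. A minimal illustration (the example matrices are made up):

```python
# Matrix product C = AB with c_ij = sum_k a_ik * b_kj.
# Only defined when A has as many columns as B has rows (conformable in the order AB).
def mat_mul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    assert len(A[0]) == m, "A and B are not conformable in the order AB"
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)] for i in range(n)]

A = [[1, 2, 3], [4, 5, 6]]       # 2 x 3
B = [[7, 8], [9, 10], [11, 12]]  # 3 x 2
print(mat_mul(A, B))  # 2 x 2 result: [[58, 64], [139, 154]]
```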
Matrix Algebra
 Matrices that satisfy the condition on the previous slide
are called conformable in the order AB.
 If A is n × m and B is m × p, the size of the product
C = AB is n × p. Matrices that are not conformable
cannot be multiplied.
 Multiplication of the matrix A by a scalar α consists of
multiplying each element of A by α.
 Matrix algebra is much better suited to computers than
to humans.
Matrix Algebra
 Matrices that are suitably conformable are associative
on multiplication: (AB)C = A(BC).
 Square matrices are conformable in either order. Thus, if
A and B are n × n matrices, AB = C and BA = D, where C
and D are also n × n matrices.
 However, square matrices in general are NOT
commutative on multiplication: AB ≠ BA.
 Matrices A, B, and C are distributive, A(B + C) = AB + AC,
if B and C are the same size and A is conformable to B
and C.
Matrix Algebra
 Consider the two square matrices A and B. If AB = I, then B
is the inverse of A, which is denoted by A^-1.
 Matrix inverses commute on multiplication:
AA^-1 = A^-1A = I.
 Matrix factorization refers to the representation of a
matrix as the product of two other matrices.
 Factorization is NOT a unique process. There are, in
general, an infinite number of matrices B and C whose
product is A.
 A particularly useful factorization for square matrices is
A = LU, where L and U are lower and upper triangular
matrices, respectively.
Systems of Linear Algebraic
Equations
 Systems of linear algebraic equations can be expressed
very compactly in matrix notation as Ax = b, where A is
the coefficient matrix, x is the column vector of
unknowns, and b is the column vector of constants.
 This can also be written in index notation as
Σj aij xj = bi (i = 1, 2, …, n).
 Or equivalently with the individual equations written
out in full.
Systems of Linear Algebraic
Equations
 There are three row operations that are useful when
solving systems of linear algebraic equations. They are:
 Any row may be multiplied by a constant (a
process called scaling)
 The order of the rows may be interchanged (a
process called pivoting)
 Any row can be replaced by a weighted linear
combination of that row with any other row (a
process called elimination)

Determinants
 The term determinant of a square matrix A, denoted by
det(A) or |A|, refers both to the collection of elements
of the square matrix enclosed in vertical lines and to the
scalar value represented by that array.
 The scalar value of the determinant of a 2 × 2 matrix is
the product of the elements on the major diagonal minus
the product of the elements on the minor diagonal:
det(A) = a11 a22 − a12 a21.
Determinants
 The scalar determinant of a 3 × 3 matrix is composed of
the sum of six triple products which can be obtained
from the augmented determinant.
 The determinant is augmented by repeating the first
two columns of the determinant on the right-hand side
of the determinant.
 Three triple products are formed, starting with the
elements of the first row multiplied by the two remaining
elements on the right-downward-sloping diagonals.
Determinants
 Three more triple products are formed, starting with the
elements of the third row multiplied by the two
remaining elements on the right-upward-sloping
diagonals.
 The value of the determinant is the sum of the first
three triple products minus the sum of the last three
triple products.
Determinants
 One formal procedure for evaluating determinants is
called expansion by minors, or the method of cofactors.
 In this procedure there are n! products to be summed,
where each product has n elements.
 Thus, the expansion of a 10 × 10 determinant requires
the summation of 10! products, where each product
involves 9 multiplications.
 Consequently, the evaluation of determinants by the
method of cofactors is impractical except for very small
determinants.
Determinants
 The minor Mij is the determinant of the (n − 1) × (n − 1)
submatrix of the n × n matrix A obtained by deleting the
ith row and the jth column.
 The cofactor Aij associated with the minor Mij is defined
as Aij = (−1)^(i+j) Mij.
 Using cofactors, the determinant of A is the sum of the
products of the elements of any row or column,
multiplied by their corresponding cofactors. Expanding
along any fixed row i yields det(A) = Σj aij Aij.
Determinants
 Alternatively, expanding down any fixed column j yields
det(A) = Σi aij Aij.
 Each cofactor expansion reduces the order of the
determinant by one, so there are n determinants of
order n − 1 to evaluate.
 If the value of the determinant is zero, the matrix is said
to be singular.
 The determinant of a triangular matrix, either upper or
lower triangular, is the product of the elements on the
major diagonal.
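The cofactor expansion along the first row can be written as a short recursive function. A minimal sketch, not from the slides (and, as noted above, practical only for very small matrices because of the n! cost):

```python
# Determinant by cofactor expansion along the first row:
# det(A) = sum_j a_0j * (-1)^j * M_0j, where M_0j is the minor
# obtained by deleting row 0 and column j.
def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += ((-1) ** j) * A[0][j] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))                   # 1*4 - 2*3 = -2
print(det([[2, 0, 0], [0, 3, 0], [0, 0, 4]]))  # triangular: 2*3*4 = 24
```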
Topic Two
DIRECT ELIMINATION
METHODS
Cramer’s Rule
 Although this rule is not an elimination method, it is a
direct method for solving systems of linear algebraic
equations.
 Consider the system of linear algebraic equations Ax = b,
which represents n equations. Cramer's rule states that
the solution for xj is given by

xj = det(Aj) / det(A)   (j = 1, 2, …, n)

where Aj is the matrix obtained by replacing column j in
matrix A by the column vector b.
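Cramer's rule translates almost directly into code. A minimal sketch, not from the slides; the example system (2x + y = 5, x + 3y = 10) is made up, and the determinant is evaluated by cofactor expansion, so this is only illustrative for small n:

```python
# Cramer's rule: x_j = det(A_j) / det(A), where A_j is A with
# column j replaced by the right-hand-side vector b.
def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum(((-1) ** j) * A[0][j] * det([r[:j] + r[j + 1:] for r in A[1:]])
               for j in range(len(A)))

def cramer(A, b):
    d = det(A)
    assert d != 0, "A is singular: no unique solution"
    x = []
    for j in range(len(A)):
        Aj = [row[:j] + [b[i]] + row[j + 1:] for i, row in enumerate(A)]
        x.append(det(Aj) / d)
    return x

# Example system: 2x + y = 5, x + 3y = 10
print(cramer([[2, 1], [1, 3]], [5, 10]))  # [1.0, 3.0]
```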
Elimination Methods-Simple
 The method solves a system of linear algebraic equations
by solving one equation, say the first equation, for one
of the unknowns, say x1, in terms of the remaining
unknowns x2 to xn, then substituting that expression into
the remaining equations to give n − 1 equations involving
only x2 to xn.
 This elimination procedure is performed n − 1 times,
until the last step yields an equation involving only xn.
 This process is called elimination.
Elimination Methods-Simple
 The value of xn can be calculated from the final equation
in the elimination procedure. Then x(n−1) can be
calculated from modified equation n − 1, which contains
only x(n−1) and xn.
 This procedure is performed n − 1 times to calculate
x(n−1) down to x1. This process is called back substitution.
 Elimination involves normalizing the equation containing
the element to be eliminated by the element immediately
above it, which is called the pivot element, multiplying the
normalized equation by the element to be eliminated, and
subtracting the result from the equation containing the
element to be eliminated.
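The elimination and back-substitution steps above can be sketched as follows. This is an illustrative implementation, not taken from the slides; it assumes no zero pivots are encountered (pivoting is discussed on the next slides), and the example system is made up:

```python
# Gauss elimination without pivoting, followed by back substitution.
def gauss_solve(A, b):
    n = len(A)
    A = [row[:] for row in A]  # work on copies
    b = b[:]
    # Forward elimination: zero out the entries below each pivot.
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]      # elimination multiplier
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # Back substitution: solve for x_n, then x_(n-1), ..., x_1.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# Example system: 2x + y = 5, x + 3y = 10
print(gauss_solve([[2, 1], [1, 3]], [5, 10]))  # [1.0, 3.0]
```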
Elimination Methods-Pivoting
 The element on the major diagonal is called the pivot
element.
 The elimination procedure described so far fails
immediately if the first pivot element is zero.
 The procedure also fails if any subsequent pivot element
is zero.
 Even though there may be no zeros on the major
diagonal in the original matrix, the elimination process
may create zeros on the major diagonal.

Elimination Methods-Pivoting
 The zeros on the major diagonal can be avoided by
rearranging the equations, by interchanging the rows or
columns before each elimination step.
 This process is called pivoting.
 Interchanging both rows and columns is called full
pivoting. Interchanging only rows is called partial
pivoting.
 Pivoting reduces round-off errors since the pivot element
is a divisor during the elimination process, and division by
large numbers introduces smaller round-off errors.
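The partial-pivoting step described above can be sketched as a row swap performed before each elimination step. A minimal illustration (the matrix is a made-up example whose first pivot would otherwise be zero):

```python
# Partial pivoting: before eliminating column k, swap in the row (at or
# below row k) whose entry in column k has the largest absolute value.
def pivot(A, b, k):
    p = max(range(k, len(A)), key=lambda i: abs(A[i][k]))
    if p != k:
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]

A = [[0.0, 1.0], [2.0, 1.0]]  # simple elimination would divide by the zero pivot
b = [3.0, 5.0]
pivot(A, b, 0)
print(A[0])  # [2.0, 1.0] -- the zero pivot has been swapped away
```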
Elimination Methods-Scaling
 Simple elimination can also incur significant round-off
errors when the magnitudes of the pivot elements are
smaller than the magnitudes of the other elements in the
equations containing the pivot elements.
 In such cases scaling is employed to select the pivot
elements. Scaling is employed only to select the pivot
elements.
 After pivoting, elimination is applied to the original
equations.
 The elimination procedure described so far is commonly
called Gauss elimination.
Gauss-Jordan Elimination
 Gauss-Jordan elimination is a variation of Gauss
elimination in which the elements above the major
diagonal are eliminated as well as the elements below
the major diagonal.
 The A matrix is transformed to a diagonal matrix.
 The rows are usually scaled to yield unity diagonal
elements which transforms the A matrix to the identity
matrix I.
 The transformed b vector is then the solution vector x.

Gauss-Jordan Elimination
 The inverse of a square matrix A is the matrix A^-1 such
that AA^-1 = A^-1A = I.
 Gauss-Jordan elimination can be used to evaluate the
inverse of matrix A by augmenting A with the identity
matrix I and applying the Gauss-Jordan algorithm.
 The transformed A matrix is the identity matrix, and the
transformed identity matrix is the matrix inverse.
 Gauss-Jordan elimination yields [A | I] → [I | A^-1].
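The augment-and-reduce procedure can be sketched as follows. This is an illustrative implementation, not from the slides; it adds partial pivoting to guard against zero pivots, and the example matrix is made up:

```python
# Matrix inverse via Gauss-Jordan: augment A with I, then reduce
# [A | I] to [I | A^-1] by scaling each pivot row to a unity pivot
# and eliminating both above and below the major diagonal.
def gauss_jordan_inverse(A):
    n = len(A)
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]  # the augmented matrix [A | I]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))  # partial pivoting
        M[k], M[p] = M[p], M[k]
        piv = M[k][k]
        M[k] = [v / piv for v in M[k]]  # scale row to a unity pivot
        for i in range(n):
            if i != k:                  # eliminate above AND below the diagonal
                m = M[i][k]
                M[i] = [vi - m * vk for vi, vk in zip(M[i], M[k])]
    return [row[n:] for row in M]       # the right half is the inverse

Ainv = gauss_jordan_inverse([[2.0, 1.0], [1.0, 3.0]])
print(Ainv)  # approximately [[0.6, -0.2], [-0.2, 0.4]]
```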
The Matrix Inverse Method
 Systems of linear algebraic equations can be solved
using the matrix inverse A^-1.
 Consider the general system of linear algebraic
equations Ax = b.
 Multiplying this system by A^-1 yields x = A^-1 b.
 Singular matrices, that is, matrices whose determinant
is zero, do not have inverses.
 The corresponding system of equations does not have a
unique solution.
Topic Three
LU FACTORIZATION

LU Factorization
 Matrices, like scalars, can be factored into the product of
two other matrices in an infinite number of ways.
 Thus A = BC; when B and C are lower and upper
triangular matrices, respectively, this becomes A = LU.
 Specifying the diagonal elements of either L or U makes
the factoring unique.
 The procedure based on unity elements on the major
diagonal of L is called the Doolittle method.
 The procedure based on unity elements on the major
diagonal of U is called the Crout method.
LU Factorization
 Matrix factoring can be used to reduce the work
involved in Gauss elimination when multiple b vectors
are to be considered.
 In the Doolittle LU method, this is accomplished by
defining the elimination multipliers determined in the
elimination step of Gauss elimination as the elements of
the L matrix.
 The U matrix is defined as the upper triangular matrix
determined by the elimination step of Gauss
elimination.
LU Factorization
 Consider the linear system Ax = b. Let A be factored
into the product LU.
 The linear system becomes LUx = b. Multiplying by L^-1
gives Ux = L^-1 b.
 Define the vector y = Ux, so that y = L^-1 b. Multiplying
by L gives Ly = b.
 This results in a two-step solution: first solve Ly = b for y
by forward substitution, then solve Ux = y for x by back
substitution.
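The Doolittle factorization and the two-step solve above can be sketched as follows. An illustrative implementation, not from the slides; it assumes no zero pivots, and the example system is made up:

```python
# Doolittle LU factorization: unity diagonal on L, with the Gauss
# elimination multipliers stored as the elements of L below the diagonal.
def lu_doolittle(A):
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]  # elimination multiplier -> element of L
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return L, U

# Two-step solve: Ly = b by forward substitution, then Ux = y by back substitution.
def lu_solve(L, U, b):
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

# Example system: 2x + y = 5, x + 3y = 10
L, U = lu_doolittle([[2.0, 1.0], [1.0, 3.0]])
print(lu_solve(L, U, [5.0, 10.0]))  # [1.0, 3.0]
```

Once L and U are known, any number of right-hand-side vectors b can be solved by repeating only the cheap substitution steps, which is the saving the slide describes.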
Session 1 - Assignment
 Consider the following system of linear algebraic
equations:

Solve the system using (you can compare your solution
with MATLAB):
1. Cramer's Rule
2. Gauss elimination without pivoting
3. Gauss-Jordan elimination
4. LU factorization
Reference

1. Hoffman, J. D. (2001), Numerical Methods for
Engineers and Scientists, 2nd Edition.
2. Johnston, R. L. (1982), Numerical Methods: A
Software Approach, John Wiley & Sons.
3. Kahaner, D., Moler, C., and Nash, S. (1989),
Numerical Methods and Software, Prentice Hall.
The End

College of Education
School of Continuing and Distance Education
2017/2018 – 2018/2019 ACADEMIC YEAR
