
Numerical Analysis (10th ed)

R.L. Burden, J. D. Faires, A. M. Burden

Chapter 6
Direct Methods for Solving Linear Systems

Chapter 6.1: Linear Systems of Equations*


In this section we review the fact that any linear system of equations can be written in the form Ax = b,
where A is the coefficient matrix, x is the solution vector, and b is the constant vector. We saw that this
form can be represented by the augmented matrix [A | b]. Elementary row operations are then used to
find the solution of the system. The legal row operations consist of swapping two rows, multiplying a
row by a nonzero constant, and adding a multiple of one row to another.

Gaussian elimination with backward substitution was introduced as a method that uses the legal row
operations to reduce the augmented matrix to row echelon form, that is, a form in which the coefficient
matrix is upper triangular. Backward substitution is then used to obtain the solutions to the system. It is
important to note that this method does NOT require entries of 1 along the diagonal.
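
A minimal sketch of the method in Python with NumPy (the function name gauss_solve and the 3 × 3 example system are illustrative, not from the text):

```python
import numpy as np

def gauss_solve(A, b):
    """Gaussian elimination with backward substitution, no pivoting:
    a zero pivot raises an error (pivoting is covered in Chapter 6.2)."""
    A, b = A.astype(float).copy(), b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):                 # forward elimination
        if A[k, k] == 0.0:
            raise ZeroDivisionError("zero pivot; a row swap is required")
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]          # multiplier m_ik
            A[i, k:] -= m * A[k, k:]       # E_i <- E_i - m_ik * E_k
            b[i] -= m * b[k]
    x = np.zeros(n)                        # backward substitution
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])
b = np.array([8.0, -11.0, -3.0])
print(gauss_solve(A, b))   # expect [ 2.  3. -1.]
```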

Another method, Gauss-Jordan elimination, was discussed in the exercises. This method is very
popular since it goes one step further than the previous one: the legal row operations are used to
reduce the augmented matrix to reduced row echelon form, that is, a form in which the coefficient
matrix is a diagonal matrix. Although entries of 1 along the diagonal are not required, producing them
is often useful for smaller systems. However, keep in mind that in doing so, you are adding to the
number of computations needed. With larger systems, this will add to the cost of finding the solution,
as indicated in the text's discussion of operation counts.
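
For comparison, the reduced row echelon form produced by Gauss-Jordan elimination can be inspected with SymPy (a sketch; the augmented matrix reuses the 3 × 3 system above):

```python
from sympy import Matrix

# Augmented matrix [A | b] for the 3 x 3 system used above.
aug = Matrix([[ 2,  1, -1,   8],
              [-3, -1,  2, -11],
              [-2,  1,  2,  -3]])

rref_form, pivot_cols = aug.rref()   # Gauss-Jordan reduction
print(rref_form)   # the last column holds the solution: 2, 3, -1
```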

Chapter 6.2: Pivoting Strategies*


Needless to say, because we are using finite-digit arithmetic, round-off error can be a huge issue when
solving systems numerically. This section addresses those issues with several different pivoting
strategies. The development of those strategies is thoroughly discussed in the text so we will simply
summarize them here.

PARTIAL PIVOTING:
Errors can arise when the pivot element $a_{kk}^{(k)}$, the diagonal entry at the kth step, is small in
absolute value relative to the entries below it in the same column. The strategy is to determine the
largest entry in absolute value on or below the diagonal and swap the two corresponding rows so that
that entry becomes the new pivot element. This is stated mathematically as follows: we need to
determine the smallest $p \ge k$ such that $|a_{pk}^{(k)}| = \max_{k \le i \le n} |a_{ik}^{(k)}|$ and
perform the row swap $E_k \leftrightarrow E_p$, if $p \ne k$.
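
A sketch of the selection step in Python (the helper name partial_pivot is illustrative; A and b are NumPy arrays modified in place):

```python
import numpy as np

def partial_pivot(A, b, k):
    """Swap rows so the entry of largest magnitude on or below the
    diagonal in column k becomes the pivot a_kk."""
    p = k + np.argmax(np.abs(A[k:, k]))   # smallest p >= k attaining the max
    if p != k:                            # row swap E_k <-> E_p
        A[[k, p]] = A[[p, k]]
        b[[k, p]] = b[[p, k]]
```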

SCALED PARTIAL PIVOTING:

This method is used when partial pivoting alone is not adequate, for example when a row contains
entries much larger than the current pivot element $a_{kk}$. It places in the pivot position the element
that is largest relative to the entries in its own row, again by swapping the corresponding rows. This is
stated mathematically as follows: with the scale factors $s_i = \max_{1 \le j \le n} |a_{ij}|$, we need to
determine the smallest $p \ge k$ such that $|a_{pk}| / s_p = \max_{k \le i \le n} |a_{ik}| / s_i$ and
perform the row swap $E_k \leftrightarrow E_p$.
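
A sketch of the corresponding selection step (the 2 × 2 matrix is illustrative; its first row has a huge entry to the right of the pivot, so the scaled ratios pick row 1 even though 30 > 5.291):

```python
import numpy as np

def scaled_pivot_row(A, k, s):
    """Return the row p >= k maximizing |a_ik| / s_i, where the scale
    factors s_i = max_j |a_ij| are computed once, before elimination."""
    return k + np.argmax(np.abs(A[k:, k]) / s[k:])

A = np.array([[30.0, 591400.0],
              [5.291,   -6.130]])
s = np.abs(A).max(axis=1)          # scale factors s_i
print(scaled_pivot_row(A, 0, s))   # 1, since 5.291/6.130 > 30/591400
```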

COMPLETE PIVOTING:
Complete (or maximal) pivoting at the kth step searches all the entries $a_{ij}$, for $i = k, k+1, \dots, n$
and $j = k, k+1, \dots, n$, to find the entry with the largest magnitude. Both row and column
interchanges are performed to bring this entry to the pivot position. This strategy requires numerous
comparisons at each stage and is therefore recommended only for systems where accuracy is essential
and the amount of execution time needed for this method can be justified.
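
A sketch of locating the complete pivot (note that the column interchange reorders the unknowns, which must be undone after the solve):

```python
import numpy as np

def complete_pivot_index(A, k):
    """Find the entry of largest magnitude in the trailing submatrix
    A[k:, k:]; returns its (row, col) indices in the full matrix."""
    sub = np.abs(A[k:, k:])
    i, j = np.unravel_index(np.argmax(sub), sub.shape)
    return k + i, k + j   # then swap rows k, k+i and columns k, k+j
```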

Chapter 6.3: Linear Algebra and Matrix Inversion


This section reviews the linear algebra needed for the remainder of the chapter: matrix notation, matrix
addition, scalar multiplication, and matrix multiplication. A square matrix A is nonsingular (invertible)
if there is a matrix A^-1 with A A^-1 = A^-1 A = I; in that case the system Ax = b has the unique
solution x = A^-1 b. The inverse can be found by using the legal row operations to reduce the
augmented matrix [A | I] to [I | A^-1], although computing A^-1 just to solve a single system is more
expensive than Gaussian elimination.
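
A quick NumPy illustration (the 2 × 2 matrix is arbitrary; in practice one calls np.linalg.solve rather than forming the inverse):

```python
import numpy as np

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])

A_inv = np.linalg.inv(A)                    # raises LinAlgError if A is singular
print(np.allclose(A @ A_inv, np.eye(2)))    # True: A A^-1 = I
print(A_inv @ np.array([10.0, 12.0]))       # solves Ax = b via the inverse
```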

Chapter 6.4: The Determinant of a Matrix


This section reviews the determinant of a square matrix. For a 1 × 1 matrix A = [a], det A = a; for
larger matrices, det A is defined recursively by cofactor expansion along any row or column. Useful
properties: interchanging two rows changes the sign of det A; multiplying a row by a constant c
multiplies det A by c; adding a multiple of one row to another leaves det A unchanged;
det AB = (det A)(det B); and A is singular exactly when det A = 0. Because the determinant of a
triangular matrix is the product of its diagonal entries, Gaussian elimination provides an efficient way
to evaluate determinants.
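
In NumPy (the matrix reuses the earlier 3 × 3 example; np.linalg.det works through an LU factorization, so the cost is O(n^3) rather than the O(n!) of naive cofactor expansion):

```python
import numpy as np

A = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])

print(np.linalg.det(A))   # -1.0 for this matrix
```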

Chapter 6.5: Matrix Factorization*


In this section we discuss how matrix factorization can be used to solve the system Ax = b. Two
methods are developed in the section: the LU decomposition, used when no row interchanges have
been employed, and a second method, involving a permutation matrix, used when row interchanges
have been made.

The LU decomposition takes the system Ax = b and rewrites it as L(Ux) = b. We let y = Ux. The
system Ly = b is solved for y, and then the system Ux = y is solved for x. From a practical standpoint,
this factorization is useful only when row interchanges are not required to control the round-off error
resulting from the use of finite-digit arithmetic.

When a row interchange is required, we decompose the matrix A into A = P^t LU, where P is the
permutation matrix that interchanges the rows.
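
A sketch of both steps with SciPy (the 3 × 3 system is the earlier example; scipy.linalg.lu returns P, L, U with A = P L U, so its P plays the role of P^t in the notation above):

```python
import numpy as np
from scipy.linalg import lu, solve_triangular

A = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])
b = np.array([8.0, -11.0, -3.0])

P, L, U = lu(A)                       # A = P @ L @ U

# Solve A x = b as L y = P^t b, then U x = y.
y = solve_triangular(L, P.T @ b, lower=True)
x = solve_triangular(U, y)            # upper-triangular by default
print(x)                              # expect [ 2.  3. -1.]
```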

Chapter 6.6: Special Types of Matrices*


In this section, we summarize the special types of matrices. Given a matrix A:

• A is symmetric if A = A^t.

• A is singular if det A = 0.

• A is diagonally dominant if $|a_{ii}| \ge \sum_{j=1, j \ne i}^{n} |a_{ij}|$ for each i = 1, ..., n.

• A is strictly diagonally dominant if $|a_{ii}| > \sum_{j=1, j \ne i}^{n} |a_{ij}|$ for each i = 1, ..., n.
Such a matrix always has an LU decomposition.

• A is positive definite if it is symmetric and x^t A x > 0 for every n-dimensional vector x ≠ 0. Such a
matrix always has the special factorizations LL^t and LDL^t (see the sketch after this list).

• Tridiagonal matrices occur often and have special factorizations. Refer to the text.
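
A short sketch checking these definitions numerically (the 3 × 3 matrix is illustrative; it is symmetric and strictly diagonally dominant, hence positive definite):

```python
import numpy as np

def is_strictly_diagonally_dominant(A):
    """Check |a_ii| > sum of |a_ij| over j != i, for every row i."""
    diag = np.abs(np.diag(A))
    off_diag = np.abs(A).sum(axis=1) - diag
    return bool(np.all(diag > off_diag))

A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 2.0, 6.0]])

print(is_strictly_diagonally_dominant(A))   # True

# Cholesky gives the LL^t factorization of a symmetric positive
# definite matrix; np.linalg.cholesky raises LinAlgError otherwise.
L = np.linalg.cholesky(A)
print(np.allclose(L @ L.T, A))              # True
```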
