NUMERICAL METHODS I
EEG 823
Dr. A.O. GBENGA-ILORI
Lecture 1
Things to Note:
Course is compulsory
I expect that you are not new to numerical methods
I will introduce the topic in class but expect you to read more
and solve questions on your own.
Hopefully, we shall have class assignment/term paper and a
semester test totaling 30/40 marks.
Examination will be 60/70 marks.
No copying of class assignments; otherwise, all involved will
forfeit all the marks.
If you do not know MATLAB, please start learning as you will need
it.
Recommended Textbooks
For most of the lectures, I will use Numerical Methods for
Engineers by Steven C. Chapra and Raymond P. Canale, 6th Edition.
Another good textbook is Scientific Computing: An Introductory
Survey by M.T. Heath.
Prerequisite
Algebra
Calculus
Differential Equations
Linear Algebra
What is Numerical Methods?
A numerical method is an approximate method, set up in opposition
to analytical methods (which are exact).
It is an engineer's choice of mathematics.
Before now we learnt:      In this course, we will learn:
  Algebra                  Root finding
  Linear Algebra           Solving b = Ax for x
  Calculus                 Integration, differentiation, optimisation
  Exact methods            Numerical approach (approximate)
Problem description
  |
Mathematical model using numerical methods
  |
Solution of the mathematical model
  |
Implementation of the solution
Figure 1: Problem Solving with Numerical Methods.
Overview of Numerical Methods
Behind any mathematical model you encounter, there is a real
world problem e.g. energy, environmental.
An example is communication engineering:
Spectrum scarcity and congestion of cellular networks
A solution may be to underlay cellular sub-band with D2D users in
the future 5G networks.
However, this can lead to another problem: interference.
We can address this challenge by developing an optimisation model
to determine how the two sets of devices can co-exist without a
breakdown of the network.
Another example is energy usage: how much energy do we really need
in residential homes on campus, for instance?
We can start with taking readings and data over a period of time and
use this to formulate mathematical models that can be used to
forecast energy use in the future.
Real-world problem
  |  (we use assumptions and approximations in mathematical modeling)
Mathematical model
Figure 2: Process of Numerical Modeling
Trade offs in Mathematical Modeling
In numerical modeling, we often need to weigh the benefit of
the model we want to solve against the cost attached to it.
The better the solution accuracy, the higher the model
complexity, the more the programming efforts needed and
all of these impact cost.
Our goal is to have a practical solution in which the benefits
outweigh the costs.
As we engage in numerical modeling, we need to continually
ensure that our solution is practical.
ERROR ANALYSIS
We deal with the important topic of error analysis, which must be
understood for the effective use of numerical methods.
Reference: Chapters 3 and 4 of Numerical Methods for Engineers,
Steven C. Chapra and Raymond P. Canale, 6th Edition.
Approximations and Round Off Errors.
Sources of Errors
Modeling Errors:
o Blunders (human mistakes)
o Formulation errors (using models that do not represent
reality e.g. the 5G networks and neglecting mobility of
users)
o Data Uncertainty (using unreliable data)
Numerical Errors:
o Truncation errors (the way we truncate an infinite series)
o Round-off errors (errors that come from our numerical
approximations i.e. by using our various numerical
methods e.g. quantization errors)
When we talk about errors, we talk about two things: accuracy vs. precision.
Figure 3: Accuracy and Precision.
Formal Definition of Errors
Errors usually arise from approximations used to represent exact
(true) numbers in numerical analysis.
Therefore, the numerical true error (Et) is:
Et = true value - approximation ---- (1)
A shortcoming of the above definition is that it takes no account of
the order of magnitude of the value under examination
For instance, an error of 1 kg is much more significant if we are
measuring the weight of a baby than the weight of an adult.
We can account for the magnitude of the quantities being
evaluated by normalizing the error to the true value.
The true percentage relative error is therefore:
εt = (true error / true value) x 100% ------ (2)
Iterative Calculations
In mathematics, functions can often be represented by an infinite
series, e.g. the exponential function can be computed using:
e^x = 1 + x + x^2/2! + x^3/3! + ... + x^n/n! + ...
Thus, as more and more terms are added to the series, the
approximation becomes a better estimate of the true value e^x.
Generally, in iterative calculations, you move a step closer to the
true value with each iteration.
One of the challenges of numerical methods is to determine error
estimates in the absence of the knowledge regarding the true
value.
Usually in real world situation, we may not know the true value a
priori.
An alternative to equation (2) is to use the best available estimate
of the true value.
For example, suppose we want to normalize an error of 1 kg in the
weight of an adult.
We may not have the true value of the weight of an adult in Nigeria,
but we can use an estimate: say that in Nigeria, a woman between 35
and 55 years has an average weight of 120 kg while her counterparts
in Germany weigh 40 kg.
Therefore, the approximate percentage relative error εa is:
εa = (approximate error / approximation) x 100% ---------- (3)
Usually computers are used for iterative computation.
In this approach, a present approximation is made on the basis of
a previous approximation.
The process is repeatedly or iteratively performed in order to
compute better and better approximation.
In that case:
εa = [(current approximation - previous approximation) / current approximation] x 100% ------ (4)
The signs of all the error equations may be positive or negative.
Usually we are not concerned with the sign but with whether the
absolute percentage is lower than a pre-specified percent tolerance
εs.
Therefore, if equation (5) below holds, our result is within the
acceptable error level:
|εa| < εs ------ (5)
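The stopping test of eqs. (4)-(5) can be sketched in a few lines of Python (the course tool is MATLAB, but the same logic carries over directly; the function name and tolerance below are illustrative, using the e^x series from the previous slide):

```python
import math

def maclaurin_exp(x, es=1e-6, max_iter=50):
    """Add Maclaurin-series terms of e^x until the approximate percent
    relative error (eq. 4) drops below the stopping tolerance es (eq. 5)."""
    total = 1.0              # zero-order approximation
    term = 1.0
    for n in range(1, max_iter + 1):
        term *= x / n        # next series term x^n / n!
        previous = total
        total += term
        ea = abs((total - previous) / total) * 100.0   # eq. (4), in percent
        if ea < es:          # eq. (5): result is within tolerance
            break
    return total

approx = maclaurin_exp(0.5)
```

Each pass uses the previous approximation to form the present one, exactly as described above; only the unknown true error is replaced by the computable εa.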
Computer algorithms for iterative calculations
Figure 4: Pseudocode for a generic iterative calculator (Chapra, page 60)
Round Off Errors
We have round-off errors because computers retain only a fixed
number of significant figures during a calculation, e.g. numbers
such as π, e, or √7 are rounded off by the computer.
To understand round off errors we review number representation
i.e. integers, fixed point, floating point.
Integer Number Representation
Recall that we represent decimal numbers on the computer as shown below:
Figure 5: (a) decimal base 10 system (b) binary base 2 system
If we were to represent -173 on a 16-bit computer using the signed
magnitude method, we would have 1 000000010101101 (the first bit is
the sign and the remaining 15 bits hold the magnitude 173 = 10101101).
The sign bit is 1 when -ve and 0 when +ve.
Immediately, we recognize a number of problems:
o Quantization Error: We can represent 1 but not 1.5 with this system. So we
have holes on the number line because we cannot represent numbers
between two consecutive whole numbers e.g. between 1 and 2.
o Overflow Error: In the above example, we cannot represent numbers bigger
than 255 e.g. 256 or 1000 because of the limitation due to the number of bits.
So serious approximation problems come into play, since a number like 1000
is approximated to 255.
o Underflow Error: This is an error introduced due to the intervals between our
numbers, and it is not so serious. To overcome this we can reduce the scaling
used, which may mean using a larger number of bits.
Floating Point representation
o Fractional numbers are typically represented in computers using floating-point form.
o The number is expressed as a fractional part, i.e. the mantissa (m) or
significand, and an integer part called the exponent (e):
o m x b^e, where b is the base of the number system, e.g. 0.345 x 10^3.
o What is the smallest number that can be represented by this system?
Consider a 7-bit word (the figure is omitted here): the first bit holds the
sign of the number, the second the sign of the exponent, the next two the
magnitude of the exponent, and the last three the mantissa.
o The smallest positive number is 0111100. The initial 0 indicates that the
quantity is +ve. The 1 in the second place designates that the exponent has
a -ve sign.
o The 1s in the third and fourth places give a maximum exponent magnitude of
1 x 2^1 + 1 x 2^0 = 3. The exponent is therefore -3.
o The mantissa is specified by the 100 in the last three places, i.e.
1 x 2^-1 + 0 x 2^-2 + 0 x 2^-3 = 0.5.
o The smallest number is therefore +0.5 x 2^-3 = 0.0625 (base 10), i.e.
0111100 = 0.0625.
o Similarly, the maximum number possible is 0011111 = +0.875 x 2^3 = 7 (base 10).
o So anything bigger than 7 is approximated to 7 unless we use more bits.
We therefore have an overflow problem in the floating-point representation
too.
o There is also an underflow problem, i.e. a hole between 0 and 0.0625.
However, it is not as large as the overflow problem, i.e.
Numbers between 0 and the smallest number 0.0625 are approximated to either
0 or 0.0625 (underflow).
The number 1000 is approximated to 7 (overflow).
o This method is an improvement on integer number representation but still
has its limitations.
Numerical Manipulations of Computer Numbers
Addition
o When two floating point numbers are added, the mantissa of the number
with the smaller exponent is modified so that the exponents are the same.
o For e.g., if we want to add 0.1557 x 10^1 to 0.4381 x 10^-1,
the mantissa of the smaller number is shifted to the right a number of
places equal to the difference of the two exponents, i.e. 1 - (-1) = 2,
in order to make the exponents the same. Therefore:
0.4381 x 10^-1 = 0.004381 x 10^1
o So we can add the numbers as:
0.1557 x 10^1 + 0.004381 x 10^1 = 0.160081 x 10^1
o The result is chopped to 0.1600 x 10^1 since the two original numbers have 4
digits of precision.
o Errors called loss-of-precision errors are introduced this way. This can be
very serious when one number is very much bigger than the other.
Subtraction
o Suppose we are subtracting two nearly equal numbers, for example:
0.7642 x 10^3 - 0.7641 x 10^3 = 0.0001 x 10^3
o We can normalize the result as 0.1000 x 10^0. Thus, we append three non-
significant digits which the computer cannot discard.
o This introduces a substantial computational error because the computer will
consequently act as if the three zeros after the decimal were significant.
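This loss of significance is easy to demonstrate in any binary floating-point system (a sketch; the variable names are mine, and any pair of nearly equal values shows the same effect):

```python
# Mathematically, (1.0 + x) - 1.0 is exactly x. In floating point, the sum
# 1.0 + x must round to the nearest representable number near 1, so the
# subtraction recovers a corrupted value of x: most of its significant
# digits were lost in the addition.
x = 1e-15
computed = (1.0 + x) - 1.0       # exact answer would be 1e-15
rel_err = abs(computed - x) / x  # relative error of the recovered value
```

The relative error here is on the order of 10%, even though every individual operation was correctly rounded; the subtraction merely exposed digits that were already gone.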
Multiplication and Division
o We do not get as many errors with multiplication and division because we
do not rescale in any way.
Taylor Series and Truncation Error
We quickly review the Taylor series, which is a means to predict a
function value at one point in terms of the function value and its
derivatives at another point:
f(x) = Σ (n=0 to ∞) [f^(n)(a) / n!] (x - a)^n
The Taylor series does not always converge, but when it converges,
it is useful.
The Taylor series in expanded form is shown below:
f(x) = f(a) + [f'(a)/1!](x - a) + [f''(a)/2!](x - a)^2 + [f'''(a)/3!](x - a)^3 + [f^(iv)(a)/4!](x - a)^4 + ...
When the Taylor series is evaluated at a = 0, it is referred to as
the Maclaurin series, i.e.:
f(x) = Σ (n=0 to ∞) [f^(n)(0) / n!] x^n
The Taylor series can be represented recursively as:
zero order: f0(x) = f(a)
first order: f1(x) = f0(x) + [f'(a)/1!](x - a)
second order: f2(x) = f1(x) + [f''(a)/2!](x - a)^2
Taylor Series Example
Determine the Taylor series approximation for the function f(x) = sin(x)
and a = 0.
Solution:
f(x) = Σ (n=0 to ∞) [f^(n)(a) / n!] (x - a)^n
First, we take the derivatives and evaluate them at a = 0:
f(a) = sin(0) = 0, f'(a) = cos(0) = 1, f''(a) = -sin(0) = 0, f'''(a) = -cos(0) = -1,
f^(iv)(a) = sin(0) = 0, f^(v)(a) = cos(0) = 1, f^(vi)(a) = -sin(0) = 0, f^(vii)(a) = -cos(0) = -1,
........
As we continue taking derivatives, the values cycle through 0, 1, 0, -1.
Now the Maclaurin expansion of sin(x) is:
zero order: f0(x) = 0
first order: f1(x) = f0(x) + (1/1!) x = x
second order: f2(x) = f1(x) + (0/2!) x^2 = x
third order: f3(x) = f2(x) - (1/3!) x^3 = x - x^3/3!
......
As we add terms, our approximation becomes better.
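The partial sums above can be checked numerically with a short sketch (the helper name and test point are my own):

```python
import math

def maclaurin_sin(x, order):
    """Sum the Maclaurin series of sin(x) up to and including x**order."""
    total = 0.0
    for n in range(1, order + 1, 2):       # only odd powers survive
        sign = (-1) ** ((n - 1) // 2)      # signs alternate: n = 1, 3, 5, 7 -> +, -, +, -
        total += sign * x ** n / math.factorial(n)
    return total

x = 0.5
orders = {k: maclaurin_sin(x, k) for k in (1, 3, 5, 7)}
```

Comparing `orders` against `math.sin(0.5)` shows the error shrinking as each odd-order term is added, exactly as claimed.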
Truncation Error
Truncation error is the error that results from cutting off the
higher-order terms of any infinite series, such as the Taylor series:
f(x) = f(a) + [f'(a)/1!](x - a) + [f''(a)/2!](x - a)^2 + [f'''(a)/3!](x - a)^3 + [f^(iv)(a)/4!](x - a)^4 + ...
Usually we approximate at the nth order, but the series is infinite.
Therefore, in reality what we have is:
f(x) = f(a) + f'(a)(x - a) + [f''(a)/2!](x - a)^2 + ... + [f^(n)(a)/n!](x - a)^n + Rn
The remainder term Rn, which is actually the error, is:
Rn = [f^(n+1)(ξ) / (n+1)!] (x - a)^(n+1)
where ξ is some value between a and x.
It is sometimes convenient to represent the Taylor series by defining a
step size h = (x_{i+1} - x_i) or h = (x - a).
We can therefore rewrite our last two equations as:
f(x_{i+1}) = f(x_i) + f'(x_i) h + [f''(x_i)/2!] h^2 + ... + [f^(n)(x_i)/n!] h^n + Rn
and
Rn = [f^(n+1)(ξ) / (n+1)!] h^(n+1)
We often express the error Rn using the big-O notation:
Rn = O(h^(n+1))
where O(h^(n+1)) means the truncation error is of the order of h^(n+1).
For example, if the error is O(h), halving the step size will halve
the error. On the other hand, if the error is O(h^2), halving the
step size will reduce the error to a quarter of its value.
In general, we can usually assume that the truncation error is
decreased by the addition of terms to the Taylor series.
Thus for the second case, only a few terms are required to
obtain an adequate estimate.
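The O(h) behaviour can be seen with a quick numerical experiment. The forward-difference derivative below is a standard first-order, O(h), formula (this sketch is mine, not one of the lecture's examples):

```python
import math

# Forward difference f'(x) ≈ [f(x + h) - f(x)] / h has truncation error O(h),
# so halving the step size should roughly halve the error.
f, x = math.sin, 1.0
exact = math.cos(x)           # true derivative of sin at x = 1

def forward_diff(h):
    return (f(x + h) - f(x)) / h

e1 = abs(forward_diff(0.1) - exact)
e2 = abs(forward_diff(0.05) - exact)   # step size halved
ratio = e1 / e2                        # expected to be close to 2
```

The observed ratio is not exactly 2 because the O(h) statement only describes the leading error term; higher-order terms contribute a small correction.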
Lecture 2
Numerical Differentiation and Integration
Introduction
Everything we do this semester is based on numerical
differentiation and integration.
Before now you have done Ordinary and Partial differential
equations but now we will talk about numerical differentiation and
integration.
As engineers, we continuously deal with system and process
change. So calculus is important to us since it is the mathematics of
change.
The mathematical concepts of differentiation and integration are
the heart of calculus.
Brief review of Differentiation and Integration
Differentiation, or the derivative, represents the rate of change of a
dependent variable wrt an independent variable.
Mathematically, we can represent this as:
Δy/Δx = [f(x_i + Δx) - f(x_i)] / Δx -------- (1)
As Δx → 0, we have:
dy/dx = f'(x_i) = lim (Δx→0) [f(x_i + Δx) - f(x_i)] / Δx ---------- (2)
where dy/dx or f'(x_i) is the first derivative of y wrt x. It is the slope.
The second derivative tells us how the slope is changing, i.e. it is the
curvature:
d^2y/dx^2 = d/dx (dy/dx) ---------- (3)
Partial derivatives are used for functions that depend on more
than one variable.
For e.g., if a given function f depends on both x and y, the partial
derivative of f wrt x at an arbitrary point (x, y) is:
∂f/∂x = lim (Δx→0) [f(x + Δx, y) - f(x, y)] / Δx ---------- (4)
Similarly, the partial derivative of f wrt y is:
∂f/∂y = lim (Δy→0) [f(x, y + Δy) - f(x, y)] / Δy ---------- (5)
Integration is the inverse of differentiation. Mathematically, it is
represented as:
I = ∫ (a to b) f(x) dx ----------- (6)
i.e. the integral of the function f(x) wrt the independent variable x,
evaluated between x = a and x = b.
The function f(x) is referred to as the integrand.
For functions lying above the x-axis, the integral expressed in eqn (6)
corresponds to the area under the curve of f(x) between x = a and
x = b.
The functions to be integrated or differentiated are normally given
in 3 forms:
o A simple continuous function such as a polynomial, an exponential or a
trigonometric function,
o a complicated continuous function that is difficult or impossible to
differentiate or integrate directly, and
o a tabulated function where values of x and f(x) are given at a number of
discrete points, as is often the case with experimental or field data.
It turns out that the first case can be evaluated using Calculus
(simple integration and differentiation).
However, for the second and third cases, we need approximate
methods.
Numerical Integration
In this section we will look at Numerical integration for two
purposes:
o For tabulated data
Here we have a table of values and from the data and we can find the
integral using numerical integration.
o For Known Function
Here we have a known function but it is difficult to integrate so we use
numerical integration methods.
Now we take a look at the numerical integration for tabulated data.
Tabulated Data
Newton-Cotes Method
o Open and Closed situation
o Evenly and unevenly spaced data
o Application to multi-dimensions
The Newton-Cotes integration formula is based on the idea that
we have some function that we are trying to approximate and we
have values of that function at some points.
The idea is to fit some simpler functions over our function and
integrate over the simple function. By doing this, we hope to have
an approximate integral of our function.
The figure (omitted) shows the approximation of a function using
(a) a single straight line and (b) a single parabola (a second-order
polynomial).
We can do still better by using a cubic function.
We have 2 methods of doing this; open integration formula and
closed integration formula.
The difference between the two is that closed integration uses
the end points of the interval we are considering.
If we do not have the end points a and b, we can use open
integration and integrate between two interior points c and d.
It may seem that open integration is more accurate; however, it can
be grossly deficient if the function is changing very fast.
We will restrict our study for now to closed integration.
As said before, the Newton-Cotes formula is based on the strategy
of replacing a complicated function or tabulated data with an
approximate function that is easy to integrate.
I = ∫ (a to b) f(x) dx ≅ ∫ (a to b) fn(x) dx
where fn(x) is a polynomial of the form:
fn(x) = a0 + a1 x + ... + a_{n-1} x^(n-1) + a_n x^n
where n is the order of the polynomial.
When two points are used for our approximation (first order), we
use the trapezoid rule for the integration.
When three points are used for our approximation (second order),
we use Simpson's 1/3 rule.
When four points are used for our approximation (third order), we
use Simpson's 3/8 rule.
We can go on in this manner to higher orders.
Trapezoidal Rule
Our goal in using the trapezoidal rule is to use a linear function
f1(x) (in dotted lines in the figure) to approximate our function f(x),
i.e.
I = ∫ (a to b) f(x) dx ≅ ∫ (a to b) f1(x) dx
where f1(x) is the first-order polynomial.
Recall that the straight line through (a, f(a)) and (b, f(b)) can be
represented by:
f1(x) = f(a) + [(f(b) - f(a)) / (b - a)] (x - a)
The area under this straight line is an estimate of the integral of
f(x) between a and b. Therefore:
I ≅ ∫ (a to b) { f(a) + [(f(b) - f(a)) / (b - a)] (x - a) } dx
The result of the integration is:
I = (b - a) [f(a) + f(b)] / 2
Know this derivation (Chapra, page 603).
The equation above is known as the trapezoid rule or the first
Newton-Cotes closed formula.
Geometrically, this is just the area of a trapezoid:
Area = (b1 + b2) x h / 2, where h is the height and b1 and b2 are the two bases.
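The rule above can be sketched in a couple of lines (the test function and interval below are my own; since the rule is exact for straight lines, the check is exact):

```python
def trapezoid(f, a, b):
    """Single-application trapezoid rule: I ≈ (b - a) * [f(a) + f(b)] / 2."""
    return (b - a) * (f(a) + f(b)) / 2.0

# Exact for straight lines, e.g. f(x) = 2x + 1 on [0, 4]: true integral = 20.
approx = trapezoid(lambda x: 2.0 * x + 1.0, 0.0, 4.0)
```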
Multiple-Application of the Trapezoid rule
One way to improve the accuracy of the trapezoid rule is to divide
the integration interval a to b into a number of segments and
apply the trapezoid rule to each segment.
The areas of individual segments can then be added to yield the
entire interval.
The resulting equation are called multiple-application or composite
integration formulas.
The figure below (omitted) shows the general format and nomenclature
used to characterize multiple-application integrals.
There are n + 1 equally spaced base points, i.e. from x0 to xn.
Consequently, there are n segments of equal width:
h = (b - a) / n
If a and b are designated as x0 and xn respectively, the total
integral can be represented as:
I = ∫ (x0 to x1) f(x) dx + ∫ (x1 to x2) f(x) dx + ... + ∫ (x_{n-1} to xn) f(x) dx
Using the trapezoid rule for each segment,
I = h[f(x0) + f(x1)]/2 + h[f(x1) + f(x2)]/2 + ... + h[f(x_{n-1}) + f(xn)]/2
Grouping terms gives:
I = (h/2) [f(x0) + 2 Σ (i=1 to n-1) f(x_i) + f(xn)]
Generally, we express the integral as:
I = (b - a) [f(x0) + 2 Σ (i=1 to n-1) f(x_i) + f(xn)] / (2n)
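The grouped formula translates directly into code (a sketch; the integrand and segment count are illustrative):

```python
def composite_trapezoid(f, a, b, n):
    """Multiple-application trapezoid rule with n equal segments of width
    h = (b - a) / n:  I ≈ (h/2) * [f(x0) + 2 * (interior sum) + f(xn)]."""
    h = (b - a) / n
    interior = sum(f(a + i * h) for i in range(1, n))
    return (h / 2.0) * (f(a) + 2.0 * interior + f(b))

# f(x) = x^2 on [0, 1]: true integral is 1/3; with 100 segments the
# answer is correct to about 5 decimal places.
approx = composite_trapezoid(lambda x: x ** 2, 0.0, 1.0, 100)
```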
Simpson's Rules
Aside from applying the trapezoidal rule with finer segmentation,
another way to obtain a more accurate estimate of an integral is to
use higher-order polynomials to connect points.
For example, if there is an extra point midway between a and b,
the three points are connected with a parabola.
If there are two points equally spaced between a and b, the four
points can be connected with a third-order polynomial.
The formulae resulting from taking the integrals under these
polynomials are called Simpson's rules.
Simpson's 1/3 Rule
Simpson's 1/3 rule results when a second-order interpolating
polynomial is substituted into our integral equation, i.e.
I = ∫ (a to b) f(x) dx ≅ ∫ (a to b) f2(x) dx
If a and b are designated as x0 and x2 and f2(x) is represented by a
second-order Lagrange polynomial, the integral can be evaluated.
After integration and algebraic manipulation, the following
formula results:
I ≅ (h/3) [f(x0) + 4 f(x1) + f(x2)]
For this case, h = (b - a)/2, where n = 2 (i.e. 2 segments).
Substituting this into the above equation gives:
I ≅ (b - a) [f(x0) + 4 f(x1) + f(x2)] / 6
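As a sketch (test function mine), the rule is short to code, and it is exact for polynomials up to third order:

```python
def simpson13(f, a, b):
    """Simpson's 1/3 rule: I ≈ (h/3) * [f(x0) + 4 f(x1) + f(x2)], h = (b-a)/2."""
    h = (b - a) / 2.0
    return (h / 3.0) * (f(a) + 4.0 * f(a + h) + f(b))

# Exact for cubics: f(x) = x^3 on [0, 2] has true integral 4.
approx = simpson13(lambda x: x ** 3, 0.0, 2.0)
```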
The Multiple Application of Simpson's 1/3 Rule
Simpson's rule can also be improved by dividing the integration
interval into a number of segments of equal width h = (b - a)/n,
where n must be even.
The total integral is represented as:
I = ∫ (x0 to x2) f(x) dx + ∫ (x2 to x4) f(x) dx + ... + ∫ (x_{n-2} to xn) f(x) dx
Substituting Simpson's 1/3 rule for each pair of segments, combining
the terms and using h gives:
I ≅ (h/3) [f(x0) + 4 Σ (i = 1, 3, 5, ..., n-1) f(x_i) + 2 Σ (j = 2, 4, 6, ..., n-2) f(x_j) + f(xn)]
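The weight pattern 1, 4, 2, 4, ..., 2, 4, 1 can be coded directly (a sketch; integrand and n are illustrative):

```python
import math

def composite_simpson13(f, a, b, n):
    """Composite Simpson's 1/3 rule; n (number of segments) must be even.
    Odd-indexed interior points get weight 4, even-indexed get weight 2."""
    if n % 2 != 0:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4.0 if i % 2 == 1 else 2.0) * f(a + i * h)
    return (h / 3.0) * total

# sin(x) on [0, pi]: true integral is 2.
approx = composite_simpson13(math.sin, 0.0, math.pi, 10)
```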
Simpson's 3/8 Rule
In a manner similar to the derivation of the trapezoidal and
Simpson's 1/3 rules, a third-order Lagrange polynomial can be fitted
to four points and integrated.
This yields:
I ≅ (3h/8) [f(x0) + 3 f(x1) + 3 f(x2) + f(x3)]
where h = (b - a)/3. We call the above the 3/8 rule because h is
multiplied by 3/8.
It is the third Newton-Cotes closed integration formula.
Substituting h into the integral gives:
I ≅ (b - a) [f(x0) + 3 f(x1) + 3 f(x2) + f(x3)] / 8
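A sketch of the 3/8 rule (test case mine); like the 1/3 rule, it is exact for cubics:

```python
def simpson38(f, a, b):
    """Simpson's 3/8 rule: I ≈ (3h/8) * [f(x0) + 3f(x1) + 3f(x2) + f(x3)],
    with h = (b - a) / 3."""
    h = (b - a) / 3.0
    return (3.0 * h / 8.0) * (f(a) + 3.0 * f(a + h)
                              + 3.0 * f(a + 2.0 * h) + f(b))

# f(x) = x^3 on [0, 2]: true integral is 4.
approx = simpson38(lambda x: x ** 3, 0.0, 2.0)
```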
Higher-Order Newton-Cotes Closed Formula
Figure below shows the fourth and fifth-order of the Newton-Cotes
closed integration formula.
It must however be stressed that in engineering practice, the
higher order (greater than the third-order) formulae are rarely
used.
Simpson's rules are sufficient for most applications.
Integration with Unequal Segments
Before now, all formulae for numerical integration have been
based on equally spaced data points.
However, in practice, we sometimes have to deal with unequal size
segments e.g. experimentally derived data.
For such cases, one method is to apply the trapezoidal rule to each
segment and then sum the results:
I ≅ h1 [f(x0) + f(x1)]/2 + h2 [f(x1) + f(x2)]/2 + ... + hn [f(x_{n-1}) + f(xn)]/2
where h_i is the width of the individual segment i.
Note that the same approach was used for the multiple application
of the trapezoidal rule, only that h was constant then.
Here h_i changes because of the changing widths.
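The summation above maps directly onto tabulated data (a sketch; the sample points below are hypothetical measurements of a straight line, for which the rule is exact):

```python
def trapezoid_unequal(x, y):
    """Trapezoid rule for tabulated data at unequally spaced points:
    I ≈ sum over segments of h_i * [y_{i-1} + y_i] / 2, h_i = x[i] - x[i-1]."""
    total = 0.0
    for i in range(1, len(x)):
        total += (x[i] - x[i - 1]) * (y[i - 1] + y[i]) / 2.0
    return total

# Hypothetical uneven sample points of f(x) = 2x + 1 on [0, 1]:
xs = [0.0, 0.12, 0.22, 0.40, 1.0]
ys = [2.0 * v + 1.0 for v in xs]
approx = trapezoid_unequal(xs, ys)   # true integral is 2
```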
Open Integration Formula
Below are some Newton-Cotes open integration formulae.
Notice that in the formulae, the start and end points are not used:

  Segments (n)   Start point   End point
  2              x0            x2
  3              x0            x3
  4              x0            x4
  5              x0            x5
  6              x0            x6

Only the interior points are used in the formulae.
The open formulas are therefore not often used for definite integration.
Points to Note:
One of the challenges of the Newton-Cotes formulae in practice is
that the data points have to be relatively close together, and we
face limitations in going to the higher-order formulae because of
over-fitting problems.
Over-fitting intuitively means taking too much information from our
data and using it in our model.
We do not want this because these are approximate methods and
we are trying to leave out any information that does not really matter.
Our first choice is usually Simpson's 1/3 rule, followed by
Simpson's 3/8 rule, and then the trapezoid rule.
It is also possible to use a mixed composite Newton-Cotes formula by
combining the trapezoid rule and either of Simpson's rules.
Multi-Dimensional Newton-Cotes
Before now, we have been dealing with a surface or a plane and
therefore our integral is equivalent to the area of the surface.
We can apply our results to multi-dimensions where our integral is
equivalent to the volume.
In this case, we do a double integral:
One integral for one plane and then the second integral for the
other plane.
We consider the example on page 626 of Chapra, Ex 21.9.
Example 21.9, Chapra
The question is as follows:
Note that if we apply the analytical method, we get a result of 58.66,
which is exactly what we get with Simpson's 1/3 rule. This is because
Simpson's 1/3 rule gives exact results for polynomials up to third order.
The multi-dimensional problem is often the case in reality.
Errors and Newton-Cotes Formulae
The table below (omitted; see Chapra, Table 21.2) shows the truncation
errors for each formula:
If we look at the error for the trapezoid rule, we will see that the error
is proportional to the second derivative, i.e. Et = -(1/12) f''(ξ) h^3.
Simpson's 1/3 rule has error proportional to the fourth derivative, i.e.
Et = -(1/90) f^(iv)(ξ) h^5, which means it is more accurate than the
trapezoid rule.
Surprisingly, we would expect that Simpson's 3/8 rule would be more
accurate, but its error, Et = -(3/80) f^(iv)(ξ) h^5, is very close, i.e.
proportional to the fourth derivative too.
For Boole's rule, it does get better: the error is proportional to the
sixth derivative.
Therefore, in general, we prefer to always use Simpson's 1/3 rule because
its accuracy is of the same order as that of the 3/8 rule.
Integration of Equations
Before now, we considered tabular data for our integration but now
we will consider equations.
The limitation of tabular data is that we are limited to the number
of points given.
In contrast, if the function is available, you can generate as many
values of f(x) as required to obtain a better accuracy.
Romberg Integration and Richardson's Extrapolation
Romberg Integration is one technique that is designed to attain
efficient numerical integrals of functions.
It is similar to the techniques discussed before in the sense that it
is based on successive application of the trapezoidal rule.
However, through mathematical manipulations, better results can
be obtained with less effort.
We now look at Richardson's extrapolation, which is a special
case of Romberg integration.
Richardson's Extrapolation
Richardson's extrapolation uses two estimates of an integral to
compute a third, more accurate integral (which is still an
approximation).
The estimate and error associated with a multiple application of the
trapezoid rule are represented as:
I = I(h) + E(h)
where I is the exact integral, I(h) the trapezoidal approximation with
step size h, and E(h) the truncation error.
If we use a step size h1, then I = I(h1) + E(h1).
If we use a step size h2, then I = I(h2) + E(h2).
Therefore:
I(h1) + E(h1) = I(h2) + E(h2)
We deviate a little bit now:
The local truncation error for a single application of the
trapezoidal rule is:
E = -(1/12) f''(ξ) (b - a)^3
For multiple application of the trapezoidal rule, we sum the
individual errors for each segment to give:
E = -[(b - a)^3 / (12 n^3)] Σ (i=1 to n) f''(ξ_i) ------(*)
where f''(ξ_i) is the second derivative at a point ξ_i located in
segment i.
We can simplify the above equation by estimating the mean or
average value of the second derivative for the entire interval as:
f''_avg ≅ [Σ (i=1 to n) f''(ξ_i)] / n
Using this in eqn (*) gives:
E ≅ -[(b - a)^3 / (12 n^2)] f''_avg = -[(b - a) h^2 / 12] f''_avg
where h = (b - a)/n is assumed to be the step size.
To determine the ratio of the errors for h1 and h2, we have:
E(h1)/E(h2) ≅ [(b - a) h1^2 f''_avg / 12] / [(b - a) h2^2 f''_avg / 12]
Since (b - a) is the interval of integration, which is the same, and the
average second derivative does not depend on the step size, then:
E(h1)/E(h2) ≅ h1^2 / h2^2
Therefore,
E(h1) ≅ E(h2) (h1/h2)^2 -------(*)
Substituting (*) into I(h1) + E(h1) = I(h2) + E(h2) gives:
E(h2) ≅ [I(h1) - I(h2)] / [1 - (h1/h2)^2]
Thus we have developed an estimate for the truncation error in terms
of the integral estimates and their step sizes.
For the special case where the interval is halved, i.e. h2 = h1/2, the
eqn becomes:
E(h2) ≅ [I(h1) - I(h2)] / (1 - 4) = (1/3) [I(h2) - I(h1)]
Collecting terms, the improved estimate is:
I ≅ I(h2) + (1/3) [I(h2) - I(h1)] -----(**)
Intuitively, what are we saying?
If we have to integrate over the interval a and b using two
trapezoids, Romberg Integration says that if we do it again using 4
trapezoids, for instance, we can put the two approximations
together in a way that produce a more accurate approximation.
So Romberg integration says that I ≅ I(h2) + (1/3)[I(h2) - I(h1)].
So instead of keeping the estimate with the actual trapezoid rule
error E(h2), we use an improved estimate with a much smaller error.
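The halved-step special case (**) can be sketched directly (the integrand and segment counts below are my own test case):

```python
import math

def trap(f, a, b, n):
    """Composite trapezoid estimate with n equal segments."""
    h = (b - a) / n
    return (h / 2.0) * (f(a) + f(b)
                        + 2.0 * sum(f(a + i * h) for i in range(1, n)))

f, a, b = math.exp, 0.0, 1.0
exact = math.e - 1.0                       # true value of the integral

i_h1 = trap(f, a, b, 2)                    # coarse estimate, step h1
i_h2 = trap(f, a, b, 4)                    # step halved: h2 = h1/2
improved = i_h2 + (i_h2 - i_h1) / 3.0      # eq. (**)
```

The combined value is markedly closer to the true integral than either trapezoid estimate on its own, at no extra function evaluations.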
General Romberg Integration Algorithm
So, from (**), the general Romberg formula is:
I_{j,k} ≅ [4^(k-1) I_{j+1,k-1} - I_{j,k-1}] / [4^(k-1) - 1]
where I_{j+1,k-1} and I_{j,k-1} are the more and less accurate integrals
respectively and I_{j,k} is the improved integral.
k is the level of integration and j is used to distinguish between the
more (j + 1) and less (j) accurate estimates.
For Richardson's extrapolation, k = 2, i.e. the step size is halved
between the two estimates being combined.
With this method, we get higher accuracy with few evaluations.
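The whole Romberg table can be built in a few lines (a sketch; here the extrapolation index starts at 1 rather than 2, so the factor appears as 4^k instead of 4^(k-1), and the integrand is my own test case):

```python
import math

def romberg(f, a, b, levels=5):
    """Romberg integration: the first column holds composite-trapezoid
    estimates with 1, 2, 4, ... segments; each later column combines a
    finer and a coarser neighbour as (4**k * finer - coarser)/(4**k - 1)."""
    def trap(n):
        h = (b - a) / n
        return (h / 2.0) * (f(a) + f(b)
                            + 2.0 * sum(f(a + i * h) for i in range(1, n)))

    table = [[trap(2 ** j)] for j in range(levels)]
    for k in range(1, levels):
        for j in range(levels - k):
            finer, coarser = table[j + 1][k - 1], table[j][k - 1]
            table[j].append((4 ** k * finer - coarser) / (4 ** k - 1))
    return table[0][-1]   # most-extrapolated entry

approx = romberg(math.exp, 0.0, 1.0)   # true value is e - 1
```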
Adaptive Quadrature
The previous methods use equal step sizes.
However, consider a function over an interval [x, y] that is smooth
except between two interior points a and b, where it changes rapidly
(diagram omitted).
Using the coarse step size h_i from x to a and from b to y will be
okay, but we need a smaller step size to capture the changes that
occur between a and b.
Adaptive Quadrature refines the step sizes in areas where the
function changes rapidly.
The theoretical basis of this approach is illustrated in figures (a)
and (b) (see note).
The integral in figure (a) can be estimated using Simpson's 1/3 rule
to give an estimate I(h1).
As in Richardson's extrapolation, a more refined estimate I(h2) can be
obtained by halving the step size, that is, by applying the
multiple-application Simpson's 1/3 rule with n = 4.
Using an approach similar to Richardson's extrapolation, we can
derive an estimate for the error of the more refined estimate as a
function of the difference between the two estimates, i.e.
E(h2) ≅ (1/15) [I(h2) - I(h1)]
Therefore,
I ≅ I(h2) + (1/15) [I(h2) - I(h1)]
The equation turns out to be exactly Boole's rule.
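A common way to implement adaptive quadrature is the recursive scheme below (a sketch of the idea, not the lecture's exact algorithm): apply the coarse/refined comparison on each subinterval and recurse only where the error estimate is still too large.

```python
import math

def simpson(f, a, b):
    """Single-application Simpson's 1/3 rule on [a, b]."""
    h = (b - a) / 2.0
    return (h / 3.0) * (f(a) + 4.0 * f(a + h) + f(b))

def adaptive_simpson(f, a, b, tol=1e-8):
    """Compare one coarse Simpson estimate with two half-interval estimates;
    subdivide only where the (1/15) error estimate exceeds the tolerance."""
    whole = simpson(f, a, b)                 # I(h1)
    mid = (a + b) / 2.0
    halves = simpson(f, a, mid) + simpson(f, mid, b)   # I(h2)
    if abs(halves - whole) / 15.0 < tol:
        return halves + (halves - whole) / 15.0        # refined value
    return (adaptive_simpson(f, a, mid, tol / 2.0) +
            adaptive_simpson(f, mid, b, tol / 2.0))

approx = adaptive_simpson(math.sin, 0.0, math.pi)   # true value is 2
```

Smooth regions terminate after one or two levels, while rapidly changing regions are subdivided automatically, which is exactly the step-size refinement described above.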
Gauss Quadrature
The trapezoidal rule is based on taking the area under the straight
line connecting the function values at the ends of the integration
interval, i.e.
I ≅ (b - a) [f(a) + f(b)] / 2
See note for a better diagram.
We can redefine our trapezoid, as in figure (b), so
that the errors at a and b can account for the
neglected error c.
Gauss quadrature is the name for one class of
techniques used to implement this strategy.
This method is based on the method of
undetermined coefficients, so we will consider
that first.
Method of Undetermined coefficients
We will show with a simple example how to re-derive the
trapezoid rule using the above method.
The method says:
I ≅ c0 f(a) + c1 f(b)
If we can figure out the coefficients c0 and c1, then we can
determine our integral.
To do so, we require the formula to be exact for two simple cases,
taking the interval as [-(b - a)/2, (b - a)/2]:
For fig (a), the formula must integrate a constant, f(x) = 1, exactly:
c0 + c1 = ∫ 1 dx = b - a
For fig (b), the formula must integrate a straight line, f(x) = x, exactly:
-c0 (b - a)/2 + c1 (b - a)/2 = ∫ x dx = 0
Therefore the two equations give c0 = c1 = (b - a)/2, so that
I ≅ (b - a) [f(a) + f(b)] / 2, which is exactly the trapezoid rule.
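The two exactness conditions can be checked with a concrete interval (a sketch; I take [-1, 1], so b - a = 2, and the test line f(x) = 2x + 3 is my own):

```python
# Undetermined coefficients on [-1, 1] (b - a = 2):
#   exact for f(x) = 1:   c0 + c1 = 2   (true integral of 1 over [-1, 1])
#   exact for f(x) = x:  -c0 + c1 = 0   (true integral of x over [-1, 1])
# The second equation gives c0 = c1; the first then gives c0 = c1 = 1,
# i.e. (b - a)/2, reproducing the trapezoid rule.
c0 = c1 = 1.0

def two_point(f):
    """Apply I ≈ c0*f(a) + c1*f(b) with a = -1, b = 1."""
    return c0 * f(-1.0) + c1 * f(1.0)

# Exact for any straight line, e.g. f(x) = 2x + 3: true integral over [-1, 1] is 6.
approx = two_point(lambda x: 2.0 * x + 3.0)
```

Because both conditions hold, the formula reproduces the integral of every polynomial of degree at most one, which is precisely what the trapezoid rule guarantees.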