
Bisection Method

Applications of Bisection Method in CS
Machine Learning: determining the adequate population size
SOLUTIONS OF EQUATIONS IN ONE VARIABLE


We consider one of the most basic problems of numerical approximation, the root-
finding problem. This process involves finding a root, or solution, of an equation
of the form f (x) = 0, for a given function f.
We can solve equations:
x^3 + 4x^2 − 10 = 0 on the interval [1, 2],
x cos x − 2x^2 + 3x − 1 = 0 on the interval [1.2, 1.3],
(x − 2)^2 − ln x = 0 on the intervals [1, 2] and [e, 4].
BISECTION METHOD

The first technique to solve equations like above, based on the Intermediate Value
Theorem, is called the Bisection, or Binary-search, method.
Suppose f is a continuous function defined on [a, b], with f(a) and f(b) of opposite sign. The Intermediate Value Theorem implies that a number p exists in (a, b) with f(p) = 0.
The bisection method calls for repeatedly bisecting subintervals of [a, b] and, at each step, locating the half containing p.
To begin, set a1 = a and b1 = b, and let p1 = (a1 + b1)/2.

If f(a1) and f(p1) have different signs but f(p1) and f(b1) have the same sign, then the root lies in the interval [a1, p1]. Set a2 = a1 and b2 = p1, and let p2 = (a2 + b2)/2.

If f(a2) and f(p2) have the same sign but f(p2) and f(b2) have different signs, then the root lies in the interval [p2, b2]. Set a3 = p2 and b3 = b2, and find p3 = (a3 + b3)/2 along the same lines.

Determining the successive values p4, p5, p6, ..., we eventually reach a p_n for some n such that f(p_n) is very close to 0. This p_n is the required approximate root of the equation f(x) = 0.
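A minimal sketch of this procedure in Python (not part of the original notes; the function name, tolerance and iteration cap are illustrative choices):

def bisection(f, a, b, tol=1e-3, max_iter=100):
    """Approximate a root of f in [a, b] by repeated bisection.
    Assumes f is continuous and f(a), f(b) have opposite signs."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for n in range(1, max_iter + 1):
        p = (a + b) / 2              # midpoint of the current interval
        if f(p) == 0 or (b - a) / 2 < tol:
            return p, n              # exact root hit, or interval small enough
        if f(a) * f(p) < 0:          # sign change on [a, p]: root lies there
            b = p
        else:                        # otherwise the root lies in [p, b]
            a = p
    return (a + b) / 2, max_iter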
Example.
If we have an equation in the form cos(t) = 3t − 1, set it equal to 0, that is,
f(t) = cos(t) − 3t + 1 = 0.
Find the interval [a, b] = [0, 1] containing a solution of f(t) = cos(t) − 3t + 1, since f(0) = 2 > 0 and f(1) = −1.46 < 0, and then use
p_i = (a + b)/2, i = 1, 2, ...,
where [a, b] denotes the current interval at each step.
First Iteration (p1): First find
p1 = (a + b)/2 = (0 + 1)/2 = 0.5.
Now we substitute p1 = 0.5 in f(t) and get
f(0.5) = cos(0.5) − 3(0.5) + 1 = 0.3776 > 0.
Since f(0) > 0 has the same sign as f(0.5), we replace the endpoint 0 by 0.5.
We now have the new interval [0.5, 1], which completes the first iteration.
Second Iteration (p2):
p2 = (a + b)/2 = (0.5 + 1)/2 = 0.75.
We substitute p2 = 0.75 in f(t), which gives
f(0.75) = cos(0.75) − 3(0.75) + 1 = −0.5183 < 0.
Since f(1) < 0 has the same sign as f(0.75), we replace the endpoint 1 by 0.75 and obtain the new interval [0.5, 0.75]. This completes the second iteration.
In this way we proceed further to find p3, p4, p5, ..., until the interval becomes very small (say within 10^-3),
|b − a| < 10^-3 (= 0.001),
or any other given stopping criterion is met, such as
(i) |p_i − p_{i−1}| < ε or (ii) |f(p_n)| < ε.
We obtain p3, p4, p5, p6, ..., p12 = 0.625, 0.5625, 0.59375, 0.6094, ..., 0.6072,
with f(p12) = −0.00035, so p12 = 0.6072 is the solution to the required accuracy.
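Assuming the bisection sketch given above, this worked example can be reproduced as follows (a usage illustration; the printed values should match the hand computation up to rounding):

import math

# f(t) = cos(t) - 3t + 1 on [0, 1], stopping tolerance 1e-3 as in the example
root, n = bisection(lambda t: math.cos(t) - 3 * t + 1, 0.0, 1.0, tol=1e-3)
print(root, n)   # root is approximately 0.607 after roughly 10 bisections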
Exercise 2.1
Q3(a) Use the bisection method to find a solution accurate to within 10^-2 for
x^3 − 7x^2 + 14x − 6 = 0 on the interval [0, 1].
Solution: Let a = 0 and b = 1. For f(x) = x^3 − 7x^2 + 14x − 6, we know that
f(0) = −6 < 0 and f(1) = 2 > 0.
So for the first iteration: p1 = (a + b)/2 = (0 + 1)/2 = 0.5, with
f(0.5) = −0.625 < 0.


Since f(0) < 0 has the same sign as f(0.5), we replace 0 with 0.5 in the interval. We now have the new interval [0.5, 1], completing the first iteration.
Second Iteration: p2 = (a + b)/2 = (0.5 + 1)/2 = 0.75.
Now we substitute p2 = 0.75 in f(x) and get
f(0.75) = 0.9844 > 0.
Since f(1) > 0 has the same sign as f(0.75), we put 0.75 in place of 1 in the interval.
We can summarize these iterations in the form of a table as:

n    a_n      b_n      p_n       f(p_n)
1    0        1        0.5      -0.625
2    0.5      1        0.75      0.9844
3    0.5      0.75     0.625     0.2598
4    0.5      0.625    0.5625   -0.1619
5    0.5625   0.625    0.5938    0.0544
6    0.5625   0.5938   0.5782   -0.0521
7    0.5782   0.5938   0.5860    0.0015
8    0.5782   0.5860   0.5821   -0.025
9    0.5821   0.5860   0.5841   -0.0115
Since ERROR = |p9 − p8| = 0.002 < 10^-2, the approximate solution is x ≈ 0.58.


Q5(a) Use the bisection method to find a solution accurate to within 10^-5 for
x − 2^(-x) = 0 on the interval [0, 1].
Solution: Let a = 0 and b = 1. For
f(x) = x − 2^(-x), f(0) = −1 < 0 and f(1) = 0.5 > 0.

n     a_n        b_n        p_n        f(p_n)
1     0          1          0.5       -0.2071
2     0.5        1          0.75       0.1554
3     0.5        0.75       0.625     -0.0234
4     0.625      0.75       0.6875     0.0666
5     0.625      0.6875     0.65625    0.02172
6     0.625      0.65625    0.640625  -0.00081
7     0.640625   0.65625    0.64844    0.0105
8     0.640625   0.64844    0.64453    0.005
9     0.640625   0.64453    0.64258    0.002
10    0.640625   0.64258    0.64160    0.0006
11    0.640625   0.64160    0.64111   -0.00011
12    0.64111    0.64160    0.641355   0.000245
13    0.64111    0.641355   0.64123    0.00006
14    0.64111    0.64123    0.64117   -0.000023
15    0.64117    0.64123    0.64120    0.00002

So the approximated solution is 𝑥 = 0.64120.

Disadvantage
The Bisection method, though conceptually clear, has significant drawbacks. It is
relatively slow to converge (that is, N may become quite large before | 𝑝 − 𝑝𝑁 | is
sufficiently small), and a good intermediate approximation might be inadvertently
discarded.
Advantage
The method has the important property that it always converges to a solution.

TASK 1
You are designing a spherical tank to hold water for a small village in a developing
country. The volume of liquid it can hold can be
computed as
V = (π h^2 / 3) (3R − h)
where V = volume (m^3), h = depth of water in the tank (m), and R = the tank radius (m). If R = 3 m, to what depth must the tank be filled so that it holds 30 m^3? Determine your answer using a suitable initial guess.
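A hedged sketch of how TASK 1 could be set up as a root-finding problem for bisection (the bracketing interval [0, 3] and the iteration count are my assumptions, not part of the task statement):

import math

R, V_target = 3.0, 30.0

def g(h):
    # g(h) = V(h) - 30; a root of g is the required water depth
    return math.pi * h**2 * (3 * R - h) / 3 - V_target

# g(0) < 0 and g(3) > 0, so [0, 3] brackets a root (an assumed choice of interval)
a, b = 0.0, 3.0
for _ in range(50):
    p = (a + b) / 2
    if g(a) * g(p) <= 0:
        b = p
    else:
        a = p
print((a + b) / 2)   # depth in metres, roughly 2.0 m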
TASK 2

PRACTICE
Newton Divided-Difference Formula
A practical difficulty with Lagrange interpolation is that the work done in calculating
the approximation by the lower degree polynomial does not lessen the work needed to
calculate the higher degree polynomial approximation.
Newton Divided-Difference Table (for five nodes x0, x1, x2, x3, x4):
The entries f[x0], f[x1], f[x2], f[x3], f[x4] are the function values.
First divided differences:
f[x0, x1] = (f(x1) − f(x0))/(x1 − x0) = A
f[x1, x2] = (f(x2) − f(x1))/(x2 − x1) = B
f[x2, x3] = (f(x3) − f(x2))/(x3 − x2) = C
f[x3, x4] = (f(x4) − f(x3))/(x4 − x3) = D
Second divided differences:
f[x0, x1, x2] = (B − A)/(x2 − x0) = E
f[x1, x2, x3] = (C − B)/(x3 − x1) = F
f[x2, x3, x4] = (D − C)/(x4 − x2) = G
Third divided differences:
f[x0, x1, x2, x3] = (F − E)/(x3 − x0) = H
f[x1, x2, x3, x4] = (G − F)/(x4 − x1) = I
Fourth divided difference:
f[x0, x1, x2, x3, x4] = (I − H)/(x4 − x0) = J

Newton interpolating polynomials


𝑃1 (𝑥) = 𝑓(𝑥0 ) + (𝑥 − 𝑥0 )𝐴,
𝑃2 (𝑥) = 𝑓(𝑥0 ) + (𝑥 − 𝑥0 )𝐴 + (𝑥 − 𝑥0 )(𝑥 − 𝑥1 )𝐸
And 𝑃3 (𝑥) = 𝑓(𝑥0 ) + (𝑥 − 𝑥0 )𝐴 + (𝑥 − 𝑥0 )(𝑥 − 𝑥1 )𝐸 +(𝑥 − 𝑥0 )(𝑥 − 𝑥1 )(𝑥 − 𝑥2 )𝐻
Next, Newton interpolating polynomial of degree four is?
𝑃4 (𝑥) = 𝑓 (𝑥0 ) + (𝑥 − 𝑥0 )𝐴 + (𝑥 − 𝑥0 )(𝑥 − 𝑥1 )𝐸 +(𝑥 − 𝑥0 )(𝑥 − 𝑥1 )(𝑥 − 𝑥2 )𝐻
+(𝑥 − 𝑥0 )(𝑥 − 𝑥1 )(𝑥 − 𝑥2 )(𝑥 − 𝑥3 )𝐽
Examples
Exercise 3.1
Q1. For the given functions 𝑓(𝑥) = √1 + 𝑥 and 𝑥0 = 0, 𝑥1 = 0.6 and 𝑥2 = 0.9.
Construct interpolation polynomials of degree one and two to approximate 𝑓(0.45),
and find the absolute error.
x0 = 0,   f[x0] = 1
x1 = 0.6, f[x1] = √1.6
First divided difference: f[x0, x1] = (√1.6 − 1)/(0.6 − 0) = 0.44152
P1(x) = f(x0) + (x − x0) f[x0, x1] = 1 + 0.44152 x
P1(0.45) = 1.198684

x0 = 0,   f[x0] = 1
x1 = 0.6, f[x1] = √1.6
x2 = 0.9, f[x2] = √1.9
First divided differences:
f[x0, x1] = (√1.6 − 1)/(0.6 − 0) = 0.44152
f[x1, x2] = (√1.9 − √1.6)/(0.9 − 0.6) = 0.37831
Second divided difference:
f[x0, x1, x2] = (f[x1, x2] − f[x0, x1])/(x2 − x0) = −0.07023
P2(x) = f(x0) + (x − x0) f[x0, x1] + (x − x0)(x − x1) f[x0, x1, x2]
      = 1 + 0.44152 x − 0.07023 x (x − 0.6)
P2(0.45) = 1.203425

We can also use Newton backward divided difference formula as


𝑃2 (𝑥) = 𝑓(𝑥2 ) + (𝑥 − 𝑥2 )𝑓[𝑥1 , 𝑥2 ] + (𝑥 − 𝑥2 )(𝑥 − 𝑥1 )𝑓[𝑥0 , 𝑥1 , 𝑥2 ]
= √1.9 + 0.37831 (𝑥 − 0.9) − 0.07023(𝑥 − 0.9)(𝑥 − 0.6)
𝑃2 (0.45) = 1.203425
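A short Python sketch of the divided-difference computation (my own illustration; the in-place array update is one common way to build the coefficients used in the Newton form):

def divided_differences(x, y):
    """Return the Newton coefficients [f[x0], f[x0,x1], f[x0,x1,x2], ...]."""
    n = len(x)
    coef = list(y)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (x[i] - x[i - j])
    return coef

def newton_eval(x, coef, t):
    """Evaluate the Newton form of the polynomial at t by nested multiplication."""
    result = coef[-1]
    for k in range(len(coef) - 2, -1, -1):
        result = result * (t - x[k]) + coef[k]
    return result

# Data from the example above: f(x) = sqrt(1 + x) at x0 = 0, x1 = 0.6, x2 = 0.9
xs = [0.0, 0.6, 0.9]
ys = [1.0, 1.6 ** 0.5, 1.9 ** 0.5]
c = divided_differences(xs, ys)     # approximately [1, 0.44152, -0.07023]
print(newton_eval(xs, c, 0.45))     # approximately 1.203425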
Task 1
For a function 𝑓, the forward divided differences are given by

Determine the missing entries.


Practice
Exercise 3.2
Q1(a) Use Newton divided difference interpolating polynomials of degrees 1, 2, and 3
to approximate f(8.4), if f (8.1) =16.94410, f (8.3) =17.56492, f (8.6) =18.50515,
f(8.7) =18.82091.
Solution: First, let us make divided difference table:

Nodes and function values:
x0 = 8.1, f[x0] = 16.94410
x1 = 8.3, f[x1] = 17.56492
x2 = 8.6, f[x2] = 18.50515
x3 = 8.7, f[x3] = 18.82091
First divided differences:
f[x0, x1] = 3.1041
f[x1, x2] = 3.1341
f[x2, x3] = 3.1576
Second divided differences:
f[x0, x1, x2] = (f[x1, x2] − f[x0, x1])/(x2 − x0) = 0.06
f[x1, x2, x3] = (f[x2, x3] − f[x1, x2])/(x3 − x1) = 0.05875
Third divided difference:
f[x0, x1, x2, x3] = (f[x1, x2, x3] − f[x0, x1, x2])/(x3 − x0) = −0.002083


𝑃1 (𝑥) = 𝑓 (𝑥0 ) + (𝑥 − 𝑥0 )𝑓 [𝑥0 , 𝑥1 ] = 16.94410 + 3.1041 (𝑥 − 8.1)
𝑃1 (8.4) = 17.87533
𝑃2 (𝑥) = 𝑓 (𝑥0 ) + (𝑥 − 𝑥0 )𝑓[𝑥0 , 𝑥1 ] + (𝑥 − 𝑥0 )(𝑥 − 𝑥1 )𝑓[𝑥0 , 𝑥1 , 𝑥2 ]
= 16.94410 + 3.1041 (𝑥 − 8.1) + 0.06(𝑥 − 8.1)(𝑥 − 8.3)
𝑃2 (8.4) = 17.87713
𝑃3 (𝑥) = 𝑓 (𝑥0 ) + (𝑥 − 𝑥0 )𝑓[𝑥0 , 𝑥1 ] + (𝑥 − 𝑥0 )(𝑥 − 𝑥1 )𝑓[𝑥0 , 𝑥1 , 𝑥2 ]
+(𝑥 − 𝑥0 )(𝑥 − 𝑥1 )(𝑥 − 𝑥2 )𝑓[𝑥0 , 𝑥1 , 𝑥2 , 𝑥3 ]
= 16.94410 + 3.1041 (𝑥 − 8.1) + 0.06(𝑥 − 8.1)(𝑥 − 8.3) − 0.002083(𝑥 − 8.1)(𝑥 − 8.3)(𝑥 − 8.6)
𝑃3 (8.4) = 17.87714

Q1 (b). Use Newton divided difference interpolating polynomials of degrees 1, 2, and 3 to


approximate f (0.9), if f (0.6) = − 0.17694460, f (0.7) = 0.01375227, f (0.8) = 0.22363362,
f (1.0) = 0.65809197.
Then calculate the actual value of f(0.9) using the function f(x) = sin(e^x − 2) and compare
it with values obtained from polynomials of degrees 1, 2, and 3.
FIXED POINT ITERATION
A fixed point for a function is a number at which the value of the function does not
change when the function is applied.
Definition: The number p is a fixed point for a function g if g(p) = p.
Example: Determine any fixed points of
g(x) = x^2 − 2.
Solution: A fixed point p for g has the property
x = g(x) = x^2 − 2.
A fixed point for g occurs where the graph of y = g(x) intersects y = x.
So g has two fixed points, p = −1 and p = 2, as
−1 = g(−1) and 2 = g(2).
Why are we finding fixed points here?
An important application: if we rewrite an equation f(x) = 0 in the fixed-point form x = g(x) using simple algebraic manipulation, then the same x is also a solution of f(x) = x − g(x) = 0. This means the fixed points calculated above are also the solutions of the equation x^2 − x − 2 = 0.
The following theorem gives sufficient conditions for the existence and uniqueness
of a fixed point.

Theorem: (i) If g ∈ C[a, b] and g(x) ∈ [a, b] for all x ∈ [a, b], then g has at least
one fixed point in [a, b].
(ii) If, in addition, 𝑔′ (𝑥) exists on (a, b) and a positive constant k < 1 exists with
|𝑔′ (𝑥)| ≤ k, for all x ∈ (a, b), then there is exactly one fixed point in [a, b].
To approximate the fixed point of a function g, we choose an initial approximation p0 and generate the sequence {p_n}, n ≥ 0, by letting p_n = g(p_{n−1}) for each n ≥ 1.
Under the conditions of the theorem, this sequence converges to the unique fixed point p in [a, b].


Illustration: The equation x^3 + 4x^2 − 10 = 0 has a unique root in [1, 2]. There are many ways to change the equation to the fixed-point form x = g(x) using simple algebraic manipulation.
Here, to obtain the function g, we can manipulate the equation x^3 + 4x^2 − 10 = 0 in the following four ways:
x^3 = 10 − 4x^2   ⇒  (a) x = sqrt(10/x − 4x),
4x^2 = 10 − x^3   ⇒  (b) x = (1/2) sqrt(10 − x^3),
x^2 (x + 4) = 10  ⇒  (c) x = sqrt(10/(4 + x)),
and by some other means, (d) x = x − (x^3 + 4x^2 − 10)/(3x^2 + 8x).

To start the sequence for each function, let us take 𝑝0 = 1.5 from [1, 2].
We have summarized these four sequences in the form of a table as:

n     (a) x = sqrt(10/x − 4x)   (b) x = (1/2)sqrt(10 − x^3)   (c) x = sqrt(10/(4 + x))   (d) x = x − (x^3 + 4x^2 − 10)/(3x^2 + 8x)
0     1.5             1.5         1.5         1.5
1     0.8165          1.28695     1.34840     1.37333
2     2.9969          1.40254     1.36738     1.36526
3     sqrt(−8.65)     1.34546     1.36496     1.36523
4                     1.37517     1.365265    1.36523
5                     1.36009     1.365226
6                     1.36785     1.36523
7                     1.36389
8                     1.36592
9                     1.36488
10                    1.36541
11                    1.36514
12                    1.36528

We find that the sequence obtained from the function in (d) converges most rapidly to the solution.
The particular form of the function g(x) derived in (d) leads to a very famous iterative formula known as the Newton-Raphson method (next lecture).
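A small sketch of the fixed-point iteration for forms (b), (c) and (d) above (my own illustration; form (a) is omitted because, as the table shows, its third iterate would require the square root of a negative number):

import math

def fixed_point(g, p0, tol=1e-6, max_iter=50):
    """Iterate p_n = g(p_{n-1}) until successive iterates agree within tol."""
    p_prev = p0
    for n in range(1, max_iter + 1):
        p = g(p_prev)
        if abs(p - p_prev) < tol:
            return p, n
        p_prev = p
    return p_prev, max_iter

g_b = lambda x: 0.5 * math.sqrt(10 - x**3)
g_c = lambda x: math.sqrt(10 / (4 + x))
g_d = lambda x: x - (x**3 + 4 * x**2 - 10) / (3 * x**2 + 8 * x)

for g in (g_b, g_c, g_d):
    print(fixed_point(g, 1.5))   # all approach 1.36523; (d) needs the fewest iterations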
Course: Numerical Computing Course Instructor: Rohsha Tahir
Course Code: CSAL4263 Office: Cabin 01, Block C
Credit Hours: 3 Email: [email protected]

What do you understand from the word “Numerical Computing”?


In Calculus, we use functions (polynomials, trigonometric, exponential etc.) on
continuous domains and their derivatives, integrals etc. What if we have a data set of
points? Like population of US taken every 10 years as:

Year: 1960 1970 1980 1990 2000 2010
Population (in thousands): 179,323 203,302 226,542 249,633 281,422 308,746

Some of the methods you have learned as exact methods in Calculus, Linear Algebra, and Differential Equations will be solved numerically (using a computer) here. These methods are not exact, but they help us solve difficult problems with the help of computer algorithms to good accuracy.
The computer is a very powerful tool which is now used in almost every field: organizing a superstore, launching a rocket into space, building missile systems, constructing a building of 100 floors, bringing the world online, shopping, gaming, business, and much more. Most of the world's population uses a computer in one form or another. As a computer science student, one has to learn how all this work is done, what the difficulties behind it are, and what the limitations are. The issues discussed in this course of numerical computing are:
• How does a computer perform calculations?
• How is Mathematics the base of computing?
• What issues do you have to deal with while implementing an algorithm on a computer (error, resources, accuracy, stability, computer compatibility)?
• Which algorithms are better, and how can algorithms be improved?
• How is a real-life problem digitalized (making algorithms to implement on a computer, explaining the solution in terms of the real world)?
Applications in CS
We can also understand the importance of mathematics by following examples:
GAMING

An angry bird user is only concerned about the quality of the game and how much fun it
provides while playing. But for a CS student, it is important to learn the basics of the
different features of this game which are based on the concepts of numerical computing.
First, what are the inputs of this game?
1. The angle at which the bird is fired
2. The power or force
The difficult part is how, from these inputs, the computer gets the output, i.e. the trajectory of the bird and its impact on wood and other objects. What kind of mathematics and physics is involved? The trajectory of the bird is projectile motion: from the inputs, a differential equation for projectile motion is set up, and its solution gives the exact trajectory and impact of the bird. So this game uses Calculus, Differential Equations, Linear Algebra, and of course numerical computing (still to be learned). Students can try to visualize the different phases in making this game and broaden their vision for more innovations.
Similarly, in racing games like Need for Speed, speed is like a first derivative, while acceleration and braking are second derivatives. Mathematics is also used in handling movement graphics.

ANIMATION

Look at the different options in Corel Draw and other graphics and animation software. Features like zoom in, zoom out, rotation, drawing different shapes, coloring and spraying are all executed using algorithms based on Mathematics (Linear Algebra, coordinate systems, geometry, Statistics, Calculus). There may be designers who have used this software for ages and still ask "where is Mathematics being used?". Of course, the principles of numerical computing are very important for understanding how to solve any kind of problem. A beautifully written book, "Fundamentals of Computer Graphics, Third Edition", can increase your knowledge of how Mathematics is used in Computer Science.
Applications in Real life Problems
Nowadays, for mega projects we usually build digital models and test different possibilities in them before actually manufacturing anything.
Flying Jet Simulation for Pilots

A commercial jet is a very costly machine that carries the huge responsibility of passenger safety. For these reasons, a simulation of the actual jet is built, on which pilots can practice many scenarios: flying in snow, rain, and bad weather, and handling different emergency situations. Many things from numerical computing are used to make these simulations possible, such as aerodynamics, graphics, and fluid mechanics.

Engineering
A 100-story building cannot be built without planning. When there is so much at stake, we have to be sure: is the foundation strong enough to handle the weight of the building, does the design work, how much weather can the building withstand, and many other scenarios must be tested.

Importance of Error Analysis


In this course, when we solve problems using numerical methods, errors occur. We can neglect them in everyday scenarios (the time of a friend's dinner, the money paid at a petrol pump or a chicken shop), but in more sophisticated settings (aircraft, scientific research, missile systems) they cannot be ignored. To understand their importance, let us see some examples:
Millennium Bug
The millennium (Y2K) bug, which took many payment and computer systems offline, stems from the way computers stored dates.
For example, September 24, 1978 …… 092478
and December 31, 1999 …… 123199
So, what about January 1, 2000?
Would it be …… 010100?
There were estimates that this approach to the so-called “Y2K” bug was used for 80%
of the world’s computers since the alternative — converting systems to include all four
digits of a date — “requires a tedious line-by-line repair.”

The Explosion of the Ariane 5

On June 4, 1996, the Ariane 5 rocket launched by the European Space Agency exploded just forty seconds after lift-off from Kourou, French Guiana.
The rocket was on its first voyage, after a decade of development costing $7 billion, and its cargo was valued at $500 million.
A board of inquiry investigated the causes of the explosion and issued a report within two weeks. It turned out that the cause of the failure was a software error in a numeric conversion: a floating-point value was converted to an integer format too small to hold it, and the conversion failed.
The Patriot Missile Failure
In 1991 Gulf War, an American Patriot Missile in Saudi Arabia, failed to track and
intercept an incoming Iraqi Scud missile. The Scud struck an American Army barracks,
killing 28 soldiers and injuring around 100 other people.
A report by the General Accounting Office examined the cause of the failure. It turned out to be an inaccurate calculation of the time since boot, due to computer arithmetic errors. Specifically, the time in tenths of a second, as measured by the system's internal clock, was multiplied by 1/10 to produce the time in seconds. The small chopping error, when multiplied by the large number giving the time in tenths of a second, led to a significant error of about 0.34 seconds.
Nowadays, in the era of smartphones and fast internet, we talk about nanoseconds, not even microseconds.
So in each scenario, we should know our limitations and effects of failures in
calculations which we will learn in this course.
Analysis of the error involved in calculations is an important topic in numerical
analysis.
*In this course, scientific calculators should be with you in all the lectures.
Reference Material:
• Numerical Analysis by Burden and Faires, 10th Edition
• Numerical Analysis: Mathematics of Scientific Computing, 6th Edition, by David Ronald Kincaid and Elliott Ward Cheney
• Applied Numerical Analysis, 7th Edition, by Curtis F. Gerald
• Numerical Analysis, 2nd Edition, by Tim Sauer
Review of Calculus (Why in CS and in NM?)
A solid knowledge of calculus is essential for an understanding of the analysis of
numerical techniques, and more thorough review might be needed if you have been away
from this subject for a while.

Limit and Continuity

A function f defined on a set X of real numbers has the limit L at x0, written
lim_{x→x0} f(x) = L.
And f is continuous at x0 if
lim_{x→x0} f(x) = f(x0).

Differentiability
Let f be a function defined in an open interval containing x0. The function f is differentiable at x0 if
f′(x0) = lim_{x→x0} (f(x) − f(x0))/(x − x0)
exists; this limit is called the derivative of f at x0.
Interpolation and Polynomial Approximation
Applications
In Chapter 1 we found a polynomial approximation of a function using Taylor's formula about a point x0. What if we have a data set of two or more points, like the population of a country taken every 10 years?
Year: 1960 1970 1980 1990 2000 2010
Population (in thousands): 179,323 203,302 226,542 249,633 281,422 308,746
Estimating the population in, say, 1975 or 2008 from such data is called interpolation; for this we can use the Lagrange polynomial.

Geometric Meaning

Algorithm
Lagrange Interpolating Polynomials
A polynomial of degree one passing through the points (x0, y0) and (x1, y1) is the same as approximating a function f with y0 = f(x0) and y1 = f(x1).
The Lagrange polynomial for it is
P1(x) = L0(x) f(x0) + L1(x) f(x1), with
L0(x) = (x − x1)/(x0 − x1) and L1(x) = (x − x0)/(x1 − x0).
Similarly, for three points (x0, y0), (x1, y1) and (x2, y2), we have to calculate three coefficient polynomials L0(x), L1(x), L2(x), and the Lagrange polynomial is
P2(x) = L0(x) f(x0) + L1(x) f(x1) + L2(x) f(x2), with
L0(x) = (x − x1)(x − x2) / ((x0 − x1)(x0 − x2)),
L1(x) = (x − x0)(x − x2) / ((x1 − x0)(x1 − x2)), and
L2(x) = (x − x0)(x − x1) / ((x2 − x0)(x2 − x1)).

Next, can you guess the Lagrange polynomial for four points?
Yes:
for (x0, y0), (x1, y1), (x2, y2) and (x3, y3), the Lagrange polynomial is
P3(x) = L0(x) f(x0) + L1(x) f(x1) + L2(x) f(x2) + L3(x) f(x3), with
L0(x) = (x − x1)(x − x2)(x − x3) / ((x0 − x1)(x0 − x2)(x0 − x3)),
L1(x) = (x − x0)(x − x2)(x − x3) / ((x1 − x0)(x1 − x2)(x1 − x3)),
L2(x) = (x − x0)(x − x1)(x − x3) / ((x2 − x0)(x2 − x1)(x2 − x3)),
L3(x) = (x − x0)(x − x1)(x − x2) / ((x3 − x0)(x3 − x1)(x3 − x2)).

Example 1
Determine the linear Lagrange interpolating polynomial that passes through the points
(2, 4) and (5, 1).

Solution. Lagrange polynomial for (𝑥0 , 𝑦0 ) = (2, 4) and (𝑥1 , 𝑦1 ) = (5, 1) is


𝑃1 (𝑥) = 𝐿0 (𝑥)𝑓(𝑥0 ) + 𝐿1 (𝑥)𝑓(𝑥1 )
with
L0(x) = (x − x1)/(x0 − x1) = (x − 5)/(−3) and L1(x) = (x − x0)/(x1 − x0) = (x − 2)/3.
Hence,
P1(x) = −(1/3)(x − 5)·4 + (1/3)(x − 2)·1 = −x + 6.
* You can see that this is the same linear equation you would derive in calculus from the point-slope form using two points. The advantage is that the Lagrange polynomial is not restricted to degree one but extends to polynomials of degree n using n + 1 points of any given data.
Example 2
Use x0 = 2, x1 = 2.75 and x2 = 4 to find the Lagrange polynomial for f(x) = 1/x.
Use this polynomial to approximate 𝑓(3).
Solution: First determine the coefficient polynomials L0(x), L1(x), and L2(x):
L0(x) = (x − x1)(x − x2) / ((x0 − x1)(x0 − x2)) = (x − 2.75)(x − 4) / ((2 − 2.75)(2 − 4)) = (2/3)(x − 2.75)(x − 4),
L1(x) = (x − x0)(x − x2) / ((x1 − x0)(x1 − x2)) = (x − 2)(x − 4) / ((2.75 − 2)(2.75 − 4)) = −(16/15)(x − 2)(x − 4),
L2(x) = (x − x0)(x − x1) / ((x2 − x0)(x2 − x1)) = (x − 2)(x − 2.75) / ((4 − 2)(4 − 2.75)) = (2/5)(x − 2)(x − 2.75).
Therefore, the polynomial is
P2(x) = L0(x) f(x0) + L1(x) f(x1) + L2(x) f(x2)
      = (2/3)(x − 2.75)(x − 4)·(1/2) − (16/15)(x − 2)(x − 4)·(1/2.75) + (2/5)(x − 2)(x − 2.75)·(1/4)
⟹ P2(x) = (1/22) x^2 − (35/88) x + 49/44.
So P2(3) = 0.32955, while f(3) = 1/3 = 0.33333.
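A compact Python sketch of Lagrange interpolation (an illustration, not from the notes), which reproduces Example 2:

def lagrange(xs, ys, t):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at t."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        Li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                Li *= (t - xj) / (xi - xj)   # coefficient polynomial L_i evaluated at t
        total += Li * yi
    return total

xs = [2.0, 2.75, 4.0]
ys = [1 / x for x in xs]          # f(x) = 1/x
print(lagrange(xs, ys, 3.0))      # approximately 0.32955, versus f(3) = 0.33333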

Advantage over Taylor Polynomials


For the Taylor polynomials all the information used in the approximation is
concentrated at the single number 𝑥0 , so these polynomials will generally give
inaccurate approximations as we move away from 𝑥0 . This limits Taylor
polynomial approximation to the situation in which approximations are needed
only at numbers close to 𝑥0 . For ordinary computational purposes it is more
efficient to use methods that include information at various points. The primary use
of Taylor polynomials in numerical analysis is not for approximation purposes, but
for the derivation of numerical techniques and error estimation.
Problem 1
The following data of the velocity of a body is given as a function of time.
Time (s) 10 15 18 22 24
Velocity (m/s) 22 24 37 25 123
A quadratic Lagrange interpolating polynomial is formed using the three data points at t = 15, 18, and 22. Use this polynomial to determine at what time(s), in seconds, the velocity of the body is 26 m/s during the time interval from t = 15 to t = 22 seconds.
Problem 2
The following table shows the population of US from 1960 to 2010
Year: 1960 1970 1980 1990 2000 2010
Population (in thousands): 179,323 203,302 226,542 249,633 281,422 308,746
a. Use Lagrange Interpolation to approximate the population in the years 1950,
1975, 2014, and 2020.
b. The population in 1950 was approximately 150,697,360, and in 2014 the
population was estimated to be 317,298,000. How accurate do you think
your 1975 and 2020 figures are?
Problem 3
Let 𝑃3 (𝑥) be the Lagrange interpolating polynomial for the data
(0,0), (0.5, 𝑦), (1,3), & (2,2).
a. Find 𝑦 if the coefficient of 𝑥 3 in 𝑃3 (𝑥) is 6.
b. Find 𝑃3 (𝑥).

Practice
Exercise 3.1
Q. For 𝑥0 = 0, 𝑥1 = 0.6, 𝑥2 = 0.9 and 𝑓(𝑥) = √1 + 𝑥 , construct
Lagrange interpolation polynomial of degree 1 & 2 to approximate 𝑓(0.45) and
find error.
Solution: (i) Lagrange polynomial of degree 1
For 𝑥0 = 0, 𝑥1 = 0.6, 𝑓(𝑥0 ) = 1, 𝑓(𝑥1 ) = √1.6
L0(x) = (x − x1)/(x0 − x1) = (x − 0.6)/(−0.6) = −(5/3)(x − 0.6)
and L1(x) = (x − x0)/(x1 − x0) = (x − 0)/0.6 = (5/3) x.
Hence, P1(x) = L0(x) f(x0) + L1(x) f(x1) = −(5/3)(x − 0.6)·1 + (5/3) x·√1.6
⟹ P1(0.45) = 1.19868,
while f(0.45) = 1.20416.
So, Absolute Error = 0.00548.
(ii) Lagrange polynomial of degree 2
For 𝑥0 = 0, 𝑥1 = 0.6, 𝑥2 = 0.9, 𝑓(𝑥0 ) = 1, 𝑓(𝑥1 ) = √1.6, 𝑓(𝑥2 ) = √1.9
L0(x) = (x − x1)(x − x2) / ((x0 − x1)(x0 − x2)) = (x − 0.6)(x − 0.9) / ((−0.6)(−0.9)),
L1(x) = (x − x0)(x − x2) / ((x1 − x0)(x1 − x2)) = x(x − 0.9) / ((0.6)(−0.3)),
and L2(x) = (x − x0)(x − x1) / ((x2 − x0)(x2 − x1)) = x(x − 0.6) / ((0.9)(0.3)).
Therefore, P2(x) = L0(x) f(x0) + L1(x) f(x1) + L2(x) f(x2)
= [(x − 0.6)(x − 0.9)/0.54]·1 − [x(x − 0.9)/0.18]·√1.6 + [x(x − 0.6)/0.27]·√1.9
⟹ P2(0.45) = 1.20342.
So, Absolute Error = 0.00074.
NEWTON’S (OR NEWTON-RAPHSON) METHOD
Newton’s (or the Newton-Raphson) method is one of the most powerful and well-
known numerical methods for solving a root-finding problem. There are many
ways of introducing Newton’s method.
If we only want an algorithm, we can consider the technique graphically, as is
often done in calculus. Another possibility is to derive Newton’s method as a
technique to obtain faster convergence than offered by other types of functional
iterations. A third means of introducing Newton’s method, which is discussed next,
is based on Taylor polynomials. We will see there that this particular derivation
produces not only the method, but also a bound for the error of the approximation.

Algorithm

Suppose that f ∈ C2[a, b]. Let p0 ∈ [a, b] be an approximation to p such that


𝑓 ′ (𝑝0 ) ≠ 0 and | p - p0| is “small.” Consider the first Taylor polynomial for f (x)
expanded about p0 and evaluated at x = p.
f(p) = f(p0) + f′(p0)(p − p0) + (f″(ξ(p))/2!)(p − p0)^2,
where ξ(p) lies between p and p0. Since f(p) = 0, this equation gives
0 = f(p0) + f′(p0)(p − p0) + (f″(ξ(p))/2!)(p − p0)^2.
Newton’s method is derived by assuming that since | p − p0| is small, the term
involving (p − p0)2 is much smaller, so
0 ≈ 𝑓(𝑝0 ) + 𝑓 ′ (𝑝0 )(𝑝 − 𝑝0 )
Solving for p gives
p ≈ p0 − f(p0)/f′(p0) ≡ p1.
This sets the stage for Newton's method, which starts with an initial approximation p0 and generates the sequence {p_n}, n ≥ 0, by
p_n = p_{n−1} − f(p_{n−1})/f′(p_{n−1}), for n ≥ 1.
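A minimal Newton iteration in Python (my own sketch; the derivative is supplied explicitly and the stopping test |p_n − p_{n−1}| < tol is one of the criteria mentioned for the bisection method):

import math

def newton(f, df, p0, tol=1e-6, max_iter=50):
    """Newton-Raphson iteration p_n = p_{n-1} - f(p_{n-1}) / f'(p_{n-1})."""
    p = p0
    for n in range(1, max_iter + 1):
        p_new = p - f(p) / df(p)
        if abs(p_new - p) < tol:
            return p_new, n
        p = p_new
    return p, max_iter

# The example that follows: f(x) = cos x - x with p0 = pi/4
print(newton(lambda x: math.cos(x) - x,
             lambda x: -math.sin(x) - 1,
             math.pi / 4))          # approximately (0.739085, 3)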
Geometric Meaning

Example
Use Newton’s method to approximate a root of
𝑓(𝑥) = cos 𝑥 − 𝑥 = 0 with 𝑝0 = 𝜋⁄4.

Solution. Putting f(x) = cos x − x and f′(x) = −sin x − 1 in Newton's formula, for n = 1 we get
p1 = p0 − f(p0)/f′(p0) = 0.739536.
For n = 2, p2 = p1 − f(p1)/f′(p1) = 0.739085,
and p3 = p2 − f(p2)/f′(p2) = 0.739085,
which gives p2 = p3 to the displayed decimal places, the required approximate solution of f(x) = 0.


* For p1 on a calculator, take p0 = x and enter the function
x − (cos x − x)/(−sin x − 1);
for p2, take x = 0.739536 in the same function; for p3, take x = 0.739085, and so on.

Advantage/Disadvantage
In a practical application, an initial approximation is selected and successive
approximations are generated by Newton’s method. These will generally either
converge quickly to the root, or it will be clear that convergence is unlikely.

Word Problem 1
The sum of two numbers is 20. If each number is added to its square root, the
product of the two sums is 155.55. Determine the two numbers to within 10−4 .

Word Problem 2
The accumulated value of a savings account based on regular periodic payments
can be determined from the annuity due equation,

In this equation, A is the amount in the account, P is the amount regularly


deposited, and i is the rate of interest per period for the n deposit periods. An
engineer would like to have a savings account valued at $750,000 upon retirement
in 20 years and can afford to put $1500 per month toward this goal. What is the
minimal interest rate at which this amount can be invested, assuming that the
interest is compounded monthly?

Practice
Q2. Let 𝑓(𝑥) = −𝑥 3 − cos 𝑥 and 𝑝0 = −1. Use Newton’s method to find 𝑝2 .
Could 𝑝0 = 0 be used?
Solution: Putting f(x) = −x^3 − cos x and f′(x) = −3x^2 + sin x
in Newton's formula p_n = p_{n−1} − f(p_{n−1})/f′(p_{n−1}),
for n = 1 we get p1 = p0 − f(p0)/f′(p0) = −0.88033,
and for n = 2, p2 = p1 − f(p1)/f′(p1) = −0.86568.
We cannot use p0 = 0, since f′(0) = 0 makes p1 undefined.


Q5(a) Use Newton's method to find solutions accurate to within 10^-5 for
(i) f(x) = x^3 − 2x^2 − 5 in [1, 4] and
(ii) f(x) = x^3 + 3x^2 − 1 in [−3, −2].
Ans. (i) We can take any point within the interval as p0; usually we take the midpoint. Here, for p0 = 2, we have the solution p5 = 2.69065.
(ii) For p0 = −2.5, we have the solution p3 = −2.87939.
Q6(c) Use the Newton-Raphson method to find a solution accurate to within 10^-3 for the equation:
2x cos(2x) − (x − 2)^2 = 0 for 2 ≤ x ≤ 3.
University of Central Punjab
Faculty of Information Technology

Program(s) to be Evaluated: BSCS

Course Description

Course Code CSAL4263


Course Title Numerical Computing
Credit Hours 3
Prerequisites by Course(s) and Topics: Calculus
Assessment Instruments with Weights (homework, quizzes, midterms, final, programming assignments, lab work, etc.): Quiz 15%, Assignment 10%, Class Assignment 10%, Midterm 20%, Final 45%
Semester Spring 2023
Course Instructor Rohsha Tahir
Email: [email protected]
Office Address: Block D, Floor 01.
Course Coordinator Rohsha Tahir
Office Hours Displayed before office
Plagiarism Policy All the parties involved will be awarded Zero in first instance.
Repeat of the same offense will result in (F) grade.
Marks will be uploaded on portal and can be contested within a week
or would be considered final.
Current Catalog Description: One of the principal objectives of this course is to enable students to understand how to transform a numerical problem into an algorithm and, further, to write computer programs in a language like C++. On completion of this course, students should know how computers implement different algorithms, and what the difficulties and possible errors of calculations done by computers are. Solution of non-linear equations, integration, interpolation and differentiation will be taught to students.
Textbook (or Laboratory Manual for Laboratory Courses): Numerical Analysis by Burden and Faires, 9th Edition
Reference Material: Applied Numerical Analysis by Curtis F. Gerald, 7th Edition; Numerical Analysis by Tim Sauer, 1st Edition

Course Goals Students should be:
1) able to assess the approximation techniques to formulate and
apply appropriate strategy to solve real world problems.
2) aware of the use of numerical computing in modern scientific
computing.
3) familiar with numerical solution of integration, differentiation,
linear equations, ordinary differential equations,
interpolations.
Topics Covered in the Attached
Course
Programming Assignments No
Done in the Course
Class Time Spent on (in credit hours): Theory 0.5, Problem Analysis 1.5, Solution Design 0.5, Social and Ethical Issues 0.5
Oral and Written Communications: At least 4 assignments will be submitted by each student.

Course Learning Outcomes (CLOs):
CLO 1: Students will be able to solve non-linear equations using various numerical methods. (Taxonomy Level C3; mapped to PLO-02)
CLO 2: Students will be able to apply appropriate numerical techniques for data interpolation. (Taxonomy Level C3; mapped to PLO-02)
CLO 3: Students will be able to analyze the accuracy of common numerical methods. (Taxonomy Level C4; mapped to PLO-03)

Weekly schedule (Week: Topic; Evaluation Instrument Used; CLO Achieved):
Week 1: Introduction to Numerical Computing. Discussion on the use and importance of Mathematics in computing and real life. Purpose of this course. Revision of the Mathematics needed in this course. Brief introduction to Algorithms. Revision of Functions, Continuity, Differentiability. Revision of some fundamental theorems of Calculus: Rolle's Theorem, Mean Value Theorem, Intermediate Value Theorem. Revision of some other mathematical ideas needed to study this course.
Week 2: Discussion of the main problems of Numerical Computing and how Mathematics can solve these problems. Taylor's polynomials as the solution to the 1st main problem of Numerical Computing. Theory of Taylor's polynomials and its uses. Truncation error. Approximation by Taylor's polynomial. Error bound and error analysis and its uses. (Class Assignment)
Week 3: Benefits and limitations of Computers/Algorithms. 2nd main problem of Numerical Computing. Physical memory, 32-bit and 64-bit IEEE standards for storing a number, floating point, chopping and round-off methods, calculation of errors, finite-digit arithmetic. Applications of these concepts in Computer Science. (Class Assignment)
Week 4: Bisection Method: method, geometric meaning, error analysis, application in Computer Science, drawbacks; order of convergence of this method. Fixed Point Iterative Algorithm: method, geometric meaning, error analysis, application in Computer Science, drawbacks; order of convergence of this method. Comparison of the Bisection Method and the Fixed Point Algorithm. (Quiz 1, Assignment 1; CLO 1)
Week 5: Newton's method: geometric meaning, error analysis, application in Computer Science, drawbacks; order of convergence of this method. Secant method: geometric meaning, error analysis, application in Computer Science, drawbacks; order of convergence of this method. Comparison of the Bisection Method, Fixed Point Algorithm, Newton's Method, and Secant Method. (Quiz 2; CLO 1)
Week 6: Introduction to interpolation; uses of interpolation in Computer Science and real life. Method of Lagrange: Lagrange polynomial, building the interpolation polynomial, uniqueness of the interpolation polynomial, geometric meaning, error analysis, application in Computer Science, drawbacks. (Assignment 2; CLO 2)
Week 7: Newton's Divided-Difference Formula: building the interpolation polynomial, geometric meaning, error analysis, application in Computer Science, drawbacks. Comparison of Lagrange interpolation and interpolation using Newton's Divided-Difference Formula. (Quiz 3; CLO 2)
Week 8: REVISION. MID TERM.
Week 9: Concept of differentiation, its uses in real life and Computer Science. Numerical differentiation, its purpose and applications. 3-Point and 5-Point Formulae: construction using the Lagrange polynomial, geometric meaning, method, uses and applications, drawbacks, error analysis, comparison of both formulae. (CLO 3)
Week 10: Concept of integration, its uses in real life and Computer Science. Numerical integration, its purpose and applications. Trapezoidal Rule, Simpson's Rule (basic and composite): construction using the Lagrange polynomial, geometric meaning, method, uses and applications, drawbacks, error analysis, comparison of both formulae, benefits of composite numerical integration. (Quiz 4, Assignment 3; CLO 3)
Week 11: Concept of Differential Equations and Initial Value Problems: definition, different types, solution of a DE, applications in real life, different real-life models explained in terms of DEs, different steps to prepare digital models of the real world, review of some exact methods to solve DEs which students have learned in the course of DE, numerical methods to solve DEs and their purpose. Euler's Method: purpose, method, geometric meaning, error analysis, applications in Computer Science, drawbacks. (Quiz 5; CLO 3)
Week 12: Introduction to Runge-Kutta Methods. Runge-Kutta Methods of Order 2: purpose, method, geometric meaning, error analysis, applications in Computer Science, drawbacks. Comparison of Euler's Method and RK-2. (Assignment 4; CLO 3)
Week 13: Runge-Kutta Methods of Order 4: purpose, method, geometric meaning, error analysis, applications in Computer Science, drawbacks. Comparison of Euler's Method, RK-2, and RK-4. (Class Assignment; CLO 3)
Week 14: RK-4 continued. (Class Assignment; CLO 3)
Week 15: REVISION.
Week 16: REVISION.
FINAL TERM

Review of Calculus

This lecture contains a short review of those topics from single-variable calculus that
will be needed in later topics. A solid knowledge of calculus is essential for an
understanding of the analysis of numerical techniques, and more thorough review
might be needed if you have been away from this subject for a while.

Limit and Continuity

A function f defined on a set X of real numbers has the limit L at x0, written
lim_{x→x0} f(x) = L.
And f is continuous at x0 if
lim_{x→x0} f(x) = f(x0).

Differentiability
Let f be a function defined in an open interval containing x0. The function f is differentiable at x0 if
f′(x0) = lim_{x→x0} (f(x) − f(x0))/(x − x0)
exists; this limit is called the derivative of f at x0.

Rolle’s Theorem
Suppose f ∈ C[a, b] and f is
differentiable on (a, b). If f (a) = f (b),
then a number c in (a, b) exists with
𝑓 ′ (𝑐) = 0.

Mean Value Theorem


If f ∈ C[a, b] and f is differentiable on (a, b), then a number c in (a, b) exists with
f′(c) = (f(b) − f(a))/(b − a).

Intermediate Value Theorem


If f ∈ C[a, b] and K is any number between
f (a) and f (b), then there exists a number c
in (a, b) for which f (c) = K.

* Learn the use of calculator, write a function and calculate its values at different
points.

Example
Show that x^5 − 2x^3 + 3x^2 − 1 = 0 has a solution in the interval [0, 1].
Solution:
Consider the function defined by
f(x) = x^5 − 2x^3 + 3x^2 − 1.
The function f is continuous on [0, 1]. In addition, f(0) = −1 < 0 and 0 < 1 = f(1).
The Intermediate Value Theorem implies that a number x exists, with 0 < x < 1, for which x^5 − 2x^3 + 3x^2 − 1 = 0.
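The sign check in this example is easy to confirm numerically; a two-line sketch (my own):

f = lambda x: x**5 - 2 * x**3 + 3 * x**2 - 1
print(f(0), f(1))   # -1 and 1: opposite signs, so a root lies in (0, 1) by the IVT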
Work to do
Exercise 1.1
Q1: Show that the following equations have at least one solution in the given intervals.
a. x cos x − 2x^2 + 3x − 1 = 0, [0.2, 0.3] and [1.2, 1.3]
b. (x − 2)^2 − ln x = 0, [1, 2] and [e, 4]
c. x − (ln x)^x = 0, [4, 5]

Q2: Find intervals containing solutions to the following equations.
a. x − 2^(-x) = 0
b. 2x cos(2x) − (x + 1)^2 = 0

Q3: Show that f′(x) is 0 at least once in the given intervals.
a. f(x) = 1 − e^x + (e − 1) sin(π x / 2), [0, 1]
b. f(x) = (x − 1) tan x + x sin(πx), [0, 1]
SECANT METHOD
The word secant is derived from
the Latin word secan, which
means to cut. The secant method
uses a secant line, a line joining
two points that cut the curve, to
approximate a root.
Algorithm
Newton's method is very good and fast, but its major weakness is the calculation of the derivative in
p_n = p_{n−1} − f(p_{n−1})/f′(p_{n−1}), for n ≥ 1.
To avoid computing f′(p_{n−1}) in Newton's method, we can replace it with an approximation. By definition,
f′(p_{n−1}) = lim_{x→p_{n−1}} (f(x) − f(p_{n−1}))/(x − p_{n−1}).
If p_{n−2} is close to p_{n−1}, then
f′(p_{n−1}) ≈ (f(p_{n−1}) − f(p_{n−2}))/(p_{n−1} − p_{n−2}).
Putting this approximation into Newton's formula gives
p_n = p_{n−1} − f(p_{n−1})(p_{n−1} − p_{n−2}) / (f(p_{n−1}) − f(p_{n−2})).
This technique is called the Secant method.
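A Python sketch of the secant iteration (illustrative only; it stops when successive iterates agree to within tol):

def secant(f, p0, p1, tol=1e-5, max_iter=50):
    """Secant iteration: Newton's method with f'(p) replaced by a difference quotient."""
    for n in range(2, max_iter + 1):
        p2 = p1 - f(p1) * (p1 - p0) / (f(p1) - f(p0))
        if abs(p2 - p1) < tol:
            return p2, n
        p0, p1 = p1, p2
    return p1, max_iter

# The example that follows: f(x) = x^3 - 2x^2 - 5 with p0 = 2, p1 = 3
print(secant(lambda x: x**3 - 2 * x**2 - 5, 2.0, 3.0))   # root approximately 2.69065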

Geometric Meaning
Example
Use the Secant method to find a solution accurate to within 10^-5 for f(x) = x^3 − 2x^2 − 5 in [1, 4].
Solution: For the Secant method, let us take p0 = 2 and p1 = 3 from [1, 4].
The formula for the Secant method is
p_n = p_{n−1} − f(p_{n−1})(p_{n−1} − p_{n−2}) / (f(p_{n−1}) − f(p_{n−2})).
* For a calculator, take p0 = x and p1 = y and enter the function
y − (y^3 − 2y^2 − 5)(y − x) / ((y^3 − 2y^2 − 5) − (x^3 − 2x^2 − 5));
for p2 take x = 2 and y = 3; for p3 take x = 3 and y = 2.55556, and so on.
So for n = 2 we have
p2 = p1 − f(p1)(p1 − p0)/(f(p1) − f(p0)) = 2.55556,
p3 = p2 − f(p2)(p2 − p1)/(f(p2) − f(p1)) = 2.66905,
𝑝4 = 2.69237
𝑝5 = 2.69063
𝑝6 = 2.69065 = 𝑝7 is the solution of f(x).

Advantage/Disadvantage
The convergence of the Secant method is much faster than functional iteration but
slightly slower than Newton’s method. This is generally the case. Newton’s
method or the Secant method is often used to refine an answer obtained by another
technique, such as the Bisection method, since these methods require good first
approximations but generally give rapid convergence.

Word Problem 1

Word Problem 2
The accumulated value of a savings account based on regular periodic payments
can be determined from the annuity due equation,

In this equation, A is the amount in the account, P is the amount regularly


deposited, and i is the rate of interest per period for the n deposit periods. An
engineer would like to have a savings account valued at $750,000 upon retirement
in 20 years and can afford to put $1500 per month toward this goal. What is the
minimal interest rate at which this amount can be invested, assuming that the
interest is compounded monthly?
Practice
Q7(b) Use the Secant method to find a solution of x^3 + 3x^2 − 1 = 0 accurate to within 10^-5 in [−3, −2].
Solve (Hint: p7 = −2.87939)

Q8(c) Use the Secant method to find a solution accurate to within 10^-3 for the equation:
2x cos(2x) − (x − 2)^2 = 0 for 2 ≤ x ≤ 3.
TAYLOR SERIES AND POLYNOMIALS

Taylor series and polynomials are of basic importance and used extensively in numerical
analysis, in approximating a differentiable function and its integral etc.

Taylor Series
Let f and its higher-order derivatives be continuous on [a, b] and let x0 be a fixed point in [a, b]. Then, for x ∈ [a, b], f(x) can be expressed as the infinite series (called the Taylor series)
f(x) = f(x0) + f′(x0)(x − x0) + (f″(x0)/2!)(x − x0)^2 + ⋯ + (f^(n)(x0)/n!)(x − x0)^n + (f^(n+1)(x0)/(n+1)!)(x − x0)^(n+1) + ⋯

Taylor Polynomial
If the Taylor series is truncated at the term (f^(n)(x0)/n!)(x − x0)^n, then the polynomial obtained is called the Taylor polynomial:
f(x) ≈ P_n(x) = f(x0) + f′(x0)(x − x0) + (f″(x0)/2!)(x − x0)^2 + ⋯ + (f^(n)(x0)/n!)(x − x0)^n
Theorem (Taylor’s Theorem)
Suppose f ∈ C^n[a, b], f^(n+1) exists on [a, b], and x0 ∈ [a, b]. For every x ∈ [a, b], there exists a number ξ(x) between x0 and x with
f(x) = P_n(x) + R_n(x),
where
P_n(x) = f(x0) + f′(x0)(x − x0) + (f″(x0)/2!)(x − x0)^2 + ⋯ + (f^(n)(x0)/n!)(x − x0)^n
and
R_n(x) = (f^(n+1)(ξ(x))/(n+1)!)(x − x0)^(n+1).
Here 𝑃𝑛 (𝑥) is the nth Taylor polynomial for f about x0, and 𝑅𝑛 (𝑥) is the remainder term
(or truncation error) associated with 𝑃𝑛 (𝑥). Since the number ξ(x) in the truncation error
𝑅𝑛 (𝑥) depends on the value of x at which the polynomial 𝑃𝑛 (𝑥) is being evaluated, it is a
function of the variable x. However, we should not expect to be able to explicitly determine
the function ξ(x). Taylor’s Theorem simply ensures that such a function exists, and that its
value lies between x and x0.
In the case x0 = 0, the Taylor polynomial is often called a Maclaurin polynomial, and the
Taylor series is often called a Maclaurin series.
The term truncation error in the Taylor polynomial refers to the error involved in using a
truncated, or finite, summation to approximate the sum of an infinite series.
Example. Let f (x) = cos x and x0 = 0. Determine
(a) the second, third and fourth Taylor polynomial for f about x0; and use these polynomials
to approximate f (0.02) by finding the actual error.
(b) an upper bound for |f (0.02) − 𝑃𝑛 (0.02)| using the error formula 𝑅2 (𝑥), compare it to the
actual error.
Solution. Since f ∈ C∞(R), Taylor’s Theorem can be applied for any n ≥ 0. Also,
𝑓 ′ (𝑥) = − sin 𝑥 , 𝑓 ′′ (𝑥) = − cos 𝑥 , 𝑓 ′′′ (𝑥) = sin 𝑥 , and 𝑓 (4) (𝑥) = cos 𝑥,
So
𝑓 (0) = 1, 𝑓 ′ (0) = 0, 𝑓 ′′ (0) = −1, 𝑓 ′′′ (0) = 0, 𝑎𝑛𝑑 𝑓 (4) (0) = 1.

(a) For n = 2 and x0 = 0, we have
P2(x) = f(0) + f′(0)x + (f″(0)/2!)x^2 = 1 − (1/2)x^2.
To approximate f(0.02), we use P2(x) at x = 0.02, which gives P2(0.02) = 0.9998, while the exact value is cos 0.02 = 0.9998000067.
Therefore, Actual error = 6.67 × 10^-9.
For n = 3 and x0 = 0, we have
P3(x) = f(0) + f′(0)x + (f″(0)/2!)x^2 + (f‴(0)/3!)x^3 = 1 − (1/2)x^2,
so P3(0.02) = 0.9998, the same as P2(0.02).
For n = 4 and x0 = 0, we have
P4(x) = f(0) + f′(0)x + (f″(0)/2!)x^2 + (f‴(0)/3!)x^3 + (f^(4)(0)/4!)x^4 = 1 − (1/2!)x^2 + (1/4!)x^4
⟹ P4(0.02) = 0.99980001.
(b) To find an upper bound for the error, we use the formula
R_n(x) = (f^(n+1)(ξ(x))/(n+1)!)(x − x0)^(n+1),
where ξ(x) is some (generally unknown) number between 0 and x.
Therefore, the upper bound of the error for P2(x) will be
R2(x) = (f‴(ξ(x))/3!) x^3 = (sin ξ(x)/6) x^3.
The error bound is much larger than the actual error. This is due to the poor bound we used for |sin ξ(x)|.
From calculus, we have |sin x| ≤ |x|. Since 0 ≤ ξ(x) < 0.02, we could have used the fact that |sin ξ(x)| ≤ 0.02 in the error formula, producing the bound R2(0.02) ≤ 2.67 × 10^-8.
Similarly, the upper bound of the error for P3(x) will be
R3(x) = (f^(4)(ξ(x))/4!) x^4 = (cos ξ(x)/24) x^4.
Since |cos ξ(x)| ≤ 1, R3(0.02) ≤ 6.67 × 10^-9.
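A quick numerical check of this example (my own sketch), building the Maclaurin polynomial of cos x directly from the factorial coefficients:

import math

def maclaurin_cos(x, n):
    """Maclaurin polynomial of degree n for cos x: sum of (-1)^k x^(2k) / (2k)!."""
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
               for k in range(n // 2 + 1))

x = 0.02
for n in (2, 3, 4):
    p = maclaurin_cos(x, n)
    # errors are about 6.7e-9 for n = 2 and 3, and about 9e-14 for n = 4
    print(n, p, abs(math.cos(x) - p))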

Applications of Taylor’s Theorem


We can write a sufficiently differentiable function f(x) as an infinite series or approximate it by a finite polynomial.
Example (i)
e^x = 1 + x + x^2/2! + x^3/3! + ⋯ about x0 = 0, as a series, or
e^x ≈ 1 + x + x^2/2! as a polynomial of degree 2,
e^x ≈ 1 + x + x^2/2! + x^3/3! as a polynomial of degree 3.
Example (ii)
Find Taylor series and polynomials of degree 4 and 5 for the function sin 𝑥 about 𝑥0 = 0 and
use the polynomial to find the actual error at 𝑥 = 0.1 and 𝑥 = 0.5.
Solution: f(x) = sin x, f(0) = 0
f′(x) = cos x, f′(0) = 1
f″(x) = −sin x, f″(0) = 0
f‴(x) = −cos x, f‴(0) = −1
f^(4)(x) = sin x, f^(4)(0) = 0
f(x) = f(0) + f′(0)x + (f″(0)/2!)x^2 + (f‴(0)/3!)x^3 + (f^(4)(0)/4!)x^4 + ⋯
sin x = x − x^3/3! + x^5/5! − x^7/7! + ⋯
or sin x ≈ P4(x) = x − x^3/3! and sin x ≈ P5(x) = x − x^3/3! + x^5/5!.

To approximate f(0.1), we use P4(x) and P5(x) at x = 0.1, which give
P4(0.1) = 0.09983333 and P5(0.1) = 0.09983341667,
while the exact value is sin 0.1 = 0.09983341665.
Therefore, the actual error for P4(0.1) is 8.7 × 10^-8 and for P5(0.1) is 2 × 10^-11, respectively.
* But if we check the error at x = 0.5, with P5(0.5) = 0.4794271 and exact value sin 0.5 = 0.4794255, the actual error is 0.0000016: the error is greater at a point farther from x0 = 0.


Taylor Polynomials and Series Continued….

As the Taylor polynomial of degree n is
P_n(x) = f(x0) + f′(x0)(x − x0) + (f″(x0)/2!)(x − x0)^2 + ⋯ + (f^(n)(x0)/n!)(x − x0)^n
and the upper bound of the error is
R_n(x) = (f^(n+1)(ξ(x))/(n+1)!)(x − x0)^(n+1), for x0 ≤ ξ(x) ≤ x.
Q 9. Find P2(x) for the function f(x) = e^x cos x about x0 = 0.
(a) Use P2(0.5) to approximate f(0.5), finding the actual error.
(b) Find an upper bound for the error |f(0.5) − P2(0.5)| using the formula R2(x), and compare it to the actual error.
(c) Find an upper bound for the error |f(x) − P2(x)| using R2(x) on the interval [0, 1].
(d) Approximate ∫_0^1 f(x) dx using ∫_0^1 P2(x) dx.
Solution. As f(x) = e^x cos x,
f′(x) = e^x (cos x − sin x), f″(x) = −2 e^x sin x,
and f‴(x) = −2 e^x (cos x + sin x).
So f(0) = 1, f′(0) = 1, f″(0) = 0.
For n = 2 and x0 = 0, we have
P2(x) = f(0) + f′(0)x + (f″(0)/2!)x^2 = 1 + x.
(a) P2(0.5) = 1.5, while the exact value is f(0.5) = 1.4469. So, Actual error = 0.0531.

(b) The upper bound of the error for P2(x) will be
R2(x) = (f‴(ξ(x))/3!) x^3 = −(x^3/3) e^(ξ(x)) (cos ξ(x) + sin ξ(x)),
so |R2(0.5)| = |−(0.5^3)/3| · |e^(ξ(x)) (cos ξ(x) + sin ξ(x))| ≤ 0.103,
taking ξ(x) = 0.5, cos ξ(x) = 1 and sin ξ(x) = ξ(x).
(c) The upper bound of the error for P2(x) on the interval [0, 1] is
R2(x) = (f‴(ξ(x))/3!) x^3 = −(x^3/3) e^(ξ(x)) (cos ξ(x) + sin ξ(x))
≤ |−(1^3)/3| · |e^(ξ(x)) (cos ξ(x) + sin ξ(x))| = 1.812,
taking ξ(x) = 1, cos ξ(x) = 1, sin ξ(x) = ξ(x).


(d) To approximate ∫_0^1 f(x) dx, we use
∫_0^1 P2(x) dx = ∫_0^1 (1 + x) dx = 1.5,
while the exact value is ∫_0^1 e^x cos x dx = 1.378.
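A short check of parts (a) and (d) above (my own sketch; it uses the antiderivative e^x (sin x + cos x)/2 of f, which is not stated in the notes but is easy to verify by differentiation):

import math

f = lambda x: math.exp(x) * math.cos(x)
P2 = lambda x: 1 + x                        # the Taylor polynomial found above

print(abs(f(0.5) - P2(0.5)))                # actual error, about 0.0531

F = lambda x: math.exp(x) * (math.sin(x) + math.cos(x)) / 2   # antiderivative of f
print(F(1) - F(0), 1.5)                     # about 1.378 versus the P2 estimate 1.5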

Q. Let f(x) = √(1 − x) and x0 = 0. Determine
(a) the third and fourth Taylor polynomials for f about x0, and use these polynomials to approximate f(0.2), finding the actual error;
(b) an upper bound for the error using the formula R3(x), and compare it to the actual error;
(c) an approximation of ∫_0^0.5 f(x) dx using ∫_0^0.5 P3(x) dx.
Solution. As f(x) = √(1 − x),
f′(x) = −(1/2)(1 − x)^(−1/2), f″(x) = −(1/4)(1 − x)^(−3/2),
f‴(x) = −(3/8)(1 − x)^(−5/2), and f^(4)(x) = −(15/16)(1 − x)^(−7/2).
So f(0) = 1, f′(0) = −1/2, f″(0) = −1/4, f‴(0) = −3/8, and f^(4)(0) = −15/16.

(a) For n = 3 and x0 = 0, we have
P3(x) = f(0) + f′(0)x + (f″(0)/2!)x^2 + (f‴(0)/3!)x^3 = 1 − (1/2)x − (1/8)x^2 − (1/16)x^3,
so P3(0.2) = 0.8945, while the exact value is f(0.2) = √(1 − 0.2) = 0.8944272.
So, Actual error = 7 × 10^-5.
For n = 4 and x0 = 0, we have
P4(x) = f(0) + f′(0)x + (f″(0)/2!)x^2 + (f‴(0)/3!)x^3 + (f^(4)(0)/4!)x^4 = 1 − (1/2)x − (1/8)x^2 − (1/16)x^3 − (5/128)x^4
⟹ P4(0.2) = 0.8944375.
Therefore, Actual error = 1.03 × 10^-5.
(b) The upper bound of the error for P3(x) will be
R3(x) = (f^(4)(ξ(x))/4!) x^4 = −(5/128)(1 − ξ(x))^(−7/2) x^4,
so |R3(0.2)| ≤ 1.365 × 10^-4, taking ξ(x) = 0.2.
(c) To approximate ∫_0^0.5 f(x) dx, we use
∫_0^0.5 P3(x) dx = ∫_0^0.5 (1 − (1/2)x − (1/8)x^2 − (1/16)x^3) dx = 0.431315,
while the exact value is ∫_0^0.5 √(1 − x) dx = 0.430964.
Therefore, Actual error = 3.51 × 10^-4.
