Numerical Analysis

Third Grade
College of Education for Pure Sciences / Ibn Al-Haitham
University of Baghdad

By
Dr. Adil Rashid
Contents

1 Introduction 1
1.1 Errors 2
1.2 Computational Errors 4
1.3 ERRORS AND STABILITY 6
1.4 Taylor Series Expansions 12
1.5 Maclaurin Series 16

2 Solutions of Equations in One Variable 21
2.1 Bisection Technique 22
2.2 Matlab Built-In Function fzero 25
2.3 EXERCISE 28
2.4 Fixed-Point Iteration 29
2.5 EXERCISE 32
2.6 Newton-Raphson method 33
2.7 EXERCISE 37
2.8 System of Nonlinear Equations 38
2.9 EXERCISE 43
2.10 Fixed Point for System of Nonlinear Equations 44
2.11 EXERCISE 48

3 Systems of Linear Equations 49
3.1 Gauss elimination 50
3.2 EXERCISE 55
3.3 Gauss Jordan Method 55
3.4 EXERCISE 63
3.5 Matrix Inverse using Gauss-Jordan method 65
3.6 Cramer's Rule 67
3.7 EXERCISE 69
3.8 Iterative Methods: Jacobi and Gauss-Seidel 71
3.9 EXERCISE 78

4 Interpolation and Curve Fitting 80
4.1 General Interpolation 81
4.2 Polynomial Interpolation 84
4.3 Lagrange Interpolation 87
4.4 EXERCISE 94
4.5 Divided Differences Method 95
4.6 EXERCISE 100
4.7 Curve Fitting 101
4.8 Linear Regression 102
4.9 Parabolic Regression 110

5 Numerical Differentiation and Integration 116
5.1 Numerical Differentiation: Finite Differences 116
5.1.1 Finite Difference Formulas for f′(x) 118
5.1.2 Finite Difference Formulas for f″(x) 124
5.2 Numerical Integration 126
5.2.1 The Trapezoidal Rule 126
5.2.2 Simpson's Rule 130
5.2.3 Solution 133
5.2.4 EXERCISE 134
5.3 Simpson's 3/8 Rule 135
Chapter 1

Introduction

Numerical analysis is concerned with the development and analysis of methods for the numerical solution of practical problems. Traditionally, these methods have been mainly used to solve problems in the physical sciences and engineering. However, they are finding increasing relevance in a much broader range of subjects, including economics and business studies.
Similarly, we certainly cannot evaluate the integral

A = ∫_a^b e^(x²) dx

analytically, since no function exists which differentiates to e^(x²). Even when an analytical solution can be found, it may be of more theoretical than practical use. For example, if the solution of a differential equation

y″ = f(x, y, y′)

is expressed as an infinite sum of Bessel functions, then it is most unsuitable for calculating the numerical value of y corresponding to some numerical value of x.
1.1 Errors

Definition 1. Let x_T denote the true (exact) value of a quantity, and let x_A denote an approximation to it. The error in x_A is defined as

err(x_A) = x_T − x_A    (1.1)

The absolute error is defined as

Aerr(x_A) = |err(x_A)| = |x_T − x_A|    (1.2)

And the relative error is given by

rel(x_A) = Absolute error / True value = |x_T − x_A| / |x_T|,   x_T ≠ 0    (1.3)

Note that if the true value happens to be zero, x_T = 0, the relative error is regarded as undefined. The relative error is generally of more significance than the absolute error.
Example 1.1. Let x_T = 19/7 ≈ 2.714285 and x_A = 2.718281. Then

err(x_A) = x_T − x_A = 19/7 − 2.718281 ≈ −0.003996

Aerr(x_A) = |err(x_A)| ≈ 0.003996

rel(x_A) = Aerr(x_A)/|x_T| = 0.003996/2.714285 ≈ 0.00147
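As a quick check, the three error measures can be computed directly in MATLAB. The following is a minimal sketch (the variable names are ours, not part of the notes):

xT = 19/7;             % true value
xA = 2.718281;         % approximate value
err  = xT - xA;        % error, Eq. (1.1)
Aerr = abs(err);       % absolute error, Eq. (1.2)
rel  = Aerr/abs(xT);   % relative error, Eq. (1.3)
fprintf('err = %g, Aerr = %g, rel = %g \n', err, Aerr, rel);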
Example 1.2. Consider the following table:

x_T    x_A    Aerr(x_A)    rel(x_A)
100    99     1            0.01

Note that although the absolute error here is large, the relative error is much smaller. Because they convey a more specific type of information, relative errors are considered more significant than absolute errors.
1.2 Computational Errors

Numerical methods are procedures that allow for the efficient solution of a mathematically formulated problem, in a finite number of steps, to within an arbitrary precision. Once a method has been selected, a computer program is usually written for its implementation. The program is run to obtain numerical results, although this may not be the end of the story. The computed solution could indicate that the original mathematical model needs modifying, with a corresponding change in both the numerical algorithm and the program.

The solution of 'real problems' by numerical techniques usually involves the use of a digital computer or calculator. Determination of the eigenvalues of large matrices, for example, did not become a realistic proposition until computers became available, because of the amount of computation involved. Nowadays any numerical technique can at least be demonstrated on a microcomputer, although there are some problems that can only be solved using the speed and storage capacity of much larger machines.

2. Approximation errors

(a) Roundoff errors arise everywhere in numerical computation because of the finite precision arithmetic.
(b) Roundoff errors behave in a quite disorganized manner.
4. Truncation error: this arises whenever an expression is approximated by some type of a mathematical method. For example, suppose we use the Maclaurin series representation of the sine function:

sin α = Σ_{n odd} [(−1)^((n−1)/2) / n!] αⁿ = α − α³/3! + α⁵/5! − · · · + [(−1)^((m−1)/2) / m!] α^m + E_m

where E_m is the tail end of the expansion, neglected in the process, and known as the truncation error.
1.3 ERRORS AND STABILITY

A computer stores numbers using only a finite number of digits, so not every real number can be stored exactly. For example, the number 2/3 has the non-terminating decimal expansion

0.66666666666666 · · ·

which must be cut off somewhere when stored. The difference between the exact and stored values is called the rounding error.

Suppose that for a given real number α the digits after the decimal point are

d₁ d₂ · · · dₙ dₙ₊₁ · · ·

To round α to n decimal places (abbreviated to nD) we proceed as follows. If dₙ₊₁ < 5, then α is rounded down; all digits after the nth place are removed. If dₙ₊₁ ≥ 5, then α is rounded up; dₙ is increased by one and all digits after the nth place are removed. It should be clear that in either case the magnitude of the rounding error does not exceed 0.5 × 10⁻ⁿ.

In most situations the introduction of rounding errors into the calculations does not significantly affect the final results. However, in certain cases it can lead to a serious loss
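The rounding rule above can be imitated in MATLAB using the built-in round function. A small sketch (the helper name roundnD is ours):

% round a real number alpha to n decimal places (nD)
roundnD = @(alpha, n) round(alpha*10^n)/10^n;

alpha = 2/3;
r = roundnD(alpha, 4);      % gives 0.6667
fprintf('rounded: %.4f, rounding error: %g \n', r, abs(alpha - r));
% the rounding error is below 0.5e-4, as claimed above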
Figure 1.1: sketch of Example 1.4

of accuracy.

There are two reasons why the data of a problem may be inexact. The first is that data often comes from experimental measurements, so the values, however we obtain them, will be highly inaccurate and may be worthless. The second reason is that even if the data is exact it will not necessarily be stored exactly on a computer. Consequently, the problem which the computer is attempting to solve may differ slightly from the one originally posed. This may not matter for a well-behaved problem, but for a sensitive (ill-conditioned) problem the computed results may differ wildly from those expected.

Example 1.4. Consider the linear system

x + y = 2
x + 1.01y = 2.01

whose solution is the point of intersection of the two lines sketched in Figure 1.1. This point of intersection is sensitive to small movements in either of these lines, since they are nearly parallel. In fact, if the coefficient of y in the second equation is 1.00, the two lines are exactly parallel and the system has no solution. This is fairly typical of ill-conditioned problems. They are often close to 'critical' problems which either possess infinitely many solutions or no solution whatsoever.
Example 1.5. Consider the initial value problem

y″ − 10y′ − 11y = 0;   y(0) = 1,  y′(0) = −1

defined on x ≥ 0. The corresponding auxiliary equation has roots −1 and 11, so the general solution of the differential equation is

y = Ae⁻ˣ + Be¹¹ˣ

for arbitrary constants A and B. The particular solution which satisfies the given initial conditions is

y = e⁻ˣ

Now suppose the initial conditions are perturbed slightly to

y(0) = 1 + δ,   y′(0) = −1 + ε

The corresponding solution is

y = (1 + 11δ/12 − ε/12) e⁻ˣ + (δ/12 + ε/12) e¹¹ˣ

The term (δ + ε)e¹¹ˣ/12 is large compared with e⁻ˣ for x > 0, indicating that this problem is ill-conditioned.
Definition 2. A problem is said to suffer from inherent instability if small changes in its data lead to large changes in its solution. The degree of inherent stability depends on the size of the solution to the original problem as well as on the size of any changes in the data. Under these circumstances, one would say that the problem is ill-conditioned.

We now consider a different type of instability, which is a consequence of the method of solution rather than the problem itself.

Definition 3. A method is said to suffer from induced instability if small errors present at one stage of the method lead to large errors in subsequent stages, to such an extent that the final results are totally inaccurate.
Nearly all numerical methods involve a repetitive sequence of calculations, and so it is inevitable that small individual rounding errors accumulate as they proceed. However, the actual growth of these errors can occur in different ways. If, after n steps of the method, the total rounding error is approximately proportional to n, the growth is slow and usually harmless. If instead the error is magnified by a constant factor at each step, the growth is rapid: with a magnification factor of 10, for instance, it only takes about five steps before the sixth decimal place is affected. The second case is illustrated by the following example.
There are several methods for calculating the real roots of a polynomial equation Pn(x) = 0. Some of these are described later. An attractive idea would be to use these methods to estimate one of the real roots, α say, and then to divide Pn(x) by x − α to produce a polynomial of degree n − 1 which contains the remaining roots. This process can then be repeated until all of the roots have been located. This is usually referred to as the method of deflation. If α were an exact root of Pn(x) = 0, then the remaining n − 1 roots would, of course, be the zeros of the deflated polynomial of degree n − 1. However, in practice α might only be an approximate root, and in this case the zeros of the deflated polynomial can be very different from those of Pn(x). For example, consider the cubic

p₃(x) = x³ − 13x² + 32x − 20 = (x − 1)(x − 2)(x − 10)

and suppose that an estimate of its largest zero is taken as 10.1. If we divide p₃(x) by x − 10.1, the quotient is x² − 2.9x + 2.71, which has zeros 1.45 ± 0.78i. Clearly an error of 0.1 in the largest zero of p₃(x) has induced a large error into the calculated values of the remaining zeros. If instead we take 1.1 as an estimate of the smallest zero (again an error of 0.1) and divide p₃(x) by x − 1.1, the corresponding quadratic has zeros 1.9 and 10.0, which are perfectly acceptable. For this reason it is advisable to deflate using the computed zeros in increasing order of magnitude.
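This deflation experiment is easy to reproduce with MATLAB's deconv, which performs polynomial division. A short sketch:

p3 = [1 -13 32 -20];              % x^3 - 13x^2 + 32x - 20
[q, r] = deconv(p3, [1 -10.1]);   % divide by (x - 10.1)
disp(q)                           % quotient approx [1 -2.9 2.71]
disp(roots(q))                    % zeros approx 1.45 +/- 0.78i
[q2, r2] = deconv(p3, [1 -1.1]);  % divide instead by (x - 1.1)
disp(roots(q2))                   % zeros approx 10.0 and 1.9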
Whether or not computed results are acceptable depends not only on the degree of ill-conditioning involved but also on the context from which the problem is taken.
1.4 Taylor Series Expansions

Have you ever wondered how your pocket calculator works?

• How it can give you the sine (or cos, or cot) of any angle?
• How it can give you the square root (or cube root, or 4th root) of any positive number?
• How it can find the logarithm of any (positive) number you give it?

Does a calculator store every answer that every human may ever ask it? Actually, no. The pocket calculator just remembers special polynomials of the form

p(x) = a₀ + a₁x + a₂x² + · · · + aₙxⁿ    (1.4)

and substitutes whatever you give it into the polynomial.
Given an infinitely differentiable function f : ℝ → ℝ, defined in a region near the value x = a, its Taylor series expanded around a is

f(x) ≈ f(a) + f′(a)(x − a) + f″(a)(x − a)²/2! + f‴(a)(x − a)³/3! + · · ·

We can write this more conveniently using summation notation as:

f(x) ≈ Σ_{n=0}^∞ f⁽ⁿ⁾(a)(x − a)ⁿ/n!    (1.7)

By the Taylor series we can find a polynomial that gives us a good approximation to some function in the region near x = a. We need to find the first, second, third (and so on) derivatives of the function and evaluate them at the given value (x = a).
Let's see what a Taylor Series is all about with an example.

Example 1.8. Find the Taylor Expansion of f(x) = ln x near x = 10.

Figure 1.2: Graph of f(x) = ln(x)

Our aim is to find a good polynomial approximation to the curve in the region near x = 10. We need to use the Taylor Series with a = 10. The first term in the Taylor Series is f(a). In this example,

f(x) = ln x,   f(10) = ln 10 ≈ 2.302585

So we compute the derivatives and evaluate them at x = 10:

f′(x) = 1/x,     f′(10) = 1/10 = 0.1
f″(x) = −1/x²,   f″(10) = −1/10² = −0.01
f‴(x) = 2/x³,    f‴(10) = 2/10³ = 2 × 0.001
f⁗(x) = −6/x⁴,   f⁗(10) = −6/10⁴ = −6 × 0.0001

and so on; the pattern continues, since ln x is
infinitely differentiable. Now to substitute these values into
the Taylor Series:
f(x) ≈ f(a) + f′(a)(x − a) + f″(a)(x − a)²/2! + f‴(a)(x − a)³/3! + · · · + f⁽ⁿ⁾(a)(x − a)ⁿ/n! + · · ·

We have

ln(x) ≈ ln(10) + ln′(10)(x − 10) + ln″(10)(x − 10)²/2! + ln‴(10)(x − 10)³/3! + · · · + ln⁽ⁿ⁾(10)(x − 10)ⁿ/n! + · · ·

that is,

ln(x) ≈ 2.302585093 + 0.1(x − 10) + (−0.01/2!)(x − 10)² + (2 × 0.001/3!)(x − 10)³ + (−6 × 0.0001/4!)(x − 10)⁴ + · · ·

Expanding this all out and collecting like terms, we obtain the approximating polynomial plotted in Figure 1.3.
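Following the same idea as the MATLAB experiment for sin x in Section 1.5 below, one can compare ln x with this fourth-degree Taylor polynomial about a = 10. A minimal sketch:

x  = 5:0.01:15;
p4 = @(x) log(10) + 0.1*(x-10) - 0.01/2*(x-10).^2 ...
        + 0.002/6*(x-10).^3 - 0.0006/24*(x-10).^4;
plot(x, log(x), x, p4(x), 'LineWidth', 2)   % log is the natural logarithm in MATLAB
legend('ln(x)', 'Taylor polynomial about a = 10')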
Figure 1.3: Graph of the approximating polynomial, and f(x) = ln(x)

Home Work: by the same procedure we can find the Taylor series of log x near x = 1:

log x = Σ_{n=1}^∞ (−1)ⁿ⁺¹(x − 1)ⁿ/n = (x − 1) − (x − 1)²/2 + (x − 1)³/3 − (x − 1)⁴/4 + · · ·

1.5 Maclaurin Series

The Maclaurin series is simply the Taylor series expanded around a = 0:

f(x) = f(0) + f′(0)x + f″(0)x²/2! + f‴(0)x³/3! + · · · + f⁽ⁿ⁾(0)xⁿ/n! + · · ·

We can write this using summation notation as:

f(x) = Σ_{n=0}^∞ f⁽ⁿ⁾(0)xⁿ/n!    (1.8)
Example 1.9. Find the Maclaurin series of f(x) = sin(x).

Solution: We compute the derivatives of sin(x) and evaluate them at 0:

f(x) = sin(x),     f(0) = sin(0) = 0
f′(x) = cos(x),    f′(0) = cos(0) = 1
f″(x) = −sin(x),   f″(0) = −sin(0) = 0
f‴(x) = −cos(x),   f‴(0) = −cos(0) = −1.

Substituting these values into the Maclaurin series

f(x) = f(0) + f′(0)x + f″(0)x²/2! + f‴(0)x³/3! + · · · + f⁽ⁿ⁾(0)xⁿ/n! + · · ·

we have

sin(x) = sin(0) + sin′(0)x + sin″(0)x²/2! + sin‴(0)x³/3! + · · · + sin⁽ⁿ⁾(0)xⁿ/n! + · · ·

This gives us:

sin x = x − x³/3! + x⁵/5! − x⁷/7! + x⁹/9! − · · · = Σ_{n=0}^∞ (−1)ⁿ x^(2n+1)/(2n + 1)!

The following MATLAB code plots sin x together with successive partial sums of this series.
clc
clear
close
x1 = -3*pi:pi/100:3*pi;
y1 = sin(x1);
y2 = @(x) x;
y3 = @(x) x - x.^3/factorial(3);
y4 = @(x) x - x.^3/factorial(3) + x.^5/factorial(5);
y5 = @(x) x - x.^3/factorial(3) + x.^5/factorial(5) - x.^7/factorial(7);
y6 = @(x) x - x.^3/factorial(3) + x.^5/factorial(5) - x.^7/factorial(7) + x.^9/factorial(9);
y7 = @(x) x - x.^3/factorial(3) + x.^5/factorial(5) - x.^7/factorial(7) + x.^9/factorial(9) - x.^11/factorial(11);

subplot(3,2,1)
plot(x1, y1, x1, y2(x1), 'LineWidth', 2)
axis([-8 8 -3 3])
title('x')

subplot(3,2,2)
plot(x1, y1, x1, y3(x1), 'LineWidth', 2)
axis([-8 8 -3 3])
title('x - x^3/3!')

subplot(3,2,3)
plot(x1, y1, x1, y4(x1), 'LineWidth', 2)
axis([-8 8 -3 3])
title('x - x^3/3! + x^5/5!')

subplot(3,2,4)
plot(x1, y1, x1, y5(x1), 'LineWidth', 2)
axis([-8 8 -3 3])

subplot(3,2,5)
plot(x1, y1, x1, y6(x1), 'LineWidth', 2)
axis([-8 8 -3 3])

subplot(3,2,6)
plot(x1, y1, x1, y7(x1), 'LineWidth', 2)
axis([-8 8 -3 3])
title('x - x^3/3! + x^5/5! - x^7/7! + x^9/9! - x^{11}/11!')
Home Work: Use the same procedure as in previous example 1.9 to check the following Maclaurin series:

1/(1 − x) = Σ_{n=0}^∞ xⁿ = 1 + x + x² + x³ + · · ·   (when −1 < x < 1)

eˣ = Σ_{n=0}^∞ xⁿ/n! = 1 + x + x²/2! + x³/3! + · · ·

cos x = Σ_{n=0}^∞ (−1)ⁿ x^(2n)/(2n)! = 1 − x²/2! + x⁴/4! − · · ·
Chapter 2

Solutions of Equations in One Variable

One of the fundamental problems of mathematics is that of solving equations of the form

f(x) = 0    (2.1)

A value of x satisfying Equation (2.1) is called a root of the equation or a zero of f. Such equations rarely have closed-form solutions, so numerical methods are used to find them. Graphically, a solution, or a root, of Equation (2.1) refers to a point where the graph of y = f(x) crosses the x-axis.
2.1 Bisection Technique

This technique is based on the Intermediate Value Theorem. Suppose f is a continuous function defined on the interval [a, b], with f(a) and f(b) of opposite sign. The Intermediate Value Theorem implies that a number p exists in (a, b) with f(p) = 0. The method calls for a repeated halving of subintervals of [a, b] and, at each step, locating the half containing p. To begin, set a₁ = a and b₁ = b, and let p₁ be the midpoint of [a, b]; that is,

p₁ = a₁ + (b₁ − a₁)/2 = (a₁ + b₁)/2

1. If f(p₁) = 0, then p = p₁, and we are done.
2. If f(p₁) ≠ 0, then f(p₁) has the same sign as either f(a₁) or f(b₁).
   • If f(p₁) and f(a₁) have the same sign, then p ∈ (p₁, b₁). Set a₂ = p₁ and b₂ = b₁.
   • If f(p₁) and f(a₁) have opposite signs, then p ∈ (a₁, p₁). Set a₂ = a₁ and b₂ = p₁.

We then reapply the process to the interval [a₂, b₂], as illustrated in Figure 2.1.

We can select a tolerance ε > 0 and generate p₁, p₂, · · · , pN until one of the following conditions is met:

• |pN − pN−1| < ε,
• |pN − pN−1| / |pN| < ε,   pN ≠ 0, or
• |f(pN)| < ε.

When using a computer to generate approximations, it is good practice to set an upper bound on the number of iterations. This eliminates the possibility of entering an infinite loop. The method is implemented in the following MATLAB code.
% *********************************************
% ****   Bisection method to find a root   ***
% *********************************************
clc
clear
close all
f = @(x) x.^3 + 4*x.^2 - 10;
% f = @(x) (x+1)^2*exp(x^2-2)-1;
a = 1;
b = 2;
c = (a+b)/2;
e = 0.00001;
k = 1;
fprintf('  k      a           b          f(c)   \n');
fprintf(' ---  ---------   ---------   -------- \n');
while abs(f(c)) > e   % loop header lost in the source; stopping when |f(c)| <= e is one plausible choice
    c = (a+b)/2;
    if f(c)*f(a) < 0
        b = c;
    else
        a = c;
    end
    fprintf(' %3.0f  %10.8f  %10.8f  %10.8f \n', k, a, b, f(c));
    k = k + 1;
end
fprintf(' The approximated root is c= %10.10f \n', c);
  k      a           b          f(c)
 ---  ---------   ---------   --------
  1   1.00000000  1.50000000   2.37500000
  2   1.25000000  1.50000000  -1.79687500
  3   1.25000000  1.37500000   0.16210938
  4   1.31250000  1.37500000  -0.84838867
  5   1.34375000  1.37500000  -0.35098267
  6   1.35937500  1.37500000  -0.09640884
  7   1.35937500  1.36718750   0.03235579
  8   1.36328125  1.36718750  -0.03214997
 10   1.36425781  1.36523438  -0.01604669
 11   1.36474609  1.36523438  -0.00798926
 12   1.36499023  1.36523438  -0.00395910
 13   1.36511230  1.36523438  -0.00194366
 ...
>>
Example 2.3. The function f(x) = (x + 1)² e^(x²−2) − 1 has a root in [0, 1] because f(0) < 0 and f(1) > 0. Use the Bisection method to approximate this root.

2.2 Matlab Built-In Function fzero

MATLAB provides the built-in function fzero for finding a root of a function near a given starting point:

clc
clear
fun = @(x) x.^3+4*x.^2-10;   % function
x0 = 1;                      % initial point
x = fzero(fun, x0)

the result is:

x = 1.365230013414097
Theorem 2.4. Suppose that f ∈ C[a, b] and f(a)f(b) < 0. The Bisection method generates a sequence {pₙ}_{n=1}^∞ approximating a zero p of f with

|pₙ − p| < (b − a)/2ⁿ,   n ≥ 1

Proof. For each n ≥ 1, we have

b₁ − a₁ = (1/2)(b − a),   and p₁ ∈ (a₁, b₁)

b₂ − a₂ = (1/2)(b₁ − a₁) = (1/2²)(b − a),   and p₂ ∈ (a₂, b₂)

b₃ − a₃ = (1/2)(b₂ − a₂) = (1/2³)(b − a),   and p₃ ∈ (a₃, b₃)

and in general bₙ − aₙ = (1/2ⁿ)(b − a) with pₙ ∈ (aₙ, bₙ). Since the zero p also lies in (aₙ, bₙ), it follows that

|pₙ − p| < bₙ − aₙ = (b − a)/2ⁿ

so the sequence {pₙ}_{n=1}^∞ converges to p with rate of convergence O(1/2ⁿ).
It is important to realize that Theorem 2.4 gives only a bound for the approximation error, and that this bound might be quite conservative. For example, this bound applied to the problem in Example 2.1 ensures only that

|p − p₉| < (2 − 1)/2⁹ = 0.001953125 ≈ 2 × 10⁻³

but the actual error is much smaller:

|p − p₉| ≤ |1.365230013414097 − 1.365234375| ≈ 0.000004361585903 ≈ 4.4 × 10⁻⁶
Example 2.5. Determine the number of iterations necessary to solve f(x) = x³ + 4x² − 10 = 0 with accuracy 10⁻³ using a₁ = 1 and b₁ = 2.

Solution: We must find an integer N that satisfies

|pN − p| < 2⁻ᴺ(b − a) = 2⁻ᴺ(2 − 1) = 2⁻ᴺ < 10⁻³

One can use logarithms to any base, but we will use base 10. Since 2⁻ᴺ < 10⁻³ implies that log₁₀ 2⁻ᴺ < log₁₀ 10⁻³ = −3, we have

−N log₁₀ 2 < −3   and   N > 3/log₁₀ 2 ≈ 9.96

Hence, 10 iterations will ensure an approximation accurate to within 10⁻³.
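The same calculation can be done for any interval and tolerance. A one-line MATLAB sketch:

a = 1; b = 2; tol = 1e-3;
N = ceil(log2((b - a)/tol));                 % smallest N with (b-a)/2^N < tol
fprintf('N = %d iterations suffice \n', N);  % prints N = 10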
2.3 EXERCISE

1. Use the Bisection method to find p₃ for f(x) = √x − cos x on [0, 1].

2. Let f(x) = 3(x + 1)(x − 1/2)(x − 1). Use the Bisection method on the intervals [−2, 1.5] and [−1.25, 2.5] to find p₃.

3. Use the Bisection method to find solutions accurate to within 10⁻² for f(x) = x³ − 7x² + 14x − 6 = 0 on each of the intervals [0, 1], [1, 3.2] and [3.2, 4].

4. Find an approximation to √3 correct to within 10⁻⁴ using the Bisection Algorithm. Hint: Consider f(x) = x² − 3.
2.4 Fixed-Point Iteration

A fixed point for a function is a number at which the value of the function does not change when the function is applied.

Definition 4. The number p is a fixed point for a given function g if g(p) = p.

Suppose that the equation f(x) = 0 can be rearranged as

x = g(x)    (2.2)

Any solution of this equation is called a fixed point of g. An obvious iteration to try for the calculation of fixed points is

xₙ₊₁ = g(xₙ),   n = 0, 1, 2, · · ·    (2.3)

The value of x₀ is chosen arbitrarily and the hope is that the sequence x₀, x₁, x₂, · · · converges to a number α which will automatically satisfy equation (2.2). However, the iteration may or may not converge, as the following example demonstrates.

Example 2.6. Consider the quadratic equation

x² − 2x − 8 = 0

which has roots −2 and 4. Three possible rearrangements of this equation are

(a) xₙ₊₁ = √(2xₙ + 8)
(b) xₙ₊₁ = (2xₙ + 8)/xₙ
(c) xₙ₊₁ = (xₙ² − 8)/2
% **** Fixed point iteration for Example 2.6 ****
clc
clear
close all
xa = 5;   % initial value of root
xb = 5;
xc = 5;
fprintf('  k      Xa          Xb          Xc    \n');
fprintf(' ---  ---------   ---------   -------- \n');
for k = 1:1:6
    xa = sqrt(2*xa + 8);   % rearrangement (a) (this line was lost in the source; re-created to match column Xa)
    xb = (2*xb + 8)/xb;    % rearrangement (b)
    xc = (xc^2 - 8)/2;     % rearrangement (c)
    fprintf(' %2.0f  %10.8f  %10.8f  %10.8f \n', k, xa, xb, xc);
end

  k      Xa          Xb           Xc
 ---  ---------   ---------   --------
  4   4.00375413  4.05405405  131072.0000
  5   4.00093842  3.97333333  8589934592.0
  6   4.00023460  4.01342282  3.6893e+19
>>
Observe that the sequence converges for (a) and (b), but diverges for (c). This example highlights the need for a mathematical analysis of the method. Sufficient conditions for the convergence of the fixed point iteration are given in the following theorem (stated without proof).

Theorem 2.8. If g′ exists on an interval I = [α − A, α + A] containing the starting value x₀ and the fixed point α, then xₙ converges to α provided

|g′(x)| < 1   on I

We can now explain the results of Example 2.6.

(a) If g(x) = (2x + 8)^(1/2), then g′(x) = (2x + 8)^(−1/2). Near the fixed point α = 4 we have |g′(x)| ≈ 1/4 < 1, so Theorem 2.8 guarantees convergence, in agreement with column Xa in the table.

(b) If g(x) = (2x + 8)/x, then g′(x) = −8/x². Near α = 4, |g′(x)| ≈ 1/2 < 1, so the iteration converges, in agreement with column Xb in the table.

(c) If g(x) = (x² − 8)/2, then g′(x) = x. Near α = 4, |g′(x)| ≈ 4 > 1, so Theorem 2.8 cannot be applied, and in fact the iteration diverges.
Example 2.9. Solve the equation f(x) = x⁴ − x − 10 = 0 by the fixed point iteration method starting with x₀ = 1.5, stopping when |xₙ − xₙ₋₁| < 0.009.

Solution: The function f(x) has a root in the interval (1, 2) (Why?). Rearrange the equation as

xₙ₊₁ = g(xₙ) = (xₙ + 10)^(1/4)

Then

g′(x) = (x + 10)^(−3/4)/4

which achieves the condition

|g′(x)| ≤ 0.04139 < 1   on (1, 2)

so the iteration converges.
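A few lines of MATLAB confirm the iteration, assuming the rearrangement g(x) = (x + 10)^(1/4) used in the example and its stopping rule:

g = @(x) (x + 10).^(1/4);   % rearrangement of x^4 - x - 10 = 0
x = 1.5;
dx = 1;
while dx >= 0.009
    xnew = g(x);
    dx = abs(xnew - x);
    x = xnew;
end
fprintf('approximate root: %.6f \n', x);   % about 1.855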
2.5 EXERCISE

1. Use the fixed point iteration method to find the roots of

(a) x − cos x = 0
(b) x² + ln x = 0

correct to within 0.5 × 10⁻².
2.6 Newton-Raphson method

The Newton-Raphson method is one of the most popular techniques for finding roots of non-linear equations.

Figure 2.2: sketch of the Newton Raphson method

Suppose x₀ is an initial approximation to a root of f(x) = 0, and let x₁ be the root itself. Expanding f in a Taylor series about x₀ gives

0 = f(x₁) = f(x₀) + f′(x₀)(x₁ − x₀) + (f″(x₀)/2!)(x₁ − x₀)² + · · ·

If x₁ is close to x₀, we may consider x₁ as a root and take only the first two terms as an approximation:

0 = f(x₀) + f′(x₀)(x₁ − x₀)
(x₁ − x₀) = −f(x₀)/f′(x₀)
x₁ = x₀ − f(x₀)/f′(x₀)

So, we can find the new approximation x₁. Now we can repeat the whole process to find an even better approximation:

x₂ = x₁ − f(x₁)/f′(x₁)

Continuing in this way, we arrive at the following formula:

xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ),   n = 0, 1, 2, · · ·    (2.4)
Example 2.10. The Newton-Raphson iteration for finding the roots of

f(x) = eˣ − x − 2

is given by

xₙ₊₁ = xₙ − (e^(xₙ) − xₙ − 2)/(e^(xₙ) − 1) = (e^(xₙ)(xₙ − 1) + 2)/(e^(xₙ) − 1)

Figure 2.3: sketch of the Newton Raphson method for example 2.10
Matlab Code 2.11. Newton Raphson method

% ******** Newton Raphson method ************
% **** to find a root of the function f(x) ***
clc
clear
close all
f  = @(x) exp(x) - x - 2;   % the function of Example 2.10 (these two definitions were lost in the source and re-created)
fp = @(x) exp(x) - 1;       % its derivative
xa = -10;   % initial value for the first root
xb =  10;   % initial value for the second root
fprintf('   k       Xa          Xb \n');
fprintf(' ----  ----------  ---------- \n');
fprintf(' %6.0f %10.8f %10.8f \n', 0, xa, xb);
for k = 1:1:14
    if fp(xa) == 0          % guard against division by zero
        return
    elseif fp(xb) == 0
        return
    end
    xa = xa - f(xa)/fp(xa);
    xb = xb - f(xb)/fp(xb);
    fprintf(' %6.0f %10.8f %10.8f \n', k, xa, xb);
end

The result as the following table:

  k       Xa          Xb
 ---  -----------  ----------
  1   -1.99959138  9.00049942
  2   -1.84347236  8.00173312
  3   -1.84140606  7.00474864
 --   -----------  ----------
 13   -1.84140566  1.14619325
 14   -1.84140566  1.14619322
>>
D
close to α.
R
xn+1 = g(xn ) n = 0, 1, 2, · · ·
and the equation
r.
f (xn )
D
xn+1 = xn −
f 0 (xn )
shows that Newton’s method is a fixed point iteration with
f (x)
g(x) = x −
f 0 (x)
36
By the quotient rule,
aa
Hence by the continuity of f 00 , there exists an interval I =
af
[α − δ, α + δ], for some δ > 0, on which |g 0 (x)| < 1. Theorem 2.8
N
then guarantees convergence provided x0 ∈ I, i.e. provided
ad
x0 is sufficiently close to α.
an
2.7 EXERCISE

1. Use Newton's method to find the roots of

(a) x − cos x = 0
(b) x² + ln x = 0

2. Use Newton's method to find a root of

x³ + 4x² + 4x + 3 = 0

correct to within 10⁻⁶.
2.8 System of Nonlinear Equations

Consider a system of m nonlinear equations with m unknowns

f₁(x₁, x₂, · · · , xₘ) = 0
f₂(x₁, x₂, · · · , xₘ) = 0
⋮
fₘ(x₁, x₂, · · · , xₘ) = 0

where each fᵢ (i = 1, 2, ..., m) is a real valued function of m real variables. We shall only consider the generalization of Newton's method. In order to motivate the general case, consider a system of two nonlinear simultaneous equations in two unknowns given by

f(x, y) = 0
g(x, y) = 0    (2.5)

For example, the graphs of the equations

f(x, y) = x² + y² − 4 = 0
g(x, y) = 2x − y² = 0

are shown in Fig. 2.4. The roots of this system are the points of intersection of the two curves. If (xₙ, yₙ) is an approximation to a root (α, β), we can use Taylor's theorem for functions of two variables to deduce that

0 = f[α, β]
  = f[xₙ + (α − xₙ), yₙ + (β − yₙ)]
  = f(xₙ, yₙ) + (α − xₙ)fₓ(xₙ, yₙ) + (β − yₙ)f_y(xₙ, yₙ) + · · ·
Figure 2.4: sketch of example 2.13

and

0 = g[α, β]
  = g[xₙ + (α − xₙ), yₙ + (β − yₙ)]
  = g(xₙ, yₙ) + (α − xₙ)gₓ(xₙ, yₙ) + (β − yₙ)g_y(xₙ, yₙ) + · · ·

where fₓ denotes ∂f/∂x, f_y denotes ∂f/∂y, and similarly for g. Neglecting the higher order terms, and writing (xₙ₊₁, yₙ₊₁) in place of (α, β), produces the scheme

f(xₙ, yₙ) + (xₙ₊₁ − xₙ)fₓ + (yₙ₊₁ − yₙ)f_y = 0
g(xₙ, yₙ) + (xₙ₊₁ − xₙ)gₓ + (yₙ₊₁ − yₙ)g_y = 0    (2.7)

At a starting approximation (x₀, y₀), the functions f, fₓ, f_y, g, gₓ and g_y are evaluated. The linear equations are then solved for (x₁, y₁) and the whole process is repeated until convergence is obtained.

In matrix notation, equation (2.7) may be written as

[fₓ f_y; gₓ g_y] [xₙ₊₁ − xₙ; yₙ₊₁ − yₙ] = −[f; g]

where all of the functions are evaluated at (xₙ, yₙ). Hence

[xₙ₊₁ − xₙ; yₙ₊₁ − yₙ] = −[fₓ f_y; gₓ g_y]⁻¹ [f; g]

Or rewritten as

[xₙ₊₁; yₙ₊₁] = [xₙ; yₙ] − [fₓ f_y; gₓ g_y]⁻¹ [f; g]    (2.8)

The matrix

J = [fₓ f_y; gₓ g_y]

is called the Jacobian matrix. Equation (2.8) is the natural generalization of Newton's method, where division by f′ generalizes to pre-multiplication by J⁻¹. For a larger system of equations it is convenient to use vector notation.

Note: for a 2 × 2 matrix the inverse is

[a b; c d]⁻¹ = (1/(ad − bc)) [d −b; −c a]    (2.9)
Example 2.13. As an illustration of the above, consider the solution of

f(x, y) = x² + y² − 4 = 0
g(x, y) = 2x − y² = 0

starting with x₀ = y₀ = 1. In this case

f = x² + y² − 4,   fₓ = 2x,   f_y = 2y
g = 2x − y²,       gₓ = 2,    g_y = −2y

At (x₀, y₀) = (1, 1) we have f = −2 and g = 1, so the linear equations (2.7) for (x₁, y₁) become

2(x₁ − 1) + 2(y₁ − 1) = 2
2(x₁ − 1) − 2(y₁ − 1) = −1

which give x₁ = 1.25 and y₁ = 1.75, in agreement with the first row of the output table below.
% *********************************************
% **** Newton's method for a system of two ****
% **** nonlinear equations (Example 2.13)  ****
% *********************************************
clc
clear
close all
% Define the functions f and g
% and their partial derivatives
f  = @(x,y) x^2 + y^2 - 4;   % the function f(x,y)
g  = @(x,y) 2*x - y^2;       % the function g(x,y)
fx = @(x,y) 2*x;             % partial derivative of f with respect to x
fy = @(x,y) 2*y;             % partial derivative of f with respect to y
gx = @(x,y) 2;               % partial derivative of g with respect to x
gy = @(x,y) -2*y;            % partial derivative of g with respect to y
a = 1; b = 1;                % initial root value
fprintf('  n      Xn          Yn \n')
for k = 1:1:5
    X = [a; b];
    xn(k) = a; yn(k) = b;
    F = [f(a,b); g(a,b)];
    J = [fx(a,b), fy(a,b); gx(a,b), gy(a,b)];   % the Jacobian matrix
    X = X - inv(J)*F;
    a = X(1);
    b = X(2);
    fprintf(' %2.0f   %2.6f   %2.6f \n', k, a, b)   % (print statement lost in source; re-created)
end

 n     Xn        Yn
 1   1.250000  1.750000
 2   1.236111  1.581349
 3   1.236068  1.572329
 4   1.236068  1.572303
 5   1.236068  1.572303
>>
2.9 EXERCISE

1. The system

3x² + y² + 9x − y − 12 = 0
x² + 36y² − 36 = 0

has exactly four roots. Find these roots starting with (1, 1), (1, −1), (−4, 1) and (−4, −1). Stop when successive iterates differ by less than 10⁻⁷.

2. The system

4x³ + y − 6 = 0
x²y − 1 = 0

has three roots. Find these roots starting with (1, 1), (0.5, 5) and (−1, 5). Stop when successive iterates differ by less than 10⁻⁷.

3. Find the Taylor series (the first three nonzero terms) for the functions e^(−x²), 1/(2 + x) and e^(cos x).
2.10 Fixed Point for System of Nonlinear Equations

We now generalize fixed-point iteration to the problem of solving a system of m nonlinear equations in m unknowns

f₁(x₁, x₂, · · · , xₘ) = 0
f₂(x₁, x₂, · · · , xₘ) = 0
⋮
fₘ(x₁, x₂, · · · , xₘ) = 0

We can define fixed-point iteration for solving a system of nonlinear equations. First, we transform this system of equations into an equivalent system of the form

x₁ = g₁(x₁, x₂, · · · , xₘ)
x₂ = g₂(x₁, x₂, · · · , xₘ)
⋮
xₘ = gₘ(x₁, x₂, · · · , xₘ)

and then iterate

x₁ⁿ⁺¹ = g₁(x₁ⁿ, x₂ⁿ, · · · , xₘⁿ)
x₂ⁿ⁺¹ = g₂(x₁ⁿ, x₂ⁿ, · · · , xₘⁿ)
⋮
xₘⁿ⁺¹ = gₘ(x₁ⁿ, x₂ⁿ, · · · , xₘⁿ)

In particular, for a system of two equations

f(x, y) = 0
g(x, y) = 0    (2.10)

to solve this system by the fixed point iteration method, transform it into an equivalent system of the form

x = F(x, y)
y = G(x, y)    (2.11)

and iterate

xₙ₊₁ = F(xₙ, yₙ)
yₙ₊₁ = G(xₙ, yₙ)    (2.12)

A sufficient condition for the convergence of this iteration is

|Fₓ| + |F_y| < 1
|Gₓ| + |G_y| < 1

at all points of a region containing the root.
Example 2.15. Consider the solution of

f(x, y) = x³ + y³ − 6x + 3 = 0
g(x, y) = x³ − y³ − 6y + 2 = 0

Rearrange the system in the form (2.11):

x = F(x, y) = (x³ + y³ + 3)/6
y = G(x, y) = (x³ − y³ + 2)/6

so that

Fₓ = x²/2,   F_y = y²/2
Gₓ = x²/2,   G_y = −y²/2

Now consider that at the point (0.5, 0.5) we have

|Fₓ| + |F_y| = |x₀²/2| + |y₀²/2| = (0.5)²/2 + (0.5)²/2 = 0.25 < 1
|Gₓ| + |G_y| = |x₀²/2| + |−y₀²/2| = (0.5)²/2 + (0.5)²/2 = 0.25 < 1

so the convergence condition is satisfied at the point (0.5, 0.5). Then

x₁ = (x₀³ + y₀³ + 3)/6 = 0.5417
y₁ = (x₁³ − y₀³ + 2)/6 = 0.3390

(note that, as in the code below, the updated value x₁ is used when computing y₁). By the same procedure we have:

x₂ = 0.5330   y₂ = 0.3520
x₃ = 0.5325   y₃ = 0.3512

and so on.
% ********************************************
% **** Solve a system of nonlinear        ****
% **** equations by fixed point iteration ****
% ********************************************
clc
clear
close all
% Define the iteration functions F and G (named f and g below)
% and their partial derivatives
f  = @(x,y) (x^3 + y^3 + 3)/6;   % the function F(x,y)
g  = @(x,y) (x^3 - y^3 + 2)/6;   % the function G(x,y)
fx = @(x,y) x*x*0.5;             % partial derivative of F with respect to x
fy = @(x,y) y*y*0.5;             % partial derivative of F with respect to y
gx = @(x,y) x*x*0.5;             % partial derivative of G with respect to x
gy = @(x,y) -y*y*0.5;            % partial derivative of G with respect to y
a = 0.5; b = 0.5;                % initial root value
fprintf('  n      Xn          Yn \n')
fprintf(' %2.0f   %2.8f   %2.8f \n', 0, a, b)
for k = 1:1:8
    w1 = abs(fx(a,b) + fy(a,b));
    w2 = abs(gx(a,b) + gy(a,b));
    if w1 >= 1 || w2 >= 1   % convergence condition check (this test was lost in the source; re-created)
        break
    end
    a = f(a,b);
    b = g(a,b);
    fprintf(' %2.0f   %2.8f   %2.8f \n', k, a, b)
end

  n     Xn           Yn
  0   0.50000000   0.50000000
  1   0.54166667   0.33898775
  2   0.53298008   0.35207474
  3   0.53250741   0.35122633
  4   0.53238788   0.35126185
  5   0.53237312   0.35125757
  6   0.53237077   0.35125750
  7   0.53237043   0.35125745
  8   0.53237038   0.35125745
>>
2.11 EXERCISE

1. Solve problems 1 and 2 from exercise 2.9 by the fixed point method.

2. Solve the system

x = sin y
y = cos x

using Newton's method and the fixed point method, with starting values of your choice.
Chapter 3

Systems of Linear Equations

Many important problems in science and engineering require the solution of systems of simultaneous linear equations of the form

a₁₁x₁ + a₁₂x₂ + · · · + a₁ₙxₙ = b₁
a₂₁x₁ + a₂₂x₂ + · · · + a₂ₙxₙ = b₂
⋮                                    (3.1)
aₙ₁x₁ + aₙ₂x₂ + · · · + aₙₙxₙ = bₙ

where the coefficients aᵢⱼ and the right hand sides bᵢ are given numbers, and the quantities xᵢ are the unknowns to be determined. In matrix notation the system is written as

AX = b    (3.2)
3.1 Gauss elimination

Gauss elimination is used to solve a system of linear equations by transforming it to an upper triangular system (i.e. one in which all of the coefficients below the leading diagonal are zero) using elementary row operations. The solution of the upper triangular system is then found using back substitution. We shall describe the method in detail for the general example of a 3 × 3 system

a₁₁x₁ + a₁₂x₂ + a₁₃x₃ = b₁
a₂₁x₁ + a₂₂x₂ + a₂₃x₃ = b₂
a₃₁x₁ + a₃₂x₂ + a₃₃x₃ = b₃

In matrix notation this system can be written as

[a₁₁ a₁₂ a₁₃; a₂₁ a₂₂ a₂₃; a₃₁ a₃₂ a₃₃] [x₁; x₂; x₃] = [b₁; b₂; b₃]

STEP 1

The first step eliminates the variable x₁ from the second and third equations. This can be done by subtracting multiples m₂₁ = a₂₁/a₁₁ and m₃₁ = a₃₁/a₁₁ of row 1 from rows 2 and 3 respectively, producing the equivalent system

[a₁₁ a₁₂ a₁₃; 0 a₂₂⁽²⁾ a₂₃⁽²⁾; 0 a₃₂⁽²⁾ a₃₃⁽²⁾] [x₁; x₂; x₃] = [b₁; b₂⁽²⁾; b₃⁽²⁾]

where aᵢⱼ⁽²⁾ = aᵢⱼ − mᵢ₁a₁ⱼ and bᵢ⁽²⁾ = bᵢ − mᵢ₁b₁ for i = 2, 3.

STEP 2

The second step eliminates the variable x₂ from the third equation. This can be done by subtracting a multiple m₃₂ = a₃₂⁽²⁾/a₂₂⁽²⁾ of row 2 from row 3, producing the equivalent upper triangular system

[a₁₁ a₁₂ a₁₃; 0 a₂₂⁽²⁾ a₂₃⁽²⁾; 0 0 a₃₃⁽³⁾] [x₁; x₂; x₃] = [b₁; b₂⁽²⁾; b₃⁽³⁾]

where a₃₃⁽³⁾ = a₃₃⁽²⁾ − m₃₂a₂₃⁽²⁾ and b₃⁽³⁾ = b₃⁽²⁾ − m₃₂b₂⁽²⁾.

Since these row operations are reversible, the original system and the upper triangular system have the same solution. The upper triangular system is solved using back substitution. The last equation implies that

x₃ = b₃⁽³⁾/a₃₃⁽³⁾

and then the second and first equations give

x₂ = (b₂⁽²⁾ − a₂₃⁽²⁾x₃)/a₂₂⁽²⁾
x₁ = (b₁ − a₁₂x₂ − a₁₃x₃)/a₁₁

It is clear from the previous equations that the algorithm fails if any of the quantities aⱼⱼ⁽ʲ⁾ are zero, since these numbers are used as the denominators both in the multipliers mᵢⱼ and in the back substitution equations. These numbers are usually referred to as pivots. Elimination also produces poor results if any of the multipliers are greater than one in modulus. It is possible to prevent these difficulties by using row interchanges. At step j, the elements in column j which are on or below the diagonal are scanned. The row containing the element of largest modulus is called the pivotal row. Row j is then interchanged (if necessary) with the pivotal row. This strategy is known as partial pivoting.

It can, of course, happen that all of the numbers aⱼⱼ⁽ʲ⁾, aⱼ₊₁,ⱼ⁽ʲ⁾, · · · , aₙⱼ⁽ʲ⁾ are exactly zero, in which case the coefficient matrix does not have full rank and the system fails to possess a unique solution.
Example 3.1. To illustrate the effect of partial pivoting, consider the solution of

[0.61 1.23 1.72; 1.02 2.15 −5.51; −4.34 11.2 −4.25] [x₁; x₂; x₃] = [0.792; 12; 16.3]

working throughout to three significant figures. Without partial pivoting we proceed as follows:

Step 1: The multipliers are m₂₁ = 1.02/0.61 = 1.67 and m₃₁ = −4.34/0.61 = −7.11, which gives

[0.61 1.23 1.72; 0 0.10 −8.38; 0 20 8.0] [x₁; x₂; x₃] = [0.792; 10.7; 21.9]

Step 2: The multiplier is m₃₂ = 20/0.1 = 200, which gives

[0.61 1.23 1.72; 0 0.10 −8.38; 0 0 1690] [x₁; x₂; x₃] = [0.792; 10.7; −2120]

Solving by back substitution, we obtain

x₃ = −1.25,   x₂ = 2,   x₁ = 0.790

With partial pivoting we proceed as follows:

Step 1: Since |−4.34| > |0.610| and |1.02|, rows 1 and 3 are interchanged to get

[−4.34 11.2 −4.25; 1.02 2.15 −5.51; 0.61 1.23 1.72] [x₁; x₂; x₃] = [16.3; 12; 0.792]

The multipliers are m₂₁ = 1.02/(−4.34) = −0.235 and m₃₁ = 0.610/(−4.34) = −0.141, which gives

[−4.34 11.2 −4.25; 0 4.78 −6.51; 0 2.81 1.12] [x₁; x₂; x₃] = [16.3; 15.8; 3.09]

Step 2: Since |4.78| > |2.81|, no interchange is needed, and m₃₂ = 2.81/4.78 = 0.588, which gives

[−4.34 11.2 −4.25; 0 4.78 −6.51; 0 0 4.95] [x₁; x₂; x₃] = [16.3; 15.8; −6.20]

Solving by back substitution, we obtain

x₃ = −1.26,   x₂ = 1.60   and   x₁ = 1.61

(which agrees with the exact solution to three significant figures). However, the values obtained without partial pivoting are totally unacceptable; the value of x₁ is not even correct to one significant figure.
3.2 EXERCISE

Solve the following systems of linear equations using Gauss elimination (i) without pivoting (ii) with partial pivoting.

1.
0.005x₁ + x₂ + x₃ = 2
x₁ + 2x₂ + x₃ = 4
−3x₁ − x₂ + 6x₃ = 2

2.
x₁ − x₂ + 2x₃ = 5
2x₁ − 2x₂ + x₃ = 1
30x₁ − 2x₂ + 7x₃ = 20
3.3 Gauss Jordan Method

Convention: For these row operations, we will use the following notations:

• Rᵢ ←→ Rⱼ means: Interchange row i and row j.
• αRᵢ means: Replace row i with α times row i.
• Rᵢ + αRⱼ means: Replace row i with the sum of row i and α times row j.

The Gauss-Jordan elimination method to solve a system of linear equations is described in the following steps.

1. Write the extended matrix of the system.

2. Use row operations to transform the extended matrix so that it has the following properties:

(a) The rows (if any) consisting entirely of zeros are grouped together at the bottom of the matrix.
(b) In each nonzero row, the first nonzero element is a 1 (called a leading 1 or a pivot), and each column containing a leading 1 has zeros everywhere else.

3. Read the solution of the system from the final matrix.
Example 3.2. Solve the following system of equations using the Gauss Jordan elimination method.

x + y + z = 5
2x + 3y + 5z = 8
4x + 5z = 2

Solution: The extended matrix of the system is the following.

[1 1 1 | 5]
[2 3 5 | 8]
[4 0 5 | 2]

Use the row operations as follows:

R₂ = R₂ − 2R₁ and R₃ = R₃ − 4R₁:
[1 1 1 | 5]
[0 1 3 | −2]
[0 −4 1 | −18]

R₃ = R₃ + 4R₂, then R₃ = (1/13)R₃:
[1 1 1 | 5]
[0 1 3 | −2]
[0 0 1 | −2]

R₂ = R₂ − 3R₃, R₁ = R₁ − R₃, then R₁ = R₁ − R₂:
[1 0 0 | 3]
[0 1 0 | 4]
[0 0 1 | −2]

From this final matrix, we can read the solution of the system. It is

x = 3,   y = 4,   z = −2
Example 3.3. Solve the following system of equations using the Gauss Jordan elimination method.

x + 2y − 3z = 2
6x + 3y − 9z = 6
7x + 14y − 21z = 13

Solution: The extended matrix of the system is the following.

[1 2 −3 | 2]
[6 3 −9 | 6]
[7 14 −21 | 13]

Use the row operations R₂ = R₂ − 6R₁ and R₃ = R₃ − 7R₁:

[1 2 −3 | 2]
[0 −9 9 | −6]
[0 0 0 | −1]

We obtain a row whose elements are all zeros except the last one on the right. Therefore, we conclude that the system of equations is inconsistent, i.e., it has no solutions.
Example 3.4. Solve the following system of equations using the Gauss Jordan elimination method.

4y + z = 2
2x + 6y − 2z = 3
4x + 8y − 5z = 4

Solution: The extended matrix of the system is the following.

[0 4 1 | 2]
[2 6 −2 | 3]
[4 8 −5 | 4]

Use the row operations as follows:

R₁ ←→ R₂, then R₃ = R₃ − 2R₁:
[2 6 −2 | 3]
[0 4 1 | 2]
[0 −4 −1 | −2]

R₃ = R₃ + R₂, R₂ = (1/4)R₂, R₁ = R₁ − 6R₂, then R₁ = (1/2)R₁:
[1 0 −7/4 | 0]
[0 1 1/4 | 1/2]
[0 0 0 | 0]

We can stop here because of the form of the last matrix. It corresponds to the following system.

x − (7/4)z = 0
y + (1/4)z = 1/2

We can express the solutions of this system as

x = (7/4)z,   y = −(1/4)z + 1/2

Since there is no specific value for z, it can be chosen arbitrarily. This means that there are infinitely many solutions for this system. We can represent all the solutions by using a parameter t as follows.

x = (7/4)t,   y = −(1/4)t + 1/2,   z = t

For example, t = 1 gives the solution (x, y, z) = (7/4, 1/4, 1), and t = −2 gives the solution (x, y, z) = (−7/2, 1, −2).
Matlab code:

% ********************************************
% **** Solve a system of linear equation ****
% **     by Gauss elimination method       **
% ********************************************
clc
clear
close all
a = [ 3  4 -2 2  2
      4  9 -3 5  8
     -2 -3  7 6 10
      1  4  6 7  2];
[m, n] = size(a);
% m = Number of Rows
% n = Number of Columns
for j = 1:m-1
    for z = 2:m
        % Pivoting: if the pivot is zero, swap in another row
        if a(j,j) == 0
            t = a(j,:);
            a(j,:) = a(z,:);
            a(z,:) = t;
        end
    end
    for i = j+1:m
        a(i,:) = a(i,:) - a(j,:)*(a(i,j)/a(j,j));
    end
end
x = zeros(1, m);
% Back Substitution (works because x starts at zero and the
% rows are processed from the bottom up)
for s = m:-1:1
    c = 0;
    for k = 2:m
        c = c + a(s,k)*x(k);
    end
    x(s) = (a(s,n) - c)/a(s,s);
end
% Display the results
disp('Gauss elimination method:');
a
x'

The result as the following:

Gauss elimination method:

a =

    3.0000    4.0000   -2.0000    2.0000    2.0000
         0    3.6667   -0.3333    2.3333    5.3333
         0         0    5.6364    7.5455   11.8182
         0         0         0   -4.6129  -17.0323

ans =

   -2.1538
   -1.1538
   -2.8462
    3.6923

>>
For the Gauss-Jordan method we can use the following Matlab code:

% ********************************************
% **** Solve a system of linear equation ****
% **     by Gauss-Jordan method            **
% ********************************************
clc
clear
close all
a = [ 3  4 -2 2  2
      4  9 -3 5  8
     -2 -3  7 6 10
      1  4  6 7  2];
[m, n] = size(a);
% m = Number of Rows
% n = Number of Columns

% Forward elimination
for j = 1:m-1
    % Pivoting
    for z = 2:m
        if a(j,j) == 0
            t = a(j,:);      % (the source swapped row 1 here; row j is intended)
            a(j,:) = a(z,:);
            a(z,:) = t;
        end
    end
    for i = j+1:m
        a(i,:) = a(i,:) - a(j,:)*(a(i,j)/a(j,j));
    end
end

% Backward elimination (reduce above the diagonal)
for j = m:-1:2
    for i = j-1:-1:1
        a(i,:) = a(i,:) - a(j,:)*(a(i,j)/a(j,j));
    end
end

% Normalize the diagonal and read off the solution
for s = 1:m
    a(s,:) = a(s,:)/a(s,s);
    x(s) = a(s,n);
end
% Display the results
disp('Gauss-Jordan method:');
a
x'
The result as the following:

Gauss-Jordan method:

a =

    1.0000         0         0         0   -2.1538
         0    1.0000         0         0   -1.1538
         0         0    1.0000         0   -2.8462
         0         0         0    1.0000    3.6923

ans =

   -2.1538
   -1.1538
   -2.8462
    3.6923

>>
3.4 EXERCISE

1. Solve exercise 3.2 by the Gauss Jordan Method.

2. Solve the following system of equations using the Gauss Jordan elimination method.

x + y + 2z = 1
2x − y + w = −2
x − y − z − 2w = 4
2x − y + 2z − w = 0
3.5 Matrix Inverse using Gauss-Jordan method

Given a matrix A of order (n × n), its inverse A⁻¹ is the matrix with the property that AA⁻¹ = I = A⁻¹A. Note the following identities:

1. (A⁻¹)⁻¹ = A
2. (Aᵀ)⁻¹ = (A⁻¹)ᵀ
3. (AB)⁻¹ = B⁻¹A⁻¹

Moreover, if A is invertible, then the solution to the system of linear equations AX = b can be written as X = A⁻¹b. We can find the inverse of a matrix A using elementary row operations, as follows (see Figure 3.1):

Figure 3.1: diagram of finding the inverse of a matrix using elementary row operations

1. Form the partitioned matrix [A | I], where I is the n × n identity matrix.
2. Use Gauss Jordan elimination steps on the partitioned matrix.
3. If done correctly (A has an inverse), the resulting partitioned matrix will take the form [I | A⁻¹].
4. Double check your work by making sure that AA⁻¹ = I.

Below is a demonstration of this process:
Example 3.7. Find the inverse of the matrix A = [3 2 0; 1 −1 0; 0 5 1] using the Gauss-Jordan method.

Solution: The partitioned matrix of the system is the following.

[3 2 0 | 1 0 0]
[1 −1 0 | 0 1 0]
[0 5 1 | 0 0 1]

Use the row operations as follows:

R₁ ←→ R₂:
[1 −1 0 | 0 1 0]
[3 2 0 | 1 0 0]
[0 5 1 | 0 0 1]

R₂ = R₂ − 3R₁:
[1 −1 0 | 0 1 0]
[0 5 0 | 1 −3 0]
[0 5 1 | 0 0 1]

R₃ = R₃ − R₂:
[1 −1 0 | 0 1 0]
[0 5 0 | 1 −3 0]
[0 0 1 | −1 3 1]

R₂ = (1/5)R₂:
[1 −1 0 | 0 1 0]
[0 1 0 | 1/5 −3/5 0]
[0 0 1 | −1 3 1]

R₁ = R₁ + R₂:
[1 0 0 | 1/5 2/5 0]
[0 1 0 | 1/5 −3/5 0]
[0 0 1 | −1 3 1]

Now we have

A⁻¹ = [1/5 2/5 0; 1/5 −3/5 0; −1 3 1]

check the solution (AA⁻¹ = I).
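In MATLAB the same computation can be done by applying Gauss-Jordan elimination (rref) to the partitioned matrix. A quick sketch:

A = [3 2 0; 1 -1 0; 0 5 1];
R = rref([A eye(3)]);   % reduce the partitioned matrix [A | I]
Ainv = R(:, 4:6);       % the right-hand block is A^(-1)
disp(A*Ainv)            % should print the 3x3 identity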
3.6 Cramer's Rule

Cramer's rule begins with the clever observation

det [x₁ 0 0; x₂ 1 0; x₃ 0 1] = x₁

That is to say, if you replace the first column of the identity matrix with the vector x = [x₁, x₂, x₃]ᵀ, the determinant is x₁. Now, we've illustrated this for the 3 × 3 case and for column one. In general, if you replace the ith column of an identity matrix with a vector x, the determinant equals xᵢ. Now suppose we wish to solve the system

[a₁₁ a₁₂ a₁₃; a₂₁ a₂₂ a₂₃; a₃₁ a₃₂ a₃₃] [x₁; x₂; x₃] = [b₁; b₂; b₃]

then

A [x₁ 0 0; x₂ 1 0; x₃ 0 1] = [b₁ a₁₂ a₁₃; b₂ a₂₂ a₂₃; b₃ a₃₂ a₃₃].

Taking determinants of both sides, and using det(AB) = det(A)det(B), gives

det(A) · x₁ = det(B₁)

where B₁ is the matrix we get when we replace column 1 of A by the vector b. So,

x₁ = det(B₁)/det(A).

In general

xᵢ = det(Bᵢ)/det(A),

where Bᵢ is the matrix we get by replacing column i of A with b.
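Cramer's rule translates directly into MATLAB. A sketch, saved as a function file cramer.m (the function name is ours; in practice elimination is preferred, since determinants are expensive):

function x = cramer(A, b)
% solve A*x = b by Cramer's rule
n = length(b);
dA = det(A);
x = zeros(n, 1);
for i = 1:n
    Bi = A;
    Bi(:, i) = b;        % replace column i of A by b
    x(i) = det(Bi)/dA;
end
end

For the system of Example 3.8 below, cramer(A, b) returns [3; -4; -1; 1].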
Example 3.8. Use Cramer's rule to solve the linear system:

2x₁ + x₂ − 5x₃ + x₄ = 8
x₁ − 3x₂ − 6x₄ = 9
2x₂ − x₃ + 2x₄ = −5
x₁ + 4x₂ − 7x₃ + 6x₄ = 0

Solution: Here

A = [2 1 −5 1; 1 −3 0 −6; 0 2 −1 2; 1 4 −7 6]   and   b = [8; 9; −5; 0].

First,

det(A) = 27 ≠ 0

Then we form B₁, . . . , B₄ by replacing the corresponding column of A with b:

B₁ = [8 1 −5 1; 9 −3 0 −6; −5 2 −1 2; 0 4 −7 6]   ⟹   det(B₁) = 81

B₂ = [2 8 −5 1; 1 9 0 −6; 0 −5 −1 2; 1 0 −7 6]   ⟹   det(B₂) = −108

B₃ = [2 1 8 1; 1 −3 9 −6; 0 2 −5 2; 1 4 0 6]   ⟹   det(B₃) = −27

B₄ = [2 1 −5 8; 1 −3 0 9; 0 2 −1 −5; 1 4 −7 0]   ⟹   det(B₄) = 27

Hence

x₁ = det(B₁)/det(A) = 81/27 = 3
x₂ = det(B₂)/det(A) = −108/27 = −4
x₃ = det(B₃)/det(A) = −27/27 = −1
x₄ = det(B₄)/det(A) = 27/27 = 1
3.7 EXERCISE

1. Solve the problems in exercise 3.2 and exercise 3.4 using Cramer's rule.

2. Use Cramer's rule to solve for the vector X = [x₁, x₂, x₃]ᵗ:

[−1 2 −3; 2 0 1; 3 −4 4] [x₁; x₂; x₃] = [1; 0; 2]
3.8 Iterative Methods: Jacobi and Gauss-Seidel

Jacobi's method is the easiest iterative method for solving a system of linear equations. Given a general set of n equations and n unknowns (Aₙₓₙ xₙₓ₁ = bₙₓ₁), where

A = [a₁₁ a₁₂ · · · a₁ₙ; a₂₁ a₂₂ · · · a₂ₙ; ⋮; aₙ₁ aₙ₂ · · · aₙₙ],   b = [b₁; b₂; ⋮; bₙ]   and   x = [x₁; x₂; ⋮; xₙ].

If the diagonal elements are non-zero, each equation is rewritten for the corresponding unknown, that is, the first equation is rewritten with x₁ on the left hand side, the second equation is rewritten with x₂ on the left hand side and so on, as follows

x₁ = (1/a₁₁)(b₁ − Σ_{j=2}ⁿ a₁ⱼxⱼ)
x₂ = (1/a₂₂)(b₂ − Σ_{j=1, j≠2}ⁿ a₂ⱼxⱼ)    (3.3)
⋮
xₙ = (1/aₙₙ)(bₙ − Σ_{j=1}^{n−1} aₙⱼxⱼ)
This suggests an iterative method given by

x₁ᵏ⁺¹ = (1/a₁₁)(b₁ − Σ_{j=2}ⁿ a₁ⱼxⱼᵏ)
x₂ᵏ⁺¹ = (1/a₂₂)(b₂ − Σ_{j=1, j≠2}ⁿ a₂ⱼxⱼᵏ)
⋮
xₙᵏ⁺¹ = (1/aₙₙ)(bₙ − Σ_{j=1}^{n−1} aₙⱼxⱼᵏ)

where xᵏ means the value at the kth iteration of the unknown x, with k = 1, 2, 3, · · · , and x⁽⁰⁾ = (x₁⁰, x₂⁰, · · · , xₙ⁰) is an initial guess vector. This is the so called Jacobi's method.
Example 3.9. Apply the Jacobi method to solve the system

5x₁ − 2x₂ + 3x₃ = 12
−3x₁ + 9x₂ + x₃ = 14
2x₁ − x₂ − 7x₃ = −12

Choose the initial guess x⁽⁰⁾ = (0, 0, 0).

Solution: Rewrite the system in the form

x₁ᵏ⁺¹ = (1/5)(12 + 2x₂ᵏ − 3x₃ᵏ)
x₂ᵏ⁺¹ = (1/9)(14 + 3x₁ᵏ − x₃ᵏ)
x₃ᵏ⁺¹ = (−1/7)(−12 − 2x₁ᵏ + x₂ᵏ)

The approximation is

k   x₁           x₂           x₃
0   0            0            0
1   2.40000000   1.55555556   1.71428571
2   1.99365079   2.16507937   2.17777778
3   1.95936508   1.97813051   1.97460317
4   · · ·        · · ·        · · ·
Example 3.10. Now consider the same system as in the previous example, but with the order of the equations changed:

−3x₁ + 9x₂ + x₃ = 14
2x₁ − x₂ − 7x₃ = −12
5x₁ − 2x₂ + 3x₃ = 12

Applying the Jacobi method, we rewrite the system as

x₁ᵏ⁺¹ = (−1/3)(14 − 9x₂ᵏ − x₃ᵏ)
x₂ᵏ⁺¹ = 12 + 2x₁ᵏ − 7x₃ᵏ
x₃ᵏ⁺¹ = (1/3)(12 − 5x₁ᵏ + 2x₂ᵏ)

Choose the same initial guess x⁽⁰⁾ = (0, 0, 0). This time the iterates grow rapidly in magnitude, and the iteration diverges. Why?

Theorem 3.11. A sufficient condition for the convergence of these iterative methods is that the matrix A be strictly diagonally dominant.
Definition 5. A matrix Aₙₓₙ is said to be (strictly) diagonally dominant iff, for each i = 1, 2, · · · , n,

|aᵢᵢ| > Σ_{j=1, j≠i}ⁿ |aᵢⱼ|
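Diagonal dominance is easy to test in MATLAB before starting an iteration. A small sketch, using the matrix from the Jacobi code below:

A = [4 1 2; 1 3 1; 1 2 5];
d = abs(diag(A));         % |a_ii|
s = sum(abs(A), 2) - d;   % row sums of |a_ij| for j ~= i
if all(d > s)
    disp('A is strictly diagonally dominant')
else
    disp('A is not diagonally dominant')
end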
% ********************************************
% **** Solve a system of linear equation ****
% **      AX=b by Jacobi method            **
% ********************************************
clc
clear
close all
A = [4 1 2; 1 3 1; 1 2 5];   % input the matrix A
b = [16; 10; 12];            % input the vector b
x0 = [0; 0; 0];              % input the vector X0
n = length(b);
fprintf('  k      x1          x2          x3 \n')
for j = 1:n
    x(j) = (b(j) - A(j,[1:j-1, j+1:n])*x0([1:j-1, j+1:n]))/A(j,j);   % (this line was lost in the source; re-created to match the printed first iteration)
end
fprintf(' %2.0f   %2.8f   %2.8f   %2.8f \n', 1, x(1), x(2), x(3))
x1 = x';
k = 1;
while max(abs(x1 - x0)) > 0.0001   % iterate until every component settles (the source tested the vector directly)
    for j = 1:n
        xnew(j) = (b(j) - A(j,[1:j-1, j+1:n])*x1([1:j-1, j+1:n]))/A(j,j);
    end
    x0 = x1;
    x1 = xnew';
    fprintf(' %2.0f   %2.8f   %2.8f   %2.8f \n', k+1, xnew(1), xnew(2), xnew(3))
    k = k + 1;
end

The result as the following:

  k      x1           x2           x3
  1    4.00000000   3.33333333   2.40000000
  2    1.96666667   1.20000000   0.26666667
  4    2.58944444   1.63555556   0.65111111
 ....        ....         ....
 27    3.00003358   2.00003137   1.00002897
>>
The Gauss-Seidel method is a simple modification of the Jacobi method, defined by the iteration

x₁ᵏ⁺¹ = (1/a₁₁)(b₁ − Σ_{j=2}ⁿ a₁ⱼxⱼᵏ)
xᵢᵏ⁺¹ = (1/aᵢᵢ)(bᵢ − Σ_{j=1}^{i−1} aᵢⱼxⱼᵏ⁺¹ − Σ_{j=i+1}ⁿ aᵢⱼxⱼᵏ),   i = 2, · · · , n − 1
xₙᵏ⁺¹ = (1/aₙₙ)(bₙ − Σ_{j=1}^{n−1} aₙⱼxⱼᵏ⁺¹)

In the Gauss-Seidel method, the updated variables are used in the computations as soon as they are updated. Thus in the Jacobi method, during the computations for a particular iteration, the "known" values are all from the previous iteration. However, in the Gauss-Seidel method, the "known" values are a mix of variable values from the previous iteration (whose values have not yet been evaluated in the current iteration), as well as variable values that have already been updated in the current iteration.
Example 3.13. Apply the Gauss-Seidel method to solve

5x₁ − 2x₂ + 3x₃ = 12
−3x₁ + 9x₂ + x₃ = 14
2x₁ − x₂ − 7x₃ = −12

Choose the initial guess x⁽⁰⁾ = (0, 0, 0).

Solution: To begin, rewrite the system

x₁ᵏ⁺¹ = (1/5)(12 + 2x₂ᵏ − 3x₃ᵏ)
x₂ᵏ⁺¹ = (1/9)(14 + 3x₁ᵏ⁺¹ − x₃ᵏ)
x₃ᵏ⁺¹ = (−1/7)(−12 − 2x₁ᵏ⁺¹ + x₂ᵏ⁺¹)

The approximation is

k   x₁           x₂           x₃
0   0            0            0
1   2.40000000   2.35555556   2.06349206
⋮
For the Gauss-Seidel method we can use the following Matlab code:

Matlab Code 3.14. Gauss-Seidel method

% ********************************************
% **** Solve a system of linear equation ****
% **    Ax=b by Gauss-Seidel method        **
% ********************************************
clc
clear
close all
A = [4 1 2; 1 3 1; 1 2 5];   % input the matrix A
b = [16; 10; 12];            % input the vector b
x0 = [0; 0; 0];              % input the vector X0
xnew = x0;
n = length(b);
fprintf('  k      x1          x2          x3 \n')
fprintf(' %2.0f   %2.8f   %2.8f   %2.8f \n', 0, x0(1), x0(2), x0(3))
flag = 1;
w = 0;
while flag > 0
    w = w + 1;
    for k = 1:n
        sum = 0;
        for i = 1:n
            if k ~= i
                sum = sum + A(k,i)*xnew(i);   % uses the newest available values
            end
        end
        xnew(k) = (b(k) - sum)/A(k,k);
    end
    fprintf(' %2.0f   %2.8f   %2.8f   %2.8f \n', w, xnew(1), xnew(2), xnew(3))
    for k = 1:n
        if abs(xnew(k) - x0(k)) > 0.0001
            x0 = xnew;
            break
        else
            flag = 0;
        end
    end
end
The result as the following:

 k    x1           x2           x3
 0   0            0            0
 1   4.00000000   2.00000000   0.80000000
 2   3.10000000   2.03333333   0.96666667
 3   3.00833333   2.00833333   0.99500000
 ⋮
>>
3.9 EXERCISE

1. Write down the Jacobi and the Gauss-Seidel iterations for a system AX = b whose coefficient matrix is

A = [−10 2 3; 4 −50 6; 7 8 −90]

and check that A is diagonally dominant.

2. Solve the system

[−2 1 0 0 0; 1 −2 1 0 0; 0 1 −2 1 0; 0 0 1 −2 1; 0 0 0 1 −2] [x₁; x₂; x₃; x₄; x₅] = [1; 0; 0; 0; 0]

using both the Jacobi and the Gauss-Seidel iterations.

3. Solve the system of linear equations

2x₁ + 7x₂ + x₃ = 19
4x₁ + x₂ − x₃ = 3
x₁ − 3x₂ + 12x₃ = 31

by the Jacobi method and by the Gauss-Seidel method (stop after three iterations).
Chapter 4

Interpolation and Curve Fitting

Suppose one has a set of data pairs:

xᵢ   x₁  x₂  · · ·  xₙ
yᵢ   y₁  y₂  · · ·  yₙ

and we need to find a function f(x) such that

yᵢ = f(xᵢ),   i = 1, . . . , n    (4.1)
4.1 General Interpolation

The general interpolation problem is to assume the function f(x) is a linear combination of basis functions f₁(x), . . . , fₙ(x),

f(x) = a₁f₁(x) + a₂f₂(x) + · · · + aₙfₙ(x)

and to choose the coefficients a₁, . . . , aₙ so that the interpolation conditions

yᵢ = f(xᵢ) = a₁f₁(xᵢ) + a₂f₂(xᵢ) + · · · + aₙfₙ(xᵢ),   i = 1, . . . , n

are satisfied. We are assuming that we have the same number of basis functions as we have data points, so that the interpolation conditions are a system of n linear equations for the n unknowns aᵢ. Writing out the interpolation conditions in full gives

[f₁(x₁) f₂(x₁) · · · fₙ(x₁); f₁(x₂) f₂(x₂) · · · fₙ(x₂); ⋮; f₁(xₙ) f₂(xₙ) · · · fₙ(xₙ)] [a₁; a₂; ⋮; aₙ] = [y₁; y₂; ⋮; yₙ]
Figure 4.1: diagram of function f(x) of example 4.1; the given points are blue stars

Example 4.1. Interpolate the data

x   -1    0    1    2
y   1.4   0.8  1.7  2

using the basis functions e⁻ˣ, 1, eˣ and e²ˣ, i.e.

f(x) = a₁e⁻ˣ + a₂ + a₃eˣ + a₄e²ˣ

Now the problem is to determine the coefficients a₁, a₂, a₃ and a₄. The basis matrix is

[e^(−x₁) 1 e^(x₁) e^(2x₁); e^(−x₂) 1 e^(x₂) e^(2x₂); e^(−x₃) 1 e^(x₃) e^(2x₃); e^(−x₄) 1 e^(x₄) e^(2x₄)] = [e¹ 1 e⁻¹ e⁻²; e⁰ 1 e⁰ e⁰; e⁻¹ 1 e¹ e²; e⁻² 1 e² e⁴]

Solving the corresponding linear system, the coefficients of the basis functions are [a₁, a₂, a₃, a₄] = [0.7352, −1.0245, 1.1978, −0.1085]. So our interpolating function is (see figure 4.1):

f(x) = 0.7352e⁻ˣ − 1.0245 + 1.1978eˣ − 0.1085e²ˣ
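These coefficients can be reproduced in MATLAB by building the basis matrix and solving with backslash. A sketch:

x = [-1 0 1 2]';  y = [1.4 0.8 1.7 2]';
B = [exp(-x) ones(4,1) exp(x) exp(2*x)];   % basis matrix
a = B\y                                    % approx [0.7352; -1.0245; 1.1978; -0.1085]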
Example 4.2. We will again interpolate the same data as in example 4.1, but by a cubic polynomial

f(x) = a₁ + a₂x + a₃x² + a₄x³.

In this case the basis matrix is

[1 x₁ x₁² x₁³; 1 x₂ x₂² x₂³; 1 x₃ x₃² x₃³; 1 x₄ x₄² x₄³] = [1 −1 1 −1; 1 0 0 0; 1 1 1 1; 1 2 4 8]

and the interpolation conditions read

[1 −1 1 −1; 1 0 0 0; 1 1 1 1; 1 2 4 8] [a₁; a₂; a₃; a₄] = [1.4; 0.8; 1.7; 2]

Solving the problem gives [a₁, a₂, a₃, a₄] = [0.8, 0.5, 0.75, −0.35] and we have the interpolating polynomial (see figure 4.2):

f(x) = 0.8 + 0.5x + 0.75x² − 0.35x³.

Figure 4.2: diagram of function f(x) of example 4.2; the given points are blue stars
4.2 Polynomial Interpolation

Given the data points (xᵢ, yᵢ), i = 1, . . . , n, polynomial interpolation seeks the polynomial of degree n − 1

p(x) = a₁ + a₂x + a₃x² + · · · + aₙxⁿ⁻¹.

The basis functions are

f₁(x) = 1,  f₂(x) = x,  f₃(x) = x²,  . . . ,  fₙ(x) = xⁿ⁻¹

Figure 4.3: diagram of function f(x) of example 4.3

For data x₁, . . . , xₙ the basis matrix equation is

[1 x₁ x₁² . . . x₁ⁿ⁻¹; 1 x₂ x₂² . . . x₂ⁿ⁻¹; ⋮; 1 xₙ xₙ² . . . xₙⁿ⁻¹] [a₁; a₂; ⋮; aₙ] = [y₁; y₂; ⋮; yₙ]    (4.2)
Example 4.3. Determine the equation of the polynomial of degree two whose graph passes through the points (1, 6), (2, 3) and (3, 2).

Solution: The interpolation conditions are

a₁ + a₂ + a₃ = 6
a₁ + 2a₂ + 2²a₃ = 3
a₁ + 3a₂ + 3²a₃ = 2

Or by matrix notation

[1 1 1; 1 2 2²; 1 3 3²] [a₁; a₂; a₃] = [6; 3; 2]    (4.3)

Solving this system gives [a₁, a₂, a₃] = [11, −6, 1], which gives the interpolating polynomial (see Figure 4.3)

p(x) = 11 − 6x + x²

Example 4.4. Determine the equation of the polynomial whose graph passes through the points:

x   0    0.5       1.0   1.5      2.0   3.0
y   0.0  -1.40625  0.0   1.40625  0.0   0.0

The solution is homework (see Figure 4.4).
Figure 4.4: diagram of function f(x) of example 4.4
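In MATLAB the system (4.2) need not be built by hand: polyfit does the work directly. A sketch for the data of Example 4.4:

x = [0 0.5 1.0 1.5 2.0 3.0];
y = [0 -1.40625 0 1.40625 0 0];
p = polyfit(x, y, length(x)-1);   % degree-5 interpolating polynomial
disp(p)                           % coefficients, highest power first
plot(x, y, '*', 0:0.01:3, polyval(p, 0:0.01:3))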
4.3 Lagrange Interpolation

Another way to construct the interpolating polynomial is through the Lagrange interpolation formula. Suppose one has the data points (x₀, y₀), (x₁, y₁), . . . , (xₙ, yₙ). The Lagrange form of the interpolating polynomial is

pₙ(x) = Σ_{k=0}ⁿ yₖ Lₖ(x)    (4.4)

where

Lₖ(x) = Π_{i=0, i≠k}ⁿ (x − xᵢ)/(xₖ − xᵢ)    (4.6)

or

Lₖ(x) = [(x − x₀)(x − x₁) · · · (x − xₖ₋₁)(x − xₖ₊₁) · · · (x − xₙ)] / [(xₖ − x₀)(xₖ − x₁) · · · (xₖ − xₖ₋₁)(xₖ − xₖ₊₁) · · · (xₖ − xₙ)]

Note that

Lₖ(xᵢ) = δᵢₖ = { 1 if i = k;  0 if i ≠ k }    (4.7)

This leads to

pₙ(xᵢ) = y₀L₀(xᵢ) + · · · + yᵢ₋₁Lᵢ₋₁(xᵢ) + yᵢLᵢ(xᵢ) + yᵢ₊₁Lᵢ₊₁(xᵢ) + · · · + yₙLₙ(xᵢ)
       = y₀ · 0 + · · · + yᵢ₋₁ · 0 + yᵢ · 1 + yᵢ₊₁ · 0 + · · · + yₙ · 0
       = yᵢ

Theorem 4.5. Let x₀, x₁, · · · , xₙ be n + 1 distinct numbers, and let f(x) be a function defined on a domain containing these numbers. Then the polynomial defined by

pₙ(x) = Σ_{k=0}ⁿ f(xₖ) Lₖ(x)    (4.8)

is the unique polynomial of degree at most n that agrees with f(x) at the points x₀, x₁, · · · , xₙ.

Example 4.6. Construct the Lagrange interpolating polynomial for the following data:

i    0   1   2   3
xᵢ  -1   0   1   2
yᵢ   3  -4   5  -6
Solution: Applying formula (4.6) yields

L₀(x) = (x − x₁)(x − x₂)(x − x₃) / [(x₀ − x₁)(x₀ − x₂)(x₀ − x₃)]
      = (x − 0)(x − 1)(x − 2) / [(−1 − 0)(−1 − 1)(−1 − 2)]
      = x(x² − 3x + 2) / [(−1)(−2)(−3)]
      = (−1/6)(x³ − 3x² + 2x)

L₁(x) = (x − x₀)(x − x₂)(x − x₃) / [(x₁ − x₀)(x₁ − x₂)(x₁ − x₃)]
      = (x + 1)(x − 1)(x − 2) / [(0 + 1)(0 − 1)(0 − 2)]
      = (x² − 1)(x − 2) / [(1)(−1)(−2)]
      = (1/2)(x³ − 2x² − x + 2)

L₂(x) = (x − x₀)(x − x₁)(x − x₃) / [(x₂ − x₀)(x₂ − x₁)(x₂ − x₃)]
      = (x + 1)(x − 0)(x − 2) / [(1 + 1)(1 − 0)(1 − 2)]
      = x(x² − x − 2) / [(2)(1)(−1)]
      = (−1/2)(x³ − x² − 2x)

L₃(x) = (x − x₀)(x − x₁)(x − x₂) / [(x₃ − x₀)(x₃ − x₁)(x₃ − x₂)]
      = (x + 1)(x − 0)(x − 1) / [(2 + 1)(2 − 0)(2 − 1)]
      = x(x² − 1) / [(3)(2)(1)]
      = (1/6)(x³ − x)

By substituting xᵢ for x in each Lagrange polynomial Lⱼ(x), for j = 0, 1, 2, 3, it can be verified that

Lⱼ(xᵢ) = { 1 if i = j;  0 if i ≠ j }    (4.10)

It follows that the Lagrange interpolating polynomial p₃(x) is given by

p₃(x) = Σ_{k=0}³ f(xₖ) Lₖ(x)    (4.11)

that is,

p₃(x) = (3)(−1/6)(x³ − 3x² + 2x) + (−4)(1/2)(x³ − 2x² − x + 2) + (5)(−1/2)(x³ − x² − 2x) + (−6)(1/6)(x³ − x)
      = −6x³ + 8x² + 7x − 4

Figure 4.5: diagram of function p₃(x) of example 4.6
Using the Lagrange Interpolation method with the following Matlab code to plot the interpolation polynomial.

% ****************************************************
% ****           Lagrange Interpolation           ****
% ****************************************************
clc
clear
close all

% xi = [ ... 0.500, 0.930];  % xi data (alternative data set; partly lost in the source)
% yi = [-1.000, -0.151, 0.894, 0.986, 0.895, 0.500, -0.306];  % yi data

xi = [-2, -1, 0, 1, 2, 3];
yi = [4, 3, 5, 5, -7, -2];

m = length(xi);   % number of data points (this line was lost in the source; re-created)
% we plot the Lagrange polynomial using x values
% from x_1 to x_m with 1000 divisions
dx = (xi(m) - xi(1))/1000;
x = (xi(1):dx:xi(m));

xlabel('x');
ylabel('y');

% build the Lagrange basis polynomials L_k
L = ones(m, length(x));   % (initialization re-created)
for k = 1:m
    for i = 1:m
        if i ~= k
            L(k,:) = L(k,:).*((x - xi(i))/(xi(k) - xi(i)));
        end
    end
end

% combine the basis polynomials with the data values
y = 0;
for k = 1:m
    f = yi(k).*L(k,:);
    y = y + f;
end

plot(xi, yi, '*', x, y, 'LineWidth', 2)   % (plot command re-created)
title('Lagrange Interpolation')

The result as the figure 4.6.
4.4 EXERCISE

1. Solve example 4.6 using the polynomial interpolation method and compare with the Lagrange interpolation method. Estimate the value of f(1.5) and f(2.5).

2. Solve examples 4.1, 4.2, 4.3, 4.4 using the Lagrange interpolation method and compare with the polynomial interpolation method. Find the approximate value of f(1.25) and f(2.75).

3. Construct the cubic interpolating polynomial to the following data and hence estimate f(1):

xᵢ      -2  0  3   4
f(xᵢ)    5  1  55  209

4. Use each of the methods described before to construct a polynomial that interpolates the points

{(−2, 4), (−1, 3), (0, 5), (1, 5), (2, −7), (3, −2)}
4.5 Divided Differences Method
It’s also called Newton’s Divided Difference. Suppose that
Pn (x) is the nth Lagrange polynomial that agrees with the
function f at the distinct numbers x0 , x1 , · · · , xn . Although
this polynomial is unique, (Why ?), there are alternate alge-
braic representations that are useful in certain situations.
The divided differences of f with respect to x_0, x_1, ..., x_n are used to express P_n(x) in the form

P_n(x) = a_0 + a_1(x - x_0) + a_2(x - x_0)(x - x_1) + ... + a_n(x - x_0)...(x - x_{n-1})   (4.12)

for appropriate constants a_0, a_1, ..., a_n. To determine the first of these constants, a_0, note that if P_n(x) is written in the form of Eq. (4.12), then evaluating P_n(x) at x_0 leaves only the constant term a_0; that is

a_0 = P_n(x_0) = f(x_0)
Similarly, when P_n(x) is evaluated at x_1, the only nonzero terms are the constant and linear terms, f(x_0) + a_1(x_1 - x_0) = P_n(x_1) = f(x_1), so

a_1 = \frac{f(x_1) - f(x_0)}{x_1 - x_0}   (4.13)

We now introduce the divided difference notation. The zeroth divided difference of f with respect to x_i is simply f[x_i] = f(x_i). The first divided difference of f with respect to x_i and x_{i+1} is denoted f[x_i, x_{i+1}] and defined as
f[x_i, x_{i+1}] = \frac{f[x_{i+1}] - f[x_i]}{x_{i+1} - x_i}   (4.15)
The second divided difference, f[x_i, x_{i+1}, x_{i+2}], is defined as

f[x_i, x_{i+1}, x_{i+2}] = \frac{f[x_{i+1}, x_{i+2}] - f[x_i, x_{i+1}]}{x_{i+2} - x_i}   (4.16)
Similarly, after the (k-1)st divided differences f[x_i, x_{i+1}, ..., x_{i+k-1}] and f[x_{i+1}, x_{i+2}, ..., x_{i+k}] have been determined, the kth divided difference relative to x_i, x_{i+1}, x_{i+2}, ..., x_{i+k} is

f[x_i, x_{i+1}, ..., x_{i+k}] = \frac{f[x_{i+1}, x_{i+2}, ..., x_{i+k}] - f[x_i, x_{i+1}, ..., x_{i+k-1}]}{x_{i+k} - x_i}   (4.17)
The process ends with the single nth divided difference

f[x_0, x_1, ..., x_n] = \frac{f[x_1, x_2, ..., x_n] - f[x_0, x_1, ..., x_{n-1}]}{x_n - x_0}   (4.18)
With this notation, Eq. (4.12) can be rewritten as

P_n(x) = f[x_0] + f[x_0, x_1](x - x_0) + a_2(x - x_0)(x - x_1) + ... + a_n(x - x_0)...(x - x_{n-1})

where, in general, a_k = f[x_0, x_1, ..., x_k]. Hence

P_n(x) = f[x_0] + \sum_{k=1}^{n} f[x_0, x_1, ..., x_k](x - x_0)(x - x_1)...(x - x_{k-1})

This is known as Newton's divided difference formula.
The computation of the divided differences is conveniently arranged in a table:

x     f(x)     1st divided difference                      2nd divided difference                                3rd divided difference
x_0   f[x_0]
               f[x_0,x_1] = (f[x_1]-f[x_0])/(x_1-x_0)
x_1   f[x_1]                                               f[x_0,x_1,x_2] = (f[x_1,x_2]-f[x_0,x_1])/(x_2-x_0)
               f[x_1,x_2] = (f[x_2]-f[x_1])/(x_2-x_1)                                                           f[x_0,x_1,x_2,x_3] = (f[x_1,x_2,x_3]-f[x_0,x_1,x_2])/(x_3-x_0)
x_2   f[x_2]                                               f[x_1,x_2,x_3] = (f[x_2,x_3]-f[x_1,x_2])/(x_3-x_1)
               f[x_2,x_3] = (f[x_3]-f[x_2])/(x_3-x_2)                                                           f[x_1,x_2,x_3,x_4] = (f[x_2,x_3,x_4]-f[x_1,x_2,x_3])/(x_4-x_1)
x_3   f[x_3]                                               f[x_2,x_3,x_4] = (f[x_3,x_4]-f[x_2,x_3])/(x_4-x_2)
               f[x_3,x_4] = (f[x_4]-f[x_3])/(x_4-x_3)                                                           f[x_2,x_3,x_4,x_5] = (f[x_3,x_4,x_5]-f[x_2,x_3,x_4])/(x_5-x_2)
x_4   f[x_4]                                               f[x_3,x_4,x_5] = (f[x_4,x_5]-f[x_3,x_4])/(x_5-x_3)
               f[x_4,x_5] = (f[x_5]-f[x_4])/(x_5-x_4)
x_5   f[x_5]
Example: Construct the divided difference table for the data x_i = 1.0, 1.3, 1.6, 1.9, 2.2 with the values f(x_i) shown in Table 4.2, and find the interpolating value at x = 1.5.
Solution:
The first divided difference involving x_0 and x_1 is

f[x_0, x_1] = \frac{f[x_1] - f[x_0]}{x_1 - x_0} = \frac{0.6200860 - 0.7651977}{1.3 - 1.0} = -0.4837057
The remaining first divided differences are found in a similar manner and are shown in the fourth column of Table 4.2. The second divided difference involving x_0, x_1, and x_2 is

f[x_0, x_1, x_2] = \frac{f[x_1, x_2] - f[x_0, x_1]}{x_2 - x_0} = \frac{-0.5489460 - (-0.4837057)}{1.6 - 1.0} = -0.1087339
Table 4.2:

i   x_i   f[x_i]       f[x_{i-1},x_i]   f[x_{i-2},...,x_i]   f[x_{i-3},...,x_i]   f[x_{i-4},...,x_i]
0   1.0   0.7651977
                       -0.4837057
1   1.3   0.6200860                     -0.1087339
                       -0.5489460                             0.0658784
2   1.6   0.4554022                     -0.0494433                                 0.0018251
                       -0.5786120                             0.0680685
3   1.9   0.2818186                      0.0118183
                       -0.5715210
4   2.2   0.1103623
The fourth (and final) divided difference is

f[x_0, x_1, x_2, x_3, x_4] = \frac{f[x_1, x_2, x_3, x_4] - f[x_0, x_1, x_2, x_3]}{x_4 - x_0} = \frac{0.0680685 - 0.0658784}{2.2 - 1.0} = 0.0018251
All the entries are given in Table 4.2. The coefficients of the Newton divided difference form of the interpolating polynomial lie along the diagonal of the table. This polynomial is

P_4(x) = 0.7651977 - 0.4837057(x - 1.0) - 0.1087339(x - 1.0)(x - 1.3) + 0.0658784(x - 1.0)(x - 1.3)(x - 1.6) + 0.0018251(x - 1.0)(x - 1.3)(x - 1.6)(x - 1.9)

and the interpolating value at x = 1.5 is P_4(1.5) = 0.5118200.
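A minimal MATLAB sketch of Newton's divided difference method (the variable names are assumed; this is a sketch, not the notes' own listing) that builds the table column by column using Eq. (4.17) and evaluates P_4(1.5) by nested multiplication:

xi = [1.0, 1.3, 1.6, 1.9, 2.2];                               % x_i data
fi = [0.7651977, 0.6200860, 0.4554022, 0.2818186, 0.1103623]; % f(x_i) data
n  = length(xi);

% build the divided difference table column by column (Eq. 4.17)
D = zeros(n, n);
D(:,1) = fi(:);
for j = 2:n
    for i = 1:n-j+1
        D(i,j) = (D(i+1,j-1) - D(i,j-1)) / (xi(i+j-1) - xi(i));
    end
end
a = D(1,:);   % coefficients f[x0], f[x0,x1], ... along the first row

% evaluate P_n(x) at x = 1.5 by nested multiplication
x = 1.5;
P = a(n);
for k = n-1:-1:1
    P = a(k) + (x - xi(k)).*P;
end
P   % P = 0.5118200 for this data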
4.6 EXERCISE
4.7 Curve Fitting
Curve fitting is the process of finding equations to approx-
imate straight lines and curves that best fit given sets of
data. For example, for the data of Figure 4.7, we can use
the equation of a straight line, that is:
y = mx + b
Figure 4.7: Straight line approximation
For Figure 4.8, we can use the equation of the quadratic or parabolic curve of the form

y = ax^2 + bx + c
Figure 4.8: Parabolic line approximation
The deviations d_1, d_2, ..., d_n of the data points from the fitted curve are required to have the property that

d_1^2 + d_2^2 + ... + d_n^2 = minimum   (4.19)

and the curve is then referred to as the least squares curve. Thus, a straight line that satisfies equation (4.19) is called a least squares line. If it is a parabola, we call it a least squares parabola.
We begin with the straight line. We seek the line

y = mx + b   (4.20)

such that the sum of the squares of the errors will be minimized. Suppose that the data points (x_1, y_1), (x_2, y_2), ..., (x_n, y_n) approximate a straight line. We denote the straight line equations passing through these points as
y_1 = m x_1 + b
y_2 = m x_2 + b
y_3 = m x_3 + b   (4.21)
...
y_n = m x_n + b
In equations (4.21), the slope m and y-intercept b are the same in all equations, since we have assumed that all points lie close to one straight line. However, we need to determine the values of the unknowns m and b from all n equations. The error (difference) between the observed value y_1 and the value that lies on the straight line is y_1 - (mx_1 + b). This difference could be positive or negative, depending on the position of the point relative to the line. Likewise, the error between the observed value y_2 and the value that lies on the straight line is y_2 - (mx_2 + b), and so on. The straight line that we choose must be such that the errors between the observed values and the line are minimized in the sense of least squares.
Let the sum of the squares of the errors be

\sum squares = [y_1 - (mx_1 + b)]^2 + [y_2 - (mx_2 + b)]^2 + ... + [y_n - (mx_n + b)]^2   (4.22)
Since \sum squares is a function of the two variables m and b, to minimize (4.22) we must equate to zero its two partial derivatives with respect to m and b. Then

\frac{\partial}{\partial m} \sum squares = -2x_1[y_1 - (mx_1 + b)] - 2x_2[y_2 - (mx_2 + b)] - ... - 2x_n[y_n - (mx_n + b)] = 0   (4.23)

and

\frac{\partial}{\partial b} \sum squares = -2[y_1 - (mx_1 + b)] - 2[y_2 - (mx_2 + b)] - ... - 2[y_n - (mx_n + b)] = 0   (4.24)
thus \sum squares will have its minimum value. Dividing (4.23) and (4.24) by -2 and rearranging, we obtain the normal equations

\left(\sum_{i=1}^{n} x_i^2\right) m + \left(\sum_{i=1}^{n} x_i\right) b = \sum_{i=1}^{n} x_i y_i

\left(\sum_{i=1}^{n} x_i\right) m + n b = \sum_{i=1}^{n} y_i   (4.25)

or, in matrix notation,

\begin{bmatrix} \sum x^2 & \sum x \\ \sum x & n \end{bmatrix} \begin{bmatrix} m \\ b \end{bmatrix} = \begin{bmatrix} \sum xy \\ \sum y \end{bmatrix}

By Cramer's rule, the solutions m and b are computed as (for simplicity we write \sum for \sum_{i=1}^{n}, x for x_i, and y for y_i):
m = \frac{D_1}{\Delta}, \qquad b = \frac{D_2}{\Delta}   (4.27)
where

\Delta = \det \begin{bmatrix} \sum x^2 & \sum x \\ \sum x & n \end{bmatrix}   (4.28)

D_1 = \det \begin{bmatrix} \sum xy & \sum x \\ \sum y & n \end{bmatrix}   (4.29)

D_2 = \det \begin{bmatrix} \sum x^2 & \sum xy \\ \sum x & \sum y \end{bmatrix}   (4.30)
Example 4.9. Compute the straight line equation that best fits the following data:

x   0     10    20    30   40   50    60    70    80    90   100
y   27.6  31.0  34.0  37   40   42.6  45.5  48.3  51.1  54   56.7
Solution:
x     y      x^2     xy
0     27.6   0       0
10    31     100     310
20    34     400     680
30    37     900     1110
40    40     1600    1600
50    42.6   2500    2130
60    45.5   3600    2730
70    48.3   4900    3381
80    51.1   6400    4088
90    54     8100    4860
100   56.7   10000   5670

\sum x = 550,  \sum y = 467.8,  \sum x^2 = 38500,  \sum xy = 26559
Now we can compute the values of equations (4.28), (4.29) and (4.30):

\Delta = \det \begin{bmatrix} 38500 & 550 \\ 550 & 11 \end{bmatrix} = 121000

D_1 = \det \begin{bmatrix} 26559 & 550 \\ 467.8 & 11 \end{bmatrix} = 34859

D_2 = \det \begin{bmatrix} 38500 & 26559 \\ 550 & 467.8 \end{bmatrix} = 3402850

This leads to

m = \frac{D_1}{\Delta} = 0.288, \qquad b = \frac{D_2}{\Delta} = 28.123

Then the linear approximation of the data is

y = mx + b = 0.288x + 28.123

See Figure 4.9.
Figure 4.9: Plot of the straight line for Example 4.9
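As a cross-check of Eqs. (4.27)-(4.30), a short sketch (assumed variable names) using MATLAB's built-in det reproduces the values above:

x = 0:10:100;
y = [27.6,31.0,34.0,37,40,42.6,45.5,48.3,51.1,54,56.7];
n = length(x);
Sx = sum(x); Sy = sum(y); Sx2 = sum(x.^2); Sxy = sum(x.*y);
Delta = det([Sx2, Sx; Sx, n]);    % 121000
D1    = det([Sxy, Sx; Sy, n]);    % 34859
D2    = det([Sx2, Sxy; Sx, Sy]);  % 3402850
m = D1/Delta                      % 0.288
b = D2/Delta                      % 28.123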
Example 4.10. Compute the straight line equation that best fits the following data (60 measurements):
x  770  677  428  410  371  504  1136  695  551  550
y  54   47   28   38   29   38   80    52   45   40
x  568  504  560  512  448  538  410   409  504  777
y  49   33   50   40   31   40   27    31   35   57
x  496  386  530  360  355  1250 802   741  739  650
y  31   26   39   25   23   102  72    57   54   56
x  592  577  500  469  320  441  845   435  435  375
y  45   42   36   30   22   31   52    29   34   20
x  364  340  375  450  529  412  722   574  498  493
y  33   18   23   30   38   31   62    48   29   40
x  379  579  458  454  952  784  476   453  440  428
y  30   42   36   33   72   57   34    46   30   21
Solution:
There are 60 sets of data, and thus n = 60. By the same procedure as in Example 4.9 we find:

\Delta = \det \begin{bmatrix} 19954638 & 32780 \\ 32780 & 60 \end{bmatrix} = 122749880

D_1 = \det \begin{bmatrix} 1487462 & 32780 \\ 2423 & 60 \end{bmatrix} = 9821780

D_2 = \det \begin{bmatrix} 19954638 & 1487462 \\ 32780 & 2423 \end{bmatrix} = -408916486

This leads to

m = \frac{D_1}{\Delta} = 0.08, \qquad b = \frac{D_2}{\Delta} = -3.3313

Then the linear approximation of the data is

y = mx + b = 0.08x - 3.3313
Figure 4.10: Plot of the straight line for Example 4.10

The MATLAB code for linear fitting is:

% ***********************************************
% ****           Linear Fitting              ****
% ***********************************************
clc
clear
close all
% enter the data points
x = [770,677,428,410,371,504,1136,695,551,550,568,504,560,512, ...
     448,538,410,409,504,777,496,386,530,360,355,1250,802,741, ...
     739,650,592,577,500,469,320,441,845,435,435,375,364,340, ...
     375,450,529,412,722,574,498,493,379,579,458,454,952,784, ...
     476,453,440,428];
y = [54,47,28,38,29,38,80,52,45,40,49,33,50,40,31,40,27,31,35, ...
     57,31,26,39,25,23,102,72,57,54,56,45,42,36,30,22,31,52,29, ...
     34,20,33,18,23,30,38,31,62,48,29,40,30,42,36,33,72,57,34, ...
     46,30,21];

% x = [0,10,20,30,40,50,60,70,80,90,100];
% y = [27.6,31.0,34.0,37,40,42.6,45.5,48.3,51.1,54,56.7];

% x = 1:5;
% y = [1 2 1.3 3.75 2.25];

n = length(x);
sumxi   = sum(x);
sumyi   = sum(y);
sumxiyi = sum(x.*y);
sumxi2  = sum(x.^2);
% compute m and b from the normal equations
m = (sumxi*sumyi - n*sumxiyi)/(sumxi^2 - n*sumxi2)
b = (sumxiyi*sumxi - sumyi*sumxi2)/(sumxi^2 - n*sumxi2)
% plot y = mx + b over the data range
xmin = min(x); xmax = max(x);
dx = (xmax - xmin)/100;
w  = xmin:dx:xmax;
fw = m*w + b;
plot(w, fw)
hold on
plot(x, y, '*r', 'linewidth', 1)
xlabel('x');
ylabel('y');
title('Curve Fitting using Linear Regression')
4.9 Parabolic Regression

The least squares parabola

y = ax^2 + bx + c

fits the given data points, where the coefficients a, b and c are found from

\left(\sum x^2\right) a + \left(\sum x\right) b + n c = \sum y

\left(\sum x^3\right) a + \left(\sum x^2\right) b + \left(\sum x\right) c = \sum xy   (4.32)

\left(\sum x^4\right) a + \left(\sum x^3\right) b + \left(\sum x^2\right) c = \sum x^2 y
Example 4.12. Compute the least squares parabola that best fits the following data:

x  1.2  1.5  1.8  2.6  3.1  4.3  4.9  5.3
y  4.5  5.1  5.8  6.7  7.0  7.3  7.6  7.4

x  5.7  6.4  7.1  7.6  8.6  9.2  9.8
y  7.2  6.9  6.6  5.1  4.5  3.4  2.7
Solution:

n = 15
\sum x = 79.1
\sum x^2 = 530.15
\sum x^3 = 4004.50
\sum x^4 = 32331.49
\sum y = 87.8
\sum xy = 437.72
\sum x^2 y = 2698.37
By substitution into equations (4.32) we get

530.15a + 79.1b + 15c = 87.8
4004.50a + 530.15b + 79.1c = 437.72
32331.49a + 4004.50b + 530.15c = 2698.37
Solve these equations with any method from the previous chapter to get a = -0.2, b = 1.94, and c = 2.78. Therefore, the least squares parabola is

y = -0.2x^2 + 1.94x + 2.78

The plot of this parabola is shown in Figure 4.11.
The MATLAB code for parabola regression is:
% ***********************************************
% ****          Parabola Regression          ****
% ***********************************************
clc
clear
close all
% enter the data points
x = [1.2,1.5,1.8,2.6,3.1,4.3,4.9,5.3,5.7,6.4,7.1,7.6,8.6,9.2,9.8];
y = [4.5,5.1,5.8,6.7,7.0,7.3,7.6,7.4,7.2,6.9,6.6,5.1,4.5,3.4,2.7];

n = length(x);
sumxi    = sum(x);
sumyi    = sum(y);
sumxi2   = sum(x.^2);
sumxi3   = sum(x.^3);
sumxi4   = sum(x.^4);
sumxiyi  = sum(x.*y);
sumxi2yi = sum(x.*x.*y);
% solve the normal equations (4.32), A*S = B with S = [a; b; c]
A = [sumxi2, sumxi,  n;
     sumxi3, sumxi2, sumxi;
     sumxi4, sumxi3, sumxi2];
B = [sumyi; sumxiyi; sumxi2yi];
S = inv(A)*B;
% plot the regression parabola over the data range
xmin = min(x); xmax = max(x);
dx = (xmax - xmin)/100;
w  = xmin:dx:xmax;
fw = S(1)*w.^2 + S(2)*w + S(3);
plot(w, fw)
hold on
plot(x, y, '*r', 'linewidth', 1)
xlabel('x');
ylabel('y');
title('Curve Fitting using parabola Regression')

Figure 4.11: Plot of the least squares parabola for Example 4.12
Chapter 5

Numerical Differentiation and Integration

5.1 Numerical Differentiation: Finite Differences
The first question that comes to mind is: why do we need to approximate derivatives at all? After all, we know how to differentiate many functions analytically. Nevertheless, there are several reasons why we still need approximate derivatives:

• We may know the values of a function only at a finite set of sample points, in which case we cannot compute its exact derivatives.

• There are times in which exact formulas are available, but they are very complicated, to the point that an exact computation of the derivative requires a lot of function evaluations. It might be significantly simpler to approximate the derivative instead of computing its exact value.
• When approximating solutions to ordinary (or partial) differential equations, we typically represent the solution as a discrete approximation that is defined on a grid. Since we then have to evaluate derivatives at the grid points, we need to be able to come up with methods for approximating the derivatives at these points, and again, this will typically be done using only values that are defined on a lattice. The underlying function itself (which in this case is the solution of the equation) is unknown.
Suppose that a variable f(x) depends on another variable x, but we only know the values of f at a finite set of points, e.g., as data from an experiment or a simulation. To derive finite difference approximations, we expand f in Taylor series about x:

f(x+h) = f(x) + h f'(x) + \frac{h^2}{2} f''(x) + \frac{h^3}{6} f'''(x) + \frac{h^4}{24} f^{(4)}(x) + ...   (5.1)

f(x-h) = f(x) - h f'(x) + \frac{h^2}{2} f''(x) - \frac{h^3}{6} f'''(x) + \frac{h^4}{24} f^{(4)}(x) - ...   (5.2)

f(x+2h) = f(x) + 2h f'(x) + 4\frac{h^2}{2} f''(x) + 8\frac{h^3}{6} f'''(x) + 16\frac{h^4}{24} f^{(4)}(x) + ...   (5.3)

f(x-2h) = f(x) - 2h f'(x) + 4\frac{h^2}{2} f''(x) - 8\frac{h^3}{6} f'''(x) + 16\frac{h^4}{24} f^{(4)}(x) - ...   (5.4)

and so on.
Figure 5.1: diagram of the forward-difference approximation of the function f(x)
5.1.1 Finite Difference Formulas for f'(x)
There are many possible finite difference formulas for f'(x). For example, from equation (5.1), truncating after the second-order term,

f(x+h) = f(x) + h f'(x) + \frac{h^2}{2} f''(\xi_+)   (5.5)

Note that we have replaced the terms in h^2 and higher by the corresponding remainder term. Solving for f'(x),

f'(x) = \frac{f(x+h) - f(x)}{h} - \frac{h}{2} f''(\xi_+); \quad \xi_+ \in [x, x+h]

or

f'(x) = \frac{f(x+h) - f(x)}{h}   (5.6)

This formula has error of O(h) and is called the forward-difference approximation of f'(x); see Figure 5.1.

By the same procedure we can get from (5.2):

f(x-h) = f(x) - h f'(x) + \frac{h^2}{2} f''(\xi_-)   (5.7)
Figure 5.2: diagram of the central-difference approximation of the function f(x)
and

f'(x) = \frac{f(x) - f(x-h)}{h}   (5.8)

This formula has error of O(h) and is called the backward-difference approximation of f'(x); see Figure 5.2.
Subtracting the expansion (5.2) from (5.1) and solving for f'(x) gives

f'(x) = \frac{f(x+h) - f(x-h)}{2h} - \frac{h^2}{6} f'''(\xi); \quad \xi \in [x-h, x+h]   (5.9)

or

f'(x) = \frac{f(x+h) - f(x-h)}{2h}   (5.10)
The error in the central-difference formula is O(h^2), so it is ultimately more accurate than a forward- or backward-difference scheme. By the same procedure we can get (Homework):

f'(x) = \frac{-3f(x) + 4f(x+h) - f(x+2h)}{2h} + O(h^2)   (5.11)

which is a forward difference approximation,

f'(x) = \frac{3f(x) - 4f(x-h) + f(x-2h)}{2h} + O(h^2)   (5.12)

which is a backward difference approximation, and

f'(x) = \frac{-f(x+2h) + 8f(x+h) - 8f(x-h) + f(x-2h)}{12h} + O(h^4)   (5.13)

which is a central finite difference formula. There are many other formulas for finite difference approximations of higher order.

Example 5.1. Let f(x) = x e^x. Use formulas (5.6), (5.8), (5.10), (5.11), (5.12) and (5.13) with h = 0.1 to approximate f'(2.0), and compare with the exact value f'(2.0) = 3e^2 = 22.1672.

Solution:
x = 2.0, h = 0.1
Figure 5.3: diagram of the solution of Example 5.1; m is the tangent line and the others are the approximated tangent lines
Then from formula (5.6):

f'(2) = \frac{f(2.1) - f(2)}{0.1} = \frac{17.14895682 - 14.7781122}{0.1} = 23.70844619

The absolute error is |22.1672 - 23.70844619| = 1.54124619; the relative error is |1.54124619/23.70844619| = 0.065008. See the tangent line m1 in Figure 5.3.
From formula (5.8):

f'(2) = \frac{f(2) - f(1.9)}{0.1} = \frac{14.7781122 - 12.70319944}{0.1} = 20.74912758

The absolute error is |22.1672 - 20.74912758| = 1.41807242; the relative error is |1.41807242/20.74912758| = 0.068343713. See the tangent line m2 in Figure 5.3.
From formula (5.10):

f'(2) = \frac{f(2.1) - f(1.9)}{0.2} = \frac{17.14895682 - 12.70319944}{0.2} = 22.22878688

The absolute error is |22.1672 - 22.22878688| = 0.06158688; the relative error is |0.06158688/22.22878688| = 0.002771. See the tangent line m3 in Figure 5.3.
From formula (5.11):

f'(2) = \frac{-3f(2) + 4f(2.1) - f(2.2)}{0.2} = \frac{-3(14.7781122) + 4(17.14895682) - 19.8550297}{0.2} = 22.03230487

The absolute error is |22.1672 - 22.03230487| = 0.13489513; the relative error is |0.13489513/22.03230487| = 0.006122606. See the tangent line m4 in Figure 5.3.
From formula (5.12):

f'(2) = \frac{3f(2) - 4f(1.9) + f(1.8)}{0.2} = \frac{3(14.7781122) - 4(12.70319944) + 10.88936544}{0.2} = 22.05452134

The absolute error is |22.1672 - 22.05452134| = 0.11267866; the relative error is |0.11267866/22.05452134| = 0.005109096. See the tangent line m5 in Figure 5.3.
From formula (5.13):

f'(2) = \frac{-f(2.2) + 8f(2.1) - 8f(1.9) + f(1.8)}{1.2} = \frac{-19.8550297 + 8(17.14895682) - 8(12.70319944) + 10.88936544}{1.2} = 22.16699562

The absolute error is |22.1672 - 22.16699562| = 0.00020438; the relative error is |0.00020438/22.16699562| = 0.0000092. See the tangent line m6 in Figure 5.3.
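A minimal MATLAB sketch (assuming f(x) = x e^x and h = 0.1 as above; not the notes' own listing) that reproduces these first-derivative approximations:

f = @(x) x.*exp(x);    % the function of Example 5.1
x = 2; h = 0.1;
fwd  = (f(x+h) - f(x))/h                       % forward difference (5.6):  23.7084
bwd  = (f(x) - f(x-h))/h                       % backward difference (5.8): 20.7491
ctr  = (f(x+h) - f(x-h))/(2*h)                 % central difference (5.10): 22.2288
ctr4 = (-f(x+2*h) + 8*f(x+h) - 8*f(x-h) + f(x-2*h))/(12*h)   % (5.13): 22.1670
exact = exp(x)*(1 + x)                         % f'(x) = e^x (1 + x) = 22.1672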
5.1.2 Finite Difference Formulas for f''(x)

To get a formula for the second derivative, we choose the coefficients to pick off the first two terms of the Taylor expansions (5.1) and (5.2):
f(x+h) = f(x) + h f'(x) + \frac{h^2}{2} f''(x) + \frac{h^3}{6} f'''(x) + \frac{h^4}{24} f^{(4)}(\xi_+)

f(x-h) = f(x) - h f'(x) + \frac{h^2}{2} f''(x) - \frac{h^3}{6} f'''(x) + \frac{h^4}{24} f^{(4)}(\xi_-)

then

f(x+h) - 2f(x) + f(x-h) = h^2 f''(x) + \frac{h^4}{24} \left[ f^{(4)}(\xi_+) + f^{(4)}(\xi_-) \right]

where \xi_+ \in [x, x+h] and \xi_- \in [x-h, x]. It follows that

f''(x) = \frac{f(x+h) - 2f(x) + f(x-h)}{h^2} - \frac{h^2}{12} f^{(4)}(\xi), \quad \xi \in [x-h, x+h]

or

f''(x) = \frac{f(x+h) - 2f(x) + f(x-h)}{h^2} + O(h^2)
There are also one-sided approximations of the same order; the standard second-order forward and backward versions are

f''(x) = \frac{2f(x) - 5f(x+h) + 4f(x+2h) - f(x+3h)}{h^2} + O(h^2)

f''(x) = \frac{2f(x) - 5f(x-h) + 4f(x-2h) - f(x-3h)}{h^2} + O(h^2)

and a higher-order centered difference approximation:

f''(x) = \frac{-f(x+2h) + 16f(x+h) - 30f(x) + 16f(x-h) - f(x-2h)}{12h^2} + O(h^4)
Example 5.2. Consider the same values given in Example 5.1 and use all the applicable formulas to approximate f''(2.0).
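A minimal MATLAB sketch for this computation (assuming f(x) = x e^x and h = 0.1 as in Example 5.1):

f = @(x) x.*exp(x);   % the function of Example 5.1
x = 2; h = 0.1;
% centered difference, O(h^2)
d2  = (f(x+h) - 2*f(x) + f(x-h))/h^2                                      % 29.5932
% higher-order centered difference, O(h^4)
d2c = (-f(x+2*h) + 16*f(x+h) - 30*f(x) + 16*f(x-h) - f(x-2*h))/(12*h^2)   % 29.5562
exact = exp(x)*(2 + x)                                                    % f''(2) = 4e^2 = 29.5562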
Figure 5.4: Integration by the trapezoidal rule
5.2 Numerical Integration

The need often arises for evaluating the definite integral of a function that has no explicit antiderivative, or whose antiderivative is not easy to obtain. The basic idea is to approximate \int_a^b f(x)\,dx by the integral of a simpler function.

To use the trapezoidal rule to approximate \int_a^b f(x)\,dx, we divide the interval a \le x \le b into n subintervals of equal width \Delta x = (b-a)/n: from a to x_1, from x_1 to x_2, and finally from x_{n-1} to b. The total area is

\int_a^b f(x)\,dx = \int_a^{x_1} f(x)\,dx + \int_{x_1}^{x_2} f(x)\,dx + ... + \int_{x_{n-1}}^{b} f(x)\,dx = \sum_{k=1}^{n} \int_{x_{k-1}}^{x_k} f(x)\,dx
In Figure 5.4, the area under the curve over the first subinterval is approximated by the area of the trapezoid a P_0 P_1 x_1, which is equal to \frac{1}{2}(y_0 + y_1)\Delta x, plus the area of the trapezoid x_1 P_1 P_2 x_2, which is equal to \frac{1}{2}(y_1 + y_2)\Delta x, and so on. Then the trapezoidal approximation becomes

T = \frac{1}{2}(y_0 + y_1)\Delta x + \frac{1}{2}(y_1 + y_2)\Delta x + ... + \frac{1}{2}(y_{n-1} + y_n)\Delta x

or

T = \left( \frac{1}{2} y_0 + y_1 + y_2 + ... + y_{n-1} + \frac{1}{2} y_n \right) \Delta x   (5.14)
Example: Use the trapezoidal rule with n = 4 to approximate the definite integral

\int_1^2 x^2\,dx

Compare with the exact value, and compute the absolute and relative errors.

Solution:
The exact value is

\int_1^2 x^2\,dx = \frac{x^3}{3}\Big|_1^2 = \frac{7}{3} = 2.3333333   (5.15)
For the trapezoidal rule approximation we have

x_0 = a = 1; \quad x_n = b = 2; \quad n = 4

\Delta x = \frac{b-a}{n} = \frac{2-1}{4} = 0.25

y = f(x) = x^2

Then,

x_0 = a = 1;                y_0 = f(x_0) = 1^2 = 1
x_1 = a + \Delta x = 5/4;   y_1 = f(x_1) = (5/4)^2 = 25/16
x_2 = a + 2\Delta x = 6/4;  y_2 = f(x_2) = (6/4)^2 = 36/16
x_3 = a + 3\Delta x = 7/4;  y_3 = f(x_3) = (7/4)^2 = 49/16
x_4 = b = 2;                y_4 = f(x_4) = (8/4)^2 = 64/16

T = \left( \frac{1}{2} y_0 + y_1 + y_2 + y_3 + \frac{1}{2} y_4 \right) \Delta x = \left( \frac{1}{2}(1) + \frac{25}{16} + \frac{36}{16} + \frac{49}{16} + \frac{1}{2}\cdot\frac{64}{16} \right) \frac{1}{4} = \frac{75}{32} = 2.34375   (5.16)
From (5.15) and (5.16), we find that the absolute error is |2.3333333 - 2.34375| = 0.0104167 and the relative error is |0.0104167/2.34375| = 0.0044444.

Example: Use the trapezoidal rule and Simpson's rule with n = 4 to approximate \int_1^2 \frac{1}{x}\,dx.

Solution:
Homework (the analytical value of this definite integral is ln 2 = 0.6931).
For the trapezoidal rule we can use the following MATLAB code:

Matlab Code 5.5. Trapezoidal Rule
% *********** Trapezoidal Rule **************
% estimates the value of the integral of y = f(x)
% from a to b by using the trapezoidal rule
clc
clear
close all
a = 1;        % the start of integral interval
b = 2;        % the end of integral interval
n = 4;        % the number of subintervals
h = (b-a)/n;
Area = 0;
x = a:h:b;    % to compute the xi values
% this example: f(x) = x^2
y = x.^2;     % to compute the yi values

for i = 2:n
    Area = Area + 2*y(i);
end
Area = Area + y(1) + y(n+1);
Area = Area*h/2
Figure 5.5: Simpson's rule of integration
5.2.2 Simpson's Rule

For numerical integration, let the curve of Figure 5.5 be represented by the parabola

y = \alpha x^2 + \beta x + \gamma   (5.17)

The area under this parabola from -h to h is

Area|_{-h}^{h} = \int_{-h}^{h} (\alpha x^2 + \beta x + \gamma)\,dx = \left( \frac{\alpha x^3}{3} + \frac{\beta x^2}{2} + \gamma x \right) \Big|_{-h}^{h}

= \frac{\alpha h^3}{3} + \frac{\beta h^2}{2} + \gamma h - \left( -\frac{\alpha h^3}{3} + \frac{\beta h^2}{2} - \gamma h \right)

= \frac{2\alpha h^3}{3} + 2\gamma h = \frac{1}{3} h \left( 2\alpha h^2 + 6\gamma \right)   (5.18)
The curve passes through the three points (-h, y_0), (0, y_1), and (h, y_2). Then, by equation (5.17) we have:

y_0 = \alpha h^2 - \beta h + \gamma   (5.19)
y_1 = \gamma   (5.20)
y_2 = \alpha h^2 + \beta h + \gamma   (5.21)
We can now evaluate the coefficients \alpha, \beta and \gamma to express (5.18) in terms of y_0, y_1 and y_2. This is done with the following procedure.
By substitution of (5.20) into (5.19) and (5.21) and rearranging we obtain

\alpha h^2 - \beta h = y_0 - y_1   (5.22)
\alpha h^2 + \beta h = y_2 - y_1   (5.23)

Addition of (5.22) and (5.23) yields

2\alpha h^2 = y_0 - 2y_1 + y_2   (5.24)
By substitution of (5.20) and (5.24) into (5.18) we get

Area|_{-h}^{h} = \frac{1}{3} h \left( 2\alpha h^2 + 6\gamma \right) = \frac{1}{3} h \left( (y_0 - 2y_1 + y_2) + 6y_1 \right)   (5.25)

or

Area|_{-h}^{h} = \frac{1}{3} h (y_0 + 4y_1 + y_2)   (5.26)

Equation (5.26) gives the area under any parabola in terms of its values at its ends and its midpoint. Thus, in Figure 5.6 the area under segment AB is

Area|_{AB} = \frac{1}{3} h (y_0 + 4y_1 + y_2)   (5.27)
Figure 5.6: Simpson's rule of integration by successive segments
Likewise, the area under segment BC is

Area|_{BC} = \frac{1}{3} h (y_2 + 4y_3 + y_4)   (5.28)

and so on. When the areas under each segment are added, we obtain

Area = \frac{h}{3} (y_0 + 4y_1 + 2y_2 + 4y_3 + 2y_4 + ... + 4y_{n-1} + y_n)   (5.29)

This is Simpson's rule of numerical integration. Since each segment covers two subintervals, the number n of subintervals must be even, and h is found from

h = \frac{b - a}{n}; \quad n \text{ even}   (5.30)
5.2.3 Solution:

For Simpson's rule applied to \int_1^2 \frac{1}{x}\,dx we have

a = x_0 = 1; \quad b = x_4 = 2; \quad n = 4; \quad \text{then} \quad h = \frac{b-a}{n} = 0.25

x_0 = 1   x_1 = a + h = 1.25   x_2 = a + 2h = 1.5   x_3 = a + 3h = 1.75   x_4 = 2.0
y_0 = 1   y_1 = 0.8            y_2 = 0.66667        y_3 = 0.57143         y_4 = 0.5

From equation (5.29) we have

Area = \frac{h}{3}[y_0 + 4y_1 + 2y_2 + 4y_3 + y_4] = \frac{0.25}{3}[1 + 4(0.8) + 2(0.66667) + 4(0.57143) + 0.5] = 0.69325

The absolute error is |0.6931 - 0.69325| = 0.00015 and the relative error is |0.00015/0.69325| = 0.00021637.
For Simpson's rule we can use the following MATLAB code:
% *********** Simpson's Rule **************
% estimates the value of the integral of y = f(x)
% from a to b by using Simpson's rule
clc
clear
close all
a = 1;        % the start of integral interval
b = 2;        % the end of integral interval
n = 4;        % the number of subintervals (must be even)
h = (b-a)/n;
Area = 0;
x = a:h:b;    % to compute the xi values
% this example: f(x) = x^2
y = x.^2;     % to compute the yi values

for i = 2:2:n
    Area = Area + 4*y(i);
end
for i = 3:2:n-1
    Area = Area + 2*y(i);
end
Area = Area + y(1) + y(n+1);
Area = Area*h/3
5.2.4 EXERCISE
Use the trapezoidal approximation and Simpson's rule to compute the values of the following definite integrals with n = 4 and n = 8, and compare your results with the analytical values.
1. y = \int_0^2 e^{-x^2}\,dx
2. y = \int_2^4 \sqrt{x}\,dx
3. y = \int_2^4 \sqrt{x}\,dx
4. y = \int_0^2 x^2\,dx
5. y = \int_0^\pi \sin(x)\,dx
6. y = \int_0^1 \frac{1}{x^2+1}\,dx
Figure 5.7: Simpson's 3/8 rule.
5.3 Simpson's 3/8 Rule

Simpson's 3/8 rule approximates the integrand by a cubic polynomial p_3(x) through four equally spaced points (Figure 5.7):

\int_a^b f(x)\,dx \approx \int_a^b p_3(x)\,dx

\int_a^b p_3(x)\,dx = \frac{3h}{8}\left[f(x_0) + 3f(x_1) + 3f(x_2) + f(x_3)\right] + O(h^4), \quad h = \frac{b-a}{3}

The method is known as the 3/8 rule because h is multiplied by 3/8. To apply Simpson's 3/8 rule, the interval [a, b] must be divided into a number n of subintervals that is a multiple of 3, and the composite Simpson's 3/8 rule will be

\int_a^b p_3(x)\,dx = \frac{3h}{8}[y_0 + 3y_1 + 3y_2 + y_3] + \frac{3h}{8}[y_3 + 3y_4 + 3y_5 + y_6] + \frac{3h}{8}[y_6 + 3y_7 + 3y_8 + y_9] + ... + \frac{3h}{8}[y_{n-3} + 3y_{n-2} + 3y_{n-1} + y_n]

= \frac{3h}{8}[y_0 + 3y_1 + 3y_2 + 2y_3 + 3y_4 + 3y_5 + 2y_6 + ... + 3y_{n-2} + 3y_{n-1} + y_n]
5.3.1 Boole's Rule

Boole's rule approximates the integrand by a quartic polynomial p_4(x) through five equally spaced points:

\int_a^b f(x)\,dx \approx \int_a^b p_4(x)\,dx

\int_a^b p_4(x)\,dx = \frac{2h}{45}\left[7f(x_0) + 32f(x_1) + 12f(x_2) + 32f(x_3) + 7f(x_4)\right] + O(h^7)

To apply Boole's rule, the interval [a, b] must be divided into a number n of subintervals that is a multiple of 4.
5.3.2 Weddle's Rule

Weddle's rule is a seven-point rule, so we need seven equally spaced points to form it. Weddle's rule is given by

\int_a^b f(x)\,dx \approx \frac{3h}{10}\left[f(x_0) + 5f(x_1) + f(x_2) + 6f(x_3) + f(x_4) + 5f(x_5) + f(x_6)\right] + O(h^7)

so the number n of subintervals must be a multiple of 6.
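A small MATLAB check of Weddle's rule (assumed test integrand f(x) = x^2 on [1, 2] with n = 6), which the rule integrates exactly:

a = 1; b = 2; n = 6;
h = (b-a)/n;
x = a:h:b;
y = x.^2;
w = [1, 5, 1, 6, 1, 5, 1];     % Weddle's weights
Area = 3*h/10*sum(w.*y)        % 2.3333, the exact value 7/3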
5.3.3 EXERCISE

Use Simpson's 3/8 rule, Boole's rule, and Weddle's rule to compute the values of the following definite integrals, and compare your results with the analytical values.
1. y = \int_0^2 e^{-x^2}\,dx
2. y = \int_2^4 \sqrt{x}\,dx
3. y = \int_2^4 \sqrt{x}\,dx
4. y = \int_0^2 x^2\,dx
5. y = \int_0^\pi \sin(x)\,dx
6. y = \int_0^1 \frac{1}{x^2+1}\,dx
Chapter 6

Differential Equations
This chapter is an introduction to several methods that can be used to obtain approximate solutions of differential equations. Such approximations are necessary when no exact solution can be found. The Taylor series, Euler, and Runge-Kutta methods are presented below.

6.1 Taylor Series Method

Recall the Taylor expansion

f(x+h) = f(x) + h f'(x) + \frac{h^2}{2} f''(x) + \frac{h^3}{6} f'''(x) + \frac{h^4}{24} f^{(4)}(x) + ...

Replacing f(x+h) by y_1, f(x) by y_0, and the derivatives of f at x_0 by y_0', y_0'', ..., we get

y_1 = y_0 + h y_0' + \frac{h^2}{2} y_0'' + \frac{h^3}{6} y_0''' + \frac{h^4}{24} y_0^{(4)} + ...

and, in general,

y_{i+1} = y_i + h y_i' + \frac{h^2}{2} y_i'' + \frac{h^3}{6} y_i''' + \frac{h^4}{24} y_i^{(4)} + ...   (6.1)
Example 6.1. Use the Taylor series method to obtain approximate values of the solution y(x) of the differential equation

\frac{dy}{dx} = -xy   (6.2)

for values x_0 = 0.0, x_1 = 0.1, x_2 = 0.2, x_3 = 0.3, x_4 = 0.4, and x_5 = 0.5, with the initial condition y(0) = 1. (Use the Taylor series up to y^{(4)}.)
Solution:
For this example, h = x_1 - x_0 = 0.1, and by substitution into equation (6.1) we have:

y_{i+1} = y_i + 0.1 y_i' + \frac{0.01}{2} y_i'' + \frac{0.001}{6} y_i''' + \frac{0.0001}{24} y_i^{(4)} + ...   (6.3)

The required derivatives are obtained by differentiating both sides of (6.2):

y' = -xy \quad \Longrightarrow \quad y_i' = -x_i y_i   (6.4)
where x_i represents x_0 = 0.0, x_1 = 0.1, x_2 = 0.2, x_3 = 0.3, and x_4 = 0.4. Substituting these values into (6.4), we obtain the following relations:

y_0' = -x_0 y_0 = -0 y_0 = 0
y_1' = -x_1 y_1 = -0.1 y_1
y_2' = -x_2 y_2 = -0.2 y_2   (6.5)
y_3' = -x_3 y_3 = -0.3 y_3
y_4' = -x_4 y_4 = -0.4 y_4

Differentiating (6.4) again, y'' = -y - x y' = (x^2 - 1) y, so

y_0'' = (x_0^2 - 1) y_0 = -y_0
y_1'' = (x_1^2 - 1) y_1 = -0.99 y_1
y_2'' = (x_2^2 - 1) y_2 = -0.96 y_2   (6.6)
y_3'' = (x_3^2 - 1) y_3 = -0.91 y_3
y_4'' = (x_4^2 - 1) y_4 = -0.84 y_4

Similarly, y''' = (3x - x^3) y, so

y_0''' = (3x_0 - x_0^3) y_0 = 0
y_1''' = (3x_1 - x_1^3) y_1 = 0.299 y_1
y_2''' = (3x_2 - x_2^3) y_2 = 0.592 y_2   (6.7)
y_3''' = (3x_3 - x_3^3) y_3 = 0.873 y_3
y_4''' = (3x_4 - x_4^3) y_4 = 1.136 y_4

and y^{(4)} = (x^4 - 6x^2 + 3) y, so

y_0^{(4)} = (x_0^4 - 6x_0^2 + 3) y_0 = 3 y_0
y_1^{(4)} = (x_1^4 - 6x_1^2 + 3) y_1 = 2.9401 y_1
y_2^{(4)} = (x_2^4 - 6x_2^2 + 3) y_2 = 2.7616 y_2   (6.8)
y_3^{(4)} = (x_3^4 - 6x_3^2 + 3) y_3 = 2.4681 y_3
y_4^{(4)} = (x_4^4 - 6x_4^2 + 3) y_4 = 2.0656 y_4
By substitution of (6.5) through (6.8) into (6.3), and using the given initial condition y_0 = 1, we obtain:

y_1 = y_0 + 0.1 y_0' + \frac{0.01}{2} y_0'' + \frac{0.001}{6} y_0''' + \frac{0.0001}{24} y_0^{(4)}
    = 1 + 0.1(0) + \frac{0.01}{2}(-1) + \frac{0.001}{6}(0) + \frac{0.0001}{24}(3)
    = 0.99501

Similarly,

y_2 = y_1 + 0.1 y_1' + \frac{0.01}{2} y_1'' + \frac{0.001}{6} y_1''' + \frac{0.0001}{24} y_1^{(4)}
    = (1 - 0.01 - 0.00495 + 0.00005 + 0.00001) y_1
    = 0.98511(0.99501) = 0.980194

y_3 = y_2 + 0.1 y_2' + \frac{0.01}{2} y_2'' + \frac{0.001}{6} y_2''' + \frac{0.0001}{24} y_2^{(4)}
    = (1 - 0.02 - 0.0048 + 0.0001 + 0.00001) y_2
    = (0.97531)(0.980194) = 0.955993

y_4 = y_3 + 0.1 y_3' + \frac{0.01}{2} y_3'' + \frac{0.001}{6} y_3''' + \frac{0.0001}{24} y_3^{(4)}
    = (1 - 0.03 - 0.00455 + 0.00015 + 0.00001) y_3
    = (0.9656)(0.955993) = 0.923107

y_5 = y_4 + 0.1 y_4' + \frac{0.01}{2} y_4'' + \frac{0.001}{6} y_4''' + \frac{0.0001}{24} y_4^{(4)}
    = (1 - 0.04 - 0.0042 + 0.00019 + 0.00001) y_4
    = (0.95600)(0.923107) = 0.88249
We can compare the approximate values with the analytical solution y = e^{-x^2/2} of the differential equation \frac{dy}{dx} = -xy. (Homework.)
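A minimal MATLAB sketch of the Taylor series method of Example 6.1 (assumed variable names), compared against the analytical solution:

h = 0.1; x = 0:h:0.5;
y = zeros(size(x)); y(1) = 1;          % initial condition y(0) = 1
for i = 1:length(x)-1
    d1 = -x(i)*y(i);                   % y'   = -x y            (6.4)
    d2 = (x(i)^2 - 1)*y(i);            % y''  = (x^2 - 1) y     (6.6)
    d3 = (3*x(i) - x(i)^3)*y(i);       % y''' = (3x - x^3) y    (6.7)
    d4 = (x(i)^4 - 6*x(i)^2 + 3)*y(i); % y(4) = (x^4-6x^2+3) y  (6.8)
    y(i+1) = y(i) + h*d1 + h^2/2*d2 + h^3/6*d3 + h^4/24*d4;
end
[y; exp(-x.^2/2)]                      % approximate vs analytical values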
6.2 Euler's Method

Euler's method follows from the Taylor expansion

y_{i+1} = y_i + h y_i' + \frac{h^2}{2} y_i'' + \frac{h^3}{6} y_i''' + \frac{h^4}{24} y_i^{(4)} + ...

Retaining only the linear terms of the Taylor expansion gives

y(x_1) = y(x_0) + h y'(x_0) + \frac{h^2}{2} y''(\xi_0)   (6.9)

for some \xi_0 between x_0 and x_1. In general, expanding y(x_{i+1}) about x_i yields

y(x_{i+1}) = y(x_i) + h y'(x_i) + \frac{h^2}{2} y''(\xi_i)

Dropping the remainder term gives Euler's method for the equation y' = f(x, y):

y_{i+1} = y_i + h f(x_i, y_i)

Example 6.2. Use Euler's method with h = 0.1 to approximate the solution of the differential equation y' = -y + 2x with the initial condition y(0) = 1.
Figure 6.1: Comparison of Euler's and exact solutions in Example 6.2
Solution:
We have f(x, y) = -y + 2x. Starting with x = 0, y = 1, and h = 0.1:

y_1 = y(0.1) = y_0 + h f(x_0, y_0) = 1 + 0.1 f(0, 1) = 1 + 0.1(-1) = 0.9
y_2 = y_1 + h f(x_1, y_1) = 0.9 + 0.1 f(0.1, 0.9) = 0.9 + 0.1(-0.7) = 0.83

and so on. The results are compared with the exact solution in Figure 6.1.
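A minimal Euler's method sketch in MATLAB for Example 6.2 (assumed variable names), plotted against the exact solution y = 2x - 2 + 3e^{-x}:

f = @(x,y) -y + 2*x;
h = 0.1; x = 0:h:1;
y = zeros(size(x)); y(1) = 1;          % initial condition y(0) = 1
for i = 1:length(x)-1
    y(i+1) = y(i) + h*f(x(i), y(i));   % y_{i+1} = y_i + h f(x_i, y_i)
end
plot(x, y, 'o-', x, 2*x - 2 + 3*exp(-x), '-')
legend('Euler', 'exact'); xlabel('x'); ylabel('y')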
6.3 Runge Kutta Method
The Runge-Kutta method is the most widely used numerical method for solving differential equations. It differs from the Taylor series method in that we use values of the first derivative y' = f(x, y) at several points instead of the values of successive derivatives at a single point.
For a Runge-Kutta method of order 2, the following formulas are applicable.

Runge-Kutta Method of Order 2:

k_1 = h f(x_n, y_n)
k_2 = h f(x_n + h, y_n + k_1)   (6.12)
y_{n+1} = y_n + \frac{1}{2}(k_1 + k_2)
When higher accuracy is desired, we can use order 3 or order 4. The applicable formulas are as follows.

Runge-Kutta Method of Order 3:

I_1 = h f(x_n, y_n)
I_2 = h f(x_n + \frac{h}{2}, y_n + \frac{I_1}{2})
I_3 = h f(x_n + h, y_n + 2I_2 - I_1)   (6.13)
y_{n+1} = y_n + \frac{1}{6}(I_1 + 4I_2 + I_3)
Runge-Kutta Method of Order 4:

m_1 = h f(x_n, y_n)
m_2 = h f(x_n + \frac{h}{2}, y_n + \frac{m_1}{2})
m_3 = h f(x_n + \frac{h}{2}, y_n + \frac{m_2}{2})   (6.14)
m_4 = h f(x_n + h, y_n + m_3)
y_{n+1} = y_n + \frac{1}{6}(m_1 + 2m_2 + 2m_3 + m_4)
Example 6.3. Compute the approximate value of y at x = 0.2 from the solution y(x) of the differential equation

y' = x + y^2   (6.15)

with the initial condition y(0) = 1 (take h = 0.2).

Solution:
For order 2, we use (6.12). Since we are given that y(0) = 1, we begin with x = 0 and y = 1. Then
ad
k1 = hf (xn , yn ) = hf (0, 1)
an
= 0.2(0 + 12 ) = 0.2
oh
k2 = hf (xn + h, yn + h) = hf (0.2, 1.2)
M
= 0.2 0.2 + (1.2)2 = 0.328
1
r.
y1 = y0 + (k1 + k2 )
2
D
1
= 1 + (0.2 + 0.328) = 1.264
&
2
d
hi
as
R
il
Ad
r.
D
145
For order 3, we use (6.13). Then

I_1 = h f(x_n, y_n) = 0.2
I_2 = h f(x_n + \frac{h}{2}, y_n + \frac{I_1}{2}) = h f(0 + \frac{0.2}{2}, 1 + \frac{0.2}{2}) = 0.2\left(0.1 + (1.1)^2\right) = 0.262
I_3 = h f(x_n + h, y_n + 2I_2 - I_1) = 0.2 f(0 + 0.2, 1 + 2(0.262) - 0.2) = 0.2\left(0.2 + (1.324)^2\right) = 0.391
y_1 = y_0 + \frac{1}{6}(I_1 + 4I_2 + I_3) = 1 + \frac{1}{6}(0.2 + 4(0.262) + 0.391) = 1.273
For order 4, we use (6.14). Then

m_1 = h f(x_n, y_n) = 0.2
m_2 = h f(x_n + \frac{h}{2}, y_n + \frac{m_1}{2}) = 0.2 f(0 + \frac{0.2}{2}, 1 + \frac{0.2}{2}) = 0.2\left(0.1 + (1.1)^2\right) = 0.262
m_3 = h f(x_n + \frac{h}{2}, y_n + \frac{m_2}{2}) = 0.2 f(0 + \frac{0.2}{2}, 1 + \frac{0.262}{2}) = 0.2\left(0.1 + (1.131)^2\right) = 0.276
m_4 = h f(x_n + h, y_n + m_3) = 0.2 f(0.2, 1.276) = 0.2\left(0.2 + (1.276)^2\right) = 0.366
y_1 = y_0 + \frac{1}{6}(m_1 + 2m_2 + 2m_3 + m_4) = 1 + \frac{1}{6}(0.2 + 2(0.262) + 2(0.276) + 0.366) = 1.274
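A one-step MATLAB sketch of the order-4 computation above (assumed variable names):

f = @(x,y) x + y.^2;
x0 = 0; y0 = 1; h = 0.2;
m1 = h*f(x0, y0);
m2 = h*f(x0 + h/2, y0 + m1/2);
m3 = h*f(x0 + h/2, y0 + m2/2);
m4 = h*f(x0 + h, y0 + m3);
y1 = y0 + (m1 + 2*m2 + 2*m3 + m4)/6    % y(0.2) = 1.274, as in Example 6.3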
6.3.1 EXERCISE
Compute the approximate value of y(x) for the following differential equations using the Taylor series, Euler, and Runge-Kutta methods of order 2, 3, and 4.

1. y' = 3x^2, with h = 0.1 and the initial condition y(2) = 0.5
2. y' = -y^3 + 0.2\sin(x), with h = 0.1 and the initial condition y(0) = 0.707
3. y' = x^2 - y, with h = 0.1 and the initial condition y(0) = 1