Collocation Methods in The

Contents

1 Introduction
5 Some Proofs
6 Numerical Results
6.1 Introduction
6.2 Test Problems
6.3 Results
Bibliography
Abstract

The aim of this dissertation is to study and program some methods used for the numerical solution of Fredholm integral equations of Hammerstein type.

First we describe the types of integral equations. There are two types, Volterra and Fredholm. Volterra and Fredholm integral equations are of the first kind if the unknown function occurs under the integral sign only, and of the second kind if the unknown function also occurs outside the integral.

Four methods are studied for equations with continuous kernels: the Trapezoidal and Simpson methods following the book by Baker (1978) ([3]), collocation methods following the paper by Kumar and Sloan (1987) ([14]) (cf. also Kumar (1988) ([15]) and Kumar (1990) ([16])), and a pseudospectral method described following the paper by Elnagar and Razzaghi (1996) ([5]).

Finally, MATLAB programs are provided for some of the methods, together with the numerical results obtained from testing several examples.
Chapter 1
Introduction
In this dissertation we first describe the different types of integral equations. In a Volterra integral equation the upper limit of integration is a variable; if the upper limit is a constant, the integral equation is a Fredholm equation. We also distinguish between first kind integral equations, where the unknown function occurs under the integral sign only, and second kind integral equations, where the unknown function also occurs outside the integral sign. The general form of a nonlinear Fredholm integral equation is

    y(x) = f(x) + \int_a^b K(x, s, y(s)) \, ds,   a \le x \le b,                (1.1)

where the kernel K(x, s, y(s)) is continuous in all three of its variables, y(x) is the unknown function, and K(x, s, y(s)) and f(x) are given. We are going to work on Hammerstein integral equations, which have the form

    y(x) = f(x) + \int_a^b K(x, s) g(s, y(s)) \, ds.                            (1.2)
Classes of methods which have been applied for the numerical solution of Fredholm integral equations (FIEs) include: the so-called degenerate kernel method, where the kernel K(x, s) is expanded as a sum of products of functions of x and s; Galerkin methods; collocation and iterated collocation methods; methods based on the use of Chebyshev polynomials; and defect correction methods.
The methods which are analysed in the thesis are collocation type methods applied to Hammerstein Fredholm integral equations of the form (1.2). The particular methods are those of Kumar and Sloan (1987) ([14]) and Elnagar and Razzaghi (1996) ([5]).

To facilitate the understanding of the methods of numerical solution of Fredholm integral equations, two simple quadrature type methods applied to Fredholm integral equations of the form (1.1) are also considered. These quadrature methods are based on the use of the Trapezoidal and Simpson quadrature rules.

The analysis of the methods considered consists of the description of the methods, their programming in MATLAB, and consideration of convergence analysis.
The typing of the thesis was done in LaTeX and the programming in MATLAB.

The Appendix gives a list of the names of the Adobe Acrobat and PostScript files with the contents of the thesis, and the names of the MATLAB programs stored on the CD that accompanies the project.
Chapter 2
Classification of Integral Equations
The general form of an integral equation of the second kind is

    y(x) = f(x) + \int K(x, s) y(s) \, ds.

What we can see is that the unknown function y(x) is under the integral sign [7, p. 1]. In this general form we can also see the function K(x, s), the kernel of the integral equation, which is a function of the two variables x and s. Integral equations can be presented in two ways: with variable limits of integration,

    y(x) = f(x) + \int_a^x K(x, s) y(s) \, ds,                                  (2.1)

or with fixed limits of integration,

    y(x) = f(x) + \int_a^b K(x, s) y(s) \, ds.                                  (2.2)

Integral equations with variable limits are called Volterra integral equations and equations with fixed limits of integration are called Fredholm integral equations. We can have Volterra and Fredholm integral equations of the first and second kind. Volterra equations of the first kind have the form

    f(x) = \int_a^x K(x, s) y(s) \, ds                                          (2.3)
and Volterra equations of the second kind have the form (2.1). Fredholm integral equations of the first kind have the form

    f(x) = \int_a^b K(x, s) y(s) \, ds                                          (2.4)
and Fredholm equations of the second kind have the form (2.2). We call a Volterra or a Fredholm integral equation homogeneous if the function f(x) = 0 [7, p. 17].

An integral equation is called linear in y(x) if, whenever y_1(x) and y_2(x) are solutions, every linear combination c y_1(x) + d y_2(x) is also a solution [7, p. 17].
Finally, we call an integral equation singular if the range of integration is infinite or if its kernel K(x, s) becomes infinite in the range of integration [7, p. 17-18].
What we can see is that Volterra and Fredholm integral equations are similar, their only difference being the limits of integration; however, we cannot use the same methods to solve these two classes of integral equations.
Chapter 3

Numerical Methods for Fredholm Integral Equations

In this chapter we are going to describe two basic numerical integration rules, the Trapezoidal rule in section 3.1 and Simpson's rule in section 3.2 [7, p. 38-41]. We are also going to show how these methods approximate the integral in the Fredholm equation (1.1), following Baker (1978) [3, p. 686-687]. We consider

    y(x) = f(x) + \int_a^b K(x, s, y(s)) \, ds,   x \in [a, b].                 (3.1)
3.1 The Trapezoidal Rule

The composite Trapezoidal rule with n equally spaced nodes x_1, ..., x_n on [a, b], where h = (b - a)/(n - 1), is

    \int_a^b f(x) \, dx \approx h ( \tfrac{1}{2} f(x_1) + f(x_2) + ... + f(x_i) + ... + f(x_{n-1}) + \tfrac{1}{2} f(x_n) ).    (3.3)

Applying this rule to the integral in (3.1) at the nodes x_i gives

    y(x_i) \approx f(x_i) + h \sum_{j=1}^{n} w_j K(x_i, x_j, y(x_j)),   i = 1, ..., n,    (3.4)

which leads to the system

    y_i = f(x_i) + h \sum_{j=1}^{n} w_j K(x_i, x_j, y_j),   i = 1, 2, ..., n,             (3.5)

where y_i denotes the approximation to y(x_i) and the weights are w_1 = w_n = 1/2 and w_j = 1 otherwise.
3.2 Simpson's Rule

Simpson's rule is based on approximating the integrand on each subinterval [x_{i-1}, x_i] by the quadratic through the points (x_{i-1}, f(x_{i-1})), ((x_{i-1} + x_i)/2, f((x_{i-1} + x_i)/2)) and (x_i, f(x_i)), which gives

    \int_{x_{i-1}}^{x_i} f(x) \, dx \approx \frac{x_i - x_{i-1}}{6} ( f(x_{i-1}) + 4 f(\tfrac{x_{i-1} + x_i}{2}) + f(x_i) ).    (3.6)
Applying the composite Simpson rule (with n odd) to the integral in (3.1) gives

    y_i = f(x_i) + \frac{h}{3} ( K(x_i, x_1, y_1) + 4 K(x_i, x_2, y_2) + 2 K(x_i, x_3, y_3) + ... + 4 K(x_i, x_{n-1}, y_{n-1}) + K(x_i, x_n, y_n) )    (3.7)

or

    y_i = f(x_i) + h \sum_{j=1}^{n} w_j K(x_i, x_j, y_j),   i = 1, 2, ..., n,             (3.8)

where now the weights are (w_1, w_2, w_3, ..., w_{n-1}, w_n) = (1/3, 4/3, 2/3, ..., 4/3, 1/3).
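To make the discretisation concrete, the following MATLAB fragment is a minimal sketch (not one of the thesis programs) of solving the nonlinear system (3.5) by simple fixed-point iteration, using the data of test problem 1 of chapter 6, whose true solution is y*(x) = 1; the iteration limits and tolerance are illustrative assumptions.

    % Sketch: quadrature (Nystrom) method (3.5) with trapezoidal weights,
    % solved by fixed-point iteration; data of test problem 1, chapter 6.
    n = 41;
    a = 0; b = 1; h = (b - a)/(n - 1);
    x = linspace(a, b, n).';                  % nodes x_1, ..., x_n
    w = ones(n, 1); w([1 n]) = 0.5;           % trapezoidal weights

    K = @(xi, s, y) exp(xi - s).*cos(s + y);  % K(x, s, y(s)) of test problem 1
    f = @(t) 1 - exp(t - 1)/2*(sin(2) - cos(2)) - exp(t)/2*(cos(1) - sin(1));

    y = f(x);                                 % initial guess
    for it = 1:5000
        ynew = zeros(n, 1);
        for i = 1:n
            ynew(i) = f(x(i)) + h*sum(w .* K(x(i), x, y));
        end
        if norm(ynew - y, inf) < 1e-12, y = ynew; break; end
        y = ynew;
    end
    max(abs(y - 1))                           % error against y*(x) = 1

Replacing w with the Simpson weights of (3.8) gives the corresponding Simpson discretisation.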
Chapter 4

Numerical Methods for Hammerstein FIEs

In this chapter we consider Hammerstein Fredholm integral equations

    y(x) = f(x) + \int_a^b K(x, s) g(s, y(s)) \, ds,   x \in [a, b],            (4.1)

where the functions f(x), K(x, s) and g(s, y(s)) are known and a and b are real numbers. Following Kumar and Sloan (1987) ([14]) we set

    z(x) = g(x, y(x)),   x \in [a, b],                                          (4.2)

so that

    y(x) = f(x) + \int_a^b K(x, s) z(s) \, ds,   x \in [a, b].                  (4.3)
Substituting (4.3) into (4.2) gives an equation for z alone,

    z(x) = g( x, f(x) + \int_a^b K(x, s) z(s) \, ds ),   x \in [a, b].          (4.4)

The function z is approximated by

    z_n(x) = \sum_{j=1}^{n} a_{nj} u_{nj}(x),                                   (4.5)

where j = 1, ..., n, the a_{nj} are unknowns, and the u_{nj}(x) are basis functions. The a_{nj} are determined by collocating equation (4.4) at the points \tau_{ni}, i = 1, ..., n. So from equation (4.4) we obtain
    \sum_{j=1}^{n} a_{nj} u_{nj}(\tau_{ni}) = g( \tau_{ni}, f(\tau_{ni}) + \sum_{j=1}^{n} a_{nj} \int_a^b K(\tau_{ni}, s) u_{nj}(s) \, ds ),   i = 1, ..., n.    (4.7)

The collocation points can be chosen to be \tau_{nj} = (j - 1)/(n - 1), j = 1, 2, ..., n [14, p. 592]. The functions u_{n1}, ..., u_{nn} can be defined as the piecewise linear "hat" functions [14, p. 592]:
    u_{n1}(x) = (x - \tau_{n2})/(\tau_{n1} - \tau_{n2})   if x \in [\tau_{n1}, \tau_{n2}],
    u_{n1}(x) = 0                                          otherwise,

and, for j = 2, ..., n - 1,

    u_{nj}(x) = (x - \tau_{n,j-1})/(\tau_{n,j} - \tau_{n,j-1})   if x \in (\tau_{n,j-1}, \tau_{n,j}],
    u_{nj}(x) = (x - \tau_{n,j+1})/(\tau_{n,j} - \tau_{n,j+1})   if x \in (\tau_{n,j}, \tau_{n,j+1}],
    u_{nj}(x) = 0                                                 otherwise,

with u_{nn} defined analogously by the first branch alone.
After system (4.7) is solved for the a_{nj}, j = 1, 2, ..., n, we use (4.3) again and substitute z_n(s) from formula (4.5). We then obtain the approximation y_n of y:

    y_n(x) = f(x) + \int_a^b K(x, s) z_n(s) \, ds

or

    y_n(x) = f(x) + \sum_{j=1}^{n} a_{nj} \int_a^b K(x, s) u_{nj}(s) \, ds.     (4.8)
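As an illustration, the following MATLAB fragment is a minimal sketch (not the thesis program) of (4.7)-(4.8) for test problem 1 of chapter 6, with hat basis functions, trapezoidal quadrature with ntrap panels for the moment integrals (as used in chapter 6), and fixed-point iteration for the nonlinear system; the iteration control values are illustrative assumptions. Since u_{nj}(\tau_{ni}) = \delta_{ij} for the hat basis, system (4.7) reduces to a_i = g(\tau_i, f(\tau_i) + (Ba)_i).

    % Sketch: Kumar-Sloan collocation (4.7)-(4.8), hat basis, test problem 1.
    n = 11; tau = linspace(0, 1, n).';             % collocation points tau_nj
    K = @(x, s) exp(x - s);                        % (uses implicit expansion)
    g = @(s, y) cos(s + y);
    f = @(t) 1 - exp(t - 1)/2*(sin(2) - cos(2)) - exp(t)/2*(cos(1) - sin(1));

    ntrap = 100; s = linspace(0, 1, ntrap + 1);    % quadrature grid (row)
    ws = ones(1, ntrap + 1)/ntrap; ws([1 end]) = ws([1 end])/2;

    U = zeros(n, ntrap + 1);                       % U(j,k) = u_j(s_k): hat basis
    for j = 1:n
        e = zeros(n, 1); e(j) = 1;
        U(j, :) = interp1(tau, e, s);              % linear interpolant of e_j
    end
    B = K(tau, s) * (ws(:) .* U.');                % B(i,j) ~ int K(tau_i,s)u_j ds

    a = g(tau, f(tau));                            % initial guess for a_nj
    for it = 1:5000                                % fixed point of (4.7)
        anew = g(tau, f(tau) + B*a);
        if norm(anew - a, inf) < 1e-12, a = anew; break; end
        a = anew;
    end
    zn = a.' * U;                                  % z_n on the quadrature grid
    yn = f(tau) + K(tau, s) * (ws(:) .* zn(:));    % y_n at the tau, via (4.8)
    max(abs(yn - 1))                               % error against y*(x) = 1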
For the pseudospectral method of Elnagar and Razzaghi (1996) ([5]), the function z is approximated on [-1, 1] by

    z_N(x) = \sum_{i=0}^{N} a_i \phi_i(x),   x \in [-1, 1],                     (4.13)

where the \phi_l are the Lagrange interpolating polynomials based on the Legendre-Gauss-Lobatto nodes x_0, ..., x_N,

    \phi_l(x) = \frac{(x^2 - 1) L_N'(x)}{N(N + 1) L_N(x_l) (x - x_l)},   l = 0, 1, ..., N,

and L_N is the Legendre polynomial of degree N.
Using (4.13) and the fact that z_N(x_k) = a_k, k = 0, 1, ..., N, we obtain

    a_k = g( x_k, f(x_k) + \int_{-1}^{1} K(x_k, t) \sum_{i=0}^{N} a_i \phi_i(t) \, dt )

or

    a_k = g( x_k, f(x_k) + \sum_{i=0}^{N} a_i \int_{-1}^{1} K(x_k, t) \phi_i(t) \, dt ),

and, approximating the integrals by the Legendre-Gauss-Lobatto quadrature rule,
since

    \phi_i(x_j) = \delta_{ij} = 1 if i = j, 0 if i \ne j,

we find

    a_k = g( x_k, f(x_k) + \sum_{i=0}^{N} a_i W_i K(x_k, x_i) ),                (4.15)

where the x_i are the Legendre-Gauss-Lobatto nodes and the W_j are weights given by the following formula [5, p. 585]:

    W_j = \frac{2}{N(N + 1) [L_N(x_j)]^2},   j = 0, 1, ..., N.                  (4.16)
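As a quick sanity check of (4.16) (not in the original text): for N = 2 the Legendre-Gauss-Lobatto nodes are x_0 = -1, x_1 = 0, x_2 = 1, and L_2(x) = (3x^2 - 1)/2, so L_2(\pm 1) = 1 and L_2(0) = -1/2. Formula (4.16) then gives

    W_0 = W_2 = \frac{2}{6 \cdot 1} = \frac{1}{3},   W_1 = \frac{2}{6 \cdot (1/4)} = \frac{4}{3},

which is exactly Simpson's rule on [-1, 1].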
Finally, from the analogue of (4.3) on [-1, 1],

    y_N(x) = f(x) + \int_{-1}^{1} K(x, s) z_N(s) \, ds

or

    y_N(x) = f(x) + \sum_{i=0}^{N} ( \int_{-1}^{1} K(x, s) \phi_i(s) \, ds ) a_i

or, using the same quadrature rule,

    y_N(x) \approx f(x) + \sum_{i=0}^{N} K(x, x_i) W_i a_i.                     (4.17)
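For illustration, the following MATLAB fragment is a minimal sketch (not the thesis program) of the system (4.15) and the evaluation formula (4.17) for N = 2, using the nodes and weights from the check above. The kernel and g are those of test problem 1 transplanted to [-1, 1], with f manufactured by numerical quadrature so that the true solution is y*(x) = 1; the small nonlinear system is solved with fsolve (which requires the Optimization Toolbox). All of these data choices are illustrative assumptions.

    % Sketch: pseudospectral method (4.15)/(4.17) for N = 2, illustrative data.
    xk = [-1; 0; 1];  W = [1/3; 4/3; 1/3];        % LGL nodes/weights, N = 2
    K = @(x, s) exp(x - s);  g = @(s, y) cos(s + y);
    f = @(t) 1 - arrayfun(@(ti) integral(@(s) exp(ti - s).*cos(s + 1), -1, 1), t);

    A = K(xk, xk.') .* W.';                       % A(k,i) = K(x_k, x_i) W_i
    F = @(a) a - g(xk, f(xk) + A*a);              % residual of system (4.15)
    a = fsolve(F, g(xk, f(xk)));                  % needs Optimization Toolbox
    y = f(xk) + A*a;                              % evaluation formula (4.17)
    max(abs(y - 1))                               % dominated by the N = 2
                                                  % quadrature error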
Chapter 5

Some Proofs
In chapter three, the application of quadrature methods to Fredholm integral equations of the form

    y(x) = f(x) + \int_a^b K(x, s, y(s)) \, ds,   x \in [a, b],

was presented. Here a convergence proof is presented for the quadrature methods applied to linear Fredholm integral equations, following Baker (1978) [3, p. 423-443]. Consider the linear Fredholm integral equation

    y(x) = f(x) + \lambda \int_a^b K(x, s) y(s) \, ds.                          (5.1)

Applying a quadrature rule with weights w_j and nodes x_j to (5.1) gives the discrete equations

    y_i = f(x_i) + \lambda \sum_{j=0}^{n} w_j K(x_i, x_j) y_j,   i = 0, 1, ..., n,    (5.2)

and the Nystrom extension of the solution vector [y_0, y_1, ..., y_n]^T is

    \tilde{y}(x) = f(x) + \lambda \sum_{j=0}^{n} w_j K(x, x_j) y_j.             (5.3)
One theorem is going to be proved, Theorem 5.1, which shows that the error

    \| y(x) - \tilde{y}(x) \|_\infty \to 0.
Before the theorem is stated and proved we need the following preliminaries and notation, following Baker (1978) ([3]). A quadrature rule is written

    J(\phi) = \sum_{j=0}^{n} w_j \phi(x_j),                                     (5.4)

with positive weights w_j and nodes x_j \in [a, b] [3, p. 123].
J_R denotes the class of quadrature rules which are Riemann sums [3, p. 124], where a Riemann sum is defined in Definition 5.1.
DEFINITION 5.1: A Riemann sum [3, p. 123] for the integral \int_a^b \phi(x) \, dx is a sum of the form

    S(P, Q; \phi) = \sum_{i=0}^{n} (U_{i+1} - U_i) \phi(x_i)

related to two partitions, P = {a = U_0 < U_1 < ... < U_n < U_{n+1} = b} and Q = {x_0 < x_1 < ... < x_{n-1} < x_n}, which are such that a = U_0 \le x_0 \le U_1 \le x_1 \le U_2 \le ... \le U_n \le x_n \le U_{n+1} = b.
J(\phi) is a Riemann sum if the weights w_i and nodes x_i in (5.4) correspond to such partitions with w_i = U_{i+1} - U_i. Thus we require

    \sum_{j=0}^{k-1} w_j + a \le x_k \le \sum_{j=0}^{k} w_j + a,   k = 0, 1, ..., n,

and \sum_{j=0}^{n} w_j = b - a.
A family of quadrature rules is written

    J_m(\phi) = \sum_{j=0}^{n} w_j \phi(x_j),                                   (5.5)

where n = n(m), w_j = w_j(m), x_j = x_j(m), a \le x_j \le b [3, p. 125]. Then \lim_{m \to \infty} J_m(\phi) = I(\phi), where I(\phi) = \int_a^b \phi(x) \, dx, for every function \phi(x) in C[a, b] if and only if

(i) \lim_{m \to \infty} J_m(\phi) = I(\phi) for every function \phi(x) in some set S which is dense in the uniform norm in C[a, b];

(ii) there is a constant W such that \sum_{j=0}^{n} |w_j| \le W, m = 0, 1, 2, ....
DEFINITION 5.2: The local truncation error [3, p. 440] associated with method (5.2) is the residual obtained when the exact solution y(x) of (5.1) is substituted into the discrete equations (5.2).
\Delta is the measure of J(\phi) [3, p. 433], and we have \Delta = \max_i \max(U_{i+1} - x_i, x_i - U_i).

\Omega_i(x), i = 0, 1, 2, ..., n, are step functions [3, p. 433-434] such that

    \Omega_i(x_j) = \delta_{ij},   i, j = 0, 1, 2, ..., n,

and

    \Omega_i(x) = 1 if x \in [U_i, U_{i+1}),   \Omega_i(x) = 0 otherwise.       (5.7)
THEOREM 5.1 (Theorem (4.10) of [3, p. 437]): Suppose that K(x, s) and f(x) in equation (5.1) are continuous and J_m(\phi) is a family of rules in J_R such that \lim_{m \to \infty} \Delta_m = 0. Let \tilde{y}(x) be the Nystrom extension (5.3) of the vector y = [y_0, y_1, ..., y_n]^T. Then \lim_{m \to \infty} \| y(x) - \tilde{y}(x) \|_\infty = 0.

PROOF: Consider the Fredholm integral equation (5.1) and the Nystrom extension \tilde{y}(x) of [y_0, y_1, ..., y_n]^T given by (5.3).
Consider also the step functions \Omega_i(x) characterised by (5.7) and the kernel

    K_n(x, s) = \sum_{i=0}^{n} K(x, x_i) \Omega_i(s).                           (5.8)
We are going to find a bound for the error

    e(x) = \tilde{y}(x) - y(x)                                                  (5.9)

and show that \| e(x) \|_\infty tends to zero. Since K(x, s) is continuous on the closed square [a, b] x [a, b], it is uniformly continuous there, so as \Delta \to 0 we have

    \lim \| K_n(x, s) - K(x, s) \|_\infty = 0.                                  (5.10)

In order to prove the theorem, we take equation (5.8), multiply both sides by \tilde{y}(s) and integrate. So we have:
    \int_a^b K_n(x, s) \tilde{y}(s) \, ds = \sum_{i=0}^{n} K(x, x_i) \int_a^b \Omega_i(s) \tilde{y}(s) \, ds

or

    \lambda \int_a^b K_n(x, s) \tilde{y}(s) \, ds = \lambda \sum_{i=0}^{n} K(x, x_i) w_i y_i,
or, using the Nystrom extension (5.3),

    \lambda \int_a^b K_n(x, s) \tilde{y}(s) \, ds = \tilde{y}(x) - f(x).

So we have:

    \tilde{y}(x) = f(x) + \lambda \int_a^b K_n(x, s) \tilde{y}(s) \, ds.        (5.11)

Subtracting (5.1) from (5.11),

    \tilde{y}(x) - y(x) = \lambda [ \int_a^b K_n(x, s) \tilde{y}(s) \, ds - \int_a^b K(x, s) y(s) \, ds ],

or, using (5.9),
    e(x) = \lambda [ \int_a^b K_n(x, s) \tilde{y}(s) \, ds - \int_a^b K(x, s) y(s) \, ds ].    (5.12)

By adding and subtracting \lambda \int_a^b K(x, s) \tilde{y}(s) \, ds on the right hand side of (5.12) we find

    e(x) = \lambda [ \int_a^b K_n(x, s) \tilde{y}(s) \, ds - \int_a^b K(x, s) \tilde{y}(s) \, ds + \int_a^b K(x, s) \tilde{y}(s) \, ds - \int_a^b K(x, s) y(s) \, ds ]

or

    e(x) = \lambda [ \int_a^b K(x, s) ( \tilde{y}(s) - y(s) ) \, ds + \int_a^b ( K_n(x, s) - K(x, s) ) \tilde{y}(s) \, ds ].    (5.13)
Taking the uniform norm in (5.13), and using the boundedness of \| \tilde{y} \|_\infty together with the standard bounds for the linear equation (5.1), \| e(x) \|_\infty is bounded in terms of \| K_n(x, s) - K(x, s) \|_\infty. Thus, since from (5.10) \| K_n - K \|_\infty \to 0, we have the required result, that is

    \| e(x) \|_\infty \to 0.
Chapter 6

Numerical Results
6.1 Introduction
In this chapter we will look at some examples using the methods described in chapters three and four. The first example is taken from Kaneko, Noren and Padilla (1997) ([10]), the second from Kaneko, Padilla and Xu (2002) ([12]), the third from Atkinson (1997) ([2]), and the fourth from Baker (1978) ([3]). A fifth example, from Kumar and Sloan (1987) ([14]), is included so that comparisons can be made with the maximum absolute errors reported in that paper.
6.2 Test Problems

1. y(x) - \int_0^1 K(x, s) g(s, y(s)) \, ds = f(x),   x \in [0, 1],

where the kernel is K(x, s) = e^{x-s}, the function g(s, y(s)) = \cos(s + y(s)) and the true solution is y*(x) = 1. We are not given the function f(x), but we can find it by substituting the exact solution and integrating. So we have:
    \int_0^1 e^{x-s} \cos(s + 1) \, ds = \frac{e^{x-1}}{2} (\sin 2 - \cos 2) + \frac{e^x}{2} (\cos 1 - \sin 1),

so

    f(x) = 1 - \frac{e^{x-1}}{2} (\sin 2 - \cos 2) - \frac{e^x}{2} (\cos 1 - \sin 1).
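This closed form can be checked numerically in MATLAB (an illustrative check, not part of the thesis programs), using the built-in adaptive quadrature integral:

    x = 0.3;                                     % any point in [0, 1]
    I = integral(@(s) exp(x - s).*cos(s + 1), 0, 1);
    fc = 1 - exp(x - 1)/2*(sin(2) - cos(2)) - exp(x)/2*(cos(1) - sin(1));
    abs((1 - I) - fc)                            % agrees to machine precision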
2. y(x) - \int_0^1 K(x, s) g(s, y(s)) \, ds = f(x),   x \in [0, 1],

where the kernel is K(x, s) = e^{xs}, the function g(s, y(s)) = \cos(s + y(s)) and the true solution is y*(x) = 1. We are not given the function f(x), but we can find it by substituting the exact solution and integrating:

    \int_0^1 e^{xs} \cos(s + 1) \, ds = \frac{e^x}{1 + x^2} (\sin 2 + x \cos 2) - \frac{1}{1 + x^2} (\sin 1 + x \cos 1),

so

    f(x) = 1 - \frac{e^x}{1 + x^2} (\sin 2 + x \cos 2) + \frac{1}{1 + x^2} (\sin 1 + x \cos 1).
3. y(x) - \int_0^1 e^{xs} y(s) \, ds = f(x),   x \in [0, 1],

where the true solution is y*(x) = e^x, so that

    f(x) = e^x - \int_0^1 e^{xs} e^s \, ds,

but

    \int_0^1 e^{xs} e^s \, ds = \frac{e^{x+1} - 1}{1 + x},

so

    f(x) = e^x - \frac{e^{x+1} - 1}{1 + x}.
4. y(x) = f(x) + \int_0^1 K(x, s) y(s) \, ds,   x \in [0, 1],

where the kernel is K(x, s) = \min(x, s) e^s, f(x) = \frac{1}{4} e^{2x} + e^x - \frac{1}{2} x e^2 - \frac{1}{4}, and the true solution is y*(x) = e^x (Baker (1978) [3, p. 384-385]).
6.3 Results

First we are going to examine test problem 1 using the Trapezoidal rule described in chapter three. We have computed the error for n = 11, n = 21 and n = 41, where n is the number of nodes x_1, ..., x_n, for x \in [0, 1]. We have the following results:
TRAPEZOIDAL RULE, EXAMPLE 1, N=11

x      y       Computed y   Error
0.00   1.000   1.001        5.12e-004
0.10   1.000   1.001        5.65e-004
0.20   1.000   1.001        6.25e-004
0.30   1.000   1.001        6.90e-004
0.40   1.000   1.001        7.63e-004
0.50   1.000   1.001        8.43e-004
0.60   1.000   1.001        9.32e-004
0.70   1.000   1.001        1.03e-003
0.80   1.000   1.001        1.14e-003
0.90   1.000   1.001        1.26e-003
1.00   1.000   1.001        1.39e-003
TRAPEZOIDAL RULE, EXAMPLE 1, N=21

x      y       Computed y   Error
0.00   1.000   1.000        1.28e-004
0.10   1.000   1.000        1.41e-004
0.20   1.000   1.000        1.56e-004
0.30   1.000   1.000        1.73e-004
0.40   1.000   1.000        1.91e-004
0.50   1.000   1.000        2.11e-004
0.60   1.000   1.000        2.33e-004
0.70   1.000   1.000        2.57e-004
0.80   1.000   1.000        2.85e-004
0.90   1.000   1.000        3.14e-004
1.00   1.000   1.000        3.47e-004
Now we are going to examine test problem 2 using the Trapezoidal rule. Again we have computed the error for n = 11, n = 21 and n = 41, for x \in [0, 1].
Now we are going to examine test problem 3 using the Trapezoidal rule.

TRAPEZOIDAL RULE, EXAMPLE 3, N=11

x      y       Computed y   Error
0.00   1.000   0.986        1.36e-002
0.10   1.105   1.091        1.40e-002
0.20   1.221   1.207        1.44e-002
TRAPEZOIDAL RULE, EXAMPLE 3, N=21

x      y       Computed y   Error
0.00   1.000   0.997        3.41e-003
0.10   1.105   1.102        3.50e-003
0.20   1.221   1.218        3.60e-003
0.30   1.350   1.346        3.68e-003
0.40   1.492   1.488        3.76e-003
0.50   1.649   1.645        3.82e-003
0.60   1.822   1.818        3.88e-003
0.70   2.014   2.010        3.91e-003
0.80   2.226   2.222        3.93e-003
0.90   2.460   2.456        3.92e-003
1.00   2.718   2.714        3.88e-003
Finally, test problem 4 gives us the following results for n = 11, n = 21 and n = 41, for x from zero to one, using the Trapezoidal rule:
Now we are going to examine the first set of data (test problem 1) using Simpson's rule, which we also described in chapter three. Again, we have computed the error for n = 11, n = 21 and n = 41, for x from zero to one.

SIMPSON'S RULE, EXAMPLE 1, N=11

x      y       Computed y   Error
0.00   1.000   1.000        1.04e-007
0.10   1.000   1.000        1.15e-007
SIMPSON'S RULE, EXAMPLE 1, N=21

x      y       Computed y   Error
0.00   1.000   1.000        6.59e-009
0.10   1.000   1.000        7.29e-009
0.20   1.000   1.000        8.05e-009
0.30   1.000   1.000        8.90e-009
0.40   1.000   1.000        9.83e-009
0.50   1.000   1.000        1.09e-008
0.60   1.000   1.000        1.20e-008
0.70   1.000   1.000        1.33e-008
0.80   1.000   1.000        1.47e-008
0.90   1.000   1.000        1.62e-008
1.00   1.000   1.000        1.79e-008
For the second set of data (test problem 2), for n = 11, n = 21 and n = 41, we obtain the following results using Simpson's rule:
Now we are going to examine test problem 3 using Simpson's rule.

SIMPSON'S RULE, EXAMPLE 3, N=11

x      y       Computed y   Error
0.00   1.000   1.000        2.87e-005
0.10   1.105   1.105        2.96e-005
0.20   1.221   1.221        3.05e-005
0.30   1.350   1.350        3.12e-005
0.40   1.492   1.492        3.16e-005
0.50   1.649   1.649        3.17e-005
Finally, using Simpson's rule, test problem 4 gives us the following results for n = 11, n = 21 and n = 41, for x from zero to one:

SIMPSON'S RULE, EXAMPLE 4, N=41

x      y       Computed y   Error
0.00   1.000   1.000        0.00e+000
0.10   1.105   1.105        2.76e-004
0.20   1.221   1.222        5.48e-004
0.30   1.350   1.351        8.13e-004
0.40   1.492   1.493        1.06e-003
0.50   1.649   1.650        1.30e-003
0.60   1.822   1.824        1.51e-003
0.70   2.014   2.015        1.68e-003
0.80   2.226   2.227        1.82e-003
0.90   2.460   2.462        1.91e-003
1.00   2.718   2.720        1.94e-003
The next method is the collocation method of Kumar and Sloan (1987) ([14]) described in chapter four, and we are going to give the results obtained with this method. The evaluation of the integrals involved was done using the trapezoidal method with ntrap = 100 panels. The results for test problem 1 for n = 6, n = 11 and n = 21 are given below.
Now we are going to examine test problem 2, again for n = 6, n = 11 and n = 21.

COLLOCATION METHOD, EXAMPLE 2, N=21

x      y       Computed y   Error
0.00   1.000   1.000        6.81e-006
0.05   1.000   1.000        6.92e-006
Now we are going to examine the third set of data (test problem 3, Atkinson (1997) [2]) using the collocation method.

COLLOCATION METHOD, EXAMPLE 3, N=6

x      y       Computed y   Error
0.00   1.000   0.983        1.66e-002
0.20   1.221   1.203        1.86e-002
0.40   1.492   1.471        2.08e-002
0.60   1.822   1.799        2.33e-002
0.80   2.226   2.199        2.62e-002
1.00   2.718   2.689        2.96e-002
COLLOCATION METHOD, EXAMPLE 3, N=21

x      y       Computed y   Error
0.00   1.000   0.999        1.18e-003
0.05   1.051   1.050        1.21e-003
0.10   1.105   1.104        1.24e-003
0.15   1.162   1.161        1.27e-003
Finally, using the collocation method we have the following results for the fourth set of data (test problem 4).

The maximum absolute errors for the z values were obtained by evaluating z(x) at the points t_i = (i - 1)/256, i = 1, 2, ..., 256. The next table gives the maximum z errors for the four examples for n = 6, n = 11 and n = 21.

Tables with the collocation coefficients for the four examples for n = 6 and n = 11 are given below.
We can also examine the example that was used in Kumar and Sloan (1987) [14, p. 591-593] using the collocation method, in order to compare our results with the results of Kumar and Sloan and also with those in Elnagar and Razzaghi (1996) ([5]). The true solution is

    y*(x) = -\ln 2 + 2 \ln( c \sec( c(x - 1/2)/2 ) ),

where c is the solution of c / \cos(c/4) = \sqrt{2}, given by c = 1.3360556949... (Kumar (1990) [16, p. 327]).

Table of maximum z and y errors using the Kumar and Sloan collocation method, Example 5:

n = 5    n = 9
Evaluation of Results:

In the next tables we give the maximum errors in y for the methods that we have examined, for different n. We also give the theoretical order of convergence (th. or. of conv.) and the observed order of convergence (ob. or. of conv.). E1, E2 and E3 are the maximum absolute errors for n = 11, n = 21 and n = 41; E6, E11 and E21 are the maximum absolute errors for n = 6, n = 11 and n = 21.
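The observed order p is estimated from the ratio of errors when the mesh width is halved, p \approx \log_2(E_h / E_{h/2}). A minimal MATLAB illustration (using the trapezoidal errors of Example 2 below):

    E = [1.70e-3 4.25e-4 1.06e-4];  % max errors for n = 11, 21, 41 (Example 2)
    p = log2(E(1:end-1)./E(2:end))  % observed orders, both close to 2 = O(h^2)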
Example 1:
We can see here that for the Trapezoidal rule and for Simpson's rule the theoretical order of convergence is the same as the observed order of convergence. We have E1/E2 = 4.0058 and E2/E3 = 3.9931 for the Trapezoidal rule; for Simpson's rule we have E1/E2 = 15.8101 and E2/E3 = 15.9821; and for the collocation method we have E6/E11 = 4.5325 and E11/E21 = 6.8346. We can also see that the smallest absolute errors are obtained using Simpson's rule.
Example 2:
n    max error    x of max    th. or. of conv.    ob. or. of conv.
TRAPEZOIDAL RULE
11   1.70e-003    1.00        O(h^2)
21   4.25e-004    1.00        O(h^2)              2
41   1.06e-004    1.00        O(h^2)              2
SIMPSON'S RULE
11   6.78e-007    0.60        O(h^4)
21   4.23e-008    0.60        O(h^4)              4
41   2.64e-009    0.60        O(h^4)              4
COLLOCATION METHOD
6    1.70e-004    0.00        O(h^2)
11   3.95e-005    0.00        O(h^2)              2
In this example, as in the first example, if we use the Trapezoidal rule the observed order of convergence is equal to the theoretical order of convergence. If we use Simpson's rule we obtain the same agreement, and also the most accurate results. For the Trapezoidal rule the ratios are E1/E2 = 4 and E2/E3 = 4.0094; for Simpson's rule E1/E2 = 16.0284 and E2/E3 = 16.0227; and for the collocation method E6/E11 = 4.3038 and E11/E21 = 3.7981.
Example 3:
n    max error    x of max    th. or. of conv.    ob. or. of conv.
TRAPEZOIDAL RULE
11   1.57e-002    0.80        O(h^2)
21   3.93e-003    0.80        O(h^2)              2
41   9.79e-004    0.80        O(h^2)              2
SIMPSON'S RULE
11   3.17e-005    0.50        O(h^4)
21   1.98e-006    0.50        O(h^4)              4
41   1.24e-007    0.50        O(h^4)              4
COLLOCATION METHOD
6    2.96e-002    1.00        O(h^2)
11   7.56e-003    1.00        O(h^2)              2
21   2.01e-003    1.00        O(h^2)              2
In this example we have again that the theoretical order of convergence is the same as the observed order of convergence. The ratios for the Trapezoidal rule are E1/E2 = 3.9949 and E2/E3 = 4.0143; for Simpson's rule E1/E2 = 16.0101 and E2/E3 = 15.9677; and finally for the collocation method E6/E11 = 3.9153 and E11/E21 = 3.7612.
Example 4:
n    max error    x of max    th. or. of conv.    ob. or. of conv.
TRAPEZOIDAL RULE
11   9.25e-002    1.00        O(h^2)
21   2.27e-002    1.00        O(h^2)              2
41   5.64e-003    1.00        O(h^2)              2
SIMPSON'S RULE
11   3.10e-002    1.00        O(h^4)
21   7.76e-003    1.00        O(h^4)              2
41   1.94e-003    1.00        O(h^4)              2
COLLOCATION METHOD
6    4.19e-002    1.00        O(h^2)
11   1.14e-002    1.00        O(h^2)              2
21   3.41e-003    1.00        O(h^2)              2
The ratios here using the Trapezoidal rule are E1/E2 = 4.0749 and E2/E3 = 4.0248; using Simpson's rule E1/E2 = 3.9948 and E2/E3 = 4; and using the collocation method E6/E11 = 3.7748 and E11/E21 = 3.2551. As we can see, we do not obtain the expected accuracy using Simpson's rule, and this is because the first derivative of the kernel with respect to s is discontinuous at x = s.
Finally, for example five the results that we have found are practically the same as the results in the table in Kumar and Sloan (1987) [14, p. 593].

We can also see from the results obtained that the collocation method is more accurate than the Trapezoidal rule, but the most accurate method is Simpson's rule, which gives the smallest errors. Elnagar and Razzaghi (1996) ([5]) claim more accurate results than those of the Kumar and Sloan (1987) ([14]) collocation method.
Bibliography
[1] Atkinson, K.E., A survey of numerical methods for solving nonlinear integral equations, J.
Integral Equations Appl., 4 (1992), 15-46.
[2] Atkinson, K.E., The Numerical Solution of Integral Equations of the Second Kind,
Cambridge Univ. Press, 1997.
[3] Baker, C.T.H., The Numerical Treatment of Integral Equations, Clarendon Press, 1978.
[4] Delves, L.M.; Mohamed, J.L., Computational Methods for Integral Equations, Cambridge University Press, 1985.
[5] Elnagar, G.N.; Razzaghi, M., A Pseudospectral Method for Hammerstein Equations, J. Mathematical Analysis and Applications, 199 (1996), 579-591.
[6] Hammerlin, G., Developments in solving integral equations numerically, pp. 187-201, in: Numerical Integration. Recent Developments, Proc. NATO Adv. Res. Workshop, Bergen, Norway, 1991, NATO ASI Ser., Ser. C 357, 1992.
[7] Jerri, A.J., Introduction to Integral Equations with Applications, Wiley, 1985.
[8] Kaneko, H.; Xu, Y., Degenerate kernel methods for Hammerstein equations, Math. Comp.,
56 (1991), 141-148.
[9] Kaneko, H.; Noren, R.D.; Xu, Y., Numerical solutions for weakly singular Hammerstein equations and their superconvergence, J. Integral Equations Appl., 4 (1992), 391-407.
[10] Kaneko H.; Noren R.D.; Padilla P.A., Superconvergence of the iterated collocation methods
for Hammerstein equations, J. Comput. Appl. Math. 80 (1997), No 2, 335-349.
[11] Kaneko, H. and Noren, R. D., Numerical solutions of Hammerstein equations, pp. 257-288,
in: Boundary Integral Methods: Numerical and Mathematical Aspects, Golberg, M. (Ed.),
WIT Press/Computational Mechanics Publications, 1999.
[12] Kaneko, H., Padilla, P. and Xu, Y., Superconvergence of the iterated degenerate kernel
method, Applicable Analysis, 80 (2002), No. 3, 331-351.
[13] Kumar, S., Superconvergence of a Collocation Method for Hammerstein Equations, IMA J.
Numer. Anal., 7 (1987), 313-325.
[14] Kumar S.; Sloan I.H., A new collocation-type method for Hammerstein integral equations,
Math. Comp. 48 (1987), 585-593.
[15] Kumar, S., A New Collocation-type Method for the Numerical Solution of Hammerstein Equations, Bull. Austral. Math. Soc., 38 (1988), 151-152.
[16] Kumar, S., The Numerical Solution of Hammerstein Equations by a Method based on Polynomial Collocation, J. Austral. Math. Soc., Ser. B 31 (1990), 319-329.
[17] Some, B., Some Recent Numerical Methods for Solving Nonlinear Hammerstein Integral Equations, Math. Comput. Modelling, 18 (1993), 55-62.