
COLLOCATION METHODS IN THE NUMERICAL SOLUTION OF FREDHOLM INTEGRAL EQUATIONS OF HAMMERSTEIN TYPE
Contents

1 Introduction
2 Classification Of Integral Equations
3 Numerical Methods For Fredholm Integral Equations
  3.1 The Trapezoidal Rule
  3.2 Simpson's Rule
4 Numerical Methods For Hammerstein FIEs
  4.1 The Kumar and Sloan Collocation Method ([14], 1987)
  4.2 The Pseudospectral Legendre Method ([5], 1996)
5 Some Proofs
6 Numerical Results
  6.1 Introduction
  6.2 Test Problems
  6.3 Results
Bibliography
Abstract

The aim of this dissertation is to study and program some methods used for the numerical solution of Fredholm integral equations of Hammerstein type.

First we describe the types of integral equations. There are two types, Volterra and Fredholm. Volterra and Fredholm integral equations are of the first kind if the unknown function appears under the integral sign only, and of the second kind if the unknown function also appears outside the integral.

Four methods are studied for equations with continuous kernels: the trapezoidal and Simpson methods, following the book by Baker (1978) ([3]); collocation methods, following the paper by Kumar and Sloan (1987) ([14]) (cf. also Kumar (1988) ([15]) and Kumar (1990) ([16])); and a pseudospectral method described following the paper by Elnagar and Razzaghi (1996) ([5]).

We then present some derivations and proofs.

Finally, MATLAB programs are provided for some of the methods, together with the numerical results obtained from testing several examples.

Chapter 1

Introduction
In this dissertation we first describe the different types of integral equations. Volterra integral equations are those in which the upper limit of integration is a variable; if the upper limit of integration is a constant, the integral equation is a Fredholm equation. We also distinguish between first kind integral equations, where the unknown function occurs under the integral sign only, and second kind integral equations, where the unknown function also occurs outside the integral sign. The general form of a nonlinear Fredholm integral equation is:

y(x) = f(x) + ∫_a^b K(x, s, y(s)) ds,  a ≤ x ≤ b, (1.1)

where the kernel K(x, s, y(s)) is continuous in all three of its variables, y(x) is the unknown function, and K(x, s, y(s)) and f(x) are given functions. We are going to work on Hammerstein integral equations, which have the following form:

y(x) = f(x) + ∫_a^b K(x, s) g(s, y(s)) ds. (1.2)

Classes of methods which have been applied for the numerical solution of Fredholm integral equations (FIEs) include: the so-called degenerate kernel method, where the kernel K(x, s) is expanded as a sum of products of functions of x and s; Galerkin methods; collocation and iterated collocation methods; methods based on the use of Chebyshev polynomials; and defect correction methods (cf. the books by Baker (1978) ([3]) and Atkinson (1997) ([2]), and also the survey papers by Kaneko and Noren (1999) ([11]), Hammerlin (1992) ([6]), Some (1993) ([17]), Atkinson (1992) ([1]) and the references therein for an introduction to the subject).

The methods analysed in the thesis are collocation-type methods applied to Hammerstein Fredholm integral equations of the form (1.2). The particular methods are those of Kumar and Sloan (1987) ([14]) and Elnagar and Razzaghi (1996) ([5]).

To facilitate the understanding of the methods of numerical solution of Fredholm integral equations, two simple quadrature-type methods are also considered, applied to Fredholm integral equations of the form (1.1). These quadrature methods are based on the trapezoidal and Simpson quadrature rules.

The analysis of the methods considered consists of the description of the methods, their programming in MATLAB and consideration of convergence analysis.
The thesis was typeset in LaTeX and the programming was done in MATLAB.

The Appendix gives a list of the names of the Adobe Acrobat and PostScript files containing the thesis and the names of the MATLAB programs stored on the CD that accompanies the project.
Chapter 2
Classification Of Integral Equations

A general form of a linear integral equation is:

y(x) = f(x) + ∫ K(x, s) y(s) ds.

We can see that the unknown function y(x) appears under the integral sign [7, p. 1]. In this general form we also see the function K(x, s), the kernel of the integral equation, which is a function of the two variables x and s. Integral equations can be presented in two ways, with variable limits of integration:

y(x) = f(x) + ∫_a^x K(x, s) y(s) ds, (2.1)

and with fixed limits of integration:

y(x) = f(x) + ∫_a^b K(x, s) y(s) ds. (2.2)
Integral equations with variable limits are called Volterra integral equations and equations with fixed limits of integration are called Fredholm integral equations. Both Volterra and Fredholm integral equations can be of the first and the second kind. Volterra equations of the first kind have the following form:

f(x) = ∫_a^x K(x, s) y(s) ds, (2.3)

and Volterra equations of the second kind have the form (2.1). Fredholm integral equations of the first kind have the form:

f(x) = ∫_a^b K(x, s) y(s) ds, (2.4)

and Fredholm equations of the second kind have the form (2.2). We call a Volterra or a Fredholm integral equation homogeneous if the function f(x) = 0 [7, p. 17].

An integral equation is called linear in y(x) if, whenever y1(x) and y2(x) are solutions, any linear combination c y1(x) + d y2(x) is also a solution [7, p. 17].

Finally, we call an integral equation singular if the range of integration is infinite or if its kernel K(x, s) becomes infinite in the range of integration [7, p. 17-18].

We can see that Volterra and Fredholm integral equations are similar, their only difference being the limits of integration; however, we cannot use the same methods to solve these two classes of integral equations.
Chapter 3

Numerical Methods For Fredholm Integral Equations

In this chapter we describe two basic numerical integration rules, the trapezoidal rule in section 3.1 and Simpson's rule in section 3.2 [7, p. 38-41]. We also show how these rules approximate the integral in the Fredholm equation (1.1), following Baker (1978) [3, p. 686-687]:

y(x) = f(x) + ∫_a^b K(x, s, y(s)) ds,  x ∈ [a, b]. (3.1)

3.1 The Trapezoidal Rule


The trapezoidal rule approximates the given function on (x_{i-1}, x_i) by the line that passes through (x_{i-1}, f(x_{i-1})) and (x_i, f(x_i)), so over each subinterval it approximates the integral by the area of the resulting trapezoid, h [f(x_{i-1}) + f(x_i)]/2. If the graph of the function is a straight line the result is exact; otherwise, if we divide the interval into only a small number of pieces, we cannot expect the trapezoidal rule to be accurate. The general formula that the trapezoidal rule uses is the following:

∫_a^b f(x) dx ≈ h Σ_{j=1}^{n} w_j f(x_j), (3.2)

where the w_j are the weights, with w_j = 1/2 if j = 1 or j = n, and w_j = 1 if j = 2, 3, ..., n - 1.

So the general formula now becomes [7, p. 40-41]:

fb 1 1 Ja f (x) dx - h(2 f (x1) + f (x2) + ... + f (xi) + ... + f (x—) + 2 f (xn)) (3.3)

(3.4)

where h = ( b — a ) / ( n — 1) and x i = a + (i — 1)h, i = 1,... ,n

The next step is to discretize equation (3.1) at x = x_i:

y(x_i) = f(x_i) + ∫_a^b K(x_i, s, y(s)) ds.

If we use (3.3) and (3.4) we obtain:

y_i = f(x_i) + h ( (1/2) K(x_i, x_1, y_1) + K(x_i, x_2, y_2) + ... + K(x_i, x_{n-1}, y_{n-1}) + (1/2) K(x_i, x_n, y_n) )

or

y_i = f(x_i) + h Σ_{j=1}^{n} w_j K(x_i, x_j, y_j),  i = 1, 2, ..., n, (3.5)

where y_i denotes an approximation to y(x_i).

Equation (3.5) gives a nonlinear system of n equations in the unknowns y_i, i = 1, 2, ..., n.
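The thesis's programs are in MATLAB; purely as an illustrative sketch (not the thesis code), the nonlinear system (3.5) can be assembled and solved in Python with NumPy and SciPy. The kernel, right-hand side and node count below are hypothetical choices, picked so that the exact solution y(x) = 1 is known.

```python
import numpy as np
from scipy.optimize import fsolve

def solve_fie_trapezoid(f, K, a, b, n):
    """Solve y(x) = f(x) + int_a^b K(x, s, y(s)) ds by collocating at n
    equally spaced nodes with trapezoidal weights, i.e. system (3.5)."""
    x = np.linspace(a, b, n)
    h = (b - a) / (n - 1)
    w = np.full(n, h)
    w[0] = w[-1] = h / 2                     # h * [1/2, 1, ..., 1, 1/2]

    def residual(y):
        # component i: y_i - f(x_i) - sum_j w_j K(x_i, x_j, y_j)
        return y - f(x) - np.array([np.sum(w * K(xi, x, y)) for xi in x])

    return x, fsolve(residual, f(x))         # f(x) serves as initial guess

# Hypothetical test equation: y(x) = 1 - x/2 + int_0^1 x*s*y(s) ds,
# whose exact solution is y(x) = 1.
x, y = solve_fie_trapezoid(lambda x: 1 - x / 2,
                           lambda xi, s, ys: xi * s * ys,
                           0.0, 1.0, 41)
err = np.max(np.abs(y - 1.0))
print(err)   # tiny: the rule is exact for this integrand, linear in s
```

The same driver works for Simpson's rule by swapping in the Simpson weight vector.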

3.2 Simpson's Rule


Simpson's rule approximates the given function on (x_{i-1}, x_i) using a parabola that passes through three points. These points are:

(x_{i-1}, f(x_{i-1})),  ( (x_{i-1} + x_i)/2, f((x_{i-1} + x_i)/2) ),  (x_i, f(x_i)).

The general formula that Simpson's rule uses is the following:

∫_a^b f(x) dx ≈ h Σ_{j=1}^{n} w_j f(x_j), (3.6)

where the w_j are the weights: w_j = 1/3 if j = 1 or j = n, w_j = 4/3 if j ∈ [2, n - 1] and j is even, and w_j = 2/3 if j ∈ [3, n - 2] and j is odd. So the general formula becomes [7, p. 40-41]:

∫_a^b f(x) dx ≈ (h/3) ( f(x_1) + 4 f(x_2) + 2 f(x_3) + ... + 4 f(x_{n-1}) + f(x_n) ), (3.7)

where h = (b - a)/(n - 1).


If we use (3.7) in (3.1) at x = x_i we obtain:

y_i = f(x_i) + (h/3) ( K(x_i, x_1, y_1) + 4 K(x_i, x_2, y_2) + 2 K(x_i, x_3, y_3) + ... + 4 K(x_i, x_{n-1}, y_{n-1}) + K(x_i, x_n, y_n) )

or

y_i = f(x_i) + h Σ_{j=1}^{n} w_j K(x_i, x_j, y_j),  i = 1, 2, ..., n, (3.8)

where y_i denotes an approximation to y(x_i).

Equation (3.8) is also a nonlinear system of n equations in y_i, i = 1, 2, ..., n.
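A compact way to check the weight pattern in (3.7) is to build the composite Simpson weights directly. The following Python sketch (an illustration, not the thesis's MATLAB code) constructs them and integrates e^x on [0, 1]:

```python
import numpy as np

def simpson_weights(a, b, n):
    """Nodes and composite Simpson weights for n nodes (n must be odd),
    matching (3.7): (h/3) * [1, 4, 2, 4, ..., 2, 4, 1]."""
    assert n % 2 == 1, "Simpson's rule needs an odd number of nodes"
    h = (b - a) / (n - 1)
    w = np.full(n, 2 * h / 3)
    w[1::2] = 4 * h / 3      # nodes with even index j in the text's 1-based numbering
    w[0] = w[-1] = h / 3
    return np.linspace(a, b, n), w

x, w = simpson_weights(0.0, 1.0, 11)
err = abs(np.sum(w * np.exp(x)) - (np.e - 1))   # compare with int_0^1 e^x dx = e - 1
print(err)   # small: the composite rule is fourth-order accurate
```

Because the weights sum to b - a, the rule reproduces constants exactly; the e^x test exposes the expected O(h^4) error.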


Chapter 4

Numerical Methods For Hammerstein FIEs

4.1 The Kumar and Sloan Collocation Method ([14], (1987))


We are going to describe some methods used for the numerical solution of Fredholm integral equations of Hammerstein type of the form (1.2), i.e. of the form:

y(x) = f(x) + ∫_a^b K(x, s) g(s, y(s)) ds,  x ∈ [a, b], (4.1)

where the functions f(x), K(x, s) and g(s, y(s)) are known and a and b are real numbers.

We define a function z(x):

z(x) = g(x, y(x)),  x ∈ [a, b]. (4.2)

If we substitute (4.2) in (4.1) we obtain:

y(x) = f(x) + ∫_a^b K(x, s) z(s) ds,  x ∈ [a, b]. (4.3)

If we now substitute (4.3) into (4.2) we have:

z(x) = g(x, f(x) + ∫_a^b K(x, s) z(s) ds),  x ∈ [a, b]. (4.4)

If we use the standard collocation method we have:

z_n(x) = Σ_{j=1}^{n} a_{nj} u_{nj}(x),  x ∈ [a, b], (4.5)

where the a_{nj}, j = 1, ..., n, are unknowns determined by collocating equation (4.4) at the points τ_{ni}, i = 1, ..., n, and the u_{nj}(x) are basis functions. So from equation (4.4) we obtain:

z_n(τ_{ni}) = g(τ_{ni}, f(τ_{ni}) + ∫_a^b K(τ_{ni}, s) z_n(s) ds),  i = 1, ..., n, (4.6)

and from equations (4.5) and (4.6) we finally obtain:

Σ_{j=1}^{n} a_{nj} u_{nj}(τ_{ni}) = g(τ_{ni}, f(τ_{ni}) + Σ_{j=1}^{n} a_{nj} ∫_a^b K(τ_{ni}, s) u_{nj}(s) ds),  i = 1, ..., n. (4.7)

On [0, 1] the collocation points can be chosen as τ_{nj} = (j - 1)/(n - 1), j = 1, 2, ..., n [14, p. 592]. The functions u_{n1}, ..., u_{nn} can be defined as [14, p. 592]:

u_{n1}(x) = (x - τ_{n2})/(τ_{n1} - τ_{n2})  if x ∈ [τ_{n1}, τ_{n2}],  and 0 otherwise;

u_{nj}(x) = (x - τ_{n,j-1})/(τ_{n,j} - τ_{n,j-1})  if x ∈ (τ_{n,j-1}, τ_{n,j}],
u_{nj}(x) = (x - τ_{n,j+1})/(τ_{n,j} - τ_{n,j+1})  if x ∈ (τ_{n,j}, τ_{n,j+1}],
u_{nj}(x) = 0  otherwise,  for j = 2, ..., n - 1;

and finally

u_{nn}(x) = u_{n1}(1 - x).

After system (4.7) is solved for the a_{nj}, j = 1, 2, ..., n, we return to (4.3) and substitute z_n(s) from formula (4.5). We then obtain the approximation y_n of y:

y_n(x) = f(x) + ∫_a^b K(x, s) z_n(s) ds

or

y_n(x) = f(x) + Σ_{j=1}^{n} a_{nj} ∫_a^b K(x, s) u_{nj}(s) ds. (4.8)
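The hat functions above satisfy u_{nj}(τ_{ni}) = δ_{ij}, which is what makes the left-hand side of (4.7) collapse to the single coefficient a_{ni}. A small Python sketch (illustrative only, assuming equally spaced collocation points on [0, 1]) checks this property:

```python
import numpy as np

def hat(j, tau, t):
    """Piecewise-linear hat basis u_{nj}: 1 at tau[j], 0 at the other
    collocation points, linear in between (vectorised over t)."""
    t = np.asarray(t, dtype=float)
    up = np.zeros_like(t)
    down = np.zeros_like(t)
    if j > 0:                                  # rising edge on [tau[j-1], tau[j]]
        m = (t >= tau[j - 1]) & (t <= tau[j])
        up[m] = (t[m] - tau[j - 1]) / (tau[j] - tau[j - 1])
    else:
        up[t == tau[0]] = 1.0                  # u_{n1} has only a falling edge
    if j < len(tau) - 1:                       # falling edge on (tau[j], tau[j+1]]
        m = (t > tau[j]) & (t <= tau[j + 1])
        down[m] = (tau[j + 1] - t[m]) / (tau[j + 1] - tau[j])
    return up + down

tau = np.linspace(0.0, 1.0, 5)                 # tau_{nj} = (j-1)/(n-1), n = 5
B = np.array([hat(j, tau, tau) for j in range(5)])
print(np.allclose(B, np.eye(5)))               # u_{nj}(tau_{ni}) = delta_{ij} -> True
```

With this property, assembling the collocation system (4.7) only requires evaluating the integrals of K against each hat function.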

4.2 The Pseudospectral Legendre Method ([5], (1996))


Now we are going to consider Hammerstein equations of the form [5, p. 584-586]:

y(x) = f(x) + ∫_{-1}^{1} K(x, s) g(s, y(s)) ds,  x ∈ [-1, 1]. (4.9)

Again we use

z(x) = g(x, y(x)),  x ∈ [-1, 1], (4.10)

and z_N is a polynomial of degree N. We determine the coefficients of the polynomial by collocation, using

y(x) = f(x) + ∫_{-1}^{1} K(x, s) z(s) ds,  x ∈ [-1, 1], (4.11)

where z(s) is given by:

z(s) = g(s, f(s) + ∫_{-1}^{1} K(s, t) z(t) dt). (4.12)

The approximation z_N to z using the pseudospectral Legendre method is given by:

z_N(x) = Σ_{l=0}^{N} a_l φ_l(x),  x ∈ [-1, 1], (4.13)

where φ_l(x) is [5, p. 582]:

φ_l(x) = (x² - 1) L_N'(x) / ( N(N + 1) L_N(x_l) (x - x_l) ),  l = 0, 1, ..., N,

where L_N(x) is the Legendre polynomial of order N [5, p. 581].

We find the a_l, with 0 ≤ l ≤ N, by collocating (4.12) at the Legendre-Gauss-Lobatto nodes x_k. The Legendre-Gauss-Lobatto nodes are x_0 = -1, x_N = 1 and, for 1 ≤ m ≤ N - 1, the zeros x_m of L_N'(x) [5, p. 581], and so we have:
z_N(x_k) = g(x_k, f(x_k) + ∫_{-1}^{1} K(x_k, t) z_N(t) dt),  x_k ∈ [-1, 1]. (4.14)

Using (4.13) and the fact that z_N(x_k) = a_k, with k = 0, 1, ..., N, we obtain:

a_k = g(x_k, f(x_k) + ∫_{-1}^{1} K(x_k, t) Σ_{l=0}^{N} a_l φ_l(t) dt)

or

a_k = g(x_k, f(x_k) + Σ_{l=0}^{N} a_l ∫_{-1}^{1} K(x_k, t) φ_l(t) dt)

or, approximating the integral by the Legendre-Gauss-Lobatto quadrature rule,

a_k ≈ g(x_k, f(x_k) + Σ_{l=0}^{N} Σ_{j=0}^{N} K(x_k, x_j) φ_l(x_j) w_j a_l).

Since

φ_l(x_j) = δ_{lj} = { 1 if l = j; 0 if l ≠ j }

we find

a_k = g(x_k, f(x_k) + Σ_{l=0}^{N} a_l w_l K(x_k, x_l)),  k = 0, 1, ..., N, (4.15)

where the x_l are the Legendre-Gauss-Lobatto nodes and the w_j are weights given by the following formula [5, p. 585]:

w_j = 2 / ( N(N + 1) [L_N(x_j)]² ),  j = 0, 1, ..., N, (4.16)

where L_N is the Legendre polynomial of degree N.


After the unknowns a_k are found from (4.15), we return to (4.11) to find y_N as:

y_N(x) = f(x) + ∫_{-1}^{1} K(x, s) z_N(s) ds

or

y_N(x) = f(x) + Σ_{l=0}^{N} a_l ∫_{-1}^{1} K(x, s) φ_l(s) ds

or

y_N(x) ≈ f(x) + Σ_{l=0}^{N} K(x, x_l) w_l a_l. (4.17)
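The Legendre-Gauss-Lobatto nodes and the weights (4.16) can be constructed numerically. The following Python code (an illustration using NumPy's Legendre utilities, not the thesis's MATLAB programs) builds them and checks that the rule integrates low-degree polynomials exactly:

```python
import numpy as np
from numpy.polynomial import legendre

def lgl_nodes_weights(N):
    """Legendre-Gauss-Lobatto nodes on [-1, 1] and the weights (4.16):
    x_0 = -1, x_N = 1, interior nodes are the zeros of L_N'(x), and
    w_j = 2 / (N (N + 1) [L_N(x_j)]^2)."""
    cN = np.zeros(N + 1)
    cN[N] = 1.0                                    # coefficient vector of L_N
    interior = legendre.legroots(legendre.legder(cN))   # zeros of L_N'
    x = np.concatenate(([-1.0], np.sort(interior), [1.0]))
    w = 2.0 / (N * (N + 1) * legendre.legval(x, cN) ** 2)
    return x, w

x, w = lgl_nodes_weights(8)
e0 = abs(np.sum(w) - 2.0)                # rule integrates 1 over [-1, 1] exactly
e6 = abs(np.sum(w * x**6) - 2.0 / 7.0)   # exact for polynomials of degree <= 2N - 1
print(e0, e6)
```

Once the nodes and weights are available, (4.15) is a fixed-point system in the a_k that can be handed to any nonlinear solver, and (4.17) evaluates y_N pointwise.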
Chapter 5

Some Proofs
In chapter three, the application of quadrature methods to Fredholm integral equations of the form

y(x) = f(x) + ∫_a^b K(x, s, y(s)) ds,  x ∈ [a, b],

was presented.
Here a convergence proof is presented for the quadrature methods applied to linear Fredholm integral equations, following Baker (1978) [3, p. 423-443]. Consider the linear Fredholm integral equation

y(x) = f(x) + λ ∫_a^b K(x, s) y(s) ds. (5.1)

Consider also the approximations:

ỹ_j = f(x_j) + λ Σ_{i=0}^{n} w_i K(x_j, x_i) ỹ_i, (5.2)

where the w_i are the quadrature weights.

Let ỹ(x) denote the Nystrom extension of the vector [ỹ_0, ..., ỹ_n]^T given by

ỹ(x) = f(x) + λ Σ_{i=0}^{n} w_i K(x, x_i) ỹ_i. (5.3)

One theorem is going to be proved, Theorem (5.1), which shows that the error ||y(x) - ỹ(x)||_∞ → 0.
Before the theorem is stated and proved we need the following preliminaries and notation, following Baker (1978) ([3]):

J+ is the class of quadrature rules of the form

J(ψ) = Σ_{j=0}^{n} w_j ψ(x_j) (5.4)

which approximate integrals of the form

I(ψ) = ∫_a^b ψ(x) dx

with positive weights w_j and nodes x_j ∈ [a, b] [3, p. 123].

J_R is the class of quadrature rules which are Riemann sums [3, p. 124], where a Riemann sum is defined in Definition (5.1).

DEFINITION 5.1: A Riemann sum [3, p. 123] for the integral ∫_a^b ψ(x) dx is a sum of the form

S(P, Q; ψ) = Σ_{i=0}^{n} (U_{i+1} - U_i) ψ(x_i)

related to two partitions, P = {a = U_0 < U_1 < ... < U_n < U_{n+1} = b} and Q = {x_0 < x_1 < ... < x_{n-1} < x_n}, which are such that a = U_0 ≤ x_0 ≤ U_1 ≤ x_1 ≤ U_2 ≤ ... ≤ U_n ≤ x_n ≤ U_{n+1} = b.

J(ψ) is a Riemann sum if the weights w_i and nodes x_i in (5.4) correspond to such partitions with w_i = U_{i+1} - U_i. Thus we require Σ_{j=0}^{k-1} w_j + a ≤ x_k ≤ Σ_{j=0}^{k} w_j + a, k = 0, 1, ..., n, and Σ_{i=0}^{n} w_i = b - a.

J_m(ψ) is a family of quadrature rules approximating I(ψ) such that

J_m(ψ) = Σ_{j=0}^{n} w_j ψ(x_j), (5.5)

where n = n(m), w_j = w_j(m), x_j = x_j(m), a ≤ x_j ≤ b [3, p. 125]. Then lim_{m→∞} J_m(φ) = I(φ) for every function φ(x) in C[a, b] if and only if:
(i) lim_{m→∞} J_m(φ) = I(φ) for every function φ(x) in some set S which is dense in the uniform norm in C[a, b];

(ii) there is a constant W such that Σ_{j=0}^{n} |w_j| ≤ W, m = 0, 1, 2, ....

DEFINITION 5.2: The local truncation error [3, p. 440] associated with method (5.2) is given by:

t(x_i) = y(x_i) - f(x_i) - λ Σ_{j=0}^{n} w_j K(x_i, x_j) y(x_j),  i = 0, 1, ..., n. (5.6)

Δ is the measure of J(ψ) [3, p. 433], given by Δ = max_i { max(U_{i+1} - x_i, x_i - U_i) }.

The ψ_i(x) are step functions for i = 0, 1, 2, ..., n [3, p. 433-434] such that:

ψ_i(x_j) = δ_{ij},  i, j = 0, 1, 2, ..., n,

ψ_i(x) = { 1 if x ∈ [U_i, U_{i+1}); 0 otherwise }, (5.7)

and

ψ_i(U_i) = 1 if U_i ∈ {x_k}; ψ_i(U_j) = 0 for i ≠ j; and ψ_n(b) = 1.

THEOREM 5.1 (Theorem (4.10) [3, p. 437]): Suppose that K(x, s) and f(x) in equation (5.1) are continuous and J_m(ψ) is a family of rules in J_R such that lim_{m→∞} Δ_m = 0. Let ỹ(x) be the Nystrom extension (5.3) of the vector ỹ = [ỹ_0, ỹ_1, ..., ỹ_n]^T. Then lim_{m→∞} ||y(x) - ỹ(x)||_∞ = 0.

PROOF: Consider the Fredholm integral equation (5.1) and the Nystrom extension ỹ(x) of [ỹ_0, ỹ_1, ..., ỹ_n]^T given by (5.3). Consider also the step functions ψ_i(x) characterised by (5.7) and the kernel

K_n(x, s) = Σ_{i=0}^{n} K(x, x_i) ψ_i(s). (5.8)

Define the error e(x) = y(x) - ỹ(x). We are going to find a bound on ||e(x)||_∞ and show that it tends to zero.

Since

|K_n(x, s) - K(x, s)| ≤ max_i sup_{|s - x_i| ≤ Δ} |K(x, x_i) - K(x, s)|,

as Δ → 0 we have that

lim ||K_n(x, s) - K(x, s)||_∞ = 0. (5.10)

In order to prove the theorem, we take equation (5.8), multiply both sides by ỹ(s) and integrate. So we have:

∫_a^b K_n(x, s) ỹ(s) ds = Σ_{i=0}^{n} K(x, x_i) ∫_a^b ψ_i(s) ỹ(s) ds

or, applying the quadrature rule to each integral on the right,

∫_a^b K_n(x, s) ỹ(s) ds = Σ_{i=0}^{n} K(x, x_i) Σ_{j=0}^{n} w_j ψ_i(x_j) ỹ(x_j).

Since from (5.3) and (5.2) ỹ(x_j) = ỹ_j, and ψ_i(x_j) = δ_{ij}, we finally have

λ ∫_a^b K_n(x, s) ỹ(s) ds = λ Σ_{i=0}^{n} w_i K(x, x_i) ỹ_i (5.9)

or, using (5.3),

λ ∫_a^b K_n(x, s) ỹ(s) ds = ỹ(x) - f(x).

So we have:

ỹ(x) = f(x) + λ ∫_a^b K_n(x, s) ỹ(s) ds. (5.11)
a

Now subtracting (5.11) from (5.1) we obtain

y(x) - ỹ(x) = f(x) + λ ∫_a^b K(x, s) y(s) ds - f(x) - λ ∫_a^b K_n(x, s) ỹ(s) ds

or

e(x) = λ [ ∫_a^b K(x, s) y(s) ds - ∫_a^b K_n(x, s) ỹ(s) ds ]. (5.12)

By adding and subtracting λ ∫_a^b K(x, s) ỹ(s) ds on the right-hand side of (5.12) we find:

e(x) = λ [ ∫_a^b K(x, s) (y(s) - ỹ(s)) ds + ∫_a^b (K(x, s) - K_n(x, s)) ỹ(s) ds ]. (5.13)

Thus

||e(x)||_∞ ≤ |λ| (b - a) [ ||K||_∞ ||e(x)||_∞ + ||K - K_n||_∞ ||ỹ||_∞ ]

or

(1 - |λ| (b - a) ||K||_∞) ||e(x)||_∞ ≤ |λ| (b - a) ||K - K_n||_∞ ||ỹ||_∞.

Thus, since by (5.10) ||K_n - K||_∞ → 0, we have the required result (provided 1 - |λ|(b - a)||K||_∞ > 0), that is, ||e(x)||_∞ → 0.
Chapter 6

Numerical Results

6.1 Introduction
In this chapter we look at some examples using the methods described in chapters three and four. The first example is taken from Kaneko, Noren and Padilla (1997) ([10]), the second from Kaneko, Padilla and Xu (2002) ([12]), the third from Atkinson (1997) ([2]), and the fourth example is taken from Baker (1978) ([3]). A fifth example, from Kumar and Sloan (1987) ([14]), is included so that comparisons can be made with the maximum absolute errors reported in that paper.

6.2 Test Problems


Numerical results are going to be presented for the following examples:
1. y(x) - ∫_0^1 K(x, s) g(s, y(s)) ds = f(x),  x ∈ [0, 1],

where the kernel is K(x, s) = e^{x-s}, the function g(s, y(s)) = cos(s + y(s)) and the true solution is y*(x) = 1. We are not given the function f(x), but we can find it by substituting the exact solution and integrating. So we have:

1 - ∫_0^1 e^{x-s} cos(s + 1) ds = f(x),

and the integral is equal to:

∫_0^1 e^{x-s} cos(s + 1) ds = (e^{x-1}/2)(sin 2 - cos 2) + (e^x/2)(cos 1 - sin 1),

so

f(x) = 1 - (e^{x-1}/2)(sin 2 - cos 2) - (e^x/2)(cos 1 - sin 1).
Kaneko, Noren, and Padilla (1997), ([10], p. 335-349).
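The closed form above is easy to check numerically. The sketch below (assuming SciPy is available; this is verification code, not part of the thesis) compares it with adaptive quadrature at a few sample points:

```python
import numpy as np
from scipy.integrate import quad

# Closed form derived above for int_0^1 e^{x-s} cos(s+1) ds:
closed = lambda x: (np.exp(x - 1) / 2) * (np.sin(2) - np.cos(2)) \
                 + (np.exp(x) / 2) * (np.cos(1) - np.sin(1))

max_diff = 0.0
for x in (0.0, 0.5, 1.0):
    numeric, _ = quad(lambda s: np.exp(x - s) * np.cos(s + 1), 0.0, 1.0)
    max_diff = max(max_diff, abs(numeric - closed(x)))
print(max_diff)    # agreement to quadrature accuracy
```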

2. y(x) - ∫_0^1 K(x, s) g(s, y(s)) ds = f(x),  x ∈ [0, 1],

where the kernel is K(x, s) = e^{xs}, the function g(s, y(s)) = cos(s + y(s)) and the true solution is y*(x) = 1. We are not given the function f(x), but we can find it by substituting the exact solution and integrating:

1 - ∫_0^1 e^{xs} cos(s + 1) ds = f(x),

and the integral is equal to:

∫_0^1 e^{xs} cos(s + 1) ds = (e^x/(1 + x²))(sin 2 + x cos 2) - (1/(1 + x²))(sin 1 + x cos 1),

so

f(x) = 1 - (e^x/(1 + x²))(sin 2 + x cos 2) + (1/(1 + x²))(sin 1 + x cos 1).

Kaneko, Padilla, Xu (2002) ([12]).

3. y(x) = f(x) + ∫_0^1 K(x, s) y(s) ds,  x ∈ [0, 1],

where the kernel is K(x, s) = e^{xs} and the exact solution is y*(x) = e^x. We are not given the function f(x), but we can find it by substituting the true solution and integrating. So we have:

f(x) = e^x - ∫_0^1 e^{xs} e^s ds,

and since

∫_0^1 e^{xs} e^s ds = (e^{x+1} - 1)/(x + 1),

we obtain

f(x) = e^x - (e^{x+1} - 1)/(x + 1).

Atkinson (1997) ([2], p. 102).
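Again the derived f(x) can be verified numerically; this short check (assuming SciPy; illustrative only) confirms that y(x) = e^x satisfies the equation of test problem 3 with the f(x) above:

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: np.exp(x) - (np.exp(x + 1) - 1) / (x + 1)

max_res = 0.0
for x in (0.0, 0.3, 1.0):
    integral, _ = quad(lambda s: np.exp(x * s) * np.exp(s), 0.0, 1.0)
    # residual of y(x) = f(x) + int_0^1 e^{xs} y(s) ds at y = e^x
    max_res = max(max_res, abs(np.exp(x) - (f(x) + integral)))
print(max_res)
```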

4. y(x) = f(x) + ∫_0^1 K(x, s) y(s) ds,  x ∈ [0, 1], where the kernel is K(x, s) = min(x, s) e^s, f(x) = (1/4)e^{2x} + e^x - (1/2)x e² - 1/4 and the true solution is y*(x) = e^x.
Baker (1978) ([3], p. 384-385).

6.3 Results
First we examine test problem 1 using the trapezoidal rule described in chapter three. We have computed the error for n = 11, n = 21 and n = 41, where n is the number of nodes x_1, ..., x_n, for x ∈ [0, 1]. We have the following results:
Trapezoidal rule, Example 1, n=11

x     y      Computed y  Error
0.00  1.000  1.001       5.12e-004
0.10  1.000  1.001       5.65e-004
0.20  1.000  1.001       6.25e-004
0.30  1.000  1.001       6.90e-004
0.40  1.000  1.001       7.63e-004
0.50  1.000  1.001       8.43e-004
0.60  1.000  1.001       9.32e-004
0.70  1.000  1.001       1.03e-003
0.80  1.000  1.001       1.14e-003
0.90  1.000  1.001       1.26e-003
1.00  1.000  1.001       1.39e-003

Trapezoidal rule, Example 1, n=21

x     y      Computed y  Error
0.00  1.000  1.000       1.28e-004
0.10  1.000  1.000       1.41e-004
0.20  1.000  1.000       1.56e-004
0.30  1.000  1.000       1.73e-004
0.40  1.000  1.000       1.91e-004
0.50  1.000  1.000       2.11e-004
0.60  1.000  1.000       2.33e-004
0.70  1.000  1.000       2.57e-004
0.80  1.000  1.000       2.85e-004
0.90  1.000  1.000       3.14e-004
1.00  1.000  1.000       3.47e-004

Trapezoidal rule, Example 1, n=41

x     y      Computed y  Error
0.00  1.000  1.000       3.20e-005
0.10  1.000  1.000       3.53e-005
0.20  1.000  1.000       3.90e-005
0.30  1.000  1.000       4.31e-005
0.40  1.000  1.000       4.77e-005
0.50  1.000  1.000       5.27e-005
0.60  1.000  1.000       5.82e-005
0.70  1.000  1.000       6.44e-005
0.80  1.000  1.000       7.11e-005
0.90  1.000  1.000       7.86e-005
1.00  1.000  1.000       8.69e-005

Now we examine test problem 2 using the trapezoidal rule. Again we have computed the error for n = 11, n = 21 and n = 41, for x ∈ [0, 1].

Trapezoidal rule, Example 2, n=11

x     y      Computed y  Error
0.00  1.000  1.000       3.94e-004
0.10  1.000  1.000       2.72e-004
0.20  1.000  1.000       1.36e-004
0.30  1.000  1.000       1.41e-005
0.40  1.000  1.000       1.81e-004
0.50  1.000  1.000       3.68e-004
0.60  1.000  0.999       5.76e-004
0.70  1.000  0.999       8.10e-004
0.80  1.000  0.999       1.07e-003
0.90  1.000  0.999       1.36e-003
1.00  1.000  0.998       1.70e-003

Trapezoidal rule, Example 2, n=21

x     y      Computed y  Error
0.00  1.000  1.000       9.86e-005
0.10  1.000  1.000       6.79e-005
0.20  1.000  1.000       3.40e-005
0.30  1.000  1.000       3.64e-006
0.40  1.000  1.000       4.55e-005
0.50  1.000  1.000       9.22e-005
0.60  1.000  1.000       1.44e-004
0.70  1.000  1.000       2.03e-004
0.80  1.000  1.000       2.68e-004
0.90  1.000  1.000       3.42e-004
1.00  1.000  1.000       4.25e-004

Trapezoidal rule, Example 2, n=41

x     y      Computed y  Error
0.00  1.000  1.000       2.47e-005
0.10  1.000  1.000       1.70e-005
0.20  1.000  1.000       8.50e-006
0.30  1.000  1.000       9.16e-007
0.40  1.000  1.000       1.14e-005
0.50  1.000  1.000       2.31e-005
0.60  1.000  1.000       3.61e-005
0.70  1.000  1.000       5.07e-005
0.80  1.000  1.000       6.71e-005
0.90  1.000  1.000       8.56e-005
1.00  1.000  1.000       1.06e-004

Now we examine test problem 3 using the trapezoidal rule.

Trapezoidal rule, Example 3, n=11

x     y      Computed y  Error
0.00  1.000  0.986       1.36e-002
0.10  1.105  1.091       1.40e-002
0.20  1.221  1.207       1.44e-002
0.30  1.350  1.335       1.47e-002
0.40  1.492  1.477       1.50e-002
0.50  1.649  1.633       1.53e-002
0.60  1.822  1.807       1.55e-002
0.70  2.014  1.998       1.56e-002
0.80  2.226  2.210       1.57e-002
0.90  2.460  2.444       1.57e-002
1.00  2.718  2.703       1.55e-002

Trapezoidal rule, Example 3, n=21

x     y      Computed y  Error
0.00  1.000  0.997       3.41e-003
0.10  1.105  1.102       3.50e-003
0.20  1.221  1.218       3.60e-003
0.30  1.350  1.346       3.68e-003
0.40  1.492  1.488       3.76e-003
0.50  1.649  1.645       3.82e-003
0.60  1.822  1.818       3.88e-003
0.70  2.014  2.010       3.91e-003
0.80  2.226  2.222       3.93e-003
0.90  2.460  2.456       3.92e-003
1.00  2.718  2.714       3.88e-003

Trapezoidal rule, Example 3, n=41

x     y      Computed y  Error
0.00  1.000  0.999       8.52e-004
0.10  1.105  1.104       8.76e-004
0.20  1.221  1.221       9.00e-004
0.30  1.350  1.349       9.21e-004
0.40  1.492  1.491       9.40e-004
0.50  1.649  1.648       9.56e-004
0.60  1.822  1.821       9.69e-004
0.70  2.014  2.013       9.78e-004
0.80  2.226  2.225       9.81e-004
0.90  2.460  2.459       9.79e-004
1.00  2.718  2.717       9.70e-004

Finally, test problem 4 gives the following results for n = 11, n = 21 and n = 41, for x from zero to one, using the trapezoidal rule:

Trapezoidal rule, Example 4, n=11

x     y      Computed y  Error
0.00  1.000  1.000       0.00e+000
0.10  1.105  1.117       1.20e-002
0.20  1.221  1.245       2.40e-002
0.30  1.350  1.386       3.56e-002
0.40  1.492  1.539       4.69e-002
0.50  1.649  1.706       5.76e-002
0.60  1.822  1.889       6.73e-002
0.70  2.014  2.090       7.60e-002
0.80  2.226  2.309       8.33e-002
0.90  2.460  2.548       8.89e-002
1.00  2.718  2.811       9.25e-002

Trapezoidal rule, Example 4, n=21

x     y      Computed y  Error
0.00  1.000  1.000       0.00e+000
0.10  1.105  1.108       2.95e-003
0.20  1.221  1.227       5.87e-003
0.30  1.350  1.359       8.74e-003
0.40  1.492  1.503       1.15e-002
0.50  1.649  1.663       1.41e-002
0.60  1.822  1.839       1.65e-002
0.70  2.014  2.032       1.86e-002
0.80  2.226  2.246       2.04e-002
0.90  2.460  2.481       2.18e-002
1.00  2.718  2.741       2.27e-002

Trapezoidal rule, Example 4, n=41

x     y      Computed y  Error
0.00  1.000  1.000       0.00e+000
0.10  1.105  1.106       7.33e-004
0.20  1.221  1.223       1.46e-003
0.30  1.350  1.352       2.17e-003
0.40  1.492  1.495       2.86e-003
0.50  1.649  1.652       3.51e-003
0.60  1.822  1.826       4.11e-003
0.70  2.014  2.018       4.63e-003
0.80  2.226  2.231       5.08e-003
0.90  2.460  2.465       5.42e-003
1.00  2.718  2.724       5.64e-003

Now we examine the first set of data (test problem 1) using Simpson's rule, also described in chapter three. Again, we have computed the error for n = 11, n = 21 and n = 41, for x from zero to one.

Simpson's rule, Example 1, n=11

x     y      Computed y  Error
0.00  1.000  1.000       1.04e-007
0.10  1.000  1.000       1.15e-007
0.20  1.000  1.000       1.27e-007
0.30  1.000  1.000       1.41e-007
0.40  1.000  1.000       1.56e-007
0.50  1.000  1.000       1.72e-007
0.60  1.000  1.000       1.90e-007
0.70  1.000  1.000       2.10e-007
0.80  1.000  1.000       2.32e-007
0.90  1.000  1.000       2.56e-007
1.00  1.000  1.000       2.83e-007

Simpson's rule, Example 1, n=21

x     y      Computed y  Error
0.00  1.000  1.000       6.59e-009
0.10  1.000  1.000       7.29e-009
0.20  1.000  1.000       8.05e-009
0.30  1.000  1.000       8.90e-009
0.40  1.000  1.000       9.83e-009
0.50  1.000  1.000       1.09e-008
0.60  1.000  1.000       1.20e-008
0.70  1.000  1.000       1.33e-008
0.80  1.000  1.000       1.47e-008
0.90  1.000  1.000       1.62e-008
1.00  1.000  1.000       1.79e-008

Simpson's rule, Example 1, n=41

x     y      Computed y  Error
0.00  1.000  1.000       4.13e-010
0.10  1.000  1.000       4.57e-010
0.20  1.000  1.000       5.05e-010
0.30  1.000  1.000       5.58e-010
0.40  1.000  1.000       6.16e-010
0.50  1.000  1.000       6.81e-010
0.60  1.000  1.000       7.53e-010
0.70  1.000  1.000       8.32e-010
0.80  1.000  1.000       9.20e-010
0.90  1.000  1.000       1.02e-009
1.00  1.000  1.000       1.12e-009

For the second set of data (test problem 2), for n = 11, n = 21 and n = 41, we obtain the following results using Simpson's rule:

Simpson's rule, Example 2, n=11

x     y      Computed y  Error
0.00  1.000  1.000       2.67e-007
0.10  1.000  1.000       6.81e-008
0.20  1.000  1.000       1.36e-007
0.30  1.000  1.000       3.30e-007
0.40  1.000  1.000       5.00e-007
0.50  1.000  1.000       6.24e-007
0.60  1.000  1.000       6.78e-007
0.70  1.000  1.000       6.32e-007
0.80  1.000  1.000       4.50e-007
0.90  1.000  1.000       8.95e-008
1.00  1.000  1.000       5.02e-007

Simpson's rule, Example 2, n=21

x     y      Computed y  Error
0.00  1.000  1.000       1.67e-008
0.10  1.000  1.000       4.29e-009
0.20  1.000  1.000       8.42e-009
0.30  1.000  1.000       2.06e-008
0.40  1.000  1.000       3.12e-008
0.50  1.000  1.000       3.89e-008
0.60  1.000  1.000       4.23e-008
0.70  1.000  1.000       3.95e-008
0.80  1.000  1.000       2.82e-008
0.90  1.000  1.000       5.80e-009
1.00  1.000  1.000       3.10e-008

Simpson's rule, Example 2, n=41

x     y      Computed y  Error
0.00  1.000  1.000       1.05e-009
0.10  1.000  1.000       2.69e-010
0.20  1.000  1.000       5.25e-010
0.30  1.000  1.000       1.29e-009
0.40  1.000  1.000       1.95e-009
0.50  1.000  1.000       2.43e-009
0.60  1.000  1.000       2.64e-009
0.70  1.000  1.000       2.47e-009
0.80  1.000  1.000       1.77e-009
0.90  1.000  1.000       3.66e-010
1.00  1.000  1.000       1.93e-009

Now we examine test problem 3 using Simpson's rule.

Simpson's rule, Example 3, n=11

x     y      Computed y  Error
0.00  1.000  1.000  2.87e-005
0.10  1.105  1.105  2.96e-005
0.20  1.221  1.221  3.05e-005
0.30  1.350  1.350  3.12e-005
0.40  1.492  1.492  3.16e-005
0.50  1.649  1.649  3.17e-005
0.60  1.822  1.822  3.13e-005
0.70  2.014  2.014  3.03e-005
0.80  2.226  2.226  2.86e-005
0.90  2.460  2.460  2.58e-005
1.00  2.718  2.718  2.19e-005

Simpson's rule, Example 3, n=21

x     y      Computed y  Error
0.00  1.000  1.000  1.80e-006
0.10  1.105  1.105  1.86e-006
0.20  1.221  1.221  1.91e-006
0.30  1.350  1.350  1.95e-006
0.40  1.492  1.492  1.98e-006
0.50  1.649  1.649  1.98e-006
0.60  1.822  1.822  1.96e-006
0.70  2.014  2.014  1.90e-006
0.80  2.226  2.226  1.79e-006
0.90  2.460  2.460  1.62e-006
1.00  2.718  2.718  1.37e-006

Simpson's rule, Example 3, n=41

x     y      Computed y  Error
0.00  1.000  1.000  1.12e-007
0.10  1.105  1.105  1.16e-007
0.20  1.221  1.221  1.20e-007
0.30  1.350  1.350  1.22e-007
0.40  1.492  1.492  1.24e-007
0.50  1.649  1.649  1.24e-007
0.60  1.822  1.822  1.23e-007
0.70  2.014  2.014  1.19e-007
0.80  2.226  2.226  1.12e-007
0.90  2.460  2.460  1.01e-007
1.00  2.718  2.718  8.57e-008

Finally, using Simpson's rule on test problem 4 we obtain the following results for n = 11, n = 21 and n = 41, for x from 0 to 1:

Simpson's rule, Example 4, n=11

x     y      Computed y  Error
0.00  1.000  1.000  0.00e+000
0.10  1.105  1.112  6.44e-003
0.20  1.221  1.230  8.76e-003
0.30  1.350  1.366  1.60e-002
0.40  1.492  1.509  1.70e-002
0.50  1.649  1.674  2.53e-002
0.60  1.822  1.846  2.40e-002
0.70  2.014  2.047  3.37e-002
0.80  2.226  2.255  2.91e-002
0.90  2.460  2.500  4.07e-002
1.00  2.718  2.749  3.10e-002

Simpson's rule, Example 4, n=21

x     y      Computed y  Error
0.00  1.000  1.000  0.00e+000
0.10  1.105  1.106  1.10e-003
0.20  1.221  1.224  2.19e-003
0.30  1.350  1.353  3.25e-003
0.40  1.492  1.496  4.25e-003
0.50  1.649  1.654  5.19e-003
0.60  1.822  1.828  6.02e-003
0.70  2.014  2.020  6.73e-003
0.80  2.226  2.233  7.28e-003
0.90  2.460  2.467  7.63e-003
1.00  2.718  2.726  7.76e-003

Simpson's rule, Example 4, n=41

x     y      Computed y  Error
0.00  1.000  1.000  0.00e+000
0.10  1.105  1.105  2.76e-004
0.20  1.221  1.222  5.48e-004
0.30  1.350  1.351  8.13e-004
0.40  1.492  1.493  1.06e-003
0.50  1.649  1.650  1.30e-003
0.60  1.822  1.824  1.51e-003
0.70  2.014  2.015  1.68e-003
0.80  2.226  2.227  1.82e-003
0.90  2.460  2.462  1.91e-003
1.00  2.718  2.720  1.94e-003
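The trapezoidal and Simpson's rule results above come from replacing the integral by a quadrature sum and solving the resulting nonlinear system at the nodes (a Nyström-type discretization). As a rough illustration of the mechanics, the sketch below (in Python rather than the MATLAB used for the thesis computations) applies composite Simpson weights to a hypothetical smooth Hammerstein problem with exact solution y(x) = 1; the kernel k(x,s) = x e^s / 10, the nonlinearity y², and the fixed-point iteration are illustrative assumptions, not the thesis test problems.

```python
import numpy as np

def simpson_weights(n, a=0.0, b=1.0):
    """Composite Simpson weights for n (odd) equally spaced nodes on [a, b]."""
    if n % 2 == 0:
        raise ValueError("n must be odd")
    h = (b - a) / (n - 1)
    w = np.ones(n)
    w[1:-1:2] = 4.0   # odd interior nodes
    w[2:-1:2] = 2.0   # even interior nodes
    return (h / 3.0) * w

# Hypothetical problem: y(x) = f(x) + \int_0^1 k(x,s) y(s)^2 ds, exact y = 1.
k = lambda x, s: x * np.exp(s) / 10.0
f = lambda x: 1.0 - x * (np.e - 1.0) / 10.0   # chosen so that y(x) = 1 solves the equation

n = 11
x = np.linspace(0.0, 1.0, n)
w = simpson_weights(n)
K = k(x[:, None], x[None, :])          # kernel values k(x_i, s_j)

# successive approximation for the nonlinear Nystrom system
y = f(x)
for _ in range(100):
    y_new = f(x) + K @ (w * y**2)
    if np.max(np.abs(y_new - y)) < 1e-14:
        y = y_new
        break
    y = y_new

err = np.max(np.abs(y - 1.0))          # error at the quadrature level
```

With n = 11 the error sits at the level of the Simpson quadrature error for this smooth kernel; halving h should reduce it by roughly a factor of 16, which is the behaviour seen in Examples 1-3 above.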

The next method is the collocation method of Kumar and Sloan (1987) ([14]) that was described in chapter four; we now give the results obtained with this method. The integrals involved were evaluated using the trapezoidal rule with ntrap = 100 panels, following the paper by Kumar and Sloan (1987). The results of test problem 1 for n = 6, n = 11 and n = 21 are given below.
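The composite trapezoidal rule with ntrap panels that is used to evaluate these integrals can be sketched as follows (Python; the e^s test integrand is just an illustration):

```python
import numpy as np

def trapezoid(g, a=0.0, b=1.0, ntrap=100):
    """Composite trapezoidal rule with ntrap panels (ntrap + 1 nodes)."""
    s = np.linspace(a, b, ntrap + 1)
    h = (b - a) / ntrap
    vals = g(s)
    return h * (0.5 * vals[0] + vals[1:-1].sum() + 0.5 * vals[-1])

# sanity check on a smooth integrand: the error should be O(1/ntrap^2)
approx = trapezoid(np.exp)             # \int_0^1 e^s ds = e - 1
err = abs(approx - (np.e - 1.0))
```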

Collocation method, Example 1, n=6


x     y      Computed y  Error
0.00  1.000  1.000  1.52e-004
0.20  1.000  1.000  1.85e-004
0.40  1.000  1.000  2.26e-004
0.60  1.000  1.000  2.76e-004
0.80  1.000  1.000  3.38e-004
1.00  1.000  1.000  4.12e-004

Collocation method, Example 1, n=11


x     y      Computed y  Error
0.00  1.000  1.000  3.34e-005
0.10  1.000  1.000  3.70e-005
0.20  1.000  1.000  4.09e-005
0.30  1.000  1.000  4.51e-005
0.40  1.000  1.000  4.99e-005
0.50  1.000  1.000  5.51e-005
0.60  1.000  1.000  6.09e-005
0.70  1.000  1.000  6.74e-005
0.80  1.000  1.000  7.44e-005
0.90  1.000  1.000  8.23e-005
1.00  1.000  1.000  9.09e-005

Collocation method, Example 1, n=21

x     y      Computed y  Error
0.00  1.000  1.000  4.88e-006
0.05  1.000  1.000  5.13e-006
0.10  1.000  1.000  5.40e-006
0.15  1.000  1.000  5.67e-006
0.20  1.000  1.000  5.96e-006
0.25  1.000  1.000  6.27e-006
0.30  1.000  1.000  6.59e-006
0.35  1.000  1.000  6.93e-006
0.40  1.000  1.000  7.28e-006
0.45  1.000  1.000  7.66e-006
0.50  1.000  1.000  8.05e-006
0.55  1.000  1.000  8.46e-006
0.60  1.000  1.000  8.90e-006
0.65  1.000  1.000  9.35e-006
0.70  1.000  1.000  9.83e-006
0.75  1.000  1.000  1.03e-005
0.80  1.000  1.000  1.09e-005
0.85  1.000  1.000  1.14e-005
0.90  1.000  1.000  1.20e-005
0.95  1.000  1.000  1.26e-005
1.00  1.000  1.000  1.33e-005

Now, we are going to examine test problem 2 again for n = 6, n = 11 and n = 21.

Collocation method, Example 2, n=6


x     y      Computed y  Error
0.00  1.000  1.000  1.70e-004
0.20  1.000  1.000  1.38e-004
0.40  1.000  1.000  9.77e-005
0.60  1.000  1.000  4.76e-005
0.80  1.000  1.000  1.42e-005
1.00  1.000  1.000  9.00e-005

Collocation method, Example 2, n=11


x     y      Computed y  Error
0.00  1.000  1.000  3.95e-005
0.10  1.000  1.000  3.67e-005
0.20  1.000  1.000  3.34e-005
0.30  1.000  1.000  2.97e-005
0.40  1.000  1.000  2.56e-005
0.50  1.000  1.000  2.10e-005
0.60  1.000  1.000  1.60e-005
0.70  1.000  1.000  1.03e-005
0.80  1.000  1.000  4.09e-006
0.90  1.000  1.000  2.77e-006
1.00  1.000  1.000  1.03e-005

Collocation method, Example 2, n=21

x     y      Computed y  Error
0.00  1.000  1.000  6.81e-006
0.05  1.000  1.000  6.92e-006
0.10  1.000  1.000  7.03e-006
0.15  1.000  1.000  7.14e-006
0.20  1.000  1.000  7.26e-006
0.25  1.000  1.000  7.38e-006
0.30  1.000  1.000  7.50e-006
0.35  1.000  1.000  7.63e-006
0.40  1.000  1.000  7.76e-006
0.45  1.000  1.000  7.90e-006
0.50  1.000  1.000  8.05e-006
0.55  1.000  1.000  8.21e-006
0.60  1.000  1.000  8.38e-006
0.65  1.000  1.000  8.57e-006
0.70  1.000  1.000  8.77e-006
0.75  1.000  1.000  8.99e-006
0.80  1.000  1.000  9.22e-006
0.85  1.000  1.000  9.48e-006
0.90  1.000  1.000  9.77e-006
0.95  1.000  1.000  1.01e-005
1.00  1.000  1.000  1.04e-005

Now we are going to examine the third set of data (test problem 3) using the collocation method (cf. Atkinson (1997) [2]).

Collocation method, Example 3, n=6

x     y      Computed y  Error
0.00  1.000  0.983  1.66e-002
0.20  1.221  1.203  1.86e-002
0.40  1.492  1.471  2.08e-002
0.60  1.822  1.799  2.33e-002
0.80  2.226  2.199  2.62e-002
1.00  2.718  2.689  2.96e-002

Collocation method, Example 3, n=11


x     y      Computed y  Error
0.00  1.000  0.996  4.30e-003
0.10  1.105  1.101  4.53e-003
0.20  1.221  1.217  4.78e-003
0.30  1.350  1.345  5.05e-003
0.40  1.492  1.486  5.34e-003
0.50  1.649  1.643  5.65e-003
0.60  1.822  1.816  5.98e-003
0.70  2.014  2.007  6.33e-003
0.80  2.226  2.219  6.71e-003
0.90  2.460  2.452  7.12e-003
1.00  2.718  2.711  7.56e-003

Collocation method, Example 3, n=21

x     y      Computed y  Error
0.00  1.000  0.999  1.18e-003
0.05  1.051  1.050  1.21e-003
0.10  1.105  1.104  1.24e-003
0.15  1.162  1.161  1.27e-003
0.20  1.221  1.220  1.30e-003
0.25  1.284  1.283  1.34e-003
0.30  1.350  1.348  1.37e-003
0.35  1.419  1.418  1.41e-003
0.40  1.492  1.490  1.45e-003
0.45  1.568  1.567  1.49e-003
0.50  1.649  1.647  1.53e-003
0.55  1.733  1.732  1.57e-003
0.60  1.822  1.821  1.61e-003
0.65  1.916  1.914  1.66e-003
0.70  2.014  2.012  1.70e-003
0.75  2.117  2.115  1.75e-003
0.80  2.226  2.224  1.80e-003
0.85  2.340  2.338  1.85e-003
0.90  2.460  2.458  1.90e-003
0.95  2.586  2.584  1.95e-003
1.00  2.718  2.716  2.01e-003

Finally, using the collocation method we obtain the following results for the fourth set of data (test problem 4):

Collocation method, Example 4, n=6


x     y      Computed y  Error
0.00  1.000  1.000  0.00e+000
0.20  1.221  1.233  1.20e-002
0.40  1.492  1.515  2.31e-002
0.60  1.822  1.855  3.26e-002
0.80  2.226  2.265  3.93e-002
1.00  2.718  2.760  4.19e-002

Collocation method, Example 4, n=11


x     y      Computed y  Error
0.00  1.000  1.000  0.00e+000
0.10  1.105  1.107  1.63e-003
0.20  1.221  1.225  3.22e-003
0.30  1.350  1.355  4.77e-003
0.40  1.492  1.498  6.23e-003
0.50  1.649  1.656  7.58e-003
0.60  1.822  1.831  8.79e-003
0.70  2.014  2.024  9.82e-003
0.80  2.226  2.236  1.06e-002
0.90  2.460  2.471  1.12e-002
1.00  2.718  2.730  1.14e-002

Collocation method, Example 4, n=21


x     y      Computed y  Error
0.00  1.000  1.000  0.00e+000
0.05  1.051  1.052  2.37e-004
0.10  1.105  1.106  4.73e-004
0.15  1.162  1.163  7.07e-004
0.20  1.221  1.222  9.38e-004
0.25  1.284  1.285  1.17e-003
0.30  1.350  1.351  1.39e-003
0.35  1.419  1.421  1.61e-003
0.40  1.492  1.494  1.82e-003
0.45  1.568  1.570  2.02e-003
0.50  1.649  1.651  2.21e-003
0.55  1.733  1.736  2.40e-003
0.60  1.822  1.825  2.57e-003
0.65  1.916  1.918  2.73e-003
0.70  2.014  2.017  2.88e-003
0.75  2.117  2.120  3.01e-003
0.80  2.226  2.229  3.12e-003
0.85  2.340  2.343  3.22e-003
0.90  2.460  2.463  3.30e-003
0.95  2.586  2.589  3.37e-003
1.00  2.718  2.722  3.41e-003

The maximum absolute errors for the z values were obtained by evaluating z(x) at the points ti = (i - 1)/256, i = 1, 2, ..., 256. The next table gives the maximum z errors for the four examples for n = 6, n = 11 and n = 21.

Table of maximum z errors

n   Example 1  Example 2  Example 3  Example 4
6   2.12e-003  2.13e-003  2.96e-002  5.30e-002
11  5.90e-004  5.84e-004  7.56e-003  1.45e-002
21  1.54e-004  1.55e-004  2.02e-003  4.21e-003

Tables with the collocation coefficients for the four examples for n = 6 and n = 11 are given below.

Collocation Coefficients, n=6

Example 1    Example 2    Example 3    Example 4
5.404e-001   5.404e-001   9.834e-001   1.000e+000
3.625e-001   3.625e-001   1.203e+000   1.233e+000
1.702e-001   1.701e-001   1.471e+000   1.515e+000
-2.893e-002  -2.915e-002  1.799e+000   1.855e+000
-2.269e-001  -2.272e-001  2.199e+000   2.265e+000
-4.158e-001  -4.162e-001  2.689e+000   2.760e+000

Collocation Coefficients, n=11

Example 1    Example 2    Example 3    Example 4
5.403e-001   5.403e-001   9.834e-001   1.000e+000
4.536e-001   4.536e-001   1.101e+000   1.107e+000
3.624e-001   3.624e-001   1.217e+000   1.225e+000
2.675e-001   2.675e-001   1.345e+000   1.355e+000
1.700e-001   1.700e-001   1.486e+000   1.498e+000
7.080e-002   7.076e-002   1.643e+000   1.656e+000
-2.914e-002  -2.918e-002  1.816e+000   1.831e+000
-1.288e-001  -1.288e-001  2.007e+000   2.024e+000
-2.271e-001  -2.272e-001  2.219e+000   2.236e+000
-3.232e-001  -3.233e-001  2.452e+000   2.471e+000
-4.161e-001  -4.162e-001  2.711e+000   2.730e+000

We can also examine the example used in Kumar and Sloan (1987) [14, pp. 591-593] with the collocation method, in order to compare our results with those of Kumar and Sloan, and also with those in Elnagar and Razzaghi (1996) ([5]).

The example is the following (Kumar and Sloan (1987), [14, pp. 591-593]):

5. y(x) = ∫_0^1 k(x, s) e^{y(s)} ds,  x ∈ [0, 1],

where the kernel is

k(x, s) = -s(1 - x), if s < x;
k(x, s) = -x(1 - s), if s > x,

and the true solution is y*(x) = -ln 2 + 2 ln(c / cos(c(x - 1/2)/2)), where c is the solution of c / cos(c/4) = √2, given by c = 1.3360556949... (Kumar (1990), [16], p. 327).
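A minimal sketch of how this collocation solution can be set up for Example 5 (Python; it assumes piecewise-linear hat basis functions on a uniform grid, trapezoidal evaluation of the integrals with ntrap = 100 as above, and a plain fixed-point iteration for the nonlinear collocation system — the thesis implementation may differ in these details):

```python
import numpy as np

def kernel(x, s):
    # k(x, s) = -s(1 - x) if s <= x,  -x(1 - s) if s > x (branches agree at s = x)
    return np.where(s <= x, -s * (1.0 - x), -x * (1.0 - s))

n = 9                                   # collocation nodes
t = np.linspace(0.0, 1.0, n)
dt = t[1] - t[0]

# fine quadrature grid and trapezoidal weights (ntrap = 100 panels)
ntrap = 100
sq = np.linspace(0.0, 1.0, ntrap + 1)
wq = np.full(ntrap + 1, 1.0 / ntrap)
wq[0] = wq[-1] = 0.5 / ntrap

# piecewise-linear hat functions phi_j sampled on the quadrature grid
phi = np.maximum(0.0, 1.0 - np.abs(sq[None, :] - t[:, None]) / dt)

# B[i, j] ~ \int_0^1 k(t_i, s) phi_j(s) ds
B = (kernel(t[:, None], sq[None, :]) * wq) @ phi.T

# z_n = sum_j a_j phi_j approximates e^{y}; collocation gives a_i = exp((B a)_i),
# solved here by fixed-point iteration (the map is a contraction for this kernel)
a = np.ones(n)
for _ in range(200):
    a_new = np.exp(B @ a)
    if np.max(np.abs(a_new - a)) < 1e-12:
        a = a_new
        break
    a = a_new

y_n = B @ a                             # approximate y at the collocation nodes

# true solution y*(x) = -ln 2 + 2 ln(c / cos(c(x - 1/2)/2))
c = 1.3360556949
y_true = -np.log(2.0) + 2.0 * np.log(c / np.cos(c * (t - 0.5) / 2.0))
err = np.max(np.abs(y_n - y_true))
```

The maximum y error this produces for n = 9 is comparable in magnitude to the errors reported in the table below, up to the extra quadrature error introduced in the B matrix.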
Table of maximum z and y errors using the Kumar and Sloan collocation method, Example 5

n   max z error  max y error
5   7.81e-003    5.19e-004
9   2.14e-003    1.28e-004
17  5.65e-004    3.15e-005

Kumar and Sloan (1987) collocation coefficients, Example 5, n=5 and n=9

n=5         n=9
1.000e+000  1.000e+000
9.175e-001  9.509e-001
8.921e-001  9.178e-001
9.175e-001  8.987e-001
1.000e+000  8.924e-001
            8.987e-001
            9.178e-001
            9.509e-001
            1.000e+000

Evaluation of Results:

In the next tables we give the maximum y errors for the methods we have examined, for different values of n, together with the theoretical order of convergence (th. or. of conv.) and the observed order of convergence (ob. or. of conv.). E1, E2 and E3 denote the absolute errors for n = 11, n = 21 and n = 41, while E6, E11 and E21 denote the absolute errors for n = 6, n = 11 and n = 21.

Example 1:

n   max error  x     th. or. of conv.  ob. or. of conv.
TRAPEZOIDAL RULE
11  1.39e-003  1.00  O(h^2)
21  3.47e-004  1.00  O(h^2)  2
41  8.69e-005  1.00  O(h^2)  2
SIMPSON'S RULE
11  2.83e-007  1.00  O(h^4)
21  1.79e-008  1.00  O(h^4)  4
41  1.12e-009  1.00  O(h^4)  4
COLLOCATION METHOD
6   4.12e-004  1.00  O(h^2)
11  9.09e-005  1.00  O(h^2)  2
21  1.33e-005  1.00  O(h^2)  2

We can see here that for both the Trapezoidal rule and Simpson's rule the theoretical order of convergence agrees with the observed order of convergence. For the Trapezoidal rule E1/E2 = 4.0058 and E2/E3 = 3.9931; for Simpson's rule E1/E2 = 15.8101 and E2/E3 = 15.9821; for the collocation method E6/E11 = 4.5325 and E11/E21 = 6.8346. The smallest absolute errors are obtained using Simpson's rule.
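The observed order p is just the base-2 logarithm of the ratio of successive errors, since h is halved in going from n = 11 to n = 21 to n = 41. For instance, with the Example 1 maximum errors from the tables above (Python):

```python
import math

def observed_order(e_coarse, e_fine):
    """Observed order p from errors at steps h and h/2: E(h)/E(h/2) ~ 2^p."""
    return math.log2(e_coarse / e_fine)

trap = [1.39e-3, 3.47e-4, 8.69e-5]      # n = 11, 21, 41 (Trapezoidal rule)
simp = [2.83e-7, 1.79e-8, 1.12e-9]      # n = 11, 21, 41 (Simpson's rule)

p_trap = [observed_order(a, b) for a, b in zip(trap, trap[1:])]
p_simp = [observed_order(a, b) for a, b in zip(simp, simp[1:])]
# p_trap is close to 2, p_simp is close to 4
```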

Example 2:

n   max error  x     th. or. of conv.  ob. or. of conv.
TRAPEZOIDAL RULE
11  1.70e-003  1.00  O(h^2)
21  4.25e-004  1.00  O(h^2)  2
41  1.06e-004  1.00  O(h^2)  2
SIMPSON'S RULE
11  6.78e-007  0.60  O(h^4)
21  4.23e-008  0.60  O(h^4)  4
41  2.64e-009  0.60  O(h^4)  4
COLLOCATION METHOD
6   1.70e-004  0.00  O(h^2)
11  3.95e-005  0.00  O(h^2)  2
21  1.04e-005  0.00  O(h^2)  2

In this example, as in the first, the observed order of convergence for the Trapezoidal rule equals the theoretical order. With Simpson's rule we again obtain the expected order of convergence, and also the most accurate results. For the Trapezoidal rule the ratios are E1/E2 = 4.0000 and E2/E3 = 4.0094; for Simpson's rule E1/E2 = 16.0284 and E2/E3 = 16.0227; for the collocation method E6/E11 = 4.3038 and E11/E21 = 3.7981.

Example 3:

n   max error  x     th. or. of conv.  ob. or. of conv.
TRAPEZOIDAL RULE
11  1.57e-002  0.80  O(h^2)
21  3.93e-003  0.80  O(h^2)  2
41  9.79e-004  0.80  O(h^2)  2
SIMPSON'S RULE
11  3.17e-005  0.50  O(h^4)
21  1.98e-006  0.50  O(h^4)  4
41  1.24e-007  0.50  O(h^4)  4
COLLOCATION METHOD
6   2.96e-002  1.00  O(h^2)
11  7.56e-003  1.00  O(h^2)  2
21  2.01e-003  1.00  O(h^2)  2

In this example the theoretical order of convergence again matches the observed order. For the Trapezoidal rule the ratios are E1/E2 = 3.9949 and E2/E3 = 4.0143; for Simpson's rule E1/E2 = 16.0101 and E2/E3 = 15.9677; for the collocation method E6/E11 = 3.9153 and E11/E21 = 3.7612.

Example 4:

n   max error  x     th. or. of conv.  ob. or. of conv.
TRAPEZOIDAL RULE
11  9.25e-002  1.00  O(h^2)
21  2.27e-002  1.00  O(h^2)  2
41  5.64e-003  1.00  O(h^2)  2
SIMPSON'S RULE
11  3.10e-002  1.00  O(h^4)
21  7.76e-003  1.00  O(h^4)  2
41  1.94e-003  1.00  O(h^4)  2
COLLOCATION METHOD
6   4.19e-002  1.00  O(h^2)
11  1.14e-002  1.00  O(h^2)  2
21  3.41e-003  1.00  O(h^2)  2

The ratio here using the Trapezoidal rule is E1/E2 = 4.0749 and E2/E3 = 4.0248. Using Simpson's rule the ratio is E1/E2 = 3.9948 and E2/E3 = 4.0000, and using the collocation method E6/E11 = 3.7748 and E11/E21 = 3.2551. As we can see, Simpson's rule does not give accurate results here, because the first derivative of the kernel with respect to s is discontinuous at s = x, so the composite rule loses its O(h^4) accuracy.
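This loss of accuracy is a general feature of composite Simpson's rule when the integrand's first derivative jumps inside the interval: the panel containing the kink contributes a low-order error that dominates the O(h^4) error of the smooth panels. A small self-contained illustration (Python; the integrand e^{-|s - c|} with c = 0.37 is an arbitrary example, not one of the thesis kernels):

```python
import numpy as np

def composite_simpson(f, a, b, n):
    """Composite Simpson's rule with n (odd) equally spaced nodes."""
    x = np.linspace(a, b, n)
    h = (b - a) / (n - 1)
    w = np.ones(n)
    w[1:-1:2] = 4.0
    w[2:-1:2] = 2.0
    return (h / 3.0) * np.dot(w, f(x))

# kink in f' at s = c (chosen so that c never falls on a quadrature node)
c = 0.37
f = lambda s: np.exp(-np.abs(s - c))
exact = 2.0 - np.exp(-c) - np.exp(-(1.0 - c))   # \int_0^1 e^{-|s-c|} ds

errs = [abs(composite_simpson(f, 0.0, 1.0, n) - exact) for n in (11, 21, 41)]
order = np.log2(errs[0] / errs[2]) / 2.0        # observed order over two halvings
# the observed order comes out well below 4, the smooth-integrand rate
```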

Finally, for example five the results we obtained are practically the same as those in the table in Kumar and Sloan (1987) [14, p. 593].

We can also see from the results that the collocation method is more accurate than the Trapezoidal rule, but the most accurate method is Simpson's rule, which gives the smallest errors. Elnagar and Razzaghi (1996) ([5]) claim more accurate results than those of the Kumar and Sloan (1987) ([14]) collocation method.

Bibliography

[1] Atkinson, K.E., A survey of numerical methods for solving nonlinear integral equations, J.
Integral Equations Appl., 4 (1992), 15-46.

[2] Atkinson, K.E., The Numerical Solution of Integral Equations of the Second Kind,
Cambridge Univ. Press, 1997.

[3] Baker, C.T.H., The Numerical Treatment of Integral Equations, Clarendon Press, 1978.

[4] Delves, L.M.; Mohamed, J.L., Computational Methods for Integral Equations, Cambridge University Press, 1985.

[5] Elnagar, G.N.; Razzaghi, M., A Pseudospectral Method for Hammerstein Equations, J. Mathematical Analysis and Applications, 199 (1996), 579-591.

[6] Hammerlin, G., Developments in solving integral equations numerically, pp. 187-201, in: Numerical Integration. Recent Developments, Software and Applications (Proc. NATO Adv. Res. Workshop, Bergen, Norway, 1991), NATO ASI Ser. C 357, 1992.

[7] Jerri, A.J., Introduction to Integral Equations with Applications, Wiley, 1985.

[8] Kaneko, H.; Xu, Y., Degenerate kernel methods for Hammerstein equations, Math. Comp.,
56 (1991), 141-148.

[9] Kaneko, H.; Noren, R.D.; Xu, Y., Numerical solutions for weakly singular Hammerstein equations and their superconvergence, J. Integral Equations Appl., 4 (1992), 391-407.

[10] Kaneko H.; Noren R.D.; Padilla P.A., Superconvergence of the iterated collocation methods
for Hammerstein equations, J. Comput. Appl. Math. 80 (1997), No 2, 335-349.

[11] Kaneko, H. and Noren, R. D., Numerical solutions of Hammerstein equations, pp. 257-288,
in: Boundary Integral Methods: Numerical and Mathematical Aspects, Golberg, M. (Ed.),
WIT Press/Computational Mechanics Publications, 1999.

[12] Kaneko, H., Padilla, P. and Xu, Y., Superconvergence of the iterated degenerate kernel
method, Applicable Analysis, 80 (2002), No. 3, 331-351.

[13] Kumar, S., Superconvergence of a Collocation Method for Hammerstein Equations, IMA J.
Numer. Anal., 7 (1987), 313-325.

[14] Kumar S.; Sloan I.H., A new collocation-type method for Hammerstein integral equations,
Math. Comp. 48 (1987), 585-593.

[15] Kumar, S., A New Collocation-type Method for the Numerical Solution of Hammerstein Equations, Bull. Aust. Math. Soc., 38 (1988), 151-152.

[16] Kumar, S., The Numerical Solution of Hammerstein Equations by a Method based on Polynomial Collocation, J. Aust. Math. Soc., Ser. B 31 (1990), 319-329.

[17] Some, B., Some Recent Numerical Methods for Solving Nonlinear Hammerstein Integral Equations, Math. Comput. Modelling, 18 (1993), 55-62.
