
MT5802 - Integral equations

Introduction

Integral equations occur in a variety of applications, often being obtained from a


differential equation. The reason for doing this is that it may make solution of the
problem easier or, sometimes, enable us to prove fundamental results on the existence
and uniqueness of the solution.
Denoting the unknown function by φ, we consider linear integral equations which involve an integral of the form
$$\int_a^b K(x,s)\,\phi(s)\,ds \qquad \text{or} \qquad \int_a^x K(x,s)\,\phi(s)\,ds.$$
The type with integration over a fixed interval is called a Fredholm equation, while if the
upper limit is x, a variable, it is a Volterra equation.

The other fundamental division of these equations is into first and second kinds. For the Fredholm equation these are
$$f(x) = \int_a^b K(x,s)\,\phi(s)\,ds$$
$$\phi(x) = f(x) + \lambda \int_a^b K(x,s)\,\phi(s)\,ds.$$
The corresponding Volterra equations have the upper limit b replaced with x. The numerical parameter λ is introduced in front of the integral for reasons that will become apparent in due course. We shall mainly deal with equations of the second kind.

Series solutions

One fairly obvious thing to try for the equations of the second kind is to make an
expansion in λ and hope that, at least for small enough values, this might converge. To
illustrate the method let us begin with a simple Volterra equation,
$$\phi(x) = x + \lambda \int_0^x \phi(s)\,ds.$$

For small λ, $\phi_0(x) = x$ is a first approximation. Insert this in the integral term to get a better approximation:
$$\phi_1(x) = x + \lambda \int_0^x s\,ds = x + \frac{1}{2}\lambda x^2.$$
Again put this into the integral to get
$$\phi_2(x) = x + \lambda \int_0^x \left(s + \frac{1}{2}\lambda s^2\right)ds = x + \frac{1}{2}\lambda x^2 + \frac{1}{6}\lambda^2 x^3.$$
Continuing this process, we get
$$\phi_n(x) = x + \frac{1}{2}\lambda x^2 + \cdots + \frac{1}{n!}\lambda^{n-1} x^n$$
and, as we let n → ∞, the series converges to $\frac{1}{\lambda}\left(e^{\lambda x} - 1\right)$. Substituting into the equation
verifies that this is the correct solution. So, for this Volterra equation the technique of
expansion in series gives a result which is convergent for all λ.
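This iteration is easy to reproduce with a computer algebra system; the following is a minimal sketch, assuming Python with sympy (the number of iterations shown is arbitrary).

```python
import sympy as sp

x, s, lam = sp.symbols('x s lambda')

phi = x  # phi_0, the first approximation
for _ in range(3):
    # phi_{n+1}(x) = x + lam * integral_0^x phi_n(s) ds
    phi = x + lam * sp.integrate(phi.subs(x, s), (s, 0, x))

print(sp.expand(phi))
# x + lambda*x**2/2 + lambda**2*x**3/6 + lambda**3*x**4/24,
# the first terms of (exp(lambda*x) - 1)/lambda
```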

We now show that this works for all Volterra equations, subject to some fairly general
conditions. Suppose that we look for a solution with x in some finite interval [a,b] and
that on this interval f(x) is bounded with |f(x)| < m. We also suppose that, on the
interval [a,b]×[a,b], |K(x,s)| < M.
Then
$$|\phi_0(x)| = |f(x)| < m$$
$$|\phi_1(x)| = \left|f(x) + \lambda \int_a^x K(x,s)\,f(s)\,ds\right| < m + |\lambda|\,mM(x-a)$$
$$|\phi_2(x)| = \left|f(x) + \lambda \int_a^x K(x,s)\,\phi_1(s)\,ds\right| < m + |\lambda| \int_a^x M\bigl(m + |\lambda| mM(s-a)\bigr)\,ds = m\left(1 + |\lambda| M(x-a) + \frac{1}{2}|\lambda|^2 M^2 (x-a)^2\right)$$
Carrying on like this we get
$$|\phi_n(x)| < m\left(1 + |\lambda| M(x-a) + \frac{1}{2}|\lambda|^2 M^2(x-a)^2 + \cdots + \frac{1}{n!}|\lambda|^n M^n (x-a)^n\right)$$
Since the series here is of exponential type we get convergence for all values of λ.

Now see whether the same thing works for Fredholm equations. Again we look at an
example,
$$\phi(x) = x + \lambda \int_0^1 \phi(s)\,ds,$$
similar to the previous one, except for the fixed range of integration.
Now we get
$$\phi_0(x) = x$$
$$\phi_1(x) = x + \lambda \int_0^1 s\,ds = x + \frac{1}{2}\lambda$$
$$\phi_2(x) = x + \lambda \int_0^1 \left(s + \frac{1}{2}\lambda\right)ds = x + \frac{1}{2}\lambda + \frac{1}{2}\lambda^2$$
and, continuing in this way,
$$\phi_n(x) = x + \frac{1}{2}\left(\lambda + \lambda^2 + \cdots + \lambda^n\right).$$
Now we get a geometric series which only converges if |λ| < 1. In this simple problem the series can be summed and, letting n → ∞, we get the solution
$$\phi(x) = x + \frac{\lambda}{2(1-\lambda)}.$$
Since we have managed to sum the series, we get a solution valid for all λ ≠ 1 . In more
general problems, the series will not be easily summable and will only be valid for a
restricted range of values of λ.
The series obtained in this way, for either type of equation, is known as the Neumann series.

Example: Find the first two terms of the Neumann series for the equation
$$\phi(x) = \sin x + \lambda \int_0^{\pi/2} \cos(x^2 s)\,\phi(s)\,ds.$$
We get
$$\phi_0(x) = \sin x$$
$$\phi_1(x) = \sin x + \lambda \int_0^{\pi/2} \cos(x^2 s)\sin s\,ds$$
$$= \sin x + \frac{\lambda}{2}\int_0^{\pi/2} \Bigl(\sin\bigl((1+x^2)s\bigr) + \sin\bigl((1-x^2)s\bigr)\Bigr)\,ds$$
$$= \sin x + \frac{\lambda}{2}\left(\frac{1 - \cos\bigl((1+x^2)\frac{\pi}{2}\bigr)}{1+x^2} + \frac{1 - \cos\bigl((1-x^2)\frac{\pi}{2}\bigr)}{1-x^2}\right).$$
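This closed form is easy to check against direct numerical evaluation of the integral; a minimal sketch, assuming Python with numpy and scipy (λ = 0.5 and x = 0.7 are arbitrary sample values; x = 1 should be avoided, since the second term is then a limit).

```python
import numpy as np
from scipy.integrate import quad

lam = 0.5   # arbitrary sample value of lambda

def phi1_quad(x):
    # direct evaluation of sin(x) + lam * int_0^{pi/2} cos(x^2 s) sin(s) ds
    val, _ = quad(lambda s: np.cos(x**2 * s) * np.sin(s), 0.0, np.pi/2)
    return np.sin(x) + lam * val

def phi1_closed(x):
    # the closed form derived above (take x != 1)
    return np.sin(x) + 0.5 * lam * (
        (1 - np.cos((1 + x**2) * np.pi/2)) / (1 + x**2)
        + (1 - np.cos((1 - x**2) * np.pi/2)) / (1 - x**2))

print(phi1_quad(0.7), phi1_closed(0.7))   # the two values should agree
```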

Separable kernels

Suppose the kernel of a Fredholm equation is a product of a function of x and a function of s, so that the equation is of the form
$$\phi(x) = f(x) + \lambda \int_a^b u(x)\,v(s)\,\phi(s)\,ds.$$
Clearly the u(x) can come outside the integral and if we write
$$c = \int_a^b v(s)\,\phi(s)\,ds$$
then we have
$$\phi(x) = f(x) + \lambda c\,u(x).$$
All we need to find is the constant c and this can be done by multiplying the last equation
by v(x) and integrating from a to b. This gives
$$c = \int_a^b f(x)\,v(x)\,dx + \lambda c \int_a^b u(x)\,v(x)\,dx$$
so
$$c = \frac{\displaystyle\int_a^b v(x)\,f(x)\,dx}{\displaystyle 1 - \lambda \int_a^b u(x)\,v(x)\,dx}.$$
This leads to the solution, so long as the value of λ is not such as to make the
denominator vanish.

Example: Solve
$$\phi(x) = x^2 + \lambda \int_0^1 x^3 s^2\,\phi(s)\,ds.$$
The solution is
$$\phi(x) = x^2 + \lambda c\,x^3$$
with
$$c = \int_0^1 s^2\,\phi(s)\,ds.$$
From the last two equations we get
$$c = \int_0^1 x^4\,dx + \lambda c \int_0^1 x^5\,dx = \frac{1}{5} + \frac{1}{6}\lambda c.$$
So, unless λ = 6, we have
$$\phi(x) = x^2 + \frac{1}{5}\,\frac{6\lambda}{6-\lambda}\,x^3.$$
The value of λ for which the solution breaks down is an eigenvalue of the problem.
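The separable-kernel formula is straightforward to implement with numerical quadrature; here is a minimal sketch, assuming Python with scipy, applied to this example (λ = 2 is an arbitrary choice away from the eigenvalue).

```python
import numpy as np
from scipy.integrate import quad

a, b, lam = 0.0, 1.0, 2.0          # any lam != 6
f = lambda x: x**2
u = lambda x: x**3
v = lambda s: s**2

# c = int v f / (1 - lam * int u v), from the formula above
num, _ = quad(lambda x: v(x) * f(x), a, b)
den = 1 - lam * quad(lambda x: u(x) * v(x), a, b)[0]
c = num / den

phi = lambda x: f(x) + lam * c * u(x)
exact = lambda x: x**2 + (6 * lam / (5 * (6 - lam))) * x**3
print(phi(0.5), exact(0.5))        # should agree
```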

Now, go back to the general form of the equation and look what happens at this value.
Define the homogeneous equation corresponding to our original equation by
$$\phi(x) = \lambda \int_a^b u(x)\,v(s)\,\phi(s)\,ds$$
(ie put f = 0). Multiplying this by v(x) and integrating gives
$$\int_a^b v(x)\,\phi(x)\,dx = \lambda \int_a^b u(x)\,v(x)\,dx \int_a^b v(s)\,\phi(s)\,ds$$
which implies that
$$\lambda \int_a^b v(x)\,u(x)\,dx = 1.$$
There can be no solution of the homogeneous equation unless this condition is satisfied.
If it is satisfied then it is easily verified that φ(x) = ku(x) is a solution for arbitrary k.
Going back to the inhomogeneous equation, it can be seen that if λ is equal to the
eigenvalue, and a solution exists, then the condition
$$\int_a^b v(x)\,f(x)\,dx = 0$$
must be satisfied. Thus the solution can only exist for certain functions f. If such a
solution exists then any multiple of the solution of the homogeneous equation can be
added to it. To summarise, we have,
(a) λ ≠ eigenvalue: a unique solution exists, or
(b) λ = eigenvalue: a solution may or may not exist and, if it does, it is not unique; the homogeneous equation has a solution, any multiple of which can be added.
There are clear parallels between this and the matrix equation
$$X = F + \lambda A X$$
where X and F are column vectors and A is a matrix. This is a discrete analogue of the Fredholm equation. It has the unique solution
$$X = (I - \lambda A)^{-1} F$$
if the inverse exists, which it does except for certain values of λ (the inverses of the matrix eigenvalues). At these values of λ the homogeneous equation (F = 0) has a non-trivial solution, while the inhomogeneous equation may or may not have a solution; if it does, arbitrary multiples of the solution of the homogeneous equation can be added to it.
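A minimal numerical illustration of this discrete analogue, assuming Python with numpy (the matrix and vector here are randomly generated for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
F = rng.normal(size=4)
lam = 0.1

# unique solution X = (I - lam*A)^{-1} F when the inverse exists
X = np.linalg.solve(np.eye(4) - lam * A, F)
print(np.allclose(X, F + lam * A @ X))   # True: X satisfies X = F + lam*A*X
print(1 / np.linalg.eigvals(A))          # values of lam where I - lam*A is singular
```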

Degenerate Kernels

The method outlined above can be extended to the situation where
$$K(x,y) = \sum_i u_i(x)\,v_i(y)$$
over some finite range of i. To see this we look at an example -
$$\phi(x) = x + \lambda \int_0^1 (xy + x^2 y^2)\,\phi(y)\,dy.$$
If we let
$$c_1 = \int_0^1 y\,\phi(y)\,dy$$
$$c_2 = \int_0^1 y^2\,\phi(y)\,dy$$
then the solution is
$$\phi(x) = x + \lambda\left(c_1 x + c_2 x^2\right).$$
If we multiply this by x and then by x², integrating from 0 to 1 in each case, we obtain
$$c_1 = \frac{1}{3} + \lambda\left(\frac{1}{3}c_1 + \frac{1}{4}c_2\right)$$
$$c_2 = \frac{1}{4} + \lambda\left(\frac{1}{4}c_1 + \frac{1}{5}c_2\right)$$
or
$$\begin{pmatrix} 1 - \frac{1}{3}\lambda & -\frac{1}{4}\lambda \\[4pt] -\frac{1}{4}\lambda & 1 - \frac{1}{5}\lambda \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} \frac{1}{3} \\[2pt] \frac{1}{4} \end{pmatrix}.$$

This gives
$$\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \frac{1}{240 - 128\lambda + \lambda^2}\begin{pmatrix} -48(\lambda - 5) & 60\lambda \\ 60\lambda & -80(\lambda - 3) \end{pmatrix}\begin{pmatrix} \frac{1}{3} \\[2pt] \frac{1}{4} \end{pmatrix}$$
and hence a uniquely defined solution unless
$$\lambda^2 - 128\lambda + 240 = 0,$$
ie λ = 64 ± 4√241.
These two values are the eigenvalues for this problem. If λ takes one of these values
then the equation
$$\begin{pmatrix} 1 - \frac{1}{3}\lambda & -\frac{1}{4}\lambda \\[4pt] -\frac{1}{4}\lambda & 1 - \frac{1}{5}\lambda \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$
has a non-trivial solution. This gives a non-trivial solution to the homogeneous integral
equation (which may be multiplied by an arbitrary constant).
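The reduction to a 2×2 system is easy to carry out numerically; a minimal sketch, assuming Python with numpy and scipy (λ = 1 and the test point x = 0.5 are arbitrary choices).

```python
import numpy as np
from scipy.integrate import quad

lam = 1.0                        # any lam other than 64 +/- 4*sqrt(241)
A = np.array([[1/3, 1/4],
              [1/4, 1/5]])
c1, c2 = np.linalg.solve(np.eye(2) - lam * A, [1/3, 1/4])
phi = lambda x: x + lam * (c1 * x + c2 * x**2)

# residual of phi(x) = x + lam * int_0^1 (x*y + x^2*y^2) phi(y) dy at x = 0.5
x = 0.5
integral, _ = quad(lambda y: (x * y + x**2 * y**2) * phi(y), 0.0, 1.0)
print(phi(x) - x - lam * integral)   # ~ 0
print(np.roots([1, -128, 240]))      # the eigenvalues 64 +/- 4*sqrt(241)
```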
If we consider a kernel of the general form
$$K(x,y) = \sum_{i=1}^{n} u_i(x)\,v_i(y)$$
then we can follow the same procedure and will end up with an n ×n matrix to invert.
The determinant will be an nth degree polynomial in λ with n solutions (possibly
complex and possibly repeated) which are the eigenvalues. If λ is not an eigenvalue then
the matrix can be inverted and a unique solution found. If it is an eigenvalue then the
homogeneous equation has a non-trivial solution. In this case, the original equation may
or may not have a solution. To obtain the condition under which a solution exists we
introduce the transposed equation
$$\phi(x) = f(x) + \lambda \int_a^b K(y,x)\,\phi(y)\,dy$$
where the arguments in K are swapped over. This produces a matrix which is the transpose of the original and has the same eigenvalues. If ψ(x) is a solution of the homogeneous transposed equation for the eigenvalue λ, then multiplying the original equation by ψ(x) and integrating from a to b gives
$$\int_a^b \phi(x)\,\psi(x)\,dx = \int_a^b \psi(x)\,f(x)\,dx + \lambda \int_a^b\!\!\int_a^b K(x,y)\,\psi(x)\,\phi(y)\,dx\,dy.$$
From the definition of ψ(x) it satisfies
$$\psi(x) = \lambda \int_a^b K(t,x)\,\psi(t)\,dt$$
from which it can be seen that the x integral in the last term above just gives ψ(y) and the
whole term is
$$\int_a^b \phi(y)\,\psi(y)\,dy$$
which, of course, just cancels the left hand side. So, we conclude that if a solution exists
we must have
$$\int_a^b \psi(x)\,f(x)\,dx = 0.$$

Example: Solve
$$\phi(x) = \sin x + \lambda \int_0^{\pi/2} \cos(x-y)\,\phi(y)\,dy.$$
The first thing to note is that using the formula for cos(x − y) brings this into the form we want, namely
$$\phi(x) = \sin x + \lambda \int_0^{\pi/2} \left(\cos x \cos y + \sin x \sin y\right)\phi(y)\,dy.$$
With
$$c_1 = \int_0^{\pi/2} \cos(x)\,\phi(x)\,dx$$
$$c_2 = \int_0^{\pi/2} \sin(x)\,\phi(x)\,dx$$
we get
$$c_1 = \frac{1}{2} + \lambda\left(\frac{\pi}{4}c_1 + \frac{1}{2}c_2\right)$$
$$c_2 = \frac{\pi}{4} + \lambda\left(\frac{1}{2}c_1 + \frac{\pi}{4}c_2\right)$$
or
$$\begin{pmatrix} 1 - \frac{\pi}{4}\lambda & -\frac{1}{2}\lambda \\[4pt] -\frac{1}{2}\lambda & 1 - \frac{\pi}{4}\lambda \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} \frac{1}{2} \\[2pt] \frac{\pi}{4} \end{pmatrix}.$$
The matrix here can be inverted unless $\lambda = \dfrac{1}{\frac{\pi}{4} \pm \frac{1}{2}}$, the result being
$$\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \frac{1}{16 - 8\lambda\pi + \lambda^2\pi^2 - 4\lambda^2}\begin{pmatrix} 8 \\ 4\lambda + 4\pi - \lambda\pi^2 \end{pmatrix}.$$

Looking at what happens at the eigenvalues, with the + sign we get c1 = c2 if we put the
right hand side equal to zero, giving the solution cos x + sin x (or any multiple of this) for
the homogeneous equation. This is also the solution of the homogeneous transposed
equation (which is the same as the original). For our equation to have a solution for this
eigenvalue we need
$$\int_0^{\pi/2} \sin x\,(\sin x + \cos x)\,dx = 0,$$
which is not true, so no solution exists. This can also be seen from the above matrix
equation, the two equations being incompatible if λ has this value. The condition for the
existence of a solution with the inhomogeneous term replaced with a general function f(x)
is
$$\int_0^{\pi/2} f(x)\,(\cos x + \sin x)\,dx = 0.$$
It can be seen that what this does is ensure that the vector on the right hand side of the
matrix equation is such as to be compatible with the left hand side, the two equations then
being identical. A similar analysis holds for the other eigenvalue.
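The solvability integral here is easy to evaluate; a minimal numerical check, assuming Python with scipy:

```python
import numpy as np
from scipy.integrate import quad

# int_0^{pi/2} sin(x) * (sin(x) + cos(x)) dx = pi/4 + 1/2, which is nonzero
val, _ = quad(lambda x: np.sin(x) * (np.sin(x) + np.cos(x)), 0.0, np.pi/2)
print(val, np.pi/4 + 0.5)   # equal and nonzero: no solution at this eigenvalue
```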

Resolvent kernel

Consider the Fredholm equation in the usual form and suppose for the moment that the
kernel is degenerate,
$$K(x,y) = \sum_{i=1}^{n} u_i(x)\,v_i(y).$$
Then, as we have seen,
$$\phi(x) = f(x) + \lambda \sum_i c_i u_i(x), \qquad c_i = \int_a^b v_i(x)\,\phi(x)\,dx.$$
Multiplying the integral equation by each $v_i(x)$ in turn, we get the system of linear equations
$$c_i = f_i + \lambda \sum_j a_{ij} c_j$$
$$f_i = \int_a^b v_i(x)\,f(x)\,dx, \qquad a_{ij} = \int_a^b v_i(x)\,u_j(x)\,dx,$$
the solution of which is
$$c_i = \sum_j b_{ij} f_j$$
with the matrix B given by $B = (I - \lambda A)^{-1}$. This produces a solution of the form
$$\phi(x) = f(x) + \lambda \int_a^b R(x,y;\lambda)\,f(y)\,dy$$
with
$$R(x,y;\lambda) = \sum_{i,j} b_{ij}\,u_i(x)\,v_j(y).$$

R is called the resolvent kernel and is uniquely defined except when λ is an eigenvalue, ie when I − λA is singular. If λ is an eigenvalue then the homogeneous equation has a non-trivial solution and the full equation may or may not have a solution, as discussed previously. If a solution exists it is not unique, since any multiple of the solution of the homogeneous equation can be added to it.
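For a degenerate kernel the resolvent can be assembled directly from B = (I − λA)⁻¹; a minimal sketch, assuming Python with numpy and scipy, using the (xy + x²y²) kernel from the previous section:

```python
import numpy as np
from scipy.integrate import quad

us = [lambda x: x, lambda x: x**2]     # u_i for K(x,y) = x*y + x^2*y^2
vs = [lambda y: y, lambda y: y**2]
lam = 1.0
n = len(us)

# a_ij = int_0^1 v_i(x) u_j(x) dx, then B = (I - lam*A)^{-1}
A = np.array([[quad(lambda x: vs[i](x) * us[j](x), 0.0, 1.0)[0]
               for j in range(n)] for i in range(n)])
B = np.linalg.inv(np.eye(n) - lam * A)

def R(x, y):
    # R(x, y; lam) = sum_ij b_ij u_i(x) v_j(y)
    return sum(B[i, j] * us[i](x) * vs[j](y)
               for i in range(n) for j in range(n))

# phi(x) = f(x) + lam * int_0^1 R(x,y) f(y) dy, here with f(x) = x
f = lambda x: x
xv = 0.5
integral, _ = quad(lambda y: R(xv, y) * f(y), 0.0, 1.0)
print(f(xv) + lam * integral)   # matches the solution of the 2x2 system
```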

Fredholm theory

Fredholm obtained a general expression for the resolvent kernel, valid even if the kernel
is not degenerate. We shall not go into the details, but the theory says that
$$R(x,y;\lambda) = \frac{D(x,y;\lambda)}{d(\lambda)}$$
where the numerator and denominator can both be expressed as infinite series
$$D(x,y;\lambda) = \sum_{n=0}^{\infty} \frac{(-1)^n}{n!}\,D_n(x,y)\,\lambda^n$$
$$d(\lambda) = \sum_{n=0}^{\infty} \frac{(-1)^n}{n!}\,d_n\,\lambda^n.$$
We have already seen how to construct a Neumann series for the solution; however, it is only convergent for sufficiently small λ, while the series here converge for all values. The solution only breaks down when d(λ) = 0, a condition which determines the eigenvalues. The functions $D_n(x,y)$ and the constants $d_n$ are found from the recurrence relations
$$D_0(x,y) = K(x,y), \qquad d_0 = 1$$
$$d_n = \int_a^b D_{n-1}(x,x)\,dx$$
$$D_n(x,y) = K(x,y)\,d_n - n\int_a^b K(x,z)\,D_{n-1}(z,y)\,dz.$$
To illustrate how this works we look at the equation
$$\phi(x) = x^2 + \lambda \int_0^1 x^3 y^2\,\phi(y)\,dy,$$
which we have already solved as an example of an equation with a separable kernel.
Here we get
$$D_0(x,y) = x^3 y^2$$
$$d_1 = \int_0^1 x^5\,dx = \frac{1}{6}$$
$$D_1(x,y) = \frac{1}{6}x^3 y^2 - \int_0^1 (x^3 z^2)(z^3 y^2)\,dz = \frac{1}{6}x^3 y^2 - \frac{1}{6}x^3 y^2 = 0,$$
and all subsequent terms vanish.
The resolvent kernel is thus
$$R(x,y;\lambda) = \frac{x^3 y^2}{1 - \frac{1}{6}\lambda} = \frac{6\,x^3 y^2}{6 - \lambda}.$$
The solution is
$$\phi(x) = x^2 + \lambda \int_0^1 \frac{6\,x^3 y^2}{6 - \lambda}\,y^2\,dy = x^2 + \frac{1}{5}\,\frac{6\lambda x^3}{6 - \lambda},$$
the same as found before by a different method.
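The recurrence itself is easy to run symbolically; a minimal sketch, assuming Python with sympy, reproducing the calculation above:

```python
import sympy as sp

x, y, z, lam = sp.symbols('x y z lambda')
K = x**3 * y**2

Ds = [K]                  # D_0 = K
ds = [sp.Integer(1)]      # d_0 = 1
for n in range(1, 3):
    ds.append(sp.integrate(Ds[-1].subs(y, x), (x, 0, 1)))   # d_n
    # D_n = K*d_n - n * int_0^1 K(x,z) D_{n-1}(z,y) dz
    Ds.append(K * ds[n]
              - n * sp.integrate(K.subs(y, z) * Ds[-1].subs(x, z), (z, 0, 1)))

print(ds)   # [1, 1/6, 0]: later terms vanish, so d(lam) = 1 - lam/6
D = sum((-1)**k * Ds[k] * lam**k / sp.factorial(k) for k in range(3))
d = sum((-1)**k * ds[k] * lam**k / sp.factorial(k) for k in range(3))
print(sp.cancel(D / d))   # x**3*y**2/(1 - lam/6) = 6*x**3*y**2/(6 - lam)
```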

Solution of a Volterra equation by differentiation.

A Volterra equation with a simple separable kernel can be solved by reducing it to a differential equation. To illustrate this consider the example
$$\phi(x) = x^5 + \int_0^x x s^2\,\phi(s)\,ds.$$
We can divide this through by x, so that the integral term does not depend on x, getting
$$\frac{\phi(x)}{x} = x^4 + \int_0^x s^2\,\phi(s)\,ds.$$
Differentiating with respect to x gives
$$\frac{d}{dx}\left(\frac{\phi(x)}{x}\right) = 4x^3 + x^2\phi(x) = 4x^3 + x^3\left(\frac{\phi(x)}{x}\right),$$
which is a simple linear differential equation. We get
$$\frac{d}{dx}\left(\frac{\phi(x)}{x}\,e^{-\frac{1}{4}x^4}\right) = 4x^3 e^{-\frac{1}{4}x^4}$$
and so
$$\phi(x) = -4x + C x\,e^{\frac{1}{4}x^4}.$$
This involves an arbitrary constant, whereas a Volterra integral equation has a unique
solution. We can evaluate the constant by going back to the integral equation. The
condition that φ(x) = 0 when x = 0 tells us nothing, but if we use the fact that
φ(x) / x → 0 as x → 0 we see that the solution is
$$\phi(x) = 4x\,e^{\frac{1}{4}x^4} - 4x.$$
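This solution can be checked by substituting it back into the integral equation numerically; a minimal sketch, assuming Python with scipy (the test point x = 1.3 is arbitrary).

```python
import numpy as np
from scipy.integrate import quad

phi = lambda x: 4 * x * np.exp(x**4 / 4) - 4 * x

# residual of phi(x) = x^5 + int_0^x x*s^2*phi(s) ds
x = 1.3
integral, _ = quad(lambda s: s**2 * phi(s), 0.0, x)
print(phi(x) - (x**5 + x * integral))   # ~ 0
```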

Integral transform methods

Recall that the Laplace transform of a function f(x) is defined by
$$\tilde{f}(p) = \int_0^{\infty} f(x)\,e^{-px}\,dx.$$
Let us now consider an integral of the form
$$F(x) = \int_0^x g(x-y)\,f(y)\,dy$$
and look at its Laplace transform (LT):
$$\tilde{F}(p) = \int_0^{\infty} e^{-px} \int_0^x g(x-y)\,f(y)\,dy\,dx.$$
The double integral is taken over the region 0 ≤ y ≤ x of the (x, y) plane,
and if we change the order of integration we get
$$\tilde{F}(p) = \int_0^{\infty}\!\!\int_y^{\infty} g(x-y)\,f(y)\,e^{-px}\,dx\,dy.$$
Now let u = x − y and get
$$\tilde{F}(p) = \int_0^{\infty}\!\!\int_0^{\infty} g(u)\,f(y)\,e^{-pu-py}\,dy\,du = \int_0^{\infty} g(u)\,e^{-pu}\,du \int_0^{\infty} f(y)\,e^{-py}\,dy = \tilde{g}(p)\,\tilde{f}(p).$$
Our original integral is called the convolution of the functions f and g, so we have arrived
at the conclusion that the LT of the convolution of two functions is the product of the
LT’s of the individual functions.

This can sometimes be used to solve an integral equation if the integral term takes the
form of a convolution.

Example: Solve
$$\phi(x) = x + \int_0^x (x-y)\,\phi(y)\,dy.$$
Here g(x) = x and
$$\tilde{g}(p) = \int_0^{\infty} x\,e^{-px}\,dx = \frac{1}{p^2}.$$
Taking the LT of the integral equation we get
$$\tilde{\phi}(p) = \frac{1}{p^2} + \frac{1}{p^2}\,\tilde{\phi}(p)$$
$$\tilde{\phi}(p) = \frac{1}{p^2 - 1} = \frac{1}{2}\left(\frac{1}{p-1} - \frac{1}{p+1}\right)$$
leading to
$$\phi(x) = \frac{1}{2}\left(e^x - e^{-x}\right) = \sinh x.$$
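The transform algebra can be reproduced symbolically; a minimal sketch, assuming Python with sympy:

```python
import sympy as sp

x, p = sp.symbols('x p', positive=True)
Phi = sp.symbols('Phi')   # the unknown transform phi~(p)

# transformed equation: Phi = 1/p^2 + Phi/p^2 (by the convolution theorem)
Phi_sol = sp.solve(sp.Eq(Phi, 1/p**2 + Phi/p**2), Phi)[0]
print(sp.simplify(Phi_sol))                                   # 1/(p**2 - 1)
print(sp.laplace_transform(sp.sinh(x), x, p, noconds=True))   # also 1/(p**2 - 1)
```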
Abel's integral equation

This is the following Volterra equation of the first kind:
$$f(x) = \int_0^x (x-s)^{-\alpha}\,\phi(s)\,ds \qquad (0 < \alpha < 1).$$
Note that the integrand is singular at the upper limit but that the integral exists. Now $g(x) = x^{-\alpha}$ and
$$\tilde{g}(p) = \int_0^{\infty} e^{-px}\,x^{-\alpha}\,dx = p^{\alpha-1}\int_0^{\infty} e^{-u}\,u^{-\alpha}\,du = p^{\alpha-1}\,\Gamma(1-\alpha).$$
The gamma function used here is defined by
$$\Gamma(x) = \int_0^{\infty} e^{-u}\,u^{x-1}\,du.$$
So, from the convolution theorem we get
$$\tilde{\phi}(p) = \frac{\tilde{f}(p)\,p^{1-\alpha}}{\Gamma(1-\alpha)}.$$
It is now convenient to introduce a function ψ(x) such that φ(x) = ψ′(x) and ψ(0) = 0. From the properties of Laplace transforms we then have the result $\tilde{\phi}(p) = p\,\tilde{\psi}(p)$, so that
$$\tilde{\psi}(p) = \frac{\tilde{f}(p)\,p^{-\alpha}}{\Gamma(1-\alpha)}.$$
But, since the LT of $x^{-\alpha}$ is $\Gamma(1-\alpha)\,p^{\alpha-1}$, $p^{-\alpha}$ is the LT of $x^{\alpha-1}/\Gamma(\alpha)$, and so it follows from the convolution theorem that
$$\psi(x) = \left\{\Gamma(\alpha)\,\Gamma(1-\alpha)\right\}^{-1}\int_0^x (x-s)^{\alpha-1}\,f(s)\,ds.$$
If we use the result
$$\Gamma(\alpha)\,\Gamma(1-\alpha) = \frac{\pi}{\sin \pi\alpha}$$
then we get the solution in the form
$$\phi(x) = \frac{\sin \pi\alpha}{\pi}\,\frac{d}{dx}\int_0^x (x-s)^{\alpha-1}\,f(s)\,ds.$$
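The inversion formula can be tested on a case with a known answer: for α = 1/2 and f(x) = 2√x the exact solution is φ = 1. A minimal sketch, assuming Python with scipy (the finite-difference step and test point are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

alpha = 0.5
f = lambda x: 2 * np.sqrt(x)   # corresponds to the exact solution phi = 1

def psi(x):
    # (sin(pi*alpha)/pi) * int_0^x (x - s)^(alpha-1) f(s) ds,
    # with quad's 'alg' weight handling the endpoint singularity
    val, _ = quad(f, 0.0, x, weight='alg', wvar=(0.0, alpha - 1.0))
    return np.sin(np.pi * alpha) / np.pi * val

h = 1e-4   # central difference for the outer d/dx
print((psi(0.9 + h) - psi(0.9 - h)) / (2 * h))   # ~ 1.0
```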

Numerical methods

Just as is the case for differential equations, there are many integral equations for which a
simple analytic solution is not possible. In this case numerical techniques can be used. It
is quite easy to see how a numerical method can be implemented. If we let $\phi_i$ be the value of the solution at a series of points $x_i$ spaced out along the interval of interest, then, supposing we are dealing with a Fredholm equation of the second kind, we have
$$\phi_i = \phi(x_i) = f(x_i) + \int_a^b K(x_i,s)\,\phi(s)\,ds.$$
If we now use some numerical approximation for the integral, using the values at our finite set of points, we obtain
$$\phi_i = f_i + \sum_j c_{ij}\,\phi_j.$$

This is now a matrix equation for the set of function values and we can use a numerical
matrix inversion package to get the required result.

To illustrate this, and see how well it works, we consider the equation
$$\phi(x) = \sin x + \int_0^{\pi/2} \cos(x-y)\,\phi(y)\,dy.$$
This, of course, is a separable equation whose solution we have already found, with an arbitrary multiplier λ in front of the integral (here λ = 1). We choose it so that we can compare our eventual approximation with the exact answer.
If we divide the interval [0, π/2] into 10 sub-intervals and let
$$\phi_i = \phi\!\left(\frac{i\pi}{20}\right), \qquad f_i = \sin\!\left(\frac{i\pi}{20}\right), \qquad K_{ij} = \cos\!\left(\frac{i\pi}{20} - \frac{j\pi}{20}\right)$$
for i = 0, 1, ..., 10, then with Simpson's rule used to approximate the integral we get
$$\phi_i = f_i + \frac{\pi}{60}\left(K_{i0}\phi_0 + 4K_{i1}\phi_1 + 2K_{i2}\phi_2 + 4K_{i3}\phi_3 + \cdots + K_{i,10}\phi_{10}\right).$$
To find the unknown function values we have to invert an 11×11 matrix which is readily
done using a numerical matrix inversion routine.
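A minimal sketch of this scheme, assuming Python with numpy (the exact solution used for comparison comes from the separable-kernel analysis with λ = 1):

```python
import numpy as np

N = 10                                 # number of sub-intervals (even, for Simpson)
xs = np.linspace(0.0, np.pi/2, N + 1)
h = xs[1] - xs[0]

# Simpson weights h/3 * (1, 4, 2, 4, ..., 2, 4, 1)
w = np.ones(N + 1)
w[1:-1:2] = 4.0
w[2:-1:2] = 2.0
w *= h/3

K = np.cos(xs[:, None] - xs[None, :])  # K_ij = cos(x_i - x_j)
f = np.sin(xs)

# solve (I - K W) phi = f, where W applies the quadrature weights
phi = np.linalg.solve(np.eye(N + 1) - K * w[None, :], f)

# exact solution from the separable-kernel analysis with lambda = 1
M = np.array([[1 - np.pi/4, -1/2],
              [-1/2, 1 - np.pi/4]])
c1, c2 = np.linalg.solve(M, [1/2, np.pi/4])
exact = np.sin(xs) + c1 * np.cos(xs) + c2 * np.sin(xs)

print(np.max(np.abs(phi - exact)))     # ~ 2.4e-4, as quoted in the text
```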

[Figure: numerical solution φ(x) plotted against x on [0, π/2].]
The above graph shows the solution obtained in this way. On the scale of this graph, the
difference between this numerical solution and the true solution is almost indistinguishable, even though we have only taken 10 intervals. The maximum error is about $2.4\times10^{-4}$.

In the case of a Volterra equation which we want to solve over some range we can use the same procedure. Suppose, for example, we take the equation
$$\phi(x) = x + \int_0^x (x-y)\,\phi(y)\,dy$$
(whose solution we have already seen to be sinh x) and we want to solve it over the range [0, 2]. Then we write it as
$$\phi(x) = x + \int_0^2 K(x,y)\,\phi(y)\,dy$$
$$K(x,y) = \begin{cases} x-y & x > y \\ 0 & \text{otherwise} \end{cases}$$
and treat it in the same way as the Fredholm equation. With [0, 2] divided into 10 intervals we get the following graph of the solution.

Ri 2
φ(x)

0
0 0.5 1 1.5 2
i
x
5
x
Again, with only a small number of points we get a good approximation, the maximum
error over this range being around 0.02.
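The same sketch adapts to the Volterra case simply by zeroing the kernel above the diagonal; again assuming Python with numpy:

```python
import numpy as np

N = 10
xs = np.linspace(0.0, 2.0, N + 1)
h = xs[1] - xs[0]
w = np.ones(N + 1)
w[1:-1:2] = 4.0
w[2:-1:2] = 2.0
w *= h/3

# Volterra kernel: K(x, y) = x - y for x > y, zero otherwise
K = np.where(xs[:, None] > xs[None, :], xs[:, None] - xs[None, :], 0.0)

phi = np.linalg.solve(np.eye(N + 1) - K * w[None, :], xs)
print(np.max(np.abs(phi - np.sinh(xs))))   # ~ 0.02, as quoted in the text
```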

Further reading

Integral Equations: A Short Course - Ll. G. Chambers
Integral Equations - B. L. Moiseiwitsch
