NumericalHUB 0223
Applied Mathematics through modern FORTRAN
Juan A. Hernández Ramos
Javier Escoto López
All rights are reserved. No part of this publication may be reproduced, stored or
transmitted without the permission of the authors.
ISBN 979-8604287071
I User Manual

1 Systems of equations
1.1 Overview
1.2 LU solution example
1.3 Newton solution example
1.4 Implicit and explicit equations
1.5 Power method example
1.6 Condition number example

2 Lagrange interpolation
2.1 Overview
2.2 Interpolated value
2.3 Interpolant and its derivatives
2.4 Integral of a function
2.5 Lagrange polynomials
2.6 Ill-posed interpolants
2.7 Lebesgue function and error function
2.8 Chebyshev polynomials
2.9 Chebyshev expansion and Lagrange interpolant

3 Finite Differences
3.1 Overview
3.2 Derivatives of a 1D function
3.3 Derivatives of a 2D function
3.4 Truncation and round-off errors of derivatives

4 Cauchy Problem
4.1 Overview
4.2 First order ODE
4.3 Linear spring
4.4 Lorenz Attractor
4.5 Stability regions
4.6 Richardson extrapolation to calculate error

II Developer guidelines

1 Systems of equations
1.1 Overview
1.2 Linear systems and LU factorization
1.3 Non linear systems of equations
1.4 Eigenvalues and eigenvectors
1.5 Power method and deflation method
1.6 Inverse power method
1.7 SVD decomposition
1.8 Condition number

2 Interpolation
2.1 Overview
2.2 Interpolation module
2.3 Lagrange interpolation module

Bibliography
Preface
Juan A. Hernández
Javier Escoto
Madrid, September 2019
Introduction
This book is intended to serve as a guide for graduate students of engineering
and scientific disciplines. In particular, it has been developed with the students
of the Technical Superior School of Aeronautics and Space Engineering (ETSIAE)
of the Polytechnic University of Madrid (UPM) in mind. The topics presented
cover many of the mathematical problems that appear in the different subjects of
aerospace engineering. Far from being a classical textbook, with proofs and
extended theoretical descriptions, the book is focused on the application and
computation of the different problems. To this end, for each type of mathematical
problem, an implementation in the Fortran language is presented. A complete library
with different modules for each topic accompanies this book. The goal is to
understand the different methods directly, by plotting numerical results and by
changing parameters to evaluate their effect. Later, the student is advised to modify
the code or to create their own by studying the developer part of this book.
A complete set of libraries and the software code explained in this book
can be downloaded from the repository:
https://github.com/jahrWork/NumericalHUB.
This repository is in continuous development by the authors. Once the compressed
file is downloaded, the Fortran source files comprising the different libraries, as
well as a Microsoft Visual Studio solution called NumericalHUB.sln, can be
extracted. If the reader is not familiar with the Microsoft Integrated Development
Environment (IDE), it is highly recommended to read the book Programming with
Visual Studio: Fortran & Python & C++ & WEB projects, Amazon Kindle Direct
Publishing, 2019. That book describes in detail how to manage big software
projects by means of the Microsoft Visual Studio environment. Once Microsoft
Visual Studio is installed, the software solution NumericalHUB.sln allows running
the book examples very easily.

The software solution NumericalHUB.sln comprises a set of extended examples
of different simulation problems. Once the software solution NumericalHUB.sln
is loaded and run, the following simple menu appears on the Command Prompt:
Listing 1: main_NumericalHUB.f90
Each option corresponds to one of the chapters of the book. As was mentioned,
the book is divided into three parts: Part I User, Part II Developer and Part III
Application Program Interface (API), which share the same contents. From the
user's point of view, it is advised to focus on Part I, where easy examples are
implemented and numerical results are explained. From the developer's point of
view, Part II explains in detail how different layers or levels of abstraction are
implemented. This philosophy will allow the advanced user to implement their own
codes. Part III of this book is intended to give a detailed API, allowing novice or
advanced users to build new codes for specific purposes on top of this software.
Part I
User Manual
Chapter 1
Systems of equations
1.1 Overview
In this chapter, solutions of linear problems are obtained, as well as zeros of
implicit functions. The first problem is the solution of a linear system of
algebraic equations; the LU factorization method (subroutine LU_Solution)
and the Gauss elimination method are proposed to obtain the solution. The natural
next step is to deal with solutions of a non linear system of equations. These
systems are solved by the Newton method in the subroutine Newton_Solution. The
eigenvalue problem of a given matrix is considered in the subroutine
Test_Power_Method, where the eigenvalues are computed by means of the power
method. Finally, to introduce the concept of conditioning of a matrix, the
condition number of the Vandermonde matrix is computed in the subroutine
Vandermonde_condition_number. All these subroutines are called from the
subroutine Systems_of_Equations_examples, which can be executed by typing the
first option of the main menu of the NumericalHUB.sln environment.
subroutine Systems_of_Equations_examples
call LU_Solution
call Newton_Solution
call Implicit_explicit_equations
call Test_Power_Method
call Test_eigenvalues_PM
call Vandermonde_condition_number
end subroutine
Listing 1.1: API_Example_Systems_of_Equations.f90
3
1.2 LU solution example

The subroutine LU_Solution solves the linear system A x = b with

\[
A = \begin{pmatrix} 4 & 3 & 6 & 9 \\ 2 & 5 & 4 & 2 \\ 1 & 3 & 2 & 7 \\ 2 & 4 & 3 & 8 \end{pmatrix},
\qquad
b = \begin{pmatrix} 3 \\ 1 \\ 5 \\ 2 \end{pmatrix}.
\tag{1.1}
\]
subroutine LU_Solution
real :: A(4,4), b(4), x(4)
integer :: i
A(1,:) = [ 4, 3, 6, 9]
A(2,:) = [ 2, 5, 4, 2]
A(3,:) = [ 1, 3, 2, 7]
A(4,:) = [ 2, 4, 3, 8]
b = [ 3, 1, 5, 2]
The computed solution is

\[
x = \begin{pmatrix} -7.811 \\ -0.962 \\ 4.943 \\ 0.830 \end{pmatrix}.
\tag{1.2}
\]
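As a quick sanity check, substituting this solution back into the first equation of system (1.1) recovers the first component of b, up to the rounding of x to three decimals:

```latex
4(-7.811) + 3(-0.962) + 6(4.943) + 9(0.830)
  = -31.244 - 2.886 + 29.658 + 7.470
  = 2.998 \approx 3 = b_1 .
```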
1.3 Newton solution example

Another common problem is to obtain the solution of a system of non linear
equations. Iterative methods starting from approximate solutions are always
required. The rate of convergence and the radius of convergence from an initial
condition determine the choice of the iterative method. The highest rate of
convergence to the final solution is obtained with the Newton-Raphson method.
However, the initial condition must be close to the solution to achieve
convergence. Hence, this method is generally used when an initial approximation
of the solution can be estimated. To illustrate the use of this method, an example
of a function F : R³ → R³ is defined as follows:

\[
F_1 = x^2 - y^3 - 2, \qquad
F_2 = 3\,x\,y - z, \qquad
F_3 = z^2 - x.
\]
function F(xv)
real, intent(in) :: xv(:)
real:: F(size(xv))
real :: x, y, z
end function
The subroutine Newton gives the solution by means of an iterative method from
the initial approximation x0. The solution is given in the same variable x0.
subroutine Newton_Solution
call Newton( F, x0 )
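The library subroutine Newton is used above as a black box. As an illustration only, the Newton-Raphson update x_{n+1} = x_n − F(x_n)/F′(x_n) can be sketched in its simplest scalar form. This standalone program is not the library routine, and the equation x² − 2 = 0 is chosen just for the example:

```fortran
program newton_scalar_sketch
   implicit none
   real :: x, fx
   integer :: k
   x = 2.0                       ! initial guess, close enough to the root
   do k = 1, 20
      fx = x**2 - 2              ! F(x); the root is sqrt(2)
      x  = x - fx / (2*x)        ! Newton-Raphson update: x - F(x)/F'(x)
      if (abs(fx) < 1e-6) exit   ! stop when the residual is small
   end do
   print *, "computed root =", x, "  sqrt(2) =", sqrt(2.0)
   if (abs(x - sqrt(2.0)) > 1e-5) error stop "Newton did not converge"
end program
```

Note the quadratic convergence: starting from x = 2, only four updates are needed to reach the root to single precision.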
1.4 Implicit and explicit equations

Sometimes, the equations that govern a problem comprise both explicit and
implicit equations. For example, in the system

\[
F_1 = x^2 - y^3 - 2, \qquad
F_2 = 3\,x\,y - z, \qquad
F_3 = z^2 - x,
\]

the third equation F₃ = 0 can be rewritten as the explicit relation

\[
x = z^2,
\]

leaving only two implicit equations:

\[
F_1 = x^2 - y^3 - 2, \qquad
F_2 = 3\,x\,y - z.
\]

To solve this kind of problem, the subroutine Newtonc has been implemented.
This subroutine takes into account that some equations are zero for all values of
the unknown x. In this case, the function F(x) should provide explicit relationships
for the components of x. An example of this kind of problem, together with an
initial approximation of the solution, is shown in the following code:
subroutine Implicit_explicit_equations
real :: x(3)
write(*,*) 'Zeros of F(x) by Newton method '
write(*,*) 'F(1) = x**2 - y**3 - 2'
write(*,*) 'F(2) = 3 * x * y - z'
write(*,*) 'F(3) = z**2 - x'
x = 1
call Newtonc( F = F1 , x0 = x )
write(*,'(A35 , 100f8.3)') 'Three implicit equations , x = ', x
x = 1
call Newtonc( F = F2 , x0 = x )
write(*,'(A35 , 100f8.3)') 'Two implicit + one explicit , x = ', x
write(*,*) "press enter "; read(*,*)
contains
function F1(xv) result(F)
   real, target :: xv(:)
   real :: F(size(xv))
   real, pointer :: x, y, z
   x => xv(1); y => xv(2); z => xv(3)
   F(1) = x**2 - y**3 - 2
   F(2) = 3 * x * y - z
   F(3) = z**2 - x
end function
function F2(xv) result(F)
   real, target :: xv(:)
   real :: F(size(xv))
   real, pointer :: x, y, z
   x => xv(1); y => xv(2); z => xv(3)
   x = z**2          ! explicit equation
   F(1) = 0          ! holds for all xv
   F(2) = x**2 - y**3 - 2
   F(3) = 3 * x * y - z
end function
end subroutine
Listing 1.5: API_Example_Systems_of_Equations.f90
1.5 Power method example

The determination of the eigenvalues of a square matrix is very valuable and also
very challenging. If the matrix is symmetric, all eigenvalues are real and the
eigenvalue with the largest modulus can be obtained easily by the power method,
an iterative method that determines the dominant eigenvalue and its associated
eigenvector.
subroutine Test_Power_Method
integer, parameter :: N = 3
real :: A(N, N), lambda , U(N)=1
A(1,:) = [ 7, 4, 1 ]
A(2,:) = [ 4, 4, 4 ]
A(3,:) = [ 1, 4, 7 ]
λ = 12.00
Once the largest eigenvalue is obtained, it can be removed from the matrix A as
well as its associated subspace. The new matrix A is obtained with the following
expression
\[
A_{ij} \rightarrow A_{ij} - \lambda_k \, U_{ik}\, U_{jk}.
\]
Following this procedure, the next largest eigenvalue is obtained. This is done in
the subroutine eigenvalues_PM.
An example of this procedure is shown in the following code where the eigen-
values of the same matrix A are calculated. The subroutine eigenvalues_PM calls
the subroutine power_method and removes the calculated eigenvalues.
subroutine Test_eigenvalues_PM
integer, parameter :: N = 3
real :: A(N, N), lambda(N), U(N, N)
integer :: i
A(1,:) = [ 7, 4, 1 ]
A(2,:) = [ 4, 4, 4 ]
A(3,:) = [ 1, 4, 7 ]
do i=1, N
write(*,'(A8 , f8.3, A15 , 3f8.3)') &
"lambda = ", lambda(i), "eigenvector = ", U(:,i)
end do
end subroutine
Notice that, since λ₃ is null, the twice-deflated matrix vanishes and the computed
third eigenvector U₃ coincides with the first eigenvector U₁.
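The whole procedure — power iteration, Rayleigh quotient, and one deflation step — can be sketched in a few lines for the same 3 × 3 matrix. This is a standalone illustration, not the library's power_method or eigenvalues_PM subroutines, and the restart vector after deflation is an arbitrary choice:

```fortran
program power_method_sketch
   implicit none
   real :: A(3,3), U(3), lambda1, lambda2
   integer :: k, i, j
   A(1,:) = [7., 4., 1.]
   A(2,:) = [4., 4., 4.]
   A(3,:) = [1., 4., 7.]
   U = 1.0
   do k = 1, 50                   ! power iteration towards the dominant eigenvector
      U = matmul(A, U)
      U = U / norm2(U)
   end do
   lambda1 = dot_product(U, matmul(A, U))     ! Rayleigh quotient
   do i = 1, 3                    ! deflation: A_ij -> A_ij - lambda1 U_i U_j
      do j = 1, 3
         A(i,j) = A(i,j) - lambda1 * U(i) * U(j)
      end do
   end do
   U = [1., 0., 0.]               ! restart with a fresh vector
   do k = 1, 50
      U = matmul(A, U)
      U = U / norm2(U)
   end do
   lambda2 = dot_product(U, matmul(A, U))
   print *, "lambda1 =", lambda1, "  lambda2 =", lambda2
   if (abs(lambda1 - 12.0) > 1e-3 .or. abs(lambda2 - 6.0) > 1e-3) error stop
end program
```

For this symmetric matrix the eigenvalues are 12, 6 and 0, so the sketch recovers λ₁ = 12 and, after deflation, λ₂ = 6.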
1.6 Condition number example
subroutine Vandermonde_condition_number
integer, parameter :: N = 10
real :: A(N, N), kappa
integer :: i, j
do i=1, N; do j=1, N;
A(i,j) = (i/real(N))**j
end do; end do
kappa = Condition_number(A)
Once this code is executed, the result given for the condition number is:
\[
\kappa(A) = 0.109 \times 10^{9},
\]
which indicates that the Vandermonde matrix is ill-conditioned. When solving a
linear system of equations where the system matrix A is the Vandermonde matrix,
a small error in the independent term b will be amplified by the condition number
giving rise to large errors in the solution x.
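This amplification can be quantified with the standard worst-case bound relating perturbations of the independent term b to perturbations of the solution:

```latex
\frac{\lVert \delta x \rVert}{\lVert x \rVert}
\;\le\;
\kappa(A)\, \frac{\lVert \delta b \rVert}{\lVert b \rVert}
\;\approx\; 0.109 \times 10^{9}\, \frac{\lVert \delta b \rVert}{\lVert b \rVert}.
```

For instance, a single-precision round-off perturbation ‖δb‖/‖b‖ ≈ 10⁻⁷ may be amplified, in the worst case, to a relative error of order 10 in x.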
Chapter 2
Lagrange interpolation
2.1 Overview
subroutine Lagrange_Interpolation_examples
call Interpolated_value_example
call Interpolant_example
call Integral_example
call Lagrange_polynomial_example
call Ill_posed_interpolation_example
call Lebesgue_and_PI_functions
call Chebyshev_polynomials
call Interpolant_versus_Chebyshev
end subroutine
All functions and subroutines used in this chapter are gathered in a Fortran
module called: Interpolation. To make use of these functions the statement:
use Interpolation should be included at the beginning of the program.
2.2 Interpolated value

subroutine Interpolated_value_example
integer, parameter :: N = 6
real :: x(N) = [ 0.0, 0.1, 0.2, 0.5, 0.6, 0.7 ]
real :: f(N) = [ 0.3, 0.5, 0.8, 0.2, 0.3, 0.6 ]
real :: xp = 0.15
real :: yp(N-1)
integer :: i
do i=2, N-1
   yp(i) = Interpolated_value( x, f, xp, i )
   write(*,'(A10, i4, A40, f10.3)') 'Order = ', i, &
        'The interpolated value at xp is = ', yp(i)
end do
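The library function Interpolated_value additionally selects a local stencil of the requested order around xp; that machinery is not reproduced here. A bare-bones standalone sketch evaluating the single degree-5 interpolant through all six points illustrates the underlying formula I_N(xp) = Σ_j f_j ℓ_j(xp):

```fortran
program interpolated_value_sketch
   implicit none
   real :: x(6) = [0.0, 0.1, 0.2, 0.5, 0.6, 0.7]
   real :: f(6) = [0.3, 0.5, 0.8, 0.2, 0.3, 0.6]
   print *, "I_N(0.15) =", interpolant(0.15)
   ! at a nodal point the interpolant must reproduce the nodal value
   if (abs(interpolant(0.2) - 0.8) > 1e-6) error stop
contains
   real function interpolant(xp)      ! I_N(xp) = sum_j f_j * l_j(xp)
      real, intent(in) :: xp
      real :: lj
      integer :: i, j
      interpolant = 0.0
      do j = 1, 6
         lj = 1.0                     ! Lagrange basis polynomial l_j(xp)
         do i = 1, 6
            if (i /= j) lj = lj * (xp - x(i)) / (x(j) - x(i))
         end do
         interpolant = interpolant + f(j) * lj
      end do
   end function
end program
```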
2.3 Interpolant and its derivatives

In this section, an interpolant and its derivatives are built from the nodal values

\[
x = \{\, x_i \;|\; i = 0, \ldots, N \,\}, \qquad
f = \{\, f_i \;|\; i = 0, \ldots, N \,\}.
\]

The interpolant and its derivatives are evaluated in the following set of
equispaced points:

\[
\{\, xp_i = a + (b - a)\, i / M, \quad i = 0, \ldots, M \,\}.
\]

Note that in the following example the nodal index runs to N = 3 (four nodal
points) and the number of points where the interpolant and its derivatives are
evaluated is M = 400.
subroutine Interpolant_example
a = x(0); b = x(N)
xp = [ (a + (b-a)*i/M, i=0, M) ]
end subroutine
The third argument of the function Interpolant is the order of the polynomial,
which should be less than or equal to N. The fourth argument is the set of points
where the interpolant is evaluated. The function returns a matrix I_N containing
the interpolation values and their derivatives at xp: the first index holds the
order of the derivative and the second index holds the point xp_i. In figure 2.1,
the interpolant and its first derivative are plotted.
Figure 2.1: Lagrange interpolation with 4 nodal points. (a) Interpolant function. (b)
First derivative of the interpolant.
2.4 Integral of a function

In this section, definite integrals are considered. Let's give the following example:

\[
I_0 = \int_0^1 \sin(\pi x)\, dx.
\]
To carry out the integral, an interpolant is built and the required value is then
obtained by integrating this interpolant. The interpolant can be a piecewise
polynomial interpolation of order q < N or a unique interpolant of order
q = N. The function Integral has three arguments: the nodal points x,
the images of the function f and the order of the interpolant q. In the following
example, N = 6 equispaced nodal points are considered and the integral is carried
out with an interpolant of order q = 4. The result is compared with the exact
value to determine the error of this numerical integration.
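For reference, the exact value used in the error line of the code below follows from a direct antiderivative:

```latex
I_0 = \int_0^1 \sin(\pi x)\, dx
    = \left[ -\frac{\cos(\pi x)}{\pi} \right]_0^1
    = \frac{1 - \cos \pi}{\pi}
    = \frac{2}{\pi} \approx 0.63662 .
```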
subroutine Integral_example
integer, parameter :: N=6
real :: x(0:N), f(0:N), a = 0, b = 1, I0
integer :: i
x = [ (a + (b-a)*i/N, i=0, N) ]
f = sin ( PI * x )
I0 = Integral( x, f, 4 )
write(*, *) "The integral [0,1] of sin( PI x ) is: ", I0
write(*, *) "Error = ", ( 1 -cos(PI) )/PI - I0
write(*, *) "press enter "; read(*,*)
end subroutine
Listing 2.4: API_Example_Lagrange_interpolation.f90
2.5 Lagrange polynomials

The Lagrange interpolant is written as

\[
I_N(x) = \sum_{j=0}^{N} f_j \, \ell_j(x),
\]

where f_j stands for the image of some function f(x) at the N + 1 nodal points x_j,
and ℓ_j(x) is a Lagrange polynomial of degree N that is zero at all nodes except
x_j, where it is one. Besides, the sensitivity to round-off error is measured by the
Lebesgue function and its derivatives, defined by:

\[
\lambda_N(x) = \sum_{j=0}^{N} |\ell_j(x)|, \qquad
\lambda_N^{(k)}(x) = \sum_{j=0}^{N} |\ell_j^{(k)}(x)|.
\]
subroutine Lagrange_polynomial_example
integer, parameter :: N=4, M=400
real :: x(0:N), xp(0:M), a=-1, b=1
real :: Lg(-1:N, 0:N, 0:M) ! Lagrange polynomials
! -1:N (integral , lagrange , derivatives)
! 0:N ( L_0(x), L_1(x) ,... L_N(x) )
! 0:M ( points where L_j(x) is evaluated )
real :: Lebesgue_N (-1:N, 0:M)
character(len=2) :: legends (0:N) = [ "l0", "l1", "l2", "l3","l4" ]
integer :: i
x = [ (a + (b-a)*i/N, i=0, N) ]
xp = [ (a + (b-a)*i/M, i=0, M) ]
In figure 2.2, the Lagrange polynomials and the Lebesgue function are shown.
It is observed in figure 2.2a that ℓ_j equals 1 at x = x_j and 0 at x = x_i with j ≠ i.
In figure 2.2b, the Lebesgue function together with its derivatives is presented.
Figure 2.2: Lagrange polynomials `j (x) with N = 4 and Lebesgue function λ(x) and its
derivatives. (a) Lagrange polynomials `j (x) for j = 0, 1, 2, 3, 4. (b) Lebesgue function
and its derivatives λ(k) (x) for k = 0, 1, 2, 3.
2.6 Ill-posed interpolants

When considering equispaced grid points, the Lagrange interpolation becomes ill-
posed, which means that a small perturbation, such as machine round-off error,
yields big errors in the interpolation result. In this section, an interpolation
example for the inoffensive f(x) = sin(πx) is analyzed to show that the interpolant
can have noticeable errors at the boundaries.

The error is defined as the difference between the function and the polynomial
interpolation

\[
f(x) - I_N(x) = R_N(x) + R_L(x),
\]

where R_N(x) is the truncation error and R_L(x) is the round-off error. Since
round-off error is present whenever the computer calculates any value, a polyno-
mial interpolation I_N(x) of degree N can be expressed in terms of the Lagrange
polynomials ℓ_j(x) in the following way:

\[
I_N(x) = \sum_{j=0}^{N} (f_j + \epsilon_j)\, \ell_j(x),
\]

where ε_j can be considered as the round-off error of the image f(x_j). Note that
when working in double precision this ε_j is of order ε = 10⁻¹⁵. Hence, the error
of the interpolant has two components: the first one, R_N(x), associated to the
truncation degree of the polynomial, and the second one, R_L(x), associated to
round-off errors. The round-off component can be expressed by the following
equation:

\[
R_L(x) = \sum_{j=0}^{N} \epsilon_j \, \ell_j(x).
\]

Although the exact values of the round-off errors ε_j are not known, they can all
be bounded by ε. This allows the round-off error to be bounded by the following
expression:

\[
|R_L(x)| \le \epsilon \sum_{j=0}^{N} |\ell_j(x)|,
\]
which naturally introduces the Lebesgue function λ_N(x). If the Lebesgue function
reaches values of order 10¹⁵, the round-off error becomes of order unity.
subroutine Ill_posed_interpolation_example
end subroutine
Figure 2.3: Interpolation for an equispaced grid with N = 64. (a) Ill posed interpolation
IN (x), (b) Lebesgue function λN (x).
2.7 Lebesgue function and error function

As was mentioned in the preceding section, the interpolation error has two main
contributions: the round-off error and the truncation error. In this section, a
comparison of these two contributions is presented. It can be shown that the
truncation error has the following expression:

\[
R_N(x) = \pi_{N+1}(x)\, \frac{f^{(N+1)}(\xi)}{(N+1)!},
\]

where π_{N+1}(x) = ∏_{j=0}^{N} (x − x_j) is the error polynomial and ξ is some
point of the interpolation interval.
In this section, the Lebesgue function λ_N(x) and the error function π_{N+1}(x),
together with their derivatives, are plotted to show the origin of the interpolation
error.

In the following code, the π error function as well as the Lebesgue function
λ_N(x) are calculated for N = 10 interpolation points. A grid of M = 700 points
is used to plot the results. In figure 2.4, the π error function is compared with the
Lebesgue function λ_N(x). Both functions show maximum values near the
boundaries, making clear that the error will become more important there. It is
also observed that the Lebesgue values are greater than those of the π error
function. However, this does not mean that the round-off error is greater than the
truncation error, because the truncation error depends on the regularity of f(x),
whereas the round-off error depends on the finite precision ε.
subroutine Lebesgue_and_PI_functions
Lebesgue_N = Lebesgue_functions( x, xp )
PI_N = PI_error_polynomial( x, xp )
end subroutine
Listing 2.7: API_Example_Lagrange_interpolation.f90
What is also true is that the maximum value of the Lebesgue function grows
with N, while the maximum value of the π error function goes to zero as N → ∞.
Hence, for N large enough, the round-off error exceeds the truncation error.
Figure 2.4: Error function πN +1 (x) and Lebesgue function λN (x) for N = 10. (a) Function
πN +1 (x). (b) Lebesgue function λN (x)
In figures 2.5 and 2.6, first and second derivatives of the π error function and the
Lebesgue function are shown and compared. It is observed that the first and second
derivatives of the Lebesgue function grow exponentially, making the round-off
error more relevant for the first and second derivatives of the interpolant. On the
contrary, the derivatives of the π error function decrease with the order of the
derivative.
2.8 Chebyshev polynomials

A function f(x) defined in [−1, 1] can be approximated by the truncated expansion

\[
f(x) \simeq \sum_{k=0}^{N} \hat{c}_k \, T_k(x),
\]

where T_k(x) are the Chebyshev polynomials and ĉ_k are the projections of f(x)
onto the Chebyshev basis. Among orthogonal bases, the Chebyshev polynomials
are especially noteworthy. The first kind T_k(x) and the second kind U_k(x) of
Chebyshev polynomials are defined by:

\[
T_k(x) = \cos(k\theta), \qquad
U_k(x) = \frac{\sin(k\theta)}{\sin\theta}, \qquad \text{with } \cos\theta = x.
\]

In the following code, Chebyshev polynomials of order k of the first kind and
second kind are calculated for different values of x.
subroutine Chebyshev_polynomials
Figure 2.7: First kind and second kind Chebyshev polynomials. (a) First kind Chebyshev
polynomials Tk (x). (b) Second kind Chebyshev polynomials Uk (x).
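A compact way to evaluate T_k(x), equivalent to the cos(kθ) definition above, is the standard three-term recurrence T_{k+1}(x) = 2x T_k(x) − T_{k−1}(x). The following standalone sketch (not the library's Chebyshev_polynomials subroutine) computes T₅ at a sample point and checks it against cos(5 arccos x):

```fortran
program chebyshev_sketch
   implicit none
   real :: x, Tkm1, Tk, Tkp1
   integer :: k
   x = 0.3
   Tkm1 = 1.0        ! T_0(x) = 1
   Tk   = x          ! T_1(x) = x
   do k = 1, 4       ! T_{k+1}(x) = 2 x T_k(x) - T_{k-1}(x)
      Tkp1 = 2*x*Tk - Tkm1
      Tkm1 = Tk
      Tk   = Tkp1
   end do
   ! after the loop, Tk holds T_5(x); compare with the cos(k theta) definition
   print *, "T_5(0.3) =", Tk, "  cos(5 acos(0.3)) =", cos(5*acos(x))
   if (abs(Tk - cos(5*acos(x))) > 1e-5) error stop
end program
```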
2.9 Chebyshev expansion and Lagrange interpolant

As was shown in previous sections, when the interpolation points are equispaced,
the error grows at the boundaries no matter the regularity of the interpolated
function f(x). Hence, high order polynomial interpolation is prohibitive with
equispaced grid points. To cure this problem, concentrations of grid points near
the boundaries are usually proposed. One of the most important distributions of
points that cures this bad behavior near the boundaries is the set of Chebyshev
extrema:

\[
x_i = \cos\left( \frac{\pi i}{N} \right), \qquad i = 0, \ldots, N.
\]
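The effect of this choice of points can be checked numerically: sampling the Lebesgue function λ_N(x) on a fine grid for equispaced nodes and for the Chebyshev extrema shows a much smaller maximum for the latter. The following standalone sketch (a brute-force evaluation, independent of the library's Lebesgue_functions) illustrates it for N = 10:

```fortran
program lebesgue_comparison_sketch
   implicit none
   integer, parameter :: N = 10, M = 500
   real, parameter :: PI = 3.1415927
   real :: xe(0:N), xc(0:N)
   integer :: i
   xe = [( -1.0 + 2.0*i/N, i=0,N )]    ! equispaced nodes in [-1,1]
   xc = [( cos(PI*i/N), i=0,N )]       ! Chebyshev extrema
   print *, "max Lebesgue function, equispaced:", max_lebesgue(xe)
   print *, "max Lebesgue function, Chebyshev :", max_lebesgue(xc)
   if ( max_lebesgue(xc) >= max_lebesgue(xe) ) error stop
contains
   function max_lebesgue(x) result(lmax)
      real, intent(in) :: x(0:N)
      real :: lmax, xp, lam, lj
      integer :: i, j, k
      lmax = 0.0
      do k = 0, M                      ! sample points in [-1,1]
         xp = -1.0 + 2.0*k/M
         lam = 0.0
         do j = 0, N                   ! lambda_N(xp) = sum_j |l_j(xp)|
            lj = 1.0
            do i = 0, N
               if (i /= j) lj = lj * (xp - x(i)) / (x(j) - x(i))
            end do
            lam = lam + abs(lj)
         end do
         lmax = max(lmax, lam)
      end do
   end function
end program
```

For N = 10 the sampled maximum is of order 30 for the equispaced nodes and below 3 for the Chebyshev extrema, consistent with figure 2.2.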
subroutine Interpolant_versus_Chebyshev
integer :: i, k
real :: c_k , a=-1, b = 1, gamma
In figure 2.8a, the truncated Chebyshev expansion P_N is plotted together with
the polynomial interpolation I_N, with no appreciable difference between them.
This is verified in figure 2.8b, where the errors of these two approximations
are shown. It can be demonstrated that, when choosing some specific nodal or
grid points and if the function to be approximated is regular enough, the differ-
ence between the truncated Chebyshev expansion and the polynomial interpolation
becomes very small. This difference is called the aliasing error.
Figure 2.8: Chebyshev discrete expansion and truncated series for N = 6. (a) Chebyshev
expansion. (b) Chebyshev expansion error.
Chapter 3
Finite Differences
3.1 Overview
From the numerical point of view, derivatives of some function f(x) are always
obtained by building an interpolant, differentiating the interpolant analytically
and later particularizing the result at some point. When piecewise polynomial
interpolation is considered, finite difference formulas are built to calculate
derivatives.
subroutine Finite_difference_examples
call Derivative_function_x
call Derivative_function_xy
call Derivative_error
end subroutine
All functions and subroutines used in this chapter are gathered in a Fortran
module called: Finite_differences. To make use of these functions the state-
ment: use Finite_differences should be included at the beginning of the pro-
gram.
3.2 Derivatives of a 1D function

Once the interpolant I_N(x) is built, its k-th derivative is obtained by
differentiating the Lagrange polynomials:

\[
\frac{d^k I_N}{dx^k}(x) = \sum_{j=0}^{N} f_j \, \frac{d^k \ell_j}{dx^k}(x),
\]

which, evaluated at the nodal points, yields

\[
\frac{d^k I_N}{dx^k}(x_i) = \sum_{j=0}^{N} f_j \, \frac{d^k \ell_j}{dx^k}(x_i).
\]

These derivatives of the Lagrange polynomials at the nodal points, ℓ_j^{(k)}(x_i),
act as weights and are used later by the subroutine Derivative. This subroutine
multiplies the weights by the function values u(x_j) to yield the required derivative
at the desired nodal point. The values I_N^{(k)}(x_i) are stored in a matrix uxk of
two indexes: the first index stands for the grid point x_i and the second one for
the order of the derivative k.
When a piecewise polynomial of degree 4 is used, the derivative at an inner grid
point x_i, not close to the boundaries, involves the five points
{x_{i−2}, x_{i−1}, x_i, x_{i+1}, x_{i+2}}, and the first and second derivatives
give rise to:

\[
\frac{du}{dx}(x_i) = \sum_{j=i-2}^{i+2} u(x_j)\, \ell_j^{(1)}(x_i), \qquad
\frac{d^2u}{dx^2}(x_i) = \sum_{j=i-2}^{i+2} u(x_j)\, \ell_j^{(2)}(x_i).
\]
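On a uniform grid with spacing h, these Lagrange derivative weights for the first derivative reduce to the classical fourth-order central formula du/dx ≈ (u_{i−2} − 8u_{i−1} + 8u_{i+1} − u_{i+2})/(12h). A minimal standalone check for u(x) = sin(πx) (an illustration only, not the library's Derivative subroutine):

```fortran
program five_point_derivative_sketch
   implicit none
   integer, parameter :: N = 40
   real, parameter :: PI = 3.1415927
   real :: h, xi, du, exact
   integer :: i
   h  = 2.0 / N               ! uniform grid on [-1,1]
   i  = N/2                   ! inner point x_i = 0
   xi = -1.0 + i*h
   ! du/dx ~ ( u_{i-2} - 8 u_{i-1} + 8 u_{i+1} - u_{i+2} ) / (12 h)
   du = ( u(xi-2*h) - 8*u(xi-h) + 8*u(xi+h) - u(xi+2*h) ) / (12*h)
   exact = PI * cos(PI*xi)
   print *, "du/dx =", du, "  exact =", exact, "  error =", abs(du - exact)
   if (abs(du - exact) > 1e-3) error stop "unexpectedly large error"
contains
   real function u(x)
      real, intent(in) :: x
      u = sin(PI*x)
   end function
end program
```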
The first argument of the subroutine Derivative is the direction "x" of the deriva-
tive, the second argument is the order of the derivative, the third one is the vector
of images of u(x) at the nodal points, and the fourth argument is the derivative
evaluated at the nodal points x_i.
Additionally, the errors of the first and second derivatives are calculated by
subtracting the approximated value from the exact value of the derivative:

\[
E_1 = \frac{dI}{dx}(x_i) - \pi \cos(\pi x_i), \qquad
E_2 = \frac{d^2 I}{dx^2}(x_i) + \pi^2 \sin(\pi x_i).
\]
subroutine Derivative_function_x
u = sin(pi * x)
end subroutine
Listing 3.2: API_Example_Finite_Differences.f90
In figure 3.1, the first and second derivatives of u(x) = sin(πx) are plotted
together with their numerical error.
Figure 3.1: First and second derivatives of u(x) = sin(πx) by means of finite
difference formulas, and their associated errors, using piecewise polynomial
interpolation of degree 4 and 21 nodal points. (a) First numerical derivative
du/dx. (b) Error in the calculation of du/dx. (c) Second numerical derivative
d²u/dx². (d) Error in the calculation of d²u/dx².
3.3 Derivatives of a 2D function

In this section, a two-dimensional space is considered and partial derivatives are
calculated for a given function u(x, y):
subroutine Derivative_function_xy
end subroutine
3.4 Truncation and round-off errors of derivatives

In order to analyze the effect of the truncation and round-off errors produced
when approximating a derivative by finite differences, the following example is
considered. As was shown in the Lagrange interpolation chapter, two error
contributions are always present when interpolating some function: the truncation
error and the round-off error. While the truncation error is reduced when
decreasing the step size ∆x between grid points, the round-off error is increased
when reducing ∆x. This is due to the growth of the Lebesgue function when
reducing ∆x.
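This trade-off is easy to reproduce with the simplest second-order central formula for the second derivative. The following standalone double-precision sketch, independent of the library code, perturbs the samples of f(x) = cos(πx) with an artificial noise of amplitude 10⁻¹², as the example below does:

```fortran
program truncation_vs_roundoff_sketch
   use iso_fortran_env, only: dp => real64
   implicit none
   real(dp), parameter :: noise = 1.0e-12_dp   ! artificial perturbation level
   real(dp) :: PI, h, fm, f0, fp, r(3), E(6)
   integer :: k
   PI = acos(-1.0_dp)
   do k = 1, 6
      h = 10.0_dp**(-k)
      call random_number(r)
      ! samples of f(x) = cos(pi x) around x = -1, perturbed by O(noise)
      fm = cos(PI*(-1.0_dp - h)) + noise*r(1)
      f0 = cos(PI*(-1.0_dp))     + noise*r(2)
      fp = cos(PI*(-1.0_dp + h)) + noise*r(3)
      ! second-order central formula; the exact value of f''(-1) is pi**2
      E(k) = abs( (fm - 2*f0 + fp)/h**2 - PI**2 )
      print '(A, ES10.2, A, ES10.2)', " Dx =", h, "   E =", E(k)
   end do
   ! while truncation dominates, reducing Dx by 10 reduces E by about 100
   if ( E(3) >= E(2) ) error stop "expected truncation-dominated decrease"
end program
```

For the largest step sizes the error decreases as ∆x², but below ∆x ≈ 10⁻⁴ the noise term, amplified by 1/∆x², takes over and the error grows again.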
In the following code, the second derivative of f(x) = cos(πx) is calculated with
piecewise polynomial interpolations of degree q = 2, 4, 6, 8. The calculated value
is compared with the exact value at some point x_p and the resulting error is
determined by:

\[
E(x_p) = \left| \frac{d^2 I}{dx^2}(x_p) + \pi^2 \cos(\pi x_p) \right|.
\]
Since the grid spacing initialized in Grid_initialization is uniform, the step
size is ∆x = 2/N. There are two loops in the code: the first one runs over the
degrees q of the piecewise polynomial, and the second one varies the number of
grid points N and, consequently, the step size ∆x. To measure the effect of
machine precision or of errors associated to measurements, the function is
perturbed by means of the subroutine random_number with an amplitude of order
ε = 10⁻¹², giving rise to the following perturbed values:

\[
\tilde{f}_j = \cos(\pi x_j) + \epsilon \, r_j, \qquad r_j \in [0, 1).
\]

The resulting error E(x_p) is evaluated at the boundary x_p = −1 for each step
size ∆x.
In figure 3.3 and figure 3.4, the error E versus the step size ∆x is plotted in
logarithmic scale. As expected, when the step size ∆x is reduced, the error
decreases. However, when derivatives are calculated with smaller step sizes,
the round-off error becomes of the same order of magnitude as the truncation error
and the global error stops decreasing. When the step size is reduced even more,
the round-off error becomes dominant and the global error grows again. This
behavior is observed in figure 3.3 and figure 3.4, which display the error at x = −1
and x = 0 respectively. As the degree of the interpolation grows, the step size ∆x
at which the minimum error is attained grows too, indicating that the round-off
error starts being relevant for larger step sizes.
subroutine Derivative_error
   do q=2, Nq, 2
      l = l + 1
      do j=1, M
         logN = 1 + 3.*(j-1)/(M-1)
         N = 2*int(10**logN)
         call random_number(f)
         f = cos( PI * x ) + epsilon * f
         dfdx = - PI**2 * cos( PI * x )
      end do
   end do
end subroutine
Figure 3.3: Numerical error of a second order derivative versus the step size ∆x at x = −1.
(a) Piecewise polynomials of degree q = 2. (b) q = 4. (c) q = 6. (d) q = 8.
Figure 3.4: Numerical error of a second order derivative versus the step size ∆x at x = 0.
(a) Piecewise polynomials of degree q = 2. (b) q = 4. (c) q = 6. (d) q = 8.
Chapter 4
Cauchy Problem
4.1 Overview
All functions and subroutines used in this chapter are gathered in a Fortran
module called: Cauchy_problem. To make use of these functions the statement:
use Cauchy_problem should be included at the beginning of the program.
subroutine Cauchy_problem_examples
call First_order_ODE
call Linear_Spring
call Lorenz_Attractor
call Stability_regions_RK2_RK4
call Error_solution
call Convergence_rate_RK2_RK4
end subroutine
4.2 First order ODE

The first example considered is the linear equation

\[
\frac{du}{dt} = -2\,u(t),
\]

with the initial condition u(0) = 1. This Cauchy problem could describe the
velocity along time of a point mass subjected to viscous damping. The problem
has the following analytical solution:

\[
u(t) = e^{-2t}.
\]
The implementation of the problem requires the definition of the differential oper-
ator f (U , t) as a function.
F(1) = -2*U(1)
end function
subroutine First_order_ODE
real :: t0 = 0, tf = 4
integer :: i
integer, parameter :: N = 1000 !Time steps
real :: Time (0:N), U(0:N,1)
contains
Figure 4.1: Numerical solution and error on the computation of the first order Cauchy
problem. (a) Numerical solution of the first order Cauchy problem. (b) Error of the
solution along time.
The numerical solution obtained using this code can be seen in figure 4.1, where
the qualitative behavior of the solution u(t) matches the one described by the
analytical solution. However, the quantitative behavior is not exactly equal, as
the numerical solution is approximate. In figure 4.1(b) it can be seen that the
numerical solution tends to zero more slowly than the analytical one.
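The temporal scheme itself can be sketched independently of the library: a classical fourth-order Runge-Kutta integration of du/dt = −2u reproduces the analytical solution u(t) = e⁻²ᵗ to high accuracy. This is a standalone illustration, not the library's Cauchy problem solver:

```fortran
program first_order_rk4_sketch
   implicit none
   integer, parameter :: N = 1000       ! time steps, as in the example above
   real :: u, t, dt, k1, k2, k3, k4
   integer :: i
   dt = 4.0 / N                         ! integrate on [0, 4]
   u  = 1.0                             ! initial condition u(0) = 1
   t  = 0.0
   do i = 1, N                          ! classical RK4 stages
      k1 = f(t,        u           )
      k2 = f(t + dt/2, u + dt*k1/2 )
      k3 = f(t + dt/2, u + dt*k2/2 )
      k4 = f(t + dt,   u + dt*k3   )
      u  = u + dt*(k1 + 2*k2 + 2*k3 + k4)/6
      t  = t + dt
   end do
   print *, "u(4) =", u, "  exact =", exp(-8.0)
   if (abs(u - exp(-8.0)) > 1e-6) error stop "RK4 error too large"
contains
   real function f(t, u)                ! right-hand side: f(u, t) = -2 u
      real, intent(in) :: t, u
      f = -2*u
   end function
end program
```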
38 CHAPTER 4. CAUCHY PROBLEM
d²u/dt² + a t u(t) = 0.

First of all, the problem must be formulated as a first order system. This is done by means of the transformation:

u(t) = U1(t),    du/dt = U2(t),
which leads to the system:
dU1/dt = U2,    dU2/dt = −a t U1,

along with the initial conditions:

U1(0) = 5,    U2(0) = 0.
F(1) = U(2)
F(2) = -a * t * U(1)
end function
subroutine Linear_Spring
integer :: i
integer, parameter :: N = 100 !Time steps
real :: t0 = 0, tf = 4, Time (0:N), U(0:N, 2)
contains
10
4
2
U1 (t)
U2 (t)
0
0
−2
−10
−4
0 1 2 3 4 0 1 2 3 4
t t
(a) (b)
Figure 4.2: Numerical solution of the Linear spring movement. (a) Position along time.
(b) Velocity along time.
The numerical solution of the problem is shown in figure 4.2. The initial conditions for both U1 and U2 are satisfied, and the oscillatory behavior of the solution can be observed.
Another interesting example is the system of differential equations in which the strange Lorenz attractor was discovered. The Lorenz equations are a simplification of the Navier–Stokes fluid equations used to describe the evolution of the weather. The behavior of the solution is chaotic for certain values of the parameters involved in the equations. The equations read:
dx/dt = a (y − x),
dy/dt = x (b − z) − y,
dz/dt = x y − c z,

along with the initial conditions:

x(0) = 12,    y(0) = 15,    z(0) = 30.
real :: x, y , z
F(1) = a * ( y - x )
F(2) = x * ( b - z ) - y
F(3) = x * y - c * z
end function
The previous function will be used as an input argument for the subroutine
that solves the Cauchy Problem. In this case, a fourth order Runge–Kutta scheme
is used to integrate the problem.
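The same integration can be sketched outside the library. The following illustrative Python code (not part of the book's Fortran library) applies the classical fourth order Runge–Kutta scheme to the Lorenz system with the parameters and initial condition of the example; the trajectory never repeats, yet it stays on a bounded attractor:

```python
def lorenz(u, a=10.0, b=28.0, c=8.0 / 3.0):
    """Right-hand side of the Lorenz equations."""
    x, y, z = u
    return (a * (y - x), x * (b - z) - y, x * y - c * z)

def rk4_step(f, u, dt):
    """Classical fourth order Runge-Kutta step for an autonomous system."""
    axpy = lambda p, k, s: tuple(pi + s * ki for pi, ki in zip(p, k))
    k1 = f(u)
    k2 = f(axpy(u, k1, dt / 2))
    k3 = f(axpy(u, k2, dt / 2))
    k4 = f(axpy(u, k3, dt))
    return tuple(ui + dt / 6 * (a1 + 2 * a2 + 2 * a3 + a4)
                 for ui, a1, a2, a3, a4 in zip(u, k1, k2, k3, k4))

u = (12.0, 15.0, 30.0)            # initial condition of the example
N, dt = 10000, 25.0 / 10000       # t in [0, 25], as in Lorenz_Attractor
trajectory = [u]
for _ in range(N):
    u = rk4_step(lorenz, u, dt)
    trajectory.append(u)
# Chaotic but bounded: the points fill the butterfly-shaped attractor.
print(len(trajectory), max(abs(x) for x, _, _ in trajectory))
```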
subroutine Lorenz_Attractor
integer, parameter :: N = 10000
real :: Time (0:N), U(0:N,3)
real :: a=10., b=28., c=2.6666666666
real :: t0 =0, tf=25
integer :: i
Time = [ (t0 + (tf -t0 ) * i / (1d0 * N), i=0, N ) ]
contains
The chaotic behavior appears for the values a = 10, b = 28 and c = 8/3.
When solved for these values, the phase planes of (x(t), y(t)) and (x(t), z(t)) show
the famous shape of the Lorenz attractor. Both phase planes can be observed on
figure 4.3.
Figure 4.3: Solution of the Lorenz equations. (a) Phase plane (x, y) of the Lorenz attrac-
tor. (b) Phase plane (x, z) of the Lorenz attractor.
One of the capabilities of the library is to compute the region of absolute stability
of a given temporal scheme. In the following example, the stability regions of
second-order and fourth-order Runge-Kutta methods are determined.
do j=1, 2
if (j==1) then
call Absolute_Stability_Region (Runge_Kutta2 , x, y, Region)
else if (j==2) then
call Absolute_Stability_Region (Runge_Kutta4 , x, y, Region)
end if
call plot_contour(x, y, Region , "$\Re(z)$","$\Im(z)$", levels , &
legends(j), path(j), "isolines")
end do
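For an explicit Runge–Kutta scheme applied to the model equation u' = λu, the region of absolute stability is the set of z = λ∆t for which the stability polynomial satisfies |R(z)| ≤ 1. For the second and fourth order schemes shown here, R(z) = Σ z^k/k! truncated at k = q. A small illustrative check (Python, not part of the library):

```python
import math

def stability(z_re, z_im, q):
    """|R(z)| with R(z) = sum_{k<=q} z^k / k!, the RK stability polynomial."""
    z = complex(z_re, z_im)
    return abs(sum(z ** k / math.factorial(k) for k in range(q + 1)))

# On the negative real axis RK2 is stable for -2 <= z <= 0,
# while RK4 extends the interval to roughly -2.785 <= z <= 0:
print(stability(-2.0, 0.0, 2))    # 1.0, the boundary of the RK2 region
print(stability(-2.5, 0.0, 2))    # > 1: outside the RK2 region
print(stability(-2.5, 0.0, 4))    # < 1: still inside the RK4 region
```

Evaluating |R(z)| on a grid of the complex plane and drawing the isoline |R| = 1 produces exactly the contours plotted by Absolute_Stability_Region.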
Figure 4.4: Absolute stability regions. (a) Stability region of second order Runge-Kutta.
(b) Stability region of fourth order Runge-Kutta.
4.6 Richardson extrapolation to calculate error
The library also computes the error of the solution of a Cauchy problem by means of Richardson extrapolation. The subroutine Error_Cauchy_Problem internally uses two different step sizes, ∆t and ∆t/2, and estimates the error as:

E = ‖u1 − u2‖ / (1 − 1/2^q),

where E is the estimated error, u1 is the solution at the final time calculated with the given time step ∆t, u2 is the solution at the final time calculated with ∆t/2, and q is the order of the temporal scheme used for calculating both solutions.
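The estimate can be verified on a problem with a known solution. In the following illustrative Python sketch (the subroutine names above belong to the Fortran library; this is a stand-alone check), the explicit Euler scheme (q = 1) is applied to du/dt = −2u, and the Richardson estimate is compared with the true error of the ∆t solution:

```python
import math

def euler_solve(f, u0, t0, tf, n):
    """Explicit Euler (order q = 1) with n constant steps."""
    dt, u, t = (tf - t0) / n, u0, t0
    for _ in range(n):
        u, t = u + dt * f(u, t), t + dt
    return u

f = lambda u, t: -2.0 * u
q, n = 1, 200
u1 = euler_solve(f, 1.0, 0.0, 4.0, n)        # solution with step dt
u2 = euler_solve(f, 1.0, 0.0, 4.0, 2 * n)    # solution with step dt/2
E = abs(u1 - u2) / (1.0 - 0.5 ** q)          # Richardson error estimate
true_error = abs(u1 - math.exp(-8.0))        # exact error of the dt solution
print(E, true_error)                         # the two values nearly agree
```

The agreement follows from the error expansion: u1 − u ≈ C∆t^q and u2 − u ≈ C(∆t/2)^q, so u1 − u2 ≈ C∆t^q (1 − 1/2^q), and dividing by (1 − 1/2^q) recovers the leading error of u1.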
This example estimates the error of a Van der Pol oscillator integrated with a second order Runge–Kutta scheme.
Figure 4.5: Integration of the Van der Pol oscillator. (a) Van der Pol solution, (b) Error
of the solution.
In figure 4.5, the solution is plotted together with its error. Since the error varies significantly along time, a variable time step is required to keep the error under a given tolerance.
A temporal scheme is said to be of order q when its global error goes to zero as O(∆t^q) when ∆t → 0. This means that high order numerical methods allow larger time steps to reach a given error tolerance. The subroutine Temporal_convergence_rate determines the error of the numerical solution as a function of the number of time steps N. This subroutine internally integrates a sequence of refined time steps ∆t_i and determines the error by means of Richardson extrapolation.
In the following example, the convergence rates of a second order and a fourth order Runge–Kutta scheme applied to the Van der Pol oscillator are obtained.
Figure 4.6: Convergence rate of the second and fourth order Runge–Kutta schemes with the number of time steps. (a) Van der Pol solution. (b) Error versus number of time steps.
Figure 4.6a shows the Van der Pol solution. In figure 4.6b, the errors versus the number of time steps N are plotted in logarithmic scale. It can be observed that the fourth order Runge–Kutta has an approximate slope of 4, whereas the slope of the second order Runge–Kutta scheme is close to 2.
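The measured slopes can be reproduced in a few lines. The sketch below (illustrative Python, with u' = −u as an assumed test problem with known solution) estimates the observed order of a scheme by halving the time step and measuring how much the error shrinks:

```python
import math

def heun_step(f, u, t, dt):          # second order Runge-Kutta (q = 2)
    k1 = f(u, t)
    k2 = f(u + dt * k1, t + dt)
    return u + dt / 2 * (k1 + k2)

def rk4_step(f, u, t, dt):           # fourth order Runge-Kutta (q = 4)
    k1 = f(u, t)
    k2 = f(u + dt / 2 * k1, t + dt / 2)
    k3 = f(u + dt / 2 * k2, t + dt / 2)
    k4 = f(u + dt * k3, t + dt)
    return u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def final_error(step, n):
    """Error at t = 1 for u' = -u, u(0) = 1, integrated with n steps."""
    f = lambda u, t: -u
    dt, u, t = 1.0 / n, 1.0, 0.0
    for _ in range(n):
        u, t = step(f, u, t, dt), t + dt
    return abs(u - math.exp(-1.0))

slopes = {}
for step in (heun_step, rk4_step):
    e1, e2 = final_error(step, 100), final_error(step, 200)
    slopes[step.__name__] = math.log(e1 / e2) / math.log(2.0)
print(slopes)    # close to 2 for Heun and close to 4 for RK4
```

Doubling N divides an order q error by 2^q, which is exactly the slope read off the log E versus log N plot.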
4.8 Advanced high order numerical methods
When high precision is required, high order temporal schemes must be used. This is the case of orbit propagation in satellite missions, whose simulations require very small errors during the temporal integration. Generally, a numerical method is said to be of order q when its global error is O(∆t^q). This means that high order numerical methods allow larger time steps than low order schemes to achieve the same error and, consequently, require lower computational effort. The following subroutine shows the performance of some advanced high order methods when simulating orbit problems:
subroutine Advanced_Cauchy_problem_examples
integer :: option = 1
do while (option >0)
The Van der Pol oscillator is a non-conservative stable oscillator with applications in the physical and biological sciences. Its second order differential equation is:

ẍ − µ (1 − x²) ẋ + x = 0,

which is rewritten as the first order system:

ẋ = v,
v̇ = −x + µ (1 − x²) v.
x = U(1); v = U(2)
F = [ v, mu * (1 - x**2) * v - x ]
end function
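As an aside, the approach to the limit cycle can be checked numerically. In the illustrative Python sketch below (µ = 1 is an assumed value; the book's code leaves mu as a parameter), a trajectory started far from the cycle settles onto oscillations of amplitude close to 2:

```python
def van_der_pol(u, mu=1.0):
    """First order form of the Van der Pol equation (mu = 1 assumed)."""
    x, v = u
    return (v, mu * (1.0 - x * x) * v - x)

def rk4_step(f, u, dt):
    axpy = lambda p, k, s: tuple(pi + s * ki for pi, ki in zip(p, k))
    k1 = f(u)
    k2 = f(axpy(u, k1, dt / 2))
    k3 = f(axpy(u, k2, dt / 2))
    k4 = f(axpy(u, k3, dt))
    return tuple(pi + dt / 6 * (a + 2 * b + 2 * c + d)
                 for pi, a, b, c, d in zip(u, k1, k2, k3, k4))

u, dt = (3.0, 4.0), 0.01          # start far away from the limit cycle
xs = []
for i in range(6000):             # integrate up to t = 60
    u = rk4_step(van_der_pol, u, dt)
    if i >= 3000:                 # keep only t > 30: transient discarded
        xs.append(u[0])
amplitude = max(abs(x) for x in xs)
print(amplitude)                  # close to 2, the limit cycle amplitude
```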
Again, the function is used as an input argument for the subroutine that computes the solution of the Cauchy problem. In this case, advanced temporal methods are used, in particular embedded Runge–Kutta formulas. The methods used are "RK87" and "Fehlberg87", and they require an error tolerance, which is set to ε = 10⁻⁸. Both of them are selected by the subroutine set_solver.
Each method is given a different initial condition in order to illustrate the long time behavior of the solution, which tends asymptotically to a limit cycle: given sufficient time, the solution becomes periodic. This can be observed in figure 4.7(a), where the solution is obtained with the embedded Runge–Kutta scheme "RK87", and in figure 4.7(b), where it is integrated with the "Fehlberg87" scheme. Although both solutions tend to the same cycle, a difference in their phases can be observed in figure 4.7(b).
subroutine Van_der_Pol_oscillator
real :: t0 = 0, tf = 30
integer, parameter :: N = 350, Nv = 2
real :: Time (0:N), U(0:N, Nv , 2)
integer :: i
Time = [ (t0 + (tf -t0 ) * i / (1d0 * N), i=0, N ) ]
U(0,:,1) = [3, 4]
call set_solver("eRK", "RK87")
call set_tolerance (1d-8)
call Cauchy_ProblemS( Time , VanDerPol_equation , U(:,:,1) )
U(0,:,2) = [0, 1]
call set_solver("eRK", "Fehlberg87")
call set_tolerance (1d-8)
call Cauchy_ProblemS( Time , VanDerPol_equation , U(:,:,2) )
Figure 4.7: Solution of the Van der Pol oscillator. (a) Trajectory on the phase plane
(x, ẋ). (b) Evolution along time of x.
The non-linear motion of a star around a galactic center, with the motion restricted to a plane, can be modeled through the Henon–Heiles system:

ẋ = px,
ẏ = py,
ṗx = −x − 2λxy,
ṗy = −y − λ (x² − y²).
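These equations derive from the Hamiltonian H = (px² + py²)/2 + (x² + y²)/2 + λ(x²y − y³/3), so H must remain constant along any trajectory. The following illustrative Python sketch (λ = 1 and the initial condition are assumed values, chosen with energy below the escape threshold) integrates the system with a fourth order Runge–Kutta scheme and measures the energy drift:

```python
lam = 1.0                          # assumed value of the parameter lambda

def hh_rhs(u):
    """Right-hand side of the Henon-Heiles equations."""
    x, y, px, py = u
    return (px, py, -x - 2 * lam * x * y, -y - lam * (x * x - y * y))

def energy(u):
    x, y, px, py = u
    return (0.5 * (px * px + py * py) + 0.5 * (x * x + y * y)
            + lam * (x * x * y - y ** 3 / 3.0))

def rk4_step(f, u, dt):
    axpy = lambda p, k, s: tuple(pi + s * ki for pi, ki in zip(p, k))
    k1 = f(u)
    k2 = f(axpy(u, k1, dt / 2))
    k3 = f(axpy(u, k2, dt / 2))
    k4 = f(axpy(u, k3, dt))
    return tuple(pi + dt / 6 * (a + 2 * b + 2 * c + d)
                 for pi, a, b, c, d in zip(u, k1, k2, k3, k4))

u0 = (0.0, 0.1, 0.4, 0.0)          # assumed initial condition
u, dt = u0, 0.01
for _ in range(1000):              # integrate up to t = 10
    u = rk4_step(hh_rhs, u, dt)
drift = abs(energy(u) - energy(u0))
print(drift)                       # tiny: H is conserved up to scheme error
```

Monitoring H(t), as the array H(0:N) in the listing below suggests, is a standard quality check for Hamiltonian integrations.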
end function
Listing 4.14: API_Example_Cauchy_Problem.f90
subroutine Henon_Heiles_system
integer, parameter :: N = 1000, Nv = 4 , M = 1 !Time steps
real, parameter :: dt = 0.1
real :: t0 = 0, tf = dt * N
real :: Time (0:N), U(0:N, Nv), H(0:N)
integer :: i
Time = [ (t0 + (tf -t0 ) * i / (1d0 * N), i=0, N ) ]
end subroutine
Listing 4.15: API_Example_Cauchy_Problem.f90
Once the code is compiled and executed, the trajectories in the phase plane are
shown in figure 4.8.
Figure 4.8: Henon–Heiles system solution. (a) Trajectory of the star (x, y). (b) Projection (x, ẋ) of the solution in the phase plane. (c) Projection (y, ẏ) of the solution in the phase plane. (d) Projection (ẋ, ẏ) of the solution in the phase plane.
This simple Hamiltonian system can exhibit chaotic behavior for certain values of the initial conditions, which represent different values of the energy.
Generally, time-dependent problems evolve with different growth rates during their time span. This behavior motivates the use of variable time steps: marching faster when gradients are small and reducing the time step when large gradients appear. To adapt the time step automatically, methods must estimate the error in order to enlarge or reduce the time step until a specified tolerance is reached.
In the following code, the Van der Pol problem is solved with a variable time step using an embedded Heun–Euler method. In the first simulation the imposed tolerance is set to 10¹⁰; with such a large tolerance, the embedded Heun–Euler method never modifies the time step, because the tolerance is always met. The other simulation is carried out with a tolerance of 10⁻⁶. In this case, the embedded Heun–Euler method adapts the time step to reach this specific tolerance.
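The mechanism of an embedded pair can be sketched in a few lines: two solutions of different orders share the same stages, their difference estimates the local error, and a controller enlarges or reduces the step. This illustrative Python version of a Heun–Euler pair (a sketch, not the library's implementation) reproduces the two situations described above on the simpler problem du/dt = −2u:

```python
import math

def heun_euler_step(f, u, t, dt):
    """Embedded pair: explicit Euler (order 1) inside Heun (order 2)."""
    k1 = f(u, t)
    k2 = f(u + dt * k1, t + dt)
    u_low = u + dt * k1                   # first order solution
    u_high = u + dt / 2 * (k1 + k2)       # second order solution
    return u_high, abs(u_high - u_low)    # advance + local error estimate

def integrate(f, u, t, tf, tol, dt=0.1):
    """Adaptive integration: shrink or enlarge dt to meet the tolerance."""
    steps = 0
    while t < tf:
        dt = min(dt, tf - t)
        u_new, err = heun_euler_step(f, u, t, dt)
        if err <= tol:                    # accept the step
            u, t, steps = u_new, t + dt, steps + 1
        # elementary controller for an order 2 / order 1 embedded pair
        dt *= min(2.0, max(0.2, 0.9 * math.sqrt(tol / max(err, 1e-16))))
    return u, steps

f = lambda u, t: -2.0 * u
u_loose, n_loose = integrate(f, 1.0, 0.0, 4.0, tol=1e10)   # all accepted
u_tight, n_tight = integrate(f, 1.0, 0.0, 4.0, tol=1e-6)   # adapts dt
print(n_loose, n_tight)     # the tight tolerance needs many more steps
```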
Figure 4.9: Comparison between constant and variable time step calculated by means of local error estimation. Integration of the Van der Pol oscillator with the embedded second order Runge–Kutta HeunEuler21. (a) x position along time. (b) Phase diagram of the solutions.
4.12 Convergence rate of Runge–Kutta wrappers
In the following code, the classical DOPRI5 and DOP853 embedded Runge–Kutta methods are used by means of a module that wraps these legacy codes.
call set_solver(family_name="weRK",scheme_name="WDOPRI5")
call set_tolerance (1e6)
call Temporal_convergence_rate ( &
Time_domain = Time , Differential_operator = VanDerPol_equation , &
U0 = U0 , order = 5, log_E = log_E (:,1), log_N = log_N )
call set_solver(family_name="weRK",scheme_name="WDOP853")
call set_tolerance (1e6)
call Temporal_convergence_rate ( &
Time_domain = Time , Differential_operator = VanDerPol_equation , &
U0 = U0 , order = 8, log_E = log_E (:,2), log_N = log_N )
Figure 4.10: Convergence rate of Runge–Kutta wrappers based on DOPRI5 and DOP853
with number of steps. (a) Van der Pol solution. (b) Error versus time steps.
In figure 4.10b, the steeper slope of DOP853 compared with that of DOPRI5 shows its superiority in terms of temporal error.
The Arenstorf orbit is a stable periodic orbit of the Earth–Moon system which was used as the basis for the Apollo missions. Arenstorf orbits are closed trajectories of the restricted three-body problem, in which two bodies of masses µ and 1 − µ move in circular rotation about their common center of gravity, and a third body of negligible mass moves in the same plane. In axes rotating about the center of gravity of the Earth and the Moon, the equations that govern the motion of the third body are:

ẋ = vx,
ẏ = vy,
v̇x = x + 2 vy − (1 − µ)(x + µ)/D1 − µ (x − (1 − µ))/D2,
v̇y = y − 2 vx − (1 − µ) y/D1 − µ y/D2,

where D1 = ((x + µ)² + y²)^(3/2) and D2 = ((x − (1 − µ))² + y²)^(3/2).
dxdt = vx
dydt = vy
dvxdt = x + 2 * vy - (1-mu)*( x + mu )/D1 - mu*(x-(1-mu))/D2
dvydt = y - 2 * vx - (1-mu) * y/D1 - mu * y/D2
end function
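A quick structural check of these equations: on the x axis (y = 0) with vx = 0, every term of the v̇y equation is proportional to y or vx, so the acceleration normal to the Earth–Moon axis vanishes there. The illustrative Python sketch below implements the right-hand side (µ = 0.012277471 is the standard Earth–Moon value, assumed here) and verifies this property:

```python
MU = 0.012277471        # standard Earth-Moon mass ratio (assumed value)

def arenstorf_rhs(u, mu=MU):
    """Right-hand side of the Arenstorf (restricted three-body) equations."""
    x, y, vx, vy = u
    d1 = ((x + mu) ** 2 + y ** 2) ** 1.5
    d2 = ((x - (1.0 - mu)) ** 2 + y ** 2) ** 1.5
    ax = (x + 2.0 * vy - (1.0 - mu) * (x + mu) / d1
          - mu * (x - (1.0 - mu)) / d2)
    ay = y - 2.0 * vx - (1.0 - mu) * y / d1 - mu * y / d2
    return (vx, vy, ax, ay)

# On the x axis with vx = 0, the vertical acceleration is exactly zero:
print(arenstorf_rhs((0.994, 0.0, 0.0, -2.0))[3])    # 0.0
```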
The following code integrates the Arenstorf orbit by means of the classical wrapped DOPRI54 and a new implementation written in modern Fortran. Different tolerances are selected to show their influence on the calculated orbit.
end subroutine
Figure 4.11: Integration of the Arenstorf orbit by means of embedded Runge–Kutta methods with a specified tolerance ε. (a) Wrapper of the embedded Runge–Kutta WDOPRI5. (b) New implementation of the embedded Runge–Kutta DOPRI54.
As expected, the wrapped code and the new implementation show similar results. When the error tolerance is decreased, the calculated orbit approaches a closed trajectory.
The Gragg–Bulirsch–Stoer (GBS) method is another common high order method for solving ordinary differential equations. It combines Richardson extrapolation with the modified midpoint method. In this example, the new implementation of the GBS algorithm and the wrapped legacy ODEX code are used to simulate the Arenstorf orbit.
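The idea behind GBS can be sketched compactly: the modified midpoint method has an error expansion in even powers of the step, so one Richardson extrapolation between results obtained with n and 2n substeps cancels the h² term and gains two orders at once. An illustrative Python version on the assumed test problem du/dt = −2u:

```python
import math

def modified_midpoint(f, u0, t0, H, n):
    """Advance u from t0 to t0 + H with n modified midpoint substeps."""
    h = H / n
    z0, z1 = u0, u0 + h * f(u0, t0)             # Euler starting step
    for i in range(1, n):
        z0, z1 = z1, z0 + 2 * h * f(z1, t0 + i * h)
    return 0.5 * (z0 + z1 + h * f(z1, t0 + H))  # Gragg smoothing step

f = lambda u, t: -2.0 * u
exact = math.exp(-2.0)
coarse = modified_midpoint(f, 1.0, 0.0, 1.0, 16)
fine = modified_midpoint(f, 1.0, 0.0, 1.0, 32)
# The error expansion contains only even powers of h, so one Richardson
# step cancels the h**2 term:  extrap = fine + (fine - coarse)/(2**2 - 1)
extrap = fine + (fine - coarse) / 3.0
print(abs(coarse - exact), abs(fine - exact), abs(extrap - exact))
```

A full GBS solver repeats this extrapolation over a whole tableau of substep counts, which is why it reaches very high orders cheaply.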
call set_solver(family_name="wGBS")
do j=1, Np
call set_tolerance(tolerances(j))
call Cauchy_ProblemS( Time , Arenstorf_equations , U(:, :, j) )
end do
call plot_parametrics( U(:, 1, :), U(:, 2, :), names , &
"$x$", "$y$", "(a)", path (1) )
call set_solver(family_name="GBS")
do j=1, Np
call set_tolerance(tolerances(j))
call Cauchy_ProblemS( Time , Arenstorf_equations , U(:, :, j) )
end do
call plot_parametrics( U(:, 1, :), U(:, 2, :), names , &
"$x$", "$y$", "(b)", path (2) )
end subroutine
Figure 4.12 shows that the GBS method is much less sensitive to the prescribed tolerance and reaches a trajectory closer to the solution than the eRK methods analyzed in the previous section.
Figure 4.12: Integration of the Arenstorf orbit with tolerances ε = 10⁻¹, 10⁻², 10⁻³. (a) Wrapped ODEX code. (b) New GBS implementation.
call set_solver(family_name="wABM")
do j=1, Np
call set_tolerance(tolerances(j))
call Cauchy_ProblemS( Time , Arenstorf_equations , U(:, :, j) )
end do
call plot_parametrics( U(:, 1, :), U(:, 2, :) , names , &
"$x$", "$y$", "(a)", path (1) )
call set_solver(family_name="ABM")
do j=1, Np
call set_tolerance(tolerances(j))
call Cauchy_ProblemS( Time , Arenstorf_equations , U(:, :, j) )
end do
call plot_parametrics( U(:, 1, :), U(:, 2, :) , names , &
"$x$", "$y$", "(b)", path (2) )
end subroutine
Figure: Integration of the Arenstorf orbit with tolerances ε = 10⁻³, 10⁻⁴, 10⁻⁵. (a) Wrapped ABM code. (b) New ABM implementation.
When high order precision is required, it is important to select the best temporal scheme: the one that reaches the lowest error tolerance with the smallest CPU time. In the following code, a new subroutine called Temporal_effort_with_tolerance is used to measure the computational effort. Once the temporal scheme is selected, this subroutine runs the Cauchy problem with the different error tolerances given in the input argument log_mu. It internally measures the number of evaluations of the function of the Cauchy problem for every simulation. In this way, the number of function evaluations can be plotted versus the error tolerance for different temporal schemes.
log_mu = [( i, i=1, M ) ]
do j=1, Np
call set_solver(family_name = "eRK", scheme_name = names(j) )
call Temporal_effort_with_tolerance ( Time , VanDerPol_equation , U0 ,&
log_mu , log_effort (:,j) )
end do
call plot_parametrics( log_mu , log_effort (: ,1:3), names (1:3) , &
"$-\log \epsilon$", "$\log M$ ", "(a)", path (1))
call plot_parametrics( log_mu , log_effort (: ,4:7), names (4:7) , &
"$-\log \epsilon$", "$\log M$ ", "(b)", path (2))
end subroutine
Figure: Number of function evaluations M versus the error tolerance ε for several embedded Runge–Kutta schemes. (a) HeunEuler21, RK21 and BogackiShampine. (b) CashKarp, DOPRI54, RK87 and RK65.
Chapter 5
Boundary Value Problems
Let Ω ⊂ R^p be an open and connected set and ∂Ω its boundary. The spatial domain D is defined as its closure, D ≡ {Ω ∪ ∂Ω}. Each point of the spatial domain is written x ∈ D. A boundary value problem for a vector function u : D → R^N of N components is defined as:
L(x, u(x)) = 0,    ∀ x ∈ Ω,
h(x, u(x))|∂Ω = 0,    ∀ x ∈ ∂Ω,

where L is the spatial differential operator and h is the boundary condition operator that the solution must satisfy at the boundary ∂Ω.
subroutine BVP_examples
call Legendre_1D
call Poisson_2D
call Elastic_Plate_2D
call Elastic_Nonlinear_Plate_2D
end subroutine
Listing 5.1: API_example_Boundary_Value_Problem.f90
To use all functions of this module, the statement: use Boundary_value_problems
should be included at the beginning of the program.
(1 − x²) d²y/dx² − 2x dy/dx + n(n + 1) y = 0,

where n stands for the degree of the Legendre polynomial. For n = 6, the boundary conditions are y(−1) = 1, y(1) = 1 and the exact solution is:

y(x) = (231x⁶ − 315x⁴ + 105x² − 5)/16.
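The exact solution can be checked directly: the polynomial above satisfies the Legendre equation identically, and since it is an even polynomial its boundary values are y(−1) = y(1) = 1. The following illustrative Python sketch evaluates the residual of the equation at a few sample points using exact polynomial differentiation:

```python
def polyval(c, x):
    """Evaluate a polynomial with coefficients c[k] of x**k."""
    return sum(ck * x ** k for k, ck in enumerate(c))

def polyder(c):
    """Exact derivative: d/dx sum c[k] x**k = sum k c[k] x**(k-1)."""
    return [k * ck for k, ck in enumerate(c)][1:]

# P6(x) = (231 x^6 - 315 x^4 + 105 x^2 - 5) / 16, coefficients of x^0..x^6
p6 = [-5 / 16, 0.0, 105 / 16, 0.0, -315 / 16, 0.0, 231 / 16]
dp, d2p = polyder(p6), polyder(polyder(p6))

n = 6
residuals = [(1 - x * x) * polyval(d2p, x) - 2 * x * polyval(dp, x)
             + n * (n + 1) * polyval(p6, x)
             for x in (-1.0, -0.3, 0.0, 0.5, 1.0)]
print(max(abs(r) for r in residuals))          # zero up to round-off
print(polyval(p6, -1.0), polyval(p6, 1.0))     # boundary values: 1.0 1.0
```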
This problem is solved by means of piecewise polynomial interpolation of degree
q or finite differences of order q. The implementation of the problem requires the
definition of the differential operator L(x, u(x)):
integer :: n = 6
L = (1 - x**2) * yxx - 2 * x * yx + n * (n + 1) * y
end function
Listing 5.2: API_example_Boundary_Value_Problem.f90
! Legendre solution
call Boundary_Value_Problem( x_nodes = x, &
Differential_operator = Legendre , &
Boundary_conditions = Legendre_BCs , &
Solution = U(:,1) )
Error (:,1) = U(:,1) - ( 231 * x**6 - 315 * x**4 + 105 * x**2 - 5 )/16.
Since the degree of the piecewise polynomial interpolation coincides with the degree of the solution, the Legendre polynomial, no interpolation error is expected. It can be observed in figure 5.1b that the error is of the order of round-off. The solution, the Legendre polynomial, is shown in figure 5.1a.
Figure 5.1: Solution of the Legendre equation with N = 20 grid points. (a) Legendre polynomial of degree n = 6. (b) Error of the solution.
Poisson's equation is an elliptic partial differential equation with broad utility in mechanical engineering and theoretical physics. It describes the potential field caused by a given charge or mass density distribution. In fluid mechanics, it is used to determine potential flows, streamlines and pressure distributions of incompressible flows. It is an inhomogeneous differential equation with a source term representing the volume charge density, the mass density or, in the case of a fluid, the vorticity. It is written in the following form:
∇2 u = s(x, y),
where ∇²u = ∂²u/∂x² + ∂²u/∂y² and s(x, y) is the source term. The Poisson operator is implemented by the following code:
end function
where a is an attenuation parameter and (x1 , y1 ) and (x2 , y2 ) are the positions of
the sources. The source term is implemented in the following code:
real :: r1 , r2 , a=100
In this example, homogeneous boundary conditions are considered; they are implemented by the function PBCs:
The differential operator Poisson and its boundary conditions PBCs are used as input arguments for the subroutine Boundary_Value_Problem:
! Poisson equation
call Boundary_Value_Problem( x_nodes = x, y_nodes = y, &
Differential_operator = Poisson , &
Boundary_conditions = PBCs , Solution = U)
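The structure of such an elliptic solve can be illustrated with the simplest possible scheme: a five-point finite difference stencil relaxed with Gauss–Seidel sweeps (a sketch only; the library uses high order piecewise interpolation, not this method). With a manufactured source s = −2π² sin(πx) sin(πy) on the unit square, the exact solution u = sin(πx) sin(πy) is recovered up to the O(h²) discretization error:

```python
import math

n = 20                        # grid intervals: (n + 1) x (n + 1) points
h = 1.0 / n
pi = math.pi
# Manufactured problem: with u = sin(pi x) sin(pi y) the source term is
# s = laplacian(u) = -2 pi^2 u, and homogeneous Dirichlet BCs hold.
exact = [[math.sin(pi * i * h) * math.sin(pi * j * h)
          for j in range(n + 1)] for i in range(n + 1)]
s = [[-2 * pi ** 2 * exact[i][j] for j in range(n + 1)]
     for i in range(n + 1)]

u = [[0.0] * (n + 1) for _ in range(n + 1)]    # zero guess and zero BCs
for _ in range(2000):                           # Gauss-Seidel sweeps
    for i in range(1, n):
        for j in range(1, n):
            u[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j] + u[i][j - 1]
                              + u[i][j + 1] - h * h * s[i][j])

err = max(abs(u[i][j] - exact[i][j])
          for i in range(n + 1) for j in range(n + 1))
print(err)     # limited by the O(h^2) discretization error
```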
Figure 5.2: Solution of the Poisson equation with Nx = 30, Ny = 30 and piecewise interpolation of degree q = 11. (a) Source term s(x, y). (b) Solution u(x, y).
∇²w = v,
∇²v = p(x, y).
end function
the fluid at y = 0 has ambient pressure. With these considerations, the plate is subjected to the following non-dimensional net force:

p(x, y) = a y,

where a is a non-dimensional parameter. This external load is implemented in the function load:
load = 100*y
end function
Listing 5.11: API_example_Boundary_Value_Problem.f90
w = u(1)
v = u(2)
else
write(*,*) " Error BCs x=", x; stop
endif
end function
Listing 5.12: API_example_Boundary_Value_Problem.f90
In figure 5.3a, the external load is shown. As mentioned, the net force between the hydrostatic pressure and the ambient pressure is zero at the vertical position y = 0. For y > 0 the external load is positive, and for y < 0 it is negative. This load divides the plate vertically into two parts: a depressed lower part and a bulged upper part, as shown in figure 5.3b.
Figure 5.3: Linear plate solution with 21 × 21 nodal points and q = 4. (a) External load p(x, y). (b) Displacement w(x, y).
5.5 Deflection of an elastic non linear plate
u(x, y) = [w, v, φ, F],

∇²w = v,
∇²v = p(x, y) + µ L(w, φ),
∇²φ = F,
∇²F = −L(w, w).
The external load applied to the nonlinear plate is the same as the one used for the linear plate,

p(x, y) = a y,

which allows comparing the linear and nonlinear solutions. The plate behaves non-linearly when the deflections are of the order of the plate thickness. Since the deflections are caused by the external load, the non-dimensional parameter a can be used to drive the plate into the nonlinear regime.
w = u(1); wxx = uxx (1); wyy = uyy (1) ; wxy = uxy (1)
v = u(2); vxx = uxx (2); vyy = uyy (2)
phi = u(3); phixx = uxx (3); phiyy = uyy (3) ; phixy = uxy (3)
F = u(4); Fxx = uxx (4); Fyy = uyy (4)
end function
end function
Figure 5.4: Nonlinear elastic plate solution with Nx = 20, Ny = 20, q = 4 and µ = 100. (a) Displacement w(x, y). (b) Solution φ(x, y).
Chapter 6
Initial Boundary Value Problems
6.1 Overview
In this chapter, several initial boundary value problems are presented. These problems can be divided into purely diffusive problems, such as the heat equation, and purely convective problems, such as the wave equation. In between, there are convective-diffusive problems, represented by the advection-diffusion equation. In the subroutine IBVP_examples, six examples of these problems are implemented. The first problem obtains the solution of the one-dimensional heat equation. The second presents a two-dimensional solution of the heat equation with non-homogeneous boundary conditions. The third and fourth problems are devoted to the advection-diffusion equation in 1D and 2D. The fifth and sixth problems integrate the motion of reflecting waves in a 1D closed tube and in a 2D quadrangular box.
subroutine IBVP_examples
call Heat_equation_1D
call Heat_equation_2D
call Advection_Diffusion_1D
call Advection_Diffusion_2D
call Wave_equation_1D
call Wave_equation_2D
end subroutine
The heat equation is a partial differential equation that describes how the temperature evolves in a solid medium. The physical mechanism, thermal conduction, is associated with microscopic transfers of energy within a body. Fourier's law states that the heat flux depends on the temperature gradient and the thermal conductivity. By imposing the energy balance on a control volume and taking into account Fourier's law, the heat equation is derived:

∂u/∂t = ∂²u/∂x².
The spatial domain is Ω ⊂ R : {x ∈ [−1, 1]} and the temporal domain is t ∈ [0, 1]. The boundary conditions are set by imposing a given temperature or heat flux at the boundaries. In this example, a homogeneous temperature is imposed at the boundaries:

u(−1, t) = 0,    u(1, t) = 0,
F = uxx
end function
Listing 6.2: API_Example_Initial_Boundary_Value_Problem.f90
! Heat equation 1D
call Grid_Initialization( "nonuniform", "x", q, x )
U(0, :) = exp(-25*x**2 )
call Initial_Boundary_Value_Problem ( &
Time_Domain = Time , x_nodes = x, &
Differential_operator = Heat_equation1D , &
Boundary_conditions = Heat_BC1D , &
Solution = U )
In figure 6.1, the temperature u(x, t) during the time integration is shown by means of different parametric curves. Starting from the initial condition, the temperature diffuses towards both sides of the spatial domain while satisfying zero temperature at the boundaries.
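This behavior can be reproduced with the most elementary discretization: second order central differences in space and explicit Euler steps in time, stable here because ∆t < ∆x²/2. An illustrative Python sketch (not the library's scheme, which uses high order operators and better time integrators):

```python
import math

nx, nt = 20, 4000
h = 2.0 / nx                       # x in [-1, 1]
dt = 1.0 / nt                      # t in [0, 1]; dt < h**2 / 2: stable
x = [-1.0 + i * h for i in range(nx + 1)]
u = [math.exp(-25 * xi ** 2) for xi in x]    # initial temperature pulse

for _ in range(nt):
    lap = [0.0] * (nx + 1)
    for i in range(1, nx):         # second order central differences
        lap[i] = (u[i - 1] - 2 * u[i] + u[i + 1]) / h ** 2
    u = [u[i] + dt * lap[i] for i in range(nx + 1)]
    u[0] = u[nx] = 0.0             # homogeneous Dirichlet conditions

print(max(u))    # the pulse has spread out and decayed well below 1
```

At t = 1 the peak temperature has dropped by more than an order of magnitude, consistent with the decay seen in figure 6.1(b).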
Figure 6.1: Time evolution of the heat equation with Nx = 20 and q = 6. (a) Temperature profile u(x, t) at t = 0, 0.2, 0.4. (b) Temperature profile u(x, t) at t = 0.6, 0.8, 1.
F = Uxx + Uyy
end function
Listing 6.5: API_Example_Initial_Boundary_Value_Problem.f90
if (x==x0) then
BC = U - 1
else if (x==xf .or. y==y0 .or. y==yf ) then
BC = U
else
write(*,*) "Error in Heat_BC2D"; stop
end if
end function
Listing 6.6: API_Example_Initial_Boundary_Value_Problem.f90
! Heat equation 2D
call Initial_Boundary_Value_Problem ( &
Time_Domain = Time , x_nodes = x, y_nodes = y, &
Differential_operator = Heat_equation2D , &
Boundary_conditions = Heat_BC2D , Solution = U )
Figure 6.2: Solution of the 2D heat equation with Nx = 20, Ny = 20 and order q = 4. (a) Temperature at t = 0.125. (b) Temperature at t = 0.250. (c) Temperature at t = 0.375. (d) Temperature at t = 0.5.
When convection is present together with diffusion in the energy transfer mechanism, boundary conditions become tricky. For example, let us consider a fluid inside a pipe moving to the right at constant velocity, transferring heat by conduction to the right and to the left. At the same time, due to the convective velocity, energy is transported downstream. It is clear that the inlet temperature can be imposed, but nothing can be said about the outlet temperature. In this section, the influence of extra boundary conditions is analyzed. The one-dimensional energy transfer mechanism associated with advection and diffusion is governed by the following equation:

∂u/∂t + ∂u/∂x = ν ∂²u/∂x²,

where ν is a non-dimensional parameter which measures the importance of diffusion versus convection. The spatial domain is Ω ⊂ R : {x ∈ [−1, 1]}. As mentioned, extra boundary conditions are imposed to analyze their effect:

u(−1, t) = 0,    u(1, t) = 0.

The convective and diffusive evolution is studied from the following initial condition:

u(x, 0) = exp(−25x²).
The differential operator and the boundary equations are implemented in the
following two subroutines:
F = - ux + nu * uxx
end function
Listing 6.8: API_Example_Initial_Boundary_Value_Problem.f90
! Advection diffusion 1D
call Initial_Boundary_Value_Problem ( &
Time_Domain = Time , x_nodes = x, &
Differential_operator = Advection_equation1D , &
Boundary_conditions = Advection_BC1D , Solution = U )
Figure 6.3: Time evolution of the 1D advection-diffusion solution u(x, t). (a) Profiles at t = 0, 0.3, 0.6. (b) Profiles at t = 0.9, 1.2, 1.5.
The purpose of this section is to show how removing the extra boundary condition imposed in the 1D advection-diffusion problem allows obtaining the desired result.

Let us consider a fluid moving with a given constant velocity v. While the convective energy transfer mechanism is determined by v · ∇u, the energy transferred by thermal conduction is proportional to ∇²u. With these considerations, the temperature evolution of the fluid is governed by the following equation:

∂u/∂t + v · ∇u = ν ∇²u,

where ν is a non-dimensional parameter which measures the importance of diffusion versus convection.
In this example, v = (1, 0) and the energy transfer occurs in a two-dimensional domain Ω ⊂ R² : {(x, y) ∈ [−1, 1] × [−1, 1]}. The above equation yields:

∂u/∂t + ∂u/∂x = ν (∂²u/∂x² + ∂²u/∂y²).
F = - Ux + nu * ( Uxx + Uyy )
end function
The constant velocity v of the flow allows one to decide whether a boundary is an inflow or an outflow boundary by projecting the velocity on the direction normal to the boundary. In our case, only the boundary x = +1 is an outflow. The flow is considered to enter at zero temperature, but no boundary condition is imposed at the outflow.
The question that arises is: if no boundary condition is imposed, how do these boundary points evolve? The answer is to treat them as interior points, so that their evolution is governed by the advection-diffusion equation. To indicate that there are points with this requirement, the keyword FREE_BOUNDARY_CONDITION is used. These special boundary points are implemented in the following function Advection_BC2D:
! Advection diffusion 2D
call Initial_Boundary_Value_Problem ( &
Time_Domain = Time , x_nodes = x, y_nodes = y, &
Differential_operator = Advection_equation2D , &
Boundary_conditions = Advection_BC2D , Solution = U )
In figure 6.4, the temperature distribution is shown. In the early stages of the simulation, figures 6.4a and 6.4b, the energy is transported to the right while thermal conduction diffuses the initial distribution. In figures 6.4c and 6.4d, the flow has reached the outflow boundary. Since no boundary condition is imposed at the outflow boundary x = +1, the simulation predicts the expected behavior: the energy leaves the spatial domain with no reflections or perturbations of the temperature distribution.
Figure 6.4: Solution of the advection-diffusion equation with outflow boundary conditions, with Nx = 20, Ny = 20 and order q = 8. (a) Initial condition u(x, y, 0). (b) Solution at t = 0.45. (c) Solution at t = 0.9. (d) Solution at t = 1.35.
6.6 Wave equation 1D
The wave equation is a conservative equation that describes waves such as pressure or sound waves, water waves, waves in solids or light waves. It is a partial differential equation that predicts the evolution of a function v(x, t), such as the pressure in a liquid or gas or the displacement of some medium, where x represents the spatial variable and t stands for the time variable. The governing equation is:

∂²v/∂t² − ∂²v/∂x² = 0.

Since the module Initial_Boundary_Value_Problems is written for systems of second order derivatives in space and first order in time, the problem must be rewritten by means of the following transformation:

∂v/∂t = w,    ∂w/∂t = ∂²v/∂x².
F = [w, vxx]
end function
These equations must be completed with initial and boundary conditions. In this example, a one-dimensional tube with closed ends is considered. The spatial domain is Ω ⊂ R : {x ∈ [−1, 1]} and the temporal domain is t ∈ [0, 4]. The boundary conditions v(±1, t) = 0 and w(±1, t) = 0 mean that waves reflect at the boundaries conserving their energy. The initial condition is v(x, 0) = exp(−15x²) and w(x, 0) = 0. The boundary conditions are implemented in the following function Wave_BC1D:
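Because the system is conservative, a useful check on any discretization is the energy E = ½∫(w² + vₓ²) dx, which should stay constant. The following illustrative Python sketch (second order central differences in space and a fourth order Runge–Kutta in time, assumed choices rather than the library's) integrates one full period and measures the energy drift:

```python
import math

nx = 100
h = 2.0 / nx                                  # x in [-1, 1]
x = [-1.0 + i * h for i in range(nx + 1)]
v = [math.exp(-15 * xi ** 2) for xi in x]     # initial displacement
w = [0.0] * (nx + 1)                          # initial velocity

def rhs(v, w):
    """v_t = w and w_t = v_xx, with v and w held fixed at both ends."""
    dv = [0.0] + w[1:nx] + [0.0]
    dw = [0.0] * (nx + 1)
    for i in range(1, nx):
        dw[i] = (v[i - 1] - 2 * v[i] + v[i + 1]) / h ** 2
    return dv, dw

def rk4(v, w, dt):
    def axpy(a, b, s):
        return [ai + s * bi for ai, bi in zip(a, b)]
    k1 = rhs(v, w)
    k2 = rhs(axpy(v, k1[0], dt / 2), axpy(w, k1[1], dt / 2))
    k3 = rhs(axpy(v, k2[0], dt / 2), axpy(w, k2[1], dt / 2))
    k4 = rhs(axpy(v, k3[0], dt), axpy(w, k3[1], dt))
    v = [a + dt / 6 * (p + 2 * q + 2 * r + s)
         for a, p, q, r, s in zip(v, k1[0], k2[0], k3[0], k4[0])]
    w = [a + dt / 6 * (p + 2 * q + 2 * r + s)
         for a, p, q, r, s in zip(w, k1[1], k2[1], k3[1], k4[1])]
    return v, w

def energy(v, w):
    return 0.5 * h * sum(w[i] ** 2 + ((v[i + 1] - v[i]) / h) ** 2
                         for i in range(nx))

E0, dt = energy(v, w), 0.01
for _ in range(400):               # one full period, t in [0, 4]
    v, w = rk4(v, w, dt)
print(E0, energy(v, w))            # nearly equal: energy is conserved
```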
The differential operator and the boundary conditions function are used as input
arguments of the subroutine Initial_Boundary_Value_Problem
! Wave equation 1D
call Initial_Boundary_Value_Problem ( &
Time_Domain = Time , x_nodes = x, &
Differential_operator = Wave_equation1D , &
Boundary_conditions = Wave_BC1D , Solution = U )
In figure 6.5, the time evolution of u(x, t) is shown. Since the initial condition is symmetric with respect to x = 0 and the system is conservative, the solution is periodic with period T = 4. Figure 6.5b shows that the displacement profile u(x, t) at t = T coincides with the initial condition.
Figure 6.5: Wave equation solution with Nx = 41 and order q = 6. (a) Time evolution of u(x, t) from t = 0 to t = 2. (b) Time evolution of u(x, t) from t = 2 to t = 4.
6.7 Wave equation 2D
L(1) = w
L(2) = vxx +vyy
end function
Listing 6.17: API_Example_Initial_Boundary_Value_Problem.f90
w(x, y, 0) = 0.
The differential operator, its boundary conditions and initial condition are used
in the following code snippet:
! Wave equation 2D
call Grid_Initialization( "nonuniform", "x", Order , x )
call Grid_Initialization( "nonuniform", "y", Order , y )
In figure 6.6, the time evolution of u(x, y, t) is shown from the initial
condition to time t = 2. Since waves reflect from different walls in different
directions and the round-trip time depends on the direction, the problem
becomes much harder to analyze than the purely one-dimensional problem.
[Figure 6.6: four contour panels (a)-(d) over (x, y) ∈ [−1, 1] × [−1, 1].]
Figure 6.6: Wave equation solution with Nx = 20, Ny = 20 and order q = 8. (a) Initial
value u(x, y, 0). (b) Numerical solution at t = 0.66, (c) numerical solution at t = 1.33,
(d) numerical solution at t = 2.
Chapter 7
Mixed Boundary and Initial Value
Problems
7.1 Overview
In this chapter, a mixed problem coupling elliptic and parabolic equations is
solved making use of the module IBVP_and_BVP. Briefly and not rigorously, these
problems are governed by a parabolic time dependent problem for u(x, t) and an
elliptic or boundary value problem for v(x, t). Let Ω be an open
and connected set and ∂Ω its boundary. These problems are formulated with the
following set of equations:
   ∂u/∂t (x, t) = Lu(x, t, u(x, t), v(x, t)),   ∀ x ∈ Ω,
   hu(x, t, u(x, t))|∂Ω = 0,   ∀ x ∈ ∂Ω,
   u(x, t0) = u0(x),   ∀ x ∈ Ω,
   Lv(x, t, u(x, t), v(x, t)) = 0,   ∀ x ∈ Ω,
   hv(x, t, v(x, t))|∂Ω = 0,   ∀ x ∈ ∂Ω,
where Lu is the spatial differential operator of the initial value problem of
Nu equations, u0(x) is the initial value, hu represents the boundary conditions
operator for the solution at the boundary points u|∂Ω, Lv is the spatial
differential operator of the boundary value problem of Nv equations and hv
represents the boundary conditions operator for v at the boundary points v|∂Ω.
85
86 CHAPTER 7. MIXED BOUNDARY AND INITIAL VALUE PROBLEMS
The nonlinear plate vibration problem is governed by:

   ∂²w/∂t² + ∇⁴w = p(x, y, t) + µ B(w, φ),
   ∇⁴φ + B(w, w) = 0.

Introducing F = ∇²φ, the elliptic part is written as:

   v = [ φ, F ],
   Lv = [ ∇²φ − F, ∇²F + B(w, w) ].
Lu(1) = w2
Lu(2) = - w3xx - w3yy + load(x, y, t) &
+ mu * B( wxx , wyy , wxy , pxx , pyy , pxy)
Lu(3) = w2xx + w2yy
end function
Listing 7.1: API_Example_IBVP_and_BVP.f90
end function
Listing 7.2: API_Example_IBVP_and_BVP.f90
To impose simply supported edges, the values of all components of u(x, y, t)
and v(x, y, t) must be determined analytically at the boundaries. Since
w(x, y, t) is zero at the boundaries for all time and w2 = ∂w/∂t, then w2 is
zero at the boundaries. Since ∇²w is zero at the boundaries for all time and

   ∂w3/∂t = ∂∇²w/∂t,

then w3 is zero at the boundaries. The same reasoning is applied to determine
the v(x, y, t) components at the boundaries. With these considerations, the
boundary conditions hu and hv are implemented by:
In figure 7.1, the oscillations of a plate with zero external loads starting from
an elongated position with zero velocity are shown.
7.2. NON LINEAR PLATE VIBRATION 89
[Figure 7.1: four contour panels (a)-(d) over (x, y) ∈ [−1, 1] × [−1, 1].]
Figure 7.1: Time evolution of nonlinear vibrations w(x, y, t) with 11 × 11
nodal points and order q = 6. (a) w(x, y, 0.25). (b) w(x, y, 0.5).
(c) Numerical solution w(x, y, 0.75). (d) Numerical solution w(x, y, 1).
Part II
Developer guidelines
91
Chapter 1
Systems of equations
1.1 Overview
This chapter covers the implementation of some classic issues that appear in
algebraic problems from applied mathematics. In particular, the operations
related to linear and non linear systems and to matrices, such as LU
factorization, computation of real eigenvalues and eigenvectors, and the SVD
decomposition, are presented.
94 CHAPTER 1. SYSTEMS OF EQUATIONS
In all first courses in linear algebra, the resolution of linear systems is
treated as the fundamental problem to be solved. This problem consists of
solving, for an unknown x ∈ R^N, the system:
Ax = b, (1.1)
where A ∈ M_N×N verifies det(A) ≠ 0 and b ∈ R^N.
A = LU, (1.2)
where L and U are lower and upper triangular matrices respectively. Note that as
U is obtained through a Gaussian elimination process, the number of operations
to compute LU factorization is the same. However, relation (1.2) gives a recursion
to obtain both L and U operating only over elements of A. The factorization of A
is equivalent to the relation between their components:

   A_ij = Σ_m L_im U_mj,   (1.3)

from which we want to obtain L_ij and U_ij. By definition, the number of non
null terms in each of L and U is N(N + 1)/2, which leads to N² + N unknown
variables. However, the number of equations supplied by (1.2) is N². This
makes it necessary to fix the value of N unknowns, so we force L_kk = 1. Once
this is done, we can obtain the k-th row of U from the equations for the
components A_kj with j ≥ k, provided the previous k − 1 rows are known:

   U_kj = A_kj − Σ_{m=1}^{k−1} L_km U_mj,   for j ≥ k.   (1.4)

Taking into account that U_11 = A_11, we can compute this recursion.
Note that the first row of U is just the first row of A. Hence, we can calculate
each row of U recursively by a direct implementation.
do j=k, N
A(k,j) = A(k,j) - dot_product( A(k, 1:k-1), A(1:k-1, j) )
end do
Listing 1.1: Linear_systems.f90
Note that the upper triangular matrix U is stored on the upper triangular
elements of A. Once we have calculated the rows of U up to k, a recursion
gives the k-th column of L if all the previous columns are known. Note that
for i > j = 1, A_i1 = L_i1 U_11, so the first column of L can be given as an
initial condition. Therefore, we can compute the recursion as:

   L_ik = ( A_ik − Σ_m L_im U_mk ) / U_kk,   for m ∈ [1, k − 1].   (1.5)
Again, this recursion can be computed through a direct implementation:
do i=k+1, N
A(i,k) = (A(i,k) - dot_product( A(1:k-1, k), A(i, 1:k-1) ) )/A(k,k)
end do
Listing 1.2: Linear_systems.f90
subroutine LU_factorization( A )
real, intent(inout) :: A(:, :)
integer :: N
integer :: k, i, j
N =size(A, dim =1)
A(1, :) = A(1,:)
A(2:N,1) = A(2:N,1)/A(1,1)
do k=2, N
do j=k, N
A(k,j) = A(k,j) - dot_product( A(k, 1:k-1), A(1:k-1, j) )
end do
do i=k+1, N
A(i,k) = (A(i,k) - dot_product( A(1:k-1, k), A(i, 1:k-1) ) )/A(k,k)
end do
end do
end subroutine
Listing 1.3: Linear_systems.f90
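The same in-place recursion can be sketched in a few lines of Python (illustrative only; the book's implementation is the Fortran subroutine above). U is stored on and above the diagonal of A and L, whose diagonal is implicitly one, strictly below it:

```python
# Illustrative Python sketch of the in-place Doolittle LU recursion.

def lu_factorization(A):
    """In place: U on and above the diagonal, L (unit diagonal) below it."""
    n = len(A)
    for j in range(1, n):
        A[j][0] = A[j][0] / A[0][0]            # first column of L
    for k in range(1, n):
        for j in range(k, n):                  # k-th row of U, eq. (1.4)
            A[k][j] -= sum(A[k][m] * A[m][j] for m in range(k))
        for i in range(k + 1, n):              # k-th column of L, eq. (1.5)
            A[i][k] = (A[i][k] - sum(A[i][m] * A[m][k] for m in range(k))) / A[k][k]
    return A

A = [[4.0, 3.0], [6.0, 3.0]]
lu_factorization(A)
print(A)  # → [[4.0, 3.0], [1.5, -1.5]]
```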
96 CHAPTER 1. SYSTEMS OF EQUATIONS
Once the matrix A is factorized, it is possible to solve the system (1.1).
First, y = U x is defined, and thus:

   L y = b.   (1.6)

As L_ij = 0 for i < j and L_ii = 1, the first row of (1.6) gives y_1 = b_1,
and the value of each y_i can be written in terms of the previous y_j, that
is:

   y_i = b_i − Σ_j L_ij y_j,   for j ∈ [1, i − 1].   (1.7)
do i=2,N
y(i) = b(i) - dot_product( A(i, 1:i-1), y(1:i-1) )
enddo
In a similar manner, as U_ij = 0 for i > j, the last row of the system
U x = y (1.8) gives x_N = y_N / U_NN, and each x_i can be written in terms of
the next x_j with i < j ≤ N, as expressed by equation (1.9):

   x_i = ( y_i − Σ_j U_ij x_j ) / U_ii,   for j ∈ [i + 1, N].   (1.9)
do i=N-1, 1, -1
x(i) = (y(i) - dot_product( A(i, i+1:N), x(i+1:N) ) )/ A(i,i)
end do
function Solve_LU( A, b )
real, intent(in) :: A(:, :), b(:)
real :: Solve_LU( size(b) )
real :: y (size(b)), x(size(b))
integer :: i, N
N = size(b)
y(1) = b(1)
do i=2,N
y(i) = b(i) - dot_product( A(i, 1:i-1), y(1:i-1) )
enddo
x(N) = y(N) / A(N,N)
do i=N-1, 1, -1
x(i) = (y(i) - dot_product( A(i, i+1:N), x(i+1:N) ) )/ A(i,i)
end do
Solve_LU = x
end function
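A matching sketch of the substitution stages (again illustrative Python, not the book's code) takes the packed factors produced by the factorization and performs the forward and backward sweeps of (1.7) and (1.9):

```python
# Illustrative sketch: forward/back substitution on packed LU factors.
# A holds U on/above the diagonal and L (unit diagonal) strictly below it.

def solve_lu(A, b):
    n = len(b)
    y = [0.0] * n
    for i in range(n):                          # L y = b, with L_ii = 1
        y[i] = b[i] - sum(A[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):              # U x = y
        x[i] = (y[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

# packed factors of [[4, 3], [6, 3]]: L21 = 1.5 and U = [[4, 3], [0, -1.5]]
A = [[4.0, 3.0], [1.5, -1.5]]
print(solve_lu(A, [7.0, 9.0]))  # → [1.0, 1.0]
```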
   f(x) = 0.   (1.10)

   x_{i+1} − x_i = − (∇f(x_i))⁻¹ · f(x_i),   (1.13)
where (∇f(x_i))⁻¹ is the inverse of the Jacobian matrix. Equation (1.13)
provides an explicit sequence which converges to the solution x of (1.10) if
the initial condition is sufficiently close to it. Hence, a recursive
iteration on (1.13) gives an approximate solution of the non linear problem.
The recursion is stopped by defining a convergence criterion for x; that is,
the recursion stops when ‖x_{i+1} − x_i‖ ≤ ε, where ε is a sufficiently small
positive number for the desired accuracy. The implementation of an algorithm
which computes the Newton method for any function is presented in the
following pages.
1 Local invertibility is equivalent to the invertibility of the Jacobian matrix as the inverse
In order to implement the Newton method, first, we have to calculate the Jacobian
matrix of the function. To avoid an excessive analytical effort, the columns of
∇f (xi ) are calculated using order 2 centered finite differences:
   ∂f(x_i)/∂x_j ≃ ( f(x_i + ∆x e_j) − f(x_i − ∆x e_j) ) / (2∆x),   (1.14)
where ej = (0, . . . , 1, . . . , 0) is the canonical basis vector whose only non zero
entry is the j-th. Thus, the implementation of the computation of each column
∂f (xi )/∂xj is straightforward:
where the array xj stores a small perturbation along the coordinate x_j. The
calculation of all the Jacobian columns is implemented in a function called
Jacobian, which computes the gradient at the point x_i by sweeping through
j ∈ [1, N], that is, introducing the previous piece of code in a do loop:
function Jacobian( F, xp )
procedure (FunctionRN_RN) :: F
real, intent(in) :: xp(:)
real :: Jacobian( size(xp), size(xp) )
integer :: j, N
real :: xj( size(xp) )
N = size(Xp)
do j = 1, N
xj = 0
xj(j) = 1d-3
Jacobian (:,j) = ( F(xp + xj) - F(xp - xj) )/norm2 (2*xj);
enddo
end function
Listing 1.8: Jacobian_module.f90
J = Jacobian( F, x0 )
Once the Jacobian is calculated, it is used to compute the next iteration initial
guess xi+1 . Instead of computing the inverse of the Jacobian, we solve the system:
call LU_factorization( J )
b = F(x0);
Dx = Solve_LU( J, b )
xi+1 = xi − ∆xi ,
x0 = x0 - Dx;
3. Next iteration
Once we have calculated the next iteration initial guess xi+1 we just have to make
the assignation:
i → i + 1, xi → xi+1 . (1.16)
iteration = iteration + 1
subroutine Newton( F, x0 )
procedure (FunctionRN_RN) :: F
real, intent(inout) :: x0(:)
real :: Dx( size(x0) ), b(size(x0)), eps
real :: J( size(x0), size(x0) )
integer :: iteration , itmax = 1000
integer :: N
N = size(x0)
Dx = 2 * x0
iteration = 0
eps = 1
do while ( eps > 1d-8 .and. iteration < itmax ) ! tolerance value assumed
iteration = iteration + 1
J = Jacobian( F, x0 )
call LU_factorization( J )
b = F(x0);
Dx = Solve_LU( J, b )
x0 = x0 - Dx;
eps = norm2( DX )
end do
if (iteration == itmax) then
write(*,*) " maxval(J), minval(J) =", maxval(J), minval(J)
write(*,*) " norm2(Dx) = ", eps , iteration
endif
end subroutine
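The complete loop can be condensed in an illustrative Python sketch (not the book's subroutine): a centered finite difference Jacobian as in (1.14) and the update x ← x − ∆x. For brevity, a 2 × 2 Cramer solve replaces the LU routines; everything else follows the steps above.

```python
# Illustrative Newton sketch: finite-difference Jacobian plus update step.

def jacobian(f, x, h=1e-3):
    n = len(x)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        xp, xm = list(x), list(x)
        xp[j] += h
        xm[j] -= h
        fp, fm = f(xp), f(xm)
        for i in range(n):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)   # equation (1.14)
    return J

def newton2(f, x, tol=1e-12, itmax=100):
    for _ in range(itmax):
        J, b = jacobian(f, x), f(x)
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        dx = [(b[0] * J[1][1] - J[0][1] * b[1]) / det,    # solve J dx = b
              (J[0][0] * b[1] - J[1][0] * b[0]) / det]
        x = [x[0] - dx[0], x[1] - dx[1]]
        if max(abs(c) for c in dx) < tol:
            break
    return x

# f(x, y) = (x^2 + y^2 - 2, x - y) has the root (1, 1)
root = newton2(lambda v: [v[0]**2 + v[1]**2 - 2, v[0] - v[1]], [2.0, 0.5])
print([round(c, 6) for c in root])  # → [1.0, 1.0]
```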
(A − λi I)v i = 0. (1.17)
Figure 1.1 gives a classification, from the spectral point of view, of the
possible situations for a real square matrix. The main characteristic which
classifies a matrix is whether it is normal or not. A normal matrix commutes
with its transpose, that is, it verifies AAᵀ = AᵀA, and such matrices can be
diagonalized by orthonormal vectors. The fact that normal matrices can be
diagonalized by a set of orthonormal vectors is a consequence of the Schur
decomposition theorem² and the fact that all normal upper-triangular matrices
are diagonal. A practical manner to check (not to prove) this fact is by
taking two eigenvectors v_i and v_j of A and noticing that:
v i · Av j = λj v i · v j , (1.18)
and if v i and v j are orthogonal and unitary, then the matrix whose components
are given by
Dij = v i · Av j = λj δij ,
is diagonal and its non zero entries are the eigenvalues of A. This means that
defining a matrix V whose columns are the eigenvectors of A, we can factorize A
as:
A = V DV ∗ , (1.19)
where V* stands for the conjugate transpose of V. Until now, we have not
specified to which field (R or C) the eigenvalues λ belong and over which
field the vector space containing the eigenvectors is defined. A sufficient
condition for a real matrix to have real eigenvalues and eigenvectors is given
by the spectral theorem: all symmetric matrices (which are normal) have real
eigenvalues and eigenvectors and are diagonalizable. If a matrix is normal but
not symmetric then in general its
² This theorem asserts that for any square matrix A we can find a unitary
matrix U (that is, U* = U⁻¹) such that A = U T U*, where T is an
upper-triangular matrix and where U* stands for the conjugate transpose of U.
1.5. POWER METHOD AND DEFLATION METHOD 103
eigenvalues and eigenvectors are complex but they can have zero imaginary part
(not only symmetric matrices have real eigenvalues). Non normal matrices are
not diagonalizable in the sense we have defined but can be diagonalized by blocks
through the Jordan canonical form. In this book we will restrict ourselves to the
case of normal matrices with real eigenvalues, that is, to the case in which the
eigenvectors of A spans the real vector space Rn .
Figure 1.1 summarizes this classification for a real matrix A ∈ M_n×n:

• Normal matrices (AAᵀ = AᵀA):
  – Symmetric (A = Aᵀ): eigenvalues λ ∈ R and eigenvectors v ∈ Rⁿ.
  – Non symmetric (A ≠ Aᵀ): eigenvalues λ ∈ C and eigenvectors v ∈ Cⁿ.
• Non normal matrices (AAᵀ ≠ AᵀA): there exist non orthogonal eigenvectors,
  and generalised eigenvalues and eigenvectors are needed to diagonalize A by
  blocks.
In this section we will present an iterative method to compute the eigenvalues and
eigenvectors of normal matrices whose spectrum spans the whole real vector space
Rn . The power method is an iterative method which gives back the module of the
maximum eigenvalue. Let A ∈ Mn×n be a square real normal matrix with real
eigenvalues |λ1 | > |λ2 | ≥ · · · ≥ |λn |, and their orthonormal associated eigenvectors
{v 1 , . . . , v n }. The method is based on the fact that as the eigenvectors of A form
a basis of Rn we can write for any vector x0 ∈ Rn :
   x_0 = Σ_i a_i v_i.   (1.20)
   x_{k+1} = A x_k / ‖A x_k‖ = A^{k+1} x_0 / ‖A^{k+1} x_0‖,   (1.22)
which, as |λ1| > |λi| for all i > 1 and taking into account (1.21), verifies:
   lim_{k→∞} x_k = v_1.   (1.23)
Once we have computed this eigenvector, we can obtain the associated
eigenvalue λ1 from the Rayleigh quotient as³:

   λ1 = v_1 · A v_1.
The algorithm that carries out the power method can be summarized in three
steps
1. Initial condition: We can set x0 to be any vector, for example its compo-
nents can be the natural numbers 1, 2, . . . , n:
U = [ (k, k=1, N) ]
All the previous steps are implemented in the subroutine Power_method which
takes the matrix A and gives back the eigenvalue lambda and the eigenvector U.
k = 1
do while( norm2(U-U0) > 1d-12 .and. k < k_max )
U0 = U
V = matmul( A, U )
U = V / norm2(V)
k = k + 1
end do
lambda = dot_product( U, matmul(A, U) )
end subroutine
This subroutine, given a normal matrix A, gives back its maximum module
eigenvalue λ1 and its associated eigenvector v_1. The eigenvalue is yielded in
the real lambda and the eigenvector in the vector U.
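An illustrative Python sketch of the same loop (not the book's subroutine) shows the three steps: initial condition x0 = (1, 2, . . . , n), normalized multiplications by A, and the Rayleigh quotient for λ1:

```python
# Illustrative power method sketch: multiply, normalize, repeat.

def matvec(A, u):
    return [sum(a * b for a, b in zip(row, u)) for row in A]

def power_method(A, kmax=1000, tol=1e-12):
    n = len(A)
    u = [float(k) for k in range(1, n + 1)]     # x0 = (1, 2, ..., n)
    for _ in range(kmax):
        v = matvec(A, u)
        nrm = sum(c * c for c in v) ** 0.5
        v = [c / nrm for c in v]
        diff = sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
        u = v
        if diff < tol:
            break
    lam = sum(a * b for a, b in zip(u, matvec(A, u)))  # Rayleigh quotient
    return lam, u

A = [[2.0, 1.0], [1.0, 2.0]]    # symmetric, eigenvalues 3 and 1
lam, u = power_method(A)
print(round(lam, 9))  # → 3.0
```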
Once we have presented the power method iteration to compute the dominant
eigenvalue λ1 and its associated eigenvector v_1, a method to compute all the
eigenvalues and eigenvectors of a matrix using the power method is presented.
This iterative method is called the deflation method. Note that for the power
method to work properly we need |λ1| to be strictly greater than the rest of
the eigenvalues; if |λ1| = |λ2| the method does not converge. The deflation
method requires a stronger condition: the eigenvalues must satisfy
|λ1| > |λ2| > · · · > |λn|. The method is based on the fact that the matrix
B2 = A − λ1 v_1 ⊗ v_1 replaces the eigenvalue λ1 by a zero eigenvalue. The
symbol ⊗ stands for the tensor product in Rⁿ, which is defined from the
contraction (a ⊗ b) · c = a (b · c). When this is done λ1 is replaced, but the
rest of the eigenvalues and eigenvectors remain invariant, and therefore the
dominant eigenvalue of B2 is λ2. This is a consequence of the spectral
theorem, which asserts that A can be written as:
which asserts that A can be written as:
   A = Σ_i λ_i v_i ⊗ v_i,   (1.29)
   B_2 = 0 · v_1 ⊗ v_1 + Σ_{i≠1} λ_i v_i ⊗ v_i,   (1.30)

where we see explicitly how the eigenvalue is replaced while the rest remain
unaltered. Hence, we define the succession of matrices:

   B_{k+1} = B_k − λ_k v_k ⊗ v_k,   with B_1 = A,   (1.31)

and compute each eigenpair with the following steps:
1. Power method over the initial matrix: First we apply the power method
to Bk and compute λk and v k . This is implemented by a simple call to the
subroutine Power_method where the array A stores the entries of Bk .
2. Next step matrix: Once we have λk and v k stored over lambda(k) and
U(:,k) respectively, we can obtain Bk+1 by simply applying formula (1.31)
and storing its result on A:
N = size(A, dim=1)
do k=1, N
end do
end subroutine
Listing 1.20: Linear_systems.f90
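The deflation sweep can be sketched in Python (illustrative; the compact power iteration below stands in for the Power_method subroutine):

```python
# Illustrative deflation sketch: power iteration on B_k, then the update
# B_{k+1} = B_k - lambda_k v_k (x) v_k. Strictly ordered moduli
# |lambda_1| > |lambda_2| > ... are assumed.

def power_iter(B, iters=2000):
    n = len(B)
    u = [float(k + 1) for k in range(n)]
    for _ in range(iters):
        v = [sum(B[i][j] * u[j] for j in range(n)) for i in range(n)]
        nrm = sum(c * c for c in v) ** 0.5
        u = [c / nrm for c in v]
    lam = sum(u[i] * sum(B[i][j] * u[j] for j in range(n)) for i in range(n))
    return lam, u

A = [[2.0, 0.0], [0.0, 1.0]]
B = [row[:] for row in A]
eigs = []
for _ in range(len(A)):
    lam, v = power_iter(B)
    eigs.append(lam)
    # deflation update (1.31): subtract lambda_k v_k (x) v_k
    B = [[B[i][j] - lam * v[i] * v[j] for j in range(len(B))]
         for i in range(len(B))]
print([round(e, 6) for e in eigs])  # → [2.0, 1.0]
```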
For non singular matrices, a method which gives the eigenvalue of least module
|λn| and its associated eigenvector v_n is presented. If A is non singular, we
can premultiply (1.17) by its inverse A⁻¹, obtaining:

   A⁻¹ v = λ⁻¹ v.   (1.32)
Therefore, we can extract two conclusions, the first is that A and A−1 have the
same eigenvectors and the second is that their eigenvalues are inversely proportional
to each other. This means that if µ is an eigenvalue of A−1 with eigenvector v, it
satisfies µ = λ−1 . Therefore if the eigenvalues of A−1 satisfy |µn | > |µn−1 | ≥ · · · ≥
|µ1 |, we have that the dominant eigenvalue of A−1 is related to the eigenvalue of A
of minimum module λn. Hence, if we apply the power method to A⁻¹ we get λn⁻¹
and v_n. This method is known as the inverse power method for obvious reasons,
and its recursion is obtained substituting A by A⁻¹ in (1.22), leading to:
   x_{k+1} = A⁻¹ x_k / ‖x_k‖,

or equivalently:

   A x_{k+1} = x_k / ‖x_k‖,   (1.33)
and for each iteration we solve the system (1.33). The algorithm that carries out
the inverse power method is summarized in four steps:
1.6. INVERSE POWER METHOD 109
call LU_factorization(Ac)
2. Initial condition: We can set x0 to be any vector, for example its compo-
nents can be the natural numbers 1, 2, . . . , n:
U = [ (k, k=1, N) ]
V = solve_LU(Ac , U)
N = size(U)
allocate ( Ac(N,N), U0(N), V(N) )
Ac = A
call LU_factorization(Ac)
U = [ (k, k=1, N) ]
k = 1
do while( norm2(U-U0) > 1d-12 .and. k < k_max )
U0 = U
V = solve_LU(Ac , U)
U = V / norm2(V)
k = k + 1
end do
lambda = norm2(matmul(A, U))
end subroutine
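An illustrative Python sketch of the inverse power method (not the book's subroutine; a 2 × 2 Cramer solve stands in for the LU factorization and Solve_LU calls):

```python
# Illustrative inverse power sketch: each step solves A x_{k+1} = x_k/||x_k||.

def solve2(A, b):
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

A = [[2.0, 0.0], [0.0, 4.0]]            # eigenvalues 2 and 4
u = [1.0, 2.0]
for _ in range(200):
    v = solve2(A, u)                    # v = A^{-1} u
    nrm = sum(c * c for c in v) ** 0.5
    u = [c / nrm for c in v]
Au = [sum(a * b for a, b in zip(row, u)) for row in A]
lam = sum(c * c for c in Au) ** 0.5     # |lambda_n| = ||A u||
print(round(lam, 6))  # → 2.0
```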
Once we have seen how we can compute real eigenvalues and eigenvectors of nor-
mal matrices whose eigenvalues are strictly ordered, we can speak of probably the
most important matrix factorization. Singular Value Decomposition or SVD is a
factorization applicable to any real matrix A ∈ Mm×n and that provides all the
information about the fundamental sub-spaces of the matrix. Let’s recall that the
four fundamental sub-spaces of A are its image Im A ⊂ Rm and kernel ker A ⊂ Rn
and the image and kernel of its transpose Im AT ⊂ Rn and ker AT ⊂ Rm . The
singular value decomposition is a factorization such that:

   A = U Σ Vᵀ,   (1.38)

where U ∈ M_m×m and V ∈ M_n×n are orthogonal matrices and Σ ∈ M_m×n is a
diagonal matrix containing the singular values of A.
Let A ∈ M_m×n be a real matrix. It is easy to check that the matrix
AᵀA ∈ M_n×n is symmetric (and therefore normal) and positive semi-definite.
The symmetry is immediate taking into account the rule for the transpose of a
product; to prove that it is positive semi-definite we have to check that
x · AᵀAx ≥ 0 for any x ∈ Rⁿ. This is done as follows:
   ‖Ax‖² = Ax · Ax = x · AᵀAx ≥ 0.   (1.39)
Since AᵀA is positive semi-definite, all of its eigenvalues are not only real
but also non negative. If {λ1, . . . , λn} and {v_1, . . . , v_n} are the
eigenvalues and associated orthonormal eigenvectors of AᵀA respectively, we
can write:

   ‖A v_i‖² = v_i · AᵀA v_i = λ_i,

and therefore we conclude that λ_i ≥ 0 and we define the singular values of A
as:

   σ_i = √λ_i,   for i = 1, . . . , n.   (1.40)
And now we know that the entries of the main diagonal of Σ are just the square
roots of the eigenvalues of AᵀA. The answer to why (1.38) is a correct
factorization of A, and what the explicit expressions of U and V are, requires
stating some useful
facts. Let us suppose, without loss of generality, that only r of the
eigenvalues of AᵀA are non zero and that they are ordered as:

   λ1 ≥ λ2 ≥ · · · ≥ λr > λ_{r+1} = · · · = λn = 0.
   A v_i · A v_j = v_i · AᵀA v_j = λ_j v_i · v_j = λ_j δ_ij,   (1.42)

so the vectors {A v_1, . . . , A v_r} are orthogonal and can be normalized as:

   u_i = A v_i / σ_i,   for i = 1, . . . , r.   (1.43)
   V = [ v_1 · · · v_n ] ∈ M_n×n,   (1.45)

   U = [ u_1 · · · u_m ] ∈ M_m×m,   (1.46)

where, if r < m, we can always complete the set by choosing the vectors u_i
with i > r orthogonal to each other and to {u_1, . . . , u_r}; therefore both
V and U are orthogonal.
With the definition above, we can rewrite (1.44) in matrix form as:
AV = U Σ,
Now that we have seen that such a factorization is possible, let us see what
information about A is provided by V and U. First we notice that
{u_1, . . . , u_r} is a basis for Im A. To see this, let us pick a generic
element y ∈ Im A, that is, y = Ax
1.7. SVD DECOMPOSITION 113
for some x ∈ Rn and taking into account that the eigenvectors of AT A span Rn
we can project y over any Av i for i = 1, . . . , n as:
   y · A v_i = A x · A v_i = x · AᵀA v_i = Σ_j x_j v_j · AᵀA v_i = x_i λ_i.
Note that for i > r we have y·Av i = 0, which means that Av i is perpendicular to
any element of the image. This perpendicularity implies that the subspace spanned
by {Av r+1 , . . . , Av n } is orthogonal to Im A. This implies that if we project y onto
the set {u1 , . . . , um } (which is a basis of Rm ) we have:
   y = Σ_{i=1}^{m} (y · u_i) u_i = Σ_{i=1}^{r} (y · u_i) u_i,
so {u_1, . . . , u_r} is a basis of Im A. Moreover, A v_i = 0 for i > r, which
means that {v_{r+1}, . . . , v_n} forms a basis for ker AᵀA. From (1.39) we
deduce
that if x ∈ ker AT A then x ∈ ker A (ker AT A ⊂ ker A ). Conversely, we have that
if x ∈ ker A
   0 ≤ ‖AᵀA x‖ ≤ ‖Aᵀ‖ ‖A x‖ = 0,   (1.48)

where ‖Aᵀ‖ stands for the norm induced on matrices by the norm of the vector
space:

   ‖Aᵀ‖ = sup_{y ≠ 0} ‖Aᵀ y‖ / ‖y‖,   y ∈ Rᵐ,
and from (1.48) we have that ker AT A = ker A. Hence, the set {v r+1 , . . . , v n } is
a basis for ker A. This implies that these sets of vectors can be used to define
projection matrices (matrices whose image is always in the subspace onto which
they project). A dual argument will serve to prove that the remaining vectors of the
two sets of orthonormal vectors also serve as basis for the remaining fundamental
sub-spaces of A. Hence, if we define the reduced matrices:
   V_{n−r} = [ v_{r+1} · · · v_n ] ∈ M_{n×(n−r)},   (1.49)

   U_r = [ u_1 · · · u_r ] ∈ M_{m×r},   (1.50)
We have that the projection matrices onto ker A and Im A are, respectively:
T
Pker A = Vn−r Vn−r , (1.51)
PIm A = Ur UrT . (1.52)
114 CHAPTER 1. SYSTEMS OF EQUATIONS
where

   U_{m−r} = [ u_{r+1} · · · u_m ] ∈ M_{m×(m−r)},   (1.55)

   V_r = [ v_1 · · · v_r ] ∈ M_{n×r}.   (1.56)
Thus, the reader can get an idea of the importance of the SVD factorization:
once it is computed, it provides all the information about the four
fundamental sub-spaces, condensed in U and V. Besides, the rank of Σ is the
rank of A.
B = matmul( transpose(A), A )
call Eigenvalues_PM( B, sigma , V )
sigma = sqrt(sigma)
The whole process is embedded in the subroutine SVD, which takes A as input
and gives back the singular values, U and V, respectively, in the arrays
sigma, U and V.
1.8. CONDITION NUMBER 115
integer :: i, N
real, allocatable :: B(:,:)
N = size(A, dim=1)
B = matmul( transpose(A), A )
call Eigenvalues_PM( B, sigma , V )
sigma = sqrt(sigma)
do i=1, N
end subroutine
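The construction can be checked with a small illustrative Python sketch (not the book's subroutine): form B = AᵀA, obtain its dominant eigenvalue with a power iteration, and take the square root:

```python
# Illustrative sketch: singular values as square roots of eigenvalues of A^T A.

def matmul_t(A):
    """Return B = A^T A for a (possibly rectangular) matrix A."""
    m, n = len(A), len(A[0])
    return [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
            for i in range(n)]

def dominant_eig(B, iters=2000):
    n = len(B)
    u = [float(k + 1) for k in range(n)]        # x0 = (1, 2, ..., n)
    for _ in range(iters):
        v = [sum(B[i][j] * u[j] for j in range(n)) for i in range(n)]
        nrm = sum(c * c for c in v) ** 0.5
        u = [c / nrm for c in v]
    return sum(u[i] * sum(B[i][j] * u[j] for j in range(n)) for i in range(n))

A = [[3.0, 0.0], [0.0, 0.0], [0.0, 4.0]]        # singular values 4 and 3
sigma_max = dominant_eig(matmul_t(A)) ** 0.5
print(round(sigma_max, 6))  # → 4.0
```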
When solving a linear system of equations, the round-off error of the solution
is associated with the condition number of the system matrix. In order to
understand the motivation of this concept, consider a linear system of
equations such as:

   A x = b,

where x and b are vectors from a vector space V, equipped with the norm ‖·‖,
and A is a square matrix.
If an induced norm is defined for matrices, the previous equation yields a
measurable relation for the system of equations. In these conditions, the
following order relation is satisfied:

   ‖b‖ ≤ ‖A‖ ‖x‖.
Given the linearity of the system, if the vector b is perturbed with a
perturbation δb, the solution is perturbed as well with δx, and if A is non
singular, the following order relation is satisfied:

   ‖δx‖ ≤ ‖A⁻¹‖ ‖δb‖.

Combining both order relations, an upper bound for the relative perturbation
of the solution is obtained, that is:

   ‖δx‖ / ‖x‖ ≤ ‖A‖ ‖A⁻¹‖ ‖δb‖ / ‖b‖,
where ‖A‖ ‖A⁻¹‖ determines the upper bound of the perturbation of the
solution. The condition number κ(A) for this linear system can be written:

   κ(A) = ‖A‖ ‖A⁻¹‖.
Whenever the norm defined on V is the quadratic norm ‖·‖₂, the condition
number can be written in terms of the square roots of the maximum and minimum
module eigenvalues of AAᵀ, σmax and σmin:

   κ(A) = σmax / σmin,

as ‖A‖ = σmax and ‖A⁻¹‖ = 1/σmin.
integer :: i, j, k, N
real, allocatable :: B(:,:), U(:)
real :: sigma_max , sigma_min , lambda
N = size(A, dim=1)
allocate( U(N), B(N,N) )
B = matmul( transpose(A), A )
end function
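A minimal Python sketch of the estimate (illustrative, not the book's routine; for a diagonal example the eigenvalues of AᵀA can be read off directly, so no iteration is needed):

```python
# Illustrative sketch: kappa(A) = sigma_max / sigma_min in the 2-norm.

A = [[10.0, 0.0], [0.0, 0.1]]
n = len(A)
B = [[sum(A[k][i] * A[k][j] for k in range(n)) for j in range(n)]
     for i in range(n)]                       # B = A^T A = diag(100, 0.01)
sigmas = sorted(B[i][i] ** 0.5 for i in range(n))
kappa = sigmas[-1] / sigmas[0]
print(round(kappa, 6))  # → 100.0
```

A condition number of 100 means a relative perturbation of b may be amplified by up to two orders of magnitude in the solution.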
2.1 Overview
120 CHAPTER 2. LAGRANGE INTERPOLATION
The Lagrange polynomials ℓ_j(x) of grade N for a set of points {x_j},
j = 0, 1, 2, . . . , N, are defined as:

   ℓ_j(x) = Π_{i=0, i≠j}^{N} (x − x_i) / (x_j − x_i),   (2.1)
which satisfy:

   ℓ_j(x_i) = δ_ij,   (2.2)

where δ_ij is the Kronecker delta. This property is fundamental because, as we
will see, it permits obtaining the Lagrange interpolant very easily once the
Lagrange polynomials are determined.
   ℓ_{jk}(x) = Π_{i=0, i≠j}^{k−1} (x − x_i)/(x_j − x_i)
             = (x − x_{k−1})/(x_j − x_{k−1}) · Π_{i=0, i≠j}^{k−2} (x − x_i)/(x_j − x_i),   (2.3)

that is:

   ℓ_{jk}(x) = ℓ_{j,k−1}(x) (x − x_{k−1})/(x_j − x_{k−1}).   (2.4)
2.2. LAGRANGE INTERPOLATION 121
Hence, each polynomial ℓ_j can be obtained recursively, grade by grade, as:

   ℓ_{j1}(x) = (x − x_0)/(x_j − x_0),
   ℓ_{j2}(x) = (x − x_0)/(x_j − x_0) · (x − x_1)/(x_j − x_1),
   . . .
   ℓ_{jk}(x) = (x − x_0)/(x_j − x_0) · · · (x − x_{j−1})/(x_j − x_{j−1}) ·
               (x − x_{j+1})/(x_j − x_{j+1}) · · · (x − x_{k−1})/(x_j − x_{k−1}),

where all factors but the last one build ℓ_{j,k−1}(x).
From equation (2.4), a recursion to calculate the first k derivatives of
ℓ_{jk}(x) is obtained:
   ℓ'_{jk}(x) = ( ℓ'_{j,k−1}(x) (x − x_{k−1}) + ℓ_{j,k−1}(x) ) / (x_j − x_{k−1}),

   ℓ''_{jk}(x) = ( ℓ''_{j,k−1}(x) (x − x_{k−1}) + 2 ℓ'_{j,k−1}(x) ) / (x_j − x_{k−1}),

   . . .

   ℓ^(m)_{jk}(x) = ( ℓ^(m)_{j,k−1}(x) (x − x_{k−1}) + m ℓ^(m−1)_{j,k−1}(x) ) / (x_j − x_{k−1}),   (2.6)

   . . .

   ℓ^(k)_{jk}(x) = k ℓ^(k−1)_{j,k−1}(x) / (x_j − x_{k−1}),

where the last identity uses ℓ^(k)_{j,k−1}(x) = 0, since ℓ_{j,k−1} is a
polynomial of grade k − 1.
Note that equation (2.6) is also valid for m = 0, value for which it reduces
to (2.4). Hence, starting from the value ℓ_{j0} = 1, we can compute the
polynomial ℓ_{jk} and its first k derivatives using recursion (2.6). The idea
is: knowing ℓ_{j,k−1} and its first k − 1 derivatives, we start computing
ℓ^(k)_{jk}, then ℓ^(k−1)_{jk}, and so on until we calculate
ℓ^(0)_{jk} = ℓ_{jk}.
For k = 1:

   ℓ'_{j1}(x) = ℓ_{j0} / (x_j − x_0) = 1 / (x_j − x_0),
   ℓ_{j1}(x) = ℓ_{j0} (x − x_0)/(x_j − x_0) = (x − x_0)/(x_j − x_0).
The reason to sweep m in descending order through the interval [0, k] is that
computing the recursion in this manner permits implementing the calculation of
the derivatives storing the values of ℓ^(m)_{jk} over the values of
ℓ^(m)_{j,k−1}.
Once ℓ_{jk} and its first k derivatives are calculated, we can compute the
integral of ℓ_{jk} in the interval [x_0, x] from its truncated Taylor series
of grade k. Hence, we can express the integral as:

   ∫_{x_0}^{x} ℓ_{jk}(x) dx = ℓ_{jk}(x_0)(x − x_0) + ℓ'_{jk}(x_0)(x − x_0)²/2
       + · · · + ℓ^(k)_{jk}(x_0)(x − x_0)^{k+1}/(k + 1)!.   (2.7)
The computation of the first k derivatives and the integral for a grid of
k + 1 nodes is carried out by the function Lagrange_polynomials. The
derivatives and the integral at a point xp are stored in a vector d whose
dimension is the number of nodes of the set. For a fixed point of the grid
(that is, for fixed j), the following loop computes the derivatives and the
value of the Lagrange polynomial ℓ_j evaluated at xp.
! ** k derivative of lagrange(x) at xp
do r = 0, N
if (r/=j) then
do k = Nk , 0,-1
d(k) = ( d(k) *( xp - x(r) ) + k * d(k-1) ) /( x(j) - x(r) )
end do
endif
The integral is computed in a different loop once the derivatives are calculated
! ** integral of lagrange(x) from x(jp) to xp
f = 1
j1 = minloc( abs(x - xp) ) - 2
jp = max(0, j1(1))
do k=0, Nk
f = f * ( k + 1 )
d(-1) = d(-1) - d(k) * ( x(jp) - xp )**(k+1) / f
enddo
integer :: j ! node
integer :: r ! recursive index
integer :: k ! derivative
integer :: Nk ! maximum order of the derivative
integer :: N, j1(1), jp
Nk = size(x) - 1
N = size(x) - 1
do j = 0, N
d(-1:Nk) = 0
d(0) = 1
! ** k derivative of lagrange(x) at xp
do r = 0, N
if (r/=j) then
do k = Nk , 0, -1
d(k) = ( d(k) *( xp - x(r) ) + k * d(k-1) ) /( x(j) - x(r) )
end do
endif
enddo
! ** integral of lagrange(x) from x(jp) to xp
f = 1
j1 = minloc( abs(x - xp) ) - 2
jp = max(0, j1(1))
do k=0, Nk
f = f * ( k + 1 )
d(-1) = d(-1) - d(k) * ( x(jp) - xp )**(k+1) / f
enddo
Lagrange_polynomials (-1:Nk , j ) = d(-1:Nk)
end do
end function
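The in-place recursion (2.6) is easy to mirror in an illustrative Python sketch (not the book's function), which returns ℓ_j and its derivatives at a point:

```python
# Illustrative sketch of recursion (2.6): for a fixed node j, sweep the
# remaining nodes r and update the derivatives in descending order of k,
# so d[k] can overwrite the previous-grade values in place.

def lagrange_derivatives(x, j, xp, nk):
    """d[k] = k-th derivative of l_j at xp (d[0] is the value itself)."""
    d = [0.0] * (nk + 1)
    d[0] = 1.0
    for r in range(len(x)):
        if r == j:
            continue
        for k in range(nk, -1, -1):
            prev = d[k - 1] if k > 0 else 0.0
            d[k] = (d[k] * (xp - x[r]) + k * prev) / (x[j] - x[r])
    return d

x = [0.0, 1.0, 2.0]
# l_1(x) = x(2 - x): value 1 at x = 1, first derivative 0, second -2
d = lagrange_derivatives(x, j=1, xp=1.0, nk=2)
print(d)  # → [1.0, 0.0, -2.0]
```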
   I(x) = Σ_{j=0}^{N} b_j ℓ_j(x).   (2.8)
This interpolant is used to approximate the function f(x) within the interval
[x_0, x_N]. For this, the constants b_j of the linear combination must be
determined. The interpolant must intersect the exact function f(x) at the
nodal points x_i for i = 0, 1, 2 . . . , N, that is:

   I(x_i) = f(x_i),

which, by property (2.2), gives b_j = f(x_j). Note that in the equation above
the degree of ℓ_j does not necessarily need to be N; in general its degree q
satisfies q ≤ N.
N = size(x) - 1
if(present(degree))then
Nk = degree
else
Nk = 2
end if
allocate( Weights (-1:Nk , 0:Nk))
deallocate(Weights)
end function
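A direct illustrative Python sketch of (2.8) with b_j = f(x_j) (evaluating the definition (2.1) explicitly rather than using the book's array of weights):

```python
# Illustrative sketch of the Lagrange interpolant I(x) = sum f(x_j) l_j(x).

def interpolant(x, y, xp):
    total = 0.0
    for j in range(len(x)):
        lj = 1.0
        for i in range(len(x)):
            if i != j:
                lj *= (xp - x[i]) / (x[j] - x[i])   # definition (2.1)
        total += y[j] * lj
    return total

# three nodes reproduce any quadratic exactly: f(x) = x^2 - x + 1
x = [0.0, 1.0, 2.0]
y = [xi**2 - xi + 1 for xi in x]
print(interpolant(x, y, 0.5))  # → 0.75
```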
   I^(k)(x) = Σ_{j=0}^{N} f(x_j) ℓ^(k)_j(x).   (2.12)
for later use. Once the Lagrange polynomial derivatives are computed, we just
have to linearly combine the images f(x_j) using the elements of the array
Weights as coefficients.
N = size(x) - 1
M = size(xp) - 1
Nk = degree
allocate( Weights (-1:Nk , 0:Nk))
do i=0, M
do k=0, Nk
Interpolant(k, i) = sum ( Weights(k, 0:Nk) * y(s:s+Nk) )
end do
end do
deallocate(Weights)
end function
Listing 2.5: Interpolation.f90
The implementation is done in a function called Integral, given the set of
nodes x_i, the images y_i for i = 0, . . . , N, and optionally the degree of
the polynomials used.
integer :: N, j, s
real :: summation , Int, xp
N = size(x) - 1
if(present(degree))then
Nk = degree
else
Nk = 2
end if
summation = 0
do j=0, N
if (mod(Nk ,2) ==0) then
s = max( 0, min(j-Nk/2, N-Nk) )
else
s = max( 0, min(j-(Nk -1)/2, N-Nk) )
endif
xp = x(j)
Weights (-1:Nk , 0:Nk , j)=Lagrange_polynomials(x = x(s:s+Nk), xp = xp )
deallocate(Weights)
end function
Finally, an additional function which is important for the next chapter is
defined. This function determines the stencil, that is, the set of nodes that
an order q interpolation requires.
end function
Listing 2.7: Lagrange_interpolation.f90
Whenever the approximated function for the set of nodes {(x_i, y_j)}, with
i = 0, 1 . . . , N_x and j = 0, 1 . . . , N_y, is f : R² → R, the interpolant
I(x, y) can be calculated as a two dimensional extension of the interpolant
for the single variable function. In such a case, the interpolant I(x, y) can
be expressed as:

   I(x, y) = Σ_{i=0}^{N_x} Σ_{j=0}^{N_y} b_ij ℓ_i(x) ℓ_j(y).   (2.15)
Again, using the property of the Lagrange polynomials (2.2), the coefficients
are determined as b_ij = f(x_i, y_j), leading to:

   I(x, y) = Σ_{i=0}^{N_x} Σ_{j=0}^{N_y} f(x_i, y_j) ℓ_i(x) ℓ_j(y).   (2.17)
In particular, along the grid lines x = x_m and y = y_n:

   I(x_m, y) = Σ_{j=0}^{N_y} f(x_m, y_j) ℓ_j(y),
   I(x, y_n) = Σ_{i=0}^{N_x} f(x_i, y_n) ℓ_i(x),   (2.18)

and the interpolant can also be written as:

   I(x, y) = Σ_{i=0}^{N_x} I(x_i, y) ℓ_i(x) = Σ_{j=0}^{N_y} I(x, y_j) ℓ_j(y).   (2.19)
I(x, y) = `x · F · `y . (2.20)
Restricting y to a fixed value s, the function can be approximated by a one
dimensional interpolant:

   f(x, y)|_{y=s} = f̃(x; s) ≃ Ĩ(x; s) = Σ_{i=0}^{N_x} b_i(s) ℓ_i(x),   (2.21)

where

   b_i(s) = Σ_{j=0}^{N_y} b_ij ℓ_j(s).   (2.22)

That is,

   Ĩ(x; s) = I(x, y)|_{y=s} = Σ_{i=0}^{N_x} Σ_{j=0}^{N_y} b_ij ℓ_i(x) ℓ_j(s).   (2.23)
In the same manner, the interpolated value can be obtained restricting the
value at x = s:

   f(x, y)|_{x=s} = f̃(y; s) ≃ Ĩ(y; s) = Σ_{j=0}^{N_y} b_j(s) ℓ_j(y),   (2.25)

where

   b_j(s) = Σ_{i=0}^{N_x} b_ij ℓ_i(s).   (2.26)

That is,

   Ĩ(y; s) = I(x, y)|_{x=s} = Σ_{i=0}^{N_x} Σ_{j=0}^{N_y} b_ij ℓ_i(s) ℓ_j(y).   (2.27)
Both restrictions are consistent with the two dimensional interpolant:

   I(x, y) = Σ_{i=0}^{N_x} Σ_{j=0}^{N_y} b_ij ℓ_i(x) ℓ_j(y).   (2.28)
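The tensor product form (2.17) can be exercised with a short illustrative Python sketch (not the book's code):

```python
# Illustrative sketch of the 2D interpolant with b_ij = f(x_i, y_j).
# Bilinear nodes reproduce f(x, y) = x*y exactly.

def l(nodes, j, t):
    p = 1.0
    for i, xi in enumerate(nodes):
        if i != j:
            p *= (t - xi) / (nodes[j] - xi)
    return p

def interp2(x, y, f, xp, yp):
    return sum(f(x[i], y[j]) * l(x, i, xp) * l(y, j, yp)
               for i in range(len(x)) for j in range(len(y)))

x = [0.0, 1.0]
y = [0.0, 2.0]
print(interp2(x, y, lambda a, b: a * b, 0.5, 1.0))  # → 0.5
```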
[Figure: panels (a) and (b) showing the restricted interpolants Ĩ(x; s) and
Ĩ(y; s).]
134 CHAPTER 3. FINITE DIFFERENCES
The expression (3.3) is the finite difference formula of order q which approximates
the derivative of order k at the point xj . To illustrate the procedure let’s consider
the computation of the two first derivatives for order q = 2 and the set of equispaced
nodes {x0 , x1 , x2 }, that is x2 −x1 = x1 −x0 = ∆x. For this problem the interpolant
and its derivatives are:
   f(x) = f_0 (x − x_1)(x − x_2)/(2∆x²) − f_1 (x − x_0)(x − x_2)/∆x²
        + f_2 (x − x_0)(x − x_1)/(2∆x²),

   df(x)/dx = f_0 ((x − x_1) + (x − x_2))/(2∆x²) − f_1 ((x − x_0) + (x − x_2))/∆x²
        + f_2 ((x − x_0) + (x − x_1))/(2∆x²),

   d²f(x)/dx² = f_0/∆x² − 2 f_1/∆x² + f_2/∆x².
Note that the second derivative yields the famous centered second order finite
difference formula. Evaluating the first derivative at the nodal points, we obtain
the well-known forward, centered and backward finite difference approximations of
order 2:

df/dx (x_0) ≃ (−3f_0 + 4f_1 − f_2)/(2∆x),   df/dx (x_1) ≃ (f_2 − f_0)/(2∆x),   df/dx (x_2) ≃ (f_0 − 4f_1 + 3f_2)/(2∆x).

As an example of their use, consider the boundary value problem:
d²u/dx² + 2 du/dx + u = 0,   x ∈ (0, 1),

du/dx (0) = −2,   u(1) = 0.

Its discretization with the previous formulas reads:

(−3u_0 + 4u_1 − u_2)/(2∆x) = −2,

(u_{j−1} − 2u_j + u_{j+1})/∆x² + 2 (u_{j+1} − u_{j−1})/(2∆x) + u_j = 0,   j = 1, 2, . . . , N − 1,

u_N = 0,
whose solution is an approximation of u(x) in the nodal values. Note that for
every point j = 0, 1, . . . , N − 1 the formula used to approximate the first derivative
is different. This is because the set of Lagrange polynomials used to approximate
the derivative at each point is different. For j = 0 we use {ℓ_0, ℓ_1, ℓ_2} for the
stencil {0, 1, 2}, while for j = 1, . . . , N − 1 we use {ℓ_{j−1}, ℓ_j, ℓ_{j+1}} for the stencil
{j − 1, j, j + 1}. The selection of the stencil must be done taking into account the
order q of interpolation. In this example, we just had to differentiate between the
inner points 0 < j < N and the boundary points j = 0, N (note that if we needed to
compute derivatives at xN the formula would be the backward finite difference) but
for generic order q the situation is slightly different. First of all, the stencil for even
values of q consists of an odd number of nodal points and therefore the formulas
can be centered. On the contrary, for odd values of q as the stencil contains an even
number of nodal points the formulas are not centered. Nevertheless, in both cases
the stencil is composed of q + 1 nodal points which will be the ones used by the
corresponding Lagrange interpolants. In the following lines we give a classification
for both even and odd generic order q.
1. Even order: When q is even we have three possible scenarios for the stencil
depending on the nodal point xj . We classify the stencil in terms of its first
element which corresponds to the index j − q/2.
• For j − q/2 < 0 we use the stencil {x_0, . . . , x_q} and its associated Lagrange polynomials {ℓ_0(x), . . . , ℓ_q(x)} evaluated at x_j.
• For 0 ≤ j − q/2 ≤ N − q we use the stencil {x_{j−q/2}, . . . , x_{j+q/2}} and its associated Lagrange polynomials {ℓ_{j−q/2}(x), . . . , ℓ_{j+q/2}(x)} evaluated at x_j.
• For j − q/2 > N − q we use the stencil {x_{N−q}, . . . , x_N} and its associated Lagrange polynomials {ℓ_{N−q}(x), . . . , ℓ_N(x)} evaluated at x_j.
Figure 3.1 shows a sketch of the three possible stencils for even order and the
conditions under which they are used.
2. Odd order: When q is odd we have three possible scenarios for the stencil
depending on the nodal point xj . We classify the stencil in terms of its first
element which corresponds to the index j − (q − 1)/2.
Figure 3.1: Sketch of the possible stencils for finite differences of even order q. Each set of
nodes represents the q + 1 nodes that constitute the grid for the Lagrange polynomial
ℓ_j(x).
• For j − (q − 1)/2 < 0 we use the stencil {x_0, . . . , x_q} and its associated Lagrange polynomials {ℓ_0(x), . . . , ℓ_q(x)} evaluated at x_j.
• For 0 ≤ j − (q − 1)/2 ≤ N − q we use the stencil {x_{j−(q−1)/2}, . . . , x_{j+(q+1)/2}} and its associated polynomials {ℓ_{j−(q−1)/2}(x), . . . , ℓ_{j+(q+1)/2}(x)} evaluated at x_j.
• For j − (q − 1)/2 > N − q we use the stencil {x_{N−q}, . . . , x_N} and its associated Lagrange polynomials {ℓ_{N−q}(x), . . . , ℓ_N(x)} evaluated at x_j.
Figure 3.2 shows a sketch of the three possible stencils for odd order and the
conditions under which they are used.
Figure 3.2: Sketch of the possible stencils for finite differences of odd order q. Each set of
nodes represents the q + 1 nodes that constitute the grid for the Lagrange polynomial
ℓ_j(x).
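The stencil-selection rules above can be summarized in a short sketch (Python is used here for illustration only; the function name is hypothetical):

```python
# For order q and node index j on a grid x_0, ..., x_N, return the index of the
# first node of the (q + 1)-point stencil, following the classification above.
def stencil_start(j, q, N):
    if q % 2 == 0:                 # even order: try to centre the stencil
        first = j - q // 2
    else:                          # odd order: the stencil cannot be centred
        first = j - (q - 1) // 2
    if first < 0:                  # near the left boundary: {x_0, ..., x_q}
        return 0
    if first > N - q:              # near the right boundary: {x_{N-q}, ..., x_N}
        return N - q
    return first                   # interior: {x_first, ..., x_{first+q}}

# For q = 2, N = 10: j = 0 uses {0,1,2}, j = 5 uses {4,5,6}, j = 10 uses {8,9,10}.
starts = [stencil_start(j, 2, 10) for j in (0, 1, 5, 9, 10)]
```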
In order to store the information and properties of the grid, a derived data type
called Grid is defined and its instances declared as globals. This permits computing
the coefficients of the high order derivatives just once.
type Grid
character(len=30) :: name
real, allocatable :: Derivatives( :, :, :)
integer :: N
real, allocatable :: nodes (:)
end type
integer, save :: Order
integer, parameter :: Nmax = 20
type (Grid), save :: Grids (1: Nmax)
integer, save :: ind = 0
Listing 3.1: Finite_differences.f90
integer :: N, j, s
real :: xp
N = size(z_nodes) - 1;
do j=0, N
enddo
end subroutine
Listing 3.2: Finite_differences.f90
Order = q
if (d == 0) then
ind = ind + 1
Grids(ind) % N = size(nodes) - 1
Grids(ind) % name = direction
Taking this into account, the subroutine Derivative1D which calculates deriva-
tives of single variable functions is implemented as follows.
integer :: i, d, N
integer, allocatable :: sx(:)
integer :: k
d = 0
d = findloc( Grids (:) % name, direction , dim=1 )
k = derivative_order
if (d > 0) then
N = Grids(d) % N
allocate ( sx(0:N) )
sx = Stencilv( Order , N )
do i= 0, N
else
write(*,*) " Error Derivative1D"
stop
endif
end subroutine
integer :: i, j, d1 , d2 , Nx , Ny
integer, allocatable :: sx(:), sy(:)
integer :: k
d1 = 0 ; d1 = findloc( Grids (:) % name, direction (1), dim=1 )
d2 = 0 ; d2 = findloc( Grids (:) % name, direction (2), dim=1 )
k = derivative_order
do i=0, Nx
do j=0, Ny
if (coordinate == 1) then
Wxi(i,j) = dot_product( Grids(d1) % Derivatives(k, 0:Order , i), &
W(sx(i):sx(i)+Order , j) );
else
write(*,*) " Error Derivative2D"
write(*,*) "Grids =", Grids (:)% name, "direction =", direction
write(*,*) "d1 =", d1 , "d2 =", d2
stop
end if
end subroutine
4.1 Overview
From the physical point of view, a Cauchy problem represents the evolution of a
physical system with a number of degrees of freedom. From the movement of a
material point in a three-dimensional space to the movement of satellites or stars,
the motion is governed by a system of ordinary differential equations. If the
initial condition of all degrees of freedom of the system is known, the motion can
be predicted and the problem is named a Cauchy problem. Generally, this system
involves first and second order derivatives of functions that depend on time. In
order to design and to use the different temporal schemes, the problem is always
formulated as a system of first order equations:
dU/dt = F(U; t),   F : R^N × R → R^N,   (4.1)

U(t_0) = U^0,   ∀ t ∈ [t_0, +∞).   (4.2)
142 CHAPTER 4. CAUCHY PROBLEM
The idea of any temporal scheme is to approximate the integral appearing in (4.3).
Once the integral is approximated, U^n is used to denote the approximate value to
differentiate it from the exact value U(t_n). In figure 4.1, a scheme with the
nomenclature of this chapter is shown. The superscript n stands for the approximated
value at the temporal instant t_n. The approximated value of F(U(t_n), t_n) is denoted
by F^n.
[Figure 4.1: nomenclature of the temporal schemes; the approximations U^0, U^1, . . . , U^{n−1}, U^n, U^{n+1} at the time instants t_n, with ∆t_n = t_{n+1} − t_n.]
Depending on how the integral appearing in equation (4.3) is evaluated, the different
schemes are divided into several groups.
From the implementation point of view, two main subroutines are designed.
Given a temporal domain partition [t_i, i = 0, . . . , M], a subroutine called
Cauchy_ProblemS is responsible for calling the different temporal schemes that
approximate (4.3). In the following code, the implementation of this subroutine is shown:
if (present(Scheme)) then
call Scheme( Differential_operator , t1 , t2 , &
Solution(i,:), Solution(i+1,:), ierr )
dt = t2 - t1; t = t1
k1 = F( U1, t )
k2 = F( U1 + dt * k1/2, t + dt/2 )
k3 = F( U1 + dt * k2/2, t + dt/2 )
k4 = F( U1 + dt * k3, t + dt )
U2 = U1 + dt/6 * ( k1 + 2*k2 + 2*k3 + k4 )
ierr = 0
end subroutine
This is the classical fourth order Runge-Kutta scheme. Given the input value U1
and the vector function F, the scheme calculates the value U2. In the following
code, the interface of the vector function F is shown:
function ODES( U, t)
real :: U(:), t
real :: ODES( size(U) )
end function
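As an illustration of this interface, the classical fourth order Runge-Kutta step can be sketched in Python (an illustrative translation, not the book's Fortran code):

```python
import numpy as np

def rk4_step(F, U1, t1, t2):
    """One classical RK4 step from (U1, t1) to t2; mirrors the Fortran names."""
    dt = t2 - t1
    k1 = F(U1, t1)
    k2 = F(U1 + dt * k1 / 2, t1 + dt / 2)
    k3 = F(U1 + dt * k2 / 2, t1 + dt / 2)
    k4 = F(U1 + dt * k3, t1 + dt)
    return U1 + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Usage: dU/dt = -U, U(0) = 1; one step of dt = 0.1 approximates exp(-0.1).
U = rk4_step(lambda U, t: -U, np.array([1.0]), 0.0, 0.1)
```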
When the integral of equation (4.3) carried out by a temporal scheme involves the
value U^{n+1}, the resulting scheme becomes implicit and a nonlinear system of N
equations must be solved at each time step. Since the complexity and the
computational cost of implicit methods are much greater than those of explicit
methods, the only reason to implement them relies on their stability behavior.
Generally, implicit methods do not require time step limitations or constraints to
be numerically stable. The simplest implicit method is the inverse Euler method.
From the implementation point of view, the scheme follows the methodology presented
above. In the following code, the subroutine Inverse_Euler uses a Newton method to
solve equation (4.5) at each time step.
dt = t2 -t1
U2 = U1
G = X - U1 - dt * F(X, t2)
end function
end subroutine
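The same idea can be sketched in Python (an illustrative version, not the book's code; the finite-difference Jacobian and the fixed number of iterations are simplifying assumptions):

```python
import numpy as np

def inverse_euler_step(F, U1, t1, t2, iters=10, eps=1e-8):
    """Solve G(X) = X - U1 - dt*F(X, t2) = 0 with a plain Newton iteration."""
    dt = t2 - t1
    X = U1.copy()                       # initial guess: the previous value
    n = len(X)
    for _ in range(iters):
        G = X - U1 - dt * F(X, t2)
        J = np.empty((n, n))            # finite-difference Jacobian of G
        for j in range(n):
            dX = np.zeros(n); dX[j] = eps
            J[:, j] = ((X + dX) - U1 - dt * F(X + dX, t2) - G) / eps
        X = X - np.linalg.solve(J, G)
    return X

# Usage: dU/dt = -U with dt = 0.1 gives the implicit solution U2 = U1/(1 + dt).
U2 = inverse_euler_step(lambda U, t: -U, np.array([1.0]), 0.0, 0.1)
```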
Since the error of a numerical solution is defined as the difference between the
exact solution u(t_n) and the approximate solution U^n at the same instant t_n,

E^n = u(t_n) − U^n,   (4.6)

the determination of the error requires knowing the exact solution. This situation
is unusual and makes it necessary to find an alternative technique.
For one-step methods this expansion can be found. However, for multi-step
methods, the presence of spurious solutions does not allow this expansion. To cure
this problem and to eliminate the oscillatory behavior of the error, averaged values
U̅ can be defined as:

U̅^n = ( U^n + 2U^{n−1} + U^{n−2} ) / 4,   (4.8)

allowing expansions like (4.7).
If the error can be expanded like in (4.7), then by integrating on two grids, one
with time step ∆t_n and the other with ∆t_n/2, an estimation of the error based on
Richardson's extrapolation can be found. Let U_1 be the solution integrated with
∆t_n and U_2 the solution integrated with ∆t_n/2. The expression (4.7) for the two
solutions yields:

E^n = ( U_2^{2n} − U_1^n ) / ( 1 − 1/2^q ).   (4.12)
4.4. RICHARDSON’S EXTRAPOLATION TO DETERMINE ERROR 147
do i=0, N-1
t2(2*i) = t1(i)
t2(2*i+1) = ( t1(i) + t1(i+1) )/2
end do
t2(2*N) = t1(N)
do i=N, 0, -1
U1(i,:) = ( U1(i, :) + 2 * U1(i-1, :) + U1(i-2, :) )/4
end do
do i=2*N, 0, -1
U2(i,:) = ( U2(i, :) + 2 * U2(i-1, :) + U2(i-2, :) )/4
end do
do i=0, N
Error(i,:) = ( U2(2*i, :)- U1(i, :) )/( 1 - 1./2** order )
end do
Solution = U1 + Error
deallocate ( t1 , U1 , t2 , U2 )
end subroutine
Given a Time_Domain, two temporal grids t1 and t2 are defined. While t1
is the original temporal grid, t2 has twice as many points as t1 and is obtained
by halving the time steps of t1. Then, two independent simulations U1 and U2
are carried out starting from the same initial condition. They are averaged with
expression (4.8) to eliminate oscillations and Error is calculated with expression
(4.12). Finally, the Error is used to correct the U1 solution to give the Solution.
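The procedure can be illustrated with a minimal Python sketch (not the book's code), using the explicit Euler scheme of order q = 1:

```python
import numpy as np

def euler(F, U0, t):
    """Explicit Euler integration on the grid t (first order, q = 1)."""
    U = [U0]
    for t1, t2 in zip(t[:-1], t[1:]):
        U.append(U[-1] + (t2 - t1) * F(U[-1], t1))
    return np.array(U)

q = 1                                    # order of the Euler scheme
t1 = np.linspace(0.0, 1.0, 11)           # time step dt
t2 = np.linspace(0.0, 1.0, 21)           # time step dt/2
U1 = euler(lambda u, t: -u, 1.0, t1)
U2 = euler(lambda u, t: -u, 1.0, t2)
error = (U2[::2] - U1) / (1 - 1 / 2**q)  # estimated error at the nodes of t1
corrected = U1 + error                   # Richardson-corrected solution
```

For dU/dt = −U the corrected value at t = 1 is noticeably closer to exp(−1) than the plain Euler solution.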
U1(0,:) = U0(:)
t1 = Time_Domain
call Cauchy_ProblemS( t1 , Differential_operator , U1 , Scheme )
do i = 1, m ! simulations in different grids
N = 2 * N
allocate ( t2(0:N), U2(0:N, Nv) )
t2(0:N:2) = t1; t2(1:N -1:2) = ( t1(1:N/2) + t1(0:N/2-1) )/ 2
U2(0,:) = U0(:)
where h = t_{n+1} − t_n, the matrix a_{ij} is the Butcher array, b_i and c_i are
constants of the scheme and

k_i = F( t_n + c_i h, u^n + Σ_{j=1}^{e} a_{ij} k_j ).

Note that since c_1 = 0, it does not appear in the Butcher array. In the special case
in which a_{ij} = 0, ∀ i ≤ j, the Runge-Kutta scheme is explicit, that is, k_i can be
obtained from {k_1, . . . , k_{i−1}}.
The embedded Runge-Kutta method uses two explicit schemes sharing ci and
aij for all i, j ≤ e. Therefore, this method has the extended Butcher’s array:
 c_2 | a_21
  ⋮  |  ⋮      ⋱
 c_e | a_e1  . . .  a_{e,e−1}
--------------------------------                    (4.16)
 u^{n+1} | b_1  . . .  b_e
 û^{n+1} | b̂_1  . . .  b̂_e
in which b_i and b̂_i are respectively the coefficients of the approximated solutions
u^{n+1} and û^{n+1}. Since the local truncation error of u^{n+1} is C h^{q+1} and the
error of û^{n+1} is Ĉ h^{q+2}, an estimation of the local truncation error T^{n+1} of
order q + 1 is obtained by subtracting the two approximations:

T^{n+1} = u^{n+1} − û^{n+1} = C h^{q+1}.   (4.17)

If the norm of the local truncation error must be less than a prescribed tolerance ε,
then the optimum time step can be obtained from the previous time step as:

ĥ = h ( ε / ‖T^{n+1}‖ )^{1/(q+1)}.   (4.20)
This step size selection is implemented in the following code:
real :: normT
normT = norm2(dU)
ierr = 0
end subroutine
h = t2 - t1
end do
N_eRK_effort = N_eRK_effort + Ne
U2 = U1 + h * matmul( b, k )
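The step-size rule (4.20) can be sketched as follows (a Python illustration; the safety bounds h_min and h_max are assumptions, not part of the book's formula):

```python
import numpy as np

def optimal_step(h, T, q, eps, h_min=1e-12, h_max=1.0):
    """h_new = h * (eps / ||T||)**(1/(q+1)), clipped to [h_min, h_max]."""
    normT = np.linalg.norm(T)
    if normT == 0.0:
        return h_max                     # error negligible: take the largest step
    h_new = h * (eps / normT) ** (1.0 / (q + 1))
    return min(max(h_new, h_min), h_max)

# If ||T|| is 2**(q+1) times larger than eps, the step is halved:
h = optimal_step(0.1, np.array([8e-6]), q=2, eps=1e-6)
```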
A pair of Runge-Kutta schemes is identified by its name, which must be previously
selected by the subroutine set_solver. If the subroutine is called with
tag="First", the Butcher array is created and the values of the different stages k_i
are calculated and stored in k(i,:), where the first index stands for the stage
and the second index represents the variable. Then, an approximate value for U2 is
calculated by using the b coefficients previously defined. If the subroutine is
called with tag="Second", since the Butcher array and the k(i,:) are saved, an
approximate value for U2 is calculated by using the bs coefficients previously
defined.
4.7. GRAGG-BULIRSCH-STOER METHOD 153
In the GBS algorithm, the error is improved by consecutively halving the interval
[t_n, t_{n+1}] and by using Richardson's extrapolation technique.
1. Time grid levels. Each level i divides the interval into 2n_i substeps:

t_j = t_n + j h_i,   j = 0, 1, . . . , 2n_i,   (4.21)

where h_i = h/(2n_i).
2. Modified midpoint scheme. The solution u_i^{n+1} is obtained in each level by
applying the modified midpoint scheme.
3. Richardson extrapolation. The solution is corrected with an estimation of the error:

u^{n+1} = u_1^{n+1} + E^{n+1}.
In the following discussion, a formula for the error E^{n+1} based on the solutions
u_i^{n+1} will be given. It can be proven that the global error of the modified
midpoint scheme admits an expansion in even powers of the step size. For each level
or grid, the temporal step is written h_i = h/n_i, which leads to l expressions for
the error:
E_1(t_{n+1}) = u(t_{n+1}) − u_1^{n+1} = Σ_{j=1}^{∞} k_{2j}(t_n) h_1^{2j} = Σ_{j=1}^{∞} k_{2j}(t_n) (h/n_1)^{2j},

E_2(t_{n+1}) = u(t_{n+1}) − u_2^{n+1} = Σ_{j=1}^{∞} k_{2j}(t_n) h_2^{2j} = Σ_{j=1}^{∞} k_{2j}(t_n) (h/n_2)^{2j},

. . .

E_i(t_{n+1}) = u(t_{n+1}) − u_i^{n+1} = Σ_{j=1}^{∞} k_{2j}(t_n) h_i^{2j} = Σ_{j=1}^{∞} k_{2j}(t_n) (h/n_i)^{2j},

. . .

E_l(t_{n+1}) = u(t_{n+1}) − u_l^{n+1} = Σ_{j=1}^{∞} k_{2j}(t_n) h_l^{2j} = Σ_{j=1}^{∞} k_{2j}(t_n) (h/n_l)^{2j}.
The above expression yields the exact global error but requires an infinite number
of terms. If the number of levels is high enough, the rate of convergence of the
series in h^{2j} allows truncating it to give a good approximation. That is, an
estimation of the error can be obtained with l levels:

u_{i+1}^{n+1} − u_i^{n+1} ≃ Σ_{j=1}^{l−1} A_{ij} k_{2j}(t_n) h^{2j},   (4.29)

where A_{ij} = (1/n_i)^{2j} − (1/n_{i+1})^{2j},
or in vector form:

( u_2^{n+1} − u_1^{n+1}, . . . , u_{i+1}^{n+1} − u_i^{n+1}, . . . , u_l^{n+1} − u_{l−1}^{n+1} )^T = A ( k_2(t_n) h^2, . . . , k_{2j}(t_n) h^{2j}, . . . , k_{2(l−1)}(t_n) h^{2(l−1)} )^T.   (4.30)

The error estimation is then obtained as:

E^{n+1} = (1, . . . , 1) ( k_2(t_n) h^2, . . . , k_{2(l−1)}(t_n) h^{2(l−1)} )^T = (1, . . . , 1) A^{−1} ( u_2^{n+1} − u_1^{n+1}, . . . , u_l^{n+1} − u_{l−1}^{n+1} )^T,   (4.33)

and the corrected solution is:

u^{n+1} = u_1^{n+1} + E^{n+1}.
N_levels = 1; Error = 10
do while (norm2(Error) > Tolerance)
The required operations described in expression (4.33) to obtain the error
estimation are implemented in the following code:
do i = 1, q
do j = 1, q
A(j,i) = ( ( 1./n(i))**(2*j) - (1./n(i+1))**(2*j) )
end do
end do
! *** Vector b computation
ones = 1.
call LU_factorization( A )
b = Solve_LU( A , ones )
end subroutine
Listing 4.11: Gragg_Burlisch_Stoer.f90
h = (t - t0) / ( 2*n )
U(:,0) = U0
U(:,1) = U(:,0) + h * F( U(:,0), t0 )
do i=1, 2*n
ti = t0 + i*h
U(:, i+1) = U(:, i-1) + 2*h* F( U(:,i), ti )
end do
Un = ( U(:, 2*n-1) + 2 * U(:, 2*n) + U(:, 2*n+1) )/4.
end subroutine
Listing 4.12: Gragg_Burlisch_Stoer.f90
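For illustration, the modified midpoint scheme of Listing 4.12 can be translated into a short Python sketch (the names are hypothetical):

```python
import numpy as np

def modified_midpoint(F, U0, t0, t, n):
    """2n leapfrog substeps of size h = (t - t0)/(2n), closed with the
    smoothing average u_n = (U_{2n-1} + 2 U_{2n} + U_{2n+1}) / 4."""
    h = (t - t0) / (2 * n)
    U = [np.asarray(U0), np.asarray(U0) + h * F(U0, t0)]   # Euler starter
    for i in range(1, 2 * n + 1):
        U.append(U[i - 1] + 2 * h * F(U[i], t0 + i * h))   # leapfrog substep
    return (U[2 * n - 1] + 2 * U[2 * n] + U[2 * n + 1]) / 4

# Usage: dU/dt = -U over [0, 1] with n = 8 approximates exp(-1).
Un = modified_midpoint(lambda u, t: -u, np.array([1.0]), 0.0, 1.0, 8)
```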
In this chapter, the last of the classical high order temporal schemes will be pre-
sented, the Adams-Bashforth-Moulton methods (ABM). These methods are based
on linear multi-step methods (Adams) which can be explicit (Adams-Bashforth) or
implicit (Adams-Moulton). Given an interval [tn , tn+1 ] of length ∆t, this family of
methods gives the solution at the end of the interval as:
u^{n+1} = u^n + ∆t Σ_{j=0}^{s} β_j F^{n+1−j},   (4.34)

where F^{n+1−j} = F(t_{n+1−j}, u^{n+1−j}) and β_j are the coefficients of the scheme.
Note that an s-step explicit method satisfies β_0 = 0, and an implicit method of
the same number of steps satisfies β_s = 0. The resolution of explicit methods is
straightforward, whereas for implicit methods the solution must be obtained by
an iterative process.
The origin of the coefficients, and therefore of the methods, lies in approximating
the quadrature:

u^{n+1} = u^n + ∫_{t_n}^{t_{n+1}} F(t, u) dt.   (4.37)
Hence, for a fixed step size ∆t the coefficients are obtained by the approximation:

Explicit: u^{n+1} ≃ u^n + ∫_{t_n}^{t_{n+1}} I(t) dt = u^n + Σ_{j=1}^{s} F^{n+1−j} ∫_{t_n}^{t_{n+1}} ℓ_{n+1−j} dt,   (4.42)

Implicit: u^{n+1} ≃ u^n + ∫_{t_n}^{t_{n+1}} I(t) dt = u^n + Σ_{j=0}^{s−1} F^{n+1−j} ∫_{t_n}^{t_{n+1}} ℓ_{n+1−j} dt,   (4.43)
therefore the coefficients for both explicit and implicit schemes (choosing the
appropriate interpolant) are written as:

β_j = (1/∆t) ∫_{t_n}^{t_{n+1}} ℓ_{n+1−j} dt.   (4.44)
One interesting remark is that the coefficients depend on the step size distribution
of the temporal grid. For this reason, it becomes very expensive to compute
variable step size Adams methods.
Example 1. Two steps Adams-Bashforth. Let us consider the case s = 2 and
constant ∆t for an explicit method. In these conditions, the interpolant of the
differential operator can be written:

I(t) = F^n ℓ_n(t) + F^{n−1} ℓ_{n−1}(t) = F^n (t − t_{n−1})/∆t − F^{n−1} (t − t_n)/∆t,
and the coefficients are calculated as:

β_1 = (1/∆t) ∫_{t_n}^{t_{n+1}} ℓ_n dt = (1/∆t) ∫_{t_n}^{t_{n+1}} (t − t_{n−1})/∆t dt = 3/2,

β_2 = (1/∆t) ∫_{t_n}^{t_{n+1}} ℓ_{n−1} dt = −(1/∆t) ∫_{t_n}^{t_{n+1}} (t − t_n)/∆t dt = −1/2.
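These coefficients can be checked numerically (a Python sketch, not the book's code): since the integrands ℓ_n and ℓ_{n−1} are linear in t, the one-point midpoint rule evaluates the integrals in (4.44) exactly.

```python
# Check beta_1 = 3/2 and beta_2 = -1/2 with t_n = 0, t_{n-1} = -dt, dt = 1:
# (1/dt) * integral of a linear l(t) over [t_n, t_n + dt] equals l(midpoint).
dt = 1.0
tn, tnm1 = 0.0, -1.0             # t_n and t_{n-1}
tm = tn + dt / 2                 # midpoint of [t_n, t_{n+1}]
beta1 = (tm - tnm1) / dt         # l_n(tm)     ->  3/2
beta2 = -(tm - tn) / dt          # l_{n-1}(tm) -> -1/2

def ab2_step(u, F_n, F_nm1, dt):
    """One two-step Adams-Bashforth step: u + dt*(3/2 F_n - 1/2 F_{n-1})."""
    return u + dt * (beta1 * F_n + beta2 * F_nm1)
```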
where

ℓ_n(t) = (t − t_{n−1})(t − t_{n−2}) / (2∆t²),

ℓ_{n−1}(t) = −(t − t_n)(t − t_{n−2}) / ∆t²,

ℓ_{n−2}(t) = (t − t_n)(t − t_{n−1}) / (2∆t²),
and the coefficients are calculated as:

β_1 = (1/∆t) ∫_{t_n}^{t_{n+1}} ℓ_n dt = (1/∆t) ∫_{t_n}^{t_{n+1}} (t − t_{n−1})(t − t_{n−2})/(2∆t²) dt = 23/12,

β_2 = (1/∆t) ∫_{t_n}^{t_{n+1}} ℓ_{n−1} dt = −(1/∆t) ∫_{t_n}^{t_{n+1}} (t − t_n)(t − t_{n−2})/∆t² dt = −16/12,

β_3 = (1/∆t) ∫_{t_n}^{t_{n+1}} ℓ_{n−2} dt = (1/∆t) ∫_{t_n}^{t_{n+1}} (t − t_n)(t − t_{n−1})/(2∆t²) dt = 5/12.
4.8. ABM OR MULTI-VALUE METHODS 161
Note that if ∆t changed its value at each step, that is ∆t_1 = t_n − t_{n−1},
∆t_2 = t_{n−1} − t_{n−2}, the Lagrange polynomials would depend on these step sizes.
In light of the previous examples, it is possible to obtain the coefficients for any
desired value of s in a similar manner. However, an algorithm that controls the step
size would require calculating the coefficients at each step of the simulation.
This means a high computational cost for the algorithm in terms of computation
time, which is undesirable.
The multi-value formulation permits reducing the computational and implementation
cost associated with obtaining the coefficients β_j of Adams methods.
u_n^{j)} = d^j u/dt^j |_{t_n},   u_n^{0)} = u_n.
From the expansion (4.48), the values of the first s derivatives of ũ at t_{n+1}
can be obtained. In general, the i-th derivative can be written:

ũ_{n+1}^{i)} = Σ_{j=i}^{s} u_n^{j)} ∆t^{j−i} / (j − i)!,   (4.49)
where the same notation holds for ũ_{n+1}^{i)}. For Nordsieck methods, instead of
saving the values of the differential operator at the s steps, the values of the s
derivatives are stored. In particular, the addends y_n^i = ∆t^i u_n^{i)} / i! of the
expansion are stored. Defining the matrix:

B_{ij} = 0 if j < i,   B_{ij} = j! / ( i! (j − i)! ) if j ≥ i,   i, j = 0, 1, 2, . . . , s,
we can write:

ỹ_{n+1}^i = Σ_{j=i}^{s} B_{ij} y_n^j,   i = 0, 1, 2, . . . , s.   (4.50)
The extrapolation ỹ_{n+1}^i must be corrected by two parameters α ∈ R^{N_v} and r_i as:

u_{n+1}^{i)} = ũ_{n+1}^{i)} + ( r_i i! / ∆t^i ) α.
The values of the coefficients r_i and α have not yet been defined. These quantities
will be obtained so that the multi-value methods become equivalent to Adams methods,
as we will see in the next pages.
To obtain the coefficients of the multi-value method, let us consider the case
N_v = 1; once it is obtained, the case N_v > 1 is straightforward. For this case we
can write:

y_{n+1}^i = ỹ_{n+1}^i + r_i α,   i = 0, 1, 2, . . . , s,   (4.52)

u_{n+1}^{i)} = ũ_{n+1}^{i)} + ( r_i i! / ∆t^i ) α,   i = 0, 1, 2, . . . , s.   (4.53)
The value of α is obtained by forcing the solution for i = 1, that is, the
derivative, to satisfy the differential equation: u'_{n+1} = F(u_{n+1}, t_{n+1}) = F_{n+1}.
We shall fix r_1 = 1 for convenience. With this restriction, α is determined from the
second equation of (4.51) as:

α = ∆t F_{n+1} − ∆t ũ'_{n+1} = ∆t F_{n+1} − ∆t Σ_{j=1}^{s} ( u_n^{j)} / (j − 1)! ) ∆t^{j−1}.
Note that imposing this value for α introduces the information of the differential
equation into the multi-value method. The rest of the coefficients r_i can be
obtained as:

r_i = ( ∆t^i / i! ) ( u_{n+1}^{i)} − ũ_{n+1}^{i)} ) / α
    = ( ∆t^{i−1} / i! ) ( u_{n+1}^{i)} − ũ_{n+1}^{i)} ) / ( F_{n+1} − ũ'_{n+1} )
    = ( ∆t^{i−1} / i! ) ( u_{n+1}^{i)} − ũ_{n+1}^{i)} ) / ( F_{n+1} − Σ_{j=1}^{s} ( u_n^{j)} / (j − 1)! ) ∆t^{j−1} ).
Notice that in this equation, for each r_i there is an associated value of
u_{n+1}^{i)}. Therefore, the value of the coefficients can be fixed by requiring that
u_{n+1}^{i)} comes from the (i − 1)-th derivative of an interpolant I(t), which is
called I^{i−1)}. That is, imposing:

u_{n+1}^{i)} = δ_{i0} u_n + I_{n+1}^{i−1)},   I_{n+1}^{i−1)} := I^{i−1)}(t_{n+1}),   i = 0, 2, 3, . . . , s,   (4.54)

where δ_{i0} is the Kronecker delta, and I^{−1)} is defined as the integral:

I^{−1)}(t_{n+1}) = ∫_{t_n}^{t_{n+1}} I dt.   (4.55)
The stencil of the interpolant has not yet been specified. For i > 1 the interpolant
must include the differential operator evaluated at the next step, that is F^{n+1}.
In the case i = 0, however, it depends on whether the Adams method associated to the
coefficient values is explicit or implicit. As stated previously, the interpolant
takes a different form depending on this characteristic of the scheme. For Bashforth
methods, the value obtained for u_{n+1} is exactly the value of the extrapolation
when the derivatives of u_n are computed as the derivatives (of one order less) of I
at t_n, that is:

u_{n+1} = u_n + ∫_{t_n}^{t_{n+1}} I dt = u_n + Σ_{j=1}^{s} F_{n+1−j} ℓ_{n+1−j}^{−1)}(t_{n+1}),
ũ_{n+1} = u_n + Σ_{i=1}^{s} ( ∆t^i / i! ) u_n^{i)}
        = u_n + Σ_{i=1}^{s} ( ∆t^i / i! ) Σ_{j=1}^{s} F_{n+1−j} ℓ_{n+1−j}^{i−1)}(t_n)
        = u_n + Σ_{j=1}^{s} F_{n+1−j} ( Σ_{i=1}^{s} ( ∆t^i / i! ) ℓ_{n+1−j}^{i−1)}(t_n) ).
we have that the value at the next step u_{n+1} is the same as the extrapolation
ũ_{n+1}, and r_0 = 0 for explicit Adams. Note that this can also be intuited from
the equation:

r_i = ( ∆t^{i−1} / i! ) ( u_{n+1}^{i)} − ũ_{n+1}^{i)} ) / ( F_{n+1} − ũ'_{n+1} ).
For the case i = 0, r_0 represents the difference between the extrapolation ũ_{n+1}
and the solution given by the scheme u_{n+1}. This means that for implicit methods,
it shall be obtained by determining u_{n+1} with the interpolant for implicit
methods, which we will call I, and ũ_{n+1} with the interpolant for explicit
methods, which we will call Ĩ. Their Lagrange polynomials will be called ℓ_k and ℓ̃_k
respectively. Therefore, we can write r_0 for implicit methods as:
r_0 = ( u_{n+1} − ũ_{n+1} ) / ( F^{n+1} − ũ'_{n+1} ) · (1/∆t)

    = ( Σ_{j=0}^{s−1} F^{n+1−j} ℓ_{n+1−j}^{−1)} − Σ_{j=1}^{s} F^{n+1−j} ℓ̃_{n+1−j}^{−1)} ) / ( F^{n+1} − Σ_{j=1}^{s} F^{n+1−j} ℓ̃'_{n+1−j} ) · (1/∆t)

    = β_0 ( F^{n+1} + Σ_{j=1}^{s−1} F^{n+1−j} β_j/β_0 − Σ_{j=1}^{s} F^{n+1−j} β̃_j/β_0 ) / ( F^{n+1} − Σ_{j=1}^{s} F^{n+1−j} ℓ̃'_{n+1−j} )

    = β_0.   (4.56)
Hence, for Adams-Moulton, the first coefficient r_0 of the multi-value expression is
the same as the coefficient β_0 of the classical approach. To check the veracity of
these claims, let us consider some examples.
Example 4. Two steps Adams-Bashforth: For this method we have that:

ℓ_n^{−1)}(t_{n+1}) = 3∆t/2 = β_1 ∆t,   ℓ'_n(t_n) = 1/∆t,

ℓ_{n−1}^{−1)}(t_{n+1}) = −∆t/2 = β_2 ∆t,   ℓ'_{n−1}(t_n) = −1/∆t,

and therefore:

r_0 = (1/2) ( F^{n+1} + F^n − 3F^n + F^{n−1} ) / ( F^{n+1} − 2F^n + F^{n−1} ) = 1/2 = β_0.
Whenever we want to determine the coefficients for i > 1 (for both explicit and
implicit methods), the obtention of r_i must be done with an interpolant which
includes the value of F at t_{n+1}. As the interpolant is of degree s − 1, if we did
not include the value at t_{n+1}, the value of u_{n+1}^{s)} would be the same as
u_n^{s)}. In this case:

r_i = ( ∆t^{i−1} / i! ) ( u_{n+1}^{i)} − ũ_{n+1}^{i)} ) / ( F_{n+1} − ũ'_{n+1} )
    = ( ∆t^{i−1} / i! ) ( δ_{i0} u_n + Σ_{j=0}^{s−1} F_{n+1−j} ℓ_{n+1−j}^{i−1)}(t_{n+1}) − ũ_{n+1}^{i)} ) / ( F_{n+1} − ũ'_{n+1} )
    = ( ∆t^{i−1} / i! ) ℓ_{n+1}^{i−1)}(t_{n+1}) η_i,
where:

η_i = ( δ_{i0} u_n / ℓ_{n+1}^{i−1)}(t_{n+1}) + Σ_{j=0}^{s−1} F_{n+1−j} ℓ_{n+1−j}^{i−1)}(t_{n+1}) / ℓ_{n+1}^{i−1)}(t_{n+1}) − ũ_{n+1}^{i)} / ℓ_{n+1}^{i−1)}(t_{n+1}) ) / ( F_{n+1} − ũ'_{n+1} ) = 1.   (4.57)

Hence:

r_i = ( ∆t^{i−1} / i! ) ℓ_{n+1}^{i−1)}(t_{n+1}),   for i > 1.   (4.58)
Example 6. Two steps Adams. For this method we have seen that r_0 = 0 for the
explicit scheme, and that r_1 = 1 for both explicit and implicit methods so that the
differential equation is satisfied (u'_{n+1} = F_{n+1}). The only remaining
coefficients are r_0 for the implicit scheme and r_2; both can be computed as given
by (4.58), since ℓ'_{n+1}(t_{n+1}) = 1/∆t and ℓ_{n+1}^{−1)}(t_{n+1}) = ∆t/2.
Hence, we have seen that multi-value methods are equivalent to Adams methods. When
formulating the multi-step methods in the form given by (4.51), the change of step
size is very simple: if we calculate y_{n+1}^i for a given ∆t_1, the solution for
another step size ∆t_2 is obtained by rescaling with ∆t_2^i / ∆t_1^i. The change of
method within the Adams family is done by simply changing r_i. Besides, this
formulation permits writing the multi-step methods in a more compact form by
defining two new state vectors Y^n, U^n ∈ M_{(s+1)×N_v} given by:
Y^n = ( y_n^0, . . . , y_n^i, . . . , y_n^s )^T,   y_n^i = ∆t^i u_n^{i)} / i!,   Y^n = A U^n,   (4.59)

where U^n = ( u_n, . . . , u_n^{i)}, . . . , u_n^{s)} )^T
and A ∈ M_{(s+1)×(s+1)} is given by A_{ij} = δ_{ij} ∆t^i / i!. Thus, we can write:

Y^{n+1} = B Y^n + r ⊗ α,
U^{n+1} = A^{−1} B A U^n + A^{−1} r ⊗ α,   (4.60)
Y^{n+1} = B Y^n + r ⊗ α̃,

where α̃ = ∆t F(ỹ_{n+1}^0, t_{n+1}) − ỹ_{n+1}^1 and r is the coefficients vector.
Note that the multi-value formulation permits computing the Adams-Bashforth-Moulton
methods, easily changing the method by changing r, and changing the step size by
scaling the solution U^{n+1} with the proper A^{−1}.
Chapter 5
Boundary Value Problems
5.1 Overview
In this chapter, the mathematical foundations of boundary value problems are
presented. Generally, these problems seek the solution for some scalar or vector
function in a spatial domain. This solution is forced to comply with some specific
boundary conditions. The elliptic character of the solution of a boundary value
problem means that every point of the spatial domain is influenced by all points
of the domain. From the numerical point of view, it means that the discretized
solution is obtained by solving an algebraic system of equations. The algorithm and
the implementation to obtain and solve this system of equations are presented.
Let Ω ⊂ Rp be an open and connected set and ∂Ω its boundary set. The spatial
domain D is defined as its closure, D ≡ {Ω∪∂Ω}. Each point of the spatial domain
is written x ∈ D. A Boundary Value Problem for a vector function u : D → R^N
of N variables is defined as:

L(x, u(x)) = 0,   ∀ x ∈ Ω,
h(x, u(x))|_{∂Ω} = 0,

where L is the spatial differential operator and h is the boundary conditions
operator that the solution must satisfy at the boundary ∂Ω.
If the spatial domain D is discretized with N_D points, the problem extends from
vector to tensor, as a tensor system of equations of order p appears for each
variable of u(x). The order of the tensor system emerging from the complete system
is p + 1 and its number of elements is N = N_v × N_D, where N_v is the number of
variables of u(x). The number of points in the spatial domain N_D can be divided
into inner points N_Ω and boundary points N_∂Ω, satisfying N_D = N_Ω + N_∂Ω. Thus,
the number of elements of the tensor system evaluated on the boundary points is
N_C = N_v × N_∂Ω. Once the spatial discretization is done, the system emerges as a
tensor difference equation that can be rearranged into a vector system of N
equations. In particular, two systems appear: one of N − N_C equations from the
differential operator on inner grid points and another of N_C equations from the
boundary conditions on boundary points:
L(U) = 0,
H(U)|_{∂Ω} = 0.

For linear problems, these difference equations reduce to a linear system
F(U) = A U − b = 0; for nonlinear problems, they constitute a general system
F(U) = 0.

[Diagram: from the BVP, L(x, u(x)) = 0 ∀ x ∈ Ω with h(x, u(x))|_{∂Ω} = 0, to the
difference equations L(U) = 0, H(U)|_{∂Ω} = 0, and finally to the algebraic systems
A U = b or F(U) = 0.]
d²u/dx² + sin u = 0,
along with boundary conditions:
u(−1) = 1, u(1) = 0.
The algorithm to solve this problem, based on second order finite difference
formulas, consists of defining an equispaced mesh {x_i, i = 0, . . . , N} of spatial
step ∆x, imposing the discretized differential equation at the inner points,

(u_{i+1} − 2u_i + u_{i−1})/∆x² + sin u_i = 0,   i = 1, . . . , N − 1,

and imposing the boundary conditions

u_0 = 1,   u_N = 0.

Finally, these N + 1 nonlinear equations are solved.
This abstraction level allows integrating this boundary value problem with
different finite difference orders or with different mathematical models with very
little effort from the algorithm and implementation point of view. As mentioned,
the solution of a boundary value problem requires three steps: select the grid
distribution, impose the difference equations and solve the resulting system. Let
us try to do it from a general point of view.
1. Grid points.
Define an equispaced or a nonuniform grid distribution of points x_i. This can
be done by the following subroutine, which determines the optimum distribution
of points to minimize the truncation error depending on the order of interpolation:
5.3. FROM CLASSICAL TO MODERN APPROACHES 173
! Grid points
call Grid_Initialization("nonuniform", "x", Order , x)
This grid defines the approximate values ui of the unknown u(x) at the grid
points xi . Once the order is set and the grid points are given, Lagrange
polynomials and their derivatives are built and particularized at the grid
points xi . These numbers are stored to be used as the coefficients of the
finite differences formulas when calculating derivatives.
2. Difference equations.
Once the grid is initialized and the derivative coefficients are calculated, the
subroutine Derivative allows calculating the second derivative at every grid
point x_i; its values are stored in uxx. Once all derivatives are expressed
in terms of the nodal values u_i, the difference equations are built by means
of the following function:
! Difference equations
function Equations(u) result(F)
real, intent (in) :: u(0:)
real :: F(0:size(u) -1)
real :: uxx (0:Nx)
end function
call Newton(Equations , u)
subroutine BVP_FD
! Spatial domain
x(0) = -1; x(Nx) = +1
! Grid points
call Grid_Initialization("nonuniform", "x", Order , x)
! Initial guess
u = 1
! Newton solution
call Newton(Equations , u)
! Graph
call qplot(x, u, Nx+1)
contains
! Difference equations
function Equations(u) result(F)
real, intent (in) :: u(0:)
real :: F(0:size(u) -1)
real :: uxx (0:Nx)
end function
end subroutine
module Boundary_value_problems
use Boundary_value_problems1D
use Boundary_value_problems2D
use Boundary_value_problems3D
implicit none
private
public :: Boundary_Value_Problem ! It solves a boundary value problem
interface Boundary_Value_Problem
module procedure Boundary_Value_Problem1D , &
Boundary_Value_Problem2D , &
Boundary_Value_Problem2D_system , &
Boundary_Value_Problem3D_system
end interface
end module
Listing 5.5: Boundary_value_problems.f90
For the sake of simplicity, the implementation of the algorithm provided below
is only shown for 1D problems. Once the program matches the interface of a 1D
boundary value problem, the code will use the following subroutine:
if (linear1D) then
call Linear_Boundary_Value_Problem1D ( x_nodes , &
Differential_operator , Boundary_conditions , Solution)
else
call Non_Linear_Boundary_Value_Problem1D ( x_nodes , &
Differential_operator , Boundary_conditions , Solution , Solver)
end if
end subroutine
Depending on the linearity of the problem, the implementation differs. For this
reason, and in order to distinguish between linear and nonlinear problems, the
subroutine Linearity_BVP_1D is used. Besides, in order to speed up the calculation,
the subroutine Dependencies_BVP_1D checks if the differential operator L depends
on the first or second derivative of u(x). If the differential operator does not
depend on the first derivative, only the second derivative will be calculated to
build the difference operator. The same applies if no dependency on the second
derivative is encountered.
Once the problem is classified as linear or nonlinear and the dependencies on the
derivatives are determined, the linear or nonlinear subroutine is called
accordingly.
5.6. NON LINEAR BOUNDARY VALUE PROBLEMS IN 1D 177
First, the subroutine calculates only the derivatives appearing in the problem.
Second, the boundary conditions at x0 and xN are analyzed. If there is no im-
posed boundary condition (C == FREE_BOUNDARY_CONDITION), the corresponding
equation is taken from the differential operator. Otherwise, the equation
represents the discretized boundary condition.
Once the boundary conditions are discretized, the differential operator is dis-
cretized for the inner grid points x1 , . . . , xn−1 .
F(U) = A U − b, (5.3)
where A is the matrix of the linear system of equations and b is the independent
term. Evaluating this function at U = 0 yields the independent term:
F(0) = −b.
To determine the first column of matrix A, the function in equation 5.3 is evaluated
with U1 = [1, 0, . . . , 0]^T:
F(U1) = A U1 − b.
The components of F(U1) are Ai1 − bi. Since the independent term b is known,
the first column of the matrix A is
C1 = F(U1) + b.
Proceeding analogously with U2 = [0, 1, 0, . . . , 0]^T, the second column is
C2 = F(U2) + b,
and so on for the remaining columns.
Once the matrix A and the independent term b are obtained, any validated
subroutine to solve linear systems can be used. This algorithm to solve the BVP,
based on the determination of A and b, is implemented in the following subroutine,
called Linear_Boundary_Value_Problem1D:
deallocate( U, A, b )
contains
At the beginning of this subroutine, the independent term b and the matrix A
are calculated. Then, the linear system is solved by means of an LU factorization.
Chapter 6
Initial Boundary Value Problems
6.1 Overview
∂u/∂t (x, t) = L(x, t, u(x, t)), ∀ x ∈ Ω,
h(x, t, u(x, t))|∂Ω = 0, ∀ x ∈ ∂Ω,
u(x, t0) = u0(x),
where L is the spatial differential operator, u0(x) is the initial value and h is the
boundary conditions operator for the solution at the boundary points u|∂Ω.
Hence, once the spatial discretization is carried out, the resulting problem com-
prises a system of N − NC first order ordinary differential equations and NC al-
gebraic equations. This differential-algebraic system of equations (DAE), which
contains ordinary differential equations (ODEs) together with algebraic equations,
is generally more difficult to solve than a system of ODEs alone. Since the algebraic
equations must be verified for all time, the algorithm to solve an initial boundary
value problem comprises the following three steps:
[Diagram: the IBVP (∂u/∂t = L(x, t, u), u(x, 0) = u0(x), h(x, t, u)|∂Ω = 0) is
transformed by the spatial discretization into a Cauchy problem (dUΩ/dt = F(U; t),
U(0) = U0, H(U; t)|∂Ω = 0), where U0 is known, which is then advanced in time by
the temporal discretization with a temporal scheme.]
u_N^n = (4/3) u_{N−1}^n − (1/3) u_{N−2}^n,

u_i^{n+1} = u_i^n + (∆t/∆x²) ( u_{i+1}^n − 2 u_i^n + u_{i−1}^n ), i = 1, . . . , N − 1.
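The classical hand-coded scheme can be made concrete with a minimal Python sketch (illustrative only, assuming the underlying model is the heat equation u_t = u_xx, as the stencil suggests): an explicit Euler step for the inner points and a second-order extrapolated boundary value, here read as a discrete zero-slope condition at x_N.

```python
import numpy as np

def heat_step(u, dt, dx):
    """One explicit Euler step of u_t = u_xx on a uniform grid:
    u[0] is kept fixed (Dirichlet) and u[N] is extrapolated to second order."""
    un = u.copy()
    # inner points: u_i^{n+1} = u_i^n + dt/dx^2 (u_{i+1} - 2 u_i + u_{i-1})
    un[1:-1] = u[1:-1] + dt/dx**2 * (u[2:] - 2.0*u[1:-1] + u[:-2])
    # boundary: u_N = 4/3 u_{N-1} - 1/3 u_{N-2}  (second-order zero slope)
    un[-1] = 4.0/3.0 * un[-2] - 1.0/3.0 * un[-3]
    return un

x = np.linspace(0.0, 1.0, 11)
u = np.sin(0.5 * np.pi * x)
u1 = heat_step(u, dt=0.004, dx=x[1] - x[0])   # dt/dx^2 = 0.4 < 1/2, stable
```

Note how the grid size, the stencil and the boundary formula are all hard-wired; this is exactly the rigidity the following abstraction levels remove.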
The above approach is tedious and requires extra analytical work if we change
the equation, the temporal scheme or the order of the finite-difference formulas.
One of the main objectives of our NumericalHUB is to allow such a level of abstrac-
tion that these numerical details are hidden, focusing on more important issues
related to the physical behavior or the numerical scheme error. This abstraction
level allows integrating any initial boundary value problem with different finite-
difference orders or with different temporal schemes with very little effort from
the algorithmic and implementation points of view.
The following abstraction levels will be considered when implementing the so-
lution of the discretized initial boundary value problem:
U^{n+1} = U^n + ∆t F(U^n), n = 0, 1, . . .
module Initial_Boundary_Value_Problems
use Initial_Boundary_Value_Problem1D
use Initial_Boundary_Value_Problem2D
use Utilities
use Temporal_Schemes
use Finite_differences
implicit none
private
public :: Initial_Boundary_Value_Problem
interface Initial_Boundary_Value_Problem
module procedure IBVP1D , IBVP1D_system , IBVP2D , IBVP2D_system
end interface
end module
Listing 6.1: Initial_Boundary_value_problems.f90
For the sake of simplicity, the implementation of the algorithm provided below
is shown only for 1D problems. Once the program matches the interface of a 1D
initial boundary value problem, the code will use the following subroutine:
Nx = size(x_nodes) - 1
Nt = size(Time_Domain) - 1
dU = Dependencies_IBVP_1D( Differential_operator )
contains
function Space_discretization( U, t ) result(F)
real :: U(:), t
real :: F(size(U))
call Space_discretization1D( U, t, F )
end function
subroutine Space_discretization1D( U, t, F )
real :: U(0:Nx), t, F(0:Nx)
integer :: k, N_int
real :: Ux(0:Nx), Uxx (0:Nx)
real, allocatable :: Ub(:)
t_BC = t
call Binary_search(t_BC , Time_Domain , it)
Solution(it , :) = U(:)
else
N_int = Nx -1
allocate( Ub(2) )
Ub = [ U(0), U(Nx) ]
call Newton( BCs2 , Ub )
U(0) = Ub(1)
F(0) = U(0) - Ub(1)
U(Nx) = Ub(2)
F(Nx) = U(Nx) - Ub(2)
end if
! *** inner grid points
if (dU(1)) call Derivative( "x", 1, U, Ux)
if (dU(2)) call Derivative( "x", 2, U, Uxx)
do k=1, N_int
F(k) = Differential_operator (x_nodes(k), t, U(k), Ux(k), Uxx(k))
enddo
if (allocated(Ub)) deallocate( Ub )
end subroutine
As mentioned, the algorithm to solve an IBVP has to deal with differential-
algebraic equations. Hence, every time the vector function is evaluated by the tem-
poral scheme, the boundary values are determined by solving a linear or nonlinear
system of equations involving the unknowns at the boundaries. Once these values
are known, the inner components of F(U) are evaluated.
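The structure of this evaluation can be sketched in Python (illustrative names, numpy only; here the boundary constraints u(x0) = 1 and du/dx(xN) = 0 are linear, so the Newton solve used by the library reduces to a closed-form assignment):

```python
import numpy as np

def space_discretization(U, t, dx):
    """Evaluate F(U; t) for u_t = u_xx: first solve the boundary constraints,
    then fill the inner components with the centered second derivative."""
    F = np.zeros_like(U)
    # algebraic boundary equations (linear in this sketch):
    #   u(x0) - 1 = 0   and   u(xN) - u(x_{N-1}) = 0  (first-order zero slope)
    U[0]  = 1.0
    U[-1] = U[-2]
    F[0], F[-1] = 0.0, 0.0                       # constraints are satisfied
    # differential part at the inner grid points
    F[1:-1] = (U[2:] - 2.0*U[1:-1] + U[:-2]) / dx**2
    return F

x = np.linspace(0.0, 1.0, 6)
U = x.copy()
F = space_discretization(U, 0.0, x[1] - x[0])
```

For nonlinear boundary operators the two assignments become a small Newton solve for the boundary unknowns, exactly as the `call Newton( BCs2, Ub )` of the listing above does.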
Chapter 7
Mixed Boundary and Initial Value
Problems
7.1 Overview
In the present chapter, the numerical resolution and implementation of the initial
boundary value problem for an unknown variable u coupled with an elliptic problem
for another unknown variable v are considered. Prior to the numerical resolution
of the problem by means of an algorithm, a brief mathematical presentation must
be given.
Evolution problems coupled with elliptic problems are common in applied physics.
For example, when considering an incompressible flow, the information travels in
the fluid at infinite velocity. This means that the pressure adapts instantaneously
to the change of velocities. From the mathematical point of view, it means that the
pressure is governed by an elliptic equation. Hence, the velocity and the tempera-
ture of the fluid evolve subjected to the pressure field, which adapts instantaneously
to velocity changes.
The chapter is structured in the following manner. First, the mathemat-
ical presentation and both spatial and temporal discretizations are described.
Then, an algorithm to solve the discretized algebraic problem is presented. Finally,
the implementation of this algorithm is explained. Thus, the intention of this chap-
ter is to show how these generic problems can be implemented and solved from an
elegant and mathematical point of view using modern Fortran.
Let Ω ⊂ Rp be an open and connected set, and ∂Ω its boundary. The spatial
domain D is defined as its closure, D ≡ {Ω ∪ ∂Ω}. Each element of the spatial
domain is called x ∈ D. The temporal dimension is defined as t ∈ R.
Two vector functions are considered:
u : D × R → R^{Nu}
of Nu variables and
v : D × R → R^{Nv}
of Nv variables. These functions are governed by the following set of equations:
∂u/∂t (x, t) = Lu(x, t, u(x, t), v(x, t)), ∀ x ∈ Ω, (7.1)
hu(x, t, u(x, t))|∂Ω = 0, ∀ x ∈ ∂Ω, (7.2)
u(x, t0) = u0(x), ∀ x ∈ D, (7.3)
Lv(x, t, v(x, t), u(x, t)) = 0, ∀ x ∈ Ω, (7.5)
hv(x, t, v(x, t))|∂Ω = 0, ∀ x ∈ ∂Ω, (7.6)
where Lu is the spatial differential operator of the initial boundary value problem
of Nu equations, u0(x) is the initial value, hu is the boundary conditions operator
for the solution at the boundary points u|∂Ω, Lv is the spatial differential operator
of the boundary value problem of Nv equations and hv is the boundary conditions
operator for v at the boundary points v|∂Ω.
It can be seen that both problems are coupled through the differential operators,
since these operators depend on both variables. The order in which u and v appear
in a differential operator indicates its number of equations: for example,
Lv(x, t, v, u) has the same size as v, since v appears first in the list of variables
on which the operator depends.
It can also be observed that the initial value for u appears explicitly, while
there is no initial value expression for v. This is so, as the problem must be
interpreted in the following manner: for each instant of time t, v is such that
verifies Lv (x, t, v, u) = 0 in which u acts as a known vector field for each instant
of time. This interpretation implies that the initial value v(x, t0 ) = v 0 (x), is the
solution of the problem Lv (x, t0 , v 0 , u0 ) = 0. This means that the initial value for v
is given implicitly in the problem. Hence, the solutions must verify both operators
and boundary conditions at each instant of time, which forces the resolution of
them to be simultaneous.
Once the spatial discretization is done, the initial boundary value problem and
the boundary value problem are transformed. The differential operator for u emerges as
a tensor Cauchy problem of Ne,u − NC,u elements, and its boundary conditions as
a difference operator of NC,u equations. The operator for v is transformed into
a tensor difference equation of Ne,v − NC,v elements and its boundary conditions
into a difference operator of NC,v equations. Notice that even though they emerge
as tensors, they can equally be treated as vectors, since the only difference is the
arrangement of the elements which conform the systems of equations. Thus, the
spatially discretized problem can be written:
dUΩ/dt = FU(U, V; t), HU(U; t)|∂Ω = 0,
U(t0) = U0,
FV(U, V; t) = 0, HV(V; t)|∂Ω = 0,
where U ∈ R^{Ne,u} and V ∈ R^{Ne,v} are the solutions comprising inner and boundary
points, UΩ ∈ R^{Ne,u−NC,u} is the solution at the inner points, U|∂Ω ∈ R^{NC,u} and
V|∂Ω ∈ R^{NC,v} are the solutions at the boundary points, U0 ∈ R^{Ne,u} is the
discretized initial value, FU and FV are the difference operators associated to the
differential operators Lu and Lv, and the boundary difference operators are
HU : R^{Ne,u} × R → R^{NC,u},
HV : R^{Ne,v} × R → R^{NC,v}.
Hence, the resolution of the problem requires solving a Cauchy problem and
algebraic systems of equations for the discretized variables U and V. To solve the
Cauchy problem, the time is discretized as t = tn. The term n ∈ Z is the index
of every temporal step and runs over [0, Nt], where Nt is the number of temporal
steps. The algorithm is divided into three steps that are repeated for every
n of the temporal discretization. As the solution is evaluated only at these discrete
time points, from now on the following notation is used for every temporal step tn:
UΩ(tn) = UΩ^n, U(tn) = U^n and V(tn) = V^n.
U(t0) = U0, HU(U^n; tn)|∂Ω = 0,
FV(U^n, V^n; tn) = 0, HV(V^n; tn)|∂Ω = 0,
where

G : R^{Ne,u−NC,u} × R^{Ne,u} × · · · × R^{Ne,u} × R × R → R^{Ne,u−NC,u},

with s factors R^{Ne,u} (one per step of the scheme), is the difference operator
associated to the temporal scheme and ∆t is the temporal step. Thus, at each
temporal step four systems of Ne,u − NC,u, NC,u, Ne,v − NC,v and NC,v equations
appear. In total, a system of Ne,u + Ne,v equations appears at each temporal step
for all components of U^n and V^n.
[Diagram: the coupled IBVP + BVP (∂u/∂t = Lu(x, t, u, v), u(x, 0) = u0(x),
Lv(x, t, v, u) = 0, with boundary conditions hu(x, t, u)|∂Ω = 0 and
hv(x, t, v)|∂Ω = 0) is spatially discretized; at each step, the boundary points U∂Ω
are obtained from HU(U^n; tn)|∂Ω = 0 and the elliptic variable from
FV(V^n, U^n; tn) = 0 with HV(V^n; tn)|∂Ω = 0, starting from the known initial
value U0.]
Figure 7.1: Method of lines for mixed initial and boundary value problems.
Starting from the initial value U 0 , the initial value V 0 is calculated by means of
the BVP that governs the variable V . Using both values U 0 and V 0 , the difference
operator FU at that instant is constructed. With this difference operator, the
temporal scheme yields the next temporal step UΩ1 . Then, the boundary conditions
of the IBVP are imposed to obtain the solution U 1 . This solution will be used as
the initial value to solve the next temporal step. In this way, the algorithm consists
of a sequence of four steps that are carried out iteratively.
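The sequence of four steps can be sketched as a plain time loop (an illustrative Python sketch with hypothetical callables, not the library's code):

```python
import numpy as np

def ibvp_bvp_loop(U0, t, impose_bcs, solve_bvp_for_V, F_U, euler_step):
    """Iterate the four steps: 1) boundary values of U, 2) BVP for V,
    3) build the difference operator F_U, 4) advance the Cauchy problem."""
    U = U0.copy()
    history = [U.copy()]
    for n in range(len(t) - 1):
        U = impose_bcs(U, t[n])               # step 1: H_U(U^n; t_n)|dOmega = 0
        V = solve_bvp_for_V(U, t[n])          # step 2: F_V(V^n, U^n; t_n) = 0
        F = F_U(U, V, t[n])                   # step 3: spatial discretization
        U = euler_step(U, F, t[n+1] - t[n])   # step 4: temporal step
        history.append(U.copy())
    return np.array(history)

# usage with trivial stand-ins: U' = -V with V = U and no boundary work
impose_bcs      = lambda U, t: U
solve_bvp_for_V = lambda U, t: U
F_U             = lambda U, V, t: -V
euler_step      = lambda U, F, dt: U + dt * F
t = np.linspace(0.0, 1.0, 101)
hist = ibvp_bvp_loop(np.array([1.0]), t, impose_bcs, solve_bvp_for_V,
                     F_U, euler_step)        # decays roughly like exp(-t)
```

In the library, step 4 is delegated to Cauchy_ProblemS and any of its temporal schemes, not to the explicit Euler step used here for brevity.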
Step 1. Boundary points U^n|∂Ω.
HU(U^n; tn)|∂Ω = 0.
Even though this might look redundant for the initial value U0 (which is sup-
posed to satisfy the boundary conditions), it is not for every other temporal
step, as the Cauchy problem is defined only for the inner points UΩ^n. This
means that to construct the solution U^n, its value at the boundaries U^n|∂Ω
must be calculated satisfying the boundary constraints.
Step 2. Boundary value problem for V^n.
FV(V^n, U^n; tn) = 0,
HV(V^n; tn)|∂Ω = 0.
Since U^n and tn act as parameters, this problem can be solved by using
the subroutines that solve a classical boundary value problem. However, to
reuse the same interface as the classical BVP, the operator FV and
the boundary conditions HV must be transformed into restricted functions
FV,R(V^n) = 0,
HV,R(V^n)|∂Ω = 0,
which are solvable in the same manner as explained in the chapter on Boundary
Value Problems. Since this algorithm reuses the BVP software layer, which
has been explained previously, the details of this step will not be included.
By the end of this step, both solutions U^n and V^n are known.
Step 3. Spatial discretization of the IBVP.
Once U n and V n are known, their derivatives are calculated by the se-
lected finite differences formulas. Once calculated, the difference operator
FU (U n , V n ; tn ) is built.
Step 4. Temporal step for U^n.
Finally, the difference operator FU previously calculated acts as the evolution
function of a Cauchy problem. Once the following step is evaluated, the
solution UΩ^{n+1} at the inner points is obtained. In the resulting system,
the values of the solution at the s previous steps are known and, therefore,
its solution is the solution at the next temporal step UΩ^{n+1}. However,
the temporal scheme G in general is a function that needs to be restricted in
order to be invertible. In particular, a restricted function
G̃ : R^{Ne,u−NC,u} → R^{Ne,u−NC,u}
must be obtained. Inverting G̃ yields the solution at the next temporal step
for the inner points. This value is used as the initial value for the next
iteration. The philosophy for other temporal schemes is the same: the result
is the solution at the next temporal step.
Once the algorithm is set with a precise notation, it is very easy to implement
by rigorously following the steps of the algorithm. The ingredients used to solve
the IBVP coupled with a BVP are given by the equations (7.1)-(7.6). Hence, the
subroutine IBVP_and_BVP that solves this problem is implemented in the following
way:
Nx = size(x) - 1 ; Ny = size(y) - 1
Nt = size(Time) - 1
Nu = size(Ut , dim=4); Nv = size(Vt , dim=4)
M1 = Nu*(Ny -1); M2 = Nu*(Ny -1); M3 = Nu*(Nx -1); M4 = Nu*(Nx -1)
allocate( Ux(0:Nx ,0:Ny , Nu), Uxx (0:Nx ,0:Ny , Nu), Uxy (0:Nx ,0:Ny , Nu), &
Uy(0:Nx ,0:Ny , Nu), Uyy (0:Nx ,0:Ny , Nu) )
contains
These arguments comprise two differential operators L_u and L_v for the IBVP and
the BVP respectively. The boundary conditions operators of these two problems
are called BC_u and BC_v. There are two output arguments, Ut and Vt, which hold
the solutions of the IBVP and the BVP respectively. The first three steps of
the algorithm are carried out in the function BVP_and_IBVP_discretization and
the fourth step is carried out in the subroutine Cauchy_ProblemS.
7.4 BVP_and_IBVP_discretization
t_BC = t
call Binary_search(t_BC , Time , it)
write(*,*) " Time domain index = ", it
end function
integer :: i1 , i2 , i3 , i4
U1 = Y(1 : i1 -1)
U2 = Y(i1 : i2 -1)
U3 = Y(i2 : i3 -1)
U4 = Y(i3 : i4 -1)
end subroutine
subroutine Asign_BCs( G1 , G2 , G3 , G4 )
real, intent(out) :: G1(1:Ny -1,Nu), G2(1:Ny -1,Nu), &
G3(1:Nx -1,Nu), G4(1:Nx -1,Nu)
do k=1, Nu
call Derivative( ["x","y"], 1, 1, Ut(it , 0:, 0:, k), Wx(0:, 0:, k) )
call Derivative( ["x","y"], 2, 1, Ut(it , 0:, 0:, k), Wy(0:, 0:, k) )
end do
do j = 1, Ny -1
G1(j,:) = BC_u( x(0), y(j), t_BC , &
Ut(it , 0, j, : ), Wx(0, j,:), Wy(0, j,:))
G2(j,:) = BC_u( x(Nx), y(j), t_BC , &
Ut(it , Nx , j, : ), Wx(Nx , j, :), Wy(Nx , j, :))
end do
do i = 1, Nx -1
G3(i,:) = BC_u( x(i), y(0), t_BC , &
Ut(it , i, 0,:), Wx(i, 0, :), Wy(i, 0,:))
G4(i,:) = BC_u( x(i), y(Ny), t_BC , &
Ut(it , i, Ny , : ), Wx(i, Ny , :), Wy(i, Ny , :))
end do
end subroutine
end function
end function
As can be observed, the interfaces of L_v_R and BC_v_R comply with the require-
ments of the subroutine Boundary_Value_Problem. The extra arguments that L_v
and BC_v require are accessed as external variables through the lexical scoping of
internal subprograms, declared by means of the statement contains.
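The same restriction technique, fixing extra arguments through the enclosing scope so an operator matches a simpler required interface, can be illustrated with a Python closure (the book achieves it with Fortran internal subprograms; names here are hypothetical):

```python
def make_restricted(L_v, U_n, t_n):
    """Restrict F_V(V, U; t) to a function of V alone: U_n and t_n are
    captured from the enclosing scope, as 'contains' does in Fortran."""
    def L_v_R(V):
        return L_v(V, U_n, t_n)
    return L_v_R

# usage: the restricted operator complies with a solver expecting G(V) = 0
L_v = lambda V, U, t: V - U - t        # hypothetical operator
G = make_restricted(L_v, U_n=2.0, t_n=0.5)
```

A generic BVP solver that only knows how to handle G(V) = 0 can now be fed `G`, while the frozen arguments remain accessible without global variables.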
The spatial discretization of the IBVP is done in the last part of the subroutine
BVP_and_IBVP_discretization by means of the differential operator L_u, which
is an input argument of the subroutine IBVP_and_BVP. Once the derivatives of U and V
are calculated at all grid points, the subroutine calculates the discrete or difference
operator at each point of the domain. A code snippet of the subroutine
BVP_and_IBVP_discretization is copied here to follow these explanations easily.
As can be observed, two loops run through all inner grid points of the variable U.
In step 1, the boundary values of U are obtained by imposing the boundary conditions.
It is also important to notice that the evolution of the inner grid points of U
depends on the values of U, V and their derivatives, which are calculated previously.
! *** Derivatives of U for inner grid points
do k=1, Nu
call Derivative( ["x","y"], 1, 1, U(0:,0:, k), Ux (0:,0:,k) )
call Derivative( ["x","y"], 1, 2, U(0:,0:, k), Uxx (0:,0:,k) )
call Derivative( ["x","y"], 2, 1, U(0:,0:, k), Uy (0:,0:,k) )
call Derivative( ["x","y"], 2, 2, U(0:,0:, k), Uyy (0:,0:,k) )
call Derivative( ["x","y"], 2, 1, Ux(0:,0:,k), Uxy (0:,0:,k) )
end do
! *** Step 2. BVP for V
call Boundary_Value_Problem( x, y, L_v_R , BC_v_R , Vt(it ,0: ,0: ,:))
! *** Derivatives for V
do k=1, Nv
call Derivative( ["x","y"], 1, 1, Vt(it , 0:,0:, k), Vx (0:,0:,k) )
call Derivative( ["x","y"], 1, 2, Vt(it , 0:,0:, k), Vxx(0:,0:,k) )
call Derivative( ["x","y"], 2, 1, Vt(it , 0:,0:, k), Vy (0:,0:,k) )
call Derivative( ["x","y"], 2, 2, Vt(it , 0:,0:, k), Vyy(0:,0:,k) )
call Derivative( ["x","y"], 2, 1, Vx(0: ,0:, k), Vxy(0:,0:,k) )
end do
! *** Step 3. Differential operator L_u(U,V) at inner grid points
Finally, the state of the system at the next temporal step n + 1 is calculated by
using the classical subroutine Cauchy_ProblemS. A code snippet of the subroutine
IBVP_and_BVP is copied here to follow the explanations.
contains
To reuse the subroutine Cauchy_ProblemS, and since the interface of the func-
tion BVP_and_IBVP_discretization does not comply with the requirements of the
subroutine Cauchy_ProblemS, a restriction is applied by means of the subroutine
BVP_and_IBVP_discretization_2D.
end function
Nomenclature
API: Application Program Interface
Chapter 1
Systems of equations
1.1 Overview
implicit none
private
public :: &
LU_factorization , & ! A = L U (lower , upper triangle matrices)
Solve_LU , & ! It solves L U x = b
Gauss , & ! It solves A x = b by Gauss elimination
Condition_number , & ! Kappa(A) = norm2(A) * norm2( inverse(A) )
Tensor_product , & ! A_ij = u_i v_j
Power_method , & ! It determines the largest eigenvalue of A
Inverse_Power_method , & ! It determines the smallest eigenvalue of A
Eigenvalues_PM , & ! All eigenvalues of A by the power method
SVD ! A = U S transpose(V)
contains
LU factorization
call LU_factorization( A )
The subroutine LU_factorization overwrites the input matrix with its LU
factorization. The arguments of the subroutine are described in the
following table.
Solve LU
x = Solve_LU( A , b )
The function Solve_LU finds the solution to the linear system of equations
A x = b, where the matrix A has been previously L U factorized and b is a given
vector. The arguments of the function are described in the following table.
Gauss
x = Gauss( A , b )
The function Gauss finds the solution of the linear system of equations A x = b
by means of a classical Gaussian elimination. The arguments of the function are
described in the following table.
Condition number
kappa = Condition_number(A)
The function Condition_number determines the condition number κ = ||A||₂ ||A⁻¹||₂ of the matrix A.
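As a quick numerical check of this definition (a Python/numpy illustration, not the library itself), the product of the two norms coincides with the ratio of the largest to the smallest singular value:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
# kappa = ||A||_2 * ||A^{-1}||_2
kappa = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)
# equivalently, sigma_max / sigma_min from the SVD
s = np.linalg.svd(A, compute_uv=False)
ratio = s[0] / s[-1]
```

A large κ warns that small perturbations of b may produce large changes in the solution of A x = b.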
Tensor product
A = Tensor_product(u, v)
Power method
call Power_method(A, lambda , U)
The subroutine Power_method finds the largest eigenvalue of A by the power method,
and the subroutine Inverse_Power_method finds the smallest eigenvalue of A by the
inverse power method. The arguments of these subroutines are described in the
following table.
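A minimal sketch of the underlying iteration (Python, for illustration; the library's Fortran routine follows the same idea) normalizes a repeatedly multiplied vector and reads off the eigenvalue from the Rayleigh quotient:

```python
import numpy as np

def power_method(A, iters=100):
    """Largest (in modulus) eigenvalue of A and its eigenvector."""
    u = np.array([1.0, 0.0])        # arbitrary starting vector
    u = np.resize(u, A.shape[0])
    for _ in range(iters):
        w = A @ u
        u = w / np.linalg.norm(w)   # normalize to avoid overflow
    lam = u @ A @ u                 # Rayleigh quotient
    return lam, u

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # eigenvalues 1 and 3
lam, u = power_method(A)
```

The inverse power method applies the same iteration with A⁻¹ (in practice, solving A w = u at every step), so the dominant eigenvalue of the iteration becomes the reciprocal of the smallest eigenvalue of A.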
SVD
call SVD(A, sigma , U, V)
use Jacobian_module
use Linear_systems
implicit none
private
public :: &
Newton , & ! It solves a vectorial system F(x) = 0
Newtonc ! It solves a vectorial system G(x) = 0
! with M implicit equations < N unknowns
! e.g. G1 = x1 - x2 (implicit) with x1 = 1 (explicit)
contains
Newton
call Newton( F , x0 )
Newtonc
call Newtonc( F , x0 )
The subroutine Newtonc returns the solution of implicit and explicit equations
packed in the same function F (x). Hence, the function F (x) has internally the
following form:
x1 = g1(x2, x3, . . . , xN),
x2 = g2(x1, x3, . . . , xN),
. . .
xm = gm(x1, x2, . . . , xN),
F1 = 0,
F2 = 0,
. . .
Fm = 0.
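The packing of explicit and implicit equations into one residual can be sketched as follows (an illustrative Python sketch with a generic damped-free Newton iteration and a hypothetical two-unknown system, not the library's Newtonc):

```python
import numpy as np

def newton(G, x0, iters=20):
    """Solve G(x) = 0 by Newton's method with a forward-difference Jacobian."""
    x = x0.astype(float).copy()
    for _ in range(iters):
        Gx = G(x)
        J = np.empty((x.size, x.size))
        h = 1e-7
        for j in range(x.size):
            xp = x.copy(); xp[j] += h
            J[:, j] = (G(xp) - Gx) / h      # j-th Jacobian column
        x = x - np.linalg.solve(J, Gx)
    return x

def G(x):
    """Explicit and implicit equations packed in one residual G(x) = 0."""
    r = np.empty(2)
    r[0] = x[0] - 1.0        # explicit:  x1 = 1
    r[1] = x[0] - x[1]       # implicit:  F1 = x1 - x2 = 0
    return r

x = newton(G, np.array([0.0, 0.0]))
```

Writing the explicit relations as residuals too lets a single solver treat the mixed system uniformly, which is the point of packing both kinds of equations into F(x).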
2.1 Overview
module Interpolation
use Lagrange_interpolation
use Chebyshev_interpolation
use Fourier_interpolation
implicit none
private
public :: &
Interpolated_value , & ! It interpolates at xp from (x_i , y_i)
Integral , & ! It integrates from x_0 to x_N
Interpolant ! It interpolates I(xp) from (x_i , y_i)
contains
Listing 2.1: Interpolation.f90
Interpolated value
yp = interpolated_value( x , y , xp , degree )
Integral
I = Integral( x , y , degree )
module Lagrange_interpolation
implicit none
public :: &
Lagrange_polynomials , & ! Lagrange polynomial at xp from (x_i , y_i)
Lebesgue_functions ! Lebesgue function at xp from x_i
contains
Listing 2.2: Lagrange_interpolation.f90
Lagrange polynomials
yp = Lagrange_polynomials( x, xp )
The function Lagrange_polynomials computes the Lagrange polynomials ℓj(x) and
their derivatives ℓj^(i)(x) (first index of the array) calculated at the scalar point
xp. The integral of the Lagrange polynomials is taken into account by the first index
of the array with value equal to −1. The index 0 means the value of the Lagrange
polynomials and an index k greater than 0 represents the k-th derivative of the
Lagrange polynomial.
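For reference, the Lagrange polynomials themselves are defined by ℓj(x) = ∏_{i≠j} (x − xi)/(xj − xi); a direct evaluation can be sketched in Python (the library additionally returns derivatives and integrals, indexed as described above):

```python
import numpy as np

def lagrange_polynomials(x_nodes, xp):
    """Values l_j(xp) of the Lagrange polynomials for the nodes x_nodes."""
    n = len(x_nodes)
    l = np.ones(n)
    for j in range(n):
        for i in range(n):
            if i != j:
                l[j] *= (xp - x_nodes[i]) / (x_nodes[j] - x_nodes[i])
    return l

nodes = np.array([0.0, 0.5, 1.0])
l = lagrange_polynomials(nodes, 0.5)   # at a node, l_j is the Kronecker delta
```

Two checks follow from the definition: at a node xj the result is the Kronecker delta, and at any point the polynomials sum to one (partition of unity).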
Lebesgue functions
The function Lebesgue_functions computes the Lebesgue function and its deriva-
tives at different points xp. Given a set of nodal or interpolation points x, the
following sentence determines the Lebesgue function:
yp = Lebesgue_functions( x, xp )
The function returns the Lebesgue function and its derivatives λ^(i)(xpj) (first
index of the array) calculated at the points xpj. The integral of the Lebesgue
function is represented by the first index with value equal to −1. The index 0
means the value of the Lebesgue function and an index k greater than 0 represents
the k-th derivative of the Lebesgue function. The second index of the array takes
into account the different components of xp.
Chapter 3
Finite Differences
3.1 Overview
Grid Initialization
call Grid_Initialization( grid_spacing , direction , q , grid_d )
Given the desired grid spacing, this subroutine calculates a set of points within
the space domain defined with the first point at x0 and the last point xN . Later, it
builds the interpolant and its derivatives at the same data points xi and it stores
their values for future use by the subroutine Derivative. The arguments of the
subroutine are described in the following table.
Derivatives for x ∈ Rk
interface Derivative
module procedure Derivative3D , Derivative2D , Derivative1D
end interface
Listing 3.2: Finite_differences.f90
4.1 Overview
module Cauchy_Problem
use ODE_Interface
use Temporal_scheme_interface
use Temporal_Schemes
implicit none
private
public :: &
Cauchy_ProblemS , & ! It calculates the solution of a Cauchy problem
set_tolerance , & ! It sets the error tolerance of the integration
set_solver , & ! It defines the family solver and the name solver
get_effort ! # function evaluations (ODES) after integration
contains
Listing 4.1: Cauchy_Problem.f90
Cauchy ProblemS
call Cauchy_ProblemS(Time_Domain , Differential_operator , Solution , Scheme)
Set solver
call set_solver( family_name , scheme_name)
The subroutine set_solver allows the user to select the family of numerical methods
and a specific member of the family to integrate the Cauchy problem.
The following list describes new software implementations of the different fam-
ilies and members:
The following list describes wrappers for classical codes for the different families:
Set tolerance
call set_tolerance(Tolerance)
The subroutine set_tolerance allows the user to fix the relative and absolute error
tolerance of the solution. Embedded Runge-Kutta, Adams-Bashforth or GBS methods
are able to modify their time step locally to attain the required error
tolerance.
Get effort
get_effort ()
use Non_Linear_Systems
use ODE_Interface
use Temporal_scheme_interface
use Embedded_RKs
use Gragg_Burlisch_Stoer
use Adams_Bashforth_Moulton
use Wrappers
implicit none
private
public :: &
Euler , & ! U(n+1) <- U(n) + Dt F(U(n))
Inverse_Euler , & ! U(n+1) <- U(n) + Dt F(U(n+1))
Crank_Nicolson , & ! U(n+1) <- U(n) + Dt/2 ( F(n+1) + F(n) )
Leap_Frog , & ! U(n+1) <- U(n-1) + 2 Dt F(n)
Runge_Kutta2 , & ! U(n+1) <- U(n) + Dt/2 ( F(n)+F(U_Euler) )
Runge_Kutta4 , & ! Runge Kutta method of order 4
Adams_Bashforth2 , & ! U(n+1) <- U(n) + Dt/2 ( 3 F(n) - F(n-1) )
Adams_Bashforth3 , & ! Adams Bashforth method of Order 3
Predictor_Corrector1 ,& ! Variable step methods
implicit none
abstract interface
subroutine Temporal_Scheme(F, t1 , t2 , U1 , U2 , ierr )
use ODE_Interface
procedure (ODES) :: F
real, intent(in) :: t1 , t2
real, intent(in) :: U1(:)
real, intent(out) :: U2(:)
integer, intent(out) :: ierr
end subroutine
end interface
end module
4.4 Stability
module Stability_regions
use Temporal_scheme_interface
implicit none
private
public :: Absolute_Stability_Region ! For a generic temporal scheme
contains
use Cauchy_Problem
use Temporal_scheme_interface
use ODE_interface
implicit none
private
public :: &
Error_Cauchy_Problem , & ! Richardson extrapolation
Temporal_convergence_rate , & ! log Error versus log time steps
Temporal_effort_with_tolerance ! log time steps versus log(1/tolerance)
contains
The module uses the Cauchy_Problem module and comprises three subroutines
to analyze the error of the temporal schemes. It is an application layer based on
the Cauchy_Problem layer. The error is calculated by integrating the same solution
on successively refined time grids, and it is then determined by means of the
Richardson extrapolation method.
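The error estimation by Richardson extrapolation can be sketched as follows (a Python illustration with forward Euler on u' = −u and hypothetical helper names): for a scheme of order q, comparing integrations with steps ∆t and ∆t/2 gives the leading error of the coarse solution, E ≈ (u_∆t − u_{∆t/2}) / (1 − 2^{−q}).

```python
import numpy as np

def euler(f, u0, t_end, n_steps):
    """Integrate u' = f(u) with forward Euler (order q = 1)."""
    u, dt = u0, t_end / n_steps
    for _ in range(n_steps):
        u = u + dt * f(u)
    return u

f = lambda u: -u
u_coarse = euler(f, 1.0, 1.0, 100)      # step dt
u_fine   = euler(f, 1.0, 1.0, 200)      # step dt/2
q = 1                                   # order of forward Euler
error_estimate = (u_coarse - u_fine) / (1.0 - 2.0**(-q))
```

The estimate needs no exact solution, which is why the module can report errors for arbitrary Cauchy problems.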
5.1 Overview
This library is intended to solve linear and nonlinear boundary value problems.
An equation involving partial derivatives together with some constraints applied
to the frontier of its spatial domain constitute a boundary value problem.
module Boundary_value_problems
use Boundary_value_problems1D
use Boundary_value_problems2D
use Boundary_value_problems3D
implicit none
private
public :: Boundary_Value_Problem ! It solves a boundary value problem
interface Boundary_Value_Problem
module procedure Boundary_Value_Problem1D , &
Boundary_Value_Problem2D , &
Boundary_Value_Problem2D_system , &
Boundary_Value_Problem3D_system
end interface
end module
The subroutine calculates the solution of the following boundary value problem:
L( x, u, ∂u/∂x, ∂²u/∂x² ) = 0,   h( x, u, ∂u/∂x ) = 0,
in one dimension, and
L( x, y, u, ∂u/∂x, ∂u/∂y, ∂²u/∂x², ∂²u/∂y², ∂²u/∂x∂y ) = 0,
in two dimensions, for a scalar equation or for a system of equations.
The solution of this problem is calculated using the libraries by a simple call to
the subroutine:
call Boundary_Value_Problem2D_system (x_nodes , y_nodes , &
Differential_operator , &
Boundary_conditions , Solution)
L( x, y, z, u, ∂u/∂x, ∂u/∂y, ∂u/∂z, ∂²u/∂x², ∂²u/∂y², ∂²u/∂z², ∂²u/∂x∂y, ∂²u/∂x∂z, ∂²u/∂y∂z ) = 0
6.1 Overview
This library is intended to solve an initial boundary value problem. This prob-
lem is governed by a set of time-evolving partial differential equations together with
boundary conditions and an initial condition.
module Initial_Boundary_Value_Problems
use Initial_Boundary_Value_Problem1D
use Initial_Boundary_Value_Problem2D
use Utilities
use Temporal_Schemes
use Finite_differences
implicit none
private
public :: Initial_Boundary_Value_Problem
interface Initial_Boundary_Value_Problem
module procedure IBVP1D , IBVP1D_system , IBVP2D , IBVP2D_system
end interface
end module
Since the space domain is Ω ⊂ R^k with k = 1, 2, 3, initial boundary value problems are
stated on 1D, 2D and 3D grids. To have the same interface name when dealing with
different space dimensions, the subroutine Initial_Boundary_Value_Problem has
been overloaded.
∂u/∂t = L( x, t, u, ∂u/∂x, ∂²u/∂x² ),
both for a scalar equation and for a system of equations.
Besides, an initial condition must be established: u(x, t = t0 ) = u0 (x). The
arguments of the subroutine are described in the following table.
This subroutine calculates the solution to a scalar initial boundary value problem
in a rectangular domain (x, y) ∈ [x0, xf] × [y0, yf]:
∂u/∂t = L(x, y, t, u, ux, uy, uxx, uyy, uxy),   h(x, y, t, u, ux, uy)|∂Ω = 0.
Besides, an initial condition must be established: u(x, y, t0 ) = u0 (x, y). The argu-
ments of the subroutine are described in the following table.
∂u/∂t = L(x, y, t, u, ux, uy, uxx, uyy, uxy),   h(x, y, t, u, ux, uy)|∂Ω = 0.
7.1 Overview
This library is intended to solve an initial value boundary problem for a vectorial
variable u with a coupled boundary value problem for v.
module IBVPs_and_BVPs
use Cauchy_problem
use Temporal_scheme_interface
use Finite_differences
use Linear_Systems
use Non_Linear_Systems
use Boundary_value_problems
use Utilities
implicit none
private
public :: IBVP_and_BVP
∂u/∂t = Lu( x, y, t, u, ux, uy, uxx, uyy, uxy, v, vx, vy, vxx, vyy, vxy ),
Lv( x, y, t, v, vx, vy, vxx, vyy, vxy, u, ux, uy, uxx, uyy, uxy ) = 0,
hu( x, y, t, u, ux, uy )|∂Ω = 0,   hv( x, y, t, v, vx, vy )|∂Ω = 0.
call IBVP_and_BVP( Time , x_nodes , y_nodes , L_u , L_v , BCs_u , BCs_v , &
Ut , Vt , Scheme )
8.1 Overview
This library is designed to plot x−y and contour graphs on the screen or to
automatically create LaTeX files with graphs and figures. The module Plots has two
subroutines: plot_parametrics and plot_contour.
This subroutine plots a given number of parametric curves (x, y) on the screen
and creates a LaTeX file for optimum quality results. The subroutine is overloaded,
allowing to plot parametric curves sharing the same x data for all curves or with
different data both for the x and y axes. That is, x can be a vector, the same for
all parametric curves, or a matrix; in this case, (xij, yij) represents the point i
of the parametric curve j. The last three arguments are optional. If they are given,
this subroutine creates a plot data file (path.plt) and a LaTeX file (path.tex) to
show the same graphical results by compiling a LaTeX document.
subroutine myexampleC
! The declarations below are inferred from the calls; the printed listing omits them.
   integer, parameter :: N = 200
   real, parameter :: PI = 3.1415927
   real, parameter :: a = 0, b = 2*PI
   character(len=*), parameter :: path(4) = [ "./results/myexampleCa", &
                                              "./results/myexampleCb", &
                                              "./results/myexampleCc", &
                                              "./results/myexampleCd" ]
   real :: x(0:N), y(0:N, 3)
   integer :: i

   x = [ (a + (b-a)*i/N, i=0, N) ]
   y(:, 1) = sin(x); y(:, 2) = cos(x); y(:, 3) = sin(2*x)

   call plot_parametrics( x, y, ["$\sin x$ ", "$\cos x$ ", "$\sin 2x$"], &
                          "$x$", "$y$", "(a)", path(1) )
   call plot_parametrics( y(:,1), y(:,:), ["O1", "O2", "O3"], &
                          "$y_2$", "$y_1$", "(b)", path(2) )
   call plot_parametrics( y(:,1), y(:,2:2), ["O2"], "$y_2$", "$y_1$", &
                          "(c)", path(3) )
   call plot_parametrics( y(:,1), y(:,3:3), ["O3"], "$y_2$", "$y_1$", &
                          "(d)", path(4) )
end subroutine
Listing 8.1: my_examples.f90
The above Fortran example automatically creates four plot data files and four Latex files. By compiling the following Latex code, the same plots shown on the screen can be included in any Latex manuscript.
\usepackage{tikz}
\usepackage{pgfplots}
\pgfplotsset{compat=1.5}
\begin{document}
\fourgraphs
{\input{./results/myexampleCa.tex}}
{\input{./results/myexampleCb.tex}}
{\input{./results/myexampleCc.tex}}
{\input{./results/myexampleCd.tex}}
{H\'enon-Heiles system solution.
(a) Trajectory of the star $(x,y)$.
(b) Projection $(x,\dot{x})$ of the solution.
(c) Projection $(y,\dot{y})$ of the solution.
(d) Projection $(\dot{x},\dot{y})$.}
{fig:exampleCad}
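The macros \fourgraphs and \twographs are provided by the book's Latex template; they are not standard Latex commands. If that template is not at hand, a minimal stand-in can be sketched as follows (this definition is an assumption based on the usage above, not the book's actual macro): it arranges four graphs in a 2x2 grid, followed by a caption and a label.

```latex
% Hypothetical minimal replacement for the book's \fourgraphs macro.
% Arguments: #1-#4 the four graphs, #5 the caption, #6 the label.
\newcommand{\fourgraphs}[6]{%
  \begin{figure}[htbp]
    \centering
    \begin{minipage}{0.48\textwidth}\centering #1\end{minipage}\hfill
    \begin{minipage}{0.48\textwidth}\centering #2\end{minipage}\\[1ex]
    \begin{minipage}{0.48\textwidth}\centering #3\end{minipage}\hfill
    \begin{minipage}{0.48\textwidth}\centering #4\end{minipage}
    \caption{#5}\label{#6}
  \end{figure}%
}
```

An analogous \twographs macro can be defined the same way with a single row of two minipages.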
After compiling the above Latex code, the plot of figure 8.1 is obtained.
[Figure 8.1 shows four panels: (a) the curves sin x, cos x and sin 2x versus x; (b), (c) and (d) the curves O1, O2 and O3 plotted one against another.]

Figure 8.1: Hénon-Heiles system solution. (a) Trajectory of the star (x, y). (b) Projection (x, ẋ) of the solution. (c) Projection (y, ẏ) of the solution. (d) Projection (ẋ, ẏ).
8.3 Plot contour
This subroutine plots a contour map of z = z(x, y) on the screen and creates a Latex file for optimum-quality results. Given the sets of values xi and yj where some function z(x, y) is evaluated, the subroutine plots its contour map. The last three arguments are optional. If they are given, the subroutine creates a plot data file (path.plt) and a Latex file (path.tex) that reproduce the same graphics when a Latex document is compiled.
subroutine myexampleD
! (Listing abridged in this excerpt: it evaluates some z(x, y) on a grid and
!  calls plot_contour twice, writing its output to ./results/myexampleDa
!  and ./results/myexampleDb.)
end subroutine
The above Fortran example creates the following data files and Latex files: ./results/myexampleDa.plt, ./results/myexampleDa.tex, ./results/myexampleDb.plt, ./results/myexampleDb.tex. By compiling the following Latex code, the same plots shown on the screen can be included in any Latex manuscript.
\twographs
{\input{./results/myexampleDa.tex}}
{\input{./results/myexampleDb.tex}}
{H\'enon-Heiles system solution.
(a) Trajectory of the star $(x,y)$.
(b) Projection $(x,\dot{x})$ of the solution.
}
{fig:exampleDa}
To compile the above code successfully, gnuplot must be installed on the computer and, during installation, added to the PATH environment variable. If TexStudio is used to compile the Latex file, the lualatex and PDFLatex commands must be configured accordingly.
[Contour figure omitted: two panels of level curves of z(x, y) for x, y ∈ [0, 6], with contour levels between −0.8 and 0.8.]