
NUMERICS USER GUIDE

R. VERFÜRTH

Contents
1. Introduction
2. Installing Scilab
3. Installing Numerics
4. Using Numerics
4.1. Data structures
4.2. Structure of Numerics functions
4.3. Start
4.4. Help
4.5. Interpolation
4.6. Numerical integration
4.7. Nonlinear equations
4.8. Iterative solvers for linear systems of equations
4.9. Eigenvalue problems
4.10. Initial value problems for odes: fixed step-size methods
4.11. Initial value problems for odes: variable step-size methods
4.12. Boundary value problems for odes
4.13. Finite difference methods for pdes
4.14. Linear optimization
4.15. Discrete optimization
4.16. Nonlinear optimization
References
Index

1. Introduction
Numerics is a library of Scilab functions implementing many of the
algorithms presented in our courses on numerical analysis for
mathematicians and engineers [5, 6, 7, 8, 9, 10]. It covers the
following areas:
• interpolation,
• numerical integration,
• nonlinear equations,
Date: January 5, 2012.

• iterative solvers for linear systems of equations,
• eigenvalue problems,
• initial value problems for odes,
• boundary value problems for odes,
• finite difference methods for pdes,
• linear optimization,
• integer optimization,
• nonlinear optimization.
A detailed description of Numerics is given in section 4 below. In
sections 2 and 3 we first explain how to install Scilab and Numerics,
respectively.

2. Installing Scilab
Scilab is a numerical programming environment comparable to
Matlab with a similar syntax. It was developed by INRIA, France, and is
currently maintained by the Digiteo foundation in collaboration with
INRIA. Unlike Matlab, Scilab is completely free of charge. There
are versions for the operating systems Mac OS X, Unix and Windows. To
use Numerics you first have to get the appropriate version of Scilab
from
http://www.scilab.org/
and install it on your computer following the instructions of that
website. Useful introductions to Scilab are e.g. [1, 2]; [4] may be of
particular interest for those familiar with Matlab.

3. Installing Numerics
To install Numerics click the link Numerics.zip on
http://www.rub.de/num1/softwareE.html
or
http://www.rub.de/num1/software.html.
This downloads a zip-archive Numerics.zip to your computer. Unpack
the zip-archive and put the created folder Numerics at any place on
your computer that suits you. Now you can start Numerics as described
in section 4.3.

4. Using Numerics
4.1. Data structures. We refer to [1, 2, 4] for a detailed introduction
to Scilab. Here, we briefly present those data structures of Scilab
that are used for input arguments of Numerics functions.
Boolean: Boolean arguments are transferred to a Numerics function
by either directly entering %t for the value true or %f for the value
false, or by creating your own boolean variable myboolean by typing
myboolean = %t or myboolean = %f in the Scilab console and then
calling the Numerics function with myboolean as argument.


String: Strings are enclosed in quotation marks. They are transferred
to a Numerics function by either directly entering "name" for the string
value name, or by creating your own string variable mystring by typing
mystring = "name" in the Scilab console and then calling the
Numerics function with mystring as argument.
Number: Numbers may be integers like 2 and −3, fixed point reals like
3.14 and −2.87, or floating point reals like 0.578D4, −0.689D5, 0.203D−7
and −0.1763D−5. They are transferred to a Numerics function by either
directly entering the numerical value or by creating your own variable
mynumber by typing mynumber = 2 or similar in the Scilab console and
then calling the Numerics function with mynumber as argument.
Vector / Matrix: A vector is either an m × 1 matrix for a column vector
or a 1 × n matrix for a row vector. Matrices are created by typing the
appropriate sequence of numbers in lexicographic order from top left to
bottom right in brackets, with columns separated by blanks or commas
and rows separated by semicolons. Thus the 2 × 3 matrix with rows
(1, 2, 3) and (4, 5, 6) and the 3 × 2 matrix with rows (1, 2), (3, 4)
and (5, 6) are represented by
[1 2 3; 4 5 6] and [1, 2; 3, 4; 5, 6].
A' is the transpose of the matrix A; A*B is the product of the matrices
A and B. ones(m, n) and zeros(m, n) create m × n matrices with
all elements equal to 1 and 0, respectively; eye(n, n) creates the n ×
n identity matrix. Matrices are transferred to a Numerics function
by either directly entering the matrix or by creating your own matrix-
valued variable mymatrix and then calling the Numerics function with
mymatrix as argument.
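A short Scilab console sketch of these constructions (the variable names are arbitrary examples):

```scilab
// a 2 x 3 matrix, its transpose and a matrix product
A = [1 2 3; 4 5 6]     // 2 x 3 matrix
B = A'                 // 3 x 2 transpose
C = A * B              // 2 x 2 product
E = eye(3, 3)          // 3 x 3 identity matrix
v = [1; 2; 3]          // column vector, i.e. a 3 x 1 matrix
w = ones(1, 3)         // row vector with all entries equal to 1
```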
Sparse matrix: Scilab has a built-in data structure for sparse matrices
which differs from the corresponding structure of Matlab. A sparse
matrix with n non-zero entries is stored in an n × 1 column vector
containing the non-zero entries and an n × 2 matrix containing the row
and column indices of the non-zero entries. Suppose you want to store
the matrix
1 0  0 2
0 0  3 0
0 0  0 0
4 0 −1 0
in sparse format. Then you must enter in the Scilab console (without
the full stop at the end!)
myentries = [1; 2; 3; 4; −1]
myindices = [1, 1; 1, 4; 2, 3; 4, 1; 4, 3]
mysparsematrix = sparse(myindices, myentries).
Note that the input


myoentries = [−1; 1; 2; 3; 4]
myoindices = [4, 3; 1, 1; 1, 4; 2, 3; 4, 1]
myosparsematrix = sparse(myoindices, myoentries)
represents the same matrix. This matrix in sparse format is transferred
to a Numerics function by using mysparsematrix or myosparsematrix
or an equivalent construct as argument.
Function: Functions, user-defined or built-in ones, can act as arguments
of Numerics functions. To clarify this, consider two examples.
Suppose first that you want to interpolate the built-in exponential
function in 10 equidistant points on the interval [−1, 1]. Then you
simply enter
lagrange(exp,10)
in the Scilab console. This invokes the Numerics function lagrange
with the built-in exponential function exp and the number 10 as
arguments. Suppose next that you want to interpolate the function
x/(2 + x^2) in 10 equidistant points on the interval [−1, 1]. Then you
first have to create the corresponding function by entering
function y = myfunction(x)
y = x/(2 + x ∗ x)
endfunction
in the Scilab console. Then you may start the interpolation by typing
(without the full stop at the end!)
lagrange(myfunction,10).

4.2. Structure of Numerics functions. Numerics functions never
have a return value; they only print the time required for the
calculation. In addition, depending on their arguments, they may print
and possibly plot results. All functions have a certain number of
mandatory and optional arguments. Mandatory arguments must be specified
by the user. Optional arguments may be omitted; corresponding default
values will be used then. Note that arguments are identified by
order. Consequently you must enter a void argument [ ] for all those
optional arguments that you don't want to change and that come prior to
an optional argument which you want to change. As an example consider
the function lagrange. It requires a function as mandatory first
argument, the number of interpolation points as mandatory second
argument, the left endpoint of the interpolation interval as optional
third argument with default value −1, the right endpoint of the
interpolation interval as optional fourth argument with default value 1,
and further optional arguments which we do not need for this example.
Then the inputs
lagrange(exp,10)
lagrange(exp,10,-2)
lagrange(exp,10,[ ],2)
lagrange(exp,10,-4,3)
yield the Lagrange interpolation of the built-in function exp in 10
equidistant points on the intervals [−1, 1], [−2, 1], [−1, 2] and [−4, 3],
respectively.
4.3. Start. Before using any Numerics function you must make the
library known to Scilab by first entering
mylib = lib(path)
in the console. Here, mylib may be any name different from
the name of any Numerics function, any built-in function and any of
your user-defined functions. path is a string giving the full path to the
Numerics library. Suppose for example that you are using Mac OS X,
that Gargantua is your user name and that you have put the Numerics
folder into the subfolder Pantagruel of your documents folder. Then
the above command must read (without the full stop at the end!)
mylib = lib("/users/Gargantua/documents/Pantagruel/Numerics").
If everything is correctly installed, Scilab returns the names of all
functions in the Numerics library. This includes auxiliary functions
which are not user-relevant and which will not be described below.
4.4. Help. Numerics comes with a help function numerics_help which
may be used with any number of arguments, including an empty
argument list. Typing numerics_help() yields a list of all user-relevant
functions for which help is available. When specifying arguments, these
must be strings with the names of Numerics functions. Thus typing
numerics_help("newton","secant_rule","regula_falsi")
provides information on the functions newton, secant_rule and
regula_falsi.
4.5. Interpolation. Numerics provides the functions
neville, lagrange, hermite, cubic_spline, goertzel
for interpolation. These functions can be used in two different ways
which are determined by the type of their arguments:
• interpolation of a function,
• interpolation of discrete data.
The function neville realizes interpolation by Neville's scheme (see
[5, Algorithmus I.2.7]). When interpolating a given function the user
must provide the following arguments, the first three being mandatory:
y: scalar, vector or matrix of points in which the interpolation
polynomial should be evaluated,
f: function that should be interpolated,
n: number n of interpolation points (at least 2),
a: first interpolation point a, default is −1,
b: last interpolation point b, default is 1,
t: spacing of interpolation points,
  "e": equidistant points a + (b − a)(i − 1)/(n − 1), 1 ≤ i ≤ n,
  "c": Čebysev points a + (1/2)(b − a)(1 − cos(π(i − 1)/(n − 1))), 1 ≤ i ≤ n,
  default is "e".
When interpolating discrete data the user must provide the following
three mandatory arguments:
y: scalar, vector or matrix of points in which the interpolation
polynomial should be evaluated,
f: vector of function values at the interpolation points (at least
2),
x: vector of interpolation points (at least 2).
Note that both vectors f and x must have the same dimensions. In
both cases neville prints the vectors of the evaluation points and of
the corresponding values of the interpolation polynomial.
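Following the argument lists above, typical calls might look as follows (a hedged sketch; it assumes the library has been loaded as in section 4.3):

```scilab
// interpolate the built-in sine function with 5 points on [-1, 1]
// and evaluate the interpolation polynomial at 0, 0.25 and 0.5
neville([0; 0.25; 0.5], sin, 5)

// interpolate the discrete data (0, 1), (1, 2), (2, 4)
// and evaluate the interpolation polynomial at 0.5 and 1.5
neville([0.5; 1.5], [1; 2; 4], [0; 1; 2])
```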
The function lagrange performs Lagrange interpolation using divided
differences and Newton's scheme (see [5, Algorithmus I.2.11, I.2.13], [8,
Algorithmus III.2, III.3]). When interpolating a given function the user
must provide the following arguments, the first two being mandatory:
f: function that should be interpolated,
n: number n of interpolation points (at least 2),
a: first interpolation point a, default is −1,
b: last interpolation point b, default is 1,
t: spacing of interpolation points,
  "e": equidistant points a + (b − a)(i − 1)/(n − 1), 1 ≤ i ≤ n,
  "c": Čebysev points a + (1/2)(b − a)(1 − cos(π(i − 1)/(n − 1))), 1 ≤ i ≤ n,
  default is "e",
ap: left endpoint α of plot interval, default is a,
bp: right endpoint β of plot interval, default is b,
np: number k of plot points, default is 101.
When interpolating discrete data the role of the first two arguments
changes: f now is the vector of the function values that should be
interpolated and n is replaced by the vector x of the corresponding
abscissae. Both vectors must have the same dimensions. The function
lagrange prints the maximal interpolation error and plots the
interpolation polynomial, the original function and the error by drawing a
piecewise linear interpolation of these functions at k equidistant points
on the interval [α, β].
The function hermite performs Hermite interpolation using divided
differences and Newton's scheme (see [8, §III.4]). When interpolating
a given function the user must provide the following arguments, the first
three being mandatory:
f: function that should be interpolated,
df: derivative of the function that should be interpolated,
n: number n of interpolation points (at least 2),
a: first interpolation point a, default is −1,
b: last interpolation point b, default is 1,
t: spacing of interpolation points,
  "e": equidistant points a + (b − a)(i − 1)/(n − 1), 1 ≤ i ≤ n,
  "c": Čebysev points a + (1/2)(b − a)(1 − cos(π(i − 1)/(n − 1))), 1 ≤ i ≤ n,
  default is "e",
ap: left endpoint α of plot interval, default is a,
bp: right endpoint β of plot interval, default is b,
np: number k of plot points, default is 101.
When interpolating discrete data the role of the first three arguments
changes: f and df now are the vectors of the function values and of
the corresponding derivatives, respectively, that should be interpolated,
and n is replaced by the vector x of the corresponding abscissae. These
three vectors must have the same dimensions. The function hermite
prints the maximal interpolation error and plots the interpolation
polynomial, the original function and the error in the same way as the
function lagrange.
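A possible call, sketched under the assumption that the library is loaded as in section 4.3 (the function names are only examples):

```scilab
// Hermite interpolation of x/(2 + x^2) with 5 points on [-1, 1]
function y = myfunction(x)
    y = x/(2 + x * x)
endfunction

function y = mydfunction(x)
    y = (2 - x * x)/(2 + x * x)^2   // derivative of x/(2 + x^2)
endfunction

hermite(myfunction, mydfunction, 5)
```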
The function cubic_spline realizes interpolation by natural cubic
splines (see [5, Satz I.3.4], [8, Algorithmus III.8]). When interpolating
a given function the user must provide the following arguments, the
first two being mandatory:
f: function that should be interpolated,
n: number n of interpolation points (at least 2),
a: first interpolation point a, default is −1,
b: last interpolation point b, default is 1,
t: spacing of interpolation points,
  "e": equidistant points a + (b − a)(i − 1)/(n − 1), 1 ≤ i ≤ n,
  "c": Čebysev points a + (1/2)(b − a)(1 − cos(π(i − 1)/(n − 1))), 1 ≤ i ≤ n,
  default is "e",
np: number k of plot points per subinterval, default is 10.
When interpolating discrete data the role of the first two arguments
changes: f now is the vector of the function values that should be
interpolated and n is replaced by the vector x of the corresponding
abscissae. Both vectors must have the same dimensions. The function
cubic_spline prints the maximal interpolation error and plots the
interpolation polynomial, the original function and the error by drawing a
piecewise linear interpolation of these functions at k equidistant points
in each subinterval of [a, b].
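For example (a sketch assuming the library is loaded as in section 4.3):

```scilab
// natural cubic spline interpolation of the built-in cosine
// with 8 points on the default interval [-1, 1]
cubic_spline(cos, 8)

// the same with 8 Cebysev points on the interval [0, 2]
cubic_spline(cos, 8, 0, 2, "c")
```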
The function goertzel performs trigonometric interpolation using the
Goertzel scheme (see [5, Algorithmus I.5.3]). When interpolating a
given function the user must provide the following arguments, the first
one being mandatory:
f: function f that should be interpolated,
n: interpolate f at the points 2πℓ/(2n + 1), 0 ≤ ℓ ≤ 2n,
np: number k of plot points, default is 101.
When interpolating discrete data the role of the first argument changes:
f now is the vector of the function values that should be interpolated.
The function goertzel prints the maximal interpolation error and plots
the interpolation polynomial, the original function and the error by
drawing a piecewise linear interpolation of these functions at k equidis-
tant points on the interval [0, 2π].
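A minimal sketch (it assumes the library is loaded as in section 4.3; the periodic function is only an example):

```scilab
// trigonometric interpolation of a user-defined 2*pi-periodic function
// at the 9 points 2*pi*l/9, 0 <= l <= 8, i.e. n = 4
function y = myperiodic(x)
    y = exp(sin(x))
endfunction

goertzel(myperiodic, 4)
```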
4.6. Numerical integration. Numerics provides the functions
midpoint_rule, trapezoidal_rule, simpson_rule,
gauss_rule, romberg
for numerical integration.
The functions midpoint_rule, trapezoidal_rule and simpson_rule
implement the corresponding composite quadrature formulae (see [5,
Beispiel II.2.2(3)], [8, §IV.2]). They all have the following arguments,
the first one being mandatory, and print the computed approximate value
of the integral:
f: function that should be integrated,
a: left endpoint of integration interval, default is 0,
b: right endpoint of integration interval, default is 1,
nc: initial number n of subintervals, default is 1,
nr: number k of refinement steps, default is 0;
  in the ℓ-th refinement step, 0 ≤ ℓ ≤ k, the integration interval
  is split into n·2^ℓ subintervals.
The function gauss_rule implements a composite Gauß quadrature
formula with m interior Gauß points and order 2m − 1 (see [5, §II.3],
[8, §IV.4]). The value of m is passed as optional sixth argument ng.
The value of ng may be 2, 3 or 4, with 4 being the default value.
The function romberg realizes the Romberg scheme (see [5, Definition
II.4.6], [8, §IV.6]). It has six arguments, the first five being the same as
those of midpoint_rule. The optional sixth argument ne determines
the number of columns of the Romberg scheme. Its default value is 5
and it should be at least 2 and at most min{5, nr}.
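The argument lists above translate into calls like the following (a sketch assuming the library is loaded as in section 4.3):

```scilab
// approximate the integral of exp over the default interval [0, 1]
simpson_rule(exp)

// composite midpoint rule on [0, 2], starting with 4 subintervals
// and performing 3 refinement steps (4, 8, 16, 32 subintervals)
midpoint_rule(exp, 0, 2, 4, 3)

// Romberg scheme on [0, 1] with 5 refinement steps and 4 columns
romberg(exp, 0, 1, 1, 5, 4)
```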
4.7. Nonlinear equations. Numerics provides the functions
newton, secant_rule, regula_falsi
for the solution of nonlinear (systems of) equations.
The function newton implements Newton's method with damping for
systems of nonlinear equations f(x) = 0 in R^n (see [5, Definition
III.2.1], [8, §§II.3, II.5]). It has the following arguments, the first three
being mandatory:
x: initial value,
f: function f,
df: derivative Df of f,
pr: if %t results are printed in each iteration, default is %f,
nod: if %t damping is switched off, default is %f, i.e. damping is
performed,
tol: error tolerance ε; the iteration is stopped once
√((1/n) f(x)·f(x)) ≤ ε, default is 1.0D−8,
nmax: maximal number of iterations, default is 10.
The function newton prints the required number of iterations, the final
residual √((1/n) f(x)·f(x)) and the approximate solution x.
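For a concrete system, a call might look like this (a sketch; the function names are examples and the library is assumed to be loaded as in section 4.3):

```scilab
// solve x1^2 + x2^2 - 1 = 0, x1 - x2 = 0 with Newton's method
function y = myf(x)
    y = [x(1)^2 + x(2)^2 - 1; x(1) - x(2)]
endfunction

function J = mydf(x)
    J = [2*x(1), 2*x(2); 1, -1]   // Jacobian Df of myf
endfunction

newton([1; 0], myf, mydf)
```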
The function secant_rule realizes the secant rule for scalar nonlinear
equations f(x) = 0 in R (see [5, Algorithmus III.3.1], [8, Algorithmus
II.3]). It has the following arguments, the first three being mandatory:
x1: first initial value x_0,
x2: second initial value x_1,
f: function f,
pr: if %t results are printed in each iteration, default is %f,
tol: error tolerance ε; the iteration is stopped once |f(x)| ≤ ε, default
is 1.0D−8,
nmax: maximal number of iterations, default is 10.
The function secant_rule prints the required number of iterations, the
final residual |f(x)| and the approximate solution x.
The function regula_falsi implements the regula falsi for scalar
nonlinear equations f(x) = 0 in R (see [5, Algorithmus III.3.2]). It has the
same arguments as secant_rule and produces the same output, but
the function values f(x_0) and f(x_1) of the two initial values must now
be of different sign.
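Both scalar solvers are called in the same way (a sketch assuming the library is loaded as in section 4.3):

```scilab
// find the root of cos(x) - x in [0, 1]
function y = myg(x)
    y = cos(x) - x
endfunction

secant_rule(0, 1, myg)     // secant rule with initial values 0 and 1

// regula falsi needs initial values with f of different sign:
// here myg(0) = 1 > 0 and myg(1) = cos(1) - 1 < 0
regula_falsi(0, 1, myg)
```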

4.8. Iterative solvers for linear systems of equations. Numerics
provides the functions
richardson, jacobi, gauss_seidel, ssor,
cg, pcg_ssor, bicg_stab, pbicg_stab_ssor,
compare_solvers
for the iterative solution of linear systems of equations Ax = b. With
the exception of compare_solvers, all functions print intermediate and
final results depending on their argument prl and plot the relative
residuals ‖Ax_n − b‖/‖Ax_0 − b‖, the convergence rates
‖Ax_n − b‖/‖Ax_{n−1} − b‖ and the mean convergence rates
(‖Ax_n − b‖/‖Ax_0 − b‖)^(1/n) over the number n of iterations.
The functions richardson, jacobi, gauss_seidel and ssor implement
the corresponding classical stationary iterative methods (see [5,
Beispiel IV.6.3, Algorithmus IV.7.14], [10, §§II.2.2–II.2.5]). They all
have the following arguments, the first two being mandatory:
A: matrix A,
b: right-hand side vector b,
x: initial guess x_0, default is x_0 = 0,
om: damping parameter, default is (1/n) Σ_{i,j} |A_ij| for richardson, 1
for jacobi and gauss_seidel and 1.5 for ssor,
prl: print level, default is 0,
  (0) only print the required number of iterations n, the final residual
  ‖Ax_n − b‖ and the mean convergence rate (‖Ax_n − b‖/‖Ax_0 − b‖)^(1/n),
  (1) print the actual iteration number n, the actual residual ‖Ax_n − b‖
  and the actual convergence rate ‖Ax_n − b‖/‖Ax_{n−1} − b‖,
  (2) same as 0, additionally print the solution vector x_n,
  (3) same as 1, additionally print the current vector x_n,
tol: tolerance ε, default is 1.0D−8,
nmax: maximal number N of iterations, default is 100;
  the iteration is stopped if ‖Ax_n − b‖/‖Ax_0 − b‖ ≤ ε or n = N.

The functions cg and pcg_ssor realize the conjugate gradient algorithm
(see [5, Algorithmus IV.7.7], [9, Algorithmus VI.4.1], [10, §III.3.2])
and the preconditioned conjugate gradient algorithm with SSOR
preconditioning (see [5, Algorithmen IV.7.10, IV.7.14], [9, Algorithmen
VI.4.2, VI.4.3], [10, §§III.3.3, III.3.4]). cg has the same arguments as
richardson except for the missing relaxation parameter om. pcg_ssor has
the same arguments as richardson with the default value 1.5 for the
relaxation parameter om. Notice that for both algorithms the matrix
A must be symmetric positive definite.
The functions bicg_stab and pbicg_stab_ssor implement the
stabilized bi-conjugate gradient algorithm (see [9, Algorithmus VI.7.1], [10,
Algorithm III.5.1]) and the preconditioned stabilized bi-conjugate
gradient algorithm with SSOR preconditioning. They have the same
arguments as their 'symmetric' counterparts cg and pcg_ssor up to an
additional optional final argument nmaxrestart which limits the
maximal number of restarts, with the default value being 10.
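A small sketch of how these solvers are invoked (assuming the library is loaded as in section 4.3; the matrix is an arbitrary symmetric positive definite example):

```scilab
// a small symmetric positive definite system Ax = b
A = [4 1 0; 1 4 1; 0 1 4]
b = [1; 2; 3]

cg(A, b)                   // conjugate gradient with default settings
jacobi(A, b, [], [], 1)    // Jacobi iteration, printing each iteration (prl = 1)
```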
The function compare_solvers compares various direct and iterative
solvers for the solution of the linear system of equations arising from
the five-point difference discretization of the Poisson equation with
homogeneous Dirichlet boundary conditions on the unit square (see [6,
§III.3], [9, §III.1]). It has the following arguments, none being
mandatory:
methods: a vector of strings specifying the chosen methods;
available methods are:
  LU: sparse LU-factorization,
  CG: the conjugate gradient algorithm,
  PCG: the preconditioned conjugate gradient algorithm with
  SSOR preconditioning,
  GS: the Gauß-Seidel iteration,
  SSOR: the SSOR iteration,
  default setting is to use all methods,
nfirst: number of interior grid points in each direction on the
coarsest grid, default is 3,
maxref: number of uniform mesh refinements, each refinement
step doubling the number of grid points in each direction,
default is 5,
rtol: relative error tolerance ε for iterative solvers; the iteration stops
once ‖Ax_n − b‖/‖Ax_0 − b‖ ≤ ε, default is 0.01,
om: relaxation parameter for the SSOR iteration, default is 1.5.
Note that compare_solvers may be called with an empty argument
list and that the strings identifying the methods are case sensitive. The
function compare_solvers prints and plots the computing times of
the chosen methods and their complexity, defined as the ratio of the
logarithm of the computing time to the logarithm of the number of
unknowns.
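For instance (a sketch assuming the library is loaded as in section 4.3):

```scilab
// compare all available solvers with default settings
compare_solvers()

// compare only sparse LU and conjugate gradients on a finer
// hierarchy: 7 interior points on the coarsest grid, 4 refinements
compare_solvers(["LU", "CG"], 7, 4)
```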

4.9. Eigenvalue problems. Numerics provides the functions
power_iteration, rayleigh,
inverse_power_iteration, inverse_rayleigh
for computing the maximal and minimal eigenvalues of a general square
matrix A or of a symmetric positive definite matrix.
The functions power_iteration and inverse_power_iteration
implement the power iteration (see [5, Algorithmus V.3.1], [8, §V.2]) and
the inverse power iteration (see [5, Algorithmus V.3.9], [8, §V.4]) for
computing the eigenvalue that is largest resp. smallest in absolute value
of a general square matrix A and a corresponding eigenvector. They
both have the following arguments, the first one being mandatory:
A: matrix A,
x: initial guess for an eigenvector, default is a random vector,
prl: print level, default is 0,
  (0) print the required number of iterations and the final approximation
  of the largest resp. smallest eigenvalue,
  (1) print the actual iteration number and the actual approximation
  of the largest resp. smallest eigenvalue,
  (2) same as 0, additionally print the final approximation of the
  eigenvector,
  (3) same as 1, additionally print the actual approximation of
  the eigenvector,
tol: error tolerance ε, default is 1D−8,
nmax: maximal number N of iterations, default is 100;
  the iteration is stopped if two consecutive eigenvalue approximations
  differ by less than ε, and after N iterations at the latest.
The functions rayleigh and inverse_rayleigh realize the Rayleigh
quotient iteration (see [5, Algorithmus V.3.5], [8, §V.3]) and the inverse
Rayleigh quotient iteration (see [5, Algorithmus V.3.11], [8, §V.5]) for
computing the largest resp. smallest eigenvalue of a symmetric
positive definite matrix A. They both have the following arguments,
the first one being mandatory:
A: matrix A,
x: initial guess for an eigenvector, default is a random vector,
prl: if %t print the iteration number and the actual approximation of
the largest resp. smallest eigenvalue, otherwise print this
information only at the end of the algorithm; default is %f,
tol: error tolerance ε, default is 1D−8,
nmax: maximal number N of iterations, default is 100;
  the iteration is stopped if two consecutive eigenvalue approximations
  differ by less than ε, and after N iterations at the latest.
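A typical call (a sketch assuming the library is loaded as in section 4.3):

```scilab
// a symmetric positive definite test matrix
A = [2 1 0; 1 2 1; 0 1 2]

power_iteration(A)      // eigenvalue largest in absolute value
inverse_rayleigh(A)     // smallest eigenvalue of the s.p.d. matrix
```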

4.10. Initial value problems for odes: fixed step-size methods.
Numerics provides the functions
explicit_euler_fs, implicit_euler_fs, crank_nicolson_fs,
rk_fs, rkf_fs, sdirk_fs,
nystroem_fs, adams_bashforth_fs, adams_moulton_fs, bdf_fs,
ode_fs
for the solution of initial value problems for ordinary differential
equations in R^n,
x'(t) = f(t, x(t)),  t_0 < t ≤ T,
x(t_0) = x_0,
with fixed step-size methods. All functions plot the computed solution
according to their argument pc.
The function explicit_euler_fs implements the explicit Euler scheme
(see [6, Algorithmus I.2.1], [8, §IV.1], [10, §I.2.2]). It has the following
arguments, the first four being mandatory:
f: force function f,
x0: initial value x_0,
T: final time T,
nt: number of time steps, the step-size is h = (T − t_0)/nt,
t0: initial time t_0, default is t_0 = 0,
pc: integer vector of length 2 fixing the components of x that will
be plotted, default settings are:
  [0,1]: for scalar odes, i.e. plot x(t) versus t,
  [1,2]: for systems of odes, i.e. plot the curve (x_1(t), x_2(t)).
Note that the user-defined function f must have the two arguments t
and x (in this order!).
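As an illustration (a sketch assuming the library is loaded as in section 4.3):

```scilab
// solve x'(t) = -x(t), x(0) = 1 on [0, 2]
// with the explicit Euler scheme and 100 time steps
function y = myforce(t, x)
    y = -x
endfunction

explicit_euler_fs(myforce, 1, 2, 100)
```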
The function implicit_euler_fs implements the implicit Euler scheme
(see [6, Algorithmus I.2.2], [8, §IV.1], [10, §I.2.2]). It has the following
arguments, the first five being mandatory:
f: force function f,
df: derivative D_x f of the force function f w.r.t. x,
x0: initial value x_0,
T: final time T,
nt: number of time steps, the step-size is h = (T − t_0)/nt,
t0: initial time t_0, default is t_0 = 0,
pc: integer vector of length 2 fixing the components of x that will
be plotted, default settings are:
  [0,1]: for scalar odes, i.e. plot x(t) versus t,
  [1,2]: for systems of odes, i.e. plot the curve (x_1(t), x_2(t)).
Note that the user-defined functions f and df must have the two
arguments t and x (in this order!).
The function crank_nicolson_fs implements the Crank-Nicolson
method (see [6, Algorithmus I.2.3], [8, §IV.1], [10, §I.2.2]). It has the same
arguments as implicit_euler_fs.
The function rk_fs implements the classical Runge-Kutta scheme (see
[6, §I.2], [8, §IV.6.4], [10, §I.2.3]). It has the same arguments as the
function explicit_euler_fs.
The function rkf_fs implements Runge-Kutta-Fehlberg schemes of
order 2 with 3 stages, of order 3 with 4 stages, and of order 4 with 6 stages
(see [6, Beispiel I.4.7], [8, Beispiel VI.7]). It has the same arguments as
the function explicit_euler_fs except that pc now is the seventh
argument and that a new sixth argument order determines the method.
This argument may take the values 2, 3 and 4, with 3 being its default
value.
The function sdirk_fs implements a strongly diagonally implicit
Runge-Kutta scheme of order 4 (see [8, §VI.4], [10, §I.2.4]). It has
the same arguments as implicit_euler_fs.
The function nystroem_fs implements the Nyström two-step method
(see [6, Beispiel I.5.1]). The first two approximations are computed
with the explicit Euler scheme. The function nystroem_fs has the
same arguments as explicit_euler_fs.
The function adams_bashforth_fs implements the Adams-Bashforth
two- and three-step schemes (see [6, Beispiel I.5.1]). The first two or
three steps, respectively, are computed with the explicit Euler scheme.
The function adams_bashforth_fs has the same arguments as the
function explicit_euler_fs except that pc now is the seventh argument
and that a new sixth argument steps determines the method. This
argument may take the values 2 or 3, with 2 being its default value.
The function adams_moulton_fs implements the Adams-Moulton two-
and three-step schemes (see [6, Beispiel I.5.1]). The first two or three
steps, respectively, are computed with the implicit Euler scheme. The
function adams_moulton_fs has the same arguments as
implicit_euler_fs except that pc now is the eighth argument and that a new
seventh argument steps determines the method. This argument may
take the values 2 or 3, with 2 being its default value.
The function bdf_fs implements the backward difference two- and
three-step schemes (see [6, Beispiel I.5.2], [8, §VI.8]). The first two
or three steps, respectively, are computed with the implicit Euler
scheme. The function bdf_fs has the same arguments as
implicit_euler_fs except that pc now is the eighth argument and that a new
seventh argument steps determines the method. This argument may
take the values 2 or 3, with 2 being its default value.
The function ode_fs allows a comparison of the above methods. Its
first argument is an array sm of strings which identify the methods that
should be compared and which may take the following values:
"ee": for the explicit Euler scheme,
"ie": for the implicit Euler scheme,
"cn": for the Crank-Nicolson scheme,
"rk": for the classical Runge-Kutta scheme,
"rkf2": for the Runge-Kutta-Fehlberg method of order 2,
"rkf3": for the Runge-Kutta-Fehlberg method of order 3,
"rkf4": for the Runge-Kutta-Fehlberg method of order 4,
"sdirk": for the strongly diagonally implicit Runge-Kutta method
of order 4,
"ny": for the Nyström method,
"ab2": for the Adams-Bashforth two-step method,
"ab3": for the Adams-Bashforth three-step method,
"am2": for the Adams-Moulton two-step method,
"am3": for the Adams-Moulton three-step method,
"bdf2": for the backward difference two-step method,
"bdf3": for the backward difference three-step method.
The remaining arguments of ode_fs are the same as for the function
implicit_euler_fs. Note that the argument df is also present for
explicit methods such as the explicit Euler or Adams-Bashforth schemes.
When comparing explicit schemes exclusively you may enter the void
argument [ ] for df.
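For example, comparing the explicit Euler and classical Runge-Kutta schemes on a scalar problem might look as follows (a sketch assuming the library is loaded as in section 4.3; note the void argument for df since only explicit schemes are compared):

```scilab
// compare the explicit Euler and classical Runge-Kutta schemes
// for x'(t) = -x(t), x(0) = 1 on [0, 2] with 50 time steps
function y = myrhs(t, x)
    y = -x
endfunction

ode_fs(["ee", "rk"], myrhs, [], 1, 2, 50)
```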
4.11. Initial value problems for odes: variable step-size methods. Numerics provides the functions
explicit_euler_vs, implicit_euler_vs, crank_nicolson_vs,
rk_vs, rkf_vs, sdirk_vs,
ode_vs
for the solution of initial value problems for ordinary differential equations in R^n
x′(t) = f(t, x(t)), t0 < t ≤ T,
x(t0) = x0
with variable step-size methods. The function rkf_vs controls the step-size by comparing methods of different order (see [6, Algorithmus I.4.6], [8, Algorithmus VI.5]); the other functions perform the step-size control by comparing results for different step-sizes (see [6, Algorithmus I.4.3], [8, Algorithmus VI.6]). All functions plot the computed solution according to their argument pc.
The function explicit_euler_vs implements the explicit Euler scheme
(see [6, Algorithmus I.2.1], [8, §IV.1], [10, §I.2.2]). It has the following
arguments, the first four being mandatory:
f: force function f ,
x0: initial value x0 ,
T: final time T ,
dt: initial step-size,
t0: initial time t0 , default is t0 = 0,
tol: required tolerance ε,
the function strives to obtain an approximation η(t) for the
solution x(t) such that ‖η(t) − x(t)‖ ≤ ε ‖x(t)‖ holds for all t,
default value is ε = 0.001,
ntmax: maximal number of time steps, default is 10000,
sfdt: safety factor for step-size variation,
reduce the step-size at most by the factor 1/sfdt and increase it
at most by the factor sfdt,
default value is 10,
maxr: maximal number of step-size reductions in a single time
step, default is 10,
pc: integer vector of length 2 fixing the components of x that will
be plotted, default settings are:
[0,1]: for scalar odes, i.e. plot x(t) versus t,
[1,2]: for systems of odes, i.e. plot the curve (x1 (t), x2 (t)).
Note that the user-defined function f must have the two arguments t
and x (in this order!).
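The step-size control by comparing results for different step-sizes can be sketched for a scalar problem as follows. This is illustrative Python, not the library's Scilab code; the names tol, sfdt and maxr merely mirror the arguments described above.

```python
import math

def euler_step(f, t, x, h):
    # one explicit Euler step for a scalar ODE x' = f(t, x)
    return x + h * f(t, x)

def explicit_euler_adaptive(f, x0, T, dt, t0=0.0, tol=1e-3, sfdt=10.0, maxr=10):
    # Compare one step of size h with two steps of size h/2; the
    # difference serves as the local error estimate.
    t, x, h = t0, x0, dt
    ts, xs = [t], [x]
    while t < T:
        h = min(h, T - t)
        for _ in range(maxr):
            coarse = euler_step(f, t, x, h)
            half = euler_step(f, t, x, h / 2)
            fine = euler_step(f, t + h / 2, half, h / 2)
            est = abs(coarse - fine)
            if est <= tol * max(abs(fine), 1e-12):
                break                       # step accepted
            h = max(h / 2, h / sfdt)        # step rejected: reduce h
        t, x = t + h, fine
        ts.append(t)
        xs.append(x)
        if est < 0.5 * tol * max(abs(x), 1e-12):
            h = min(2 * h, sfdt * h)        # comfortable accuracy: enlarge h
    return ts, xs

# decay problem x' = -x, x(0) = 1 on [0, 1]; exact solution exp(-t)
ts, xs = explicit_euler_adaptive(lambda t, x: -x, 1.0, 1.0, 0.1)
```

As in the library, an overly large initial step-size is simply rejected and reduced until the estimated relative error meets the tolerance.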
The function implicit_euler_vs implements the implicit Euler scheme
(see [6, Algorithmus I.2.2], [8, §IV.1], [10, §I.2.2]). It has the following
arguments, the first five being mandatory:
f: force function f,
df: derivative D_x f w.r.t. x of the force function f,
x0: initial value x0,
T: final time T,
dt: initial step-size,
t0: initial time t0, default is t0 = 0,
tol: required tolerance ε,
the function strives to obtain an approximation η(t) for the
solution x(t) such that ‖η(t) − x(t)‖ ≤ ε ‖x(t)‖ holds for all t,
default value is ε = 0.001,
ntmax: maximal number of time steps, default is 10000,
sfdt: safety factor for step-size variation,
reduce the step-size at most by the factor 1/sfdt and increase it
at most by the factor sfdt,
default value is 10,
maxr: maximal number of step-size reductions in a single time
step, default is 10,
pc: integer vector of length 2 fixing the components of x that will
be plotted, default settings are:
[0,1]: for scalar odes, i.e. plot x(t) versus t,
[1,2]: for systems of odes, i.e. plot the curve (x1 (t), x2 (t)).
Note that the user-defined functions f and df must have the two argu-
ments t and x (in this order!).
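The reason the implicit methods require the derivative argument df is the nonlinear equation solved in every time step. A scalar Python sketch of that inner Newton solve (fixed step-size for brevity; illustrative only, all names are ours):

```python
def implicit_euler_newton(f, df, x0, T, n, t0=0.0, newton_tol=1e-12):
    # Fixed-step implicit Euler for a scalar ODE: every step solves
    # the nonlinear equation y = x + h*f(t_new, y) by Newton's method,
    # which is what makes the derivative argument df necessary.
    h = (T - t0) / n
    t, x = t0, x0
    for _ in range(n):
        t_new = t + h
        y = x                               # Newton initial guess: previous value
        for _ in range(50):
            g = y - x - h * f(t_new, y)     # residual of the implicit equation
            dg = 1.0 - h * df(t_new, y)     # its derivative w.r.t. y
            step = g / dg
            y -= step
            if abs(step) < newton_tol:
                break
        t, x = t_new, y
    return x

# stiff decay x' = -50 x, x(0) = 1: implicit Euler stays stable for any h
x_end = implicit_euler_newton(lambda t, x: -50.0 * x, lambda t, x: -50.0, 1.0, 1.0, 200)
```

For this linear test problem Newton converges in one iteration; for genuinely nonlinear f the derivative df drives the iteration in the same way.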
The function crank_nicolson_vs implements the Crank-Nicolson method (see [6, Algorithmus I.2.3], [8, §IV.1], [10, §I.2.2]). It has the same arguments as implicit_euler_vs.
The function rk_vs implements the classical Runge-Kutta scheme (see [6, §I.2], [8, §IV.6.4], [10, §I.2.3]). It has the same arguments as the function explicit_euler_vs.
The function rkf_vs implements Runge-Kutta-Fehlberg schemes of order 2 with 3 stages, of order 3 with 4 stages and of order 4 with 6 stages (see [6, Beispiel I.4.7], [8, Beispiel VI.7]). It has the same arguments as the function explicit_euler_vs except that pc now is the eleventh argument and that a new tenth argument order determines the method. This argument may take the values 2, 3 and 4 with 3 being its default value.
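The alternative control strategy used by rkf_vs, comparing methods of different order, can be sketched with the simplest embedded pair: explicit Euler (order 1) inside Heun's method (order 2), which share the stage k1. This is illustrative Python, not the Fehlberg pairs the library actually implements.

```python
import math

def heun_euler_adaptive(f, x0, T, dt, t0=0.0, tol=1e-4, sfdt=10.0, maxr=10):
    # Embedded pair: Euler (order 1) and Heun (order 2) share the stage
    # k1; their difference estimates the local error of the Euler step.
    t, x, h = t0, x0, dt
    while t < T:
        h = min(h, T - t)
        for _ in range(maxr):
            k1 = f(t, x)
            k2 = f(t + h, x + h * k1)
            low = x + h * k1                # explicit Euler, order 1
            high = x + h / 2 * (k1 + k2)    # Heun's method, order 2
            est = abs(high - low)
            if est <= tol * max(abs(high), 1e-12):
                break                       # step accepted
            h = max(h / 2, h / sfdt)
        t, x = t + h, high                  # advance with the higher-order result
        # classical proposal for an order-1 error estimate: h ~ (tol/est)^(1/2)
        target = tol * max(abs(x), 1e-12)
        h = min(sfdt * h, 0.9 * h * (target / max(est, 1e-300)) ** 0.5)
    return t, x

# decay problem x' = -x on [0, 1]
t_end, x_end = heun_euler_adaptive(lambda t, x: -x, 1.0, 1.0, 0.2)
```

Advancing with the higher-order result ("local extrapolation") is the common choice and is cheap here since both results reuse the same stages.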
The function sdirk_vs implements a strongly diagonally implicit Runge-Kutta scheme of order 4 (see [8, §VI.4], [10, §I.2.4]). It has the same arguments as implicit_euler_vs.
The function ode_vs allows a comparison of the above methods. Its
first argument is an array sm of strings which identify the methods that
should be compared and which may take the following values:
"ee": for the explicit Euler scheme,
"ie": for the implicit Euler scheme,
"cn": for the Crank-Nicolson scheme,
"rk": for the classical Runge-Kutta scheme,
"rkf2": for the Runge-Kutta-Fehlberg method of order 2,
"rkf3": for the Runge-Kutta-Fehlberg method of order 3,
"rkf4": for the Runge-Kutta-Fehlberg method of order 4,
"sdirk": for the strongly diagonally implicit Runge-Kutta method of order 4.
The remaining arguments of ode_vs are the same as for the function
implicit_euler_vs. Note that the argument df is also present for
explicit methods such as the explicit Euler scheme. When comparing explicit
schemes exclusively you may enter the void argument [ ] for df.
4.12. Boundary value problems for odes. Numerics provides the
function
sturm_FD
for solving the Sturm-Liouville problem
−(pu′)′ + qu = f in (a, b),
u(a) = α,
u(b) = β
with a finite difference discretization on a uniform mesh with mesh-size
h (see [6, §II.4], [9, §I.6], [10, §I.5]). It has the following arguments,
the first two being mandatory:
nx: number of interior points, the mesh-size is h = 1/(nx + 1),
f: force function f ,
p: diffusion function p, default is p = 1,
q: reaction function q, default is q = 0,
uex: exact solution u if known, default is [ ] signifying an unknown
exact solution,
a: left end point a, default is a = 0,
b: right end point b, default is b = 1,
al: left boundary value α, default is α = 0,
be: right boundary value β, default is β = 0.
The function sturm_FD plots the discrete solution and additionally the
exact solution and the error if uex is provided. In this case sturm_FD
also prints the L2- and H1-norms of the error.
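For the default case p = 1, q = 0 the discretization reduces to the classical tridiagonal system with matrix tridiag(−1, 2, −1), which can be solved in O(nx) operations. An illustrative Python sketch, not the library's code (sturm_FD also handles variable p and q; all names below are ours):

```python
import math

def sturm_fd(f, nx, a=0.0, b=1.0, al=0.0, be=0.0):
    # Finite differences for -u'' = f on (a, b), u(a) = al, u(b) = be
    # (i.e. the defaults p = 1, q = 0): the matrix is tridiag(-1, 2, -1)
    # and is solved with the Thomas algorithm.
    h = (b - a) / (nx + 1)
    rhs = [h * h * f(a + (i + 1) * h) for i in range(nx)]
    rhs[0] += al                    # fold the boundary values into the rhs
    rhs[-1] += be
    c = [0.0] * nx                  # modified upper diagonal
    d = [0.0] * nx                  # modified right-hand side
    c[0] = -0.5
    d[0] = rhs[0] / 2.0
    for i in range(1, nx):
        denom = 2.0 + c[i - 1]
        c[i] = -1.0 / denom
        d[i] = (rhs[i] + d[i - 1]) / denom
    u = [0.0] * nx                  # back substitution
    u[-1] = d[-1]
    for i in range(nx - 2, -1, -1):
        u[i] = d[i] - c[i] * u[i + 1]
    return u

# -u'' = pi^2 sin(pi x) on (0, 1) has the exact solution u(x) = sin(pi x)
u = sturm_fd(lambda x: math.pi ** 2 * math.sin(math.pi * x), 99)
```

With nx = 99 the midpoint value u[49] approximates sin(π/2) = 1 up to the O(h^2) discretization error.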
4.13. Finite difference methods for pdes. Numerics provides the
functions
poisson_FD, heat_FD_1D, lax_friedrichs, lax_wendroff
for finite difference discretizations of partial differential equations.
The function poisson_FD implements the standard 5-point difference
discretization of the two dimensional Poisson equation with homogeneous
Dirichlet boundary conditions
−∆u = f in Ω,
u = 0 on Γ
(see [6, §III.3], [9, §III.1]). The domain Ω is the intersection of the
rectangle [xl1, xr1] × [xl2, xr2] with bottom-left corner xl and top-right
corner xr with the set {x ∈ R^2 : ω(x) < 0}, the square [−1, 1]^2 being
the default setting. The function poisson_FD plots the level lines of the
discrete solution and its maximal error if the exact solution is known.
It has the following arguments, the first two being mandatory:
f: force function f ,
np: number of interior grid points in each coordinate direction,
the mesh-sizes in the x- and y-direction are (xr1 − xl1)/(np + 1) and (xr2 − xl2)/(np + 1), resp.,
uex: exact solution if known, default setting is [ ] signifying an
unknown exact solution,
xbl: bottom-left corner xl , default is xl = (−1, −1),
xtr: top-right corner xr , default is xr = (1, 1),
fom: function ω describing the domain Ω, default is
ω(x) = ‖x‖∞ − 1,
nll: number of level lines for plotting the solution, default is 10.
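A Python sketch of the 5-point discretization on the unit square with homogeneous Dirichlet conditions, solved by Gauss-Seidel sweeps for simplicity. This is illustrative only: poisson_FD handles more general domains, defaults to [−1, 1]^2, and all names here are ours.

```python
import math

def poisson_gs(f, np_, sweeps=2000):
    # Standard 5-point stencil for -Laplace(u) = f on the unit square
    # with zero boundary values: -u(i-1,j) - u(i+1,j) - u(i,j-1)
    # - u(i,j+1) + 4 u(i,j) = h^2 f(i,j), relaxed by Gauss-Seidel.
    h = 1.0 / (np_ + 1)
    u = [[0.0] * (np_ + 2) for _ in range(np_ + 2)]
    rhs = [[h * h * f(i * h, j * h) for j in range(np_ + 2)] for i in range(np_ + 2)]
    for _ in range(sweeps):
        for i in range(1, np_ + 1):
            for j in range(1, np_ + 1):
                u[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j]
                                  + u[i][j - 1] + u[i][j + 1] + rhs[i][j])
    return u, h

# f = 2 pi^2 sin(pi x) sin(pi y) gives the exact solution sin(pi x) sin(pi y)
u, h = poisson_gs(lambda x, y: 2 * math.pi ** 2
                  * math.sin(math.pi * x) * math.sin(math.pi * y), 15)
mid = u[8][8]   # value at the midpoint (0.5, 0.5), exact value 1
```

Gauss-Seidel is chosen only to keep the sketch self-contained; the library offers far more efficient iterative solvers (cg, ssor, etc.) for such systems.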
The function heat_FD_1D solves the one dimensional heat equation
∂u/∂t = ∂²u/∂x² + f in (0, T) × (0, 1),
u(t, 0) = 0 in (0, T),
u(t, 1) = 0 in (0, T),
u(0, x) = u0 in (0, 1)
using a second order symmetric difference discretization for the spatial
derivative and the θ-scheme for the time discretization (see [6, §III.4],
[9, §III.2]). It plots the discrete solution at each intermediate time-step
and has the following arguments, the first three being mandatory:
T: final time T ,
nt: number of time steps, the fixed time step-size is τ = T/nt,
nx: number of interior grid points in x direction, the fixed spatial
mesh-size is h = 1/(nx + 1),
theta: parameter θ for the temporal discretization,
θ = 0 is the explicit Euler scheme,
θ = 0.5 is the Crank-Nicolson scheme,
θ = 1 is the implicit Euler scheme,
default is θ = 0.5,
u0: initial function u0 , default is u0 = 0,
f: force function f , default is f = 0.
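The θ-scheme described above can be sketched in Python for the homogeneous case f = 0: each time step solves a tridiagonal system whose matrix combines the identity with the spatial stencil. This is illustrative only; heat_FD_1D also handles a nonzero force function, and all names below are ours.

```python
import math

def thomas(sub, diag, sup, rhs):
    # solve a tridiagonal linear system by the Thomas algorithm
    n = len(diag)
    c = [0.0] * n
    d = [0.0] * n
    c[0] = sup[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):
        denom = diag[i] - sub[i] * c[i - 1]
        c[i] = sup[i] / denom if i < n - 1 else 0.0
        d[i] = (rhs[i] - sub[i] * d[i - 1]) / denom
    x = [0.0] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

def heat_theta(u0, T, nt, nx, theta=0.5):
    # theta-scheme for u_t = u_xx with zero boundary values and f = 0:
    # theta = 0 explicit Euler, 0.5 Crank-Nicolson, 1 implicit Euler
    h = 1.0 / (nx + 1)
    tau = T / nt
    r = tau / (h * h)
    u = [u0((i + 1) * h) for i in range(nx)]
    sub = [-theta * r] * nx
    diag = [1.0 + 2.0 * theta * r] * nx
    sup = [-theta * r] * nx
    for _ in range(nt):
        rhs = [0.0] * nx
        for i in range(nx):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < nx - 1 else 0.0
            rhs[i] = u[i] + (1.0 - theta) * r * (left - 2.0 * u[i] + right)
        u = thomas(sub, diag, sup, rhs)
    return u

# initial value sin(pi x): exact solution exp(-pi^2 t) sin(pi x)
u = heat_theta(lambda x: math.sin(math.pi * x), 0.1, 100, 49)
mid = u[24]                                  # value at x = 0.5
exact = math.exp(-math.pi ** 2 * 0.1)
```

With θ = 0.5 (Crank-Nicolson) the scheme is unconditionally stable and second order in both τ and h, which the test problem confirms.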
The functions lax_friedrichs and lax_wendroff implement the Lax-Friedrichs [3, Exercise 14.4] and Lax-Wendroff [3, Equation (14.29)]
finite difference schemes for the numerical solution of first order partial
differential equations of the form
∂u/∂t + a(x) ∂u/∂x = 0 in (0, T) × (0, 1),
u(t, 0) = u(t, 1) in (0, T),
u(0, x) = u0 in (0, 1).
They both plot the numerical solution at each time step and have the
following arguments, the first three being mandatory:
T: final time T ,
nx: number of interior grid points in x direction,
the fixed spatial mesh-size is h = 1/(nx + 1),
the number of time steps is automatically set to
nt = ⌈T · (nx + 1)⌉, which results in a constant time step τ ≤ h,
u0: initial function u0 ,
a: advection function a, default is a = 1,
rp: clear plot window every rp time steps, default is 20.
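An illustrative Python sketch of the Lax-Friedrichs scheme for constant a = 1 with periodic boundary conditions and the time-step choice described above (not the library's code; all names are ours):

```python
import math

def lax_friedrichs(u0, T, nx, a=1.0):
    # Lax-Friedrichs for u_t + a u_x = 0 with periodic boundaries:
    #   u_j^{n+1} = (u_{j+1} + u_{j-1})/2 - (a tau / 2h)(u_{j+1} - u_{j-1}).
    # The time step is chosen as in the guide so that tau <= h,
    # which is the CFL condition for |a| <= 1.
    h = 1.0 / (nx + 1)
    nt = math.ceil(T * (nx + 1))
    tau = T / nt
    lam = a * tau / (2.0 * h)
    # grid points x_j = j*h, j = 0..nx, with x = 1 identified with x = 0
    u = [u0(j * h) for j in range(nx + 1)]
    for _ in range(nt):
        un = u[:]
        for j in range(nx + 1):
            up = un[(j + 1) % (nx + 1)]
            um = un[(j - 1) % (nx + 1)]
            u[j] = 0.5 * (up + um) - lam * (up - um)
    return u

# transport of sin(2 pi x) with speed 1 over one full period
u = lax_friedrichs(lambda x: math.sin(2 * math.pi * x), 1.0, 99)
```

With nx = 99 the CFL number is exactly 1, so each step reduces to a pure shift and the profile returns to its initial state after T = 1; for smaller CFL numbers the scheme is noticeably dissipative.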
4.14. Linear optimization. Numerics provides the functions
simplex, autostart_simplex, dual_simplex, inner_point
for linear optimization problems
minimize c^t x subject to Ax = b, x ≥ 0.
All functions print values of the objective function c^t x and of the solution vector x depending on their argument prl.
The function simplex implements the basic simplex method (see [7, Algorithmus I.4.5]). It has the following arguments, the first three being mandatory:
clp: cost vector c^t ∈ R^n (clp should be a row-vector!),
Alp: constraint matrix A ∈ R^{m×n},
blp: constraint vector b ∈ R^m (blp should be a column-vector!),
Jlp: first basis J, default setting is J = {n − m + 1, . . . , n},
prl: print level determines the print output, default is 2,
(0) no print output,
(1) print optimal value c^t x at the end,
(2) print optimal value c^t x and vector x at the end,
(3) print value c^t x at every iteration,
(4) print value c^t x and vector x at every iteration,
(5) print value c^t x, vector x and simplex tableau at every iteration.
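The standard form above can be illustrated without any pivoting rule: for tiny problems one may simply enumerate all basic solutions and keep the best feasible one. This brute-force Python sketch is meant only to clarify the problem class that the simplex method solves far more efficiently; it is not the library's algorithm, and all names are ours.

```python
from itertools import combinations

def solve_square(M, rhs):
    # Gaussian elimination with partial pivoting for a small square
    # system; returns None if the matrix is (nearly) singular.
    n = len(rhs)
    A = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(A[r][k]))
        if abs(A[p][k]) < 1e-12:
            return None
        A[k], A[p] = A[p], A[k]
        for r in range(k + 1, n):
            fac = A[r][k] / A[k][k]
            for cidx in range(k, n + 1):
                A[r][cidx] -= fac * A[k][cidx]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (A[k][n] - sum(A[k][j] * x[j] for j in range(k + 1, n))) / A[k][k]
    return x

def lp_by_enumeration(c, A, b):
    # minimize c^t x subject to Ax = b, x >= 0, by checking every
    # basis of m columns -- viable only for tiny problems.
    m, n = len(A), len(c)
    best_val, best_x = None, None
    for basis in combinations(range(n), m):
        B = [[A[i][j] for j in basis] for i in range(m)]
        xb = solve_square(B, b)
        if xb is None or any(v < -1e-9 for v in xb):
            continue                        # singular basis or infeasible point
        x = [0.0] * n
        for j, v in zip(basis, xb):
            x[j] = v
        val = sum(ci * xi for ci, xi in zip(c, x))
        if best_val is None or val < best_val:
            best_val, best_x = val, x
    return best_val, best_x

# minimize -x1 - 2 x2 s.t. x1 + x2 + s1 = 4, x1 + 3 x2 + s2 = 6, x >= 0
val, x = lp_by_enumeration([-1.0, -2.0, 0.0, 0.0],
                           [[1.0, 1.0, 1.0, 0.0], [1.0, 3.0, 0.0, 1.0]],
                           [4.0, 6.0])
```

The optimum sits at the basic feasible point x = (3, 1, 0, 0) with value −5; the simplex method reaches the same vertex by pivoting between adjacent bases instead of enumerating all of them.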
The function autostart_simplex implements the simplex method with automatic initialization (see [7, Algorithmus I.4.14]). It has the following arguments, the first three being mandatory:
clp: cost vector c^t ∈ R^n (clp should be a row-vector!),
Alp: constraint matrix A ∈ R^{m×n},
blp: constraint vector b ∈ R^m (blp should be a column-vector!),
prl: print level determines the print output, default is 2,
(0) no print output,
(1) print optimal value c^t x at the end,
(2) print optimal value c^t x and vector x at the end,
(3) print value c^t x at every iteration,
(4) print value c^t x and vector x at every iteration,
(5) print value c^t x, vector x and simplex tableau at every iteration.
The function dual_simplex implements the dual simplex method (see [7, Algorithmus I.5.5]). It has the following arguments, the first three being mandatory:
clp: cost vector c^t ∈ R^n (clp should be a row-vector!),
Alp: constraint matrix A ∈ R^{m×n},
blp: constraint vector b ∈ R^m (blp should be a column-vector!),
Jlp: first basis J, default setting is J = {n − m + 1, . . . , n},
prl: print level determines the print output, default is 2,
(0) no print output,
(1) print optimal value c^t x at the end,
(2) print optimal value c^t x and vector x at the end,
(3) print value c^t x at every iteration,
(4) print value c^t x and vector x at every iteration,
(5) print value c^t x, vector x and simplex tableau at every iteration.
The function inner_point implements an inner point method for linear optimization (see [7, Algorithmus I.7.7]). It has the following arguments, the first three being mandatory:
clp: cost vector c^t ∈ R^n (clp should be a row-vector!),
Alp: constraint matrix A ∈ R^{m×n},
blp: constraint vector b ∈ R^m (blp should be a column-vector!),
xys: initial column-vector of the form [x; y; s] satisfying
Ax = b, A^t y + s = c, x > 0, s > 0,
default setting is xys = [ ], inner_point then tries to automatically determine vectors x, y and s with the above properties,
prl: print level determines the print output, default is 2,
(0) no print output,
(1) print optimal value c^t x at the end,
(2) print optimal value c^t x and vector x at the end,
(3) print value c^t x at every iteration,
(4) print value c^t x and vector x at every iteration,
exact: default is %f,
if %t, inner_point tries to compute the exact solution of the optimization problem by guessing an admissible basis from the computed approximate solution and starting a simplex algorithm with this basis,
itm: maximal number of iterations, default is 1000,
tol: tolerance ε, default is ε = 0.001,
the iteration is stopped if the relative change of two consecutive x-vectors in the maximum norm is less than ε or if the parameter µ of the inner-point method is less than ε.
4.15. Discrete optimization. Numerics provides the functions
branch_and_bound, cutting_planes,
dijkstra, floyd_warshall,
ford_fulkerson, minimal_cost_flow
for discrete optimization.
The functions branch_and_bound and cutting_planes implement branch-and-bound and cutting-planes methods, resp., for integer optimization problems
minimize c^t x subject to x ∈ Z^n, Ax = b, x ≥ 0
(see [7, Algorithmen II.1.8, II.1.12]). They print results according to their argument prl and have the following arguments, the first three being mandatory:
clp: cost vector c^t ∈ R^n (clp should be a row-vector!),
Alp: constraint matrix A ∈ R^{m×n},
blp: constraint vector b ∈ R^m (blp should be a column-vector!),
prl: print level determines the print output, default is 2,
(0) no print output,
(1) print optimal value c^t x at the end,
(2) print optimal value c^t x and vector x at the end,
(3) print value c^t x at every branch-and-bound or cutting-planes iteration,
(4) print value c^t x and vector x at every branch-and-bound or cutting-planes iteration,
(5) print value c^t x at every simplex iteration,
(6) print value c^t x and vector x at every simplex iteration.
The function dijkstra implements the Dijkstra algorithm for finding
a shortest (s, t)-path in a graph (see [7, Algorithmus II.3.3]). It prints
results according to its argument prl and has the following arguments,
the first one being mandatory:
Adjl: one of the following two objects:
• a square matrix such that Adjl[i,j] gives the length of
the arc connecting nodes i and j of the graph and equals
%inf if nodes i and j are not connected by an arc or if
i = j,
• a list of vectors [a,b,c] describing the properties of the
arcs such that c is the length of the arc connecting nodes
a and b,
first: number of starting node s, default is 1,
last: number of terminal node t, default is size(Adjl,"c"),
prl: print level, default is 2,
(0) no print output,
(1) print length of shortest path at end,
(2) print length of shortest path and path itself at end,
(3) print all intermediate values of vectors D, V and M (see
[7, Algorithmus II.3.3]).
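A Python sketch of the same algorithm on an adjacency matrix, using 0-based node numbers instead of the 1-based ones above (illustrative only; all names are ours):

```python
import heapq

def dijkstra(adj, first=0, last=None):
    # Dijkstra's algorithm: adj[i][j] is the arc length from i to j,
    # or float('inf') if there is no arc (the guide's %inf).
    n = len(adj)
    if last is None:
        last = n - 1
    dist = [float('inf')] * n
    pred = [None] * n
    dist[first] = 0.0
    heap = [(0.0, first)]
    done = [False] * n
    while heap:
        d, i = heapq.heappop(heap)
        if done[i]:
            continue                # stale heap entry, node already settled
        done[i] = True
        for j in range(n):
            nd = d + adj[i][j]
            if nd < dist[j]:
                dist[j] = nd
                pred[j] = i
                heapq.heappush(heap, (nd, j))
    path = []                       # reconstruct the path from the predecessors
    node = last
    while node is not None:
        path.append(node)
        node = pred[node]
    return dist[last], path[::-1]

inf = float('inf')
adj = [[inf, 7, 9, inf, inf, 14],
       [7, inf, 10, 15, inf, inf],
       [9, 10, inf, 11, inf, 2],
       [inf, 15, 11, inf, 6, inf],
       [inf, inf, inf, 6, inf, 9],
       [14, inf, 2, inf, 9, inf]]
length, path = dijkstra(adj, 0, 4)
```

For this graph the shortest path from node 0 to node 4 runs over nodes 2 and 5 with total length 20.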
The function floyd_warshall implements the Floyd-Warshall algorithm for finding the shortest paths between all pairs of nodes in a
graph (see [7, Algorithmus II.3.11]). It prints results according to
its argument prl and has the following arguments, the first one being mandatory:
Adjl: one of the following two objects:
• a square matrix such that Adjl[i,j] gives the length of
the arc connecting nodes i and j of the graph and equals
%inf if nodes i and j are not connected by an arc or if
i = j,
• a list of vectors [a,b,c] describing the properties of the
arcs such that c is the length of the arc connecting nodes
a and b,
prl: print level, default is 2,
(0) no print output,
(1) print matrix of path lengths,
(2) print matrices of path lengths and predecessors on shortest
paths.
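An illustrative Python sketch of the algorithm (note that the result matrix has zeros on the diagonal, whereas the input follows the guide's convention of %inf for missing arcs; all names are ours):

```python
def floyd_warshall(adj):
    # All-pairs shortest path lengths: adj[i][j] is the arc length,
    # float('inf') when there is no arc from i to j.
    n = len(adj)
    dist = [row[:] for row in adj]
    for i in range(n):
        dist[i][i] = 0.0            # a node reaches itself at cost 0
    for k in range(n):              # allow k as an intermediate node
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

inf = float('inf')
d = floyd_warshall([[inf, 3, inf, 7],
                    [8, inf, 2, inf],
                    [5, inf, inf, 1],
                    [2, inf, inf, inf]])
```

Here d[0][3] = 6 (via nodes 1 and 2) improves on the direct arc of length 7, and d[1][0] = 5 uses the path over nodes 2 and 3.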
The function ford_fulkerson implements the Ford-Fulkerson algorithm for finding a maximal (s, t)-flow x in a network (see [7, Algorithmus II.4.6]). It prints results according to its argument prl and
has the following arguments, the first one being mandatory:
Adjc: one of the following two objects:
• a square matrix such that Adjc[i,j] gives the capacity of
the arc connecting nodes i and j of the graph and equals
-%inf if nodes i and j are not connected by an arc or if
i = j,
• a list of vectors [a,b,c] describing the properties of the
arcs such that c is the capacity of the arc connecting nodes
a and b,
first: number of starting node s, default is 1,
last: number of terminal node t, default is size(Adjc,"c"),
prl: print level, default is 2,
(0) no print output,
(1) print the value of the maximal flow x,
(2) print the value of the maximal flow x and the corresponding
matrix where the element [i,j] gives the flow through the
arc connecting nodes i and j,
(3) print in every iteration the value of the current flow x,
(4) print in every iteration the value of the current flow x and
the corresponding matrix where the element [i,j] gives
the flow through the arc connecting nodes i and j.
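The augmenting-path idea can be sketched in Python with breadth-first search, i.e. the Edmonds-Karp variant of Ford-Fulkerson. This is illustrative only; "no arc" is encoded here as capacity 0 instead of the guide's -%inf, and all names are ours.

```python
from collections import deque

def max_flow(cap, s, t):
    # Ford-Fulkerson with BFS augmenting paths (Edmonds-Karp):
    # cap[i][j] is the capacity of the arc from i to j, 0 if absent.
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for an augmenting path in the residual network
        pred = [None] * n
        pred[s] = s
        q = deque([s])
        while q and pred[t] is None:
            i = q.popleft()
            for j in range(n):
                if pred[j] is None and cap[i][j] - flow[i][j] > 0:
                    pred[j] = i
                    q.append(j)
        if pred[t] is None:
            break                   # no augmenting path left: flow is maximal
        # bottleneck capacity along the path found
        bottleneck = float('inf')
        j = t
        while j != s:
            i = pred[j]
            bottleneck = min(bottleneck, cap[i][j] - flow[i][j])
            j = i
        # augment; the negative reverse entries allow later cancellation
        j = t
        while j != s:
            i = pred[j]
            flow[i][j] += bottleneck
            flow[j][i] -= bottleneck
            j = i
        total += bottleneck
    return total, flow

cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 3],
       [0, 0, 0, 0]]
value, flow = max_flow(cap, 0, 3)
```

For this small network the maximal (0, 3)-flow has value 5, saturating both arcs leaving the source.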
The function minimal_cost_flow implements algorithm II.4.16 of [7]
for finding an (s, t)-flow x with prescribed value Φ and minimal cost in
a network. It prints results according to its argument prl and has the
following arguments, the first three being mandatory:
Adjc: one of the following two objects:
• a square matrix such that Adjc[i,j] gives the capacity of
the arc connecting nodes i and j of the graph and equals
-%inf if nodes i and j are not connected by an arc or if
i = j,
• a list of vectors [a,b,c] describing the properties of the
arcs such that c is the capacity of the arc connecting nodes
a and b,
Adjp: one of the following two objects:
• a square matrix such that Adjp[i,j] gives the cost of the
arc connecting nodes i and j of the graph and equals %inf
if nodes i and j are not connected by an arc or if i = j,
• a list of vectors [a,b,c] describing the properties of the
arcs such that c is the cost of the arc connecting nodes a
and b,
val: requested value Φ of the flow,
first: number of starting node s, default is 1,
last: number of terminal node t, default is size(Adjc,"c"),
prl: print level, default is 2,
(0) no print output,
(1) print the value of the final flow x,
(2) print the value of the final flow x and the corresponding
matrix where the element [i,j] gives the flow through the
arc connecting nodes i and j,
(3) print in every iteration the value of the current flow x,
(4) print in every iteration the value of the current flow x and
the corresponding matrix where the element [i,j] gives
the flow through the arc connecting nodes i and j.
4.16. Nonlinear optimization. Numerics provides the functions
descent, augmented_lagrangian, nelder_mead
for nonlinear optimization.
The function descent realizes a descent algorithm for unconstrained
nonlinear optimization problems
minimize f (x) in R^n
(see [7, Algorithmus III.1.5]). The search direction is the negative
gradient of the objective function f. The line search is either an exact
one or an Armijo one with parameters c1 = c2 = 0.5 (see [7, Korollar
III.1.10]). In the first case the user must provide the Hessian matrix
D^2 f of the objective function. The function descent prints results
according to its argument prl and has the following arguments, the
first three being mandatory:
xp: initial guess x0,
fct: objective function f,
gfct: gradient Df of f (gfct should be a column-vector!),
Hfct: Hessian D^2 f of f,
if Hfct is provided the line search is an exact one otherwise it
is an Armijo one,
default is Hfct = [ ], i.e. Armijo line search,
prl: print level, default is 2,
(0) print optimal value of f ,
(1) print optimal value of f and corresponding point x,
(2) print current value of f at every iteration,
(3) print current value of f and x at every iteration,
tol: tolerance ε, default is ε = 1.0D−4,
itmax: maximal number N of iterations, default is N = 100,
the iteration terminates if ‖Df(x)‖ ≤ ε or if the number of iterations exceeds N.
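The Armijo variant can be sketched in Python: the step size is halved until a sufficient-decrease condition holds. This is an illustrative sketch, not the library's code; the names c1 and c2 mirror the parameters described above, everything else is ours.

```python
def descent_armijo(fct, gfct, x0, tol=1e-4, itmax=100, c1=0.5, c2=0.5):
    # Steepest descent with Armijo line search: starting from step 1,
    # multiply the step by c2 until
    #   f(x - s g) <= f(x) - c1 * s * ||g||^2.
    x = list(x0)
    for _ in range(itmax):
        g = gfct(x)
        norm2 = sum(gi * gi for gi in g)
        if norm2 ** 0.5 <= tol:
            break                   # gradient small enough: stop
        step = 1.0
        fx = fct(x)
        while True:
            xn = [xi - step * gi for xi, gi in zip(x, g)]
            if fct(xn) <= fx - c1 * step * norm2 or step < 1e-14:
                break               # sufficient decrease achieved
            step *= c2
        x = xn
    return x

# quadratic test problem f(x) = x1^2 + 2 x2^2 with minimum at the origin
xmin = descent_armijo(lambda x: x[0] ** 2 + 2 * x[1] ** 2,
                      lambda x: [2 * x[0], 4 * x[1]],
                      [1.0, 1.0])
```

On this quadratic the backtracking reaches the minimizer within a few iterations; for ill-conditioned problems plain steepest descent converges much more slowly.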
The function augmented_lagrangian implements an augmented Lagrangian algorithm for constrained nonlinear optimization problems
minimize f (x) subject to
f_i(x) ≤ 0, 1 ≤ i ≤ p,
f_j(x) = 0, p + 1 ≤ j ≤ m
with augmented Lagrange function
Λ(x, y, r) = f(x) + Σ_{i=1}^{p} (r_i/2) [(f_i(x) + y_i/r_i)^+]^2 + Σ_{j=p+1}^{m} (r_j/2) (f_j(x) + y_j/r_j)^2 − Σ_{k=1}^{m} y_k^2 / (2 r_k)
(see [7, Algorithmus III.6.9]). It prints results according to its argument
prl and has the following arguments, the first eight being mandatory:
xp: initial guess for the vector x,
yp: initial guess for the vector y,
rp: augmentation factor,
the vector r is given by r = rp (1, . . . , 1)t ,
fct: objective function f ,
gfct: gradient Df of f (gfct should be a column-vector!),
fctc: vector-valued constraint function (f_1, . . . , f_m)^t
(fctc should be a column-vector!),
gfctc: gradient of fctc (gradients of the component functions of
fctc should be column-vectors!),
nic: number p of inequality constraints,
the values p = 0 (no inequality constraints) and p = m (no equality constraints) are allowed,
the first nic components of the vector yp must be positive,
prl: print level, default is 2,
(0) print optimal value of f ,
(1) print optimal value of f and corresponding point x,
(2) print current value of f at every iteration,
(3) print current value of f and x at every iteration,
tol: tolerance ε, default is ε = 1.0D−4,
itmax: maximal number N of iterations, default is N = 100,
iteration terminates if the KKT-conditions are satisfied up to
tolerance ε or if the number of iterations exceeds N .
The function nelder_mead implements the simplex method of Nelder
and Mead for unconstrained nonlinear optimization problems
minimize f (x) in R^n
(see [7, Algorithmus III.8.1]). It prints results according to its argument
prl and has the following arguments, the first two being mandatory:
fct: objective function f ,
inx: either the dimension n of the problem or an n × (n + 1)
matrix with the initial guesses x0 , . . ., xn as column vectors,
in the first case the initial guesses are automatically set to the
origin x0 = 0 and to the unit vectors xi = ei , 1 ≤ i ≤ n,
prl: print level, default is 2,
(0) print optimal value of f ,
(1) print optimal value of f and corresponding point x,
(2) print current best value of f at every iteration,
(3) print current best value of f and corresponding point x at
every iteration,
itm: maximal number N of iterations, default is N = 100,
tol: tolerance ε, default is ε = 0.001,
iteration terminates if the variation of the function values is less
than ε or if the number of iterations exceeds N .
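A minimal Python sketch of the Nelder-Mead iteration with the standard reflection, expansion, contraction and shrink moves. It is deliberately simplified compared with [7, Algorithmus III.8.1] (no outside contraction, fixed coefficients); all names are ours.

```python
def nelder_mead(fct, simplex, itmax=200, tol=1e-8):
    # simplex: list of n+1 points, as in the matrix argument inx above.
    # Coefficients: reflection 1, expansion 2, contraction and shrink 0.5.
    n = len(simplex) - 1
    pts = [list(p) for p in simplex]
    for _ in range(itmax):
        pts.sort(key=fct)
        fbest, fworst = fct(pts[0]), fct(pts[-1])
        if fworst - fbest < tol:
            break                   # function values have collapsed
        centroid = [sum(p[i] for p in pts[:-1]) / n for i in range(n)]
        refl = [2 * centroid[i] - pts[-1][i] for i in range(n)]
        if fct(refl) < fbest:
            # reflection is the new best point: try to expand further
            expand_pt = [3 * centroid[i] - 2 * pts[-1][i] for i in range(n)]
            pts[-1] = expand_pt if fct(expand_pt) < fct(refl) else refl
        elif fct(refl) < fct(pts[-2]):
            pts[-1] = refl          # plain reflection accepted
        else:
            # contract the worst point towards the centroid
            contr = [0.5 * (centroid[i] + pts[-1][i]) for i in range(n)]
            if fct(contr) < fworst:
                pts[-1] = contr
            else:                   # last resort: shrink towards the best point
                pts = [pts[0]] + [[0.5 * (p[i] + pts[0][i]) for i in range(n)]
                                  for p in pts[1:]]
    pts.sort(key=fct)
    return pts[0]

# minimize (x1 - 1)^2 + (x2 + 2)^2 starting from the unit simplex
best = nelder_mead(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2,
                   [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
```

The method needs only function values, no gradients, which is exactly why the guide lists only fct among the mandatory arguments of nelder_mead.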
References
[1] Michaël Baudin, Introduction to Scilab. http://wiki.scilab.org/Tutorials
[2] Michaël Baudin, Programming in Scilab. http://wiki.scilab.org/Tutorials
[3] Arieh Iserles, A First Course in the Numerical Analysis of Differential Equations. Cambridge Texts in Applied Mathematics, Cambridge University Press, 1997.
[4] Eike Rietsch, An Introduction to Scilab from a Matlab User's Point of View. http://wiki.scilab.org/Tutorials
[5] Rüdiger Verfürth, Einführung in die Numerische Mathematik. http://www.rub.de/num1/files/lectures/EinfNumerik.pdf
[6] Rüdiger Verfürth, Numerik I. Gewöhnliche Differentialgleichungen und Differenzenverfahren für partielle Differentialgleichungen. http://www.rub.de/num1/files/lectures/NumDgl1.pdf
[7] Rüdiger Verfürth, Optimierung. http://www.rub.de/num1/files/lectures/Optimierung.pdf
[8] Rüdiger Verfürth, Numerische Mathematik für Maschinenbauer, Bauingenieure und Umwelttechniker. http://www.rub.de/num1/files/lectures/NumIng.pdf
[9] Rüdiger Verfürth, Vertiefung Numerische Mathematik für den Masterstudiengang UTRM. http://www.rub.de/num1/files/lectures/VertNum.pdf
[10] Rüdiger Verfürth, Numerical Methods and Stochastics. Part I: Numerical Methods. http://www.rub.de/num1/files/lectures/NMS.pdf
Index
adams_bashforth_fs, 13
adams_moulton_fs, 13
augmented_lagrangian, 24
autostart_simplex, 19
bdf_fs, 14
bicg_stab, 10
boolean, 2
branch_and_bound, 21
cg, 10
compare_solvers, 10
crank_nicolson_fs, 13
crank_nicolson_vs, 16
cubic_spline, 7
cutting_planes, 21
descent, 23
dijkstra, 21
dual_simplex, 20
explicit_euler_fs, 12
explicit_euler_vs, 15
floyd_warshall, 22
ford_fulkerson, 22
function, 4
gauss_rule, 8
gauss_seidel, 9
goertzel, 7
heat_FD_1D, 18
hermite, 6
implicit_euler_fs, 12
implicit_euler_vs, 15
inner_point, 20
inverse_power_iteration, 11
inverse_rayleigh, 12
jacobi, 9
lagrange, 6
lax_friedrichs, 18
lax_wendroff, 18
matrix, 3
midpoint_rule, 8
minimal_cost_flow, 22
nelder_mead, 25
neville, 5
newton, 8
number, 3
numerics_help, 5
nystroem_fs, 13
ode_fs, 14
ode_vs, 16
pbicg_stab_ssor, 10
pcg_ssor, 10
poisson_FD, 17
power_iteration, 11
rayleigh, 12
regula_falsi, 9
richardson, 9
rk_fs, 13
rk_vs, 16
rkf_fs, 13
rkf_vs, 16
romberg, 8
sdirk_fs, 13
sdirk_vs, 16
secant_rule, 9
simplex, 19
simpson_rule, 8
sparse matrix, 3
ssor, 9
string, 3
trapezoidal_rule, 8
vector, 3