Interval Analysis in Matlab
Gareth I. Hargreaves (Department of Mathematics, University of Manchester, Manchester M13 9PL, England; [email protected], http://www.ma.man.ac.uk/~hargreaves/)
December 18, 2002
Abstract
The introduction of fast and efficient software for interval arithmetic, such as
the MATLAB toolbox INTLAB, has resulted in the increased popularity of interval
analysis. We give an introduction to interval arithmetic and explain how
it is implemented in the toolbox INTLAB. A tutorial is provided for those who
wish to learn how to use INTLAB.
We then focus on the interval versions of some important problems in numerical
analysis. A variety of techniques for solving interval linear systems of equations are
discussed, and these are then tested to compare timings and accuracy. We consider
univariate and multivariate interval nonlinear systems and describe algorithms that
enclose all the roots.
Finally, we give an application of interval analysis. Interval arithmetic is used
to take account of rounding errors in the computation of Viswanath’s constant, the
rate at which a random Fibonacci sequence increases.
1 Introduction
The concept of interval analysis is to compute with intervals of real numbers in place
of real numbers. While floating point arithmetic is affected by rounding errors, and
can produce inaccurate results, interval arithmetic has the advantage of giving rigorous
bounds for the exact solution. One application arises when some parameters are not known
exactly but are known to lie within certain intervals; algorithms may then be implemented
using interval arithmetic, with the uncertain parameters represented as intervals, to produce an interval
that bounds all possible results.
If the lower and upper bounds of the interval can be rounded down and rounded up
respectively then finite precision calculations can be performed using intervals, to give
an enclosure of the exact solution. Although it is not difficult to implement existing
algorithms using intervals in place of real numbers, the result may be of no use if the
interval obtained is too wide. If this is the case, other algorithms must be considered or
new ones developed in order to make the interval result as narrow as possible.
The idea of bounding rounding errors using intervals was introduced by several people
in the 1950s. However, interval analysis is said to have begun with the 1966 book on the subject
by Moore [12]. Since then, thousands of articles have appeared and numerous
books have been published on the subject.
Interval algorithms may be used in most areas of numerical analysis, and are used
in many applications such as engineering problems and computer aided design. Another
application is in computer assisted proofs. Several conjectures have recently been proven
using interval analysis, perhaps most famously Kepler’s conjecture [4], which remained
unsolved for nearly 400 years.
The ability to alter the rounding mode on modern computers has allowed a variety
of software to be produced for handling interval arithmetic, but only recently has it been
possible to exploit high-performance computers. A specification for the Basic Linear Algebra
Subprograms (BLAS) has been defined which covers operations such as scalar products,
matrix–vector products and matrix multiplication; these are collected in the level 1, 2 and
3 BLAS. Computer manufacturers implement these routines so that the BLAS are fast on
their particular machines, which allows fast portable code to be written. Rump [17]
showed that by expressing intervals by the midpoint and radius, interval arithmetic can
be implemented entirely using BLAS. This gives a fast and efficient way of performing
interval calculations, in particular, vector and matrix operations, on most computers.
Rump used his findings to produce the MATLAB toolbox INTLAB, which will be used
throughout this report.
The early sections of this report serve as an introduction to interval analysis and
INTLAB. The properties of intervals and interval arithmetic are discussed with details
of how they are implemented in INTLAB. A tutorial to INTLAB is given in Section 4
which shows how to use the various routines provided. Sections 5, 6 and 7 deal with
solving interval linear systems of equations and interval nonlinear equations. Various
methods are described, including the INTLAB functions for such problems, which are
then tested. Finally an application of interval analysis is given: interval arithmetic is
used to take account of rounding errors in the calculation of Viswanath’s constant.
2 Interval Arithmetic
2.1 Notation
Intervals will be represented by boldface, with the brackets “[·]” used for intervals defined
by an upper bound and a lower bound. Underscores will be used to denote lower bounds
of intervals and overscores will denote upper bounds. For intervals defined by a midpoint
and a radius the brackets “< · >” will be used.
An interval is defined by its endpoints as
$$\mathbf{x} = [\underline{x}, \overline{x}] = \{x \in \mathbb{R} : \underline{x} \le x \le \overline{x}\},$$
where $\underline{x}$ is called the infimum and $\overline{x}$ is called the supremum. The set of all intervals over
$\mathbb{R}$ is denoted by $\mathbb{IR}$, where
$$\mathbb{IR} = \{[\underline{x}, \overline{x}] : \underline{x}, \overline{x} \in \mathbb{R},\ \underline{x} \le \overline{x}\}.$$
The midpoint of $\mathbf{x}$,
$$\operatorname{mid}(\mathbf{x}) = \check{x} = \tfrac{1}{2}(\underline{x} + \overline{x}),$$
and the radius of $\mathbf{x}$,
$$\operatorname{rad}(\mathbf{x}) = \tfrac{1}{2}(\overline{x} - \underline{x}),$$
may also be used to define an interval $\mathbf{x} \in \mathbb{IR}$. An interval with midpoint $a$ and radius $r$
will be denoted by $\langle a, r\rangle$. If an interval has zero radius it is called a point interval or
thin interval, and contains a single point represented by
$$[x, x] \equiv x.$$
Although (1) defines the elementary operations mathematically, they are implemented
with
$$\begin{aligned}
\mathbf{x} + \mathbf{y} &= [\underline{x} + \underline{y},\ \overline{x} + \overline{y}], \\
\mathbf{x} - \mathbf{y} &= [\underline{x} - \overline{y},\ \overline{x} - \underline{y}], \\
\mathbf{x} \times \mathbf{y} &= [\min\{\underline{x}\underline{y}, \underline{x}\overline{y}, \overline{x}\underline{y}, \overline{x}\overline{y}\},\ \max\{\underline{x}\underline{y}, \underline{x}\overline{y}, \overline{x}\underline{y}, \overline{x}\overline{y}\}], \qquad (2)\\
1/\mathbf{x} &= [1/\overline{x},\ 1/\underline{x}] \quad \text{if } \underline{x} > 0 \text{ or } \overline{x} < 0, \\
\mathbf{x} \div \mathbf{y} &= \mathbf{x} \times 1/\mathbf{y}.
\end{aligned}$$
For the elementary interval operations, division by an interval containing zero is not
defined. It is often useful to remove this restriction to give what is called extended
interval arithmetic, which will be used in later sections. Extended interval arithmetic
must satisfy (1), which leads to the following rules. If $\mathbf{x} = [\underline{x}, \overline{x}]$ and $\mathbf{y} = [\underline{y}, \overline{y}]$ with
$\underline{y} \le 0 \le \overline{y}$ and $\underline{y} < \overline{y}$, then the rules for division are as follows:
$$\mathbf{x}/\mathbf{y} = \begin{cases}
[\overline{x}/\underline{y},\ \infty] & \text{if } \overline{x} \le 0 \text{ and } \overline{y} = 0, \\
[-\infty,\ \overline{x}/\overline{y}] \cup [\overline{x}/\underline{y},\ \infty] & \text{if } \overline{x} \le 0 \text{ and } \underline{y} < 0 < \overline{y}, \\
[-\infty,\ \overline{x}/\overline{y}] & \text{if } \overline{x} \le 0 \text{ and } \underline{y} = 0, \\
[-\infty,\ \infty] & \text{if } \underline{x} < 0 < \overline{x}, \\
[-\infty,\ \underline{x}/\underline{y}] & \text{if } \underline{x} \ge 0 \text{ and } \overline{y} = 0, \\
[-\infty,\ \underline{x}/\underline{y}] \cup [\underline{x}/\overline{y},\ \infty] & \text{if } \underline{x} \ge 0 \text{ and } \underline{y} < 0 < \overline{y}, \\
[\underline{x}/\overline{y},\ \infty] & \text{if } \underline{x} \ge 0 \text{ and } \underline{y} = 0.
\end{cases} \qquad (3)$$
The addition and subtraction of infinite or semi-infinite intervals are then defined in the
natural way, with the conventions $\infty + a = \infty$ and $-\infty + a = -\infty$ for finite $a$.
Some rules that are valid in real arithmetic do not hold in $\mathbb{IR}$. In general
$$\mathbf{x}(\mathbf{y} + \mathbf{z}) \ne \mathbf{x}\mathbf{y} + \mathbf{x}\mathbf{z},$$
except in special cases; therefore the distributive law does not hold. Instead there is the
sub-distributive law
$$\mathbf{x}(\mathbf{y} + \mathbf{z}) \subseteq \mathbf{x}\mathbf{y} + \mathbf{x}\mathbf{z}. \qquad (4)$$
Another example of a rule valid in real arithmetic that does not hold in $\mathbb{IR}$ is that, for
thick intervals,
$$\mathbf{x} - \mathbf{x} \ne 0.$$
For example, $\mathbf{x} = [2, 3]$ gives $\mathbf{x} - \mathbf{x} = [-1, 1] \ne 0$. This is because the result is an
interval containing the difference between all possible pairs of independent numbers
lying within $\mathbf{x}$, rather than the difference between two identical numbers.
An important result is the inclusion property, often labelled the fundamental theorem
of interval analysis: if $f(\mathbf{x}_1, \dots, \mathbf{x}_n)$ is an expression evaluated in interval arithmetic and
$$\mathbf{x}_1 \subseteq \mathbf{z}_1, \ \dots, \ \mathbf{x}_n \subseteq \mathbf{z}_n,$$
then
$$f(\mathbf{x}_1, \dots, \mathbf{x}_n) \subseteq f(\mathbf{z}_1, \dots, \mathbf{z}_n).$$
Proof. From the definition of the interval arithmetic operations (1) it follows that, if
$\mathbf{v} \subseteq \mathbf{w}$ and $\mathbf{x} \subseteq \mathbf{y}$, then
$$\mathbf{v} + \mathbf{x} \subseteq \mathbf{w} + \mathbf{y}, \quad \mathbf{v} - \mathbf{x} \subseteq \mathbf{w} - \mathbf{y}, \quad \mathbf{v}\mathbf{x} \subseteq \mathbf{w}\mathbf{y}, \quad \mathbf{v}/\mathbf{x} \subseteq \mathbf{w}/\mathbf{y}.$$
Since inclusion is transitive, $\mathbf{w} \subseteq \mathbf{x}$ and $\mathbf{x} \subseteq \mathbf{y}$ imply $\mathbf{w} \subseteq \mathbf{y}$, and repeated application of
these results to the subexpressions of $f$ gives the theorem.
For an interval vector $\mathbf{y} \in \mathbb{IR}^n$ the infinity norm is defined by $\|\mathbf{y}\|_\infty = \max\{|\mathbf{y}_i| : i = 1, \dots, n\}$.
A complex interval $\mathbf{z}$ is defined by two real intervals $\mathbf{x}$ and $\mathbf{y}$ as
$$\mathbf{z} = \mathbf{x} + i\mathbf{y} = \{x + iy : x \in \mathbf{x},\ y \in \mathbf{y}\},$$
and represents a rectangle of complex numbers in the complex plane with sides parallel
to the coordinate axes. Complex interval operations are defined in terms of the real
intervals $\mathbf{x}$ and $\mathbf{y}$ in the same way that complex operations on $z = x + iy$ are
defined in terms of x and y. For example, multiplication of two complex interval numbers
z1 = x1 + iy1 and z2 = x2 + iy2 is defined by
$$\mathbf{z}_1 \times \mathbf{z}_2 = \mathbf{x}_1\mathbf{x}_2 - \mathbf{y}_1\mathbf{y}_2 + i(\mathbf{x}_1\mathbf{y}_2 + \mathbf{x}_2\mathbf{y}_1). \qquad (5)$$
There are several possible definitions for multiplication of circular complex intervals which
result in an overestimation that is less than when rectangular complex intervals are used.
A simple definition is that introduced by Gargantini and Henrici [3]: for two circular complex
intervals $\mathbf{z}_1 = \langle a_1, r_1\rangle$ and $\mathbf{z}_2 = \langle a_2, r_2\rangle$ the product is defined as $\langle a_1 a_2,\ |a_1| r_2 + |a_2| r_1 + r_1 r_2\rangle$.
2.5 Outward Rounding
The interval $\mathbf{x} = [\underline{x}, \overline{x}]$ may not be representable on a machine if $\underline{x}$ and $\overline{x}$ are not machine
numbers. On a machine $\underline{x}$ and $\overline{x}$ must be rounded, and the default is usually rounding to
the nearest representable number. An interval rounded in this way, $\tilde{\mathbf{x}}$, may not bound the
original interval $\mathbf{x}$.
In order that $\mathbf{x} \subseteq \tilde{\mathbf{x}}$, $\underline{x}$ must be rounded downward and $\overline{x}$ must be rounded upward,
which is called outward rounding. The ubiquitous IEEE standard for floating point arithmetic
[9] has four rounding modes: nearest, round down, round up and round towards
zero, thus making interval arithmetic possible on essentially all current computers.
The MATLAB toolbox INTLAB uses the routine setround to change the rounding
mode of the processor between nearest, round up and round down. This routine has been
used to create functions for input, output and arithmetic of intervals. An introduction
to INTLAB is given in the next section.
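As a minimal sketch of how outward rounding is used in practice, the following hypothetical function (an illustration written for this report, not part of INTLAB) encloses the sum of two intervals given as [infimum, supremum] pairs, using only setround and getround:

function z = ivadd(x, y)
%IVADD Illustrative outward-rounded addition of two intervals,
%      each given as an [infimum, supremum] pair of machine numbers.
rnd = getround;                     % remember the current rounding mode
setround(-1); zlow = x(1) + y(1);   % lower bound, rounded downward
setround(1);  zup  = x(2) + y(2);   % upper bound, rounded upward
setround(rnd);                      % restore the original rounding mode
z = [zlow, zup];

The resulting interval [zlow, zup] is guaranteed to contain the sum of any two numbers taken from the input intervals.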
3 Introduction to INTLAB
INTLAB can be used wherever MATLAB and IEEE 754 arithmetic are available.
The INTLAB toolbox by Siegfried Rump is freely available from
http://www.ti3.tu-harburg.de/~rump/intlab/index.html
with information on how to install and use it. Here version 4 is used. All routines
in INTLAB are MATLAB M-files, except for one routine, setround, for changing the
rounding mode, which allows for portability and high speed on various systems.
Interval vectors and matrices are entered in a similar way with the arguments being
vectors or matrices of the required size.
The function intval provides another way to enter an interval variable or can be used
to change a variable to interval type. This function gives an interval with verified bounds
of a real or complex value and is a vital part of verification algorithms. However, care has
to be taken when using it. It is well known that the value 0.1 cannot be represented exactly in
binary floating point arithmetic, so intval may be used to give a rigorous bound. Using
>> x = intval(0.1);
the variable x will not necessarily contain an interval including 0.1, since 0.1 is converted
to binary format before being passed to intval. The result is a thin interval with
>> rad(x)
ans =
0
Rigorous bounds can be obtained using intval with a string argument such as
>> x = intval(’0.1’)
intval x =
[0.09999999999999, 0.10000000000001]
which uses an INTLAB verified conversion to binary. It can be seen that x contains 0.1
since the radius is nonzero.
>> rad(x)
ans =
1.387778780781446e-017
Using a string argument with more than one value will produce an interval column vector
with components that are intervals that enclose the values entered. The structure may
be altered to a matrix using reshape or a column vector using the transpose command
“’”.
The infimum, supremum, midpoint and radius of an interval x can be obtained with
the commands inf(x), sup(x), mid(x) and rad(x) respectively. The default output is to
display uncertainties by "_". This means that an enclosure of the interval is given by the
midpoint as the digits displayed and the radius as one unit of the last displayed figure. For
example, the interval y = <1, 0.1> entered in midpoint/radius notation gives
>> y = midrad(1,0.1)
intval y =
1.0_____________
This output is the same as the interval that was entered. If the interval is changed to
y = <1, 0.2> then an overestimation is given.
>> y = midrad(1,0.2)
intval y =
1.______________
This suggests that the interval is in the interior of [0, 2] although it is stored as [0.8, 1.2].
The default may be changed, or the commands infsup(x) and midrad(x) can be
used to give the output in infimum/supremum or midpoint/radius notation. The output
is as the notation from Section 2.1, for example
>> x = infsup(0,1); infsup(x), midrad(x)
intval x =
[0.00000000000000, 1.00000000000000]
intval x =
<0.50000000000000, 0.50000000000001>
and interval output is verified to give rigorous bounds.
and B, with the rounding mode set down for C1 and up for C2. An outer estimate of
C = A × B is then calculated with upward rounding, with midpoint
$$\operatorname{mid}(C) = C_1 + \tfrac{1}{2}(C_2 - C_1)$$
and radius
$$\operatorname{rad}(C) = \operatorname{mid}(C) - C_1.$$
This gives an overestimation of the solution in 8n^3 flops. If the intervals of the matrices
are narrow then the overestimation will be small, and for any size of interval the overestimation is limited by a factor of 1.5. The default is to use the fast interval matrix
multiplication, but this can be changed so that the slow but sharp method is used.
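A minimal sketch of the enclosure just described, for the product of two point matrices A and B and assuming only INTLAB's setround and midrad, is:

setround(-1); C1 = A*B;            % product computed with downward rounding
setround(1);  C2 = A*B;            % product computed with upward rounding
Cmid = C1 + 0.5*(C2 - C1);         % midpoint, still computed rounding upward
Crad = Cmid - C1;                  % radius bounding all rounding errors
C = midrad(Cmid, Crad);            % outer estimate of A*B as an interval matrix
setround(0);                       % restore rounding to nearest

The handling of genuinely thick interval matrices in INTLAB involves further midpoint–radius products, but the same directed-rounding idea applies.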
For automatic differentiation, a function f(x_1, . . . , x_m) is decomposed into a sequence of intermediate quantities
$$t_i = g_i(t_1, \dots, t_{i-1}), \qquad (7)$$
where the t_i and g_i are scalar functions, the first m of the t_i being the independent variables of f, and only one elementary operation occurs in each g_i.
The functions gi are all differentiable since the derivatives of the elementary operations
are known. Using the chain rule on (7) gives
$$\frac{\partial t_i}{\partial t_j} = \delta_{ij} + \sum_{k=j}^{i-1} \frac{\partial g_i}{\partial t_k}\frac{\partial t_k}{\partial t_j}, \qquad \text{where } \delta_{ij} = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{otherwise}, \end{cases} \qquad (8)$$
In matrix form (8) can be written as $D_t = I + D_g D_t$, where
$$D_g = \left\{\frac{\partial g_i}{\partial t_j}\right\} = \begin{pmatrix} 0 & & & \\ \dfrac{\partial g_2}{\partial t_1} & 0 & & \\ \dfrac{\partial g_3}{\partial t_1} & \dfrac{\partial g_3}{\partial t_2} & 0 & \\ \vdots & \ddots & \ddots & \ddots \end{pmatrix}
\quad\text{and}\quad
D_t = \left\{\frac{\partial t_i}{\partial t_j}\right\} = \begin{pmatrix} 1 & & & \\ \dfrac{\partial t_2}{\partial t_1} & 1 & & \\ \dfrac{\partial t_3}{\partial t_1} & \dfrac{\partial t_3}{\partial t_2} & 1 & \\ \vdots & \ddots & \ddots & \ddots \end{pmatrix}.$$
This implies that (I − Dg)Dt = I, which can be solved for any of the columns in Dt
using forward substitution. To find all the partial derivatives of f , the first m columns
of Dt can be computed.
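A small numerical sketch of this forward substitution, assuming Dg is available as a strictly lower triangular matrix and m is the number of independent variables (both names are chosen here for illustration):

N = size(Dg,1);                  % number of intermediate quantities t_i
Dt = zeros(N,m);                 % will hold the first m columns of Dt
for j = 1:m
    e = zeros(N,1); e(j) = 1;    % j-th column of the identity
    for i = 1:N                  % forward substitution; unit diagonal
        Dt(i,j) = e(i) + Dg(i,1:i-1)*Dt(1:i-1,j);
    end
end

Each column gives the derivatives of every intermediate quantity with respect to one of the independent variables.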
An interval scalar, vector or matrix x is initialised to be of gradient type by the
following command.
>> x = gradientinit(x)
If an expression involving a variable of gradient type is computed then the value and
gradient are stored in the substructures .x and .dx respectively. For example, if an M-
file is created as
function y = f(x)
y = 2*x^2+sin(x);
the thin interval x = [1, 1] can be initialised to be of gradient type
>> x = intval(1);
>> x = gradientinit(x);
The values f(1) and f'(1) can be evaluated by
>> y = f(x)
intval gradient value y.x =
[2.84147098480789, 2.84147098480790]
intval gradient derivative(s) y.dx =
[4.54030230586813, 4.54030230586815]
which gives intervals containing f(1) and f'(1). Entering y.x gives f(x) and y.dx gives
the partial derivatives of f .
4 INTLAB Tutorial
Here we give a brief tutorial to using INTLAB. The user should work through this
tutorial by typing what appears after the command prompt “>>”. It is assumed that the
user is familiar with MATLAB and has INTLAB correctly installed. To start INTLAB
type:
>> startintlab
This adds the INTLAB directories to the MATLAB search path and initialises the required
global variables.
4.1 Input and Output
There are four ways to enter a real interval. The first is the function intval which
allows the input of a floating point matrix. The following creates a 2 × 2 matrix of point
intervals.
>> A = intval([0.1,2;3,4])
intval A =
0.1000 2.0000
3.0000 4.0000
The matrix will not necessarily contain an interval enclosing 0.1, since 0.1 is converted to
binary format before intval is called. Using a string argument overcomes this problem.
This example creates a vector with interval components that encloses the values entered.
>> A = intval(’0.1 2 3 4’)
intval A =
0.1000
2.0000
3.0000
4.0000
The first component now encloses 0.1. This can be seen since the radius is nonzero.
>> rad(A(1,1))
ans =
1.3878e-017
Notice that A is a column vector. All output using a string argument produces a vector
of this form. It may be changed to a 2 × 2 matrix by:
>> A = reshape(A,2,2)
intval A =
0.1000 3.0000
2.0000 4.0000
The third alternative is to give an interval by its midpoint and radius.
>> B = midrad([1,0;0.5,3],1e-4)
intval B =
1.000_ 0.000_
0.500_ 3.0000
Finally an interval may be input by its infimum and supremum.
>> C = infsup([-1,0;2,4],[-0.9,0;2.4,4.01])
intval C =
-0.9___ 0.0000
2.____ 4.00__
Complex intervals can be entered by midpoint/radius notation with the midpoint as
a complex number.
>> c = midrad(3-2i,0.01)
c =
3.00__ - 2.00__i
If a complex interval has its midpoint on the real line, the above method will result in a real
interval. Instead, the following should be used.
>> z = cintval(2,0.01)
intval z =
2.00__ + 0.0___i
This can also be used to enter any complex interval by its midpoint and radius.
The default output is to display uncertainties by "_". This means that an enclosure
of the interval is given by the midpoint as the digits displayed and the radius as one unit
of the last displayed figure.
The midpoint, radius, infimum and supremum of an interval may be obtained.
>> x = infsup(0.99,1.01);
>> mid(x)
ans =
1
>> rad(x)
ans =
0.0100
>> inf(x)
ans =
0.9900
>> sup(x)
ans =
1.0100
An interval may also be output in midpoint/radius or infimum/supremum notation.
>> infsup(x)
intval x = [0.9899, 1.0101]
>> midrad(x)
intval x = <1.0000, 0.0101>
The default output may be changed to display all intervals in one of these two notations.
To change to midpoint/radius:
>> intvalinit(’displaymidrad’)
===> Default display of intervals by midpoint/radius
>> x
intval x = <1.0000, 0.0101>
To change back to the original default:
>> intvalinit(’display_’)
===> Default display of intervals with uncertainty
>> x
intval x =
1.00__
To change to infimum/supremum:
>> intvalinit(’displayinfsup’)
===> Default display of intervals by infimum/supremum
>> x
intval x = [0.9899, 1.0101]
The MATLAB output format may also be changed to give varying precision in the
output of intervals.
>> format long; x
intval x =
[ 0.98999999999998, 1.01000000000001]
>> format short; x
intval x =
[ 0.9899, 1.0101]
4.2 Arithmetic
All the standard arithmetic operations can be used on intervals. An arithmetic operation
uses interval arithmetic if one of the operands is of interval type.
>> y = 3*midrad(3.14,0.01)^2
intval y =
[ 29.3906, 29.7676]
For multiplication of two matrices containing thick intervals, the default is to use a fast
matrix multiplication which gives a maximum overestimation by a factor of 1.5 in radius.
>> A = midrad(rand(100),1e-4);
>> tic; A*A; toc;
elapsed_time =
0.0500
This can be changed to the slower but sharper matrix multiplication.
>> intvalinit(’sharpivmult’)
===> Slow but sharp interval matrix multiplication in use
>> tic; A*A; toc;
elapsed_time =
1.1000
To change back to the default:
>> intvalinit(’fastivmult’)
===> Fast interval matrix multiplication in use
(maximum overestimation factor 1.5 in radius)
Standard functions, such as sin, cos and log can be used with intervals. The default is
to use rigorous standard functions which have been verified to give correct and narrow
enclosures. The alternative is to use faster approximate functions which give a wider
enclosure.
>> x = intval(1e5);
>> intvalinit(’approximatestdfcts’), rad(sin(x))
===> Faster but not rigorous standard functions in use
ans =
2.2190e-011
>> intvalinit(’rigorousstdfcts’), rad(sin(x))
===> Rigorous standard functions in use
ans =
1.3878e-017
INTLAB provides functions for many of the properties of intervals from Section 2.2.
For example, the intersection of two intervals can be calculated with intersect, and the
interval hull of two intervals x and y with hull:
>> hull(x,y)
intval ans =
[ -1.0000, 3.0000]
The magnitude and mignitude of an interval are given by abss and mig:
>> abss(x)
ans =
2
>> mig(x)
ans =
0
The function in0 tests whether one interval is contained in the interior of another:
>> in0(p,x)
ans =
0
In this case the result is false since the infima of p and x are the same.
4.3 Verified functions
INTLAB provides a function, verifylss, which either performs a verified calculation of
the solution of a linear system of equations or computes an enclosure of the solution hull
of an interval linear system. The following example gives verified bounds on the solution
of a linear system of equations.
>> A = rand(5); b = ones(5,1);
>> verifylss(A,b)
intval ans =
[ 0.6620, 0.6621]
[ 0.7320, 0.7321]
[ 0.5577, 0.5578]
[ -0.2411, -0.2410]
[ 0.0829, 0.0830]
If a component of the linear system is of interval type then the backslash command can
be used to produce an enclosure on the solution hull.
>> A = midrad(A,1e-4);
>> A\b
intval ans =
[ 1.8192, 1.8365]
[ -0.2210, -0.2167]
[ -1.1849, -1.1618]
[ -0.1182, -0.1129]
[ 1.1484, 1.1555]
If A is sparse then a sparse solver is automatically used.
Verified solutions of nonlinear equations may be found using verifynlss. One-dimensional
nonlinear functions can be entered directly with the unknown as “x”.
>> verifynlss(’2*x*exp(-1)-2*exp(-x)+1’,1)
intval ans =
[ 0.4224, 0.4225]
The second parameter is an approximation to a root. To solve a system of nonlinear
equations, a function has to be created. For example, suppose the following function is
entered as an M-file.
function y = f(x)
y = x;
y(1) = 3*x(1)^2-x(1)+3*x(2)-5;
y(2) = 4*x(1)+2*x(1)^2+x(2)-7;
This nonlinear system can be solved for one of its roots, x = (1, 1)^T, provided a close
enough approximation is given.
>> verifynlss(@f,[2,3])
intval ans =
[ 0.9999, 1.0001]
[ 0.9999, 1.0001]
Verified bounds may also be obtained for eigenvalue/eigenvector pairs, for example by
>> A = wilkinson(9); [V,D] = eig(A);
>> verifyeig(A,D(1,1),V(:,1))
This generates the 9 × 9 Wilkinson eigenvalue test matrix, approximates the eigenvalues
and eigenvectors of A, and then produces verified bounds for the eigenvalue/eigenvector
pair approximated by D(1,1) and V(:,1).
Wilkinson’s eigenvalue test matrices have pairs of eigenvalues that are nearly but not
exactly the same. The function verifyeig can be used to bound clusters of eigenvalues.
>> verifyeig(A,D(8,8),V(:,8:9))
intval ans =
< 4.7462 + 0.0000i, 0.0010>
This produces bounds on eigenvalues approximated by D(8,8) and D(9,9). The required
input is an approximation of an eigenvalue, in this case D(8,8), and an approximation
of the corresponding invariant subspace, which is given by V(:,8:9).
The rounding mode of the processor can also be accessed directly. The current setting is obtained with getround.
>> y = getround
y =
0
The result y = 0 means that the rounding mode is currently set to nearest. This may
be changed to rounding downwards by
>> setround(-1)
or rounding upwards by
>> setround(1)
It is good practice to set the rounding mode to the original setting at the end of a routine.
>> setround(y)
As an example of the effect that changing the rounding mode makes, try entering a
short floating point computation with the rounding mode set to downward rounding.
Now try changing the rounding mode to nearest and upward rounding, and observe the
difference in the result.
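A minimal sketch of such an experiment is given below; the particular sum is an illustration chosen for this report (not necessarily the one intended in the original tutorial).

format long
setround(-1)                                  % downward rounding
s = 0; for i = 1:10, s = s + 0.1; end, s
setround(0)                                   % round to nearest
s = 0; for i = 1:10, s = s + 0.1; end, s
setround(1)                                   % upward rounding
s = 0; for i = 1:10, s = s + 0.1; end, s
setround(0)                                   % restore the default

The three displayed sums should differ slightly in the final digits, showing that ordinary MATLAB arithmetic respects the rounding mode set by setround.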
4.5 Gradients
INTLAB contains a gradient package which uses automatic differentiation. An example
of initialising a variable to be used with the gradient package is given by:
>> u = gradientinit(2)
gradient value u.x =
2
gradient derivative(s) u.dx =
1
>> intvalinit(’displayinfsup’);
===> Default display of intervals by infimum/supremum
>> x = gradientinit(infsup(0.999,1.001))
intval gradient value x.x =
[ 0.9989, 1.0010]
intval gradient derivative(s) x.dx =
[ 1.0000, 1.0000]
>> y = sin(x)*(4*cos(x)-2)^2
intval gradient value y.x =
[ 0.0209, 0.0229]
intval gradient derivative(s) y.dx =
[ -0.9201, -0.8783]
The values of the expression and the derivative are stored as y.x and y.dx respectively.
For functions of several unknowns the gradient package can be used to obtain the Jacobian
matrix. For example, bounds on the Jacobian of the function stored in the M-file f.m
from Section 4.3 can be evaluated.
>> x = gradientinit(midrad([1,2],1e-6));
>> y = f(x)
intval gradient value y.x =
[ 2.9999, 3.0001] [ 0.9999, 1.0001]
intval gradient derivative(s) y.dx =
intval (:,:,1) =
[ 4.9999, 5.0001] [ 7.9999, 8.0001]
intval (:,:,2) =
[ 3.0000, 3.0000] [ 1.0000, 1.0000]
Bounds are produced on the Jacobian of the function f.m, and are stored as y.dx.
[Figure 2: the solution set of the example interval linear system (shaded), with its interval hull (dashed).]
The solution set is shown by the shaded area of Figure 2. The hull of the solution set is
the interval vector with smallest radius containing Σ(A, b), and is denoted by ☐Σ(A, b).
For the previous example this is shown by the dashed line in Figure 2. An outer estimate
of ☐Σ(A, b) is sought.
then the upper triangular system is given by
$$U = \begin{pmatrix} [0.95, 1.05] & [3.95, 4.05] & [6.95, 7.05] \\ [0, 0] & [-4.31, -3.71] & [-6.46, -5.56] \\ [0, 0] & [0, 0] & [-1.23, 0.23] \end{pmatrix}.$$
This causes division by zero when using back-substitution. All the elements began with
a radius of 0.05, but the radius of U_{3,3} is 0.7301.
The feasibility of using intgauss.m depends on the matrix A ∈ IRn×n . For a general
A, problems may occur for dimensions as low as n = 3 if the radii of the elements are too
large. As the width of the elements decreases the algorithm becomes feasible for larger n.
However, even when intgauss.m is used with thin matrices, it is likely for the algorithm
to break down for n larger than 60.
Despite interval Gaussian elimination not being effective in general, it is suitable for
certain classes of matrices. In particular, realistic bounds for the solution set are obtained
for M-matrices, H-matrices, diagonally dominant matrices, tridiagonal matrices and 2 × 2
matrices. In the case where A is an M-matrix the exact hull Σ(A, b) is obtained for
many b; Neumaier [13] shows that if b ≥ 0, b ≤ 0 or 0 ∈ b then the interval hull of the
solution set is obtained.
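For example, intgauss.m from Appendix A.1 might be applied to a small interval M-matrix system as follows (the particular matrix is an illustration chosen for this report):

A = midrad([4 -1 -1; -1 4 -1; -1 -1 4], 0.05);   % an interval M-matrix
b = intval(ones(3,1));
x = intgauss(A,b)                                % encloses the solution set

Since b ≥ 0, by the result of Neumaier quoted above the computed interval vector should be, up to outward rounding, the exact hull of the solution set.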
Since $\|C\tilde{b}\|_\infty \le \|Cb\|_\infty$ and β < 1 is very likely for C being the inverse of the midpoint
matrix of A, we define the initial interval vector to be $\mathbf{x}^{(0)} = ([-\alpha, \alpha], \dots, [-\alpha, \alpha])^T$, where $\alpha = \|Cb\|_\infty/(1-\beta)$.
The iterations can be terminated if the radii of the components of x(i) are no longer
rapidly decreasing. The sum of these radii can be computed after each iteration and
compared with the previous sum. An INTLAB implementation of the algorithm is given
by the function kraw.m in Appendix A.1.
and
$$\alpha_i = \langle A\rangle_{ii} - 1/d_i, \qquad \beta_i = u_i/d_i - |b_i|.$$
Then Σ(A, b) is contained in the vector x with components
$$\mathbf{x}_i = \frac{b_i + [-\beta_i, \beta_i]}{A_{ii} + [-\alpha_i, \alpha_i]}.$$
A simplified proof is given by Neumaier [14].
In order to give a rigorous enclosure of the interval hull using floating point arithmetic,
rigorous upper bounds are required for α_i and β_i. These are obtained if a rigorous bound
B for ⟨A⟩^{-1} is used. The following explanation of how this is achieved is based on that
given in [14].
A property of the H-matrix A is that ⟨A⟩^{-1} is nonnegative. This suggests that an
upper bound B for ⟨A⟩^{-1} can be expressed in terms of B̃, an approximation to ⟨A⟩^{-1},
and vectors v ∈ R^n, w ∈ R^n satisfying I − ⟨A⟩B̃ ≤ ⟨A⟩vw^T, by
$$B = \tilde{B} + vw^T. \qquad (11)$$
By the definition of an H-matrix, there exists a vector v > 0 such that u = ⟨A⟩v > 0.
This vector v can be used to satisfy (11) by taking the vector w with components
$$w_k = \max_i \frac{-R_{ik}}{u_i},$$
where
$$R = \langle A\rangle\tilde{B} - I.$$
It is now left to find the vector v. Assuming there is a positive vector ũ such that
v = B̃ũ ≈ ⟨A⟩^{-1}ũ > 0, then A is an H-matrix, and if u = ⟨A⟩v ≈ ũ is positive then the
approximation B̃ is good enough. Since ⟨A⟩^{-1} is nonnegative, the vector ũ = (1, . . . , 1)^T
is sufficient to produce ⟨A⟩^{-1}ũ > 0.
The values u and R must be calculated with downward rounding, w and B calculated
with upward rounding, while B̃ and v can be calculated with nearest rounding. The
function hsolve.m in Appendix A.1 implements the method described above.
An implication that is a consequence of Brouwer's fixed point theorem [2] is used to determine whether or not
to terminate the algorithm. After an inclusion of the difference has been found, the
approximate solution xs is added to give an enclosure of Σ(A, b).
We can now show how the enclosure given by this method compares with the method described
in Section 5.3. A comparison with (10) suggests that if the initial enclosures are the same then
the Krawczyk iteration gives better results than the residual version. In practice, if A
and b are thin, the residual b − Axs can be enclosed with fewer rounding errors than b,
which leads to a tighter enclosure of the solution. This is shown in Section 5.6 where the
functions are tested.
For most matrices A an enclosure will be found; however, if after 7 iterations none is
found, the function reverts to the Hansen-Bliek-Rohn-Ning-Kearfott-Neumaier method
described in the previous section.
Table 1: Time taken in seconds to give verified bounds of solution to point system.
The aim of the verification algorithms is to produce a tight bound on the true solution.
Therefore, to measure the accuracy, the sum of the radii of the solution components is
calculated and displayed in Table 2.
Tables 1 and 2 show that the INTLAB function verifylss produces the tightest
bounds and does so in the least time. On all occasions, the first stage of verifylss
produces a suitable enclosure. The function requires 2n^3 flops for calculating the inverse
of a matrix, 4n^3 flops for interval matrix multiplication and O(n^2) flops for all the other
calculations. This gives a dominant term of 6n^3 flops, which in theory should produce
a factor of 9 between the time taken for the “\” and verifylss functions. The factor
is smaller in practice since matrix inversion and matrix multiplication have a larger
proportion of “level 3” flops.
We now consider thick systems by using A and b with random midpoints in the range
[−1, 1] with radius 1e-10. The timings for various dimensions are displayed in Table 3,
which show that again verifylss is the quickest function.
The accuracy of the solutions is measured as before and displayed in Table 4.
The results show that kraw produces the tightest bounds while being only marginally
slower than verifylss.
Table 5 shows the accuracy results when the radii are increased to 1e-8. The function
hsolve now produces the tightest bounds with kraw outperforming the INTLAB function.
The times are similar to those in Table 3, so although hsolve gives the most accurate
solution it is significantly slower than the other two functions.
For all linear systems tested the INTLAB function was quickest, and when used to
produce a verified bound for a point system it gave the sharpest bounds. As the radii of
the components were increased, the function kraw gave tighter bounds than verifylss,
and when they were increased further hsolve gave better bounds still. Since the sums of radii
are very similar for the three functions, the quickest function, verifylss, should be
used. If very high accuracy is required then the other two functions should be considered.
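A sketch of the kind of comparison carried out in this section, assuming kraw.m and hsolve.m from Appendix A.1 are on the MATLAB path, is:

n = 100;
A = midrad(2*rand(n)-1, 1e-10);       % random midpoints in [-1,1], radius 1e-10
b = midrad(2*rand(n,1)-1, 1e-10);
tic; x1 = verifylss(A,b); t1 = toc;   % INTLAB
tic; x2 = kraw(A,b);      t2 = toc;   % Krawczyk iteration
tic; x3 = hsolve(A,b);    t3 = toc;   % Hansen-Bliek-Rohn-Ning-Kearfott-Neumaier
[t1 t2 t3]                            % timings in seconds
[sum(rad(x1)) sum(rad(x2)) sum(rad(x3))]   % accuracy: sums of radii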
differentiable in the initial interval x^(0) ∈ IR. Enclosures of a real simple root may be
obtained, where a simple root x* satisfies f(x*) = 0 and f'(x*) ≠ 0. Furthermore, by
using a recursive procedure, bounds on all real simple roots of a function can be found.
The mean value theorem states that
$$0 = f(x) = f(\tilde{x}) + f'(\xi)(x - \tilde{x}),$$
for some ξ between x and x̃. If x and x̃ lie within the interval x^(0) ∈ IR then so does ξ,
and hence
$$x = \tilde{x} - \frac{f(\tilde{x})}{f'(\xi)} \;\Longrightarrow\; x \in N(\mathbf{x}^{(0)}) = \tilde{x} - \frac{f(\tilde{x})}{f'(\mathbf{x}^{(0)})}.$$
Therefore the intersection x(0) ∩ N (x(0) ) must contain the root, if x(0) contained a root
of f . Since x̃ ∈ x(0) , a suitable choice for x̃ is to take the midpoint x̌(0) . To take
account of rounding errors, x̌(0) must be a thin interval containing an approximation to
the midpoint of x(0) , and therefore we write it in boldface to show this.
Given an initial interval x(0) that contains a root, this leads to the interval version of
Newton’s method
$$\mathbf{x}^{(i+1)} = \mathbf{x}^{(i)} \cap \left( \check{\mathbf{x}}^{(i)} - \frac{f(\check{\mathbf{x}}^{(i)})}{f'(\mathbf{x}^{(i)})} \right), \qquad i = 1, 2, 3, \ldots \qquad (12)$$
The stopping criterion is taken to be x(i+1) = x(i) .
If f' has a zero in the interval x then clearly N(x) is not defined. Therefore if f has
a multiple root, or two distinct roots, in x then the iteration (12) cannot be used. A
necessary condition for the iteration to work is that x^(0) must contain at most one root,
and the root must be a simple one.
Moore [12] shows the following properties in the simple root case. If N (x) is defined
then either N (x) ∩ x is empty, in which case x does not contain a root, or N (x) ∩ x is
an interval which contains a root if x does. Furthermore if x does not contain a root,
then after a finite number of steps the iteration (12) will stop with an empty x^(i). The
convergence is also shown to be quadratic, since
$$\operatorname{rad}(\mathbf{x}^{(k+1)}) \le \alpha \operatorname{rad}(\mathbf{x}^{(k)})^2,$$
where α is a constant.
The iteration (12) can be used to find a simple root of a function if at most one
simple root is contained in the initial interval x(0) . By modifying this iteration and using
a recursive procedure with extended interval arithmetic, bounds can be obtained for all
the simple roots within x^(0). If f'(x) = [a, b] contains zero then, using extended interval
arithmetic from (3), this interval can be split so that f'(x) = [a, 0] ∪ [0, b] and hence
$$N(\mathbf{x}) = \check{\mathbf{x}} - \frac{f(\check{\mathbf{x}})}{[a, 0] \cup [0, b]}.$$
This produces two intervals which can then be considered separately.
As an example consider f(x) = x^2 − 2, with x^(0) = [−2, 3]. This gives x̌^(0) = 0.5,
f(x̌^(0)) = −1.75 and f'(x^(0)) = [−4, 0] ∪ [0, 6]. N(x^(0)) is calculated as
$$N(\mathbf{x}^{(0)}) = 0.5 - \frac{(-1.75)}{[-4, 0] \cup [0, 6]} = [-\infty, 0.0625] \cup [0.7917, \infty],$$
which gives
$$\mathbf{x}^{(1)} = [-2, 0.0625] \cup [0.7917, 3].$$
One of the intervals, say [0.7917, 3], is then considered to give the sequence of intervals
intval Xnew =
[0.99999999999999, 1.00000000000001]
Root in the interval
intval Xnew =
[1.99999999999994, 2.00000000000005]
Root in the interval
intval Xnew =
[2.99999999999960, 3.00000000000044]
Root in the interval
intval Xnew =
[3.99999999999955, 4.00000000000036]
Root in the interval
intval Xnew =
[4.99999999999973, 5.00000000000034]
Tight bounds are obtained for all five roots.
INTLAB does not have a univariate solver; however, the multivariate solver, described
in Section 7.3, can be used for this purpose. If a univariate equation and a close enough
approximation of a root are entered then tight bounds on the root can be obtained.
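For instance, bounds on √2 could be obtained with a call of the same form as in Section 4.3 (an illustrative example chosen for this report):

>> verifynlss('x^2-2', 1.4)

which should return a tight enclosure of 1.4142 . . . .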
$$J_{ij} = \frac{\partial f_i}{\partial x_j}, \qquad i, j = 1, \dots, n,$$
which can be formed using automatic differentiation, and x̃ ∈ x. If x∗ is a zero of f then
f (x∗ ) = 0 and therefore
−f (x̃) ∈ J(x)(x∗ − x̃). (13)
The interval linear system (13) can then be solved for x∗ to obtain an outer bound on
the solution set, say N (x̃, x). The notation includes both x̃ and x to show the dependence
on both terms. This gives the iteration
$$\mathbf{x}^{(k+1)} = \mathbf{x}^{(k)} \cap N(\tilde{x}^{(k)}, \mathbf{x}^{(k)}), \qquad (15)$$
for k = 0, 1, . . . and x̃^(k) ∈ x^(k). A reasonable choice for x̃^(k) is the centre, x̌^(k), of
x^(k); however, better choices are available. One example of determining x̃^(k) is to use a
noninterval version of Newton's method to find an approximation of a solution in x.
The various interval Newton methods are determined by how N (x̃(k) , x(k) ) is defined.
Essentially variations on the three operators described below are commonly used.
The Krawczyk operator is
$$K(\tilde{x}, \mathbf{x}) = \tilde{x} - Cf(\tilde{x}) + (I - CJ(\mathbf{x}))(\mathbf{x} - \tilde{x}),$$
where C is the preconditioning matrix, taken to be the midpoint inverse of J(x), and x̃ ∈ x. Although
an inverse is still required, it is the inverse of a real matrix rather than an interval matrix.
Replacing N(x̃, x) by K(x̃, x) in (15) leads to the Krawczyk iteration
$$\mathbf{x}^{(k+1)} = \mathbf{x}^{(k)} \cap K(\tilde{x}^{(k)}, \mathbf{x}^{(k)}).$$
Changing the notation N(x̃, x) to H(x̃, x) and defining
$$M = CJ(\mathbf{x}), \qquad b = Cf(\tilde{x}),$$
the interval Gauss–Seidel procedure proceeds component by component to give the iteration
$$H(\tilde{x}^{(k)}, \mathbf{x}^{(k)})_i = \tilde{x}_i^{(k)} - \frac{b_i + \sum_{j=1}^{i-1} M_{ij}\bigl(\mathbf{x}_j^{(k+1)} - \tilde{x}_j^{(k+1)}\bigr) + \sum_{j=i+1}^{n} M_{ij}\bigl(\mathbf{x}_j^{(k)} - \tilde{x}_j^{(k)}\bigr)}{M_{ii}}, \qquad (16)$$
$$\mathbf{x}_i^{(k+1)} = H(\tilde{x}^{(k)}, \mathbf{x}^{(k)})_i \cap \mathbf{x}_i^{(k)}. \qquad (17)$$
As an example, consider the nonlinear system
$$\begin{aligned} 2x_1 - x_2 - 2 &= 0, \\ \tfrac{7}{2}x_2 - x_1^2 - 4x_1 + 5 &= 0, \end{aligned} \qquad (18)$$
which has real solutions x = (1, 0)^T and x = (2, 2)^T. Using the function nlinkraw.m
with the initial interval x = ([0.8, 1.2], [−0.2, 0.2])^T produces the following output.
Number of iterations
8
intval ans =
[0.99999999999999, 1.00000000000001]
[-0.00000000000001, 0.00000000000001]
In this case, appropriate bounds are given around one of the roots in 8 iterations. Using
the same starting interval with nlinhs.m, the same bounds are given in 3 iterations.
Number of iterations
3
intval ans =
[0.99999999999999, 1.00000000000001]
[-0.00000000000001, 0.00000000000001]
If the width of the initial interval is increased slightly to x = ([0.5, 1.5], [−0.5, 0.5])^T,
the function nlinkraw.m fails to find tighter bounds on a root, and the initial interval
is output. The function nlinhs.m produces a tight enclosure of the root (1, 0)^T, as
before.
Increasing the width further, so that the initial interval x = ([0.5, 3], [−0.5, 3])^T
contains both real roots, produces the following output for the function nlinkraw.m.
Number of iterations
1
intval ans =
[0.50000000000000, 3.00000000000000]
[-0.50000000000001, 3.00000000000000]
Although the output is correct in that both roots are contained within the interval, a
tight enclosure of a root is not obtained. Applying the same initial interval with the
function nlinhs.m produces an error due to division by zero.
These numerical examples show the importance of providing an interval of small radius
containing a root. If such an interval is given then the functions can be used to give tight
verified bounds on a root. However, if this is not the case then the width of the interval is
likely not to be decreased. The INTLAB function for solving nonlinear equations avoids
the problem of providing an initial interval with small radius, by requiring the user to
supply a noninterval approximation to a zero.
The test
$$K(\tilde{x}, \mathbf{x}) \subseteq \operatorname{int}(\mathbf{x}) \implies x^* \in \mathbf{x},$$
where x* denotes a zero of f, is used to verify that an enclosure has been found. For
example, even with the poor approximation (−10, −10)^T, verified bounds on the root
(1, 0)^T of (18) are obtained:
>> verifynlss(@f,[-10,-10])
intval ans =
[ 0.99999999999999, 1.00000000000001]
[ -0.00000000000001, 0.00000000000001]
If the approximation is closer to the root (2, 2)^T then bounds on this root are obtained.
For example, the approximation (1.51, 1)^T is only marginally closer to (2, 2)^T, but
bounds are given for this root.
>> verifynlss(@f,[1.51,1])
intval ans =
[ 1.99999999999999, 2.00000000000001]
[ 1.99999999999999, 2.00000000000001]
where α is a constant such that 0 < α < 1. If a box is sufficiently reduced then the
iteration is continued on the box, and if not, then it is split into subboxes.
In order to split a box, x, a dimension must be chosen to be split. The obvious choice
of splitting the largest component of x may result in the problem being badly scaled.
A good choice is to try to choose the component in which f varies most over x. The
quantity
$$T_j = \operatorname{rad}(\mathbf{x}_j) \sum_{i=1}^{n} |J_{ij}(\mathbf{x})|$$
measures this variation, and the box is split in the component r for which
$$T_r \ge T_j, \qquad j = 1, \dots, n.$$
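A minimal sketch of this splitting rule, assuming X is the current interval box and F.dx holds its interval Jacobian obtained from gradientinit:

T = rad(X(:)) .* sum(abss(F.dx), 1)';      % T_j = rad(x_j) * sum_i |J_ij(x)|
[Tmax, r] = max(T);                        % component r to bisect
xm = mid(X(r));
X1 = X; X1(r) = infsup(inf(X(r)), xm);     % first subbox
X2 = X; X2(r) = infsup(xm, sup(X(r)));     % second subbox

The two subboxes X1 and X2 are then processed separately.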
These basic ideas together with the Krawczyk operator are implemented as the func-
tion allroots.m in Appendix A.3. This function bounds all the roots of a nonlinear
system and is intended only for low dimensions, since it is not efficient for a large number
of equations. The function solves the problem of finding both solutions of (18). By using
the initial box
$$\mathbf{x}^{(0)} = \begin{pmatrix} [-10, 10] \\ [-10, 10] \end{pmatrix},$$
bounds are given on both solutions, with the function being called recursively on 36
occasions.
>> allroots(@f,infsup([-10,-10],[10,10]))
intval Xnew =
[0.99999999999999, 1.00000000000001]
[-0.00000000000001, 0.00000000000001]
intval Xnew =
[1.99999999999999, 2.00000000000001]
[1.99999999999999, 2.00000000000001]
Applying the function to the nonlinear system
$$\begin{aligned} y_1 &= 3x_1(x_2 - 2x_1) + \tfrac{x_2^2}{4}, \\ y_2 &= 3x_2(x_3 - 2x_2 + x_1) + (x_3 - x_1)^2, \\ y_3 &= 3x_3(20 - 2x_3 + x_2) + (20 - x_2)^2, \end{aligned}$$
with the initial box
$$\mathbf{x}^{(0)} = \begin{pmatrix} [-10, 5] \\ [-10, 5] \\ [-10, 5] \end{pmatrix},$$
produces bounds on the solutions at a much slower rate. Four roots are found in the
given box with the function being called 1312 times.
>> allroots(@f,infsup([-10,-10,-10],[5,5,5]))
intval Xnew =
[-3.49832993885909, -3.49832993885907]
[-6.10796755232559, -6.10796755232557]
[-7.73708170177157, -7.73708170177155]
intval Xnew =
[0.38582828617158, 0.38582828617159]
[-5.30358277070613, -5.30358277070610]
[-7.28997002509309, -7.28997002509307]
intval Xnew =
[-0.06683877075395, -0.06683877075394]
[0.91876351654562, 0.91876351654564]
[-4.15284241315239, -4.15284241315238]
intval Xnew =
[0.73257037812118, 0.73257037812119]
[1.27904348004932, 1.27904348004933]
[-3.99217902192722, -3.99217902192720]
Further ideas for speeding up the process of finding all solutions, including a full
algorithm, are given in [7]. The bisection part of the algorithm makes the process suitable
to use on a parallel machine, for which an algorithm and application is given in [19]. This
makes it feasible for larger problems to be solved.
8 Viswanath’s Constant Using Interval Analysis
In this section an application of interval analysis is shown. The rate at which a random
Fibonacci sequence increases is calculated, with rounding errors being taken into account
by the use of interval arithmetic.
with equal probability 1/2 at each step. If the random matrix M_i, chosen at the ith step,
is A or B with probability 1/2, then the recurrence can be written as
$$\begin{pmatrix} t_{n-1} \\ t_n \end{pmatrix} = M_{n-2} \cdots M_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix}.$$
Known results from the theory of random matrix products imply that
$$\frac{\log \| M_n \cdots M_1 \|}{n} \to \gamma_f \quad \text{and} \quad \sqrt[n]{|t_n|} \to e^{\gamma_f} \quad \text{as } n \to \infty,$$
for a constant γ_f, with probability 1. The aim is to determine e^{γ_f} as accurately as possible.
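As an informal illustration of this limit (not part of Viswanath's rigorous computation), one can simulate the random recurrence t_n = ±t_{n−1} ± t_{n−2}, with each sign chosen independently with probability 1/2, and estimate e^{γ_f} from |t_n|^{1/n}:

n = 100000;
t1 = 1; t2 = 1;                    % t_{k-2} and t_{k-1}, rescaled
logscale = 0;                      % accumulated logarithm of the rescaling
for k = 3:n
    tn = sign(rand-0.5)*t2 + sign(rand-0.5)*t1;   % random Fibonacci step
    sc = max(abs([tn t2]));                       % rescale to avoid overflow
    logscale = logscale + log(sc);
    t1 = t2/sc; t2 = tn/sc;
end
exp((logscale + log(abs(t2)))/n)   % rough estimate of exp(gamma_f)

Such a simulation converges slowly and gives only a few correct digits, which is why a more careful approach is needed.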
8.2 Viswanath’s Calculations
To find γf , Viswanath uses Furstenberg’s formula
$$\gamma_f = 2\int_0^\infty \operatorname{amp}(m)\, d\nu(m), \qquad (19)$$
where
$$\operatorname{amp}(m) = \frac{1}{4}\log\!\left(\frac{1 + 4m^4}{(1 + m^2)^2}\right)$$
and ν is an invariant probability measure on intervals [a, b] with ±1 ∉ [a, b], given by
$$\nu([a, b]) = \frac{1}{2}\left( \nu\!\left(\left[\frac{1}{-1 + b}, \frac{1}{-1 + a}\right]\right) + \nu\!\left(\left[\frac{1}{1 + b}, \frac{1}{1 + a}\right]\right) \right). \qquad (20)$$
[Figure 3: the first few levels of the Stern-Brocot tree, rooted at [−1/0, 1/0].]
To find the invariant measure ν, Viswanath uses the Stern-Brocot tree. This is an
infinite binary tree that divides the real line R recursively. In the Stern-Brocot tree, ∞
is represented by 1/0 and 0 by 0/1. The root of the tree is the real line [−1/0, 1/0], and its left and
right children are [−1/0, 0/1] and [0/1, 1/0] respectively. For a node [a/b, c/d] other than the root,
the left child is defined as [a/b, (a+c)/(b+d)] and the right child is defined as [(a+c)/(b+d), c/d]. Part of the
tree is shown in Figure 3.
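The mediant construction can be expressed as a small recursive function (an illustration written for this report, not the stern.m routine listed in the appendix):

function I = sbintervals(a, b, c, d, depth)
%SBINTERVALS Endpoints of the Stern-Brocot intervals a given number of levels
%            below the node [a/b, c/d]; one row [p q r s] per interval [p/q, r/s].
if depth == 0
    I = [a b c d];
else
    I = [sbintervals(a, b, a+c, b+d, depth-1);    % left child
         sbintervals(a+c, b+d, c, d, depth-1)];   % right child
end

For example, sbintervals(0, 1, 1, 0, k) lists the 2^k positive Stern-Brocot intervals k levels below the node [0/1, 1/0].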
Viswanath proves that every interval in the Stern-Brocot tree satisfies the invariant
measure (20) and can therefore be used to find an approximation of the integral (19). If
I_j, 1 ≤ j ≤ 2^d, are the positive Stern-Brocot intervals at depth d + 1, then
$$\gamma_f = 2\int_0^\infty \operatorname{amp}(m)\, d\nu(m) = 2\sum_{j=1}^{2^d} \int_{I_j} \operatorname{amp}(m)\, d\nu(m),$$
and hence, by forming an upper and a lower approximation to this integral, γ_f ∈ [p_d, q_d], where
$$p_d = 2\sum_{j=1}^{2^d} \min_{m \in I_j} \operatorname{amp}(m)\, \nu(I_j), \qquad q_d = 2\sum_{j=1}^{2^d} \max_{m \in I_j} \operatorname{amp}(m)\, \nu(I_j).$$
m = intval(stern([k+2:k+nrt+1]))./ ...
intval(stern(n-k:-1:n-(k+nrt-1)));
To calculate amp(m) the function amp.m is used. Since the variable m input into the
function is an interval, an enclosure of amp(m) is output, thus taking account of rounding
errors.
8.4 How to Use viswanath.m
The M-file viswanath.m, used for calculating Viswanath's constant, is given in Appendix A.4,
along with the additional functions called from within the program. The value
of d, which determines the depth d + 1 of the Stern-Brocot tree used, is set at the beginning
of the code and can be changed. Also, the appropriate line in flip.m has to be uncommented
depending on the value of d chosen.
To run the M-file type viswanath at the command prompt.
8.5 Results
The M-file viswanath.m was run on a machine with 511MB of memory running Linux
for d between 10 and 28. Increasing d divides the real line into smaller intervals and
hence produces tighter bounds on γ_f. This also increases the time taken by the program.
Table 6 shows the enclosures computed for γ_f and Table 7 shows the corresponding
enclosures for e^{γ_f}.
diameter of 1.29 × 10^{-9}. Here we have computed an enclosure of γ_f with approximately
the same diameter, without the need for rounding error analysis.
A MATLAB M-files
A.1 Linear Systems of Equations
intgauss.m
This routine performs interval Gaussian elimination.
function x = intgauss(A,b)
%INTGAUSS Interval Gaussian Elimination with mignitude pivoting.
%
% x = INTGAUSS(A,b)
%
% Solves Ax = b for linear systems of equations.
%
% INPUT: A coefficient matrix
% b right hand side vector
% OUTPUT: x interval solution
n = length(A);
for i = 1:n-1;
[maxmig,index] = max(mig(A(i:n,i)));
if maxmig <= 0
error(’All possible pivots contain zero.’)
end
k = index+i-1;
if k ~= i % Swap the rows if necessary.
A([i k],i:n) = A([k i],i:n);
b([i k]) = b([k i]);
end
for j = i+1:n;
mult = A(j,i)/A(i,i);
A(j,i+1:n) = A(j,i+1:n)-mult*A(i,i+1:n);
b(j) = b(j)-mult*b(i);
end
end
% Back substitution to form the interval solution (standard form assumed here).
x = intval(b);
x(n) = b(n)/A(n,n);
for i = n-1:-1:1
    x(i) = (b(i)-A(i,i+1:n)*x(i+1:n))/A(i,i);
end
kraw.m
This routine solves an interval linear system of equations by Krawczyk’s method.
function x= kraw(A,b)
%KRAW Solves Ax = b for interval linear systems.
%
% x = KRAW(A,b)
%
% Solves linear systems using Krawczyk’s method.
% If A and b have all real components then x is a
% verified bound on the solution. If A or b are of interval
% type then an outer estimate of the interval hull of the
% system is given.
%
% INPUT: A coefficient matrix
% b right hand side vector
% OUTPUT: x interval solution
n = length(A);
midA = mid(A);
C = inv(midA); % Preconditions system.
b = C*b;
CA = C*intval(A);
A = eye(n)-CA;
beta = norm(A,inf);
if beta >= 1;
error(’Algorithm not suitable for this A’)
end;
alpha = norm(b,inf)/(1-beta);
x(1:n,1) = infsup(-alpha,alpha);
s_old = inf;
s = sum(rad(x));
mult = (1+beta)/2;
while s < mult*s_old
x = intersect(b+A*x,x); % Krawczyk’s iteration.
s_old = s;
s = sum(rad(x));
end
hsolve.m
This routine solves an interval linear system of equations by a method based on work by
Hansen, Bliek, Rohn, Ning, Kearfott and Neumaier.
function x = hsolve(A,b)
%HSOLVE Solves Ax = b for interval linear systems.
%
% x = HSOLVE(A,b)
%
% Solves linear systems using a method motivated by Hansen,
% Bliek, Rohn, Ning, Kearfott and Neumaier.
% If A and b have all real components then x is a
% verified bound on the solution. If A or b are of interval
% type then an outer estimate of the interval hull of the
% system is given.
%
% INPUT: A coefficient matrix
% b right hand side vector
% OUTPUT: x interval solution
n = dim(A);
C = inv(mid(A));
A = C*A;
b = C*b;
dA = diag(A); % Diagonal entries of A.
A = compmat(A); % Comparison matrix.
B = inv(A);
v = abss(B*ones(n,1));
setround(-1)
u = A*v;
if ~all(min(u)>0) % Check positivity of u.
error(’A is not an H-matrix’)
else
dAc = diag(A);
A = A*B-eye(n); % A contains the residual matrix.
setround(1)
w = zeros(1,n);
for i=1:n,
w = max(w,(-A(i,:))/u(i));
end;
dlow = v.*w’-diag(B);
dlow = -dlow;
B = B+v*w; % Rigorous upper bound for exact B.
u = B*abss(b);
d = diag(B);
alpha = dAc+(-1)./d;
beta = u./dlow-abss(b);
x = (b+midrad(0,beta))./(dA+midrad(0,alpha));
end
intnewtall.m
This routine uses an interval version of Newton's method to bound all the roots of a univariate function in a given interval.
function intnewtall(f,X,tol)
%INTNEWTALL Finds all roots of a function in given range.
%
% INTNEWTON(f,X,tol)
%
% Uses an interval version of Newton’s method to provide
% rigorous bounds on the roots of a function f, to be
% specified seperately. Bounds are displayed as they are
% found. Roots are displayed if radius of enclosure < tol
% or if enclosure is no longer becoming tighter. If tol
% is not given then the later stopping criterion is used.
%
% INPUT: f function of nonlinear equation
% X intitial range
% tol tolerance used as stopping criterion
if nargin < 3
tol = 0;
end
x = intval(mid(X));
fx = feval(f,intval(x));
F = feval(f,gradientinit(X));
if X == Xnew
Xnew = infsup(NaN,NaN);
end
if ~isempty(Xnew)
intnewtall(@f,Xnew,tol);
end
else
N = x-fx/F.dx; %Performs Newton’s method.
Xnew = intersect(N,X);
if rad(Xnew) < tol | X == Xnew; %Stopping Criterion.
if intersect(0,feval(f,Xnew)) == 0
disp(’Root in the interval’)
Xnew
end
elseif ~isempty(Xnew)
intnewtall(@f,Xnew,tol);
end
end
nlinkraw.m
This routine finds bounds on a root of a nonlinear system of equations using the Krawczyk operator.
function Y = nlinkraw(f,X)
%NLINKRAW Bounds roots of nonlinear systems of equations.
%
% X = NLINKRAW(f,X)
%
% Uses the Krawczyk operator to produce bounds on
% a root in a given interval of a nonlinear equation f.
%
% INPUT: f A MATLAB function
% X Initial interval
% OUTPUT: Y Interval containing root
X = X(:);
n = length(X);
ready = 0; k = 0;
N = intval(zeros(n,1));
while ~ready
k = k+1;
F = feval(f,gradientinit(X));
C = inv(mid(F.dx));
x = mid(X);
fx = feval(f,intval(x));
N = x-C*fx+(eye(n)-C*(F.dx))*(X-x); %Krawczyk operator.
Xnew = intersect(N,X);
if isempty(Xnew)
error(’No root in box’)
elseif X == Xnew %Stopping criterion.
ready = 1;
else
X = Xnew;
end
end
disp(’Number of iterations’)
disp(k)
Y = X;
nlinhs.m
This routine finds bounds on the solution of a nonlinear system of equations using the
Hansen–Sengupta operator.
function X = nlinhs(f,X)
%NLINHS Bounds roots of nonlinear systems of equations.
%
% X = NLINHS(f,X)
%
% Uses the Hansen-Sengupta operator to produce bounds on
% a root in a given interval of a nonlinear equation f.
%
% INPUT: f A MATLAB function
% X Initial interval
% OUTPUT: Y Interval containing root
X = X(:);
Y = X;
n = length(X);
ready = 0; k = 0;
N = intval(zeros(n,1));
while ~ready
k = k+1;
F = feval(f,gradientinit(X));
A = F.dx;
C = inv(mid(A));
x = mid(X);
fx = feval(f,intval(x));
M = C*A;
b = C*fx;
if in(0,M(1,1))
error(’division by zero’)
end
%Hansen-Sengupta operator.
N(1) = x(1)-(b(1)+M(1,2:n)*(X(2:n)-x(2:n)))/M(1,1);
Y(1) = intersect(N(1),X(1));
for i = 2:n
if in(0,M(i,i))
error(’division by zero’)
end
N(i) = x(i)-(b(i)+M(i,1:i-1)*(Y(1:i-1)-x(1:i-1))+ ...
M(i,i+1:n)*(X(i+1:n)-x(i+1:n)))/M(i,i);
Y(i) = intersect(N(i),X(i));
end
if any(isempty(Y))
error(’No root in box’)
elseif X == Y
ready = 1; %Stopping Criterion.
else
X = Y;
end
end
disp(’Number of iterations’)
disp(k)
allroots.m
This routine bounds all solutions of a nonlinear system of equations in a given box.
function allroots(f,X)
%ALLROOTS Finds all solutions to nonlinear system of equations.
%
% ALLROOTS(f,X)
%
% Uses the Krawczyk operator with generalised bisection
% to find bounds on all solutions to a nonlinear
% system, given by the function f, in the box X.
%
% INPUT: f function of nonlinear system
% X initial box
ready = 0;
X = X(:);
n = length(X);
N = intval(zeros(n,1));
while ~ready
F = feval(f,gradientinit(X));
C = inv(mid(F.dx));
x = mid(X);
fx = feval(f,intval(x));
N = x-C*fx+(eye(n)-C*(F.dx))*(X-x); %Krawczyk operator.
Xnew = intersect(N,X);
if any(isempty(Xnew))
ready = 1;
elseif max(rad(X)-rad(Xnew)) <= 0.5 * max(rad(X))
ready = 1; %Checks if width sufficiently reduced.
else
X = Xnew;
end
end
startnum = 10;
if d > 25
maxnum = 24;
else
maxnum = d;
end
n = 2^d;
nrt = sqrt(n);
itable = intval(zeros(d+1,1));
g = (intval(1)+sqrt(intval(5)))/intval(2);
for i = 0:d
itable(i+1) = (g^(d-i)*(1+g)^(-d))/4;
end
global num
num = zeros(2^maxnum,1);
num(1) = 1;
num(2) = 1;
for i = 2:2:2^startnum
num(i+1) = num(i/2+1);
num(i+2) = num(i/2+1)+num(i/2+2);
end
for i = startnum+1:maxnum %First 2^maxnum Stern-Brocot coefficients.
num(2^(i-1)+1:2:2^i+1) = num([2^(i-1):2:2^i]./2+1);
num(2^(i-1)+2:2:2^i+2) = num([2^(i-1):2:2^i]./2+1)+ ...
num([2^(i-1):2:2^i]./2+2);
end
for i = 0:nrt-1
k = i*nrt;
m1 = intval((stern(k+1)))/intval((stern(n-k+1)));
left1 = AMP(m1);
if i < nrt/4
left(1) = left1;
m = intval(stern([k+2:k+nrt+1]))./ ...
intval(stern(n-k:-1:n-(k+nrt-1)));
b = flip(k+(0:nrt-1),flipno);
measure(1:nrt,1) = itable([b(1:nrt)+1]);
right = AMP(m);
left(2:nrt,1) = right(1:nrt-1);
larray2 = measure’*inf(right);
uarray2 = measure’*sup(left);
elseif i < nrt-1
left(1) = left1;
m = intval(stern(k+2:k+nrt+1))./ ...
intval(stern(n-k:-1:n-(k+nrt-1)));
b = flip(k+(0:nrt-1),flipno);
measure(1:nrt,1) = itable([b(1:nrt)+1]);
right = AMP(m);
left(2:nrt,1) = right(1:nrt-1);
larray2 = measure’*inf(left);
uarray2 = measure’*sup(right);
else %i = nrt-1.
left(1) = left1;
m = intval(stern(k+2:k+nrt+1))./ ...
intval(stern(n-k:-1:n-(k+nrt-1)));
b = flip(k+(0:nrt-1),flipno);
measure(1:nrt,1) = itable([b(1:nrt)]+1);
right = AMP(m);
right(nrt) = log(intval(4));
left(2:nrt,1) = right(1:nrt-1);
larray2 = measure’*inf(left);
uarray2 = measure’*sup(right);
end
l = l+larray2;
u = u+uarray2;
end
stern.m
This routine finds the Stern-Brocot tree coefficients.
function y = stern(x);
%STERN Calculates the Stern-Brocot coefficients.
%
% y = STERN(x)
%
% If length(x) < 2^maxnum then the coefficients are taken
% from the variable num, in viswanath.m. Otherwise
% a recursive procedure is carried out.
global num
n = length(x);
y = zeros(n,1);
maxnum = 2^24;
if x(n) <= maxnum
y=num(x);
else
for i = 1:n
if mod(x(i),2) == 1
y(i) = stern((x(i)-1)/2+1);
else
xim2d2 = (x(i)-2)/2;
y(i) = stern(xim2d2+1)+stern(xim2d2+2);
end
end
end
flip.m
This routine finds the index j into the variable itable used for finding ν(I_j)/2.
function j = flip(x,flipno)
%FLIP Index used in viswanath.m.
%
% j = FLIP(x,flipno)
%
% Index into array itable. The binary representation
% of x is taken and then odd bits are flipped. The
% variable j is taken as the number of 1’s.
n = length(x);
y = bitxor(flipno,x);
Q = dec2bin([y(1:n)]);
q = Q == ’1’;
j = sum(abss(q’));
amp.m
This routine produces bounds on amp(m).
function y = AMP(m)
%AMP Produces enclosures on amp(m).
m2 = m.^2;
m2p1 = m2+1;
y = log(((m2.^2).*4+1)./(m2p1.^2));
References
[1] C. Bliek. Computer Methods for Design Automation. PhD thesis, Massachusetts
Institute of Technology, August 1992.
[2] L. E. J. Brouwer. Über Abbildung von Mannigfaltigkeiten. Math. Ann., 71:97–115,
1912.
[3] I. Gargantini and P. Henrici. Circular arithmetic and the determination of polyno-
mial zeros. Numer. Math., 18:305–320, 1972.
[4] T. C. Hales. The Kepler conjecture. http://www.math.pitt.edu/~thales/countdown.
[5] E. R. Hansen. On solving systems of equations using interval arithmetic. Mathematics
of Computation, 22(102):374–384, April 1968.
[6] E. R. Hansen. Bounding the solution of interval linear equations. SIAM Journal on
Numerical Analysis, 29(5):1493–1503, October 1992.
[7] E. R. Hansen. Global Optimization Using Interval Analysis. Marcel Dekker, New
York, 1992.
[8] E. R. Hansen and S. Sengupta. Bounding solutions of systems of equations using
interval analysis. BIT, 21(2):203–211, 1981.
[9] IEEE standard for binary floating point arithmetic. Technical Report Std. 754–1985,
IEEE/ANSI, New York, 1985.
[10] J. B. Oliveira and L. H. de Figueiredo. Interval computation of Viswanath's constant.
Reliable Computing, 8(2):121–139, 2002.
[11] R. Krawczyk. Newton-Algorithmen zur Bestimmung von Nullstellen mit Fehlerschranken.
Computing, 4:187–201, 1969.
[12] R. E. Moore. Interval Analysis. Prentice-Hall, Englewood Cliffs, NJ, USA, 1966.
[13] A. Neumaier. Interval Methods for Systems of Equations. Cambridge University
Press, 1990.
[14] A. Neumaier. A simple derivation of the Hansen-Bliek-Rohn-Ning-Kearfott enclosure
for linear interval equations. Reliable Computing, 5(2):131–136, 1999.
[15] S. Ning and R. B. Kearfott. A comparison of some methods for solving linear interval
equations. SIAM Journal on Numerical Analysis, 34(4):1289–1305, August 1997.
[16] J. Rohn. Cheap and tight bounds: The recent result by E. Hansen can be made
more efficient. Interval Computations, (4):13–21, 1993.
[17] S. M. Rump. Fast and parallel interval arithmetic. BIT, 39(3):534–554, 1999.
[18] S. M. Rump. INTLAB — INTerval LABoratory. In Tibor Csendes, editor, Developments
in Reliable Computing, pages 77–105. Kluwer, Dordrecht, Netherlands, 1999.
[19] C. A. Schnepper and M. A. Stadtherr. Robust process simulation using interval
methods. Comput. Chem. Eng., 20:187–199, 1996.
[20] Divakar Viswanath. Random Fibonacci sequences and the number 1.13198824 . . . .
Mathematics of Computation, 69(231):1131–1155, July 2000.