
Numerical methods

Session 1: Principles of numerical mathematics

Pedro González Rodrı́guez

Universidad Carlos III de Madrid

January 29, 2020



Well-posedness and condition number of a problem

Consider the following expression:

F(x, d) = 0    (1)

in which we call x the unknown, d the data, and F the relation
between x and d.
1 If F and d are known, finding x will be called the "direct problem".
2 If F and x are known, finding d will be called the "inverse problem".
3 If x and d are known, finding F will be called the "identification problem".
In this course we will study the direct problem.



Well-posedness and condition number of a problem

Definition: We say that a problem is well-posed (or stable) if it
admits a unique solution x which depends continuously on the data.

Definition: We say that a problem is ill-posed if it is not well-posed.

Definition: We say that x depends continuously on the data if a
small change δd in the data produces a small change δx in the
solution. Mathematically:
If F(x + δx, d + δd) = 0, then:

∀η > 0, ∃K(η, d) : ‖δd‖ < η ⇒ ‖δx‖ ≤ K(η, d)‖δd‖    (2)

where K is a constant that depends on η and d.



Well-posedness and condition number of a problem

Example: Find the number of real roots of the polynomial
p(x) = x⁴ − (2a − 1)x² + a(a − 1), where a is the datum of the problem.
It is easy to check that there are four real roots if a ≥ 1, two if
a ∈ [0, 1), and none if a < 0. This is an ill-posed problem
because the solution does not depend continuously on the data.
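
As a quick illustration (not part of the original slide: the sample values of a and the helper num_real_roots are my own), the jump in the number of real roots near the critical data a = 0 and a = 1 can be seen numerically:

import numpy as np

def num_real_roots(a, tol=1e-8):
    # Coefficients of x^4 - (2a - 1)x^2 + a(a - 1), in decreasing degree.
    roots = np.roots([1.0, 0.0, -(2.0 * a - 1.0), 0.0, a * (a - 1.0)])
    # Count roots whose imaginary part is negligible.
    return int(np.sum(np.abs(roots.imag) < tol))

for a in [-0.01, 0.0, 0.5, 0.99, 1.0, 1.01]:
    print(f"a = {a:5.2f} -> {num_real_roots(a)} real roots")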



Well-posedness and condition number of a problem

Most problems are not so clearly ill-posed. To quantify the well- or
ill-posedness of a problem we define:

Definition: Relative condition number

K(d) = sup_{δd ∈ D} (‖δx‖/‖x‖) / (‖δd‖/‖d‖)    (3)

Definition: Absolute condition number

K_abs(d) = sup_{δd ∈ D} ‖δx‖/‖δd‖    (4)

where D is a neighborhood of the origin that denotes the admissible
perturbations of the data.
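
As an illustration (not from the slide: the helper name, the resolvent G, the datum d and the perturbation size are assumptions), the relative condition number (3) of a scalar problem x = G(d) can be estimated by sampling small admissible perturbations and keeping the largest amplification factor:

import numpy as np

def relative_condition_number(G, d, eta=1e-6, samples=1000, seed=0):
    rng = np.random.default_rng(seed)
    x = G(d)
    best = 0.0
    for _ in range(samples):
        dd = eta * rng.uniform(-1.0, 1.0)   # admissible perturbation in D
        dx = G(d + dd) - x                  # induced change in the solution
        best = max(best, (abs(dx) / abs(x)) / (abs(dd) / abs(d)))
    return best

# Example: for x = G(d) = d^2 the relative condition number is 2 for any d != 0.
print(relative_condition_number(lambda d: d**2, 3.0))   # ~2.0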



Well-posedness and condition number of a problem

Note: You can use any norm you want.

Definition: We say a problem is "ill-conditioned" if K is "big",
where the definition of big depends on the problem.

It is important to understand that the conditioning of a problem
does not depend on the algorithm used to solve it: you can
develop both stable and unstable algorithms for well-posed problems.
The concept of stability for algorithms will be defined later on.
Having a "big" or even infinite condition number does not imply
that the problem is ill-posed. Some ill-posed problems can be
reformulated as equivalent problems (that is, problems with the
same solution) which are well-posed.



Well-posedness and condition number of a problem

If a problem admits a unique solution, then there exists a mapping
G, called the resolvent, between the data set and the solution set
such that:

x = G(d), that is, F(G(d), d) = 0    (5)

According to this, and assuming G is differentiable at d (G′(d)
exists), the Taylor expansion of G is

G(d + δd) − G(d) = G′(d)δd + o(‖δd‖) for δd → 0

This lets us rewrite the condition numbers in terms of the resolvent G:

K(d) ≈ ‖G′(d)‖ ‖d‖/‖G(d)‖   and   K_abs(d) ≈ ‖G′(d)‖



Well-posedness and condition number of a problem

Example of ill-conditioning: algebraic second-degree equation:
We want to calculate the solutions of x² − 2px + 1 = 0 with p ≥ 1.
Obviously x_± = p ± √(p² − 1).
We can formulate this problem as F(x, p) = x² − 2px + 1, where p
is the datum and x = (x_+, x_-) the solution. The resolvent is
G(p) = (p + √(p² − 1), p − √(p² − 1)) and its derivative is
G′(p) = (1 + p/√(p² − 1), 1 − p/√(p² − 1)).



Well-posedness and condition number of a problem

Then:

K(d) ≈ ‖G′(d)‖ ‖d‖/‖G(d)‖ = √((4p² − 2)/(p² − 1)) · |p| / √(4p² − 2) = |p|/√(p² − 1)

K_abs(d) ≈ ‖G′(d)‖ = √((4p² − 2)/(p² − 1))

If p ≫ 1 then the problem is well-conditioned (two distinct roots).
If p = 1 (one double root), then G is not differentiable, but in the
limit p → 1+ the problem is ill-conditioned, as lim_{p→1+} ‖G′(p)‖ = ∞.
However, the problem is not ill-posed. We can reformulate it as
F(x, t) = x² − ((1 + t²)/t)x + 1 with t = p + √(p² − 1). In this
case x_+ = t and x_- = 1/t, which coincide for t = 1, and
K(t) ≈ 1 ∀t ∈ ℝ.
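
A numerical check (not in the original slide: the sample values of p and the helper names are mine) of the two formulations: K(p) blows up as p → 1+, while the reformulated problem keeps K(t) ≈ 1:

import numpy as np

def K_p(p):
    # Relative condition number of x^2 - 2px + 1 = 0 with datum p.
    return abs(p) / np.sqrt(p**2 - 1.0)

def K_t(t):
    # Reformulation with datum t: G(t) = (t, 1/t), so K(t) = ||G'(t)|| |t| / ||G(t)|| = 1.
    Gp = np.array([1.0, -1.0 / t**2])
    G = np.array([t, 1.0 / t])
    return np.linalg.norm(Gp) * abs(t) / np.linalg.norm(G)

for p in [10.0, 1.1, 1.001, 1.000001]:
    t = p + np.sqrt(p**2 - 1.0)
    print(f"p = {p:<10} K(p) = {K_p(p):10.2e}   K(t) = {K_t(t):.3f}")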



Stability of numerical methods

Let’s assume that the problem F(x, d) = 0 is well-posed. Then, a
numerical method to approximate its solution will consist, in
general, of a sequence of approximate problems

F_n(x_n, d_n) = 0,   n ≥ 1

We would expect that x_n → x as n → ∞. For that it is necessary
that d_n → d and that F_n approximates F as n → ∞.


Stability of numerical methods

Definition: We say that F_n(x_n, d_n) = 0 is consistent if

F_n(x, d) = F_n(x, d) − F(x, d) → 0 as n → ∞

where x is the solution of F(x, d) = 0 corresponding to the datum d.

Definition: We say that a method is strongly consistent if
F_n(x, d) = 0 ∀n.

In some cases, when iterative methods are used, we can write them as

F_n(x_n, x_{n−1}, ..., x_{n−q}, d_n) = 0

where x_{n−1}, ..., x_{n−q} are given. In this case the property of
strong consistency becomes F_n(x, x, ..., x, d) = 0 ∀n ≥ q.



Stability of numerical methods

Examples:
1 Newton's method: x_n = x_{n−1} − f(x_{n−1})/f′(x_{n−1}) is strongly
consistent.
2 Composite midpoint rule: if x = ∫_a^b f(t) dt, then
x_n = H Σ_{k=1}^{n} f((t_k + t_{k+1})/2), n ≥ 1, with H = (b − a)/n
and t_k = a + (k − 1)H. This method to compute the integral is
consistent, but strongly consistent only if f is a piecewise
linear polynomial (a numerical check is sketched below).
In general, numerical methods obtained from the mathematical
problem by truncation of limit operations (like integrals,
derivatives, series, ...) are not strongly consistent.
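
The following sketch (the integrand, the interval and the grid sizes are my own choices) checks the consistency of the composite midpoint rule: the difference x_n − x goes to zero as n grows.

import numpy as np

def midpoint_rule(f, a, b, n):
    H = (b - a) / n
    t = a + (np.arange(n) + 0.5) * H   # midpoints (t_k + t_{k+1}) / 2
    return H * np.sum(f(t))

f = np.cos
a, b = 0.0, 1.0
exact = np.sin(1.0)                    # x = integral of cos over [0, 1]
for n in [4, 16, 64, 256]:
    print(n, abs(midpoint_rule(f, a, b, n) - exact))   # error ~ O(H^2)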



Stability of numerical methods

Definition: We say that a numerical method F_n(x_n, d_n) = 0 is
well-posed (or stable) if, for any fixed n, there exists a unique
solution x_n corresponding to the datum d_n, the computation
of x_n as a function of d_n is unique, and x_n depends
continuously on the data, i.e.:

∀η > 0, ∃K_n(η, d_n) : ‖δd_n‖ < η ⇒ ‖δx_n‖ ≤ K_n(η, d_n)‖δd_n‖    (6)


Stability of numerical methods
We can also define:

K_n(d_n) = sup_{δd_n ∈ D_n} (‖δx_n‖/‖x_n‖)/(‖δd_n‖/‖d_n‖),
K_abs,n(d_n) = sup_{δd_n ∈ D_n} ‖δx_n‖/‖δd_n‖    (7)

and from these:

K^num(d_n) = lim_{k→∞} sup_{n≥k} K_n(d_n)    (8)

K^num_abs(d_n) = lim_{k→∞} sup_{n≥k} K_abs,n(d_n)    (9)

K^num is the relative asymptotic condition number and K^num_abs is the
absolute asymptotic condition number of the numerical method
corresponding to the datum d_n.
The numerical method is said to be well-conditioned if the
condition number K^num is "small" for any admissible datum d_n,
and ill-conditioned otherwise.
Stability of numerical methods

We can also define the resolvent G_n of the numerical method:

x_n = G_n(d_n), that is, F_n(G_n(d_n), d_n) = 0

Assuming it is differentiable:

K_n(d_n) ≈ ‖G_n′(d_n)‖ ‖d_n‖/‖G_n(d_n)‖   and   K_abs,n(d_n) ≈ ‖G_n′(d_n)‖



Stability of numerical methods

Examples:
Sum and subtraction. The sum, defined as

f : ℝ² → ℝ,   (a, b) ↦ a + b

has derivative f′(a, b) = (1, 1)ᵀ, and thus its condition number is
K((a, b)) ≈ (|a| + |b|)/|a + b| ≈ 1 (when a and b have the same sign).
The subtraction, defined as

f : ℝ² → ℝ,   (a, b) ↦ a − b

has derivative f′(a, b) = (1, −1)ᵀ, and thus its condition number is
K((a, b)) ≈ (|a| + |b|)/|a − b|, which can be very big if a ≈ b, as
illustrated below.
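
A small sketch of this amplification in floating-point arithmetic (the sample values of a and b and the perturbation size are assumptions):

a, b = 1.0, 1.0 - 1e-12
K = (abs(a) + abs(b)) / abs(a - b)           # condition number from the slide
print(f"K((a, b)) ~ {K:.2e}")

rel_pert = 1e-10                             # relative perturbation of the datum b
db = b * rel_pert
rel_change = abs((a - (b + db)) - (a - b)) / abs(a - b)
print(f"relative change of a - b: {rel_change:.2e}")
print(f"amplification factor    : {rel_change / rel_pert:.2e}   (same order as K)")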



Stability of numerical methods

Finding the roots of x² − 2px + 1 = 0 is well-conditioned (for p ≫ 1),
but we can develop an unstable algorithm: x_- = p − √(p² − 1),
because this formula is subject to errors due to numerical
cancellation of digits in the subtraction. Newton's method can be
a stable algorithm to solve this problem:

x_n = x_{n−1} − (x_{n−1}² − 2p x_{n−1} + 1)/(2x_{n−1} − 2p)

The method's condition number is K_n(p) = |p|/|x_n − p|. To
compute K^num(p) we notice that, if the algorithm converges,
then x_n → x_+ or x_-; therefore |x_n − p| → √(p² − 1) and
K_n(p) → K^num(p) ≈ |p|/√(p² − 1), which is similar to the condition
number of the exact problem. Then, if p ≈ 1 the problem is
ill-conditioned. A comparison of the two algorithms is sketched below.
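
A rough comparison (not from the slide: the value of p, the initial guess, and the reference formula 1/(p + √(p² − 1)), which relies on x_+ x_- = 1, are my own choices):

import numpy as np

p = 1.0e8
x_minus_exact = 1.0 / (p + np.sqrt(p**2 - 1.0))   # stable reference value

# Unstable algorithm: subtraction of two nearly equal numbers.
x_unstable = p - np.sqrt(p**2 - 1.0)

# Newton's method for x^2 - 2px + 1 = 0, started near the small root.
x = 1e-9                                          # assumed initial guess
for _ in range(60):
    x = x - (x**2 - 2.0 * p * x + 1.0) / (2.0 * x - 2.0 * p)

print("unstable formula :", x_unstable)
print("Newton iteration :", x)
print("reference        :", x_minus_exact)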



Stability of numerical methods

Definition: We say that the numerical method F_n(x_n, d_n) = 0 is
convergent if and only if

∀ε > 0, ∃n₀(ε), ∃δ(n₀, ε) : ∀n > n₀, ∀δd_n with ‖δd_n‖ < δ(n₀, ε) ⇒
‖x(d) − x_n(d + δd_n)‖ ≤ ε

where d is an admissible datum, x(d) the corresponding solution,
and x_n(d + δd_n) is the solution of the numerical problem
F_n(x_n, d_n) = 0 with datum d + δd_n.



Stability of numerical methods

Definition: Absolute and relative errors:

E(x_n) = |x − x_n|,   E_rel(x_n) = |x − x_n|/|x|   (x ≠ 0)

Definition: Error by component:

E^c_rel(x_n) = max_{i,j} |(x − x_n)_{i,j}|/|x_{i,j}|   (x_{i,j} ≠ 0)
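
A minimal sketch of these error measures (the sample arrays are assumptions; norms replace the absolute values in the vector/matrix case):

import numpy as np

x = np.array([[1.0, 2.0], [3.0, 4.0]])        # exact solution
x_n = np.array([[1.01, 1.98], [3.03, 4.0]])   # computed approximation

E = np.linalg.norm(x - x_n)                        # absolute error
E_rel = E / np.linalg.norm(x)                      # relative error
E_rel_c = np.max(np.abs(x - x_n) / np.abs(x))      # component-wise relative error
print(E, E_rel, E_rel_c)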



Stability of numerical methods

Relations between stability and convergence


The concepts of stability and convergence are strongly connected.
If a (numerical) problem is well-posed, stability is a necessary
condition for convergence. Moreover, if the numerical problem is
consistent, stability is a sufficient condition for convergence. This
is known as the ”equivalence” or ”Lax-Richtmyer” theorem: for a
consistent numerical method, stability is equivalent to convergence.



Stability of numerical methods
Sources of errors in computational models
Whenever the numerical problem (NP) is an approximation of a
mathematical problem (MP), and this latter is in turn a model of a
physical problem (PP), we say that NP (F_n(x_n, d_n) = 0) is a
computational model for PP.
Denoting by x_ph the solution of PP, by x the solution of MP, and by
x̂ the computed solution, the global error e = x̂ − x_ph can be
interpreted as the sum of the error of the mathematical model
e_m = x − x_ph and the error of the computational model e_c = x̂ − x
(e = e_m + e_c). The latter includes e_a, the error induced by the
numerical algorithm and by the rounding errors.



Stability of numerical methods

In general, we can enumerate the following sources of error:

1 Errors due to the model, which can be reduced by using a
more appropriate model.
2 Errors due to the data, which can be reduced by improving the
accuracy of the measurements.
3 Truncation errors, arising from the approximation (truncation)
of limit operations (integrals, derivatives, ...).
4 Rounding errors.
Errors of types 3 and 4 give rise to the computational error. A
numerical method is convergent if this error can be made
arbitrarily small by increasing the computational effort. Besides
convergence, which is the primary goal of a numerical method,
other goals are accuracy, reliability and efficiency.



Stability of numerical methods

Accuracy means that the errors are small with respect to a fixed
tolerance. It is usually quantified by the infinitesimal order of the
error e_n with respect to the characteristic discretization parameter
(for example the largest grid spacing).
Note: Machine precision does not, theoretically, limit the accuracy.
Reliability means that it is very likely that the global error is below
a certain tolerance.
Efficiency means that the computational complexity (effort) needed
to control the error (number of operations and memory) is as small
as possible.



Stability of numerical methods

Definition: An algorithm is a directive that indicates, through
elementary operations, all the steps needed to solve a problem.
It should finish after a finite number of steps, and as a consequence
the executor (human or machine) must find within the algorithm
itself all the instructions needed to completely solve the problem.
The complexity of an algorithm is a measure of its execution time.
The complexity of a problem is the complexity of the least complex
algorithm capable of solving it.

