Problem 1. Order of convergence from error recursion (core problem)
Hiptmair, G. Alberti, F. Leonardi
D-MATH, ETH Zürich, AS 2015
Numerical Methods for CSE, Problem Sheet 4
(1a) Guess the maximal order of convergence of the method from a numerical experiment conducted in MATLAB.
% initialization of e(1), e(2) omitted in the sheet's listing
for k = 2:20, e(k+1) = e(k)*sqrt(e(k-1)); end
le = log(e);
diff(le(2:end)) ./ diff(le(1:end-1))
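The same experiment can be sketched in Python as a cross-check (the two starting errors below are assumptions, since the sheet's listing omits its first lines); the ratios of successive log-error differences approach (1 + √3)/2 ≈ 1.366:

```python
import math

# Error recursion e(k+1) = e(k) * sqrt(e(k-1)); the two starting
# errors are assumed (not shown in the sheet's listing).
e = [0.5, 0.4]
for k in range(1, 20):
    e.append(e[k] * math.sqrt(e[k - 1]))

le = [math.log(v) for v in e]
# Empirical convergence order: ratio of successive log-error differences.
ratios = [(le[k + 1] - le[k]) / (le[k] - le[k - 1])
          for k in range(1, len(le) - 1)]
print(ratios[-1])  # approaches (1 + sqrt(3)) / 2 ≈ 1.366
```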
(1b) Find the maximal guaranteed order of convergence of this method through analytical considerations.

HINT: First note that we may assume equality both in the error recursion (11) and in the bound ‖e^(n+1)‖ ≤ C‖e^(n)‖^p that defines convergence of order p > 1, because in both cases equality corresponds to a worst-case scenario. Then plug the two equations into each other to obtain an equation of the type … = 1, whose left-hand side involves an error norm that can become arbitrarily small. This yields a condition on p and allows one to determine C > 0. A formal proof by induction (not required) can finally establish that these values provide a correct choice.
Solution: Suppose ‖e^(n)‖ = C‖e^(n−1)‖^p, where p is the largest convergence order and C is some constant. In (11) we may assume equality, because this is the worst case:

    ‖e^(n+1)‖ = ‖e^(n)‖ ‖e^(n−1)‖^(1/2).

Substituting ‖e^(n+1)‖ = C‖e^(n)‖^p and ‖e^(n)‖ = C‖e^(n−1)‖^p gives

    C · C^p ‖e^(n−1)‖^(p²) = C‖e^(n−1)‖^p ‖e^(n−1)‖^(1/2),

i.e.

    C^p ‖e^(n−1)‖^(p² − p − 1/2) = 1.    (14)

Since the left-hand side must remain 1 while ‖e^(n−1)‖ becomes arbitrarily small, the exponent must vanish:

    p² − p − 1/2 = 0,

hence

    p = (1 + √3)/2 ≈ 1.366    or    p = (1 − √3)/2 (dropped, since p > 1).

Then C^p = 1, so for C we find the maximal value 1.
Problem 2. Convergent Newton iteration (core problem)
As explained in [1, Section 2.3.2.1], the convergence of Newton's method in 1D may only be local. This problem investigates a particular setting in which global convergence can be expected.

We recall the notion of a convex function and its geometric definition. A differentiable function f: [a, b] → R is convex if and only if its graph lies on or above its tangent at any point. Equivalently, a differentiable function f: [a, b] → R is convex if and only if its derivative is non-decreasing.

Give a "graphical proof" of the following statement:

If F(x) belongs to C^2(R), is strictly increasing, is convex, and has a unique zero, then the Newton iteration [1, (2.3.4)] for F(x) = 0 is well defined and converges to the zero of F(x) for any initial guess x^(0) ∈ R.
Solution: The sketches in Figure 8 discuss the different cases.
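As a numerical complement to the graphical argument (not part of the sheet), the following Python sketch applies Newton's method to F(x) = exp(x) + x, which is strictly increasing and convex with a unique zero; the iteration converges from widely spread initial guesses:

```python
import math

def newton(F, dF, x0, tol=1e-12, maxit=100):
    """Plain Newton iteration x <- x - F(x)/F'(x)."""
    x = x0
    for _ in range(maxit):
        dx = F(x) / dF(x)
        x -= dx
        if abs(dx) < tol:
            break
    return x

# F' = exp(x) + 1 > 0 (strictly increasing), F'' = exp(x) > 0 (convex),
# unique zero at x* = -W(1) ≈ -0.5671.
F  = lambda x: math.exp(x) + x
dF = lambda x: math.exp(x) + 1
roots = [newton(F, dF, x0) for x0 in (-20.0, 0.0, 20.0)]
print(roots)  # all three converge to the same zero
```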
(3a) Write a MATLAB script that computes the order of convergence to the point x* of this iteration for the function f(x) = x e^x − 1 (see [1, Exp. 2.2.3]). Use x^(0) = 1.
Solution:

f = @(x) x.*exp(x) - 1;   % from the problem statement; the listing's first lines are not shown
x0 = 1;
x_star = fzero(f, x0);

x = x0; upd = 1;
while (abs(upd) > eps)
    fx = f(x(end));   % only 2 evaluations of f at each step
    if fx ~= 0
        upd = fx^2 / (f(x(end)+fx) - fx);
        x = [x, x(end) - upd];
    else
        upd = 0;
    end
end
residual = f(x);
err = abs(x - x_star);
log_err = log(err);
ratios = (log_err(3:end) - log_err(2:end-1)) ...
         ./ (log_err(2:end-1) - log_err(1:end-2));
The output is (iterate x, error e_n, and the ratio (log e_{n+1} − log e_n)/(log e_n − log e_{n−1})):
1.000000000000000 0.432856709590216
0.923262600967822 0.356119310558038
0.830705934728425 0.263562644318641 1.542345498206531
0.727518499997190 0.160375209587406 1.650553641703975
0.633710518522047 0.066567228112263 1.770024323911885
0.579846053882820 0.012702763473036 1.883754995643305
0.567633791946526 0.000490501536742 1.964598248590593
0.567144031581974 0.000000741172191 1.995899954235929
0.567143290411477 0.000000000001693 1.999927865685712
0.567143290409784 0.000000000000000 0.741551601040667
(3b) The function g(x) contains a term like e^(x e^x), so it grows very fast in x and the method cannot be started from a large x^(0). How can you modify the function f (keeping the same zero) in order to allow the choice of a larger initial guess?
HINT: If f is a function and h: [a, b] → R with h(x) ≠ 0 for all x ∈ [a, b], then (f h)(x) = 0 ⇔ f(x) = 0.

Solution: The choice f̃(x) = e^(−x) f(x) = x − e^(−x) prevents the blow-up of the function g and allows a larger set of positive initial points to be used. Of course, f̃(x) = 0 exactly when f(x) = 0.
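This can be checked with a Python sketch of the iteration from (3a), x_{n+1} = x_n − f(x_n)² / (f(x_n + f(x_n)) − f(x_n)), applied to f̃; the large initial guess x^(0) = 5 (an assumed test value) now works:

```python
import math

def steffensen(f, x0, tol=1e-12, maxit=100):
    """Iteration x <- x - f(x)^2 / (f(x + f(x)) - f(x))."""
    x = x0
    for _ in range(maxit):
        fx = f(x)
        if fx == 0:
            break
        upd = fx * fx / (f(x + fx) - fx)
        x -= upd
        if abs(upd) < tol:
            break
    return x

# f~(x) = x - exp(-x) has the same zero x* ≈ 0.56714 as f(x) = x*exp(x) - 1,
# but stays moderate for large arguments.
f_tilde = lambda x: x - math.exp(-x)
root = steffensen(f_tilde, 5.0)   # large initial guess
print(root)
```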
(4a) Find an equation satisfied by the smallest positive initial guess x^(0) for which Newton's method does not converge when it is applied to F(x) = arctan x.

HINT: Find out when the Newton iteration oscillates between two values.

HINT: Graphical considerations may help you find the solutions. See Figure 9: you should find an expression for the function g.
Solution: The function arctan(x) is positive, increasing, and concave for positive x; therefore the first iterates of Newton's method with initial points 0 < x^(0) < y^(0) satisfy y^(1) < x^(1) < 0 (draw a sketch to see it). The function is odd, i.e., arctan(−x) = −arctan(x) for every x ∈ R, so the analogous statement holds for negative initial values (y^(0) < x^(0) < 0 gives 0 < x^(1) < y^(1)). Moreover, opposite initial values give opposite iterates: if y^(0) = −x^(0), then y^(n) = −x^(n) for every n ∈ N.

All these facts imply that, if |x^(1)| < |x^(0)|, then the absolute values of the subsequent iterates converge monotonically to zero. Vice versa, if |x^(1)| > |x^(0)|, then the absolute values of the Newton iterates diverge monotonically. Moreover, the iterates change sign at each step, i.e., x^(n) · x^(n+1) < 0.

It follows that the smallest positive initial guess x^(0) for which Newton's method does not converge satisfies x^(1) = −x^(0). This can be written as

    x^(1) = x^(0) − F(x^(0)) / F'(x^(0)) = x^(0) − (1 + (x^(0))²) arctan x^(0) = −x^(0).
Therefore, x^(0) is a zero of the function

    g(x) = 2x − (1 + x²) arctan x,    with    g'(x) = 1 − 2x arctan x.
(4b) Use Newton's method to find an approximation of such an x^(0), and implement it in MATLAB.

Solution: Newton's iteration to find the smallest positive initial guess reads

    x^(n+1) = x^(n) − (2x^(n) − (1 + (x^(n))²) arctan x^(n)) / (1 − 2x^(n) arctan x^(n))
            = (−x^(n) + (1 − (x^(n))²) arctan x^(n)) / (1 − 2x^(n) arctan x^(n)).
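A short Python sketch of this iteration (the starting value x = 1 is an assumption, since the sheet's MATLAB listing for this part is only partially shown):

```python
import math

# Newton iteration for g(x) = 2x - (1 + x^2) * arctan(x),
# whose positive zero is the critical initial guess for Newton on arctan.
g  = lambda x: 2 * x - (1 + x * x) * math.atan(x)
dg = lambda x: 1 - 2 * x * math.atan(x)

x = 1.0   # assumed starting value
for _ in range(50):
    x -= g(x) / dg(x)
print(x)  # the critical initial guess, x ≈ 1.39175
```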
% x0 is the critical initial guess computed by the Newton iteration above
% (the listing's first lines are not shown)
figure;
x1 = x0 - atan(x0)*(1+x0^2);  x2 = x1 - atan(x1)*(1+x1^2);
X = [-2:0.01:2];
plot(X, atan(X), 'k', ...
     X, 2*X - (1+X.^2).*atan(X), 'r--', ...
     [x0, x1, x1, x2, x2], [atan(x0), 0, atan(x1), 0, atan(x2)], ...
     [x0, x1], [0, 0], 'ro', [-2, 2], [0, 0], 'k', 'linewidth', 2);
legend('arctan', 'g', 'Newton critical iteration');
axis equal;
print -depsc2 'ex_NewtonArctan.eps'
In other words, ε₀ tells us which distance of the initial guess from x* still guarantees local convergence.
Solution:

    lim_{k→∞} x^(k) = x*    ⇐⇒    lim_{k→∞} |x^(k) − x*| = 0.

Thus we seek an upper bound B(k) for |x^(k) − x*| and claim that lim_{k→∞} B(k) = 0:

    |x^(k) − x*| ≤ C |x^(k−1) − x*|^p
                 ≤ C · C^p |x^(k−2) − x*|^(p²)
                 ≤ C · C^p · C^(p²) |x^(k−3) − x*|^(p³)
                 ⋮
                 ≤ C^(1 + p + ⋯ + p^(k−1)) |x^(0) − x*|^(p^k)
                 = C^((p^k − 1)/(p − 1)) |x^(0) − x*|^(p^k)    (geometric series)
                 ≤ C^((p^k − 1)/(p − 1)) ε₀^(p^k)
                 = C^(1/(1−p)) · (C^(1/(p−1)) ε₀)^(p^k) =: B(k).

The factor C^(1/(1−p)) is a constant, so B(k) → 0 as k → ∞ if and only if C^(1/(p−1)) ε₀ < 1, i.e.

    0 < ε₀ < C^(1/(1−p)).
(5b) Provided that |x^(0) − x*| < ε₀ is satisfied, determine the minimal k_min = k_min(ε₀, C, p, τ) such that

    |x^(k) − x*| < τ.
Solution: Using the previous upper bound and requiring B(k) < τ, we obtain

    C^(1/(1−p)) · (C^(1/(p−1)) ε₀)^(p^k) < τ.

Taking logarithms and dividing by ln(C^(1/(p−1)) ε₀), which is negative by the condition on ε₀ from (5a) (so the inequality flips), gives

    p^k > (ln(τ) + (1/(p−1)) ln(C)) / ln(C^(1/(p−1)) ε₀).

Solving for the minimal k (and calling the solution k_min), with the additional requirement that k ∈ N, we obtain

    k_min = ⌈ ln( (ln(τ) + (1/(p−1)) ln(C)) / ln(C^(1/(p−1)) ε₀) ) / ln(p) ⌉.
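The bound and the resulting formula for k_min can be checked numerically; in this Python sketch the values p = 1.5 and C = 2 are taken from the next subproblem, while ε₀ and τ are arbitrary test values:

```python
import math

p, C = 1.5, 2.0
eps0, tau = 0.1, 1e-8    # test values; eps0 must satisfy eps0 < C^(1/(1-p)) = 0.25

# Upper bound B(k) from part (5a).
B = lambda k: C ** (1 / (1 - p)) * (C ** (1 / (p - 1)) * eps0) ** (p ** k)

# k_min from the formula of part (5b).
kmin = math.ceil(math.log((math.log(tau) + math.log(C) / (p - 1))
                          / math.log(C ** (1 / (p - 1)) * eps0)) / math.log(p))

print(kmin, B(kmin) < tau, B(kmin - 1) < tau)
```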
and plot k_min = k_min(ε₀, τ) for the values p = 1.5, C = 2. Test your implementation for every

    (ε₀, τ) ∈ linspace(0, C^(1/(1−p)))² ∩ (0, 1)² ∩ {(i, j) | i ≥ j}.

HINT: Use a MATLAB pcolor plot and the commands linspace and meshgrid.
Solution: See k_min_plot.m.
C = 2; p = 1.5;            % values from the problem statement; the listing's first lines are not shown
eps_max = C^(1/(1-p));

% ... (construction of the meshes eps_msh, tau_msh and of the matrix k is not shown) ...

% Plotting
pcolor(eps_msh,tau_msh,k)
colorbar()
title('Minimal number of iterations for error < \tau')
xlabel('\epsilon_0')
ylabel('\tau')
xlim([0,eps_max])
ylim([0,eps_max])
shading flat
(6a) What is the purpose of the following MATLAB code?

1  function y = myfn(x)
2  log2 = 0.693147180559945;
3
4  y = 0;
5  while (x > sqrt(2)), x = x/2; y = y + log2; end
6  while (x < 1/sqrt(2)), x = x*2; y = y - log2; end
7  z = x-1;
8  dz = x*exp(-z)-1;
9  while (abs(dz/z) > eps)
10     z = z+dz;
11     dz = x*exp(-z)-1;
12 end
13 y = y+z+dz;
Solution: The MATLAB code computes y = log(x) for a given x > 0. The loop in lines #9 through #12 performs Newton iterations for finding the zero z* = log(x) of

    f(z) = e^z − x,    (15)

since the Newton update is z − f(z)/f'(z) = z + x e^(−z) − 1, i.e. dz = x*exp(-z)-1.

(6b) Explain the rationale behind the two while loops in lines #5, 6.

Solution: The two while loops reduce the argument x to the interval [1/√2, √2], compensating in y with the corresponding multiples of log(2). On this interval the initial guess z_0 = x − 1 is close to the zero log(x), so the Newton iteration converges quickly.
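A Python transcription of the routine illustrates this (a sketch: an iteration cap replaces the sheet's while-loop to guard against stagnation at rounding level):

```python
import math

def mylog(x):
    """Compute log(x) via range reduction and Newton for f(z) = exp(z) - x."""
    LOG2 = 0.693147180559945
    y = 0.0
    while x > math.sqrt(2):          # reduce x to [1/sqrt(2), sqrt(2)]
        x /= 2.0; y += LOG2
    while x < 1.0 / math.sqrt(2):
        x *= 2.0; y -= LOG2
    z = x - 1.0                      # initial guess close to log(x)
    dz = x * math.exp(-z) - 1.0      # Newton correction
    for _ in range(60):              # capped; quadratic convergence needs few steps
        if z == 0.0 or abs(dz / z) <= 2.0 ** -52:
            break
        z += dz
        dz = x * math.exp(-z) - 1.0
    return y + z + dz

print(mylog(5.0), math.log(5.0))
```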
(6e) Replace the while-loop of lines #9 through #12 with a fixed number of iterations that, nevertheless, guarantees that the result has a relative accuracy eps.
Solution: Denote the zero of f(z) by z*, so that e^(z*) = x, and set e^(n) = z^(n) − z*. Taylor expansion of f and f' around z* gives

    f(z^(n))  = e^(z^(n)) − x = x e^(n) + (x/2)(e^(n))² + O((e^(n))³),
    f'(z^(n)) = e^(z^(n)) = x + x e^(n) + O((e^(n))²),

so one Newton step yields

    e^(n+1) = e^(n) − f(z^(n)) / f'(z^(n))
            = e^(n) − (x e^(n) + (x/2)(e^(n))² + O((e^(n))³)) / (x + x e^(n) + O((e^(n))²))
            ≈ (1/2)(e^(n))².

Iterating this recursion gives

    e^(n+1) ≈ (1/2)^(1+2+⋯+2^n) (e_0)^(2^(n+1)) = 2 · (1/2)^(2^(n+1)) (e_0)^(2^(n+1)) = 2 · (e_0/2)^(2^(n+1)),

where e_0 = z_0 − z* = x − 1 − log(x). So it is enough to determine the number n of iteration steps from the relative-accuracy requirement |e^(n+1)/log(x)| = eps. Thus

    n = (log(log(2) − log(|log(x)|) − log(eps)) − log(log(2) − log(|e_0|))) / log(2) − 1
      ≈ (log(−log(eps)) − log(−log(|e_0|))) / log(2) − 1.
The following code is for your reference.
function y = myfn(x)
log2 = 0.693147180559945;
y = 0;
while (x > sqrt(2)), x = x/2; y = y + log2; end
while (x < 1/sqrt(2)), x = x*2; y = y - log2; end
z = x-1;
dz = x*exp(-z)-1;
e0 = z - log(x);
k = ceil((log(-log(eps))-log(-log(abs(e0))))/log(2));  % round up so the accuracy is guaranteed
for i = 1:k
    z = z+dz;
    dz = x*exp(-z)-1;
end
y = y+z+dz;
Figure 8: Sketches for the graphical proof of Problem 2. If the starting value x_0 is chosen less than the zero x*, then x_k > x* for any k ≥ 1, hence f(x_k) > 0 and

    x_{k+1} = x_k − f(x_k)/f'(x_k) < x_k,    for any k ≥ 1,

so the iterates decrease monotonically towards x*.
Figure 9: Plot of arctan(x), of g(x) = 2x − (1 + x²) arctan x, and of the critical Newton iteration oscillating between ±x^(0) (output of the script above).
Figure: pcolor plot of k_min = k_min(ε₀, τ) for p = 1.5, C = 2, produced by k_min_plot.m (axes ε₀ and τ, both up to C^(1/(1−p)) = 0.25).