Solving Equations:
1.1 Bisection Method
1.2 Fixed-Point Iteration
1.3 Limits of Accuracy
1.4 Newton’s Method
1.5 Root-Finding without Derivatives
Roots
• “Roots” problems occur when some function f can be written in terms of one or more independent variables x, where the solutions of f(x) = 0 yield the solution to the problem.
• These problems often occur when a design problem
presents an implicit equation for a required parameter.
1 + 24/60 + 51/60^2 + 10/60^3 = 1.41421296...   (the Babylonian sexagesimal approximation of sqrt(2))
GRAPHICAL METHOD
Your boss at the bungee-jumping company wants you to determine the mass at which the free-fall velocity exceeds 36 m/s after 4 s of free fall, given a drag coefficient of 0.25 kg/m.

The free-fall velocity as a function of time:
v(t) = sqrt(g*m/cd) * tanh(sqrt(g*cd/m) * t)

A functional form:
f(m) = sqrt(g*m/cd) * tanh(sqrt(g*cd/m) * t) - v(t)
GRAPHICAL METHOD
MATLAB code:

% Bungee-jumper
g=9.81; cd=0.25; t=4; v=36;   % parameter values from the problem statement
mp=linspace(50,200);
fp=sqrt(g*mp/cd).*tanh(sqrt(g*cd./mp)*t)-v;
plot(mp,fp),grid
xlabel('mass')
ylabel('f(m)')

[Figure: f(m) plotted for 50 <= m <= 200; the curve crosses zero near m ≈ 145]

At the root,
v(t) = sqrt(g*(145)/cd) * tanh(sqrt(g*cd/(145)) * t)
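The same graphical search can be sketched in Python (the slides use MATLAB; this is an equivalent sketch, with g = 9.81, cd = 0.25, t = 4, v = 36 taken from the problem statement):

```python
import math

g, cd, t, v = 9.81, 0.25, 4.0, 36.0   # values from the problem statement

def f(m):
    """f(m) = sqrt(g*m/cd)*tanh(sqrt(g*cd/m)*t) - v; its root is the sought mass."""
    return math.sqrt(g * m / cd) * math.tanh(math.sqrt(g * cd / m) * t) - v

# Scan m = 50..200 for a sign change, mimicking reading the plotted curve.
bracket = next((m, m + 1) for m in range(50, 201) if f(m) * f(m + 1) < 0)
print(bracket)   # (142, 143): a one-unit bracket around the root
```

The sign change pins the root between 142 and 143 kg, consistent with the rough graphical estimate of about 145 on the slide.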
BISECTION METHOD
Bracketing a root
The function f(x) has a root at x = r if f(r) = 0.
If f is continuous and f(a) f(b) < 0, then f has a root r with a < r < b.

Bracketing a root
General approach to improve accuracy: start with f(a0) f(b0) < 0.
Compute the midpoint c = (a0 + b0)/2 and keep the half-interval [a1, b1] on which the sign change persists; its midpoint (a1 + b1)/2 gives the next, halved bracket [a2, b2], and so on. Each step cuts the bracketing interval in half.
Bracketing a root
Example 1.1: Find a root of the function f(x) = x^3 + x - 1 by using the Bisection Method with TOL = 1e-4.
Bracketing a root
Example 1.1:

iteration      a     sign f(a)      c     sign f(c)      b     sign f(b)
    1       0.0000      -        0.5000      -        1.0000      +
    2       0.5000      -        0.7500      +        1.0000      +
    3       0.5000      -        0.6250      -        0.7500      +
    4       0.6250      -        0.6875      +        0.7500      +
    5       0.6250      -        0.6563      -        0.6875      +
    6       0.6563      -        0.6719      -        0.6875      +
    7       0.6719      -        0.6797      -        0.6875      +
    8       0.6797      -        0.6836      +        0.6875      +
    9       0.6797      -        0.6816      -        0.6836      +
   10       0.6816      -        0.6826      +        0.6836      +

[Figure: midpoints c plotted against iteration 1-10, converging toward r ≈ 0.6823]

Interval: [0.6816, 0.6826]
r ≈ (0.6816 + 0.6826)/2 = 0.6821,   |r - 0.6821| <= 0.0005
by calculator: r = 0.6823278038......, -0.3412+1.1615i, -0.3412-1.1615i
Bracketing a root
Example 1.1:

iteration      a     sign f(a)      c     sign f(c)      b     sign f(b)
    1       0.0000      -        0.5000      -        1.0000      +
    2       0.5000      -        0.7500      +        1.0000      +
    3       0.5000      -        0.6250      -        0.7500      +
    4       0.6250      -        0.6875      +        0.7500      +
    5       0.6250      -        0.6563      -        0.6875      +
    6       0.6563      -        0.6719      -        0.6875      +
    7       0.6719      -        0.6797      -        0.6875      +
    8       0.6797      -        0.6836      +        0.6875      +
    9       0.6797      -        0.6816      -        0.6836      +
   10       0.6816      -        0.6826      +        0.6836      +
   11       0.6816      -        0.6821      -        0.6826      +
   12       0.6821      -        0.6824      +        0.6826      +
   13       0.6821      -        0.6823      -        0.6824      +
   14       0.6823      -        0.6823      -        0.6824      +
   15       0.6823      -        0.6823      +        0.6824      +
   16       0.6823      -        0.6823      -        0.6823      +
   17       0.6823      -        0.6823      +        0.6823      +
   18       0.6823      -        0.6823      +        0.6823      +
   19       0.6823      -        0.6823      +        0.6823      +
   20       0.6823      -        0.6823      +        0.6823      +

[Figure: midpoints c plotted against iteration 1-20; to four decimal places the iterates settle at 0.6823]
Bracketing a root
MATLAB code (save as 'bisect.m'):

%Program 1.1 Bisection Method
%Computes approximate solution of f(x)=0
%Input: inline function f; a,b such that f(a)*f(b)<0,
%       and tolerance tol
%Output: Approximate solution xc
function xc = bisect(f,a,b,tol)
if sign(f(a))*sign(f(b)) >= 0
  error('f(a)f(b)<0 not satisfied!')  %ceases execution
end
fa=f(a);
fb=f(b);
while (b-a)/2>tol
  c=(a+b)/2;
  fc=f(c);
  if fc == 0              %c is a solution, done
    break
  end
  if sign(fc)*sign(fa)<0  %a and c make the new interval
    b=c; fb=fc;
  else                    %c and b make the new interval
    a=c; fa=fc;
  end
end
xc=(a+b)/2;               %new midpoint is best estimate

Implementation:
>> f=inline('x^3+x-1');
>> xc=bisect(f, 0, 1, 5e-5)
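A line-for-line Python sketch of Program 1.1 (the function name and structure mirror the MATLAB bisect above):

```python
def bisect(f, a, b, tol):
    """Bisection Method: approximate a root of f in [a, b] to within tol."""
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError('f(a)f(b)<0 not satisfied!')
    while (b - a) / 2 > tol:
        c = (a + b) / 2
        fc = f(c)
        if fc == 0:            # c is a solution, done
            break
        if fc * fa < 0:        # a and c make the new interval
            b, fb = c, fc
        else:                  # c and b make the new interval
            a, fa = c, fc
    return (a + b) / 2         # final midpoint is the best estimate

xc = bisect(lambda x: x**3 + x - 1, 0, 1, 5e-5)
print(xc)   # ≈ 0.68233
```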
How accurate and how fast?
Efficiency of the Bisection Method:
After n steps, the approximate solution is the midpoint xc = (a_n + b_n)/2.
Solution error: |xc - r| < (b - a) / 2^(n+1)
Function evaluations = n + 2
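The error bound |xc - r| < (b - a)/2^(n+1) tells us in advance how many steps are needed; a small sketch that solves (b - a)/2^(n+1) < tol for n:

```python
import math

def steps_needed(a, b, tol):
    """Smallest n with (b - a) / 2**(n + 1) < tol."""
    n = math.ceil(math.log2((b - a) / tol) - 1)
    return max(n, 0)

# Example 1.1 interval [0, 1], six correct decimal places (tol = 0.5e-6)
n = steps_needed(0, 1, 0.5e-6)
print(n)   # 20
```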
(Bisection applied to f(x) = cos x - x on [0, 1]; the table continues from iteration 9:)

iteration      a      sign f(a)      c      sign f(c)      b      sign f(b)
    9      0.738281      +       0.739258      -       0.740234      -
   10      0.738281      +       0.738770      +       0.739258      -
   11      0.738770      +       0.739014      +       0.739258      -
   12      0.739014      +       0.739136      -       0.739258      -
   13      0.739014      +       0.739075      +       0.739136      -
   14      0.739075      +       0.739105      -       0.739136      -
   15      0.739075      +       0.739090      -       0.739105      -
   16      0.739075      +       0.739083      +       0.739090      -
   17      0.739083      +       0.739086      -       0.739090      -
   18      0.739083      +       0.739085      +       0.739086      -
   19      0.739085      +       0.739086      -       0.739086      -
   20      0.739085      +       0.739085      +       0.739086      -
   21      0.739085      +       0.739085      -       0.739086      -
   22      0.739085      +       0.739085      -       0.739085      -

[Figure: midpoints plotted against iteration, converging toward r ≈ 0.739085]
Exercises
Exercises 1.1-1:
1. Use the Intermediate Value Theorem to find an interval of length one that contains a root of the equation x^3 = 9.
Let f(x) = x^3 - 9: f(2) = -1 < 0 and f(3) = 18 > 0.
∴ Interval = [2, 3]
Exercises
Computer problems 1.1-1:
1. Use the Bisection Method to find the root of x^3 = 9 to six correct decimal places.
In MATLAB:
>> f=inline('x^3-9');
>> xc=bisect(f,2,3,5e-7)
Homework-1.1
Find the maximum deflection of the given structure using the Bisection
Method, TOL=1e-6.
Develop MATLAB code and plot the deflection function by using
‘plot’ command in MATLAB.
For example, PLOT(X,Y,'c+:') plots a cyan dotted line with a plus at each data point;
PLOT(X,Y,'bd') plots a blue diamond at each data point but does not draw any line.
Open Method
a) Bracketing method
b) Diverging open method
c) Converging open method - note speed!

Fixed-point iteration solves equations of the form g(x) = x, e.g. cos(r) = r:
x0 = initial guess
x1 = g(x0)
x2 = g(x1)
x3 = g(x2)
...
x_{i+1} = g(x_i)
If the iterates converge to r and g is continuous, then
g(r) = g(lim x_i) = lim g(x_i) = lim x_{i+1} = r.
Fixed points of a function
General procedure of Fixed-Point Iteration:
function xc=fpi(g,x0,k)
x(1)=x0;
for i=1:k
x(i+1)=g(x(i));
end
x' %transpose output to a column
xc=x(k+1);
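An equivalent Python sketch of fpi, demonstrated on g(x) = cos x, whose fixed point is r ≈ 0.7390851:

```python
import math

def fpi(g, x0, k):
    """Fixed-Point Iteration: apply x = g(x) k times starting from x0."""
    x = x0
    for _ in range(k):
        x = g(x)
    return x

xc = fpi(math.cos, 1.0, 60)
print(xc)   # ≈ 0.7390851, the fixed point of cos
```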
Three fixed-point forms of x^3 + x - 1 = 0:

Case 1: g(x) = 1 - x^3     Case 2: g(x) = (1 - x)^(1/3)     Case 3: g(x) = (1 + 2x^3) / (1 + 3x^2)

Case 1: g(x) = 1 - x^3
iteration      xi
    0      0.50000000
    1      0.87500000
    2      0.33007813
    3      0.96403747
    4      0.10405419
    5      0.99887338
    6      0.00337606
    7      0.99999996
    8      0.00000012
    9      1.00000000
   10      0.00000000
   11      1.00000000
   12      0.00000000
g(0) = 1; g(1) = 0; FPI fails!

Case 2: g(x) = (1 - x)^(1/3)
iteration      xi
    0      0.50000000
    1      0.79370053
    2      0.59088011
    3      0.74236393
    4      0.63631020
    5      0.71380081
    6      0.65900615
    7      0.69863261
    8      0.67044850
    9      0.69072912
   10      0.67625892
   11      0.68664554
   12      0.67922234
   13      0.68454401
   14      0.68073737
   15      0.68346460
   16      0.68151292
   17      0.68291073
   18      0.68191019
   19      0.68262667
   20      0.68211376
   21      0.68248102
   22      0.68221809
   23      0.68240635
   24      0.68227157
   25      0.68236807
g(0.6823) = 0.6823

Case 3: g(x) = (1 + 2x^3) / (1 + 3x^2)
iteration      xi
    0      0.50000000
    1      0.71428571
    2      0.68317972
    3      0.68232842
    4      0.68232780
    5      0.68232780
    6      0.68232780
    7      0.68232780
g(0.68232780) = 0.68232780
Case 1: g(x) = 1 - x^3     Case 2: g(x) = (1 - x)^(1/3)     Case 3: g(x) = (1 + 2x^3) / (1 + 3x^2)

[Figure: xi versus iteration for the three cases; case 1 oscillates between 0 and 1, case 2 converges slowly, case 3 converges almost immediately to 0.6823]
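The three cases can be replayed in Python to confirm the tables (a sketch; g1, g2, g3 are the three rearrangements above):

```python
def g1(x): return 1 - x**3                         # diverges: |g1'(r)| = 3r^2 ≈ 1.4 > 1
def g2(x): return (1 - x) ** (1 / 3)               # converges slowly
def g3(x): return (1 + 2 * x**3) / (1 + 3 * x**2)  # converges fast (Newton's formula)

def iterate(g, x0, k):
    xs = [x0]
    for _ in range(k):
        xs.append(g(xs[-1]))
    return xs

r = 0.6823278038280193
print(iterate(g1, 0.5, 12)[-1])   # bounces between 0 and 1, never converges
print(iterate(g2, 0.5, 25)[-1])   # ≈ 0.68237
print(iterate(g3, 0.5, 7)[-1])    # ≈ 0.68232780
```

Note that Case 3 is exactly Newton's formula for x^3 + x - 1: x - (x^3 + x - 1)/(3x^2 + 1) = (1 + 2x^3)/(1 + 3x^2).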
Geometry of fixed-point iteration
Cobweb Diagram: x^3 + x - 1 = 0
Two linear examples:
g1(x) = (3/2)x - 1/2        g2(x) = -(1/2)x + 3/2
Both have the fixed point r = 1:
|g1'(1)| = 3/2 > 1 (iterates spiral away)        |g2'(1)| = 1/2 < 1 (iterates spiral in)
Linear convergence of fixed-point iteration
Generalization:

g1(x) = (3/2)x - 1/2
g1(x) = (3/2)(x - 1) + 1,   fixed point r = 1
g1(x) - 1 = (3/2)(x - 1)
x_{i+1} - 1 = (3/2)(x_i - 1);   if the error at step i is e_i = |x_i - r|, then
e_{i+1} = (3/2) e_i     -- errors increase at each step by a factor of 3/2

g2(x) = -(1/2)x + 3/2
g2(x) = -(1/2)(x - 1) + 1,   fixed point r = 1
g2(x) - 1 = -(1/2)(x - 1)
x_{i+1} - 1 = -(1/2)(x_i - 1);   if the error at step i is e_i = |r - x_i|, then
e_{i+1} = (1/2) e_i     -- errors decrease at each step by a factor of 1/2
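A quick numerical check of the two error factors (for g2 the halving is exact in binary floating point, so the final error is exactly 2^-10):

```python
def g1(x): return 1.5 * x - 0.5    # |g1'| = 3/2: errors grow
def g2(x): return -0.5 * x + 1.5   # |g2'| = 1/2: errors shrink

r = 1.0

x = 0.0                            # e_0 = 1 for g2
for _ in range(10):
    x = g2(x)
print(abs(x - r))                  # 2**-10 exactly: the error halves each step

y = 1.01                           # e_0 = 0.01 for g1
for _ in range(20):
    y = g1(y)
print(abs(y - r))                  # ≈ 0.01 * 1.5**20 ≈ 33: errors grow by 3/2
```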
Linear convergence of fixed-point iteration
Definition 1.5:
Let e_i denote the error at step i of an iterative method. If
lim_{i->inf} e_{i+1} / e_i = S < 1,
the method is said to obey linear convergence with rate S.

By the Mean Value Theorem, e_{i+1} = |g'(c_i)| e_i for some c_i between x_i and r.
If S = |g'(r)| < 1, then sufficiently close to r, |g'(x)| <= (S + 1)/2 < 1, so
e_{i+1} <= ((S + 1)/2) e_i   -- the error decreases by at least a factor of (S + 1)/2.
Moreover,
lim_{i->inf} e_{i+1} / e_i = lim |g'(c_i)| = |g'(r)| = S,   i.e.   e_{i+1} ≈ S e_i.
Linear convergence of fixed-point iteration
Theorem 1.6: Assume that g is continuously differentiable, that g(r) = r, and that S = |g'(r)| < 1. Then FPI converges linearly with rate S to the fixed point r for initial guesses sufficiently close to r.

If S < 1, then 1 - S > 0, so
0 < (1 - S)/2   and   S < S + (1 - S)/2 = (S + 1)/2 < 1,
i.e. the number (S + 1)/2 lies strictly between S and 1.

[Table/figure: S, (S+1)/2, (S+1)/3, (2S+1)/3 and (S+1)/1.5 tabulated and plotted for 0 <= S <= 1, all remaining below 1.]
f(x) = x^3 + x - 1;   r ≈ 0.6823
x0 = 0.1
x1 = g(x0) = 0.2700
x2 = g(x1) = 0.6831
x3 = g(x2) = 1.4461
x4 = g(x3) = 1.9579
|g'(1.8)| = 0.8 < 1,   |g'(0)| = 2.8 > 1
Stopping criteria
FPI requires a tolerance TOL to be set:

relative error stopping criterion (solution is not near zero):
|x_{i+1} - x_i| / |x_{i+1}| < TOL

hybrid absolute/relative error stopping criterion (solution is near zero):
|x_{i+1} - x_i| / max(|x_{i+1}|, theta) < TOL,   theta > 0
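A sketch of why the hybrid criterion matters when the solution is near zero. Here g(x) = sin(x)/2, with fixed point r = 0, is an assumed example for illustration: successive iterates shrink by about half, so the pure relative test |dx|/|x| hovers near 1 and never triggers, while the hybrid test does:

```python
import math

def fpi_stop(g, x0, tol, theta, kmax=200):
    """FPI with the hybrid absolute/relative stopping criterion."""
    x = x0
    for k in range(1, kmax + 1):
        x_new = g(x)
        if abs(x_new - x) / max(abs(x_new), theta) < tol:
            return x_new, k
        x = x_new
    return x, kmax

g = lambda x: math.sin(x) / 2          # fixed point r = 0
xc, k = fpi_stop(g, 1.0, tol=1e-3, theta=1e-6)
print(xc, k)                           # xc is tiny; k well below kmax

# The pure relative ratio stays near 1, so it alone would never stop the loop:
x = 1.0
for _ in range(40):
    x_new = g(x)
    rel = abs(x_new - x) / abs(x_new)
    x = x_new
print(rel)                             # ≈ 1.0
```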
(a) g(x) = (2x - 1)^(1/3),   r = 1
(b) g(x) = (x^3 + 1)/2,   r = 1
(c) g(x) = sin x + x,   r = 0
LIMITS OF ACCURACY
By Bisection Method
The Wilkinson polynomial
Wilkinson polynomial:
W(x) = (x - 1)(x - 2)...(x - 20)
     = x^20 - 210x^19 + 20615x^18 - 1256850x^17 + 53327946x^16 - 1672280820x^15
       + 40171771630x^14 - 756111184500x^13 + 11310276995381x^12 - ...
       + 2432902008176640000

wilkpoly.m
>>fzero('x.^20-210*x.^19-...........',16)

>>format long
>>fzero('x.^20-210*x.^19-...........',16)
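The coefficient swell behind this example can be reproduced by expanding W(x) = (x - 1)...(x - 20) with exact integer arithmetic (a Python sketch; the slides instead call MATLAB's fzero on the expanded form):

```python
# Expand W(x) = (x-1)(x-2)...(x-20) exactly; coeffs[i] multiplies x**(20-i).
coeffs = [1]
for root in range(1, 21):
    coeffs = coeffs + [0]                  # multiply by x
    for i in range(len(coeffs) - 1, 0, -1):
        coeffs[i] -= root * coeffs[i - 1]  # subtract root * (previous polynomial)

def horner_exact(cs, x):
    """Evaluate with exact integer arithmetic."""
    y = 0
    for c in cs:
        y = y * x + c
    return y

def horner_float(cs, x):
    """Evaluate in double precision (subject to massive cancellation)."""
    y = 0.0
    for c in cs:
        y = y * x + c
    return y

print(coeffs[1], coeffs[-1])     # -210 and 20! = 2432902008176640000
print(horner_exact(coeffs, 16))  # 0: x = 16 is exactly a root
print(horner_float(coeffs, 16.0))  # typically far from 0: rounding among terms of size ~1e24
```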
Conditioning
This is the first appearance of the concept of condition number, a
measure of error magnification. Numerical Analysis is the study of
algorithms, which take data defining the problem as input and deliver
an answer as output. Condition number refers to the part of this
magnification that is inherent in the theoretical problem itself,
irrespective of the particular algorithm used to solve it.
It is important to note that the error magnification factor measures
only magnification due to the problem. Along with conditioning, there
is a parallel concept, stability, that refers to the magnification of small
input errors due to the algorithm, not the problem itself. An algorithm
is called stable if it always provides an approximate solution with small
backward error. If the problem is well-conditioned and the algorithm is
stable, we can expect both small backward and forward error.
NEWTON’S METHOD
Newton-Raphson Method:
• converges much faster than the linearly convergent methods
• special case of FPI

The tangent at x_i crosses the axis at x_{i+1}:
f'(x_i) = (f(x_i) - 0) / (x_i - x_{i+1})
x_{i+1} = x_i - f(x_i) / f'(x_i)
NEWTON’S METHOD
Newton-Raphson Method:
point-slope formula (tangent line at the initial guess x0):
y - f(x0) = f'(x0)(x - x0)
Set y = 0:
f'(x0)(x - x0) = 0 - f(x0)
x = x0 - f(x0) / f'(x0)
The intersection with the axis is the new approximate root:
x1 = x0 - f(x0) / f'(x0)
In general,
x_{i+1} = x_i - f(x_i) / f'(x_i)   for i = 0, 1, 2, ...
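Newton's formula for f(x) = x^3 + x - 1 (so f'(x) = 3x^2 + 1) in a short Python sketch:

```python
def newton(f, fprime, x0, steps):
    """Newton's Method: iterate x <- x - f(x)/f'(x)."""
    x = x0
    for _ in range(steps):
        x = x - f(x) / fprime(x)
    return x

f = lambda x: x**3 + x - 1
fp = lambda x: 3 * x**2 + 1      # never zero, so no division-by-zero risk here
root = newton(f, fp, -0.7, 10)
print(root)   # ≈ 0.6823278038280193
```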
NEWTON’S METHOD
Newton-Raphson Method:
NEWTON’S METHOD
Example 1.11: x^3 + x - 1 = 0, initial guess x0 = -0.7

iteration       xi          ei = |xi - r|    ei / (e_{i-1})^2
    0      -0.70000000      1.38232780
    1       0.12712551      0.55520230        0.2906
    2       0.95767812      0.27535032        0.8933
    3       0.73482779      0.05249999        0.6924
    4       0.68459177      0.00226397        0.8214
    5       0.68233217      0.00000437        0.8527
    6       0.68232780      0.00000000        0.8541
    7       0.68232780      0.00000000
Quadratic convergence of Newton’s Method
Definition 1.11: The iteration is quadratically convergent if
M = lim_{i->inf} e_{i+1} / e_i^2 < inf.

For Newton’s Method,
lim_{i->inf} e_{i+1} / e_i^2 = M   where   M = |f''(r)| / (2|f'(r)|),   so   e_{i+1} ≈ M e_i^2.

1. Newton’s Method: e_{i+1} ≈ M e_i^2 with M = |f''(r)| / (2|f'(r)|) -- the value of M is less critical, since the error is squared at each step.
2. Bisection Method: e_{i+1} = S e_i with S = 1/2.
Quadratic convergence of Newton’s Method
Example 1.11: Find Newton’s formula for the equation x^3 + x - 1 = 0.
With xc ≈ 0.6823:
M = |f''(r)| / (2|f'(r)|) = 6(0.6823) / (2(3(0.6823)^2 + 1)) ≈ 0.85

Another example, f(x) = x^2 - a (computing sqrt(a)):
f'(x) = 2x,   f'(sqrt(a)) = 2 sqrt(a) != 0,   f''(x) = 2
e_{i+1} ≈ M e_i^2   where   M = f''(r) / (2 f'(r)) = 2 / (4 sqrt(a)) = 1 / (2 sqrt(a))
-- the quadratic convergence rate
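For f(x) = x^2 - a, Newton's formula simplifies to x_{i+1} = (x_i + a/x_i)/2, and the ratio e_{i+1}/e_i^2 should approach M = 1/(2 sqrt(a)). A quick check for a = 2:

```python
import math

a = 2.0
r = math.sqrt(a)
x = 1.0
errors = [abs(x - r)]
for _ in range(4):
    x = (x + a / x) / 2            # Newton step for f(x) = x^2 - a
    errors.append(abs(x - r))

ratios = [errors[i + 1] / errors[i] ** 2 for i in range(3)]
print(ratios)   # approaches M = 1/(2*sqrt(2)) ≈ 0.3536
```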
Linear convergence of Newton’s Method
Example 1.12: Use Newton’s Method to find a root of f(x) = x^2.

x_{i+1} = x_i - f(x_i)/f'(x_i) = x_i - x_i^2 / (2 x_i) = x_i / 2

More generally, for f(x) = x^m:
x_{i+1} = x_i - x_i^m / (m x_i^{m-1}) = ((m - 1)/m) x_i

By Theorem 1.12, for a root of multiplicity m = 3:
e_{i+1} = ((m - 1)/m) e_i = (2/3) e_i

The number of steps needed to get the error within six decimal places:
(2/3)^n <= 0.5 x 10^-6
n >= log10(0.5 x 10^-6) / log10(2/3) ≈ 35.78   ->   36 steps
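The step count can be checked directly (a sketch; the iteration x_{i+1} = (2/3)x_i is Newton applied to f(x) = x^3, whose root r = 0 has multiplicity 3):

```python
import math

# Smallest n with (2/3)**n <= 0.5e-6
n = math.ceil(math.log10(0.5e-6) / math.log10(2 / 3))
print(n)   # 36

# Verify by iterating Newton on f(x) = x**3 from x0 = 0.5
x = 0.5
for _ in range(n):
    x = x - x**3 / (3 * x**2)    # same as (2/3) * x
print(abs(x))                    # within 0.5e-6 * |x0| of the root r = 0
```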
Newton’s Method can also fail by cycling. For f(x) = x^4 - 3x^2 - 2:
x_{i+1} = x_i - f(x_i)/f'(x_i) = x_i - (x_i^4 - 3x_i^2 - 2) / (4x_i^3 - 6x_i)

iteration      xi
    0      1.00000
    1     -1.00000
    2      1.00000
    3     -1.00000
    4      1.00000
    5     -1.00000
[Figure: plot of y = x*e^x for -1 <= x <= 5, together with the line y = 0]
Newton’s Method:   x_{i+1} = x_i - f(x_i) / f'(x_i)

Secant Method:   x_{i+1} = x_i - f(x_i)(x_i - x_{i-1}) / (f(x_i) - f(x_{i-1}))   for i = 1, 2, 3, ...

• Two starting guesses are needed
• If f'(r) != 0 and f''(r) != 0, the approximate error relationship is
  e_{i+1} ≈ (f''(r) / (2 f'(r))) e_i e_{i-1},   which leads to   e_{i+1} ≈ |f''(r) / (2 f'(r))|^(alpha - 1) e_i^alpha
  where alpha = (1 + sqrt(5))/2 ≈ 1.62
  (compare with Theorem 1.11)

For f(x) = x^3 + x - 1:
x_{i+1} = x_i - (x_i^3 + x_i - 1)(x_i - x_{i-1}) / (x_i^3 + x_i - (x_{i-1}^3 + x_{i-1}))
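The Secant Method for f(x) = x^3 + x - 1, starting from the two guesses x0 = 0 and x1 = 1 (a Python sketch of the formula above):

```python
def secant(f, x0, x1, steps):
    """Secant Method: Newton-like iteration with a difference-quotient slope."""
    for _ in range(steps):
        denom = f(x1) - f(x0)
        if denom == 0:           # iterates have coincided; stop
            break
        x0, x1 = x1, x1 - f(x1) * (x1 - x0) / denom
    return x1

f = lambda x: x**3 + x - 1
root = secant(f, 0.0, 1.0, 12)
print(root)   # ≈ 0.6823278038280193
```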
Muller’s Method:
is a generalization of the Secant Method in a different direction: using the 3 previous points, draw the parabola y = p(x) through them and take one of its intersections with the x-axis (there may be 0 or 2 of them) as the next guess.

Planar Stewart platform:
p1^2 = x^2 + y^2
p2^2 = (x + A2)^2 + (y + B2)^2
p3^2 = (x + A3)^2 + (y + B3)^2
where
A2 = L3 cos(theta) - x1
B2 = L3 sin(theta)
A3 = L2 cos(theta + gamma) - x2 = L2 [cos(theta) cos(gamma) - sin(theta) sin(gamma)] - x2
B3 = L2 sin(theta + gamma) - y2 = L2 [cos(theta) sin(gamma) + sin(theta) cos(gamma)] - y2
3. Solve the forward kinematics problem for the planar Stewart platform specified by L2 = 3 sqrt(2), L1 = L3 = 3, gamma = pi/4, p1 = p2 = 5, p3 = 3 using an equation solver.
Calculus can be employed to solve this equation for the height of the cable y as a function of distance x:
y = (TA / w) cosh((w / TA) x) + y0 - TA / w
(a) Use a numerical method to calculate a value for the parameter TA given values for the parameters w = 10 and y0 = 5, such that the cable has a height of y = 15 at x = 50.
(b) Develop a plot of y versus x for x = -50 to 100.
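Part (a) can be sketched with the chapter's bisection idea; the bracket [1000, 2000] is an assumption found by trial (f changes sign there):

```python
import math

w, y0 = 10.0, 5.0

def height(TA, x):
    """Cable height y(x) for tension parameter TA."""
    return (TA / w) * math.cosh((w / TA) * x) + y0 - TA / w

def f(TA):
    return height(TA, 50.0) - 15.0     # want y(50) = 15

a, b = 1000.0, 2000.0                  # assumed bracket: f(a) > 0 > f(b)
for _ in range(60):
    c = (a + b) / 2
    if f(a) * f(c) <= 0:               # keep the half with the sign change
        b = c
    else:
        a = c
TA = (a + b) / 2
print(TA)   # ≈ 1.27e3
```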