
Solving Equations

Instructor: WooSeok Kim, Ph.D.


042-821-6584, [email protected]

Solving Equations:
1.1 Bisection Method
1.2 Fixed-Point Iteration
1.3 Limits of Accuracy
1.4 Newton’s Method
1.5 Root-Finding without Derivatives
Roots
• “Roots” problems occur when some function f can be
written in terms of one or more dependent variables x,
where the solutions to f(x) = 0 yield the solution to the
problem.
• These problems often occur when a design problem
presents an implicit equation for a required parameter.

Babylonian approximation: sqrt(2) ≈ 1 + 24/60 + 51/60^2 + 10/60^3 = 1.41421296...
GRAPHICAL METHOD
Your boss at the bungee-jumping company wants you to determine the mass at
which the free-fall velocity exceeds 36 m/s after 4 s of free fall, given a drag
coefficient of 0.25 kg/m.

The free-fall velocity as a function of time:  v(t) = sqrt(g*m/cd) * tanh(sqrt(g*cd/m) * t)

A functional form:  f(m) = sqrt(g*m/cd) * tanh(sqrt(g*cd/m) * t) - v(t)

The root of the problem:  f(m) = 0

GRAPHICAL METHOD

• A simple method for obtaining the estimate of


the root of the equation f(x)=0 is to make a plot
of the function and observe where it crosses
the x-axis.
• Graphing the function can also indicate where
roots may be and where some root-finding
methods may fail:
a) Same sign, no roots
b) Different sign, one root
c) Same sign, two roots
d) Different sign, three roots
GRAPHICAL METHOD

% Bungee-jumper
clear all; clc; clf;
cd=0.25; g=9.81; v=36; t=4;
mp=linspace(50,200);
fp=sqrt(g*mp/cd).*tanh(sqrt(g*cd./mp)*t)-v;
plot(mp,fp),grid
xlabel('mass')
ylabel('f(m)')

Figure: plot of f(m) against mass (50 to 200 kg); the curve crosses zero near m = 145.

root ≈ 145:  f(145) ≈ 0.0456 ≈ 0, i.e.

v(t) = sqrt(g*(145)/cd) * tanh(sqrt(g*cd/(145)) * t) ≈ 36 m/s at t = 4 s

BISECTION METHOD
Bracketing a root
The function f(x) has a root at x = r if f(r) = 0.

If f(x) is a continuous function on an interval [a, b] with f(a) * f(b) < 0,
then f(r) = 0 for some r with a < r < b.

Bracketing a root
General approach to improve accuracy: f(a0) * f(b0) < 0

a1 = (a0 + b0)/2,   b1 = (a1 + b0)/2,   a2 = (a1 + b1)/2

(on the number line: a0 < a1 < a2 < b1 < b0)

Step 1: a0 -> a1, since f(a1) * f(b0) < 0: new interval [a1, b0]
Step 2: b0 -> b1, since f(a1) * f(b1) < 0: new interval [a1, b1]
Step 3: a1 -> a2, since f(a2) * f(b1) < 0: new interval [a2, b1]
Bracketing a root
General approach to improve accuracy: f(a0) * f(b0) < 0 (given condition)

Pseudocode:

while (b-a)/2 > TOL
    c=(a+b)/2
    if f(c)=0, stop, end
    if f(a)f(c)<0
        b=c
    else
        a=c
    end
end

Final interval: [a, b]; the approximate root: (a+b)/2
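The pseudocode above can be sketched directly in Python; the function name `bisect` and the signature mirror the MATLAB program shown later, but are otherwise illustrative:

```python
def bisect(f, a, b, tol):
    """Bisection Method: halve the bracketing interval [a, b]
    until its half-length drops below tol."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a)f(b) < 0 not satisfied")
    while (b - a) / 2 > tol:
        c = (a + b) / 2          # midpoint
        if f(c) == 0:            # c is an exact root
            return c
        if f(a) * f(c) < 0:      # root lies in [a, c]
            b = c
        else:                    # root lies in [c, b]
            a = c
    return (a + b) / 2           # midpoint of the final interval

# f(x) = x^3 + x - 1 on [0, 1], as in Example 1.1
root = bisect(lambda x: x**3 + x - 1, 0, 1, 5e-5)
```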

Bracketing a root
Example 1.1: Find a root of the function f(x) = x^3 + x - 1 by using
the Bisection Method on the interval [0, 1].

f(0) * f(1) = (-1)(1) < 0, so a root exists in the interval [0, 1].

TOL = 1e-4
Bracketing a root
Example 1.1:

iteration      a     sign f(a)      c     sign f(c)      b     sign f(b)
    1       0.0000      -        0.5000      -        1.0000      +
    2       0.5000      -        0.7500      +        1.0000      +
    3       0.5000      -        0.6250      -        0.7500      +
    4       0.6250      -        0.6875      +        0.7500      +
    5       0.6250      -        0.6563      -        0.6875      +
    6       0.6563      -        0.6719      -        0.6875      +
    7       0.6719      -        0.6797      -        0.6875      +
    8       0.6797      -        0.6836      +        0.6875      +
    9       0.6797      -        0.6816      -        0.6836      +
   10       0.6816      -        0.6826      +        0.6836      +

Figure: the midpoint c plotted against iteration, converging toward 0.68.

Interval after 10 steps: [0.6816, 0.6826]

r ≈ (0.6816 + 0.6826)/2 = 0.6821, so |r - 0.6821| <= 0.0005

By calculator: r = 0.6823278038...; the other two roots are complex, -0.3412 ± 1.1615i.

Bracketing a root
Example 1.1 (continued to 20 iterations):

iteration      a     sign f(a)      c     sign f(c)      b     sign f(b)
    1       0.0000      -        0.5000      -        1.0000      +
    2       0.5000      -        0.7500      +        1.0000      +
    3       0.5000      -        0.6250      -        0.7500      +
    4       0.6250      -        0.6875      +        0.7500      +
    5       0.6250      -        0.6563      -        0.6875      +
    6       0.6563      -        0.6719      -        0.6875      +
    7       0.6719      -        0.6797      -        0.6875      +
    8       0.6797      -        0.6836      +        0.6875      +
    9       0.6797      -        0.6816      -        0.6836      +
   10       0.6816      -        0.6826      +        0.6836      +
   11       0.6816      -        0.6821      -        0.6826      +
   12       0.6821      -        0.6824      +        0.6826      +
   13       0.6821      -        0.6823      -        0.6824      +
   14       0.6823      -        0.6823      -        0.6824      +
   15       0.6823      -        0.6823      +        0.6824      +
   16       0.6823      -        0.6823      -        0.6823      +
   17       0.6823      -        0.6823      +        0.6823      +
   18       0.6823      -        0.6823      +        0.6823      +
   19       0.6823      -        0.6823      +        0.6823      +
   20       0.6823      -        0.6823      +        0.6823      +

Figure: the midpoint c against iteration (1 to 20), settling on 0.6823.

Bracketing a root
MATLAB code (save as 'bisect.m'):

%Program 1.1 Bisection Method
%Computes approximate solution of f(x)=0
%Input: inline function f; a,b such that f(a)*f(b)<0,
%       and tolerance tol
%Output: Approximate solution xc
function xc = bisect(f,a,b,tol)
if sign(f(a))*sign(f(b)) >= 0
  error('f(a)f(b)<0 not satisfied!')   %ceases execution
end
fa=f(a);
fb=f(b);
while (b-a)/2>tol
  c=(a+b)/2;
  fc=f(c);
  if fc == 0                %c is a solution, done
    break
  end
  if sign(fc)*sign(fa)<0    %a and c make the new interval
    b=c; fb=fc;
  else                      %c and b make the new interval
    a=c; fa=fc;
  end
end
xc=(a+b)/2;                 %new midpoint is best estimate

Implementation:
>> f=inline('x^3+x-1');
>> xc=bisect(f, 0, 1, 5e-5)
How accurate and how fast?
Efficiency of the Bisection Method:

Starting from an interval [a, b], after n bisection steps the interval [a_n, b_n]
has length (b - a)/2^n.

The approximate solution is the midpoint:  x_c = (a_n + b_n)/2

Solution error:  |x_c - r| <= (b - a)/2^(n+1)   (r = the actual root)

Function evaluations = n + 2

A solution is correct within p decimal places if the error is less
than 0.5 x 10^-p.
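The error bound can be turned around to predict how many steps give p correct decimal places; a small sketch (the function name `steps_needed` is illustrative):

```python
import math

def steps_needed(a, b, p):
    """Smallest n with (b - a)/2**(n + 1) < 0.5 * 10**(-p),
    i.e. enough bisection steps for p correct decimal places.
    Derivation: (b-a)/2^(n+1) < 0.5*10^-p  <=>  2^n > (b-a)*10^p."""
    return math.ceil(math.log2((b - a) * 10**p))

# Example 1.2: the interval [0, 1] to six correct decimal places
n = steps_needed(0, 1, 6)   # 20 steps, matching the hand calculation
```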

How accurate and how fast?

Example 1.2: Use the Bisection Method to find a root of f(x) = cos x - x
in the interval [0, 1] to within six correct decimal places.

Determine how many steps of bisection are required.

Error after n steps:

|x_c - r| <= (b - a)/2^(n+1) = (1 - 0)/2^(n+1) = 1/2^(n+1) < 0.5 x 10^-p = 0.5 x 10^-6

n > 6/log10(2) = 6/0.301 ≈ 19.9, so 20 steps are required!
How accurate and how fast?
Example 1.2: f(x) = cos x - x on [0, 1], six correct decimal places.

iteration      a      sign f(a)      c      sign f(c)      b      sign f(b)
    0      0.000000      +      0.500000      +      1.000000      -
    1      0.500000      +      0.750000      -      1.000000      -
    2      0.500000      +      0.625000      +      0.750000      -
    3      0.625000      +      0.687500      +      0.750000      -
    4      0.687500      +      0.718750      +      0.750000      -
    5      0.718750      +      0.734375      +      0.750000      -
    6      0.734375      +      0.742188      -      0.750000      -
    7      0.734375      +      0.738281      +      0.742188      -
    8      0.738281      +      0.740234      -      0.742188      -
    9      0.738281      +      0.739258      -      0.740234      -
   10      0.738281      +      0.738770      +      0.739258      -
   11      0.738770      +      0.739014      +      0.739258      -
   12      0.739014      +      0.739136      -      0.739258      -
   13      0.739014      +      0.739075      +      0.739136      -
   14      0.739075      +      0.739105      -      0.739136      -
   15      0.739075      +      0.739090      -      0.739105      -
   16      0.739075      +      0.739083      +      0.739090      -
   17      0.739083      +      0.739086      -      0.739090      -
   18      0.739083      +      0.739085      +      0.739086      -
   19      0.739085      +      0.739086      -      0.739086      -
   20      0.739085      +      0.739085      +      0.739086      -
   21      0.739085      +      0.739085      -      0.739086      -
   22      0.739085      +      0.739085      -      0.739085      -

Figure: the midpoint c against iteration, converging to 0.739085.

Exercises
Exercise 1.1-1:
1. Use the Intermediate Value Theorem to find an interval of length one
that contains a root of the equation x^3 = 9.

f(x) = x^3 - 9  ->  f(2) = 8 - 9 = -1  and  f(3) = 27 - 9 = 18

By the Intermediate Value Theorem, f(2) * f(3) < 0 implies the existence of a
root between x = 2 and x = 3.

∴ Interval = [2, 3]
Exercises
Computer problem 1.1-1:
1. Use the Bisection Method to find the root of x^3 = 9 to six correct decimal
places.

In MATLAB:

>> f=inline('x^3-9');
>> xc=bisect(f,2,3,5e-7)

Homework-1.1
Find the maximum deflection of the given structure using the Bisection
Method, TOL=1e-6.
Develop MATLAB code and plot the deflection function by using the
'plot' command in MATLAB.

Deflection curve of the beam under the distributed load w0:

y(x) = w0 / (120 E I L) * (-x^5 + 2 L^2 x^3 - L^4 x)

with L = 600 cm; E = 50,000 kN/cm^2; I = 30,000 cm^4; w0 = 2.5 kN/cm

Sample plotting script:

% plotting two functions and their root
rx=input('x-coordinate of the root'); %type in x-axis coordinate of the root
ry=input('y-coordinate of the root'); %type in y-axis coordinate of the root
xl=-1;xr=2;yb=-5;yt=10;
x=xl:0.01:xr;
y1=3*x.^3+x.^2;
y2=x+5;
figure(1);
grid on; hold on;
plot(x,y1,'-b',x,y2,'-.g','linewidth',2);
plot(rx,ry,'o','linewidth',2,'MarkerEdgeColor','r',...
    'MarkerFaceColor','r',...
    'MarkerSize',10);
legend('y1','y2','root');
plot([xl xr],[0 0],'k',[0 0],[yb yt],'k','linewidth',2); %axis
xlabel('x'); ylabel('y');
title('Root of 3x^3+x^2=x+5');
axis([xl,xr,yb,yt]);
PLOT command
Various line types, plot symbols and colors may be obtained with PLOT(X,Y,S) where
S is a character string made from one element from any or all the following 3
columns:

b blue . point - solid


g green o circle : dotted
r red x x-mark -. dashdot
c cyan + plus -- dashed
m magenta * star (none) no line
y yellow s square
k black d diamond
w white v triangle (down)
^ triangle (up)
< triangle (left)
> triangle (right)
p pentagram
h hexagram

For example, PLOT(X,Y,'c+:') plots a cyan dotted line with a plus at each data point;
PLOT(X,Y,'bd') plots a blue diamond at each data point but does not draw any line.

>> help plot

Open Method

• Open methods differ from bracketing methods, in


that open methods require only a single starting
value or two starting values that do not
necessarily bracket a root.
• Open methods may diverge as the computation
progresses, but when they do converge, they
usually do so much faster than bracketing
methods.
Open Method

a) Bracketing method
b) Diverging open method
c) Converging open method - note speed!

FIXED-POINT ITERATION (FPI)

cos(cos(cos(...cos(n)...)))  ->  0.739 085 133 2
(n = an arbitrary number, in radians)

cos(0.739 085 133 2) = 0.739 085 133 2,  i.e.  cos(r) = r

The real number r is a fixed point of the function g if g(r) = r.

Fixed points of a function:

g(x) = cos(x)  ->  r = 0.739 085 133 2
g(x) = x^3     ->  r = -1, 0, 1

Fixed points of a function and solutions

Every equation f(x) = 0 can be turned into a fixed-point problem g(x) = x.

General procedure of Fixed-Point Iteration:

g(x) = x
x1 = g(x0)   (x0 = initial guess)
x2 = g(x1)
x3 = g(x2)
...
x_(i+1) = g(x_i)

If g is continuous and the x_i converge to a number r, then

g(r) = g(lim (i->inf) x_i) = lim (i->inf) g(x_i) = lim (i->inf) x_(i+1) = r
Fixed points of a function
General procedure of Fixed-Point Iteration:

MATLAB Code: fpi.m


%Program 1.2 Fixed-Point Iteration
%Computes approximate solution of g(x)=x
%Input: inline function g, starting guess x0,
% number of steps k
%Output: Approximate solution xc

function xc=fpi(g,x0,k)
x(1)=x0;
for i=1:k
x(i+1)=g(x(i));
end
x' %transpose output to a column
xc=x(k+1);
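A Python counterpart of fpi.m, assuming the same interface (function g, starting guess x0, number of steps k):

```python
import math

def fpi(g, x0, k):
    """Fixed-Point Iteration: apply x_{i+1} = g(x_i) for k steps."""
    x = x0
    for _ in range(k):
        x = g(x)
    return x

# g(x) = cos(x): from any start the iterates approach 0.7390851332...
xc = fpi(math.cos, 1.0, 100)
```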

Fixed points of a function

Turning x^3 + x - 1 = 0 into a fixed-point problem:

Case 1: x = 1 - x^3, so g(x) = 1 - x^3

Case 2: x^3 = 1 - x, so x = (1 - x)^(1/3) and g(x) = (1 - x)^(1/3)

Case 3: add 2x^3 to both sides: 3x^3 + x = 1 + 2x^3, so
(3x^2 + 1) x = 1 + 2x^3 and g(x) = (1 + 2x^3)/(3x^2 + 1)
Fixed points of a function
Turning x^3 + x - 1 = 0 into a fixed-point problem:

Case 1: g(x) = 1 - x^3

iteration      xi
    0     0.50000000
    1     0.87500000
    2     0.33007813
    3     0.96403747
    4     0.10405419
    5     0.99887338
    6     0.00337606
    7     0.99999996
    8     0.00000012
    9     1.00000000
   10     0.00000000
   11     1.00000000
   12     0.00000000

g(0) = 1; g(1) = 0; the iterates alternate: FPI fails!

Case 2: g(x) = (1 - x)^(1/3)

iteration      xi
    0     0.50000000
    1     0.79370053
    2     0.59088011
    3     0.74236393
    4     0.63631020
    5     0.71380081
    6     0.65900615
    7     0.69863261
    8     0.67044850
    9     0.69072912
   10     0.67625892
   11     0.68664554
   12     0.67922234
   13     0.68454401
   14     0.68073737
   15     0.68346460
   16     0.68151292
   17     0.68291073
   18     0.68191019
   19     0.68262667
   20     0.68211376
   21     0.68248102
   22     0.68221809
   23     0.68240635
   24     0.68227157
   25     0.68236807

g(0.6823) = 0.6823 (slow convergence)

Case 3: g(x) = (1 + 2x^3)/(3x^2 + 1)

iteration      xi
    0     0.50000000
    1     0.71428571
    2     0.68317972
    3     0.68232842
    4     0.68232780
    5     0.68232780
    6     0.68232780
    7     0.68232780

g(0.68232780) = 0.68232780 (fast convergence)
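The three cases can be checked numerically with a plain iteration loop; a sketch (step counts chosen to match the tables):

```python
def iterate(g, x0, k):
    """Apply x -> g(x) k times."""
    x = x0
    for _ in range(k):
        x = g(x)
    return x

g2 = lambda x: (1 - x) ** (1 / 3)               # Case 2: converges slowly
g3 = lambda x: (1 + 2 * x**3) / (3 * x**2 + 1)  # Case 3: converges fast

r2 = iterate(g2, 0.5, 25)   # still only ~4 correct digits after 25 steps
r3 = iterate(g3, 0.5, 5)    # all displayed digits correct within ~4 steps
# Case 1, g(x) = 1 - x**3, never settles: it ends up alternating 0, 1, 0, 1, ...
```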

Fixed points of a function

Turning x^3 + x - 1 = 0 into a fixed-point problem:

Case 1: g(x) = 1 - x^3    Case 2: g(x) = (1 - x)^(1/3)    Case 3: g(x) = (1 + 2x^3)/(3x^2 + 1)

Figure: xi plotted against iteration (0 to 30) for the three cases: Case 1
oscillates between 0 and 1, Case 2 converges slowly, Case 3 converges within
a few steps.
Geometry of fixed-point iteration
Cobweb Diagram: x^3 + x - 1 = 0

Case 1: g(x) = 1 - x^3    Case 2: g(x) = (1 - x)^(1/3)    Case 3: g(x) = (1 + 2x^3)/(3x^2 + 1)

What makes FPI spiral in toward the fixed point,
or spiral out away from the fixed point?

Linear convergence of fixed-point iteration

Cobweb Diagram:

g1(x) = -(3/2)x + 5/2        g2(x) = -(1/2)x + 3/2

r = 1                        r = 1

g1'(1) = -3/2                g2'(1) = -1/2
2 2
Linear convergence of fixed-point iteration
Generalization:

g1(x) = -(3/2)x + 5/2
g1(x) = -(3/2)(x - 1) + 1     (rewritten about x = r, where r = 1)
g1(x) - 1 = -(3/2)(x - 1)

x_(i+1) - 1 = -(3/2)(x_i - 1);  if the error at step i is e_i = |x_i - 1|, then

e_(i+1) = (3/2) e_i : errors increase at each step by a factor of 3/2
Linear convergence of fixed-point iteration

Generalization:

g2(x) = -(1/2)x + 3/2
g2(x) = -(1/2)(x - 1) + 1     (rewritten about x = r, where r = 1)
g2(x) - 1 = -(1/2)(x - 1)

x_(i+1) - 1 = -(1/2)(x_i - 1);  if the error at step i is e_i = |x_i - 1|, then

e_(i+1) = (1/2) e_i : errors decrease at each step by a factor of 1/2
Linear convergence of fixed-point iteration
Definition 1.5:
Let e_i denote the error at step i of an iterative method. If

lim (i->inf) e_(i+1)/e_i = S < 1,

the method is said to obey linear convergence with rate S.

Theorem 1.6:
Assume that g is continuously differentiable, that g(r) = r, and that
S = |g'(r)| < 1. Then Fixed-Point Iteration converges linearly with
rate S to the fixed point r for initial guesses sufficiently close to r.

Linear convergence of fixed-point iteration

Proof sketch of Theorem 1.6: by the Mean Value Theorem,

(g(x_i) - g(r)) / (x_i - r) = g'(c_i)

for some c_i between x_i and r (x_i = best guess at step i). Since
g(x_i) = x_(i+1) and g(r) = r,

x_(i+1) - r = g'(c_i)(x_i - r),  so with e_i = |x_i - r|:  e_(i+1) = |g'(c_i)| e_i

If S = |g'(r)| < 1, then for x sufficiently close to r, |g'(x)| <= (S + 1)/2 < 1, so

e_(i+1) <= ((S + 1)/2) e_i : the error decreases by a factor of (S + 1)/2,

and in the limit lim (i->inf) e_(i+1)/e_i = |g'(c_i)| -> |g'(r)| = S, i.e. e_(i+1) ≈ S e_i.
Linear convergence of fixed-point iteration
Theorem 1.6: Assume that g is continuously differentiable, that g(r) = r, and that
S = |g'(r)| < 1. Then FPI converges linearly with rate S to the fixed point r for
initial guesses sufficiently close to r.

Why (S + 1)/2 lies strictly between S and 1:

S < 1  =>  1 - S > 0  =>  (1 - S)/2 > 0  =>  S < S + (1 - S)/2 = (S + 1)/2 < 1

Figure: S, (S + 1)/2, and similar candidate bounds plotted for 0 <= S <= 1;
the line (S + 1)/2 stays strictly between S and 1.

Linear convergence of fixed-point iteration

Definition 1.7:
An iterative method is called locally convergent to r if the method
converges to r for initial guesses sufficiently close to r.

f(x) = x^3 + x - 1;  r ≈ 0.6823

Case 1: g(x) = 1 - x^3
g'(x) = -3x^2
S = |g'(r)| = 3(0.6823)^2 = 1.3966 > 1  ->  diverges

Case 2: g(x) = (1 - x)^(1/3)
g'(x) = -1 / (3 (1 - x)^(2/3))
S = |g'(r)| = 0.716 < 1  ->  converges

Case 3: g(x) = (1 + 2x^3)/(3x^2 + 1)
g'(x) = 6x(x^3 + x - 1)/(3x^2 + 1)^2
S = |g'(r)| = 0  ->  converges rapidly
Linear convergence of fixed-point iteration
Example 1.3:
Explain why the Fixed-Point Iteration of g(x) = cos x converges.

Apply Theorem 1.6: g(r) = r, and for nearby guesses r ≈ 0.74:

S = |g'(r)| = |-sin r| ≈ sin 0.74 ≈ 0.67 < 1, so the iteration converges!
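The rate S = sin r can be observed directly from successive error ratios; a small numerical check (the reference value of r is the well-known limit of the cosine iteration, stated here to full double precision):

```python
import math

r = 0.7390851332151607    # fixed point of cos, to full double precision
x = 0.74                  # nearby initial guess
errors = []
for _ in range(10):
    x = math.cos(x)
    errors.append(abs(x - r))

ratio = errors[-1] / errors[-2]   # tends to S = |g'(r)| = sin(r) = 0.6736...
```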

Linear convergence of fixed-point iteration

Example 1.4:
Use Fixed-Point Iteration to find a root of cos x = sin x.

Convert the equation to a fixed-point problem:
cos x - sin x + x = x,  so  g(x) = cos x - sin x + x

iteration      xi          g(xi)       ei=|xi-r|    ei/ei-1
    0     0.0000000    1.0000000    0.7853982
    1     1.0000000    0.6988313    0.2146018    0.273
    2     0.6988313    0.8211025    0.0865669    0.403
    3     0.8211025    0.7706197    0.0357043    0.412
    4     0.7706197    0.7915188    0.0147785    0.414
    5     0.7915188    0.7828630    0.0061206    0.414
    6     0.7828630    0.7864483    0.0025352    0.414
    7     0.7864483    0.7849632    0.0010501    0.414
    8     0.7849632    0.7855783    0.0004350    0.414
    9     0.7855783    0.7853235    0.0001801    0.414
   10     0.7853235    0.7854291    0.0000747    0.415
   11     0.7854291    0.7853853    0.0000309    0.414
   12     0.7853853    0.7854035    0.0000129    0.418
   13     0.7854035    0.7853960    0.0000053    0.411
   14     0.7853960    0.7853991    0.0000022    0.415
   15     0.7853991    0.7853978    0.0000009    0.409
   16     0.7853978    0.7853983    0.0000004    0.444
   17     0.7853983    0.7853981    0.0000001    0.250
   18     0.7853981    0.7853982    0.0000001    1.000
   19     0.7853982    0.7853982    0.0000000

cos x = sin x at x = pi/4 ≈ 0.7853982

e_i ≈ 0.414 e_(i-1),  and  e_i/e_(i-1)  ->  S = |g'(r)| = 0.414

Figure: g(xi) against iteration, converging to 0.785.
Linear convergence of fixed-point iteration
Example 1.5:
Find the fixed points of g(x) = 2.8x - x^2.

Solve 2.8x - x^2 = x: the fixed points are 0 and 1.8.

x0 = 0.1
x1 = g(x0) = 0.2700
x2 = g(x1) = 0.6831
x3 = g(x2) = 1.4461
x4 = g(x3) = 1.9579
...

g'(x) = 2.8 - 2x:
|g'(1.8)| = 0.8 < 1  (locally convergent at 1.8)
|g'(0)| = 2.8 > 1    (not locally convergent at 0)
Linear convergence of fixed-point iteration

Example 1.6:
Calculate sqrt(2) by using FPI up to the first 10 digits.

Idea: if x < sqrt(2) then 2/x > sqrt(2) (and vice versa), so average x and 2/x:

initial guess x0 = 1:   1 < sqrt(2) < 2/1,  so  x1 = (1 + 2/1)/2 = 3/2
x1 = 3/2:   x2 = (3/2 + 4/3)/2 = 17/12 ≈ 1.4166
x2 = 17/12:   x3 = (17/12 + 24/17)/2 = 577/408 ≈ 1.414215686

In general:  x_(i+1) = (x_i + 2/x_i)/2
Linear convergence of fixed-point iteration
Example 1.6:
Calculate sqrt(2) by the Babylonian Method:

sqrt(2) ≈ 1 + 24/60 + 51/60^2 + 10/60^3 = 1.41421296...
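The averaging scheme is easy to reproduce exactly with rational arithmetic; a sketch using Python's fractions module (function name illustrative):

```python
from fractions import Fraction

def sqrt_iter(a, x0, k):
    """Babylonian method for sqrt(a): iterate x_{i+1} = (x_i + a/x_i) / 2,
    kept as exact fractions so the intermediate values 3/2, 17/12, ...
    from the worked example are reproduced exactly."""
    x = Fraction(x0)
    for _ in range(k):
        x = (x + Fraction(a) / x) / 2
    return x

x3 = sqrt_iter(2, 1, 3)   # 577/408 = 1.414215686..., as in Example 1.6
```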

Stopping criteria
FPI requires a tolerance TOL to be set:

• the number of steps required for FPI to converge within a given
tolerance is rarely predictable.
• a decision must be made about terminating the algorithm.

absolute error stopping criterion:  |x_(i+1) - x_i| < TOL

relative error stopping criterion:  |x_(i+1) - x_i| / |x_(i+1)| < TOL
(solution is not near zero)

hybrid absolute/relative error stopping criterion:
|x_(i+1) - x_i| / max(|x_(i+1)|, theta) < TOL,  theta > 0
(solution is near zero)

• set a limit on the maximum number of steps in case convergence fails
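The hybrid criterion and the step cap drop directly into the FPI loop; a sketch (the names `fpi_stop`, `theta`, and `max_steps` are illustrative):

```python
import math

def fpi_stop(g, x0, tol=1e-8, theta=1e-12, max_steps=100):
    """Fixed-Point Iteration with the hybrid stopping criterion
    |x_{i+1} - x_i| / max(|x_{i+1}|, theta) < tol and a step cap."""
    x = x0
    for _ in range(max_steps):
        x_new = g(x)
        if abs(x_new - x) / max(abs(x_new), theta) < tol:
            return x_new
        x = x_new
    raise RuntimeError("FPI did not converge within max_steps")

root = fpi_stop(math.cos, 1.0)
```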


Homework-1.2
1. Use Theorem 1.6 to determine whether Fixed-Point Iteration of g(x) is
locally convergent to the given fixed point r. (hand calculation)

(a) g(x) = (2x - 1)^(1/3),  r = 1
(b) g(x) = (x^3 + 1)/2,  r = 1
(c) g(x) = sin x + x,  r = 0

2. Calculate the square root of 5 to eight correct decimal places by
using Fixed-Point Iteration as in Example 1.6. State your initial guess and
the number of steps needed. (developing computer code will get
additional points)

3. Calculate the cube root of 5 to eight correct decimal places by
using FPI with g(x) = (2x + A/x^2)/3, where A = 5. State your initial
guess and the number of steps needed. (developing computer code will
get additional points)

LIMITS OF ACCURACY

One of the goals of numerical analysis is to compute answers within a
specified level of accuracy.

Double precision = 16 decimal digits (52-bit mantissa)

In some cases, however, a calculation loses so much accuracy that a
double-precision computer cannot get anywhere near 16 correct digits,
even with the best algorithm.
Forward and backward error
Example 1.7: In some cases, pencil and paper can still outperform a
computer. Find the root of f(x) = x^3 - 2x^2 + (4/3)x - 8/27 using the
Bisection Method within six correct significant digits.

Bracketing [0, 1]: f(0) * f(1) < 0, so a solution exists!

iteration       a      sign f(a)       c      sign f(c)       b      sign f(b)
    0      0.0000000      -      0.5000000      -      1.0000000      +
    1      0.5000000      -      0.7500000      +      1.0000000      +
    2      0.5000000      -      0.6250000      -      0.7500000      +
    3      0.6250000      -      0.6875000      +      0.7500000      +
    4      0.6250000      -      0.6562500      -      0.6875000      +
    5      0.6562500      -      0.6718750      +      0.6875000      +
    6      0.6562500      -      0.6640625      -      0.6718750      +
    7      0.6640625      -      0.6679688      +      0.6718750      +
    8      0.6640625      -      0.6660156      -      0.6679688      +
    9      0.6660156      -      0.6669922      +      0.6679688      +
   10      0.6660156      -      0.6665039      -      0.6669922      +
   11      0.6665039      -      0.6667481      +      0.6669922      +
   12      0.6665039      -      0.6666260      -      0.6667481      +
   13      0.6666260      -      0.6666870      +      0.6667481      +
   14      0.6666260      -      0.6666565      -      0.6666870      +
   15      0.6666565      -      0.6666718      +      0.6666870      +
   16      0.6666565      -      0.6666641      -      0.6666718      +

By hand:  f(x) = (x - 2/3)^3,  so  r = 2/3 = 0.66666666...  and  f(2/3) = 0

By Bisection Method:  f(0.6666641) ≈ 0, so the computed root stalls short of
six correct digits.

Forward and backward error

Example 1.7 (continued):

Figure: f(x) = (x - 2/3)^3 as evaluated in double precision near x = 2/3;
rounding noise obscures the sign of f near the triple root, which is what
limits the Bisection Method here.
The Wilkinson polynomial
Wilkinson polynomial:

W(x) = (x - 1)(x - 2)(x - 3) ... (x - 20)

The roots are the integers from 1 to 20. Expanded:

W(x) = x^20 - 210x^19 + 20615x^18 - 1256850x^17 + 53327946x^16 - 1672280820x^15
+ 40171771630x^14 - 756111184500x^13 + 11310276995381x^12
- ... + 2432902008176640000

Evaluating the expanded form in double precision, fzero computes the root
near 16 as 16.01468030580458:

wilkpoly.m
>> format long
>> fzero('x.^20-210*x.^19+...', 16)

Conditioning
This is the first appearance of the concept of condition number, a
measure of error magnification. Numerical Analysis is the study of
algorithms, which take data defining the problem as input and deliver
an answer as output. Condition number refers to the part of this
magnification that is inherent in the theoretical problem itself,
irrespective of the particular algorithm used to solve it.
It is important to note that the error magnification factor measures
only magnification due to the problem. Along with conditioning, there
is a parallel concept, stability, that refers to the magnification of small
input errors due to the algorithm, not the problem itself. An algorithm
is called stable if it always provides an approximate solution with small
backward error. If the problem is well-conditioned and the algorithm is
stable, we can expect both small backward and forward error.

NEWTON’S METHOD
Newton-Raphson Method:
• converges much faster than the linearly convergent methods
• special case of FPI

The slope of the tangent line at x_i:

f'(x_i) = (f(x_i) - 0) / (x_i - x_(i+1))

so

x_(i+1) = x_i - f(x_i)/f'(x_i)
NEWTON’S METHOD
Newton-Raphson Method:

point-slope formula of the tangent line at x0:

y - f(x0) = f'(x0)(x - x0)

Set y = 0 at the x-intercept:

f'(x0)(x - x0) = 0 - f(x0)
x - x0 = -f(x0)/f'(x0)
x = x0 - f(x0)/f'(x0)     (the approximate root x1)

x1 = x0 - f(x0)/f'(x0)    (x0 = initial guess)

In general:

x_(i+1) = x_i - f(x_i)/f'(x_i)  for i = 0, 1, 2, ...

NEWTON’S METHOD
Newton-Raphson Method:

• Pro: The error of the i+1th iteration is roughly


proportional to the square of the error of the ith
iteration - this is called quadratic convergence

• Con: Some functions show slow or poor


convergence
NEWTON’S METHOD
Example 1.11: Find the Newton formula for the equation x^3 + x - 1 = 0.

f(x) = x^3 + x - 1
f'(x) = 3x^2 + 1

The Newton formula:  x_(i+1) = x_i - f(x_i)/f'(x_i)  for i = 0, 1, 2, ...

x_(i+1) = x_i - (x_i^3 + x_i - 1)/(3x_i^2 + 1) = (2x_i^3 + 1)/(3x_i^2 + 1)

initial guess: x0 = -0.7

x1 = (2x0^3 + 1)/(3x0^2 + 1) = 0.1271
x2 = (2x1^3 + 1)/(3x1^2 + 1) = 0.9577

NEWTON’S METHOD
Example 1.11: x^3 + x - 1 = 0  (initial guess x0 = -0.7)

iteration       xi          ei=|xi-r|     ei/(ei-1)^2
    0      -0.70000000     1.38232780
    1       0.12712551     0.55520230      0.2906
    2       0.95767812     0.27535032      0.8933
    3       0.73482779     0.05249999      0.6924
    4       0.68459177     0.00226397      0.8214
    5       0.68233217     0.00000437      0.8527
    6       0.68232780     0.00000000      0.8541
    7       0.68232780     0.00000000
Quadratic convergence of Newton’s Method
Definition 1.11: The iteration is quadratically convergent if

M = lim (i->inf) e_(i+1)/e_i^2 < inf

(the ratios e_(i+1)/e_i^2 in the Example 1.11 table approach such a limit)

Theorem 1.11: Let f be twice continuously differentiable and f(r) = 0.
If f'(r) ≠ 0, then Newton’s Method is locally and quadratically
convergent to r:

lim (i->inf) e_(i+1)/e_i^2 = M,  where  M = |f''(r) / (2 f'(r))|,  i.e.  e_(i+1) ≈ M e_i^2

Quadratic convergence of Newton’s Method

Convergence rates:

1. Newton’s Method:
e_(i+1) ≈ M e_i^2,  where  M = |f''(r)/(2f'(r))|   (the value of M is less critical)

2. Linearly convergent method (FPI):
e_(i+1) ≈ S e_i,  where  S = |g'(r)| < 1 for convergence

3. Linearly convergent method (Bisection Method):
e_(i+1) ≈ S e_i,  where  S = 1/2
Quadratic convergence of Newton’s Method
Example 1.11: x^3 + x - 1 = 0

f(x) = x^3 + x - 1
f'(x) = 3x^2 + 1
f''(x) = 6x

The ratios e_(i+1)/e_i^2 in the table above approach

M = |f''(r)/(2f'(r))| = 6(0.6823) / (2(3(0.6823)^2 + 1)) ≈ 0.85   (with xc = 0.6823)
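The limiting ratio M ≈ 0.85 can be reproduced numerically; a small check that uses a long run of the same iteration as the reference value for r:

```python
f = lambda x: x**3 + x - 1
fp = lambda x: 3 * x**2 + 1

xs = [-0.7]                         # iterates of Newton's Method
for _ in range(12):
    xs.append(xs[-1] - f(xs[-1]) / fp(xs[-1]))

r = xs[-1]                          # converged value, ~0.68232780
errors = [abs(x - r) for x in xs]
M = errors[5] / errors[4] ** 2      # e_{i+1}/e_i^2 -> |f''(r)/(2 f'(r))| ~ 0.85
```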
Quadratic convergence of Newton’s Method

Example 1.6 revisited:
Calculate sqrt(2) up to the first 10 digits, now with Newton’s Method.

Let f(x) = x^2 - a for a positive number a, and find the root sqrt(a):

x_(i+1) = x_i - f(x_i)/f'(x_i) = x_i - (x_i^2 - a)/(2x_i) = x_i/2 + a/(2x_i)

(the same averaging formula as the FPI of Example 1.6 when a = 2)

f'(x) = 2x,  so  f'(sqrt(a)) = 2 sqrt(a) ≠ 0
f''(x) = 2,  so  f''(sqrt(a)) = 2

Newton’s Method is quadratically convergent since f'(sqrt(a)) = 2 sqrt(a) ≠ 0:

e_(i+1) ≈ M e_i^2,  where  M = |f''(r)/(2f'(r))| = 2/(4 sqrt(a)) = 1/(2 sqrt(a))
(convergence rate)
Linear convergence of Newton’s Method
Example 1.12: Use Newton’s Method to find a root of f(x) = x^2.

x_(i+1) = x_i - f(x_i)/f'(x_i) = x_i - x_i^2/(2x_i) = x_i/2

iteration       xi          ei=|xi-r|     ei/ei-1
    0      1.00000000     1.00000000
    1      0.50000000     0.50000000      0.5000
    2      0.25000000     0.25000000      0.5000
    3      0.12500000     0.12500000      0.5000
    4      0.06250000     0.06250000      0.5000
    5      0.03125000     0.03125000      0.5000
    6      0.01562500     0.01562500      0.5000
    7      0.00781250     0.00781250

e_(i+1) = (1/2) e_i, i.e. linear convergence with S = 1/2.

Newton’s Method does NOT always converge quadratically.

Linear convergence of Newton’s Method

Example 1.13: Use Newton’s Method to find a root of f(x) = x^m,
where m is a positive integer.

x_(i+1) = x_i - f(x_i)/f'(x_i) = x_i - x_i^m/(m x_i^(m-1)) = ((m - 1)/m) x_i

The only root is r = 0, so defining e_i = |x_i - r| = |x_i| yields

e_(i+1) = S e_i,  where  S = (m - 1)/m

Theorem 1.12: If the (m+1)-times continuously differentiable function f
on [a, b] has a multiplicity-m root at r, then Newton’s Method is
locally convergent to r, and the error e_i at step i satisfies

lim (i->inf) e_(i+1)/e_i = S,  where  S = (m - 1)/m
Linear convergence of Newton’s Method
Example 1.14: Find the multiplicity of the root r = 0 of
f(x) = sin x + x^2 cos x - x^2 - x, and estimate the number of steps of
Newton’s Method required to converge within six correct decimal
places (use x0 = 1).

f(x) = sin x + x^2 cos x - x^2 - x                        f(0) = 0
f'(x) = cos x + 2x cos x - x^2 sin x - 2x - 1             f'(0) = 0
f''(x) = -sin x + 2 cos x - 4x sin x - x^2 cos x - 2      f''(0) = 0
f'''(x) = -cos x - 6 sin x - 6x cos x + x^2 sin x         f'''(0) = -1 ≠ 0

so the multiplicity is m = 3, and by Theorem 1.12,

e_(i+1) ≈ ((m - 1)/m) e_i = (2/3) e_i

The number of steps n needed to get the error within six decimal places:

(2/3)^n <= 0.5 x 10^-6   =>   n >= (log10(0.5) - 6) / log10(2/3) = 35.78  ->  36 steps

Modified Newton’s Method

Theorem 1.13: If f is (m+1)-times continuously differentiable on [a, b],
which contains a root r of multiplicity m > 1, then the Modified Newton’s Method

x_(i+1) = x_i - m f(x_i)/f'(x_i)

converges locally and quadratically to r.

By the Modified Newton’s Method, Example 1.14 becomes:

iteration        xi                ei=|xi-r|          ei/ei-1
    0      1.000000000000     1.000000000000
    1      0.164770719582     0.164770719582      0.1648
    2      0.016207337711     0.016207337711      0.0984
    3      0.000246541438     0.000246541438      0.0152
    4      0.000000060721     0.000000060721      0.0002
    5     -0.000000002390     0.000000002390      0.0394
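Applying the theorem to Example 1.14 in code, with m = 3 and the derivatives computed above:

```python
import math

f  = lambda x: math.sin(x) + x**2 * math.cos(x) - x**2 - x
fp = lambda x: math.cos(x) + 2 * x * math.cos(x) - x**2 * math.sin(x) - 2 * x - 1

m = 3          # multiplicity of the root r = 0
x = 1.0
for _ in range(4):
    x = x - m * f(x) / fp(x)   # Modified Newton step
# after 4 steps x is already ~6e-8, versus ~36 steps for plain Newton
```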
Nonconvergence of Newton’s Method
Example 1.15: Apply Newton’s Method to f(x) = -x^4 + 3x^2 + 2 with
starting guess x0 = 1.

x_(i+1) = x_i - f(x_i)/f'(x_i) = x_i - (-x_i^4 + 3x_i^2 + 2)/(-4x_i^3 + 6x_i)

iteration      xi
    0      1.00000
    1     -1.00000
    2      1.00000
    3     -1.00000
    4      1.00000
    5     -1.00000

The iterates cycle between 1 and -1 and never converge.
Newton’s Method can fail in other ways as well.
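The cycle is easy to verify in a few lines (the signs in f follow the reconstruction of the example above):

```python
f  = lambda x: -x**4 + 3 * x**2 + 2
fp = lambda x: -4 * x**3 + 6 * x

x = 1.0
iterates = [x]
for _ in range(6):
    x = x - f(x) / fp(x)    # f(1) = 4, f'(1) = 2, so 1 -> -1; by symmetry -1 -> 1
    iterates.append(x)
# iterates alternate 1, -1, 1, -1, ...: the method cycles and never converges
```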

Nonconvergence of Newton’s Method

Figure: y = x e^(-x) together with the line y = 0; the curve crosses zero only
at x = 0 and decays toward the x-axis as x grows.

Newton’s Method diverges for all x0 > 1.

Homework-1.3
1. Apply two steps of Newton’s Method with initial guess x0 = 0:
x^4 + x^2 - x - 1 = 0

2. Sketch a function f and an initial guess for which Newton’s Method
diverges.

3. Use Newton’s Method to approximate the root to eight correct
decimal places. (computer script will get additional points.)
x^5 + x = 1

ROOT-FINDING WITHOUT DERIVATIVES

Newton’s Method converges faster than the Bisection and FPI methods,
but the derivative may not be available in some cases.

Newton’s Method:  x_(i+1) = x_i - f(x_i)/f'(x_i)

Secant Method:  x_(i+1) = x_i - f(x_i)(x_i - x_(i-1)) / (f(x_i) - f(x_(i-1)))
for i = 1, 2, 3, ...  (requires two initial points)

Brent’s Method: a hybrid method combining the best features of
iterative and bracketing methods. It maintains an iterate x_i together with
a bracket [a_i, b_i] satisfying f(a)f(b) < 0, mixing Secant Method and
Bisection Method steps.
Secant Method and variants

Newton’s Method:  x_(i+1) = x_i - f(x_i)/f'(x_i)

Replacing the derivative with the approximation

f'(x_i) ≈ (f(x_i) - f(x_(i-1))) / (x_i - x_(i-1))

gives the Secant Method:

x_(i+1) = x_i - f(x_i)(x_i - x_(i-1)) / (f(x_i) - f(x_(i-1)))

• Two starting guesses are needed.
• If f'(r) ≠ 0 and f''(r) ≠ 0, the approximate error relationships are

e_(i+1) ≈ |f''(r)/(2f'(r))| e_i e_(i-1)   and   e_(i+1) ≈ |f''(r)/(2f'(r))|^(alpha-1) e_i^alpha

where alpha = (1 + sqrt(5))/2 ≈ 1.62   (compare with Theorem 1.11)

Secant Method and variants

Example 1.16: Apply the Secant Method with starting guesses x0 = 0, x1 = 1
to find the root of f(x) = x^3 + x - 1.

x_(i+1) = x_i - f(x_i)(x_i - x_(i-1)) / (f(x_i) - f(x_(i-1)))
        = x_i - (x_i^3 + x_i - 1)(x_i - x_(i-1)) / ((x_i^3 + x_i) - (x_(i-1)^3 + x_(i-1)))

starting with x0 = 0 and x1 = 1:

i = 1:  x2 = x1 - f(x1)(x1 - x0)/(f(x1) - f(x0)) = 1 - (1)(1 - 0)/(1 - (-1)) = 1/2

i = 2:  x3 = 1/2 - f(1/2)(1/2 - 1)/(f(1/2) - f(1))
           = 1/2 - (-3/8)(-1/2)/(-3/8 - 1) = 1/2 + 3/22 = 7/11
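A Python sketch of the Secant Method; the early-exit guard handles exact convergence, where the denominator would otherwise vanish (function name illustrative):

```python
def secant(f, x0, x1, k):
    """Secant Method: x_{i+1} = x_i - f(x_i)(x_i - x_{i-1}) / (f(x_i) - f(x_{i-1}))."""
    for _ in range(k):
        denom = f(x1) - f(x0)
        if denom == 0:        # iterates (or f-values) coincide: stop
            break
        x0, x1 = x1, x1 - f(x1) * (x1 - x0) / denom
    return x1

# Example 1.16: f(x) = x^3 + x - 1 with x0 = 0, x1 = 1
x2 = secant(lambda x: x**3 + x - 1, 0, 1, 1)     # one step gives 1/2
root = secant(lambda x: x**3 + x - 1, 0, 1, 12)  # converges to 0.68232780...
```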
Secant Method and variants

Generalizations of the Secant Method:

Method of False Position or Regula Falsi:
is an improvement on both the Bisection Method and the Secant Method, taking the best properties of each. However, it is not always better!

Muller's Method:
is a generalization of the Secant Method in a different direction. Using the 3 previous points, it fits a parabola y = p(x), which may intersect the x-axis in 0 or 2 points.

Inverse Quadratic Interpolation (IQI):
is a similar generalization of the Secant Method to parabolas, but uses x = p(y) so that the curve intersects the x-axis in a single point.

MATLAB’s fzero function

MATLAB’s fzero provides the best qualities of both


bracketing methods and open methods.
Using an initial guess:
x = fzero(function, x0)
[x, fx] = fzero(function, x0)
function is a function handle to the function being evaluated
x0 is the initial guess
x is the location of the root
fx is the function evaluated at that root
Using an initial bracket:
x = fzero(function, [x0 x1])
[x, fx] = fzero(function, [x0 x1])
As above, except x0 and x1 are guesses that must bracket a sign change
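The strategy behind fzero (hold a guaranteed sign-change bracket while attempting a fast secant step, and fall back to bisection when that step misbehaves) can be sketched in Python. This is a simplified illustration only, not MATLAB's actual Brent-Dekker implementation:

```python
def hybrid_root(f, a, b, ftol=1e-12, max_iter=100):
    """Toy illustration of fzero's idea: bracket + secant + bisection fallback."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "initial bracket must contain a sign change"
    x_prev, f_prev = a, fa        # previous iterate for the secant step
    x, fx = b, fb
    for _ in range(max_iter):
        if fx != f_prev:
            # secant step through the two most recent points
            c = x - fx * (x - x_prev) / (fx - f_prev)
        else:
            c = 0.5 * (a + b)
        if not (min(a, b) < c < max(a, b)):
            c = 0.5 * (a + b)     # bisection fallback keeps the bracket valid
        fc = f(c)
        if abs(fc) < ftol:
            return c
        if fa * fc < 0:           # shrink bracket, preserving the sign change
            b, fb = c, fc
        else:
            a, fa = c, fc
        x_prev, f_prev, x, fx = x, fx, c, fc
    return c

print(hybrid_root(lambda x: x**10 - 1, -0.14, 1.14))   # approximately 1.0
```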
MATLAB’s fzero function

• Options may be passed to fzero as a third input


argument - the options are a data structure created
by the optimset command
• options = optimset(‘par1’, val1, ‘par2’, val2,…)
– parn is the name of the parameter to be set
– valn is the value to which to set that parameter
– The parameters commonly used with fzero are:
• display: when set to ‘iter’ displays a detailed record of all the
iterations
• tolx: A positive scalar that sets a termination tolerance on x.

MATLAB’s fzero function

• options = optimset(‘display’, ‘iter’);


– Sets options to display each iteration of root
finding process
• [x, fx] = fzero(@(x) x^10-1, 0.5, options)
– Uses fzero to find the root of f(x) = x^10 - 1 starting with
an initial guess of x = 0.5.
• MATLAB reports x=1, fx=0 after 35
function counts

MATLAB’s fzero function


>> options = optimset('display','iter');
>> [x, fx] = fzero(@(x) x^10-1, 0.5, options)

Search for an interval around 0.5 containing a sign change:
 Func-count    a          f(a)             b          f(b)        Procedure
    1               0.5     -0.999023              0.5     -0.999023   initial interval
    3          0.485858     -0.999267         0.514142     -0.998709   search
    5              0.48     -0.999351             0.52     -0.998554   search
    7          0.471716     -0.999454         0.528284     -0.998307   search
    9              0.46     -0.999576             0.54     -0.997892   search
   11          0.443431     -0.999706         0.556569     -0.997148   search
   13              0.42     -0.999829             0.58     -0.995692   search
   15          0.386863     -0.999925         0.613137     -0.992491   search
   17              0.34     -0.999979             0.66     -0.984317   search
   19          0.273726     -0.999998         0.726274     -0.959167   search
   21              0.18            -1             0.82     -0.862552   search
   23         0.0474517            -1         0.952548     -0.385007   search
   25             -0.14            -1             1.14       2.70722   search

Search for a zero in the interval [-0.14, 1.14]:
 Func-count    x          f(x)             Procedure
   25             -0.14            -1        initial
   26          0.205272            -1        interpolation
   27          0.672636     -0.981042        bisection
   28          0.906318     -0.626056        bisection
   29           1.02316      0.257278        bisection
   30          0.989128     -0.103551        interpolation
   31          0.998894    -0.0110017        interpolation
   32           1.00001  7.68385e-005        interpolation
   33                 1 -3.83061e-007        interpolation
   34                 1  -1.3245e-011        interpolation
   35                 1             0        interpolation

Zero found in the interval [-0.14, 1.14]

x =
     1

fx =
     0
MATLAB’s roots function
• MATLAB has a built-in function called roots that determines all the roots
of a polynomial, including imaginary and complex ones.
• x = roots(c)
– x is a column vector containing the roots
– c is a row vector containing the polynomial coefficients
• Example:
– Find the roots of
f(x) = x^5 - 3.5x^4 + 2.75x^3 + 2.125x^2 - 3.875x + 1.25
– x = roots([1 -3.5 2.75 2.125 -3.875 1.25])

>> x = roots([1 -3.5 2.75 2.125 -3.875 1.25])


x=
2.0000
-1.0000
1.0000 + 0.5000i
1.0000 - 0.5000i
0.5000
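The reported roots can be verified by evaluating the polynomial at each of them, for example with Horner's rule in Python (a pure-Python stand-in for MATLAB's polyval):

```python
def horner(coeffs, x):
    """Evaluate a polynomial given MATLAB-style coefficients
    (highest power first) using Horner's rule; works for complex x too."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

c = [1, -3.5, 2.75, 2.125, -3.875, 1.25]
for r in (2.0, -1.0, 0.5, 1.0 + 0.5j, 1.0 - 0.5j):
    print(r, horner(c, r))    # each printed value should be (near) zero
```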

MATLAB’s poly function


• MATLAB’s poly function can be used to determine
polynomial coefficients if roots are given:
– b = poly([0.5 -1])
• Finds f(x) where f(x) =0 for x=0.5 and x=-1
• MATLAB reports b = [1.000 0.5000 -0.5000]
• This corresponds to f(x) = x^2 + 0.5x - 0.5
• MATLAB’s polyval function can evaluate a
polynomial at one or more points:
– a = [1 -3.5 2.75 2.125 -3.875 1.25];
• If used as coefficients of a polynomial, this corresponds to
f(x) = x^5 - 3.5x^4 + 2.75x^3 + 2.125x^2 - 3.875x + 1.25
– polyval(a, 1)
• This calculates f(1), which MATLAB reports as -0.2500
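In effect, poly multiplies out the factors (x - r_i) and polyval applies Horner's rule. A pure-Python sketch of both (names borrowed from MATLAB for clarity) reproduces the values above:

```python
def poly(roots):
    """Coefficients (highest power first) of the monic polynomial with the
    given roots: a pure-Python stand-in for MATLAB's poly."""
    coeffs = [1.0]
    for r in roots:
        grown = coeffs + [0.0]          # multiply the current polynomial by x...
        for i, c in enumerate(coeffs):
            grown[i + 1] -= r * c       # ...then subtract r times it
        coeffs = grown
    return coeffs

def polyval(coeffs, x):
    """Horner's rule: a stand-in for MATLAB's polyval."""
    result = 0.0
    for c in coeffs:
        result = result * x + c
    return result

b = poly([0.5, -1])
print(b)                                                  # [1.0, 0.5, -0.5]
print(polyval([1, -3.5, 2.75, 2.125, -3.875, 1.25], 1))   # -0.25
```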
KINEMATICS OF THE STEWART PLATFORM

KINEMATICS OF THE STEWART PLATFORM


Consider a two-dimensional version of the Stewart platform:
Finding the position of the platform is called the forward kinematics
problem.

p1, p2, p3: strut length

L1, L2, L3: platform length

No closed-form solution is available, yet for motion planning it is important to solve this problem as fast as possible, often in real time.
KINEMATICS OF THE STEWART PLATFORM
Solve the following equations: find x, y, θ from the given p1, p2, p3.

p1^2 = x^2 + y^2
p2^2 = (x + A2)^2 + (y + B2)^2
p3^2 = (x + A3)^2 + (y + B3)^2

where,

A2 = L3 cos θ - x1
B2 = L3 sin θ
A3 = L2 cos(θ + γ) - x2 = L2 [cos θ cos γ - sin θ sin γ] - x2
B3 = L2 sin(θ + γ) - y2 = L2 [cos θ sin γ + sin θ cos γ] - y2

KINEMATICS OF THE STEWART PLATFORM


Solve the following equations: find x, y, θ from the given p1, p2, p3.

Using p1^2 = x^2 + y^2 to expand the other two equations:

p2^2 = (x + A2)^2 + (y + B2)^2 = p1^2 + 2 A2 x + 2 B2 y + A2^2 + B2^2
p3^2 = (x + A3)^2 + (y + B3)^2 = p1^2 + 2 A3 x + 2 B3 y + A3^2 + B3^2

x = N1/D = [B3 (p2^2 - p1^2 - A2^2 - B2^2) - B2 (p3^2 - p1^2 - A3^2 - B3^2)] / [2 (A2 B3 - B2 A3)]
y = N2/D = [-A3 (p2^2 - p1^2 - A2^2 - B2^2) + A2 (p3^2 - p1^2 - A3^2 - B3^2)] / [2 (A2 B3 - B2 A3)]

where D = 2 (A2 B3 - B2 A3) ≠ 0.

Substituting x and y into p1^2 = x^2 + y^2 yields a single equation in the single unknown θ:

f(θ) = N1^2 + N2^2 - p1^2 D^2 = 0


KINEMATICS OF THE STEWART PLATFORM
Possible Design Project Topic:
1. Write a MATLAB function file for f(θ) based on the solving methods previously discussed in this class, and compare the error ratios and the number of steps required. Propose the best solving method. To test your code, set the parameters
L1 = 2, L2 = L3 = √2, γ = π/2,
p1 = p2 = p3 = √5.
Then substituting θ = π/4 or θ = -π/4 should make f(θ) = 0. Compare to the figures on the next page.

2. Plot f () on [-π, π]. There should be roots at ±π/4.


>> plot([u1, u2, u3, u1], [v1, v2, v3, v1], 'r'); hold on
>> plot([0, x1, x2], [0, 0, y2], 'bo')
These MATLAB commands plot a red triangle with vertices (u1,v1), (u2,v2), (u3,v3) and place small circles at the strut anchor points (0,0), (x1,0), (x2,y2). In addition, draw the struts.

3. Solve the forward kinematics problem for the planar Stewart platform specified by L2 = 3√2, L1 = L3 = 3, γ = π/4, p1 = p2 = 5, p3 = 3 using an equation solver.
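The parameter check in item 1 can be scripted directly. The Python sketch below (the course uses MATLAB) reads the test parameters as L1 = 2, L2 = L3 = √2, γ = π/2, p1 = p2 = p3 = √5, and assumes strut anchor coordinates x1 = 4, x2 = 0, y2 = 4; these anchor values are my assumption, as they are not stated on this slide. With this reading, f(±π/4) does evaluate to zero:

```python
from math import sin, cos, pi, sqrt

# Test parameters as read from the project statement:
L1, L2, L3 = 2.0, sqrt(2.0), sqrt(2.0)   # L1 fixes the geometry; not used in f directly
gamma = pi / 2
p1 = p2 = p3 = sqrt(5.0)
# ASSUMED strut anchor coordinates (not given on this slide):
x1, x2, y2 = 4.0, 0.0, 4.0

def f(theta):
    """f(theta) = N1^2 + N2^2 - p1^2 D^2, following the slides' formulas."""
    A2 = L3 * cos(theta) - x1
    B2 = L3 * sin(theta)
    A3 = L2 * cos(theta + gamma) - x2
    B3 = L2 * sin(theta + gamma) - y2
    D = 2.0 * (A2 * B3 - B2 * A3)
    N1 = B3 * (p2**2 - p1**2 - A2**2 - B2**2) - B2 * (p3**2 - p1**2 - A3**2 - B3**2)
    N2 = -A3 * (p2**2 - p1**2 - A2**2 - B2**2) + A2 * (p3**2 - p1**2 - A3**2 - B3**2)
    return N1**2 + N2**2 - p1**2 * D**2

print(f(pi / 4), f(-pi / 4))   # both values should be (near) zero
```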

KINEMATICS OF THE STEWART PLATFORM

θ is restricted to [-π, π]


Possible Design Project Topic
A catenary cable is one which is hung between two points not in the same vertical line.
As shown below, it is subject to no loads other than its own weight, which acts as a
uniform load per unit length along the cable, w (N/m). A free-body diagram of a
section AB is also shown below, where TA and TB are the tension forces at the ends. Based
on horizontal and vertical force balances, the following differential equation model of
the cable can be derived:
d^2 y / dx^2 = (w / TA) √(1 + (dy/dx)^2)

Calculus can be employed to solve this equation for the height of the cable y as a
function of distance x :
T w  T
y  A cosh x   y0  A
w  TA  w
(a) Use a numerical method to calculate a value for the parameter TA given values for the
parameters w=10 and y0=5, such that the cable has a height of y=15 at x=50.
(b) Develop a plot of y versus x for x=-50 to 100.
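Part (a) can be attacked with any root-finder from this chapter. In a Python bisection sketch (the residual name g is my own), imposing y(50) = 15 with w = 10 and y0 = 5 gives the residual g(TA) = (TA/w)(cosh(50 w/TA) - 1) - 10, and a sign change of g on [1000, 2000] brackets the root:

```python
from math import cosh

w, y0 = 10.0, 5.0             # load per unit length (N/m), height at x = 0

def g(TA):
    # residual of y(50) = 15, using y(x) = (TA/w) cosh(w x / TA) + y0 - TA/w
    return (TA / w) * (cosh(w * 50.0 / TA) - 1.0) - 10.0

a, b = 1000.0, 2000.0         # bracket found by inspection: g(a) > 0 > g(b)
for _ in range(60):           # 60 bisections shrink the bracket far below 1e-9
    c = 0.5 * (a + b)
    if g(a) * g(c) <= 0:
        b = c
    else:
        a = c
TA = 0.5 * (a + b)
print(TA)                     # approximately 1266

def y(x):
    # cable height, usable for the plot in part (b) over x = -50 to 100
    return (TA / w) * cosh(w * x / TA) + y0 - TA / w
```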
