Section 2 Rootfinding

This document discusses numerical methods for solving equations where f(x) = 0. It introduces the bisection method and false position method, which use bracketing to bound the root. The secant method is also presented, which uses linear interpolation. Newton-Raphson iteration is then covered in detail. It involves taking the derivative of f(x) and using the slope of the tangent line to iteratively estimate the root. An example application to finding implicit interest rates is shown. Analysis indicates Newton-Raphson has a quadratic rate of convergence, making it faster than bracketing methods. Engineering examples involving heat balances, mass balances and mechanics are given.


ROOTS OF EQUATIONS (C&C 4th, PT 2)

Objective: Solve for x, given that f(x) = 0,
or equivalently, solve for x such that
    g(x) = h(x)   ==>   f(x) = g(x) - h(x) = 0
Applications (C&C 4th, Table PT2.1, p. 106):
- heat balance
- mass balance
- total energy balance in mechanical and structural systems
- Kirchhoff's Law:  Σ_{i=1}^{n} V_i = 0 around a loop
- minimizing and maximizing functions:  dF/dS = 0

In many design applications:
- one aspect of the system is studied
- equations are nonlinear and implicit; no analytical or closed-form solution is possible
- forced to consider numerical methods
Illustrative examples of performance of rootfinding methods can be found in C&C Ch 8.
Determine real roots of:
- Algebraic Equations
- Transcendental Equations
- Combinations thereof
Engineering Economics Example:
A municipal bond has an annual payout of $1,000 for 20 years. It costs $7,500 to purchase now. Implicit interest rate, i = ?
Solution: The present value, PV, is:
    PV = A [ (1 - (1+i)^-n) / i ]
in which:  PV = present value or purchase price = $7,500
           A  = annual payment = $1,000
           n  = number of years = 20
           i  = interest rate = ?
We need to solve the equation for i:
    7,500 = 1,000 [ (1 - (1+i)^-20) / i ]
Equivalently, find the root of:
    f(i) = 7,500 - 1,000 [ (1 - (1+i)^-20) / i ] = 0
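As a quick check of the setup, here is a minimal Python sketch (not part of the original notes; the function name f and the two trial rates are illustrative) that evaluates f(i) at two rates and shows a sign change, so a root lies between them:

```python
# Sketch (not from the notes): the bond equation f(i) = 7500 - 1000*[1 - (1+i)^-20]/i.
def f(i):
    return 7500.0 - 1000.0 * (1.0 - (1.0 + i) ** -20) / i

for i in (0.05, 0.15):
    print(i, f(i))
# f(0.05) is roughly -4960 and f(0.15) is roughly +1240, so a root lies in (0.05, 0.15).
```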
Two Fundamental Approaches:
1. Bracketing or Closed Methods
   - Bisection Method
   - False-position Method
2. Open Methods
   - One Point Iteration
   - Newton-Raphson Iteration
   - Secant Method
Bisection Method (C&C 4th, 5.2, p. 116)

Given lower and upper bounds, x_l and x_u, which bracket the root:
    f(x_l) · f(x_u) < 0
1) Estimate the root by the midpoint:
    x_r = (x_l + x_u) / 2
2) Revise the bracket:
    If f(x_l)·f(x_r) < 0:  x_u = x_r
    If f(x_l)·f(x_r) > 0:  x_l = x_r
3) Repeat steps 1-2 until:
   (a) |f(x_r)| < ε, or
   (b) ε_a < ε_s, with ε_a = |(x_r^new - x_r^old) / x_r^new| × 100%, or
   (c) |x_u - x_l| < δ (the bracket is sufficiently small), or
   (d) the maximum # of iterations is reached
[Figure: f(x) vs. x for bisection, showing the midpoint estimate x_r = (x_l + x_u)/2 and the revised bracket; when f(x_l)·f(x_r) > 0 the lower bound moves, x_l = x_r.]
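A minimal Python sketch of the bisection loop described above (an assumption, not code from the notes; names such as bisect, es, and imax are illustrative), applied to the bond function:

```python
# Sketch (not from the notes): bisection with the stopping tests listed above.
def bisect(f, xl, xu, es=2e-2, imax=100):
    """Halve the bracket [xl, xu] until the approximate relative error (%) drops below es."""
    if f(xl) * f(xu) > 0:
        raise ValueError("initial bounds do not bracket a root")
    xr = xl
    for i in range(1, imax + 1):
        xr_old = xr
        xr = 0.5 * (xl + xu)                      # step 1: midpoint estimate
        ea = abs((xr - xr_old) / xr) * 100 if xr != 0 else float("inf")
        if f(xl) * f(xr) < 0:                     # step 2: revise the bracket
            xu = xr
        else:
            xl = xr
        if ea < es or f(xr) == 0.0:               # step 3: stopping tests
            return xr, i
    return xr, imax

root, n = bisect(lambda i: 7500.0 - 1000.0 * (1.0 - (1.0 + i) ** -20) / i, 0.05, 0.15)
print(root, n)                                    # root near 0.1194
```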
Summary of Bisection Method
Advantages:
1. Simple
2. Good estimate of maximum error:  E_max = |x_u - x_l| / 2
3. Convergence guaranteed:  E_max^i = 0.5 E_max^(i-1)  (the error bound halves every iteration)
Disadvantages:
1. Slow
2. Requires an initial interval around the root:
   use a graph of the function, an incremental search, or trial & error
False-position Method (C&C 4th, 5.3, p. 124)

Similar to bisection, but uses linear interpolation to approximate the root x_r:
1)  x_r = x_u - f(x_u)(x_l - x_u) / [f(x_l) - f(x_u)]
2) Revise the bracket:
    If f(x_l)·f(x_r) < 0:  x_u = x_r
    If f(x_l)·f(x_r) > 0:  x_l = x_r
3) Repeat steps 1-2 until:
   (a) |f(x_r)| < ε, or
   (b) ε_a < ε_s, with ε_a = |(x_r^new - x_r^old) / x_r^new| × 100%, or
   (c) |x_u - x_l| < δ, or
   (d) the maximum # of iterations is reached
[Figure: f(x) vs. x for false position, showing the interpolated estimate x_r = x_u - f(x_u)(x_l - x_u)/[f(x_l) - f(x_u)] and the revised bracket; when f(x_l)·f(x_r) > 0 the lower bound moves, x_l = x_r.]
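A matching Python sketch of the false-position iteration (again an assumption, not the notes' own code; names are illustrative) on the same bond function:

```python
# Sketch (not from the notes): false position — same bracketing logic, interpolated estimate.
def false_position(f, xl, xu, es=2e-2, imax=100):
    """Replace the bisection midpoint by the secant intercept of (xl, f(xl)) and (xu, f(xu))."""
    fl, fu = f(xl), f(xu)
    if fl * fu > 0:
        raise ValueError("initial bounds do not bracket a root")
    xr = xl
    for i in range(1, imax + 1):
        xr_old = xr
        xr = xu - fu * (xl - xu) / (fl - fu)      # step 1: interpolated estimate
        fr = f(xr)
        ea = abs((xr - xr_old) / xr) * 100 if xr != 0 else float("inf")
        if fl * fr < 0:                           # step 2: revise the bracket
            xu, fu = xr, fr
        else:
            xl, fl = xr, fr
        if ea < es or fr == 0.0:                  # step 3: stopping tests
            return xr, i
    return xr, imax

print(false_position(lambda i: 7500.0 - 1000.0 * (1.0 - (1.0 + i) ** -20) / i, 0.05, 0.15))
```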
Score Sheet for Rootfinding Example:

Method       Initial Est(s)     No. of Iterations     No. of Iterations
             [exact = 0.11935]  for ε_s = 2x10^-2     for ε_s = 2x10^-7
Bisection    (0.00, 1.00)              9                    26
             (0.05, 0.15)              6                    22
False-pos.   (0.00, 1.00)             11                    28
             (0.05, 0.15)              3                    14
Summary of False-Position Method
Advantages:
1. Simple
2. Brackets the root
3. Gives maximum error
Disadvantages:
1. Can be VERY slow
2. Like Bisection, needs an initial interval around the root
Roots of Equations - Open Methods (C&C 4th, Ch. 6, p. 133)

Characteristics:
1. Initial estimates need not bracket the root
2. Generally converge faster
3. NOT guaranteed to converge

Open Methods Considered:
- One Point Iteration
- Newton-Raphson Iteration
- Secant Method
One-point Iteration (C&C 4th, 6.1, p. 134)

Predict a value x_{i+1} as a function of x_i.
Convert f(x) = 0 into the form x = g(x).
Iteration steps:  x_{i+1} = g(x_i)   (each new estimate x_{i+1} replaces the old x_i)

Example I:  7500 = 1000 [1 - (1+i)^-20] / i   ==>   i_{i+1} = [1 - (1+i_i)^-20] / 7.5

Example II:  f(x) = 1.0 - sin(x)/x = 0.0
    x = sin(x), so x_{i+1} = sin(x_i)   OR   x = arcsin(x), so x_{i+1} = arcsin(x_i)
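A short Python sketch of Example I (not from the notes; the starting guess 0.1 and loop limit are illustrative):

```python
# Sketch (not from the notes): one-point (fixed-point) iteration i_{k+1} = g(i_k) for Example I.
def g(i):
    return (1.0 - (1.0 + i) ** -20) / 7.5

i = 0.1                                   # initial estimate
for k in range(100):
    i_new = g(i)
    if abs((i_new - i) / i_new) * 100 < 2e-2:   # ea < es (percent)
        break
    i = i_new
print(i_new, k)                           # approaches the root near 0.11935 (linear convergence)
```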
Convergence:
Does x move closer to the real root?
Depends on:
1. the nature of the function
2. the accuracy of the initial estimate
Interested in:
1. Will it converge or will it diverge?
2. How fast will it converge? (rate of convergence)
Convergence of the One-point Iteration Method:
The root satisfies:  x_r = g(x_r)   ==>   g(x_r) - x_r = 0
The Taylor series for the function g is:
    x_{i+1} = g(x_i) = g(x_r) + g'(ξ)(x_i - x_r),   with x_r ≤ ξ ≤ x_i
Subtracting yields:
    (x_r - x_{i+1}) = g'(ξ)(x_r - x_i)
or
    E_{i+1} = g'(ξ) E_i
1. The true error for the next iteration is smaller than the true error in the previous iteration if |g'(ξ)| < 1.0 (it will converge).
2. Because g'(ξ) is almost constant, the new error is directly proportional to the old error (linear rate of convergence).

Further Considerations:
Convergence depends on how f(x) = 0 is converted into x = g(x), so...
Convergence may be improved by recasting the problem.
Convergence Problem:
For slowly converging functions,
    ε_a = |(x^new - x^old) / x^new| × 100%
can be small, even if x^new is not close to the root.
Remedy: Do not completely rely on ε_a to ensure the problem was solved.
Check to make sure that |f(x^new)| < ε.
Newton-Raphson Method (C&C 4th, 6.2)

Geometrical Derivation:
The slope of the tangent at x_i is
    f'(x_i) = [f(x_i) - 0] / (x_i - x_{i+1})
Solve for x_{i+1}:
    x_{i+1} = x_i - f(x_i) / f'(x_i)
[Note that this is the same form as the generalized one-point iteration, x_{i+1} = g(x_i).]
Taylor Series Derivation:  0 = f(x_r) ≈ f(x_i) + f'(x_i)(x_r - x_i)
We solve for x_r to yield the next guess x_{i+1}:
    x_r ≈ x_{i+1} = x_i - f(x_i) / f'(x_i)
This has the form x_{i+1} = g(x_i) with
    g'(x_r) = 1 - [f'(x_r)·f'(x_r) - f(x_r)·f''(x_r)] / [f'(x_r)]^2 = 0
(since f(x_r) = 0 at the root).
Newton-Raphson iteration:
    x_{i+1} = x_i - f(x_i) / f'(x_i)
(each new x_{i+1} replaces the old x_i)
This iteration is repeated until:
1. f(x) ≈ 0, i.e., |f(x_{i+1})| < ε
2. ε_a = |(x_{i+1} - x_i) / x_{i+1}| × 100% < ε_s
3. the max. # of iterations is reached
[Figure: the tangent to f(x) at x_i, with slope f'(x_i), crosses the x-axis at the next estimate x_{i+1} = x_i - f(x_i)/f'(x_i).]
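A compact Python sketch of this iteration (an assumption, not code from the notes; names such as newton_raphson, es, and imax are illustrative):

```python
# Sketch (not from the notes): Newton-Raphson with the stopping test ea < es (%).
def newton_raphson(f, dfdx, x0, es=2e-2, imax=50):
    """Iterate x_{i+1} = x_i - f(x_i)/f'(x_i) until ea < es (%) or imax is reached."""
    x = x0
    for i in range(1, imax + 1):
        x_new = x - f(x) / dfdx(x)                 # tangent-line update
        ea = abs((x_new - x) / x_new) * 100 if x_new != 0 else float("inf")
        x = x_new
        if ea < es:
            return x, i
    return x, imax
```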
Bond Example:
To apply the Newton-Raphson method to:
    f(i) = 7500 - 1000 [1 - (1+i)^-20] / i = 0
we need the derivative of the function:
    f'(i) = (1000/i) { [1 - (1+i)^-20] / i - 20 (1+i)^-21 }
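Wiring these two expressions into the newton_raphson() sketch above (again an assumption, not the notes' code):

```python
# Sketch (not from the notes): Newton-Raphson on the bond problem.
f      = lambda i: 7500.0 - 1000.0 * (1.0 - (1.0 + i) ** -20) / i
fprime = lambda i: (1000.0 / i) * ((1.0 - (1.0 + i) ** -20) / i - 20.0 * (1.0 + i) ** -21)

root, its = newton_raphson(f, fprime, 0.15)
print(root, its)          # converges to about 0.11935 in a few iterations
# (The score sheet below reports divergence when starting from i0 = 1.0.)
```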
Score Sheet for Newton-Raphson Example:

Method       Initial Est(s).    No. of iterations     No. of iterations
                                for ε_s = 2x10^-2     for ε_s = 2x10^-7
Bisection    (0.00, 1.00)              9                    26
             (0.05, 0.15)              6                    22
False-pos.   (0.00, 1.00)             11                    28
             (0.05, 0.15)              3                    14
N-R           1.0                 diverges              diverges
              0.5                 2, but wrong              48
              0.25                     5                     7
              0.15                     3                     5
              0.05                     4                     5
Error Analysis for N-R:
Recall that
    x_{i+1} = x_i - f(x_i) / f'(x_i)
The Taylor series gives:
    f(x_r) = f(x_i) + f'(x_i)(x_r - x_i) + [f''(ξ)/2!](x_r - x_i)^2
where ξ lies between x_i and x_r, and f(x_r) = 0.
Dividing through by f'(x_i) yields:
    0 = f(x_i)/f'(x_i) + (x_r - x_i) + [f''(ξ)/(2 f'(x_i))](x_r - x_i)^2
Since x_{i+1} - x_i = -f(x_i)/f'(x_i), this becomes
    (x_{i+1} - x_i) = (x_r - x_i) + [f''(ξ)/(2 f'(x_i))](x_r - x_i)^2
OR:
    (x_r - x_{i+1}) = -[f''(ξ)/(2 f'(x_i))](x_r - x_i)^2
    E_{i+1} = -[f''(ξ)/(2 f'(x_i))] E_i^2
E_{i+1} is proportional to E_i^2: a quadratic rate of convergence.
E_i small  ==>  E_{i+1} very small;   E_i large  ==>  anything can happen.
Summary of Newton-Raphson Method
Advantages:
- Can be very fast
Disadvantages:
- may not converge
- requires the derivative to be evaluated
- a zero derivative blows up
Secant Method (C&C 4th, 6.3, p. 145)

Secant method solution: approximate f'(x) with a backward FDD (finite divided difference):
    f'(x_i) ≈ [f(x_{i-1}) - f(x_i)] / (x_{i-1} - x_i)
Substitute this into the N-R equation:
    x_{i+1} = x_i - f(x_i) / f'(x_i)
to obtain the iterative expression:
    x_{i+1} = x_i - f(x_i)(x_{i-1} - x_i) / [f(x_{i-1}) - f(x_i)]
1) Requires two initial estimates, x_{i-1} and x_i.
   These do NOT have to bracket the root!
2) Maintains a strict sequence:
    x_{i+1} = x_i - f(x_i)(x_{i-1} - x_i) / [f(x_{i-1}) - f(x_i)]
   Repeated until:
   a. |f(x_{i+1})| < ε, with ε = a small number
   b. ε_a = |(x_{i+1} - x_i) / x_{i+1}| × 100% < ε_s
   c. the max. # of iterations is reached (note: no bracket-width test, since there is no bracket)
3) If x_i and x_{i+1} were chosen to bracket the root, this would be the same as the False-Position Method. BUT WE DON'T!
[Figure: the secant through (x_{i-1}, f(x_{i-1})) and (x_i, f(x_i)) crosses the x-axis at the next estimate x_{i+1}; the backward difference [f(x_{i-1}) - f(x_i)] / (x_{i-1} - x_i) replaces f'(x_i).]
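A minimal Python sketch of the secant iteration (an assumption, not the notes' code; names are illustrative), again applied to the bond function:

```python
# Sketch (not from the notes): secant method — two starting values that need not bracket the root.
def secant(f, x_prev, x, es=2e-2, imax=50):
    """f'(x_i) is replaced by the backward difference through the last two iterates."""
    for i in range(1, imax + 1):
        x_new = x - f(x) * (x_prev - x) / (f(x_prev) - f(x))
        ea = abs((x_new - x) / x_new) * 100 if x_new != 0 else float("inf")
        x_prev, x = x, x_new
        if ea < es:
            return x, i
    return x, imax

f = lambda i: 7500.0 - 1000.0 * (1.0 - (1.0 + i) ** -20) / i
print(secant(f, 0.05, 0.15))          # converges to about 0.11935
```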
Score Sheet for Secant Example

Method            Initial Est(s).   No. of iterations          No. of iterations
                                    with ε_s = 2x10^-2         with ε_s = 2x10^-7

Bisection         (0.00, 1.00)              9                        26
                  (0.05, 0.15)              6                        22

False-pos.        (0.00, 1.00)             11                        28
                  (0.05, 0.15)              3                        14

N-R                1.0                 diverges                  diverges
                   0.5                 2, but wrong                  48
                   0.25                     5                         7
                   0.15                     3                         5
                   0.05                     4                         5

Secant            (0, 1)               diverges                  diverges
                  (0.00, 0.50)         4, but wrong (near chaotic)    27
                  (0.05, 0.15)              3                         6

N-R [as i·f(i)]    1.00                     3                         4
                   0.150                    2                         4
                   0.050                    4                         5
                   0.047               crazy results                 22
                   0.03                converges to i = 0            22
Why do open methods fail?
The function may not look linear.
Remedy: recast it into a linear form. For example,
    f(i) = 7,500 - 1,000 [ (1 - (1+i)^-20) / i ] = 0
is a poorly constrained problem in that there is a large, nearly flat zone for which the derivative is near zero. Recast as:
    i·f(i) = 0 = 7,500 i - 1,000 [1 - (1+i)^-20]
The recast function, "i·f(i)", will have the same roots as f(i) plus an additional root at i = 0.
It will not have a large, flat zone.
Thus,  h(i) = i·f(i) = 7,500 i - 1,000 [1 - (1+i)^-20]
N-R also needs the first derivative:  h'(i) = 7,500 - 20,000 (1+i)^-21
[Figure: f(i) (solid line) and i·f(i) (dotted line)]
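A Python sketch of Newton-Raphson on the recast function (an assumption, not the notes' code; it reuses the newton_raphson() sketch from the Newton-Raphson section):

```python
# Sketch (not from the notes): N-R on h(i) = i*f(i) = 7500*i - 1000*(1 - (1+i)**-20).
h      = lambda i: 7500.0 * i - 1000.0 * (1.0 - (1.0 + i) ** -20)
hprime = lambda i: 7500.0 - 20000.0 * (1.0 + i) ** -21

print(newton_raphson(h, hprime, 1.0))   # now converges even from i0 = 1.0
# h(i) also has a root at i = 0, so a start too close to zero may land on that spurious root.
```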
Roots of Equations - Cases of Multiple Roots (C&C 4th, 6.4, p. 150)

Multiple Roots:  f(x) = (x - 2)^2 (x - 4)
x = 2 represents two of the three roots.

Problems and Approaches: Cases of Multiple Roots
1. Bracketing methods fail at locating x = 2:
   f(x_l)·f(x_r) > 0.
2. At x = 2, f(x) = f'(x) = 0.
   Newton-Raphson and Secant may experience problems.
   The rate of convergence drops to linear.
   Luckily, f(x) → 0 faster than f'(x) → 0.
3. Other remedies, recasting the problem:
   u(x) = f(x) / f'(x) = 0
   u(x) and f(x) have the same roots.
NOTE: C&C Ch. 7 on "Roots of Polynomials" is not covered in detail in this course except the useful sections 7.2.1 and 7.7.
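A Python sketch of the u(x) = f(x)/f'(x) remedy (an assumption, not the notes' code; it reuses the newton_raphson() sketch above, and u'(x) is formed with a simple centered difference rather than analytically):

```python
# Sketch (not from the notes): multiple-root example f(x) = (x - 2)**2 * (x - 4).
f_m  = lambda x: (x - 2.0) ** 2 * (x - 4.0)
df_m = lambda x: 2.0 * (x - 2.0) * (x - 4.0) + (x - 2.0) ** 2
u    = lambda x: f_m(x) / df_m(x)
du   = lambda x, h=1e-6: (u(x + h) - u(x - h)) / (2.0 * h)   # centered-difference u'(x)

print(newton_raphson(f_m, df_m, 2.5))   # plain N-R creeps linearly toward the double root x = 2
print(newton_raphson(u, du, 2.5))       # N-R on u(x) recovers fast convergence to x = 2
```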
[Figures: plot of the multiple-root example f(x) = (x - 2)^2 (x - 4), and plot of f(i) (solid) vs. i·f(i) (dotted) over -0.2 ≤ i ≤ 1.0.]
Summary - Rates of Convergence

    lim(i→∞) |E_i| / |E_{i-1}|^m = C,  with C > 0

m = 1: linear convergence
m = 2: quadratic convergence

Method                                   m
Bisection                                1
False Position                           1
Secant, mult. root                       1
NR, mult. root                           1
Secant, single root                      1.618  ("super linear")
NR, single root                          2
Accel. NR, mult. root (C&C, 5.4)         2
NR solutions of multivariate equations (C&C 4th, 6.5.2, p. 155)

We have n nonlinear equations, f_k = 0, k = 1,...,n, each with n independent variables, x_j, j = 1,...,n. We seek a solution for the set of x values that simultaneously satisfies the condition that all f_k = 0.

In vector and matrix notation, the Newton-Raphson algorithm seeks the roots {x} of {f} = 0 by a sequence of approximations {x}_0, {x}_1, ..., {x}_i, {x}_{i+1}, ..., in which all the vectors indicated by braces are of dimension n x 1 and {x}_0 is a suitable initial guess of the roots. Given the i-th estimate, we use a first-order Taylor series expansion to seek the (i+1)-st:

    {f({x}_{i+1})} ≈ {f({x}_i)} + [J]_i ({x}_{i+1} - {x}_i) = {0}        (1)

Here we define the n x n square matrix of partial derivatives [J]_i = [∂f_k/∂x_j] as the Jacobian of the system; the element in the k-th row and j-th column of [J]_i is ∂f_k/∂x_j. The subscript i on [J]_i indicates that it is evaluated at {x}_i. We also define the change in the estimate of the root as {Δx}_i = {x}_{i+1} - {x}_i. Therefore, when we substitute these notations into equation (1) and rearrange, we obtain simultaneous linear algebraic equations for the i-th iteration of the N-R solution algorithm:

    [J]_i {Δx}_i = -{f({x}_i)}        (2)

We solve for {Δx}_i, find the new estimate of the root using

    {x}_{i+1} = {x}_i + {Δx}_i ,        (3)

check for convergence/termination of the root-finding, and if not converged perform the next iteration.

The example and equations given in C&C Section 6.5.2 are for n = 2. C&C Equations (6.19) correspond to equation (1) above, while C&C Equations (6.20) are an expanded version of equation (2) above. To transform to the text notation, f_1 = u, f_2 = v, x_1 = x, x_2 = y, and the Jacobian is

    [J] = | ∂u/∂x   ∂u/∂y |
          | ∂v/∂x   ∂v/∂y |

C&C Equations (6.21) represent the solution to equations (2) and (3) using Cramer's rule.
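A Python sketch of equations (1)-(3) for a two-equation system (an assumption, not the notes' code; the system u = x^2 + xy - 10 = 0, v = y + 3xy^2 - 57 = 0 is an illustrative choice with a root near x = 2, y = 3, and numpy is assumed to be available):

```python
# Sketch (not from the notes): multivariate Newton-Raphson, [J]{dz} = -{f}, {z} <- {z} + {dz}.
import numpy as np

def F(z):
    x, y = z
    return np.array([x**2 + x*y - 10.0, y + 3.0*x*y**2 - 57.0])

def J(z):                                # Jacobian [df_k/dx_j]
    x, y = z
    return np.array([[2.0*x + y, x],
                     [3.0*y**2,  1.0 + 6.0*x*y]])

z = np.array([1.5, 3.5])                 # initial guess {x}_0
for i in range(20):
    dz = np.linalg.solve(J(z), -F(z))    # equation (2)
    z = z + dz                           # equation (3)
    if np.linalg.norm(dz) < 1e-8 and np.linalg.norm(F(z)) < 1e-8:
        break
print(z, i + 1)                          # approximately [2. 3.]
```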
Alternative Stopping Criteria
1. Always limit the number of iterations using an outer DO loop.
   The problem may not converge and could try to go on forever.
2. Absolute error criterion for "small" differences:  |x_i - x_{i-1}| < δ
3. Relative error criterion for "relatively small" changes:  |x_i - x_{i-1}| < ε |x_i|
4. Can combine error criteria 2 and 3 so that the test works for both large and small x-values:
   |x_i - x_{i-1}| < δ + ε |x_i|
5. Stop if the residual is small enough:  |f(x_i)| < γ
(Here δ, ε, and γ denote small user-chosen tolerances.)
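A small Python sketch of criteria 4 and 5 taken together (an assumption, not the notes' code; the names converged, abs_tol, rel_tol, and f_tol are illustrative):

```python
# Sketch (not from the notes): combined step-size test (criterion 4) plus residual test (criterion 5).
def converged(x_new, x_old, f_new, abs_tol=1e-10, rel_tol=1e-8, f_tol=1e-8):
    """True when the step is small in absolute-plus-relative terms and the residual is small."""
    return abs(x_new - x_old) < abs_tol + rel_tol * abs(x_new) and abs(f_new) < f_tol
```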
Three Performance Criteria
Stopping Criteria:
    |x_i - x_{i-1}| < δ + ε |x_i|   or   |f(x_i)| < γ   or   max. iterations reached
Convergence Criteria:
    |x_i - x_{i-1}| < δ + ε |x_i|   and   |f(x_i)| < γ
N-R or Secant Confirmation of Convergent Behavior:
    x in feasible region  and  |f(x_i)| ≤ 0.5 |f(x_{i-1})|
    and  |x_i - x_{i-1}| ≤ 0.6 |x_{i-1} - x_{i-2}|
    otherwise, do Bisection for a while.
A Three Phase Rootfinding Strategy
A real rootfinding problem can be viewed as having three phases:
1) Opening moves: one needs to find the region of the parameter space in which the desired root can be found.
When using bracketing methods (bisection, false position), this involves finding an interval with f(x_l)·f(x_u) < 0.
Understanding of the problem, physical insight, and common sense are valuable here. (If it is feasible to graph the function, looking at a graph helps.)
Using a powerful Newton-Raphson algorithm to look for a root where none exists is a futile effort.
2) Middle game: Here one uses a robust algorithm to reduce the initial region of uncertainty so that the value of the root can be roughly determined. In one dimension, bisection is a good strategy here.
In multivariate problems, a sequence of one-dimensional line searches in the gradient direction may work; minimizing the sum of squares of the individual errors ("residuals") allows an algorithm to judge if progress is being made.
3) End game: Once a relatively good estimate of a root is found, a Newton-Raphson or Secant algorithm can be used to generate a highly accurate solution in a few iterations.
This may not have been possible in step (2) because a linear approximation of the function may not be an adequate description of the function unless one is very near the root.
Question: For the 3 cases below, describe the progress of the Newton-Raphson method when solving fi(x) = 0 as either linear, superlinear, quadratic, divergent, chaotic, stuck near a singularity, or another appropriate term, and explain WHY.

   *** CASE A ***         *** Case B ***         *** Case C ***
    x       fa(x)          x       fb(x)          x        fc(x)
  2.000   -12.696        3.000    -8.000        0.100    1.00E+02
  3.718    21.108        3.667    -2.370        0.150    4.44E+01
  3.206     4.591        4.111    -0.702        0.225    1.98E+01
  3.020     0.402        4.407    -0.208        0.338    8.78E+00
  3.000     0.004        4.605    -0.062        0.506    3.90E+00
  3.000     0.000        4.737    -0.018        0.759    1.73E+00
  3.000     0.000        4.824    -0.005        1.139    7.71E-01
  3.000     0.000        4.883    -0.002        1.709    3.43E-01
  3.000     0.000        4.922     0.000        2.563    1.52E-01
  3.000     0.000        4.948     0.000        3.844    6.77E-02
  3.000     0.000        4.965     0.000        5.767    3.01E-02
  3.000     0.000        4.977     0.000        8.650    1.34E-02
  3.000     0.000        4.985     0.000       12.975    5.94E-03
  3.000     0.000        4.990     0.000       19.462    2.64E-03
When Newton-Raphson is working well, one has quadratic convergence when CLOSE to the root. This means that near to the root x_r:
    (x_r - x_{i+1}) = -[0.5 f''(x_r) / f'(x_r)] (x_r - x_i)^2
Here the difference between the next estimate of the root x_{i+1} AND the real root x_r is proportional to (x_r - x_i)^2. Thus, once the errors (x_r - x_i) get small, they get very small very fast. See Case A. You can create your own examples with f'(x_r) ≠ 0. Similarly, when the secant method is working well, one obtains superlinear convergence. This means the exponent is about 1.6 instead of 2, but the performance looks about the same.

At a multiple root, the quadratic convergence of Newton-Raphson fails and instead one obtains linear convergence, which means that (x_r - x_{i+1}) = g'(ξ)(x_r - x_i), where ξ is a point within the interval and x_{i+1} = g(x_i). This occurs because the linear model NR uses is no longer good near the root. The f'(x_r) in the denominator of the error expression for the NR method suggests there will be trouble. See Case B. Try finding with NR the root of (x - r)^k for integer k > 1. In this case the NR method yields:
    (x_r - x_{i+1}) = [1 - (1/k)] (x_r - x_i)

Near a pole or singularity where f(x) → infinity, the NR algorithm can get stuck: it takes very small steps because the step length x_{i+1} - x_i = -f(x_i)/f'(x_i) can be VERY small even though f(x) is large. A distinguishing feature of such cases is the large value of f'(x). Alternatively, if the function in question is something like exp(-x), which approaches zero as x → ∞, the NR algorithm can diverge in the sense that x continues to increase without bound, but f(x_i) does get smaller, slowly! See Case C. Divergent behavior would also occur if x_i oscillated between increasingly larger values. [Try f(x) = ln(x^2) - 0.7 = 0 for large x.]
Chaotic behavior is obtained if one searches for the root of f(x) = (x - 2)^2 + 0.01. Try it. NR can almost find a root, and then it shoots past and starts searching again from the other side.
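A short Python sketch of that experiment (not from the notes; the starting point 3.0 and iteration count are illustrative):

```python
# Sketch (not from the notes): Newton-Raphson on f(x) = (x - 2)**2 + 0.01, which has no real root.
f  = lambda x: (x - 2.0) ** 2 + 0.01
df = lambda x: 2.0 * (x - 2.0)

x = 3.0
for i in range(12):
    x = x - f(x) / df(x)
    print(i + 1, round(x, 4), round(f(x), 4))
# The iterates approach x = 2, but whenever one lands very close to 2 the derivative is tiny
# and the next step throws it far away, so the search never settles and f(x) never reaches 0.
```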
Other examples were provided in class. These behaviors easily arise in multivariate rootfinding problems and are then harder to recognize.
