3.8 Newton's Method
Suppose we wish to solve an equation which we cannot solve by simple algebra, for example:

f(x) = x³ + x − 1 = 0.
In this case, the best we can ask is an approximate solution, accurate to a specified
number of decimal places, and this is all we need for any practical purpose.
We can start with a computer graph of y = f(x), which is just a display of many plotted points (x, f(x)). The graph crosses the x-axis somewhere between x = 0.6 and x = 0.7: indeed f(0.6) < 0 < f(0.7), so the Intermediate Value Theorem (§1.8) guarantees a solution 0.6 < a < 0.7; thus we can improve our estimate to a ≈ 0.6. We could add a decimal place by checking f(0.61), f(0.62), . . . , f(0.69) to see where the values change from negative to positive, but this is clearly very tedious and inefficient.
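As a rough sketch, this digit-by-digit search could be done with a short computer loop (Python is used here just for illustration; it is not part of the Method itself):

    # Scan one decimal place at a time, stopping at the first sign change of f.
    def f(x):
        return x**3 + x - 1

    x = 0.60
    while f(x) < 0:      # f(0.60) < 0, so step right until f turns positive
        x += 0.01
    print(round(x, 2))   # prints 0.69, so the solution lies between 0.68 and 0.69

Each additional decimal place of accuracy costs up to ten more evaluations of f.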
Newton's Method is an amazingly efficient way to refine an approximate solution to get more and more accurate ones, until the required accuracy is reached.
Let us call our first estimate x1 = 0.5. We are seeking the true solution x = a, the
x-intercept of y = f(x). As in §2.9, let us approximate y = f(x) by its tangent line at the initial point (x1, f(x1)), namely y = f(x1) + f'(x1)(x − x1):
∗ How do we know there is no other solution x = b? If there were, Rolle's Theorem (§3.2) says that there would be some x = c ∈ (a, b) with f'(c) = 0, namely a hill or valley of y = f(x). But f'(x) = 3x² + 1 = 0 clearly has no solutions, so y = f(x) has no hills or valleys, and there cannot exist another solution x = b.
You can see how the tangent line (in red) is very close to the graph near x = x1, and fairly close even near the true solution x = a. We cannot solve for the x-intercept of y = f(x), but we can find the x-intercept of the line, denoted x = x2:
f(x1) + f'(x1)(x − x1) = 0   =⇒   x = x2 = x1 − f(x1)/f'(x1).
This solution x2 is not exactly a, but it is closer than the initial estimate x1 .
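For instance, with f(x) = x³ + x − 1 and x1 = 0.5 as above, f(0.5) = −0.375 and f'(0.5) = 3(0.5)² + 1 = 1.75, so x2 = 0.5 + 0.375/1.75 ≈ 0.714.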
Now we can iterate (green line), repeating the same computation starting with
x2 instead of x1 . The result is:
x3 = x2 − f(x2)/f'(x2).
Carrying out these computations, to 3 decimal places: x1 = 0.500, x2 = 0.714, x3 = 0.683, x4 = 0.682, x5 = 0.682. The xn's will continue to converge closer and closer to a, but since we do not see any difference in our 3 decimal places after x4, there is no point in continuing. We already have our answer within the specified accuracy: a ≈ 0.682.

To summarize, Newton's Method for solving an equation f(x) = 0 goes as follows.

1. Starting from an initial estimate x1, compute successive approximations by:
xn+1 = xn − f(xn)/f'(xn).
2. Stop once xn and xn+1 agree up to the given accuracy. The final approximation is a ≈ xn.
Another example: let us solve the equation cos(x) = x.
Looking at the graph, we see that there is a unique solution somewhere around
x1 = 1. This seems different from the previous case, since we seek the intersection
of two graphs rather than the x-intercept of a single graph; but we can simply
rewrite the equation as f(x) = x − cos(x) = 0. Newton's Method gives:

xn+1 = xn − (xn − cos(xn))/(1 + sin(xn)),

with successive values x1 = 1.000, x2 = 0.750, x3 = 0.739, x4 = 0.739.
That is, the solution is a ≈ 0.739 to 3 places.
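A quick sketch of this computation in code (again Python, for illustration only):

    import math

    # Iterate x_{n+1} = x_n - (x_n - cos(x_n)) / (1 + sin(x_n)), starting from x1 = 1.
    x = 1.0
    for n in range(1, 5):
        print(f"x{n} = {x:.3f}")
        x = x - (x - math.cos(x)) / (1 + math.sin(x))
    # prints 1.000, 0.750, 0.739, 0.739, as in the values above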
Numerical roots. The number √2 is a “known value”: a calculator can immediately tell us that √2 = 1.41421356 . . . . But just how does the calculator know this? Newton's Method, that's how!
By definition, √2 is the positive solution of x² = 2, or f(x) = x² − 2 = 0. Starting with x1 = 1, the Method gives

xn+1 = xn − (xn² − 2)/(2xn),

and:
x1 = 1.00000000
x2 = 1.50000000
x3 = 1.41666667
x4 = 1.41421569
x5 = 1.41421356
x6 = 1.41421356
Here we see the power of the Method: with just a couple of dozen +, −, ×, ÷
calculator operations, it converged from 0 places to 8 places of accuracy.
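In code, the same iteration takes only a few lines (a sketch in Python; note that the recurrence xn+1 = xn − (xn² − 2)/(2xn) simplifies algebraically to xn+1 = (xn + 2/xn)/2, the ancient “divide and average” rule):

    # Newton's Method for x^2 - 2 = 0, i.e. x_{n+1} = (x_n + 2/x_n)/2.
    x = 1.0
    for n in range(1, 7):
        print(f"x{n} = {x:.8f}")
        x = (x + 2 / x) / 2
    # reproduces the table above, settling at 1.41421356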
We could also do the Method with fractions rather than decimals to get very accurate fractional approximations of √2:
x1 = 1
x2 = 3/2
x3 = 17/12
x4 = 577/408
Already x3 = 17/12 is a very good approximation, since (17/12)² = 289/144 = 2 + 1/144, very close to 2. However, no fraction or finite decimal can give √2 exactly: it is known to be an irrational number.
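Exact fractions like these are easy to produce with rational arithmetic; here is a small sketch (Python's fractions module, used only as an illustration):

    from fractions import Fraction

    # The same Newton iteration for x^2 - 2 = 0, carried out in exact fractions.
    x = Fraction(1)
    for n in range(1, 5):
        print(f"x{n} = {x}")
        x = x - (x * x - 2) / (2 * x)
    # prints x1 = 1, x2 = 3/2, x3 = 17/12, x4 = 577/408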