Fixed Point Iteration
Fall 2010
2. The second step is to make a wise guess for a starting point, x0. In almost all cases, the closer the initial guess is to the answer, the more likely the fixed point iteration is to converge, and the faster the method will converge when it does converge.

3. The main step in the algorithm is then expressed as an iterative application of

       x_{n+1} = g(x_n)                                                (3)

which we repeat until the change in x from one iteration to the next is insignificant in the context of our problem. In a computer implementation it is often convenient to simply carry out a fixed number of iterations that we know (from prior manual experimentation) to be more than enough iterations.
A Simple Example
Consider the function

    f(x) = cos(x) - x                                                  (4)

We want to find x such that f(x) = 0. The obvious choice for rewriting (4) in the form of (2) is

    x = cos(x)                                                         (5)
That is, we set g(x) = cos(x). Any solution to (5) is a root of (4). From what we know about the cosine function (it starts at 1 for x = 0 and smoothly descends to 0 at x = π/2), we can anticipate a root somewhere in the range x in [0, π/2]. We will use the midpoint of the interval as our starting guess:

    x0 = π/4                                                           (6)

The results from this effort are summarized in Table 1 and diagrammed in Figure 1. From a visual inspection of Figure 1, it appears that the convergence is pretty good after fifteen or so iterations. From Table 1, however, we see that it took 70 iterations to get f(x) down to something on the order of 10^-13.
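The numbers in Table 1 can be reproduced with a short script; this is a sketch added for illustration, with the iteration count of 70 taken from the table.

```python
import math

# Reproducing the iteration x_{n+1} = cos(x_n) from x0 = pi/4,
# as summarized in Table 1.
x = math.pi / 4
for n in range(70):
    x = math.cos(x)

print(round(x, 5))           # 0.73909, matching Table 1
print(abs(math.cos(x) - x))  # residual f(x) on the order of 1e-13
```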
1. The fixed point iteration method is also known as the iterative substitution method. There are many online sources of additional information about the fixed point iteration method. See, for example, http://en.wikipedia.org/wiki/Fixed_point_iteration, or simply do a Google search.
CE 3101
Fall 2010
Table 1: An application of the Fixed Point Iteration method to finding the root of f(x) = cos(x) - x.

     n      x_n        g(x_n)      f(x_n)
     0      0.78540    0.70711    -7.829E-02
     1      0.70711    0.76024     5.314E-02
     2      0.76024    0.72467    -3.558E-02
     3      0.72467    0.74872     2.405E-02
    ...       ...        ...          ...
    68      0.73909    0.73909    -1.688E-13
    69      0.73909    0.73909     1.137E-13
    70      0.73909    0.73909    -7.661E-14
[Figure 1: An application of the Fixed Point Iteration method to finding the root of f(x) = cos(x) - x. The plot shows x_n (roughly 0.7 to 0.8) against the iteration number n (0 to 70).]
A Heuristic Modification
Figure 1 shows the overshoot-undershoot oscillations that we have seen before. Let's try the "take the average" heuristic2 on this problem. That is, we replace (3) with

    x_{n+1} = (x_n + g(x_n)) / 2                                       (7)
The results from this modified algorithm are summarized in Table 2 and diagrammed in Figure 2. From a visual inspection of Figure 2, it is clear that the modified, heuristic, version converges much faster than the original version on this specific problem. From Table 2, we see that it took only 14 iterations of the modified algorithm to achieve what the original algorithm did in 70 iterations.
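The comparison can be checked directly. The sketch below (an illustration added here, with a hypothetical helper name) counts how many iterations each update rule needs to drive the residual |cos(x) - x| below a tolerance, starting from x0 = π/4.

```python
import math

# Comparing the plain update x_{n+1} = g(x_n) against the "take the
# average" update x_{n+1} = (x_n + g(x_n)) / 2 for g(x) = cos(x).
def count_iterations(update, x0, tol=1e-12, cap=200):
    """Return the number of iterations needed to get |cos(x) - x| < tol."""
    x = x0
    for n in range(1, cap + 1):
        x = update(x)
        if abs(math.cos(x) - x) < tol:
            return n
    return cap

plain = count_iterations(math.cos, math.pi / 4)
averaged = count_iterations(lambda x: 0.5 * (x + math.cos(x)), math.pi / 4)
print(plain, averaged)  # the averaged update needs far fewer iterations
```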
Table 2: An application of the modified Fixed Point Iteration method to finding the root of f(x) = cos(x) - x.

     n      x_n        g(x_n)      f(x_n)
     0      0.78540    0.70711    -7.829E-02
     1      0.74625    0.73424    -1.201E-02
     2      0.74025    0.73830    -1.942E-03
     3      0.73927    0.73896    -3.165E-04
    ...       ...        ...          ...
    14      0.73909    0.73909    -6.917E-13
    15      0.73909    0.73909    -1.128E-13
    16      0.73909    0.73909    -1.843E-14
[Figure 2: A comparison of the original and modified Fixed Point Iteration methods for finding the root of f(x) = cos(x) - x. The plot shows x_n (roughly 0.7 to 0.76) against the iteration number n (0 to 70).]
The fixed point iteration does not always behave this well; the choice of g(x) matters a great deal. To demonstrate the difficulty, we consider the following quadratic equation

    f(x) = x^2 + 6x - 16 = 0                                           (8)
By visual inspection we can see that x = 2 is a root. Alternatively, we could apply the quadratic formula and compute the two roots as x = 2 and x = -8. We can also apply a fixed point iteration scheme. First, we must rearrange (8) to take the form of (2). There are many possible rearrangements; for example

    x = -x^2 - 5x + 16                                                 (9)

that is, we set

    g(x) = -x^2 - 5x + 16                                              (10)

Even with a starting guess of x0 = 1.99 the algorithm fails (diverges). See Table 3. The "take the average" heuristic modification does not remedy the situation: the algorithm still fails (diverges), just a little bit more slowly. See Table 4. We must go back to (8) and come up with a different form for our g(x). For example,

    x = (-x^2 + 16) / 6                                                (11)
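The divergence reported in Table 3 is easy to reproduce. The sketch below iterates the poor choice g(x) = -x^2 - 5x + 16 from x0 = 1.99 for six steps; only a few steps are taken because the iterates blow up extremely quickly.

```python
# Demonstrating the divergence of the poor rearrangement
# g(x) = -x**2 - 5*x + 16 starting from x0 = 1.99, as in Table 3.
g = lambda x: -x**2 - 5*x + 16

x = 1.99
for n in range(6):
    x = g(x)
    print(n + 1, x)  # |x - 2| grows at every step instead of shrinking
```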
CE 3101
Fall 2010
Table 3: An application of the Fixed Point Iteration method to finding the root of f(x) = x^2 + 6x - 16 using a poor choice for g(x). The algorithm fails rather rapidly.

    n    x_n               g(x_n)                    f(x_n)
    0    1.990             2.090                     0.100
    1    2.090             1.183                     -0.907
    2    1.183             8.687                     7.504
    3    8.687             -102.896                  -111.583
    4    -102.896          -10057.037                -9954.141
    5    -10057.037        -101093682.342            -101083625.305
    6    -101093682.342    -10219932103927800.000    -10219932002834100.000
Table 4: An application of the modified Fixed Point Iteration method to finding the root of f(x) = x^2 + 6x - 16 using a poor choice for g(x). The algorithm still fails rather rapidly.

    n    x_n            g(x_n)                 f(x_n)
    0    1.990          2.090                  0.100
    1    2.040          1.639                  -0.401
    2    1.839          3.420                  1.580
    3    2.629          -4.062                 -6.691
    4    -0.716         19.068                 19.784
    5    9.176          -114.075               -123.251
    6    -52.450        -2472.703              -2420.254
    7    -1262.576      -1587770.00259         -1586507.426
    8    -794516.289    -631252161590.43000    -631251367074.141

that is, we set

    g(x) = (-x^2 + 16) / 6                                             (12)

With this, seemingly arbitrary, choice for g(x), the fixed point iterations converge. See Table 5. Using the "take the average" heuristic with (11) speeds up convergence considerably. See Table 6. This all seems frighteningly random.
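The convergent behavior of the rearrangement (11) can be checked with a short sketch. The iteration counts (65 and 15) are taken from Tables 5 and 6; everything else here is illustrative.

```python
# Iterating the convergent rearrangement g(x) = (-x**2 + 16) / 6 from
# x0 = 1.99 (Table 5), and its "take the average" variant (Table 6).
g = lambda x: (-x**2 + 16) / 6

x_plain = 1.99
for _ in range(65):           # plain update, as in Table 5
    x_plain = g(x_plain)

x_avg = 1.99
for _ in range(15):           # averaged update, as in Table 6
    x_avg = 0.5 * (x_avg + g(x_avg))

print(round(x_plain, 5), round(x_avg, 5))  # both converge to the root x = 2
```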
Useful Theory
There is a useful3 mathematical result that can guide our efforts in wisely selecting the form for g(x). The fixed point iteration method converges if, in the neighborhood of the fixed point containing our initial guess, the derivative of g(x) has an absolute value that is smaller than one: |g'(x)| < 1. This condition is closely related to something called Lipschitz continuity4. We apply this theoretical result to our two choices for g(x). For (10), in the neighborhood of x = 2, we find

    |dg(x)/dx| = |-2x - 5| ≈ 9 >> 1                                    (13)

On the other hand, for (12), in the neighborhood of x = 2, we find

    |dg(x)/dx| = |-x/3| ≈ 2/3 < 1                                      (14)

3. In this case useful means: clear, unambiguous, and easy to apply.
4. See http://en.wikipedia.org/wiki/Lipschitz_continuous
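The derivative test is simple enough to evaluate numerically. The sketch below (added for illustration) checks the condition |g'(x)| < 1 at the root x = 2 for both rearrangements, using the analytic derivatives from (13) and (14).

```python
# Checking the convergence condition |g'(x)| < 1 near the root x = 2
# for the two rearrangements of f(x) = x**2 + 6*x - 16.
g1_prime = lambda x: -2 * x - 5   # derivative of g(x) = -x**2 - 5x + 16
g2_prime = lambda x: -x / 3       # derivative of g(x) = (-x**2 + 16) / 6

print(abs(g1_prime(2.0)))  # 9.0: |g'| >> 1, so the iteration diverges
print(abs(g2_prime(2.0)))  # about 0.667: |g'| < 1, so it converges
```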
Table 5: An application of the Fixed Point Iteration method to finding the root of f(x) = x^2 + 6x - 16 using a wise (lucky) choice for g(x). The algorithm converges, albeit rather slowly.

     n      x_n        g(x_n)      f(x_n)
     0      1.99000    2.00665     1.665E-02
     1      2.00665    1.99556    -1.109E-02
     2      1.99556    2.00296     7.398E-03
     3      2.00296    1.99803    -4.930E-03
    ...       ...        ...          ...
    63      2.00000    2.00000    -1.339E-13
    64      2.00000    2.00000     8.904E-14
    65      2.00000    2.00000    -5.929E-14
Table 6: An application of the modified Fixed Point Iteration method to finding the root of f(x) = x^2 + 6x - 16 using a wise (lucky) choice for g(x). The algorithm converges rapidly.

     n      x_n        g(x_n)      f(x_n)
     0      1.99000    2.00665     1.665E-02
     1      1.99833    2.00112     2.791E-03
     2      1.99972    2.00019     4.657E-04
     3      1.99995    2.00003     7.762E-05
    ...       ...        ...          ...
    13      2.00000    2.00000     1.284E-12
    14      2.00000    2.00000     2.141E-13
    15      2.00000    2.00000     3.553E-14