8. Use functional iteration to find solutions to the following nonlinear systems, accurate to within $10^{-5}$, using the $l_\infty$ norm.
a. $x_1^2 + x_2^2 - x_1 = 0$,
   $x_1^2 - x_2^2 - x_2 = 0$.
b. $3x_1^2 - x_2^2 = 0$,
   $3x_1 x_2^2 - x_1^3 - 1 = 0$.
c. $x_1^2 + x_2 - 37 = 0$,
   $x_1 - x_2^2 - 5 = 0$,
   $x_1 + x_2 + x_3 - 3 = 0$.
d. $x_1^2 + 2x_2^2 - x_2 - 2x_3 = 0$,
   $x_1^2 - 8x_2^2 + 10x_3 = 0$,
   $\dfrac{x_1^2}{7x_2 x_3} - 1 = 0$.
9. Use the Gauss-Seidel method to approximate the fixed points in Exercise 7 to within $10^{-5}$, using the $l_\infty$ norm.
10. Repeat Exercise 8 using the Gauss-Seidel method.
11. In Exercise 10 of Section 5.9, we considered the problem of predicting the population of two species
that compete for the same food supply. In the problem, we made the assumption that the populations
could be predicted by solving the system of equations
\[
\frac{dx_1(t)}{dt} = x_1(t)\bigl(4 - 0.0003x_1(t) - 0.0004x_2(t)\bigr)
\]
and
\[
\frac{dx_2(t)}{dt} = x_2(t)\bigl(2 - 0.0002x_1(t) - 0.0001x_2(t)\bigr).
\]
In this exercise, we would like to consider the problem of determining equilibrium populations of the two species. The mathematical criteria that must be satisfied in order for the populations to be at equilibrium are that, simultaneously,
\[
\frac{dx_1(t)}{dt} = 0 \quad\text{and}\quad \frac{dx_2(t)}{dt} = 0.
\]
This occurs when the first species is extinct and the second species has a population of 20,000 or
when the second species is extinct and the first species has a population of 13,333. Can an equilibrium
occur in any other situation?
12. Show that a function $\mathbf{F}$ mapping $D \subset \mathbb{R}^n$ into $\mathbb{R}^n$ is continuous at $\mathbf{x}_0 \in D$ precisely when, given any number $\varepsilon > 0$, a number $\delta > 0$ can be found with the property that, for any vector norm $\|\cdot\|$,
\[
\|\mathbf{F}(\mathbf{x}) - \mathbf{F}(\mathbf{x}_0)\| < \varepsilon \quad\text{whenever}\quad \mathbf{x} \in D \ \text{and}\ \|\mathbf{x} - \mathbf{x}_0\| < \delta.
\]
10.2 Newton's Method
In the one-dimensional case, choosing a fixed-point function of the form $g(x) = x - \phi(x)f(x)$, with $\phi$ selected appropriately, gives quadratic convergence to the fixed point $p$ of the function $g$ (see Section 2.4). From this condition Newton's method evolved by choosing $\phi(x) = 1/f'(x)$, assuming that $f'(x) \neq 0$.
A similar approach in the n-dimensional case involves a matrix
\[
A(\mathbf{x}) =
\begin{bmatrix}
a_{11}(\mathbf{x}) & a_{12}(\mathbf{x}) & \cdots & a_{1n}(\mathbf{x}) \\
a_{21}(\mathbf{x}) & a_{22}(\mathbf{x}) & \cdots & a_{2n}(\mathbf{x}) \\
\vdots & \vdots & & \vdots \\
a_{n1}(\mathbf{x}) & a_{n2}(\mathbf{x}) & \cdots & a_{nn}(\mathbf{x})
\end{bmatrix},
\tag{10.5}
\]
where each of the entries $a_{ij}(\mathbf{x})$ is a function from $\mathbb{R}^n$ into $\mathbb{R}$. This requires that $A(\mathbf{x})$ be found so that
\[
\mathbf{G}(\mathbf{x}) = \mathbf{x} - A(\mathbf{x})^{-1}\mathbf{F}(\mathbf{x})
\]
gives quadratic convergence to the solution of $\mathbf{F}(\mathbf{x}) = \mathbf{0}$, assuming that $A(\mathbf{x})$ is nonsingular at the fixed point $\mathbf{p}$ of $\mathbf{G}$.
The following theorem parallels Theorem 2.8 on page 80. Its proof requires being able
to express G in terms of its Taylor series in n variables about p.
Theorem 10.7  Let $\mathbf{p}$ be a solution of $\mathbf{G}(\mathbf{x}) = \mathbf{x}$. Suppose a number $\delta > 0$ exists with
(i) $\partial g_i/\partial x_j$ is continuous on $N_\delta = \{\, \mathbf{x} : \|\mathbf{x} - \mathbf{p}\| < \delta \,\}$, for each $i = 1, 2, \ldots, n$ and $j = 1, 2, \ldots, n$;
(ii) $\partial^2 g_i(\mathbf{x})/(\partial x_j\,\partial x_k)$ is continuous, and $|\partial^2 g_i(\mathbf{x})/(\partial x_j\,\partial x_k)| \le M$ for some constant $M$, whenever $\mathbf{x} \in N_\delta$, for each $i = 1, 2, \ldots, n$, $j = 1, 2, \ldots, n$, and $k = 1, 2, \ldots, n$;
(iii) $\partial g_i(\mathbf{p})/\partial x_k = 0$, for each $i = 1, 2, \ldots, n$ and $k = 1, 2, \ldots, n$.
Then a number $\hat{\delta} \le \delta$ exists such that the sequence generated by $\mathbf{x}^{(k)} = \mathbf{G}(\mathbf{x}^{(k-1)})$ converges quadratically to $\mathbf{p}$ for any choice of $\mathbf{x}^{(0)}$, provided that $\|\mathbf{x}^{(0)} - \mathbf{p}\| < \hat{\delta}$. Moreover,
\[
\|\mathbf{x}^{(k)} - \mathbf{p}\|_\infty \le \frac{n^2 M}{2}\,\|\mathbf{x}^{(k-1)} - \mathbf{p}\|_\infty^2, \quad \text{for each } k \ge 1.
\]
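To see the quadratic convergence concretely, here is a minimal numerical check (an illustrative Python sketch of my own choosing, not from the text): with $n = 2$, the fixed-point function $\mathbf{G}(\mathbf{x}) = \bigl((x_1^2+2)/(2x_1),\,(x_2^2+3)/(2x_2)\bigr)$ has fixed point $\mathbf{p} = (\sqrt{2}, \sqrt{3})$ and satisfies $\partial g_i(\mathbf{p})/\partial x_k = 0$, so the errors $\|\mathbf{x}^{(k)} - \mathbf{p}\|_\infty$ should roughly square at each step.

```python
import numpy as np

# Illustrative fixed point p = (sqrt(2), sqrt(3)): each g_i below has all first
# partial derivatives equal to zero at p, so Theorem 10.7 predicts quadratic
# convergence of the iteration x^(k) = G(x^(k-1)).
def G(x):
    return np.array([(x[0]**2 + 2.0) / (2.0 * x[0]),
                     (x[1]**2 + 3.0) / (2.0 * x[1])])

p = np.array([np.sqrt(2.0), np.sqrt(3.0)])
x = np.array([1.5, 1.5])                      # starting vector near p
for k in range(1, 6):
    x = G(x)
    print(k, np.linalg.norm(x - p, np.inf))   # error roughly squares each step
```

Running this prints errors on the order of $10^{-2}$, $10^{-4}$, $10^{-9}$, and so on, consistent with the bound in the theorem.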
Theorem 10.7 implies that we need $\partial g_i(\mathbf{p})/\partial x_k = 0$, for each $i = 1, 2, \ldots, n$ and $k = 1, 2, \ldots, n$. Writing $b_{ij}(\mathbf{x})$ for the entries of $A(\mathbf{x})^{-1}$, so that $g_i(\mathbf{x}) = x_i - \sum_{j=1}^{n} b_{ij}(\mathbf{x}) f_j(\mathbf{x})$, this means that for $i = k$,
\[
0 = 1 - \sum_{j=1}^{n} b_{ij}(\mathbf{p}) \frac{\partial f_j}{\partial x_i}(\mathbf{p}),
\]
that is,
\[
\sum_{j=1}^{n} b_{ij}(\mathbf{p}) \frac{\partial f_j}{\partial x_i}(\mathbf{p}) = 1. \tag{10.6}
\]
When $k \neq i$,
\[
0 = -\sum_{j=1}^{n} b_{ij}(\mathbf{p}) \frac{\partial f_j}{\partial x_k}(\mathbf{p}),
\]
so
\[
\sum_{j=1}^{n} b_{ij}(\mathbf{p}) \frac{\partial f_j}{\partial x_k}(\mathbf{p}) = 0. \tag{10.7}
\]
An appropriate choice for $A(\mathbf{x})$ is, consequently, $A(\mathbf{x}) = J(\mathbf{x})$, since this satisfies condition (iii) in Theorem 10.7. The function $\mathbf{G}$ is defined by
\[
\mathbf{G}(\mathbf{x}) = \mathbf{x} - J(\mathbf{x})^{-1}\mathbf{F}(\mathbf{x}), \tag{10.8}
\]
and the functional iteration procedure evolves from selecting $\mathbf{x}^{(0)}$ and generating, for $k \ge 1$,
\[
\mathbf{x}^{(k)} = \mathbf{G}\bigl(\mathbf{x}^{(k-1)}\bigr) = \mathbf{x}^{(k-1)} - J\bigl(\mathbf{x}^{(k-1)}\bigr)^{-1}\mathbf{F}\bigl(\mathbf{x}^{(k-1)}\bigr). \tag{10.9}
\]
This is called Newton's method for nonlinear systems, and it is generally expected to give quadratic convergence, provided that a sufficiently accurate starting value is known and that $J(\mathbf{p})^{-1}$ exists. The matrix $J(\mathbf{x})$ is called the Jacobian matrix and has a number of applications in analysis. It might, in particular, be familiar to the reader due to its application in the multiple integration of a function of several variables over a region that requires a change of variables to be performed.

The Jacobian matrix first appeared in an 1815 paper by Cauchy, but Jacobi wrote De determinantibus functionalibus in 1841 and proved numerous results about this matrix.

A weakness in Newton's method arises from the need to compute and invert the matrix $J(\mathbf{x})$ at each step. In practice, explicit computation of $J(\mathbf{x})^{-1}$ is avoided by performing the operation in a two-step manner. First, a vector $\mathbf{y}$ is found that satisfies $J(\mathbf{x}^{(k-1)})\mathbf{y} = -\mathbf{F}(\mathbf{x}^{(k-1)})$. Then the new approximation, $\mathbf{x}^{(k)}$, is obtained by adding $\mathbf{y}$ to $\mathbf{x}^{(k-1)}$. Algorithm 10.1 uses this two-step procedure.
Consider, for example, the nonlinear system $\mathbf{F}(\mathbf{x}) = \bigl(f_1(\mathbf{x}), f_2(\mathbf{x}), f_3(\mathbf{x})\bigr)^t = \mathbf{0}$, where
\[
f_1(x_1, x_2, x_3) = 3x_1 - \cos(x_2 x_3) - \frac{1}{2},
\]
\[
f_2(x_1, x_2, x_3) = x_1^2 - 81(x_2 + 0.1)^2 + \sin x_3 + 1.06,
\]
and
\[
f_3(x_1, x_2, x_3) = e^{-x_1 x_2} + 20x_3 + \frac{10\pi - 3}{3}.
\]
In the notation of (10.9), each Newton step for this system has the form $\mathbf{x}^{(k)} = \mathbf{x}^{(k-1)} + \mathbf{y}^{(k-1)}$, where
\[
\begin{bmatrix} y_1^{(k-1)} \\ y_2^{(k-1)} \\ y_3^{(k-1)} \end{bmatrix}
= -J\bigl(x_1^{(k-1)}, x_2^{(k-1)}, x_3^{(k-1)}\bigr)^{-1}\,
\mathbf{F}\bigl(x_1^{(k-1)}, x_2^{(k-1)}, x_3^{(k-1)}\bigr).
\]
Thus, at the $k$th step, the linear system $J\bigl(\mathbf{x}^{(k-1)}\bigr)\,\mathbf{y}^{(k-1)} = -\mathbf{F}\bigl(\mathbf{x}^{(k-1)}\bigr)$ must be solved, where
\[
J\bigl(\mathbf{x}^{(k-1)}\bigr) =
\begin{bmatrix}
3 & x_3^{(k-1)} \sin\bigl(x_2^{(k-1)} x_3^{(k-1)}\bigr) & x_2^{(k-1)} \sin\bigl(x_2^{(k-1)} x_3^{(k-1)}\bigr) \\[4pt]
2x_1^{(k-1)} & -162\bigl(x_2^{(k-1)} + 0.1\bigr) & \cos x_3^{(k-1)} \\[4pt]
-x_2^{(k-1)} e^{-x_1^{(k-1)} x_2^{(k-1)}} & -x_1^{(k-1)} e^{-x_1^{(k-1)} x_2^{(k-1)}} & 20
\end{bmatrix},
\]
\[
\mathbf{y}^{(k-1)} = \begin{bmatrix} y_1^{(k-1)} \\ y_2^{(k-1)} \\ y_3^{(k-1)} \end{bmatrix},
\]
and
\[
\mathbf{F}\bigl(\mathbf{x}^{(k-1)}\bigr) =
\begin{bmatrix}
3x_1^{(k-1)} - \cos\bigl(x_2^{(k-1)} x_3^{(k-1)}\bigr) - \tfrac{1}{2} \\[4pt]
\bigl(x_1^{(k-1)}\bigr)^2 - 81\bigl(x_2^{(k-1)} + 0.1\bigr)^2 + \sin x_3^{(k-1)} + 1.06 \\[4pt]
e^{-x_1^{(k-1)} x_2^{(k-1)}} + 20x_3^{(k-1)} + \tfrac{10\pi - 3}{3}
\end{bmatrix}.
\]
The results using this iterative procedure are shown in Table 10.3.
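For comparison with those results, a short sketch (again assuming NumPy; the starting vector and iteration cap shown here are illustrative choices, not taken from the text) that carries out this two-step iteration for the system above is given below. The Jacobian is the matrix $J\bigl(\mathbf{x}^{(k-1)}\bigr)$ displayed earlier.

```python
import numpy as np

def F(x):
    x1, x2, x3 = x
    return np.array([3*x1 - np.cos(x2*x3) - 0.5,
                     x1**2 - 81*(x2 + 0.1)**2 + np.sin(x3) + 1.06,
                     np.exp(-x1*x2) + 20*x3 + (10*np.pi - 3)/3])

def J(x):
    x1, x2, x3 = x
    return np.array([
        [3.0,                  x3*np.sin(x2*x3),    x2*np.sin(x2*x3)],
        [2*x1,                 -162*(x2 + 0.1),     np.cos(x3)],
        [-x2*np.exp(-x1*x2),   -x1*np.exp(-x1*x2),  20.0]])

x = np.array([0.1, 0.1, -0.1])            # illustrative starting vector
for k in range(1, 11):
    y = np.linalg.solve(J(x), -F(x))      # solve J(x^(k-1)) y^(k-1) = -F(x^(k-1))
    x = x + y                             # x^(k) = x^(k-1) + y^(k-1)
    if np.linalg.norm(y, np.inf) < 1e-6:
        break
print(k, x)   # converges to approximately (0.5, 0, -pi/6) = (0.5, 0.0, -0.5235988)
```

The exact solution of this system is $\bigl(\tfrac{1}{2}, 0, -\tfrac{\pi}{6}\bigr)^t$, which can be verified by direct substitution into $f_1$, $f_2$, and $f_3$.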
The previous example illustrates that Newton's method can converge very rapidly once an approximation is obtained that is near the true solution. However, it is not always easy
to determine good starting values, and the method is comparatively expensive to employ. In
the next section, we consider a method for overcoming the latter weakness. Good starting
values can usually be found using the Steepest Descent method, which will be discussed in
Section 10.4.
Figure 10.2  The graphs of $x_1^2 - x_2^2 + 2x_2 = 0$ and $2x_1 + x_2^2 - 6 = 0$ in the $x_1 x_2$-plane, for $-8 \le x_1 \le 8$ and $-8 \le x_2 \le 8$.
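A figure of this kind can be reproduced with a short script (a sketch assuming NumPy and matplotlib; the grid resolution is an arbitrary choice) that draws the zero contours of the two functions. The points where the contours intersect supply starting values for Newton's method.

```python
import numpy as np
import matplotlib.pyplot as plt

# Zero contours of the two functions; their intersections are the solutions.
x1, x2 = np.meshgrid(np.linspace(-8, 8, 400), np.linspace(-8, 8, 400))
plt.contour(x1, x2, x1**2 - x2**2 + 2*x2, levels=[0], colors="blue")
plt.contour(x1, x2, 2*x1 + x2**2 - 6, levels=[0], colors="red")
plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
plt.title("Intersections give starting values for Newton's method")
plt.show()
```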
The problem is more difficult in three dimensions. Consider the nonlinear system
E X E R C I S E S E T 10.2
1. Use Newton's method with $\mathbf{x}^{(0)} = \mathbf{0}$ to compute $\mathbf{x}^{(2)}$ for each of the following nonlinear systems.
a. $4x_1^2 - 20x_1 + \tfrac{1}{4}x_2^2 + 8 = 0$,
   $\tfrac{1}{2}x_1 x_2^2 + 2x_1 - 5x_2 + 8 = 0$.
b. $\sin(4\pi x_1 x_2) - 2x_2 - x_1 = 0$,
   $\dfrac{4\pi - 1}{4\pi}\bigl(e^{2x_1} - e\bigr) + 4e x_2^2 - 2e x_1 = 0$.
c. $x_1(1 - x_1) + 4x_2 = 12$,
   $(x_1 - 2)^2 + (2x_2 - 3)^2 = 25$.
d. $5x_1^2 - x_2^2 = 0$,
   $x_2 - 0.25(\sin x_1 + \cos x_2) = 0$.
2. Use Newton's method with $\mathbf{x}^{(0)} = \mathbf{0}$ to compute $\mathbf{x}^{(2)}$ for each of the following nonlinear systems.
a. $3x_1 - \cos(x_2 x_3) - \tfrac{1}{2} = 0$,
   $4x_1^2 - 625x_2^2 + 2x_2 - 1 = 0$,
   $e^{-x_1 x_2} + 20x_3 + \dfrac{10\pi - 3}{3} = 0$.
b. $x_1^2 + x_2 - 37 = 0$,
   $x_1 - x_2^2 - 5 = 0$,
   $x_1 + x_2 + x_3 - 3 = 0$.
c. $15x_1 + x_2^2 - 4x_3 = 13$,
   $x_1^2 + 10x_2 - x_3 = 11$,
   $x_2^3 - 25x_3 = -22$.
d. $10x_1 - 2x_2^2 + x_2 - 2x_3 - 5 = 0$,
   $8x_2^2 + 4x_3^2 - 9 = 0$,
   $8x_2 x_3 + 4 = 0$.
3. Use the graphing facilities of Maple to approximate solutions to the following nonlinear systems.
a. $4x_1^2 - 20x_1 + \tfrac{1}{4}x_2^2 + 8 = 0$,
   $\tfrac{1}{2}x_1 x_2^2 + 2x_1 - 5x_2 + 8 = 0$.
b. $\sin(4\pi x_1 x_2) - 2x_2 - x_1 = 0$,
   $\dfrac{4\pi - 1}{4\pi}\bigl(e^{2x_1} - e\bigr) + 4e x_2^2 - 2e x_1 = 0$.
c. $x_1(1 - x_1) + 4x_2 = 12$,
   $(x_1 - 2)^2 + (2x_2 - 3)^2 = 25$.
d. $5x_1^2 - x_2^2 = 0$,
   $x_2 - 0.25(\sin x_1 + \cos x_2) = 0$.
4. Use the graphing facilities of Maple to approximate solutions to the following nonlinear systems
within the given limits.
a. $3x_1 - \cos(x_2 x_3) - \tfrac{1}{2} = 0$,
   $4x_1^2 - 625x_2^2 + 2x_2 - 1 = 0$,
   $e^{-x_1 x_2} + 20x_3 + \dfrac{10\pi - 3}{3} = 0$.
   $-1 \le x_1 \le 1$, $-1 \le x_2 \le 1$, $-1 \le x_3 \le 1$
b. $x_1^2 + x_2 - 37 = 0$,
   $x_1 - x_2^2 - 5 = 0$,
   $x_1 + x_2 + x_3 - 3 = 0$.
   $-4 \le x_1 \le 8$, $-2 \le x_2 \le 2$, $-6 \le x_3 \le 0$
c. $15x_1 + x_2^2 - 4x_3 = 13$,
   $x_1^2 + 10x_2 - x_3 = 11$,
   $x_2^3 - 25x_3 = -22$.
   $0 \le x_1 \le 2$, $0 \le x_2 \le 2$, $0 \le x_3 \le 2$
d. $10x_1 - 2x_2^2 + x_2 - 2x_3 - 5 = 0$,
   $8x_2^2 + 4x_3^2 - 9 = 0$,
   $8x_2 x_3 + 4 = 0$.
   $0 \le x_1 \le 2$, $-2 \le x_2 \le 0$, $0 \le x_3 \le 2$, and $0 \le x_1 \le 2$, $0 \le x_2 \le 2$, $-2 \le x_3 \le 0$
5. Use the answers obtained in Exercise 3 as initial approximations to Newton's method. Iterate until $\|\mathbf{x}^{(k)} - \mathbf{x}^{(k-1)}\|_\infty < 10^{-6}$.
6. Use the answers obtained in Exercise 4 as initial approximations to Newton's method. Iterate until $\|\mathbf{x}^{(k)} - \mathbf{x}^{(k-1)}\|_\infty < 10^{-6}$.
7. Use Newton's method to find a solution to the following nonlinear systems in the given domain. Iterate until $\|\mathbf{x}^{(k)} - \mathbf{x}^{(k-1)}\|_\infty < 10^{-6}$.
a. $3x_1^2 - x_2^2 = 0$,
   $3x_1 x_2^2 - x_1^3 - 1 = 0$.
   Use $\mathbf{x}^{(0)} = (1, 1)^t$.
b. $\ln\bigl(x_1^2 + x_2^2\bigr) - \sin(x_1 x_2) = \ln 2 + \ln \pi$,
   $e^{x_1 - x_2} + \cos(x_1 x_2) = 0$.
   Use $\mathbf{x}^{(0)} = (2, 2)^t$.
c. $x_1^3 + x_1^2 x_2 - x_1 x_3 + 6 = 0$,
   $e^{x_1} + e^{x_2} - x_3 = 0$,
d. $6x_1 - 2\cos(x_2 x_3) - 1 = 0$,