
8. Use functional iteration to find solutions to the following nonlinear systems, accurate to within $10^{-5}$,
   using the $l_\infty$ norm.
   a. $x_1^2 + x_2^2 - x_1 = 0$,
      $x_1^2 - x_2^2 - x_2 = 0$.
   b. $3x_1^2 - x_2^2 = 0$,
      $3x_1 x_2^2 - x_1^3 - 1 = 0$.
   c. $x_1^2 + x_2 - 37 = 0$,
      $x_1 - x_2^2 - 5 = 0$,
      $x_1 + x_2 + x_3 - 3 = 0$.
   d. $x_1^2 + 2x_2^2 - x_2 - 2x_3 = 0$,
      $x_1^2 - 8x_2^2 + 10x_3 = 0$,
      $\dfrac{x_1^2}{7x_2 x_3} - 1 = 0$.
9. Use the Gauss-Seidel method to approximate the fixed points in Exercise 7 to within $10^{-5}$, using the
   $l_\infty$ norm.
10. Repeat Exercise 8 using the Gauss-Seidel method.
11. In Exercise 10 of Section 5.9, we considered the problem of predicting the population of two species
that compete for the same food supply. In the problem, we made the assumption that the populations
could be predicted by solving the system of equations
$$\frac{dx_1(t)}{dt} = x_1(t)\bigl(4 - 0.0003\,x_1(t) - 0.0004\,x_2(t)\bigr)$$
and
$$\frac{dx_2(t)}{dt} = x_2(t)\bigl(2 - 0.0002\,x_1(t) - 0.0001\,x_2(t)\bigr).$$
In this exercise, we would like to consider the problem of determining equilibrium populations of
the two species. The mathematical criterion that must be satisfied in order for the populations to be at
equilibrium is that, simultaneously,
$$\frac{dx_1(t)}{dt} = 0 \quad\text{and}\quad \frac{dx_2(t)}{dt} = 0.$$
This occurs when the first species is extinct and the second species has a population of 20,000 or
when the second species is extinct and the first species has a population of 13,333. Can an equilibrium
occur in any other situation?
12. Show that a function $\mathbf{F}$ mapping $D \subset \mathbb{R}^n$ into $\mathbb{R}^n$ is continuous at $\mathbf{x}_0 \in D$ precisely when, given any
    number $\varepsilon > 0$, a number $\delta > 0$ can be found with the property that for any vector norm $\|\cdot\|$,
    $$\|\mathbf{F}(\mathbf{x}) - \mathbf{F}(\mathbf{x}_0)\| < \varepsilon,$$
    whenever $\mathbf{x} \in D$ and $\|\mathbf{x} - \mathbf{x}_0\| < \delta$.
13. Let A be an n × n matrix and F be the function from Rn to Rn defined by F(x) = Ax. Use the result
in Exercise 12 to show that F is continuous on Rn .

10.2 Newton’s Method


The problem in Example 2 of Section 10.1 is transformed into a convergent fixed-point
problem by algebraically solving the three equations for the three variables x1 , x2 , and x3 .
It is, however, unusual to be able to find an explicit representation for all the variables. In
this section, we consider an algorithmic procedure to perform the transformation in a more
general situation.
To construct the algorithm that led to an appropriate fixed-point method in the one-
dimensional case, we found a function φ with the property that

$$g(x) = x - \phi(x)\,f(x)$$


gives quadratic convergence to the fixed point p of the function g (see Section 2.4). From this
condition Newton's method evolved by choosing $\phi(x) = 1/f'(x)$, assuming that $f'(x) \neq 0$.
A similar approach in the n-dimensional case involves a matrix
$$A(\mathbf{x}) = \begin{bmatrix}
a_{11}(\mathbf{x}) & a_{12}(\mathbf{x}) & \cdots & a_{1n}(\mathbf{x}) \\
a_{21}(\mathbf{x}) & a_{22}(\mathbf{x}) & \cdots & a_{2n}(\mathbf{x}) \\
\vdots & \vdots & & \vdots \\
a_{n1}(\mathbf{x}) & a_{n2}(\mathbf{x}) & \cdots & a_{nn}(\mathbf{x})
\end{bmatrix}, \tag{10.5}$$

where each of the entries aij (x) is a function from Rn into R. This requires that A(x) be
found so that

$$\mathbf{G}(\mathbf{x}) = \mathbf{x} - A(\mathbf{x})^{-1}\mathbf{F}(\mathbf{x})$$

gives quadratic convergence to the solution of F(x) = 0, assuming that A(x) is nonsingular
at the fixed point p of G.
The following theorem parallels Theorem 2.8 on page 80. Its proof requires being able
to express G in terms of its Taylor series in n variables about p.

Theorem 10.7   Let $\mathbf{p}$ be a solution of $\mathbf{G}(\mathbf{x}) = \mathbf{x}$. Suppose a number $\delta > 0$ exists with
(i) $\partial g_i/\partial x_j$ is continuous on $N_\delta = \{\,\mathbf{x} : \|\mathbf{x} - \mathbf{p}\| < \delta\,\}$, for each $i = 1, 2, \ldots, n$ and
$j = 1, 2, \ldots, n$;
(ii) $\partial^2 g_i(\mathbf{x})/(\partial x_j\,\partial x_k)$ is continuous, and $|\partial^2 g_i(\mathbf{x})/(\partial x_j\,\partial x_k)| \le M$ for some constant
$M$, whenever $\mathbf{x} \in N_\delta$, for each $i = 1, 2, \ldots, n$, $j = 1, 2, \ldots, n$, and $k = 1, 2, \ldots, n$;
(iii) $\partial g_i(\mathbf{p})/\partial x_k = 0$, for each $i = 1, 2, \ldots, n$ and $k = 1, 2, \ldots, n$.
Then a number $\hat{\delta} \le \delta$ exists such that the sequence generated by $\mathbf{x}^{(k)} = \mathbf{G}(\mathbf{x}^{(k-1)})$ converges
quadratically to $\mathbf{p}$ for any choice of $\mathbf{x}^{(0)}$, provided that $\|\mathbf{x}^{(0)} - \mathbf{p}\| < \hat{\delta}$. Moreover,
$$\|\mathbf{x}^{(k)} - \mathbf{p}\|_\infty \le \frac{n^2 M}{2}\,\|\mathbf{x}^{(k-1)} - \mathbf{p}\|_\infty^2, \quad \text{for each } k \ge 1.$$
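The quadratic bound in Theorem 10.7 can be seen informally from the second-order Taylor expansion of each component $g_i$ about $\mathbf{p}$; the display below is a proof sketch under the stated hypotheses, not the text's formal argument.
$$
g_i(\mathbf{x}) = g_i(\mathbf{p}) + \sum_{k=1}^{n} \frac{\partial g_i(\mathbf{p})}{\partial x_k}\,(x_k - p_k)
+ \frac{1}{2}\sum_{j=1}^{n}\sum_{k=1}^{n} \frac{\partial^2 g_i(\boldsymbol{\xi})}{\partial x_j\,\partial x_k}\,(x_j - p_j)(x_k - p_k),
$$
for some $\boldsymbol{\xi}$ on the line segment joining $\mathbf{x}$ and $\mathbf{p}$. Since $\mathbf{G}(\mathbf{p}) = \mathbf{p}$ gives $g_i(\mathbf{p}) = p_i$, condition (iii) removes the linear term, and condition (ii) bounds each of the $n^2$ second-order terms by $M\|\mathbf{x} - \mathbf{p}\|_\infty^2$, so
$$
|g_i(\mathbf{x}) - p_i| \le \frac{n^2 M}{2}\,\|\mathbf{x} - \mathbf{p}\|_\infty^2.
$$
Taking the maximum over $i$ with $\mathbf{x} = \mathbf{x}^{(k-1)}$ yields the stated inequality.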

To apply Theorem 10.7, suppose that A(x) is an n × n matrix of functions from Rn


into R in the form of Eq. (10.5), where the specific entries will be chosen later. Assume,
moreover, that A(x) is nonsingular near a solution p of F(x) = 0, and let bij (x) denote the
entry of A(x)−1 in the ith row and jth column. 
For $\mathbf{G}(\mathbf{x}) = \mathbf{x} - A(\mathbf{x})^{-1}\mathbf{F}(\mathbf{x})$, we have $g_i(\mathbf{x}) = x_i - \sum_{j=1}^{n} b_{ij}(\mathbf{x}) f_j(\mathbf{x})$. So
$$
\frac{\partial g_i}{\partial x_k}(\mathbf{x}) =
\begin{cases}
\displaystyle 1 - \sum_{j=1}^{n}\left( b_{ij}(\mathbf{x})\,\frac{\partial f_j}{\partial x_k}(\mathbf{x}) + \frac{\partial b_{ij}}{\partial x_k}(\mathbf{x})\, f_j(\mathbf{x}) \right), & \text{if } i = k, \\[2ex]
\displaystyle -\sum_{j=1}^{n}\left( b_{ij}(\mathbf{x})\,\frac{\partial f_j}{\partial x_k}(\mathbf{x}) + \frac{\partial b_{ij}}{\partial x_k}(\mathbf{x})\, f_j(\mathbf{x}) \right), & \text{if } i \neq k.
\end{cases}
$$

Theorem 10.7 implies that we need $\partial g_i(\mathbf{p})/\partial x_k = 0$, for each $i = 1, 2, \ldots, n$ and
$k = 1, 2, \ldots, n$. This means that for $i = k$,
$$0 = 1 - \sum_{j=1}^{n} b_{ij}(\mathbf{p})\,\frac{\partial f_j}{\partial x_i}(\mathbf{p}),$$


that is,
$$\sum_{j=1}^{n} b_{ij}(\mathbf{p})\,\frac{\partial f_j}{\partial x_i}(\mathbf{p}) = 1. \tag{10.6}$$

When $k \neq i$,
$$0 = -\sum_{j=1}^{n} b_{ij}(\mathbf{p})\,\frac{\partial f_j}{\partial x_k}(\mathbf{p}),$$
so
$$\sum_{j=1}^{n} b_{ij}(\mathbf{p})\,\frac{\partial f_j}{\partial x_k}(\mathbf{p}) = 0. \tag{10.7}$$

The Jacobian Matrix


Define the matrix J(x) by
$$J(\mathbf{x}) = \begin{bmatrix}
\dfrac{\partial f_1}{\partial x_1}(\mathbf{x}) & \dfrac{\partial f_1}{\partial x_2}(\mathbf{x}) & \cdots & \dfrac{\partial f_1}{\partial x_n}(\mathbf{x}) \\[1.5ex]
\dfrac{\partial f_2}{\partial x_1}(\mathbf{x}) & \dfrac{\partial f_2}{\partial x_2}(\mathbf{x}) & \cdots & \dfrac{\partial f_2}{\partial x_n}(\mathbf{x}) \\
\vdots & \vdots & & \vdots \\
\dfrac{\partial f_n}{\partial x_1}(\mathbf{x}) & \dfrac{\partial f_n}{\partial x_2}(\mathbf{x}) & \cdots & \dfrac{\partial f_n}{\partial x_n}(\mathbf{x})
\end{bmatrix}. \tag{10.8}$$

Then conditions (10.6) and (10.7) require that

$A(\mathbf{p})^{-1} J(\mathbf{p}) = I$, the identity matrix, so $A(\mathbf{p}) = J(\mathbf{p})$.

An appropriate choice for A(x) is, consequently, A(x) = J(x) since this satisfies condition
(iii) in Theorem 10.7. The function G is defined by

$$\mathbf{G}(\mathbf{x}) = \mathbf{x} - J(\mathbf{x})^{-1}\mathbf{F}(\mathbf{x}),$$

and the functional iteration procedure evolves from selecting x(0) and generating, for k ≥ 1,
$$\mathbf{x}^{(k)} = \mathbf{G}\bigl(\mathbf{x}^{(k-1)}\bigr) = \mathbf{x}^{(k-1)} - J\bigl(\mathbf{x}^{(k-1)}\bigr)^{-1}\mathbf{F}\bigl(\mathbf{x}^{(k-1)}\bigr). \tag{10.9}$$

This is called Newton’s method for nonlinear systems, and it is generally expected
to give quadratic convergence, provided that a sufficiently accurate starting value is known
and that J(p)−1 exists. The matrix J(x) is called the Jacobian matrix and has a number of
applications in analysis. It might, in particular, be familiar to the reader due to its application
in the multiple integration of a function of several variables over a region that requires a
change of variables to be performed. (The Jacobian matrix first appeared in an 1815 paper by Cauchy,
but Jacobi wrote De determinantibus functionalibus in 1841 and proved numerous results about this matrix.)

A weakness in Newton's method arises from the need to compute and invert the matrix
$J(\mathbf{x})$ at each step. In practice, explicit computation of $J(\mathbf{x})^{-1}$ is avoided by performing
the operation in a two-step manner. First, a vector $\mathbf{y}$ is found that satisfies $J(\mathbf{x}^{(k-1)})\mathbf{y} = -\mathbf{F}(\mathbf{x}^{(k-1)})$.
Then the new approximation, $\mathbf{x}^{(k)}$, is obtained by adding $\mathbf{y}$ to $\mathbf{x}^{(k-1)}$. Algorithm
10.1 uses this two-step procedure.
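When the analytic partial derivatives needed for $J(\mathbf{x})$ in Eq. (10.8) are tedious to code by hand, the Jacobian can instead be approximated column by column with forward differences. The Python/NumPy helper below is a minimal sketch of that idea and is not part of the text's algorithm; the name approx_jacobian and the step size h are choices made here purely for illustration.

```python
import numpy as np

def approx_jacobian(F, x, h=1e-7):
    """Forward-difference approximation to the Jacobian matrix of Eq. (10.8).

    F : callable returning the n-vector F(x) for an n-vector x
    x : point at which the Jacobian is approximated
    h : finite-difference step (a typical, not optimal, choice)
    """
    x = np.asarray(x, dtype=float)
    Fx = np.asarray(F(x), dtype=float)
    n = x.size
    J = np.empty((n, n))
    for j in range(n):
        xh = x.copy()
        xh[j] += h                                            # perturb only the j-th variable
        J[:, j] = (np.asarray(F(xh), dtype=float) - Fx) / h   # column j approximates dF/dx_j
    return J
```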


ALGORITHM 10.1   Newton's Method for Systems
To approximate the solution of the nonlinear system F(x) = 0 given an initial approxima-
tion x:
INPUT number n of equations and unknowns; initial approximation x = (x1 , . . . , xn )t ,
tolerance TOL; maximum number of iterations N.
OUTPUT approximate solution x = (x1 , . . . , xn )t or a message that the number of
iterations was exceeded.
Step 1 Set k = 1.
Step 2 While (k ≤ N) do Steps 3–7.
Step 3 Calculate F(x) and J(x), where J(x)i,j = (∂fi (x)/∂xj ) for 1 ≤ i, j ≤ n.
Step 4 Solve the n × n linear system J(x)y = −F(x).
Step 5 Set x = x + y.
Step 6 If ||y|| < TOL then OUTPUT (x);
(The procedure was successful.)
STOP.
Step 7 Set k = k + 1.
Step 8 OUTPUT (‘Maximum number of iterations exceeded’);
(The procedure was unsuccessful.)
STOP.
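For readers who want to experiment numerically, the following Python/NumPy function is a compact rendering of Algorithm 10.1. It is a sketch, not code from the text: it assumes F and J are supplied as callables, uses the $l_\infty$ norm in the stopping test of Step 6, and the name newton_system is chosen here for illustration.

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-6, N=100):
    """Algorithm 10.1: Newton's method for the nonlinear system F(x) = 0.

    F  : callable returning the n-vector F(x)
    J  : callable returning the n-by-n Jacobian matrix J(x)
    x0 : initial approximation x^(0)
    """
    x = np.asarray(x0, dtype=float)
    for k in range(1, N + 1):
        # Step 4: solve J(x) y = -F(x) rather than forming J(x)^(-1)
        y = np.linalg.solve(J(x), -np.asarray(F(x), dtype=float))
        x = x + y                                  # Step 5
        if np.linalg.norm(y, np.inf) < tol:        # Step 6: ||y||_inf < TOL
            return x, k
    raise RuntimeError("Maximum number of iterations exceeded")
```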

Example 1   The nonlinear system
$$3x_1 - \cos(x_2 x_3) - \tfrac{1}{2} = 0,$$
$$x_1^2 - 81(x_2 + 0.1)^2 + \sin x_3 + 1.06 = 0,$$
$$e^{-x_1 x_2} + 20x_3 + \frac{10\pi - 3}{3} = 0$$
was shown in Example 2 of Section 10.1 to have the approximate solution $(0.5, 0, -0.52359877)^t$.
Apply Newton's method to this problem with $\mathbf{x}^{(0)} = (0.1, 0.1, -0.1)^t$.
Solution Define

F(x1 , x2 , x3 ) = (f1 (x1 , x2 , x3 ), f2 (x1 , x2 , x3 ), f3 (x1 , x2 , x3 ))t ,

where
$$f_1(x_1, x_2, x_3) = 3x_1 - \cos(x_2 x_3) - \tfrac{1}{2},$$
$$f_2(x_1, x_2, x_3) = x_1^2 - 81(x_2 + 0.1)^2 + \sin x_3 + 1.06,$$
and
$$f_3(x_1, x_2, x_3) = e^{-x_1 x_2} + 20x_3 + \frac{10\pi - 3}{3}.$$


The Jacobian matrix J(x) for this system is


$$J(x_1, x_2, x_3) = \begin{bmatrix}
3 & x_3 \sin x_2 x_3 & x_2 \sin x_2 x_3 \\
2x_1 & -162(x_2 + 0.1) & \cos x_3 \\
-x_2 e^{-x_1 x_2} & -x_1 e^{-x_1 x_2} & 20
\end{bmatrix}.$$

Let $\mathbf{x}^{(0)} = (0.1, 0.1, -0.1)^t$. Then $\mathbf{F}\bigl(\mathbf{x}^{(0)}\bigr) = (-1.199950, -2.269833417, 8.462025346)^t$
and
$$J\bigl(\mathbf{x}^{(0)}\bigr) = \begin{bmatrix}
3 & 9.999833334 \times 10^{-4} & -9.999833334 \times 10^{-4} \\
0.2 & -32.4 & 0.9950041653 \\
-0.09900498337 & -0.09900498337 & 20
\end{bmatrix}.$$
Solving the linear system $J\bigl(\mathbf{x}^{(0)}\bigr)\mathbf{y}^{(0)} = -\mathbf{F}\bigl(\mathbf{x}^{(0)}\bigr)$ gives
$$\mathbf{y}^{(0)} = \begin{bmatrix} 0.3998696728 \\ -0.08053315147 \\ -0.4215204718 \end{bmatrix}
\quad\text{and}\quad
\mathbf{x}^{(1)} = \mathbf{x}^{(0)} + \mathbf{y}^{(0)} = \begin{bmatrix} 0.4998696728 \\ 0.01946684853 \\ -0.5215204718 \end{bmatrix}.$$

Continuing for k = 2, 3, . . . , we have


$$
\begin{bmatrix} x_1^{(k)} \\ x_2^{(k)} \\ x_3^{(k)} \end{bmatrix}
= \begin{bmatrix} x_1^{(k-1)} \\ x_2^{(k-1)} \\ x_3^{(k-1)} \end{bmatrix}
+ \begin{bmatrix} y_1^{(k-1)} \\ y_2^{(k-1)} \\ y_3^{(k-1)} \end{bmatrix},
$$
where
$$
\begin{bmatrix} y_1^{(k-1)} \\ y_2^{(k-1)} \\ y_3^{(k-1)} \end{bmatrix}
= -\Bigl(J\bigl(x_1^{(k-1)}, x_2^{(k-1)}, x_3^{(k-1)}\bigr)\Bigr)^{-1}
\mathbf{F}\bigl(x_1^{(k-1)}, x_2^{(k-1)}, x_3^{(k-1)}\bigr).
$$
Thus, at the $k$th step, the linear system $J\bigl(\mathbf{x}^{(k-1)}\bigr)\mathbf{y}^{(k-1)} = -\mathbf{F}\bigl(\mathbf{x}^{(k-1)}\bigr)$ must be solved,
where
$$
J\bigl(\mathbf{x}^{(k-1)}\bigr) =
\begin{bmatrix}
3 & x_3^{(k-1)} \sin x_2^{(k-1)} x_3^{(k-1)} & x_2^{(k-1)} \sin x_2^{(k-1)} x_3^{(k-1)} \\
2x_1^{(k-1)} & -162\bigl(x_2^{(k-1)} + 0.1\bigr) & \cos x_3^{(k-1)} \\
-x_2^{(k-1)} e^{-x_1^{(k-1)} x_2^{(k-1)}} & -x_1^{(k-1)} e^{-x_1^{(k-1)} x_2^{(k-1)}} & 20
\end{bmatrix},
$$
$$
\mathbf{y}^{(k-1)} = \begin{bmatrix} y_1^{(k-1)} \\ y_2^{(k-1)} \\ y_3^{(k-1)} \end{bmatrix},
$$
and
$$
\mathbf{F}\bigl(\mathbf{x}^{(k-1)}\bigr) =
\begin{bmatrix}
3x_1^{(k-1)} - \cos x_2^{(k-1)} x_3^{(k-1)} - \tfrac{1}{2} \\[1ex]
\bigl(x_1^{(k-1)}\bigr)^2 - 81\bigl(x_2^{(k-1)} + 0.1\bigr)^2 + \sin x_3^{(k-1)} + 1.06 \\[1ex]
e^{-x_1^{(k-1)} x_2^{(k-1)}} + 20x_3^{(k-1)} + \dfrac{10\pi - 3}{3}
\end{bmatrix}.
$$

The results using this iterative procedure are shown in Table 10.3.


Table 10.3

k    x1^(k)            x2^(k)              x3^(k)             ‖x^(k) − x^(k−1)‖_∞
0    0.1000000000      0.1000000000        −0.1000000000
1    0.4998696728      0.0194668485        −0.5215204718      0.4215204718
2    0.5000142403      0.0015885914        −0.5235569638      1.788 × 10^−2
3    0.5000000113      0.0000124448        −0.5235984500      1.576 × 10^−3
4    0.5000000000      8.516 × 10^−10      −0.5235987755      1.244 × 10^−5
5    0.5000000000      −1.375 × 10^−11     −0.5235987756      8.654 × 10^−10
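As a cross-check on Table 10.3, the system of Example 1 can be coded directly and passed to the newton_system sketch given after Algorithm 10.1. The snippet below is illustrative only (it relies on that hypothetical helper), and its iterates should agree with the table to within rounding.

```python
import numpy as np

def F(x):
    x1, x2, x3 = x
    return np.array([
        3*x1 - np.cos(x2*x3) - 0.5,
        x1**2 - 81*(x2 + 0.1)**2 + np.sin(x3) + 1.06,
        np.exp(-x1*x2) + 20*x3 + (10*np.pi - 3)/3,
    ])

def J(x):
    x1, x2, x3 = x
    return np.array([
        [3.0,                  x3*np.sin(x2*x3),    x2*np.sin(x2*x3)],
        [2*x1,                 -162*(x2 + 0.1),     np.cos(x3)],
        [-x2*np.exp(-x1*x2),   -x1*np.exp(-x1*x2),  20.0],
    ])

# x, k = newton_system(F, J, x0=[0.1, 0.1, -0.1], tol=1e-6)
# print(k, x)   # expect about 5 iterations and x near (0.5, 0, -0.5235987756)
```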

The previous example illustrates that Newton’s method can converge very rapidly once
a good approximation is obtained that is near the true solution. However, it is not always easy
to determine good starting values, and the method is comparatively expensive to employ. In
the next section, we consider a method for overcoming the latter weakness. Good starting
values can usually be found using the Steepest Descent method, which will be discussed in
Section 10.4.

Using Maple for Initial Approximations


The graphing facilities of Maple can assist in finding initial approximations to the solutions
of 2 × 2 and often 3 × 3 nonlinear systems. For example, the nonlinear system

$$x_1^2 - x_2^2 + 2x_2 = 0, \qquad 2x_1 + x_2^2 - 6 = 0$$

has two solutions, (0.625204094, 2.179355825)t and (2.109511920, −1.334532188)t . To


use Maple we first define the two equations
eq1 := x1^2 − x2^2 + 2x2 = 0;  eq2 := 2x1 + x2^2 − 6 = 0;
To obtain a graph of the two equations for −6 ≤ x1, x2 ≤ 6, enter the commands
with(plots): implicitplot({eq1, eq2}, x1 = −6..6, x2 = −6..6);
From the graph shown in Figure 10.2, we are able to estimate that there are solutions near
(2.1, −1.3)t , (0.64, 2.2)t , (−1.9, 3.0)t , and (−5.0, −4.0)t . This gives us good starting values
for Newton’s method.

Figure 10.2   [Graphs of x1² − x2² + 2x2 = 0 and 2x1 + x2² − 6 = 0 in the x1x2-plane, with axes running from −8 to 8; the curves' intersection points suggest the starting values listed above.]
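If Maple is not available, the same graphical reconnaissance can be carried out with Python's matplotlib by plotting the zero contour of each equation. This is an alternative sketch, not the text's Maple session; the plotting range mirrors the implicitplot command above.

```python
import numpy as np
import matplotlib.pyplot as plt

# Zero contours of the two equations on a grid covering -6 <= x1, x2 <= 6
x1, x2 = np.meshgrid(np.linspace(-6, 6, 400), np.linspace(-6, 6, 400))
f1 = x1**2 - x2**2 + 2*x2          # x1^2 - x2^2 + 2 x2 = 0
f2 = 2*x1 + x2**2 - 6              # 2 x1 + x2^2 - 6 = 0

plt.contour(x1, x2, f1, levels=[0], colors="blue")
plt.contour(x1, x2, f2, levels=[0], colors="red")
plt.xlabel("x1"); plt.ylabel("x2")
plt.title("Intersections give starting values for Newton's method")
plt.show()
```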


The problem is more difficult in three dimensions. Consider the nonlinear system

$$2x_1 - 3x_2 + x_3 - 4 = 0, \qquad 2x_1 + x_2 - x_3 + 4 = 0, \qquad x_1^2 + x_2^2 + x_3^2 - 4 = 0.$$

Define three equations using the Maple commands


eq1 := 2x1 − 3x2 + x3 − 4 = 0;  eq2 := 2x1 + x2 − x3 + 4 = 0;  eq3 := x1^2 + x2^2 + x3^2 − 4 = 0;
The third equation describes a sphere of radius 2 and center (0, 0, 0), so x1, x2, and x3 are
in [−2, 2]. The Maple commands to obtain the graph in this case are
with(plots): implicitplot3d({eq1, eq2, eq3}, x1 = −2..2, x2 = −2..2, x3 = −2..2);
Various three-dimensional plotting options are available in Maple for isolating a solu-
tion to the nonlinear system. For example, we can rotate the graph to better view the sections
of the surfaces. Then we can zoom into regions where the intersections lie and alter the
display form of the axes for a more accurate view of the intersection’s coordinates. For this
problem, a reasonable initial approximation is (x1 , x2 , x3 )t = (−0.5, −1.5, 1.5)t .
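To carry this three-dimensional example through numerically, the graphical estimate (−0.5, −1.5, 1.5)^t can be handed to the newton_system sketch given after Algorithm 10.1. As before, this snippet is illustrative and assumes that hypothetical helper.

```python
import numpy as np

def F(x):
    x1, x2, x3 = x
    return np.array([
        2*x1 - 3*x2 + x3 - 4,
        2*x1 + x2 - x3 + 4,
        x1**2 + x2**2 + x3**2 - 4,
    ])

def J(x):
    x1, x2, x3 = x
    return np.array([
        [2.0,  -3.0,  1.0],
        [2.0,   1.0, -1.0],
        [2*x1, 2*x2, 2*x3],
    ])

# x, k = newton_system(F, J, x0=[-0.5, -1.5, 1.5], tol=1e-6)
# print(x)   # expected to converge to the nearby intersection, approximately (-2/3, -4/3, 4/3)
```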

EXERCISE SET 10.2
1. Use Newton’s method with x(0) = 0 to compute x(2) for each of the following nonlinear
systems.
   a. $4x_1^2 - 20x_1 + \tfrac{1}{4}x_2^2 + 8 = 0$,
      $\tfrac{1}{2}x_1 x_2^2 + 2x_1 - 5x_2 + 8 = 0$.
   b. $\sin(4\pi x_1 x_2) - 2x_2 - x_1 = 0$,
      $\dfrac{4\pi - 1}{4\pi}\bigl(e^{2x_1} - e\bigr) + 4e x_2^2 - 2e x_1 = 0$.
   c. $x_1(1 - x_1) + 4x_2 = 12$,
      $(x_1 - 2)^2 + (2x_2 - 3)^2 = 25$.
   d. $5x_1^2 - x_2^2 = 0$,
      $x_2 - 0.25(\sin x_1 + \cos x_2) = 0$.

2. Use Newton’s method with x(0) = 0 to compute x(2) for each of the following nonlinear
systems.
   a. $3x_1 - \cos(x_2 x_3) - \tfrac{1}{2} = 0$,
      $4x_1^2 - 625x_2^2 + 2x_2 - 1 = 0$,
      $e^{-x_1 x_2} + 20x_3 + \dfrac{10\pi - 3}{3} = 0$.
   b. $x_1^2 + x_2 - 37 = 0$,
      $x_1 - x_2^2 - 5 = 0$,
      $x_1 + x_2 + x_3 - 3 = 0$.
   c. $15x_1 + x_2^2 - 4x_3 = 13$,
      $x_1^2 + 10x_2 - x_3 = 11$,
      $x_2^3 - 25x_3 = -22$.
   d. $10x_1 - 2x_2^2 + x_2 - 2x_3 - 5 = 0$,
      $8x_2^2 + 4x_3^2 - 9 = 0$,
      $8x_2 x_3 + 4 = 0$.
3. Use the graphing facilities of Maple to approximate solutions to the following nonlinear
systems.
   a. $4x_1^2 - 20x_1 + \tfrac{1}{4}x_2^2 + 8 = 0$,
      $\tfrac{1}{2}x_1 x_2^2 + 2x_1 - 5x_2 + 8 = 0$.
   b. $\sin(4\pi x_1 x_2) - 2x_2 - x_1 = 0$,
      $\dfrac{4\pi - 1}{4\pi}\bigl(e^{2x_1} - e\bigr) + 4e x_2^2 - 2e x_1 = 0$.
   c. $x_1(1 - x_1) + 4x_2 = 12$,
      $(x_1 - 2)^2 + (2x_2 - 3)^2 = 25$.
   d. $5x_1^2 - x_2^2 = 0$,
      $x_2 - 0.25(\sin x_1 + \cos x_2) = 0$.
4. Use the graphing facilities of Maple to approximate solutions to the following nonlinear systems
within the given limits.


   a. $3x_1 - \cos(x_2 x_3) - \tfrac{1}{2} = 0$,
      $4x_1^2 - 625x_2^2 + 2x_2 - 1 = 0$,
      $e^{-x_1 x_2} + 20x_3 + \dfrac{10\pi - 3}{3} = 0$;
      $-1 \le x_1 \le 1$, $-1 \le x_2 \le 1$, $-1 \le x_3 \le 1$.
   b. $x_1^2 + x_2 - 37 = 0$,
      $x_1 - x_2^2 - 5 = 0$,
      $x_1 + x_2 + x_3 - 3 = 0$;
      $-4 \le x_1 \le 8$, $-2 \le x_2 \le 2$, $-6 \le x_3 \le 0$.
   c. $15x_1 + x_2^2 - 4x_3 = 13$,
      $x_1^2 + 10x_2 - x_3 = 11$,
      $x_2^3 - 25x_3 = -22$;
      $0 \le x_1 \le 2$, $0 \le x_2 \le 2$, $0 \le x_3 \le 2$.
   d. $10x_1 - 2x_2^2 + x_2 - 2x_3 - 5 = 0$,
      $8x_2^2 + 4x_3^2 - 9 = 0$,
      $8x_2 x_3 + 4 = 0$;
      $0 \le x_1 \le 2$, $-2 \le x_2 \le 0$, $0 \le x_3 \le 2$ and $0 \le x_1 \le 2$, $0 \le x_2 \le 2$, $-2 \le x_3 \le 0$.
5. Use the answers obtained in Exercise 3 as initial approximations to Newton's method. Iterate until
   $\|\mathbf{x}^{(k)} - \mathbf{x}^{(k-1)}\| < 10^{-6}$.
6. Use the answers obtained in Exercise 4 as initial approximations to Newton's method. Iterate until
   $\|\mathbf{x}^{(k)} - \mathbf{x}^{(k-1)}\| < 10^{-6}$.

7. Use Newton's method to find a solution to the following nonlinear systems in the given domain. Iterate
   until $\|\mathbf{x}^{(k)} - \mathbf{x}^{(k-1)}\|_\infty < 10^{-6}$.
   a. $3x_1^2 - x_2^2 = 0$,
      $3x_1 x_2^2 - x_1^3 - 1 = 0$.
      Use $\mathbf{x}^{(0)} = (1, 1)^t$.
   b. $\ln(x_1^2 + x_2^2) - \sin(x_1 x_2) = \ln 2 + \ln \pi$,
      $e^{x_1 - x_2} + \cos(x_1 x_2) = 0$.
      Use $\mathbf{x}^{(0)} = (2, 2)^t$.
   c. $x_1^3 + x_1^2 x_2 - x_1 x_3 + 6 = 0$,
      $e^{x_1} + e^{x_2} - x_3 = 0$,
      $x_2^2 - 2x_1 x_3 = 4$.
      Use $\mathbf{x}^{(0)} = (-1, -2, 1)^t$.
   d. $6x_1 - 2\cos(x_2 x_3) - 1 = 0$,
      $9x_2 + \sqrt{x_1^2 + \sin x_3 + 1.06} + 0.9 = 0$,
      $60x_3 + 3e^{-x_1 x_2} + 10\pi - 3 = 0$.
      Use $\mathbf{x}^{(0)} = (0, 0, 0)^t$.
8. The nonlinear system
$$E_1\!: 4x_1 - x_2 + x_3 = x_1 x_4, \qquad E_2\!: -x_1 + 3x_2 - 2x_3 = x_2 x_4,$$
$$E_3\!: x_1 - 2x_2 + 3x_3 = x_3 x_4, \qquad E_4\!: x_1^2 + x_2^2 + x_3^2 = 1$$
has six solutions.
   a. Show that if $(x_1, x_2, x_3, x_4)^t$ is a solution, then $(-x_1, -x_2, -x_3, x_4)^t$ is also a solution.
   b. Use Newton's method three times to approximate all solutions. Iterate until $\|\mathbf{x}^{(k)} - \mathbf{x}^{(k-1)}\|_\infty < 10^{-5}$.
9. The nonlinear system
$$3x_1 - \cos(x_2 x_3) - \tfrac{1}{2} = 0,$$
$$x_1^2 - 625x_2^2 - \tfrac{1}{4} = 0,$$
$$e^{-x_1 x_2} + 20x_3 + \frac{10\pi - 3}{3} = 0$$
has a singular Jacobian matrix at the solution. Apply Newton's method with $\mathbf{x}^{(0)} = (1, 1, -1)^t$. Note
that convergence may be slow or may not occur within a reasonable number of iterations.
10. What does Newton’s method reduce to for the linear system Ax = b given by

$$a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1,$$
$$a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2,$$
$$\vdots$$
$$a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n = b_n,$$
where A is a nonsingular matrix?
