CH 7

The document describes chapters in a textbook on linear algebra and vector calculus. It discusses the contents of Chapter 7 on linear algebra, including matrices, vectors, determinants, and linear systems. Chapter 7 covers basic concepts of matrices and vectors, matrix multiplication and properties, and solving systems of linear equations using Gauss elimination. It provides examples and solutions to problems on these topics.


c07.qxd  6/13/11  2:57 PM  Page 129

Part B LINEAR ALGEBRA. VECTOR CALCULUS


Part B consists of:
Chap. 7  Linear Algebra: Matrices, Vectors, Determinants. Linear Systems
Chap. 8  Linear Algebra: Matrix Eigenvalue Problems
Chap. 9  Vector Differential Calculus. Grad, Div, Curl
Chap. 10 Vector Integral Calculus. Integral Theorems

Hence we have retained the previous subdivision of Part B into four chapters. Chapter 9 is self-contained and completely independent of Chaps. 7 and 8. Thus, Part B consists of two large independent units, namely, Linear Algebra (Chaps. 7, 8) and Vector Calculus (Chaps. 9, 10). Chapter 10 depends on Chap. 9, mainly because of the occurrence of div and curl (defined in Chap. 9) in the Gauss and Stokes theorems in Chap. 10.

CHAPTER 7 Linear Algebra: Matrices, Vectors, Determinants. Linear Systems


Changes
The order of the material in this chapter and its subdivision into sections have been retained, but various local changes have been made to increase the usefulness of this chapter for applications.

SECTION 7.1. Matrices, Vectors: Addition and Scalar Multiplication, page 257
Purpose. Explanation of the basic concepts and of the two basic matrix operations. The latter derive their importance from their use in defining vector spaces, a fact that should perhaps not be mentioned at this early stage; its systematic discussion follows in Sec. 7.4, where it will fit nicely into the flow of thoughts and ideas.
No Prerequisites. Although most students may have some working knowledge of the simplest parts of linear algebra, we assume no prerequisites.
Main Content, Important Concepts
Matrix, square matrix, main diagonal
Double subscript notation
Row vector, column vector, transposition
Equality of matrices
Matrix addition
Scalar multiplication (multiplication of a matrix by a scalar)


Instructor's Manual

Comments on Important Facts
One should emphasize that vectors are always included as special cases of matrices and that those two operations have properties [formulas (3), (4)] similar to those of operations for numbers, which is a great practical advantage.
Content and Significance of the Examples
Example 1 of the text gives a first impression of the main application of matrices, that is, to linear systems, whose significance and systematic discussion will be explained later, beginning in Sec. 7.3. Example 2 gives a simple application showing the usefulness of matrix addition. Example 3 elaborates on equality of matrices. Examples 4 and 5 concern the two basic algebraic operations of addition and scalar multiplication.
Purpose and Structure of Problem Set
The questions in Probs. 1–7 should help the student to reflect on the basic concepts in this section. Problems 8–16 should help the student in gaining technical skill. Problems 17–20 show applications in connection with forces in mechanics and with electrical networks.
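As a quick illustration of the two basic operations, the following sketch applies entry-wise addition and scalar multiplication to nested Python lists. The matrices here are made up for illustration and are not taken from the problem set.

```python
def mat_add(A, B):
    """Entry-wise sum of two matrices of the same shape."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scalar_mul(c, A):
    """Multiply every entry of A by the scalar c."""
    return [[c * a for a in row] for row in A]

A = [[4, 6, 3],
     [0, 1, 2]]
B = [[5, -1, 0],
     [3, 1, 0]]

S = mat_add(A, B)        # [[9, 5, 3], [3, 2, 2]]
T = scalar_mul(2, A)     # [[8, 12, 6], [0, 2, 4]]
```

Both operations inherit the familiar laws (3), (4) directly from the corresponding laws for numbers, since they act entry by entry.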

SOLUTIONS TO PROBLEM SET 7.1, page 261
2. 100, 810, 960, 0
4. 4, 0, 1; a11, a22; 4, 1
6. B = (1/1.609344)A; see inside of the front cover.
8.–11. [Matrix answers]


12.–18. [Matrix and vector answers; several of the products are undefined.]
20. TEAM PROJECT. (b) [The two nodal incidence matrices of the given networks.]


(c) [Figure: the networks corresponding to these incidence matrices.]

SECTION 7.2. Matrix Multiplication, page 263
Purpose. Matrix multiplication, the third and last algebraic operation, is defined and discussed, with emphasis on its unusual properties; this also includes its representation by inner products of row and column vectors. The motivation of this multiplication is given by formulas (6)–(8) in connection with linear transformations.
Main Content, Important Facts
Definition of matrix multiplication ("rows times columns")
Properties of matrix multiplication
Matrix products in terms of inner products of vectors
Linear transformations motivating the definition of matrix multiplication
AB ≠ BA in general, so the order of factors is important
AB = 0 does not imply A = 0 or B = 0 or BA = 0
(AB)^T = B^T A^T

Short Courses. Products in terms of row and column vectors and the discussion of linear transformations could be omitted.
Comment on Notation
For transposition, T seems preferable over a prime, which is often used in the literature but will be needed to indicate differentiation in Chap. 9.
Comments on Content and Significance of the Examples in the Text
Matrix multiplication is shown for the three possible cases, namely, for products of two matrices (Example 1), for a matrix times a vector (the most important case in the next sections, Example 2), and for products of row and column vectors (Example 3). Most important, matrix multiplication is not commutative (Example 4). Example 5 shows how a matrix product can be expressed in terms of row and column vectors. Example 6 illustrates the computation of matrix products on parallel processors. The operation of transposition (Example 7) transforms, as a special case, row vectors into column vectors and conversely. It is also used in the definition of the very important symmetric and skew-symmetric matrices (Example 8).

c07.qxd

6/13/11

2:57 PM

Page 133

Instructors Manual

133

Further special square matrices are triangular matrices (Example 9) and diagonal and scalar matrices, including unit matrices of various sizes n (Example 10). Examples 11–13 show some typical applications. Formula (10d) for the transposition of a product should be memorized.
In motivating matrix multiplication by linear transformations, one may also illustrate the geometric significance of noncommutativity by combining a rotation with a stretch in the x-direction in both orders and showing that a circle transforms into an ellipse with main axes in the direction of the coordinate axes or rotated, respectively.
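The "rows times columns" rule and the failure of commutativity can be demonstrated in a few lines. This is an illustrative Python sketch; the two matrices are made up, not taken from the text's examples.

```python
def mat_mul(A, B):
    """C = AB with entry c_jk = (row j of A) . (column k of B)."""
    n = len(B)  # inner dimension: number of columns of A = rows of B
    return [[sum(A[j][i] * B[i][k] for i in range(n))
             for k in range(len(B[0]))]
            for j in range(len(A))]

A = [[1, 1],
     [0, 1]]
B = [[1, 0],
     [1, 1]]

AB = mat_mul(A, B)   # [[2, 1], [1, 1]]
BA = mat_mul(B, A)   # [[1, 1], [1, 2]]
assert AB != BA      # the order of factors matters
```

The same routine covers the matrix-times-vector case by taking B to be a single-column matrix.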

SOLUTIONS TO PROBLEM SET 7.2, page 270
2. The 3 × 3 zero matrix, as follows directly from the definitions.
4. n(n − 1) + l, where l is the number of different main diagonal entries (which are all ≠ 0), hence 13 when n = 4.
6. Triangular are U1 + U2, U1U2, hence U1², which, by transposition, implies the same for L1 + L2, L1L2, and L1².
8. [2 × 2 products built from entries 0 and a.] Problems 7 and 8 should serve as eye openers, to see what can happen under multiplication.
10. The entry ckj of (AB)^T is cjk of AB, which is Row j of A times Column k of B. On the right, ckj is Row k of B^T, hence Column k of B, times Column j of A^T, hence Row j of A.
11.–14. [Matrix answers]


15.–20. [Vector and matrix answers; several of the products are undefined.]
24. M = AB − BA must be the 2 × 2 zero matrix. Writing out the entries of M with B = [2 3; 3 4] gives a21 = a12 from m11 = 0 (also from m22 = 0) and a22 = a11 + (2/3)a12 from m12 = 0 (also from m21 = 0). Answer:
A = [ a11            a12
      a12   a11 + (2/3)a12 ].


26. The transition probabilities can be given in the matrix

         From N  From T
A =  [    0.8     0.5  ]   To N
     [    0.2     0.5  ]   To T

and multiplication of [1 0]^T by A, A², A³ gives [0.8 0.2]^T, [0.74 0.26]^T, and [0.722 0.278]^T.
28. The matrix of the transition probabilities is

A =  [ 0.9   0.002 ]
     [ 0.1   0.998 ].

The initial state is [1200 98,800]^T. Hence multiplication by A gives the further states (rounded) [1278 98,722]^T, [1347 98,653]^T, [1410 98,590]^T, indicating that a substantial increase is likely.
30. Team Project. (b) Use induction on n. True if n = 1. Take the formula in the problem as the induction hypothesis, multiply by A, and simplify the entries in the product by the addition formulas for the cosine and sine to get A^(n+1). (c) These formulas follow directly from the definition of matrix multiplication. (d) A scalar matrix would correspond to a stretch or contraction by the same factor in all directions. (e) Rotations about the x1-, x2-, x3-axes through the three given angles, respectively.
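The repeated multiplications behind these state sequences are easy to check by machine. The sketch below uses the Prob. 26 transition probabilities (columns [0.8, 0.2] from state N and [0.5, 0.5] from state T) with exact fractions, so the printed states come out exactly.

```python
from fractions import Fraction as F

# Prob. 26 transition matrix: columns sum to 1 (stochastic matrix).
A = [[F(8, 10), F(5, 10)],
     [F(2, 10), F(5, 10)]]

def apply(A, x):
    """Matrix-vector product Ax."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

x = [F(1), F(0)]          # initial state [1, 0]^T
states = []
for _ in range(3):        # x, Ax, A^2 x, A^3 x
    x = apply(A, x)
    states.append([float(v) for v in x])
# states == [[0.8, 0.2], [0.74, 0.26], [0.722, 0.278]]
```

The same loop with the Prob. 28 matrix and initial state [1200, 98800] reproduces the rounded states quoted above.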

SECTION 7.3. Linear Systems of Equations. Gauss Elimination, page 272
Purpose. This section centers around the Gauss elimination for solving linear systems of m equations in n unknowns x1, …, xn, its practical use as well as its mathematical justification (leaving the more demanding general existence theory to the next sections).
Main Content, Important Concepts
Nonhomogeneous and homogeneous systems, coefficient matrix, augmented matrix
Gauss elimination in the case of the existence of
I. a unique solution (Example 2)
II. infinitely many solutions (Example 3)
III. no solution (Example 4)
Pivoting
Elementary row operations, echelon form
Background Material. All one needs here is the multiplication of a matrix and a vector.


Comments on Content
The student should become aware of the following facts:
1. Linear systems of equations provide a major application of matrix algebra and a justification of the definitions of its concepts.
2. The Gauss elimination (with pivoting) gives meaningful results in each of the Cases I–III.
3. This method is a systematic elimination that does not look for unsystematic shortcuts (depending on the size of the numbers involved and still advocated in some older precomputer-age books).
Algorithms for programs of Gauss's and related methods are discussed in Sec. 20.1, which is independent of the rest of Chap. 20 and can thus be taken up along with the present section in case of time and interest.
Comments on Examples in the Text
Example 1 and Fig. 158 show geometric interpretations of linear systems in 2 and 3 unknowns. Example 2, on an electrical network of Ohm's resistors, shows Gauss elimination with pivoting and back substitution in the case of a unique solution. Theorem 1 is central; it proves that the three kinds of elementary row operations leave solution sets unchanged, thus justifying Gauss elimination. Examples 2, 3, and 4 illustrate the Gauss elimination for the three possible cases that a linear system has a unique solution, infinitely many solutions, or no solution, respectively. The section closes with a few comments on the row-echelon form, that is, on the form into which Gauss elimination transforms linear systems.
Comments on Problems
Problems 1–14 on Gauss elimination give further illustrations of those three cases. Problem 15 concerns equivalence (its general definition), which is of general mathematical interest, involving reflexivity, symmetry, and transitivity. Electrical networks of Ohm's resistors (no inductances or capacitances) are discussed as linear systems in Probs. 17–20, and some further models in Probs. 21–23.
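The systematic elimination described above can be sketched compactly. This is an illustrative implementation, not the production algorithm of Sec. 20.1; it uses exact rational arithmetic and assumes Case I (a unique solution), and the example system is made up for the demonstration.

```python
from fractions import Fraction as F

def gauss_solve(A, b):
    """Gauss elimination with partial pivoting and back substitution.
    Assumes a unique solution exists; exact arithmetic via Fraction."""
    n = len(A)
    # augmented matrix [A | b]
    M = [[F(v) for v in row] + [F(bi)] for row, bi in zip(A, b)]
    for k in range(n):
        # partial pivoting: bring the largest entry in column k to the top
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [F(0)] * n
    for i in range(n - 1, -1, -1):          # back substitution
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# Made-up system: x + y + z = 6, 2y + 5z = -4, 2x + 5y - z = 27
sol = gauss_solve([[1, 1, 1], [0, 2, 5], [2, 5, -1]], [6, -4, 27])
# sol == [5, 3, -2]
```

Note the max-pivot choice in each column; with exact fractions it is not needed for stability, but it mirrors the pivoting discussed in the text.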
Project 24 presents the idea of representing matrix operations (as discussed before) in terms of standard matrices, called elementary matrices. Such representations are helpful, for instance, in designing algorithms, whereas computations themselves generally proceed directly.

SOLUTIONS TO PROBLEM SET 7.3, page 280
1.–9. [Solution values; Probs. 4 and 8 have no solution.]


10. No solution
11.–12. [Parametric solutions in t1 and t2.]
13. w = 2, x = 0, y = 4, z = 1
14. w = 0, x = 3z = 3t, y = 2z + 1 = 2t + 1, z = t arbitrary
18. Currents at the lower node: −I1 + I2 + I3 = 0 (minus because I1 flows out). Voltage in the left circuit: 4I1 + 12I2 = 36, and in the right circuit 12I2 − 8I3 = 24 (minus because I3 flows against the arrow of E2). Writing down the augmented matrix of this system and solving gives
I1 = 27/11 A, I2 = 24/11 A, I3 = 3/11 A.

22. P1 = 5, P2 = 5, D1 = 13, D2 = 21, S1 = 13, S2 = 21
24. PROJECT. (a) B and C are different. For instance, it makes a difference whether we first multiply a row and then interchange, or do these operations in reverse order.

(b) Premultiplying A by E makes E operate on the rows of A. The assertions then follow almost immediately from the definition of matrix multiplication.

SECTION 7.4. Linear Independence. Rank of a Matrix. Vector Space, page 282
Purpose. This section introduces some theory centered around linear independence and rank, in preparation for the discussion of the existence and uniqueness problem for linear systems of equations (Sec. 7.7).
Main Content, Important Concepts
Linear independence
Real vector space R^n, dimension, basis



Rank defined in terms of row vectors
Rank in terms of column vectors
Invariance of rank under elementary row operations
Short Courses. For the further discussion in the next sections, it suffices to define linear independence and rank.
Comments on Rank and Vector Spaces
Of the three possible equivalent definitions of rank,
(i) by row vectors (our definition),
(ii) by column vectors (our Theorem 3),
(iii) by submatrices with nonzero determinant (Sec. 7.7),
the first seems to be most practical in our context. Introducing vector spaces here, rather than in Sec. 7.1, we have the advantage that the student immediately sees an application (row and column spaces). Vector spaces in full generality follow in Sec. 7.9.
Comments on Text and Problem Set
Examples 1–5 concern the same three vectors, in Examples 2–5 as row vectors of a matrix. Theorem 1 states that rank is invariant under row reduction. Example 3 determines rank by Gauss reduction to echelon form. Since we defined rank in terms of row vectors, rank in terms of column vectors becomes a theorem (Theorem 3). Theorem 4 results from Theorems 2 and 3, as indicated. The text then continues with the definition of vector space in general, of the vector space R^n, and of the row space and column space of a matrix A, both of which have the same dimension, equal to rank A. This is immediately illustrated in Probs. 1–10 of the problem set. Problems 12–16 give a further discussion of rank. Problems 17–25 concern linear independence and dependence. Sets of vectors that form, or do not form, vector spaces follow in Probs. 27–35. The discussion of vector space will be continued and extended in the last section (Sec. 7.9) of this chapter, which we leave optional since we shall not make further use of it.
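Determining rank by reduction to echelon form, as in Example 3, can be sketched as follows (an illustrative routine with exact arithmetic; the test matrices are made up).

```python
from fractions import Fraction as F

def rank(A):
    """Rank via reduction to row-echelon form; by Theorem 1,
    elementary row operations leave the rank unchanged."""
    M = [[F(v) for v in row] for row in A]
    rows, cols = len(M), len(M[0])
    r = 0                                  # number of pivots found so far
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue                       # no pivot in this column
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, rows):
            m = M[i][c] / M[r][c]
            for j in range(c, cols):
                M[i][j] -= m * M[r][j]
        r += 1
    return r

assert rank([[1, 2], [2, 4]]) == 1                      # proportional rows
assert rank([[1, 2, 3], [0, 1, 1], [1, 3, 4]]) == 2     # row3 = row1 + row2
```

The nonzero rows of the final echelon form give a basis of the row space, exactly as in the solutions below.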

SOLUTIONS TO PROBLEM SET 7.4, page 287
1. 1; [1, 2, 3], [2, 1]^T
2. Rank 2 for general a ≠ 0 and 2a² ≠ b², by row reduction; a basis is [2a, b], [0, a − b²/(2a)].
3. 3; [3, 5, 0], [0, 25/3, 0], [0, 0, 5]
4. 3; row reduction gives the row-space basis {[2, 3, 0], [0, 9, 1], [0, 0, 37]}. The matrix is symmetric and has the eigenvalues 4 and 2 ± √41.
5. 3; {[2, 2, 1], [0, 1, 2], [0, 0, 1]}, {[1, 0, 1], [0, 2, 1], [0, 0, 1]}
6. The matrix is skew-symmetric, hence, as can be seen without calculation, of rank 2. Bases are {[1 0 4], [0 1 0]} and the same vectors transposed (as column vectors).
7. 2; {[6, 0, 3, 0], [0, 1, 0, 5]}, {[6, 0, 2], [0, 1, 0]}
8. The matrix has rank 4. Row reduction gives as a basis of the row space [1 2 4 8], [0 4 10 21], [0 0 2 5], [0 0 0 1]. Row reduction of the transpose gives a basis of the column space of the given matrix.
9. 3; {[5, 0, 1, 0], [0, 5, 4, 5], [0, 0, 1, 0]} and the corresponding column vectors.
10. The matrix is symmetric. Row reduction shows that the rank is 3, and a basis is [5 2 1 0], [0 1 2 0], [0 0 2 1].
12. AB and its transpose (AB)^T = B^T A^T have the same rank. This follows directly from Theorem 3. A proof is given in Ref. [B3], vol. 1, p. 12. (See App. 1 of the text.)
14. No.
16. Yes. One of these sets consists of the row vectors of the 4 × 4 Hilbert matrix; see the Index of the textbook.
19. Yes.
20. No. It is remarkable that A = [a_jk] with a_jk = j + k − 1 has rank 2 for any size n of the matrix.


22. No. Quite generally, if one of the vectors v(1), …, v(m) is 0, then (1) holds with any c1 ≠ 0 and c2, …, cm all zero.
24. No, by Theorem 4.
26. Three steps; the first two vectors remain; these form a linearly independent subset of the given set.
27. 2; [4, 0, 1], [0, 4, 1]
28. No, if k ≠ 0; yes, if k = 0, with dimension 2 and basis [1 0 0], [0 1 3].
29. No, it is a subspace.
30. Yes, dimension 2, basis [0 0 1 0], [0 0 0 1].
31. No.
32. Yes, dimension 1. The two given equations form a linear system with a 2 × 3 coefficient matrix; its solution is x1 = 5t1, x2 = 4t1, x3 = 23t1 with arbitrary t1. Hence a basis is [5 4 23].
34. No.
SECTION 7.5. Solutions of Linear Systems: Existence, Uniqueness, page 288
Purpose. The student should see that the totality of solutions (including existence and uniqueness) can be characterized in terms of the ranks of the coefficient matrix and the augmented matrix.
Main Content, Important Concepts
Augmented matrix
Necessary and sufficient conditions for the existence of solutions
Implications for homogeneous systems
rank A + nullity A = n
Background Material. Rank (Sec. 7.4)
Short Courses. Brief discussion of the first two theorems, illustrated by some simple examples.
Comments on Content
This section should make the student aware of the great importance of rank. It may be good to have students memorize the condition rank A = rank Ã (Ã the augmented matrix) for the existence of solutions. Students familiar with ODEs may be reminded of the analog of Theorem 4 (see Sec. 2.7). This section may also provide a good opportunity to point to the roles of existence and uniqueness problems throughout mathematics (and to the distinction between the two).
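The rank criterion lends itself to a mechanical check of the three cases of Sec. 7.3. The sketch below is illustrative only; the rank helper repeats the usual row reduction, and the test systems are made up.

```python
from fractions import Fraction as F

def rank(A):
    """Rank by row reduction with exact arithmetic."""
    M = [[F(v) for v in row] for row in A]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            m = M[i][c] / M[r][c]
            for j in range(c, len(M[0])):
                M[i][j] -= m * M[r][j]
        r += 1
    return r

def classify(A, b):
    """Cases I-III via rank A vs. rank of the augmented matrix."""
    n = len(A[0])
    rA = rank(A)
    rAb = rank([row + [bi] for row, bi in zip(A, b)])
    if rA < rAb:
        return "no solution"
    return "unique solution" if rA == n else "infinitely many solutions"

assert classify([[1, 1], [0, 1]], [2, 1]) == "unique solution"
assert classify([[1, 1], [2, 2]], [2, 4]) == "infinitely many solutions"
assert classify([[1, 1], [2, 2]], [2, 5]) == "no solution"
```

This mirrors the theorem exactly: a solution exists iff the ranks agree, and it is unique iff that common rank equals the number of unknowns.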


SECTION 7.7. Determinants. Cramer's Rule, page 293
For second- and third-order determinants see the reference Sec. 7.6.
Main Content of This Section
nth-order determinants
General properties of determinants
Rank in terms of determinants (Theorem 3)
Cramer's rule for solving linear systems by determinants (Theorem 4)
General Comments on Determinants
Our definition of a determinant seems more practical than that in terms of permutations (because it immediately gives those general properties), at the expense of the proof that our definition is unambiguous (see the proof in App. 4). General properties are given for order n, from which they can easily be seen for n = 3 when needed.
The importance of determinants has decreased with time, but determinants will remain in eigenvalue problems (characteristic determinants), ODEs (Wronskians!), integration and transformations (Jacobians!), and other areas of practical interest.
Comments on Examples
Examples 1–3 show expansions of determinants in the simplest cases. Example 4 illustrates the role of triangular matrices in the present context. The theorems show properties of determinants, in particular the relation to rank (Theorem 3) and Cramer's rule for n equations in n unknowns. Note that the cases n = 2 and n = 3 were considered in Sec. 7.6.
Comments on Problems
Problems 1–15 illustrate general properties of determinants and their evaluation. Problems 17–19 compare the (impractical) determination of rank by determinants and by row reduction. Team Project 20 concerns some applications of linear systems to analytic geometry, all using the vanishing of determinants.
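The expansion definition of the determinant translates directly into a short recursive routine. This is a pedagogical sketch (O(n!) cost, so for small n only); the test matrices are made up.

```python
def det(A):
    """n-th order determinant by cofactor expansion along the first row,
    matching the expansion definition used in this section."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for k in range(n):
        # minor: delete row 0 and column k
        minor = [row[:k] + row[k + 1:] for row in A[1:]]
        total += (-1) ** k * A[0][k] * det(minor)
    return total

assert det([[1, 2], [3, 4]]) == -2
# Triangular matrix: determinant = product of the diagonal entries
assert det([[2, 0, 0], [6, 4, 0], [1, 2, 3]]) == 24
```

For numerical work one would instead reduce to triangular form and multiply the pivots, as Example 4 suggests.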

SOLUTIONS TO PROBLEM SET 7.7, page 300
8. 1.10
10. 1
11. 48
12. a³ + b³ + c³ − 3abc
13. 17
14. 216
15. 1


16. det A_n = (−1)^(n−1) (n − 1). True for n = 2 (a 2-simplex on R¹, that is, a segment, an interval), because
det A_2 = (−1)^1 (2 − 1) = −1.
Assume the formula true for n and consider A_(n+1). To get the first row with all entries 0 except for the first entry, subtract from Row 1 the expression
(1/(n + 1)) (Row 2 + … + Row (n + 1)).
The first component of the new row is n/(n + 1), whereas the other components are all 0. Develop det A_(n+1) by this new first row and notice that you can then apply the induction hypothesis, obtaining
det A_(n+1) = (−1)^n n,
as had to be shown.
18. 3, because interchange of Rows 1 and 2 followed by row reduction gives an echelon form with three nonzero rows.
19. 2
20. Team Project. (a) Use a row operation (subtraction of rows) on D to transform the last column of D into the form [0 … 0 1]^T and then develop D = 0 by this column.
(b) For a plane the equation is ax + by + cz + d · 1 = 0, so that we get the determinantal equation

| x    y    z    1 |
| x1   y1   z1   1 |  =  0.
| x2   y2   z2   1 |
| x3   y3   z3   1 |

The plane is 3x + 4y + 2z = 5.
(c) For a circle the equation is a(x² + y²) + bx + cy + d · 1 = 0, so that we get

| x² + y²       x    y    1 |
| x1² + y1²     x1   y1   1 |  =  0.
| x2² + y2²     x2   y2   1 |
| x3² + y3²     x3   y3   1 |

The circle is x² + y² − 4x − 2y = 20.


(d) For a sphere the equation is a(x² + y² + z²) + bx + cy + dz + e · 1 = 0, so that we obtain the corresponding 5 × 5 determinantal equation with rows
[x² + y² + z²   x   y   z   1]
formed from the variable point and the four given points. The sphere through the given points is
x² + y² + (z − 1)² = 16.
(e) For a general conic section the equation is ax² + bxy + cy² + dx + ey + f · 1 = 0, so that we get the 6 × 6 determinantal equation with rows
[x²   xy   y²   x   y   1]
formed from the variable point and the five given points.
21. x = 2.5, y = 0.15
22. In Cramer's rule we have D = 24, D1 = 48, D2 = 120. Hence x = D1/D = 48/24 = 2 and y = D2/D = 120/24 = 5.
23. x = 1, y = 1, z = 0
24. D = 60, D1 = 60, D2 = 180, and D3 = 240. Hence x = 1, y = 3, and z = 4.
25. w = 1, x = 1, y = 2, z = 2
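Answers of the Cramer's-rule type, as in Probs. 22–25, can be checked with a small routine: replace column k of A by b, take determinants, divide. This is a hedged sketch with a made-up example system, using cofactor determinants and exact fractions.

```python
from fractions import Fraction as F

def det(A):
    """Determinant by cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** k * A[0][k]
               * det([row[:k] + row[k + 1:] for row in A[1:]])
               for k in range(len(A)))

def cramer(A, b):
    """Cramer's rule: x_k = D_k / D, where D_k replaces column k by b.
    Requires D != 0 (unique solution)."""
    D = det(A)
    return [F(det([row[:k] + [bi] + row[k + 1:]
                   for row, bi in zip(A, b)]), D)
            for k in range(len(A))]

# Made-up system: 2x + y = 7, x - y = -1  ->  D = -3, D1 = -6, D2 = -9
assert cramer([[2, 1], [1, -1]], [7, -1]) == [2, 3]
```

For n beyond 3 or 4 this is of course far more expensive than Gauss elimination, which is exactly the practical point made in the section.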


SECTION 7.8. Inverse of a Matrix. Gauss–Jordan Elimination, page 301
Purpose. To familiarize the student with the concept of the inverse A^(−1) of a square matrix A, the conditions for its existence, and its computation.
Main Content, Important Concepts
AA^(−1) = A^(−1)A = I
Nonsingular and singular matrices
Existence of A^(−1) and rank
Gauss–Jordan elimination
(AC)^(−1) = C^(−1)A^(−1)
Cancellation laws (Theorem 3)
det (AB) = det (BA) = det A det B
Short Courses. Theorem 1 without proof, Gauss–Jordan elimination, formulas (4*) and (7).
Comments on Content
Although in this chapter we are not concerned with operations count (Chap. 20), it would make no sense to first mislead the student by using Gauss–Jordan for solving Ax = b and then later, in numerics, correct the false impression by explaining why Gauss elimination is better because back substitution needs fewer operations than the diagonalization of a triangular matrix. Thus Gauss–Jordan should be applied only when A^(−1) is wanted.
The unusual properties of matrix multiplication, briefly mentioned in Sec. 7.2, can now be explored systematically by the use of rank and inverse. Formula (4*) is worth memorizing.
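The Gauss–Jordan computation of A^(−1), row-reducing [A | I] until the left half becomes I, can be sketched as follows (illustrative only; exact fractions, made-up test matrices).

```python
from fractions import Fraction as F

def inverse(A):
    """A^(-1) by Gauss-Jordan: row-reduce [A | I] to [I | A^(-1)].
    Returns None if A is singular."""
    n = len(A)
    M = [[F(v) for v in row] + [F(1) if i == j else F(0) for j in range(n)]
         for i, row in enumerate(A)]
    for k in range(n):
        p = next((i for i in range(k, n) if M[i][k] != 0), None)
        if p is None:
            return None                       # singular matrix, no inverse
        M[k], M[p] = M[p], M[k]
        piv = M[k][k]
        M[k] = [v / piv for v in M[k]]        # normalize the pivot row
        for i in range(n):                    # clear column k above AND below
            if i != k and M[i][k] != 0:
                m = M[i][k]
                M[i] = [vi - m * vk for vi, vk in zip(M[i], M[k])]
    return [row[n:] for row in M]

Ainv = inverse([[1, 2], [3, 4]])              # [[-2, 1], [3/2, -1/2]]
assert inverse([[1, 2], [2, 4]]) is None      # singular
```

The elimination "above and below" the pivot is exactly what distinguishes Gauss–Jordan from plain Gauss elimination, and why it should be reserved for inversion.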

SOLUTIONS TO PROBLEM SET 7.8, page 308
1., 3.–5., 7., 9. [Inverse matrices]
2. The given matrix corresponds to a rotation through an angle 2u. If 2u is replaced by −2u (rotation in the opposite sense), this gives the inverse, which corresponds to a rotation through −2u.
6. Note that, due to the special form of the given matrix, the 2 × 2 minor in the lower right corner of the inverse has the form of the inverse of a 2 × 2 matrix.
8. The matrix is singular. It is interesting that this is not the case for the 2 × 2 matrix [1 2; 3 4].
10. The inverse equals the transpose. This is the defining property of orthogonal matrices, to be discussed in Sec. 8.3.
12. I = (A²)^(−1)A². Multiply this by A^(−1) from the right on both sides of the equation. This gives A^(−1) = (A²)^(−1)A. Do the same operation once more to get the formula to be proved.
14. I = I^T = (A^(−1)A)^T = A^T(A^(−1))^T. Now multiply the first and last expressions by (A^T)^(−1) from the left, obtaining (A^T)^(−1) = (A^(−1))^T.
16. Rotation through 2u. The inverse represents the rotation through −2u; replacement of 2u by −2u in the matrix gives the inverse.
18. Multiplication by A from the right interchanges Rows 1 and 2 of A. The inverse of this is the interchange that gives back the original matrix. Hence the inverse of the given matrix should equal the matrix itself, as is the case.
20. Straightforward calculation, particularly simple because of the zeros, and instructive because we now see distinctly why the inverse has zeros at the same positions as the given matrix does.

SECTION 7.9. Vector Spaces, Inner Product Spaces, Linear Transformations. Optional, page 309
Purpose. In this optional section we extend our earlier discussion of the vector spaces R^n and C^n, define inner product spaces, and explain the role of matrices in linear transformations of R^n into R^m.



Main Content, Important Concepts
Real vector space, complex vector space
Linear independence, dimension, basis
Inner product space
Linear transformation of R^n into R^m
Background Material. Vector spaces R^n (Sec. 7.4)
Comments on Content
The student is supposed to see and comprehend how concrete models (R^n and C^n, the inner product for vectors) lead to abstract concepts, defined by axioms resulting from basic properties of those models. Because of the level and general objective of this chapter, we have to restrict our discussion to the illustration and explanation of the abstract concepts in terms of some simple typical examples. Most essential from the viewpoint of matrices is our discussion of linear transformations, which, in a more theoretically oriented course of a higher level, would occupy a more prominent position.
Comment on Footnote 4
Hilbert's work was fundamental to various areas of mathematics; roughly speaking, he worked on number theory 1893–1898, foundations of geometry 1898–1902, integral equations 1902–1912, physics 1910–1922, and logic and foundations of mathematics 1922–1930. Closest to our interests here is the development in integral equations, as follows. In 1870 Carl Neumann (Sec. 5.6) had the idea of solving the Dirichlet problem for the Laplace equation by converting it to an integral equation. This created general interest in integral equations. In 1896 Vito Volterra (1860–1940) developed a general theory of these equations, followed by Erik Ivar Fredholm (1866–1927) in 1900–1903 (whose papers caused great excitement) and Hilbert from 1902 on. This gave the impetus to the development of inner product and Hilbert spaces and of operators defined on them. These spaces and operators and their spectral theory have found basic applications in quantum mechanics since 1927. Hilbert's great interest in mathematical physics is documented by Ref. [GenRef3], a classic full of ideas that are of importance to the mathematical work of the engineer.
For more details, see G. Birkhoff and E. Kreyszig, The establishment of functional analysis, Historia Mathematica 11 (1984), pp. 258–321.
Further Comments on Content and Comments on Problems
It is important to understand that matrices form vector spaces (Example 1, etc.) and so do polynomials up to a fixed degree n (Example 2). An inner product space is obtained from a vector space V by defining an inner product on V × V. In addition to the usual dot product notation, the student should perhaps also become aware of the standard notation (a, b), used, e.g., in functional analysis and applications.
The last part of the section concerns the role of matrices in connection with linear transformations. The fundamental importance of (3)–(5) (Cauchy–Schwarz and triangle inequalities, parallelogram equality) will not appear on our level, but should perhaps be mentioned in passing, along with the verifications required in Probs. 23–25.
The problems center around vector spaces, supplementing Problem Set 7.4, linear transformations and inverse matrices, as well as norm and inner product (to be substantially extended in numerics in Chap. 20).
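The inequalities (3)–(5) are easy to verify numerically for concrete vectors, in the spirit of Probs. 23–25. A minimal sketch for the standard inner product on R^n (the vectors are made up, not the text's):

```python
import math

def dot(a, b):
    """Standard inner product (a, b) on R^n."""
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    """Euclidean norm ||a|| = sqrt((a, a))."""
    return math.sqrt(dot(a, a))

a = [2, -1, 3]
b = [1, 4, 0]

# Cauchy-Schwarz inequality: |(a, b)| <= ||a|| ||b||
assert abs(dot(a, b)) <= norm(a) * norm(b)

# Triangle inequality: ||a + b|| <= ||a|| + ||b||
s = [x + y for x, y in zip(a, b)]
assert norm(s) <= norm(a) + norm(b)

# Parallelogram equality: ||a+b||^2 + ||a-b||^2 = 2(||a||^2 + ||b||^2)
d = [x - y for x, y in zip(a, b)]
assert math.isclose(norm(s) ** 2 + norm(d) ** 2,
                    2 * (norm(a) ** 2 + norm(b) ** 2))
```

Such spot checks do not prove the inequalities, of course, but they make the axioms concrete before the abstract definitions are discussed.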


SOLUTIONS TO PROBLEM SET 7.9, page 318
2. Take the difference of the given representation and another representation v = k1 a(1) + … + kn a(n), obtaining Σ (cj − kj) a(j) = 0. Hence cj − kj = 0 for j = 1, …, n because of the linear independence of the basis vectors; this implies cj = kj, the uniqueness.
3. 1; [3, 2, 1]^T
4. Yes, dimension 3: because of the skew symmetry, at most three of the nine entries of such a matrix can be different (and not zero). A basis consists of the three essentially different skew-symmetric matrices having a single pair of entries 1 and −1.
6. Yes. Dimension 2. Basis cos 2x, sin 2x. Note that these functions are solutions of the ODE y″ + 4y = 0. To mention this connection with vector spaces would not have added much to our discussion in Chap. 2. Similarly for the next problem (Prob. 7).
8. No, because det (A1 + A2) ≠ det A1 + det A2 in general.
10. Yes, dimension 4. [Basis matrices]
11. x1 = 2y1 + y2, x2 = 3y1 + 0.50y2
12. The inverse transformation is obtained by calculating the inverse matrix.
14. [Transformation formulas]
15. √14
16. √21
17. √22
18. 9
19. 3/4 (0.75)
20. 1
21. 7
22. Yes. Vectors [v1 v2 v3]^T orthogonal to the given vector must satisfy 2v1 − v3 = 0. A basis of this two-dimensional vector space is [0 1 0]^T, [1 0 2]^T.
23.–24. [Numerical verifications of the Cauchy–Schwarz and triangle inequalities for the given vectors.]
SOLUTIONS TO CHAPTER 7 REVIEW QUESTIONS AND PROBLEMS, page 318
11.–20. [Matrix and determinant answers]
22. x = 1, y = t arbitrary, z = 3t + 2
24. x expressed in terms of t1 and t2; y = t1 arbitrary, z = t2 arbitrary
26. No solution
27. x = 1, y = 2
28. x = 4, with y and z determined by back substitution
30. Ranks 1, 1. The first row of the matrices equals 3 times the second row.
31. Ranks 2, 2; one solution.
32. Ranks 1, 2, so that there exists no solution.
34. I1 = 12 A, I2 = 4 A, I3 = 16 A
