Linear Algebra with Applications
SEVENTH EDITION
W. Keith Nicholson
McGraw-Hill Ryerson

Preface

This textbook is an introduction to the ideas and techniques of linear algebra for first- or second-year students with a working knowledge of high school algebra. The contents have enough flexibility to present a traditional introduction to the subject, or to allow for a more applied course. Chapters 1-4 contain a one-semester course for beginners, whereas Chapters 5-9 contain a second-semester course (see the Suggested Course Outlines below). The text is primarily about real linear algebra, with complex numbers being mentioned when appropriate (reviewed in Appendix A). Overall, the aim of the text is to achieve a balance among computational skills, theory, and applications of linear algebra. Calculus is not a prerequisite; places where it is mentioned may be omitted.

As a rule, students of linear algebra learn by studying examples and solving problems. Accordingly, the book contains a variety of exercises (over 1200, many with multiple parts), ordered as to their difficulty. In addition, more than 375 solved examples are included in the text, many of which are computational in nature. The examples are also used to motivate (and illustrate) concepts and theorems, carrying the student from concrete to abstract. While the treatment is rigorous, proofs are presented at a level appropriate to the student and may be omitted with no loss of continuity. As a result, the book can be used to give a course that emphasizes computation and examples, or to give a more theoretical treatment (some longer proofs are deferred to the end of the section).

Linear algebra has application to the natural sciences, engineering, management, and the social sciences, as well as mathematics. Consequently, 18 optional "applications" sections are included in the text, introducing topics as diverse as electrical networks, economic models, Markov chains, linear recurrences, systems of differential equations, and linear codes over finite fields. Additionally, some applications (for example, linear dynamical systems and directed graphs) are introduced in context. The applications sections appear at the end of the relevant chapters to encourage students to browse.

SUGGESTED COURSE OUTLINES

This text includes the basis for a two-semester course in linear algebra.

* Chapters 1-4 provide a standard one-semester course of 35 lectures, including linear equations, matrix algebra, determinants, diagonalization, and geometric vectors, with applications as time permits. At Calgary, we cover Sections 1.1-1.3, 2.1-2.6, 3.1-3.3, and 4.1-4.4, and the course is taken by all science and engineering students in their first semester. Prerequisites include a working knowledge of high school algebra (algebraic manipulations and some familiarity with polynomials); calculus is not required.

* Chapters 5-9 contain a second-semester course including R^n, abstract vector spaces, linear transformations (and their matrices), orthogonality, complex matrices (up to the spectral theorem), and applications. There is more material here than can be covered in one semester, and at Calgary we cover Sections 5.1-5.5, 6.1-6.4, 7.1-7.3, 8.1-8.6, and 9.1-9.3, with a couple of applications as time permits.

* Chapter 5 is a "bridging" chapter that introduces concepts like spanning, independence, and basis in the concrete setting of R^n, before venturing into the abstract in Chapter 6.
The duplication is balanced by the value of reviewing these notions, and it enables the student to focus in Chapter 6 on the new idea of an abstract system. Moreover, Chapter 5 completes the discussion of rank and diagonalization from earlier chapters, and includes a brief introduction to orthogonality in R^n, which creates the possibility of a one-semester, matrix-oriented course covering Chapters 1-5 for students not wanting to study the abstract theory.

CHAPTER DEPENDENCIES

The following chart suggests how the material introduced in each chapter draws on concepts covered in certain earlier chapters. A solid arrow means that ready assimilation of ideas and techniques presented in the later chapter depends on familiarity with the earlier chapter. A broken arrow indicates that some reference to the earlier chapter is made but the chapter need not be covered.

[Dependency chart covering: Chapter 1, Systems of Linear Equations; Chapter 2, Matrix Algebra; Chapter 3, Determinants and Diagonalization; Chapter 4, Vector Geometry; Chapter 5, The Vector Space R^n; Chapter 6, Vector Spaces; Chapter 7, Linear Transformations; Chapter 8, Orthogonality; Chapter 9, Change of Basis; Chapter 10, Inner Product Spaces; Chapter 11, Canonical Forms.]

NEW IN THE SEVENTH EDITION

* Vector notation. Based on feedback from reviewers and current users, all vectors are denoted by boldface letters (used only in abstract spaces in earlier editions). Thus x becomes a boldface x in R^2 and R^3 (Chapter 4), and in R^n the column X becomes a boldface x. Furthermore, the notation [x_1 x_2 ... x_n] for vectors in R^n has been eliminated; instead we write vectors as n-tuples (x_1, x_2, ..., x_n) or as columns. The result is a uniform notation for vectors throughout the text.

* Definitions. Important ideas and concepts are identified in their given context for the student's understanding. These are highlighted in the text when they are first discussed, identified in the left margin, and listed on the inside back cover for reference.

* Exposition. Several new margin diagrams have been included to clarify concepts, and the exposition has been improved to simplify and streamline discussion and proofs.

OTHER CHANGES

* Several new examples and exercises have been added.

* The motivation for the matrix inversion algorithm has been rewritten in Section 2.4.

* For geometric vectors in R^2, addition (parallelogram law) and scalar multiplication now appear earlier (Section 2.2). The discussion of reflections in Section 2.6 has been simplified, and projections are now included.

* The example in Section 3.3, which illustrates that x in R^2 is an eigenvector of A if and only if the line Rx is A-invariant, has been completely rewritten (see the display following this list).

* The first part of Section 4.1, on vector geometry in R^2 and R^3, has also been rewritten and shortened.

* In Section 6.4 there are three improvements: Theorem 1 now shows that an independent set can be extended to a basis by adding vectors from any prescribed basis; the proof that a spanning set can be cut down to a basis has been simplified (in Theorem 3); and in Theorem 4, the argument that independence is equivalent to spanning for a set S ⊆ V with |S| = dim V has been streamlined, and a new example added.

* In Section 8.1, the definition of projections has been clarified, as has the discussion of the nature of quadratic forms in R^2.
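For readers meeting the Section 3.3 example for the first time, the equivalence it illustrates can be stated in one line (a standard formulation, not a quotation from the text): for a nonzero vector x in R^2,

\[
A\mathbf{x} = \lambda \mathbf{x} \ \text{for some scalar } \lambda
\quad\Longleftrightarrow\quad
A(\mathbb{R}\mathbf{x}) \subseteq \mathbb{R}\mathbf{x},
\qquad \text{where } \mathbb{R}\mathbf{x} = \{\, t\mathbf{x} : t \in \mathbb{R} \,\}.
\]

That is, the eigenvectors of A are exactly the direction vectors of the A-invariant lines through the origin.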
HIGHLIGHTS OF THE TEXT

* Two-stage definition of matrix multiplication. First, in Section 2.2 matrix-vector products are introduced naturally by viewing the left side of a system of linear equations as a product. Second, matrix-matrix products are defined in Section 2.3 by taking the columns of a product AB to be A times the corresponding columns of B. This is motivated by viewing the matrix product as composition of maps (see next item). This works well pedagogically, and the usual dot-product definition follows easily. As a bonus, the proof of associativity of matrix multiplication now takes four lines.

* Matrices as transformations. Matrix-column multiplications are viewed (in Section 2.2) as transformations R^n → R^m. These maps are then used to describe simple geometric reflections and rotations in R^2 as well as systems of linear equations.

* Early linear transformations. It has been said that vector spaces exist so that linear transformations can act on them—consequently these maps are a recurring theme in the text. Motivated by the matrix transformations introduced earlier, linear transformations R^n → R^m are defined in Section 2.6, their standard matrices are derived, and they are then used to describe rotations, reflections, projections, and other operators on R^2.

* Early diagonalization. As requested by engineers and scientists, this important technique is presented in the first term using only determinants and matrix inverses (before defining independence and dimension). Applications to population growth and linear recurrences are given.

* Early dynamical systems. These are introduced in Chapter 3, and lead (via diagonalization) to applications like the possible extinction of species. Beginning students in science and engineering can relate to this because they can see (often for the first time) the relevance of the subject to the real world.

* Bridging chapter. Chapter 5 lets students deal with tough concepts (like independence, spanning, and basis) in the concrete setting of R^n before having to cope with abstract vector spaces in Chapter 6.

* Examples. The text contains over 375 worked examples, which present the main techniques of the subject, illustrate the central ideas, and are keyed to the exercises in each section.

* Exercises. The text contains a variety of exercises (nearly 1175, many with multiple parts), starting with computational problems and gradually progressing to more theoretical exercises. Exercises marked with a * have an answer at the end of the book or in the Student's Solution Manual (available online). A complete Solution Manual is available for instructors.

* Applications. There are optional applications at the end of most chapters (see the list below). While some are presented in the course of the text, most appear at the end of the relevant chapter to encourage students to browse.

* Appendices. Because complex numbers are needed in the text, they are described in Appendix A, which includes the polar form and roots of unity. Methods of proof are discussed in Appendix B, followed by mathematical induction in Appendix C. A brief discussion of polynomials is included in Appendix D. All these topics are presented at the high-school level.

* Self-Study. This text is self-contained and therefore is suitable for self-study.

* Rigour. Proofs are presented as clearly as possible (some at the end of the section), but they are optional and the instructor can choose how much he or she wants to prove. However, the proofs are there, so this text is more rigorous than most. Linear algebra provides one of the better venues where students begin to think logically and argue concisely.
To this end, there are exercises that ask the student to "show" some simple implication, and others that ask her or him to either prove a given statement or give a counterexample. I personally present a few proofs in the first-semester course and more in the second (see the Suggested Course Outlines).

* Major Theorems. Several major results are presented in the book. Examples: uniqueness of the reduced row-echelon form; the cofactor expansion for determinants; the Cayley-Hamilton theorem; the Jordan canonical form; Schur's theorem on block triangular form; the principal axis and spectral theorems; and others. Proofs are included because the stronger students should at least be aware of what is involved.

ANCILLARY MATERIALS

CONNECT

McGraw-Hill Connect™ is a web-based assignment and assessment platform that gives students the means to better connect with their coursework, with their instructors, and with the important concepts that they will need to know for success now and in the future. With Connect, instructors can deliver assignments, quizzes, and tests easily online. Students can practise important skills at their own pace and on their own schedule. With Connect, students also get 24/7 online access to an eBook—an online edition of the text—to aid them in successfully completing their work, wherever and whenever they choose.

INSTRUCTOR RESOURCES

* Instructor's Solutions Manual
* Partial Student's Solution Manual
* Computerized Test Bank

SUPERIOR LEARNING SOLUTIONS AND SUPPORT

Solutions that make a difference. Technology that fits. (MH-Campus | Connect | LearnSmart | Tegrity | Custom)

The McGraw-Hill Ryerson team is ready to help you assess and integrate any of our products, technology, and services into your course for optimal teaching and learning performance. Whether it's helping your students improve their grades, or putting your entire course online, the McGraw-Hill Ryerson team is here to help you do it. Contact your iLearning Sales Specialist today to learn how to maximize all of McGraw-Hill Ryerson's resources! For more information on the latest technology and Learning Solutions offered by McGraw-Hill Ryerson and its partners, please visit us online: www.mcgrawhill.ca/he/solutions.

CHAPTER SUMMARIES

Chapter 1: Systems of Linear Equations.

A standard treatment of Gaussian elimination is given. The rank of a matrix is introduced via the row-echelon form, and solutions to a homogeneous system are presented as linear combinations of basic solutions. Applications to network flows, electrical networks, and chemical reactions are provided.

Chapter 2: Matrix Algebra.

After a traditional look at matrix addition, scalar multiplication, and transposition in Section 2.1, matrix-vector multiplication is introduced in Section 2.2 by viewing the left side of a system of linear equations as the product Ax of the coefficient matrix A with the column x of variables. The usual dot-product definition of matrix-vector multiplication follows. Section 2.2 ends by viewing an m × n matrix A as a transformation R^n → R^m. This is illustrated for R^2 → R^2 by describing reflection in the x axis, rotation of R^2 through a given angle, shears, and so on. In Section 2.3, the product of matrices A and B is defined by AB = [Ab_1 Ab_2 ... Ab_n], where the b_j are the columns of B. A routine computation shows that this is the matrix of the transformation B followed by A.
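For instance (a small illustration with made-up entries, not an example drawn from the text):

\[
A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}, \qquad
B = \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix}, \qquad
AB = \begin{bmatrix} A\mathbf{b}_1 & A\mathbf{b}_2 \end{bmatrix}
   = \begin{bmatrix} 19 & 22 \\ 43 & 50 \end{bmatrix},
\]

since Ab_1 = (19, 43) and Ab_2 = (22, 50); the dot-product (row-times-column) rule gives the same four entries.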
This observation is used frequently throughout the book, and leads to simple, conceptual proofs of the basic axioms of matrix algebra. Note that linearity is not required—all that is needed is some basic properties of matrix-vector multiplication developed in Section 2.2. Thus the usual arcane definition of matrix multiplication is split into two well-motivated parts, each an important aspect of matrix algebra. Of course, this has the pedagogical advantage that the conceptual power of geometry can be invoked to illuminate and clarify algebraic techniques and definitions.

In Sections 2.4 and 2.5 matrix inverses are characterized, their geometrical meaning is explored, and block multiplication is introduced, emphasizing those cases needed later in the book. Elementary matrices are discussed, and the Smith normal form is derived. Then in Section 2.6, linear transformations R^n → R^m are defined and shown to be matrix transformations. The matrices of reflections, rotations, and projections in the plane are determined. Finally, matrix multiplication is related to directed graphs, matrix LU-factorization is introduced, and applications to economic models and Markov chains are presented.

Chapter 3: Determinants and Diagonalization.

The cofactor expansion is stated (proved by induction later) and used to define determinants inductively and to deduce the basic rules. The product and adjugate theorems are proved. Then the diagonalization algorithm is presented (motivated by an example about the possible extinction of a species of birds). As requested by our Engineering Faculty, this is done earlier than in most texts because it requires only determinants and matrix inverses, avoiding any need for subspaces, independence, and dimension. Eigenvectors of a 2 × 2 matrix A are described geometrically (using the A-invariance of lines through the origin). Diagonalization is then used to study discrete linear dynamical systems and to discuss applications to linear recurrences and systems of differential equations. A brief discussion of Google PageRank is included.
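A sketch of why diagonalization is the key tool here (a standard fact, stated generically rather than as it appears in the text): if A = PDP^{-1} with D diagonal, then powers of A reduce to powers of the eigenvalues,

\[
A^k = \left( P D P^{-1} \right)^k = P D^k P^{-1},
\qquad
D^k = \mathrm{diag}\!\left( \lambda_1^k, \ldots, \lambda_n^k \right),
\]

so the state v_k = A^k v_0 of a discrete linear dynamical system can be computed explicitly, and its long-term behaviour read off from the dominant eigenvalue.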
Chapter 4: Vector Geometry.

Vectors are presented intrinsically in terms of length and direction, and are related to matrices via coordinates. Then vector operations are defined using matrices and shown to be the same as the corresponding intrinsic definitions. Next, dot products and projections are introduced to solve problems about lines and planes. This leads to the cross product. Then matrix transformations are introduced in R^3, matrices of projections and reflections are derived, and areas and volumes are computed using determinants. The chapter closes with an application to computer graphics.

Chapter 5: The Vector Space R^n.

Subspaces, spanning, independence, and dimension are introduced in the context of R^n in the first two sections. Orthogonal bases are introduced and used to derive the expansion theorem. The basic properties of rank are presented and used to justify the definition given in Section 1.2. Then, after a rigorous study of diagonalization, best approximation and least squares are discussed. The chapter closes with an application to correlation and variance.

As in the sixth edition, this is a "bridging" chapter, easing the transition to abstract spaces. Concern about duplication with Chapter 6 is mitigated by the fact that this is the most difficult part of the course, and many students welcome a repeat discussion of concepts like independence and spanning, albeit in the abstract setting. In a different direction, Chapters 1-5 could serve as a solid introduction to linear algebra for students not requiring abstract theory.

Chapter 6: Vector Spaces.

Building on the work on R^n in Chapter 5, the basic theory of abstract finite-dimensional vector spaces is developed, emphasizing new examples like matrices, polynomials, and functions. This is the first acquaintance most students have had with an abstract system, so not having to deal with spanning, independence, and dimension in the general context eases the transition to abstract thinking. Applications to polynomials and to differential equations are included.

Chapter 7: Linear Transformations.

General linear transformations are introduced, motivated by many examples from geometry, matrix theory, and calculus. Then kernels and images are defined, the dimension theorem is proved, and isomorphisms are discussed. The chapter ends with an application to linear recurrences. A proof is included that the order of a differential equation (with constant coefficients) equals the dimension of the space of solutions.

Chapter 8: Orthogonality.

The study of orthogonality in R^n, begun in Chapter 5, is continued. Orthogonal complements and projections are defined and used to study orthogonal diagonalization. This leads to the principal axis theorem, the Cholesky factorization of a positive definite matrix, and QR-factorization. The theory is extended to C^n in Section 8.6, where hermitian and unitary matrices are discussed, culminating in Schur's theorem and the spectral theorem. A short proof of the Cayley-Hamilton theorem is also presented. In Section 8.7 the field Z_p of integers modulo p is constructed informally for any prime p, and codes are discussed over any finite field. The chapter concludes with applications to quadratic forms, constrained optimization, and statistical principal component analysis.

Chapter 9: Change of Basis.

The matrix of a general linear transformation is defined and studied. In the case of an operator, the relationship between basis changes and similarity is revealed. This is illustrated by computing the matrix of a rotation about a line through the origin in R^3. Finally, invariant subspaces and direct sums are introduced, related to similarity, and (as an example) used to show that every involution is similar to a diagonal matrix with diagonal entries ±1.

Chapter 10: Inner Product Spaces.

General inner products are introduced, and distance, norms, and the Cauchy-Schwarz inequality are discussed. The Gram-Schmidt algorithm is presented, projections are defined, and the approximation theorem is proved (with an application to Fourier approximation). Finally, isometries are characterized, and distance-preserving operators are shown to be composites of a translation and an isometry.

Chapter 11: Canonical Forms.

The work in Chapter 9 is continued. Invariant subspaces and direct sums are used to derive the block triangular form. That, in turn, is used to give a compact proof of the Jordan canonical form. Of course the level is higher.

Appendices

In Appendix A, complex arithmetic is developed far enough to find nth roots. In Appendix B, methods of proof are discussed, while Appendix C presents mathematical induction. Finally, Appendix D describes the properties of polynomials in elementary terms.
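For reference, the end point of Appendix A can be summarized by the standard root formula (the usual statement, not a quotation from the appendix): every nonzero complex number z = r(cos θ + i sin θ) has exactly n distinct nth roots, namely

\[
z_k = r^{1/n}\left( \cos\frac{\theta + 2\pi k}{n} + i \sin\frac{\theta + 2\pi k}{n} \right),
\qquad k = 0, 1, \ldots, n-1.
\]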
LIST OF APPLICATIONS

* Network Flow (Section 1.4)
* Electrical Networks (Section 1.5)
* Chemical Reactions (Section 1.6)
* Directed Graphs (in Section 2.3)
* Input-Output Economic Models (Section 2.8)
* Markov Chains (Section 2.9)
* Polynomial Interpolation (in Section 3.2)
* Population Growth (Examples 1 and 10, Section 3.3)
* Google PageRank (in Section 3.3)
* Linear Recurrences (Section 3.4; see also Section 7.5)
* Systems of Differential Equations (Section 3.5)
* Computer Graphics (Section 4.5)
* Least Squares Approximation (in Section 5.6)
* Correlation and Variance (Section 5.7)
* Polynomials (Section 6.5)
* Differential Equations (Section 6.6)
* Linear Recurrences (Section 7.5)
* Error Correcting Codes (Section 8.7)
* Quadratic Forms (Section 8.8)
* Constrained Optimization (Section 8.9)
* Statistical Principal Component Analysis (Section 8.10)
* Fourier Approximation (Section 10.5)

ACKNOWLEDGMENTS

Comments and suggestions that have been invaluable to the development of this edition were provided by a variety of reviewers, and I thank the following instructors:

Robert Andre, University of Waterloo
Dietrich Burbulla, University of Toronto
Dzung M. Ha, Ryerson University
Mark Solomonovich, Grant MacEwan
Fred Szabo, Concordia University
Edward Wang, Wilfrid Laurier
Petr Zizler, Mount Royal University

It is a pleasure to recognize the contributions of several people to this book. First, I would like to thank a number of colleagues who have made suggestions that have improved this edition. Discussions with Thi Dinh and Jean Springer have been invaluable, and many of their suggestions have been incorporated. Thanks are also due to Kristine Bauer and Clifton Cunningham for several conversations about the new way to look at matrix multiplication. I also wish to extend my thanks to Joanne Canape for being there when I have technical questions. Of course, thanks go to James Booty, Senior Sponsoring Editor, for his enthusiasm and effort in getting this project underway; to Sarah Fulton and Erin Catto, Developmental Editors, for their work on the editorial background of the book; and to Cathy Biribauer and Robert Templeton (First Folio Resource Group Inc.) and the rest of the production staff at McGraw-Hill Ryerson for their parts in the project. Thanks also go to Jason Nicholson for his help in various aspects of the book, particularly the Solutions Manual. Finally, I want to thank my wife, Kathleen, without whose understanding and cooperation this book would not exist.

W. Keith Nicholson
University of Calgary
