
124 - Section 1.2

The document discusses several linear algebra concepts including powers of matrices, the identity matrix, the transpose of matrices, symmetric matrices, and Gaussian elimination. Gaussian elimination is a method for solving systems of linear equations by systematically eliminating variables from each equation until an upper triangular system is obtained, allowing the solutions to be readily determined.


Applied Linear Algebra

Systems of Linear Equations 2
Overview

Students will learn about:

 Powers of Matrices
 The Identity Matrix
 The Transpose
 Symmetric Matrices
 Gaussian Elimination
Powers of Matrices

 If A is a square matrix, the associative law allows us to write AA as A², AAA as A³, and so on. In general, Aⁿ denotes the product AA⋯A of n factors.

 EXAMPLE 13 Let A be the matrix shown.

 Compute A² and A³.
 Then guess the general form of Aⁿ and confirm your guess by induction on n.
Powers of Matrices

 Solution: We find that

 A reasonable guess, therefore, is that for all natural numbers n,


Powers of Matrices

 We confirm this by induction on n. The formula is correct for n = 1. Suppose it is correct for n = k, that is,

 Then

which is precisely the result we obtain from the formula by putting n = k + 1.
 Since validity of the induction hypothesis for n = k implies validity for n = k + 1, the formula is indeed generally valid.
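Example 13's matrix is not legible in this copy. As an illustration (an assumption, not necessarily the book's example), take the classic upper-triangular matrix with Aⁿ = [[1, n], [0, 1]] and check the guessed general form numerically, just as the induction argument suggests:

```python
import numpy as np

# Hypothetical stand-in for Example 13 (the original matrix is lost):
# an upper-triangular matrix with ones on the diagonal.
A = np.array([[1, 1],
              [0, 1]])

def guessed_power(n):
    # Guessed general form: A^n = [[1, n], [0, 1]]
    return np.array([[1, n],
                     [0, 1]])

# Verify the guess for the first few powers.
for n in range(1, 6):
    assert np.array_equal(np.linalg.matrix_power(A, n), guessed_power(n))
```

The loop checks finitely many cases; only the induction argument above proves the formula for all n.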
The Identity Matrix

 The identity matrix of order n, denoted by In (or often just by I), is the n x n matrix having ones along the main diagonal and zeros elsewhere:
The Identity Matrix
 If A is any m x n matrix, it is easy to verify that AIn =A.
 Likewise, if B is any n x m matrix, then InB = B.
 In particular, AIn = InA = A (for every n x n matrix A).
 Thus, In is the matrix equivalent of 1 in the real number system. In fact, it is the only matrix with this property.
 To prove this, suppose E is an arbitrary n x n matrix such that
AE = A for all n x n matrices A. Putting A = In in particular
yields InE =In. But InE = E . So E=In .
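These identities are easy to check numerically; a minimal sketch with an arbitrary 2 x 3 matrix:

```python
import numpy as np

# Any matrix works here; this one is just for illustration.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])   # a 2 x 3 matrix
I2 = np.eye(2)                    # identity matrix of order 2
I3 = np.eye(3)                    # identity matrix of order 3

# I_m A = A and A I_n = A for an m x n matrix A.
assert np.array_equal(I2 @ A, A)
assert np.array_equal(A @ I3, A)
```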
Errors to Avoid

 The rules of matrix algebra make many arguments very easy,


but one has to be extremely careful to use only valid rules
when matrices are being multiplied.
 For instance, when expanding (A + B)(A + B) = AA + AB + BA + BB, it is tempting to simplify the expression on the right-hand side to AA + 2AB + BB.
 This is wrong! Even when AB and BA are both defined, AB is
not necessarily equal to BA. Matrix multiplication is not
commutative in general.
Errors to Avoid

 EXAMPLE 15 Let A and B be the matrices

 Show that AB≠BA


 Solution:
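Example 15's matrices are lost from this copy; any generic pair of 2 x 2 matrices shows the same thing:

```python
import numpy as np

# Stand-in matrices (not the book's), chosen to make the point.
A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

AB = A @ B
BA = B @ A
# Matrix multiplication is not commutative in general.
assert not np.array_equal(AB, BA)
```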
Errors to Avoid
 If a and b are real numbers, then ab= 0 implies that either a or b is
0. The corresponding result is not true for matrices. In fact, AB
can be the zero matrix even if neither A nor B is the zero matrix.
 EXAMPLE 16 Let
 Compute AB.
 Solution:

 For real numbers, if ab = ac and a ≠ 0, then b = c, because we can cancel a by multiplying each side of the equation by 1/a. The corresponding cancellation "rule" is not valid for matrices. Example 16 illustrates this point also: there AB equals A0 even though B is not the zero matrix, so the factor A cannot be cancelled.
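Example 16's matrices are likewise not shown in this copy; these stand-ins demonstrate that AB can be the zero matrix even though neither factor is:

```python
import numpy as np

# Stand-ins for Example 16: neither A nor B is the zero matrix...
A = np.array([[0, 1],
              [0, 0]])
B = np.array([[1, 0],
              [0, 0]])

# ...yet their product is.
AB = A @ B
assert np.array_equal(AB, np.zeros((2, 2)))
assert A.any() and B.any()   # neither factor is the zero matrix
```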
Errors to Avoid

 So we have found examples showing that, in general: AB ≠ BA, and AB = 0 does not imply that either A = 0 or B = 0.
 We see that matrix multiplication is not commutative in general,


and that the cancellation law is generally invalid for matrix
multiplication. (The cancellation law is valid if A has a so-called
inverse.)
Errors to Avoid

 EXAMPLE 17 A firm uses m raw materials R1, ..., Rm to produce the n commodities V1, ..., Vn.
 For i = 1, ..., m and j = 1, ..., n, we let aij be the quantity of raw material Ri which is needed to produce each unit of commodity Vj. These quantities form the m x n matrix A = (aij).
Errors to Avoid
 Suppose that the firm plans a monthly production of uj units of each commodity j = 1, 2, ..., n. This plan can be represented by an n x 1 matrix (column vector) u, called the firm's monthly production vector:

 Since ai1, in particular, is the amount of raw material Ri which is needed to produce one unit of commodity V1, it follows that ai1u1 is the amount of raw material Ri which is needed to produce u1 units of commodity V1. Similarly, aijuj is the amount needed for uj units of Vj (j = 2, ..., n).
Errors to Avoid
 The total monthly requirement of raw material Ri is therefore ai1u1 + ai2u2 + ··· + ainun.
 Note that this is the inner product of the i-th row vector in A and the column vector u. The firm's monthly requirement vector r for all raw materials is therefore given by the matrix product r = Au.
 Thus r is an m x 1 matrix, or a column vector.
 Suppose that the prices of the m raw materials are p1, p2, ..., pm per unit. If we define the (row) price vector p = (p1, p2, ..., pm), then the total monthly cost K of acquiring the required raw materials to produce the vector u is the sum p1r1 + p2r2 + ··· + pmrm.
 This sum can also be written as the matrix product pr. Hence, K = pr = p(Au).
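The computation r = Au and K = pr can be sketched with made-up numbers (the book gives no specific data for this example):

```python
import numpy as np

# Illustrative data only: 2 raw materials, 3 commodities.
A = np.array([[2.0, 1.0, 3.0],    # units of R1 needed per unit of V1, V2, V3
              [0.5, 4.0, 1.0]])   # units of R2 needed per unit of V1, V2, V3
u = np.array([[10.0],             # planned units of V1
              [20.0],             # planned units of V2
              [ 5.0]])            # planned units of V3
p = np.array([[3.0, 2.0]])        # price per unit of R1 and R2 (row vector)

r = A @ u            # monthly requirement vector, an m x 1 column vector
K = (p @ r).item()   # total monthly cost: K = pr = p(Au)

assert r.shape == (2, 1)
```

With these numbers r = (55, 90) and K = 3·55 + 2·90 = 345.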
The Transpose
 Consider any m x n matrix A. The transpose of A, denoted by
A', is defined as the n x m matrix whose first column is the first
row of A, whose second column is the second row of A, and so
on. Thus,

 So we can write A' = (aij'), where aij' = aji.
 The subscripts i and j have to be interchanged because the i-th row of A becomes the i-th column of A', whereas the j-th column of A becomes the j-th row of A'.
The Transpose

 EXAMPLE 19 Let

 Solution:
The Transpose

 The following rules apply to matrix transposition:
(a) (A')' = A
(b) (A + B)' = A' + B'
(c) (αA)' = αA'
(d) (AB)' = B'A'

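The standard transposition rules, (A')' = A, (A + B)' = A' + B', (αA)' = αA', and (AB)' = B'A', can be checked numerically with stand-in matrices:

```python
import numpy as np

# Stand-in matrices for a numerical check of the transposition rules.
A = np.array([[1, 2],
              [3, 4],
              [5, 6]])        # 3 x 2
B = np.array([[0, 1],
              [1, 1],
              [2, 0]])        # 3 x 2, same shape as A
C = np.array([[1, 0, 2],
              [0, 3, 1]])     # 2 x 3, so the product AC is defined

assert np.array_equal(A.T.T, A)               # (A')' = A
assert np.array_equal((A + B).T, A.T + B.T)   # (A + B)' = A' + B'
assert np.array_equal((5 * A).T, 5 * A.T)     # (alpha A)' = alpha A'
assert np.array_equal((A @ C).T, C.T @ A.T)   # (AB)' = B'A'
```

Note the reversal of the factors in the last rule: (AB)' is C'A' applied in the opposite order, which is exactly why the shapes still match.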
Symmetric Matrices

 Square matrices with the property that they are symmetric


about the main diagonal are called symmetric. For example,

are all symmetric. Symmetric matrices are characterized by the fact that they are equal to their own transposes: A is symmetric if and only if A = A'.
Symmetric Matrices

 EXAMPLE 20 If X is an arbitrary m x n matrix, show that XX' and X'X are both symmetric.
 Solution: First, note that XX' is m x m, while X'X is n x n.
 Using rules (d) and (a), we find that (X'X)' = X'(X')' = X'X.
 This proves that X'X is symmetric. The other equality is proved in a similar way.
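A quick numerical check that XX' and X'X are symmetric, with an arbitrary 2 x 3 stand-in for X:

```python
import numpy as np

# An arbitrary rectangular matrix; X X' and X' X are always symmetric.
X = np.array([[1, 2, 3],
              [4, 5, 6]])      # 2 x 3

S1 = X @ X.T   # 2 x 2
S2 = X.T @ X   # 3 x 3
assert np.array_equal(S1, S1.T)   # XX' equals its own transpose
assert np.array_equal(S2, S2.T)   # X'X equals its own transpose
```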
Gaussian Elimination

 This section explains a general method for finding all possible solutions to a linear system of equations. The method involves systematic elimination of the unknowns from each equation in turn. Because it is very efficient, it is the starting point for computer programs designed to solve large systems of linear equations. Consider first the following example.
 EXAMPLE 21 Find all possible solutions of the system
Gaussian Elimination
 Solution: We start by interchanging the first two equations,
which certainly will not alter the set of solutions. We obtain

 This has removed one variable from the second equation.


 The next step is to use the (new) first equation to eliminate the
same variable from the third equation.
 This is done by adding three times the first equation to the last equation. (The same result is obtained if we solve the first equation for the first unknown and then substitute the resulting expression into the last equation.)
Gaussian Elimination

 This gives

 The last two equations now have only x2 and x3 as unknowns.
 The next step in the systematic procedure is to divide the second equation by 2, so that the coefficient of x2 becomes 1. Thus,
Gaussian Elimination

 Next, eliminate x2 from the last equation by multiplying the


second equation by -5 and adding the result to the last
equation. This gives:

 Finally, multiply the last equation by 2/27 to obtain


Gaussian Elimination
 We see x3 at once, and then the other unknowns can easily be found: inserting x3 into the second equation gives x2, after which the first equation yields x1.
 So the given equation system has exactly one solution.
 Our elimination procedure led to a "staircase" in the system, with 1's as leading entries. In matrix notation, we have

 The matrix of coefficients on the left-hand side is upper


triangular because all entries below the main diagonal are 0.
Moreover, the diagonal elements are all 1.
Gaussian Elimination

 The solution method illustrated in this example is called Gaussian


elimination.
 The operations performed on the given system of equations in
order to arrive at the triangular system are called elementary row
operations.
 Sometimes we continue the elimination procedure until we also
obtain zeros above the leading entries.
 The elementary operations required to do this are frequently
represented as follows:
Gaussian Elimination

 The arrow and the number -1 are included to show that we are adding -1 times the second equation to the first. The result is

 At the next step we add -7/2 times the third equation to the first,
and also add 1/2 times the third equation to the second. Note
how the display indicates two operations, affecting rows 1 and
2. The result is
Gaussian Elimination

 EXAMPLE 22 Find all possible solutions of the following


system of equations:
Gaussian Elimination

 Solution: We begin with three operations to remove the first unknown from equations 2, 3, and 4:
Gaussian Elimination

 The result is

 (The -1/5 without any arrow is used to indicate that the relevant
equation is multiplied by -1/5.)
Gaussian Elimination

 Further operations give

 We have now constructed the staircase.


Gaussian Elimination

 The last two equations are superfluous, and we continue by creating zeros above the leading entries:

 From this system we conclude that the solutions of the original


system can be expressed as follows:
Gaussian Elimination
 Clearly, x3 can be chosen freely, after which the other variables are uniquely determined. We can represent the solution set as:

 We say that the solution set of the system has one degree of
freedom, since one of the variables can be freely chosen. If
this variable is given a fixed value, then the other two variables
are uniquely determined.
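The number of degrees of freedom equals the number of unknowns minus the number of leading entries, that is, minus the rank of the coefficient matrix. A quick numerical check on a stand-in system (Example 22's coefficients are not legible in this copy):

```python
import numpy as np

# Stand-in: two independent equations in three unknowns, as in
# Example 22 once the superfluous rows are dropped.
A = np.array([[1.0, 0.0,  2.0],
              [0.0, 1.0, -1.0]])

# degrees of freedom = number of unknowns - rank of the coefficient matrix
dof = A.shape[1] - np.linalg.matrix_rank(A)
assert dof == 1   # one variable (here x3) can be chosen freely
```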

 Briefly formulated, the Gaussian elimination method (often


called the Gauss-Jordan method) takes the following form:
Gaussian Elimination

GAUSSIAN ELIMINATION METHOD


 1) Make a staircase with 1 as the coefficient for each non-zero leading entry.
 2) Produce 0's above each leading entry.
 3) The general solution is found by expressing the unknowns that occur as leading entries in terms of those unknowns that do not.
 The latter unknowns (if there are any) can be chosen freely.
 The number of unknowns that can be chosen freely (possibly 0)
is the number of degrees of freedom for the system.
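The recipe can be sketched in code. This is a minimal Gauss-Jordan reduction of an augmented matrix (the helper name and the stand-in system are ours, not the book's):

```python
import numpy as np

def gauss_jordan(M, tol=1e-12):
    """Reduce the augmented matrix M to the staircase form of the recipe:
    1's as leading entries, 0's above and below each leading entry."""
    M = M.astype(float).copy()
    rows, cols = M.shape
    pivot_row = 0
    for col in range(cols - 1):            # last column is the right-hand side
        if pivot_row == rows:
            break
        # Find a row at or below pivot_row with a usable entry in this column.
        nonzero = np.where(np.abs(M[pivot_row:, col]) > tol)[0]
        if nonzero.size == 0:
            continue                       # no leading entry in this column
        r = pivot_row + nonzero[0]
        M[[pivot_row, r]] = M[[r, pivot_row]]             # interchange two rows
        M[pivot_row] = M[pivot_row] / M[pivot_row, col]   # scale leading entry to 1
        for i in range(rows):              # produce 0's above and below
            if i != pivot_row:
                M[i] = M[i] - M[i, col] * M[pivot_row]
        pivot_row += 1
    return M

# The system of Example 21 is not legible in this copy; as a stand-in,
# a 3 x 3 system with the unique solution x1 = 1, x2 = 2, x3 = 3:
aug = np.array([[1.0, 1.0, 1.0,  6.0],
                [1.0, 2.0, 3.0, 14.0],
                [2.0, 1.0, 1.0,  7.0]])
solution = gauss_jordan(aug)[:, -1]
assert np.allclose(solution, [1.0, 2.0, 3.0])
```

Each pass of the loop performs exactly the elementary row operations described above: interchange, scaling, and adding a multiple of one row to another.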
Gaussian Elimination
 This description of the recipe assumes that the system has
solutions.
 However, the Gaussian elimination method can also be used to show that a linear system of equations is inconsistent, that is, it has no solutions.
 Before showing you an example of this, let us introduce a device that considerably reduces the amount of notation needed.
 Looking back at the last two examples, we realize that we only need to know the coefficients of the system of equations and the right-hand side vector, while the variables only serve to indicate in which column the different coefficients belong.
Gaussian Elimination

 Thus, Example 22 can be represented by augmented matrices


(each with an extra column) as follows:
Gaussian Elimination
 We have performed elementary row operations on the different 4 x 4 augmented matrices, and we have used the equivalence symbol ~ between two matrices when the latter has been obtained by elementary operations on the former.
 This is justified because such operations always produce an equivalent system of equations.
 Note carefully how the system of equations in Example 22 is represented by the first matrix, and how the last matrix represents the system
Gaussian Elimination

 EXAMPLE 23 For what values of the numbers a, b, and c does the following system of equations have solutions? Find the solutions when they exist.
Gaussian Elimination

 Solution: We represent the system of equations by its


augmented matrix, then perform elementary row operations as
required by the Gaussian method:
Gaussian Elimination

 The last row represents the equation 0 = 2a - 3b + c.
 The system therefore has solutions only if 2a - 3b + c = 0. In this case the last row has only zeros, and we continue using elementary operations till we end up with the following matrix:
Gaussian Elimination

 This means that the corresponding system of equations is:


Gaussian Elimination

 Here two variables can be freely chosen. Once they have been chosen, however, the other two are uniquely determined linear functions of the two free variables.
 For 2a - 3b + c ≠ 0, the given system is inconsistent, so it has no solutions.
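Example 23's consistency condition (solutions exist only when 2a - 3b + c = 0) is an instance of a general rank test: a system Ax = b has solutions exactly when the coefficient matrix and the augmented matrix have the same rank. A sketch on a stand-in system, since the original coefficients are not legible here:

```python
import numpy as np

# Stand-in for the situation in Example 23: the third row is the sum
# of the first two, so solutions exist only when the right-hand sides
# satisfy the same relation (here c = a + b).
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [2.0, 3.0]])     # row 3 = row 1 + row 2

def consistent(A, b):
    # Ax = b has solutions iff rank(A) equals rank of the augmented matrix.
    aug = np.column_stack([A, b])
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(aug)

assert consistent(A, np.array([1.0, 2.0, 3.0]))       # c = a + b: solvable
assert not consistent(A, np.array([1.0, 2.0, 4.0]))   # c != a + b: no solutions
```

Gaussian elimination detects the same thing directly: an inconsistent system produces a row of zeros on the left with a non-zero entry on the right.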
