Sample Problems for Advanced Math for BSIE


i i “main”

2007/2/16
page 150

i i

150 CHAPTER 2 Matrices and Systems of Linear Equations


 
20. [ 3  7 10 ]
    [ 2  3 −1 ]
    [ 1  2  1 ]

21. [ 3 −3  6 ]
    [ 2 −2  4 ]
    [ 6 −6 12 ]

22. [  3  5 −12 ]
    [  2  3  −7 ]
    [ −2 −1   1 ]

23. [ 1 −1 −1 2 ]
    [ 3 −2  0 7 ]
    [ 2 −1  2 4 ]
    [ 4 −2  3 8 ]

24. [ 1 −2 1  3 ]
    [ 3 −6 2  7 ]
    [ 4 −8 3 10 ]

25. [ 0 1 2 1 ]
    [ 0 3 1 2 ]
    [ 0 2 0 1 ]

Many forms of technology have commands for performing elementary row operations on a matrix A. For example, in the linear algebra package of Maple, the three elementary row operations are

• swaprow(A, i, j): permute rows i and j
• mulrow(A, i, k): multiply row i by k
• addrow(A, i, j, k): add k times row i to row j

For Problems 26–28, use some form of technology to determine a row-echelon form of the given matrix.

26. The matrix in Problem 14.

27. The matrix in Problem 15.

28. The matrix in Problem 18.

Many forms of technology also have built-in functions for directly determining the reduced row-echelon form of a given matrix A. For example, in the linear algebra package of Maple, the appropriate command is rref(A). In Problems 29–31, use technology to determine directly the reduced row-echelon form of the given matrix.

29. The matrix in Problem 21.

30. The matrix in Problem 24.

31. The matrix in Problem 25.
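Commands like these can be mimicked in a few lines of any language. The sketch below (Python with exact rational arithmetic; the function names are simply modeled on the Maple commands, 0-based rather than 1-based, and are not part of any library) applies them to the matrix of Problem 21:

```python
from fractions import Fraction

def swaprow(A, i, j):
    """Permute rows i and j (0-based here, unlike Maple's 1-based indexing)."""
    A[i], A[j] = A[j], A[i]

def mulrow(A, i, k):
    """Multiply row i by the scalar k."""
    A[i] = [k * x for x in A[i]]

def addrow(A, i, j, k):
    """Add k times row i to row j."""
    A[j] = [y + k * x for x, y in zip(A[i], A[j])]

# Reduce the matrix of Problem 21 to row-echelon form by hand:
A = [[Fraction(v) for v in row] for row in [[3, -3, 6], [2, -2, 4], [6, -6, 12]]]
mulrow(A, 0, Fraction(1, 3))   # leading 1 in the first row
addrow(A, 0, 1, -2)            # zero below the pivot
addrow(A, 0, 2, -6)
print(A == [[1, -1, 2], [0, 0, 0], [0, 0, 0]])  # True
```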

2.5 Gaussian Elimination


We now illustrate how elementary row operations applied to the augmented matrix of a
system of linear equations can be used first to determine whether the system is consistent,
and second, if the system is consistent, to find all of its solutions. In doing so, we will
develop the general theory for linear systems of equations.

Example 2.5.1 Determine the solution set to

3x1 − 2x2 + 2x3 = 9,
x1 − 2x2 + x3 = 5,          (2.5.1)
2x1 − x2 − 2x3 = −1.

Solution: We first use elementary row operations to reduce the augmented matrix of
the system to row-echelon form.
     
[ 3 −2  2  9 ]      [ 1 −2  1  5 ]      [ 1 −2  1   5 ]
[ 1 −2  1  5 ] ∼1   [ 3 −2  2  9 ] ∼2   [ 0  4 −1  −6 ]
[ 2 −1 −2 −1 ]      [ 2 −1 −2 −1 ]      [ 0  3 −4 −11 ]

     [ 1 −2  1   5 ]      [ 1 −2  1    5 ]      [ 1 −2 1 5 ]
∼3   [ 0  1  3   5 ] ∼4   [ 0  1  3    5 ] ∼5   [ 0  1 3 5 ].
     [ 0  3 −4 −11 ]      [ 0  0 −13 −26 ]      [ 0  0 1 2 ]

1. P12 2. A12 (−3), A13 (−2) 3. A32 (−1) 4. A23 (−3) 5. M3 (−1/13)

The system corresponding to this row-echelon form of the augmented matrix is

x1 − 2x2 + x3 = 5, (2.5.2)
x2 + 3x3 = 5, (2.5.3)
x3 = 2, (2.5.4)

which can be solved by back substitution. From Equation (2.5.4), x3 = 2. Substituting
into Equation (2.5.3) and solving for x2 , we find that x2 = −1. Finally, substituting into
Equation (2.5.2) for x3 and x2 and solving for x1 yields x1 = 1. Thus, our original system
of equations has the unique solution (1, −1, 2), and the solution set to the system is

S = {(1, −1, 2)},

which is a subset of R3 . 
The process of reducing the augmented matrix to row-echelon form and then using
back substitution to solve the equivalent system is called Gaussian elimination. The
particular case of Gaussian elimination that arises when the augmented matrix is reduced
to reduced row-echelon form is called Gauss-Jordan elimination.
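The back-substitution step is mechanical enough to sketch in a few lines of code. The following Python fragment (an illustration, not part of the text) solves the row-echelon system (2.5.2)–(2.5.4) of Example 2.5.1, assuming every pivot is a leading 1:

```python
def back_substitute(U, c):
    """Solve an upper-triangular system Ux = c with unit pivots by back substitution."""
    n = len(c)
    x = [0] * n
    for i in range(n - 1, -1, -1):          # work upward from the last equation
        x[i] = c[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))
    return x

# The row-echelon system (2.5.2)-(2.5.4) of Example 2.5.1:
U = [[1, -2, 1], [0, 1, 3], [0, 0, 1]]
c = [5, 5, 2]
print(back_substitute(U, c))  # [1, -1, 2]
```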

Example 2.5.2 Use Gauss-Jordan elimination to determine the solution set to

x1 + 2x2 − x3 = 1,
2x1 + 5x2 − x3 = 3,
x1 + 3x2 + 2x3 = 6.

Solution: In this case, we first reduce the augmented matrix of the system to reduced
row-echelon form.
         
[ 1 2 −1 1 ]      [ 1 2 −1 1 ]      [ 1 0 −3 −1 ]      [ 1 0 −3 −1 ]      [ 1 0 0  5 ]
[ 2 5 −1 3 ] ∼1   [ 0 1  1 1 ] ∼2   [ 0 1  1  1 ] ∼3   [ 0 1  1  1 ] ∼4   [ 0 1 0 −1 ]
[ 1 3  2 6 ]      [ 0 1  3 5 ]      [ 0 0  2  4 ]      [ 0 0  1  2 ]      [ 0 0 1  2 ]

1. A12 (−2), A13 (−1) 2. A21 (−2), A23 (−1) 3. M3 (1/2) 4. A31 (3), A32 (−1)

The augmented matrix is now in reduced row-echelon form. The equivalent system is

x1 = 5,
x2 = −1,
x3 = 2,

and the solution can be read off directly as (5, −1, 2). Consequently, the given system
has solution set
S = {(5, −1, 2)}
in R3 . 
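For readers who want to experiment, here is a minimal sketch of Gauss-Jordan reduction in Python (exact rational arithmetic via `fractions`; this is an illustrative implementation, not the text's code). Applied to the augmented matrix of Example 2.5.2 it reproduces the reduced row-echelon form found above:

```python
from fractions import Fraction

def rref(M):
    """Reduce a matrix to reduced row-echelon form by Gauss-Jordan elimination."""
    A = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(A), len(A[0])
    r = 0                                   # next pivot row
    for c in range(cols):
        # find a nonzero entry in column c at or below row r
        pivot = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if pivot is None:
            continue
        A[r], A[pivot] = A[pivot], A[r]     # move the pivot row up
        A[r] = [x / A[r][c] for x in A[r]]  # scale to get a leading 1
        for i in range(rows):               # clear the column above and below
            if i != r and A[i][c] != 0:
                A[i] = [y - A[i][c] * x for x, y in zip(A[r], A[i])]
        r += 1
    return A

# Augmented matrix of Example 2.5.2:
print(rref([[1, 2, -1, 1], [2, 5, -1, 3], [1, 3, 2, 6]]))
```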
We see from the preceding two examples that the advantage of Gauss-Jordan elim-
ination over Gaussian elimination is that it does not require back substitution. However,
the disadvantage is that reducing the augmented matrix to reduced row-echelon form
requires more elementary row operations than reduction to row-echelon form. It can be

shown, in fact, that in general, Gaussian elimination is the more computationally effi-
cient technique. As we will see in the next section, the main reason for introducing the
Gauss-Jordan method is its application to the computation of the inverse of an n × n
matrix.

Remark The Gaussian elimination method is so systematic that it can be programmed
easily on a computer. Indeed, many large-scale programs for solving linear systems are
based on the row-reduction method.
In both of the preceding examples,

rank(A) = rank(A# ) = number of unknowns in the system

and the system had a unique solution. More generally, we have the following lemma:

Lemma 2.5.3 Consider the m × n linear system Ax = b. Let A# denote the augmented matrix of the
system. If rank(A) = rank(A# ) = n, then the system has a unique solution.

Proof If rank(A) = rank(A# ) = n, then there are n leading ones in any row-echelon
form of A, hence back substitution gives a unique solution. The form of the row-echelon
form of A# is shown below, with m − n rows of zeros at the bottom of the matrix omitted
and where the ∗’s denote unknown elements of the row-echelon form.
 
[ 1 ∗ ∗ ∗ · · · ∗ ∗ ]
[ 0 1 ∗ ∗ · · · ∗ ∗ ]
[ 0 0 1 ∗ · · · ∗ ∗ ]
[ ⋮ ⋮ ⋮ ⋮        ⋮ ]
[ 0 0 0 0 · · · 1 ∗ ]

Note that rank(A) cannot exceed rank(A# ). Thus, there are only two possibilities
for the relationship between rank(A) and rank(A# ): rank(A) < rank(A# ) or rank(A) =
rank(A# ). We now consider what happens in these cases.

Example 2.5.4 Determine the solution set to

x1 + x2 − x3 + x4 = 1,
2x1 + 3x2 + x3 = 4,
3x1 + 5x2 + 3x3 − x4 = 5.

Solution: We use elementary row operations to reduce the augmented matrix:


     
[ 1 1 −1  1 1 ]      [ 1 1 −1  1 1 ]      [ 1 1 −1  1  1 ]
[ 2 3  1  0 4 ] ∼1   [ 0 1  3 −2 2 ] ∼2   [ 0 1  3 −2  2 ]
[ 3 5  3 −1 5 ]      [ 0 2  6 −4 2 ]      [ 0 0  0  0 −2 ]

1. A12 (−2), A13 (−3) 2. A23 (−2)

The last row tells us that the system of equations has no solution (that is, it is inconsistent),
since it requires

0x1 + 0x2 + 0x3 + 0x4 = −2,

which is clearly impossible. The solution set to the system is thus the empty set ∅. 

In the previous example, rank(A) = 2, whereas rank(A# ) = 3. Thus, rank(A) <
rank(A# ), and the corresponding system has no solution. Next we establish that this
result is true in general.

Lemma 2.5.5 Consider the m × n linear system Ax = b. Let A# denote the augmented matrix of the
system. If rank(A) < rank(A# ), then the system is inconsistent.

Proof If rank(A) < rank(A# ), then there will be one row in the reduced row-echelon
form of the augmented matrix whose first nonzero element arises in the last column.
Such a row corresponds to an equation of the form

0x1 + 0x2 + · · · + 0xn = 1,

which has no solution. Consequently, the system is inconsistent.


Finally, we consider the case when rank(A) = rank(A# ). If rank(A) = n, we have
already seen in Lemma 2.5.3 that the system has a unique solution. We now consider an
example in which rank(A) < n.

Example 2.5.6 Determine the solution set to

5x1 − 6x2 + x3 = 4,
2x1 − 3x2 + x3 = 1,          (2.5.5)
4x1 − 3x2 − x3 = 5.

Solution: We begin by reducing the augmented matrix of the system.


     
[ 5 −6  1 4 ]      [ 1 −3  2 −1 ]      [ 1 −3  2 −1 ]
[ 2 −3  1 1 ] ∼1   [ 2 −3  1  1 ] ∼2   [ 0  3 −3  3 ]
[ 4 −3 −1 5 ]      [ 4 −3 −1  5 ]      [ 0  9 −9  9 ]

     [ 1 −3  2 −1 ]      [ 1 −3  2 −1 ]
∼3   [ 0  1 −1  1 ] ∼4   [ 0  1 −1  1 ].
     [ 0  9 −9  9 ]      [ 0  0  0  0 ]

1. A31 (−1) 2. A12 (−2), A13 (−4) 3. M2 (1/3) 4. A23 (−9)

The augmented matrix is now in row-echelon form, and the equivalent system is

x1 − 3x2 + 2x3 = −1,          (2.5.6)
x2 − x3 = 1.          (2.5.7)

Since we have three variables, but only two equations relating them, we are free to
specify one of the variables arbitrarily. The variable that we choose to specify is called
a free variable or free parameter. The remaining variables are then determined by
the system of equations and are called bound variables or bound parameters. In the
foregoing system, we take x3 as the free variable and set

x3 = t,

where t can assume any real value.5 It follows from (2.5.7) that

x2 = 1 + t.
5 When considering systems of equations with complex coefficients, we allow free variables to assume
complex values as well.

Further, from Equation (2.5.6),

x1 = −1 + 3(1 + t) − 2t = 2 + t.

Thus the solution set to the given system of equations is the following subset of R3 :

S = {(2 + t, 1 + t, t) : t ∈ R}.

The system has an infinite number of solutions, obtained by allowing the parameter t to
assume all real values. For example, two particular solutions of the system are

(2, 1, 0) and (0, −1, −2),

corresponding to t = 0 and t = −2, respectively. Note that we can also write the solution
set S above in the form

S = {(2, 1, 0) + t (1, 1, 1) : t ∈ R}. 
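A quick way to gain confidence in a parametric solution is to substitute it back into the original equations. The short Python check below (illustrative only) confirms that every point of the line (2 + t, 1 + t, t) satisfies all three equations of system (2.5.5):

```python
# Substitute the parametric solution (2 + t, 1 + t, t) into system (2.5.5).
def residuals(t):
    x1, x2, x3 = 2 + t, 1 + t, t
    return (5*x1 - 6*x2 + x3 - 4,    # each equation minus its right-hand side
            2*x1 - 3*x2 + x3 - 1,
            4*x1 - 3*x2 - x3 - 5)

for t in (-2, 0, 7):
    print(residuals(t))  # (0, 0, 0) every time
```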

Remark The geometry of the foregoing solution is as follows. The given system
(2.5.5) can be interpreted as consisting of three planes in 3-space. Any solution to the
system gives the coordinates of a point of intersection of the three planes. In the preceding
example the planes intersect in a line whose parametric equations are

x1 = 2 + t, x2 = 1 + t, x3 = t.

(See Figure 2.3.1.)


In general, the solution to a consistent m × n system of linear equations may involve
more than one free variable. Indeed, the number of free variables will depend on how
many nonzero rows arise in any row-echelon form of the augmented matrix, A# , of the
system; that is, it will depend on the rank of A# . More precisely, if rank(A# ) = r # , then the
equivalent system will have only r # relationships between the n variables. Consequently,
provided the system is consistent,

number of free variables = n − r # .

We therefore have the following lemma.

Lemma 2.5.7 Consider the m × n linear system Ax = b. Let A# denote the augmented matrix of the
system and let r # = rank(A# ). If r # = rank(A) < n, then the system has an infinite
number of solutions, indexed by n − r # free variables.

Proof As discussed before, any row-echelon equivalent system will have only r # equa-
tions involving the n variables, and so there will be n − r # > 0 free variables. If we
assign arbitrary values to these free variables, then the remaining r # variables will be
uniquely determined, by back substitution, from the system. Since each free variable can
assume infinitely many values, in this case there are an infinite number of solutions to
the system.

Example 2.5.8 Use Gaussian elimination to solve

x1 − 2x2 + 2x3 − x4 = 3,
3x1 + x2 + 6x3 + 11x4 = 16,
2x1 − x2 + 4x3 + 4x4 = 9.

Solution: A row-echelon form of the augmented matrix of the system is


 
[ 1 −2 2 −1 3 ]
[ 0  1 0  2 1 ],
[ 0  0 0  0 0 ]

so that we have two free variables. The equivalent system is

x1 − 2x2 + 2x3 − x4 = 3,          (2.5.8)
x2 + 2x4 = 1.          (2.5.9)

Notice that we cannot choose any two variables freely. For example, from Equation
(2.5.9), we cannot specify both x2 and x4 independently. The bound variables should be
taken as those that correspond to leading 1s in the row-echelon form of A# , since these
are the variables that can always be determined by back substitution (they appear as the
leftmost variable in some equation of the system corresponding to the row echelon form
of the augmented matrix).

Choose as free variables those variables that do not correspond to a leading 1
in a row-echelon form of A# .

Applying this rule to Equations (2.5.8) and (2.5.9), we choose x3 and x4 as free variables
and therefore set

x3 = s, x4 = t.

It then follows from Equation (2.5.9) that

x2 = 1 − 2t.

Substitution into (2.5.8) yields

x1 = 5 − 2s − 3t,

so that the solution set to the given system is the following subset of R4 :

S = {(5 − 2s − 3t, 1 − 2t, s, t) : s, t ∈ R}
  = {(5, 1, 0, 0) + s(−2, 0, 1, 0) + t (−3, −2, 0, 1) : s, t ∈ R}.

Lemmas 2.5.3, 2.5.5, and 2.5.7 completely characterize the solution properties of
an m × n linear system. Combining the results of these three lemmas gives the next
theorem.

Theorem 2.5.9 Consider the m × n linear system Ax = b. Let r denote the rank of A, and let r # denote
the rank of the augmented matrix of the system. Then

1. If r < r # , the system is inconsistent.


2. If r = r # , the system is consistent and

(a) There exists a unique solution if and only if r # = n.


(b) There exists an infinite number of solutions if and only if r # < n.
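Theorem 2.5.9 translates directly into a small rank-comparison procedure. The Python sketch below (illustrative, with exact rational arithmetic; `rank` and `classify` are hypothetical helper names, not library functions) classifies the three examples of this section:

```python
from fractions import Fraction

def rank(M):
    """Rank of a matrix, computed by Gaussian elimination in exact arithmetic."""
    A = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(A), len(A[0])
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(r + 1, rows):
            factor = A[i][c] / A[r][c]
            A[i] = [y - factor * x for x, y in zip(A[r], A[i])]
        r += 1
    return r

def classify(A, b):
    """Theorem 2.5.9: compare rank(A), rank(A#), and the number of unknowns n."""
    n = len(A[0])
    augmented = [row + [rhs] for row, rhs in zip(A, b)]
    r, r_aug = rank(A), rank(augmented)
    if r < r_aug:
        return "inconsistent"
    return "unique solution" if r_aug == n else "infinitely many solutions"

# The three examples of this section:
print(classify([[3, -2, 2], [1, -2, 1], [2, -1, -2]], [9, 5, -1]))        # Example 2.5.1
print(classify([[1, 1, -1, 1], [2, 3, 1, 0], [3, 5, 3, -1]], [1, 4, 5]))  # Example 2.5.4
print(classify([[5, -6, 1], [2, -3, 1], [4, -3, -1]], [4, 1, 5]))         # Example 2.5.6
```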

Homogeneous Linear Systems


Many problems that we will meet in the future will require the solution to a homogeneous
system of linear equations. The general form for such a system is

a11 x1 + a12 x2 + · · · + a1n xn = 0,
a21 x1 + a22 x2 + · · · + a2n xn = 0,
                ...                          (2.5.10)
am1 x1 + am2 x2 + · · · + amn xn = 0,

or, in matrix form, Ax = 0, where A is the coefficient matrix of the system and 0 denotes
the m-vector whose elements are all zeros.

Corollary 2.5.10 The homogeneous linear system Ax = 0 is consistent for any coefficient matrix A, with
a solution given by x = 0.

Proof We can see immediately from (2.5.10) that if x = 0, then Ax = 0, so x = 0 is a
solution to the homogeneous linear system.
Alternatively, we can deduce the consistency of this system from Theorem 2.5.9 as
follows. The augmented matrix A# of a homogeneous linear system differs from that of
the coefficient matrix A only by the addition of a column of zeros, a feature that does
not affect the rank of the matrix. Consequently, for a homogeneous system, we have
rank(A# ) = rank(A), and therefore, from Theorem 2.5.9, such a system is necessarily
consistent.

Remarks
1. The solution x = 0 is referred to as the trivial solution. Consequently, from
Theorem 2.5.9, a homogeneous system either has only the trivial solution or has
an infinite number of solutions (one of which must be the trivial solution).
2. Once more it is worth mentioning the geometric interpretation of Corollary 2.5.10
in the case of a homogeneous system with three unknowns. We can regard each
equation of such a system as defining a plane. Owing to the homogeneity, each
plane passes through the origin, hence the planes intersect at least at the origin.

Often we will be interested in determining whether a given homogeneous system
has an infinite number of solutions, and not in actually obtaining the solutions. The
following corollary to Theorem 2.5.9 can sometimes be used to determine by inspection
whether a given homogeneous system has nontrivial solutions:

Corollary 2.5.11 A homogeneous system of m linear equations in n unknowns, with m < n, has an infinite
number of solutions.

Proof Let r and r # be as in Theorem 2.5.9. Using the fact that r = r # for a homogeneous
system, we see that since r # ≤ m < n, Theorem 2.5.9 implies that the system has an
infinite number of solutions.

Remark If m ≥ n, then we may or may not have nontrivial solutions, depending on
whether the rank of the augmented matrix, r # , satisfies r # < n or r # = n, respectively.
We encourage the reader to construct linear systems that illustrate each of these two
possibilities.
We encourage the reader to construct linear systems that illustrate each of these two
possibilities.
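As a concrete instance of Corollary 2.5.11, consider the made-up homogeneous system x1 + x2 + x3 = 0, x1 − x2 + 2x3 = 0, with m = 2 < n = 3. Eliminating by hand leaves one free variable and the one-parameter family t(−3, 1, 2), which the short Python check below (illustrative only) verifies:

```python
# Two homogeneous equations in three unknowns (m = 2 < n = 3):
#   x1 + x2 + x3 = 0,   x1 - x2 + 2x3 = 0.
# Elimination leaves one free variable, giving the family t(-3, 1, 2).
def solution(t):
    return (-3*t, t, 2*t)

for t in (1, -4):
    x1, x2, x3 = solution(t)
    assert x1 + x2 + x3 == 0 and x1 - x2 + 2*x3 == 0
print("one nontrivial solution:", solution(1))
```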


 
Example 2.5.12 Determine the solution set to Ax = 0, if

A = [ 0 2  3 ]
    [ 0 1 −1 ].
    [ 0 3  7 ]
Solution: The augmented matrix of the system is
 
[ 0 2  3 0 ]
[ 0 1 −1 0 ],
[ 0 3  7 0 ]

with reduced row-echelon form


 
[ 0 1 0 0 ]
[ 0 0 1 0 ].
[ 0 0 0 0 ]

The equivalent system is

x2 = 0,
x3 = 0.

It is tempting, but incorrect, to conclude from this that the solution to the system
is x1 = x2 = x3 = 0. Since x1 does not occur in the system, it is a free variable and
therefore not necessarily zero. Consequently, the correct solution to the foregoing system
is (r, 0, 0), where r is a free variable, and the solution set is {(r, 0, 0) : r ∈ R}. 
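The pitfall is easy to test numerically: since x1 never appears in the system, (r, 0, 0) must satisfy Ax = 0 for every r. A short Python check (illustrative only):

```python
# x1 never appears in the system, so (r, 0, 0) solves Ax = 0 for every r.
A = [[0, 2, 3], [0, 1, -1], [0, 3, 7]]

def matvec(M, x):
    """Multiply a matrix by a vector, row by row."""
    return [sum(a * b for a, b in zip(row, x)) for row in M]

for r in (0, 1, -5):
    assert matvec(A, [r, 0, 0]) == [0, 0, 0]
print("(r, 0, 0) is a solution for every r")
```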
The linear systems that we have so far encountered have all had real coefficients, and
we have considered corresponding real solutions. The techniques that we have developed
for solving linear systems are also applicable to the case when our system has complex
coefficients. The corresponding solutions will also be complex.

Remark In general, the simplest method of putting a leading 1 in a position that
contains the complex number a + ib is to multiply the corresponding row by the scalar
(a − ib)/(a^2 + b^2). This is illustrated in steps 1 and 4 in the next example. If difficulties
are encountered, consultation of Appendix A is in order.

Example 2.5.13 Determine the solution set to

(1 + 2i)x1 + 4x2 + (3 + i)x3 = 0,
(2 − i)x1 + (1 + i)x2 + 3x3 = 0,
5ix1 + (7 − i)x2 + (3 + 2i)x3 = 0.

Solution: We reduce the augmented matrix of the system.


   
[ 1 + 2i    4      3 + i   0 ]      [   1    (4/5)(1 − 2i)   1 − i   0 ]
[ 2 − i   1 + i      3     0 ] ∼1   [ 2 − i      1 + i         3     0 ]
[  5i     7 − i    3 + 2i  0 ]      [  5i        7 − i       3 + 2i  0 ]

     [ 1  (4/5)(1 − 2i)                    1 − i                 0 ]
∼2   [ 0  (1 + i) − (4/5)(1 − 2i)(2 − i)   3 − (1 − i)(2 − i)    0 ]
     [ 0  (7 − i) − 4i(1 − 2i)             (3 + 2i) − 5i(1 − i)  0 ]

  [ 1  (4/5)(1 − 2i)   1 − i   0 ]      [ 1  (4/5)(1 − 2i)   1 − i   0 ]
= [ 0     1 + 5i       2 + 3i  0 ] ∼3   [ 0     1 + 5i       2 + 3i  0 ]
  [ 0    −1 − 5i      −2 − 3i  0 ]      [ 0       0             0    0 ]

     [ 1  (4/5)(1 − 2i)        1 − i         0 ]
∼4   [ 0       1         (1/26)(17 − 7i)     0 ].
     [ 0       0               0             0 ]

1. M1 ((1 − 2i)/5)  2. A12 (−(2 − i)), A13 (−5i)  3. A23 (1)  4. M2 ((1 − 5i)/26)

This matrix is now in row-echelon form. The equivalent system is

x1 + (4/5)(1 − 2i)x2 + (1 − i)x3 = 0,
x2 + (1/26)(17 − 7i)x3 = 0.

There is one free variable, which we take to be x3 = t, where t can assume any complex
value. Applying back substitution yields

x2 = (1/26)(−17 + 7i)t,
x1 = −(2/65)(1 − 2i)(−17 + 7i)t − (1 − i)t
   = −(1/65)(59 + 17i)t,

so that the solution set to the system is the subset of C3

{(−(1/65)(59 + 17i)t, (1/26)(−17 + 7i)t, t) : t ∈ C}.
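Python's built-in complex arithmetic offers a convenient spot-check of this solution (illustrative only; with floating-point coefficients the residuals are zero only up to roundoff):

```python
# Substitute the solution family of Example 2.5.13 back into the system.
def solution(t):
    x1 = -(59 + 17j) * t / 65
    x2 = (-17 + 7j) * t / 26
    return x1, x2, t

x1, x2, x3 = solution(2 - 1j)        # any complex value of the parameter t
residuals = [
    (1 + 2j)*x1 + 4*x2 + (3 + 1j)*x3,
    (2 - 1j)*x1 + (1 + 1j)*x2 + 3*x3,
    5j*x1 + (7 - 1j)*x2 + (3 + 2j)*x3,
]
print(all(abs(res) < 1e-12 for res in residuals))  # True
```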

Exercises for 2.5


Key Terms
Gaussian elimination, Gauss-Jordan elimination, Free variables, Bound (or leading) variables, Trivial solution.

Skills
• Be able to solve a linear system of equations by Gaussian elimination and by Gauss-Jordan elimination.
• Be able to identify free variables and bound variables and know how they are used to construct the solution set to a linear system.
• Understand the relationship between the ranks of A and A# , and how this affects the number of solutions to a linear system.

True-False Review
For Questions 1–6, decide if the given statement is true or false, and give a brief justification for your answer. If true, you can quote a relevant definition or theorem from the text. If false, provide an example, illustration, or brief explanation of why the statement is false.

1. The process by which a matrix is brought via elementary row operations to row-echelon form is known as Gauss-Jordan elimination.

2. A homogeneous linear system of equations is always consistent.

3. For a linear system Ax = b, every column of the row-echelon form of A corresponds to either a bound variable or a free variable, but not both, of the linear system.

4. A linear system Ax = b is consistent if and only if the last column of the row-echelon form of the augmented matrix [A b] is not a pivot column.

5. A linear system is consistent if and only if there are free variables in the row-echelon form of the corresponding augmented matrix.

6. The columns of the row-echelon form of A# that contain the leading 1s correspond to the free variables.

Problems
For Problems 1–9, use Gaussian elimination to determine the solution set to the given system.

1. x1 + 2x2 + x3 = 1,
3x1 + 5x2 + x3 = 3,
2x1 + 6x2 + 7x3 = 1.

2. 3x1 − x2 = 1,
2x1 + x2 + 5x3 = 4,
7x1 − 5x2 − 8x3 = −3.

3. 3x1 + 5x2 − x3 = 14,
x1 + 2x2 + x3 = 3,
2x1 + 5x2 + 6x3 = 2.

4. 6x1 − 3x2 + 3x3 = 12,
2x1 − x2 + x3 = 4,
−4x1 + 2x2 − 2x3 = −8.

5. 2x1 − x2 + 3x3 = 14,
3x1 + x2 − 2x3 = −1,
7x1 + 2x2 − 3x3 = 3,
5x1 − x2 − 2x3 = 5.

6. 2x1 − x2 − 4x3 = 5,
3x1 + 2x2 − 5x3 = 8,
5x1 + 6x2 − 6x3 = 20,
x1 + x2 − 3x3 = −3.

7. x1 + 2x2 − x3 + x4 = 1,
2x1 + 4x2 − 2x3 + 2x4 = 2,
5x1 + 10x2 − 5x3 + 5x4 = 5.

8. x1 + 2x2 − x3 + x4 = 1,
2x1 − 3x2 + x3 − x4 = 2,
x1 − 5x2 + 2x3 − 2x4 = 1,
4x1 + x2 − x3 + x4 = 3.

9. x1 + 2x2 + x3 + x4 − 2x5 = 3,
x3 + 4x4 − 3x5 = 2,
2x1 + 4x2 − x3 − 10x4 + 5x5 = 0.

For Problems 10–15, use Gauss-Jordan elimination to determine the solution set to the given system.

10. 2x1 − x2 − x3 = 2,
4x1 + 3x2 − 2x3 = −1,
x1 + 4x2 + x3 = 4.

11. 3x1 + x2 + 5x3 = 2,
x1 + x2 − x3 = 1,
2x1 + x2 + 2x3 = 3.

12. x1 − 2x3 = −3,
3x1 − 2x2 − 4x3 = −9,
x1 − 4x2 + 2x3 = −3.

13. 2x1 − x2 + 3x3 − x4 = 3,
3x1 + 2x2 + x3 − 5x4 = −6,
x1 − 2x2 + 3x3 + x4 = 6.

14. x1 + x2 + x3 − x4 = 4,
x1 − x2 − x3 − x4 = 2,
x1 + x2 − x3 + x4 = −2,
x1 − x2 + x3 + x4 = −8.

15. 2x1 − x2 + 3x3 + x4 − x5 = 11,
x1 − 3x2 − 2x3 − x4 − 2x5 = 2,
3x1 + x2 − 2x3 − x4 + x5 = −2,
x1 + 2x2 + x3 + 2x4 + 3x5 = −3,
5x1 − 3x2 − 3x3 + x4 + 2x5 = 2.

For Problems 16–20, determine the solution set to the system Ax = b for the given coefficient matrix A and right-hand side vector b.

16. A = [ 1 −3  1 ]      b = [  8 ]
        [ 5 −4  1 ],         [ 15 ].
        [ 2  4 −3 ]          [ −4 ]

17. A = [ 1  0  5 ]      b = [ 0 ]
        [ 3 −2 11 ],         [ 2 ].
        [ 2 −2  6 ]          [ 2 ]

18. A = [ 0 1 −1 ]       b = [ −2 ]
        [ 0 5  1 ],          [  8 ].
        [ 0 2  1 ]           [  5 ]



   
19. A = [ 1 −1 0 −1 ]      b = [ 2 ]
        [ 2  1 3  7 ],         [ 2 ].
        [ 3 −2 1  0 ]          [ 4 ]

20. A = [  1 1  0  1 ]     b = [  2 ]
        [  3 1 −2  3 ],        [  8 ].
        [  2 3  1  2 ]         [  3 ]
        [ −2 3  5 −2 ]         [ −9 ]

21. Determine all values of the constant k for which the following system has (a) no solution, (b) an infinite number of solutions, and (c) a unique solution.

x1 + 2x2 − x3 = 3,
2x1 + 5x2 + x3 = 7,
x1 + x2 − k^2 x3 = −k.

22. Determine all values of the constant k for which the following system has (a) no solution, (b) an infinite number of solutions, and (c) a unique solution.

2x1 + x2 − x3 + x4 = 0,
x1 + x2 + x3 − x4 = 0,
4x1 + 2x2 − x3 + x4 = 0,
3x1 − x2 + x3 + kx4 = 0.

23. Determine all values of the constants a and b for which the following system has (a) no solution, (b) an infinite number of solutions, and (c) a unique solution.

x1 + x2 − 2x3 = 4,
3x1 + 5x2 − 4x3 = 16,
2x1 + 3x2 − ax3 = b.

24. Determine all values of the constants a and b for which the following system has (a) no solution, (b) an infinite number of solutions, and (c) a unique solution.

x1 − ax2 = 3,
2x1 + x2 = 6,
−3x1 + (a + b)x2 = 1.

25. Show that the system

x1 + x2 + x3 = y1,
2x1 + 3x2 + x3 = y2,
3x1 + 5x2 + x3 = y3,

has an infinite number of solutions, provided that (y1, y2, y3) lies on the plane whose equation is y1 − 2y2 + y3 = 0.

26. Consider the system of linear equations

a11 x1 + a12 x2 = b1,
a21 x1 + a22 x2 = b2.

Define Δ, Δ1, and Δ2 by

Δ = a11 a22 − a12 a21,
Δ1 = a22 b1 − a12 b2,
Δ2 = a11 b2 − a21 b1.

(a) Show that the given system has a unique solution if and only if Δ ≠ 0, and that the unique solution in this case is x1 = Δ1/Δ, x2 = Δ2/Δ.

(b) If Δ = 0 and a11 ≠ 0, determine the conditions on Δ2 that would guarantee that the system has (i) no solution, (ii) an infinite number of solutions.

(c) Interpret your results in terms of intersections of straight lines.

Gaussian elimination with partial pivoting uses the following algorithm to reduce the augmented matrix:

1. Start with augmented matrix A# .
2. Determine the leftmost nonzero column.
3. Permute rows to put the element of largest absolute value in the pivot position.
4. Use elementary row operations to put zeros beneath the pivot position.
5. If there are no more nonzero rows below the pivot position, go to 7; otherwise go to 6.
6. Apply (2)–(5) to the submatrix consisting of the rows that lie below the pivot position.
7. The matrix is in reduced form.6

In Problems 27–30, use the preceding algorithm to reduce A# and then apply back substitution to solve the equivalent system. Technology might be useful in performing the required row operations.

27. The system in Problem 1.

28. The system in Problem 5.

29. The system in Problem 6.

30. The system in Problem 10.

6 Notice that this reduced form is not a row-echelon matrix.
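The partial-pivoting algorithm above can be sketched in Python as follows (an illustration, not the text's code; as the footnote observes, the reduced form's pivots need not equal 1, so the back-substitution step divides by them). Applied to the system of Problem 1 it returns the solution (−2, 2, −1):

```python
def reduce_partial_pivoting(aug):
    """Steps 1-6 above: row-reduce an augmented matrix with partial pivoting."""
    A = [row[:] for row in aug]            # work on a copy
    m, n = len(A), len(A[0])
    r = 0                                  # index of the current pivot row
    for c in range(n - 1):                 # never pivot on the right-hand-side column
        piv = max(range(r, m), key=lambda i: abs(A[i][c]))  # step 3
        if A[piv][c] == 0:
            continue                       # step 2: column has no usable entry
        A[r], A[piv] = A[piv], A[r]
        for i in range(r + 1, m):          # step 4: zeros beneath the pivot
            factor = A[i][c] / A[r][c]
            A[i] = [y - factor * x for x, y in zip(A[r], A[i])]
        r += 1
        if r == m:                         # step 5: no rows left below the pivot
            break
    return A

def solve_back(A):
    """Back substitution on the reduced form; the pivots need not be 1 here."""
    n = len(A)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (A[i][n] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

# Problem 1: x1 + 2x2 + x3 = 1, 3x1 + 5x2 + x3 = 3, 2x1 + 6x2 + 7x3 = 1.
R = reduce_partial_pivoting([[1.0, 2, 1, 1], [3, 5, 1, 3], [2, 6, 7, 1]])
print([round(v, 9) for v in solve_back(R)])  # [-2.0, 2.0, -1.0]
```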


31. (a) An n × n system of linear equations whose matrix of coefficients is a lower triangular matrix is called a lower triangular system. Assuming that aii ≠ 0 for each i, devise a method for solving such a system that is analogous to the back-substitution method.

(b) Use your method from (a) to solve

x1 = 2,
2x1 − 3x2 = 1,
3x1 + x2 − x3 = 8.

32. Find all solutions to the following nonlinear system of equations:

4x1^3 + 2x2^2 + 3x3 = 12,
x1^3 − x2^2 + x3 = 2,
3x1^3 + x2^2 − x3 = 2.

Does your answer contradict Theorem 2.5.9? Explain.

For Problems 33–43, determine the solution set to the given system.

33. 3x1 + 2x2 − x3 = 0,
2x1 + x2 + x3 = 0,
5x1 − 4x2 + x3 = 0.

34. 2x1 + x2 − x3 = 0,
3x1 − x2 + 2x3 = 0,
x1 − x2 − x3 = 0,
5x1 + 2x2 − 2x3 = 0.

35. 2x1 − x2 − x3 = 0,
5x1 − x2 + 2x3 = 0,
x1 + x2 + 4x3 = 0.

36. (1 + 2i)x1 + (1 − i)x2 + x3 = 0,
ix1 + (1 + i)x2 − ix3 = 0,
2ix1 + x2 + (1 + 3i)x3 = 0.

37. 3x1 + 2x2 + x3 = 0,
6x1 − x2 + 2x3 = 0,
12x1 + 6x2 + 4x3 = 0.

38. 2x1 + x2 − 8x3 = 0,
3x1 − 2x2 − 5x3 = 0,
5x1 − 6x2 − 3x3 = 0,
3x1 − 5x2 + x3 = 0.

39. x1 + (1 + i)x2 + (1 − i)x3 = 0,
ix1 + x2 + ix3 = 0,
(1 − 2i)x1 − (1 − i)x2 + (1 − 3i)x3 = 0.

40. x1 − x2 + x3 = 0,
3x2 + 2x3 = 0,
3x1 − x3 = 0,
5x1 + x2 − x3 = 0.

41. 2x1 − 4x2 + 6x3 = 0,
3x1 − 6x2 + 9x3 = 0,
x1 − 2x2 + 3x3 = 0,
5x1 − 10x2 + 15x3 = 0.

42. 4x1 − 2x2 − x3 − x4 = 0,
3x1 + x2 − 2x3 + 3x4 = 0,
5x1 − x2 − 2x3 + x4 = 0.

43. 2x1 + x2 − x3 + x4 = 0,
x1 + x2 + x3 − x4 = 0,
3x1 − x2 + x3 − 2x4 = 0,
4x1 + 2x2 − x3 + x4 = 0.

For Problems 44–54, determine the solution set to the system Ax = 0 for the given matrix A.

44. A = [ 2 −1 ]
        [ 3  4 ].

45. A = [ 1 − i   2i ]
        [ 1 + i  −2 ].

46. A = [  1 + i  1 − 2i ]
        [ −1 + i  2 + i  ].

47. A = [ 1  2 3 ]
        [ 2 −1 0 ].
        [ 1  1 1 ]

48. A = [  1 1  1 −1 ]
        [ −1 0 −1  2 ].
        [  1 3  2  2 ]

49. A = [ 2 − 3i   1 + i   i − 1 ]
        [ 3 + 2i  −1 + i  −1 − i ].
        [ 5 − i     2i      −2   ]

50. A = [  1  3 0 ]
        [ −2 −3 0 ].
        [  1  4 0 ]

51. A = [  1  0  3 ]
        [  3 −1  7 ]
        [  2  1  8 ].
        [  1  1  5 ]
        [ −1  1 −1 ]

52. A = [  1 −1 0 1 ]
        [  3 −2 0 5 ].
        [ −1  2 0 1 ]



   
53. A = [  1 0 −3 0 ]
        [  3 0 −9 0 ].
        [ −2 0  6 0 ]

54. A = [ 2 + i    i     3 − 2i ]
        [   i    1 − i   4 + 3i ].
        [ 3 − i  1 + i   1 + 5i ]

2.6 The Inverse of a Square Matrix


In this section we investigate the situation when, for a given n × n matrix A, there exists
a matrix B satisfying

AB = In and BA = In (2.6.1)

and derive an efficient method for determining B (when it does exist). As a possible
application of the existence of such a matrix B, consider the n × n linear system

Ax = b. (2.6.2)

Premultiplying both sides of (2.6.2) by an n × n matrix B yields

(BA)x = Bb.

Assuming that BA = In , this reduces to

x = Bb. (2.6.3)

Thus, we have determined a solution to the system (2.6.2) by a matrix multiplication. Of


course, this depends on the existence of a matrix B satisfying (2.6.1), and even if such a
matrix B does exist, it will turn out that using (2.6.3) to solve n × n systems is not very
efficient computationally. Therefore it is generally not used in practice to solve n × n
systems. However, from a theoretical point of view, a formula such as (2.6.3) is very
useful. We begin the investigation by establishing that there can be at most one matrix
B satisfying (2.6.1) for a given n × n matrix A.
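A 2 × 2 toy example (made up for illustration) shows (2.6.3) in action: with B chosen so that AB = BA = I2, the product Bb immediately yields the solution of Ax = b:

```python
A = [[2, 1], [1, 1]]
B = [[1, -1], [-1, 2]]       # satisfies AB = BA = I2 (easily checked by hand)

def matvec(M, v):
    """Multiply a matrix by a vector, row by row."""
    return [sum(a * b for a, b in zip(row, v)) for row in M]

b = [3, 2]
x = matvec(B, b)             # x = Bb, as in (2.6.3)
print(x)                     # [1, 1]
print(matvec(A, x) == b)     # True: Ax = b
```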

Theorem 2.6.1 Let A be an n × n matrix. Suppose B and C are both n × n matrices satisfying

AB = BA = In , (2.6.4)
AC = CA = In , (2.6.5)

respectively. Then B = C.

Proof From (2.6.4), it follows that


C = CIn = C(AB).

That is,
C = (CA)B = In B = B,
where we have used (2.6.5) to replace CA by In in the second step.
Since the identity matrix In plays the role of the number 1 in the multiplication of
matrices, the properties given in (2.6.1) are the analogs for matrices of the properties

xx−1 = 1,     x−1x = 1,

which hold for all (nonzero) numbers x. It is therefore natural to denote the matrix B
in (2.6.1) by A−1 and to call it the inverse of A. The following definition introduces the
appropriate terminology.
