Sample Problems for Advanced Math for BSIE
Solution: We first use elementary row operations to reduce the augmented matrix of
the system to row-echelon form.
[ 3 −2  2 |  9 ]        [ 1 −2  1 |  5 ]        [ 1 −2  1 |   5 ]
[ 1 −2  1 |  5 ]  ∼(1)  [ 3 −2  2 |  9 ]  ∼(2)  [ 0  4 −1 |  −6 ]
[ 2 −1 −2 | −1 ]        [ 2 −1 −2 | −1 ]        [ 0  3 −4 | −11 ]

        [ 1 −2  1 |   5 ]        [ 1 −2   1 |   5 ]        [ 1 −2  1 | 5 ]
  ∼(3)  [ 0  1  3 |   5 ]  ∼(4)  [ 0  1   3 |   5 ]  ∼(5)  [ 0  1  3 | 5 ].
        [ 0  3 −4 | −11 ]        [ 0  0 −13 | −26 ]        [ 0  0  1 | 2 ]
1. P12 2. A12 (−3), A13 (−2) 3. A32 (−1) 4. A23 (−3) 5. M3 (−1/13)
The row-echelon form corresponds to the equivalent system

x1 − 2x2 + x3 = 5,   (2.5.2)
x2 + 3x3 = 5,        (2.5.3)
x3 = 2.              (2.5.4)

By back substitution, x3 = 2, then x2 = 5 − 3(2) = −1, and finally x1 = 5 + 2(−1) − 2 = 1. Hence the solution set is
S = {(1, −1, 2)},
which is a subset of R3.
The process of reducing the augmented matrix to row-echelon form and then using
back substitution to solve the equivalent system is called Gaussian elimination. The
particular case of Gaussian elimination that arises when the augmented matrix is reduced
to reduced row-echelon form is called Gauss-Jordan elimination.
x1 + 2x2 − x3 = 1,
2x1 + 5x2 − x3 = 3,
x1 + 3x2 + 2x3 = 6.
Solution: In this case, we first reduce the augmented matrix of the system to reduced
row-echelon form.
[ 1 2 −1 | 1 ]        [ 1 2 −1 | 1 ]        [ 1 0 −3 | −1 ]
[ 2 5 −1 | 3 ]  ∼(1)  [ 0 1  1 | 1 ]  ∼(2)  [ 0 1  1 |  1 ]
[ 1 3  2 | 6 ]        [ 0 1  3 | 5 ]        [ 0 0  2 |  4 ]

        [ 1 0 −3 | −1 ]        [ 1 0 0 |  5 ]
  ∼(3)  [ 0 1  1 |  1 ]  ∼(4)  [ 0 1 0 | −1 ].
        [ 0 0  1 |  2 ]        [ 0 0 1 |  2 ]
1. A12 (−2), A13 (−1) 2. A21 (−2), A23 (−1) 3. M3 (1/2) 4. A31 (3), A32 (−1)
The augmented matrix is now in reduced row-echelon form. The equivalent system is
x1 = 5,
x2 = −1,
x3 = 2.
The solution can be read off directly as (5, −1, 2). Consequently, the given system
has solution set
S = {(5, −1, 2)}
in R3 .
We see from the preceding two examples that the advantage of Gauss-Jordan elimination over Gaussian elimination is that it does not require back substitution. However,
the disadvantage is that reducing the augmented matrix to reduced row-echelon form
requires more elementary row operations than reduction to row-echelon form. It can be
shown, in fact, that in general, Gaussian elimination is the more computationally efficient technique. As we will see in the next section, the main reason for introducing the
Gauss-Jordan method is its application to the computation of the inverse of an n × n
matrix.
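As a concrete illustration (our own sketch, not part of the text), Gaussian elimination with back substitution can be written in Python as follows. The helper names `row_echelon` and `back_substitute` are ours, the code uses floating-point arithmetic rather than exact fractions, and for simplicity it assumes a square system with a unique solution.

```python
# A minimal sketch of Gaussian elimination with back substitution.
# No attention is paid to numerical stability (a production implementation
# would use partial pivoting on the largest entry, not the first nonzero one).

def row_echelon(aug):
    """Reduce an augmented matrix (list of row lists) to row-echelon form in place."""
    m, n = len(aug), len(aug[0])
    pivot_row = 0
    for col in range(n - 1):
        # Find a row at or below pivot_row with a nonzero entry in this column (a P_ij swap).
        pr = next((r for r in range(pivot_row, m) if abs(aug[r][col]) > 1e-12), None)
        if pr is None:
            continue
        aug[pivot_row], aug[pr] = aug[pr], aug[pivot_row]
        # Scale the pivot row so the pivot becomes a leading 1 (an M_i operation).
        piv = aug[pivot_row][col]
        aug[pivot_row] = [x / piv for x in aug[pivot_row]]
        # Zero out the entries below the pivot (A_ij operations).
        for r in range(pivot_row + 1, m):
            f = aug[r][col]
            aug[r] = [x - f * y for x, y in zip(aug[r], aug[pivot_row])]
        pivot_row += 1
    return aug

def back_substitute(aug):
    """Solve an n x (n+1) row-echelon system with leading 1s on the diagonal."""
    n = len(aug)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = aug[i][n] - sum(aug[i][j] * x[j] for j in range(i + 1, n))
    return x

# The Gauss-Jordan example above, whose solution is (5, -1, 2):
aug = [[1.0, 2.0, -1.0, 1.0],
       [2.0, 5.0, -1.0, 3.0],
       [1.0, 3.0, 2.0, 6.0]]
x = back_substitute(row_echelon(aug))
print(x)  # [5.0, -1.0, 2.0]
```

Running Gauss-Jordan instead would simply continue the loop to clear the entries above each pivot as well, eliminating the need for `back_substitute`.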
In each of the two preceding examples, the rank of the coefficient matrix equaled the rank of the augmented matrix (both were 3, the number of unknowns), and the system had a unique solution. More generally, we have the following lemma:
Lemma 2.5.3 Consider the m × n linear system Ax = b. Let A# denote the augmented matrix of the
system. If rank(A) = rank(A# ) = n, then the system has a unique solution.
Proof If rank(A) = rank(A# ) = n, then there are n leading ones in any row-echelon
form of A, hence back substitution gives a unique solution. The form of the row-echelon
form of A# is shown below, with m − n rows of zeros at the bottom of the matrix omitted
and where the ∗’s denote unknown elements of the row-echelon form.
[ 1 ∗ ∗ ∗ ... ∗ | ∗ ]
[ 0 1 ∗ ∗ ... ∗ | ∗ ]
[ 0 0 1 ∗ ... ∗ | ∗ ]
[ . . . .     . | . ]
[ 0 0 0 0 ... 1 | ∗ ]
Note that rank(A) cannot exceed rank(A# ). Thus, there are only two possibilities
for the relationship between rank(A) and rank(A# ): rank(A) < rank(A# ) or rank(A) =
rank(A# ). We now consider what happens in these cases.
x1 + x2 − x3 + x4 = 1,
2x1 + 3x2 + x3 = 4,
3x1 + 5x2 + 3x3 − x4 = 5.
Solution: We reduce the augmented matrix of the system:

[ 1 1 −1  1 | 1 ]        [ 1 1 −1  1 | 1 ]        [ 1 1 −1  1 |  1 ]
[ 2 3  1  0 | 4 ]  ∼(1)  [ 0 1  3 −2 | 2 ]  ∼(2)  [ 0 1  3 −2 |  2 ]
[ 3 5  3 −1 | 5 ]        [ 0 2  6 −4 | 2 ]        [ 0 0  0  0 | −2 ]

1. A12 (−2), A13 (−3) 2. A23 (−2)

The last row tells us that the system of equations has no solution (that is, it is inconsistent), since it requires
0x1 + 0x2 + 0x3 + 0x4 = −2,
which is clearly impossible. The solution set to the system is thus the empty set ∅.
Lemma 2.5.5 Consider the m × n linear system Ax = b. Let A# denote the augmented matrix of the
system. If rank(A) < rank(A# ), then the system is inconsistent.
Proof If rank(A) < rank(A# ), then there will be one row in the reduced row-echelon
form of the augmented matrix whose first nonzero element arises in the last column.
Such a row corresponds to an equation of the form
0x1 + 0x2 + · · · + 0xn = 1,
which cannot be satisfied by any values of x1, x2, . . . , xn. Consequently, the system is inconsistent.
For example, consider a system of three equations in the unknowns x1, x2, x3 whose augmented matrix can be reduced to

[ 1 −3  2 | −1 ]
[ 0  1 −1 |  1 ]
[ 0  0  0 |  0 ]

The augmented matrix is now in row-echelon form, and the equivalent system is

x1 − 3x2 + 2x3 = −1,   (2.5.6)
x2 − x3 = 1.           (2.5.7)
Since we have three variables, but only two equations relating them, we are free to
specify one of the variables arbitrarily. The variable that we choose to specify is called
a free variable or free parameter. The remaining variables are then determined by
the system of equations and are called bound variables or bound parameters. In the
foregoing system, we take x3 as the free variable and set
x3 = t,
where t can assume any real value.5 It follows from (2.5.7) that
x2 = 1 + t.
5 When considering systems of equations with complex coefficients, we allow free variables to assume
complex values as well.
Substituting into (2.5.6), we then obtain
x1 = −1 + 3(1 + t) − 2t = 2 + t.
Thus the solution set to the given system of equations is the following subset of R3 :
S = {(2 + t, 1 + t, t) : t ∈ R}.
The system has an infinite number of solutions, obtained by allowing the parameter t to
assume all real values. For example, two particular solutions of the system are
(2, 1, 0)  and  (0, −1, −2),
corresponding to t = 0 and t = −2, respectively. Note that we can also write the solution set S above in the form
S = {(2, 1, 0) + t (1, 1, 1) : t ∈ R}.
Remark The geometry of the foregoing solution is as follows. The given system
(2.5.5) can be interpreted as consisting of three planes in 3-space. Any solution to the
system gives the coordinates of a point of intersection of the three planes. In the preceding
example the planes intersect in a line whose parametric equations are
x1 = 2 + t, x2 = 1 + t, x3 = t.
Lemma 2.5.7 Consider the m × n linear system Ax = b. Let A# denote the augmented matrix of the
system and let r # = rank(A# ). If r # = rank(A) < n, then the system has an infinite
number of solutions, indexed by n − r # free variables.
Proof As discussed before, any row-echelon equivalent system will have only r # equa-
tions involving the n variables, and so there will be n − r # > 0 free variables. If we
assign arbitrary values to these free variables, then the remaining r # variables will be
uniquely determined, by back substitution, from the system. Since each free variable can
assume infinitely many values, in this case there are an infinite number of solutions to
the system.
x1 − 2x2 + 2x3 − x4 = 3,
3x1 + x2 + 6x3 + 11x4 = 16,
2x1 − x2 + 4x3 + 4x4 = 9.

Solution: Reducing the augmented matrix of the system to row-echelon form gives

[ 1 −2 2 −1 | 3 ]
[ 0  1 0  2 | 1 ]
[ 0  0 0  0 | 0 ]

so that the equivalent system is

x1 − 2x2 + 2x3 − x4 = 3,   (2.5.8)
x2 + 2x4 = 1.              (2.5.9)
Notice that we cannot choose any two variables freely. For example, from Equation
(2.5.9), we cannot specify both x2 and x4 independently. The bound variables should be
taken as those that correspond to leading 1s in the row-echelon form of A# , since these
are the variables that can always be determined by back substitution (they appear as the
leftmost variable in some equation of the system corresponding to the row echelon form
of the augmented matrix).
Applying this rule to Equations (2.5.8) and (2.5.9), we choose x3 and x4 as free variables
and therefore set
x3 = s, x4 = t.
It follows from (2.5.9) that x2 = 1 − 2t, and then from (2.5.8) that
x1 = 3 + 2(1 − 2t) − 2s + t = 5 − 2s − 3t,
so that the solution set to the given system is the following subset of R4:
S = {(5 − 2s − 3t, 1 − 2t, s, t) : s, t ∈ R}.
Lemmas 2.5.3, 2.5.5, and 2.5.7 completely characterize the solution properties of
an m × n linear system. Combining the results of these three lemmas gives the next
theorem.
Theorem 2.5.9 Consider the m × n linear system Ax = b. Let r denote the rank of A, and let r # denote the rank of the augmented matrix of the system. Then
1. If r < r #, the system is inconsistent.
2. If r = r #, the system is consistent, and
(a) if r = n, there is a unique solution;
(b) if r < n, there are an infinite number of solutions, indexed by n − r free variables.
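This rank criterion lends itself to a direct numerical check. The sketch below is our illustration, not the text's; it uses NumPy's `matrix_rank`, which computes rank in floating point and is therefore only a stand-in for the exact rank used in the theorem.

```python
import numpy as np

def classify(A, b):
    """Classify the linear system Ax = b by comparing rank(A) and rank(A#)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    r = np.linalg.matrix_rank(A)                      # rank of coefficient matrix
    r_aug = np.linalg.matrix_rank(np.hstack([A, b]))  # rank of augmented matrix
    n = A.shape[1]
    if r < r_aug:
        return "inconsistent"
    if r == n:
        return "unique solution"
    return f"infinitely many solutions ({n - r} free variables)"

# The worked examples of this section:
print(classify([[1, 2, -1], [2, 5, -1], [1, 3, 2]], [1, 3, 6]))
# unique solution
print(classify([[1, 1, -1, 1], [2, 3, 1, 0], [3, 5, 3, -1]], [1, 4, 5]))
# inconsistent
print(classify([[1, -2, 2, -1], [3, 1, 6, 11], [2, -1, 4, 4]], [3, 16, 9]))
# infinitely many solutions (2 free variables)
```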
Consider now a homogeneous linear system, that is, a system of the form
a11 x1 + a12 x2 + · · · + a1n xn = 0,
a21 x1 + a22 x2 + · · · + a2n xn = 0,
. . .
am1 x1 + am2 x2 + · · · + amn xn = 0,
or, in matrix form, Ax = 0, where A is the coefficient matrix of the system and 0 denotes the m-vector whose elements are all zeros.
Corollary 2.5.10 The homogeneous linear system Ax = 0 is consistent for any coefficient matrix A, with
a solution given by x = 0.
Remarks
1. The solution x = 0 is referred to as the trivial solution. Consequently, from
Theorem 2.5.9, a homogeneous system either has only the trivial solution or has
an infinite number of solutions (one of which must be the trivial solution).
2. Once more it is worth mentioning the geometric interpretation of Corollary 2.5.10
in the case of a homogeneous system with three unknowns. We can regard each
equation of such a system as defining a plane. Owing to the homogeneity, each
plane passes through the origin, hence the planes intersect at least at the origin.
Corollary 2.5.11 A homogeneous system of m linear equations in n unknowns, with m < n, has an infinite
number of solutions.
Proof Let r and r # be as in Theorem 2.5.9. Using the fact that r = r # for a homogeneous
system, we see that since r # ≤ m < n, Theorem 2.5.9 implies that the system has an
infinite number of solutions.
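A quick numerical illustration of Corollary 2.5.11 (the coefficient matrix here is our own choice, not an example from the text): with m = 2 homogeneous equations in n = 3 unknowns, the rank is at most 2, so at least one free variable remains and nontrivial solutions exist.

```python
import numpy as np

# Two homogeneous equations in three unknowns:
#   x1 + x2 + x3 = 0
#   x1 - x2 + 2x3 = 0
A = np.array([[1.0, 1.0, 1.0],
              [1.0, -1.0, 2.0]])
m, n = A.shape
r = np.linalg.matrix_rank(A)
assert r <= m < n            # hence n - r > 0 free variables

# Setting x3 = t and solving the two equations by hand gives
# x2 = t/2 and x1 = -3t/2: a one-parameter family of solutions.
t = 4.0
x = np.array([-1.5 * t, 0.5 * t, t])
assert np.allclose(A @ x, 0)   # a nontrivial solution for every t
```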
Consider, for example, a homogeneous system in the unknowns x1, x2, x3 whose row-echelon form yields the equivalent system
x2 = 0,
x3 = 0.
It is tempting, but incorrect, to conclude from this that the solution to the system
is x1 = x2 = x3 = 0. Since x1 does not occur in the system, it is a free variable and
therefore not necessarily zero. Consequently, the correct solution to the foregoing system
is (r, 0, 0), where r is a free variable, and the solution set is {(r, 0, 0) : r ∈ R}.
The linear systems that we have so far encountered have all had real coefficients, and
we have considered corresponding real solutions. The techniques that we have developed
for solving linear systems are also applicable to the case when our system has complex
coefficients. The corresponding solutions will also be complex.
For example, consider the homogeneous system

(1 + 2i)x1 + 4x2 + (3 + i)x3 = 0,
(2 − i)x1 + (1 + i)x2 + 3x3 = 0,
5ix1 + (7 − i)x2 + (3 + 2i)x3 = 0.

Solution: We reduce the augmented matrix of the system to row-echelon form:

[ 1+2i  4    3+i  | 0 ]        [ 1    (4/5)(1−2i)  1−i  | 0 ]
[ 2−i   1+i  3    | 0 ]  ∼(1)  [ 2−i  1+i          3    | 0 ]
[ 5i    7−i  3+2i | 0 ]        [ 5i   7−i          3+2i | 0 ]

        [ 1  (4/5)(1−2i)  1−i   | 0 ]        [ 1  (4/5)(1−2i)  1−i  | 0 ]
  ∼(2)  [ 0  1+5i         2+3i  | 0 ]  ∼(3)  [ 0  1+5i         2+3i | 0 ]
        [ 0  −1−5i        −2−3i | 0 ]        [ 0  0            0    | 0 ]

        [ 1  (4/5)(1−2i)  1−i            | 0 ]
  ∼(4)  [ 0  1            (1/26)(17−7i)  | 0 ].
        [ 0  0            0              | 0 ]

1. M1 ((1 − 2i)/5) 2. A12 (−(2 − i)), A13 (−5i) 3. A23 (1) 4. M2 ((1 − 5i)/26)
The equivalent system is

x1 + (4/5)(1 − 2i)x2 + (1 − i)x3 = 0,
x2 + (1/26)(17 − 7i)x3 = 0.
There is one free variable, which we take to be x3 = t, where t can assume any complex value. Applying back substitution yields

x2 = (1/26)(−17 + 7i)t,
x1 = −(2/65)(1 − 2i)(−17 + 7i)t − (1 − i)t = −(1/65)(59 + 17i)t,

so that the solution set to the system is
S = {(−(1/65)(59 + 17i)t, (1/26)(−17 + 7i)t, t) : t ∈ C}.
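As a sanity check (our addition, not part of the text), Python's built-in complex arithmetic can verify that the parametric solution satisfies both equations of the equivalent row-echelon system for a sample complex value of t:

```python
# Verify the back-substitution result for one concrete complex parameter value.
t = 2.0 - 3.0j                       # any complex t will do
x3 = t
x2 = (-17 + 7j) / 26 * t
x1 = -(59 + 17j) / 65 * t

# The two equations of the equivalent system:
eq1 = x1 + (4 / 5) * (1 - 2j) * x2 + (1 - 1j) * x3
eq2 = x2 + (17 - 7j) / 26 * x3
assert abs(eq1) < 1e-12 and abs(eq2) < 1e-12   # both vanish, up to rounding
```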
In this section we investigate the possibility of finding, for a given n × n matrix A, an n × n matrix B satisfying
AB = In and BA = In (2.6.1)
and derive an efficient method for determining B (when it does exist). As a possible
application of the existence of such a matrix B, consider the n × n linear system
application of the existence of such a matrix B, consider the n × n linear system
Ax = b. (2.6.2)
If we premultiply both sides of (2.6.2) by a matrix B satisfying (2.6.1), we obtain
(BA)x = Bb.
Since BA = In, this reduces to
x = Bb. (2.6.3)
Thus (2.6.3) provides a solution to the system (2.6.2).
Theorem 2.6.1 Let A be an n × n matrix. Suppose B and C are both n × n matrices satisfying
AB = BA = In , (2.6.4)
AC = CA = In , (2.6.5)
respectively. Then B = C.
Proof Since AB = In, we can write C = C In = C(AB) = (CA)B. That is,
C = (CA)B = In B = B,
where we have used (2.6.5) to replace CA by In in the second step.
Since the identity matrix In plays the role of the number 1 in the multiplication of
matrices, the properties given in (2.6.1) are the analogs for matrices of the properties
xx−1 = 1, x−1x = 1,
which hold for all (nonzero) numbers x. It is therefore natural to denote the matrix B
in (2.6.1) by A−1 and to call it the inverse of A. The following definition introduces the
appropriate terminology.
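To preview the use of (2.6.3), here is a sketch (our illustration) with NumPy, applied to the 3 × 3 system solved earlier by Gauss-Jordan elimination. Here `np.linalg.inv` stands in for the Gauss-Jordan computation of the inverse developed in the text.

```python
import numpy as np

A = np.array([[1.0, 2.0, -1.0],
              [2.0, 5.0, -1.0],
              [1.0, 3.0, 2.0]])
b = np.array([1.0, 3.0, 6.0])

B = np.linalg.inv(A)   # B satisfies AB = BA = I_3
x = B @ b              # x = B b, equation (2.6.3)

assert np.allclose(A @ x, b)              # x solves the system
assert np.allclose(x, [5.0, -1.0, 2.0])   # matches the earlier solution
```

In numerical practice one would call `np.linalg.solve(A, b)` directly rather than form the inverse, but the inverse makes the role of B in (2.6.3) explicit.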