
CHAPTER 2 Algebra of Matrices

2.6 Transpose of a Matrix


The transpose of a matrix $A$, written $A^T$, is the matrix obtained by writing the columns of $A$, in order, as rows. For example,
$$\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}^T = \begin{bmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{bmatrix} \qquad\text{and}\qquad [1, -3, -5]^T = \begin{bmatrix} 1 \\ -3 \\ -5 \end{bmatrix}$$
In other words, if $A = [a_{ij}]$ is an $m \times n$ matrix, then $A^T = [b_{ij}]$ is the $n \times m$ matrix where $b_{ij} = a_{ji}$.

Observe that the transpose of a row vector is a column vector. Similarly, the transpose of a column vector is a row vector.

The next theorem lists basic properties of the transpose operation.

THEOREM 2.3: Let $A$ and $B$ be matrices and let $k$ be a scalar. Then, whenever the sum and product are defined,
(i) $(A + B)^T = A^T + B^T$, (iii) $(kA)^T = kA^T$,
(ii) $(A^T)^T = A$, (iv) $(AB)^T = B^T A^T$.

We emphasize that, by (iv), the transpose of a product is the product of the transposes, but in the
reverse order.
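The four properties are easy to confirm numerically. The following sketch (a NumPy check added for illustration; the text itself uses no software) verifies each identity on small matrices of our own choosing; note how (iv) requires the reversed order $C^T A^T$.

```python
# Illustrative NumPy check of Theorem 2.3; the matrices are arbitrary examples.
import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6]])     # 2 x 3
B = np.array([[1, 0, 2], [3, -1, 4]])    # 2 x 3, so A + B is defined
C = np.array([[1, 2], [0, 1], [2, 3]])   # 3 x 2, so AC is defined

assert np.array_equal((A + B).T, A.T + B.T)   # (i)
assert np.array_equal(A.T.T, A)               # (ii)
assert np.array_equal((3 * A).T, 3 * A.T)     # (iii)
assert np.array_equal((A @ C).T, C.T @ A.T)   # (iv): transposes in reverse order
```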

2.7 Square Matrices


A square matrix is a matrix with the same number of rows as columns. An $n \times n$ square matrix is said to be of order $n$ and is sometimes called an $n$-square matrix.
Recall that not every two matrices can be added or multiplied. However, if we only consider square matrices of some given order $n$, then this inconvenience disappears. Specifically, the operations of addition, multiplication, scalar multiplication, and transpose can be performed on any $n \times n$ matrices, and the result is again an $n \times n$ matrix.

EXAMPLE 2.6 The following are square matrices of order 3:
$$A = \begin{bmatrix} 1 & 2 & 3 \\ -4 & -4 & -4 \\ 5 & 6 & 7 \end{bmatrix} \qquad\text{and}\qquad B = \begin{bmatrix} 2 & -5 & 1 \\ 0 & 3 & -2 \\ 1 & 2 & -4 \end{bmatrix}$$
The following are also matrices of order 3:
$$A + B = \begin{bmatrix} 3 & -3 & 4 \\ -4 & -1 & -6 \\ 6 & 8 & 3 \end{bmatrix}, \qquad 2A = \begin{bmatrix} 2 & 4 & 6 \\ -8 & -8 & -8 \\ 10 & 12 & 14 \end{bmatrix}, \qquad A^T = \begin{bmatrix} 1 & -4 & 5 \\ 2 & -4 & 6 \\ 3 & -4 & 7 \end{bmatrix}$$
$$AB = \begin{bmatrix} 5 & 7 & -15 \\ -12 & 0 & 20 \\ 17 & 7 & -35 \end{bmatrix}, \qquad BA = \begin{bmatrix} 27 & 30 & 33 \\ -22 & -24 & -26 \\ -27 & -30 & -33 \end{bmatrix}$$

Diagonal and Trace


Let $A = [a_{ij}]$ be an $n$-square matrix. The diagonal or main diagonal of $A$ consists of the elements with the same subscripts; that is,
$$a_{11},\ a_{22},\ a_{33},\ \ldots,\ a_{nn}$$

The trace of $A$, written $\operatorname{tr}(A)$, is the sum of the diagonal elements. Namely,
$$\operatorname{tr}(A) = a_{11} + a_{22} + a_{33} + \cdots + a_{nn}$$
The following theorem applies.

THEOREM 2.4: Suppose $A = [a_{ij}]$ and $B = [b_{ij}]$ are $n$-square matrices and $k$ is a scalar. Then
(i) $\operatorname{tr}(A + B) = \operatorname{tr}(A) + \operatorname{tr}(B)$, (iii) $\operatorname{tr}(A^T) = \operatorname{tr}(A)$,
(ii) $\operatorname{tr}(kA) = k\operatorname{tr}(A)$, (iv) $\operatorname{tr}(AB) = \operatorname{tr}(BA)$.

EXAMPLE 2.7 Let $A$ and $B$ be the matrices of Example 2.6. Then
$$\text{diagonal of } A = \{1, -4, 7\} \qquad\text{and}\qquad \operatorname{tr}(A) = 1 - 4 + 7 = 4$$
$$\text{diagonal of } B = \{2, 3, -4\} \qquad\text{and}\qquad \operatorname{tr}(B) = 2 + 3 - 4 = 1$$
Moreover,
$$\operatorname{tr}(A + B) = 3 - 1 + 3 = 5, \qquad \operatorname{tr}(2A) = 2 - 8 + 14 = 8, \qquad \operatorname{tr}(A^T) = 1 - 4 + 7 = 4$$
$$\operatorname{tr}(AB) = 5 + 0 - 35 = -30, \qquad \operatorname{tr}(BA) = -27 - 24 - 33 \ne \ldots$$

Wait, reading the diagonal of $BA$ from Example 2.6 gives $\operatorname{tr}(BA) = 27 - 24 - 33 = -30$.
As expected from Theorem 2.4,
$$\operatorname{tr}(A + B) = \operatorname{tr}(A) + \operatorname{tr}(B), \qquad \operatorname{tr}(A^T) = \operatorname{tr}(A), \qquad \operatorname{tr}(2A) = 2\operatorname{tr}(A)$$
Furthermore, although $AB \neq BA$, the traces are equal.
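Property (iv) is worth singling out: $AB$ and $BA$ can be quite different matrices and still have the same trace. The following snippet (illustrative NumPy, added by the editor) confirms this with the matrices of Example 2.6.

```python
# Check that tr(AB) = tr(BA) even though AB != BA (matrices of Example 2.6).
import numpy as np

A = np.array([[1, 2, 3], [-4, -4, -4], [5, 6, 7]])
B = np.array([[2, -5, 1], [0, 3, -2], [1, 2, -4]])

assert not np.array_equal(A @ B, B @ A)           # AB != BA
assert np.trace(A @ B) == np.trace(B @ A) == -30  # but the traces agree
```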

Identity Matrix, Scalar Matrices


The $n$-square identity or unit matrix, denoted by $I_n$, or simply $I$, is the $n$-square matrix with 1's on the diagonal and 0's elsewhere. The identity matrix $I$ is similar to the scalar 1 in that, for any $n$-square matrix $A$,
$$AI = IA = A$$
More generally, if $B$ is an $m \times n$ matrix, then $BI_n = I_m B = B$.
For any scalar $k$, the matrix $kI$ that contains $k$'s on the diagonal and 0's elsewhere is called the scalar matrix corresponding to the scalar $k$. Observe that
$$(kI)A = k(IA) = kA$$
That is, multiplying a matrix $A$ by the scalar matrix $kI$ is equivalent to multiplying $A$ by the scalar $k$.

EXAMPLE 2.8 The following are the identity matrices of orders 3 and 4 and the corresponding scalar matrices for $k = 5$:
$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad \begin{bmatrix} 1 & & & \\ & 1 & & \\ & & 1 & \\ & & & 1 \end{bmatrix}, \qquad \begin{bmatrix} 5 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 5 \end{bmatrix}, \qquad \begin{bmatrix} 5 & & & \\ & 5 & & \\ & & 5 & \\ & & & 5 \end{bmatrix}$$

Remark 1: It is common practice to omit blocks or patterns of 0's when there is no ambiguity, as in the second and fourth matrices above.

Remark 2: The Kronecker delta function $\delta_{ij}$ is defined by
$$\delta_{ij} = \begin{cases} 0 & \text{if } i \neq j \\ 1 & \text{if } i = j \end{cases}$$
Thus, the identity matrix may be defined by $I = [\delta_{ij}]$.
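In NumPy (an illustrative aside, not part of the text), `np.eye(n)` builds $I_n$ directly from this definition, and the identity and scalar-matrix properties above can be checked in a few lines:

```python
# Check B I_n = B, I_m B = B, and (kI)A = kA on a small example.
import numpy as np

B = np.array([[1, 2, 3], [4, 5, 6]])        # an arbitrary 2 x 3 matrix
I2, I3 = np.eye(2), np.eye(3)

assert np.array_equal(B @ I3, B)            # B I_3 = B
assert np.array_equal(I2 @ B, B)            # I_2 B = B
assert np.array_equal((5 * I2) @ B, 5 * B)  # multiplying by 5I scales by 5
```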

2.8 Powers of Matrices, Polynomials in Matrices

Let $A$ be an $n$-square matrix over a field $K$. Powers of $A$ are defined as follows:
$$A^2 = AA, \quad A^3 = A^2 A, \quad \ldots, \quad A^{n+1} = A^n A, \quad \ldots, \quad\text{and}\quad A^0 = I$$
Polynomials in the matrix $A$ are also defined. Specifically, for any polynomial
$$f(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n$$
where the $a_i$ are scalars in $K$, $f(A)$ is defined to be the following matrix:
$$f(A) = a_0 I + a_1 A + a_2 A^2 + \cdots + a_n A^n$$
[Note that $f(A)$ is obtained from $f(x)$ by substituting the matrix $A$ for the variable $x$ and substituting the scalar matrix $a_0 I$ for the scalar $a_0$.] If $f(A)$ is the zero matrix, then $A$ is called a zero or root of $f(x)$.

EXAMPLE 2.9 Suppose $A = \begin{bmatrix} 1 & 2 \\ 3 & -4 \end{bmatrix}$. Then
$$A^2 = \begin{bmatrix} 1 & 2 \\ 3 & -4 \end{bmatrix}\begin{bmatrix} 1 & 2 \\ 3 & -4 \end{bmatrix} = \begin{bmatrix} 7 & -6 \\ -9 & 22 \end{bmatrix} \qquad\text{and}\qquad A^3 = A^2 A = \begin{bmatrix} 7 & -6 \\ -9 & 22 \end{bmatrix}\begin{bmatrix} 1 & 2 \\ 3 & -4 \end{bmatrix} = \begin{bmatrix} -11 & 38 \\ 57 & -106 \end{bmatrix}$$
Suppose $f(x) = 2x^2 - 3x + 5$ and $g(x) = x^2 + 3x - 10$. Then
$$f(A) = 2\begin{bmatrix} 7 & -6 \\ -9 & 22 \end{bmatrix} - 3\begin{bmatrix} 1 & 2 \\ 3 & -4 \end{bmatrix} + 5\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 16 & -18 \\ -27 & 61 \end{bmatrix}$$
$$g(A) = \begin{bmatrix} 7 & -6 \\ -9 & 22 \end{bmatrix} + 3\begin{bmatrix} 1 & 2 \\ 3 & -4 \end{bmatrix} - 10\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$$
Thus, $A$ is a zero of the polynomial $g(x)$.
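A polynomial can be evaluated at a matrix efficiently by Horner's rule, multiplying by $A$ once per coefficient. The sketch below (illustrative Python; the helper name `poly_at_matrix` is our own, not from the text) reproduces $f(A)$ and $g(A)$ from Example 2.9.

```python
import numpy as np

def poly_at_matrix(coeffs, A):
    """Evaluate a0*I + a1*A + ... + an*A^n, where coeffs = [a0, a1, ..., an]."""
    result = np.zeros_like(A)
    for c in reversed(coeffs):   # Horner: ((an*A + a_{n-1}*I)A + ...)A + a0*I
        result = result @ A + c * np.eye(len(A), dtype=A.dtype)
    return result

A = np.array([[1, 2], [3, -4]])
print(poly_at_matrix([5, -3, 2], A))   # f(x) = 2x^2 - 3x + 5: [[16 -18] [-27 61]]
print(poly_at_matrix([-10, 3, 1], A))  # g(x) = x^2 + 3x - 10: the zero matrix
```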

2.9 Invertible (Nonsingular) Matrices

A square matrix $A$ is said to be invertible or nonsingular if there exists a matrix $B$ such that
$$AB = BA = I$$
where $I$ is the identity matrix. Such a matrix $B$ is unique. That is, if $AB_1 = B_1 A = I$ and $AB_2 = B_2 A = I$, then
$$B_1 = B_1 I = B_1(AB_2) = (B_1 A)B_2 = IB_2 = B_2$$
We call such a matrix $B$ the inverse of $A$ and denote it by $A^{-1}$. Observe that the above relation is symmetric; that is, if $B$ is the inverse of $A$, then $A$ is the inverse of $B$.

EXAMPLE 2.10 Suppose that $A = \begin{bmatrix} 2 & 5 \\ 1 & 3 \end{bmatrix}$ and $B = \begin{bmatrix} 3 & -5 \\ -1 & 2 \end{bmatrix}$. Then
$$AB = \begin{bmatrix} 6 - 5 & -10 + 10 \\ 3 - 3 & -5 + 6 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \qquad\text{and}\qquad BA = \begin{bmatrix} 6 - 5 & 15 - 15 \\ -2 + 2 & -5 + 6 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$
Thus, $A$ and $B$ are inverses.

It is known (Theorem 3.16) that $AB = I$ if and only if $BA = I$. Thus, it is necessary to test only one product to determine whether or not two given matrices are inverses. (See Problem 2.17.)
Now suppose $A$ and $B$ are invertible. Then $AB$ is invertible and $(AB)^{-1} = B^{-1}A^{-1}$. More generally, if $A_1, A_2, \ldots, A_k$ are invertible, then their product is invertible and
$$(A_1 A_2 \cdots A_k)^{-1} = A_k^{-1} \cdots A_2^{-1} A_1^{-1}$$
the product of the inverses in the reverse order.
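This reversal is easy to confirm numerically. A short check (illustrative NumPy, using the invertible matrix of Example 2.10 together with one other invertible matrix):

```python
import numpy as np

A = np.array([[2., 5.], [1., 3.]])
C = np.array([[1., 2.], [3., -4.]])

lhs = np.linalg.inv(A @ C)
rhs = np.linalg.inv(C) @ np.linalg.inv(A)   # inverses in reverse order
assert np.allclose(lhs, rhs)
```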

Inverse of a 2 × 2 Matrix

Let $A$ be an arbitrary $2 \times 2$ matrix, say $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$. We want to derive a formula for $A^{-1}$, the inverse of $A$. Specifically, we seek $2 \cdot 2 = 4$ scalars, say $x_1$, $y_1$, $x_2$, $y_2$, such that
$$\begin{bmatrix} a & b \\ c & d \end{bmatrix}\begin{bmatrix} x_1 & x_2 \\ y_1 & y_2 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \qquad\text{or}\qquad \begin{bmatrix} ax_1 + by_1 & ax_2 + by_2 \\ cx_1 + dy_1 & cx_2 + dy_2 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$
Setting the four entries equal to the corresponding entries in the identity matrix yields four equations, which can be partitioned into two $2 \times 2$ systems as follows:
$$ax_1 + by_1 = 1, \qquad ax_2 + by_2 = 0$$
$$cx_1 + dy_1 = 0, \qquad cx_2 + dy_2 = 1$$
Suppose we let $|A| = ad - bc$ (called the determinant of $A$). Assuming $|A| \neq 0$, we can solve uniquely for the above unknowns $x_1$, $y_1$, $x_2$, $y_2$, obtaining
$$x_1 = \frac{d}{|A|}, \qquad y_1 = \frac{-c}{|A|}, \qquad x_2 = \frac{-b}{|A|}, \qquad y_2 = \frac{a}{|A|}$$
Accordingly,
$$A^{-1} = \begin{bmatrix} a & b \\ c & d \end{bmatrix}^{-1} = \begin{bmatrix} d/|A| & -b/|A| \\ -c/|A| & a/|A| \end{bmatrix} = \frac{1}{|A|}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$$
In other words, when $|A| \neq 0$, the inverse of a $2 \times 2$ matrix $A$ may be obtained from $A$ as follows:
(1) Interchange the two elements on the diagonal.
(2) Take the negatives of the other two elements.
(3) Multiply the resulting matrix by $1/|A|$ or, equivalently, divide each element by $|A|$.
In case $|A| = 0$, the matrix $A$ is not invertible.

EXAMPLE 2.11 Find the inverse of $A = \begin{bmatrix} 2 & 3 \\ 4 & 5 \end{bmatrix}$ and $B = \begin{bmatrix} 1 & 3 \\ 2 & 6 \end{bmatrix}$.
First evaluate $|A| = 2(5) - 3(4) = 10 - 12 = -2$. Because $|A| \neq 0$, the matrix $A$ is invertible and
$$A^{-1} = -\frac{1}{2}\begin{bmatrix} 5 & -3 \\ -4 & 2 \end{bmatrix} = \begin{bmatrix} -\frac{5}{2} & \frac{3}{2} \\ 2 & -1 \end{bmatrix}$$
Now evaluate $|B| = 1(6) - 3(2) = 6 - 6 = 0$. Because $|B| = 0$, the matrix $B$ has no inverse.

Remark: The above property, that a matrix is invertible if and only if it has a nonzero determinant, is true for square matrices of any order. (See Chapter 8.)
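The three-step recipe above translates directly into code. A minimal sketch (ours; the helper name `inverse_2x2` is not from the text), checked against Example 2.11:

```python
import numpy as np

def inverse_2x2(M):
    """Return the inverse of a 2 x 2 matrix, or None if |M| = 0."""
    (a, b), (c, d) = M
    det = a * d - b * c
    if det == 0:
        return None                          # |M| = 0: not invertible
    # swap the diagonal, negate the off-diagonal, divide by the determinant
    return np.array([[d, -b], [-c, a]]) / det

print(inverse_2x2(np.array([[2, 3], [4, 5]])))  # [[-2.5  1.5] [ 2.  -1. ]]
print(inverse_2x2(np.array([[1, 3], [2, 6]])))  # None
```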

Inverse of an n × n Matrix
Suppose $A$ is an arbitrary $n$-square matrix. Finding its inverse $A^{-1}$ reduces, as above, to finding the solution of a collection of $n \times n$ systems of linear equations. The solution of such systems and an efficient way of solving such a collection of systems is treated in Chapter 3.

2.10 Special Types of Square Matrices

This section describes a number of special kinds of square matrices.

Diagonal and Triangular Matrices

A square matrix $D = [d_{ij}]$ is diagonal if its nondiagonal entries are all zero. Such a matrix is sometimes denoted by
$$D = \operatorname{diag}(d_{11}, d_{22}, \ldots, d_{nn})$$

where some or all the $d_{ii}$ may be zero. For example,
$$\begin{bmatrix} 3 & 0 & 0 \\ 0 & -7 & 0 \\ 0 & 0 & 2 \end{bmatrix}, \qquad \begin{bmatrix} 4 & 0 \\ 0 & -5 \end{bmatrix}, \qquad \begin{bmatrix} 6 & & & \\ & 0 & & \\ & & -9 & \\ & & & 8 \end{bmatrix}$$
are diagonal matrices, which may be represented, respectively, by
$$\operatorname{diag}(3, -7, 2), \qquad \operatorname{diag}(4, -5), \qquad \operatorname{diag}(6, 0, -9, 8)$$
(Observe that patterns of 0's in the third matrix have been omitted.)
A square matrix $A = [a_{ij}]$ is upper triangular or simply triangular if all entries below the (main) diagonal are equal to 0; that is, if $a_{ij} = 0$ for $i > j$. Generic upper triangular matrices of orders 2, 3, 4 are as follows:
$$\begin{bmatrix} a_{11} & a_{12} \\ 0 & a_{22} \end{bmatrix}, \qquad \begin{bmatrix} b_{11} & b_{12} & b_{13} \\ & b_{22} & b_{23} \\ & & b_{33} \end{bmatrix}, \qquad \begin{bmatrix} c_{11} & c_{12} & c_{13} & c_{14} \\ & c_{22} & c_{23} & c_{24} \\ & & c_{33} & c_{34} \\ & & & c_{44} \end{bmatrix}$$
(As with diagonal matrices, it is common practice to omit patterns of 0's.)
The following theorem applies.

THEOREM 2.5: Suppose $A = [a_{ij}]$ and $B = [b_{ij}]$ are $n \times n$ (upper) triangular matrices. Then
(i) $A + B$, $kA$, $AB$ are triangular with respective diagonals:
$$(a_{11} + b_{11}, \ldots, a_{nn} + b_{nn}), \qquad (ka_{11}, \ldots, ka_{nn}), \qquad (a_{11}b_{11}, \ldots, a_{nn}b_{nn})$$
(ii) For any polynomial $f(x)$, the matrix $f(A)$ is triangular with diagonal
$$(f(a_{11}), f(a_{22}), \ldots, f(a_{nn}))$$
(iii) $A$ is invertible if and only if each diagonal element $a_{ii} \neq 0$, and when $A^{-1}$ exists it is also triangular.

A lower triangular matrix is a square matrix whose entries above the diagonal are all zero. We note that Theorem 2.5 remains true if we replace "triangular" by either "lower triangular" or "diagonal."
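A quick numerical spot-check of Theorem 2.5(i) (illustrative NumPy, arbitrary triangular matrices of our own choosing):

```python
# The product of upper triangular matrices is upper triangular,
# with diagonal (a11*b11, ..., ann*bnn).
import numpy as np

A = np.array([[1, 4, 2], [0, 3, 5], [0, 0, 6]])
B = np.array([[2, 1, 7], [0, 5, 1], [0, 0, 4]])

P = A @ B
assert np.array_equal(P, np.triu(P))                        # still upper triangular
assert np.array_equal(np.diag(P), np.diag(A) * np.diag(B))  # diagonals multiply
```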

Remark: A nonempty collection $\mathcal{A}$ of matrices is called an algebra (of matrices) if $\mathcal{A}$ is closed under the operations of matrix addition, scalar multiplication, and matrix multiplication. Clearly, the square matrices of a given order form an algebra of matrices, but so do the scalar, diagonal, triangular, and lower triangular matrices.

Special Real Square Matrices: Symmetric, Orthogonal, Normal

[Optional until Chapter 12]
Suppose now $A$ is a square matrix with real entries; that is, a real square matrix. The relationship between $A$ and its transpose $A^T$ yields important kinds of matrices.

(a) Symmetric Matrices

A matrix $A$ is symmetric if $A^T = A$. Equivalently, $A = [a_{ij}]$ is symmetric if symmetric elements (mirror elements with respect to the diagonal) are equal; that is, if each $a_{ij} = a_{ji}$.
A matrix $A$ is skew-symmetric if $A^T = -A$ or, equivalently, if each $a_{ij} = -a_{ji}$. Clearly, the diagonal elements of such a matrix must be zero, because $a_{ii} = -a_{ii}$ implies $a_{ii} = 0$.
(Note that a matrix $A$ must be square if $A^T = A$ or $A^T = -A$.)
EXAMPLE 2.12 Let $A = \begin{bmatrix} 2 & -3 & 5 \\ -3 & 6 & 7 \\ 5 & 7 & -8 \end{bmatrix}$, $B = \begin{bmatrix} 0 & 3 & -4 \\ -3 & 0 & 5 \\ 4 & -5 & 0 \end{bmatrix}$, $C = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}$.

(a) By inspection, the symmetric elements in $A$ are equal, or $A^T = A$. Thus, $A$ is symmetric.
(b) The diagonal elements of $B$ are 0 and symmetric elements are negatives of each other, or $B^T = -B$. Thus, $B$ is skew-symmetric.
(c) Because $C$ is not square, $C$ is neither symmetric nor skew-symmetric.

(b) Orthogonal Matrices

A real matrix $A$ is orthogonal if $A^T = A^{-1}$; that is, if $AA^T = A^T A = I$. Thus, $A$ must necessarily be square and invertible.

EXAMPLE 2.13 Let $A = \begin{bmatrix} \frac{1}{9} & \frac{8}{9} & -\frac{4}{9} \\ \frac{4}{9} & -\frac{4}{9} & -\frac{7}{9} \\ \frac{8}{9} & \frac{1}{9} & \frac{4}{9} \end{bmatrix}$. Multiplying $A$ by $A^T$ yields $I$; that is, $AA^T = I$. This means $A^T A = I$, as well. Thus, $A^T = A^{-1}$; that is, $A$ is orthogonal.
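A numerical confirmation (illustrative NumPy) that the matrix of Example 2.13 is orthogonal:

```python
import numpy as np

A = np.array([[1,  8, -4],
              [4, -4, -7],
              [8,  1,  4]]) / 9

assert np.allclose(A @ A.T, np.eye(3))   # rows form an orthonormal set
assert np.allclose(A.T @ A, np.eye(3))   # columns form an orthonormal set
```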

Now suppose $A$ is a real orthogonal $3 \times 3$ matrix with rows
$$u_1 = (a_1, a_2, a_3), \qquad u_2 = (b_1, b_2, b_3), \qquad u_3 = (c_1, c_2, c_3)$$
Because $A$ is orthogonal, we must have $AA^T = I$. Namely,
$$AA^T = \begin{bmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{bmatrix}\begin{bmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = I$$
Multiplying $A$ by $A^T$ and setting each entry equal to the corresponding entry in $I$ yields the following nine equations:
$$a_1^2 + a_2^2 + a_3^2 = 1, \qquad a_1 b_1 + a_2 b_2 + a_3 b_3 = 0, \qquad a_1 c_1 + a_2 c_2 + a_3 c_3 = 0$$
$$b_1 a_1 + b_2 a_2 + b_3 a_3 = 0, \qquad b_1^2 + b_2^2 + b_3^2 = 1, \qquad b_1 c_1 + b_2 c_2 + b_3 c_3 = 0$$
$$c_1 a_1 + c_2 a_2 + c_3 a_3 = 0, \qquad c_1 b_1 + c_2 b_2 + c_3 b_3 = 0, \qquad c_1^2 + c_2^2 + c_3^2 = 1$$
Accordingly, $u_1 \cdot u_1 = 1$, $u_2 \cdot u_2 = 1$, $u_3 \cdot u_3 = 1$, and $u_i \cdot u_j = 0$ for $i \neq j$. Thus, the rows $u_1$, $u_2$, $u_3$ are unit vectors and are orthogonal to each other.

Generally speaking, vectors $u_1, u_2, \ldots, u_m$ in $\mathbf{R}^n$ are said to form an orthonormal set of vectors if the vectors are unit vectors and are orthogonal to each other; that is,
$$u_i \cdot u_j = \begin{cases} 0 & \text{if } i \neq j \\ 1 & \text{if } i = j \end{cases}$$
In other words, $u_i \cdot u_j = \delta_{ij}$, where $\delta_{ij}$ is the Kronecker delta function.
We have shown that the condition $AA^T = I$ implies that the rows of $A$ form an orthonormal set of vectors. The condition $A^T A = I$ similarly implies that the columns of $A$ also form an orthonormal set of vectors. Furthermore, because each step is reversible, the converse is true.
The above results for $3 \times 3$ matrices are true in general. That is, the following theorem holds.

THEOREM 2.6: Let $A$ be a real matrix. Then the following are equivalent:
(a) $A$ is orthogonal.
(b) The rows of $A$ form an orthonormal set.
(c) The columns of $A$ form an orthonormal set.

For $n = 2$, we have the following result (proved in Problem 2.28).



THEOREM 2.7: Let $A$ be a real $2 \times 2$ orthogonal matrix. Then, for some real number $\theta$,
$$A = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \qquad\text{or}\qquad A = \begin{bmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{bmatrix}$$

(c) Normal Matrices

A real matrix $A$ is normal if it commutes with its transpose $A^T$; that is, if $AA^T = A^T A$. If $A$ is symmetric, orthogonal, or skew-symmetric, then $A$ is normal. There are also other normal matrices.

EXAMPLE 2.14 Let $A = \begin{bmatrix} 6 & -3 \\ 3 & 6 \end{bmatrix}$. Then
$$AA^T = \begin{bmatrix} 6 & -3 \\ 3 & 6 \end{bmatrix}\begin{bmatrix} 6 & 3 \\ -3 & 6 \end{bmatrix} = \begin{bmatrix} 45 & 0 \\ 0 & 45 \end{bmatrix} \qquad\text{and}\qquad A^T A = \begin{bmatrix} 6 & 3 \\ -3 & 6 \end{bmatrix}\begin{bmatrix} 6 & -3 \\ 3 & 6 \end{bmatrix} = \begin{bmatrix} 45 & 0 \\ 0 & 45 \end{bmatrix}$$
Because $AA^T = A^T A$, the matrix $A$ is normal.

2.11 Complex Matrices

Let $A$ be a complex matrix; that is, a matrix with complex entries. Recall (Section 1.7) that if $z = a + bi$ is a complex number, then $\bar{z} = a - bi$ is its conjugate. The conjugate of a complex matrix $A$, written $\bar{A}$, is the matrix obtained from $A$ by taking the conjugate of each entry in $A$. That is, if $A = [a_{ij}]$, then $\bar{A} = [b_{ij}]$, where $b_{ij} = \bar{a}_{ij}$. (We denote this fact by writing $\bar{A} = [\bar{a}_{ij}]$.)
The two operations of transpose and conjugation commute for any complex matrix $A$, and the special notation $A^H$ is used for the conjugate transpose of $A$. That is,
$$A^H = (\bar{A})^T = \overline{A^T}$$
Note that if $A$ is real, then $A^H = A^T$. [Some texts use $A^*$ instead of $A^H$.]

EXAMPLE 2.15 Let $A = \begin{bmatrix} 2 + 8i & 5 - 3i & 4 - 7i \\ 6i & 1 - 4i & 3 + 2i \end{bmatrix}$. Then $A^H = \begin{bmatrix} 2 - 8i & -6i \\ 5 + 3i & 1 + 4i \\ 4 + 7i & 3 - 2i \end{bmatrix}$.
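In NumPy (an illustrative aside, not part of the text), the conjugate transpose $A^H$ is spelled `A.conj().T`. Reproducing Example 2.15:

```python
import numpy as np

A = np.array([[2 + 8j, 5 - 3j, 4 - 7j],
              [6j,     1 - 4j, 3 + 2j]])

AH = A.conj().T   # rows of AH: [2-8i, -6i], [5+3i, 1+4i], [4+7i, 3-2i]
print(AH)
```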

Special Complex Matrices: Hermitian, Unitary, Normal [Optional until Chapter 12]
Consider a complex matrix $A$. The relationship between $A$ and its conjugate transpose $A^H$ yields important kinds of complex matrices (which are analogous to the kinds of real matrices described above).

A complex matrix $A$ is said to be Hermitian or skew-Hermitian according as to whether
$$A^H = A \qquad\text{or}\qquad A^H = -A.$$
Clearly, $A = [a_{ij}]$ is Hermitian if and only if symmetric elements are conjugate; that is, if each $a_{ij} = \bar{a}_{ji}$, in which case each diagonal element $a_{ii}$ must be real. Similarly, if $A$ is skew-Hermitian, then each diagonal element $a_{ii}$ must be pure imaginary, because $a_{ii} = -\bar{a}_{ii}$. (Note that $A$ must be square if $A^H = A$ or $A^H = -A$.)
A complex matrix $A$ is unitary if $A^H A = AA^H = I$; that is, if
$$A^H = A^{-1}.$$
Thus, $A$ must necessarily be square and invertible. We note that a complex matrix $A$ is unitary if and only if its rows (columns) form an orthonormal set relative to the dot product of complex vectors.

A complex matrix $A$ is said to be normal if it commutes with $A^H$; that is, if
$$AA^H = A^H A$$
(Thus, $A$ must be a square matrix.) This definition reduces to that for real matrices when $A$ is real.

EXAMPLE 2.16 Consider the following complex matrices:
$$A = \begin{bmatrix} 3 & 1 - 2i & 4 + 7i \\ 1 + 2i & -4 & -2i \\ 4 - 7i & 2i & 5 \end{bmatrix} \qquad B = \frac{1}{2}\begin{bmatrix} 1 & -i & -1 + i \\ i & 1 & 1 + i \\ 1 + i & -1 + i & 0 \end{bmatrix} \qquad C = \begin{bmatrix} 2 + 3i & 1 \\ i & 1 + 2i \end{bmatrix}$$

(a) By inspection, the diagonal elements of $A$ are real, and the symmetric elements $1 - 2i$ and $1 + 2i$ are conjugate, $4 + 7i$ and $4 - 7i$ are conjugate, and $-2i$ and $2i$ are conjugate. Thus, $A$ is Hermitian.
(b) Multiplying $B$ by $B^H$ yields $I$; that is, $BB^H = I$. This implies $B^H B = I$, as well. Thus, $B^H = B^{-1}$, which means $B$ is unitary.
(c) To show $C$ is normal, we evaluate $CC^H$ and $C^H C$:
$$CC^H = \begin{bmatrix} 2 + 3i & 1 \\ i & 1 + 2i \end{bmatrix}\begin{bmatrix} 2 - 3i & -i \\ 1 & 1 - 2i \end{bmatrix} = \begin{bmatrix} 14 & 4 - 4i \\ 4 + 4i & 6 \end{bmatrix}$$
and similarly $C^H C = \begin{bmatrix} 14 & 4 - 4i \\ 4 + 4i & 6 \end{bmatrix}$. Because $CC^H = C^H C$, the complex matrix $C$ is normal.

We note that when a matrix A is real, Hermitian is the same as symmetric, and unitary is the same as
orthogonal.
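The three classifications of Example 2.16 can be verified mechanically (illustrative NumPy; `H` is a local helper of our own for the conjugate transpose):

```python
import numpy as np

A = np.array([[3, 1 - 2j, 4 + 7j], [1 + 2j, -4, -2j], [4 - 7j, 2j, 5]])
B = np.array([[1, -1j, -1 + 1j], [1j, 1, 1 + 1j], [1 + 1j, -1 + 1j, 0]]) / 2
C = np.array([[2 + 3j, 1], [1j, 1 + 2j]])

H = lambda M: M.conj().T                 # conjugate transpose
assert np.allclose(H(A), A)              # A is Hermitian
assert np.allclose(B @ H(B), np.eye(3))  # B is unitary
assert np.allclose(C @ H(C), H(C) @ C)   # C is normal
```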

2.12 Block Matrices

Using a system of horizontal and vertical (dashed) lines, we can partition a matrix $A$ into submatrices called blocks (or cells) of $A$. Clearly a given matrix may be divided into blocks in different ways. For example, the $4 \times 5$ matrix
$$\left[\begin{array}{cc|cc|c} 1 & -2 & 0 & 1 & 3 \\ 2 & 3 & 5 & 7 & -2 \\ \hline 3 & 1 & 4 & 5 & 9 \\ 4 & 6 & -3 & 1 & 8 \end{array}\right]$$
may be partitioned as shown, or by any other placement of the horizontal and vertical lines.
The convenience of the partition of matrices, say $A$ and $B$, into blocks is that the result of operations on $A$ and $B$ can be obtained by carrying out the computation with the blocks, just as if they were the actual elements of the matrices. This is illustrated below, where the notation $A = [A_{ij}]$ will be used for a block matrix $A$ with blocks $A_{ij}$.
Suppose that $A = [A_{ij}]$ and $B = [B_{ij}]$ are block matrices with the same numbers of row and column blocks, and suppose that corresponding blocks have the same size. Then adding the corresponding blocks of $A$ and $B$ also adds the corresponding elements of $A$ and $B$, and multiplying each block of $A$ by a scalar $k$ multiplies each element of $A$ by $k$. Thus,
$$A + B = \begin{bmatrix} A_{11} + B_{11} & A_{12} + B_{12} & \cdots & A_{1n} + B_{1n} \\ A_{21} + B_{21} & A_{22} + B_{22} & \cdots & A_{2n} + B_{2n} \\ \vdots & \vdots & & \vdots \\ A_{m1} + B_{m1} & A_{m2} + B_{m2} & \cdots & A_{mn} + B_{mn} \end{bmatrix}$$
and
$$kA = \begin{bmatrix} kA_{11} & kA_{12} & \cdots & kA_{1n} \\ kA_{21} & kA_{22} & \cdots & kA_{2n} \\ \vdots & \vdots & & \vdots \\ kA_{m1} & kA_{m2} & \cdots & kA_{mn} \end{bmatrix}$$

The case of matrix multiplication is less obvious, but still true. That is, suppose that $U = [U_{ik}]$ and $V = [V_{kj}]$ are block matrices such that the number of columns of each block $U_{ik}$ is equal to the number of rows of each block $V_{kj}$. (Thus, each product $U_{ik}V_{kj}$ is defined.) Then
$$UV = \begin{bmatrix} W_{11} & W_{12} & \cdots & W_{1n} \\ W_{21} & W_{22} & \cdots & W_{2n} \\ \vdots & \vdots & & \vdots \\ W_{m1} & W_{m2} & \cdots & W_{mn} \end{bmatrix}, \qquad\text{where}\quad W_{ij} = U_{i1}V_{1j} + U_{i2}V_{2j} + \cdots + U_{ip}V_{pj}$$
The proof of the above formula for $UV$ is straightforward but detailed and lengthy. It is left as an exercise (Problem 2.85).
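Even without the general proof, the formula is easy to test numerically. A sketch (illustrative NumPy, random integer matrices of our own choosing) that multiplies $2 \times 2$ grids of blocks as if the blocks were scalars:

```python
import numpy as np

rng = np.random.default_rng(0)
U = rng.integers(-5, 5, (4, 6))
V = rng.integers(-5, 5, (6, 4))

# Partition U into 2 x 2 blocks of shape (2, 3); V into blocks of shape (3, 2).
U11, U12, U21, U22 = U[:2, :3], U[:2, 3:], U[2:, :3], U[2:, 3:]
V11, V12, V21, V22 = V[:3, :2], V[:3, 2:], V[3:, :2], V[3:, 2:]

W = np.block([[U11 @ V11 + U12 @ V21, U11 @ V12 + U12 @ V22],
              [U21 @ V11 + U22 @ V21, U21 @ V12 + U22 @ V22]])
assert np.array_equal(W, U @ V)   # blockwise product equals the ordinary product
```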

Square Block Matrices

Let $M$ be a block matrix. Then $M$ is called a square block matrix if
(i) $M$ is a square matrix,
(ii) the blocks form a square matrix, and
(iii) the diagonal blocks are also square matrices.
The latter two conditions will occur if and only if there are the same number of horizontal and vertical lines and they are placed symmetrically.

Consider the following two block matrices:
$$A = \left[\begin{array}{cc|c|cc} 1 & 2 & 3 & 4 & 5 \\ 1 & 1 & 1 & 1 & 1 \\ \hline 9 & 8 & 7 & 6 & 5 \\ 4 & 4 & 4 & 4 & 4 \\ \hline 3 & 5 & 3 & 5 & 3 \end{array}\right] \qquad\text{and}\qquad B = \left[\begin{array}{cc|cc|c} 1 & 2 & 3 & 4 & 5 \\ 1 & 1 & 1 & 1 & 1 \\ \hline 9 & 8 & 7 & 6 & 5 \\ 4 & 4 & 4 & 4 & 4 \\ \hline 3 & 5 & 3 & 5 & 3 \end{array}\right]$$
The block matrix $A$ is not a square block matrix, because the second and third diagonal blocks are not square. On the other hand, the block matrix $B$ is a square block matrix.

Block Diagonal Matrices

Let $M = [A_{ij}]$ be a square block matrix such that the nondiagonal blocks are all zero matrices; that is, $A_{ij} = 0$ when $i \neq j$. Then $M$ is called a block diagonal matrix. We sometimes denote such a block diagonal matrix by writing
$$M = \operatorname{diag}(A_{11}, A_{22}, \ldots, A_{rr}) \qquad\text{or}\qquad M = A_{11} \oplus A_{22} \oplus \cdots \oplus A_{rr}$$
The importance of block diagonal matrices is that the algebra of the block matrix is frequently reduced to the algebra of the individual blocks. Specifically, suppose $f(x)$ is a polynomial and $M$ is the above block diagonal matrix. Then $f(M)$ is a block diagonal matrix, and
$$f(M) = \operatorname{diag}(f(A_{11}), f(A_{22}), \ldots, f(A_{rr}))$$
Also, $M$ is invertible if and only if each $A_{ii}$ is invertible, and, in such a case, $M^{-1}$ is a block diagonal matrix, and
$$M^{-1} = \operatorname{diag}(A_{11}^{-1}, A_{22}^{-1}, \ldots, A_{rr}^{-1})$$
Analogously, a square block matrix is called a block upper triangular matrix if the blocks below the diagonal are zero matrices and a block lower triangular matrix if the blocks above the diagonal are zero matrices.
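Returning to block diagonal matrices, the inverse rule can be confirmed with SciPy's `block_diag` helper (an illustrative check, not from the text):

```python
import numpy as np
from scipy.linalg import block_diag

A11 = np.array([[2., 5.], [1., 3.]])
A22 = np.array([[1., 2.], [3., -4.]])

M = block_diag(A11, A22)   # diag(A11, A22) assembled as one 4 x 4 matrix
expected = block_diag(np.linalg.inv(A11), np.linalg.inv(A22))
assert np.allclose(np.linalg.inv(M), expected)   # M^(-1) = diag(A11^(-1), A22^(-1))
```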

EXAMPLE 2.17 Determine which of the following square block matrices are block upper triangular, block lower triangular, or block diagonal:
$$A = \left[\begin{array}{cc|c} 1 & 2 & 0 \\ 3 & 4 & 5 \\ \hline 0 & 0 & 6 \end{array}\right], \quad B = \left[\begin{array}{c|cc|c} 1 & 0 & 0 & 0 \\ \hline 2 & 3 & 4 & 0 \\ 5 & 0 & 6 & 0 \\ \hline 0 & 7 & 8 & 9 \end{array}\right], \quad C = \left[\begin{array}{c|cc} 1 & 0 & 0 \\ \hline 0 & 2 & 3 \\ 0 & 4 & 5 \end{array}\right], \quad D = \left[\begin{array}{cc|c} 1 & 2 & 0 \\ 3 & 4 & 5 \\ \hline 0 & 6 & 7 \end{array}\right]$$

(a) $A$ is block upper triangular because the block below the diagonal is a zero block.
(b) $B$ is block lower triangular because all blocks above the diagonal are zero blocks.
(c) $C$ is block diagonal because the blocks above and below the diagonal are zero blocks.
(d) $D$ is neither block upper triangular nor block lower triangular. Also, no other partitioning of $D$ will make it into either a block upper triangular matrix or a block lower triangular matrix.

SOLVED PROBLEMS

Matrix Addition and Scalar Multiplication


2.1. Given $A = \begin{bmatrix} 1 & -2 & 3 \\ 4 & 5 & -6 \end{bmatrix}$ and $B = \begin{bmatrix} 3 & 0 & 2 \\ -7 & 1 & 8 \end{bmatrix}$, find: (a) $A + B$, (b) $2A - 3B$.

(a) Add the corresponding elements:
$$A + B = \begin{bmatrix} 1 + 3 & -2 + 0 & 3 + 2 \\ 4 - 7 & 5 + 1 & -6 + 8 \end{bmatrix} = \begin{bmatrix} 4 & -2 & 5 \\ -3 & 6 & 2 \end{bmatrix}$$

(b) First perform the scalar multiplications and then the matrix addition:
$$2A - 3B = \begin{bmatrix} 2 & -4 & 6 \\ 8 & 10 & -12 \end{bmatrix} + \begin{bmatrix} -9 & 0 & -6 \\ 21 & -3 & -24 \end{bmatrix} = \begin{bmatrix} -7 & -4 & 0 \\ 29 & 7 & -36 \end{bmatrix}$$
(Note that we multiply $B$ by $-3$ and then add, rather than multiplying $B$ by 3 and subtracting. This usually prevents errors.)

2.2. Find $x, y, z, t$ where $3\begin{bmatrix} x & y \\ z & t \end{bmatrix} = \begin{bmatrix} x & 6 \\ -1 & 2t \end{bmatrix} + \begin{bmatrix} 4 & x + y \\ z + t & 3 \end{bmatrix}$.
Write each side as a single matrix:
$$\begin{bmatrix} 3x & 3y \\ 3z & 3t \end{bmatrix} = \begin{bmatrix} x + 4 & x + y + 6 \\ z + t - 1 & 2t + 3 \end{bmatrix}$$
Set corresponding entries equal to each other to obtain the following system of four equations:
$$3x = x + 4, \qquad 3y = x + y + 6, \qquad 3z = z + t - 1, \qquad 3t = 2t + 3$$
or
$$2x = 4, \qquad 2y = 6 + x, \qquad 2z = t - 1, \qquad t = 3$$
The solution is $x = 2$, $y = 4$, $z = 1$, $t = 3$.

2.3. Prove Theorem 2.1 (i) and (v): (i) $(A + B) + C = A + (B + C)$, (v) $k(A + B) = kA + kB$.
Suppose $A = [a_{ij}]$, $B = [b_{ij}]$, $C = [c_{ij}]$. The proof reduces to showing that corresponding $ij$-entries on each side of each matrix equation are equal. [We prove only (i) and (v), because the other parts of Theorem 2.1 are proved similarly.]

(i) The $ij$-entry of $A + B$ is $a_{ij} + b_{ij}$; hence, the $ij$-entry of $(A + B) + C$ is $(a_{ij} + b_{ij}) + c_{ij}$. On the other hand, the $ij$-entry of $B + C$ is $b_{ij} + c_{ij}$; hence, the $ij$-entry of $A + (B + C)$ is $a_{ij} + (b_{ij} + c_{ij})$. However, for scalars in $K$,
$$(a_{ij} + b_{ij}) + c_{ij} = a_{ij} + (b_{ij} + c_{ij})$$
Thus, $(A + B) + C$ and $A + (B + C)$ have identical $ij$-entries. Therefore, $(A + B) + C = A + (B + C)$.
(v) The $ij$-entry of $A + B$ is $a_{ij} + b_{ij}$; hence, $k(a_{ij} + b_{ij})$ is the $ij$-entry of $k(A + B)$. On the other hand, the $ij$-entries of $kA$ and $kB$ are $ka_{ij}$ and $kb_{ij}$, respectively. Thus, $ka_{ij} + kb_{ij}$ is the $ij$-entry of $kA + kB$. However, for scalars in $K$,
$$k(a_{ij} + b_{ij}) = ka_{ij} + kb_{ij}$$
Thus, $k(A + B)$ and $kA + kB$ have identical $ij$-entries. Therefore, $k(A + B) = kA + kB$.

Matrix Multiplication

2.4. Calculate: (a) $[8, -4, 5]\begin{bmatrix} 3 \\ 2 \\ -1 \end{bmatrix}$, (b) $[6, -1, 7, 5]\begin{bmatrix} 4 \\ -9 \\ -3 \\ 2 \end{bmatrix}$, (c) $[3, 8, -2, 4]\begin{bmatrix} 5 \\ -1 \\ 6 \end{bmatrix}$

(a) Multiply the corresponding entries and add:
$$[8, -4, 5]\begin{bmatrix} 3 \\ 2 \\ -1 \end{bmatrix} = 8(3) + (-4)(2) + 5(-1) = 24 - 8 - 5 = 11$$

(b) Multiply the corresponding entries and add:
$$[6, -1, 7, 5]\begin{bmatrix} 4 \\ -9 \\ -3 \\ 2 \end{bmatrix} = 24 + 9 - 21 + 10 = 22$$

(c) The product is not defined when the row matrix and the column matrix have different numbers of elements.

2.5. Let $(r \times s)$ denote an $r \times s$ matrix. Find the sizes of those matrix products that are defined:
(a) $(2 \times 3)(3 \times 4)$, (c) $(1 \times 2)(3 \times 1)$, (e) $(4 \times 4)(3 \times 3)$,
(b) $(4 \times 1)(1 \times 2)$, (d) $(5 \times 2)(2 \times 3)$, (f) $(2 \times 2)(2 \times 4)$
In each case, the product is defined if the inner numbers are equal, and then the product will have the size of the outer numbers in the given order.
(a) $2 \times 4$, (c) not defined, (e) not defined,
(b) $4 \times 2$, (d) $5 \times 3$, (f) $2 \times 4$

2.6. Let $A = \begin{bmatrix} 1 & 3 \\ 2 & -1 \end{bmatrix}$ and $B = \begin{bmatrix} 2 & 0 & -4 \\ 3 & -2 & 6 \end{bmatrix}$. Find: (a) $AB$, (b) $BA$.

(a) Because $A$ is a $2 \times 2$ matrix and $B$ a $2 \times 3$ matrix, the product $AB$ is defined and is a $2 \times 3$ matrix. To obtain the entries in the first row of $AB$, multiply the first row $[1, 3]$ of $A$ by the columns $\begin{bmatrix} 2 \\ 3 \end{bmatrix}$, $\begin{bmatrix} 0 \\ -2 \end{bmatrix}$, $\begin{bmatrix} -4 \\ 6 \end{bmatrix}$ of $B$, respectively:
$$[2 + 9,\quad 0 - 6,\quad -4 + 18] = [11, -6, 14]$$
To obtain the entries in the second row of $AB$, multiply the second row $[2, -1]$ of $A$ by the columns of $B$:
$$[4 - 3,\quad 0 + 2,\quad -8 - 6] = [1, 2, -14]$$
Thus,
$$AB = \begin{bmatrix} 11 & -6 & 14 \\ 1 & 2 & -14 \end{bmatrix}$$

(b) The size of $B$ is $2 \times 3$ and that of $A$ is $2 \times 2$. The inner numbers 3 and 2 are not equal; hence, the product $BA$ is not defined.

2.7. Find $AB$, where $A = \begin{bmatrix} 2 & 3 & -1 \\ 4 & -2 & 5 \end{bmatrix}$ and $B = \begin{bmatrix} 2 & -1 & 0 & 6 \\ 1 & 3 & -5 & 1 \\ 4 & 1 & -2 & 2 \end{bmatrix}$.
Because $A$ is a $2 \times 3$ matrix and $B$ a $3 \times 4$ matrix, the product $AB$ is defined and is a $2 \times 4$ matrix. Multiply the rows of $A$ by the columns of $B$ to obtain
$$AB = \begin{bmatrix} 4 + 3 - 4 & -2 + 9 - 1 & 0 - 15 + 2 & 12 + 3 - 2 \\ 8 - 2 + 20 & -4 - 6 + 5 & 0 + 10 - 10 & 24 - 2 + 10 \end{bmatrix} = \begin{bmatrix} 3 & 6 & -13 & 13 \\ 26 & -5 & 0 & 32 \end{bmatrix}$$

2.8. Find: (a) $\begin{bmatrix} 1 & 6 \\ -3 & 5 \end{bmatrix}\begin{bmatrix} 2 \\ -7 \end{bmatrix}$, (b) $\begin{bmatrix} 2 \\ -7 \end{bmatrix}\begin{bmatrix} 1 & 6 \\ -3 & 5 \end{bmatrix}$, (c) $[2, -7]\begin{bmatrix} 1 & 6 \\ -3 & 5 \end{bmatrix}$.

(a) The first factor is $2 \times 2$ and the second is $2 \times 1$, so the product is defined as a $2 \times 1$ matrix:
$$\begin{bmatrix} 1 & 6 \\ -3 & 5 \end{bmatrix}\begin{bmatrix} 2 \\ -7 \end{bmatrix} = \begin{bmatrix} 2 - 42 \\ -6 - 35 \end{bmatrix} = \begin{bmatrix} -40 \\ -41 \end{bmatrix}$$
(b) The product is not defined, because the first factor is $2 \times 1$ and the second factor is $2 \times 2$.
(c) The first factor is $1 \times 2$ and the second factor is $2 \times 2$, so the product is defined as a $1 \times 2$ (row) matrix:
$$[2, -7]\begin{bmatrix} 1 & 6 \\ -3 & 5 \end{bmatrix} = [2 + 21,\ 12 - 35] = [23, -23]$$

2.9. Clearly, $0A = 0$ and $A0 = 0$, where the 0's are zero matrices (with possibly different sizes). Find matrices $A$ and $B$ with no zero entries such that $AB = 0$.
Let $A = \begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix}$ and $B = \begin{bmatrix} 6 & 2 \\ -3 & -1 \end{bmatrix}$. Then $AB = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$.
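A one-line check (illustrative NumPy) that this pair of nonzero matrices multiplies to the zero matrix:

```python
import numpy as np

A = np.array([[1, 2], [2, 4]])
B = np.array([[6, 2], [-3, -1]])
assert np.array_equal(A @ B, np.zeros((2, 2)))   # AB = 0 with no zero entries
```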

2.10. Prove Theorem 2.2(i): $(AB)C = A(BC)$.

Let $A = [a_{ij}]$, $B = [b_{jk}]$, $C = [c_{kl}]$, and let $AB = S = [s_{ik}]$ and $BC = T = [t_{jl}]$. Then
$$s_{ik} = \sum_{j=1}^{m} a_{ij}b_{jk} \qquad\text{and}\qquad t_{jl} = \sum_{k=1}^{n} b_{jk}c_{kl}$$
Multiplying $S = AB$ by $C$, the $il$-entry of $(AB)C$ is
$$s_{i1}c_{1l} + s_{i2}c_{2l} + \cdots + s_{in}c_{nl} = \sum_{k=1}^{n} s_{ik}c_{kl} = \sum_{k=1}^{n}\sum_{j=1}^{m} (a_{ij}b_{jk})c_{kl}$$
On the other hand, multiplying $A$ by $T = BC$, the $il$-entry of $A(BC)$ is
$$a_{i1}t_{1l} + a_{i2}t_{2l} + \cdots + a_{im}t_{ml} = \sum_{j=1}^{m} a_{ij}t_{jl} = \sum_{j=1}^{m}\sum_{k=1}^{n} a_{ij}(b_{jk}c_{kl})$$
The above sums are equal; that is, corresponding elements in $(AB)C$ and $A(BC)$ are equal. Thus, $(AB)C = A(BC)$.
