
Chapter 7

Matrix calculus

Matrix calculus is concerned with rules for operating on functions of matrices. A matrix is a set of numbers arranged in a linear or rectangular array, representing the coefficients of linear transformations or of systems of linear equations. Matrices are useful because they enable us to consider an array of many numbers as a single object, denote it by a single symbol, and perform calculations with these symbols in a very compact form.
A rectangular array of (real or complex) numbers of the form

\[
\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots &        & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{pmatrix}
\]

is called a matrix. The numbers $a_{11}, \ldots, a_{mn}$ are the elements of the matrix. The horizontal lines are called rows or row vectors, and the vertical lines are called columns or column vectors of the matrix. A matrix with m rows and n columns is called an $m \times n$ matrix (read "m by n matrix").

Matrices will be denoted by capital (upper case) bold-faced letters A, B, etc., or by $[a_{jk}]$, $[b_{jk}]$, etc., that is, by writing the general element of a matrix, enclosed in brackets.
In the double-subscript notation for the elements, the first
subscript always denotes the row and the second subscript the column
containing the given element.

Algebraic operations for Matrices

Addition of matrices is defined only for matrices having the same number of rows and the same number of columns, and is then defined as follows. The sum of two $m \times n$ matrices $A = [a_{jk}]$ and $B = [b_{jk}]$ is the $m \times n$ matrix $C = [c_{jk}]$ with elements

\[
c_{jk} = a_{jk} + b_{jk}, \qquad j = 1, \ldots, m; \quad k = 1, \ldots, n
\]

and is written as

\[
C = A + B
\]
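The element-wise rule above can be sketched in a few lines of Python. This is a minimal illustration using nested lists for matrices; the function name `mat_add` is ours, not from the text.

```python
def mat_add(A, B):
    """Sum of two m x n matrices (nested lists): c_jk = a_jk + b_jk."""
    # Addition is defined only for matrices of the same shape.
    assert len(A) == len(B) and len(A[0]) == len(B[0]), "shapes must match"
    return [[a + b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(A, B)]

A = [[1, 2], [3, 4]]
B = [[10, 20], [30, 40]]
C = mat_add(A, B)   # [[11, 22], [33, 44]]
```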

Transpose of a matrix

The transpose $A^T$ of an $m \times n$ matrix $A = [a_{jk}]$ is the $n \times m$ matrix obtained by interchanging the rows and columns in A, that is

\[
A^T = [a_{kj}] = \begin{pmatrix}
a_{11} & a_{21} & \cdots & a_{m1} \\
a_{12} & a_{22} & \cdots & a_{m2} \\
\vdots & \vdots &        & \vdots \\
a_{1n} & a_{2n} & \cdots & a_{mn}
\end{pmatrix}
\]

Example: if

\[
b = \begin{pmatrix} 7 & 5 & -2 \end{pmatrix}, \quad \text{then} \quad
b^T = \begin{pmatrix} 7 \\ 5 \\ -2 \end{pmatrix}
\]
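The transpose operation and the row-vector example above can be sketched as follows (a minimal illustration; `transpose` is our name for the helper):

```python
def transpose(A):
    """Return A^T: the rows and columns of A interchanged."""
    return [list(col) for col in zip(*A)]

b = [[7, 5, -2]]        # the 1 x 3 row vector (7 5 -2)
bT = transpose(b)       # its transpose: the 3 x 1 column vector
```

Transposing twice recovers the original matrix, since interchanging rows and columns is its own inverse.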
A real square matrix $A = [a_{jk}]$ is said to be symmetric if it is equal to its transpose,

\[
A^T = A, \quad \text{that is,} \quad a_{kj} = a_{jk} \qquad (j, k = 1, \ldots, n)
\]

A real square matrix $A = [a_{jk}]$ is said to be skew-symmetric if

\[
A^T = -A, \quad \text{that is,} \quad a_{kj} = -a_{jk} \qquad (j, k = 1, \ldots, n)
\]

Matrix Multiplication

The definition of the operation of multiplication of matrices by matrices differs in important respects from ordinary scalar multiplication. The rule of multiplication is such that two matrices can be multiplied only when the number of columns of the first is equal to the number of rows of the second. That is, let $A = [a_{jk}]$ be an $m \times n$ matrix and $B = [b_{jk}]$ an $r \times p$ matrix; then the product AB (in this order) is defined only when $r = n$, and is the $m \times p$ matrix $C = [c_{jk}]$ whose elements are

\[
c_{jk} = a_{j1} b_{1k} + a_{j2} b_{2k} + \cdots + a_{jn} b_{nk} = \sum_{i=1}^{n} a_{ji} b_{ik}
\]

Example: if a column matrix A is multiplied by a row matrix B,

\[
A = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_m \end{pmatrix}
\quad \text{and} \quad
B = \begin{pmatrix} b_1 & b_2 & \cdots & b_p \end{pmatrix},
\]

then

\[
C = \begin{pmatrix}
a_1 b_1 & a_1 b_2 & \cdots & a_1 b_p \\
a_2 b_1 & a_2 b_2 & \cdots & a_2 b_p \\
\vdots  & \vdots  &        & \vdots  \\
a_m b_1 & a_m b_2 & \cdots & a_m b_p
\end{pmatrix}
\]

The process of matrix multiplication is, therefore, conveniently referred


to as the multiplication of rows into columns.
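The "rows into columns" rule, and the column-times-row example above, can be sketched as follows (a minimal illustration; `mat_mul` is our name for the helper):

```python
def mat_mul(A, B):
    """Product of an m x n matrix A and an n x p matrix B: rows into columns."""
    n = len(A[0])
    assert n == len(B), "columns of A must equal rows of B"
    # c_jk is the sum over i of a_ji * b_ik
    return [[sum(A[j][i] * B[i][k] for i in range(n)) for k in range(len(B[0]))]
            for j in range(len(A))]

col = [[1], [2], [3]]        # 3 x 1 column matrix
row = [[4, 5]]               # 1 x 2 row matrix
C = mat_mul(col, row)        # 3 x 2 matrix with entries a_j * b_k
```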

Note that matrix multiplication is associative and is distributive with respect to addition of matrices, that is,

\[
A(BC) = (AB)C
\]

\[
A(B + C) = AB + AC
\]

Transposition of a product: the transpose of a product equals the product of the transposed factors, taken in reverse order,

\[
(AB)^T = B^T A^T
\]
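The associative law and the reversal rule for transposes can be checked numerically on a small example (a quick sanity check with throwaway helper names `T` and `mul`, not code from the text):

```python
def T(A):
    """Transpose of a nested-list matrix."""
    return [list(c) for c in zip(*A)]

def mul(A, B):
    """Matrix product: rows of A into columns of B."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
v = [[1], [1]]

assert mul(mul(A, B), v) == mul(A, mul(B, v))   # A(Bv) = (AB)v
assert T(mul(A, B)) == mul(T(B), T(A))          # (AB)^T = B^T A^T
```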

Matrix division

If the determinant $|a|$ of a square matrix $[a]$ does not vanish, $[a]$ is said to be nonsingular and possesses a reciprocal, or inverse, matrix R such that

\[
aR = U = Ra
\]

where U is the unit matrix of the same order as $[a]$, and

\[
R = \frac{[A_{ji}]}{|a|} = [a]^{-1}
\]
The matrix $[A_{ji}]$ is called the adjoint of the matrix $[a]$, and it is obtained by replacing the elements of the transpose of $[a]$ by their corresponding cofactors.

As an example, let it be required to obtain the inverse of the matrix

\[
[a] = \begin{pmatrix}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{pmatrix}
\]

The next step to obtain the adjoint of the matrix, and therefore the inverse of the matrix, is to form the transpose of $[a]$:

\[
[a]^T = \begin{pmatrix}
a_{11} & a_{21} & a_{31} \\
a_{12} & a_{22} & a_{32} \\
a_{13} & a_{23} & a_{33}
\end{pmatrix}
\]

Now replace the elements of $[a]^T$ by their corresponding cofactors and obtain

\[
[A_{ji}] = \begin{pmatrix}
a_{22}a_{33} - a_{23}a_{32} & a_{13}a_{32} - a_{12}a_{33} & a_{12}a_{23} - a_{13}a_{22} \\
a_{23}a_{31} - a_{21}a_{33} & a_{11}a_{33} - a_{13}a_{31} & a_{13}a_{21} - a_{11}a_{23} \\
a_{21}a_{32} - a_{22}a_{31} & a_{12}a_{31} - a_{11}a_{32} & a_{11}a_{22} - a_{12}a_{21}
\end{pmatrix}
\]

The inverse of the matrix $[a]$ is then given by

\[
[a]^{-1} = \frac{[A_{ji}]}{|a|}
\]
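The adjoint construction above can be sketched directly in code for the 3 x 3 case (a minimal illustration; `inverse_3x3` is our name, and exact rational arithmetic via `Fraction` is our choice, not from the text):

```python
from fractions import Fraction

def inverse_3x3(a):
    """Inverse of a 3x3 matrix via the adjoint: cofactors of the transpose, over |a|."""
    (a11, a12, a13), (a21, a22, a23), (a31, a32, a33) = a
    # Determinant by expansion along the first row.
    det = (a11 * (a22 * a33 - a23 * a32)
           - a12 * (a21 * a33 - a23 * a31)
           + a13 * (a21 * a32 - a22 * a31))
    assert det != 0, "matrix is singular"
    # Adjoint: each element of the transpose replaced by its cofactor.
    adj = [[a22*a33 - a23*a32, a13*a32 - a12*a33, a12*a23 - a13*a22],
           [a23*a31 - a21*a33, a11*a33 - a13*a31, a13*a21 - a11*a23],
           [a21*a32 - a22*a31, a12*a31 - a11*a32, a11*a22 - a12*a21]]
    return [[Fraction(x, det) for x in row] for row in adj]

a = [[2, 0, 0], [0, 3, 0], [0, 0, 4]]
inv = inverse_3x3(a)   # diagonal matrix with entries 1/2, 1/3, 1/4
```

For matrices larger than 3 x 3 the cofactor method becomes expensive, and elimination methods are preferred in practice.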

Matrices of special types

a. Conjugate matrix: to the operations $[a]^T$ and $[a]^{-1}$, defined by transposition and inversion, may be added another one. This operation is denoted by $[\bar{a}]$ and implies that if the elements of $[a]$ are complex numbers, the corresponding elements of $[\bar{a}]$ are their respective complex conjugates. The matrix $[\bar{a}]$ is called the conjugate of $[a]$.

b. The associate of $[a]$: the transposed conjugate of $[a]$, $[\bar{a}]^T$, is called the associate of $[a]$.

c. Symmetric matrix: if $[a] = [a]^T$, $[a]$ is symmetric.

d. Involutory matrix: if $[a] = [a]^{-1}$, the matrix $[a]$ is involutory.

e. Real matrix: if $[a] = [\bar{a}]$, $[a]$ is a real matrix.

f. Orthogonal matrix: if $[a] = ([a]^T)^{-1}$, $[a]$ is an orthogonal matrix.

g. Hermitian matrix: if $[a] = [\bar{a}]^T$, $[a]$ is a Hermitian matrix.

h. Unitary matrix: if $[a] = ([\bar{a}]^T)^{-1}$, $[a]$ is unitary.

i. Skew-symmetric matrix: if $[a] = -[a]^T$, $[a]$ is skew-symmetric.

j. Pure imaginary matrix: if $[a] = -[\bar{a}]$, $[a]$ is pure imaginary.

k. Skew-Hermitian matrix: if $[a] = -[\bar{a}]^T$, $[a]$ is skew-Hermitian.
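Several of these definitions reduce to simple equality tests once the conjugate and transpose operations are in hand. A minimal sketch (helper names `conj`, `T`, and `is_hermitian` are ours):

```python
def conj(a):
    """Element-wise complex conjugate: the matrix written a-bar above."""
    return [[z.conjugate() for z in row] for row in a]

def T(a):
    """Transpose: rows and columns interchanged."""
    return [list(col) for col in zip(*a)]

def is_hermitian(a):
    """Hermitian: the matrix equals its transposed conjugate (its associate)."""
    return a == T(conj(a))

h = [[1, 2 + 1j], [2 - 1j, 5]]   # equal to its associate, hence Hermitian
s = [[0, 3], [-3, 0]]            # skew-symmetric: s = -s^T
```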

Matrices are used to solve systems of n algebraic equations in n unknowns, systems of homogeneous linear equations, quadratic algebraic equations, inversions of large datasets, variance and covariance analysis, and differential equations with variable coefficients.

Example: System of Linear Equations


A system of m linear equations (or set of m simultaneous linear equations) in n unknowns $x_1, \ldots, x_n$ is a set of equations of the form

\[
\begin{aligned}
a_{11} x_1 + \cdots + a_{1n} x_n &= b_1 \\
a_{21} x_1 + \cdots + a_{2n} x_n &= b_2 \\
&\;\;\vdots \\
a_{m1} x_1 + \cdots + a_{mn} x_n &= b_m
\end{aligned}
\]

The $a_{ik}$ are given numbers, which are called the coefficients of the system. The $b_i$ are also given numbers. If the $b_i$ are all zero, the system is called homogeneous. If at least one $b_i$ is not zero, the system is called nonhomogeneous.

A solution of the above system is a set of numbers $x_1, \ldots, x_n$ which satisfy all m equations. A solution vector is a vector x whose components constitute a solution $x_1, \ldots, x_n$.

From the definition of matrix multiplication we see that the m equations of the above system may be written as a single vector equation

\[
A x = b
\]

where the coefficient matrix $A = [a_{ik}]$ is the $m \times n$ matrix

\[
A = \begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots &        & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{pmatrix},
\quad
x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}
\quad \text{and} \quad
b = \begin{pmatrix} b_1 \\ \vdots \\ b_m \end{pmatrix}
\]
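A square system $Ax = b$ can be solved by Gaussian elimination on the augmented matrix $[A \mid b]$. The sketch below is our illustration, not a method prescribed by the text; `Fraction` keeps the arithmetic exact:

```python
from fractions import Fraction

def solve(A, b):
    """Solve A x = b for square A by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    # Augmented matrix [A | b] with exact rational entries.
    M = [[Fraction(x) for x in row] + [Fraction(bi)] for row, bi in zip(A, b)]
    for col in range(n):
        # Pick the largest pivot in this column and swap it into place.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        assert M[pivot][col] != 0, "matrix is singular"
        M[col], M[pivot] = M[pivot], M[col]
        # Eliminate this column from every other row.
        for r in range(n):
            if r != col:
                factor = M[r][col] / M[col][col]
                M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

# 2x + y = 5 and x + 3y = 10 have the solution x = 1, y = 3.
x = solve([[2, 1], [1, 3]], [5, 10])
```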
