Matrices
Jeremy Gunawardena
Department of Systems Biology
Harvard Medical School
200 Longwood Avenue, Cambridge, MA 02115, USA
[email protected]
3 January 2006
Contents
1 Introduction
8 Properties of determinants
9 Gaussian elimination
1 Introduction
This is Part I of an introduction to the matrix algebra needed for the Harvard Systems Biology
101 graduate course. Molecular systems are inherently many dimensional—there are usually many
molecular players in any biological system—and linear algebra is a fundamental tool for thinking
about many dimensional systems. It is also widely used in other areas of biology and science.
I will describe the main concepts needed for the course—determinants, matrix inverses, eigenvalues
and eigenvectors—and try to explain where the concepts come from, why they are important and
how they are used. If you know some of the material already, you may find the treatment here quite
slow. There are mostly no proofs but there are worked examples in low dimensions. New concepts
appear in italics when they are introduced or defined and there is an index of important items at
the end. There are many textbooks on matrix algebra and you should refer to one of these for more
details, if you need them.
Thanks to Matt Thomson for spotting various bugs. Any remaining errors are my responsibility.
Let me know if you come across any or have any comments.
Matrices first arose from trying to solve systems of linear equations. Such problems go back to the
very earliest recorded instances of mathematical activity. A Babylonian tablet from around 300 BC
states the following problem¹:
There are two fields whose total area is 1800 square yards. One produces grain at the
rate of 2/3 of a bushel per square yard while the other produces grain at the rate of 1/2
a bushel per square yard. If the total yield is 1100 bushels, what is the size of each field?
If we let x and y stand for the areas of the two fields in square yards, then the problem amounts to
saying that
$$\begin{aligned} x + y &= 1800 \\ 2x/3 + y/2 &= 1100 . \end{aligned} \tag{1}$$
This is a system of two linear equations in two unknowns. The linear refers to the fact that the
unknown quantities appear just as x and y, not as 1/x or y^3. Equations with the latter terms are
nonlinear and their study forms part of a different branch of mathematics, called algebraic geometry.
Generally speaking, it is much harder to say anything about nonlinear equations. However, linear
equations are a different matter: we know a great deal about them. You will, of course, have seen
examples like (1) before and will know how to solve them. (So what is the answer?). Let us consider
a more general problem (this is the kind of thing mathematicians love to do) in which we do not
know exactly what the coefficients are (ie: 1, 2/3, 1/2, 1800, 1100):
$$\begin{aligned} ax + by &= u \\ cx + dy &= v , \end{aligned} \tag{2}$$
and suppose, just to keep things simple, that none of the numbers a, b, c or d are 0. You should be
able to solve this too so let us just recall how to do it. If we multiply the first equation by (c/a),
which we can do because a ≠ 0, and subtract the second, we find that
$$(cb/a)y - dy = cu/a - v .$$
¹ For an informative account of the history of matrices and determinants, see http://www-groups.dcs.st-and.ac.uk/~history/HistTopics/Matrices_and_determinants.html.
A matrix is any rectangular array of numbers. If the array has n rows and m columns, then it is an
n×m matrix. The numbers n and m are called the dimensions of the matrix. We will usually denote
matrices with capital letters, like A, B, etc, although we will sometimes use lower case letters for
one dimensional matrices (ie: 1 × m or n × 1 matrices). One dimensional matrices are often called
vectors, as in row vector for a 1 × m matrix or column vector for an n × 1 matrix, but we are going
to use the word “vector” to refer to something different in Part II. We will use the notation Aij to
refer to the number in the i-th row and j-th column. For instance, we can extract the numerical
coefficients from the system of linear equations in (5) and represent them in the matrix
$$A = \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix} . \tag{6}$$
It is conventional to use brackets (either round or square) to delineate matrices when you write them
down as rectangular arrays. With our notation, $A_{23} = f$ and $A_{32} = h$.
The first known use of the matrix idea appears in “The Nine Chapters of the Mathematical Art”,
the 3rd century BC Chinese text mentioned above. The word matrix itself was coined by the British
mathematician James Joseph Sylvester in 1850. Matrices first arose from specific problems like (1).
It took nearly two thousand years before mathematicians realised that they could gain an enormous
amount by abstracting away from specific examples and treating matrices as objects in their own
right, just as we will do here. The first fully abstract definition of a matrix was given by Sylvester’s
friend and collaborator, Arthur Cayley, in his 1858 book, “A memoir on the theory of matrices”.
Abstraction was a radical step at the time but became one of the key guiding principles of 20th
century mathematics. Sylvester, by the way, spent a lot of time in America. In his 60s, he became
Professor of Mathematics at Johns Hopkins University and founded America’s first mathematics
journal, The American Journal of Mathematics.
There are a number of useful operations on matrices. Some of them are pretty obvious. For instance,
you can add any two n × m matrices by simply adding the corresponding entries. We will use A + B
to denote the sum of matrices formed in this way:

$$(A + B)_{ij} = A_{ij} + B_{ij} .$$
Addition of matrices obeys all the formulae that you are familiar with for addition of numbers. A
list of these is given in Figure 2. You can also multiply a matrix by a number by simply multiplying
each entry of the matrix by the number. If λ is a number and A is an n × m matrix, then we denote
the result of such multiplication by λA, where

$$(\lambda A)_{ij} = \lambda A_{ij} . \tag{7}$$
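As a quick illustration of these two operations, here is a minimal sketch in Python with numpy (my addition for this edit, not part of the original notes):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

# Matrix addition: corresponding entries are added.
print(A + B)     # [[ 6  8]
                 #  [10 12]]

# Multiplication by a number: every entry is scaled.
print(3 * A)     # [[ 3  6]
                 #  [ 9 12]]
```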
Multiplication by a number also satisfies the usual properties of number multiplication and a list
of these can also be found in Figure 2. All of this should be fairly obvious and easy. What about
products of matrices? You might think, at first sight, that the “obvious” product is to just multiply
the corresponding entries. You can indeed define a product like this—it is called the Hadamard
product—but this turns out not to be very productive mathematically. The matrix-matrix product
is a much stranger beast, at first sight.
If you have an n × k matrix, A, and a k × m matrix, B, then you can matrix multiply them together
to form an n × m matrix denoted AB. (We sometimes use A.B for the matrix product if that helps
to make formulae clearer.) The matrix product is one of the most fundamental matrix operations
and it is important to understand how it works in detail. It may seem unnatural at first sight and
we will learn where it comes from later but, for the moment, it is best to treat it as something new
to learn and just get used to it. The first thing to remember is how the matrix dimensions work.
You can only multiply two matrices together if the number of columns of
the first equals the number of rows of the second.
Note two consequences of this. Just because you can form the matrix product AB does not mean
that you can form the product BA. Indeed, you should be able to see that the products AB and BA
only both make sense when A and B are square matrices: they have the same number of rows as
columns. (This is an early warning that reversing the order of multiplication can make a difference;
see (9) below.) You can always multiply any two square matrices of the same dimension, in any
order. We will mostly be working with square matrices but, as we will see in a moment, it can be
helpful to use non-square matrices even when working with square ones.
To explain how matrix multiplication works, we are going to first do it in the special case when
n = m = 1. In this case we have a 1 × k matrix, A, multiplied by a k × 1 matrix, B. According to
the rule for dimensions, the result should be a 1 × 1 matrix. This has just one entry. What is
the entry? You get it by multiplying corresponding terms together and adding the results:
$$\begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1k} \end{pmatrix} \begin{pmatrix} B_{11} \\ B_{21} \\ \vdots \\ B_{k1} \end{pmatrix} = \begin{pmatrix} A_{11}B_{11} + A_{12}B_{21} + \cdots + A_{1k}B_{k1} \end{pmatrix} \tag{8}$$
Once you know how to multiply one dimensional matrices, it is easy to multiply any two matrices.
If A is an n × k matrix and B is a k × m matrix, then the ij-th element of AB is given by taking
the i-th row of A, which is a 1 × k matrix, and the j-th column of B, which is a k × 1 matrix, and
multiplying them together just as in (8). Schematically, this looks as shown in Figure 1. It can be
helpful to arrange the matrices in this way if you are multiplying matrices by hand. You can try this
out on the following example of a 2 × 3 matrix multiplied by a 3 × 2 matrix to give a 2 × 2 matrix.
$$\begin{pmatrix} 1 & 2 & 3 \\ 2 & 3 & 1 \end{pmatrix} \begin{pmatrix} -2 & 2 \\ 1 & 0 \\ 2 & 1 \end{pmatrix} = \begin{pmatrix} 6 & 5 \\ 1 & 5 \end{pmatrix}$$
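You can check this product mechanically. Here is a minimal sketch in Python with numpy (my addition, not from the notes), using the `@` operator for matrix multiplication:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [2, 3, 1]])      # a 2 x 3 matrix
B = np.array([[-2, 2],
              [ 1, 0],
              [ 2, 1]])        # a 3 x 2 matrix

# The 2 x 2 result: each entry is a row of A times a column of B, as in (8).
print(A @ B)                   # [[6 5]
                               #  [1 5]]
```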
There is one very important property of matrix multiplication that it is best to see early on. Consider
the calculation below, in which two square matrices are multiplied in a different order
$$\begin{pmatrix} 1 & 2 \\ 2 & -1 \end{pmatrix} \begin{pmatrix} 3 & -1 \\ 1 & 3 \end{pmatrix} = \begin{pmatrix} 5 & 5 \\ 5 & -5 \end{pmatrix} \qquad\text{but}\qquad \begin{pmatrix} 3 & -1 \\ 1 & 3 \end{pmatrix} \begin{pmatrix} 1 & 2 \\ 2 & -1 \end{pmatrix} = \begin{pmatrix} 1 & 7 \\ 7 & -1 \end{pmatrix} .$$

We see from this that

$$AB \neq BA . \tag{9}$$
This is one of the major differences between matrix multiplication and number multiplication. You
can get yourself into all kinds of trouble if you forget it.
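To see this failure of commutativity concretely, here is a short numpy sketch (again my addition) with the two matrices used above:

```python
import numpy as np

A = np.array([[1, 2], [2, -1]])
B = np.array([[3, -1], [1, 3]])

print(A @ B)    # [[ 5  5]
                #  [ 5 -5]]
print(B @ A)    # [[ 1  7]
                #  [ 7 -1]]  -- a different matrix, so AB != BA
```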
You might well ask where such an apparently bizarre product came from. What is the motivation
behind it? That is a good question. A partial answer comes from the following observation. (We will
give a more complete answer later.) Once we have matrix multiplication, we can use it to rewrite
the system of equations (5) as an equation about matrices. You should be able to check that (5) is the same as the matrix equation
$$\begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} u \\ v \\ w \end{pmatrix} . \tag{10}$$

In general, a system of this form can be written compactly as

$$Ax = u \tag{11}$$
where A is the n × n matrix of coefficients and we have used x to denote the n × 1 matrix of unknown
quantities (ie: the x, y and z in (10)) and u to denote the n × 1 matrix of constant quantities (ie:
the u, v and w in (10)). (We are clearly going to have to use a different notation for the unknown
quantities and the constants when we look at n dimensional matrices but we will get to that later.)
Now that we can express systems of linear equations in matrix notation, we can start to think about
how we might solve the single matrix equation (11) rather than a system of n equations.
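For instance, the Babylonian problem in (1) fits this pattern. As a sketch (numpy again, my addition), we can let the machine solve Ax = u directly, which also answers the question posed earlier:

```python
import numpy as np

# Coefficient matrix and right-hand side from (1):
#   x + y = 1800,  2x/3 + y/2 = 1100
A = np.array([[1.0, 1.0],
              [2.0 / 3.0, 0.5]])
u = np.array([1800.0, 1100.0])

x = np.linalg.solve(A, u)
print(x)    # [1200.  600.] -- the fields are 1200 and 600 square yards
```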
Before dealing with solutions of linear equations, let us take a look at some matrix products that will
be useful to us later. You should be familiar with complex numbers, of the form a + ib, where a and
b are ordinary numbers and i is the so-called “imaginary” square root of −1. Complex numbers are
important, among other things, because they allow us to solve equations which we cannot solve in
the ordinary numbers. The simplest such equation is x2 + 1 = 0, which has no solutions in ordinary
numbers but has ±i as its two solutions in the complex numbers. What is rather more amazing is
that if you take any polynomial equation, such as
$$a_m x^m + a_{m-1} x^{m-1} + \cdots + a_1 x + a_0 = 0 ,$$

even one whose coefficients, $a_0, \cdots, a_m$, are complex numbers, then this polynomial has a solution
over the complex numbers. This is the fundamental theorem of algebra. The first more-or-less correct
proof of it was given by Gauss in his doctoral dissertation.
What have complex numbers got to do with matrices? We can associate to any complex number a + ib the matrix

$$\begin{pmatrix} a & -b \\ b & a \end{pmatrix} . \tag{12}$$

Notice that, given any matrix of this form, we can always construct the corresponding complex
number, a + ib, and these two processes, which convert back and forth between complex numbers
and matrices, are inverse to each other. It follows that matrices of the form in (12) and complex
numbers are in one-to-one correspondence with each other. What does this correspondence do to the
algebraic operations on both sides? Well, if you take two complex numbers, a + ib and c + id, then
the matrix of their sum (a + c) + i(b + d), is the sum of their matrices. What about multiplication?
Using the rule for matrix multiplication, we find that
$$\begin{pmatrix} a & -b \\ b & a \end{pmatrix} . \begin{pmatrix} c & -d \\ d & c \end{pmatrix} = \begin{pmatrix} ac - bd & -(ad + bc) \\ ad + bc & ac - bd \end{pmatrix} .$$

This is exactly the matrix of the product of the complex numbers: $(a + ib)(c + id) = (ac - bd) + i(ad + bc)$.
We see from this that the subset of matrices having the form in (12) are the complex numbers
in disguise, as it were. This may confirm your suspicion that the matrix product is infernally
complicated! A one-to-one correspondence between two algebraic structures, which preserves the
operations on both sides, is called an isomorphism. We can as well work with either structure. As
we will see later on, this can be very useful. Some of the deepest results in mathematics reveal the
existence of unexpected isomorphisms.
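A quick numerical check of this particular isomorphism, as a numpy sketch of my own: represent a + ib by the matrix in (12) and compare matrix products with complex products.

```python
import numpy as np

def to_matrix(z: complex) -> np.ndarray:
    """The matrix of the form (12) representing z = a + ib."""
    a, b = z.real, z.imag
    return np.array([[a, -b],
                     [b,  a]])

z, w = 1 + 2j, 3 - 1j
# The matrix of the product equals the product of the matrices.
print(np.allclose(to_matrix(z * w), to_matrix(z) @ to_matrix(w)))   # True
```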
After that diversion, let us get back to linear equations. If you saw the equation ax = u, where a,
x and u were all numbers, you would know immediately how to solve for x. Provided a ≠ 0, you
would multiply both sides of the equation by 1/a, to get x = u/a.
Perhaps we can do the same thing with matrices? (Analogy is often a powerful guide in mathematics.)
It turns out that we can and that is what we are going to work through in the next few sections.
Before starting, however, it helps to pause for a bit and look at what we have just done with numbers
more closely. It may seem that this is a waste of time because it is so easy with numbers but we
often take the familiar for granted and by looking closely we will learn some things that will help us
understand what we need when we move from numbers to matrices.
The quantity 1/a is the multiplicative inverse of a. That is a long winded way of saying that it is
the (unique) number with the property that, when multiplied by a, it gives 1. (Why is it unique?)
Another way to write the multiplicative inverse is $a^{-1}$; it always exists, provided only that a ≠ 0.
(As you know, dividing by zero is frowned upon. Why? If $0^{-1}$ existed as a number, then we could
take the equation 0 × 2 = 0 × 3, which is clearly true since both sides equal 0, and multiply through
by $0^{-1}$, giving 2 = 3. Since this is rather embarrassing, it had better be the case that 0 has no
multiplicative inverse.) Let us go through the solution of the simple equation ax = u again using
the notation $a^{-1}$ and this time I will be careful with the order of multiplication. If we multiply the left hand side by $a^{-1}$ we get

$$a^{-1}(ax) = (a^{-1}a)x = 1x = x ,$$

while if we multiply the right hand side we get just $a^{-1}u$.
Hence, $x = a^{-1}u$. Notice that there are a couple of things going on behind the scenes. We needed to
use the property that multiplication of numbers is associative (in other words, a(bc) = (ab)c) and
that multiplying any number by 1 leaves it unchanged.
If we want to use the same kind of reasoning for the matrix equation (11), we shall have to answer three questions.

1. Is there a matrix that plays the role of the number 1?
2. Is matrix multiplication associative?
3. Is there a matrix that plays the role of the multiplicative inverse?
If we can find a satisfactory answer to all of these, then we can solve the matrix equation (11) just
as we did the number equation ax = u.
From this point, we are mostly going to deal with square matrices. Remember that we can always
multiply two square matrices of the same dimension.
The matrix equivalent of the number 1 is called the identity matrix. The n × n identity matrix
is denoted $I_n$ but we often simplify this to just I when the dimension is clear from the context. All
the diagonal entries of $I_n$ are 1 and all the off-diagonal entries are 0, so that $I_4$ looks like

$$I_4 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} .$$
You should be able to convince yourself very quickly, using the rule for matrix multiplication in
Figure 1, that if A is any n × n matrix, then
$$A . I_n = I_n . A = A ,$$

so that $I_n$ really does behave like the number 1 for square matrices of dimension n.
Here is a little problem to test your understanding of matrix multiplication. How would you
interchange two columns (or two rows) of a matrix just by matrix multiplication? More specifically, let A(i ↔ j) denote the matrix A with the columns i and j interchanged. Show that
A(i ↔ j) = A.I(i ↔ j). What happens if you want to interchange rows?
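Here is a small numpy sketch (mine, not from the notes) illustrating the column-swap trick: build I(i ↔ j) by swapping two columns of the identity, then multiply.

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

# I(0 <-> 2): the identity matrix with columns 0 and 2 interchanged.
P = np.eye(3, dtype=int)
P[:, [0, 2]] = P[:, [2, 0]]

print(A @ P)    # A with columns 0 and 2 swapped
print(P @ A)    # multiplying on the left swaps ROWS 0 and 2 instead
```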
The second question asks, given three n × n matrices A, B and C, whether A.(B.C) = (A.B).C.
Well, you can just write down each of the products in terms of the individual entries $A_{ij}$, $B_{ij}$ and
$C_{ij}$ using the formula in Figure 1 and work it out. The answer turns out to be “yes”, although the
calculation is quite tedious. In fact, it is just as tedious to show that associativity holds for any
three matrices which can be multiplied together. If A is an n × k matrix, B a k × j matrix and C
is a j × m matrix, then
$$A.(B.C) = (A.B).C \tag{13}$$
Although you can show that the matrix product is associative this way, it is not a particularly
edifying proof. Mathematicians are always a bit suspicious of results that follow from long, boring
calculations because they suspect they are missing something. A result as simple as (13) ought to
be true for simple reasons, not complex ones. This kind of aesthetic uneasiness has often led to new
mathematics. In this case, there is a simple reason why matrix multiplication is associative and it
is the same reason that explains why the matrix product itself is “natural”. However, we have to
change our perspective quite a bit, from algebra to geometry, to see things in this light. We will get
to it in Part II.
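A numerical spot check of (13), as a hedged sketch (numpy, my addition) with random rectangular matrices of compatible dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 4))   # n x k
B = rng.standard_normal((4, 3))   # k x j
C = rng.standard_normal((3, 5))   # j x m

# Both groupings give the same 2 x 5 matrix, up to rounding error.
print(np.allclose(A @ (B @ C), (A @ B) @ C))    # True
```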
Matrix multiplication obeys several rules that are similar to those that hold for number multiplication
(with the exception, of course, of commutativity). These are listed in Figure 2.
The third question asks about the multiplicative inverse of a matrix. (From now on, we will just say
“inverse”.) The inverse of the matrix A is a matrix, which we will call, by analogy with numbers,
$A^{-1}$, such that $A^{-1}.A = A.A^{-1} = I$. This only makes sense for square matrices (why?). Notice that
because matrix multiplication is not commutative, it is important to make sure that both $A^{-1}.A$
and $A.A^{-1}$ give the same result. If the inverse exists, it must be unique (why?).
It turns out that the inverse matrix is closely related to the determinant. This should not really
be a surprise, given what we found out above for example (2). We are going to define the inverse
matrix and the determinant of a matrix through the same basic formula (16) below.
You have already worked out the determinant of a 2 × 2 matrix. If we extract the 2 × 2 coefficient
matrix from example (2)
$$\begin{pmatrix} a & b \\ c & d \end{pmatrix} \tag{14}$$
then its determinant is the quantity we came across before, ad − bc. What is the determinant of
an arbitrary n × n matrix? If A is a square matrix, we will denote its determinant det A. (Strictly
speaking, the determinant is a function from matrices to numbers, so we should put brackets around
the argument, like det(A). However, this can make formulae look awfully messy at times and I try
not to use too many brackets, unless, for some reason, it is important to clarify which matrix the
determinant applies to.)
The easiest way to explain the determinant is to do it by induction (or what computer scientists
would call recursion). That is, I will show how the determinant of an n × n matrix can be
calculated if you know how to calculate the determinant of an (n − 1) × (n − 1) matrix. We need to
start this off somewhere, so here is the determinant of a 1 × 1 matrix:
$$\det(a) = a . \tag{15}$$
Notice that 1 × 1 matrices are equivalent to numbers but there is a notational distinction between
the matrix (a) and the number a.
Now suppose you have an n × n matrix A. We are going to form an associated n × n matrix, called
the adjugate of A, denoted adj A. (Once again, we should probably write adj(A) but, as above, we
try and avoid excess brackets when we can.) To define the adjugate we need a notation that allows
us to select a submatrix from A. The (n−1) × (n−1) submatrix A(ij) is obtained from A by omitting
the i-th row and the j-th column of A. You have to be careful about the small difference between
A(ij) and Aij : the former is a matrix which is one size smaller than A; the latter is a number. We
can now define the adjugate. The ij-th entry of the adjugate matrix, adj A, is the product of the
determinant of the submatrix A(ji) (NOTE THE REVERSAL OF ROWS AND COLUMNS) and
the sign factor (−1)i+j . In other words,
$$(\operatorname{adj} A)_{ij} = (-1)^{i+j} \det A_{(ji)} .$$
It is easier to see what is going on here by working through an example. Consider the 2 × 2 matrix
in (14). The 11 element of adj A is given by omitting the first row and first column, which gives
the matrix (d), taking its determinant, which is d, and then multiplying by $(-1)^2 = 1$. The 12
element is given by omitting the second row and first column, which gives the matrix (b), taking its
determinant, which is b, and multiplying by $(-1)^3 = -1$. Proceeding in this way, we find that the
adjugate of (14) is

$$\begin{pmatrix} d & -b \\ -c & a \end{pmatrix} .$$

What is the big deal? I still have not explained why the adjugate helps you to work out det A. Well,
you can check for yourself with this example that

$$A.(\operatorname{adj} A) = (\operatorname{adj} A).A = \begin{pmatrix} ad - bc & 0 \\ 0 & ad - bc \end{pmatrix} .$$

The determinant of the 2 × 2 matrix has reappeared, even though we only used determinants of 1 × 1
matrices in the calculation. This turns out to be a general result. If A is any n × n matrix then

$$A.(\operatorname{adj} A) = (\operatorname{adj} A).A = (\det A)\, I_n . \tag{16}$$
Notice that the first two expressions are matrix products while the last expression is multiplication
of the matrix $I_n$ by the number det A, as in (7). Formula (16) is very useful and tells us many things.
For a start, it explains how to calculate determinants. If you look at individual entries of $(\det A) I_n$,
you can recover a set of formulae that were first worked out by the French mathematician, Pierre
Simon (Marquis de) Laplace, in the 18th century. They are still known as Laplace expansions in
his honour. We will not write these all down but this is what you get if you look at the 11 entry of
$(\det A) I_n$ and expand out the formula $A.(\operatorname{adj} A) = (\det A) I_n$ from (16):
$$\det A = A_{11} \det A_{(11)} - A_{12} \det A_{(12)} + A_{13} \det A_{(13)} - \cdots + (-1)^{1+n} A_{1n} \det A_{(1n)} . \tag{17}$$
You can try writing down some of the other Laplace expansions yourself. Notice that if you pick one
of the off-diagonal entries of $(\det A) I_n$ you get a relation among the determinants of the submatrices
$A_{(ij)}$.
Formula (17) provides a recursive algorithm for computing det(A). You can use it to work out the
determinant of the 3 × 3 matrix (6). You should find it to be

$$\det A = a(ei - fh) - b(di - fg) + c(dh - eg) . \tag{18}$$
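Formula (17) translates directly into code. The following is a minimal recursive sketch in Python (my own illustration, with no claim to efficiency):

```python
import numpy as np

def det_laplace(A: np.ndarray) -> float:
    """Determinant via the Laplace expansion (17) along the first row."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]                      # base case (15): det(a) = a
    total = 0.0
    for j in range(n):
        # The submatrix with row 1 and column j+1 omitted.
        sub = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det_laplace(sub)
    return total

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(det_laplace(A), np.linalg.det(A))    # both give 3.0
```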
It can be fairly painful calculating determinants of large matrices this way: the Laplace expansion
takes on the order of n! operations. It can sometimes be done but it quickly gets extremely tedious. The better method is to use some form of Gaussian
elimination, which we will learn how to do in §9. The determinant has many wonderful properties,
which are collected together in §8.
Formula (16) also tells us the inverse of A. If det A ≠ 0, then it should be clear from the definition
of an inverse that

$$A^{-1} = \frac{1}{\det A} \operatorname{adj} A . \tag{19}$$
If det(A) = 0 then A does not have a multiplicative inverse. That is another important difference
between matrices and numbers. The only number which does not have a multiplicative inverse is 0
but there are lots of non-zero matrices that do not have a multiplicative inverse (try and write down
a couple).
So now you know how to work out the matrix inverse. The same caveat as for determinants holds for
the inverse: (19) can be helpful to understand the inverse and its properties but Gaussian elimination
is a better way of computing the inverse matrix if you have to invert a large matrix by hand.
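For small matrices, (19) is easy to put into code. Here is a sketch (numpy, my addition) for the 2 × 2 case, where the adjugate is the matrix worked out above:

```python
import numpy as np

def inverse_2x2(A: np.ndarray) -> np.ndarray:
    """Inverse of a 2 x 2 matrix via (19): A^{-1} = adj(A) / det(A)."""
    a, b = A[0]
    c, d = A[1]
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is not invertible")
    adj = np.array([[ d, -b],
                    [-c,  a]])
    return adj / det

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(inverse_2x2(A))
print(np.linalg.inv(A))     # same answer
```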
We have now got all the ingredients we need to solve a system of n linear equations in n unknowns.
Suppose that you have expressed this system, as we did in (10), in the form of a single matrix
equation, Ax = u, where the unknown quantities are $x_1, x_2, \cdots, x_n$ and the constant quantities are
$u_1, u_2, \cdots, u_n$. In this case the n × 1 matrices x and u are given by
$$x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} \qquad u = \begin{pmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{pmatrix} .$$
The first thing to do is to work out det A. You could use the Laplace expansion in (17), Gaussian
elimination as in §9 or even get MATLAB to calculate it for you. If det A = 0 then the system may
have no solution. (There is a lot more that could be said about this case but it takes more work
and is not particularly useful for us.) If det A ≠ 0 then you can form the inverse matrix given in
(19) and follow the strategy that works for numbers by multiplying through by the inverse matrix
and using associativity to simplify the algebra. You should find that
$$x = \frac{1}{\det A} (\operatorname{adj} A)\, u . \tag{20}$$
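Before worrying about the cost of the adjugate, here is a quick numerical check of (20) (a numpy sketch of mine; numpy has no built-in adjugate function, so the sketch recovers it from (19) as adj A = (det A) A⁻¹):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
u = np.array([6.0, 4.0, 3.0])

# Formula (20): x = adj(A) u / det(A).
detA = np.linalg.det(A)
adjA = detA * np.linalg.inv(A)   # adjugate recovered from (19)
x = (adjA @ u) / detA

print(x)
print(np.linalg.solve(A, u))     # same answer
```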
The only problem with this is the adjugate, which is quite complicated to calculate. There is a clever
way of avoiding this which was worked out by the Swiss mathematician Gabriel Cramer in the 18th century.