Day 12 MATRICES

Chapter Six introduces matrices as rectangular arrays of numbers or symbols, detailing their types, dimensions, and operations such as addition, subtraction, scalar multiplication, and multiplication by another matrix. It also covers concepts like matrix inversion, transposition, eigenvalues, and eigenvectors, explaining their significance and providing examples for clarity. The chapter emphasizes the importance of eigenvalues and eigenvectors in understanding matrix properties and includes exercises for practice.


CHAPTER SIX: MATRICES

Introduction
A matrix (plural matrices) is a rectangular array or table of numbers, symbols, or expressions
(called entries), arranged in rows and columns, which is used to represent a mathematical
object or a property of such an object.

A real matrix and a complex matrix are matrices whose entries are respectively real numbers
or complex numbers.

The numbers, symbols, or expressions in the matrix are called its entries or its elements. The
horizontal and vertical lines of entries in a matrix are called rows and columns, respectively.

The size of a matrix is defined by the number of rows and columns it contains. There is no
limit to the number of rows and columns a matrix (in the usual sense) can have, as long as
they are positive integers. A matrix with m rows and n columns is called an m × n matrix, or
m-by-n matrix, while m and n are called its dimensions. For example, the matrix A below is a
3 × 2 matrix.

    A = [ a11  a12 ]
        [ a21  a22 ]
        [ a31  a32 ]

Matrices with a single row are called row vectors, and those with a single column are called
column vectors. A matrix with the same number of rows and columns is called a square
matrix. A matrix with an infinite number of rows or columns (or both) is called an infinite
matrix. In some contexts, such as computer algebra programs, it is useful to consider a
matrix with no rows or no columns, called an empty matrix. A matrix is usually denoted by a
capital letter (such as A or B).
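
If you want to experiment with these sizes on a computer, the short Python/NumPy sketch below (with entries chosen here purely for illustration) shows how the shape of an array reports rows and columns:

    import numpy as np

    A = np.array([[1, 2],
                  [3, 4],
                  [5, 6]])                # a 3 x 2 matrix: 3 rows, 2 columns
    print(A.shape)                        # (3, 2)

    row = np.array([[1, 2, 3]])           # a 1 x 3 row vector
    col = np.array([[1], [2], [3]])       # a 3 x 1 column vector
    print(row.shape, col.shape)           # (1, 3) (3, 1)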

Operations with matrices


Adding
To add two matrices, add the entries in the matching positions: the entry in row i, column j
of the sum is the sum of the corresponding entries of the two matrices.

The two matrices must be the same size, i.e. they must have the same number of rows and
the same number of columns.

Example:
A matrix with 3 rows and 5 columns can be added to another matrix of 3 rows and 5
columns.
But it could not be added to a matrix with 3 rows and 4 columns (the columns don't match in
size).
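
As a sketch of the rule (the entries below are illustrative, not taken from the notes above), NumPy adds matrices entry by entry and refuses mismatched sizes:

    import numpy as np

    A = np.array([[1, 2, 3],
                  [4, 5, 6]])
    B = np.array([[10, 20, 30],
                  [40, 50, 60]])

    print(A + B)                          # entries in matching positions are added
    # [[11 22 33]
    #  [44 55 66]]

    C = np.array([[1, 2],
                  [3, 4]])
    # A + C would raise an error: a 2 x 3 matrix cannot be added to a 2 x 2 matrix.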

Subtracting
To subtract two matrices, subtract the entries in the matching positions.

Note: subtracting is actually defined as the addition of a negative matrix: A + (−B).
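
A small sketch of the same idea, again with illustrative entries, confirms that subtracting B is the same as adding −B:

    import numpy as np

    A = np.array([[9, 8],
                  [7, 6]])
    B = np.array([[1, 2],
                  [3, 4]])

    print(A - B)                          # entries in matching positions are subtracted
    print(A + (-B))                       # the same result: subtraction is A + (-B)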

Multiplying by a Constant
We can multiply a matrix by a constant (for example, by the value 2): every entry of the
matrix gets multiplied by that constant.

We call the constant a scalar, so officially this is called "scalar multiplication".
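
A one-line sketch, with an illustrative matrix, shows every entry being multiplied by the scalar 2:

    import numpy as np

    A = np.array([[4, 0],
                  [1, -9]])
    print(2 * A)                          # every entry is multiplied by the scalar 2
    # [[  8   0]
    #  [  2 -18]]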

Multiplying a Matrix by Another Matrix


But to multiply a matrix by another matrix we need to do the "dot product" of rows and
columns ... what does that mean? Let us see with an example:
To work out the answer for the 1st row and 1st column:

The "Dot Product" is where we multiply matching members, then sum up:

    (1, 2, 3) · (7, 9, 11) = 1×7 + 2×9 + 3×11 = 7 + 18 + 33 = 58

We match the 1st members (1 and 7), multiply them, likewise for the 2nd members (2 and 9)
and the 3rd members (3 and 11), and finally sum them up.

Want to see another example? The same procedure gives the entry for the 1st row and 2nd
column, the 2nd row and 1st column, and the 2nd row and 2nd column: in each case we take
the dot product of a row of the first matrix with a column of the second matrix, and these
dot products fill in the product matrix.

Note that the number of columns of the first matrix must equal the number of rows of the
second matrix, otherwise the dot products cannot be formed.
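
The sketch below reproduces this row-by-column rule in NumPy. Only the first row (1, 2, 3) and first column (7, 9, 11) come from the example above; the remaining entries are assumed purely to make the matrices complete:

    import numpy as np

    A = np.array([[1, 2, 3],
                  [4, 5, 6]])             # 2 x 3 (first row from the example; second row assumed)
    B = np.array([[ 7,  8],
                  [ 9, 10],
                  [11, 12]])              # 3 x 2 (first column from the example; second column assumed)

    P = A @ B                             # the product is a 2 x 2 matrix
    print(P[0, 0])                        # 1*7 + 2*9 + 3*11 = 58
    print(P)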

Dividing
Now, what about division? Well, we don't actually divide matrices; we do it this way:

    A / B  =  A × B⁻¹

where B⁻¹ means the "inverse" of B.


So we don't divide; instead we multiply by an inverse.
When we multiply a number by its reciprocal we get 1 (for example, 8 × 1/8 = 1).

Similarly, when we multiply a matrix by its inverse we get the Identity Matrix (which is like
"1" for matrices):

    A × A⁻¹ = I

Same thing when the inverse comes first:

    A⁻¹ × A = I

An identity matrix is the matrix equivalent of the number "1". It has 1s on the main
diagonal and 0s everywhere else. The 3×3 identity matrix is

    I = [ 1  0  0 ]
        [ 0  1  0 ]
        [ 0  0  1 ]
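
As a sketch (matrices chosen here for illustration, and assumed to be invertible), NumPy's linalg.inv computes the inverse, so "A divided by B" becomes A times B⁻¹, and a matrix times its inverse gives the identity:

    import numpy as np

    A = np.array([[3., 4.],
                  [2., 5.]])
    B = np.array([[1., 2.],
                  [3., 4.]])

    B_inv = np.linalg.inv(B)
    print(A @ B_inv)                      # "A divided by B" is really A times the inverse of B
    print(np.round(B @ B_inv))            # B times its inverse: the identity matrix
    print(np.round(B_inv @ B))            # same when the inverse comes first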

Transposing
To "transpose" a matrix, swap the rows and columns: the entry in row i, column j of the
original becomes the entry in row j, column i of the transpose, so an m × n matrix has an
n × m transpose. We put a "T" in the top right-hand corner to mean transpose, writing the
transpose of A as Aᵀ.
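
For example, in a quick NumPy sketch (entries chosen for illustration), .T swaps rows and columns:

    import numpy as np

    A = np.array([[6, 4, 24],
                  [1, -9, 8]])            # a 2 x 3 matrix
    print(A.T)                            # its transpose is 3 x 2: rows and columns swapped
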
The Inverse of a Matrix
Inversion of a 2×2 Matrix
For a 2×2 matrix

    A = [ a  b ]
        [ c  d ]

the inverse is

    A⁻¹ = 1/(ad − bc) × [  d  −b ]
                        [ −c   a ]

provided the determinant ad − bc is not zero; if ad − bc = 0, the matrix has no inverse.
Larger matrices can be inverted by more general methods, such as Gauss-Jordan (row
reduction) elimination.
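
A short Python check of this formula (the helper name invert_2x2 is introduced here only for illustration) compares it against NumPy's built-in inverse:

    import numpy as np

    def invert_2x2(A):
        # inverse of a 2 x 2 matrix via the ad - bc formula above
        a, b = A[0, 0], A[0, 1]
        c, d = A[1, 0], A[1, 1]
        det = a * d - b * c
        if det == 0:
            raise ValueError("the matrix is not invertible (determinant is zero)")
        return (1.0 / det) * np.array([[ d, -b],
                                       [-c,  a]])

    A = np.array([[4., 7.],
                  [2., 6.]])
    print(invert_2x2(A))
    print(np.linalg.inv(A))               # NumPy's built-in inverse agrees
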
Eigenvalues and Eigenvectors of a Matrix

Let A be an n × n matrix and suppose X is a nonzero vector such that

    AX = λX                                                           (1)

for some scalar λ. Then λ is called an eigenvalue of the matrix A and X is called an
eigenvector of A associated with λ, or a λ-eigenvector of A.

The set of all eigenvalues of an n×n matrix A is denoted by σ(A) and is referred to as the
spectrum of A.

The eigenvectors of a matrix A are those vectors X for which multiplication by A results in a
vector in the same direction or opposite direction to X, that is, in a scalar multiple of X.
Since the zero vector 0 has no direction, this condition would make no sense for it; as noted
above, 0 is never allowed to be an eigenvector.

Let's look at eigenvectors in more detail. Suppose X satisfies equation (1). Then

    AX − λX = 0

for some scalar λ. Equivalently one could write

    (λI − A)X = 0

which is more commonly used. Hence, when we are looking for eigenvectors, we are looking
for nontrivial solutions to this homogeneous system of equations!
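
A quick numerical sketch (the matrix is chosen here, not taken from the notes) uses NumPy to confirm the defining relation AX = λX for one computed eigenpair:

    import numpy as np

    A = np.array([[2., 1.],
                  [1., 2.]])

    eigenvalues, eigenvectors = np.linalg.eig(A)
    lam = eigenvalues[0]                  # one eigenvalue of A
    X = eigenvectors[:, 0]                # the eigenvector associated with it

    print(np.allclose(A @ X, lam * X))    # True: AX = lambda X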

Recall that the solutions to a homogeneous system of equations consist of basic solutions,
and the linear combinations of those basic solutions. In this context, we call the basic
solutions of the equation

    (λI − A)X = 0                                                     (2)

basic eigenvectors. It follows that any (nonzero) linear combination of basic eigenvectors is
again an eigenvector.
Suppose the matrix λI − A is invertible, so that (λI − A)⁻¹ exists. Then the following
equation would be true:

    X = IX = ((λI − A)⁻¹(λI − A))X = (λI − A)⁻¹((λI − A)X) = (λI − A)⁻¹ 0 = 0

This claims that X = 0. However, we have required that X ≠ 0. Therefore

    λI − A

cannot have an inverse!
Recall that if a matrix is not invertible, then its determinant is equal to 0.
Therefore we can conclude that

    det(λI − A) = 0                                                   (3)

Note that this is equivalent to det(A − λI) = 0.


The expression det(xI − A) is a polynomial (in the variable x) called the characteristic
polynomial of A, and det(xI − A) = 0 is called the characteristic equation. For this
reason we may also refer to the eigenvalues of A as characteristic values, but the former is
often used for historical reasons. The following theorem claims that the roots of the
characteristic polynomial are the eigenvalues of A. Thus when the theorem holds, A has a
nonzero eigenvector.
Theorem:
Let A be an n × n matrix and suppose det(λI − A) = 0 for some scalar λ.
Then λ is an eigenvalue of A and thus there exists a nonzero vector X such that
AX = λX.
Theorem:
Let A be an n × n matrix with characteristic polynomial given by det(xI − A). Then, the
multiplicity of an eigenvalue λ of A is the number of times λ occurs as a root of that
characteristic polynomial.

For example, suppose the characteristic polynomial of A is given by (x − 2)². Solving for
the roots of this polynomial, we set (x − 2)² = 0 and solve for x. We find that λ = 2 is a
root that occurs twice. Hence, in this case, λ = 2 is an eigenvalue of A of multiplicity equal
to 2.
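
These statements can be checked symbolically. The sketch below uses SymPy on an illustrative matrix (not one from the notes) to compute the characteristic polynomial and read off the eigenvalues with their multiplicities:

    from sympy import Matrix, symbols, factor

    x = symbols('x')
    A = Matrix([[2, 1],
                [0, 2]])                  # illustrative matrix with a repeated eigenvalue

    p = A.charpoly(x)                     # the characteristic polynomial det(xI - A)
    print(factor(p.as_expr()))            # (x - 2)**2
    print(A.eigenvals())                  # {2: 2}: the eigenvalue 2 has multiplicity 2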

Example 1: Find the eigenvalues and eigenvectors of the matrix A.

Solution
First we find the eigenvalues of A by solving the characteristic equation det(λI − A) = 0:

This gives

Computing the determinant as usual, the result is

Solving this equation, we find that

Now we need to find the basic eigenvectors for each λ.


First we will find the eigenvectors for the first eigenvalue λ₁. We wish to find all vectors X
such that AX = λ₁X. These are the solutions to (λ₁I − A)X = 0:

The augmented matrix for this system and the corresponding reduced row-echelon form are given by

The solution is any vector of the form

Multiplying this vector by 7 we obtain a simpler description for the solution to this system,
given by

This gives the basic eigenvector for λ₁ as

To check, we verify that AX = λ₁X for this basic eigenvector.

This is what we wanted, so we know this basic eigenvector is correct.


Next we will repeat this process to find the basic eigenvector for the second eigenvalue λ₂.
We wish to find all vectors X such that AX = λ₂X. These are the solutions to (λ₂I − A)X = 0.
The augmented matrix for this system and the corresponding reduced row-echelon form are given by

The solution is any vector of the form

This gives the basic eigenvector for λ₂ as

To check, we verify that AX = λ₂X for this basic eigenvector.

This is what we wanted, so we know this basic eigenvector is correct.
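
The particular matrix of Example 1 does not reproduce here, so the following SymPy sketch runs the same two-step procedure (characteristic equation, then a basic eigenvector for each eigenvalue) on an assumed 2 × 2 matrix:

    from sympy import Matrix, symbols, solve, eye

    x = symbols('x')
    A = Matrix([[2, 1],
                [1, 2]])                  # assumed matrix, for illustration only

    # Step 1: the eigenvalues are the roots of the characteristic equation det(xI - A) = 0
    char_eq = (x * eye(2) - A).det()
    print(solve(char_eq, x))              # [1, 3]

    # Step 2: a basic eigenvector for each eigenvalue, from (lambda*I - A)X = 0
    for lam, multiplicity, vectors in A.eigenvects():
        for v in vectors:
            print(lam, list(v), A * v == lam * v)   # the check AX = lambda X prints True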

Example 2:
Find the eigenvalues and eigenvectors for the matrix

Solution
First we need to find the eigenvalues of A. Recall that they are the solutions of the
characteristic equation det(λI − A) = 0.
In this case the equation is

which becomes

Computing this determinant and simplifying we obtain the following equation:


Solving this equation, we find the eigenvalues of A. Notice that 10 is a root of multiplicity
two, because its factor appears squared in the characteristic polynomial.

Therefore, λ = 10 is an eigenvalue of multiplicity 2. Now that we have found the
eigenvalues for A, we can compute the eigenvectors. First we will find the basic eigenvectors
for the first eigenvalue λ₁. In other words, we want to find all non-zero vectors X so that
AX = λ₁X. This requires that we solve the equation (λ₁I − A)X = 0 for X as follows.

That is, one must find the solutions to

We now set up the augmented matrix and row-reduce to get the solution. Thus the matrix
to be row-reduced is

The reduced row-echelon form is

and so the solution is any vector of the form


where the parameter is any scalar. If we multiply this vector by 4, we obtain a simpler
description for the solution to this system, as given by

Here, the basic eigenvector is given by

Notice that we cannot let the parameter equal zero here, because this would result in the zero
vector, and eigenvectors are never equal to 0. Other than this value, every other choice of the
parameter results in an eigenvector.
It is a good idea to check your work! To do so, we will take the original matrix and multiply
it by the basic eigenvector, and check that we get λ₁ times that eigenvector.

This is what we wanted, so we know that our calculations were correct.

Next we will find the basic eigenvectors for the repeated eigenvalue λ = 10. These vectors
are the basic solutions to the equation (10I − A)X = 0.

That is, you must find the solutions to


Considering the augmented matrix, we have

The reduced row-echelon form for this matrix is

and so the eigenvectors are of the form

Note that you can’t pick t and s both equal to zero because this would result in the zero
vector and eigenvectors are never equal to zero.
Here, there are two basic eigenvectors, given by

Taking any (nonzero) linear combination of these two basic eigenvectors will also result in an
eigenvector for the eigenvalue λ = 10. As in the previous case, always check your work! For
the first basic eigenvector, we can check as follows:

This is what we wanted. Checking the second basic eigenvector is left as an exercise.
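
The particular 3 × 3 matrix of Example 2 also does not reproduce here; the sketch below uses an assumed matrix whose eigenvalue 2 has multiplicity 2, showing the two basic eigenvectors and that a nonzero linear combination of them is still an eigenvector:

    from sympy import Matrix

    A = Matrix([[2, 0, 0],
                [0, 2, 0],
                [0, 0, 5]])               # assumed matrix: the eigenvalue 2 has multiplicity 2

    for lam, multiplicity, vectors in A.eigenvects():
        print(lam, multiplicity, [list(v) for v in vectors])

    v1, v2 = A.eigenvects()[0][2]         # the two basic eigenvectors for lambda = 2
    w = 3 * v1 + 4 * v2                   # a nonzero linear combination of them
    print(A * w == 2 * w)                 # True: w is still an eigenvector for lambda = 2
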
It is important to remember that for any eigenvector X, X ≠ 0. However, it is possible for a
matrix to have eigenvalues that are equal to zero.

Exercise
Find the eigenvalues and eigenvectors of the matrices:
