Math 142 UCLA Notes

1 June 25th

1.1 Some Review


1.1.1 Linear Algebra - Terms
The symbol R stands for the real numbers and the symbol C stands for the complex numbers. For
most of our purposes the vector spaces we will be dealing with will be R^n.
R^n is n-dimensional space. We can view vectors v ∈ R^n as coordinate vectors, n-tuples of
real numbers:

v = (x1, x2, . . . , xn), xi ∈ R
You can add vectors by adding their components:

(x1, . . . , xn) + (y1, . . . , yn) = (x1 + y1, . . . , xn + yn)

You can multiply (or scale) vectors by real numbers:

c ⋅ (x1, . . . , xn) = (c ⋅ x1, . . . , c ⋅ xn), c ∈ R

A real number c in R scaling vectors in this way is called a scalar.
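
These operations are just componentwise arithmetic on arrays. A quick illustration in Python with numpy (my addition, not part of the original notes):

import numpy as np

v = np.array([1.0, 2.0, 3.0])   # a vector in R^3
w = np.array([4.0, 5.0, 6.0])

print(v + w)      # componentwise addition: [5. 7. 9.]
print(2.5 * v)    # scalar multiplication: [2.5 5.  7.5]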


Definition 1. A linear combination of vectors v1, . . . , vk in R^n is a sum:

a1 v1 + ⋅ ⋅ ⋅ + ak vk

with ai ∈ R scalars.
Definition 2. A set of vectors v1, . . . , vk in a vector space V is linearly dependent if there
exist scalars ai, not all 0, such that:
a1 v1 + ⋅ ⋅ ⋅ + ak vk = 0
Another way to phrase this is that the vectors vi are linearly dependent if some non-zero linear
combination of them is equal to 0.
Definition 3. A set of vectors v1 , . . . , vk is linearly independent if they are not linearly
dependent.
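
A hedged numerical way to test independence (a sketch of my own, assuming numpy; for exact arithmetic you would row-reduce instead): stack the vectors as columns and compare the rank of that matrix to the number of vectors.

import numpy as np

def linearly_independent(vectors):
    # Independent iff the rank of the column matrix equals the vector count.
    M = np.column_stack(vectors)
    return np.linalg.matrix_rank(M) == len(vectors)

print(linearly_independent([np.array([1, 0]), np.array([0, 1])]))   # True
print(linearly_independent([np.array([1, 2]), np.array([2, 4])]))   # False: (2, 4) = 2 (1, 2)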
Definition 4. A basis for R^n is a set of n linearly independent vectors {v1, . . . , vn}.

Theorem 1. A basis β = {v1, . . . , vn} spans R^n: every vector v ∈ R^n can be written as a
unique linear combination of the basis vectors v1, . . . , vn.


Let v ∈ R^n be a vector given by the linear combination:

v = y1 v1 + ⋅ ⋅ ⋅ + yn vn

where yi are scalars. Then yi are the coordinates of v in the basis β. The coordinate vector
for v in the basis β is denoted:

[v]β = (y1, . . . , yn)
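
Concretely, finding [v]β means solving the linear system B y = v, where the columns of B are the basis vectors. A small sketch (the example values are my own) in numpy:

import numpy as np

# Basis beta = {(1, 1), (1, -1)} for R^2, written as the columns of B.
B = np.array([[1.0, 1.0],
              [1.0, -1.0]])
v = np.array([3.0, 1.0])

y = np.linalg.solve(B, v)   # the coordinates satisfy B y = v
print(y)                    # [2. 1.], so v = 2 (1, 1) + 1 (1, -1)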

Example 1. If we view R^n as the space of n-tuples (x1, . . . , xn) then we have a standard
basis {e1, . . . , en}. This is the basis:

ei = (0, . . . , 0, 1, 0, . . . , 0)

where ei has 1 in its ith component.


A vector v = (x1, . . . , xn) in R^n can then be written as the linear combination:

v = x1 e1 + ⋅ ⋅ ⋅ + xn en
Definition 5. A linear transformation T ∶ R^n → R^m is a map between vector spaces such
that:

T(v1 + v2) = T(v1) + T(v2), T(c ⋅ v1) = c ⋅ T(v1)

for v1, v2 vectors in R^n and c ∈ R a scalar.
If you choose a basis α = {v1, . . . , vn} for R^n and a basis β for R^m, then you can write
T as an m × n matrix given by:

[[T(v1)]β . . . [T(vn)]β]

The columns are the images of the basis vectors in α, T (vi ), written as coordinate vectors in
the basis β.
Example 2. The space of smooth functions on R is a vector space V. Differentiation is a
linear transformation V → V.
Example 3. Let I ∶ R^n → R^n be the identity transformation:

I(v) = v

Then in any basis, I is given by the matrix:



    [ 1  0  ...  0 ]
I = [ 0  1  ...  0 ]
    [ ⋮  ⋮       ⋮ ]
    [ 0  0  ...  1 ]

with 1s along the diagonal and all other entries 0.
Example 4. Let e1 = (1, 0) and e2 = (0, 1) be the standard basis vectors for R^2. The linear

transformation:
T (e1 ) = 2e1 , T (e2 ) = e1 − e2
corresponds to the matrix:

[ 2   1 ]
[ 0  −1 ]

If v = (x, y) in the standard basis then:

T(v) = [ 2   1 ][ x ] = [ 2x + y ]
       [ 0  −1 ][ y ]   [   −y   ]
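
A quick numerical sanity check of this example (assuming numpy; the particular values of x and y are mine):

import numpy as np

T = np.array([[2.0, 1.0],
              [0.0, -1.0]])
x, y = 3.0, 4.0
v = np.array([x, y])

print(T @ v)                      # [10. -4.]
print(np.array([2*x + y, -y]))    # matches the formula (2x + y, -y)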

Definition 6. Let α = {v1, . . . , vn}, β be two bases for R^n. The change of basis matrix from
α to β is the matrix:
B = [[v1]β . . . [vn]β]
If v ∈ R^n is a vector then B sends its α coordinates to β coordinates:

B[v]α = [v]β

This changes the coordinate vector of v but not the actual vector itself. Note that if B is the
change of basis matrix α → β then B^{−1} is the change of basis matrix β → α.
Theorem 2. Let T ∶ R^n → R^n be a linear transformation and α, β two bases. Then:

[T]β = B[T]α B^{−1}

where B is the change of basis matrix from α to β.
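
A sketch of both facts in numpy (my addition; the basis α and the diagonal matrix below are made-up example data):

import numpy as np

# Columns of B are the alpha basis vectors written in beta coordinates.
# Here alpha = {(1, -1), (1, 1)} and beta is the standard basis.
B = np.array([[1.0, 1.0],
              [-1.0, 1.0]])

v_alpha = np.array([2.0, 1.0])          # coordinates of v in alpha
print(B @ v_alpha)                      # [3. -1.] = [v]_beta

# Theorem 2: conjugating a matrix written in alpha coordinates into beta.
T_alpha = np.diag([3.0, 1.0])
print(B @ T_alpha @ np.linalg.inv(B))   # [[ 2. -1.], [-1.  2.]]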

1.1.2 Finding Eigenvalues


Definition 7. Let T be a linear operator T ∶ R^n → R^n. Then an eigenvalue is a scalar
λ ∈ R such that there exists a non-zero vector v ∈ R^n such that:

T(v) = λ ⋅ v

That is, T scales v by λ. If v exists then it is called an eigenvector of T with eigenvalue λ.


Theorem 3. Let T ∶ R^n → R^n be a linear operator. The eigenvalues of T are the roots of its
characteristic polynomial:
χ(T) = det(T − tI)
This is a polynomial of degree n in t.
Theorem 4. Eigenvalues of T ∶ R^n → R^n are either real or complex. If they are complex
they come in conjugate pairs:
a + bi, a − bi
Example 5 (Diagonalizable). Let's find the eigenvalues of the matrix:

A = [  2  −1 ]
    [ −1   2 ]

Then:

A − tI = [  2  −1 ] − t [ 1  0 ] = [ 2 − t    −1  ]
         [ −1   2 ]     [ 0  1 ]   [  −1    2 − t ]

Then:

det(A − tI) = (2 − t)^2 − (−1)^2 = 4 − 4t + t^2 − 1
            = 3 − 4t + t^2
            = (t − 3)(t − 1)

The roots of this polynomial are 3 and 1, which are the eigenvalues of A.
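
As a sanity check (my addition, assuming numpy), the roots of the characteristic polynomial and numpy's eigenvalue routine agree:

import numpy as np

A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])

# Roots of the characteristic polynomial t^2 - 4t + 3:
print(np.roots([1.0, -4.0, 3.0]))   # [3. 1.]

# Direct eigenvalue computation gives the same answer (order may vary):
print(np.linalg.eigvals(A))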

Example 6 (Complex Eigenvalues). Let's find the eigenvalues of the matrix:

A = [  0  1 ]
    [ −1  0 ]

Then:

det(A − tI) = det [ −t   1 ] = t^2 + 1
                  [ −1  −t ]

This factors over C as:

t^2 + 1 = (t − i)(t + i)

since ±i = ±√−1 are the roots of t^2 + 1.
Therefore the eigenvalues of A are ±i.
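
numpy reports the same conjugate pair (a quick check of my own):

import numpy as np

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

print(np.linalg.eigvals(A))   # [0.+1.j 0.-1.j], the conjugate pair ±i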

1.1.3 Finding Eigenvectors


Definition 8. The nullspace or kernel of a linear transformation T ∶ R^n → R^m is the set of
vectors v ∈ R^n such that:
T(v) = 0
Writing T as a matrix, this amounts to the solution space of a system of linear equations.
Theorem 5. Let T ∶ R^n → R^n be a linear operator. Then the set of eigenvectors with
eigenvalue λ, together with the zero vector, forms a subspace called the eigenspace of T for
the eigenvalue λ.
Theorem 6. Let T ∶ R^n → R^n be a linear operator. Let λ be an eigenvalue of T. Then v is
an eigenvector of T with eigenvalue λ if and only if v is in the kernel (nullspace) of T − λI.
In other words the eigenspace for λ is equal to ker(T − λI).

Example 7. Let’s find the eigenvectors for the matrix

A = [  2  −1 ]
    [ −1   2 ]

We found two eigenvalues λ = 3, 1 for this matrix before.


To find the eigenvectors with eigenvalue 3 we consider the kernel of:

A − 3I = [ −1  −1 ]
         [ −1  −1 ]

To find the kernel we consider the system of equations:

[ 0 ] = [ −1  −1 ] ⋅ [ x ] = [ −x − y ]
[ 0 ]   [ −1  −1 ]   [ y ]   [ −x − y ]

Therefore solutions to this system are vectors (x, y) such that:

0 = −x − y ⟹ y = −x

That is, vectors of the form (x, −x). Pulling out x as a scalar, every solution can be written
as x ⋅ (1, −1). Therefore:
ker(A − 3I) = span { (1, −1) }
and (1, −1) is a basis for this eigenspace.
Similarly, eigenvectors with eigenvalue 1 are in the kernel of:

A − I = [  1  −1 ]
        [ −1   1 ]

The corresponding system of equations is:

[ 0 ] = [  1  −1 ] ⋅ [ x ] = [  x − y ]
[ 0 ]   [ −1   1 ]   [ y ]   [ −x + y ]

Solutions are vectors (x, y) such that:

0=x−y ⟹ x=y

Therefore:
ker(A − I) = span { (1, 1) }
is the eigenspace and (1, 1) is a basis vector.
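
numpy's eig returns unit-length eigenvectors, which are scalar multiples of the basis vectors found above (a check of my own, assuming numpy):

import numpy as np

A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])

vals, vecs = np.linalg.eig(A)   # eigenvectors are the columns of vecs
print(vals)                     # [3. 1.] (order may vary)
print(vecs)                     # columns proportional to (1, -1) and (1, 1)

# Verify A v = lambda v for each eigenpair:
for lam, v in zip(vals, vecs.T):
    assert np.allclose(A @ v, lam * v)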

Example 8. Let’s find the eigenvectors of the matrix:

A = [  0  1 ]
    [ −1  0 ]

A has eigenvalues ±i. The eigenspace for i is the kernel of:

A − iI = [ −i   1 ]
         [ −1  −i ]

so solutions of the equation:

[ 0 ] = [ −i   1 ][ x ] = [ −ix + y ]
[ 0 ]   [ −1  −i ][ y ]   [ −x − iy ]

Note that:
(−i)(−ix + y) = i^2 x − iy = −x − iy
so the first and second equation are linearly dependent. This means that solutions over C
are vectors (x, y) such that:
0 = −ix + y ⟹ ix = y
so vectors (x, ix). Pulling out x as a scalar, we have:

ker(A − iI) = span { (1, i) }

is the eigenspace for i which has basis (1, i).

The eigenspace for −i is the kernel of:

A + iI = [  i  1 ]
         [ −1  i ]

so vectors (x, y) such that:


0 = ix + y ⟹ −ix = y
Therefore:
ker(A + iI) = span { (1, −i) }
is the eigenspace of −i which has basis (1, −i).
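
A quick complex-arithmetic check (my addition) that (1, i) really is an eigenvector with eigenvalue i:

import numpy as np

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
v = np.array([1.0, 1.0j])            # the basis vector (1, i) found above

print(A @ v)                          # [0.+1.j -1.+0.j], i.e. i * (1, i)
print(np.allclose(A @ v, 1j * v))     # True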

1.1.4 Diagonalizable matrices / The point of eigenvectors


What do eigenvalues and eigenvectors tell us about a matrix T ? Eigenvectors give us directions
or axes in which the linear transformation T just scales vectors. This makes understanding T
much simpler. The more eigenvectors we have, the simpler we can make T . The best version
of this is diagonalizable matrices:
Definition 9. A matrix T ∶ R^n → R^n is diagonalizable if there is a basis for R^n consisting
of eigenvectors of T.
In other words, T is diagonalizable if it has n linearly independent eigenvectors. This
means that there is a basis in which T is a diagonal matrix.
Example 9. Let T ∶ R^n → R^n. Suppose we can find n linearly independent eigenvectors
v1, . . . , vn of T with corresponding eigenvalues λ1, . . . , λn. Then β = {v1, . . . , vn} forms a
basis for R^n which is often called an eigenbasis.
As a matrix with coordinates in this basis, T is given by:

[T ]β = [[T (v1 )]β . . . [T (vn )]β ]

But:
T (vi ) = λi vi
Therefore:
[T (vi )]β = (0, . . . , λi , . . . , 0)
i.e. λi in position i. Therefore:

       [ λ1  0  ...  0  ]
[T]β = [ 0   λ2 ...  0  ]
       [ ⋮   ⋮        ⋮ ]
       [ 0   0  ...  λn ]
the diagonal matrix with entries λi along the diagonal.
So changing basis to an eigenbasis makes a potentially complicated transformation T very
simple: it just stretches each coordinate by the scalars λi!

Theorem 7. Eigenvectors with different eigenvalues are linearly independent.

Theorem 8. If A is an n × n matrix with n distinct eigenvalues then each of its eigenspaces
is 1-dimensional and A is diagonalizable.
Example 10. For the matrix:
A = [  2  −1 ]
    [ −1   2 ]
we calculated that it had eigenvectors and eigenvalues:
v1 = (1, −1), λ1 = 3
v2 = (1, 1), λ2 = 1
Therefore in the eigenbasis β = {v1, v2}, A is given by the diagonal matrix:

[A]β = [ 3  0 ]
       [ 0  1 ]
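
A numerical check of this diagonalization (my addition; P plays the role of the change of basis matrix with the eigenvectors as columns):

import numpy as np

A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])

# Eigenvectors as columns: v1 = (1, -1), v2 = (1, 1).
P = np.array([[1.0, 1.0],
              [-1.0, 1.0]])

print(np.linalg.inv(P) @ A @ P)   # [[3. 0.], [0. 1.]] up to rounding
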
Example 11. For the matrix:
A = [  0  1 ]
    [ −1  0 ]
we found eigenvectors and eigenvalues:
v1 = (1, i), λ1 = i
v2 = (1, −i), λ2 = −i
Therefore in the basis β = {v1, v2} for C^2, A is the diagonal matrix:

[A]β = [ i   0 ]
       [ 0  −i ]
A useful fact:
Theorem 9. Let ⟨v1, v2⟩ be the inner product:

⟨v1, v2⟩ = v1^T v2

If A is a symmetric matrix, then eigenvectors v1, v2 of A with distinct eigenvalues are
orthogonal with respect to this inner product, i.e.:

⟨v1, v2⟩ = 0
Theorems you have seen that may or may not be useful:
Theorem 10 (Spectral Theorem). If A ∶ R^n → R^n is a real matrix that is symmetric,
A^T = A, then it is diagonalizable.
Theorem 11 (Perron-Frobenius). Let A be a real matrix whose entries are all positive
(non-zero). Then A has a real eigenvalue λ such that:

1. λ has maximal magnitude, meaning for all other eigenvalues ζ (including complex
ones):
∣λ∣ > ∣ζ∣

2. The eigenspace for λ is 1-dimensional, so spanned by a single vector.


3. There is an eigenvector for λ which has all positive coordinates.
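
The dominant eigenvalue guaranteed by Perron-Frobenius can be approximated by power iteration: repeatedly apply A and renormalize. A minimal sketch (my addition; the matrix A below is an arbitrary positive example):

import numpy as np

def power_iteration(A, iters=100):
    # Approximate the dominant eigenvalue/eigenvector of a positive matrix.
    v = np.ones(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v = v / np.linalg.norm(v)
    lam = v @ (A @ v)          # Rayleigh quotient for a unit vector v
    return lam, v

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])     # all entries positive
lam, v = power_iteration(A)
print(lam)   # dominant eigenvalue, approx 3.618
print(v)     # all components positive, as Perron-Frobenius predicts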

1.2 ODEs
Some very basic review of ways to solve an ODE that you will see in the homework:

1.2.1 Separation of variables


Suppose you are given an ODE:
dx/dy = g(y)/f(x)

You can solve this by separating variables and integrating:

f(x)dx = g(y)dy

⟹ ∫ f(x)dx = ∫ g(y)dy

Example 12. Consider the ODE:


dx/dt = ax + b
where a, b are constants. Then:

dx/(x + b/a) = a dt

⟹ ∫ dx/(x + b/a) = ∫ a dt

⟹ ln(x + b/a) = at + C

⟹ x + b/a = C′e^{at}

Suppose x(0) = X0; then we have:

X0 + b/a = C′

Therefore:

x(t) = (X0 + b/a)e^{at} − b/a

Note that if b = 0 then we have the solution:

x(t) = X0 e^{at}

as expected.
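
A numerical spot-check of the closed-form solution (my addition; the constants a, b, X0 are arbitrary example values):

import math

a, b, X0 = 0.5, 2.0, 1.0    # arbitrary example constants

def x(t):
    # the closed-form solution x(t) = (X0 + b/a) e^{at} - b/a
    return (X0 + b / a) * math.exp(a * t) - b / a

# Check dx/dt = a x + b with a centered finite difference:
t, h = 1.3, 1e-6
dxdt = (x(t + h) - x(t - h)) / (2 * h)
print(dxdt, a * x(t) + b)   # the two values agree to ~1e-8
print(x(0.0))               # 1.0 = X0, the initial condition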

1.2.2 Integrating Factor


Given an ODE of the form:

dy/dx = f(x)y + g(x)

Multiply the equation by the integrating factor:

e^{−∫ f(x)dx}

and rearrange to get:

e^{−∫ f(x)dx} dy/dx − e^{−∫ f(x)dx} f(x)y = e^{−∫ f(x)dx} g(x)

By the product rule and chain rule:

d/dx (e^{−∫ f(x)dx} y) = e^{−∫ f(x)dx} dy/dx − f(x)e^{−∫ f(x)dx} y

Therefore the equation can be written as:

d/dx (e^{−∫ f(x)dx} y) = e^{−∫ f(x)dx} g(x)

Integrating we have:

e^{−∫ f(x)dx} y = ∫ e^{−∫ f(x)dx} g(x)dx

So if the right side is integrable we have a solution for y.

Example 13. Consider the equation:

dy/dx + 2y/x = e^x/x^2

Then f(x) = −2/x and g(x) = e^x/x^2. The integrating factor is:

e^{∫ (2/x)dx} = e^{2 ln(x)} = x^2

Following the process above we get:

x^2 y = ∫ x^2 g(x)dx = ∫ e^x dx = e^x + C

Therefore:

y = (e^x + C)/x^2
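
The same answer can be checked symbolically with sympy's dsolve (a sketch of my own; the printed constant name may differ):

import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# dy/dx + 2y/x = exp(x)/x**2
ode = sp.Eq(y(x).diff(x) + 2*y(x)/x, sp.exp(x)/x**2)
print(sp.dsolve(ode, y(x)))   # Eq(y(x), (C1 + exp(x))/x**2), matching the answer above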

2 Population Growth
In this section we model systems in discrete time. That is, we use a fixed time step (e.g. years,
days, or months), and we try to model the change observed in the system when sampling at this
time step. This yields a difference equation.

Example 14. Let us try to model the growth of a population over time, given by the function
N (t). Fix a time step ∆t.
The growth rate with respect to ∆t is the percentage change in the population over the
time step ∆t, i.e.:
R = (N(t + ∆t) − N(t)) / (∆t N(t))

Assume that R is constant, so that it does not depend on t. Then:

N (t + ∆t) − N (t) = R∆tN (t)

Rearranging we get a difference equation:

N (t + ∆t) = (1 + R∆t)N (t)

Let N0 = N(0) be the initial value of the population. Then:

N(n∆t) = (1 + R∆t)^n N0
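
Iterating the difference equation directly reproduces the closed form (a small check of my own, with made-up values for R and N0):

R, dt, N0 = 0.11, 1.0, 1000.0   # made-up growth rate and initial population

N = N0
for n in range(1, 6):
    N = (1 + R * dt) * N                  # one step of the difference equation
    print(n, N, (1 + R * dt) ** n * N0)   # the loop agrees with the closed form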

Example 15. Let N be a population which has a birth rate of 59 per 100 every year and a
death rate of 48 per 100 every year. With ∆t being a year, the change in population between
time steps is:
N(t + ∆t) − N(t) = (59/100)N(t) − (48/100)N(t) = (11/100)N(t)
The resulting recurrence relation is:

N (t + ∆t) = (1 + 0.11)N (t)

Therefore the solution to the system is given by:


N(m∆t) = (1 + 0.11)^m N0

Example 16. With the same setup as the previous example, when does the population
double? If the population doubles in m years then:

2N0 = N(m∆t) = (1 + 0.11)^m N0

Then:

2 = (1 + 0.11)^m ⟹ ln(2) = m ln(1 + 0.11)

Solving for m:

m = ln(2)/ln(1 + 0.11) ≈ 6.64
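
Numerically (my addition):

import math

print(math.log(2) / math.log(1 + 0.11))   # about 6.64 years
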
Example 17. With the same setup, suppose that every year 50 members of another population
migrate into this one. Then the resulting difference equation is:

N (t + ∆t) = (1 + 0.11)N (t) + 50

which is no longer linear: the constant migration term makes the equation affine (non-homogeneous).
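
The affine equation is still easy to iterate numerically (a sketch of my own, with a made-up starting population):

N = 1000.0                    # made-up starting population
for year in range(1, 6):
    N = 1.11 * N + 50         # growth plus constant migration
    print(year, round(N, 2))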
