Phase Plane Analysis - 3
Given the Linearity Principle from the previous chapter, we may now compute the general solution of any planar system. There is a seemingly endless number of distinct cases, but we will see that these represent, in the simplest possible form, nearly all of the types of solutions we will encounter in the higher-dimensional case.
Chapter 3 Phase Portraits for Planar Systems
Consider first the system $X' = AX$, where
$$A = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}$$
with λ1 < 0 < λ2 . This can be solved immediately since the system decouples
into two unrelated first-order equations:
$$x' = \lambda_1 x, \qquad y' = \lambda_2 y.$$
We already know how to solve these equations, but, having in mind what
comes later, let’s find the eigenvalues and eigenvectors. The characteristic
equation is
$$(\lambda - \lambda_1)(\lambda - \lambda_2) = 0,$$
so the eigenvalues are $\lambda_1$ and $\lambda_2$, with eigenvectors $(1, 0)$ and $(0, 1)$ respectively.
Since $\lambda_1 < 0$, the straight-line solutions of the form $\alpha e^{\lambda_1 t}(1, 0)$ lie on the $x$-axis and tend to $(0, 0)$ as $t \to \infty$. This axis is called the stable line. Since $\lambda_2 > 0$, the solutions $\beta e^{\lambda_2 t}(0, 1)$ lie on the $y$-axis and tend away from $(0, 0)$ as $t \to \infty$; this axis is the unstable line. All other solutions (with $\alpha, \beta \neq 0$) tend to $\infty$ in the direction of the unstable line as $t \to \infty$, since $X(t)$ comes closer and closer to $(0, \beta e^{\lambda_2 t})$ as $t$ increases. In backward time, these solutions tend to $\infty$ in the direction of the stable line.
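This behavior is easy to confirm numerically from the decoupled solution. The following sketch uses illustrative eigenvalues $\lambda_1 = -1$ and $\lambda_2 = 2$ (values assumed here for demonstration, not taken from the text):

```python
import math

# Illustrative saddle: lambda1 = -1 < 0 < lambda2 = 2 (assumed values).
# The system decouples, so the general solution is explicit.
lam1, lam2 = -1.0, 2.0

def solution(alpha, beta, t):
    """X(t) = (alpha * e^(lam1*t), beta * e^(lam2*t))."""
    return alpha * math.exp(lam1 * t), beta * math.exp(lam2 * t)

# On the stable line (beta = 0), solutions tend to the origin as t -> oo.
x, y = solution(1.0, 0.0, 10.0)
assert abs(x) < 1e-4 and y == 0.0

# A typical solution (alpha, beta != 0): the x-coordinate dies out while
# the y-coordinate grows, so X(t) approaches the unstable line (y-axis).
x, y = solution(1.0, 1.0, 10.0)
assert abs(x) < 1e-4 and y > 1e8
```

Running backward in time ($t \to -\infty$) reverses the roles of the two axes, as described above.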
In Figure 3.1 we have plotted the phase portrait of this system. The phase
portrait is a picture of a collection of representative solution curves of the
system in R2 , which we call the phase plane. The equilibrium point of a system
of this type (eigenvalues satisfying λ1 < 0 < λ2 ) is called a saddle.
For a slightly more complicated example of this type, consider $X' = AX$, where
$$A = \begin{pmatrix} 1 & 3 \\ 1 & -1 \end{pmatrix}.$$
The characteristic equation is $\lambda^2 - 4 = 0$, so the eigenvalues are $\pm 2$. An eigenvector associated with $\lambda = 2$ is $(3, 1)$, and one associated with $\lambda = -2$ is $(1, -1)$. Thus, we have an unstable line that contains straight-line solutions of
the form
$$X_1(t) = \alpha e^{2t} \begin{pmatrix} 3 \\ 1 \end{pmatrix},$$
each of which tends away from the origin as t → ∞. The stable line contains
the straight-line solutions
$$X_2(t) = \beta e^{-2t} \begin{pmatrix} 1 \\ -1 \end{pmatrix},$$
which tend toward the origin as t → ∞. By the Linearity Principle, any other
solution assumes the form
$$X(t) = \alpha e^{2t} \begin{pmatrix} 3 \\ 1 \end{pmatrix} + \beta e^{-2t} \begin{pmatrix} 1 \\ -1 \end{pmatrix}.$$
Note that, if $\alpha \neq 0$, then as $t \to \infty$,
$$X(t) \sim \alpha e^{2t} \begin{pmatrix} 3 \\ 1 \end{pmatrix} = X_1(t),$$
whereas, if $\beta \neq 0$, as $t \to -\infty$,
$$X(t) \sim \beta e^{-2t} \begin{pmatrix} 1 \\ -1 \end{pmatrix} = X_2(t).$$
Thus, as time increases, the typical solution approaches X1 (t) while, as time
decreases, this solution tends toward X2 (t), just as in the previous case.
Figure 3.2 displays this phase portrait.
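The eigenvalue computation for this example is easy to verify with elementary arithmetic; a quick sketch in plain Python (no linear-algebra library required):

```python
# Verify the eigenvalues and eigenvectors claimed for A = [[1, 3], [1, -1]].
A = [[1.0, 3.0], [1.0, -1.0]]

def matvec(M, v):
    """Multiply a 2x2 matrix by a vector."""
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

# Characteristic polynomial lambda^2 - tr(A)*lambda + det(A) = lambda^2 - 4,
# so the eigenvalues are +2 and -2.
tr = A[0][0] + A[1][1]
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
assert tr == 0.0 and det == -4.0

# A*V = lambda*V for the eigenvectors (3, 1) and (1, -1).
assert matvec(A, [3.0, 1.0]) == [6.0, 2.0]     # = 2 * (3, 1)
assert matvec(A, [1.0, -1.0]) == [-2.0, 2.0]   # = -2 * (1, -1)
```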
In the general case where A has a positive and negative eigenvalue, we always
find a similar stable and unstable line on which solutions tend toward or away
from the origin. All other solutions approach the unstable line as t → ∞, and
tend toward the stable line as t → −∞.
Consider next the case $X' = AX$ where
$$A = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}$$
but λ1 < λ2 < 0. As before, we find two straight-line solutions and then the
general solution
$$X(t) = \alpha e^{\lambda_1 t} \begin{pmatrix} 1 \\ 0 \end{pmatrix} + \beta e^{\lambda_2 t} \begin{pmatrix} 0 \\ 1 \end{pmatrix}.$$
Unlike the saddle case, now all solutions tend to (0, 0) as t → ∞. The question
is this: How do they approach the origin? To answer this, we compute the slope
dy/dx of a solution with β 6= 0. We write
$$x(t) = \alpha e^{\lambda_1 t}, \qquad y(t) = \beta e^{\lambda_2 t},$$
and compute
$$\frac{dy}{dx} = \frac{\lambda_2 \beta e^{\lambda_2 t}}{\lambda_1 \alpha e^{\lambda_1 t}} = \frac{\lambda_2 \beta}{\lambda_1 \alpha}\, e^{(\lambda_2 - \lambda_1)t}.$$
Since $\lambda_2 - \lambda_1 > 0$, this slope tends to $\pm\infty$ as $t \to \infty$, so solutions with $\beta \neq 0$ approach the origin tangentially to the $y$-axis.
Since λ1 < λ2 < 0, we call λ1 the stronger eigenvalue and λ2 the weaker
eigenvalue. The reason for this in this particular case is that the x-coordinates
of solutions tend to $0$ much more quickly than the $y$-coordinates. This accounts for why solutions (except those on the straight line corresponding to the $\lambda_1$-eigenvector) tend to "hug" the straight-line solution corresponding to the
weaker eigenvalue as they approach the origin. The phase portrait for this
system is displayed in Figure 3.3a. In this case the equilibrium point is called a
sink.
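The "hugging" behavior can be seen directly from the slope formula. Here is a sketch with illustrative eigenvalues $\lambda_1 = -5$ (stronger) and $\lambda_2 = -1$ (weaker), values assumed for demonstration:

```python
import math

# Sink with assumed eigenvalues lam1 = -5 (stronger), lam2 = -1 (weaker).
# Since x decays much faster than y, the slope dy/dx = (lam2*y)/(lam1*x)
# blows up: solutions arrive at the origin tangent to the y-axis.
lam1, lam2 = -5.0, -1.0
alpha, beta = 1.0, 1.0

def slope(t):
    x = alpha * math.exp(lam1 * t)
    y = beta * math.exp(lam2 * t)
    return (lam2 * y) / (lam1 * x)

assert abs(slope(0.0)) < 1.0    # modest slope at t = 0
assert abs(slope(5.0)) > 1e7    # nearly vertical close to the origin
```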
More generally, if the system has eigenvalues λ1 < λ2 < 0 with eigenvectors
(u1 , u2 ) and (v1 , v2 ) respectively, then the general solution is
$$\alpha e^{\lambda_1 t} \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} + \beta e^{\lambda_2 t} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix}.$$
The slope of such a solution is given by
$$\frac{dy}{dx} = \frac{\lambda_1 \alpha e^{\lambda_1 t} u_2 + \lambda_2 \beta e^{\lambda_2 t} v_2}{\lambda_1 \alpha e^{\lambda_1 t} u_1 + \lambda_2 \beta e^{\lambda_2 t} v_1}
= \frac{\left(\lambda_1 \alpha e^{\lambda_1 t} u_2 + \lambda_2 \beta e^{\lambda_2 t} v_2\right) e^{-\lambda_2 t}}{\left(\lambda_1 \alpha e^{\lambda_1 t} u_1 + \lambda_2 \beta e^{\lambda_2 t} v_1\right) e^{-\lambda_2 t}}
= \frac{\lambda_1 \alpha e^{(\lambda_1 - \lambda_2)t} u_2 + \lambda_2 \beta v_2}{\lambda_1 \alpha e^{(\lambda_1 - \lambda_2)t} u_1 + \lambda_2 \beta v_1},$$
which tends to the slope $v_2/v_1$ of the $\lambda_2$-eigenvector as $t \to \infty$ (unless $\beta = 0$).
If the matrix
$$A = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}$$
satisfies 0 < λ2 < λ1 , our vector field may be regarded as the negative of the
previous example. The general solution and phase portrait remain the same,
except that all solutions now tend away from $(0, 0)$ along these same paths. In this case the equilibrium point is called a source. See Figure 3.3b.
One may argue that we are presenting examples here that are much too
simple. Although this is true, we will soon see that any system of differential
equations with a matrix that has real distinct eigenvalues can be manipulated
into the preceding special forms by changing coordinates.
Finally, a special case occurs if one of the eigenvalues is equal to 0. As we
have seen, there is a straight line of equilibrium points in this case. If the other
eigenvalue λ is nonzero, then the sign of λ determines whether the other solu-
tions tend toward or away from these equilibria (see Exercises 10 and 11 of this
chapter).
It may happen that the roots of the characteristic polynomial are complex
numbers. In analogy with the real case, we call these roots complex eigenvalues.
When the matrix A has complex eigenvalues, we no longer have straight-line
solutions. However, we can still derive the general solution as before by using a
few tricks involving complex numbers and functions. The following examples
indicate the general procedure.
Consider $X' = AX$, where
$$A = \begin{pmatrix} 0 & \beta \\ -\beta & 0 \end{pmatrix}$$
with $\beta \neq 0$. The characteristic polynomial is $\lambda^2 + \beta^2 = 0$, so the eigenvalues are the imaginary numbers $\pm i\beta$. To find an eigenvector corresponding to $\lambda = i\beta$, we solve
$$\begin{pmatrix} -i\beta & \beta \\ -\beta & -i\beta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},$$
or iβx = βy, since the second equation is redundant. Thus we find a complex
eigenvector (1, i), and so the function
$$X(t) = e^{i\beta t} \begin{pmatrix} 1 \\ i \end{pmatrix}$$
is a complex solution of $X' = AX$. Using Euler's formula, $e^{i\beta t} = \cos \beta t + i \sin \beta t$, we may break $X(t)$ into its real and imaginary parts, $X(t) = X_{\mathrm{re}}(t) + i X_{\mathrm{im}}(t)$, where
$$X_{\mathrm{re}}(t) = \begin{pmatrix} \cos \beta t \\ -\sin \beta t \end{pmatrix}, \qquad X_{\mathrm{im}}(t) = \begin{pmatrix} \sin \beta t \\ \cos \beta t \end{pmatrix}.$$
But now we see that both $X_{\mathrm{re}}(t)$ and $X_{\mathrm{im}}(t)$ are (real!) solutions of the original system. To see this, we simply check
$$X_{\mathrm{re}}'(t) + i X_{\mathrm{im}}'(t) = X'(t) = AX(t) = AX_{\mathrm{re}}(t) + i\, AX_{\mathrm{im}}(t).$$
Equating the real and imaginary parts of this equation yields $X_{\mathrm{re}}' = AX_{\mathrm{re}}$ and $X_{\mathrm{im}}' = AX_{\mathrm{im}}$, which shows that both are indeed solutions. Moreover, since
$$X_{\mathrm{re}}(0) = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad X_{\mathrm{im}}(0) = \begin{pmatrix} 0 \\ 1 \end{pmatrix},$$
the Linearity Principle tells us that
$$X(t) = c_1 X_{\mathrm{re}}(t) + c_2 X_{\mathrm{im}}(t),$$
where $c_1$ and $c_2$ are arbitrary constants, provides a solution to any initial value problem.
We claim that this is the general solution of this equation. To prove this, we
need to show that these are the only solutions of this equation. So suppose
that this is not the case. Let
$$Y(t) = \begin{pmatrix} u(t) \\ v(t) \end{pmatrix}$$
be another solution. Consider the complex function $f(t) = (u(t) + iv(t))e^{i\beta t}$. Differentiating this expression and using the fact that $Y(t)$ is a solution of the equation yields $f'(t) = 0$. Thus, $u(t) + iv(t)$ is a complex constant times $e^{-i\beta t}$. From this it follows directly that $Y(t)$ is a linear combination of $X_{\mathrm{re}}(t)$ and $X_{\mathrm{im}}(t)$.
Note that each of these solutions is a periodic function with period 2π/β.
Indeed, the phase portrait shows that all solutions lie on circles centered at
the origin. These circles are traversed in the clockwise direction if β > 0,
counterclockwise if β < 0. See Figure 3.4. This type of system is called a
center.
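A quick numerical check of this claim, sketched with illustrative values $\beta = 3$, $c_1 = 2$, $c_2 = -1$ (assumed for demonstration):

```python
import math

# General solution of the center X' = [[0, b], [-b, 0]] X, with assumed
# values b = 3, c1 = 2, c2 = -1 for illustration.
b, c1, c2 = 3.0, 2.0, -1.0

def X(t):
    return (c1 * math.cos(b*t) + c2 * math.sin(b*t),
            -c1 * math.sin(b*t) + c2 * math.cos(b*t))

# Solutions stay on a circle: |X(t)|^2 is constant...
r2 = X(0.0)[0]**2 + X(0.0)[1]**2
for t in (0.1, 1.0, 2.5, 7.0):
    x, y = X(t)
    assert abs(x*x + y*y - r2) < 1e-9

# ...and the motion is periodic with period 2*pi/b.
x0, y0 = X(0.0)
x1, y1 = X(2 * math.pi / b)
assert abs(x1 - x0) < 1e-9 and abs(y1 - y0) < 1e-9
```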
More generally, consider $X' = AX$, where
$$A = \begin{pmatrix} \alpha & \beta \\ -\beta & \alpha \end{pmatrix}$$
with $\alpha, \beta \neq 0$. The characteristic equation is $\lambda^2 - 2\alpha\lambda + \alpha^2 + \beta^2 = 0$, so the eigenvalues are $\lambda = \alpha \pm i\beta$. An eigenvector associated with $\alpha + i\beta$ is determined by the equation
$$(\alpha - (\alpha + i\beta))x + \beta y = 0.$$
Thus $(1, i)$ is again an eigenvector, and we have the complex solution $e^{(\alpha + i\beta)t}(1, i)$.
As before, both Xre (t) and Xim (t) yield real solutions of the system with initial
conditions that are linearly independent. Thus we find the general solution,
$$X(t) = c_1 e^{\alpha t} \begin{pmatrix} \cos \beta t \\ -\sin \beta t \end{pmatrix} + c_2 e^{\alpha t} \begin{pmatrix} \sin \beta t \\ \cos \beta t \end{pmatrix}.$$
Without the factor $e^{\alpha t}$, these solutions would wind periodically around circles centered at the origin. The $e^{\alpha t}$ term converts solutions into spirals that either spiral into the origin (when $\alpha < 0$) or away from the origin ($\alpha > 0$). In these cases the equilibrium point is called a spiral sink or spiral source, respectively.
See Figure 3.5.
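The effect of the $e^{\alpha t}$ factor on the distance to the origin is easy to check numerically. A sketch with illustrative values $\alpha = -0.5$, $\beta = 2$, $c_1 = 1$, $c_2 = 0$ (assumed for demonstration):

```python
import math

# Spiral sink sketch with assumed alpha = -0.5, beta = 2, c1 = 1, c2 = 0.
a, b = -0.5, 2.0

def X(t):
    return (math.exp(a*t) * math.cos(b*t), -math.exp(a*t) * math.sin(b*t))

def radius(t):
    x, y = X(t)
    return math.hypot(x, y)

# The distance to the origin is exactly e^{alpha*t}: the solution winds
# around the origin while the exponential factor pulls it inward.
assert radius(0.0) == 1.0
assert abs(radius(1.0) - math.exp(a)) < 1e-12
assert radius(10.0) < 1e-2   # spirals into the origin as t grows
```

With $\alpha > 0$ the same computation shows the radius growing, giving a spiral source.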
The only remaining cases occur when A has repeated real eigenvalues. One
simple case occurs when A is a diagonal matrix of the form
$$A = \begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix}.$$
The eigenvalues of A are both equal to λ. In this case every nonzero vector is
an eigenvector since
$$AV = \lambda V$$
for any nonzero vector $V$. Thus solutions are straight-line solutions of the form
$$X(t) = \alpha e^{\lambda t} V.$$
Each such solution lies on a straight line through (0, 0) and either tends to
(0, 0) (if λ < 0) or away from (0, 0) (if λ > 0). So this is an easy case.
A more interesting case occurs when
$$A = \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}.$$
Again, both eigenvalues are equal to $\lambda$, but now there is only one linearly independent eigenvector, given by $(1, 0)$. Thus, we have one straight-line
solution
$$X_1(t) = \alpha e^{\lambda t} \begin{pmatrix} 1 \\ 0 \end{pmatrix}.$$
To find other solutions, note that the system may be written
$$x' = \lambda x + y, \qquad y' = \lambda y.$$
Thus, if $y \neq 0$, we must have
$$y(t) = \beta e^{\lambda t},$$
so the differential equation for $x(t)$ reads
$$x' = \lambda x + \beta e^{\lambda t}.$$
This is a nonautonomous, first-order equation, and it is natural to guess a solution of the form
$$x(t) = \alpha e^{\lambda t} + \mu t e^{\lambda t}$$
for some constants $\alpha$ and $\mu$. This technique is often called the method of undetermined coefficients. Inserting this guess into the differential equation
shows that µ = β while α is arbitrary. Thus, the solution of the system may be
written
$$X(t) = \alpha e^{\lambda t} \begin{pmatrix} 1 \\ 0 \end{pmatrix} + \beta e^{\lambda t} \begin{pmatrix} t \\ 1 \end{pmatrix}.$$
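This solution can be verified by direct substitution into the two component equations. A sketch with illustrative values $\lambda = -1$, $\alpha = 0.7$, $\beta = 1.3$ (assumed for demonstration):

```python
import math

# Check that x(t) = alpha*e^{lt} + beta*t*e^{lt}, y(t) = beta*e^{lt}
# satisfies x' = l*x + y and y' = l*y (assumed illustrative values).
l, alpha, beta = -1.0, 0.7, 1.3

def x(t): return (alpha + beta * t) * math.exp(l * t)
def y(t): return beta * math.exp(l * t)

def xprime(t):  # exact derivative: (l*alpha + beta*(1 + l*t)) * e^{lt}
    return (l * alpha + beta * (1 + l * t)) * math.exp(l * t)

def yprime(t):  # exact derivative of y
    return l * beta * math.exp(l * t)

for t in (0.0, 0.5, 2.0):
    assert abs(xprime(t) - (l * x(t) + y(t))) < 1e-12
    assert abs(yprime(t) - l * y(t)) < 1e-12
```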
Despite differences in the associated phase portraits, we really have dealt with only three types of matrices in these past four sections:
$$\begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix}, \qquad \begin{pmatrix} \alpha & \beta \\ -\beta & \alpha \end{pmatrix}, \qquad \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}.$$
We will thus think of the linear map and its matrix as being interchangeable,
so that we also write
$$T = \begin{pmatrix} a & b \\ c & d \end{pmatrix}.$$
Given the system $X' = AX$, consider the new system
$$Y' = (T^{-1}AT)Y,$$
for some invertible linear map T. Note that if Y (t) is a solution of this new
system, then $X(t) = TY(t)$ solves $X' = AX$. Indeed, we have
$$(TY(t))' = TY'(t) = T(T^{-1}AT)Y(t) = A(TY(t)),$$
as required.
Example. (Real Eigenvalues) Suppose the matrix A has two real, distinct
eigenvalues λ1 and λ2 with associated eigenvectors V1 and V2 . Let T be the
matrix with columns V1 and V2 . Thus, TEj = Vj for j = 1, 2 where the Ej form
the standard basis of $\mathbb{R}^2$. Also, $T^{-1}V_j = E_j$. Therefore, we have
$$(T^{-1}AT)E_j = T^{-1}AV_j = \lambda_j T^{-1}V_j = \lambda_j E_j,$$
so that $T^{-1}AT$ is in canonical form:
$$T^{-1}AT = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}.$$
For example, suppose $A = \begin{pmatrix} -1 & 0 \\ 1 & -2 \end{pmatrix}$. The eigenvalues are $-1$ and $-2$, with associated eigenvectors $(1, 1)$ and $(0, 1)$. Thus we may take $T = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}$, so that
$$T^{-1} = \begin{pmatrix} 1 & 0 \\ -1 & 1 \end{pmatrix}.$$
Finally, we compute
$$T^{-1}AT = \begin{pmatrix} -1 & 0 \\ 0 & -2 \end{pmatrix}.$$
Thus the linear map $T$ converts the phase portrait for the system
$$Y' = \begin{pmatrix} -1 & 0 \\ 0 & -2 \end{pmatrix} Y$$
to the phase portrait of $X' = AX$.
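The conjugacy can be checked with plain $2 \times 2$ matrix arithmetic. The matrix $A = \begin{pmatrix} -1 & 0 \\ 1 & -2 \end{pmatrix}$ used below is an assumption of this sketch: it is the unique matrix satisfying $T^{-1}AT = \mathrm{diag}(-1, -2)$ for the $T^{-1}$ shown in the text.

```python
# Check T^{-1} * A * T = diag(-1, -2) with plain 2x2 arithmetic. The
# matrix A below is assumed here: it is the unique matrix for which
# T^{-1} A T = diag(-1, -2) with this T.
def matmul(M, N):
    """Product of two 2x2 matrices."""
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A    = [[-1.0, 0.0], [1.0, -2.0]]
T    = [[1.0, 0.0], [1.0, 1.0]]    # columns: eigenvectors (1,1) and (0,1)
Tinv = [[1.0, 0.0], [-1.0, 1.0]]

assert matmul(Tinv, T) == [[1.0, 0.0], [0.0, 1.0]]          # T^{-1} T = I
assert matmul(Tinv, matmul(A, T)) == [[-1.0, 0.0], [0.0, -2.0]]
```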
In the case of complex eigenvalues $\alpha \pm i\beta$, suppose $V_1 + iV_2$ is an eigenvector associated with $\alpha + i\beta$, so that $A(V_1 + iV_2) = (\alpha + i\beta)(V_1 + iV_2)$. Equating the real and imaginary components of this vector equation, we find
$$AV_1 = \alpha V_1 - \beta V_2,$$
and similarly
$$AV_2 = \beta V_1 + \alpha V_2.$$
If $T$ is the matrix with columns $V_1$ and $V_2$, then $T^{-1}AV_1 = \alpha E_1 - \beta E_2$ and $T^{-1}AV_2 = \beta E_1 + \alpha E_2$, so that
$$T^{-1}AT = \begin{pmatrix} \alpha & \beta \\ -\beta & \alpha \end{pmatrix}.$$
As an example, consider the second-order equation
$$x'' + 4x = 0,$$
which, setting $y = x'$, corresponds to the system $X' = AX$ with $A = \begin{pmatrix} 0 & 1 \\ -4 & 0 \end{pmatrix}$. The characteristic equation is
$$\lambda^2 + 4 = 0,$$
so the eigenvalues are $\pm 2i$. An eigenvector associated with $2i$ must satisfy $-2ix + y = 0$.
One such solution is the vector (1, 2i). So we have a complex solution of the
form
$$e^{2it} \begin{pmatrix} 1 \\ 2i \end{pmatrix}.$$
Breaking this solution into its real and imaginary parts, we find the general
solution
$$X(t) = c_1 \begin{pmatrix} \cos 2t \\ -2\sin 2t \end{pmatrix} + c_2 \begin{pmatrix} \sin 2t \\ 2\cos 2t \end{pmatrix}.$$
Here $V_1 = (1, 0)$ and $V_2 = (0, 2)$, so the matrix $T$ with these columns converts $X' = AX$ to
$$Y' = \begin{pmatrix} 0 & 2 \\ -2 & 0 \end{pmatrix} Y,$$
which is in canonical form. The phase portraits of these systems are shown in Figure 3.8. Note that $T$ maps the circular solutions of the system $Y' = (T^{-1}AT)Y$ to elliptical solutions of $X' = AX$.
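As a sanity check, the first component $x(t) = c_1 \cos 2t + c_2 \sin 2t$ of this solution should satisfy the original equation $x'' + 4x = 0$. A sketch with arbitrarily chosen constants:

```python
import math

# Check that x(t) = c1*cos(2t) + c2*sin(2t) satisfies x'' + 4x = 0,
# using the exact second derivative (c1, c2 chosen arbitrarily).
c1, c2 = 1.5, -0.5

def x(t):
    return c1 * math.cos(2*t) + c2 * math.sin(2*t)

def xpp(t):  # second derivative: each term picks up a factor of -4
    return -4 * (c1 * math.cos(2*t) + c2 * math.sin(2*t))

for t in (0.0, 0.3, 1.0, 4.0):
    assert abs(xpp(t) + 4 * x(t)) < 1e-12
```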
For the more complicated case, let’s assume that V is an eigenvector and
that every other eigenvector is a multiple of V . Let W be any vector for which
V and W are linearly independent. Then we have
$$AW = \mu V + \nu W$$
for some constants $\mu$ and $\nu$. Note that $\mu \neq 0$, for otherwise $W$ would be a second linearly independent eigenvector. We claim that $\nu = \lambda$. If $\nu - \lambda \neq 0$, a computation shows that
$$A\left(W + \frac{\mu}{\nu - \lambda}\, V\right) = \nu \left(W + \frac{\mu}{\nu - \lambda}\, V\right).$$
This says that $\nu$ is a second eigenvalue different from $\lambda$, a contradiction. Thus, we must have $\nu = \lambda$.
Finally, let $U = (1/\mu)W$. Then
$$AU = V + \frac{\lambda}{\mu}\, W = V + \lambda U.$$
Thus, if $T$ is the matrix with columns $V$ and $U$, we have
$$T^{-1}AT = \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix},$$
which is in canonical form.
EXERCISES
1. In Figure 3.9, you see six phase portraits. Match each of these phase
portraits with one of the following linear systems:
(a) $\begin{pmatrix} 3 & 5 \\ -2 & -2 \end{pmatrix}$  (b) $\begin{pmatrix} -3 & -2 \\ 5 & 2 \end{pmatrix}$  (c) $\begin{pmatrix} 3 & -2 \\ 5 & -2 \end{pmatrix}$
(d) $\begin{pmatrix} -3 & 5 \\ -2 & 3 \end{pmatrix}$  (e) $\begin{pmatrix} 3 & 5 \\ -2 & -3 \end{pmatrix}$  (f) $\begin{pmatrix} -3 & 5 \\ -2 & 2 \end{pmatrix}$
Figure 3.9 Match these phase portraits with the systems in Exercise 1.
(iii) $A = \begin{pmatrix} 1 & 1 \\ -1 & 0 \end{pmatrix}$  (iv) $A = \begin{pmatrix} 1 & 1 \\ -1 & 3 \end{pmatrix}$
(v) $A = \begin{pmatrix} 1 & 1 \\ -1 & -3 \end{pmatrix}$  (vi) $A = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$
Sketch the regions in the ab-plane where this system has different types
of canonical forms.
7. Consider the system
$$X' = \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix} X$$
8. Find all 2 × 2 matrices that have pure imaginary eigenvalues. That is,
determine conditions on the entries of a matrix that guarantee the matrix
has pure imaginary eigenvalues.
9. Determine a computable condition that guarantees that, if a matrix A
has complex eigenvalues with nonzero imaginary parts, then solutions
of X 0 = AX travel around the origin in the counterclockwise direction.
10. Consider the system
$$X' = \begin{pmatrix} a & b \\ c & d \end{pmatrix} X,$$
13. Prove that a 2 × 2 matrix A always satisfies its own characteristic equa-
tion. That is, if λ2 + αλ + β = 0 is the characteristic equation associated
with A, then the matrix A2 + αA + βI is the 0-matrix.
14. Suppose the 2 × 2 matrix A has repeated eigenvalues λ. Let V ∈ R2 .
Using the previous problem, show that either V is an eigenvector for
A or else (A − λI)V is an eigenvector for A.
15. Suppose the matrix $A$ has repeated real eigenvalue $\lambda$ and there exists a pair of linearly independent eigenvectors associated with $A$. Prove that
$$A = \begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix}.$$
$$x' = |y|, \qquad y' = -x.$$