
3 Phase Portraits for Planar Systems

Given the Linearity Principle from the previous chapter, we may now compute the general solution of any planar system. There is a seemingly endless number of distinct cases, but we will see that these represent in the simplest possible form nearly all of the types of solutions we will encounter in the higher-dimensional case.

3.1 Real Distinct Eigenvalues

Consider X′ = AX and suppose that A has two real eigenvalues λ1 < λ2. Assuming for the moment that λi ≠ 0, there are three cases to consider:

1. λ1 < 0 < λ2
2. λ1 < λ2 < 0
3. 0 < λ1 < λ2
We give a specific example of each case; any system that falls into any one of
these three categories may be handled similarly, as we show later.


Example. (Saddle) First consider the simple system X′ = AX, where

$$A = \begin{pmatrix}\lambda_1 & 0\\ 0 & \lambda_2\end{pmatrix}$$

with λ1 < 0 < λ2. This can be solved immediately, since the system decouples into two unrelated first-order equations:

x′ = λ1 x
y′ = λ2 y.

We already know how to solve these equations, but, having in mind what
comes later, let’s find the eigenvalues and eigenvectors. The characteristic
equation is

(λ − λ1 )(λ − λ2 ) = 0,

so λ1 and λ2 are the eigenvalues. An eigenvector corresponding to λ1 is (1, 0) and to λ2 is (0, 1). Thus, we find the general solution

$$X(t) = \alpha e^{\lambda_1 t}\begin{pmatrix}1\\0\end{pmatrix} + \beta e^{\lambda_2 t}\begin{pmatrix}0\\1\end{pmatrix}.$$

Since λ1 < 0, the straight-line solutions of the form αe^{λ1 t}(1, 0) lie on the x-axis and tend to (0, 0) as t → ∞. This axis is called the stable line. Since λ2 > 0, the solutions βe^{λ2 t}(0, 1) lie on the y-axis and tend away from (0, 0) as t → ∞; this axis is the unstable line. All other solutions (with α, β ≠ 0) tend to ∞ in the direction of the unstable line as t → ∞, since X(t) comes closer and closer to (0, βe^{λ2 t}) as t increases. In backward time, these solutions tend to ∞ in the direction of the stable line.

In Figure 3.1 we have plotted the phase portrait of this system. The phase
portrait is a picture of a collection of representative solution curves of the
system in R2 , which we call the phase plane. The equilibrium point of a system
of this type (eigenvalues satisfying λ1 < 0 < λ2 ) is called a saddle.
Figure 3.1 Saddle phase portrait for x′ = −x, y′ = y.

For a slightly more complicated example of this type, consider X′ = AX, where

$$A = \begin{pmatrix}1 & 3\\ 1 & -1\end{pmatrix}.$$

As we saw in Chapter 2, the eigenvalues of A are ±2. The eigenvector associated with λ = 2 is (3, 1); the eigenvector associated with λ = −2 is (1, −1). Thus, we have an unstable line that contains straight-line solutions of the form

$$X_1(t) = \alpha e^{2t}\begin{pmatrix}3\\1\end{pmatrix},$$

each of which tends away from the origin as t → ∞. The stable line contains
the straight-line solutions

$$X_2(t) = \beta e^{-2t}\begin{pmatrix}1\\-1\end{pmatrix},$$

which tend toward the origin as t → ∞. By the Linearity Principle, any other solution assumes the form

$$X(t) = \alpha e^{2t}\begin{pmatrix}3\\1\end{pmatrix} + \beta e^{-2t}\begin{pmatrix}1\\-1\end{pmatrix}$$

for some α, β. Note that, if α ≠ 0, as t → ∞, we have

$$X(t) \sim \alpha e^{2t}\begin{pmatrix}3\\1\end{pmatrix} = X_1(t),$$

whereas, if β ≠ 0, as t → −∞,

$$X(t) \sim \beta e^{-2t}\begin{pmatrix}1\\-1\end{pmatrix} = X_2(t).$$

Thus, as time increases, the typical solution approaches X1 (t) while, as time
decreases, this solution tends toward X2 (t), just as in the previous case.
Figure 3.2 displays this phase portrait.

Figure 3.2 Saddle phase portrait for x′ = x + 3y, y′ = x − y.
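As a quick numerical cross-check of this example (a sketch assuming NumPy and SciPy are available; the values of α, β, and t are arbitrary), the eigenvalues ±2 and the general solution above can be recovered in a few lines:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 3.0],
              [1.0, -1.0]])

# The eigenvalues come out as 2 and -2 (in some order), with eigenvectors
# parallel to (3, 1) and (1, -1).
evals, evecs = np.linalg.eig(A)
print(evals)

# General solution X(t) = alpha*e^{2t}(3,1) + beta*e^{-2t}(1,-1).
def X(t, alpha, beta):
    return alpha * np.exp(2 * t) * np.array([3.0, 1.0]) \
         + beta * np.exp(-2 * t) * np.array([1.0, -1.0])

# It agrees with the matrix-exponential solution e^{tA} X(0).
alpha, beta, t = 0.5, -1.0, 0.7
print(np.allclose(expm(t * A) @ X(0.0, alpha, beta), X(t, alpha, beta)))  # True
```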

In the general case where A has a positive and negative eigenvalue, we always
find a similar stable and unstable line on which solutions tend toward or away
from the origin. All other solutions approach the unstable line as t → ∞, and
tend toward the stable line as t → −∞.

Example. (Sink) Now consider the case X′ = AX, where

$$A = \begin{pmatrix}\lambda_1 & 0\\ 0 & \lambda_2\end{pmatrix}$$

but λ1 < λ2 < 0. As before, we find two straight-line solutions and then the general solution

$$X(t) = \alpha e^{\lambda_1 t}\begin{pmatrix}1\\0\end{pmatrix} + \beta e^{\lambda_2 t}\begin{pmatrix}0\\1\end{pmatrix}.$$

Unlike the saddle case, now all solutions tend to (0, 0) as t → ∞. The question is this: How do they approach the origin? To answer this, we compute the slope dy/dx of a solution with β ≠ 0. We write

x(t) = αe^{λ1 t}
y(t) = βe^{λ2 t}

and compute

$$\frac{dy}{dx} = \frac{dy/dt}{dx/dt} = \frac{\lambda_2 \beta e^{\lambda_2 t}}{\lambda_1 \alpha e^{\lambda_1 t}} = \frac{\lambda_2 \beta}{\lambda_1 \alpha}\, e^{(\lambda_2 - \lambda_1)t}.$$

Since λ2 − λ1 > 0, it follows that these slopes approach ±∞ (provided β ≠ 0). Thus these solutions tend to the origin tangentially to the y-axis.

Figure 3.3 Phase portraits for (a) a sink and (b) a source.

Since λ1 < λ2 < 0, we call λ1 the stronger eigenvalue and λ2 the weaker eigenvalue. The reason for this terminology is that, in this case, the x-coordinates of solutions tend to 0 much more quickly than the y-coordinates. This accounts for why solutions (except those on the line corresponding to the λ1-eigenvector) tend to “hug” the straight-line solution corresponding to the weaker eigenvalue as they approach the origin. The phase portrait for this system is displayed in Figure 3.3a. In this case the equilibrium point is called a sink.
More generally, if the system has eigenvalues λ1 < λ2 < 0 with eigenvectors (u1, u2) and (v1, v2) respectively, then the general solution is

$$\alpha e^{\lambda_1 t}\begin{pmatrix}u_1\\u_2\end{pmatrix} + \beta e^{\lambda_2 t}\begin{pmatrix}v_1\\v_2\end{pmatrix}.$$

The slope of this solution is given by

$$\begin{aligned}
\frac{dy}{dx} &= \frac{\lambda_1 \alpha e^{\lambda_1 t} u_2 + \lambda_2 \beta e^{\lambda_2 t} v_2}{\lambda_1 \alpha e^{\lambda_1 t} u_1 + \lambda_2 \beta e^{\lambda_2 t} v_1}\\[4pt]
&= \frac{\lambda_1 \alpha e^{\lambda_1 t} u_2 + \lambda_2 \beta e^{\lambda_2 t} v_2}{\lambda_1 \alpha e^{\lambda_1 t} u_1 + \lambda_2 \beta e^{\lambda_2 t} v_1}\cdot\frac{e^{-\lambda_2 t}}{e^{-\lambda_2 t}}\\[4pt]
&= \frac{\lambda_1 \alpha e^{(\lambda_1-\lambda_2)t} u_2 + \lambda_2 \beta v_2}{\lambda_1 \alpha e^{(\lambda_1-\lambda_2)t} u_1 + \lambda_2 \beta v_1},
\end{aligned}$$

which tends to the slope v2/v1 of the λ2-eigenvector, unless we have β = 0. If β = 0, our solution is the straight-line solution corresponding to the eigenvalue λ1. Thus, in this case as well, all solutions (except those on the straight line corresponding to the stronger eigenvalue) tend to the origin tangentially to the straight-line solution corresponding to the weaker eigenvalue.
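The tangency claim is easy to see numerically. The sketch below plugs illustrative values (λ1 = −3, λ2 = −1 with eigenvectors (1, 1) and (1, 2), all chosen only for this example) into the slope formula above and watches it approach v2/v1:

```python
import numpy as np

# Illustrative sink: lambda1 < lambda2 < 0 with eigenvectors (u1, u2), (v1, v2).
l1, l2 = -3.0, -1.0
u1, u2 = 1.0, 1.0          # eigenvector for the stronger eigenvalue lambda1
v1, v2 = 1.0, 2.0          # eigenvector for the weaker eigenvalue lambda2
alpha, beta = 2.0, 0.5

def slope(t):
    # dy/dx along alpha*e^{l1 t}(u1,u2) + beta*e^{l2 t}(v1,v2), as derived above.
    num = l1 * alpha * np.exp((l1 - l2) * t) * u2 + l2 * beta * v2
    den = l1 * alpha * np.exp((l1 - l2) * t) * u1 + l2 * beta * v1
    return num / den

for t in (0.0, 2.0, 5.0, 10.0):
    print(t, slope(t))     # tends to v2/v1 = 2.0
```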

Example. (Source) When the matrix

$$A = \begin{pmatrix}\lambda_1 & 0\\ 0 & \lambda_2\end{pmatrix}$$

satisfies 0 < λ2 < λ1, our vector field may be regarded as the negative of the previous example. The general solution and phase portrait remain the same, except that all solutions now tend away from (0, 0) along the same paths. See Figure 3.3b.

One may argue that we are presenting examples here that are much too
simple. Although this is true, we will soon see that any system of differential
equations with a matrix that has real distinct eigenvalues can be manipulated
into the preceding special forms by changing coordinates.
Finally, a special case occurs if one of the eigenvalues is equal to 0. As we
have seen, there is a straight line of equilibrium points in this case. If the other
eigenvalue λ is nonzero, then the sign of λ determines whether the other solu-
tions tend toward or away from these equilibria (see Exercises 10 and 11 of this
chapter).

3.2 Complex Eigenvalues

It may happen that the roots of the characteristic polynomial are complex
numbers. In analogy with the real case, we call these roots complex eigenvalues.
When the matrix A has complex eigenvalues, we no longer have straight-line
solutions. However, we can still derive the general solution as before by using a
few tricks involving complex numbers and functions. The following examples
indicate the general procedure.

Example. (Center) Consider X′ = AX with

$$A = \begin{pmatrix}0 & \beta\\ -\beta & 0\end{pmatrix}$$

and β ≠ 0. The characteristic polynomial is λ² + β² = 0, so the eigenvalues are now the imaginary numbers ±iβ. Without worrying about the resulting complex vectors, we react just as before to find the eigenvector corresponding to λ = iβ. We therefore solve

$$\begin{pmatrix}-i\beta & \beta\\ -\beta & -i\beta\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix} = \begin{pmatrix}0\\0\end{pmatrix},$$

or iβx = βy, since the second equation is redundant. Thus we find a complex eigenvector (1, i), and so the function

$$X(t) = e^{i\beta t}\begin{pmatrix}1\\i\end{pmatrix}$$

is a complex solution of X′ = AX.


Now in general it is not polite to hand someone a complex solution to a
real system of differential equations, but we can remedy this with the help of
Euler’s formula:

e^{iβt} = cos βt + i sin βt.

Using this fact, we rewrite the solution as

$$X(t) = \begin{pmatrix}\cos\beta t + i\sin\beta t\\ i(\cos\beta t + i\sin\beta t)\end{pmatrix} = \begin{pmatrix}\cos\beta t + i\sin\beta t\\ -\sin\beta t + i\cos\beta t\end{pmatrix}.$$
Better yet, by breaking X(t) into its real and imaginary parts, we have

X(t) = Xre(t) + iXim(t),

where

$$X_{\mathrm{re}}(t) = \begin{pmatrix}\cos\beta t\\ -\sin\beta t\end{pmatrix}, \qquad X_{\mathrm{im}}(t) = \begin{pmatrix}\sin\beta t\\ \cos\beta t\end{pmatrix}.$$
But now we see that both Xre(t) and Xim(t) are (real!) solutions of the original system. To see this, we simply check

$$X'_{\mathrm{re}}(t) + iX'_{\mathrm{im}}(t) = X'(t) = AX(t) = A\bigl(X_{\mathrm{re}}(t) + iX_{\mathrm{im}}(t)\bigr) = AX_{\mathrm{re}}(t) + iAX_{\mathrm{im}}(t).$$

Equating the real and imaginary parts of this equation yields X′re = AXre and X′im = AXim, which shows that both are indeed solutions. Moreover, since

$$X_{\mathrm{re}}(0) = \begin{pmatrix}1\\0\end{pmatrix}, \qquad X_{\mathrm{im}}(0) = \begin{pmatrix}0\\1\end{pmatrix},$$

the linear combination of these solutions,

X(t) = c1 Xre (t) + c2 Xim (t),



where c1 and c2 are arbitrary constants, provides a solution to any initial value problem.

Figure 3.4 Phase portrait for a center.
We claim that this is the general solution of this equation. To prove this, we
need to show that these are the only solutions of this equation. So suppose
that this is not the case. Let

$$Y(t) = \begin{pmatrix}u(t)\\v(t)\end{pmatrix}$$

be another solution. Consider the complex function f(t) = (u(t) + iv(t))e^{iβt}. Differentiating this expression and using the fact that Y(t) is a solution of the equation yields f′(t) = 0. Thus, u(t) + iv(t) is a complex constant times e^{−iβt}. From this it follows directly that Y(t) is a linear combination of Xre(t) and Xim(t).
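For the reader who wants the differentiation spelled out: since Y(t) solves the system, u′ = βv and v′ = −βu, and therefore

$$f'(t) = (u' + iv')e^{i\beta t} + i\beta\,(u + iv)e^{i\beta t} = \bigl((\beta v - i\beta u) + (i\beta u - \beta v)\bigr)e^{i\beta t} = 0,$$

so f is constant and u(t) + iv(t) = f(0)e^{−iβt}, as claimed.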
Note that each of these solutions is a periodic function with period 2π/β.
Indeed, the phase portrait shows that all solutions lie on circles centered at
the origin. These circles are traversed in the clockwise direction if β > 0,
counterclockwise if β < 0. See Figure 3.4. This type of system is called a
center.
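As an optional numerical illustration (with β = 2 chosen arbitrarily; NumPy and SciPy assumed), the matrix exponential of tA is exactly the clockwise rotation by βt whose columns are Xre(t) and Xim(t), so every solution stays on a circle:

```python
import numpy as np
from scipy.linalg import expm

beta = 2.0                       # arbitrary illustrative value
A = np.array([[0.0, beta],
              [-beta, 0.0]])

# e^{tA} is the clockwise rotation by angle beta*t, with columns Xre(t), Xim(t).
t = 0.3
R = np.array([[np.cos(beta * t), np.sin(beta * t)],
              [-np.sin(beta * t), np.cos(beta * t)]])
print(np.allclose(expm(t * A), R))           # True

# Hence every solution keeps its distance from the origin: orbits are circles.
X0 = np.array([1.0, 0.5])
for t in np.linspace(0.0, 2 * np.pi / beta, 5):
    print(np.linalg.norm(expm(t * A) @ X0))  # constant
```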

Example. (Spiral Sink, Spiral Source) More generally, consider X′ = AX, where

$$A = \begin{pmatrix}\alpha & \beta\\ -\beta & \alpha\end{pmatrix}$$

and α, β ≠ 0. The characteristic equation is now λ² − 2αλ + α² + β² = 0, so the eigenvalues are λ = α ± iβ. An eigenvector associated with α + iβ is determined by the equation

(α − (α + iβ))x + βy = 0.

Figure 3.5 Phase portraits for a spiral sink and a spiral source.

Thus (1, i) is again an eigenvector, and so we have complex solutions of the form

$$X(t) = e^{(\alpha+i\beta)t}\begin{pmatrix}1\\i\end{pmatrix} = e^{\alpha t}\begin{pmatrix}\cos\beta t\\ -\sin\beta t\end{pmatrix} + ie^{\alpha t}\begin{pmatrix}\sin\beta t\\ \cos\beta t\end{pmatrix} = X_{\mathrm{re}}(t) + iX_{\mathrm{im}}(t).$$

As before, both Xre(t) and Xim(t) yield real solutions of the system with initial conditions that are linearly independent. Thus we find the general solution,

$$X(t) = c_1 e^{\alpha t}\begin{pmatrix}\cos\beta t\\ -\sin\beta t\end{pmatrix} + c_2 e^{\alpha t}\begin{pmatrix}\sin\beta t\\ \cos\beta t\end{pmatrix}.$$

Without the term e^{αt}, these solutions would wind periodically around circles centered at the origin. The e^{αt} term converts solutions into spirals that either spiral into the origin (when α < 0) or away from the origin (when α > 0). In these cases the equilibrium point is called a spiral sink or spiral source, respectively. See Figure 3.5.
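A similar sketch (with illustrative values α = −0.4, β = 2) shows the spiral-sink behavior: the rotation is unchanged, while the norm of every solution decays exactly like e^{αt}:

```python
import numpy as np
from scipy.linalg import expm

alpha, beta = -0.4, 2.0          # alpha < 0: a spiral sink (illustrative values)
A = np.array([[alpha, beta],
              [-beta, alpha]])

X0 = np.array([1.0, 0.0])
for t in (0.0, 1.0, 2.0, 3.0):
    Xt = expm(t * A) @ X0
    # The solution winds around the origin while its norm decays like e^{alpha t}.
    print(t, np.linalg.norm(Xt), np.exp(alpha * t) * np.linalg.norm(X0))
```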

3.3 Repeated Eigenvalues

The only remaining cases occur when A has repeated real eigenvalues. One
simple case occurs when A is a diagonal matrix of the form

$$A = \begin{pmatrix}\lambda & 0\\ 0 & \lambda\end{pmatrix}.$$

The eigenvalues of A are both equal to λ. In this case every nonzero vector is
an eigenvector since

AV = λV

for any V ∈ R². Thus, solutions are of the form

X(t) = αe^{λt}V.

Each such solution lies on a straight line through (0, 0) and either tends to
(0, 0) (if λ < 0) or away from (0, 0) (if λ > 0). So this is an easy case.
A more interesting case occurs when

$$A = \begin{pmatrix}\lambda & 1\\ 0 & \lambda\end{pmatrix}.$$

Again, both eigenvalues are equal to λ, but now there is only one linearly independent eigenvector, given by (1, 0). Thus, we have one straight-line solution

$$X_1(t) = \alpha e^{\lambda t}\begin{pmatrix}1\\0\end{pmatrix}.$$

To find other solutions note that the system may be written

x′ = λx + y
y′ = λy.

Thus, if y ≠ 0, we must have

y(t) = βe^{λt}.

Therefore, the differential equation for x(t) reads

x′ = λx + βe^{λt}.

This is a nonautonomous, first-order differential equation for x(t). One might


first expect solutions of the form e^{λt}, but the nonautonomous term is also in this form. As you perhaps saw in calculus, the best option is to guess a solution of the form

x(t) = αe^{λt} + µte^{λt}

for some constants α and µ. This technique is often called the method of
undetermined coefficients. Inserting this guess into the differential equation

shows that µ = β while α is arbitrary. Thus, the solution of the system may be written

$$\alpha e^{\lambda t}\begin{pmatrix}1\\0\end{pmatrix} + \beta e^{\lambda t}\begin{pmatrix}t\\1\end{pmatrix}.$$

This is in fact the general solution (see Exercise 12 of this chapter).

Figure 3.6 Phase portrait for a system with repeated negative eigenvalues.


Note that, if λ < 0, each term in this solution tends to 0 as t → ∞. This is clear for the αe^{λt} and βe^{λt} terms. For the term βte^{λt}, this is an immediate consequence of l’Hôpital’s rule. Thus, all solutions tend to (0, 0) as t → ∞. When λ > 0, all solutions tend away from (0, 0). See Figure 3.6. In fact, solutions tend toward or away from the origin in a direction tangent to the eigenvector (1, 0) (see Exercise 7 at the end of this chapter).
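One quick way to corroborate this formula (a sketch with λ = −0.5 chosen arbitrarily; NumPy and SciPy assumed) is to note that e^{tA} equals e^{λt} times the matrix with columns (1, 0) and (t, 1), which are exactly the two basic solutions above:

```python
import numpy as np
from scipy.linalg import expm

lam = -0.5                       # arbitrary repeated eigenvalue
A = np.array([[lam, 1.0],
              [0.0, lam]])

# e^{tA} = e^{lam t} [[1, t], [0, 1]]; its columns are the basic solutions
# e^{lam t}(1, 0) and e^{lam t}(t, 1) from the general solution.
t = 1.3
M = np.exp(lam * t) * np.array([[1.0, t],
                                [0.0, 1.0]])
print(np.allclose(expm(t * A), M))   # True
```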

3.4 Changing Coordinates

Despite differences in the associated phase portraits, we have really dealt with only three types of matrices in these past four sections:

$$\begin{pmatrix}\lambda & 0\\ 0 & \mu\end{pmatrix}, \qquad \begin{pmatrix}\alpha & \beta\\ -\beta & \alpha\end{pmatrix}, \qquad \begin{pmatrix}\lambda & 1\\ 0 & \lambda\end{pmatrix}.$$

Any 2 × 2 matrix that is in one of these three forms is said to be in canonical form. Systems in this form may seem rather special, but they are not. Given any linear system X′ = AX, we can always “change coordinates” so that the new system’s coefficient matrix is in canonical form and so is easily solved. Here is how to do this.

A linear map (or linear transformation) on R² is a function T : R² → R² of the form

$$T\begin{pmatrix}x\\y\end{pmatrix} = \begin{pmatrix}ax + by\\ cx + dy\end{pmatrix}.$$

That is, T simply multiplies any vector by the 2 × 2 matrix

$$\begin{pmatrix}a & b\\ c & d\end{pmatrix}.$$

We will thus think of the linear map and its matrix as being interchangeable,
so that we also write

$$T = \begin{pmatrix}a & b\\ c & d\end{pmatrix}.$$

Hopefully no confusion will result from this slight imprecision.


Now suppose that T is invertible. This means that the matrix T has an
inverse matrix S that satisfies TS = ST = I where I is the 2 × 2 identity matrix.
It is traditional to denote the inverse of a matrix T by T −1 . As is easily checked,
the matrix
 
1 d −b
S=
det T −c a

serves as T −1 if det T 6= 0. If det T = 0, we know from Chapter 2 that there are


infinitely many vectors (x, y) for which
   
x 0
T = .
y 0

Thus, there is no inverse matrix in this case, for we would need

$$\begin{pmatrix}x\\y\end{pmatrix} = T^{-1}T\begin{pmatrix}x\\y\end{pmatrix} = T^{-1}\begin{pmatrix}0\\0\end{pmatrix}$$

for each such vector. We have shown the following.

Proposition. The 2 × 2 matrix T is invertible if and only if det T ≠ 0.
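A minimal code sketch of the inverse formula above (the helper name inverse_2x2 is ours, for illustration only; NumPy assumed):

```python
import numpy as np

def inverse_2x2(T):
    """Inverse of a 2x2 matrix via S = (1/det T) [[d, -b], [-c, a]]."""
    (a, b), (c, d) = T
    det = a * d - b * c
    if det == 0:
        raise ValueError("T is not invertible")
    return np.array([[d, -b],
                     [-c, a]]) / det

T = np.array([[1.0, 3.0],
              [1.0, -1.0]])
print(np.allclose(inverse_2x2(T) @ T, np.eye(2)))   # True
```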



Now, instead of considering a linear system X′ = AX, suppose we consider a different system,

Y′ = (T⁻¹AT)Y,

for some invertible linear map T. Note that if Y(t) is a solution of this new system, then X(t) = TY(t) solves X′ = AX. Indeed, we have

$$(TY(t))' = TY'(t) = T(T^{-1}AT)Y(t) = A(TY(t)),$$

as required. That is, the linear map T converts solutions of Y′ = (T⁻¹AT)Y to solutions of X′ = AX. Alternatively, T⁻¹ takes solutions of X′ = AX to solutions of Y′ = (T⁻¹AT)Y.
We therefore think of T as a change of coordinates that converts a given linear system into one with a different coefficient matrix. What we hope to be able to do is find a linear map T that converts the given system into a system of the form Y′ = (T⁻¹AT)Y that is easily solved. And, as you may have guessed, we can always do this by finding a linear map that converts a given linear system to one in canonical form.

Example. (Real Eigenvalues) Suppose the matrix A has two real, distinct eigenvalues λ1 and λ2 with associated eigenvectors V1 and V2. Let T be the matrix with columns V1 and V2. Thus, TEj = Vj for j = 1, 2, where the Ej form the standard basis of R². Also, T⁻¹Vj = Ej. Therefore, we have

$$(T^{-1}AT)E_j = T^{-1}AV_j = T^{-1}(\lambda_j V_j) = \lambda_j T^{-1}V_j = \lambda_j E_j.$$

Thus the matrix T⁻¹AT assumes the canonical form

$$T^{-1}AT = \begin{pmatrix}\lambda_1 & 0\\ 0 & \lambda_2\end{pmatrix}$$

and the corresponding system is easy to solve.

Example. As a further specific example, suppose

$$A = \begin{pmatrix}-1 & 0\\ 1 & -2\end{pmatrix}.$$

The characteristic equation is λ² + 3λ + 2 = 0, which yields eigenvalues λ = −1 and λ = −2. An eigenvector corresponding to λ = −1 is given by solving

$$(A + I)\begin{pmatrix}x\\y\end{pmatrix} = \begin{pmatrix}0 & 0\\ 1 & -1\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix} = \begin{pmatrix}0\\0\end{pmatrix},$$

which yields an eigenvector (1, 1). Similarly, an eigenvector associated with λ = −2 is given by (0, 1).
We therefore have a pair of straight-line solutions, each tending to the origin as t → ∞. The straight-line solution corresponding to the weaker eigenvalue lies along the line y = x; the straight-line solution corresponding to the stronger eigenvalue lies on the y-axis. All other solutions tend to the origin tangentially to the line y = x.
To put this system in canonical form, we choose T to be the matrix whose columns are these eigenvectors:

$$T = \begin{pmatrix}1 & 0\\ 1 & 1\end{pmatrix},$$

so that

$$T^{-1} = \begin{pmatrix}1 & 0\\ -1 & 1\end{pmatrix}.$$

Finally, we compute

$$T^{-1}AT = \begin{pmatrix}-1 & 0\\ 0 & -2\end{pmatrix},$$

so T⁻¹AT is in canonical form. The general solution of the system Y′ = (T⁻¹AT)Y is

$$Y(t) = \alpha e^{-t}\begin{pmatrix}1\\0\end{pmatrix} + \beta e^{-2t}\begin{pmatrix}0\\1\end{pmatrix},$$

so the general solution of X′ = AX is

$$TY(t) = \begin{pmatrix}1 & 0\\ 1 & 1\end{pmatrix}\left(\alpha e^{-t}\begin{pmatrix}1\\0\end{pmatrix} + \beta e^{-2t}\begin{pmatrix}0\\1\end{pmatrix}\right) = \alpha e^{-t}\begin{pmatrix}1\\1\end{pmatrix} + \beta e^{-2t}\begin{pmatrix}0\\1\end{pmatrix}.$$

Figure 3.7 Change of variables T in the case of a (real) sink.

Thus the linear map T converts the phase portrait of the system

$$Y' = \begin{pmatrix}-1 & 0\\ 0 & -2\end{pmatrix}Y$$

to that of X′ = AX, as shown in Figure 3.7.
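The computation above is easy to reproduce numerically (a two-line check, assuming NumPy):

```python
import numpy as np

A = np.array([[-1.0, 0.0],
              [1.0, -2.0]])

# Columns of T are the eigenvectors (1, 1) and (0, 1) found above.
T = np.array([[1.0, 0.0],
              [1.0, 1.0]])

print(np.linalg.inv(T) @ A @ T)      # diag(-1, -2), the canonical form
```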

Note that we really do not have to go through the step of converting a


specific system to one in canonical form; once we have the eigenvalues and
eigenvectors, we can simply write down the general solution. We take this
extra step because, when we attempt to classify all possible linear systems, the
canonical form of the system will greatly simplify this process.

Example. (Complex Eigenvalues) Now suppose that the matrix A has complex eigenvalues α ± iβ with β ≠ 0. Then we may find a complex eigenvector V1 + iV2 corresponding to α + iβ, where both V1 and V2 are real vectors. We claim that V1 and V2 are linearly independent vectors in R². If this were not the case, then we would have V1 = cV2 for some c ∈ R. But then we have

A(V1 + iV2) = (α + iβ)(V1 + iV2) = (α + iβ)(c + i)V2.

But we also have

A(V1 + iV2) = (c + i)AV2.

So we conclude that AV2 = (α + iβ)V2. This is a contradiction, since the left side is a real vector while the right is complex.
Since V1 + iV2 is an eigenvector associated with α + iβ, we have

A(V1 + iV2) = (α + iβ)(V1 + iV2).



Equating the real and imaginary components of this vector equation, we find

AV1 = αV1 − βV2
AV2 = βV1 + αV2.

Let T be the matrix with columns V1 and V2. Thus TEj = Vj for j = 1, 2. Now consider T⁻¹AT. We have

$$(T^{-1}AT)E_1 = T^{-1}(\alpha V_1 - \beta V_2) = \alpha E_1 - \beta E_2,$$

and similarly

$$(T^{-1}AT)E_2 = \beta E_1 + \alpha E_2.$$

Thus the matrix T⁻¹AT is in the canonical form

$$T^{-1}AT = \begin{pmatrix}\alpha & \beta\\ -\beta & \alpha\end{pmatrix}.$$

We saw that the system Y′ = (T⁻¹AT)Y has a phase portrait corresponding to a spiral sink, center, or spiral source depending on whether α < 0, α = 0, or α > 0. Therefore, the phase portrait of X′ = AX is equivalent to one of these after changing coordinates using T.
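The same construction is easy to carry out numerically. In the sketch below the matrix A is an arbitrary illustrative choice with eigenvalues ±i; T is built from the real and imaginary parts of a computed eigenvector, and T⁻¹AT comes out in the canonical form above (the sign of β depends on which eigenvalue NumPy returns first):

```python
import numpy as np

# Illustrative matrix with complex eigenvalues (here 0 +/- i).
A = np.array([[1.0, 2.0],
              [-1.0, -1.0]])

evals, evecs = np.linalg.eig(A)
lam = evals[0]                        # alpha + i*beta (one of the conjugate pair)
V = evecs[:, 0]                       # complex eigenvector V1 + i*V2
alpha, beta = lam.real, lam.imag

# T has columns V1 = Re(V) and V2 = Im(V).
T = np.column_stack([V.real, V.imag])
print(np.linalg.inv(T) @ A @ T)       # approximately [[alpha, beta], [-beta, alpha]]
```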

Example. (Another Harmonic Oscillator) Consider the second-order equation

x″ + 4x = 0.

This corresponds to an undamped harmonic oscillator with mass 1 and spring constant 4. As a system, we have

$$X' = \begin{pmatrix}0 & 1\\ -4 & 0\end{pmatrix}X = AX.$$

The characteristic equation is

λ² + 4 = 0,

so that the eigenvalues are ±2i. A complex eigenvector associated with λ = 2i is a solution of the system

−2ix + y = 0
−4x − 2iy = 0.

One such solution is the vector (1, 2i). So we have a complex solution of the form

$$e^{2it}\begin{pmatrix}1\\2i\end{pmatrix}.$$

Breaking this solution into its real and imaginary parts, we find the general solution

$$X(t) = c_1\begin{pmatrix}\cos 2t\\ -2\sin 2t\end{pmatrix} + c_2\begin{pmatrix}\sin 2t\\ 2\cos 2t\end{pmatrix}.$$

Thus the position of this oscillator is given by

x(t) = c1 cos 2t + c2 sin 2t,

which is a periodic function of period π.


Now, let T be the matrix with columns that are the real and imaginary parts of the eigenvector (1, 2i); that is,

$$T = \begin{pmatrix}1 & 0\\ 0 & 2\end{pmatrix}.$$

Then we compute easily that

$$T^{-1}AT = \begin{pmatrix}0 & 2\\ -2 & 0\end{pmatrix},$$

which is in canonical form. The phase portraits of these systems are shown in Figure 3.8. Note that T maps the circular solutions of the system Y′ = (T⁻¹AT)Y to elliptic solutions of X′ = AX.
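A brief numerical comparison between the closed-form position x(t) = c1 cos 2t + c2 sin 2t and a direct integration of the system, for the initial condition x(0) = 1, x′(0) = 0 (so c1 = 1, c2 = 0); SciPy is assumed:

```python
import numpy as np
from scipy.integrate import solve_ivp

# x'' + 4x = 0 written as X' = AX with X = (x, x').
A = np.array([[0.0, 1.0],
              [-4.0, 0.0]])

def rhs(t, X):
    return A @ X

# Initial condition x(0) = 1, x'(0) = 0, i.e. c1 = 1 and c2 = 0 in the closed form.
sol = solve_ivp(rhs, (0.0, np.pi), [1.0, 0.0], dense_output=True,
                rtol=1e-9, atol=1e-12)

t = np.linspace(0.0, np.pi, 7)
print(np.max(np.abs(sol.sol(t)[0] - np.cos(2 * t))))   # close to zero
```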

Example. (Repeated Eigenvalues) Suppose A has a single real eigenvalue λ. If there exists a pair of linearly independent eigenvectors, then in fact A must be of the form

$$A = \begin{pmatrix}\lambda & 0\\ 0 & \lambda\end{pmatrix},$$

so the system X′ = AX is easily solved (see Exercise 15 of this chapter).



Figure 3.8 Change of variables T in the case of a center.

For the more complicated case, let’s assume that V is an eigenvector and
that every other eigenvector is a multiple of V . Let W be any vector for which
V and W are linearly independent. Then we have

AW = µV + νW

for some constants µ, ν ∈ R. Note that µ ≠ 0, for otherwise we would have a second linearly independent eigenvector W with eigenvalue ν. We claim that ν = λ. If ν − λ ≠ 0, a computation shows that

$$A\left(W + \frac{\mu}{\nu - \lambda}V\right) = \nu\left(W + \frac{\mu}{\nu - \lambda}V\right).$$

This says that ν is a second eigenvalue different from λ. Thus, we must have ν = λ.
Finally, let U = (1/µ)W. Then

$$AU = V + \frac{\lambda}{\mu}W = V + \lambda U.$$

Thus, if we define TE1 = V, TE2 = U, we get

$$T^{-1}AT = \begin{pmatrix}\lambda & 1\\ 0 & \lambda\end{pmatrix},$$

as required. X′ = AX is therefore again in canonical form after this change of coordinates.
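The construction of T can be mirrored numerically. In the sketch below, A is an illustrative defective matrix with single eigenvalue 2 and eigenvector direction (1, 1); W is an arbitrary independent vector, and µ, ν are read off by solving AW = µV + νW:

```python
import numpy as np

# Illustrative defective matrix: single eigenvalue 2, eigenvector direction (1, 1).
A = np.array([[3.0, -1.0],
              [1.0, 1.0]])

V = np.array([1.0, 1.0])     # eigenvector: A V = 2 V
W = np.array([1.0, 0.0])     # any vector independent of V

# Write A W = mu*V + nu*W; the argument above forces nu to equal the eigenvalue.
mu, nu = np.linalg.solve(np.column_stack([V, W]), A @ W)
U = W / mu

T = np.column_stack([V, U])
print(nu)                            # 2.0, the eigenvalue
print(np.linalg.inv(T) @ A @ T)      # [[2, 1], [0, 2]], the canonical form
```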

EXERCISES

1. In Figure 3.9, you see six phase portraits. Match each of these phase portraits with one of the following linear systems:

(a) $\begin{pmatrix}3 & 5\\ -2 & -2\end{pmatrix}$  (b) $\begin{pmatrix}-3 & -2\\ 5 & 2\end{pmatrix}$  (c) $\begin{pmatrix}3 & -2\\ 5 & -2\end{pmatrix}$

(d) $\begin{pmatrix}-3 & 5\\ -2 & 3\end{pmatrix}$  (e) $\begin{pmatrix}3 & 5\\ -2 & -3\end{pmatrix}$  (f) $\begin{pmatrix}-3 & 5\\ -2 & 2\end{pmatrix}$

2. For each of the following systems of the form X′ = AX:

(a) Find the eigenvalues and eigenvectors of A.
(b) Find the matrix T that puts A in canonical form.
(c) Find the general solution of both X′ = AX and Y′ = (T⁻¹AT)Y.
(d) Sketch the phase portraits of both systems.

(i) $A = \begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix}$  (ii) $A = \begin{pmatrix}1 & 1\\ 1 & 0\end{pmatrix}$  (iii) $A = \begin{pmatrix}1 & 1\\ -1 & 0\end{pmatrix}$

(iv) $A = \begin{pmatrix}1 & 1\\ -1 & 3\end{pmatrix}$  (v) $A = \begin{pmatrix}1 & 1\\ -1 & -3\end{pmatrix}$  (vi) $A = \begin{pmatrix}1 & 1\\ 1 & -1\end{pmatrix}$

Figure 3.9 Match these phase portraits with the systems in Exercise 1.

3. Find the general solution of the following harmonic oscillator equations:

(a) x″ + x′ + x = 0
(b) x″ + 2x′ + x = 0
4. Consider the harmonic oscillator system

$$X' = \begin{pmatrix}0 & 1\\ -k & -b\end{pmatrix}X,$$

where b ≥ 0, k > 0, and the mass m = 1.


(a) For which values of k, b does this system have complex eigenvalues?
Repeated eigenvalues? Real and distinct eigenvalues?
(b) Find the general solution of this system in each case.
(c) Describe the motion of the mass when the mass is released from
the initial position x = 1 with zero velocity in each of the cases in
part (a).
5. Sketch the phase portrait of X′ = AX, where

$$A = \begin{pmatrix}a & 1\\ 2a & 2\end{pmatrix}.$$

For which values of a do you find a bifurcation? Describe the phase portrait for a-values above and below the bifurcation point.
6. Consider the system

$$X' = \begin{pmatrix}2a & b\\ b & 0\end{pmatrix}X.$$

Sketch the regions in the ab-plane where this system has different types
of canonical forms.
7. Consider the system

$$X' = \begin{pmatrix}\lambda & 1\\ 0 & \lambda\end{pmatrix}X$$

with λ ≠ 0. Show that all solutions tend to (respectively, away from) the origin tangentially to the eigenvector (1, 0) when λ < 0 (respectively, λ > 0).

8. Find all 2 × 2 matrices that have pure imaginary eigenvalues. That is,
determine conditions on the entries of a matrix that guarantee the matrix
has pure imaginary eigenvalues.
9. Determine a computable condition that guarantees that, if a matrix A
has complex eigenvalues with nonzero imaginary parts, then solutions
of X′ = AX travel around the origin in the counterclockwise direction.
10. Consider the system

$$X' = \begin{pmatrix}a & b\\ c & d\end{pmatrix}X,$$

where a + d ≠ 0 but ad − bc = 0. Find the general solution of this system and sketch the phase portrait.
11. Find the general solution and describe completely the phase portrait for

$$X' = \begin{pmatrix}0 & 1\\ 0 & 0\end{pmatrix}X.$$

12. Prove that

$$\alpha e^{\lambda t}\begin{pmatrix}1\\0\end{pmatrix} + \beta e^{\lambda t}\begin{pmatrix}t\\1\end{pmatrix}$$

is the general solution of

$$X' = \begin{pmatrix}\lambda & 1\\ 0 & \lambda\end{pmatrix}X.$$

13. Prove that a 2 × 2 matrix A always satisfies its own characteristic equation. That is, if λ² + αλ + β = 0 is the characteristic equation associated with A, then the matrix A² + αA + βI is the 0-matrix.
14. Suppose the 2 × 2 matrix A has repeated eigenvalues λ. Let V ∈ R². Using the previous problem, show that either V is an eigenvector for A or else (A − λI)V is an eigenvector for A.
15. Suppose the matrix A has repeated real eigenvalues λ and there exists a pair of linearly independent eigenvectors associated with A. Prove that

$$A = \begin{pmatrix}\lambda & 0\\ 0 & \lambda\end{pmatrix}.$$

16. Consider the (nonlinear) system

x′ = |y|
y′ = −x.

Use the methods of this chapter to describe the phase portrait.
