Notes On Exercises
This document is complementary to the answers in the book. For some of the
exercises in the study guide it contains hints, intermediate steps, and a
corrected answer where the answer in the book is incorrect.
Note: this document does not contain full solutions, and should not be used as
a model for correct ways to write things down. For this, you can have a look at
the solutions of old exams.
Section 2.1
21. The general solution is y(t) = −2 − (3/2)(sin(t) + cos(t)) + Ce^t, with C ∈ ℝ
arbitrary. If C ≠ 0, the last term is unbounded, since e^t → ∞ as t → ∞.
If C = 0, the solution is bounded. To which y0 does C = 0 correspond?
23. The general solution is y(t) = −(3/2)t − 3/4 − 2e^t + Ce^{2t}. For C ≠ 0, the
last term is dominant as t → ∞. You can see this by rewriting
y(t) = e^{2t}(−(3/2)te^{−2t} − (3/4)e^{−2t} − 2e^{−t} + C). Since in the expression between brackets
all terms except C go to 0, we have y(t) ≈ Ce^{2t} as t → ∞.
That means that for C > 0, y(t) → +∞ as t → ∞ and for C < 0,
y(t) → −∞ as t → ∞. What happens if C = 0?
24. First assume a ≠ λ. Using the integrating factor µ(t) = e^{at}, we find the
general solution y(t) = (b/(a − λ))e^{−λt} + Ce^{−at}. Now use that lim_{t→∞} e^{−ct} = 0
for c > 0 (standard limits).
Now consider λ = a (why doesn't the previous argument work in that case?). Again
we can take µ(t) = e^{at}. We get y(t) = (bt + C)e^{−at}, C ∈ ℝ arbitrary. Now
use the standard limit lim_{t→∞} t^r e^{−ct} = 0 for all c > 0 and all r ∈ ℝ.
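The limits in 24 can be sanity-checked symbolically. The DE itself is not quoted in these notes; the form y′ + ay = be^{−λt} is an assumption, chosen because it is the one consistent with the integrating factor µ(t) = e^{at}:

```python
import sympy as sp

t, a, lam, b, C = sp.symbols('t a lambda b C')
# Assumed DE: y' + a*y = b*exp(-lambda*t)  (consistent with mu(t) = exp(a*t))
y = b/(a - lam)*sp.exp(-lam*t) + C*sp.exp(-a*t)   # claimed solution for a != lambda
residual = sp.simplify(sp.diff(y, t) + a*y - b*sp.exp(-lam*t))
print(residual)  # 0

# Case lambda = a: claimed solution y = (b*t + C)*exp(-a*t)
y2 = (b*t + C)*sp.exp(-a*t)
residual2 = sp.simplify(sp.diff(y2, t) + a*y2 - b*sp.exp(-a*t))
print(residual2)  # 0
```

Both residuals vanishing confirms that the two claimed general solutions do solve the assumed equation in their respective cases.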
Section 2.2
2. Don't forget y(x) = 0. The solution in the book can be made explicit:
y(x) = 1/(sin(x) + C).
8. The solution can be made explicit: y(x) = −2 ± √(x³ − x + C + 4).
17. The interval of definition can be found by looking where the solution ceases
to be differentiable. Since
y′ = (1 + 3x²)/(3y² − 12y),
the solution fails to be differentiable where 3y² − 12y = 0, i.e. where y = 0 or y = 4.
11. Typos in the answer in the digital version of the book, and absolute value
signs are missing. Correct: y(t) = ±√((2/3) ln|1 + t³| + y0²). The solution is differentiable
on two intervals: ((e^{−(3/2)y0²} − 1)^{1/3}, ∞) and (−∞, (−e^{−(3/2)y0²} − 1)^{1/3}).
Which is the relevant one and why?
21. a. We can pick y1(t) = 1/µ(t) and y2(t) = (1/µ(t)) ∫_{t0}^{t} µ(s)g(s) ds.
b. Recall that µ(t) = e^{P(t)}, where P(t) is an anti-derivative of p(t). So
µ′(t) = p(t)µ(t). Using this, we find:
y1′(t) = −µ′(t)/µ(t)² = −p(t)µ(t)/µ(t)² = −p(t)y1(t).
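The computation in b. can be verified symbolically; here P is a generic anti-derivative of p, so nothing beyond the definitions above is assumed:

```python
import sympy as sp

t = sp.symbols('t')
P = sp.Function('P')(t)    # anti-derivative of p, so P' = p
p = sp.diff(P, t)
mu = sp.exp(P)             # integrating factor mu = e^P
y1 = 1/mu
# y1' + p*y1 should vanish, i.e. y1 solves the homogeneous equation
residual = sp.simplify(sp.diff(y1, t) + p*y1)
print(residual)  # 0
```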
6. a. The equation is of the form y′ = f(y) (with f(y) = k(1 − y)²), hence
autonomous. f(y) = 0 ⇔ y = 1, so φ(t) = 1 is an equilibrium solution.
b. Sketch of f(y) against y: (figure)
25. a. In this exercise you have to integrate 1/((p − x)(q − x)) (with p ≠ q) at some
point. Note that by using partial fractions we can find
1/((p − x)(q − x)) = (1/(p − q)) (1/(q − x) − 1/(p − x)).
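The partial-fraction identity used in 25a can be checked in one line with sympy:

```python
import sympy as sp

x, p, q = sp.symbols('x p q')
lhs = 1/((p - x)*(q - x))
rhs = (1/(p - q))*(1/(q - x) - 1/(p - x))
delta = sp.simplify(lhs - rhs)
print(delta)  # 0
```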
28. c. Red means unstable, blue means stable, green means semi-stable.
Section 2.6
3. Note: if you use My = Nx to show exactness, don’t forget to check the
domain is simply connected (certainly the case here, since it is R2 ).
5. The answer in the book can be made explicit:
– For c ≠ 0, y(x) = −(b/c)x ± (1/c)√(b²x² − c(ax² − 2K));
– For c = 0, y(x) = (2K − ax²)/(2bx).
20. a. Note that r± = −b/(2a) ± (1/(2a))√(b² − 4ac). Both solutions negative means:
−b/(2a) ± (1/(2a))√(b² − 4ac) < 0.
Since a > 0 this is equivalent to:
b > −√(b² − 4ac) AND b > √(b² − 4ac).
11. Tip: show that c1 t² + c2 t⁻¹ solves the DE for all c1 and c2; then of course
y1(t) = t² and y2(t) = t⁻¹ are just special cases, so you don't have to
check those separately.
12. Tip: again plug in y(t) = c1 + c2 t^{1/2} and show it is only a solution if
c1 = 0 or c2 = 0 (or both). Why is this not a contradiction with the theorem
mentioned?
22. In this case we have W [y1 , y2 ](t) = e4t . How does the conclusion follow?
23. In this case we have W[y1, y2](x) = x cos(x) − sin(x). Showing that this is
nonzero on (0, π) is tricky! In this case, I'm fine with a visual argument.
Here is a precise one:
Note that d/dx (x cos(x) − sin(x)) = −x sin(x) < 0 for x ∈ (0, π). Hence:
x cos(x) − sin(x) = ∫₀ˣ (−s sin(s)) ds < 0.
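The sign claim is easy to spot-check numerically by sampling the Wronskian on the open interval:

```python
import math

# Sample W(x) = x*cos(x) - sin(x) on (0, pi); it should be strictly negative there
xs = [k*math.pi/1000 for k in range(1, 1000)]
assert all(x*math.cos(x) - math.sin(x) < 0 for x in xs)
print("x*cos(x) - sin(x) < 0 at all sampled points of (0, pi)")
```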
Section 3.3
13. Intermediate step: the general solution is y(t) = et (c1 cos(2t) + c2 sin(2t)).
14. Tip: write the set of equations to find c1 and c2 in matrix-vector form:
[ 1/2        (1/2)√3 ] [c1]   [  2 ]
[ −(1/2)√3   1/2     ] [c2] = [ −2 ].
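The system in 14 can then be solved mechanically; the matrix below is the rotation-like matrix as reconstructed above, which is an assumption about the garbled original:

```python
import sympy as sp

A = sp.Matrix([[sp.Rational(1, 2), sp.sqrt(3)/2],
               [-sp.sqrt(3)/2, sp.Rational(1, 2)]])
rhs = sp.Matrix([2, -2])
c = A.solve(rhs)   # A is a rotation matrix, so A^{-1} = A^T
# c1 = 1 + sqrt(3), c2 = sqrt(3) - 1
print(sp.simplify(c[0]), sp.simplify(c[1]))
```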
25. Tip: Introduce the function z such that z(x) = z(ln(t)) = y(t). Then:
y′(t) = z′(ln(t)) · (1/t)
y″(t) = z″(ln(t)) · (1/t²) − z′(ln(t)) · (1/t²).
Then plug this into the differential equation, and show that z satisfies a
second order homogeneous DE with constant coefficients.
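The two chain-rule formulas can be verified with any concrete test function z; here z(x) = sin(2x), an arbitrary choice:

```python
import sympy as sp

t, X = sp.symbols('t x', positive=True)
z = sp.sin(2*X)                      # any concrete test function z(x)
zp, zpp = sp.diff(z, X), sp.diff(z, X, 2)
y = z.subs(X, sp.log(t))             # y(t) = z(ln t)
# claimed: y'(t) = z'(ln t)/t and y''(t) = z''(ln t)/t^2 - z'(ln t)/t^2
d1 = sp.simplify(sp.diff(y, t) - zp.subs(X, sp.log(t))/t)
d2 = sp.simplify(sp.diff(y, t, 2)
                 - (zpp.subs(X, sp.log(t)) - zp.subs(X, sp.log(t)))/t**2)
print(d1, d2)  # 0 0
```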
30. Tip: use 25.
Section 3.4
18. Write y2(t) = v(t) y1(t) = v(t) t³.
Plugging into the DE gives: t⁵v″(t) + 2t⁴v′(t) = 0. Solving using your
favourite method will give v(t) = A(1/t) + B as general solution. To get an
independent solution, we could take A = 1 and B = 0. Any other choice
with A ≠ 0 works as well. Don't forget to multiply by y1 to find y2
(common mistake!).
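The DE for 18 is not quoted in these notes; t²y″ − 4ty′ + 6y = 0 (which has y1 = t³) reproduces the stated equation for v exactly, so presumably it is the intended one:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
v = sp.Function('v')
y = v(t)*t**3                         # ansatz y2 = v(t)*y1(t)
# assumed DE: t^2 y'' - 4 t y' + 6 y = 0
lhs = sp.expand(t**2*sp.diff(y, t, 2) - 4*t*sp.diff(y, t) + 6*y)
target = t**5*sp.diff(v(t), t, 2) + 2*t**4*sp.diff(v(t), t)
print(sp.simplify(lhs - target))  # 0
# with v = 1/t (A = 1, B = 0): y2 = v*y1 = t^2, which indeed solves the DE
y2 = t**2
check = sp.simplify(t**2*sp.diff(y2, t, 2) - 4*t*sp.diff(y2, t) + 6*y2)
print(check)  # 0
```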
20. Write y2(t) = v(t) y1(t) = v(t) t.
Plugging into the DE gives: t³v″(t) + t³v′(t) = 0. Solving using your
favourite method will give v(t) = Ae^{−t} + B as general solution. To get
an independent solution, we could take A = 1 and B = 0. Any other
choice with A ≠ 0 works as well. Don't forget to multiply by y1 to find
y2 (common mistake!).
21. Write y2(x) = v(x) y1(x) = v(x) e^x.
Plugging into the DE gives: (x − 1)v″(x) + (x − 2)v′(x) = 0. Solving using
your favourite method (you will need some integration techniques) will
give v(x) = Axe^{−x} + B as general solution. To get an independent solution,
we could take A = 1 and B = 0. Any other choice with A ≠ 0 works
as well. Don't forget to multiply by y1 to find y2 (common mistake!).
Section 3.6
4. There are several typos in this exercise (in the electronic version, at least).
The closest interpretation seems y″ + y′ = −2 tan(t); this sign is the one that
makes the formulas below come out consistently. In that case, the
homogeneous equation has {y1(t) = 1, y2(t) = e^{−t}} as a fundamental set
of solutions (not unique, of course), and we find as particular
solution Y(t) = u1(t)y1(t) + u2(t)y2(t), with u1 and u2 satisfying:
1 · u1′(t) + e^{−t} u2′(t) = 0
0 · u1′(t) − e^{−t} u2′(t) = −2 tan(t)
⇒ u1′(t) = −2 tan(t), u2′(t) = 2e^t tan(t).
Hence we can take:
u1(t) = ∫₀ᵗ −2 tan(s) ds = 2 ln|cos(t)|,
u2(t) = ∫₀ᵗ 2e^s tan(s) ds.
The latter integral cannot be evaluated in terms of the functions you know.
So you get:
y(t) = c1 + c2 e^{−t} + 2 ln|cos(t)| + e^{−t} ∫₀ᵗ 2e^s tan(s) ds.
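The particular solution can be checked symbolically; the right-hand side −2 tan(t) is the reading of the typo'd exercise consistent with these formulas, which is an assumption:

```python
import sympy as sp

t, s = sp.symbols('t s')
# particular part: Y(t) = 2*ln(cos t) + e^{-t} * Integral_0^t 2*e^s*tan(s) ds
Y = 2*sp.log(sp.cos(t)) + sp.exp(-t)*sp.Integral(2*sp.exp(s)*sp.tan(s), (s, 0, t))
residual = sp.simplify(sp.diff(Y, t, 2) + sp.diff(Y, t) + 2*sp.tan(t))
print(residual)  # 0
```

sympy differentiates the unevaluated integral via the fundamental theorem of calculus, so the unknown antiderivative never has to be computed.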
6. Note that a particular solution is not unique; you can add any solution of
the homogeneous equation to get another particular solution. If your an-
swer differs from the answer in the book by a solution of the homogeneous
equation, your answer is also correct.
10. The answer in the book should be (3/2) + (4/3)t² ln(t). Also see the remark at question
6.
Section 5.1
bc. y″ = y ⇔ Σ_{n=2}^∞ aₙ n(n − 1)xⁿ⁻² = Σ_{n=0}^∞ aₙxⁿ. To compare we shift
indices in the first series: choose n = k + 2. In the second, we don't
shift: choose k = n:
Σ_{k=0}^∞ a_{k+2}(k + 2)(k + 1)xᵏ = Σ_{k=0}^∞ aₖxᵏ.
Now use the fact that two power series are equal if and only if the
corresponding coefficients are equal.
16. Confusing typo in the exercise: the indices under the summation sign
should be k, not n.
(In what follows, k is the old index, m is the new one.) In the first series
we choose k = m (no shift), in the second k = m − 1. We get
Σ_{m=0}^∞ a_{m+1}xᵐ + Σ_{m=1}^∞ a_{m−1}xᵐ.
Note that the summations do not start at the same index. How can you
fix this?
19. Following standard methods, you could also get
a₀ + a₁x + Σ_{n=2}^∞ ((n − 1)a_{n−1} + aₙ) xⁿ.
Section 5.2
b. Graphs: (figure on the interval (−1, 1))
Section 5.3
3. Intermediate results:
8. Intermediate results:
e^x y⁽³⁾ + e^x y″ + xy′ + y = 0
e^x y⁽⁴⁾ + 2e^x y⁽³⁾ + (e^x + x)y″ + 2y′ = 0
e^x y⁽⁵⁾ + 3e^x y⁽⁴⁾ + (3e^x + x)y⁽³⁾ + (e^x + 3)y″ = 0
e^x y⁽⁶⁾ + 4e^x y⁽⁵⁾ + (6e^x + x)y⁽⁴⁾ + (4e^x + 4)y⁽³⁾ + e^x y″ = 0
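These lines follow by repeated differentiation. The exercise's own DE is not quoted here; assuming it is e^x y″ + xy = 0 (whose first derivative is the first line of the list), sympy reproduces the whole list:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
expr = sp.exp(x)*sp.diff(y(x), x, 2) + x*y(x)   # assumed DE: e^x y'' + x y = 0
for _ in range(4):
    expr = sp.expand(sp.diff(expr, x))
    print(expr)                                  # the four intermediate results
last = (sp.exp(x)*sp.diff(y(x), x, 6) + 4*sp.exp(x)*sp.diff(y(x), x, 5)
        + (6*sp.exp(x) + x)*sp.diff(y(x), x, 4)
        + (4*sp.exp(x) + 4)*sp.diff(y(x), x, 3) + sp.exp(x)*sp.diff(y(x), x, 2))
print(sp.simplify(expr - last))  # 0
```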
12. a. You should get the recursion relation a_{n+2} = ((n² − α²)/((n + 1)(n + 2))) aₙ.
Section 5.4
31. Intermediate result: from plugging in power series you find a₁ = 0 and
a_{n+1} = −(1/((2n + 3)(n + 1))) a_{n−1} for n = 1, 2, . . ..
Section 5.5
1. Note that in combining the series you get two separate conditions:
a₀(4r(r − 1) + r) = 0
a₁(4(r + 1)r − (r + 1)) = 0
Since in the method you assume a₀ ≠ 0, the first condition implies 4r(r −
1) + r = 0, i.e. r = 0 or r = 3/4. If you plug these into the second equation,
you see that a1 = 0 in both cases. This is important, but not mentioned
in the answer section of the book.
In the book, a general form of the nth term is given. I will not ask you to
do that, unless it is very easy.
5. Also here, a1 = 0.
6. Actually, in this case the option r = 1 does lead to a solution, namely
y2 (x) = x. In general, this might not work if the solutions of the indicial
equation differ by an integer.
7. Note: in this exercise it is more natural to shift the series such that all
terms are of the form . . . x^{k+r−1}, rather than . . . x^{k+r}. This is ok: it
actually doesn't matter what the common power is, as long as it is the
same in the series that you combine. All powers k + r is also possible;
then some of the series start at k = −1. This is also ok!
– Odd terms vanish (since a₁ = 0); for the even terms: take a₀ = 1, then
a₂ = −1/2², a₄ = 1/(4² · 2²), and in general:
a_{2k} = (−1)ᵏ 1/((2k) · (2k − 2) · · · · · 2)² = (−1)ᵏ 1/(2ᵏ · k · (k − 1) · · · · · 1)² = (−1)ᵏ 1/(2^{2k}(k!)²).
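The closed form can be checked against the recursion a_{2k} = −a_{2k−2}/(2k)², which is the recursion implied by the values a₂ = −1/2² and a₄ = 1/(4²·2²) above:

```python
from fractions import Fraction
from math import factorial

# recursion a_{2k} = -a_{2k-2}/(2k)^2, starting from a0 = 1
a = Fraction(1)
for k in range(1, 10):
    a = -a / (2*k)**2
    assert a == Fraction((-1)**k, 2**(2*k)*factorial(k)**2)
print("closed form a_{2k} = (-1)^k / (2^(2k) (k!)^2) confirmed for k <= 9")
```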
Write v′ = u:
x²J₀u′ + (2x²J₀′ + xJ₀)u = 0.
This is a linear equation. In standard form:
u′ + (2J₀′/J₀ + 1/x) u = 0.
We have p(x) = 2J₀′/J₀ + 1/x, so ∫ p(x) dx = ln(J₀²) + ln(x) (check!). As
integrating factor we can take µ(x) = e^{∫p(x) dx} = xJ₀². Hence:
(xJ₀² · u)′ = 0 ⇒ u = C/(xJ₀²).
We can take C as we like; let's take C = 1. We then find
v(x) = ∫ 1/(xJ₀(x)²) dx and y₂(x) is the given expression.
Now: since J₀(x) is analytic and nonzero (equal to 1) at x = 0, J₀(x)⁻²
is analytic and equal to 1 at x = 0, so we can write: J₀(x)⁻² = 1 + a₁x +
a₂x² + . . .. Hence:
∫ 1/(xJ₀²) dx = ∫ (1/x + a₁ + a₂x + . . .) dx = C + ln(x) + analytic function.
Section 6.1
7. Carefully analyze for which s the integral converges. Note that the integrand
can be written as a sum of exponential functions. Split into two
integrals; both have to converge!
18. Laplace transformation exists for s > 0.
Section 6.2
6. Recall that in partial fractions, in general you need the degree of the
numerator to be strictly smaller than the degree of the denominator.
9. Intermediate result: Y(s) = (s + 5)/((s + 1)(s + 2)).
Tip: it is possible to first split 1/((s + 1)(s + 2)) and then rewrite Y(s), but it is
faster to split the fraction (s + 5)/((s + 1)(s + 2)) directly.
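The direct split suggested in the tip can be done (or checked) with sympy's partial fraction decomposition:

```python
import sympy as sp

s = sp.symbols('s')
Y = (s + 5)/((s + 1)*(s + 2))
split = sp.apart(Y)
# mathematically: 4/(s + 1) - 3/(s + 2), which inverts to 4e^{-t} - 3e^{-2t}
print(split)
```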
Section 6.3
4. Graph of y(t): (figure)
6. Graph of y(t): (figure)
8. Graph of y(t): (figure)
16. The answer can also be written as f(t) = (1/2)u₄(t)(e^{2(t−4)} − e^{−2(t−4)}).
Section 6.4
y(t) = (1/2)t + (3/2) sin(t) − (1/2)u₆(t)(t − 6 − sin(t − 6)). (figure)
c. Graph: (figure)
1. Graph: (figure)
3. Graph: (figure)
5. Graph: (figure)
6. Graph: (figure)
Section 6.6
1. Note that (f ∗ 1)(t) = ∫₀ᵗ f(τ) dτ. There is actually only one function that
13. Intermediate result: Y(s) = (s + 1)/(s² + ω²) + G(s)/(s² + ω²).
Section 7.1
3. Note: you need a dependent variable for u, for u′ and for u″.
11. Note that you need Hooke’s law: the force exerted by a spring, stretched
by an amount u w.r.t. the equilibrium position, is F = −ku, with k the
spring constant. Then you can argue as follows: the extension of spring
1 is x1 , the extension of spring 2 is x2 − x1 . Then the force on mass m1
is Fmass 1 = −k1 x1 + k2 (x2 − x1 ) + F1 (t). Looking at the picture helps in
getting the signs right! A similar argument gives the force on mass m2 .
13. Plug in (c1 x1(t) + c2 x2(t), c1 y1(t) + c2 y2(t)) and use that (x1(t), y1(t)) and
(x2(t), y2(t)) are solutions.
12. Mistake in my electronic version: the left hand side of the first equation
should be x1′.
The idea is to eliminate one of the dependent variables. E.g. assuming
that a12 is non-zero, the first equation can be rewritten as
x2 = (1/a12)(x1′ − a11 x1 − g1(t)).
This equation can then be used to eliminate x2 and x2′ (how?) in the second
equation. This ultimately gives:
(1/a12) x1″ − ((a11 + a22)/a12) x1′ + ((a11 a22 − a12 a21)/a12) x1 = (g1′(t) − a22 g1(t))/a12 + g2(t).
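The elimination is routine but error-prone; sympy confirms that substituting x2 from the first equation into the second reproduces exactly the second-order equation for x1:

```python
import sympy as sp

t = sp.symbols('t')
a11, a12, a21, a22 = sp.symbols('a11 a12 a21 a22', nonzero=True)
x1 = sp.Function('x1')(t)
g1, g2 = sp.Function('g1')(t), sp.Function('g2')(t)
# x2 eliminated via the first equation x1' = a11*x1 + a12*x2 + g1
x2 = (sp.diff(x1, t) - a11*x1 - g1)/a12
# residual of the second equation x2' = a21*x1 + a22*x2 + g2
res = sp.diff(x2, t) - a21*x1 - a22*x2 - g2
claimed = (sp.diff(x1, t, 2)/a12 - (a11 + a22)/a12*sp.diff(x1, t)
           + (a11*a22 - a12*a21)/a12*x1
           - (sp.diff(g1, t) - a22*g1)/a12 - g2)
print(sp.simplify(res - claimed))  # 0
```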
For the boundary conditions we get:
Invert the matrix, and use the fact that q1 and q2 cannot be negative.
Section 7.4
8. The hardest part of this exercise is keeping track of all the notation. First
recall that for a second order homogeneous linear DE the Wronskian of
two solutions y (1) (t), y (2) (t) is defined as
W[y⁽¹⁾, y⁽²⁾](t) = det [ y⁽¹⁾(t)          y⁽²⁾(t)
                        (d/dt)y⁽¹⁾(t)    (d/dt)y⁽²⁾(t) ].
You find the general solution of the inhomogeneous equation by adding x⁽ᵖ⁾ to
the solutions of the homogeneous equation. If x⁽ᶜ⁾ is a solution of the
homogeneous equation, so (x⁽ᶜ⁾)′ = P(t)x⁽ᶜ⁾, then x = x⁽ᶜ⁾ + x⁽ᵖ⁾ is again a
solution of the inhomogeneous equation.
Section 7.5
(phase portraits: figures)
17. Eigenvector directions in green and red, the trajectory through (2, 3) in
orange. (figure)
The orange curve is given by the solution
(x1(t), x2(t)) = ((1/4)e^{−t} + (7/4)e^{2t}, −(1/2)e^{−t} + (7/2)e^{2t}).
Here is a plot of the component functions w.r.t. t (blue is x1, orange is x2): (figure)
From the phase portrait you see that x1(t) → +∞ both for t → ∞ and for
t → −∞, and that x2(t) → +∞ for t → ∞ and x2(t) → −∞ for
t → −∞. You see the same behavior in the graphs above.
Section 7.6
Remark: the general solution of a linear system can be written in many different
ways. In particular in the case of non-real eigenvalues, equivalent solutions
might look very different. So if the answer in the book looks different from your
answer, it is not necessarily incorrect.
1. Phase diagram:
(figure)
2. Phase diagram: (figure)
3. Phase diagram: (figure)
The solution of the initial value problem with A = [ −1  −5 ; 1  −3 ] would be:
x(t) = e^{−2t} ( 2 cos(2t) − (3/2) sin(2t), cos(2t) + (1/2) sin(2t) ).
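This solution is easy to verify symbolically; the initial value (2, 1) below is read off from the formula at t = 0, since the IVP itself is not quoted in the notes:

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[-1, -5], [1, -3]])
x = sp.exp(-2*t)*sp.Matrix([2*sp.cos(2*t) - sp.Rational(3, 2)*sp.sin(2*t),
                            sp.cos(2*t) + sp.Rational(1, 2)*sp.sin(2*t)])
print(sp.simplify(sp.diff(x, t) - A*x))  # zero vector: x solves x' = A x
print(x.subs(t, 0))                      # the implied initial value (2, 1)
```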
Phase diagram for A = [ −1  −5 ; 1  −3 ]: (figure)
12. Asymptotically stable spiral for α > 49/24, asymptotically stable node for
2 < α < 49/24, saddle point for α < 2.
Phase portrait for α = 1.5 (saddle): (figure)
Section 7.7
1. a. Intermediate results: eigenvalues are ±1. Eigenspaces: E_{r=−1} =
span{(−1, 1)} and E_{r=1} = span{(−3, 1)}. Recall that a fundamental
matrix is a matrix of which the columns form a fundamental set of
solutions.
b. Sign mistake in the book. It should be:
Φ(t) = (1/2) [ −e^{−t} + 3e^t    −3e^{−t} + 3e^t
               e^{−t} − e^t       3e^{−t} − e^t ].
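The corrected Φ(t) can be checked against the defining properties of the standard fundamental matrix. The matrix A itself is not printed in these notes; the one below is reconstructed from the eigendata (eigenvalues ±1 with eigenvectors (−1, 1) and (−3, 1)), so it is an assumption:

```python
import sympy as sp

t = sp.symbols('t')
# A reconstructed from the eigendata above (assumption, not quoted in the notes)
A = sp.Matrix([[2, 3], [-1, -2]])
Phi = sp.Rational(1, 2)*sp.Matrix([
    [-sp.exp(-t) + 3*sp.exp(t), -3*sp.exp(-t) + 3*sp.exp(t)],
    [sp.exp(-t) - sp.exp(t), 3*sp.exp(-t) - sp.exp(t)]])
print(sp.simplify(sp.diff(Phi, t) - A*Phi))  # zero matrix: Phi' = A*Phi
print(Phi.subs(t, 0))                        # identity: standard fundamental matrix
```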
4. Eigenspaces: E_{r=−1} = span{(−1, 2)} and E_{r=2} = span{(−2, 1)}.
6. Eigenspaces: E_{r=−1+2i} = span{(1, 2i)}, and the complex conjugate for the other
eigenvalue.
9. The matrix in this exercise should be the same matrix as in 1. In that
case you simply have:
x(t) = Φ(t)x(0) = (1/2) ( −11e^{−t} + 15e^t, 11e^{−t} − 5e^t ).
However, in my version of the book, the matrix here is the transpose of
the matrix A in ex. 1. In that case, you can also find the standard
fundamental matrix more easily:
e^{Aᵀt} = Σ_{n=0}^∞ (1/n!)(Aᵀ)ⁿtⁿ = ( Σ_{n=0}^∞ (1/n!)Aⁿtⁿ )ᵀ = Φ(t)ᵀ.
This works because of the rules of calculation for the transpose: (A +
B)ᵀ = Aᵀ + Bᵀ, (cA)ᵀ = c(Aᵀ) and (Aⁿ)ᵀ = (Aᵀ)ⁿ. Hence if you
want to solve the system that is actually given, you can simply use the
transpose of the standard fundamental matrix of exercise 1.
12. a. Keep s fixed and write Z1 (t) = Φ(t)Φ(s) and Z2 (t) = Φ(t + s). Using
that Φ0 = AΦ, show that on the one hand:
13. Use the series definition of e^A. Use that for a diagonal matrix we have:
diag(a₁, . . . , aₙ)ᵏ = diag(a₁ᵏ, . . . , aₙᵏ).
Section 7.8
General remark: a generalized eigenvector is not unique. More generally, if ξ
is an eigenvector with eigenvalue r and η is a generalized eigenvector with the
same eigenvalue, then η + c ξ is also a generalized eigenvector with the same
eigenvalue, for any choice of c.
3. Phase portrait: all trajectories are parallel to the line spanned by (−2, 1)
(in red). All trajectories below that line are directed to the left, all above
to the right. (figure)
4. Phase portrait: (figure)
6. Answer in electronic version incorrect. Should be x(t) = (3 + 5t, 4 + 15t).
Trajectory and x1-vs-t plot show straight lines.
7. Trajectory in phase portrait: (figure)
x1 vs t: (figure)
14. a. Eigenvalues are r₁,₂ = −1/(2RC) ± (1/2)√(1/(R²C²) − 4/(LC)) = (−1 ± √(1 − 4R²C/L))/(2RC).
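The algebraic rewriting of the eigenvalues in 14a can be verified by squaring, which avoids branch issues with the square root:

```python
import sympy as sp

R, C, L = sp.symbols('R C L', positive=True)
r = -1/(2*R*C) + sp.sqrt(1/(R*C)**2 - 4/(L*C))/2
# claim: r = (-1 + sqrt(1 - 4*R**2*C/L)) / (2*R*C); equivalently
# (2*R*C*r + 1)**2 should equal 1 - 4*R**2*C/L
print(sp.simplify((2*R*C*r + 1)**2 - (1 - 4*R**2*C/L)))  # 0
```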
20. b. An inductive argument is not necessary; it is sufficient if the pattern is clear.
c. Use the series:
e^{Jt} = Σ_{n=0}^∞ (1/n!) Jⁿtⁿ = Σ_{n=0}^∞ (1/n!) [ λⁿ   nλⁿ⁻¹
                                                   0    λⁿ   ] tⁿ.
The non-zero off-diagonal element becomes:
Σ_{n=0}^∞ (n/n!) λⁿ⁻¹tⁿ = t Σ_{n=1}^∞ (1/(n − 1)!) λⁿ⁻¹tⁿ⁻¹ = t Σ_{k=0}^∞ (1/k!) λᵏtᵏ = te^{λt}.
In the second step we set the start index to n = 1, since the first
term is 0 anyway. In the next step we shift by setting k = n − 1. We
find:
e^{Jt} = [ e^{λt}   te^{λt}
           0        e^{λt} ].
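The result can be checked without the series: e^{Jt} is the unique matrix solution of X′ = JX with X(0) = I, and the claimed matrix satisfies both conditions:

```python
import sympy as sp

t, lam = sp.symbols('t lambda')
J = sp.Matrix([[lam, 1], [0, lam]])
E = sp.Matrix([[sp.exp(lam*t), t*sp.exp(lam*t)],
               [0, sp.exp(lam*t)]])
print(sp.simplify(sp.diff(E, t) - J*E))  # zero matrix
print(E.subs(t, 0))                      # identity
```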
Section 7.9
c(t) = ( t + (1/2) cos(2t), t + (1/2) sin(2t) ).
– This gives: x_p(t) = Ψ(t)c(t) = . . .
– What is the general solution?
4. Typo in the electronic version: the inhomogeneous term should be (t⁻¹, 2t⁻¹ + 2).
– Eigenvalues are r = 0 and r = −5. Eigenspaces: E_{r=0} = span{(1, 2)}
and E_{r=−5} = span{(−2, 1)}.
– A fundamental matrix could be:
Ψ(t) = [ 1   −2e^{−5t}
         2    e^{−5t}  ].
– We then have:
Ψ(t)⁻¹ = (1/5) [ 1         2
                 −2e^{5t}   e^{5t} ].
– Following the standard procedure, we have:
c′(t) = Ψ(t)⁻¹ g(t) = ( t⁻¹ + 4/5, (2/5)e^{5t} ).
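The inverse and the resulting c′(t) can be reproduced in a few lines; Ψ and the inhomogeneous term (t⁻¹, 2t⁻¹ + 2) are as reconstructed above, which involves some guesswork about the garbled original:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
Psi = sp.Matrix([[1, -2*sp.exp(-5*t)], [2, sp.exp(-5*t)]])
g = sp.Matrix([1/t, 2/t + 2])          # inhomogeneous term (t^-1, 2t^-1 + 2)
cprime = sp.simplify(Psi.inv()*g)
# mathematically: (1/t + 4/5, (2/5)*exp(5*t))
print(cprime)
```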
– We then have:
Ψ(t)⁻¹ = (1/2) [ e^t       3e^t
                 −e^{−t}   −e^{−t} ].
– Following the standard procedure, we have:
c′(t) = Ψ(t)⁻¹ g(t) = ( −e^{2t}, 0 ).
11b. You can use Ψ(t) = e^{−t/2} [ cos(t/2)     sin(t/2)
                                  4 sin(t/2)   −4 cos(t/2) ].
Section 9.1
2. Phase portrait: (figure)
3. Phase portrait: (figure)
4. Phase portrait: (figure)
5. Phase portrait: (figure)
6. Phase portrait: (figure)
7. Phase portrait: (figure)
8. Phase portrait: (figure)
14. a. Note that zero eigenvalue means that det(A − rI) = 0 for r = 0.
b. Since 0 is an eigenvalue, there is a non-zero vector ξ such that Aξ = 0.
It follows that A(cξ) = 0, for all c ∈ R. So all points on the line
spanned by ξ are critical.
c. The general solution looks like x(t) = c1 ξ⁽¹⁾ + c2 ξ⁽²⁾e^{r₂t}. If we write
s = e^{r₂t}, you see that the right hand side becomes c1 ξ⁽¹⁾ + c2 ξ⁽²⁾s
(with s > 0). This is a parameterization of a line through c1 ξ⁽¹⁾ with
direction c2 ξ⁽²⁾. If c2 = 0 we get an equilibrium solution. If c2 ≠ 0,
the trajectory is a half line (half, since s = e^{r₂t} > 0).
Section 9.2
5. Jacobi matrix: J(x, y) = [ 0    2
                             −10x  0 ]. What are the eigenvalues at the
two critical points?
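As a hint towards that question, the eigenvalues of this Jacobi matrix can be computed symbolically as a function of x:

```python
import sympy as sp

x = sp.symbols('x')
J = sp.Matrix([[0, 2], [-10*x, 0]])
vals = list(J.eigenvals())
# characteristic polynomial gives lambda^2 = -20*x:
# purely imaginary eigenvalues for x > 0, real ones of opposite sign for x < 0
assert all(sp.simplify(v**2 + 20*x) == 0 for v in vals)
print("eigenvalues satisfy lambda^2 = -20*x")
```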
Phase portrait (critical points in blue):
(figure)
9. Jacobi matrix: J(x, y) = [ y − 3 − 2x   x + 3
                             y(1 − 2x)    2 + x − x² ]. Eigenvalues:
– At (0, 0): r = −3 or r = 2;
– At (−3, 0): r = −10 or r = 3;
– At (−1, −1): r = −1 ± i√5;
– At (2, 2): r = −5/2 ± (i/2)√95.
Phase portrait (critical points in blue, separatrices in red): (figure)
15. Phase portrait: (figure)
18. We have dy/dx = (y − 2xy)/(−x + y + x²). This is not separable, but it can be written as an
exact DE!
Phase portrait: (figure)
Section 9.3
Note: on the exam I will not ask you to show that a system is locally linear.
Formally proving this requires calculating types of limits that go beyond the
scope of this course.
1. Linearized system: (u′, v′)ᵀ = [ −1   1 ; −4  −1 ] (u, v)ᵀ.
2. Linearized system: (u′, v′)ᵀ = [  0   1 ; −4   0 ] (u, v)ᵀ.
3. Linearized system: (u′, v′)ᵀ = [  1   0 ;  1   1 ] (u, v)ᵀ.
Section 10.1
3. Note: sin(2L) = 0 ⇔ L = (1/2)nπ, with n integer.
7. The general solution of the DE (so without the boundary conditions) is
y(x) = c1 cos(2x) + c2 sin(2x) + (1/3) cos(x).
For 14, 16 and 18: The eigenvalues in this context are the values for λ for
which the boundary value problem has at least one non-trivial solution.
The eigenfunctions are the non-trivial solutions for those values of λ.
14. Tip: distinguish the cases λ < 0, λ = 0 and λ > 0. The eigenvalues
as given in the book can also be written as λₙ = (1/2 + n)², with n =
0, 1, 2, . . ..
16. Tip: distinguish the cases λ < 0, λ = 0 and λ > 0.
18. Note: this is an Euler equation! Tip: distinguish the cases λ < 1, λ = 1
and λ > 1.
Section 10.2
13. Sketch: (figure; x-axis marked −L, L, 3L, 5L)
15. Sketch: (figure; x-axis marked −π, π, 3π, 5π)
Note: the Fourier series can also be written as, for example,
−(3π/4) + 3 Σ_{n=1}^∞ [ ((1 − (−1)ⁿ)/(πn²)) cos(nx) − ((−1)ⁿ/n) sin(nx) ].
Sketch: (figure; x-axis marked −1, 1, 3, 5)
Note: the Fourier series can also be written as
1/2 + 2 Σ_{n=1}^∞ ((1 − (−1)ⁿ)/(n²π²)) cos(nπx).
Section 10.4
17. Sketch (at jump points, a Fourier series converges to the average of the
left and right limits): (figure)
19. Sketch: (figure)
22. Incorrect answer in the book. The Fourier series should be:
Σ_{n=1}^∞ [ −(2/(nπ)) cos(nπ) + (4/(n²π²)) sin((1/2)nπ) ] sin((1/2)nπx).
Sketch (at jump points, a Fourier series converges to the average of the
left and right limits): (figure)
Section 10.5
8. Note: you do not have to calculate any integral to obtain the Fourier
“series”.
12. Tip: split the integral for the Fourier coefficients into two parts.
18. See table 10.5.1 for the material properties.
22. Some intermediate steps and remarks:
– The DE can be written as:
R″/R + (1/r)(R′/R) + (1/r²)(Θ″/Θ) = (1/α²)(T′/T).
– Both sides have to be constant (why?). The book chooses −µ² as the
constant on both sides. Again, it turns out that the square is convenient
(it has to do with the required periodicity of Θ), but you do not
have to consider it here.
Section 10.6
11. Note: the initial function u(x, 0) is a sine, but you need to write it
as a cosine series. For this you have to evaluate integrals of the form
∫ sin(ax) cos(bx) dx. This can be done in various ways, e.g. by the "supertrick"
of integrating by parts twice, which gives back the same integral,
which you can then move to the other side of the equation to evaluate it.
15.b. This situation (one insulated end, one end at constant temperature) has not
been treated in class. There is a trick though: consider a rod of length
2L in which the temperature is 0 at the endpoints and symmetric w.r.t. the
mid-point. Then a differentiable temperature distribution must satisfy
ux(L, t) = 0, by symmetry. We can apply the theory of 2 endpoints with
0 temperature to this long rod. In particular, we can find the Fourier sine
series by looking at the odd 4L-periodic extension of the temperature in
the long rod.
Since only the odd sines are symmetric on the interval [0, 2L], only they
will contribute to the Fourier series.
Section 10.7