
THE SEPARATION OF VARIABLES METHOD FOR SECOND ORDER

LINEAR PARTIAL DIFFERENTIAL EQUATIONS

A Thesis

Presented to

The Faculty of the Department of Mathematics

California State University, Los Angeles

In Partial Fulfillment

of the Requirements for the Degree

Master of Science

in

Mathematics

By

Jorge Dimas Granados Del Cid

June 2016
© 2016

Jorge Dimas Granados Del Cid

ALL RIGHTS RESERVED
The thesis of Jorge Dimas Granados Del Cid is approved.

Borislava Gutarts, Ph.D., Committee Chair

Anthony Shaheen, Ph.D.

Kristin Webster, Ph.D.

Grant Fraser, Ph.D., Department Chair

California State University, Los Angeles

June 2016

ABSTRACT

The Separation of Variables Method for Second Order

Linear Partial Differential Equations

By

Jorge Dimas Granados Del Cid

This thesis provides an overview of various partial differential equations, including their applications, classifications, and methods of solving them. We show the reduction (change of variables process) of an elliptic equation to the Laplace equation (with lower order terms), as well as other cases. We derive the solutions of some partial differential equations of 2nd order using the method of separation of variables.

The derivation includes various boundary conditions: Dirichlet, Neumann, mixed, periodic and Robin. A discussion of the eigenvalues related to various boundary conditions is provided. A discussion of Fourier series, as they apply to computing the coefficients of the series solutions, is included. The thesis concludes with a presentation of open problems related to the topic.
TABLE OF CONTENTS

Abstract

List of Figures

Chapter

1. Introduction

2. Reduction to Canonical Form
   2.1. Chain rule with respect to change of variables
   2.2. Hyperbolic reduction
   2.3. 2nd Hyperbolic Case
   2.4. Parabolic Reduction
   2.5. Elliptic Canonical Reduction

3. Separation Of Variables
   3.1. Examples of second-order PDEs and Boundary Conditions
   3.2. The characteristic polynomial
   3.3. Dirichlet boundary conditions
   3.4. The Heat Equation with Dirichlet Boundaries
   3.5. Neumann Boundary Conditions
   3.6. The Robin boundary conditions

4. Eigenvalues
   4.1. Eigenvalues: Dirichlet Boundary Conditions
   4.2. Eigenvalues: Neumann Boundary Conditions
   4.3. Eigenvalues: The Robin's Boundary Conditions

5. Coefficients
   5.1. Coefficients in the Case of Dirichlet Boundary Conditions
   5.2. Coefficients in the Case of Neumann Boundary Conditions
LIST OF FIGURES

Figure

1. Chain rule

2. sin(nπx/l), n = 1, 2, 3, 4, 0 < x < l

3. cosh γl ≠ 0

4. a0 > 0, al > 0, eigenvalues as intersections

5. a0 < 0, al > 0 and a0 + al > 0, tan βl = (−a0 + al) β / (β² + al a0)

6. graph of −β/a = tan βl, where a > 0

7. graph of −β/a = tan βl, where a < 0

8. a0 = al = 1.9, l = π, β1 ≈ 0.758263778

9. for a0 > 0, al > 0, no negative eigenvalue

10. for a0 < 0, al > 0, a0 + al > 0, no negative eigenvalue

11. for a0 < 0, a0 + al < −a0 al l, one negative eigenvalue where the functions intersect
CHAPTER 1

Introduction

A partial differential equation (PDE) is an equation relating an unknown multivariable function and its partial derivatives.

Definition 1.1. A partial derivative is the derivative with respect to one variable of

a function of several variables, with the remaining variables treated as constants.

For instance, the partial derivative of u with respect to x is

lim(h→0) [u(x + h, y, z, . . .) − u(x, y, z, . . .)] / h.
These partial derivatives of u, with respect to independent variables such as x, y, t, . . . , are written as

ux, uxx, uxy, uxxx, . . .    (1.0.1)

or

∂u/∂x, ∂²u/∂x², ∂²u/(∂y∂x), ∂³u/∂x³, . . . ;

we write

uxy = (ux)y = ∂²u/(∂y∂x),

to indicate that the partial with respect to x is taken first.

If we have a function f of one variable, say x, then the only partial derivative, ∂f/∂x, is just the derivative f′(x); equations involving functions of one variable and their derivatives are called ordinary differential equations (ODEs).

Some examples of PDEs are

ux + t utt = t²    (1.0.2a)

ut − k² uxx = cos t    (1.0.2b)

utt − c² uxx + f(x, t) = 0    (1.0.2c)

ut + u ux + uxxx = 0    (1.0.2d)

uxx(x, y) + uyy(x, y) = 0,    (1.0.2e)

utt(x, t) = uxx(x, t) − u³(x, t),    (1.0.2f)

ut(x, t) + (x² + t²) ux(x, t) = 0.    (1.0.2g)

Definition 1.2. The order of an ODE or a PDE is the maximal number of derivatives (or partial derivatives, respectively) taken of the unknown function with respect to the independent variable(s).

Equation (1.0.2a) is of second order because u has been differentiated twice with respect to t; equation (1.0.2d) has three x's as subscripts, indicating a PDE of third order; and in (1.0.2g) each u has been differentiated just once, so it is a PDE of first order.

We now define what it means for a differential operator L to be linear.

Definition 1.3. L is linear if for any functions u and v and any constant c we have

L(u + v) = Lu + Lv  and  L(cu) = cLu.

We show examples of linearity and non-linearity of the following PDEs in a

slightly different way; we examine the factorization of the differential operator L and

u in the following equations,

ux + uy = 0    transport    (1.0.3a)

ux − y uy = 0    transport    (1.0.3b)

ux + u uy = 0    shock wave    (1.0.3c)

We will write the left-hand sides of these equations in operator form; for example, Lu = ux + uy. If we can factor out

u completely from a differential operator L; that is, separate L from u with no u in

L, then L is linear. We show this with equations (1.0.3a), (1.0.3b) and (1.0.3c):

Lu = ux + uy = (∂/∂x + ∂/∂y) u    (1.0.4a)

Lu = ux − y uy = (∂/∂x − y ∂/∂y) u    (1.0.4b)

Lu = ux + u uy = (∂/∂x + u ∂/∂y) u    (1.0.4c)

The differential operator in (1.0.4a) is linear because there is no u in it; that is, no u in the differential operator (∂/∂x + ∂/∂y). In (1.0.4b), the coefficient y in the differential operator makes no difference, so ux − y uy is linear. Equation (1.0.4c) is not linear because there is a u left in the differential operator after factoring.

Here are some operations that produce nonlinear operators: ux², u³, uxy⁴, . . . , √uxx, ln(u), sin(u), cos(u), etc.

Using the definition of linearity, we show that Lu = ux + uy is linear. For functions u and v and a constant c, we have

L(u + v) = (u + v)x + (u + v)y

= ux + vx + uy + vy

= (ux + uy ) + (vx + vy )

= Lu + Lv,

and

L(cu) = (cu)x + (cu)y = cux + cuy = c(ux + uy ) = cL(u)

this proves linearity of Lu = ux + uy .

Consider the nonlinear differential expression Lu = ux + uy + 1; then for functions u and v,
L(u + v) = (u + v)x + (u + v)y + 1

= ux + vx + uy + vy + 1

but
Lu + Lv = ux + uy + 1 + vx + vy + 1

= ux + vx + uy + vy + 2

this means

L(u + v) ≠ Lu + Lv,

hence, the operator L is nonlinear.

However, given

ut − uxx + 1 = 0

we can move the constant 1 to the right side

ut − uxx = −1,

and think of it as Lu = −1; the left side is linear (u can be factored from the differential operator L, or we can use the definition of linearity above), so we have a linear equation.

Definition 1.4. Suppose L is a linear operator and consider the equation Lu = g, where g is a function of only the independent variables x, y, z, . . . (u, as in ux, is not independent). The equation Lu = g is called inhomogeneous if g ≠ 0 and homogeneous if g = 0.

Examples of inhomogeneous (homogeneous) equations are

cos(xy²) ux − y² uy = tan(x² + y²)    (1.0.5a)

utt − c² uxx + f(x, t) = 0, where f(x, t) ≠ 0    (1.0.5b)

Both equations, (1.0.5a) and (1.0.5b), are inhomogeneous, while

uxx (x, y) + uyy (x, y) = 0

is a homogeneous equation.

CHAPTER 2

Reduction to Canonical Form

Figure 1: Chain rule

This chapter is dedicated to reducing three types of PDEs to their simplest

possible forms, called canonical forms. The most general second-order linear partial differential equation (PDE) in two independent variables is given by

Auxx + Buxy + Cuyy + Dux + Euy + F u = G (2.0.1)

where the coefficients A, B, and C are functions of x and y and do not vanish

simultaneously...[1, p 57].

The second-order PDE (2.0.1) is classified by way of the discriminant

B² − 4AC

in the following definition.

Definition 2.1. The second order linear PDE (2.0.1) is called

hyperbolic, if B² − 4AC > 0,

parabolic, if B² − 4AC = 0,

elliptic, if B² − 4AC < 0.

Example 2.2.

uxx(x, y) + uyy(x, y) = 0    (2.0.2)

b uxx(x, y) − uyy(x, y) = 0    (2.0.3)

ut − γ uxx = 0    (2.0.4)

Equation (2.0.2) has coefficients A = 1, B = 0 and C = 1; this gives B² − 4AC < 0, therefore it is of the elliptic PDE type. Equation (2.0.3), with A = b > 0, B = 0 and C = −1, has discriminant B² − 4AC = 4b > 0, indicating a hyperbolic PDE type. And (2.0.4) is of the parabolic type, because B = 0, C = 0, and the constant γ > 0, so the discriminant B² − 4AC = 0.
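As a quick check of Definition 2.1 (an added sketch, not part of the thesis; the function name and example values are my own), the following short Python function classifies a constant-coefficient second-order PDE by the sign of its discriminant:

    def classify(A, B, C):
        """Classify A*u_xx + B*u_xy + C*u_yy + (lower-order terms) by its discriminant."""
        disc = B**2 - 4*A*C
        if disc > 0:
            return "hyperbolic"
        if disc == 0:
            return "parabolic"
        return "elliptic"

    print(classify(1, 0, 1))    # (2.0.2): elliptic
    print(classify(1, 0, -1))   # (2.0.3) with b = 1: hyperbolic
    print(classify(-1, 0, 0))   # (2.0.4) with y = t, A = -gamma (gamma = 1): parabolic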

The second-order PDE, Auxx + Buxy + Cuyy + Dux + Euy + F u = G, may

be of one type at a set of points, and another type at some other points; this may

happen if the coefficients contain independent variables such as x, y, . . . [2, p.2].

Theorem 2.3. By a linear transformation of the independent variables, the equation

Auxx + Buxy + Cuyy + Dux + Euy + F u = G (2.0.1)

can be reduced (transformed) to one of the three (canonical) forms:

Hyperbolic case: if B² − 4AC > 0, it is reducible to

uξξ − uηη + · · · = 0;

Parabolic case: if B² − 4AC = 0, it is reducible to

uξξ + · · · = 0;

Elliptic case: if B² − 4AC < 0, it is reducible to

uξξ + uηη + · · · = 0.

The dots represent the terms involving u and its first partial derivatives ux and uy
only. We will use the change of variables

ξ = ξ(x, y), η = η(x, y)

and the chain rule to transform the general equation Auxx + Buxy + Cuyy + Dux +

Euy + F u = 0 into one of the canonical forms above.

The reason we refer to the equations in Theorem (2.3):

uξξ − uηη + · · · = 0,

uξξ + · · · = 0,

uξξ + uηη + · · · = 0

as canonical forms is that they correspond to particularly simple choices

of the coefficients of the second partial derivatives of u. . . .

[1, p. 58].

2.1 Chain rule with respect to change of variables

Reduction of Auxx + Buxy + Cuyy + Dux + Euy + F u = G to a canonical

form starts by stating the partial derivatives of u with respect to x and y in terms of

partials of ξ and η. v will be used on the right side when applying the chain rule to

u(x, y) = v(ξ(x, y), η(x, y)) to compute the second-order partial derivatives uxx, uxy, and uyy:

ux = vξ ξx + vη ηx

(ux)x = (vξ ξx)x + (vη ηx)x

uxx = vξξ ξx² + vξ ξxx + vξη ξx ηx + vηξ ηx ξx + vηη ηx² + vη ηxx

uy = vξ ξy + vη ηy

(uy)y = (vξ ξy)y + (vη ηy)y

uyy = vξξ ξy² + vξ ξyy + vξη ξy ηy + vηξ ηy ξy + vηη ηy² + vη ηyy

(ux)y = (vξ ξx)y + (vη ηx)y

uxy = vξξ ξx ξy + vξ ξxy + vξη ξx ηy + vηξ ηx ξy + vη ηxy + vηη ηx ηy

Absorbing the terms that involve only the first-order derivatives vξ and vη into the dots, we have

uxx = vξξ ξx² + 2vξη ξx ηx + vηη ηx² + · · · ,    (2.1.1)

uxy = vξξ ξx ξy + vξη ξx ηy + vξη ξy ηx + vηη ηx ηy + · · · ,    (2.1.2)

uyy = vξξ ξy² + 2vξη ξy ηy + vηη ηy² + · · · .    (2.1.3)

Multiplying both sides of (2.1.1), (2.1.2), (2.1.3) by A, B, and C we have

A uxx = vξξ Aξx² + 2vξη Aξx ηx + vηη Aηx² + · · · ,    (2.1.4)

B uxy = vξξ Bξx ξy + vξη Bξx ηy + vξη Bξy ηx + vηη Bηx ηy + · · · ,    (2.1.5)

C uyy = vξξ Cξy² + 2vξη Cξy ηy + vηη Cηy² + · · · .    (2.1.6)

We pick the new coefficients a, b, c by gathering the multiplicative factors accompanying vξξ, vξη, vηη in (2.1.4), (2.1.5), (2.1.6):

vξξ Aξx² + vξξ Bξx ξy + vξξ Cξy² = (Aξx² + Bξx ξy + Cξy²) vξξ = a vξξ,

2vξη Aξx ηx + B vξη (ξx ηy + ξy ηx) + 2vξη Cξy ηy = (2Aξx ηx + B(ξx ηy + ξy ηx) + 2Cξy ηy) vξη = b vξη,

vηη Aηx² + vηη Bηx ηy + vηη Cηy² = (Aηx² + Bηx ηy + Cηy²) vηη = c vηη;

consequently, from the transformation ξ = ξ(x, y), η = η(x, y) and the chain rule, we have the new coefficients

a = Aξx² + Bξx ξy + Cξy²,    (2.1.7)

b = 2Aξx ηx + B(ξx ηy + ξy ηx) + 2Cξy ηy,    (2.1.8)

c = Aηx² + Bηx ηy + Cηy².    (2.1.9)

So equation Auxx + Buxy + Cuyy + Dux + Euy + F u = G becomes

avξξ + bvξη + cvηη + · · · = 0 (2.1.10)

with the dots representing the first order terms.

The Jacobian of the change of variables, ξ = ξ(x, y) and η = η(x, y),

∂(ξ, η)/∂(x, y) = ξx ηy − ξy ηx ≠ 0,

must not vanish, because

Clearly we should confine our attention to locally one-to-one transforma-

tions whose Jacobians are different than zero....we conclude that the type

of such an [general] equation, Auxx + Buxy + Cuyy + Dux + Euy + F u = 0,

can not be altered by a real change of variables [1, p. 60].

The following identity shows that the transformation multiplies the discriminant of the general PDE Auxx + Buxy + Cuyy + Dux + Euy + Fu = 0 by a positive number, so it does not change its type:

b² − 4ac = (B² − 4AC)(ξx ηy − ξy ηx)²,    (2.1.11)

where (ξx ηy − ξy ηx)² is a positive number; hence the sign of B² − 4AC is left unchanged by the transformation of coordinates.

We prove (2.1.11) by plugging a = Aξx² + Bξx ξy + Cξy², b = 2Aξx ηx + B(ξx ηy + ξy ηx) + 2Cξy ηy and c = Aηx² + Bηx ηy + Cηy² into b² − 4ac:

b² − 4ac = (B(ξx ηy + ξy ηx) + 2Aξx ηx + 2Cξy ηy)² − 4(Aξx² + Bξx ξy + Cξy²)(Aηx² + Bηx ηy + Cηy²)

         = B²ξx²ηy² − 2B²ξx ξy ηx ηy + B²ξy²ηx² − 4ACξx²ηy² + 8ACξx ξy ηx ηy − 4ACξy²ηx²,

and expanding the right side of (2.1.11) gives

(B² − 4AC)(ξx ηy − ξy ηx)² = B²ξx²ηy² − 2B²ξx ξy ηx ηy + B²ξy²ηx² − 4ACξx²ηy² + 8ACξx ξy ηx ηy − 4ACξy²ηx².

The two sides of (2.1.11) are the same.
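The identity (2.1.11) can also be checked symbolically; the sketch below (my own verification, assuming SymPy is available) expands both sides and confirms that they agree.

    import sympy as sp

    A, B, C = sp.symbols('A B C')
    xix, xiy, etax, etay = sp.symbols('xi_x xi_y eta_x eta_y')

    a = A*xix**2 + B*xix*xiy + C*xiy**2
    b = 2*A*xix*etax + B*(xix*etay + xiy*etax) + 2*C*xiy*etay
    c = A*etax**2 + B*etax*etay + C*etay**2

    lhs = b**2 - 4*a*c
    rhs = (B**2 - 4*A*C) * (xix*etay - xiy*etax)**2
    print(sp.simplify(lhs - rhs))   # prints 0, confirming (2.1.11)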

2.2 Hyperbolic reduction

We begin the reduction of a vξξ + b vξη + c vηη + · · · = 0 to an equation in one of the canonical forms by treating the coefficients a and c as quadratic polynomials and completing the square to find their roots; this will make the coefficients a and c zero, and the reduction will result in a hyperbolic type PDE. We proceed in the following manner: take both

a = Aξx² + Bξx ξy + Cξy² = 0,    c = Aηx² + Bηx ηy + Cηy² = 0,

and divide each by ξy² and ηy² respectively, resulting in

A(ξx/ξy)² + B(ξx/ξy) + C = 0,    A(ηx/ηy)² + B(ηx/ηy) + C = 0.

We find the roots by completing the square:

A(ξx/ξy)² + B(ξx/ξy) + C = 0

(ξx/ξy)² + (B/A)(ξx/ξy) + C/A = 0

(ξx/ξy + B/(2A))² = (B/(2A))² − C/A = (B² − 4AC)/(4A²)

ξx/ξy + B/(2A) = ±(1/(2A)) √(B² − 4AC)

ξx/ξy = −B/(2A) ± (1/(2A)) √(B² − 4AC);

that is, we have the two roots

ξx/ξy = (−B ± √(B² − 4AC)) / (2A).

We pick one root for ξx/ξy and the other for ηx/ηy:

ξx/ξy = (−B − √(B² − 4AC)) / (2A),    ηx/ηy = (−B + √(B² − 4AC)) / (2A).

The total derivative of ξ along a coordinate line ξ(x, y) = constant is

dξ = ξx dx + ξy dy = 0.

The total derivative re-introduces the original variables x and y by way of dx and dy: rearranging dξ = ξx dx + ξy dy = 0 gives

dy/dx = −ξx/ξy.

Similarly, for η(x, y) = constant we have

dy/dx = −ηx/ηy.

Replacing ξx/ξy and ηx/ηy above, we get

−dy/dx = (−B − √(B² − 4AC)) / (2A),    −dy/dx = (−B + √(B² − 4AC)) / (2A).    (2.2.1)

Solving for dy in both equations we have

dy = [(B + √(B² − 4AC)) / (2A)] dx,    dy = [(B − √(B² − 4AC)) / (2A)] dx,

and integrating, making sure to add the constant of integration:

y = [(B + √(B² − 4AC)) / (2A)] x + c1,    y = [(B − √(B² − 4AC)) / (2A)] x + c2.

We solve for c1 and c2,

c1 = y − [(B + √(B² − 4AC)) / (2A)] x,    c2 = y − [(B − √(B² − 4AC)) / (2A)] x,

and the change of variables is

ξ = y − [(B + √(B² − 4AC)) / (2A)] x,    η = y − [(B − √(B² − 4AC)) / (2A)] x.

Taking partial derivatives of ξ and η with respect to x and y gives

ξx = −(B + √(B² − 4AC)) / (2A),    ηx = −(B − √(B² − 4AC)) / (2A),
ξy = 1,    ηy = 1.

The Jacobian gives

∂(ξ, η)/∂(x, y) = ξx ηy − ξy ηx = −(1/A) √(B² − 4AC) ≠ 0.

Plug these partial derivatives ξx, ξy, ηx, ηy into the coefficients a, b, and c of

a vξξ + b vξη + c vηη + · · · = 0

to get:

a = Aξx² + Bξx ξy + Cξy²
  = A [(B + √(B² − 4AC)) / (2A)]² + B [−(B + √(B² − 4AC)) / (2A)] (1) + C (1)²
  = (1/(4A)) (2B² + 2B√(B² − 4AC) − 4AC) − (1/(2A)) (B² + B√(B² − 4AC)) + C
  = −C + C
  = 0;

this gives a = 0. Now for c,

c = Aηx² + Bηx ηy + Cηy²
  = A [(B − √(B² − 4AC)) / (2A)]² + B [−(B − √(B² − 4AC)) / (2A)] (1) + C (1)²
  = (1/(4A)) (2B² − 2B√(B² − 4AC) − 4AC) − (1/(2A)) (B² − B√(B² − 4AC)) + C
  = −C + C
  = 0;

this gives c = 0. And last,

b = 2Aξx ηx + B (ξx ηy + ξy ηx) + 2Cξy ηy
  = 2A [(B + √(B² − 4AC)) / (2A)] [(B − √(B² − 4AC)) / (2A)] + B [−(B + √(B² − 4AC)) / (2A) − (B − √(B² − 4AC)) / (2A)] + 2C
  = 2C − B²/A + 2C
  = −(1/A) (B² − 4AC).

The left side of a vξξ + b vξη + c vηη + · · · = 0 is reduced to

0 · vξξ − (1/A) (B² − 4AC) vξη + 0 · vηη + · · · = −(1/A) (B² − 4AC) vξη + · · · ,

as needed. Therefore, for A ≠ 0,

vξη + · · · = 0    (2.2.2)

is in (hyperbolic type) canonical form.
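As an illustration of the formulas above (a worked example added here; it is not part of the original derivation), consider the wave equation utt − c²uxx = 0 written with y = t, so that A = −c², B = 0, C = 1 and B² − 4AC = 4c² > 0. The change of variables built from the two roots is

ξ = t − (2c / (−2c²)) x = t + x/c,    η = t − (−2c / (−2c²)) x = t − x/c,

and the equation reduces to vξη = 0, whose general solution v = F(ξ) + G(η) gives u(x, t) = F(t + x/c) + G(t − x/c), the familiar d'Alembert form of the solution.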

2.3 2nd Hyperbolic Case

We need a vξξ + b vξη + c vηη + · · · = 0 reduced to the form

vξξ − vηη + · · · = 0.

Factor

0 = vξξ − vηη = (∂/∂ξ ∂/∂ξ − ∂/∂η ∂/∂η) v = (∂/∂ξ − ∂/∂η)(∂/∂ξ + ∂/∂η) v;

this gives the general solution of vξξ − vηη = 0:

v(ξ, η) = f(ξ + η) + g(ξ − η).

Let α = ξ + η and β = ξ − η, and solve the system

ξ + η = α,
ξ − η = β

for ξ and η; then we have

ξ = (α + β)/2,    η = (α − β)/2.

Taking partial derivatives of ξ and η with respect to α and β,

ξα = 1/2 and ξβ = 1/2,
ηα = 1/2 and ηβ = −1/2.

The Jacobian gives

∂(ξ, η)/∂(α, β) = ξα ηβ − ξβ ηα = −1/2 ≠ 0.

State the partial derivatives of u with respect to α and β in terms of partial derivatives with respect to ξ and η of u(α, β) = v(ξ(α, β), η(α, β)). By the chain rule,

uα = vξ ξα + vη ηα

(uα)β = (vξ ξα)β + (vη ηα)β

uαβ = vξξ ξα ξβ + vξ ξαβ + vξη ξα ηβ + vηξ ηα ξβ + vηη ηα ηβ + vη ηαβ

    = vξξ ξα ξβ + vξη ξα ηβ + vηξ ηα ξβ + vηη ηα ηβ + · · · .

Plugging in the partial derivatives ξα = 1/2, ξβ = 1/2 and ηα = 1/2, ηβ = −1/2, we have

uαβ = vξξ (1/2)(1/2) + vξη (1/2)(−1/2) + vηξ (1/2)(1/2) + vηη (1/2)(−1/2) + · · ·

    = (1/4) vξξ + [0] − (1/4) vηη + · · · .

Since uαβ + · · · = 0 is the canonical form (2.2.2) in the coordinates α, β, multiplying by 4 gives

vξξ − vηη + · · · = 0,

a linear, hyperbolic PDE in canonical form.

2.4 Parabolic Reduction

Now we reduce a vξξ + b vξη + c vηη + · · · = 0 to a parabolic PDE in canonical form; this requires that a and b (or c and b) be zero, and B² − 4AC = 0. We set

a = Aξx² + Bξx ξy + Cξy² = 0

and divide both sides by ξy² to get

A(ξx/ξy)² + B(ξx/ξy) + C = 0.    (2.4.1)

For ξ(x, y) = constant, the total derivative is

dξ = ξx dx + ξy dy = 0,

which gives

−ξx dx = ξy dy  ⟹  −ξx/ξy = dy/dx,

and modifies (2.4.1):

A(ξx/ξy)² + B(ξx/ξy) + C = 0  becomes  A(dy/dx)² − B(dy/dx) + C = 0,

and the quadratic formula gives a single (repeated) root, since we must have B² − 4AC = 0:

dy/dx = (B ± 0)/(2A) = B/(2A).

So dy/dx = B/(2A) gives

dy = (B/(2A)) dx

y = (B/(2A)) x + c1

c1 = y − (B/(2A)) x

⇒ ξ = y − (B/(2A)) x,

and b, divided by ξy and with ξx/ξy = −B/(2A) substituted, gives

b/ξy = 2A(ξx/ξy) ηx + B((ξx/ξy) ηy + ηx) + 2Cηy
     = 2A(−B/(2A)) ηx + B(−(B/(2A)) ηy + ηx) + 2Cηy
     = −Bηx − (B²/(2A)) ηy + Bηx + 2Cηy
     = −(1/(2A)) (B² − 4AC) ηy,

which is zero because B² − 4AC = 0, no matter how η is chosen. So η can be taken to be an arbitrary function of (x, y), as long as the Jacobian does not vanish; letting η = x gives the change of variables

ξ = y − (B/(2A)) x  and  η = x,

with partial derivatives

ξx = −B/(2A) and ξy = 1,
ηx = 1 and ηy = 0,

and the Jacobian is not zero:

∂(ξ, η)/∂(x, y) = ξx ηy − ξy ηx = −1 ≠ 0.    (2.4.2)

Since the Jacobian is not singular, the identity b² − 4ac = (B² − 4AC)(ξx ηy − ξy ηx)² holds and "shows that the sign of the discriminant B² − 4AC remains invariant" [1, p. 60].

So we reduce a vξξ + b vξη + c vηη + · · · = 0 to a canonical form of the parabolic type; we take the partial derivatives of ξ and η with respect to x and y and plug them into each of the coefficients a, b, c as before,

a = Aξx² + Bξx ξy + Cξy²,
b = 2Aξx ηx + B(ξx ηy + ξy ηx) + 2Cξy ηy,
c = Aηx² + Bηx ηy + Cηy².

Then we have

a = A(−B/(2A))² + B(−B/(2A))(1) + C(1)² = C − B²/(4A) = −(1/(4A))(B² − 4AC) = 0,

b = 2A(−B/(2A))(1) + B[(−B/(2A))(0) + (1)(1)] + 2C(1)(0) = −B + B = 0,

c = A(1)² + B(1)(0) + C(0)² = A.

So the equation a vξξ + b vξη + c vηη + · · · = 0 reduces to A vηη + · · · = 0, that is,

vηη + · · · = 0,

a parabolic PDE in canonical form.
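As a quick illustration of the parabolic case (an added example, not from the original text), take uxx + 2uxy + uyy = 0, so A = 1, B = 2, C = 1 and B² − 4AC = 0. The recipe above gives ξ = y − x and η = x. Then ux = −vξ + vη, uy = vξ, and

uxx = vξξ − 2vξη + vηη,    uxy = −vξξ + vξη,    uyy = vξξ,

so uxx + 2uxy + uyy = vηη, and the equation becomes vηη = 0 in canonical form.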

2.5 Elliptic Canonical Reduction

The following is the reduction of a vξξ + b vξη + c vηη + · · · = 0 to an elliptic type PDE in canonical form, vξξ + vηη + · · · = 0. The discriminant in this case satisfies b² − 4ac < 0, and we can arrange b = 0 and c = a, that is, a − c = 0. So, with the coefficients

a = Aξx² + Bξx ξy + Cξy²,
b = 2Aξx ηx + B(ξx ηy + ξy ηx) + 2Cξy ηy,
c = Aηx² + Bηx ηy + Cηy²,

we have that

a − c = Aξx² + Bξx ξy + Cξy² − (Aηx² + Bηx ηy + Cηy²)
      = A(ξx² − ηx²) + B(ξx ξy − ηx ηy) + C(ξy² − ηy²);
this gives a "coupled" system; that is, both coordinates ξ and η show up in both equations:

A(ξx² − ηx²) + B(ξx ξy − ηx ηy) + C(ξy² − ηy²) = 0,

and  2Aξx ηx + B(ξx ηy + ξy ηx) + 2Cξy ηy = 0.

In order to "separate" ξ and η, multiply

2Aξx ηx + B(ξx ηy + ξy ηx) + 2Cξy ηy

by i and add the result to

A(ξx² − ηx²) + B(ξx ξy − ηx ηy) + C(ξy² − ηy²),

giving

A(ξx + iηx)² + B(ξx + iηx)(ξy + iηy) + C(ξy + iηy)².    (2.5.1)

Dividing (2.5.1) by (ξy + iηy)² results in

A[(ξx + iηx)/(ξy + iηy)]² + B[(ξx + iηx)/(ξy + iηy)] + C = 0,

a quadratic equation. Let φ = (ξx + iηx)/(ξy + iηy); dividing by A, we get

φ² + (B/A)φ + C/A = 0,

which gives the roots by completing the square:

(φ + B/(2A))² = (B/(2A))² − C/A = (B² − 4AC)/(4A²)

φ + B/(2A) = ±(1/(2A)) √(B² − 4AC)

φ = (−B ± √(B² − 4AC)) / (2A).

A PDE of elliptic type has B² − 4AC < 0, so we have

φ = (−B ± √(B² − 4AC)) / (2A) = (−B ± √(−1) √(4AC − B²)) / (2A) = (−B ± i √(4AC − B²)) / (2A);

we have complex roots. Let φ = αx/αy for one root and φ = βx/βy for the other; so

αx/αy = (−B + i √(4AC − B²)) / (2A),    βx/βy = (−B − i √(4AC − B²)) / (2A);

these are complex conjugates. The total derivative replaces αx/αy and βx/βy:

dα = αx dx + αy dy = 0  gives  dy/dx = −αx/αy;

similarly, for dβ = βx dx + βy dy = 0 we have dy/dx = −βx/βy.

Then we have

dy = [(B − i √(4AC − B²)) / (2A)] dx,    dy = [(B + i √(4AC − B²)) / (2A)] dx;

integrating both gives

y = [(B − i √(4AC − B²)) / (2A)] x + c1,    y = [(B + i √(4AC − B²)) / (2A)] x + c2,

and solving for c1 and c2 we get

c1 = y − [(B − i √(4AC − B²)) / (2A)] x,    c2 = y − [(B + i √(4AC − B²)) / (2A)] x.

Let c1 = α and c2 = β:

α = y − [(B − i √(4AC − B²)) / (2A)] x,    β = y − [(B + i √(4AC − B²)) / (2A)] x.    (2.5.2)

To find α and β in terms of ξ and η we again factor, this time with complex coefficients:

vξξ + vηη = (∂²/∂ξ² + ∂²/∂η²) v = (∂/∂ξ + i ∂/∂η)(∂/∂ξ − i ∂/∂η) v;

this gives the general solution

v(ξ, η) = f(ξ + iη) + g(ξ − iη).

It follows that the (complex) characteristic coordinates are

α = ξ + iη  and  β = ξ − iη;

solving for ξ and η, we have

ξ = (1/2)α + (1/2)β  and  η = (1/2)iβ − (1/2)iα.

The equations found above,

α = y − [(B − i √(4AC − B²)) / (2A)] x,    β = y − [(B + i √(4AC − B²)) / (2A)] x,

give

ξ = y − (1/(2A)) Bx  and  η = (1/(2A)) x √(4AC − B²).

Taking partial derivatives of ξ and η we have

ηy = 0,    ηx = (1/(2A)) √(4AC − B²),
ξx = −(1/(2A)) B,    ξy = 1.

The Jacobian gives

∂(ξ, η)/∂(x, y) = ξx ηy − ξy ηx = −(1/(2A)) √(4AC − B²) ≠ 0.

Now plug these into

a vξξ + b vξη + c vηη + · · · = 0.

We have a = c, and we need b equal to zero:

b = 2Aξx ηx + B(ξx ηy + ξy ηx) + 2Cξy ηy
  = 2A (−B/(2A)) (1/(2A)) √(4AC − B²) + B [(−B/(2A))(0) + (1)(1/(2A)) √(4AC − B²)] + 2C(1)(0)
  = −(B/(2A)) √(4AC − B²) + (B/(2A)) √(4AC − B²)
  = 0,

and since we set a = c, we get

0 = a vξξ + 0 · vξη + a vηη + · · · = a (vξξ + vηη) + · · · ,

which implies that

vξξ + vηη + · · · = 0

is a PDE of elliptic type in canonical form.
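As a short added check of the elliptic recipe (my own example, not in the original), take uxx + uxy + uyy = 0, so A = B = C = 1 and 4AC − B² = 3. The formulas give ξ = y − x/2 and η = (√3/2)x. Then ux = −(1/2)vξ + (√3/2)vη, uy = vξ, and

uxx = (1/4)vξξ − (√3/2)vξη + (3/4)vηη,    uxy = −(1/2)vξξ + (√3/2)vξη,    uyy = vξξ,

so uxx + uxy + uyy = (3/4)(vξξ + vηη), and the equation becomes vξξ + vηη = 0, the Laplace equation in canonical form.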

CHAPTER 3

Separation Of Variables

3.1 Examples of second-order PDEs and Boundary Conditions

Some well known examples of PDEs are

utt = c² uxx    wave equation    (3.1.1)

ut = k uxx    diffusion equation    (3.1.2)

uxx + uyy = 0    Laplace equation,    (3.1.3)

where each represents a different type: the wave equation is 'hyperbolic', the Laplace equation is elliptic, and the diffusion (heat) equation is parabolic. Some conditions have to be given for the equations above:

Because PDEs typically have so many solutions, . . . we single out one solution by imposing auxiliary conditions . . . [4, p. 20]

The following are some of the types of boundary conditions.

Definition 3.1. Three well-known boundary conditions are

(a) The Dirichlet boundary conditions

u(0, t) = g(t) and u(l, t) = h(t), where 0 < x < l,

is used when u is given at the boundary.

(b) The Neumann boundary conditions with the form

ux (0, t) = f1 (t) and ux (l, t) = f2 (t), where 0 < x < l,


is used when ∂u/∂n is given at the boundary.

(c) The Robin boundary conditions, with the form

ux − a0 u = f3(t) at x = 0,
ux + al u = f4(t) at x = l,

are used when a combination of ∂u/∂n and u is given at the boundary.

3.2 The characteristic polynomial

We solve a linear, homogeneous, second-order ODE using the characteristic polynomial method. We have the equation

aX″ + bX′ + cX = 0    (3.2.1)

where X = X(x), with a, b and c constants. We look for a solution X(x) of the form X = e^(rx); then

X′ = r e^(rx),    X″ = r² e^(rx).

Inserting these into aX″ + bX′ + cX = 0 gives

a r² e^(rx) + b r e^(rx) + c e^(rx) = 0,

and factoring out e^(rx) we have

e^(rx) (a r² + b r + c) = 0,

which gives the characteristic polynomial

a r² + b r + c = 0.    (3.2.2)

The roots of this polynomial are

r1 = (−b + √(b² − 4ac)) / (2a),    r2 = (−b − √(b² − 4ac)) / (2a).    (3.2.3)

These roots, (3.2.3), give the solution cases for the differential equation aX″ + bX′ + cX = 0 with X(x) = e^(rx):

if r1 ≠ r2 are two real roots, then X(x) = c1 e^(r1 x) + c2 e^(r2 x);

if r1 = r2, the solution has the form X(x) = c1 e^(r1 x) + c2 x e^(r1 x);

if b² − 4ac < 0 (complex roots, r1 ≠ r2), the solutions have the form

X(x) = c1 e^((κ + iγ)x) + c2 e^((κ − iγ)x),

where κ = −b/(2a) and γ = √(4ac − b²)/(2a).
By Euler's formula,

e^(iγx) = cos γx + i sin γx,

we get the following:

X(x) = c1 e^((κ + iγ)x) + c2 e^((κ − iγ)x)

     = e^(κx) [c1 e^(iγx) + c2 e^(−iγx)]

     = e^(κx) [c1 (cos γx + i sin γx) + c2 (cos γx − i sin γx)]

     = e^(κx) [(c1 + c2) cos γx + (c1 − c2) i sin γx]

     = e^(κx) [C1 cos γx + C2 sin γx],

where C1 = c1 + c2 and C2 = (c1 − c2) i  [3, Section 3.4].
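For instance (a small added example, not from the original text), for X″ − 3X′ + 2X = 0 the characteristic polynomial r² − 3r + 2 = 0 has roots r1 = 1 and r2 = 2, so X(x) = c1 e^x + c2 e^(2x); and for X″ + 4X = 0 the roots are r = ±2i (κ = 0, γ = 2), so X(x) = C1 cos 2x + C2 sin 2x, which is exactly the form used repeatedly in the sections that follow.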

3.3 Dirichlet boundary conditions

We look at the wave equation with Dirichlet boundary conditions:

utt = c2 uxx for 0 < x < l (3.3.1)

u(0, t) = 0 = u(l, t) (3.3.2)

and initial conditions


u(x, 0) = φ(x), ut (x, 0) = ψ(x). (3.3.3)

We look for a solution as a separable function of x and t, of the form

u(x, t) = X(x)T (t), (3.3.4)

Next, substitute into the wave equation to get X T″ = c² X″ T and divide by c² X T:

T″/(c² T) = X″/X = −λ,

where λ > 0 is a constant. So we get

X″ + λX = 0  and  T″ + c² λ T = 0,

where for convenience we let λ = β² with β > 0. Then we have

X″ + β² X = 0  and  T″ + c² β² T = 0.

By the characteristic method (a r² + b r + c = 0) we obtain a = 1, b = 0 and c = β²; since r1 ≠ r2 and 0² − 4β² < 0, the solution has the form

X(x) = A cos(βx) + B sin(βx),

with κ = 0 and γ = β in e^(κx)[C1 cos γx + C2 sin γx]. Similarly, for T″ + c²β²T = 0, with κ = 0 and γ = cβ, we have

T(t) = C cos(cβt) + D sin(cβt).

Using the Dirichlet condition u(0, t) = 0 at the left end, we have

0 = X(0) = A cos(β·0) + B sin(β·0) = A(1) + B(0),

which gives A = 0.

So X(x) = A cos(βx) + B sin(βx) becomes X(x) = B sin(βx), and the right boundary condition, u(l, t) = 0, says

X(l) = B sin(βl) = 0.

If B = 0, then u(x, t) = X(x)T(t) ≡ 0, which is trivial. Instead we let βl = nπ, where the nπ are the zeros (roots) of the sine function; so β = nπ/l, and

βn² = (nπ/l)² = λn,    Xn(x) = sin(nπx/l)    (n = 1, 2, 3, . . .).

For T there are no boundary conditions, so u(x, t) = X(x)T(t) becomes

un(x, t) = (Cn cos(nπct/l) + Dn sin(nπct/l)) sin(nπx/l),    (3.3.5)

where n runs over the integers from one to infinity and Cn and Dn are constants. So we have infinitely many solutions, because for each n we have a different un. By the method of superposition, we
get a sum of solutions as another solution of utt = c2 uxx for 0 < x < l and

u(0, t) = 0 = u(l, t):

u(x, t) = Σ_n (Cn cos(nπct/l) + Dn sin(nπct/l)) sin(nπx/l),    (3.3.6)

and (3.3.6) solves equations (3.3.1), (3.3.2) and (3.3.3) provided that φ(x) = Σ_n Cn sin(nπx/l) and ψ(x) = Σ_n (nπc/l) Dn sin(nπx/l) [4, p. 89].

Figure 2: sin(nπx/l) n = 1, 2, 3, 4. 0 < x < l.
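As an added sanity check (not part of the thesis; it assumes SymPy is available), the following sketch verifies symbolically that each un in (3.3.5) satisfies the wave equation and the Dirichlet boundary conditions.

    import sympy as sp

    x, t, c, l = sp.symbols('x t c l', positive=True)
    n = sp.symbols('n', integer=True, positive=True)
    C, D = sp.symbols('C D')

    # one separated solution u_n from (3.3.5)
    u = (C*sp.cos(n*sp.pi*c*t/l) + D*sp.sin(n*sp.pi*c*t/l)) * sp.sin(n*sp.pi*x/l)

    print(sp.simplify(sp.diff(u, t, 2) - c**2 * sp.diff(u, x, 2)))  # 0: wave equation holds
    print(u.subs(x, 0), sp.simplify(u.subs(x, l)))                  # 0 0: boundary conditions hold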

3.4 The Heat Equation with Dirichlet Boundaries

The diffusion problem with the Dirichlet boundary condition.

diffusion equation: ut = kuxx (0 < x < l, 0 < t < ∞) (3.4.1)

boundary condition: u(0, t) = u(l, t) = 0 (3.4.2)

initial condition: u(x, 0) = φ(x). (3.4.3)

Continuing with the separation of variables method, we have

T′/(kT) = X″/X = −λ = constant,

and, as before, the boundary conditions give, for −X″ = λX,

λn = (nπ/l)²,    Xn(x) = sin(nπx/l)    (n = 1, 2, 3, . . .).    (3.4.4)

For T (with T′/T = (d/dt) ln|T|), we have

T′/T = −β² k

(d/dt) ln|T| = −β² k

ln|Tn| = −(nπ/l)² k t + c

Tn = e^(−(nπ/l)² k t + c) ≡ An e^(−(nπ/l)² k t).

So u(x, t) = X(x)T(t) gives, by superposition,

u(x, t) = Σ_{n=1}^∞ An e^(−(nπ/l)² k t) sin(nπx/l).    (3.4.5)

Note that Tn is an exponential function; also note that we have just one initial condition, because the diffusion equation contains only one partial derivative with respect to t. Imposing the initial condition u(x, 0) = φ(x), we get

φ(x) = Σ_{n=1}^∞ An sin(nπx/l).    (3.4.6)

Example 3.2. Consider waves in a resistant medium that satisfy the problem

utt = c² uxx − r ut  for 0 < x < l,

BC: u = 0 at both ends,

IC: u(x, 0) = φ(x),  ut(x, 0) = ψ(x),

where r is a constant and 0 < r < 2πc/l. Write down the series expansion of the solution.

Solution: look for

u(x, t) = X(x) T(t),

and write

ut = X(x) T′(t),  utt = X(x) T″(t)  and  uxx = X″(x) T(t).

Writing utt − c² uxx + r ut = 0, we have

X(x) T″(t) − c² X″(x) T(t) + r X(x) T′(t) = 0,

and dividing by c² X(x) T(t) gives

T″(t)/(c² T(t)) − X″(x)/X(x) + r T′(t)/(c² T(t)) = 0;

insert −λ and separate:

T″(t)/(c² T(t)) + r T′(t)/(c² T(t)) = −λ  and  X″(x)/X(x) = −λ.

Solving for X as before:

X″(x) + λX(x) = 0 with u = 0 at both ends  ⟹  Xn = sin(nπx/l) and λn = (nπ/l)².

For T we have

(1/c²) T″(t) + (r/c²) T′(t) + λ T(t) = 0,

and multiplying both sides by c² gives

T″(t) + r T′(t) + λ c² T(t) = 0.

We use the characteristic equation µ² + rµ + λc² = 0 to find the roots for T; by the quadratic formula,

µn = (−r ± √(r² − 4c² λn)) / 2 = (−r ± √(r² − 4(nπ/l)² c²)) / 2.

Given that 0 < r < 2πc/l, we have

r < 2πc/l  ⟹  r² < (2πc/l)² = 4π²c²/l² ≤ 4π²c²n²/l²,  n = 1, 2, 3, . . . ,

so the roots are complex:

µn = −r/2 ± (i/2) √(4(nπ/l)² c² − r²) = −r/2 ± i √((nπc/l)² − r²/4).

Then

Tn(t) = e^((−r/2 ± i wn) t),  where  wn = √((nπc/l)² − r²/4),

and

Tn(t) = e^(−rt/2) e^(±i wn t) = e^(−rt/2) (cos wn t ± i sin wn t).

Taking real combinations and superposing, the series expansion of the solution is

u(x, t) = Σ_{n=1}^∞ e^(−rt/2) (Cn cos wn t + Dn sin wn t) sin(nπx/l),  where  wn = √((nπc/l)² − r²/4).

Note: r > 0, so u(x, t) stays bounded for large t.
3.5 Neumann Boundary Conditions

Separation of variables, starting with X(x) as previously found for the wave equation:

X(x) = C cos βx + D sin βx, (3.5.1)

and Neumann boundary conditions,

ux (0, t) = ux (l, t) = 0. (3.5.2)

With the left boundary condition ux(0, t) = 0, we have

0 = X′(0) = −Cβ sin(β·0) + Dβ cos(β·0) = 0 + Dβ(1);

β ≠ 0 in all cases considered here, so D = 0. This gives X(x) = C cos βx.

And for ux(l, t) = 0,

0 = X′(l) = −βC sin βl,

we have βn = nπ/l, since C = 0 would lead to the trivial solution X(x) ≡ 0. Replacing β in X(x) = C cos βx gives

Xn(x) = Cn cos(nπx/l),

and, for the diffusion equation, this gives

un(x, t) = An e^(−(nπ/l)² k t) cos(nπx/l).    (3.5.3)

For the Dirichlet boundary conditions of the last section, it can be shown that λ ≠ 0; but for Neumann boundary conditions λ can be zero, and λ = 0 adds the constant term (1/2)A0; so now (3.5.3), summed over n, looks like

u(x, t) = (1/2) A0 + Σ_{n=1}^∞ An e^(−(nπ/l)² k t) cos(nπx/l),    (3.5.4)

and zero is included as an eigenvalue:

λ = β² = (nπ/l)²  for n = 0, 1, 2, 3, . . . .    (3.5.5)

For Neumann BCs, (3.5.4) contains a cosine series instead of the sine series of the Dirichlet BCs. With Neumann conditions, the initial condition (at t = 0) now looks like

u(x, 0) = φ(x) = (1/2) A0 + Σ_{n=1}^∞ An cos(nπx/l),    (3.5.6)

the cosine series: for Neumann boundary conditions ux(0, t) = ux(l, t) = 0, the diffusion equation's only initial condition u(x, 0) = φ(x) gives

φ(x) = u(x, 0) = (1/2) A0 + Σ_{n=1}^∞ An e^(−(nπ/l)² k·0) cos(nπx/l) = (1/2) A0 + Σ_{n=1}^∞ An cos(nπx/l).

Example 3.3. Solve the diffusion problem ut = kuxx in 0 < x < l, with the mixed

boundary conditions u (0, t) = ux (l, t) = 0.

Solution: Look for

u (x, t) = X (x) T (t) ,

then X(x) T′(t) = k X″(x) T(t), and dividing by k X(x) T(t) gives

T′(t)/(k T(t)) = X″(x)/X(x) = −λ.

So X″(x) + λX(x) = 0; if λ > 0, we get

X(x) = C cos βx + D sin βx,

where β² = λ and β > 0.

The boundary condition u(0, t) = 0 implies X(0) T(t) = 0 for all t ⟹ X(0) = 0 (we do not want u ≡ 0, which would happen if T(t) ≡ 0). So

0 = X(0) = C cos(β·0) + D sin(β·0) = C + D(0)  ⟹  C = 0,

hence X(x) = D sin βx.

With ux(l, t) = 0 we have X′(l) T(t) = 0 for all t; since T(t) ≠ 0,

0 = X′(l) = βD cos(βl),

but β ≠ 0 and D ≠ 0, so

cos βl = 0  ⇒  βn l = (n + 1/2)π,  so  βn = (n + 1/2)π/l,

thus

λn = βn² = ((n + 1/2)π/l)²,  n = 0, 1, 2, . . . ,

and

Xn(x) = Dn sin((n + 1/2)πx/l).

For T,

T′(t) + kλ T(t) = 0  gives  T(t) = A e^(−kλt).
Note: there are no negative eigenvalues. For if λ < 0, we can write λ = −γ² (where WLOG γ > 0); then

X″(x) + λX(x) = 0  becomes  X″(x) − γ² X(x) = 0,

so X(x) = A cosh γx + B sinh γx, and the boundaries give:

0 = X(0) = A · 1 + B · 0  ⇒  A = 0,

so X(x) = B sinh γx, and

0 = X′(l) = Bγ cosh γl,  where B ≠ 0, γ ≠ 0, l ≠ 0,

⟹ cosh γl = 0, which is impossible: cosh x is never zero.

We have a contradiction; that is, there is no negative eigenvalue.

So Tn(t) = An e^(−k λn t) and

u(x, t) = Σ_{n=0}^∞ Bn e^(−k ((n + 1/2)π/l)² t) sin((n + 1/2)πx/l),  where Bn = An · Dn.

Figure 3: cosh γl ≠ 0
3.6 The Robin boundary conditions

X′ − a0 X = 0  at x = 0,
X′ + al X = 0  at x = l,

where a0 and al are constants. For X(x) = C cos βx + D sin βx, the Robin boundary condition at x = 0 gives

0 = X′(0) − a0 X(0)
  = −βC sin(β·0) + βD cos(β·0) − a0 (C cos(β·0) + D sin(β·0))
  = βD − a0 C,

which gives D = a0 C / β;

similarly, at x = l,

0 = X′(l) + al X(l)
  = −βC sin βl + βD cos βl + al (C cos βl + D sin βl)
  = −βC sin βl + a0 C cos βl + al C cos βl + (al a0 C / β) sin βl,

after substituting D = a0 C / β. Multiplying by β and factoring out C, we get

0 = −β² sin βl + a0 β cos βl + al β cos βl + al a0 sin βl

β² sin βl − al a0 sin βl = (a0 + al) β cos βl

(β² − al a0) sin βl = (a0 + al) β cos βl.
"Any root β > 0 of this 'algebraic' equation would give us an eigenvalue λ = β²" [4, p. 94]. If C ≠ 0 and D = a0 C / β, we get the corresponding eigenfunction

X(x) = C cos βx + (C a0 / β) sin βx.

Solving exactly for β is difficult in

(β² − al a0) sin βl = (a0 + al) β cos βl,    (3.6.1)

so we will use graphing to analyze numerical values of β. Dividing (3.6.1) by cos βl and using the identity sin βl / cos βl = tan βl, we write (3.6.1) as

tan βl = (a0 + al) β / (β² − al a0),

and look for intersections of tan βl and (a0 + al) β / (β² − al a0), as functions of β > 0.

Figure 4: a0 > 0, al > 0, eigenvalues as intersections

In Figure 4, the eigenvalues lie between consecutive zeros of tan βl; this, and the intersections, show that

n²π²/l² < βn² = λn < (n + 1)²π²/l²    (n = 0, 1, 2, 3, . . .).    (3.6.2)

Note that in Figure 4, when cos βl = 0 (for instance βl = π/2, where sin βl = 1), the equation (β² − al a0) sin βl = (a0 + al) β cos βl gives

0 = (β² − al a0) sin βl = β² − al a0  ⟹  β = √(al a0);

this is when "the tangent function and rational function 'intersect at infinity'" [4, p. 94].
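Those intersections can also be located numerically. The sketch below (my own addition; the parameter values a0 = al = 1 and l = 1 are chosen only for illustration) rewrites the condition in the pole-free form (β² − a0 al) sin βl − (a0 + al) β cos βl = 0 and brackets its roots by a sign scan followed by bisection.

    import math

    a0 = al = 1.0
    l = 1.0

    def g(b):
        # (beta^2 - a0*al) * sin(beta*l) - (a0 + al) * beta * cos(beta*l) = 0
        return (b*b - a0*al) * math.sin(b*l) - (a0 + al) * b * math.cos(b*l)

    def bisect(f, lo, hi, steps=100):
        flo = f(lo)
        for _ in range(steps):
            mid = 0.5 * (lo + hi)
            if flo * f(mid) <= 0:
                hi = mid
            else:
                lo, flo = mid, f(mid)
        return 0.5 * (lo + hi)

    # scan a grid for sign changes, then refine each bracket
    grid = [0.001 + 0.01 * k for k in range(2000)]
    roots = [bisect(g, a, b) for a, b in zip(grid, grid[1:]) if g(a) * g(b) < 0]
    print(roots[:3])   # first few beta_n; the eigenvalues are lambda_n = beta_n**2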

For the case a0 < 0, al > 0, and a0 + al > 0, the maximum occurs at √|−a0 al|; this can be shown by finding the critical point where (−a0 + al) β / (β² + a0 al) reaches its maximum (see Figure 5). We use the quotient rule for derivatives,

d/dβ [ (−a0 + al) β / (β² + a0 al) ] = [ (−a0 + al)(β² + a0 al) − (−a0 + al) β (2β) ] / (β² + a0 al)²,

and set the right side to zero (which means the numerator is zero) to get

(−a0 + al)(β² + a0 al) = (−a0 + al) 2β²
⇒ β² + a0 al = 2β²
⇒ β² = a0 al
⇒ β = √|a0 al|.
The intersections of tan βl = (−a0 + al) β / (β² + al a0) are shown in Figure 5.

Figure 5: a0 < 0, al > 0 and a0 + al > 0, tan βl = (−a0 + al) β / (β² + al a0).

Example 3.4. Find the eigenvalues graphically for the boundary conditions X(0) = 0, X′(l) + aX(l) = 0, for −X″ = λX, where λ = β². Assume that a ≠ 0.

Solution: We have

X(x) = C cos βx + D sin βx,

and 0 = X(0) = C + D(0) ⇒ C = 0, so X(x) = D sin βx.

On the other hand, X′(l) + aX(l) = 0 gives:

0 = βD cos βl + aD sin βl

⟹ −βD cos βl = aD sin βl

⟹ −β/a = sin βl / cos βl = tan βl.

The intersections of the functions −β/a and tan βl give the eigenvalues.

Case 1: a > 0, so the line y = −β/a < 0. The discontinuities of tan βl are at β = (n − 1/2)π/l and its roots are at β = nπ/l, for n = 1, 2, 3, . . . . We can see that

(n − 1/2)π/l < βn < nπ/l,

and the graph also shows that lim_{n→∞} ( βn − (n − 1/2)π/l ) = 0.
= 0.

Figure 6: graph of −β/a = tan βl, where a > 0

Case 2: a < 0 ⟹ y = −β/a > 0; so, from the graph:

Figure 7: graph of −β/a = tan βl, where a < 0

Figure 7 shows that

nπ/l < βn < (n + 1/2)π/l  and  βn − (n + 1/2)π/l → 0 as n → ∞.

So the larger eigenvalues get closer and closer to (π²/l²)(n + 1/2)².
Example 3.5. We will use Newton's method of iterations to compute the first intersection, with f(β) = (tan πβ)(β² − 3.61) − 3.8β, β1 = 1, n = 1, l = π and a0 = al = 1.9, where

βn+1 = βn − f(βn)/f′(βn) = βn − [(tan πβn)(βn² − 3.61) − 3.8βn] / [π(tan² πβn + 1)(βn² − 3.61) + (tan πβn) 2βn − 3.8].

The iterations give

β2 = 0.683321638
β3 = 0.740563595
β4 = 0.757393701
β5 = 0.758261759
β6 = 0.758263778.

Figure 8: a0 = al = 1.9, l = π, β1 ≈ 0.758263778
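The same iteration is easy to reproduce in a few lines; the sketch below (an added illustration, not part of the thesis) implements Newton's method for f(β) = (tan πβ)(β² − 3.61) − 3.8β starting from β1 = 1 and converges to the value shown in Figure 8.

    import math

    def f(b):
        return math.tan(math.pi * b) * (b**2 - 3.61) - 3.8 * b

    def fprime(b):
        # derivative of f: pi*sec^2(pi*b)*(b^2 - 3.61) + 2*b*tan(pi*b) - 3.8
        return (math.pi * (math.tan(math.pi * b)**2 + 1) * (b**2 - 3.61)
                + 2 * b * math.tan(math.pi * b) - 3.8)

    b = 1.0
    for _ in range(8):
        b = b - f(b) / fprime(b)
    print(b)   # approximately 0.758263778, matching Example 3.5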

CHAPTER 4

Eigenvalues

Definition 4.1. λ is an eigenvalue of a matrix A, if there is a vector v called eigen-

vector, such that

Av = λv (4.0.1)

where v ≠ 0.

Analogously, for the differential operator −d²/dx² we have that

X″(x) = −λX(x)  ⟹  −(d²/dx²) X(x) = λX(x),

where λ is an eigenvalue of −d²/dx² with nonzero eigenfunction X(x).

4.1 Eigenvalues: Dirichlet Boundary Conditions

The eigenvalues for Dirichlet boundary problems are all positive.

If λ = 0, then

X″ = 0
X′(x) = D
X(x) = Dx + C.

The Dirichlet BC gives 0 = X(0) = D(0) + C, which implies C = 0, so X(x) = Dx; and for X(l) = 0,

0 = X(l) = D(l).

Since l ≠ 0, D = 0; therefore zero is not an eigenvalue: for an eigenvalue λ, the eigenfunction X cannot be identically zero.

Negative Eigenvalues?

If λ were negative, we would write it as λ = −γ², where γ > 0; then

X″ = −(−γ²)X = γ²X.

The characteristic polynomial has two real roots r = ±γ, so by Section 3.2, X(x) = c1 e^(γx) + c2 e^(−γx). For the Dirichlet boundary condition at x = 0 we have

0 = X(0) = c1 e^(γ·0) + c2 e^(−γ·0) = c1 + c2,

which gives c2 = −c1, so X(x) = c1 e^(γx) − c1 e^(−γx). At x = l,

0 = X(l) = c1 (e^(γl) − e^(−γl));

for a nonzero eigenfunction c1 ≠ 0, so e^(γl) = e^(−γl). Taking logarithms, γl = −γl, which holds only if γl = 0, a contradiction. Hence the Dirichlet boundary conditions do not allow negative eigenvalues.

4.2 Eigenvalues: Neumann Boundary Conditions

ux(0, t) = ux(l, t) = 0.

Zero Eigenvalues

If λ = 0, then −X″(x) = 0 implies X′(x) = A and X(x) = Ax + B, with A and B constants.

The left BC X′(0) = A = 0 gives A = 0, so X(x) = B and X′(x) = 0, and the right BC X′(l) = 0 is then automatically satisfied.

So X(x) = B, a nonzero constant, is an eigenfunction; therefore, for Neumann boundary conditions we do have a zero eigenvalue.

Negative Eigenvalues?

Let λ be a negative eigenvalue and write λ = −γ², γ > 0; by the characteristic polynomial method

X″ = −(−γ²)X = γ²X  implies  r = ±γ.

We have two real roots, so the solution is

X(x) = c1 e^(γx) + c2 e^(−γx).

At X′(0): 0 = X′(0) = γ c1 e^(γ·0) − γ c2 e^(−γ·0) = γ (c1 − c2), which gives c1 = c2, so X(x) = c1 (e^(γx) + e^(−γx)). At X′(l):

0 = X′(l) = c1 γ (e^(γl) − e^(−γl));

with c1 ≠ 0 and γ ≠ 0 this gives e^(γl) = e^(−γl), that is, e^(2γl) = 1, so γl = 0, a contradiction. We conclude that λ = −γ² is not an eigenvalue when Neumann boundary conditions are applied to the solution u(x, t) = X(x) T(t).

4.3 Eigenvalues: The Robin’s Boundary Conditions

For −X″ = λX and

X′(0) − a0 X(0) = 0,    X′(l) + al X(l) = 0.

Zero Eigenvalues?

If λ = 0, then X(x) = C + Dx and X′(x) = D, so

X′(x) − a0 X(x) = D − a0 (C + Dx),

and 0 = X′(0) − a0 X(0) implies

0 = D − a0 (C + D(0)) = D − a0 C,

so D = a0 C and X(x) = C + (a0 C) x. Then

0 = X′(l) + al X(l) = a0 C + al (C + a0 C l) = a0 C + al C + al a0 C l = C (a0 + al + al a0 l).

If C = 0 then D = 0 and X ≡ 0, which is not an eigenfunction; so λ = 0 is an eigenvalue if and only if

a0 + al + al a0 l = 0,  that is,  a0 + al = −al a0 l.
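As a small added check of this condition (the numbers are chosen here only for illustration), take l = 1 and a0 = 1; then a0 + al = −a0 al l forces 1 + al = −al, so al = −1/2. The linear function X(x) = 1 + a0 x = 1 + x indeed satisfies both Robin conditions: X′(0) − a0 X(0) = 1 − 1 = 0 and X′(l) + al X(l) = 1 + (−1/2)(2) = 0, so λ = 0 is an eigenvalue for this choice of a0, al.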

Negative Eigenvalues?

Let λ = −γ² < 0. This gives −X″ = −γ²X, and by the characteristic polynomial method

X(x) = c1 e^(γx) + c2 e^(−γx) = A cosh γx + B sinh γx.

Then X′(0) − a0 X(0) = 0 gives

0 = Aγ sinh(γ·0) + Bγ cosh(γ·0) − a0 (A cosh(γ·0) + B sinh(γ·0)) = Bγ − a0 A,

so B = a0 A / γ and

X(x) = A cosh γx + (a0 A / γ) sinh γx.

Then

0 = X′(l) + al X(l) = Aγ sinh γl + a0 A cosh γl + al (A cosh γl + (a0 A / γ) sinh γl).

Multiplying by γ/A:

0 = γ² sinh γl + a0 γ cosh γl + al γ cosh γl + al a0 sinh γl

(γ² + al a0) sinh γl = −(a0 + al) γ cosh γl

tanh γl = −(a0 + al) γ / (γ² + al a0).

We graph both sides; if there is an intersection, then we have a negative eigenvalue. With a0 and al both positive:

Figure 9: for a0 > 0, al > 0, no negative eigenvalue

Figure 10: for a0 < 0, al > 0, a0 + al > 0, no negative eigenvalue.

Figure 11: for a0 < 0, a0 + al < −a0 al l, one negative eigenvalue where the functions intersect.
CHAPTER 5

Coefficients

5.1 Coefficients in the Case of Dirichlet Boundary Conditions

To find the coefficient Cn in (3.3.6) we start by setting (3.3.6) equal to the initial condition u(x, 0):

φ(x) = u(x, 0) = Σ_{n=1}^∞ (Cn cos(nπc·0/l) + Dn sin(nπc·0/l)) sin(nπx/l)

      = Σ_{n=1}^∞ (Cn + 0) sin(nπx/l)

      = Σ_{n=1}^∞ Cn sin(nπx/l).


This gives φ(x) = Σ_{n=1}^∞ Cn sin(nπx/l); then multiply both sides by sin(mπx/l) and integrate from 0 to l term by term:

∫_0^l φ(x) sin(mπx/l) dx = Σ_{n=1}^∞ Cn ∫_0^l sin(nπx/l) sin(mπx/l) dx.

To compute this we use the trigonometric identities

cos (a − b) = cos a cos b + sin a sin b

and cos (a + b) = cos a cos b − sin a sin b

combine cosines

cos (a − b) − cos (a + b) = cos a cos b + sin a sin b − (cos a cos b − sin a sin b)

= cos a cos b + sin a sin b − cos a cos b + sin a sin b

= 2 sin a sin b

so 2 sin a sin b = cos(a − b) − cos(a + b)

and sin a sin b = (1/2) cos(a − b) − (1/2) cos(a + b).
2 2

n−m
Now we find the coefficient; first, we integrate for m 6= n; and we will need θ1 = πx,
l
l n+m l
dθ1 = dx, similarly, θ2 = πx, dθ2 = dx
n−m l n+m

Z l
1  nπx mπx  1  nπx mπx 
cos − − cos + dx
0 2 l l 2 l l
Z l    
1 n−m 1 n+m
= cos πx − cos πx dx
0 2 l 2 l
Z l   Z l  
1 n−m 1 n+m
= cos πx dx − cos πx dx
0 2 l 0 2 l
Z l Z l
1 l 1 l
= cos (θ1 ) dθ1 − cos (θ2 ) dθ2
0 2 (n − m) π 0 2 (n + m) π

 l   l
1 l n−m 1 l n+m
= sin πx − sin πx
2 (n − m) π l 0 2 (n + m) π l 0

1 l 1 l
= (sin ((n − m) π) − (sin 0)) − (sin ((n + m) π) − (sin 0))
2 (n − m) π 2 (n + m) π

= 0

Now integrate with m fixed and m = n; we will need the substitution θ3 = 2nπx/l, dθ3 = (2nπ/l) dx, and the trigonometric identity sin²x = 1/2 − (1/2) cos 2x:

∫_0^l sin(nπx/l) sin(mπx/l) dx = ∫_0^l sin²(nπx/l) dx

= ∫_0^l (1/2) dx − (1/2) ∫_0^l cos(2nπx/l) dx

= l/2 − (1/2)(l/(2nπ)) (sin 2nπ − sin 0)

= l/2 − 0

= l/2.

So

∫_0^l sin(nπx/l) sin(mπx/l) dx = 0 if m ≠ n, and l/2 if m = n.

And

∫_0^l φ(x) sin(mπx/l) dx = Σ_{n=1}^∞ Cn ∫_0^l sin(nπx/l) sin(mπx/l) dx

= C1 ∫_0^l sin(πx/l) sin(mπx/l) dx + C2 ∫_0^l sin(2πx/l) sin(mπx/l) dx + · · ·
  + Cm ∫_0^l sin(mπx/l) sin(mπx/l) dx + Cm+1 ∫_0^l sin((m + 1)πx/l) sin(mπx/l) dx + · · ·

= 0 + 0 + · · · + Cm · (l/2) + 0 + 0 + · · · .

This gives

∫_0^l φ(x) sin(mπx/l) dx = Cm · (l/2),  or  Cm = (2/l) ∫_0^l φ(x) sin(mπx/l) dx;

this is the Fourier coefficient formula for Cm. The other initial condition, ut(x, 0) = ψ(x), for the wave equation gives

ut(x, t) = Σ_{n=1}^∞ (−Cn (nπc/l) sin(nπct/l) + Dn (nπc/l) cos(nπct/l)) sin(nπx/l)

and

ut(x, 0) = Σ_{n=1}^∞ (−Cn (nπc/l) sin 0 + Dn (nπc/l) cos 0) sin(nπx/l) = Σ_{n=1}^∞ (nπc/l) Dn sin(nπx/l),

so

ψ(x) = Σ_{n=1}^∞ (nπc/l) Dn sin(nπx/l),

and we get

Dn = (2/(nπc)) ∫_0^l ψ(x) sin(nπx/l) dx

by the same process by which we got Cn.
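The orthogonality relation that drives these formulas is easy to confirm symbolically; the short sketch below (an added check, assuming SymPy is available; the indices 2 and 3 are arbitrary) evaluates the two integrals used above.

    import sympy as sp

    x, l = sp.symbols('x l', positive=True)

    # m != n: the product integrates to zero over (0, l)
    print(sp.integrate(sp.sin(2*sp.pi*x/l) * sp.sin(3*sp.pi*x/l), (x, 0, l)))   # 0
    # m == n: the integral equals l/2
    print(sp.integrate(sp.sin(3*sp.pi*x/l)**2, (x, 0, l)))                      # l/2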

5.2 Coefficients in the Case of Neumann Boundary Conditions

To find the coefficients An, we start the series at n = 0 (to include A0) and follow the same steps as in finding the coefficients for Dirichlet boundary conditions; but now we have cos(nπx/l) instead of sin(nπx/l), so we add the cosine-difference and cosine-sum identities to get the needed trigonometric identity:

cos(a − b) + cos(a + b) = cos a cos b + sin a sin b + (cos a cos b − sin a sin b) = 2 cos a cos b,

so

cos a cos b = (1/2) cos(a − b) + (1/2) cos(a + b).
Again, to integrate term by term, we set m 6= n, and we will get 0, because


mπl
multiplying π by any integer in sin = sin πm = 0, and sin 0 = 0, when computing
l
the integral
Z l
1  nπx mπx  1  nπx mπx 
cos − + cos + dx = 0
0 2 l l 2 l l

56
and for the other integral, as we did before, we fix m and set it equal to n; plus we
Z l
2 1 cos 2x 1 cos 2x 1
use the identity cos x = + , where + dx = l + 0,
2 2 0 2 2 2
Z l Z l
nπx mπx nπx
sin sin dx = cos2 dx
0 l l 0 l
1
= l
2
2 l
Z
mπx
so we get Am ≡ φ (x) cos dx where m = 0, 1, 2, . . . .
l 0 l

For m = 0, 1, 2, 3, . . ., this is coefficient Am ’s formula for the cosine series


1 X nπx
A0 + An cos .
2 n=1
l

Example 5.1. Solve the Diffusion problem with Dirichlet boundary conditions and

initial conditions.

ut = kuxx

u (0, t) = 0 = u (l, t)

u (x, 0) = 1 = φ (x)

We know that

u(x, t) = Σ_{n=1}^∞ An e^(−(nπ/l)² k t) sin(nπx/l).

The initial condition gives

1 = u(x, 0) = Σ_{n=1}^∞ An e^(−(nπ/l)² k·0) sin(nπx/l) = Σ_{n=1}^∞ An sin(nπx/l),

and Am = (2/l) ∫_0^l φ(x) sin(mπx/l) dx is the formula to find the coefficients.

Letting θ = mπx/l, dθ = (mπ/l) dx, so dx = (l/(mπ)) dθ:

Am = (2/l) ∫_0^{mπ} sin θ (l/(mπ)) dθ

   = −(2/(mπ)) [cos θ]_0^{mπ}

   = −(2/(mπ)) cos mπ + 2/(mπ).

We have cos mπ = (−1)^m, and

−(2/(mπ)) (−1)^m + 2/(mπ) = 4/(mπ) if m is odd, and 0 if m is even,
so we get

1 = Σ_{n=1}^∞ An sin(nπx/l) = Σ_{n odd} (4/(nπ)) sin(nπx/l) = (4/π) ( sin(πx/l) + (1/3) sin(3πx/l) + (1/5) sin(5πx/l) + · · · ),

an infinite series expansion, and the solution is

u(x, t) = Σ_{n odd} (4/(nπ)) e^(−(nπ/l)² k t) sin(nπx/l).
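A quick numerical check of this expansion (an added illustration, with l = 1 and x = 1/2 chosen arbitrarily) shows the partial sums of the sine series approaching the constant initial value 1.

    import numpy as np

    l = 1.0
    x = 0.5 * l
    n = np.arange(1, 2002)                                  # n = 1, ..., 2001
    coeffs = np.where(n % 2 == 1, 4.0 / (n * np.pi), 0.0)   # 4/(n*pi) for odd n, 0 for even n
    partial_sum = np.sum(coeffs * np.sin(n * np.pi * x / l))
    print(partial_sum)   # close to 1 (the series converges to phi(x) = 1 for 0 < x < l)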

The method of separation of variables is very helpful for solving linear PDEs of 2nd order. However, it has its limitations: for problems with non-constant coefficients, or for those with non-symmetric boundary conditions, the method will generally not work, and other methods would have to be explored in those cases.
REFERENCES

[1] P. R. Garabedian: Partial Differential Equations, John Wiley & Sons, Inc., New York, 2nd edition, 1964.

[2] A. Salih: Classification of Partial Differential Equations and Canonical Forms. Internet source, 2014.

[3] C. A. Smith and S. W. Campbell: A First Course in Differential Equations: Modeling and Simulation, CRC Press, Boca Raton, FL, 2011.

[4] W. A. Strauss: Partial Differential Equations (An Introduction), John Wiley & Sons Ltd., NJ, 2008.

[5] A. Tveito and R. Winther: Introduction to Partial Differential Equations: A Computational Approach, Springer-Verlag, New York, Inc., 1998.
