Chapter 6

Partial Differential Equations

6.1 Introduction
Let R denote the real numbers and C the complex numbers, and let n-dimensional Euclidean space be denoted by R^n, with points denoted by x (or sometimes by ~x) as tuples x = (x_1, …, x_n). Often in the case of R^2 or R^3 we will use (x, y) or (x, y, z) to denote points instead of subscript notation. Occasionally, when we are dealing with dynamical problems where time is one of the variables, we will also use notations like (x, t), (~x, t), (x, y, t), or (x, y, z, t). The usual Euclidean inner product is given by ~x · ~y = Σ_{j=1}^n x_j y_j, and the norm generated by this inner product we denote by single bars just like the absolute value, i.e., |~x| = (~x · ~x)^{1/2}.
An n-tuple α = (α1 , · · · , αn ) of nonnegative integers is called a multi-index. We define

|α| = Σ_{k=1}^n α_k,  α! = α_1! ⋯ α_n!,  and for x ∈ R^n,  x^α = x_1^{α_1} ⋯ x_n^{α_n}.

We will use several notations for partial derivatives of a function u of x ∈ Rn :

u_j = ∂_j u = ∂u/∂x_j

for first order partials, and for higher order partials we have

u_α = ∂^α u = (∂_1)^{α_1} ⋯ (∂_n)^{α_n} u = ∂^{|α|} u / (∂x_1^{α_1} ⋯ ∂x_n^{α_n}).

In particular, when α = ~0 then ∂^α is the identity operator. We will agree to order the set of multi-indices α = (α_1, …, α_n) by requiring that α comes before β if |α| < |β|, or if |α| = |β| and α_i < β_i where i is the largest index with α_i ≠ β_i.


A partial differential equation of order k is an equation of the form

F(x_1, x_2, …, x_n, u, ∂_1 u, …, ∂_n u, ∂_1^2 u, …, ∂_n^k u) = 0 (6.1.1)

relating a function u of x = (x1 , · · · , xn ) ∈ Rn and its partial derivatives of order ≤ k.


Given numbers a_α with |α| ≤ k, we denote by (a_α)_{|α|≤k} the element in R^{N(k)} given by
ordering the α’s in any fashion, where N (k) is the cardinality of {α : |α| ≤ k}. Similarly, if
S ⊂ {α : |α| ≤ k} we can consider the ordered (card S)-tuple (aα )α∈S .
Now let Ω be an open set in Rn , and let F be a function of the variables x ∈ Ω and
(u_α)_{|α|≤k}. Then we can write the partial differential equation of order k as

F(x, (u_α)_{|α|≤k}) = 0. (6.1.2)

A function u is called a classical solution of this equation if ∂^α u exists for each α appearing in F, and

F(x, (u_α(x))_{|α|≤k}) = 0, for every x ∈ Ω.

We denote by C(Ω) the space of continuous functions on Ω. If Ω is open and k is a positive integer, C^k(Ω) will denote the space of functions possessing continuous derivatives up to order k on Ω, and C^k(Ω̄) will denote the space of all u ∈ C^k(Ω) such that ∂^α u extends continuously to the closure Ω̄ of Ω for all 0 ≤ |α| ≤ k. We also define

C^∞(Ω) = ∩_{k=1}^∞ C^k(Ω),  C^∞(Ω̄) = ∩_{k=1}^∞ C^k(Ω̄).

If Ω ⊂ R^n is open, a function u ∈ C^∞(Ω) is said to be analytic in Ω if it can be expanded in a convergent power series about every point of Ω. That is, u is analytic in Ω if for each x ∈ Ω there exists an r > 0 so that for all y ∈ B_r(x) = {y : |y − x| < r},

u(y) = Σ_{|α|≥0} (∂^α u(x) / α!) (y − x)^α,

with the series converging absolutely and uniformly on B_r(x). For complex valued analytic functions we will use the word holomorphic.
Our definition of partial differential equation is really too broad, since it includes equations that make no sense, such as exp(∂_1 u) = 0. It also allows us to think of what should be a kth order equation as a (k + m)th order equation for any m. In what follows we will impose various types of conditions on F that rule out such pathologies.

The equation (6.1.2) is called linear if F is an affine-linear function of the variables (u_α)_{|α|≤k}. This means that we can write

Σ_{|α|≤k} a_α(x) ∂^α u = f(x). (6.1.3)
In this case we define the differential operator L = Σ_{|α|≤k} a_α(x) ∂^α and write Lu = f. More generally, we have the quasi-linear equations, which have the form

Σ_{|α|≤k} a_α(x, (∂^β u)_{|β|≤k−1}) ∂^α u = b(x, (∂^β u)_{|β|≤k−1}). (6.1.4)

Thus the partial differential equation is linear if u and all of its derivatives appear in a linear fashion. For example, the general form of a linear 2nd order partial differential equation in two variables is

Au_xx + Bu_xy + Cu_yy + Du_x + Eu_y + Fu = G,

where the coefficients A, …, G are functions of x and y.
For instance,
yuxx + uyy + ux = 0
is linear and
uuxx + uyy + ux = 0
is nonlinear but quasi-linear. If G = 0 the equation is homogeneous.
Some general concerns in the study of partial differential equations are:
1. existence of solutions,
2. uniqueness of solutions,
3. dependence of solutions on the ‘data’,
4. smoothness, or regularity of the solution, and
5. representations for solutions and behavior of solutions.
The first three of these issues are related to the notion of a well-posed problem. In
particular, a problem is well posed in the sense of Hadamard if there exists a unique solution
that depends continuously on the data.
Consider the following examples in R2 ,
1. The general solution of ∂_1 u = 0 is u(x_1, x_2) = f(x_2) where f is arbitrary. Thus the
equation gives complete information about the behavior of the solution with respect
to x_1 (it is constant), but it gives no information with respect to the x_2 variable.
2. The general solution of ∂_1 ∂_2 u = 0 is u(x_1, x_2) = f(x_1) + g(x_2) where f and g are
arbitrary. Thus, other than the fact that the dependences on x_1 and x_2 are "uncoupled,"
we learn nothing about the dependence on the variables.
3. Any complex valued solution u of the "Cauchy-Riemann" equation ∂_1 u + i∂_2 u = 0 is a
holomorphic function of z = x_1 + ix_2, and hence in particular C^∞. The equation
imposes very strong conditions on all derivatives of the solutions.

Linear partial differential equations are often classified as being of elliptic, hyperbolic, or
parabolic type. What follows is a brief and heuristic discussion of some of the features that
characterize these classifications of partial differential equations. Most of what is presented
in this introduction will be made precise in subsequent chapters.

Elliptic Equations
An example of an elliptic partial differential equation is Laplace’s equation,

div grad u = 0

If u = u(x_1, …, x_n), where (x_1, …, x_n) are rectangular Cartesian coordinates, and w(x) = (w_1, …, w_n) is a vector field, then

grad u = ∇u = (u_{x_1}, …, u_{x_n}),

div ~w = ∂w_1/∂x_1 + ⋯ + ∂w_n/∂x_n,

and so

div grad u = ∇ · ∇u = ∇²u = Δu = u_{x_1 x_1} + ⋯ + u_{x_n x_n} = 0.
The Laplacian of u, ∆u, provides a comparison of values of u at a point with values at
neighboring points. To illustrate this idea consider the simplest case that u = u(x) and
assume uxx (x) > 0. Let ũ denote the tangent line approximation to u. For h > 0 and
sufficiently small,

u(x + h) > ũ(x + h) = u(x) + u′(x)h,
u(x − h) > ũ(x − h) = u(x) − u′(x)h,

and so

[u(x + h) + u(x − h)]/2 > u(x).
Roughly stated, u(x) is smaller than its average value at nearby points if uxx (x) > 0. In
higher dimensions, say n = 2, the analogous statement is that if ∆u(x, y) > 0, then the
average value of u at neighboring points, say on a circle about (x, y), is greater than u(x, y).
If ∆u(x, y) < 0, then u(x, y) is greater than the average value of u on a circle about (x, y).
If ∆u(x, y) = 0 then u(x, y) is equal to its average value on a circle about (x, y). (We
will subsequently prove a theorem that makes these ideas rigorous.) More generally, if
∆u(x) = 0, x ∈ Ω ⊂ Rn , then u is equal to its average value at neighboring points everywhere
in Ω. In a certain sense, this says that if u satisfies Laplace’s equation then u represents a
state of equilibrium.
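This averaging property is easy to probe numerically. The sketch below (plain Python; `circle_average` is a hypothetical helper of our own, not anything from the text) compares the value of a function at a point with its mean over a surrounding circle, for a harmonic and for a subharmonic choice of u:

```python
import math

def circle_average(u, x0, y0, r, n=1000):
    # Approximate the mean of u over the circle of radius r about (x0, y0).
    total = 0.0
    for k in range(n):
        th = 2 * math.pi * k / n
        total += u(x0 + r * math.cos(th), y0 + r * math.sin(th))
    return total / n

harmonic = lambda x, y: x * x - y * y      # Δu = 0
subharmonic = lambda x, y: x * x + y * y   # Δu = 4 > 0

# Harmonic: the circle average equals the value at the center.
assert abs(circle_average(harmonic, 1.0, 2.0, 0.5) - harmonic(1.0, 2.0)) < 1e-9

# Δu > 0: the center value is smaller than the circle average.
assert circle_average(subharmonic, 1.0, 2.0, 0.5) > subharmonic(1.0, 2.0)
```

The mean value theorem proved later makes the first assertion exact for every harmonic function, not just this polynomial example.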

Solutions to Laplace's equation Δu(x) = 0 are said to be harmonic. Suppose f(z) = f(x + iy) = u(x, y) + iv(x, y) is analytic. The Cauchy-Riemann equations state that

∂u/∂x = ∂v/∂y,  ∂u/∂y = −∂v/∂x,

and so

u_xx + u_yy = 0,  v_xx + v_yy = 0.
Thus the real and imaginary parts of an analytic function are harmonic. There is a converse
statement of this result known as Weyl’s Theorem that depends on the notion of a weak
solution. Roughly stated, Δu(x) = 0 in a weak sense if for all smooth functions φ with compact support in Ω,

∫_Ω u(x)(Δφ)(x) dx = 0.

We will not prove this next result.

Weyl's Theorem. If u is a weak solution of Δu(x) = 0 on Ω, then u ∈ C^∞(Ω) and u satisfies Laplace's equation in a classical sense.

The so-called Cauchy problem for Laplace’s equation,

uxx + uyy = 0, |x| < ∞, y > 0


u(x, 0) = f (x),
uy (x, 0) = g(x),

is not well-posed. Indeed, consider the specific problem

u_xx + u_yy = 0, |x| < ∞, y > 0,
u(x, 0) = (cos nx)/n,
u_y(x, 0) = 0,

with the solution

u_n(x, y) = (1/n) cosh(ny) cos(nx).
For n sufficiently large, the data of this problem can be made uniformly small. However, for any fixed y > 0,

lim_{n→∞} u_n(0, y) = lim_{n→∞} cosh(ny)/n = ∞.

In other words, small changes in the data do not correspond to small changes in a solution.
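Hadamard's example can be made concrete with a few lines of arithmetic. The sketch below (an illustrative check of our own, not part of the text) evaluates the size of the data and the size of the solution of the problem above as n grows:

```python
import math

def data_sup(n):
    # sup_x |u_n(x, 0)| = sup_x |cos(n x)| / n = 1 / n
    return 1.0 / n

def solution_at(n, x=0.0, y=1.0):
    # u_n(x, y) = cosh(n y) cos(n x) / n
    return math.cosh(n * y) * math.cos(n * x) / n

for n in (5, 10, 20):
    assert data_sup(2 * n) < data_sup(n)          # the data shrinks ...
    assert solution_at(2 * n) > solution_at(n)    # ... while u_n(0, 1) blows up

assert data_sup(50) == 0.02
assert solution_at(50) > 1e19   # cosh(50)/50 is astronomically large
```

The exponential growth of cosh(ny) in n is exactly what defeats continuous dependence on the data.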

Another general feature of elliptic problems is that solutions are smoother than the data. For instance, we will develop the following solution of Poisson's equation Δu(x) = f(x), x ∈ R³:

u(x) = −(1/4π) ∫_{R³} f(y)/|x − y| dy.

(Here |x| = (Σ x_i²)^{1/2}.) From this representation of the solution, we see that integrating the data f against the kernel 1/|x − y| mollifies, or smooths, the data, and so we expect that a lack of regularity in f(x) would be eliminated. This is indeed the case, and in the spirit of Weyl's Theorem we will show that the solution u is in fact C^∞.
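One can also check numerically that the kernel itself is harmonic away from its singularity, which is consistent with the smoothing just described. The following sketch (a finite-difference check; the function names are our own) estimates Δ(1/|x|) at a point x ≠ 0 in R³:

```python
import math

def newtonian_kernel(x, y, z):
    # w(x) = 1 / |x|, the kernel appearing in the Poisson representation
    return 1.0 / math.sqrt(x * x + y * y + z * z)

def laplacian_fd(f, p, h=1e-2):
    # Second-order central-difference approximation of Δf at the point p.
    total = 0.0
    for i in range(3):
        q_plus = list(p); q_plus[i] += h
        q_minus = list(p); q_minus[i] -= h
        total += f(*q_plus) - 2.0 * f(*p) + f(*q_minus)
    return total / (h * h)

# 1/|x - y| is harmonic in x away from the singularity x = y.
assert abs(laplacian_fd(newtonian_kernel, [1.0, 2.0, 3.0])) < 1e-5
```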

Hyperbolic Equations
One of the simplest examples of a hyperbolic equation is the wave equation,

uxx = utt , −∞ < x < ∞, t > 0.

Here u(x, t) represents the displacement of a point on an infinite string at point x at time
t. The proper formulation of the problem requires that we stipulate the initial position and
velocity of the string. For the infinite string we need to solve the initial value problem

uxx = utt , −∞ < x < ∞, t > 0


u(x, 0) = f (x)
ut (x, 0) = g(x)

This initial value problem is also called the Cauchy problem, and we will show that its solution is given by D'Alembert's formula,

u(x, t) = (1/2)[f(x + t) + f(x − t)] + (1/2) ∫_{x−t}^{x+t} g(s) ds.

For instance, suppose that g(x) = 0 and f(x) is the characteristic function of the interval
[−1, 1]. Since f(x + t) and f(x − t) represent two waves traveling in opposite directions, it is
easy to see that the discontinuities in the initial data at x = −1 and x = 1 will propagate
along the characteristic lines x − t = ±1 and x + t = ±1. Moreover, the support of u(x, t) travels with finite
speed and exhibits a sharp leading and trailing edge. The figure below emphasizes that for
each fixed time t, the region where the disturbance has spread is restricted to a finite set.
The presence of the so-called sharp trailing edge is due to the fact that the initial velocity is
zero. We will see that for dimensions 2, 4, 6, · · · the region of support does not exhibit this
trailing edge whereas in dimensions 3, 5, 7, · · · this phenomenon is present, independently of
the initial conditions (unlike dimension 1).
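D'Alembert's formula for this particular initial data can be evaluated directly. In the sketch below (illustrative Python, not from the text), f is the characteristic function of [−1, 1] and g = 0, and the asserted values reproduce the regions u = 1, u = 1/2, and u = 0 described above:

```python
def f(x):
    # Initial displacement: characteristic function of [-1, 1].
    return 1.0 if -1.0 <= x <= 1.0 else 0.0

def u(x, t):
    # D'Alembert's formula with zero initial velocity (g = 0).
    return 0.5 * (f(x + t) + f(x - t))

assert u(0.0, 0.5) == 1.0    # both translates still overlap near the center
assert u(3.0, 2.0) == 0.5    # inside the right-moving band
assert u(-3.0, 2.0) == 0.5   # inside the left-moving band
assert u(10.0, 2.0) == 0.0   # ahead of the leading edge: finite speed
assert u(0.0, 3.0) == 0.0    # behind the trailing edge: sharp trailing edge
```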
[Figure: regions of the (x, t)-plane where u = 0, u = 1/2, and u = 1.]

The propagation of a disturbance in dimension 1.

In general, for hyperbolic equations

1. solutions are no smoother than data,

2. there is a finite speed of propagation,

3. solutions exhibit a strong dependence on spatial dimension,

4. many quantities are preserved, and

5. the Cauchy problem is well-posed.

Parabolic Equations
The heat equation,
ut (x, t) = ∆u(x, t), x ∈ Rn , t > 0
is an example of a parabolic equation. If we think of u(x, t) as being the temperature at a
point x at time t, this equation describes the flow, or diffusion, of heat. In view of our earlier
discussion of the interpretation of the Laplacian, we see that if, say, Δu(x, t) < 0, then the
temperature at position x is greater than that at surrounding points. By Fourier's law of
heat conduction, heat would 'flow' away from the position x, and from the differential equation we
see that u_t < 0, corresponding to the decrease in temperature at that point.
The Cauchy problem for the heat equation on Rn is

ut = ∆u(x, t), |x| < ∞, t > 0.

u(x, 0) = f (x).

The solution may be expressed as

u(x, t) = (4πt)^{−n/2} ∫_{R^n} e^{−|x−y|²/4t} f(y) dy.

From this formula, it is reasonable to expect the solution u(x, t) to be infinitely differentiable.
In addition, we see that even if the data f is compactly supported, the temperature u(x, t)
will be positive for all x when t > 0. These observations suggest the following general
features of parabolic equations:
1. solutions are smooth, and

2. there is an infinite speed of conduction.
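Both features can be checked against the solution formula above. The following sketch (one space dimension, midpoint quadrature; the helper name is our own) takes f to be the characteristic function of [−1, 1] and shows that u(x, t) is strictly positive even far from the support of f:

```python
import math

def heat_solution(x, t, n=2000):
    # u(x,t) = (4πt)^(-1/2) ∫ exp(-(x-y)²/(4t)) f(y) dy, with f the
    # characteristic function of [-1, 1]; midpoint quadrature on [-1, 1].
    dy = 2.0 / n
    total = 0.0
    for k in range(n):
        y = -1.0 + (k + 0.5) * dy
        total += math.exp(-(x - y) ** 2 / (4.0 * t)) * dy
    return total / math.sqrt(4.0 * math.pi * t)

t = 0.1
assert 0.9 < heat_solution(0.0, t) < 1.0   # near the support, u is close to 1
assert heat_solution(10.0, t) > 0.0        # far away, u is already positive
assert heat_solution(10.0, t) < heat_solution(2.0, t) < heat_solution(0.0, t)
```

The second assertion is the "infinite speed of conduction": at any t > 0 the temperature is nonzero arbitrarily far from the initial heat distribution.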

6.2 Linear and Quasilinear equations of First Order


In the study of linear partial differential equations, a measure of the "strength" of a differential operator in a certain direction is provided by the notion of characteristics. If L = Σ_{|α|≤k} a_α(x) ∂^α is a linear differential operator of order k on Ω in R^n, then its characteristic form (or principal symbol) at x ∈ Ω is the homogeneous polynomial of degree k on R^n defined by

χ_L(x, ξ) = Σ_{|α|=k} a_α(x) ξ^α.

A vector ξ is characteristic for L at x if

χL (x, ξ) = 0.

The characteristic variety is the set of all characteristic vectors ξ, i.e.,

Char_x(L) = {ξ ≠ 0 : χ_L(x, ξ) = 0}.

Definition 6.2.1. A hypersurface of class C^k (1 ≤ k ≤ ∞) is a subset S ⊂ R^n such that for each x_0 ∈ S there exist an open neighborhood V ⊂ R^n of x_0 and a real-valued function ϕ ∈ C^k(V) such that ∇ϕ(x) ≠ 0 for all x ∈ S ∩ V, where

S ∩ V = {x ∈ V : ϕ(x) = 0}.

Remark 6.2.2. Since, by definition, ∇ϕ(x_0) ≠ 0 for each x_0, we can apply the implicit function theorem (without loss of generality let us assume that ∂_{x_n}ϕ(x_0) ≠ 0) to solve ϕ(x) = 0 for x_n = ψ(x′), where x′ = (x_1, …, x_{n−1}), near x_0. Thus a neighborhood of x_0 can be mapped to a piece of the hyperplane x_n = 0 by x ↦ (x′, x_n − ψ(x′)). The same neighborhood can also be represented in parametric form as the image of an open set in R^{n−1} (with coordinates x′) under the map

x′ ↦ (x′, ψ(x′)).

Thus x′ can be thought of as giving local coordinates on S near x_0.

A hypersurface S is called characteristic for L at x if the normal vector ν(x) is in Charx (L)
and S is called non-characteristic if it is not characteristic at any point.
An important property of the characteristic variety is contained in the following:

Let F be a smooth one-to-one mapping of Ω onto Ω′ ⊂ R^n and set y = F(x). Assume that the Jacobian matrix

J_x = [∂y_i/∂x_j (x)]

is nonsingular for x ∈ Ω, so that {y_1, y_2, …, y_n} is a coordinate system on Ω′.


We have

∂/∂x_j = Σ_{i=1}^n (∂y_i/∂x_j) ∂/∂y_i,

which we can write symbolically as ∂_x = J_x^T ∂_y, where J_x^T is the transpose of J_x. The operator L is then transformed into

L′ = Σ_{|α|≤k} a_α(F^{−1}(y)) (J_{F^{−1}(y)}^T ∂_y)^α on Ω′.

When this expression is expanded out, there will be some differentiations of J_{F^{−1}(y)}^T, but such derivatives are only formed by "using up" some of the ∂_y on J_{F^{−1}(y)}^T, so they do not enter in the computation of the principal symbol in the y coordinates, i.e., they do not enter the highest order terms. We find that

χ_{L′}(y, ξ) = Σ_{|α|=k} a_α(F^{−1}(y)) (J_{F^{−1}(y)}^T ξ)^α.

Now, since F^{−1}(y) = x, on comparing with the expression

χ_L(x, ξ) = Σ_{|α|=k} a_α(x) ξ^α,

we see that Char_x(L) is the image of Char_y(L′) under the linear map J_{F^{−1}(y)}^T.

Note that if ξ ≠ 0 is a vector in the x_j-direction (i.e., ξ_i = 0 for i ≠ j), then ξ ∈ Char_x(L) if and only if the coefficient of ∂_j^k in L vanishes at x. Now, given any ξ ≠ 0, by a rotation of coordinates we can arrange for ξ to lie in a coordinate direction. Thus the condition ξ ∈ Char_x(L) means that, in some sense, L fails to be "genuinely kth order" in the ξ direction at x.

L is said to be elliptic at x if Char_x(L) = ∅, and elliptic on Ω if it is elliptic at each x ∈ Ω. Elliptic operators exert control on derivatives of all orders.

Example 6.2.3. The first three examples are in R² as discussed above.

1. L = ∂_1: Char_x(L) = {ξ ≠ 0 : ξ_1 = 0}.

2. L = ∂_1 ∂_2: Char_x(L) = {ξ ≠ 0 : ξ_1 = 0 or ξ_2 = 0}.

3. L = (1/2)(∂_1 + i∂_2): L is elliptic on R².

4. L = Σ_{j=1}^n ∂_j² (Laplace operator): L is elliptic on R^n.

5. L = ∂_1 − Σ_{j=2}^n ∂_j² (heat operator): Char_x(L) = {ξ ≠ 0 : ξ_j = 0 for j ≥ 2}.

6. L = ∂_1² − Σ_{j=2}^n ∂_j² (wave operator): Char_x(L) = {ξ ≠ 0 : ξ_1² = Σ_{j=2}^n ξ_j²}.
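The characteristic varieties in items 4–6 can be verified mechanically from the principal symbols. The sketch below (illustrative Python with n = 3; the function names are our own) evaluates χ_L for the Laplace, heat, and wave operators on sample vectors ξ:

```python
def laplace_symbol(xi):
    # χ(ξ) = ξ_1² + ... + ξ_n²  (the order-2 part of the Laplacian)
    return sum(c * c for c in xi)

def heat_symbol(xi):
    # ∂_1 - Σ_{j≥2} ∂_j² has order 2; only the -∂_j² terms are top order.
    return -sum(c * c for c in xi[1:])

def wave_symbol(xi):
    # χ(ξ) = ξ_1² - Σ_{j≥2} ξ_j²
    return xi[0] ** 2 - sum(c * c for c in xi[1:])

# Laplace operator: elliptic, no nonzero characteristic vector.
assert all(laplace_symbol(xi) != 0 for xi in [(1, 0, 0), (0, 1, 0), (1, 2, 3)])

# Heat operator: exactly the ξ = (ξ_1, 0, ..., 0) directions are characteristic.
assert heat_symbol((5, 0, 0)) == 0 and heat_symbol((0, 1, 0)) != 0

# Wave operator: the "light cone" ξ_1² = Σ_{j≥2} ξ_j² is characteristic.
assert wave_symbol((5, 3, 4)) == 0 and wave_symbol((1, 1, 1)) != 0
```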

Remark 6.2.4. In the notation introduced in Definition 6.2.1, we say that a surface S is oriented if for each x ∈ S we have made a choice of a vector ν(x) which is orthogonal to S at x and is a continuously varying function of x. Such a vector is called a normal vector to S at x. On S ∩ V = {x : ϕ(x) = 0} we have

ν(x) = ± ∇ϕ(x)/|∇ϕ(x)|.

Thus ν(x) is a C^{k−1} function on S. If S is the boundary of a domain Ω then we usually choose the orientation so that ν points out of Ω.

At this point we can also define the normal derivative by

∂_ν u = ν · ∇u.

Definition 6.2.5. A hypersurface S is called characteristic for L at x ∈ S if the normal vector ν(x) to S at x is in Char_x(L), and S is called non-characteristic if it is not characteristic at any x ∈ S.

We now turn to the development for real first order systems. First recall that the basic problem in ODEs is the initial value problem:

Given a function F on, say, R³ and (t_0, u_0) ∈ R², find a function u(t) defined in a neighborhood of t_0 such that F(t, u, u′) = 0 and u(t_0) = u_0.

In this discussion we will consider the analog of this, namely the initial value problem for a first order partial differential equation. We will focus on the linear and quasi-linear cases.

Let us first consider the linear equation

Σ_{j=1}^n a_j ∂_j u + bu = f(x), (6.2.1)

where a_j, b and f are assumed to be C¹ functions of x. If we denote by A the vector field

A(x) = (a_1(x), …, a_n(x)),

then we have

Char_x(L) = {ξ ≠ 0 : A(x) · ξ = 0}.

That is, Char_x(L) ∪ {0} is the hyperplane orthogonal to A(x). From this we see that:

A hypersurface S is characteristic at x if and only if A(x) is tangent to S at x.

INITIAL VALUE PROBLEM: Find a solution u to (6.2.1) with given initial values u = ϕ on a given hypersurface S.

If S is characteristic at a point x_0, then the quantity Σ a_j(x_0) ∂_j u(x_0) is completely determined as a certain directional derivative of ϕ along S at x_0. For this reason it may not be possible to make it equal to f(x_0) − b(x_0)u(x_0). As an example, if the equation is ∂_1 u = 0 and S is the hyperplane x_n = 0, we cannot have u = ϕ on S unless ∂_1 ϕ = 0. Namely, consider the case of R². The general solution is given by u(x_1, x_2) = f(x_2) where f is arbitrary. But if S corresponds to x_2 = 0, then the solution must satisfy u(x_1, 0) = f(0), so the only admissible data ϕ are constants.
Thus to make the initial value problem well behaved, we must assume that S is non-
characteristic. It turns out that to solve for u it is useful to compute the integral curves of
the vector field A(x).

Definition 6.2.6. The integral curves of the vector field A(x) are, by definition, the parameterized curves x(t) that satisfy the system of ODEs

dx/dt = A(x), i.e., dx_j/dt = a_j(x), j = 1, 2, …, n. (6.2.2)

Along such a curve a solution u of the equation (6.2.1) must satisfy

du/dt = Σ_{j=1}^n (∂u/∂x_j)(dx_j/dt) = Σ_{j=1}^n a_j ∂_j u = f − bu. (6.2.3)

That is, along such a curve a solution u of the equation (6.2.1) will satisfy the ODE

du/dt = f − bu. (6.2.4)
By the fundamental existence and uniqueness theorem from ODEs, through each point x_0 of S there passes a unique integral curve x(t) of A, namely the solution of (6.2.2) with x(0) = x_0. Along this curve the solution u of (6.2.1) must also be a solution of the ODE (6.2.4) with u(0) = ϕ(x_0). Moreover, since S is non-characteristic, x(t) ∉ S (at least for t ≠ 0 sufficiently small) and the curves x(t) fill out a neighborhood of S. The same result as stated in Theorem 6.2.7 is given in the simpler case of R² in Subsection 6.2.1 (see, in particular, Theorem 6.2.14).

Theorem 6.2.7. Assume that S is a hypersurface of class C 1 which is non-characteristic


for (6.2.1), and that the functions aj , b, f , and ϕ are C 1 and real-valued. Then for any
sufficiently small neighborhood Ω of S in Rn there is a unique solution u ∈ C 1 of (6.2.1) that
satisfies u = ϕ on S.

This theorem is a special case of the corresponding result for quasi-linear equations, so we will defer its proof to the proof of the following more general result (see Theorem 6.2.8).
Consider a first order quasi-linear equation
Σ_{j=1}^n a_j(x, u) ∂_j u = b(x, u). (6.2.5)

In this case, we consider variables (x_1, …, x_n, u) ∈ R^{n+1} and note that if u is a function of x, then the normal to the graph of u (i.e., (x, u(x)) ∈ R^{n+1}) in R^{n+1} is proportional to ~v = (∂_1 u, …, ∂_n u, −1). So (6.2.5) says that

A(x, u) = (a_1(x, u), …, a_n(x, u), b(x, u))

is tangent to the graph of y = u(x) at any point (since it is orthogonal to ~v).


This suggests that we look at the integral curves of the vector field A(x, u) in R^{n+1} given by solving the equations

dx_j/dt = a_j(x, y), j = 1, …, n,  dy/dt = b(x, y). (6.2.6)
As you will see, any graph y = u(x) in R^{n+1} which is the union of an (n − 1)-parameter family of these integral curves will define a solution of (6.2.5). Conversely, suppose that u is a solution of (6.2.5). If we solve the equations

dx_j/dt = a_j(x, u(x)),  x_j(0) = (x_0)_j

to obtain a curve x(t) passing through x_0, and then set y = u(x(t)), we have

dy/dt = Σ_{j=1}^n ∂_j u (dx_j/dt) = Σ_{j=1}^n a_j(x, u) ∂_j u = b(x, u).

Thus if the graph y = u(x) intersects an integral curve of A in one point (x0 , u(x0 )), it
contains the whole curve.
Suppose we are given initial data u = ϕ on a hypersurface S in R^n. If we form the
submanifold
S ∗ = {(x, ϕ(x)) : x ∈ S}
of Rn+1 , the graph of the solution should be the hypersurface (in Rn+1 ) generated by the in-
tegral curves of A passing through S ∗ . Again, we need to assume that S is non-characteristic
in some sense. This is more complicated than the linear case because aj depend on u as well
as x. We need the following geometric interpretation:
For x ∈ S, the vector (a_1(x, ϕ(x)), …, a_n(x, ϕ(x))) should not be tangent to S at x. Note that this condition involves ϕ as well as S.

If S is represented parametrically by a mapping g : R^{n−1} → R^n and we take coordinates s = (s_1, …, s_{n−1}) ∈ R^{n−1}, so that g(s) = (g_1(s), …, g_n(s)), then the above condition can be expressed as

det [ ∂g_1/∂s_1 ⋯ ∂g_1/∂s_{n−1}  a_1(g(s), ϕ(g(s))) ]
    [     ⋮              ⋮               ⋮          ] ≠ 0. (6.2.7)
    [ ∂g_n/∂s_1 ⋯ ∂g_n/∂s_{n−1}  a_n(g(s), ϕ(g(s))) ]

Theorem 6.2.8. Suppose that S is a C¹ hypersurface and a_j, b, and ϕ are C¹ real-valued functions. Suppose that the vector field V = (a_1(x, ϕ(x)), …, a_n(x, ϕ(x))) is not tangent to S at any x ∈ S (this is the non-characteristic condition). Then there is a unique solution u ∈ C¹ of

Σ_{j=1}^n a_j(x, u) ∂_j u = b(x, u)

in a neighborhood Ω of S such that u = ϕ on S.

The proof of this result is constructive.

Proof. The uniqueness follows from the discussion above, which shows that the graph of u must be the union of integral curves of A in Ω passing through S*.

Now any hypersurface S can be covered by open sets on which it admits a parametric representation x = g(s), with s ∈ R^{n−1}. If we solve the problem on each such open set, by uniqueness the local solutions must agree on the overlaps of the open sets, and hence patch together to give a solution for all of S. It therefore suffices to assume that S is given parametrically by x = g(s) with s ∈ R^{n−1}.
For each s ∈ Rn−1 , consider the initial value problem

∂x_j/∂t (s, t) = a_j(x, y), j = 1, …, n,
∂y/∂t (s, t) = b(x, y), (6.2.8)
x_j(s, 0) = g_j(s),
y(s, 0) = ϕ(g(s)).

Here s is a parameter vector, so we have a system of ODEs in t. By the fundamental existence and uniqueness theorem for ODEs, there exists a unique solution (x, y) defined for small t, and (x, y) is a C¹ function of s and t jointly. By the non-characteristic condition (6.2.7) and the inverse mapping theorem, the mapping (s, t) ↦ x is invertible on some neighborhood Ω of S, yielding s and t as C¹ functions of x on Ω such that t(x) = 0 and g(s(x)) = x when x ∈ S. Now set

u(x) = y(s(x), t(x)).

We have u = ϕ on S, and we claim that u satisfies (6.2.5). Indeed, by the chain rule, together with the facts that ∂s_k/∂t = 0 (since s_k and t are functionally independent) and ∂t/∂t = 1, we have

Σ_{j=1}^n a_j ∂u/∂x_j = Σ_{j=1}^n a_j ( Σ_{k=1}^{n−1} (∂u/∂s_k)(∂s_k/∂x_j) + (∂u/∂t)(∂t/∂x_j) )

= Σ_{k=1}^{n−1} (∂u/∂s_k) Σ_{j=1}^n a_j (∂s_k/∂x_j) + (∂u/∂t) Σ_{j=1}^n a_j (∂t/∂x_j)

= Σ_{k=1}^{n−1} (∂u/∂s_k) Σ_{j=1}^n (∂x_j/∂t)(∂s_k/∂x_j) + (∂u/∂t) Σ_{j=1}^n (∂x_j/∂t)(∂t/∂x_j)

= Σ_{k=1}^{n−1} (∂u/∂s_k)(∂s_k/∂t) + (∂u/∂t)(∂t/∂t)

= 0 + ∂u/∂t = b.

This completes the proof.
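The construction in the proof can also be imitated numerically. The sketch below (illustrative Python; the model problem u u_x + u_y = 0 with u(x, 0) = x is our own choice, not one of the book's examples) integrates the characteristic system (6.2.8) with forward Euler and checks the result against the exact solution u(x, y) = x/(1 + y):

```python
def solve_characteristic(s, t_final, steps=1000):
    # Integrate the characteristic system for the model problem
    #   u u_x + u_y = 0,  u(x, 0) = x,
    # i.e.  dx/dt = u, dy/dt = 1, du/dt = 0, starting from (s, 0, s).
    x, y, u = s, 0.0, s
    dt = t_final / steps
    for _ in range(steps):
        x, y, u = x + u * dt, y + dt, u   # forward Euler step
    return x, y, u

# Each characteristic carries the constant value u = s along the straight
# line x = s(1 + t), y = t, so the solution is u(x, y) = x / (1 + y).
for s in (-1.0, 0.5, 2.0):
    x, y, u = solve_characteristic(s, 3.0)
    assert abs(u - x / (1.0 + y)) < 1e-9
```

In a full implementation one would also invert (s, t) ↦ x numerically, as the proof does with the inverse mapping theorem, to evaluate u at prescribed points.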

Example 6.2.9. In R³ solve the linear initial value problem Lu = x_1 ∂_1 u + 2x_2 ∂_2 u + ∂_3 u = 3u with u = ϕ(x_1, x_2) on the plane x_3 = 0.

Solvability: We can observe that the solvability condition is satisfied by considering the vector field A(x) = (x_1, 2x_2, 1) and the characteristic variety

Char_x(L) = {ξ = (ξ_1, ξ_2, ξ_3) ≠ 0 : A(x) · ξ = 0}.

We note that the initial surface S in this case is x_3 = 0, with constant normal vector ν = (0, 0, 1). In the present case we see that

A(x) · ν = 1 ≠ 0,

so the surface is non-characteristic.

We could also note that with s = (s_1, s_2) ∈ R² and

g(s) = (g_1(s), g_2(s), g_3(s)) = (s_1, s_2, 0)

parameterizing the surface S, we have for (6.2.7)

det [ 1  0  s_1  ]
    [ 0  1  2s_2 ] = 1 ≠ 0,
    [ 0  0  1    ]

so we can solve the problem.


Solution: In the present case the initial value problem (6.2.2) (see also (6.2.6) and (6.2.8)) gives

dx_1/dt = x_1,  dx_2/dt = 2x_2,  dx_3/dt = 1,  du/dt = 3u,

with initial conditions

(x_1, x_2, x_3, u)|_{t=0} = (s_1, s_2, 0, ϕ(s_1, s_2)).

Solving this system of ODEs yields

x_1 = s_1 e^t,  x_2 = s_2 e^{2t},  x_3 = t,  u = ϕ(s_1, s_2) e^{3t}.

The next step is to invert the first three equations to obtain s_1, s_2 and t:

t = x_3,  s_1 = x_1 e^{−x_3},  s_2 = x_2 e^{−2x_3}.

Thus we find that

u = ϕ(x_1 e^{−x_3}, x_2 e^{−2x_3}) e^{3x_3}.
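This closed form can be checked against the equation by finite differences. In the sketch below (illustrative Python; the particular ϕ is an arbitrary C¹ test function of our own choosing) we verify the PDE at a sample point and the initial condition on x_3 = 0:

```python
import math

def phi(a, b):
    # a smooth test choice of initial data (any C¹ φ works)
    return math.sin(a) + a * b

def u(x1, x2, x3):
    # the solution obtained from the method of characteristics
    return phi(x1 * math.exp(-x3), x2 * math.exp(-2 * x3)) * math.exp(3 * x3)

def partial(f, p, i, h=1e-6):
    # central-difference approximation of the i-th partial of f at p
    q_plus = list(p); q_plus[i] += h
    q_minus = list(p); q_minus[i] -= h
    return (f(*q_plus) - f(*q_minus)) / (2 * h)

p = (0.7, -1.3, 0.4)
lhs = p[0] * partial(u, p, 0) + 2 * p[1] * partial(u, p, 1) + partial(u, p, 2)
assert abs(lhs - 3 * u(*p)) < 1e-5          # x₁∂₁u + 2x₂∂₂u + ∂₃u = 3u
assert u(0.7, -1.3, 0.0) == phi(0.7, -1.3)  # u = φ on the plane x₃ = 0
```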
Example 6.2.10. Let us now consider a quasi-linear example in R²: we want to solve u ∂_1 u + ∂_2 u = 1 with u = (1/2)s on the segment x_1 = x_2 = s, 0 < s < 1.

Solvability: We show that (6.2.7) is satisfied:

det [ ∂x_1/∂s  a_1(s, s, s/2) ]  =  det [ 1  s/2 ]  =  1 − s/2 ≠ 0  for 0 < s < 1.
    [ ∂x_2/∂s  a_2(s, s, s/2) ]         [ 1  1   ]

Solution: The desired system of ODEs in this case is

dx_1/dt = u,  dx_2/dt = 1,  du/dt = 1,

with initial conditions

(x_1, x_2, u)|_{t=0} = (s, s, s/2).

Thus we get

u = t + s/2,  x_2 = t + s,  x_1 = t²/2 + st/2 + s.

Since x_2 − x_1 = (t/2)(2 − t − s) = (t/2)(2 − x_2), we can eliminate s and t from these equations to obtain

u = (4x_2 − 2x_1 − x_2²) / (2(2 − x_2)).
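The eliminated form can be verified directly. The following sketch (illustrative Python) checks, by central differences, that u u_{x_1} + u_{x_2} = 1 at a sample point and that u = s/2 along the initial segment:

```python
def u(x1, x2):
    # the closed-form solution obtained by eliminating s and t
    return (4 * x2 - 2 * x1 - x2 ** 2) / (2 * (2 - x2))

def d(f, p, i, h=1e-6):
    # central-difference approximation of the i-th partial of f at p
    q_plus = list(p); q_plus[i] += h
    q_minus = list(p); q_minus[i] -= h
    return (f(*q_plus) - f(*q_minus)) / (2 * h)

p = (0.3, 0.8)
assert abs(u(*p) * d(u, p, 0) + d(u, p, 1) - 1.0) < 1e-6   # u u₁ + u₂ = 1
for s in (0.2, 0.5, 0.9):
    assert abs(u(s, s) - s / 2) < 1e-12                    # u = s/2 on the segment
```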

6.2.1 Special Case: Quasilinear Equations in R2


As a special case let us consider n = 2, i.e., let us consider first order quasi-linear (or linear) equations in two variables, in the form

a(x, y, u) ∂u/∂x + b(x, y, u) ∂u/∂y = c(x, y, u). (6.2.9)

For this case we adhere to common practice and refer to the variables as x and y, or x and t, depending on the given problem, instead of x_1 and x_2. Also, in this case it is often useful to use the notations

u_x = ∂u/∂x,  u_y = ∂u/∂y,  u_t = ∂u/∂t,  etc.

for the various partial derivatives.
Let u = u(x, y) be a solution (also referred to as a solution surface). The vector (u_x, u_y, −1) is normal to the surface u(x, y) − u = 0, and the equation (6.2.9) expresses the orthogonality condition

A(x, y, u) · (u_x, u_y, −1) = 0,  A(x, y, u) ≡ (a(x, y, u), b(x, y, u), c(x, y, u)). (6.2.10)

[Figure: a solution surface u = u(x, y), with normal vector (u_x, u_y, −1) and tangent vector (a, b, c).]

Solution Surface

To solve (6.2.9) amounts to constructing a surface such that the normal to the surface
satisfies the orthogonality condition (6.2.10). This is the same as saying that we seek a
surface u = u(x, y) such that the tangent plane to the surface at (x, y, u) contains the
vector (a(x, y, u), b(x, y, u), c(x, y, u)). More specifically, since we are really interested in the

initial value problem, we seek a solution of (6.2.9) that contains a given curve, say C_0, given parametrically by (x_0(s), y_0(s), u_0(s)). Recall that a surface in three dimensions (in our case the solution surface u = u(x, y)) can be represented (at least locally) parametrically as a two parameter family

(x(s, t), y(s, t), u(s, t)).

In this way we see that a solution surface is built up from a one parameter family of curves (in t), indexed by s. And, in order that the initial values be achieved, we need

(x(s, 0), y(s, 0), u(s, 0)) = (x_0(s), y_0(s), u_0(s)).

If these curves lie on a solution surface, then tangent vectors to the curves must lie in the
tangent plane to the surface at the point. This means that for a fixed s the curves (in t)
must satisfy the equations

dx/dt (s, t) = a(x, y, u),  x(s, 0) = x_0(s),
dy/dt (s, t) = b(x, y, u),  y(s, 0) = y_0(s),   (6.2.11)
du/dt (s, t) = c(x, y, u),  u(s, 0) = u_0(s).

[Figure: the solution surface swept out by the characteristics (x(s, τ), y(s, τ), u(s, τ)) emanating from the initial curve (x_0(s), y_0(s), u_0(s)), with normal (u_x, u_y, −1) and tangent vector (a, b, c).]

Solution Surface and Characteristic

If the vector field A were tangent to the curve C_0, then a solution of (6.2.11) would coincide with C_0 and our method for constructing a surface would fail. We will see that this problem can be avoided by not prescribing data on a curve satisfying

dx/dt = a,  dy/dt = b.

Such a curve is called a characteristic curve. To construct a solution surface we first find

x = x(s, t), y = y(s, t), u = u(s, t)

and then solve the two equations x = x(s, t) and y = y(s, t) for s and t in terms of x and y.
In order to guarantee that this can be done we require a result from advanced calculus, the inverse function theorem, which states:

If x = x(s, t) and y = y(s, t) are C¹ maps in a neighborhood of a point (s_0, t_0), if the Jacobian

|J| = det [ ∂x/∂t  ∂x/∂s ]
          [ ∂y/∂t  ∂y/∂s ] at (s_0, t_0)  ≠ 0,

and if, in addition, x_0 = x(s_0, t_0) and y_0 = y(s_0, t_0), then there exist a neighborhood R of (s_0, t_0) and unique C¹ mappings

s = s(x, y),  t = t(x, y)

inverting (s, t) ↦ (x, y) near (x_0, y_0).

In particular, along the initial curve,

s = s(x_0(s), y_0(s)),  0 = t(x_0(s), y_0(s)).

With this we can construct our solution surface as

u = u(s, t) = u(s(x, y), t(x, y)) = u(x, y),

and

u_0(s) = u(s, 0) = u(s(x_0(s), y_0(s)), t(x_0(s), y_0(s))) = u(x_0, y_0).

Example 6.2.11. Solve

x u_x + (x + y)u_y = u + 1,  u(x, 0) = x².

We first seek solutions of

dx/dt = x,  dy/dt = x + y,  du/dt = u + 1.

In this case we can parameterize C_0 by

x_0(s) = s,  y_0(s) = 0,  u_0(s) = s²

and the initial conditions are

x(s, 0) = x_0(s) = s,  y(s, 0) = y_0(s) = 0,  u(s, 0) = u_0(s) = s².

Thus we solve

x′ = x, x(s, 0) = s  ⟹  x(s, t) = se^t,

y′ − y = se^t, y(s, 0) = 0  ⟹  y(s, t) = ste^t,

du/(u + 1) = dt, u(s, 0) = s²  ⟹  u(s, t) = (1 + s²)e^t − 1.

Since x = se^t and y = ste^t we see that t = y/x and hence s = xe^{−y/x}. Thus

u(x, y) = (1 + x²e^{−2y/x})e^{y/x} − 1 = e^{y/x} + x²e^{−y/x} − 1, for x ≠ 0.
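As a check, the following sketch (illustrative Python) verifies by finite differences that this u satisfies x u_x + (x + y)u_y = u + 1 at a sample point with x ≠ 0, and that u(x, 0) = x²:

```python
import math

def u(x, y):
    # the solution found above, valid for x ≠ 0
    return math.exp(y / x) + x * x * math.exp(-y / x) - 1.0

def d(f, p, i, h=1e-6):
    # central-difference approximation of the i-th partial of f at p
    q_plus = list(p); q_plus[i] += h
    q_minus = list(p); q_minus[i] -= h
    return (f(*q_plus) - f(*q_minus)) / (2 * h)

p = (1.5, 0.7)
assert abs(p[0] * d(u, p, 0) + (p[0] + p[1]) * d(u, p, 1) - (u(*p) + 1.0)) < 1e-5
assert abs(u(2.0, 0.0) - 4.0) < 1e-12   # u(x, 0) = x² at x = 2
```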

Example 6.2.12. (Unidirectional Wave Motion) In this linear example we seek a function u = u(x, t) such that

∂u/∂t + c ∂u/∂x = 0, (6.2.12)
u(x, 0) = F(x). (6.2.13)

We first seek solutions of

dx/dτ = c,  dt/dτ = 1,  du/dτ = 0,

subject to

x(s, 0) = s,  t(s, 0) = 0,  u(s, 0) = F(s).

Thus we obtain

x = cτ + s,  t = τ,  u = F(s).

Solving for s and τ in terms of x and t we have

s = x − ct,  τ = t.

We then get

u(x, t) = u(s(x, t), τ(x, t)) = F(x − ct).
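A quick numerical check of the traveling-wave solution (illustrative Python; the profile F = tanh and the speed c = 2 are our own choices, not from the text):

```python
import math

c = 2.0
F = math.tanh          # any C¹ initial profile will do

def u(x, t):
    # the profile F simply translates to the right with speed c
    return F(x - c * t)

h = 1e-6
x, t = 0.4, 1.3
u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
u_x = (u(x + h, t) - u(x - h, t)) / (2 * h)
assert abs(u_t + c * u_x) < 1e-6    # u_t + c u_x = 0
assert u(x, 0.0) == F(x)            # initial condition recovered
```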

Example 6.2.13. Let us now consider an example in which data is prescribed on charac-
teristics. In this linear example we seek a function u = u(x, y) such that

    x ∂u/∂x + y ∂u/∂y = u + 1,       (6.2.14)

    u(x, x) = x².                    (6.2.15)

We first seek solutions of


    dx/dt = x,   dy/dt = y,   du/dt = u + 1,
subject to
x(s, 0) = s, y(s, 0) = s, u(s, 0) = s2 .
Thus we obtain
    x = s e^t ,   y = s e^t ,   u = s² e^t + e^t − 1.

In this example it is not possible to solve for s and t in terms of x and y. Note that
A(x, y) = (x, y) and the characteristic manifold is

    Char(x,y) (L) = {ξ = (ξ1 , ξ2 ) ≠ 0 : A(x, y) · ξ = 0}.

We note that the initial curve S in this case is x = y with constant normal vector ν = (1, −1).
In the present case we see that

    A(x, x) · ν = x − x = 0,

so the curve is a characteristic.
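The obstruction is visible in the Jacobian of the characteristic map: since both x and y equal s e^t, the map (s, t) → (x, y) collapses onto the line x = y and is nowhere invertible. A short symbolic check (sketch assuming SymPy):

```python
import sympy as sp

s, t = sp.symbols('s t')

# Characteristics issued from the initial curve x0(s) = s, y0(s) = s
x = s*sp.exp(t)
y = s*sp.exp(t)

J = sp.Matrix([[sp.diff(x, t), sp.diff(x, s)],
               [sp.diff(y, t), sp.diff(y, s)]]).det()

assert sp.simplify(J) == 0   # |J| vanishes identically: (s,t) -> (x,y) is not invertible
```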

Within the context of these examples in R2 we can restate Theorem 6.2.8 with the
solvability condition (6.2.7) as

Theorem 6.2.14. Suppose a(x, y, u), b(x, y, u), c(x, y, u) are C 1 in Ω ⊂ R3 , C0 is a C 1
initial curve given by (x0 (s), y0 (s), u0 (s)) ⊂ Ω and

    det [ a(x0 (s), y0 (s), u0 (s))   x0 ′(s) ]
        [ b(x0 (s), y0 (s), u0 (s))   y0 ′(s) ]  ≠ 0.

Then there exists a unique C 1 solution of

a(x, y, u)ux + b(x, y, u)uy = c(x, y, u),

with
u(x0 (s), y0 (s)) = u0 (s).

Proof. Our regularity assumptions guarantee that the initial value problem
    dx/dt = a(x, y, u),   x(s, 0) = x0 (s),
    dy/dt = b(x, y, u),   y(s, 0) = y0 (s),
    du/dt = c(x, y, u),   u(s, 0) = u0 (s),
has a unique local solution that is C 1 in t and s. By our hypotheses

    |J| = det [ ∂x/∂t  ∂x/∂s ]
              [ ∂y/∂t  ∂y/∂s ] |t=0  =  det [ a(x0 (s), y0 (s), u0 (s))   x0 ′(s) ]
                                            [ b(x0 (s), y0 (s), u0 (s))   y0 ′(s) ]  ≠ 0.

Thus |J| ≠ 0 in a neighborhood of the initial curve C0 and by the inverse function theorem
we can solve for
s = s(x, y), t = t(x, y)
and s = s(x0 (s), y0 (s)), 0 = t(x0 (s), y0 (s)), so we can define

u = u(x, y) = u(s(x, y), t(x, y)).

We first note that u satisfies the initial conditions:

u(x0 (s), y0 (s)) = u(s, 0) = u0 (s).

Furthermore, by the chain rule


    aux + buy = a(us sx + ut tx ) + b(us sy + ut ty )
              = (asx + bsy )us + (atx + bty )ut .

Now by our construction of x, y, and since xt = a, yt = b, we have


    asx + bsy = sx xt + sy yt = ∂s/∂t = 0,

and

    atx + bty = tx xt + ty yt = ∂t/∂t = 1,

and so
aux + buy = c.

Recall that in the case of a differential equation in R2 a solution surface is a surface in


R3 which (at least locally) can be parameterized by a two parameter family. Thus what
we have shown is that the collection of all characteristic curves through C0 gives a solution
surface. Uniqueness will follow by arguing that any solution surface is essentially a collection
of characteristic curves. In particular let u(x, y) be a solution of the equation and fix a point
P0 (x0 , y0 , z0 ) on the surface. Let γ : (x(t), y(t), z(t)) be the curve through P0 determined by

    dx/dt = a(x, y, u(x, y)),   x(0) = x0 ,
    dy/dt = b(x, y, u(x, y)),   y(0) = y0 ,
    z(t) = u(x(t), y(t)),       z(0) = z0 .

Then along this curve


    dz/dt = ux dx/dt + uy dy/dt = aux + buy = c,

since u is a solution. Thus we see that γ is the characteristic curve through P0 . In other
words, a solution is always a union of characteristic curves. Through any point on a solution
surface there is a unique characteristic curve.
Therefore if C0 is not a characteristic curve, there is a unique solution surface that
contains it. If, on the other hand, C0 is a characteristic curve then

    x0 ′(s) = a(x0 (s), y0 (s), u0 (s)),

    y0 ′(s) = b(x0 (s), y0 (s), u0 (s)),

which contradicts |J| ≠ 0.

Remark 6.2.15. The above discussion shows that if C0 were a characteristic curve we could
construct infinitely many solutions containing C0 . Namely, take any curve C1 that meets C0
in a point P0 and such that |J| ≠ 0 on C1 . Then construct the solution surface through C1 .
As discussed above, this solution surface must contain the characteristic curve C0 .

[Figure: a solution surface through a curve C1 meeting the characteristic curve C0 at a
point P0 — one such surface for each choice of C1 .]

Example 6.2.16. Recall the Example 6.2.12

ut + cux = 0, u(x, 0) = F (x)

with solution given by

    u(x, t) = F (x − ct).

We note that the initial surface is t = 0 and the characteristics are determined by

    dx/dτ = c,  x(s, 0) = s,   dt/dτ = 1,  t(s, 0) = 0   ⇒   t(s, τ ) = τ,  x(s, τ ) = cτ + s   ⇒   x − ct = s.

Thus we see that the solution is constant along characteristics: a point (x, t) on a characteristic
means that x − ct = s = constant, and the solution is given by u(x, t) = F (s) = F (x − ct) on
each line x − ct = s, s ∈ R.
A generalization of this problem is the case in which c = c(x, t). Then the characteristics
are determined by
    dx/dτ = c(x, t),   x(s, 0) = s,    dt/dτ = 1,   t(s, 0) = 0,

and along this curve

    du/dt (x(t), t) = ux dx/dt + ut dt/dt = ux c(x(t), t) + ut ≡ 0.

Hence u is constant on a characteristic.


Consider, for example,

    ut + 2tux = 0,    u(x, 0) = e^{−x²} .

The characteristics are determined by

    dx/dt = 2t,

which yields the parabolas

    x = t² + k,   k constant.

The characteristic through a point (ξ, 0) is x = t² + ξ. Since u is constant on this curve we
have

    u(x, t) = exp(−ξ²) = e^{−(x−t²)²} .
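A symbolic check (sketch assuming SymPy) confirms that the value e^{−ξ²} transported along the parabolas x = t² + ξ really does solve the problem:

```python
import sympy as sp

x, t = sp.symbols('x t')

# u is constant on each parabola x = t^2 + xi, where u = exp(-xi^2)
u = sp.exp(-(x - t**2)**2)

residual = sp.diff(u, t) + 2*t*sp.diff(u, x)
assert sp.simplify(residual) == 0
assert u.subs(t, 0) == sp.exp(-x**2)   # initial condition
```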

Example 6.2.17. In the study of fluid flow an important physical characteristic is the
formation of shock waves. The simplest example of the formation of shocks can be witnessed
in the study of certain quasi-linear equations in R2 called hyperbolic conservation laws.
These are equations of the form

ut + c(u)ux = 0, x ∈ R, t > 0 (6.2.16)

subject to initial conditions

u(x, 0) = ϕ(x), x ∈ R. (6.2.17)

The characteristics are determined by

    dx/dt = c(u).
Along this curve

    du/dt (x(t), t) = ux dx/dt + ut dt/dt = ux c(u) + ut ≡ 0.
Hence u is constant on a characteristic. The characteristics are straight lines since
    d²x/dt² = d/dt (dx/dt) = d/dt c(u) = c′(u) du/dt = 0,

which implies that x = C1 t + C2 . Now since u is constant on a characteristic, on the
characteristic passing through (ξ, 0) we have

    dx/dt = c(u(x(t), t)) = c(u(ξ, 0)) = c(ϕ(ξ)).
Thus we see that the slope of the characteristics depends on c(u) and the initial data. The
equation for the characteristic passing through (ξ, 0) is given by
    x = c(ϕ(ξ)) t + ξ,

and the solution on this line is given by

    u(x, t) = ϕ(ξ) = ϕ(x − c(ϕ(ξ))t)

where x = c(ϕ(ξ)) t + ξ.
[Figure: the characteristic through (ξ, 0) in the (x, t)-plane, carrying the value ϕ(ξ) to
the point (x, t).]

Example 6.2.18. One particularly famous example of a hyperbolic conservation law which
is often used as a one dimensional model for the Navier-Stokes equations is the Burgers’
equation given by

ut + uux = 0, x ∈ R, t > 0. (6.2.18)

Let us consider (6.2.18) subject to initial conditions

    u(x, 0) = ϕ(x) =  2,       x < 0,
                      2 − x,   0 ≤ x ≤ 1,        (6.2.19)
                      1,       x > 1.

For x < 0, the characteristics have slope (or speed) 1/2; for 0 ≤ x ≤ 1 the slope is 1/(2 − x)
and for x > 1 the slope is 1.

Let us demonstrate that u − ϕ(x − c(u)t) = 0 implicitly defines a solution u(x, t) of the
equation. First, differentiating with respect to x, we have
    ux = ϕ′(x − c(u)t) [x − c(u)t]x = ϕ′(x − c(u)t) [1 − c′(u)ux t],

and we can solve for ux

    ux = ϕ′(x − c(u)t) / (1 + tc′(u)ϕ′(x − c(u)t)).        (6.2.20)

Now we differentiate with respect to t, to obtain


    ut = ϕ′(x − c(u)t) [x − c(u)t]t = −ϕ′(x − c(u)t) [tc′(u)ut + c(u)],

and we can solve for ut

    ut = −c(u)ϕ′(x − c(u)t) / (1 + tc′(u)ϕ′(x − c(u)t)).   (6.2.21)

So, combining (6.2.20) and (6.2.21), we have

    ut + c(u)ux = −c(u)ϕ′(x − c(u)t) / (1 + tc′(u)ϕ′(x − c(u)t))
                    + c(u) ϕ′(x − c(u)t) / (1 + tc′(u)ϕ′(x − c(u)t))

                = (−c(u)ϕ′(x − c(u)t) + c(u)ϕ′(x − c(u)t)) / (1 + tc′(u)ϕ′(x − c(u)t)) = 0,

and
u(x, 0) = ϕ(x − 0) = ϕ(x).

[Figure: characteristics of (6.2.18)–(6.2.19) in the (x, t)-plane; the characteristics issued
from 0 ≤ ξ ≤ 1 cross at t = 1, so the solution exists only for t < 1.]

From the picture we see that solutions cannot exist for t > 1 since the characteristics
cross beyond that line and the values of u on the intersecting characteristics are different –

thus, as a function, u is not well defined. More specifically, recall that our solution is defined
by
u(x, t) = ϕ(ξ) where x = ϕ(ξ)t + ξ.
The characteristics are described by

    x =  2t + ξ,         ξ < 0,
         (2 − ξ)t + ξ,   0 ≤ ξ ≤ 1,
         t + ξ,          ξ > 1.
We can compute that the characteristics intersect at

    ((2 − ξ)t + ξ)|ξ=0 = (t + ξ)|ξ=1 ,

or 2t = 1 + t, i.e., t = 1. For t < 1 we have


    u(x, t) =  ϕ(ξ) = 2,   x < 2t      (ξ < 0),
               ϕ(ξ) = 1,   x > t + 1   (ξ > 1).
[Figure: the characteristics x = 2t (carrying u = 2) and x = t + 1 (carrying u = 1) meet
at t = 1; between them the characteristics fan out from the ramp 0 ≤ ξ ≤ 1.]

If 0 ≤ ξ ≤ 1 the characteristic passing through (ξ, 0) is x = (2 − ξ)t + ξ, which implies
ξ = (x − 2t)/(1 − t) and so

    u(x, t) = 2 − (x − 2t)/(1 − t) = (2 − x)/(1 − t),    2t ≤ x ≤ t + 1,  t < 1.
[Figure: profiles of the solution u(·, t); the ramp joining u = 2 to u = 1 steepens as t → 1.]
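The piecewise formulas above can be packaged as a small routine (a sketch; the function name is ours), which also lets one confirm that the three pieces match continuously along the characteristics x = 2t and x = t + 1 for t < 1:

```python
def burgers_solution(x, t):
    """Solution of u_t + u u_x = 0 with the ramp data (6.2.19); valid only for 0 <= t < 1."""
    assert 0 <= t < 1
    if x < 2*t:
        return 2.0
    if x > t + 1:
        return 1.0
    return (2.0 - x)/(1.0 - t)   # the fan issued from the ramp 0 <= xi <= 1

# Continuity across the bounding characteristics at, say, t = 0.5
t = 0.5
assert abs(burgers_solution(2*t, t) - 2.0) < 1e-12     # matches u = 2 at x = 2t
assert abs(burgers_solution(t + 1, t) - 1.0) < 1e-12   # matches u = 1 at x = t + 1
```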

At t = 1, referred to as the “breaking time,” a “shock” develops. As an exercise you will
show that for general c(u), the breaking time is given by

    tb = min over ξ of  −1/(ϕ′(ξ) c′(ϕ(ξ))),    tb > 0,

at which

    1 + c′(ϕ(ξ))ϕ′(ξ) t = 0.
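For the Burgers example above this formula can be evaluated directly (a sketch assuming SymPy): on the ramp ϕ(ξ) = 2 − ξ we have ϕ′(ξ) = −1 and c′(u) = 1, so every ramp characteristic focuses at t = 1:

```python
import sympy as sp

xi, t = sp.symbols('xi t')

phi = 2 - xi        # ramp portion of the data (6.2.19), 0 <= xi <= 1
c_of_phi = phi      # for Burgers' equation c(u) = u, so c(phi(xi)) = phi(xi)

# Each characteristic focuses when 1 + c'(phi(xi)) phi'(xi) t = 0;
# note that c'(phi(xi)) phi'(xi) = d/dxi [c(phi(xi))].
focus_time = sp.solve(sp.Eq(1 + sp.diff(c_of_phi, xi)*t, 0), t)[0]
assert focus_time == 1   # the breaking time t_b = 1, as found above
```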

Exercise Set 1: First Order Linear and Quasi-Linear Equations


1. Solve the initial value problems
(a) yzx − xzy = 2xyz for z = t2 on C: x = t, y = t, t > 0.
(b) zzx + yzy = x with z = 2t on C: x = t, y = t, t ∈ R.
2. Solve ∂x u + ∂y u = u with u = cos(x) when y = 0.
3. Solve x2 ∂x u + y 2 ∂y u = u2 with u = 1 when y = 2x.
4. Solve u∂x u + ∂y u = 1 with u = 0 when y = x. What happens in this problem if we
replace u = 0 by u = 1?
5. Solve ux + uy = u2 with the initial condition u(x, 0) = h(x).
6. Show that for the initial value problem

       ut + c(u)ux = 0,   x ∈ R, t > 0,
       u(x, 0) = φ(x),

   the breaking time will occur at the minimum value of t for which

       tb = min over ξ of  −1/(φ′(ξ) c′(φ(ξ)))

   for which

       1 + φ′(ξ)c′(φ(ξ))t = 0.
7. (a) Solve the initial value problem
uux + ut = 0, u(x, 0) = f (x).
(b) If f (x) = x, show that the solution exists for all t > 0.

(c) If f (x) = −x, show that a shock develops, that is, the solution blows up in finite
time.





8. Let D be a constant and H the Heaviside function. Solve

       ut + c ux = 0,   x ∈ R, t > 0,
       u(x, 0) = −H(−x) x e^{x/D} ,

   if (c0 is a constant)

   a) c(x) = c0 (1 − x/L),
   b) c(x, t) = c0 (1 − x/L − t/T ).

6.3 Characteristics and Higher Order Equations


6.3.1 Characteristics and Classification of 2nd Order Equations

Consider a second order operator

    L = Σ_{i,j=1}^{n} aij (x) ∂²/∂xi ∂xj                   (6.3.1)

where aij are real valued functions in Ω ⊂ Rn and aij = aji . Fix a point x0 ∈ Ω. The
characteristic polynomial is given by

    σx0 (L, ξ) = Σ_{i,j=1}^{n} aij (x0 ) ξi ξj .           (6.3.2)

We say that the operator L is:


1. Elliptic at x0 if the quadratic form (6.3.2) is non-singular and definite, i.e., can be
   reduced by a real linear transformation to the form

       Σ_{i=1}^{n} ãi ξi² + l. o. t.

2. Hyperbolic at x0 if the quadratic form (6.3.2) is non-singular and indefinite and can be
   reduced by a real linear transformation to a sum of n squares, (n − 1) of the same sign,
   i.e., to the form

       ξ1² − Σ_{i=2}^{n} ãi ξi² + l. o. t.

3. Ultra-Hyperbolic at x0 if the quadratic form (6.3.2) is non-singular and indefinite and
   can be reduced by a real linear transformation to a sum of n squares, (n ≥ 4) with
   more than one term of either sign.

4. Parabolic at x0 if the quadratic form (6.3.2) is singular, i.e., can be reduced by a real
linear transformation to a sum of fewer than n squares, (not necessarily of the same
sign).
It can be shown that in the constant coefficient case a reduction to one of these forms is
always possible with a simple constant matrix transformation of coordinates.
The case of two independent variables and non-constant coefficients can also be analyzed.
    a ∂²u/∂x² + 2b ∂²u/∂x∂y + c ∂²u/∂y² + F (x, y, u, ux , uy ) = 0    (6.3.3)

where a = a(x, y), b = b(x, y), c = c(x, y) are C 2 real valued functions in Ω ⊂ R2 and (a, b, c)
doesn’t vanish at any point. Let us restrict to the linear case

    a ∂²u/∂x² + 2b ∂²u/∂x∂y + c ∂²u/∂y² + d ∂u/∂x + e ∂u/∂y + f u = g    (6.3.4)
Example 6.3.1. Consider the one dimensional wave equation

uxx − uyy = 0.

Let
α = ϕ(x, y) = x + y, β = ψ(x, y) = x − y.
So we have
    x = Φ(α, β) = (α + β)/2,   y = Ψ(α, β) = (α − β)/2.
We can express the solutions in terms of the (x, y) or (α, β) coordinates

u(x, y) = u(Φ(α, β), Ψ(α, β)) = U (α, β).

By the chain rule we have

ux = Uα ϕx + Uβ ψx = Uα + Uβ , (6.3.5)
uxx = Uαα + 2Uαβ + Uββ , (6.3.6)

uy = Uα ϕy + Uβ ψy = Uα − Uβ , (6.3.7)
uyy = Uαα − 2Uαβ + Uββ . (6.3.8)

Thus we have
0 = uxx − uyy = 4Uαβ , or Uαβ = 0.
The general solution of this equation is given by

U (α, β) = G(α) + F (β)

which, in turn, implies


u(x, y) = G(x + y) + F (x − y).
This solution represents a pair of waves traveling at the same speed but in opposite directions.
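This general solution is easy to verify symbolically for arbitrary C² profiles F and G (sketch assuming SymPy):

```python
import sympy as sp

x, y = sp.symbols('x y')
F, G = sp.Function('F'), sp.Function('G')

u = G(x + y) + F(x - y)   # the general solution found above

residual = sp.diff(u, x, 2) - sp.diff(u, y, 2)
assert sp.simplify(residual) == 0   # u_xx - u_yy = 0 for any F, G
```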

Returning to (6.3.4), if we introduce a nonsingular C 1 coordinate transformation

    α = ϕ(x, y),   β = ψ(x, y),

i.e., the Jacobian is not zero:

    |J| = det [ ∂α/∂x  ∂α/∂y ]
              [ ∂β/∂x  ∂β/∂y ]  ≠ 0.
Applying the chain rule we have

ux = uα ϕx + uβ ψx , uy = uα ϕy + uβ ψy ,
uxx = uαα ϕ2x + 2uαβ ϕx ψx + uββ ψx2 + uα ϕxx + uβ ψxx ,
uyy = uαα ϕ2y + 2uαβ ϕy ψy + uββ ψy2 + uα ϕyy + uβ ψyy ,
uxy = uαα ϕx ϕy + uαβ (ϕx ψy + ϕy ψx ) + uββ ψx ψy + uα ϕxy + uβ ψxy .

Then the equation (6.3.4) is transformed into a new equation of exactly the same form

    ã ∂²u/∂α² + 2b̃ ∂²u/∂α∂β + c̃ ∂²u/∂β² + d̃ ∂u/∂α + ẽ ∂u/∂β + f̃ u = g̃    (6.3.9)
where

1. ã = aαx² + 2bαx αy + cαy²

2. b̃ = aαx βx + b(αx βy + αy βx ) + cαy βy

3. c̃ = aβx² + 2bβx βy + cβy²
                                                                (6.3.10)
4. d̃ = aαxx + 2bαxy + cαyy + dαx + eαy

5. ẽ = aβxx + 2bβxy + cβyy + dβx + eβy

6. f̃ = f and g̃ = g

Furthermore, we have the following extremely important invariance


    D̃ = (b̃² − ãc̃) = (b² − ac)J² = DJ².
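This identity can be confirmed by brute-force expansion from the formulas (6.3.10); the following sketch (assuming SymPy) treats the partial derivatives of α and β as independent symbols:

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
ax, ay, bx, by = sp.symbols('alpha_x alpha_y beta_x beta_y')

# Transformed principal coefficients from (6.3.10)
at = a*ax**2 + 2*b*ax*ay + c*ay**2
bt = a*ax*bx + b*(ax*by + ay*bx) + c*ay*by
ct = a*bx**2 + 2*b*bx*by + c*by**2

J = ax*by - ay*bx                      # Jacobian of the coordinate change
assert sp.expand((bt**2 - at*ct) - (b**2 - a*c)*J**2) == 0   # D~ = D J^2
```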

With this we then obtain the following theorem. The proof is constructive.
Theorem 6.3.2. D is positive, negative or zero if and only if D̃ is. Furthermore, we have

1. D > 0 implies Hyperbolic



2. D = 0 implies Parabolic

3. D < 0 implies Elliptic


Indeed, α and β can be chosen so that one and only one of the following hold:

1. D > 0 implies ã = c̃ = 0

2. D = 0 implies ã = b̃ = 0

3. D < 0 implies ã = c̃ and b̃ = 0
The proof of Theorem 6.3.2 is based on the invariance of the discriminant and the ability
to transform equations to a canonical form. The canonical forms only involve the second
order terms and are given by:
I. Hyperbolic

a) uαβ + l.o.t. = 0
b) uαα − uββ + l.o.t. = 0

II. Parabolic

uαα + l.o.t. = 0

III. Elliptic

uαα + uββ + l.o.t. = 0

Remark 6.3.3. One reason why we are interested in classifying equations into such cate-
gories is that most theoretical developments for 2nd order linear equations assume that the
equations are already written in one of the three basic canonical forms. In addition many
numerical routines for solving a PDE assume that the equation is given in one of these
canonical forms. Lower order terms are usually handled separately as a subroutine.
    Concerning the proof of Theorem 6.3.2, we will consider the cases I. and II. in some detail
and give a reference to where the (somewhat more complicated) case III. can be found in
the literature.

I. Hyperbolic Case: Assume that D = b² − ac > 0. If a = c = 0 we are done so we assume
   without loss of generality that a ≠ 0. We seek α = ϕ(x, y) and β = ψ(x, y) so that ã
   and c̃ are zero. We first note that the calculations (6.3.10) suggest that we consider an
   expression of the form

    a vx² + 2b vx vy + c vy² = 0                    (6.3.11)



where v could represent either ϕ or ψ. We note that (6.3.11) can be factored into

    a [vx − ((−b + √(b² − ac))/a) vy ] [vx − ((−b − √(b² − ac))/a) vy ].    (6.3.12)

Thus from (6.3.12) we seek ϕ and ψ so that, for example,

    ϕx − ((−b + √(b² − ac))/a) ϕy = 0,              (6.3.13)

and

    ψx − ((−b − √(b² − ac))/a) ψy = 0.              (6.3.14)

With these choices (6.3.11) is satisfied with v given by ϕ and ψ. In addition we must
impose a noncharacteristic solvability condition
    ∂(ϕ, ψ)/∂(x, y) = det [ ϕx  ϕy ]
                          [ ψx  ψy ]
                    = ((−b + √(b² − ac))/a) ϕy ψy − ((−b − √(b² − ac))/a) ϕy ψy
                    = (2√(b² − ac)/a) ϕy ψy ≠ 0.    (6.3.15)
a

A solution of (6.3.13) is found by solving

    dx/dt = 1,   dy/dt = −((−b + √(b² − ac))/a),   dϕ/dt = 0.

For this problem we take the initial conditions

    x0 (s) = x0 ,   y0 (s) = y0 + s,   ϕ0 (s) = s.

With this choice we find (in our earlier notation we used a and b which are not the
same as those occurring in the present problem)

    det [ a  x0 ′(s) ]  =  det [ 1                    0 ]
        [ b  y0 ′(s) ]         [ (b − √(b² − ac))/a   1 ]  ≠ 0.      (6.3.16)

Moreover, ϕ(s, t) = s and so

    ϕs = 1 = ϕx xs + ϕy ys .
If ϕy = 0, then it follows from (6.3.13) that ϕx = 0, a contradiction. Now, as we have
learned in our earlier work, the condition (6.3.16) guarantees that we can solve for
(s, t) in terms of (x, y) to obtain ϕ(x, y).
Similarly, we can obtain ψ(x, y) by solving
    dx/dt = 1,   dy/dt = (b + √(b² − ac))/a,   dψ/dt = 0.

For this problem we take the initial conditions

    x0 (s) = x0 ,   y0 (s) = y0 + s,   ψ0 (s) = s.

Again, ψ(s, t) = s and as above ψy ≠ 0. Thus J((ϕ, ψ)/(x, y)) ≠ 0.
In summary we have found a change of coordinates α = ϕ(x, y), β = ψ(x, y) that
reduces the equation to

    2b̃ uαβ + l.o.t. = 0.

It is easy to show that

    b̃ = (2(ac − b²)/a) ϕy ψy                       (6.3.17)

and so b̃ ≠ 0 and we can divide by it to put the equation into the normal form

    uαβ = F (α, β, u, uα , uβ ).

Remark 6.3.4. (a) The reduction to normal form is a local result.


(b) The special initial curve can be replaced by any curve (x0 (s), y0 (s), ϕ0 (s)) with
    ϕ0 (s) = s and (x0 (s), y0 (s)) that specifies a curve in the xy-plane where the PDE
    is hyperbolic.
(c) The curves ϕ(x, y) = constant, ψ(x, y) = constant are called characteristics of
(6.3.4). Further, equation (6.3.11) is called the characteristic equation for the
PDE.

(d) Note that the characteristics for the first order PDE’s for ϕ and ψ are determined
    by

        dx/dt = 1,   dy/dt = (b ± √(b² − ac))/a,

    or

        dy/dx = (b ± √(b² − ac))/a.

    But if ϕ(x, y) = constant and ϕ solves (6.3.13), then

        ϕx + ϕy dy/dx = 0,

    and so

        dy/dx = −ϕx /ϕy = (b − √(b² − ac))/a.      (6.3.18)

Hence the characteristics for a 2nd order PDE (6.3.4) coincide with the charac-
teristics of the associated 1st order PDE (6.3.13) (Similarly for ψ).
(e) In order to obtain the other hyperbolic form we set

        ξ = α + β,   η = α − β,

    so that

        α = (ξ + η)/2,   β = (ξ − η)/2,

    and

        uα = uξ ξα + uη ηα = uξ + uη ,   uβ = uξ ξβ + uη ηβ = uξ − uη .

    Thus we have

        uαβ = (uξξ − uξη ) + (uηξ − uηη ) = uξξ − uηη .

Example 6.3.5. Consider the equation y² uxx − x² uyy = 0, x > 0, y > 0. Here
b² − ac = x²y² > 0. The characteristics are given by (see (6.3.18))

    dy/dx = (b + √(b² − ac))/a = x/y,

and

    dy/dx = (b − √(b² − ac))/a = −x/y.

Thus the characteristic curves are given by

y 2 − x2 = constant, y 2 + x2 = constant.

We take
α = ϕ(x, y) = x2 + y 2 , β = ψ(x, y) = x2 − y 2 .
We solve for x, y to obtain
    x = √((α + β)/2),   y = √((α − β)/2).
2 2
If we use (6.3.10) we find

    b̃ = 8x²y² = 2(α² − β²),
    d̃ = 2(y² − x²) = −2β,
    ẽ = 2(y² + x²) = 2α,

and also

    uαβ = (2β uα − 2α uβ )/(4(α² − β²)) = (β uα − α uβ )/(2(α² − β²)).
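The coefficient computations in this example can be double-checked mechanically from (6.3.10) (sketch assuming SymPy):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)

a, b, c = y**2, sp.Integer(0), -x**2       # y^2 u_xx - x^2 u_yy = 0
alpha, beta = x**2 + y**2, x**2 - y**2     # the new coordinates chosen above

ax, ay = sp.diff(alpha, x), sp.diff(alpha, y)
bx, by = sp.diff(beta, x), sp.diff(beta, y)

at = a*ax**2 + 2*b*ax*ay + c*ay**2
bt = a*ax*bx + b*(ax*by + ay*bx) + c*ay*by
ct = a*bx**2 + 2*b*bx*by + c*by**2
dt = a*sp.diff(alpha, x, 2) + 2*b*sp.diff(alpha, x, y) + c*sp.diff(alpha, y, 2)
et = a*sp.diff(beta, x, 2) + 2*b*sp.diff(beta, x, y) + c*sp.diff(beta, y, 2)

assert sp.simplify(at) == 0 and sp.simplify(ct) == 0   # hyperbolic normal form
assert sp.simplify(bt - 8*x**2*y**2) == 0
assert sp.simplify(dt - 2*(y**2 - x**2)) == 0
assert sp.simplify(et - 2*(y**2 + x**2)) == 0
```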
Example 6.3.6. Suppose that a, b, c, d, e, f in (6.3.4) are constants. Then the char-
acteristics are given by

    dy/dx = (b ± √(b² − ac))/a ≡ ν± .

Thus the characteristics are straight lines

    α = ϕ(x, y) = y − ν+ x,
    β = ψ(x, y) = y − ν− x.

In this case we get ã = c̃ = 0 and

    b̃ = aϕx ψx + b(ϕx ψy + ϕy ψx ) + cϕy ψy
       = aν+ ν− − b(ν+ + ν− ) + c
       = 2(ac − b²)/a.

For the first order terms we have

    d̃ = aϕxx + 2bϕxy + cϕyy + dϕx + eϕy = −dν+ + e ≡ d0 ,

    ẽ = aψxx + 2bψxy + cψyy + dψx + eψy = −dν− + e ≡ e0 .

Recalling that there is a factor of 2 on the b̃ term and multiplying by K, the transformed
equation is

    uαβ + Kd0 uα + Ke0 uβ + Kf u = Kg,

where

    K = a/(4(ac − b²)).

We note that a further reduction is always possible in this case. Namely we can remove
the first order terms. Let

    u = e^{λα+µβ} v.

Then we have

    uα = e^{λα+µβ} (λv + vα ),
    uβ = e^{λα+µβ} (µv + vβ ),
    uαα = e^{λα+µβ} (vαα + 2λvα + λ²v),
    uββ = e^{λα+µβ} (vββ + 2µvβ + µ²v),
    uαβ = e^{λα+µβ} (vαβ + λvβ + µvα + λµv).

If we choose µ = −d0 K, λ = −e0 K and define

    f1 = Kf − K²d0 e0 ,   g1 = e^{−(λα+µβ)} Kg,

then the equation can be written in terms of v as

    vαβ + f1 v = g1 .

II. Parabolic Case: Assume that D = b² − ac = 0. In this case, if a = 0, then b = 0 and
    we are done since c ≠ 0 (otherwise the equation is not truly second order). Thus we
    assume that a ≠ 0 and the characteristic equation (6.3.11) factors as

        a (vx + (b/a) vy )² = 0.                    (6.3.19)

Thus we need to find a solution ϕ(x, y) of (6.3.19) and then select ψ(x, y) so that

    ∂(ϕ, ψ)/∂(x, y) = det [ ϕx  ϕy ]
                          [ ψx  ψy ]
                    = ϕx ψy − ψx ϕy
                    = −(b/a) ϕy ψy − ψx ϕy
                    = −ϕy (ψx + (b/a) ψy ) ≠ 0.

We may, for example, take ψ = x and prescribe initial conditions for (6.3.19) so that
ϕy ≠ 0. With these choices, ã = 0 (since ϕ satisfies (6.3.19)) and from (6.3.10) we find

    b̃ = aϕx ψx + b(ϕx ψy + ϕy ψx ) + cϕy ψy
       = a(−(b/a)ϕy )ψx + b[(−(b/a)ϕy )ψy + ϕy ψx ] + cϕy ψy
       = ϕy [−bψx − (b²/a)ψy + bψx + cψy ]
       = −ϕy ψy [b²/a − c] = 0.

Thus with α = ϕ(x, y), β = ψ(x, y), (6.3.4) becomes

    c̃ uββ + d̃ uα + ẽ uβ + f̃ u = g̃.

Furthermore, since b² = ac, and with the choice ψ = x we have

    c̃ = aψx² + 2bψx ψy + cψy² = a(ψx + (b/a)ψy )² = a ≠ 0.

Therefore we can divide by c̃ = a to get the desired form.
If ϕ(x, y) is a solution of (6.3.19) and ϕ(x, y) = constant, then it follows from (6.3.18)
that the characteristics could be obtained from

    dy/dx = b/a.
Example 6.3.7. Let us transform the equation

uxx − 2xuxy + x2 uyy − 2uy = 0



into canonical form. Here we have b² − ac = x² − x² = 0, so the equation is of parabolic
type and the characteristics are found from

    dy/dx = b/a = −x,

or

    y = −x²/2 + c.

Take

    ϕ(x, y) = y + x²/2,   ψ(x, y) = x.

Then

    d̃ = aϕxx + 2bϕxy + cϕyy + dϕx + eϕy = (1)(1) + 0 + 0 + 0 + (−2)(1) = −1,

    ẽ = aψxx + 2bψxy + cψyy + dψx + eψy = 0,

and so

    uββ − uα = 0.
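We can spot-check the reduction: V(α, β) = e^{α+β} solves Vββ − Vα = 0, so pulling it back through α = y + x²/2, β = x should produce a solution of the original equation (sketch assuming SymPy):

```python
import sympy as sp

x, y = sp.symbols('x y')

# V(alpha, beta) = exp(alpha + beta) solves V_bb - V_a = 0,
# and here alpha = y + x^2/2, beta = x, so u = exp(y + x^2/2 + x)
u = sp.exp(y + x**2/2 + x)

residual = (sp.diff(u, x, 2) - 2*x*sp.diff(u, x, y)
            + x**2*sp.diff(u, y, 2) - 2*sp.diff(u, y))
assert sp.simplify(residual) == 0   # u solves u_xx - 2x u_xy + x^2 u_yy - 2u_y = 0
```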

III. Elliptic Case: Assume that D = b² − ac < 0. For this case we choose to proceed
     in a purely formal fashion. This problem turns out to be rather messy and lengthy
     to carry out in detail. For a detailed treatment we refer to the text by Garabedian
     [6]. In particular, we will assume that a, b and c are analytic functions near the
     point in question (x0 , y0 ). This can be avoided but not without great difficulty. The
     characteristic equation (6.3.11) factors in this case into

         a (vx − ((−b + i√(ac − b²))/a) vy ) (vx − ((−b − i√(ac − b²))/a) vy ) = 0.

     We seek a solution of the characteristic equation solving, for example,

         vx − ((−b + i√(ac − b²))/a) vy = 0.
Note that in the present case we must have ac > 0, which means that a and c have the
same sign and are not zero. We seek a holomorphic function ϕ = ϕ1 + iϕ2 such that

    aϕx + (b + i√(ac − b²)) ϕy = 0,

or

    a(ϕ1x + iϕ2x ) + (b + i√(ac − b²))(ϕ1y + iϕ2y ) = 0.

Note that, since we have assumed that a, b, c are real, the solution ψ of

    aψx + (b − i√(ac − b²)) ψy = 0

is given by ψ = ϕ̄ = ϕ1 − iϕ2 .
To argue that these complex equations are solvable we collect real and imaginary
parts in the equation for ϕ to obtain a first order linear system of partial differential
equations

    aϕ1x + bϕ1y − √(ac − b²) ϕ2y = 0,
    aϕ2x + bϕ2y + √(ac − b²) ϕ1y = 0,

or

    ∂/∂x [ ϕ1 ]  =  (1/a) [ −b             √(ac − b²) ]  ∂/∂y [ ϕ1 ]
         [ ϕ2 ]           [ −√(ac − b²)   −b          ]       [ ϕ2 ] .
If we prescribe any Cauchy initial data

    ϕ1 (0, y) = g1 (y),   ϕ2 (0, y) = g2 (y),
this system has a unique solution by the Cauchy-Kovalevski Theorem.
We now introduce the real transformations
ξ = ϕ1 = Re ϕ, η = ϕ2 = Im ϕ
and note that

    det J = det [ ϕ1x  ϕ1y ]
                [ ϕ2x  ϕ2y ]

          = (1/a) det [ −bϕ1y + √(ac − b²) ϕ2y     ϕ1y ]
                      [ −√(ac − b²) ϕ1y − bϕ2y     ϕ2y ]

          = (√(ac − b²)/a) (ϕ1y² + ϕ2y²) ≠ 0.

Thus, by our construction of ϕ, we have

    aϕx² + 2bϕx ϕy + cϕy² = 0,

which implies

a (ϕ1x + iϕ2x )2 + 2b (ϕ1x + iϕ2x ) (ϕ1y + iϕ2y ) + c (ϕ1y + iϕ2y )2 = 0.

Collecting the real parts we have


    a(ϕ1x² − ϕ2x²) + 2b(ϕ1x ϕ1y − ϕ2x ϕ2y ) + c(ϕ1y² − ϕ2y²) = 0.

We now collect the terms involving ξ = ϕ1 on the left and those involving η = ϕ2 on
the right to get
    aξx² + 2bξx ξy + cξy² = aηx² + 2bηx ηy + cηy² ,

which from (6.3.10) gives

    ã = c̃.
Next we collect the imaginary parts to get
    2(aϕ1x ϕ2x + b(ϕ2x ϕ1y + ϕ2y ϕ1x ) + cϕ1y ϕ2y ) = 0,

which implies

    b̃ = aξx ηx + b(ξy ηx + ξx ηy ) + cξy ηy = 0.

Also, since b̃ = 0, we know from invariance that

    D̃ = b̃² − ãc̃ = −ãc̃ < 0,

which implies that

    ã = c̃ ≠ 0.

Thus dividing by ã we arrive at

    uξξ + uηη = F̃ (ξ, η, u, uξ , uη ).

Example 6.3.8. Let us transform the equation

    x² uxx + y² uyy = 0,   x > 0, y > 0,

into canonical form.


In this case we have

    dy/dx = iy/x = (b + i√(ac − b²))/a,

or

    −ln(x) = i ln(y) + K,   K a constant.

We take

    α = ϕ1 (x, y) = ln(x),   β = ϕ2 (x, y) = ln(y).

Then

    ã = x²(1/x)² + 2b(0) + 0 = 1 = c̃,
    b̃ = 0,
    d̃ = x²(−1/x²) + 0 + y²(0) = −1,
    ẽ = x²(0) + 0 + y²(−1/y²) = −1.

Thus we have
uαα + uββ − uα − uβ = 0.
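As in the hyperbolic example, the coefficients can be recomputed mechanically from (6.3.10); here d = e = 0 in the original equation (sketch assuming SymPy):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)

a, b, c = x**2, sp.Integer(0), y**2     # x^2 u_xx + y^2 u_yy = 0
alpha, beta = sp.log(x), sp.log(y)

ax, ay = sp.diff(alpha, x), sp.diff(alpha, y)
bx, by = sp.diff(beta, x), sp.diff(beta, y)

at = a*ax**2 + 2*b*ax*ay + c*ay**2
bt = a*ax*bx + b*(ax*by + ay*bx) + c*ay*by
ct = a*bx**2 + 2*b*bx*by + c*by**2
dt = a*sp.diff(alpha, x, 2) + c*sp.diff(alpha, y, 2)   # d = e = 0 here
et = a*sp.diff(beta, x, 2) + c*sp.diff(beta, y, 2)

assert (sp.simplify(at), sp.simplify(bt), sp.simplify(ct)) == (1, 0, 1)
assert (sp.simplify(dt), sp.simplify(et)) == (-1, -1)
```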

We conclude this discussion by considering another example. The main point of this
example is to illustrate that the classification of a differential equation is a local result.

Example 6.3.9. The Tricomi Equation.

uyy − yuxx = 0.

For this equation, D = b² − ac = y. So when y < 0 the equation is elliptic, when y > 0
the equation is hyperbolic and when y = 0 the equation is parabolic.
    For this example a = −y, b = 0 and c = 1, so the characteristic equation

    dy/dx = (b ± √(b² − ac))/a

reduces to

    dy/dx = ±√(−ac)/a = ±1/√y.

For y > 0 the equation is hyperbolic and the characteristic curves are given by

    3x ± 2y^{3/2} = constant.

The transformations

    ξ = 3x − 2y^{3/2} ,   η = 3x + 2y^{3/2}

reduce the equation to the normal form

    uξη − (1/6)(uξ − uη )/(ξ − η) = 0.
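A quick check of the characteristic family (sketch assuming SymPy): writing w = (3x − C)/2 > 0, the curve 3x − 2y^{3/2} = C gives y = w^{2/3} with dx/dw = 2/3, and the slope dy/dx indeed equals 1/√y:

```python
import sympy as sp

w = sp.symbols('w', positive=True)   # w stands for (3x - C)/2, assumed positive

y = w**sp.Rational(2, 3)                   # y on the characteristic 3x - 2 y^(3/2) = C
dydx = sp.diff(y, w) / sp.Rational(2, 3)   # chain rule: dy/dx = (dy/dw)/(dx/dw)

assert sp.simplify(dydx - 1/sp.sqrt(y)) == 0   # dy/dx = 1/sqrt(y), as required
```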

[Figure: characteristics for the Tricomi equation — hyperbolic for y > 0, parabolic on
the line y = 0, elliptic for y < 0.]

Characteristics for Higher Order Equations


Let us return to the general notation for linear differential operators and characteristics
introduced at the beginning of the chapter. Namely, a kth order linear operator on Ω in Rn
is given by

    L = Σ_{|α|≤k} aα (x) ∂^α .

The principal part of the operator consists of all kth order terms

    P = Σ_{|α|=k} aα (x) ∂^α .

Let us consider some examples. For the Laplacian in two variables the operator and its
principal part are the same and given by

    L = P = ∂^(2,0) + ∂^(0,2) = ∂²/∂x1² + ∂²/∂x2² .

For the wave equation in R² we again have that the operator and principal part are the same:

    L = P = ∂^(2,0) − ∂^(0,2) = ∂²/∂x1² − ∂²/∂x2² .

For the heat operator in R² the operator and principal part are not the same:

    L = ∂^(2,0) − ∂^(0,1) = ∂²/∂x1² − ∂/∂x2 ,

and

    P = ∂^(2,0) = ∂²/∂x1² .
The characteristic form (or principal symbol) at x ∈ Ω is the homogeneous polynomial
of degree k defined for ξ ∈ Rn by

    χL (x, ξ) = Σ_{|α|=k} aα (x) ξ^α .

A vector ξ is characteristic or is called a characteristic direction, for L at x if


χL (x, ξ) = 0.
The characteristic variety is the set of all characteristic vectors ξ, i.e.,
    Charx (L) = {ξ ≠ 0 : χL (x, ξ) = 0}.
As we have already defined, a surface (or curve) S is said to be characteristic with respect
to L at x0 if the normal vector ν(x0 ) at x0 defines a characteristic direction.
Example 6.3.10. Consider
    L = a(x, y) ∂/∂x + b(x, y) ∂/∂y = a(1,0) ∂^(1,0) + a(0,1) ∂^(0,1) .
The characteristic form is
χL (x, ξ) = a(x, y)ξ1 + b(x, y)ξ2 .
Let C be a characteristic curve given parametrically by (x(t), y(t)). The tangent to this
curve is (x0 (t), y 0 (t)) and the normal is (y 0 (t), −x0 (t)). Since C is a characteristic curve we
must have
a(x(t), y(t))y 0 (t) − b(x(t), y(t))x0 (t) = 0.
Thus the characteristic curve can be found from
    dx/dt = a(x(t), y(t)),   dy/dt = b(x(t), y(t)),
just as we defined them earlier.

Example 6.3.11. Reconsider the wave equation in R2

    Lu = ∂^(2,0) u − ∂^(0,2) u = ∂²u/∂x1² − ∂²u/∂x2² = 0.

If ξ = (ξ1 , ξ2 ) defines a characteristic direction, then we must have

ξ12 − ξ22 = 0.

Now if C : (x(t), y(t)) is a characteristic curve, let

    ξ1 = dy/dt,   ξ2 = −dx/dt
denote a normal to C. Then we have
    (dy/dt)² − (dx/dt)² = (dy/dt − dx/dt)(dy/dt + dx/dt) = 0.

This equation is satisfied if we take


    dy/dx = ±1,

or

    y − x = c,   y + x = c,
and we see that the characteristic curves are straight lines (just as we saw earlier).

Example 6.3.12. For the equation (6.3.4), the principal part is

    P u = a ∂²u/∂x² + 2b ∂²u/∂x∂y + c ∂²u/∂y² .

A characteristic curve (x(t), y(t)) has normal

    ξ1 = dy/dt,   ξ2 = −dx/dt
that satisfies
aξ12 + 2bξ1 ξ2 + cξ22 = 0,
or

    a(dy/dt)² − 2b(dy/dt)(dx/dt) + c(dx/dt)² = 0,

or

    a dy² − 2b dy dx + c dx² = 0,

or

    a [dy − ((b + √(b² − ac))/a) dx] [dy − ((b − √(b² − ac))/a) dx] = 0.
Hence the characteristic curves can be found by solving
    dy/dx = (b ± √(b² − ac))/a,

just as we saw earlier.

In an attempt to further illustrate the significance of characteristics consider the following


example.

Example 6.3.13. Consider the Cauchy problem that we looked at earlier

    Lu = ∂u/∂x = 0,   u(x, 0) = f (x),   (x, y) ∈ R².

From the PDE we must have u(x, y) = F (y) and the initial condition gives u(x, 0) =
f (x) = F (0) so that f must be a constant or the problem has no solution. If f (x) = c is a
constant then we can take any F (y) such that F (0) = c so we get infinitely many solutions.
Note that the characteristics for this example are determined by

    dy/dx = 0   ⇒   y = constant.
dx
So we have prescribed our initial data along a characteristic curve. Note that y is constant
on a characteristic.

6.3.2 The Cauchy Problem for Higher Order Equations


Let S be a hypersurface of class C k . If u is a C k−1 function defined near S, then the Cauchy data of u
on S is the set
{u, ∂ν u, · · · , ∂νk−1 u}.
We observe that we can consider the so-called Cauchy problem in a neighborhood of a
single point and then introduce a change of variables to transform the problem so that S
contains the origin and near the origin S coincides with the hyperplane xn = 0. Thus it
is often the case when treating higher order initial value problems that we distinguish the
variable xn and denote it by t and then define x = (x1 , · · · , xn−1 ) ∈ Rn−1 . Also we then
consider ∂t and ∂x and multi-indices α = (α1 , · · · , αn−1 ). With this the Cauchy Problem is:

Given functions {φj (x)}, j = 0, . . . , k − 1, solve

    F (x, t, (∂x^α ∂t^j u)_{|α|+j≤k} ) = 0

with

    ∂t^j u(x, 0) = φj (x),   0 ≤ j ≤ (k − 1).

This problem is far too general since the Cauchy data determine many derivatives on S.
Indeed, up to order k the only unknown is ∂t^k u. For the Cauchy problem to be well-posed it
must be assumed that F = 0 can be solved for ∂t^k u. This is precisely the condition that S
be noncharacteristic.
Numerous examples were given to illustrate this situation.
We could embark on an in-depth treatment of the Quasi-linear case. In particular, we
could consider the notion of noncharacteristic surface for the Cauchy problem: An initial
value problem
    Σ_{|α|+j=k} aα,j (x, t, (∂x^β ∂t^i u)_{|β|+i≤k−1} ) ∂x^α ∂t^j u = b(x, t, (∂x^β ∂t^i u)_{|β|+i≤k−1} ),

    ∂t^j u(x, 0) = φj (x),   0 ≤ j ≤ (k − 1),

is called non-characteristic if

    a0,k (x, 0, (∂x^α ∂t^j u)_{|α|+j≤k−1} ) ≠ 0.
With this we would be in a position to prove the fundamental existence and uniqueness
result (such as it is) for PDEs with analytic data. Rather than take such a deep
excursion, we will simply state the main result, the Cauchy-Kovalevski theorem.
Theorem 6.3.14. If G and \{\varphi_j\}_{j=0}^{k-1} are analytic near the origin, then the Cauchy problem
\[
  \begin{cases}
    \partial_t^k u = G\bigl(x, t, (\partial_x^\alpha \partial_t^j u)_{|\alpha|+j \le k,\ j < k}\bigr), \\[4pt]
    \partial_t^j u(x, 0) = \varphi_j(x), \quad 0 \le j < k,
  \end{cases}
  \tag{6.3.20}
\]
has a unique analytic solution defined in a neighborhood of the origin.
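The power-series mechanism behind the theorem can be seen concretely in a small computational sketch. The Cauchy problem u_t = u_x with datum \varphi(x) = e^x is our own illustrative choice (k = 1), not an example from the text: differentiating the PDE in t gives \partial_t^j u(x, 0) = \varphi^{(j)}(x) = e^x, and the resulting Taylor series in t reproduces the analytic solution u = e^{x+t}.

```python
import math

# Illustrative Cauchy problem (k = 1):  u_t = u_x,  u(x, 0) = exp(x).
# Differentiating the PDE in t gives  d^j/dt^j u(x, 0) = phi^(j)(x) = exp(x),
# so the Cauchy-Kovalevski power series about t = 0 is
#   u(x, t) ~ sum_j exp(x) * t**j / j!  =  exp(x) * exp(t).
def series_solution(x, t, N=20):
    """Truncated Cauchy-Kovalevski series with N terms."""
    return sum(math.exp(x) * t**j / math.factorial(j) for j in range(N))

x, t = 0.3, 0.7
approx = series_solution(x, t)
exact = math.exp(x + t)  # the unique analytic solution u = exp(x + t)
print(abs(approx - exact) < 1e-12)  # True
```

For small t the truncated series converges rapidly; of course, the theorem only guarantees convergence in some neighborhood of the origin, not globally.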
This result can also be stated for linear equations without distinguishing a special
variable, as follows. Consider a linear partial differential operator of order k,
\[
  L = \sum_{|\alpha| \le k} a_\alpha(x)\, \partial^\alpha.
\]
Let S be a surface in R^n with normal \nu(x) at x. Suppose that we prescribe initial
data by
\[
  u\big|_S = \varphi_1, \qquad \frac{\partial u}{\partial \nu}\Big|_S = \varphi_2, \qquad \cdots, \qquad \frac{\partial^{k-1} u}{\partial \nu^{k-1}}\Big|_S = \varphi_k.
\]
The following theorem is a slightly different statement of the Cauchy-Kovalevski theorem.

Theorem 6.3.15. Suppose that the coefficients a_\alpha, the right-hand side f in the
equation Lu = f, the data \varphi_1, \cdots, \varphi_k, and the surface S are analytic
(to say S is analytic means that if S is given by F(x_1, \cdots, x_n) = 0 then F is an
analytic function). Suppose further that the initial surface S is noncharacteristic at
x_0 \in S, i.e.,
\[
  \sum_{|\alpha| = k} a_\alpha(x_0)\, \bigl[\nu(x_0)\bigr]^\alpha \neq 0,
\]
where \nu(x_0) is the normal to S at x_0. Then the Cauchy problem has a solution u(x) in a
neighborhood of x_0. This solution is unique in the class of analytic solutions.
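As a quick illustration of the noncharacteristic condition (our own example, not from the text): for the Laplacian in R^2, every surface is noncharacteristic, since the principal part evaluated on a unit normal never vanishes.

```latex
% For L = \Delta = \partial_1^2 + \partial_2^2, the condition above with
% a unit normal \nu(x_0) = (\nu_1, \nu_2) reads
\[
  \sum_{|\alpha| = 2} a_\alpha(x_0) \bigl[\nu(x_0)\bigr]^\alpha
  = \nu_1^2 + \nu_2^2 = |\nu(x_0)|^2 = 1 \neq 0,
\]
% so no real surface is characteristic for the Laplacian, and the
% Cauchy-Kovalevski theorem applies on any analytic surface.
```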
Remark 6.3.16. 1. The Cauchy-Kovalevski theorem says that under certain conditions
we can seek a solution in the form of a power series. While sometimes useful, this is
not a very practical method of solution.
2. The Cauchy-Kovalevski theorem guarantees that the Cauchy problem
\[
  \begin{aligned}
    &\Delta u(x, y) = 0, && |x| < \infty,\ y > 0, \\
    &u(x, 0) = f(x), && |x| < \infty, \\
    &\frac{\partial u}{\partial y}(x, 0) = g(x), && |x| < \infty,
  \end{aligned}
\]
has a unique solution for every analytic f and g. It does not, however, say that the
problem is well posed. Recall the Hadamard example we discussed earlier.
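The failure of continuous dependence can be made quantitative with a small numerical sketch (our own illustration of the standard Hadamard family): u_n(x, y) = \sin(nx)\sinh(ny)/n^2 solves Laplace's equation with Cauchy data u_n(x, 0) = 0 and \partial_y u_n(x, 0) = \sin(nx)/n, which tends uniformly to zero, while u_n blows up for any fixed y > 0.

```python
import math

# Hadamard family (our own numerical illustration):
#   u_n(x, y) = sin(n x) * sinh(n y) / n**2
# solves Laplace's equation with Cauchy data
#   u_n(x, 0) = 0,   (d/dy) u_n(x, 0) = sin(n x) / n.
# The data tend uniformly to 0 as n grows, yet for any fixed y > 0 the
# solution blows up: the solution does not depend continuously on the data.
y = 0.5
for n in (1, 10, 50):
    data_size = 1.0 / n                  # sup_x |(d/dy) u_n(x, 0)|  -> 0
    sol_size = math.sinh(n * y) / n**2   # sup_x |u_n(x, y)|  -> infinity
    print(f"n={n:3d}  data={data_size:.3e}  solution={sol_size:.3e}")
```

Already at n = 50 the data have size 0.02 while the solution at y = 1/2 exceeds 10^6, which is the content of Hadamard's example.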
At this point we mention that there are numerous related topics that we would treat
if we had the time. Additional topics in this area include:
1. Holmgren uniqueness theorem
2. Non-continuous dependence example of Hadamard
3. The classical Lewy example of an equation with no C^1 solution for some C^\infty (non-
analytic) right-hand side
4. Malgrange-Ehrenpreis theorem on local solvability for constant coefficient operators.
Unfortunately, with our time constraints the best we can do is to state some of these
results for completeness.
The first is the Holmgren uniqueness theorem, which says that analyticity of the solution
is not required in the Cauchy-Kovalevski theorem to guarantee uniqueness.
Theorem 6.3.17 (Holmgren’s Uniqueness Theorem). Under the same assumptions as
the Cauchy-Kovalevski theorem, any two C k solutions in a neighborhood of x0 must coincide
in a perhaps smaller neighborhood of x0 .

The next theorem, due to Malgrange and Ehrenpreis, gives an existence result for general
constant coefficient equations.

Theorem 6.3.18 (Malgrange-Ehrenpreis Theorem). If
\[
  L = \sum_{|\alpha| \le k} a_\alpha \partial^\alpha
\]
is a kth order linear differential operator with constant coefficients on R^n and
f \in C_0^\infty(R^n) (the space of infinitely differentiable functions with compact
support), then there exists a u \in C^\infty(R^n) such that Lu = f.

On the flip side we should also mention the classical nonexistence example due to Hans
Lewy (1957). This result shattered all hopes of generalizing the Cauchy-Kovalevski theorem
to the non-analytic case.
Consider the differential operator L defined on R^3, with coordinates (x, y, t), given by
\[
  L = \frac{\partial}{\partial x} + i\,\frac{\partial}{\partial y} - 2i(x + iy)\,\frac{\partial}{\partial t}.
\]

Theorem 6.3.19 (Hans Lewy Theorem). Let f be a continuous real-valued function depending
only on t. If there is a C^1 function u of (x, y, t) satisfying Lu = f in some
neighborhood of the origin, then f must actually be analytic.

Once this result is known, it is possible to show that there are C^\infty functions g on
R^3 such that the equation Lu = g has no solution u \in C^{1+\alpha} (\alpha > 0) in any
neighborhood of any point.

Exercise Set 2: Classification and Canonical Forms


1. Classify the operators and find the characteristics (if any) through the point (0, 1):
(a) utt + uxt + uxx = 0
(b) utt + 4uxt + 4uxx = 0
(c) utt − 4uxt + uxx = 0
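For a constant-coefficient operator a u_tt + 2b u_xt + c u_xx, the type is decided by the sign of the discriminant b^2 - ac (positive: hyperbolic, zero: parabolic, negative: elliptic). A minimal sketch applying this test to the three operators in Problem 1 (the helper `classify` is our own):

```python
# Type of a constant-coefficient operator  a u_tt + 2b u_xt + c u_xx,
# decided by the sign of the discriminant  b^2 - a*c:
#   > 0 hyperbolic,  = 0 parabolic,  < 0 elliptic.
def classify(a, two_b, c):
    disc = (two_b / 2.0) ** 2 - a * c
    if disc > 0:
        return "hyperbolic"
    if disc == 0:
        return "parabolic"
    return "elliptic"

print(classify(1, 1, 1))   # (a) u_tt + u_xt + u_xx
print(classify(1, 4, 4))   # (b) u_tt + 4u_xt + 4u_xx
print(classify(1, -4, 1))  # (c) u_tt - 4u_xt + u_xx
```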
2. Find where the operators are hyperbolic, parabolic and elliptic:
(a) Lu = utt + tuxx + xux
(b) Lu = x2 utt − uxx + u
(c) Lu = tutt + 2uxt + xuxx + ux
3. Transform the equation to canonical form: 2uxx − 4uxy − 6uyy + ux = 0.
4. Transform the equations to canonical form:
(a) e2x uxx + 2ex+y uxy + e2y uyy + ux = 0.
(b) x2 uxx + y 2 uyy = 0 for x > 0, y > 0.
5. Transform the equations to canonical form:
(a) uxx + 2uxy + 17uyy = 0.
(b) 4uxx + 12uxy + 9uyy − 2ux + u = 0.
6. Find the characteristics of Lu = utt − tuxx through the point (0, 1).
7. Determine where the following equation is hyperbolic or parabolic.
yuxx + (x + y)uxy + xuyy = 0
Where the equation is hyperbolic, show that the general solution may be written as
\[
  u(x, y) = \frac{1}{y - x} \int f(\beta)\, d\beta + g(y - x), \qquad \text{where } \beta = y^2 - x^2.
\]
8. Classify the following equation and find the canonical form, uxx − 2uxt + utt = 0. Show
that the general solution is given by u(x, t) = tf (x + t) + g(x + t).
9. Show that (1 + x2 )uxx + (1 + y 2 )uyy + xux + yuy = 0 is elliptic and find the canonical
form of the equation.
