MAT5101 Lecture Notes
Applied Mathematics
Department of Mathematics,
Dokuz Eylül University,
35160 Buca, İzmir, Turkey.
1 Linear Algebra
Matrices and Linear Systems
Linear Systems
Gauss Elimination
Vector Spaces
Determinants
Cramer’s Rule
Sturm-Liouville Problems
Fourier Series
Wave Equation
Laplace Equation
Fourier Integral
The Fourier Sine and Cosine Integrals
Solving PDEs by Fourier Integrals
Linear Algebra
Each entry in (1) has two subscripts. The first one stands for the row
number and the second one stands for the column number. That is, a21
is the entry in the second row and first column.
If m = n, then the matrix A is called a square matrix. In this case, a11, a22, · · · , ann are called the main diagonal entries of the matrix A.
Example 1
Readily, we compute that
$$\begin{pmatrix}0&1&2\\9&8&7\end{pmatrix}+\begin{pmatrix}6&5&4\\3&4&5\end{pmatrix}=\begin{pmatrix}0+6&1+5&2+4\\9+3&8+4&7+5\end{pmatrix}=\begin{pmatrix}6&6&6\\12&12&12\end{pmatrix}.$$
Example 2
Obviously, we compute that
$$3\begin{pmatrix}4&5&6\\7&8&9\end{pmatrix}=\begin{pmatrix}3\cdot4&3\cdot5&3\cdot6\\3\cdot7&3\cdot8&3\cdot9\end{pmatrix}=\begin{pmatrix}12&15&18\\21&24&27\end{pmatrix}.$$
The condition n = p means that the second factor B must have as many
rows as the first factor A has columns. As a diagram of sizes, we can
give the following.
$$\underset{[m\times n]}{A}\;\underset{[n\times q]}{B}=\underset{[m\times q]}{C}.$$
The entry $c_{ik}$ of the product is obtained by multiplying the $i$-th row of $A$ with the $k$-th column of $B$ entrywise and summing:
$$c_{ik}=a_{i1}b_{1k}+a_{i2}b_{2k}+\cdots+a_{in}b_{nk}=\sum_{j=1}^{n}a_{ij}b_{jk}.$$
Example 3
Using Definition 4, we compute that
$$\begin{pmatrix}3&2\\4&-2\end{pmatrix}\begin{pmatrix}2&-1&3\\5&3&2\end{pmatrix}=\begin{pmatrix}3\times2+2\times5&3\times(-1)+2\times3&3\times3+2\times2\\4\times2+(-2)\times5&4\times(-1)+(-2)\times3&4\times3+(-2)\times2\end{pmatrix}=\begin{pmatrix}16&3&13\\-2&-10&8\end{pmatrix}.$$
4. (AB)T = B T AT .
Example 6
Consider that
$$\begin{pmatrix}3&2&7\\5&4&3\\7&6&4\end{pmatrix}=\begin{pmatrix}3&\frac72&7\\\frac72&4&\frac92\\7&\frac92&4\end{pmatrix}+\begin{pmatrix}0&-\frac32&0\\\frac32&0&-\frac32\\0&\frac32&0\end{pmatrix},$$
where the first and the second matrices on the right-hand side are
symmetric and skew-symmetric, respectively.
$$\begin{pmatrix}a_{11}&a_{12}&\cdots&a_{1n}\\&a_{22}&\cdots&a_{2n}\\&&\ddots&\vdots\\&&&a_{nn}\end{pmatrix}\qquad\begin{pmatrix}a_{11}&&&\\a_{21}&a_{22}&&\\\vdots&\vdots&\ddots&\\a_{n1}&a_{n2}&\cdots&a_{nn}\end{pmatrix}$$
Figure 2: Upper and lower triangular matrices, respectively. All the omitted entries are zero.
$$\begin{pmatrix}a_{11}&&&\\&a_{22}&&\\&&\ddots&\\&&&a_{nn}\end{pmatrix}\qquad\begin{pmatrix}1&&&\\&1&&\\&&\ddots&\\&&&1\end{pmatrix}$$
Figure 3: Diagonal and the identity matrices, respectively. All the omitted entries are zero.
Example 7
Show that $\frac{1}{\sqrt{2}}\begin{pmatrix}1&1\\1&-1\end{pmatrix}$ and $\frac{1}{3}\begin{pmatrix}2&-2&1\\1&2&2\\2&1&-2\end{pmatrix}$ are orthogonal matrices.
Ax = b,
We assume that aij are not all zero, so that A is not a zero matrix. Note
that x has n components and b has m components.
The matrix
$$\tilde{A}:=\left(\begin{array}{cccc|c}a_{11}&a_{12}&\cdots&a_{1n}&b_1\\a_{21}&a_{22}&\cdots&a_{2n}&b_2\\\vdots&\vdots&\ddots&\vdots&\vdots\\a_{m1}&a_{m2}&\cdots&a_{mn}&b_m\end{array}\right)$$
is called the augmented matrix of the system (3). The vertical line can be omitted (as we shall do later). It is merely a reminder that the last column of $\tilde{A}$ does not belong to A.
$$\begin{aligned}x_1-x_2+x_3&=0\\-x_1+x_2-x_3&=0\\10x_2+25x_3&=90\\20x_1+10x_2&=80.\end{aligned}$$
Step 1. Elimination of the first unknown. We mark the first row as the
pivot row and its first entry as the pivot. To eliminate x1 in the other
equations, we add 1 times the pivot row to the second row and add
(−20) times the pivot row to the fourth row. This yields
$$\left(\begin{array}{ccc|c}1&-1&1&0\\0&0&0&0\\0&10&25&90\\0&30&-20&80\end{array}\right).$$
Step 2. Elimination of the second unknown. First, push the zero row at
the end of the matrix
$$\left(\begin{array}{ccc|c}1&-1&1&0\\0&10&25&90\\0&30&-20&80\\0&0&0&0\end{array}\right).$$
Then, mark the second row as the pivot row and its first non-zero entry
as the pivot. To eliminate x2 in the other equations, we add (−3) times
the pivot row to the third row. Hence, the result is
$$\left(\begin{array}{ccc|c}1&-1&1&0\\0&10&25&90\\0&0&-95&-190\\0&0&0&0\end{array}\right).$$
Step 3. Back substitution. Now, working backwards from the last to the
first row of this triangular matrix, we can readily find x3 , x2 and x1 as
follows.
$$-95x_3=-190\implies x_3=\frac{-190}{-95}=2,$$
$$10x_2+25x_3=90\implies x_2=\frac{1}{10}(90-25\times2)=4,$$
$$x_1-x_2+x_3=0\implies x_1=4-2=2.$$
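The elimination and back-substitution steps above can be sketched in plain Python. This is a minimal illustration, not a production solver; `gauss_solve` is a name chosen here, and pivoting is handled only by swapping in a non-zero row when a pivot vanishes (exactly the "push the zero row to the end" step of the example).

```python
def gauss_solve(aug):
    """Solve a linear system given its augmented matrix (list of rows)."""
    rows = [row[:] for row in aug]
    n = len(rows[0]) - 1
    # Forward elimination: for each column, pick a non-zero pivot row,
    # swap it up, and annihilate the entries below it.
    r = 0
    for c in range(n):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(r + 1, len(rows)):
            m = rows[i][c] / rows[r][c]
            rows[i] = [a - m * b for a, b in zip(rows[i], rows[r])]
        r += 1
    # Back substitution on the row-reduced form.
    x = [0.0] * n
    for i in range(r - 1, -1, -1):
        lead = next(j for j in range(n) if abs(rows[i][j]) > 1e-12)
        s = sum(rows[i][j] * x[j] for j in range(lead + 1, n))
        x[lead] = (rows[i][-1] - s) / rows[i][lead]
    return x

aug = [[1, -1, 1, 0], [-1, 1, -1, 0], [0, 10, 25, 90], [20, 10, 0, 80]]
print(gauss_solve(aug))  # [2.0, 4.0, 2.0], i.e. x1 = 2, x2 = 4, x3 = 2
```

Running it on the augmented matrix of the example reproduces the solution found by hand.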
Definition 11
We now call a linear system S1 row-equivalent to another linear system
S2 if S1 can be obtained from S2 only by applying finitely many row
operations.
Thus, we can state the following theorem, which also justifies the Gauss
elimination.
Theorem 1
Row-equivalent linear systems have the same set of solutions.
At the end of the Gauss elimination the form of the coefficient matrix,
the augmented matrix or the system itself is called row-reduced form.
In it, rows of zeroes, if present, are the last rows, and in each non-zero
row the leftmost non-zero entry is farther to the right than in the
previous row. For instance, (see Example 8) the following coefficient
matrix and its augmented matrix are in the row-reduced form.
$$\begin{pmatrix}1&-1&1\\0&10&25\\0&0&-95\\0&0&0\end{pmatrix}\quad\text{and}\quad\left(\begin{array}{ccc|c}1&-1&1&0\\0&10&25&90\\0&0&-95&-190\\0&0&0&0\end{array}\right).$$
Note that we do not require that the leftmost non-zero entries be 1 since this has no theoretical or numerical advantage.
Example 9
Transform the following matrices into row-reduced forms.
$$\begin{pmatrix}-1&4&1&1\\0&0&0&0\\0&0&0&1\end{pmatrix}\quad\text{and}\quad\begin{pmatrix}-2&1&3\\0&1&1\\2&0&1\end{pmatrix}.$$
$$\left(\begin{array}{ccccc|c}a_{11}&a_{12}&\cdots&\cdots&a_{1n}&b_1\\&a_{22}&\cdots&\cdots&a_{2n}&b_2\\&&\ddots&&\vdots&\vdots\\&&&a_{rr}&\cdots&b_r\\&&&&&b_{r+1}\\&&&&&\vdots\\&&&&&b_m\end{array}\right)$$
Figure 4: Row-reduced form of the augmented matrix at the end of the Gauss elimination. Here, r ≤ m and a11, a22, · · · , arr ≠ 0, and all the omitted entries in the coefficient part are zero.
From this, we see that with respect to the solutions of the system with
augmented matrix in Figure 4 (and thus, with respect to the originally
given system) there are three possible cases.
Figure 5: All the entries in the red triangle and the blue rectangles are zero.
Figure 6: All the entries in the red triangle and the blue rectangles are zero.
No Solution
Figure 7: All the entries in the red triangle and the blue rectangle are zero, but there exist non-zero entries in the orange rectangle.
Vector Spaces
Example 10
Some examples of vector spaces are listed below.
1. The singleton {0}.
2. Rn (the set of vectors).
3. Rm×n (the set of m × n matrices).
4. Pn [x] (the set of polynomials).
5. F[a, b] (the set of functions on [a, b]).
6. C1 [a, b] (the set of continuously differentiable functions on [a, b]).
α1 u 1 + α2 u 2 + · · · + αn u n
α1 u 1 + α2 u 2 + · · · + αn u n = 0
α1 u 1 + α2 u 2 + · · · + αn u n = 0,
Example 11
1. Show that (1, 1), (−3, 2) are linearly independent.
2. Show that (1, 1), (−3, 2), (2, 4) are linearly dependent.
Figure 8: The figures show that 16(1, 1) + 2(−3, 2) + (−5)(2, 4) = (0, 0), i.e.,
(1, 1), (−3, 2), (2, 4) are linearly dependent.
Example 12
Find the rank of the matrix
$$A:=\begin{pmatrix}3&0&2&2\\-6&42&24&54\\21&-21&0&-15\end{pmatrix}.$$
Clearly, we have
Thus, rank(A) = 2.
Example 14
The set $\{(1,2,1),(-2,-3,1),(3,5,0)\}$ is linearly dependent since the matrix $\begin{pmatrix}1&2&1\\-2&-3&1\\3&5&0\end{pmatrix}$ has rank 2, which is less than 3.
Example 15
Show that Span{(1, 1), (−3, 2), (2, 4)} = R2 .
Example 16
Show that {(1, 1), (−3, 2)} is a basis for R2 . Hence, dim(R2 ) = 2.
Finally, for a given matrix A, the solution set of the homogeneous system Ax = 0 is a vector space, which is called the null space of A, and its dimension is called the nullity of A.
rank(A) + nullity(A) = n.
When (4) admits solutions, one can apply Gauss elimination to find those
solutions.
where a11, a12, · · · , amn are scalars and x1, x2, · · · , xn are unknowns. Note that x1 = x2 = · · · = xn = 0 is always a solution, which is called the trivial solution.
According to Theorem 6, (5) admits non-trivial solutions if r < n, where r := rank(A). If r < n, then these solutions, together with x = 0, form a vector space of dimension (n − r), which is called the solution space of (5).
The solution space of (5) is also called the null space of A because Ax = 0 for every x in the solution space of (5). Its dimension is called the nullity of A. Hence, by Theorem 5, we have
rank(A) + nullity(A) = n,
Example 17
Show that the solution space (null space) of
$$\begin{pmatrix}1&2&3&4\\2&4&7&8\end{pmatrix}\begin{pmatrix}x\\y\\z\\w\end{pmatrix}=\begin{pmatrix}0\\0\end{pmatrix}$$
is
$$\left\{\begin{pmatrix}x\\y\\z\\w\end{pmatrix}:x=-2y-4w\text{ and }z=0\right\}$$
or equivalently
$$\left\{y\begin{pmatrix}-2\\1\\0\\0\end{pmatrix}+w\begin{pmatrix}-4\\0\\0\\1\end{pmatrix}:y,w\in\mathbb{R}\right\}.$$
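The two basis vectors of the null space can be checked directly: multiplying the coefficient matrix by each of them should give the zero vector. A quick sketch (`matvec` is a helper name chosen here):

```python
def matvec(A, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[1, 2, 3, 4], [2, 4, 7, 8]]
v1 = [-2, 1, 0, 0]   # basis vector attached to the free unknown y
v2 = [-4, 0, 0, 1]   # basis vector attached to the free unknown w

print(matvec(A, v1), matvec(A, v2))  # [0, 0] [0, 0]
```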
Theorem 7
A homogeneous linear system with fewer equations than unknowns always has non-trivial solutions.
Theorem 8
If a non-homogeneous linear system is consistent, then all of its solutions
are obtained as
x = xh + xp,
where x h runs through all solutions of the associated homogeneous
system and x p is any (fixed) solution of the non-homogeneous system.
Example 18
Show that
$$\begin{pmatrix}3&2&3&-2\\1&1&1&0\\1&2&1&-1\end{pmatrix}\begin{pmatrix}x\\y\\z\\w\end{pmatrix}=\begin{pmatrix}1\\3\\2\end{pmatrix}$$
has
$$x_h:=c\begin{pmatrix}1\\0\\-1\\0\end{pmatrix}\quad\text{and}\quad x_p:=\begin{pmatrix}1\\2\\0\\3\end{pmatrix},$$
where c is an arbitrary constant. So, any solution of the system is of the form
$$x=c\begin{pmatrix}1\\0\\-1\\0\end{pmatrix}+\begin{pmatrix}1\\2\\0\\3\end{pmatrix},$$
where c is an arbitrary constant.
Determinants
Example 19
We compute that
$$\begin{vmatrix}3&2\\-4&1\end{vmatrix}=3\cdot1-2\cdot(-4)=11.$$
$$\begin{vmatrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{vmatrix}$$
For the Sarrus rule, copy the first two rows below the determinant; the three down-right diagonals are taken with sign $(+)$ and the three down-left diagonals with sign $(-)$:
$$\begin{vmatrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{vmatrix}=(a_{11}a_{22}a_{33}+a_{21}a_{32}a_{13}+a_{31}a_{12}a_{23})-(a_{13}a_{22}a_{31}+a_{23}a_{32}a_{11}+a_{33}a_{12}a_{21}).$$
Example 20
We compute by Sarrus rule that
$$\begin{vmatrix}1&5&-3\\2&3&-4\\-1&6&2\end{vmatrix}=[1\cdot3\cdot2+2\cdot6\cdot(-3)+(-1)\cdot5\cdot(-4)]-[(-3)\cdot3\cdot(-1)+(-4)\cdot6\cdot1+2\cdot5\cdot2]=[6-36+20]-[9-24+20]=-10-5=-15.$$
$$\begin{pmatrix}a_{11}&\cdots&a_{1(j-1)}&a_{1j}&a_{1(j+1)}&\cdots&a_{1n}\\\vdots&&\vdots&\vdots&\vdots&&\vdots\\a_{(i-1)1}&\cdots&a_{(i-1)(j-1)}&a_{(i-1)j}&a_{(i-1)(j+1)}&\cdots&a_{(i-1)n}\\a_{i1}&\cdots&a_{i(j-1)}&a_{ij}&a_{i(j+1)}&\cdots&a_{in}\\a_{(i+1)1}&\cdots&a_{(i+1)(j-1)}&a_{(i+1)j}&a_{(i+1)(j+1)}&\cdots&a_{(i+1)n}\\\vdots&&\vdots&\vdots&\vdots&&\vdots\\a_{n1}&\cdots&a_{n(j-1)}&a_{nj}&a_{n(j+1)}&\cdots&a_{nn}\end{pmatrix}$$
The minor of the entry $a_{ij}$ is the determinant of the submatrix obtained by deleting the $i$-th row and the $j$-th column above.
Example 21
We compute that
$$\begin{vmatrix}6&0&-3&5\\4&13&6&-8\\-1&0&7&4\\8&6&0&2\end{vmatrix}=0(-1)^{1+2}\begin{vmatrix}4&6&-8\\-1&7&4\\8&0&2\end{vmatrix}+13(-1)^{2+2}\begin{vmatrix}6&-3&5\\-1&7&4\\8&0&2\end{vmatrix}+0(-1)^{3+2}\begin{vmatrix}6&-3&5\\4&6&-8\\8&0&2\end{vmatrix}+6(-1)^{4+2}\begin{vmatrix}6&-3&5\\4&6&-8\\-1&7&4\end{vmatrix}$$
$$=0\cdot(-1)\cdot708+13\cdot1\cdot(-298)+0\cdot(-1)\cdot48+6\cdot1\cdot674=-3874+4044=170.$$
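Cofactor expansion translates directly into a recursive routine. This is a sketch for the small matrices in these notes (its cost grows factorially, so it is not how determinants are computed in practice):

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j, entry in enumerate(M[0]):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * entry * det(minor)
    return total

A = [[6, 0, -3, 5], [4, 13, 6, -8], [-1, 0, 7, 4], [8, 6, 0, 2]]
print(det(A))  # 170, as in Example 21
```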
Theorem 9
Let A and B be two square matrices of the same size. Then, det(AB) = det(A) det(B).
Cramer’s Rule
Theorem 11 (Cramer’s Rule)
Denote by A and b the coefficient matrix and the right-hand side vector,
respectively. For k = 1, 2, · · · , n, denote by Ak the matrix formed by
replacing k-th column of A by b. Then, we have the following.
1. If $\det(A)\ne0$, then (6) has a unique solution given by $x_k=\frac{\det(A_k)}{\det(A)}$.
2. If det(A) = 0 and det(Ak ) = 0 for all k, then (6) has infinitely many
solutions.
3. If det(A) = 0 and det(Ak ) 6= 0 for some k, then (6) has no solutions.
Then, (7) has only the trivial solution if and only if det(A) ≠ 0.
Note that in this case, det(Ak) = 0 for all k by the second case of Corollary 5, and thus the last case of Theorem 11 does not occur.
Example 22
The system
$$\begin{aligned}x+2y&=-3\\2x+6y+4z&=-6\\-x+2z&=1\end{aligned}$$
has a unique solution since the determinant of the coefficient matrix is $-4$. Thus, the solution is given by
$$x=\frac{\begin{vmatrix}-3&2&0\\-6&6&4\\1&0&2\end{vmatrix}}{-4}=\frac{-4}{-4}=1,\qquad y=\frac{\begin{vmatrix}1&-3&0\\2&-6&4\\-1&1&2\end{vmatrix}}{-4}=\frac{8}{-4}=-2,$$
and similarly $z=1$.
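Cramer's rule is short to sketch in code: replace the k-th column of A by b and divide the two determinants. `det3` and `cramer` are helper names chosen here, hard-coded to the 3 × 3 case of Example 22:

```python
def det3(M):
    """3x3 determinant, expanded along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def cramer(A, b):
    """Solve a 3x3 system by Cramer's rule (assumes det(A) != 0)."""
    D = det3(A)
    sol = []
    for k in range(3):
        # A_k: A with its k-th column replaced by b.
        Ak = [row[:k] + [bk] + row[k + 1:] for row, bk in zip(A, b)]
        sol.append(det3(Ak) / D)
    return sol

A = [[1, 2, 0], [2, 6, 4], [-1, 0, 2]]
b = [-3, -6, 1]
print(cramer(A, b))  # [1.0, -2.0, 1.0]
```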
Example 23
Find the inverse matrix of the matrix A given below.
$$A:=\begin{pmatrix}1&2&3\\0&1&4\\5&6&0\end{pmatrix}.$$
$$\cdots\sim\left(\begin{array}{ccc|ccc}1&2&0&16&-12&-3\\0&1&0&20&-15&-4\\0&0&1&-5&4&1\end{array}\right)\sim\left(\begin{array}{ccc|ccc}1&0&0&-24&18&5\\0&1&0&20&-15&-4\\0&0&1&-5&4&1\end{array}\right).$$
Hence,
$$A^{-1}=\begin{pmatrix}-24&18&5\\20&-15&-4\\-5&4&1\end{pmatrix}.$$
$$A^{-1}:=\frac{1}{\det(A)}\operatorname{adj}(A)$$
Example 24
Clearly, for
$$A:=\begin{pmatrix}3&1\\2&4\end{pmatrix}$$
we have $\det(A)=10$ and
$$\operatorname{adj}(A)=\begin{pmatrix}4&-2\\-1&3\end{pmatrix}^{T}=\begin{pmatrix}4&-1\\-2&3\end{pmatrix}.$$
Therefore, we have
$$A^{-1}=\frac{1}{10}\begin{pmatrix}4&-1\\-2&3\end{pmatrix}.$$
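For a 2 × 2 matrix, the adjugate formula above reduces to "swap the diagonal, negate the off-diagonal, divide by the determinant". A minimal sketch (`inv2` is a name chosen here):

```python
def inv2(M):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = M
    det = a * d - b * c
    # adj(M) swaps the diagonal entries and negates the off-diagonal ones.
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[3, 1], [2, 4]]
print(inv2(A))  # [[0.4, -0.1], [-0.2, 0.3]], i.e. (1/10)[[4, -1], [-2, 3]]
```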
Example 25
For the matrix
$$A:=\begin{pmatrix}1&-1&-2\\3&-1&1\\1&-3&-4\end{pmatrix},$$
we compute that $\det(A)=10$. Also, we have
$$\operatorname{adj}(A)=\begin{pmatrix}7&2&-3\\13&-2&-7\\-8&2&2\end{pmatrix}.$$
Theorem 14
Let A and B be two square matrices of the same size. Then,
(AB)−1 = B −1 A−1 .
Ax = b.
Once A−1 is known, we can multiply both sides of the above system by
A−1 and get
x = A−1 b.
Example 26
Show that
$$\begin{aligned}x-y-2z&=-10\\3x-y+z&=5\\x-3y-4z&=20\end{aligned}$$
has the solution $x=-12$, $y=-28$, $z=13$, which can be computed as $x=A^{-1}b$ with the inverse matrix found in Example 25.
Remark 1
Some remarks on matrix multiplication are given below.
1. Matrix multiplication is not commutative, i.e., we have in general that
AB 6= BA.
Example 27
Let (u1 , u2 ), (v1 , v2 ) ∈ R2 , then
h(u1 , u2 ), (v1 , v2 )i = u1 v1 + u2 v2
Finally, we have
$$\langle\alpha(u_1,u_2),(v_1,v_2)\rangle=\langle(\alpha u_1,\alpha u_2),(v_1,v_2)\rangle=(\alpha u_1)v_1+(\alpha u_2)v_2=\alpha(u_1v_1+u_2v_2)=\alpha\langle(u_1,u_2),(v_1,v_2)\rangle.$$
Thus, R2 is an inner product space on R.
Example 28
Two examples of inner product spaces are listed below.
1. Rn is an inner product space with
$$\left\langle\begin{pmatrix}u_1\\u_2\\\vdots\\u_n\end{pmatrix},\begin{pmatrix}v_1\\v_2\\\vdots\\v_n\end{pmatrix}\right\rangle:=u_1v_1+\cdots+u_nv_n.$$
Definition 27 (Orthogonality)
Let V be an inner product space on R, and u, v ∈ V . The vectors u and
v are called orthogonal provided that hu, v i = 0.
Example 29
1. In $\mathbb{R}^3$, $u:=(2,1,-1)$ and $v:=(1,-1,1)$ are orthogonal.
2. In $C[a,b]$, $f(x):=x-\frac{a+b}{2}$ and $g(x):=x^2-(a+b)x+\left(\frac{a+b}{2}\right)^2$ are orthogonal.
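Both claims can be spot-checked numerically. The constant term of g is reconstructed here as ((a+b)/2)²; note that shifting g by any constant preserves the orthogonality, since the integral of f over [a, b] vanishes. A midpoint-rule sketch on [a, b] = [0, 1] (an illustration, not a proof):

```python
def integral(func, a, b, n=10000):
    """Midpoint-rule approximation of the integral of func over [a, b]."""
    h = (b - a) / n
    return sum(func(a + (i + 0.5) * h) for i in range(n)) * h

# Claim 1: the dot product of u and v vanishes.
u, v = (2, 1, -1), (1, -1, 1)
print(sum(x * y for x, y in zip(u, v)))  # 0

# Claim 2: <f, g> = 0 on [0, 1] (constant weight mu = 1 assumed).
a, b = 0.0, 1.0
f = lambda x: x - (a + b) / 2
g = lambda x: x * x - (a + b) * x + ((a + b) / 2) ** 2
print(abs(integral(lambda x: f(x) * g(x), a, b)) < 1e-9)  # True
```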
Definition 28 (Norm)
Let V be an inner product space on R, and u ∈ V. The norm of u is defined by
$$\|u\|:=\sqrt{\langle u,u\rangle}.$$
Further, u is said to be a unit vector provided that $\|u\|=1$.
Example 30
1. On $\mathbb{R}^n$, a norm is given by
$$\left\|\begin{pmatrix}u_1\\u_2\\\vdots\\u_n\end{pmatrix}\right\|:=\sqrt{u_1^2+u_2^2+\cdots+u_n^2}.$$
Property 6
Inner products and norms satisfy the following properties.
1. Cauchy-Schwarz Inequality: $|\langle u,v\rangle|\le\|u\|\,\|v\|$ for all $u,v\in V$.
2. Triangle Inequality: $\|u+v\|\le\|u\|+\|v\|$ for all $u,v\in V$.
3. Parallelogram Equality: $\|u+v\|^2+\|u-v\|^2=2\big(\|u\|^2+\|v\|^2\big)$ for all $u,v\in V$.
Linear Transformations
Example 31
Some examples of linear transforms are listed below.
1. Zero transform, i.e., f (u) = 0.
2. Identity transform, i.e., f (u) = u.
3. Reflection transform, i.e., f (u) = −u.
4. Scaling transform, i.e., f (u) = λu, where λ ∈ R.
5. Projection transform.
6. Rotation transform.
7. Differential transform.
8. Integral transform.
Representation Matrix
u = α1 e 1 + α2 e 2 + · · · + αn e n .
f (u) = α1 f (e 1 ) + α2 f (e 2 ) + · · · + αn f (e n ).
Example 32
Consider the basis $\left\{\begin{pmatrix}1\\1\end{pmatrix},\begin{pmatrix}1\\-1\end{pmatrix}\right\}$ of $\mathbb{R}^2$. Let us find the rule of the linear transform $f:\mathbb{R}^2\to\mathbb{R}^3$ satisfying
$$f\begin{pmatrix}1\\1\end{pmatrix}=\begin{pmatrix}5\\-1\\11\end{pmatrix}\quad\text{and}\quad f\begin{pmatrix}1\\-1\end{pmatrix}=\begin{pmatrix}3\\-5\\1\end{pmatrix}.$$
Null(f ) := {u : f (u) = 0}
Range(f ) := {v : f (u) = v }.
Further,
nullity(f ) := dim Null(f ) and rank(f ) := dim Range(f ) .
Example 33
Find the rank and the nullity of the transform f : R3 → R2 defined by
$$f\begin{pmatrix}x\\y\\z\end{pmatrix}=\begin{pmatrix}16x+y-3z\\8x-3y+z\end{pmatrix}.$$
We see that
$$\operatorname{Null}(f)=\left\{x\begin{pmatrix}1\\5\\7\end{pmatrix}:x\in\mathbb{R}\right\}=\operatorname{Span}\left\{\begin{pmatrix}1\\5\\7\end{pmatrix}\right\},$$
Ax = λx. (8)
This example illustrates the general case as follows. Expanding (8) in its
components, we get
Moving the terms on the right-hand side to the left-hand side gives us
(A − λ I)x = 0.
Theorem 17 (Eigenvalues)
The eigenvalues of a square matrix A are the roots of the characteristic
equation det(A − λ I) = 0. Hence, an n × n matrix has at least 1
eigenvalue and at most n numerically different eigenvalues.
Example 34
The characteristic polynomial of the matrix $A:=\begin{pmatrix}2&2\\1&3\end{pmatrix}$ is
$$D(\lambda):=\begin{vmatrix}2-\lambda&2\\1&3-\lambda\end{vmatrix}=\lambda^2-5\lambda+4=(\lambda-1)(\lambda-4).$$
Example 35
Let us consider the matrix $A:=\begin{pmatrix}1&2&1\\0&3&2\\-1&1&1\end{pmatrix}$. The characteristic equation of A is $D(\lambda):=-\lambda^3+5\lambda^2-6\lambda=-\lambda(\lambda-2)(\lambda-3)$. Thus, A has the eigenvalues $\lambda_1:=0$, $\lambda_2:=2$ and $\lambda_3:=3$. Further, we get the eigenvector $\begin{pmatrix}1\\-2\\3\end{pmatrix}$ for $\lambda_1=0$, $\begin{pmatrix}3\\2\\-1\end{pmatrix}$ for $\lambda_2=2$ and $\begin{pmatrix}1\\1\\0\end{pmatrix}$ for $\lambda_3=3$.
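Each claimed eigenpair can be verified by checking A v = λ v directly (a quick sketch; `matvec` is a helper name chosen here):

```python
def matvec(A, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[1, 2, 1], [0, 3, 2], [-1, 1, 1]]
pairs = [(0, [1, -2, 3]), (2, [3, 2, -1]), (3, [1, 1, 0])]

for lam, v in pairs:
    print(matvec(A, v) == [lam * x for x in v])  # True for every pair
```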
Example 36
Consider the matrix $A:=\begin{pmatrix}1&0&0\\2&1&0\\1&-2&3\end{pmatrix}$. The characteristic equation of A is $D(\lambda):=-\lambda^3+5\lambda^2-7\lambda+3=-(\lambda-1)^2(\lambda-3)$. Thus, A has the eigenvalues $\lambda_{1,2}:=1$ and $\lambda_3:=3$. Further, we get the eigenvector $\begin{pmatrix}0\\1\\1\end{pmatrix}$ for the eigenvalue 1 and $\begin{pmatrix}0\\0\\1\end{pmatrix}$ for the eigenvalue 3. In this example, we can find only two linearly independent eigenvectors.
B = P −1 AP.
D := X −1 AX
D m = X −1 Am X
or equivalently
Am = X D m X −1 .
Example 37
Consider the matrix $A:=\begin{pmatrix}4&1\\-8&-5\end{pmatrix}$. We see that the eigenvalues of A are $-4$ and $3$, which yield the eigenvectors $\begin{pmatrix}1\\-8\end{pmatrix}$ and $\begin{pmatrix}1\\-1\end{pmatrix}$, respectively. Let us define $X:=\begin{pmatrix}1&1\\-8&-1\end{pmatrix}$, which yields $X^{-1}=\frac17\begin{pmatrix}-1&-1\\8&1\end{pmatrix}$. Then, we get the diagonal matrix
$$D:=X^{-1}AX=\begin{pmatrix}-4&0\\0&3\end{pmatrix}.$$
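The diagonalization can be reproduced with plain 2 × 2 arithmetic (a sketch; `matmul` is a helper name chosen here, and the result is diagonal only up to floating-point rounding):

```python
def matmul(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[4, 1], [-8, -5]]
X = [[1, 1], [-8, -1]]                      # columns are the eigenvectors
Xinv = [[-1 / 7, -1 / 7], [8 / 7, 1 / 7]]   # (1/7) [[-1, -1], [8, 1]]

D = matmul(Xinv, matmul(A, X))
print(D)  # approximately [[-4, 0], [0, 3]]
```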
Example 38
Homogeneous Equation
Non-Homogeneous Equation
Next, we proceed to find a solution of (9). To this end, we multiply (9) by $\mu(t):=\mathrm{e}^{\int^t p(\xi)\,\mathrm{d}\xi}$ and get
From (11) and (12), the general solution of (9) (all solutions) is given by
$$y(t)=\underbrace{c\,\mathrm{e}^{-\int^t p(\xi)\,\mathrm{d}\xi}}_{\text{complementary solution}}+\underbrace{\mathrm{e}^{-\int^t p(\xi)\,\mathrm{d}\xi}\int^t\mathrm{e}^{\int^\xi p(\zeta)\,\mathrm{d}\zeta}f(\xi)\,\mathrm{d}\xi}_{\text{particular solution}},$$
Example 39
Show that the general solution of the equation
$$y'+3y=\mathrm{e}^{t}$$
is
$$y_c:=c\,\mathrm{e}^{-3t}+\frac14\,\mathrm{e}^{t},$$
where c is an arbitrary constant. Further, if the initial condition $y(0)=1$ is given, then the desired solution becomes
$$y:=\frac34\,\mathrm{e}^{-3t}+\frac14\,\mathrm{e}^{t}.$$
Figure 11: The graphic of the solution y over the interval [0, 1].
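The solution of Example 39 can be spot-checked numerically: the residual y′ + 3y − eᵗ should vanish (a central finite difference stands in for y′ here), and y(0) = 1 should hold. A quick sketch:

```python
from math import exp

def y(t):
    """Claimed solution of y' + 3y = e^t with y(0) = 1."""
    return 0.75 * exp(-3 * t) + 0.25 * exp(t)

h = 1e-6
ok = all(abs((y(t + h) - y(t - h)) / (2 * h) + 3 * y(t) - exp(t)) < 1e-6
         for t in (0.0, 0.5, 1.0))
print(ok, y(0.0) == 1.0)  # True True
```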
Homogeneous Equation
Consider the homogeneous equation
ay''(t) + by'(t) + cy(t) = 0, (13)
where a, b and c are as mentioned above. Substituting $y=\mathrm{e}^{rt}$ into the left-hand side of (13), we see that
$$ay''+by'+cy=ar^2\mathrm{e}^{rt}+br\mathrm{e}^{rt}+c\mathrm{e}^{rt}=y(ar^2+br+c)=yP(r),$$
where
$$P(r):=ar^2+br+c.\tag{14}$$
Hence, if $r_0$ is a root of $P(r)=0$, then
$$ay''+by'+cy=0$$
and thus
$$y(t):=\mathrm{e}^{r_0t}$$
is a solution of (13).
Distinct Roots: $\Delta:=b^2-4ac>0$
Using
$$\cosh(x):=\frac{\mathrm{e}^x+\mathrm{e}^{-x}}{2}\quad\text{and}\quad\sinh(x):=\frac{\mathrm{e}^x-\mathrm{e}^{-x}}{2}\quad\text{for }x\in\mathbb{R},$$
we can show that
$$y_1(t):=\mathrm{e}^{-\frac{b}{2a}t}\cosh\left(\frac{\sqrt{\Delta}}{2a}t\right)\quad\text{and}\quad y_2(t):=\mathrm{e}^{-\frac{b}{2a}t}\sinh\left(\frac{\sqrt{\Delta}}{2a}t\right)$$
Example 40
Let us solve
$$y''+y'-2y=0.$$
Clearly, the characteristic equation is $r^2+r-2=0$ or equivalently $(r+2)(r-1)=0$, which implies $r_1:=-2$ and $r_2:=1$. Since the roots are distinct, the general solution of the equation is
$$y_c(t):=c_1\mathrm{e}^{-2t}+c_2\mathrm{e}^{t},$$
Example 41
The differential equation
$$y''(t)-y(t)=0$$
Repeated Roots: ∆ = 0
$$r_1:=-\frac{b}{2a}.$$
Hence,
$$y_1(t):=\mathrm{e}^{r_1t}\tag{16}$$
is a solution of (13). Now, define
Recall that $y_1'=r_1y_1$, and thus $y_1''=r_1^2y_1$. Computing the derivatives of $y_2$, we get
$$y_2'=\mathrm{e}^{r_1t}u'+r_1\mathrm{e}^{r_1t}u=\mathrm{e}^{r_1t}[u'+r_1u],$$
$$y_2''=\mathrm{e}^{r_1t}u''+2r_1\mathrm{e}^{r_1t}u'+r_1^2\mathrm{e}^{r_1t}u=\mathrm{e}^{r_1t}\big[u''+2r_1u'+r_1^2u\big].$$
i.e.,
$y_1(t)=\mathrm{e}^{r_1t}$ and $y_2(t)=t\,\mathrm{e}^{r_1t}$
are linearly independent.
Thus, the general solution of the equation is $y_c(t):=c_1\mathrm{e}^{r_1t}+c_2t\,\mathrm{e}^{r_1t}$, where $c_1$ and $c_2$ are arbitrary constants.
Example 42
Let us solve
$$y''-4y'+4y=0.$$
Clearly, the characteristic equation is $r^2-4r+4=0$ or equivalently $(r-2)^2=0$, which implies $r_{1,2}:=2$. Since the roots are repeated, the general solution of the equation is $y_c(t):=c_1\mathrm{e}^{2t}+c_2t\,\mathrm{e}^{2t}$, where $c_1$ and $c_2$ are arbitrary constants.
As in the first case, we can show that z1 and z2 are solutions of (13).
However, they are complex functions. We will now show that their real
and imaginary parts also form solutions. Using the well-known Euler’s
formula $\mathrm{e}^{\mathrm{i}\theta}=\cos(\theta)+\mathrm{i}\sin(\theta)$, we get
$$z_1(t):=\mathrm{e}^{\alpha t}[\cos(\beta t)+\mathrm{i}\sin(\beta t)]$$
and
$$z_2(t):=\mathrm{e}^{\alpha t}[\cos(\beta t)-\mathrm{i}\sin(\beta t)].$$
Let us define
$$y_1(t):=\mathrm{e}^{\alpha t}\cos(\beta t)\quad\text{and}\quad y_2(t):=\mathrm{e}^{\alpha t}\sin(\beta t).$$
Computing derivatives, we get
$$y_1''(t)=(\alpha^2-\beta^2)\mathrm{e}^{\alpha t}\cos(\beta t)-2\alpha\beta\mathrm{e}^{\alpha t}\sin(\beta t)=\mathrm{e}^{\alpha t}\big[(\alpha^2-\beta^2)\cos(\beta t)-2\alpha\beta\sin(\beta t)\big]$$
and hence
$$ay_1''(t)+by_1'(t)+cy_1(t)=a\mathrm{e}^{\alpha t}\big[(\alpha^2-\beta^2)\cos(\beta t)-2\alpha\beta\sin(\beta t)\big]+\cdots$$
Example 43
Let us solve
$$y''-4y'+13y=0.$$
Clearly, the characteristic equation is $r^2-4r+13=0$ or equivalently $(r-2)^2+3^2=0$, which implies $r_{1,2}:=2\pm3\mathrm{i}$. Since the roots are complex conjugates, the general solution of the equation is $y_c(t):=\mathrm{e}^{2t}[c_1\cos(3t)+c_2\sin(3t)]$, where $c_1$ and $c_2$ are arbitrary constants.
Non-Homogeneous Equation
If we assume that
$$y_1u_1'+y_2u_2'=0\quad\text{and}\quad y_1'u_1'+y_2'u_2'=\frac{f}{a},$$
then by Cramer's rule, we have
$$u_1'=-\frac{fy_2}{aW}\implies u_1(t)=-\int^t\frac{f(\xi)y_2(\xi)}{aW(\xi)}\,\mathrm{d}\xi$$
and
$$u_2'=\frac{fy_1}{aW}\implies u_2(t)=\int^t\frac{f(\xi)y_1(\xi)}{aW(\xi)}\,\mathrm{d}\xi,\tag{23}$$
where c1 and c2 are arbitrary constants. Finally, by (24) and (25), the
general solution of (21) is
$$y(t)=\underbrace{c_1y_1(t)+c_2y_2(t)}_{\text{complementary solution}}\underbrace{-\,y_1(t)\int^t\frac{f(\xi)y_2(\xi)}{aW(\xi)}\,\mathrm{d}\xi+y_2(t)\int^t\frac{f(\xi)y_1(\xi)}{aW(\xi)}\,\mathrm{d}\xi}_{\text{particular solution}},$$
Example 44
Let us solve
$$y''+y=3.$$
Clearly, the characteristic equation is $r^2+1=0$, which implies $r_{1,2}:=\pm\mathrm{i}$. Since the roots are complex conjugates, the complementary solution of the equation is $y_c(t):=c_1\cos(t)+c_2\sin(t)$, where $c_1$ and $c_2$ are arbitrary constants.
Note that the Wronskian of cos and sin is 1. Using the variation of
parameters formula, we have
$$y_p(t):=-\cos(t)\int^t3\sin(\xi)\,\mathrm{d}\xi+\sin(t)\int^t3\cos(\xi)\,\mathrm{d}\xi$$
Two main contents of this section are the initial-value problems (IVP)
and the boundary-value problems (BVP).
To find the unique solution of (26), we first write the general solution of
the ODE in the first line of (26) and then apply the initial conditions in
the second line to determine for which arbitrary constants these
conditions hold. Say
Example 45
Show that the solution of the IVP
$$\begin{cases}y''+y'-2y=0&\text{for }t>0\\y(0)=1\text{ and }y'(0)=-5\end{cases}$$
is
$$y(t):=2\mathrm{e}^{-2t}-\mathrm{e}^{t}\quad\text{for }t\ge0.$$
Figure 12: The graphic of the solution y over the interval [0, 1].
Example 46
Show that the solution of the IVP
$$\begin{cases}y''-4y'+4y=0&\text{for }t>0\\y(0)=1\text{ and }y'(0)=5\end{cases}$$
is
$$y(t):=\mathrm{e}^{2t}+3t\,\mathrm{e}^{2t}\quad\text{for }t\ge0.$$
Figure 13: The graphic of the solution y over the interval [0, 1].
Example 47
Show that the solution of the IVP
$$\begin{cases}y''-4y'+13y=0&\text{for }t>0\\y(0)=2\text{ and }y'(0)=-5\end{cases}$$
is
$$y(t):=\mathrm{e}^{2t}\big[2\cos(3t)-3\sin(3t)\big]\quad\text{for }t\ge0.$$
Figure 14: The graphic of the solution y over the interval [0, 1].
Example 48
Show that the solution of the IVP
$$\begin{cases}y''+y=3&\text{for }t>0\\y(0)=5\text{ and }y'(0)=-1\end{cases}$$
is
y (t) := 2 cos(t) − sin(t) + 3 for t ≥ 0.
Figure 15: The graphic of the solution y over the interval [0, 1].
Example 49
Show that the solution of the IVP
$$\begin{cases}y''-y=3&\text{for }t>0\\y(0)=-1\text{ and }y'(0)=-3\end{cases}$$
is
y (t) := 2 cosh(t) − 3 sinh(t) − 3 for t ≥ 0.
Figure 16: The graphic of the solution y over the interval [0, 1].
Before we start off this section, we need to make it very clear that we are
only going to scratch the surface of the topic of BVPs. There is enough
material in the topic of BVPs that we could devote a whole class to it.
The intent of this section is to give a brief (and we mean very brief) look
at the idea of BVPs and to give enough information to allow us to do
some basic partial differential equations in the next chapter.
Now, with that out of the way, the first thing that we need to do is to
define just what we mean by a BVP. With IVPs, we had a differential
equation and we specified the value of the solution and an appropriate
number of derivatives at the same point (collectively called “initial
conditions”). For instance, for a second-order differential equation the
initial conditions are as follows.
With BVPs, we will have a differential equation and we will specify the
function and/or derivatives at different points, which we’ll call boundary
values. For second-order differential equations, which we'll be looking at pretty much exclusively here, any of the following can (and will) be used for boundary conditions.
As we'll soon see, much of what we know about initial value problems will not hold here. We can, of course, solve the ODE provided the coefficients are constant. None of that will change. The changes (and perhaps the problems) arise when we move from initial conditions to boundary conditions.
Example 50
Show that the solution of the BVP
$$\begin{cases}y''+4y=0&\text{for }0<t<\frac{\pi}{4}\\y(0)=-2\text{ and }y\left(\frac{\pi}{4}\right)=10\end{cases}$$
is
$$y(t):=-2\cos(2t)+10\sin(2t)\quad\text{for }0\le t\le\frac{\pi}{4}.$$
Figure 17: The graphic of the solution y over the interval [0, π4 ].
Example 51
Show that solutions of the BVP
$$\begin{cases}y''+4y=0&\text{for }0<t<\pi\\y(0)=-2\text{ and }y(\pi)=-2\end{cases}$$
are
y (t) := −2 cos(2t) + c sin(2t) for 0 ≤ t ≤ π,
where c is an arbitrary constant.
Figure 18: The graphic of the solutions y over the interval [0, π] for
c = −5, −4, · · · , 5.
Example 52
Show that the BVP
$$\begin{cases}y''+4y=0&\text{for }0<t<\pi\\y(0)=-2\text{ and }y(\pi)=3\end{cases}$$
has no solutions.
Example 53
Show that the solution of the BVP
$$\begin{cases}y''+3y=0&\text{for }0<t<\pi\\y(0)=7\text{ and }y'(\pi)=0\end{cases}$$
is
$$y(t):=7\cos\big(\sqrt3\,t\big)+7\tan\big(\sqrt3\,\pi\big)\sin\big(\sqrt3\,t\big)\quad\text{for }0\le t\le\pi.$$
Figure 19: The graphic of the solution y over the interval [0, π].
Example 54
Show that the BVP
(
y 00 + 25y = 0 for 0 < t < π
0 0
y (0) = 5 and y (π) = 5
has no solutions.
Example 55
Show that solutions of the BVP
$$\begin{cases}y''+9y=\cos(t)&\text{for }0<t<\frac{\pi}{2}\\y'(0)=3\text{ and }y\left(\frac{\pi}{2}\right)=-1\end{cases}$$
are
$$y(t):=c\cos(3t)+\sin(3t)+\frac18\cos(t)\quad\text{for }0\le t\le\frac{\pi}{2},$$
where c is an arbitrary constant.
Figure 20: The graphic of the solutions y over the interval [0, π/2] for c = −5, −4, · · · , 5.
Sturm-Liouville Problems
Example 56
Consider the Sturm-Liouville problem
$$\begin{cases}y''(t)+\lambda y(t)=0\\y(0)=0\text{ and }y(\pi)=0.\end{cases}$$
Case 2. Let λ = 0. Then, the general solution of the DE is $y_c(t):=c_1+c_2t$, where $c_1$ and $c_2$ are arbitrary constants.
Case 3. Let $\lambda<0$. Say $\lambda:=-\mu^2$, where $\mu>0$. The general solution of the DE is
$$y_c(t):=c_1\cosh(\mu t)+c_2\sinh(\mu t),$$
where c1 and c2 are arbitrary constants. Using the boundary conditions,
we get
$$\begin{cases}c_1=0\\c_1\cosh(\mu\pi)+c_2\sinh(\mu\pi)=0\end{cases}\implies\begin{cases}c_1:=0\\c_2:=0.\end{cases}$$
Hence, we get the trivial solution.
Thus, the Sturm-Liouville problem has the eigenvalues
$$\lambda_n:=n^2\quad\text{for }n=1,2,\cdots$$
with the corresponding eigenfunctions $y_n(t):=\sin(nt)$.
Example 57
Consider the Sturm-Liouville problem
$$\begin{cases}y''(t)+\lambda y(t)=0\\y(0)=0\text{ and }y'(\pi)=0.\end{cases}$$
Example 58
Consider the Sturm-Liouville problem
$$\begin{cases}y''(t)+\lambda y(t)=0\\y'(0)=0\text{ and }y'(\pi)=0.\end{cases}$$
λn := n2 for n = 0, 1, · · ·
Example 59
Consider the periodic Sturm-Liouville problem
$$\begin{cases}y''(t)+\lambda y(t)=0\\y(-\pi)=y(\pi)\text{ and }y'(-\pi)=y'(\pi).\end{cases}$$
λn := n2 for n = 0, 1, · · ·
Fourier Series
Definition 34 (Orthogonality)
Functions f1, f2, · · · defined on some interval I : a < t < b are called "orthogonal" on this interval with respect to the weight function μ (with μ(t) > 0 for all t ∈ I) if the inner product satisfies
$$\langle f_m,f_n\rangle:=\int_a^bf_m(\xi)f_n(\xi)\mu(\xi)\,\mathrm{d}\xi=0\quad\text{for all distinct }m\text{ and }n.$$
$$=\sum_{m=0}^{\infty}a_m\langle f_m,f_n\rangle.$$
Because of the orthogonality, all the terms inside the sum are zero except
when m = n. Hence, the infinite series reduces to
$$\langle f,f_n\rangle=a_n\langle f_n,f_n\rangle=a_n\|f_n\|^2,$$
which yields
$$a_n=\frac{\langle f,f_n\rangle}{\|f_n\|^2},\quad n=0,1,\cdots\tag{30}$$
provided that $\|f_n\|\ne0$.
where
$$a_n=\frac{\int_a^bf(\xi)f_n(\xi)\mu(\xi)\,\mathrm{d}\xi}{\int_a^bf_n(\xi)^2\mu(\xi)\,\mathrm{d}\xi},\quad n=0,1,\cdots$$
provided that $\|f_n\|\ne0$.
Definition 35 (Completeness)
Let f1 , f2 , · · · be a sequence of orthogonal functions on an interval
I : a < t < b and S be a set of functions defined on I . If every function
f ∈ S can be approximated arbitrarily closely by a linear combination of
the functions f1 , f2 , · · · , then f1 , f2 , · · · is said to be “complete” in the
set S. More precisely, let f ∈ S, if for every ε > 0, there exist a positive
integer m, scalars a1 , a2 , · · · , am and functions f1 , f2 , · · · , fm such that
m
X
f − ak k
< ε.
f
k=0
where
$$a_0=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(\xi)\,\mathrm{d}\xi,\qquad a_m=\frac{1}{\pi}\int_{-\pi}^{\pi}f(\xi)\cos(m\xi)\,\mathrm{d}\xi,\quad m=1,2,\cdots,$$
$$b_m=\frac{1}{\pi}\int_{-\pi}^{\pi}f(\xi)\sin(m\xi)\,\mathrm{d}\xi,\quad m=1,2,\cdots.$$
Further, the series converges to f(t) if f is continuous at t and to $\frac{f(t^+)+f(t^-)}{2}$ if f has a jump type discontinuity at t.
Example 60
(Figure: the 2π-periodic square wave f, equal to −1 for −π < t < 0 and 1 for 0 < t < π.)
or equivalently
$$f(t)\sim\frac{4}{\pi}\sum_{k=1}^{\infty}\frac{1}{2k-1}\sin\big((2k-1)t\big),$$
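From the figure, the function in Example 60 appears to be the square wave equal to −1 on (−π, 0) and 1 on (0, π); under that assumption, the partial sums of the series above can be evaluated numerically (a quick sketch, not part of the notes):

```python
# Partial sums of (4/pi) * sum sin((2k-1)t)/(2k-1) at t = pi/2,
# where the (assumed) square wave equals 1.
from math import sin, pi

def partial_sum(t, terms):
    return (4 / pi) * sum(sin((2 * k - 1) * t) / (2 * k - 1)
                          for k in range(1, terms + 1))

for terms in (1, 10, 1000):
    print(partial_sum(pi / 2, terms))
```

The printed values approach 1, illustrating pointwise convergence at a point of continuity; near the jump at t = 0 the overshoot (Gibbs phenomenon) persists no matter how many terms are taken.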
with coefficients
$$a_0:=\frac{1}{2L}\int_{-L}^{L}f(\xi)\,\mathrm{d}\xi,\qquad a_m:=\frac{1}{L}\int_{-L}^{L}f(\xi)\cos\left(\frac{\pi}{L}m\xi\right)\mathrm{d}\xi,\quad m=1,2,\cdots,$$
$$b_m:=\frac{1}{L}\int_{-L}^{L}f(\xi)\sin\left(\frac{\pi}{L}m\xi\right)\mathrm{d}\xi,\quad m=1,2,\cdots.$$
Example 61
Consider the following 2-periodic function
$$f(t):=\begin{cases}t,&0<t<1\\2,&-1<t<0\end{cases}$$
The series
$$\frac54-\sum_{m=1}^{\infty}\left[\frac{1-(-1)^m}{\pi^2m^2}\cos(\pi mt)+\frac{2-(-1)^m}{\pi m}\sin(\pi mt)\right]$$
with coefficients
$$a_0:=\frac{1}{L}\int_0^Lf(\xi)\,\mathrm{d}\xi,\qquad a_m:=\frac{2}{L}\int_0^Lf(\xi)\cos\left(\frac{\pi}{L}m\xi\right)\mathrm{d}\xi,\quad m=1,2,\cdots.$$
Example 62
Consider the following 2-periodic function
(Figure: the graph of the 2-periodic extension of f(t) = t, −1 < t < 1.)
The series
$$-\frac{2}{\pi}\sum_{m=1}^{\infty}\frac{(-1)^m}{m}\sin(\pi mt)$$
is the Fourier Sine series for f.
Example 63
Consider the 2L-periodic function defined as
$$f(t):=\begin{cases}t(L-t),&0\le t\le L\\t(L+t),&-L\le t\le0\end{cases}$$
Show that
$$f(t)=\frac{4L^2}{\pi^3}\sum_{m=1}^{\infty}\frac{1-(-1)^m}{m^3}\sin\left(\frac{\pi}{L}mt\right)\quad\text{for }-L\le t\le L$$
or equivalently
$$f(t)=\frac{8L^2}{\pi^3}\sum_{k=1}^{\infty}\frac{1}{(2k-1)^3}\sin\left(\frac{\pi}{L}(2k-1)t\right)\quad\text{for }-L\le t\le L.$$
Example 64
Heat Equation
We shall solve (32) for some important types of boundary and initial
conditions. We begin with the case in which the ends x = 0 and x = L of
the bar are kept at temperature zero. So that we have the boundary
conditions
Here, we must have f (0) = 0 and f (L) = 0 for consistency with (33).
$$\frac{\partial u}{\partial t}=\varphi\dot\psi\quad\text{and}\quad\frac{\partial^2u}{\partial x^2}=\varphi''\psi,$$
where dots denote derivatives with respect to t and primes derivatives with respect to x.
$$\varphi\dot\psi=c^2\varphi''\psi.$$
$$\frac{\varphi''}{\varphi}=\frac{1}{c^2}\frac{\dot\psi}{\psi}.$$
The variables are separated now, the left-hand side depends only on x
and the right-hand side only on t. Hence, both sides must be constant
because if they were variables, then changing one would affect only one
side, leaving the other side unaltered. Thus,
$$\frac{\varphi''}{\varphi}=\frac{1}{c^2}\frac{\dot\psi}{\psi}=-\lambda,$$
where λ is called the "separation constant".
ϕ00 + λϕ = 0 (36)
and
ψ̇ + c 2 λψ = 0. (37)
ϕ(x) := A + Bx,
Now, we turn to the solution of the problem (37) with $\lambda_n:=\left(\frac{\pi}{L}n\right)^2$, i.e.,
$$\dot\psi+\left(c\frac{\pi}{L}n\right)^2\psi=0$$
are solutions of the heat equation (32) satisfying (33). These are the eigenfunctions of the problem corresponding to the eigenvalues $c\frac{\pi}{L}n$ for $n=1,2,\cdots$.
$$C_n:=\frac{2}{L}\int_0^Lf(\xi)\sin\left(\frac{\pi}{L}n\xi\right)\mathrm{d}\xi,\quad n=1,2,\cdots.$$
Example 65
Show that the solution of the heat equation
$$\begin{cases}u_t=\frac{1}{64}u_{xx}&\text{for }0<x<L\text{ and }t>0\\u(0,t)=0\text{ and }u(L,t)=0&\text{for }t>0\\u(x,0)=x(L-x)&\text{for }0\le x\le L\end{cases}$$
is
$$u(x,t):=\frac{4L^2}{\pi^3}\sum_{n=1}^{\infty}\frac{1-(-1)^n}{n^3}\sin\left(\frac{\pi}{L}nx\right)\mathrm{e}^{-\left(\frac{\pi}{8L}n\right)^2t}.$$
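As a numerical sanity check (not part of the notes), the series above can be truncated and compared with the initial condition at t = 0; the choices L = 1 and 200 modes are made here for illustration:

```python
from math import sin, exp, pi

L = 1.0
C = 1 / 8  # the constant c, since c^2 = 1/64

def u(x, t, modes=200):
    """Truncated series solution of Example 65."""
    return sum(4 * L**2 / pi**3 * (1 - (-1)**n) / n**3
               * sin(n * pi * x / L) * exp(-(C * n * pi / L)**2 * t)
               for n in range(1, modes + 1))

print(abs(u(0.3, 0.0) - 0.3 * (L - 0.3)) < 1e-4)  # True: matches x(L - x)
print(u(0.5, 10.0) < u(0.5, 0.0))                 # True: the rod cools down
```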
Example 66
Show that the solution of the heat equation
$$\begin{cases}u_t=\frac{1}{64}u_{xx}&\text{for }0<x<L\text{ and }t>0\\u(0,t)=0\text{ and }u(L,t)=0&\text{for }t>0\\u(x,0)=\sin\left(\frac{\pi}{L}x\right)+3\sin\left(5\frac{\pi}{L}x\right)-8\sin\left(17\frac{\pi}{L}x\right)&\text{for }0\le x\le L\end{cases}$$
is
$$u(x,t):=\sin\left(\frac{\pi}{L}x\right)\mathrm{e}^{-\left(\frac{\pi}{8L}\right)^2t}+3\sin\left(5\frac{\pi}{L}x\right)\mathrm{e}^{-\left(5\frac{\pi}{8L}\right)^2t}-8\sin\left(17\frac{\pi}{L}x\right)\mathrm{e}^{-\left(17\frac{\pi}{8L}\right)^2t}.$$
Example 67
Show that the solution of the heat equation
$$\begin{cases}u_t=\frac{1}{64}u_{xx}&\text{for }0<x<L\text{ and }t>0\\u_x(0,t)=0\text{ and }u_x(L,t)=0&\text{for }t>0\\u(x,0)=1-\cos\left(3\frac{\pi}{L}x\right)&\text{for }0\le x\le L\end{cases}$$
is
$$u(x,t):=1-\cos\left(3\frac{\pi}{L}x\right)\mathrm{e}^{-\left(3\frac{\pi}{8L}\right)^2t}.$$
Wave Equation
$$\frac{\partial^2u}{\partial t^2}=c^2\frac{\partial^2u}{\partial x^2},\tag{42}$$
where $c^2:=\frac{T}{\rho}$. Here, c is the velocity of propagation with a tension of T and a mass density of ρ.
Furthermore, the form of the motion of the string will depend on its initial deflection (the deflection at time t = 0) f(x), and on its initial velocity (the velocity at time t = 0) g(x). Hence, we have the two initial conditions
We will find a solution of the PDE (42) satisfying (43) and (44). The technique applied here is almost the same as the one used for the heat equation.
$$\varphi\ddot\psi=c^2\varphi''\psi,$$
where $\varphi'':=\frac{\mathrm{d}^2\varphi}{\mathrm{d}x^2}$ and $\ddot\psi:=\frac{\mathrm{d}^2\psi}{\mathrm{d}t^2}$. To separate the variables, we divide by $c^2\varphi\psi$, which yields
$$\frac{\varphi''}{\varphi}=\frac{1}{c^2}\frac{\ddot\psi}{\psi}.$$
The left-hand side depends only on x and the right-hand side only on t, so both sides must be equal to a constant, the so-called separation constant, which we write as $-\lambda$, i.e.,
$$\frac{\varphi''}{\varphi}=\frac{1}{c^2}\frac{\ddot\psi}{\psi}=-\lambda.\tag{45}$$
Writing $\lambda=:\mu^2$, (45) becomes
$$\frac{\varphi''}{\varphi}=\frac{1}{c^2}\frac{\ddot\psi}{\psi}=-\mu^2.$$
Multiplication by the denominators gives immediately the two ODEs
ϕ00 + µ2 ϕ = 0 (46)
and
ψ̈ + c 2 µ2 ψ = 0. (47)
Now, we solve (47). For $\mu_n:=\frac{\pi}{L}n$, as just obtained, (47) becomes
$$\ddot\psi+\left(c\frac{\pi}{L}n\right)^2\psi=0$$
Hence, for (50) to satisfy the first condition in (44), Cn ’s must be the
coefficients of the Fourier Sine series of f by Theorem 27, i.e.,
$$C_n:=\frac{2}{L}\int_0^Lf(\xi)\sin\left(\frac{\pi}{L}n\xi\right)\mathrm{d}\xi,\quad n=1,2,\cdots.$$
Hence, for (50) to satisfy the second condition in (44), Dn ’s must be the
coefficients of the Fourier Sine series of g by Theorem 27, i.e.,
$$D_n:=\frac{2}{c\pi n}\int_0^Lg(\xi)\sin\left(\frac{\pi}{L}n\xi\right)\mathrm{d}\xi,\quad n=1,2,\cdots.$$
Example 68
Show that the solution of the wave equation
$$\begin{cases}u_{tt}=4u_{xx}&\text{for }0<x<L\text{ and }t>0\\u(0,t)=0\text{ and }u(L,t)=0&\text{for }t>0\\u(x,0)=x(L-x)\text{ and }u_t(x,0)=x&\text{for }0\le x\le L\end{cases}$$
is
$$u(x,t):=\frac{L^2}{\pi^2}\sum_{n=1}^{\infty}\sin\left(\frac{\pi}{L}nx\right)\left[\frac{4\big(1-(-1)^n\big)}{\pi n^3}\cos\left(2n\frac{\pi}{L}t\right)-\frac{(-1)^n}{n^2}\sin\left(2n\frac{\pi}{L}t\right)\right].$$
Example 69
Show that the solution of the wave equation
$$\begin{cases}u_{tt}=4u_{xx}&\text{for }0<x<L\text{ and }t>0\\u(0,t)=0\text{ and }u(L,t)=0&\text{for }t>0\\u(x,0)=4\sin\left(5\frac{\pi}{L}x\right)\text{ and }u_t(x,0)=3\sin\left(14\frac{\pi}{L}x\right)&\text{for }0\le x\le L\end{cases}$$
is
$$u(x,t):=4\sin\left(5\frac{\pi}{L}x\right)\cos\left(10\frac{\pi}{L}t\right)+\frac{3L}{28\pi}\sin\left(14\frac{\pi}{L}x\right)\sin\left(28\frac{\pi}{L}t\right).$$
Example 70
Show that the solution of the wave equation
$$\begin{cases}u_{tt}=4u_{xx}&\text{for }0<x<L\text{ and }t>0\\u_x(0,t)=0\text{ and }u(L,t)=0&\text{for }t>0\\u(x,0)=2\cos\left(9\frac{\pi}{2L}x\right)\text{ and }u_t(x,0)=3\cos\left(15\frac{\pi}{2L}x\right)&\text{for }0\le x\le L\end{cases}$$
is
$$u(x,t):=2\cos\left(9\frac{\pi}{2L}x\right)\cos\left(9\frac{\pi}{L}t\right)+\frac{L}{5\pi}\cos\left(15\frac{\pi}{2L}x\right)\sin\left(15\frac{\pi}{L}t\right).$$
Laplace Equation
$$\frac{\partial^2u}{\partial x^2}+\frac{\partial^2u}{\partial y^2}=0,\tag{51}$$
which can be obtained from the two-dimensional heat equation
$$\frac{\partial u}{\partial t}=c^2\left(\frac{\partial^2u}{\partial x^2}+\frac{\partial^2u}{\partial y^2}\right)$$
by taking the time as constant, so that $\frac{\partial u}{\partial t}\equiv0$, which leads us to (51).
and
u(0, y ) = 0 and u(L, y ) = 0 for 0 ≤ y ≤ H. (53)
See Figure 29.
ϕ00 + µ2 ϕ = 0
Next, using $\mu_n:=\frac{\pi}{L}n$ in the second ODE, we get
$$\ddot\psi-\left(\frac{\pi}{L}n\right)^2\psi=0$$
are the eigenfunctions for the eigenvalues $\lambda_n:=\left(\frac{\pi}{L}n\right)^2$.
Defining
$$u(x,y):=\sum_{n=1}^{\infty}C_n\sin\left(\frac{\pi}{L}nx\right)\sinh\left(\frac{\pi}{L}ny\right)$$
Example 71
Show that the solution of the Laplace equation
$$\begin{cases}\frac{\partial^2u}{\partial x^2}+\frac{\partial^2u}{\partial y^2}=0&\text{for }0<x,y<2\\u(x,0)=0\text{ and }u(x,2)=\sin\left(3\frac{\pi}{2}x\right)&\text{for }0\le x\le2\\u(0,y)=0\text{ and }u(2,y)=0&\text{for }0\le y\le2\end{cases}$$
is
$$u(x,y):=\frac{1}{\sinh(3\pi)}\sin\left(3\frac{\pi}{2}x\right)\sinh\left(3\frac{\pi}{2}y\right).$$
Example 72
Show that the solution of the Laplace equation
$$\begin{cases}\frac{\partial^2u}{\partial x^2}+\frac{\partial^2u}{\partial y^2}=0&\text{for }0<x,y<2\\u(x,0)=0\text{ and }u(x,2)=0&\text{for }0\le x\le2\\u(0,y)=-\sin\left(\frac{\pi}{2}y\right)\text{ and }u(2,y)=0&\text{for }0\le y\le2\end{cases}$$
is
$$u(x,y):=-\frac{1}{\sinh(\pi)}\sinh\left(\pi-\frac{\pi}{2}x\right)\sin\left(\frac{\pi}{2}y\right).$$
Example 73
Show that the solution of the Laplace equation
\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0 for 0 < x, y < 2
u(x, 0) = 3\sin\bigl(7\frac{\pi}{2}x\bigr) and u(x, 2) = \sin\bigl(3\frac{\pi}{2}x\bigr) for 0 ≤ x ≤ 2
u(0, y) = -\sin\bigl(\frac{\pi}{2}y\bigr) and u(2, y) = 0 for 0 ≤ y ≤ 2
is
u(x, y) := \frac{1}{\sinh(3\pi)}\,\sin\Bigl(3\frac{\pi}{2}x\Bigr)\sinh\Bigl(3\frac{\pi}{2}y\Bigr) + \frac{3}{\sinh(7\pi)}\,\sin\Bigl(7\frac{\pi}{2}x\Bigr)\sinh\Bigl(7\pi - 7\frac{\pi}{2}y\Bigr) - \frac{1}{\sinh(\pi)}\,\sinh\Bigl(\pi - \frac{\pi}{2}x\Bigr)\sin\Bigl(\frac{\pi}{2}y\Bigr).
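Since this solution superposes three one-edge solutions, it is worth verifying that the cross terms really vanish on each edge. A small sketch (variable names and sample points are ours) evaluates u on all four edges of the square and compares with the prescribed data:

```python
import math

def u(x, y):
    # Superposed closed-form solution of Example 73
    return (math.sin(3 * math.pi * x / 2) * math.sinh(3 * math.pi * y / 2)
              / math.sinh(3 * math.pi)
            + 3 * math.sin(7 * math.pi * x / 2)
              * math.sinh(7 * math.pi - 7 * math.pi * y / 2) / math.sinh(7 * math.pi)
            - math.sinh(math.pi - math.pi * x / 2) * math.sin(math.pi * y / 2)
              / math.sinh(math.pi))

x, y = 0.6, 1.4
checks = [
    u(x, 0.0) - 3 * math.sin(7 * math.pi * x / 2),  # bottom edge
    u(x, 2.0) - math.sin(3 * math.pi * x / 2),      # top edge
    u(0.0, y) + math.sin(math.pi * y / 2),          # left edge
    u(2.0, y),                                      # right edge
]
print([abs(c) for c in checks])  # all ~0
```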
Fourier Integral
where
a_0 := \frac{1}{2L}\int_{-L}^{L} f(\xi)\,d\xi,
a_m := \frac{1}{L}\int_{-L}^{L} f(\xi)\cos\Bigl(m\frac{\pi}{L}\xi\Bigr)d\xi, m = 1, 2, \cdots,
b_m := \frac{1}{L}\int_{-L}^{L} f(\xi)\sin\Bigl(m\frac{\pi}{L}\xi\Bigr)d\xi, m = 1, 2, \cdots.
We will try to figure out what happens when L → ∞.
Let \omega := m\frac{\pi}{L}; then \Delta\omega := \frac{\pi}{L}. Thus, we have
f_L(t) = \frac{1}{2L}\int_{-L}^{L} f(\xi)\,d\xi + \sum_{m=1}^{\infty}\left[\Bigl(\frac{1}{L}\int_{-L}^{L} f(\xi)\cos(\omega\xi)\,d\xi\Bigr)\cos(\omega t) + \Bigl(\frac{1}{L}\int_{-L}^{L} f(\xi)\sin(\omega\xi)\,d\xi\Bigr)\sin(\omega t)\right]
= \frac{1}{2L}\int_{-L}^{L} f(\xi)\,d\xi + \frac{1}{\pi}\sum_{m=1}^{\infty}\left[\Bigl(\int_{-L}^{L} f(\xi)\cos(\omega\xi)\,d\xi\Bigr)\cos(\omega t) + \Bigl(\int_{-L}^{L} f(\xi)\sin(\omega\xi)\,d\xi\Bigr)\sin(\omega t)\right]\Delta\omega,    (55)
which implies
\lim_{L\to\infty} \frac{1}{2L}\int_{-L}^{L} f(\xi)\,d\xi = 0.    (56)
where
A(z) := \frac{1}{\pi}\int_{-\infty}^{\infty} f(\xi)\cos(z\xi)\,d\xi \quad\text{and}\quad B(z) := \frac{1}{\pi}\int_{-\infty}^{\infty} f(\xi)\sin(z\xi)\,d\xi.
Example 74
where h > 0, is
f(t) \sim \frac{1}{\pi}\int_{0}^{\infty}\left[3\,\frac{\sin(h\omega)}{\omega}\cos(\omega t) + \frac{1-\cos(h\omega)}{\omega}\sin(\omega t)\right]d\omega \quad\text{for } t \in \mathbb{R}.
By Theorem 29, we see that the Fourier integral converges to the function
g(t) := \begin{cases} \frac{1}{2}, & t = -h \\ 1, & -h < t < 0 \\ \frac{3}{2}, & t = 0 \\ 2, & 0 < t < h \\ 1, & t = h \\ 0, & |t| > h. \end{cases}
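At t = 0 the sine term drops out, and by the Dirichlet integral \int_0^\infty \frac{\sin(h\omega)}{\omega}\,d\omega = \frac{\pi}{2} the Fourier integral should equal 3/2 = g(0). The sketch below (midpoint rule; the truncation point 500 and step count are arbitrary choices of ours) confirms this numerically for h = 1:

```python
import math

def integral_t0(h=1.0, upper=500.0, steps=500_000):
    # Midpoint rule for (1/pi) * integral_0^upper of 3 sin(h w)/w dw,
    # the value of the Fourier integral at t = 0 (the sine term vanishes there).
    # Truncating the oscillatory tail at `upper` costs O(1/upper).
    dw = upper / steps
    s = 0.0
    for k in range(steps):
        w = (k + 0.5) * dw
        s += 3 * math.sin(h * w) / w * dw
    return s / math.pi

print(integral_t0())  # close to 1.5 = g(0), the mean of the jump from 1 to 2
```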
Example 75
where h > 0, is
f(t) \sim \frac{2}{\pi}\int_{0}^{\infty}\frac{\sin(h\omega)}{\omega}\cos(\omega t)\,d\omega \quad\text{for } t \in \mathbb{R}.
By Theorem 29, we see that the Fourier integral converges to the function
g(t) := \begin{cases} 1, & |t| < h \\ \frac{1}{2}, & |t| = h \\ 0, & |t| > h. \end{cases}
Example 76
where h > 0, is
f(t) \sim \frac{2}{\pi}\int_{0}^{\infty}\frac{1-\cos(h\omega)}{\omega}\sin(\omega t)\,d\omega.
By Theorem 29, we see that the Fourier integral converges to the function
g(t) := \begin{cases} -\frac{1}{2}, & t = -h \\ -1, & -h < t < 0 \\ 0, & t = 0 \\ 1, & 0 < t < h \\ \frac{1}{2}, & t = h \\ 0, & |t| > h. \end{cases}
where
B(z) := \frac{2}{\pi}\int_{0}^{\infty} f(\xi)\sin(z\xi)\,d\xi.
Example 77
where α > 0, is
f(|t|) \sim \frac{2}{\pi}\int_{0}^{\infty}\frac{\alpha}{\alpha^2+\omega^2}\cos(\omega t)\,d\omega \quad\text{for } t \in \mathbb{R}.
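For α = 1 and t = 1 this cosine integral evaluates to e^{-1}, by the standard Laplace integral \int_0^\infty \frac{\cos(\omega t)}{1+\omega^2}\,d\omega = \frac{\pi}{2}e^{-|t|}. The sketch below (our helper name; truncation at ω = 1000 is an arbitrary choice) checks that case numerically:

```python
import math

def laplace_kernel_integral(t=1.0, alpha=1.0, upper=1000.0, steps=400_000):
    # Midpoint rule for (2/pi) * integral_0^upper of alpha/(alpha^2 + w^2) cos(w t) dw;
    # the tail beyond `upper` is O(1/upper).
    dw = upper / steps
    s = 0.0
    for k in range(steps):
        w = (k + 0.5) * dw
        s += alpha / (alpha ** 2 + w ** 2) * math.cos(w * t) * dw
    return 2 * s / math.pi

print(laplace_kernel_integral())  # close to exp(-1)
```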
Example 78
where α > 0, is
f(|t|) \sim \frac{1}{\sqrt{\alpha\pi}}\int_{0}^{\infty} e^{-\frac{\omega^2}{4\alpha}}\cos(\omega t)\,d\omega \quad\text{for } t \in \mathbb{R}.
Example 79
where α > 0, is
\operatorname{sgn}(t)\,f(|t|) \sim \frac{2}{\pi}\int_{0}^{\infty}\frac{\omega}{\alpha^2+\omega^2}\sin(\omega t)\,d\omega \quad\text{for } t \in \mathbb{R}.
Now, we will solve some examples by using the notion of Fourier integral.
Example 80
\varphi'' + \lambda\varphi = 0 \quad\text{and}\quad \dot{\psi} + \lambda\psi = 0.
Then, we have
\varphi_\lambda(x) := \begin{cases} a(\lambda)e^{-\mu x} + b(\lambda)e^{\mu x}, & \lambda < 0 \\ a(\lambda) + b(\lambda)x, & \lambda = 0 \\ a(\lambda)\cos(\mu x) + b(\lambda)\sin(\mu x), & \lambda > 0, \end{cases}
\psi_\lambda(t) := c(\lambda)e^{-\lambda t}.
Then, we have
u_\mu(x, t) := \bigl[A(\mu)\cos(\mu x) + B(\mu)\sin(\mu x)\bigr]e^{-\mu^2 t}
for \mu \geq 0.
which yields
A(z) := \frac{1}{\sqrt{\pi}}\,e^{-\frac{z^2}{4}} \quad\text{and}\quad B(z) :\equiv 0.
Therefore,
u(x, t) := \frac{1}{\sqrt{\pi}}\int_{0}^{\infty} e^{-\frac{\omega^2}{4}}\cos(\omega x)\,e^{-\omega^2 t}\,d\omega = \frac{1}{\sqrt{\pi}}\int_{0}^{\infty}\cos(\omega x)\,e^{-\left(t+\frac{1}{4}\right)\omega^2}\,d\omega.
Figure 30: Graph of \frac{e^{-\frac{x^2}{1+4t}}}{\sqrt{1+4t}}.
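The last integral has the closed form plotted in Figure 30, since \int_0^\infty e^{-a\omega^2}\cos(b\omega)\,d\omega = \frac{1}{2}\sqrt{\frac{\pi}{a}}\,e^{-\frac{b^2}{4a}}. The sketch below compares the truncated integral with e^{-x^2/(1+4t)}/\sqrt{1+4t} at one point (helper names and the sample point are ours):

```python
import math

def u_integral(x, t, upper=20.0, steps=20_000):
    # Midpoint rule for (1/sqrt(pi)) * integral_0^upper of cos(w x) e^{-(t+1/4) w^2} dw;
    # the Gaussian factor makes the truncation error negligible.
    dw = upper / steps
    s = 0.0
    for k in range(steps):
        w = (k + 0.5) * dw
        s += math.cos(w * x) * math.exp(-(t + 0.25) * w ** 2) * dw
    return s / math.sqrt(math.pi)

def u_closed(x, t):
    # Closed form shown in Figure 30
    return math.exp(-x ** 2 / (1 + 4 * t)) / math.sqrt(1 + 4 * t)

print(abs(u_integral(1.0, 0.5) - u_closed(1.0, 0.5)))  # ~0
```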
Example 81
\varphi'' + \lambda\varphi = 0 \quad\text{and}\quad \ddot{\psi} - \lambda\psi = 0.
Then, we have
\varphi_\lambda(x) := \begin{cases} a(\lambda)e^{-\mu x} + b(\lambda)e^{\mu x}, & \lambda < 0 \\ a(\lambda) + b(\lambda)x, & \lambda = 0 \\ a(\lambda)\cos(\mu x) + b(\lambda)\sin(\mu x), & \lambda > 0 \end{cases}
and
\psi_\lambda(y) := \begin{cases} c(\lambda)\cos(\mu y) + d(\lambda)\sin(\mu y), & \lambda < 0 \\ c(\lambda) + d(\lambda)y, & \lambda = 0 \\ c(\lambda)e^{-\mu y} + d(\lambda)e^{\mu y}, & \lambda > 0. \end{cases}
Thus,
\varphi_\lambda(x) := \begin{cases} 0, & \lambda < 0 \\ a(\lambda), & \lambda = 0 \\ a(\lambda)\cos(\mu x) + b(\lambda)\sin(\mu x), & \lambda > 0 \end{cases}
and
\psi_\lambda(y) := \begin{cases} c(\lambda)\cos(\mu y) + d(\lambda)\sin(\mu y), & \lambda < 0 \\ c(\lambda), & \lambda = 0 \\ c(\lambda)e^{-\mu y}, & \lambda > 0. \end{cases}
Therefore, we have
u(x, y) := \int_{0}^{\infty}\bigl[A(\omega)\cos(\omega x) + B(\omega)\sin(\omega x)\bigr]e^{-\omega y}\,d\omega,
which yields
A(z) := \frac{2}{\pi}\,\frac{\sin(\pi z)}{z} \quad\text{and}\quad B(z) :\equiv 0.
Therefore, we have
u(x, y) := \frac{2}{\pi}\int_{0}^{\infty}\frac{\sin(\pi\omega)}{\omega}\cos(\omega x)\,e^{-\omega y}\,d\omega.
Figure 31: Graph of \frac{1}{\pi}\left[\arctan\left(\frac{\pi+x}{y}\right) + \arctan\left(\frac{\pi-x}{y}\right)\right].
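The integral matches the closed form plotted in Figure 31 (via \int_0^\infty e^{-c\omega}\frac{\sin(a\omega)}{\omega}\cos(b\omega)\,d\omega = \frac{1}{2}\bigl[\arctan\frac{a+b}{c} + \arctan\frac{a-b}{c}\bigr]). A numerical sketch (midpoint rule; sample point and truncation are arbitrary choices of ours):

```python
import math

def u_integral(x, y, upper=40.0, steps=40_000):
    # Midpoint rule for (2/pi) * integral_0^upper of sin(pi w)/w cos(w x) e^{-w y} dw;
    # e^{-w y} (y > 0) makes the tail negligible, and sin(pi w)/w -> pi as w -> 0.
    dw = upper / steps
    s = 0.0
    for k in range(steps):
        w = (k + 0.5) * dw
        s += math.sin(math.pi * w) / w * math.cos(w * x) * math.exp(-w * y) * dw
    return 2 * s / math.pi

def u_closed(x, y):
    # Closed form shown in Figure 31
    return (math.atan((math.pi + x) / y) + math.atan((math.pi - x) / y)) / math.pi

print(abs(u_integral(0.5, 1.0) - u_closed(0.5, 1.0)))  # ~0
```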
Fourier Transform
f(t) = \frac{1}{\pi}\int_{0}^{\infty}\left[\int_{-\infty}^{\infty} f(\xi)\cos(\omega\xi)\cos(\omega t)\,d\xi + \int_{-\infty}^{\infty} f(\xi)\sin(\omega\xi)\sin(\omega t)\,d\xi\right]d\omega
= \frac{1}{\pi}\int_{0}^{\infty}\int_{-\infty}^{\infty} f(\xi)\cos\bigl(\omega(\xi-t)\bigr)\,d\xi\,d\omega
= \frac{1}{2\pi}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(\xi)\cos\bigl(\omega(\xi-t)\bigr)\,d\xi\,d\omega
= \frac{1}{4\pi}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(\xi)\left[e^{-i\omega(\xi-t)} + e^{i\omega(\xi-t)}\right]d\xi\,d\omega
= \frac{1}{4\pi}\left[\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(\xi)e^{-i\omega(\xi-t)}\,d\xi\,d\omega + \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(\xi)e^{i\omega(\xi-t)}\,d\xi\,d\omega\right].
then
\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} \mathcal{F}\{f\}(\omega)\,e^{i\omega t}\,d\omega = f(t)
or equivalently
\mathcal{F}\bigl(\mathcal{F}\{f\}\bigr)(-t) = f(t).
That is, if F is the Fourier transform of a function f, then applying the Fourier transform to F and evaluating at -t recovers f(t).
Example 82
where h > 0, is
\mathcal{F}\{f\}(z) = \sqrt{\frac{2}{\pi}}\,\frac{\sin(hz)}{z}.
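Assuming f is the indicator of (−h, h), as in the earlier integral examples, this transform can be confirmed by direct quadrature in the symmetric convention \mathcal{F}\{f\}(z) = \frac{1}{\sqrt{2\pi}}\int f(t)e^{-izt}\,dt that matches the inversion formula above. A sketch (helper name and the values h = 1, z = 2 are ours):

```python
import cmath
import math

def ft_indicator(z, h=1.0, steps=20_000):
    # Midpoint rule for (1/sqrt(2*pi)) * integral_{-h}^{h} of e^{-i z t} dt,
    # the Fourier transform of the indicator of (-h, h).
    dt = 2 * h / steps
    s = 0j
    for k in range(steps):
        t = -h + (k + 0.5) * dt
        s += cmath.exp(-1j * z * t) * dt
    return s / math.sqrt(2 * math.pi)

z = 2.0
exact = math.sqrt(2 / math.pi) * math.sin(z) / z  # claimed closed form, h = 1
print(abs(ft_indicator(z) - exact))  # ~0
```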
Example 83
Show that
\mathcal{F}\bigl\{e^{-\alpha|\cdot|}\bigr\}(z) = \sqrt{\frac{2}{\pi}}\,\frac{\alpha}{\alpha^2+z^2},
where α > 0.
!!! Insert Graphic Here!!!
Example 84
Show that
\mathcal{F}\bigl\{e^{-\alpha(\cdot)^2}\bigr\}(z) = \frac{1}{\sqrt{2\alpha}}\,e^{-\frac{z^2}{4\alpha}},
where α > 0.
!!! Insert Graphic Here!!!
Theorem 32
The Fourier transform is linear, i.e.,
Theorem 33
Let f be a piecewise differentiable function such that limt→±∞ f (t) = 0.
Then,
\mathcal{F}\{f'\}(z) = iz\,\mathcal{F}\{f\}(z).
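This rule can be illustrated numerically with the Gaussian f(t) = e^{-t^2}, which decays at ±∞ as the theorem requires. The sketch below (our quadrature helper) computes both \mathcal{F}\{f'\}(z) and iz\,\mathcal{F}\{f\}(z) by the midpoint rule, and also compares \mathcal{F}\{f\}(z) with the closed form of Example 84 for α = 1:

```python
import cmath
import math

def ft(g, z, a=-10.0, b=10.0, steps=40_000):
    # Midpoint-rule Fourier transform (1/sqrt(2*pi)) * integral of g(t) e^{-i z t} dt;
    # [-10, 10] suffices for Gaussian decay.
    dt = (b - a) / steps
    s = 0j
    for k in range(steps):
        t = a + (k + 0.5) * dt
        s += g(t) * cmath.exp(-1j * z * t) * dt
    return s / math.sqrt(2 * math.pi)

f = lambda t: math.exp(-t ** 2)            # vanishes as t -> +-infinity
fp = lambda t: -2 * t * math.exp(-t ** 2)  # f'

z = 1.5
print(abs(ft(fp, z) - 1j * z * ft(f, z)))  # ~0: the derivative rule holds
```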
Definition 39 (Convolution)
Theorem 34
The integral
\mathcal{F}^{-1}\{F\}(t) := \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} F(\omega)\,e^{i\omega t}\,d\omega.
Theorem 35
Example 85
U_t + z^2 U = 0,