Mathematical Methods in Physics (Part I)
Dr Cristina Zambon
12 November 2021
Despite the effort to eliminate all typographic errors, some may still be present, so be careful. Note that this summary is intended as a guideline for the material covered in lectures and is not supposed to replace the textbook.
Introduction
Before engaging with the lectures, ensure that you are familiar with the mathematical concepts that follow. This material will not be covered in any detail during lectures.
a · b = |a||b| cos θ = a1 b1 + a2 b2 + a3 b3 .
i) |a|2 = a · a.
ii) a · b = b · a.
iii) a · (b + c) = a · b + a · c, a · (β b) = β a · b, where β is a scalar.
iv) If the scalar product of two vectors is zero, then the vectors are perpendicular.
a × (b × c) = (a · c)b − (a · b)c.
Matrix operations:
|A| = Ajk Cjk , for any row j, |A| = Akj Ckj , for any column j,
where Cmn = (−1)m+n |Amn | is the cofactor associated to the matrix element Amn .
In turn, |Amn | is the minor associated to the matrix element Amn . The minor is the
determinant of the matrix obtained by removing the m-th row and n-th column from
the matrix A.
Properties:
i) |AB . . . F | = |A||B| . . . |F |.
ii) |AT | = |A|, |A∗ | = |A|∗ , |A† | = |A|∗ , |A−1 | = |A|−1 .
iii) If the rows (or the columns) are linearly dependent, then |A| = 0.
iv) If B is obtained from A by multiplying the elements of any row (or column) by a factor α, then |B| = α |A|.
v) If B is obtained from A by interchanging two rows (or columns), then |B| =
−|A|.
vi) If B is obtained from A by adding k times one row (or column) to the other
row (or column), then |A| = |B|.
A−1 = CT /|A|, that is (A−1)ij = Cji /|A|, A−1 A = AA−1 = I,
where C is the cofactor matrix and I the identity matrix (Iij = δij ). If |A| = 0 the
inverse does not exist and the matrix A is said to be singular.
Note that in order to find the inverse of a matrix, you can also use the Gauss-Jordan
method shown in the lectures, which makes use of the elementary row operations.
Chapter 1
Vector Algebra in R3
Summation convention: a repeated index implies a sum over that index. For example:
(i) aij bjk ≡ Σ_{j=1}^{3} aij bjk = ai1 b1k + ai2 b2k + ai3 b3k.
(ii) aij bjk ck ≡ Σ_{j=1}^{3} Σ_{k=1}^{3} aij bjk ck = Σ_{j=1}^{3} (aij bj1 c1 + aij bj2 c2 + aij bj3 c3)
= (ai1 b11 c1 + ai1 b12 c2 + ai1 b13 c3) + (ai2 b21 c1 + ai2 b22 c2 + ai2 b23 c3)
+ (ai3 b31 c1 + ai3 b32 c2 + ai3 b33 c3).
Let us introduce two mathematical objects that can be used in the context of the summa-
tion convention:
Kronecker delta:
δij = 1 if i = j, 0 if i ̸= j, with i, j = 1, 2, 3.
Note that this object is symmetric.
(iii) a · b = ai bi = δij ai bj .
Levi-Civita symbol:
ϵijk = 1 if (i, j, k) = (1, 2, 3), (2, 3, 1) or (3, 1, 2),
ϵijk = −1 if (i, j, k) = (1, 3, 2), (3, 2, 1) or (2, 1, 3),
ϵijk = 0 otherwise.
a · (b × c) = ϵijk ai bj ck = a1 b2 c3 + a2 b3 c1 + a3 b1 c2 − a1 b3 c2 − a3 b2 c1 − a2 b1 c3
= det [ a1 a2 a3 ; b1 b2 b3 ; c1 c2 c3 ].
(iv) ϵijk ϵilm = 3δjl δkm + δil δjm δki + δim δji δkl − δil δji δkm − 3δjm δkl − δim δjl δki
= δjl δkm − δjm δkl .
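This contraction identity can be verified by brute force. Below is a minimal Python sketch; the eps and delta helpers are ad hoc, written for 0-based indices:

import itertools

def eps(i, j, k):
    # Levi-Civita symbol for indices 0, 1, 2: returns +1, -1 or 0.
    return (j - i) * (k - i) * (k - j) // 2

def delta(i, j):
    # Kronecker delta.
    return 1 if i == j else 0

# Check eps_ijk eps_ilm = delta_jl delta_km - delta_jm delta_kl
# for every choice of the free indices j, k, l, m.
for j, k, l, m in itertools.product(range(3), repeat=4):
    lhs = sum(eps(i, j, k) * eps(i, l, m) for i in range(3))
    rhs = delta(j, l) * delta(k, m) - delta(j, m) * delta(k, l)
    assert lhs == rhs
print("epsilon-delta identity verified")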
a · b = |a||b| cos θ = a1 b1 + a2 b2 + a3 b3 = ai bi .
i) |a|2 = a · a.
ii) a · b = b · a.
iii) a · (b + c) = a · b + a · c, a · (β b) = β a · b, where β is a scalar.
b = b∥ + b⊥, b∥ = (|b| cos θ) a/|a| = (a · b) a/|a|².
Equation of a line.
Given a point A with position vector a located on a line having a direction b̂, a
generic point R on the same line with position vector r is given by
r = a + λb̂, r = (x, y, z)T,
where λ is a scalar. Note that the same equation can be also written as follows
(r − a) × b̂ = 0.
Figure 1.2: Line passing through the point A and having a direction b̂.
Equation of a plane.
Figure 1.3: Plane perpendicular to the unit vector n̂ and passing through the point A.
Chapter 2
Vector Spaces
Linear vector space. A set of objects called vectors forms a vector space V if there
are two operations defined on the elements of the set called addition and multiplication by
scalars, which obey the following simple rules (the axioms of the vector space):
We say that the vector space V is closed with respect to addition and scalar multipli-
cation.
iv) There exists an inverse element −v such that v + (−v) = 0 for all v.
v) u + v = v + u.
where u, v, w are vectors and α, β are scalars. If the scalars α are real V is called a real
vector space, otherwise V is called a complex vector space.
(i) R3 . Yes.
(iv) The set of real functions f (x) with no restriction on the values of x and with
the usual addition and scalar multiplication. Yes.
(v) The set of matrices of size (n × m) with real entries and with the usual addition
and scalar multiplication. Yes.
(vi) The set of 2-dimensional vectors with real entries and the usual addition, but with the following definition of scalar multiplication:
α (x, y)T = (αx, 0)T. No.
(vii) The set of solutions of the following second order, linear, homogeneous differ-
ential equation
d2 f df
p(x) 2 + q(x) + r(x) f = 0,
dx dx
where p, q, r are given functions. Yes.
(viii) The set of vectors u = (x, y, z)T in the 3-dimensional space for which
2x − 3y + 11z + 2 = 0.
No.
Linear combinations:
α1 v1 + α2 v2 + · · · + αk vk ≡ αi vi, where the αi are scalars.
Span: the span of a set of vectors is the set of all their linear combinations.
(i) What is the span of a single vector in R3? It is the set of all scalar multiples of this vector, i.e. a line in the direction of the vector.
A set of vectors v1, v2, · · · , vk is linearly independent if the equation
α1 v1 + α2 v2 + · · · + αk vk = 0
is satisfied if and only if all αi = 0. Otherwise, the vectors are said to be linearly dependent. That is, they are linearly dependent if the expression above is satisfied with at least one αi ̸= 0.
Example 3 Indicate whether the following sets of vectors are linearly dependent
or independent.
(i)
v1 = (0, 1, 1)T, v2 = (0, 1, 2)T, v3 = (1, 1, −1)T.
By definition: α1 v1 + α2 v2 + α3 v3 = (α3, α1 + α2 + α3, α1 + 2α2 − α3)T = 0.
This implies α1 = α2 = α3 = 0. Hence the vectors are linearly independent.
(ii)
v1 = (−2, 0, 1)T, v2 = (1, 1, 1)T, v3 = (0, 2, 3)T.
We can see that v3 = v1 + 2v2. Hence the vectors are linearly dependent.
(iii)
{1 + x + x2 , 1 − x + 3x2 , 1 + 3x − x2 }.
By definition: α1 (1 + x + x2) + α2 (1 − x + 3x2) + α3 (1 + 3x − x2) =
x2 (α1 + 3α2 − α3) + x (α1 − α2 + 3α3) + (α1 + α2 + α3) = 0.
It follows that α1 = −2α2, α2 = α3, so non-zero solutions exist. Hence the ‘vectors’ are linearly dependent.
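A quick way to check such results numerically: the vectors are linearly independent exactly when the matrix having them as columns has full rank. A short numpy sketch for sets (i) and (ii):

import numpy as np

# Columns are the vectors of Example 3 (i) and (ii).
vi = np.array([[0, 0, 1],
               [1, 1, 1],
               [1, 2, -1]])
vii = np.array([[-2, 1, 0],
                [0, 1, 2],
                [1, 1, 3]])
print(np.linalg.matrix_rank(vi))   # 3 -> linearly independent
print(np.linalg.matrix_rank(vii))  # 2 -> linearly dependent (v3 = v1 + 2 v2)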
Basis: A basis is a minimal set of vectors that span a vector space. In other words,
a set of vectors v1 , v2 , · · · vk , in V is called a basis of V if and only if v1 , v2 , · · · vk ,
are linearly independent and V = Span(v1 , v2 , · · · vk ). Then
i) The number of vectors in a basis is called the dimension of the space V (dim V ).
ii) If the set {v1 , v2 , · · · , vk } is a basis of the vector space V , then any vector v
in V can be written as a unique linear combination of the basis vectors and
the coefficients of the unique linear combination are called the components of v
with respect to that basis.
(i) R3 .
Basis: the set of vectors in Example 3 (i). Dim 3.
Dim 6.
(iii) The set of polynomials of degree two or less with real coefficients.
Basis: {1, x, x2 }. Dim 3.
Inner (or scalar) product: Consider a vector space V . The inner product between
two elements of V is a scalar function denoted ⟨u|v⟩ that satisfies the following
properties
i) ⟨u|v⟩ = ⟨v|u⟩∗ .
ii) ⟨u|(λv + µw)⟩ = λ⟨u|v⟩ + µ⟨u|w⟩, λ, µ scalars.
iii) ⟨u|u⟩ > 0 if u ̸= 0.
The length of a vector (norm) is |u| = √⟨u|u⟩. Two vectors are orthogonal if ⟨u|w⟩ = 0.
(i) In R3 :
⟨u|v⟩ = uT · v.
Take u = (u1, u2, u3)T, v = (v1, v2, v3)T.
For the first property: ⟨u|v⟩ = uT · v = (u1 v1 + u2 v2 + u3 v3 ) = vT · u =
(vT · u)∗ = ⟨v|u⟩∗ . Similar procedure for the other properties.
(ii) In C3 :
⟨u|v⟩ = u† · v.
Take u = (u1, u2, u3)T, v = (v1, v2, v3)T.
For the first property: ⟨u|v⟩ = u† · v = (u1∗ v1 + u2∗ v2 + u3∗ v3) = ((u1 v1∗ + u2 v2∗ + u3 v3∗))∗ = (v† · u)∗ = ⟨v|u⟩∗. Similar procedure for the other properties.
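The C3 inner product and its first property are easy to check numerically; np.vdot conjugates its first argument, matching the u† · v convention:

import numpy as np

u = np.array([1 + 2j, 0.5, -1j])
v = np.array([2j, 1 - 1j, 3.0])
uv = np.vdot(u, v)                     # u† · v
vu = np.vdot(v, u)                     # v† · u
print(np.isclose(uv, np.conj(vu)))     # True: <u|v> = <v|u>*
print(np.vdot(u, u).real > 0)          # True: <u|u> > 0 for u != 0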
Chapter 3
Matrices
Matrix operations:
|A| = Ajk Cjk , for any row j, |A| = Akj Ckj , for any column j,
where Cmn = (−1)m+n |Amn | is the cofactor associated to the matrix element Amn .
In turn, |Amn | is the minor associated to the matrix element Amn . The minor is the
determinant of the matrix obtained by removing the m-th row and n-th column from
the matrix A.
Properties:
i) |AB . . . F | = |A||B| . . . |F |.
A−1 = CT /|A|, that is (A−1)ij = Cji /|A|, A−1 A = AA−1 = I,
where C is the cofactor matrix and I the identity matrix (Iij = δij ). If |A| = 0 the
inverse does not exist and the matrix A is said to be singular.
Note that in order to find the inverse of a matrix, you can also use the Gauss-Jordan
method, which makes use of the elementary row operations.
The trace of a square matrix: It is the sum of the diagonal elements of the matrix, i.e.
Tr A = Σ_k Akk ≡ Akk.
Properties:
i) Symmetric matrices: AT = A.
Anti-symmetric or skew-symmetric matrices: AT = −A.
Any matrix can be written as the sum of a symmetric matrix and an antisymmetric matrix:
A = (1/2)(A + AT) + (1/2)(A − AT).
Similarly, any matrix can be written as the sum of a hermitian matrix and an anti-hermitian matrix:
A = (1/2)(A + A†) + (1/2)(A − A†).
σ1 = [ 0 1 ; 1 0 ], σ2 = [ 0 −i ; i 0 ], σ3 = [ 1 0 ; 0 −1 ].
Show that the Pauli matrices, together with the identity matrix, I, form a basis
for the vector space of the (2 × 2) hermitian matrices.
A general element of the vector space is:
ασ1 + βσ2 + γσ3 + δI = [ γ + δ   α − iβ ; α + iβ   −γ + δ ],
where α, β, γ, δ are real. This is the general form of a hermitian matrix, since it can be rewritten as
[ a   c ; c∗   b ],
with a = γ + δ and b = −γ + δ real, and c = α − iβ complex.
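Since Tr(σi σj) = 2δij and the Pauli matrices are traceless, the coefficients can be projected out as α = Tr(Hσ1)/2, and so on. A short numpy sketch (the test matrix H is an arbitrary hermitian example):

import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

H = np.array([[2.0, 1 - 3j], [1 + 3j, -0.5]])        # hermitian test matrix
basis = (s1, s2, s3, I2)
coeffs = [np.trace(H @ b).real / 2 for b in basis]   # alpha, beta, gamma, delta
rebuilt = sum(c * b for c, b in zip(coeffs, basis))
print(np.allclose(rebuilt, H))   # True: {sigma_i, I} spans the hermitian matrices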
Chapter 4
The Eigenvalue Problem
Are there any vectors x ̸= 0 which are transformed by a matrix A into multiples of themselves?
In other words: For which vectors x and scalar λ is the following eigenvalue equation
Ax = λx
satisfied?
iii) The eigenvalue equation, being a set of homogeneous linear equations, has a non-trivial solution if and only if |A − λI| = 0.
iv) The eigenvalues of the matrix A are the roots of the characteristic polynomial.
v) The eigenvectors associated to the eigenvalue µ are the vectors x such that
(A − µ I)x = 0.
A = [ 1 2 1 ; 2 1 1 ; 1 1 2 ].
|A − λI| = det [ 1−λ  2  1 ; 2  1−λ  1 ; 1  1  2−λ ] = (1 − λ)²(2 − λ) − 2(1 − λ) − 4(1 − λ) = 0.
Hence λ1 = 1, λ2 = 4, λ3 = −1.
For λ1 = 1:
(A − I)x = [ 0 2 1 ; 2 0 1 ; 1 1 1 ] (x1, x2, x3)T = (2x2 + x3, 2x1 + x3, x1 + x2 + x3)T = 0.
Hence x = (x1, x1, −2x1)T. A possible eigenvector is: x1 = (1, 1, −2)T.
For λ2 = 4 solve (A − 4I)x2 = 0 → x2 = (1, 1, 1)T.
For λ3 = −1 solve (A + I)x3 = 0 → x3 = (1, −1, 0)T.
If a matrix has an eigenvalue equal to zero, then the matrix is singular since its
determinant is zero.
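The worked example can be confirmed with numpy, which returns normalised eigenvectors as the columns of the second output:

import numpy as np

A = np.array([[1, 2, 1],
              [2, 1, 1],
              [1, 1, 2]])
evals, evecs = np.linalg.eig(A)
print(np.sort(evals.real.round(6)))     # [-1.  1.  4.]
for lam, x in zip(evals, evecs.T):
    print(np.allclose(A @ x, lam * x))  # True for each eigenpair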
A = [ −2 2 −3 ; 2 1 −6 ; −1 −2 0 ].
In addition, they can always be chosen in such a way that they form a mutually orthogonal set.
Similar matrices: Two (n × n) matrices A and A′ are said to be similar if there exists an invertible matrix S such that
A′ = S −1 AS.
The two matrices represent the same linear operator in different bases. The two bases
are related by the matrix S. In fact:
with x′ = (x′1, x′2, · · · , x′n)T. If the two bases are related by a matrix S,
e′i = Σ_{j=1}^{n} Sji ej,
then the two representations for the vector x are related by x = S x′, since
x = Σ_{j=1}^{n} xj ej = Σ_{i=1}^{n} x′i e′i = Σ_{j=1}^{n} ( Σ_{i=1}^{n} Sji x′i ) ej.
Consider now a linear operator A and the relation y = Ax. In the representation
associated with the basis {e1 , e2 , · · · , en }, it becomes y = A x. On the other hand,
in the basis {e′ 1 , e′ 2 , · · · , e′ n }, it is:
S y ′ = A S x′ → y ′ = S −1 AS x′ ,
hence A′ = S −1 AS.
A = [ 1 0 3 ; 0 −2 0 ; 3 0 1 ].
Theorem: Two (n × n) matrices have the same set of eigenvectors if and only if they
commute.
If [A, B] = 0 and x is an eigenvector of A with eigenvalue λ, then
A(Bx) = AB x = BA x = λ Bx,
so Bx is again an eigenvector of A with eigenvalue λ, i.e. (for non-degenerate λ) Bx is proportional to x and x is also an eigenvector of B.
Conversely, suppose A and B have a common set of eigenvectors xi, with A xi = λ(i) xi and B xi = µ(i) xi. Then a vector z in the vector space spanned by the set of eigenvectors can be written as z = Σ_{i=1}^{n} ci xi. Consider the two expressions below:
AB z = AB Σ_{i=1}^{n} ci xi = Σ_{i=1}^{n} ci µ(i) λ(i) xi,
BA z = BA Σ_{i=1}^{n} ci xi = Σ_{i=1}^{n} ci λ(i) µ(i) xi.
Subtract them and you get (AB − BA) z = 0 for an arbitrary z. Hence [A, B] = 0.
Note that, if the eigenvalues of one of the matrices are degenerate, then not all eigenvectors of one matrix are necessarily eigenvectors of the other one. However, provided that by taking linear combinations a set of common eigenvectors can be found, the result above still applies.
ii) Exponential of A:
eA = Σ_{n=0}^{∞} An/n!,
then, if A is diagonalisable, A = SDS−1 with D diagonal, and
eA = e^{SDS−1} = Σ_{n=0}^{∞} (SDS−1)n/n! = S eD S−1.
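A numerical sketch of this result, comparing scipy's matrix exponential (which effectively sums the series) against S eD S−1 for a diagonalisable (here symmetric) matrix:

import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])                  # symmetric, hence diagonalisable
evals, S = np.linalg.eigh(A)                # A = S diag(evals) S^T
via_diagonalisation = S @ np.diag(np.exp(evals)) @ S.T
print(np.allclose(expm(A), via_diagonalisation))   # True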
i) Show that U has the form U = eiH for some hermitian matrix H.
Since U is unitary, it is normal and can be diagonalised, U = SDS†, with
D = diag(e^{iθ1}, e^{iθ2}, . . . , e^{iθn}) and e^{iθi} = λi.
Then U = S e^{iΛ} S† with Λ = diag(θ1, θ2, . . . , θn).
It follows that U = e^{iSΛS†}, where SΛS† ≡ H is a hermitian matrix.
ii) Show that |U| = e^{iTr H}.
Diagonalise H: H = SDS† with
D = diag(λ1, λ2, . . . , λn), and e^{iD} = diag(e^{iλ1}, e^{iλ2}, . . . , e^{iλn}).
Then
|U| = |S e^{iD} S†| = |e^{iD}| = Π_{j=1}^{n} e^{iλj} = e^{i Σ_j λj} = e^{iTr D} = e^{iTr H}.
Chapter 5
Fourier Series
A periodic function f(x) with period L can be represented by the Fourier series
f(x) = a0/2 + Σ_{r=1}^{∞} [ ar cos(2πrx/L) + br sin(2πrx/L) ],
with coefficients
ar = (2/L) ∫_{x0}^{x0+L} f(x) cos(2πrx/L) dx, br = (2/L) ∫_{x0}^{x0+L} f(x) sin(2πrx/L) dx,
where x0 is an arbitrary point along the x-axis. In order to guarantee that the series converges, the function f(x) must satisfy the Dirichlet conditions in the interval L:
i) The function is single-valued.
ii) The function f (x) has a finite number of extreme points (maxima and minima).
iii) The function f (x) has a finite number of finite discontinuities.
Counterexample: the function sin(1/x) cannot be represented by means of a Fourier series.
[Figure: f(x) = sin(1/x) plotted for −0.6 ≤ x ≤ 0.6; it has infinitely many extrema near x = 0.]
The set of all periodic functions on the interval L that can be represented by Fourier
series forms a vector space:
i) Basis:
cos(2πrx/L), r = 0, 1, 2, 3, . . . ; sin(2πrx/L), r = 1, 2, 3, . . .
iii) If x1 is a point of discontinuity for the function f(x) in the interval L, then the value of the Fourier series at that point is:
f(x1) = ( f(x1−) + f(x1+) )/2,
where f(x1−) and f(x1+) are the left and right limits of the function at x1, respectively.
Example 1 Calculate the Fourier series for the function sketched in the figure above.
The function is even, hence the coefficients br = 0. The interval is L = 2π. Consider the function between −π and π; then
f(x) = −x if −π ≤ x ≤ 0, x if 0 ≤ x ≤ π.
ar = (1/π) ∫_{−π}^{0} (−x) cos rx dx + (1/π) ∫_{0}^{π} x cos rx dx = (2/π) ∫_{0}^{π} x cos rx dx = 2((−1)^r − 1)/(π r²).
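A numerical check of these coefficients, assuming in addition a0/2 = π/2 (the average value of f over one period): the truncated series should converge to f(x) = |x| on (−π, π).

import numpy as np

x = np.linspace(-np.pi, np.pi, 7)
r = np.arange(1, 2001)
ar = 2.0 * ((-1.0) ** r - 1.0) / (np.pi * r**2)
series = np.pi / 2 + (ar * np.cos(np.outer(x, r))).sum(axis=1)
print(np.max(np.abs(series - np.abs(x))))   # small: truncation error only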
f(x) = −1 if −π < x < 0, 1 if 0 < x < π.
The function is odd, hence the coefficients ar = 0. The interval is L = 2π. The Fourier coefficients are:
br = (1/π) ∫_{−π}^{0} (−1) sin rx dx + (1/π) ∫_{0}^{π} sin rx dx = (2/π) ∫_{0}^{π} sin rx dx = 2(1 − (−1)^r)/(π r).
Consider a function defined only on a finite interval L. Then, in order to calculate its Fourier series, we need to extend the function over the whole x-axis; in other words, we need to consider a periodic extension of the original function. The Fourier series of any extension is a representation of the original function on the finite interval L. However, continuous extensions are normally preferable, because they allow us to avoid the Gibbs phenomenon at the points of discontinuity (see page 421 in Riley).
f(x) = x² if 0 < x < 2, 0 otherwise.
All extensions below provide good representations of the function f (x) in the
interval 0 ≤ x ≤ 2.
32 CHAPTER 5. FOURIER SERIES
Fourier series evaluated at specific points can be used to calculate series of constants. For instance, consider the square-wave series
f(x) = (4/π) [ sin x + (1/3) sin 3x + (1/5) sin 5x + · · · ].
Evaluating it at x = π/2, where f(π/2) = 1, gives
f(π/2) = (4/π) Σ_{n=0}^{∞} (−1)^n/(2n + 1), hence Σ_{n=0}^{∞} (−1)^n/(2n + 1) = π/4.
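A two-line numerical confirmation that the partial sums approach π/4:

import numpy as np

n = np.arange(200000)
print(np.sum((-1.0) ** n / (2 * n + 1)))   # ~0.785398...
print(np.pi / 4)                           # 0.785398163...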
Given a Fourier series, integration and differentiation can be used to obtain Fourier series for other functions. However, while integration is always a safe operation, in the sense that convergence of the new series is always guaranteed, differentiation is not, since an additional power of r in the numerator reduces the rate of convergence of the new series.
f(x) = x² if 0 ≤ x ≤ 2, 0 otherwise.
Write its Fourier series and by using the operations of integration and differ-
entiation find the Fourier series of the function
g(x) = x³ if 0 ≤ x ≤ 2, 0 otherwise.
Integrate f(x):
(4/3) x + 32 Σ_{r=1}^{∞} ((−1)^r/(πr)³) sin(πrx/2) + c = x³/3 (0 ≤ x ≤ 2),
where c is the constant of integration. We can replace the result for x into this expression, then we get
x³ = −16 Σ_{r=1}^{∞} ((−1)^r/(πr)) sin(πrx/2) + 96 Σ_{r=1}^{∞} ((−1)^r/(πr)³) sin(πrx/2) + c′ (0 ≤ x ≤ 2).
Since g(0) = 0, c′ = 0 and the expression above becomes the Fourier series of
the function g(x).
Complex Fourier series: Using some manipulations, Fourier series can be written in a
complex form. Writing trigonometric functions by means of exponentials we have
f(x) = a0/2 + Σ_{r=1}^{∞} [ ar cos(2πrx/L) + br sin(2πrx/L) ]
= a0/2 + Σ_{r=1}^{∞} [ e^{i2πrx/L} (ar/2 + br/(2i)) + e^{−i2πrx/L} (ar/2 − br/(2i)) ].
Set (ar − ibr )/2 ≡ cr . Then, since ar = a−r and br = −b−r we get (ar + ibr )/2 ≡ c−r . It
follows that
f(x) = a0/2 + Σ_{r=1}^{∞} [ cr e^{i2πrx/L} + c−r e^{−i2πrx/L} ] = Σ_{r=−∞}^{∞} cr e^{i2πrx/L}.
Chapter 6
Integral Transforms
Given I such that I[f] = g, the inverse operator I−1 is also a linear operator and I−1[g] = f.
There are several types of integral transforms. We are going to discuss the Fourier and the Laplace transforms.
The Fourier transform of a function f(t), such that ∫_{−∞}^{∞} |f(t)| dt is finite, is defined as
F[f(t)](ω) ≡ fˆ(ω) = (1/√(2π)) ∫_{−∞}^{∞} f(t) e^{−iωt} dt,
and its inverse is
F−1[fˆ(ω)](t) = f(t) = (1/√(2π)) ∫_{−∞}^{∞} fˆ(ω) e^{iωt} dω.
There are different ways to write Fourier transforms. We will stick to the notation above. However, be aware that you could encounter the following forms as well:
i) fˆ(ω) = ∫_{−∞}^{∞} f(t) e^{−iωt} dt, f(t) = (1/2π) ∫_{−∞}^{∞} fˆ(ω) e^{iωt} dω,
ii) fˆ(ω) = (1/√(2π)) ∫_{−∞}^{∞} f(t) e^{iωt} dt, f(t) = (1/√(2π)) ∫_{−∞}^{∞} fˆ(ω) e^{−iωt} dω,
iii) fˆ(ν) = ∫_{−∞}^{∞} f(t) e^{−i2πνt} dt, f(t) = ∫_{−∞}^{∞} fˆ(ν) e^{i2πνt} dν.
There are functions that are not periodic. Hence, we cannot use Fourier series. However,
we could use Fourier transforms and think of these functions as periodic functions with a
period that is infinite. Consider the complex Fourier series with period L:
f(t) = Σ_{n=−∞}^{∞} cn e^{i2πnt/L} = Σ_{n=−∞}^{∞} [ (1/L) ∫_{−L/2}^{L/2} f(t) e^{−2πint/L} dt ] e^{i2πnt/L}.
In the limit L → ∞, with ω = 2πn/L, the sum becomes an integral and
f(t) = ∫_{−∞}^{∞} (dω/√(2π)) [ (1/√(2π)) ∫_{−∞}^{∞} f(t) e^{−itω} dt ] e^{itω} = (1/√(2π)) ∫_{−∞}^{∞} fˆ(ω) e^{itω} dω.
Example 1 Calculate the complex Fourier series for the following periodic function,
f(t) = 0 for −L/2 < t < −a/2, 1 for −a/2 < t < a/2, 0 for a/2 < t < L/2, with L > a,
and the Fourier transform for the following non-periodic function,
g(t) = 1 for −a/2 < t < a/2, 0 otherwise.
Then compare |cn | (from the Fourier series of f (t)) and |ĝ(ω)| by sketching them on
the same x-axis.
The coefficients for the complex Fourier series of f(t) are cn = (a/L) sin(nπa/L)/(nπa/L) for n ̸= 0, and c0 = a/L. Then, the Fourier series is:
f(t) = Σ_{n=−∞}^{∞} (a/L) ( sin(nπa/L)/(nπa/L) ) e^{i2πnt/L}.
ĝ(ω) = (1/√(2π)) ∫_{−a/2}^{a/2} e^{−iωt} dt = (1/√(2π)) [ e^{−iωt}/(−iω) ]_{−a/2}^{a/2} = (a/√(2π)) sin(aω/2)/(aω/2).
Figure 6.1: Example 1: comparison between the discrete and the continuous spectrum.
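Comparing the two results: at the discrete frequencies ωn = 2πn/L both expressions involve the same sinc factor, so cn = (√(2π)/L) ĝ(ωn). A numpy sketch of this relation, with illustrative values a = 1, L = 5:

import numpy as np

a, L = 1.0, 5.0
n = np.arange(-20, 21)
arg = n * np.pi * a / L
cn = np.where(n == 0, a / L, (a / L) * np.sin(arg) / (arg + (n == 0)))
omega = 2 * np.pi * n / L
half = a * omega / 2
ghat = (a / np.sqrt(2 * np.pi)) * np.where(n == 0, 1.0,
                                           np.sin(half) / (half + (n == 0)))
print(np.allclose(cn, np.sqrt(2 * np.pi) * ghat / L))   # True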
Scaling:
F[f(at)](ω) = (1/|a|) fˆ(ω/a), where a ̸= 0.
Translation:
F[f(t + a)](ω) = e^{iaω} fˆ(ω), a constant.
Exponential multiplication:
F[e^{iαt} f(t)](ω) = fˆ(ω − α), α constant.
Example 2 Compute F[f(t/2) cos αt](ω).
F[f(t/2) cos αt](ω) = (1/2)F[f(t/2) e^{iαt}](ω) + (1/2)F[f(t/2) e^{−iαt}](ω).
Using exponential multiplication and then scaling, with g(t) = f(t/2) and ĝ(ω) = 2fˆ(2ω), we get
F[f(t/2) cos αt](ω) = (1/2)ĝ(ω − α) + (1/2)ĝ(ω + α) = F[f(t)](2(ω − α)) + F[f(t)](2(ω + α)).
We are now going to introduce a new operation between two functions, called convolution.
This is used, for instance, in digital signal processing where two signals are combined to
form a third signal. The Fourier transforms provide a way to analyse the spectrum of the
signal involved.
The convolution of two functions f and g over the interval (−∞, ∞) is a function h
defined as follows:
h(y) = ∫_{−∞}^{∞} f(x) g(y − x) dx ≡ (f ∗ g)(y) = (g ∗ f)(y).
Convolution theorem: ĥ(k) = √(2π) fˆ(k) ĝ(k).
Proof:
Starting with the left hand side,
ĥ(k) = (1/√(2π)) ∫_{−∞}^{∞} h(y) e^{−iyk} dy = (1/√(2π)) ∫_{−∞}^{∞} dy [ ∫_{−∞}^{∞} f(x) g(y − x) dx ] e^{−iyk}.
Changing variable to z = y − x,
ĥ(k) = (1/√(2π)) ∫_{−∞}^{∞} dx f(x) ∫_{−∞}^{∞} dz g(z) e^{−i(z+x)k} = √(2π) fˆ(k) ĝ(k).
Conversely, starting with the right hand side,
√(2π) fˆ(k) ĝ(k) = (1/√(2π)) ∫_{−∞}^{∞} dx f(x) e^{−ikx} ∫_{−∞}^{∞} dz g(z) e^{−ikz}.
Setting y = z + x,
√(2π) fˆ(k) ĝ(k) = (1/√(2π)) ∫_{−∞}^{∞} dx f(x) ∫_{−∞}^{∞} dy g(y − x) e^{−iky} = (1/√(2π)) ∫_{−∞}^{∞} dy e^{−iky} h(y) = ĥ(k).
Example 3 Use the Fourier transform in order to find a solution, i.e. f(t), of the following ODE:
d²f/dt² + 2 df/dt + f(t) = h(t),
where h(t) is a known function. Start by taking the Fourier transform of the ODE. We get
F[d²f/dt²](ω) + 2 F[df/dt](ω) + F[f(t)](ω) = F[h(t)](ω),
which becomes: −ω² fˆ(ω) + 2iω fˆ(ω) + fˆ(ω) = ĥ(ω) → fˆ(ω) = ĥ(ω)/(1 + 2iω − ω²). We have two possibilities:
i) Take the inverse Fourier transform, i.e. f(t) = F−1[ ĥ(ω)/(1 + 2iω − ω²) ](t).
ii) Use the convolution theorem, i.e. fˆ(ω) = √(2π) ĝ(ω) ĥ(ω) with ĝ(ω) = 1/( √(2π)(1 + 2iω − ω²) ).
Then f(t) = ∫_{−∞}^{∞} g(t′) h(t − t′) dt′, where the functions h(t) and g(t) can be found by using the inverse Fourier transform.
The Laplace transform of a function f(t) is
L[f(t)](s) ≡ f̄(s) = ∫_{0}^{∞} f(t) e^{−st} dt,
where s is taken to be real. Note that sometimes a constraint on the variable s must be imposed in order for the integral to exist.
i) t.
L[t](s) = ∫_{0}^{∞} t e^{−st} dt = [ −(1 + st) e^{−st}/s² ]_{0}^{∞} = 1/s², for s > 0.
What happens if you calculate the Fourier transform of the same function?
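The same transform can be reproduced with sympy; the returned tuple also carries the convergence condition (here s > 0, reported as a convergence abscissa of 0):

import sympy as sp

t, s = sp.symbols('t s', positive=True)
print(sp.laplace_transform(t, t, s))   # (s**(-2), 0, True)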
Delay rule:
L[f(t − a) H(t − a)](s) = e^{−as} f̄(s), a > 0 constant, with H the Heaviside step function.
Exponential multiplication:
L[e^{at} f(t)](s) = f̄(s − a), a constant.
Scaling:
L[f(at)](s) = (1/a) f̄(s/a), a > 0 constant.
Polynomial multiplication:
L[t^n f(t)](s) = (−1)^n d^n f̄(s)/ds^n, n = 1, 2, 3, . . . .
Laplace transform of derivatives:
L[f (n) (t)](s) = sn f¯(s) − sn−1 f (0) − sn−2 f (1) (0) − · · · − f (n−1) (0), s > 0,
Example 5 Using the properties of the Laplace transforms and the result
L[cosh(kt)](s) = s/(s2 − k 2 ) with s > |k|, calculate the Laplace transform of the
following functions
i) sinh(kt).
Since sinh(kt) = (1/k) d cosh(kt)/dt, the derivative rule gives
L[sinh(kt)](s) = (1/k)( s L[cosh(kt)](s) − cosh(0) ) = (1/k)( s²/(s² − k²) − 1 ) = k/(s² − k²), for s > |k|.
ii) t sinh(kt).
By polynomial multiplication,
L[t sinh(kt)](s) = (−1) d/ds [ k/(s² − k²) ] = 2ks/(s² − k²)², for s > |k|.
Convolution: consider the product of two Laplace transforms,
f̄(s) ḡ(s) = ∫_{0}^{∞} du f(u) e^{−su} ∫_{0}^{∞} dv g(v) e^{−sv}.
Set (u + v) = t; then
f̄(s) ḡ(s) = ∫_{0}^{∞} du f(u) ∫_{u}^{∞} dt e^{−st} g(t − u).
Swapping the order of integration and being careful with the new limits of integration, we have
f̄(s) ḡ(s) = ∫_{0}^{∞} dt e^{−st} ∫_{0}^{t} du f(u) g(t − u) = L[ ∫_{0}^{t} f(u) g(t − u) du ](s),
which implies
(f ∗ g)(t) = ∫_{0}^{t} f(u) g(t − u) du.
Inverse Laplace transforms can be computed by using partial fraction decomposition or the convolution theorem, together with the Laplace transform properties and tables of known Laplace transforms (see table on page 455 in Riley). In this course we are going to limit ourselves to the use of these two techniques.
Example 6 Use partial fraction decomposition and the table at page 455 in order to
calculate f (t) given that
f̄(s) = (s + 3)/( s(s + 1) ).
Partial fractions give
f̄(s) = 3/s − 2/(s + 1) = f̄1(s) + f̄2(s).
Using the tables we have L−1[f̄1(s)](t) = 3 for s > 0 and L−1[f̄2(s)](t) = −2 e^{−t} for s > 0, hence f(t) = 3 − 2 e^{−t}.
Example 7 Use the convolution theorem and the table at page 455 in order to cal-
culate f (t) given that
f̄(s) = 2/( s²(s − 1)² ).
Write
f̄(s) = (2/s²) · (1/(s − 1)²) = f̄1(s) f̄2(s).
Using the tables we have L−1[f̄1(s)](t) = 2t for s > 0 and L−1[f̄2(s)](t) = t e^{t} for s > 1, hence
L−1[f̄(s)](t) = ∫_{0}^{t} 2(t − u) u e^{u} du = 2 e^{t}(t − 2) + 2(t + 2), for s > 1.
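A sympy check of this example, both of the convolution integral and of the inverse transform directly:

import sympy as sp

t, u, s = sp.symbols('t u s', positive=True)
conv = sp.integrate(2 * (t - u) * u * sp.exp(u), (u, 0, t))
target = 2 * sp.exp(t) * (t - 2) + 2 * (t + 2)
print(sp.simplify(conv - target))   # 0
# Direct inversion (the result may carry a Heaviside(t) factor):
print(sp.inverse_laplace_transform(2 / (s**2 * (s - 1)**2), s, t))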
Example 8 Use the Laplace transform in order to find a solution, i.e. f(t), of the following ODE:
df/dt + 2 f(t) = e^{−t}, f(0) = 3.
Taking the Laplace transform gives s f̄(s) − f(0) + 2 f̄(s) = 1/(s + 1), hence
f̄(s) = 3/(s + 2) + 1/( (s + 1)(s + 2) ) = 2/(s + 2) + 1/(s + 1),
and inverting term by term: f(t) = 2 e^{−2t} + e^{−t}.
Consider the Hamiltonian of the one-dimensional harmonic oscillator,
H(p, x) = p²/(2m) + (1/2) mω²x² = E, ω = √(k/m),
and the Schrödinger equation associated with it, i.e.
−(h̄²/2m) d²ψ(x)/dx² + (1/2) mω²x² ψ(x) = E ψ(x),
where ψ(x) represents the wave function of the harmonic oscillator in coordinate space. The solution for the ground state is:
ψ0(x) = e^{−(mω/2h̄)x²}, E0 = h̄ω/2.
This is a Gaussian with width ∆x = √(h̄/(mω)). We want to find out the ground state wave function in momentum space. In order to do so, calculate the Fourier transform of ψ0(x). The variable in Fourier space is k = p/h̄ (see workshops for the Fourier transform of a Gaussian.) Then, calculate ∆p. What is the meaning of the quantity ∆x∆p?
The wave function in momentum space is:
F[ψ0(x)](k) = (1/√(2π)) ∫_{−∞}^{∞} e^{−(mω/2h̄)x²} e^{−ikx} dx = √(h̄/(mω)) e^{−k²h̄/(2mω)} = √(h̄/(mω)) e^{−p²/(2h̄mω)}.
This is a Gaussian with width ∆p = √(h̄mω). It follows that ∆x∆p = h̄, which codifies the uncertainty principle in QM.
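The Gaussian transform quoted above can be reproduced with sympy in the convention of these notes; here the symbol a stands for mω/(2h̄):

import sympy as sp

x, k = sp.symbols('x k', real=True)
a = sp.symbols('a', positive=True)          # a = m*omega/(2*hbar)
psi_hat = sp.integrate(sp.exp(-a * x**2 - sp.I * k * x),
                       (x, -sp.oo, sp.oo)) / sp.sqrt(2 * sp.pi)
print(sp.simplify(psi_hat))   # exp(-k**2/(4*a))/(sqrt(2)*sqrt(a))

With a = mω/(2h̄), 1/√(2a) = √(h̄/(mω)) and k²/(4a) = k²h̄/(2mω), matching the result above.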
Chapter 7
The Dirac Delta Function
Consider a pulse
δn(x) = n for −1/(2n) < x < 1/(2n), 0 otherwise.
If we take the duration of the pulse to decrease, while retaining a unit area, then, in the limit, we are led to the notion of the Dirac δ-function, i.e.
lim_{n→∞} ∫_{−∞}^{∞} δn(x) dx = ∫_{−∞}^{∞} δ(x) dx = 1,
lim_{n→∞} ∫_{−∞}^{∞} δn(x) f(x) dx = ∫_{−∞}^{∞} δ(x) f(x) dx = f(0).
The Dirac delta function δ(x − a) - with a a constant - is a generalised function (or
distribution) and it is defined as the limit of a sequence (not unique) of functions. Its
defining properties are:
δ(x − a) = 0 for x ̸= a, ∫_{α}^{β} f(x) δ(x − a) dx = f(a) if α < a < β, 0 otherwise.
i) ∫_{−4}^{4} δ(x − π) cos x dx = cos π = −1.
ii) δ̂(ω) = (1/√(2π)) ∫_{−∞}^{∞} δ(t) e^{−iωt} dt = 1/√(2π).
Then
f(x) = ∫_{−∞}^{∞} δ(t − x) f(t) dt = lim_{n→∞} ∫_{−∞}^{∞} f(t) [ (1/2π) ∫_{−n}^{n} e^{iω(t−x)} dω ] dt
= (1/2π) ∫_{−∞}^{∞} dt f(t) [ ∫_{−∞}^{∞} e^{iω(t−x)} dω ],
which implies
δ(t − x) = (1/2π) ∫_{−∞}^{∞} e^{iω(t−x)} dω.
Example 2 Calculate the inverse Fourier transform of 1/√(2π).
F−1[1/√(2π)](t) = (1/√(2π)) ∫_{−∞}^{∞} (1/√(2π)) e^{iωt} dω = δ(t).
δ(x) = δ(−x).
δ(g(x)) = Σ_a δ(x − a)/|g′(a)|,
where a are the roots of the function g(x), i.e. g(a) = 0 and g′(a) ̸= 0.
Example 3 Calculate I = ∫_{−∞}^{∞} δ(x² − b²) f(x) dx, where b is a constant.
δ(x² − b²) = δ(x − b)/|2b| + δ(x + b)/|−2b|,
then
I = (1/|2b|) ∫_{−∞}^{∞} δ(x − b) f(x) dx + (1/|2b|) ∫_{−∞}^{∞} δ(x + b) f(x) dx = (1/|2b|)( f(b) + f(−b) ).
∫_{−∞}^{∞} f(x) δ′(x − a) dx = −f′(a).
H ′ (x) = δ(x)
where H(x) is the Heaviside step function defined as follows
H(x) = 1 if x ≥ 0, 0 if x < 0.
In fact,
∫_{−∞}^{∞} f(x) H′(x) dx = [ f(x) H(x) ]_{−∞}^{∞} − ∫_{−∞}^{∞} f′(x) H(x) dx = f(∞) − f(∞) + f(0).
Since f(0) = ∫_{−∞}^{∞} f(x) δ(x) dx, the property is proved.
Chapter 8
Vector Calculus
i)
∂(ϕa)/∂u = (∂ϕ/∂u) a + ϕ (∂a/∂u), ∂a(ϕ(u, v, . . . ))/∂u = (da/dϕ)(∂ϕ/∂u),
ii)
∂(a · b)/∂u = (∂a/∂u) · b + a · (∂b/∂u), ∂(a × b)/∂u = (∂a/∂u) × b + a × (∂b/∂u),
where a, b are vector functions and ϕ is a scalar function.
∂r/∂x = i, ∂r/∂y = j, ∂r/∂z = k,
hence dr = i dx + j dy + k dz.
We define the linear vector differential operator del (or nabla) in cartesian coordinates as follows:
∇ = i ∂/∂x + j ∂/∂y + k ∂/∂z.
Let us apply such an operator to scalar and vector functions.
grad ϕ = ∇ϕ = (∂ϕ/∂x) i + (∂ϕ/∂y) j + (∂ϕ/∂z) k.
i) ∇(ϕ + ψ) = ∇ϕ + ∇ψ
ii) ∇(ϕψ) = ψ∇ϕ + ϕ∇ψ
iii) ∇(ψ(ϕ)) = ψ ′ (ϕ)∇ϕ
iv) Special cases: ∇r = r/r, ∇(1/r) = −r/r³, ∇ϕ(r) = ϕ′(r) r/r, where r is the modulus of the position vector r, i.e. r = √(x² + y² + z²).
dϕ = (∂ϕ/∂x) dx + (∂ϕ/∂y) dy + (∂ϕ/∂z) dz
= ( (∂ϕ/∂x) i + (∂ϕ/∂y) j + (∂ϕ/∂z) k ) · (dx i + dy j + dz k) = ∇ϕ · dr.
In particular, for a displacement dr on a level surface ϕ = const, dϕ = ∇ϕ · dr = 0, so ∇ϕ is normal to the surface.
For example, for ϕ = x² + y² + z²: ∇ϕ = 2x i + 2y j + 2z k = 2 r.
∇ · (a × b) = b · (∇ × a) − a · (∇ × b).
∇ · (∇ϕ) = ∇²ϕ = ∂²ϕ/∂x² + ∂²ϕ/∂y² + ∂²ϕ/∂z²,
where ∇² is a scalar differential operator and it is called the Laplacian.
iv) ∇ × (∇ × a) = ∇(∇ · a) − ∇2 a.
Consider Maxwell's equations in vacuum (no charges or currents):
(a) ∇ · B = 0, (b) ∇ · E = 0,
(c) ∇ × B = ϵ0 µ0 ∂E/∂t, (d) ∇ × E = −∂B/∂t.
i) Derive the wave equation satisfied by the electric field E.
Take the curl of (d):
∇ × (∇ × E) = −∇ × (∂B/∂t) = −∂(∇ × B)/∂t.
By identity iv) and (b): ∇ × (∇ × E) = ∇(∇ · E) − ∇²E = −∇²E.
Take the time derivative of (c):
∂(∇ × B)/∂t = ϵ0 µ0 ∂²E/∂t².
Combine the two results:
∇²E = ϵ0 µ0 ∂²E/∂t², (ϵ0 µ0)⁻¹ = c² → (1/c²) ∂²E/∂t² − ∇²E = 0.
r(u) = u i − u j, −1 ≤ u ≤ 1
This is clearly a circle with centre at the point (2, 0), hence
r(u) = (2 + cos u) i + sin u j, 0 ≤ u ≤ 2π.
i) The derivative r′ (u) ≡ t(u) is a vector tangent to the curve at each point.
(ds/du)² = (dr/du) · (dr/du) = |dr/du|², ds = ±√( (dr/du) · (dr/du) ) du,
where the sign fixes the direction of measuring s, for increasing or decreasing u. Note that ds is the line element of the curve.
t̂ = (dr/du)/|dr/du| = (dr/du)(du/ds) = dr/ds.
iv) The derivative of the unit tangent vector with respect to the arc length defines the radius of curvature ρ:
ρ = 1/|n|, with n = d²r/ds² = dt̂/ds.
Note that t̂ is perpendicular to n since starting with t̂2 = 1 and applying the
derivative, we get 2t̂ · (dt̂/ds) = 0.
Example 6 Consider the curve r(u) = 3 cos u i + 3 sin u j + 4u k. Find its radius of curvature.
dr/du = t = −3 sin u i + 3 cos u j + 4 k.
The modulus is |dr/du| = ds/du = 5, hence t̂ = (1/5)(−3 sin u i + 3 cos u j + 4 k).
Then
n = (dt̂/du)(du/ds) = (1/25)(−3 cos u i − 3 sin u j).
It follows that the radius of curvature is ρ = 25/3.
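The same computation can be re-derived in sympy, following the steps of the example:

import sympy as sp

u = sp.symbols('u', real=True)
r = sp.Matrix([3 * sp.cos(u), 3 * sp.sin(u), 4 * u])
t = r.diff(u)                          # tangent vector
ds_du = sp.sqrt(t.dot(t))              # |dr/du|
t_hat = t / ds_du
n = t_hat.diff(u) / ds_du              # dt_hat/ds = (dt_hat/du)(du/ds)
rho = 1 / sp.sqrt(n.dot(n))
print(sp.simplify(ds_du), sp.simplify(rho))   # 5, 25/3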
i) The vectors ∂r/∂u, ∂r/∂v are linearly independent and tangent to the surface S.
ii) A vector normal to the surface is:
n = (∂r/∂u) × (∂r/∂v).
Chapter 9
Integrals
The line integral (or path integral) of a vector field a(r) along the curve C is:
∫_C a(r) · dr = ∫_{umin}^{umax} a(r(u)) · (dr/du) du,
where C is a smooth oriented (a direction along C must be specified) curve defined by the
equation r(u) with endpoints A = r(umin ) and B = r(umax ).
Example 1 Evaluate ∫_C a · dr for the vector field a = x e^y i + z² j + xy k along the following two curves joining the points A = (0, 0, 0) and B = (1, 1, 1).
i) C1: r(u) = u i + u j + u k, 0 ≤ u ≤ 1.
Here dr/du = i + j + k and a(r(u)) · dr/du = u e^u + 2u², hence
∫_{C1} a · dr = ∫_{0}^{1} (u e^u + 2u²) du = 5/3.
ii) C2: r(u) = u i + u² j + u³ k, 0 ≤ u ≤ 1.
Here dr/du = i + 2u j + 3u² k, hence
∫_{C2} a · dr = ∫_{0}^{1} (u e^{u²} + 2u⁷ + 3u⁵) du = e/2 + 1/4.
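A numerical spot-check of part ii), assuming the field a = x e^y i + z² j + xy k used above:

import numpy as np
from scipy.integrate import quad

integrand = lambda u: u * np.exp(u**2) + 2 * u**7 + 3 * u**5
value, _ = quad(integrand, 0.0, 1.0)
print(value, np.e / 2 + 0.25)   # both ~1.609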
Example 2 Evaluate ∫_C ϕ ds where ϕ = (x − y)² and r(u) = a cos u i + a sin u j, 0 ≤ u ≤ π, a constant.
ds = √( (dr/du) · (dr/du) ) du = a du, then
∫_C ϕ ds = ∫_{0}^{π} (a cos u − a sin u)² a du = π a³.
If a = ∇ϕ for some scalar function ϕ, the vector field a is said to be conservative (or irrotational) and ϕ is its potential. In addition:
i) I = ∫_C ∇ϕ · dr = ϕ(B) − ϕ(A), where A and B are the endpoints of the path C. Notice that if you use a = −∇ϕ, then I = ϕ(A) − ϕ(B).
ii) The line integral I along any closed path C in D is zero.
Example 3 i) Find the potential of the conservative vector field a = (xy² + z) i + (x²y + 2) j + x k.
∂ϕ/∂x = ax = xy² + z → ϕ = x²y²/2 + zx + f(y, z),
∂ϕ/∂y = ay = x²y + 2 = x²y + ∂f/∂y → f = 2y + g(z),
∂ϕ/∂z = az = x = x + dg/dz → g = c.
It follows that ϕ = (xy)²/2 + xz + 2y + c.
ii) Evaluate the integral ∫_C a · dr along the curve r(u) = u i + (1/u) j + k with endpoints A = (1, 1, 1) and B = (3, 1/3, 1).
∫_C a · dr = ϕ(B) − ϕ(A) = 2/3.
Example 4 Evaluate the integral ∫_S a · dS where a = x i and S is the surface of the hemisphere x² + y² + z² = a² with z ≥ 0 and a constant. Use spherical polar coordinates to parametrise the surface (see further down in the notes.)
r(θ, ϕ) = a sin θ cos ϕ i + a sin θ sin ϕ j + a cos θ k and
∂r/∂θ = a cos θ cos ϕ i + a cos θ sin ϕ j − a sin θ k, ∂r/∂ϕ = −a sin θ sin ϕ i + a sin θ cos ϕ j.
Hence
dS = (∂r/∂θ × ∂r/∂ϕ) dθdϕ = a sin θ r dθdϕ, a(r(θ, ϕ)) = a sin θ cos ϕ i,
∫_S a · dS = a³ ∫_{0}^{2π} dϕ cos²ϕ ∫_{0}^{π/2} dθ sin³θ = (2/3) a³π.
Observation.
i) The integral depends on the orientation of the surface S, since the sign of dS depends on the orientation of S.
ii) If the surface is closed, by convention the vector n points outwards from the enclosed volume.
iii) In order to parametrise the surface it is often useful to use alternative coordinate systems. For instance:
a) Cylindrical polar coordinates:
x = ρ cos ϕ, y = ρ sin ϕ, z = z, with ρ ≥ 0, 0 ≤ ϕ < 2π, −∞ < z < ∞.
Example 5 Evaluate the integral ∫_S dS where S is the surface of the hemisphere x² + y² + z² = a² with z ≥ 0 and a constant.
dS = |dS| = a² sin θ dθdϕ → ∫_S dS = ∫_{0}^{2π} dϕ ∫_{0}^{π/2} dθ a² sin θ = 2πa².
The divergence theorem:
∫∫∫_V (∇ · a) dV = ∫∫_S a · dS,
where the closed surface S bounds the volume V and dS points outwards.
Example 6 Use the divergence theorem to show that Gauss's law of electrostatics for a point-like charge q is equivalent to Maxwell's equation ∇ · E = ρ/ϵ0.
Consider the point-like charge at the origin. Then
E = q r/(4πϵ0 r³), with q = ∫∫∫_V ρ dV,
where ρ is the charge density. On the other hand, Gauss's law states:
∫∫_S E · dS = q/ϵ0.
Example 7 Verify the divergence theorem for the vector field a = (z + a) k over the half-ball x² + y² + z² ≤ a², z ≥ 0, bounded by the hemisphere S1 and by the disk S2 in the plane z = 0.
Since ∇ · a = 1, the left hand side is ∫∫∫_V ∇ · a dV = (2/3)πa³, the volume of the half-ball. For the integrals on the right hand side use spherical polar coordinates. From Example 4 we know that dS1 = a sin θ r dθdϕ. Also a · dS1 = a³ (cos²θ + cos θ) sin θ dθdϕ, hence
∫∫_{S1} a · dS1 = a³ 2π ∫_{0}^{π/2} (cos²θ + cos θ) sin θ dθ = (5/3)πa³.
For the second integral on the right hand side a suitable parametrisation is r2(r, ϕ) = r cos ϕ i + r sin ϕ j. Hence
dS2 = (∂r2/∂ϕ × ∂r2/∂r) drdϕ = −r k drdϕ → ∫∫_{S2} a · dS2 = −a 2π ∫_{0}^{a} r dr = −πa³.
It follows that the divergence theorem reads: (2/3)πa³ = (5/3)πa³ − πa³.
Example 8 Derive Gauss's law for a general surface S. Then use the divergence theorem to show that
∇²(1/r) = −4πδ(r),
where δ(r) is the 3-dimensional delta function:
∫∫∫_V f(r) δ(r − a) dV = f(a) if a ∈ V, 0 otherwise.
In fact, ∇ · (r/r³) = 0 for r ̸= 0, i.e. if the origin is not inside the surface S. Then consider the volume between the surface S and a small sphere S′ around the origin. Because of the divergence theorem,
∫∫_S (r/r³) · dS − ∫∫_{S′} (r/r³) · dS′ = 0.
We can easily calculate the second integral, since dS′ = r² r̂ sin θ dθdϕ, hence
∫∫_{S′} (r/r³) · dS′ = 2π ∫_{0}^{π} sin θ dθ = 4π.
64 CHAPTER 9. INTEGRALS
It follows that
∫∫_S E · dS = (q/(4πϵ0)) ∫∫_S (r/r³) · dS = (q/(4πϵ0)) 4π = q/ϵ0.
Since
∫∫_S (r/r³) · dS = 4π if the origin is inside S, 0 otherwise,
and ∇(1/r) = −r/r³, we can write
∫∫∫_V ∇ · (r/r³) dV = −∫∫∫_V ∇²(1/r) dV = ∫∫∫_V 4πδ(r) dV.
Green's theorem in the plane:
∮_C (P dx + Q dy) = ∫∫_R (∂Q/∂x − ∂P/∂y) dxdy,
where P(x, y) and Q(x, y) are two functions whose derivatives are continuous and single-valued inside and on the boundary of a simply connected region R in the xy-plane, and C = ∂R is a closed, anticlockwise oriented curve.
Green's theorem is also called the divergence theorem in two dimensions; in fact, it can be obtained by writing the latter in Cartesian coordinates for a vector function a.
Example Evaluate
I = ∮_C [ (y − sin x) dx + cos x dy ],
where C is the boundary of the triangle with vertices (0, 0), (1, 0), (1, 2), oriented anticlockwise.
Set F = (P, Q) = (y − sin x, cos x); then
I = ∫∫_R (∂Q/∂x − ∂P/∂y) dxdy = −∫∫_R (sin x + 1) dxdy,
I = −∫_{0}^{1} dx ∫_{0}^{2x} dy (sin x + 1) = −∫_{0}^{1} dx 2x(sin x + 1) = 2 cos(1) − 2 sin(1) − 1.
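The double integral can be confirmed with sympy:

import sympy as sp

x, y = sp.symbols('x y', real=True)
I = sp.integrate(-(sp.sin(x) + 1), (y, 0, 2 * x), (x, 0, 1))
print(sp.simplify(I - (2 * sp.cos(1) - 2 * sp.sin(1) - 1)))   # 0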
Observation
i) Note that Green’s theorem holds in multiply connected regions as well. In this
case the integrals must be calculated over all boundaries of the regions suitably
oriented (positively oriented).
ii) Green's theorem can be used to evaluate the area A of a region R. In fact, set F = (P, Q), then choose P and Q such that
∂Q/∂x − ∂P/∂y = 1,
so that A = ∫∫_R dxdy = ∮_C (P dx + Q dy). A common choice is P = −y/2, Q = x/2, which gives A = (1/2) ∮_C (x dy − y dx).
iii) If
∂Q/∂x = ∂P/∂y,
then
∮_C (P dx + Q dy) = ∮_C F · dr = 0.
For instance, for the ellipse x = a cos ϕ, y = b sin ϕ, 0 ≤ ϕ ≤ 2π:
A = (1/2) ∫_{0}^{2π} (ba cos ϕ cos ϕ + ab sin ϕ sin ϕ) dϕ = ∫_{0}^{2π} (ab/2) dϕ = πab.
Stokes' theorem:
∫∫_S (∇ × a) · dS = ∮_C a · dr,
where the closed curve C is the boundary of the open surface S, with compatible orientations.
Compatible orientation: Imagine you are walking on the surface (side with the
normal pointing out). If you walk near the edge of the surface in the direction
corresponding to the orientation of C, then the surface must be to your left.
then
∮_{Cb} a · dr = ∫_{0}^{2π} b³ cos²ϕ dϕ = b³π.
Chapter 10
Change of Variables: Orthogonal Curvilinear Coordinates
Given the position vector r expressed in cartesian coordinates x, y, z, we can use a change of variables to express this vector in terms of a new set of coordinates u, v, w:
r(u, v, w) = x(u, v, w) i + y(u, v, w) j + z(u, v, w) k,
where x, y, z are continuous and differentiable functions.
The line element is:
dr = (∂r/∂u) du + (∂r/∂v) dv + (∂r/∂w) dw,
where the vectors ∂r/∂u, ∂r/∂v, ∂r/∂w are linearly independent. If these vectors are orthogonal, then the coordinates u, v, w are said to be orthogonal curvilinear coordinates.
Properties
New basis:
∂r/∂u = hu êu, ∂r/∂v = hv êv, ∂r/∂w = hw êw,
where hu, hv, hw are positive and are called scale factors. In an orthogonal curvilinear coordinate system these vectors are orthogonal, and êu, êv, êw form an orthonormal basis of the three-dimensional vector space R³.
Line element:
dr = hu êu du + hv êv dv + hw êw dw.
The scale factors determine the changes in length along each orthogonal direction resulting from changes in u, v, w.
Arc length:
ds² = dr · dr = h²u (du)² + h²v (dv)² + h²w (dw)².
Volume element:
dV = | (∂r/∂u) · (∂r/∂v × ∂r/∂w) | du dv dw = hu hv hw du dv dw.
Example 1 Derive the scale factors, basis vectors and volume elements for:
1) Cartesian coordinates.
r(x, y, z) = x i + y j + z k, hence
∂r/∂x = i, ∂r/∂y = j, ∂r/∂z = k → hx = hy = hz = 1, êx = i, êy = j, êz = k,
and dV = dxdydz.
2) Cylindrical polar coordinates.
r(ρ, ϕ, z) = ρ cos ϕ i + ρ sin ϕ j + z k, hence
∂r/∂ρ = cos ϕ i + sin ϕ j, ∂r/∂ϕ = −ρ sin ϕ i + ρ cos ϕ j, ∂r/∂z = k →
hρ = 1, hϕ = ρ, hz = 1, êρ = cos ϕ i + sin ϕ j, êϕ = −sin ϕ i + cos ϕ j, êz = k,
and dV = ρ dρdϕdz.
For a scalar function f(u, v, w),
df = (∂f/∂u) du + (∂f/∂v) dv + (∂f/∂w) dw = ∇f · dr.
In cartesian coordinates this becomes
∇f · dr = ( (∂f/∂x) i + (∂f/∂y) j + (∂f/∂z) k ) · (i dx + j dy + k dz).
In general curvilinear coordinates,
∇f · dr = ∇f · (hu êu du + hv êv dv + hw êw dw),
which implies
∇f = (êu/hu) ∂f/∂u + (êv/hv) ∂f/∂v + (êw/hw) ∂f/∂w.
This is the gradient of the function f in general curvilinear coordinates. It follows that the del operator is:
∇ = (êu/hu) ∂/∂u + (êv/hv) ∂/∂v + (êw/hw) ∂/∂w.
Without derivation, we also have
Divergence:
∇ · a = (1/(hu hv hw)) [ ∂(hv hw au)/∂u + ∂(hw hu av)/∂v + ∂(hu hv aw)/∂w ],
where a = au êu + av êv + aw êw.
Curl:
∇ × a = (1/(hu hv hw)) det [ hu êu  hv êv  hw êw ; ∂/∂u  ∂/∂v  ∂/∂w ; hu au  hv av  hw aw ].
Laplacian:
∇²ϕ = (1/(hu hv hw)) [ ∂/∂u( (hv hw/hu) ∂ϕ/∂u ) + ∂/∂v( (hw hu/hv) ∂ϕ/∂v ) + ∂/∂w( (hu hv/hw) ∂ϕ/∂w ) ].
Example 2 Find the position vector r in cylindrical polar coordinates and verify that
∇ · r = 3.
From Example 1, we have the unit vectors for the cylindrical polar coordinates. By inverting those relations we obtain:
i = cos ϕ êρ − sin ϕ êϕ, j = sin ϕ êρ + cos ϕ êϕ, k = êz.
Then
r = ρ cos ϕ (cos ϕ êρ − sin ϕ êϕ) + ρ sin ϕ (sin ϕ êρ + cos ϕ êϕ) + z êz = ρ êρ + z êz,
and
∇ · r = (1/ρ) [ ∂(ρ²)/∂ρ + ∂(ρ z)/∂z ] = 3.
Example 3 A rigid body is rotating about a fixed axis with a constant angular velocity
ω. Take ω to lie along the z-axis. Use cylindrical polar coordinates to compute
1) v = ω × r.
The position vector has been found in Example 2, and ω = ω êz. Then
v = det [ êρ  êϕ  êz ; 0  0  ω ; ρ  0  z ] = ωρ êϕ.
2) ∇ × v.
∇ × v = (1/ρ) det [ êρ  ρ êϕ  êz ; ∂/∂ρ  ∂/∂ϕ  ∂/∂z ; 0  ωρ²  0 ] = 2ω êz = 2 ω.
iii) dV = ρ dρdϕdz.
Spherical polar coordinates:
r(r, θ, ϕ) = r sin θ cos ϕ i + r sin θ sin ϕ j + r cos θ k, with r ≥ 0, 0 ≤ ϕ < 2π, 0 ≤ θ ≤ π.
i) hr = 1, êr = sin θ cos ϕ i + sin θ sin ϕ j + cos θ k;
hθ = r, êθ = cos θ cos ϕ i + cos θ sin ϕ j − sin θ k;
hϕ = r sin θ, êϕ = −sin ϕ i + cos ϕ j.
ii) dS = êr r² sin θ dθdϕ (r = const), êθ r sin θ drdϕ (θ = const), êϕ r drdθ (ϕ = const).