Vector Analysis
MATH 332
Ivan Avramidi
New Mexico Institute of Mining and Technology
Socorro, NM 87801
Author: Ivan Avramidi; File: vecanal4.tex; Date: June 30, 2005; Time: 19:16
Contents

1 Linear Algebra
1.1 Vectors in R^n and Matrix Algebra
1.1.1 Vectors
1.1.2 Matrices
1.1.3 Determinant
1.1.4 Exercises
1.2 Vector Spaces
1.2.1 Exercises
1.3 Inner Product and Norm
1.3.1 Exercises
1.4 Linear Operators
1.4.1 Exercises

2 Vector and Tensor Algebra

3 Geometry
3.1 Geometry of Euclidean Space
3.2 Basic Topology of R^n
3.3 Curvilinear Coordinate Systems
3.3.1 Change of Coordinates
3.3.2 Examples
3.4 Vector Functions of a Single Variable
3.5 Geometry of Curves
3.6 Geometry of Surfaces

4 Vector Analysis
4.1 Vector Functions of Several Variables
4.2 Directional Derivative and the Gradient
4.3 Exterior Derivative
4.4 Divergence
4.5 Curl
4.6 Laplacian
4.7 Differential Vector Identities
4.8 Orthogonal Curvilinear Coordinate Systems in R^3

5 Integration
5.1 Line Integrals
5.2 Surface Integrals
5.3 Volume Integrals
5.4 Fundamental Integral Theorems
5.4.1 Fundamental Theorem of Line Integrals
5.4.2 Green's Theorem
5.4.3 Stokes's Theorem
5.4.4 Gauss's Theorem
5.4.5 General Stokes's Theorem

6 Potential Theory
6.1 Simply Connected Domains
6.2 Conservative Vector Fields
6.2.1 Scalar Potential
6.3 Irrotational Vector Fields
6.4 Solenoidal Vector Fields
6.4.1 Vector Potential
6.5 Laplace Equation
6.5.1 Harmonic Functions
6.6 Poisson Equation
6.6.1 Dirac Delta Function
6.6.2 Point Sources
6.6.3 Dirichlet Problem
6.6.4 Neumann Problem
6.6.5 Green's Functions
6.7 Fundamental Theorem of Vector Analysis

7 Basic Concepts of Differential Geometry
7.1 Manifolds

8 Applications
8.1 Mechanics
8.1.1 Inertia Tensor
8.1.2 Angular Momentum Tensor
8.2 Elasticity
8.2.1 Strain Tensor
8.2.2 Stress Tensor
8.3 Fluid Dynamics
8.3.1 Continuity Equation
8.3.2 Tensor of Momentum Flux Density
8.3.3 Euler's Equations
8.3.4 Rate of Deformation Tensor
8.3.5 Navier-Stokes Equations
8.4 Heat and Diffusion Equations
8.5 Electrodynamics
8.5.1 Tensor of Electromagnetic Field
8.5.2 Maxwell Equations
8.5.3 Scalar and Vector Potentials
8.5.4 Wave Equations
8.5.5 D'Alembert Operator
8.5.6 Energy-Momentum Tensor
8.6 Basic Concepts of Special and General Relativity

Bibliography
Notation
Index
Linear Algebra
• A vector v ∈ R^n can be represented by a column-vector with the components v_1, \dots, v_n; the corresponding row-vector is denoted by

v^T = (v_1, v_2, \dots, v_n).

• The operation that converts column-vectors into row-vectors and vice versa, preserving the order of the components, is called the transposition and is denoted by T. That is,

\begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix}^T = (v_1, v_2, \dots, v_n)
\qquad \text{and} \qquad
(v_1, v_2, \dots, v_n)^T = \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix}.
• The vectors that have only zero elements are called zero vectors, that is,

0 = \begin{pmatrix} 0 \\ \vdots \\ 0 \end{pmatrix}, \qquad 0^T = (0, \dots, 0).
• The product of two column-vectors and the product of two row-vectors, called the inner product (or the scalar product), is then defined by

(u, v) = (u^T, v^T) = \langle u^T, v \rangle = \sum_{i=1}^n u_i v_i = u_1 v_1 + \cdots + u_n v_n.
• Finally, the norm (or the length) of both column-vectors and row-vectors is defined by

||v|| = ||v^T|| = \sqrt{\langle v, v \rangle} = \left( \sum_{i=1}^n v_i^2 \right)^{1/2} = \sqrt{v_1^2 + \cdots + v_n^2}.
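• For a concrete check, a minimal numpy sketch computing the inner product and the norm of two sample vectors (the numbers are arbitrary):

import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, -1.0, 2.0])

inner = np.dot(u, v)             # (u, v) = u_1 v_1 + ... + u_n v_n
norm_v = np.sqrt(np.dot(v, v))   # ||v|| = sqrt((v, v))

print(inner)                           # 8.0
print(norm_v, np.linalg.norm(v))       # both sqrt(21)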
1.1.2 Matrices
• A set of n^2 real numbers A_{ij}, i, j = 1, \dots, n, arranged in an array that has n columns and n rows,

A = \begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1n} \\ A_{21} & A_{22} & \cdots & A_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ A_{n1} & A_{n2} & \cdots & A_{nn} \end{pmatrix},

is called a square n × n real matrix.
• The set of all real square n × n matrices is denoted by Mat(n, R).
• The number A_{ij} (also called an entry of the matrix) appears in the i-th row and the j-th column of the matrix A:

A = \begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1j} & \cdots & A_{1n} \\ A_{21} & A_{22} & \cdots & A_{2j} & \cdots & A_{2n} \\ \vdots & \vdots & \ddots & \vdots & \ddots & \vdots \\ A_{i1} & A_{i2} & \cdots & A_{ij} & \cdots & A_{in} \\ \vdots & \vdots & \ddots & \vdots & \ddots & \vdots \\ A_{n1} & A_{n2} & \cdots & A_{nj} & \cdots & A_{nn} \end{pmatrix}.
• Remark. Notice that the first index indicates the row and the second index
indicates the column of the matrix.
• The matrix whose all entries are equal to zero is called the zero matrix.
• The addition of matrices is defined by

A + B = \begin{pmatrix} A_{11} + B_{11} & A_{12} + B_{12} & \cdots & A_{1n} + B_{1n} \\ A_{21} + B_{21} & A_{22} + B_{22} & \cdots & A_{2n} + B_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ A_{n1} + B_{n1} & A_{n2} + B_{n2} & \cdots & A_{nn} + B_{nn} \end{pmatrix}.
• The numbers Aii are called the diagonal entries. Of course, there are n diagonal
entries. The set of diagonal entries is called the diagonal of the matrix A.
• The numbers A_{ij} with i ≠ j are called off-diagonal entries; there are n(n − 1) off-diagonal entries.
• The numbers Ai j with i < j are called the upper triangular entries. The set of
upper triangular entries is called the upper triangular part of the matrix A.
• The numbers Ai j with i > j are called the lower triangular entries. The set of
lower triangular entries is called the lower triangular part of the matrix A.
• The number of upper-triangular entries and the lower-triangular entries is the
same and is equal to n(n − 1)/2.
• A matrix whose only non-zero entries are on the diagonal is called a diagonal matrix. For a diagonal matrix

A_{ij} = 0 if i ≠ j.

• The diagonal matrix whose diagonal entries are all equal to 1 is called the identity matrix. The elements of the identity matrix are

I_{ij} = \begin{cases} 1, & \text{if } i = j, \\ 0, & \text{if } i ≠ j. \end{cases}

• A matrix whose lower triangular part vanishes, that is,

A_{ij} = 0 if i > j,

is called upper triangular, and a matrix whose upper triangular part vanishes, that is,

A_{ij} = 0 if i < j,

is called lower triangular.

• A matrix A is called symmetric if

A^T = A

and anti-symmetric if

A^T = −A.
• Every matrix A can be uniquely decomposed as the sum of its diagonal part AD ,
the lower triangular part AL and the upper triangular part AU
A = AD + AL + AU .
• Every matrix A can be uniquely decomposed as the sum of its symmetric part A_S and its anti-symmetric part A_A,

A = A_S + A_A,

where

A_S = \frac{1}{2}(A + A^T), \qquad A_A = \frac{1}{2}(A − A^T).
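• A minimal numpy sketch of this decomposition (the matrix A below is a random example):

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

A_S = 0.5 * (A + A.T)   # symmetric part: A_S^T = A_S
A_A = 0.5 * (A - A.T)   # anti-symmetric part: A_A^T = -A_A

assert np.allclose(A, A_S + A_A)
assert np.allclose(A_S, A_S.T)
assert np.allclose(A_A, -A_A.T)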
• The product of matrices is defined as follows. The i j-th entry of the product
C = AB of two matrices A and B is
C_{ij} = \sum_{k=1}^n A_{ik} B_{kj} = A_{i1} B_{1j} + A_{i2} B_{2j} + \cdots + A_{in} B_{nj}.
This is again a multiplication of the “i-th row of the matrix A by the j-th column
of the matrix B”.
• Theorem 1.1.1 The product of matrices is associative, that is, for any matrices
A, B, C
(AB)C = A(BC) .
• The transposition of a product reverses the order of the factors:

(AB)^T = B^T A^T.
• A matrix A is called invertible if there exists a matrix A^{−1}, called the inverse of A, such that

A A^{−1} = A^{−1} A = I.

The inverse of the transpose is the transpose of the inverse,

(A^{−1})^T = (A^T)^{−1}.

• A matrix A is called orthogonal if

A^T A = A A^T = I;

for an orthogonal matrix A^{−1} = A^T.
• The trace is a map tr : Mat(n, R) → R that assigns to each matrix A = (A_{ij}) a real number tr A equal to the sum of the diagonal elements of the matrix,

tr A = \sum_{k=1}^n A_{kk}.
• The trace has the properties

tr(AB) = tr(BA)

and

tr A^T = tr A.
1.1.3 Determinant
• Consider the set Zn = {1, 2, . . . , n} of the first n integers. A permutation ϕ of the
set {1, 2, . . . , n} is an ordered n-tuple (ϕ(1), . . . , ϕ(n)) of these numbers.
• That is, a permutation is a bijective (one-to-one and onto) function
ϕ : Zn → Zn
that assigns to each number i from the set Zn = {1, . . . , n} another number ϕ(i)
from this set.
• An elementary permutation is a permutation that exchanges the order of only
two numbers.
• Every permutation can be realized as a product (or a composition) of elemen-
tary permutations. A permutation that can be realized by an even number of
elementary permutations is called an even permutation. A permutation that
can be realized by an odd number of elementary permutations is called an odd
permutation.
• Proposition 1.1.1 The parity of a permutation does not depend on the repre-
sentation of a permutation by a product of the elementary ones.
• That is, each representation of an even permutation has even number of elemen-
tary permutations, and similarly for odd permutations.
• The sign of a permutation ϕ, denoted by sign(ϕ) (or simply (−1)^ϕ), is defined by

sign(ϕ) = (−1)^ϕ = \begin{cases} +1, & \text{if } ϕ \text{ is even,} \\ −1, & \text{if } ϕ \text{ is odd.} \end{cases}
• The set of all permutations of n numbers is denoted by S n .
• Theorem 1.1.5 The cardinality of this set, that is, the number of distinct permutations, is

|S_n| = n!.
• The determinant is invariant under transposition,

det A = det A^T.
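• The determinant can be computed directly from the standard permutation expansion det A = \sum_{ϕ ∈ S_n} sign(ϕ) A_{1ϕ(1)} \cdots A_{nϕ(n)}. A brute-force Python sketch (exponentially slow, for illustration only, checked against numpy):

import itertools
import math
import numpy as np

def perm_sign(p):
    # Sign of a permutation of (0, ..., n-1), counted by the number of inversions
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def det_by_permutations(A):
    n = A.shape[0]
    return sum(perm_sign(p) * math.prod(A[i, p[i]] for i in range(n))
               for p in itertools.permutations(range(n)))

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 4.0],
              [1.0, 0.0, 1.0]])
print(det_by_permutations(A), np.linalg.det(A))   # both 10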
1.1.4 Exercises
1. Show that the product of invertible matrices is an invertible matrix.
2. Show that the product of matrices with positive determinant is a matrix with positive
determinant.
3. Show that the inverse of a matrix with positive determinant is a matrix with positive
determinant.
4. Show that GL(n, R) forms a group (called the general linear group).
5. Show that GL+ (n, R) is a group (called the proper general linear group).
6. Show that the inverse of a matrix with negative determinant is a matrix with negative
determinant.
7. Show that: a) the product of an even number of matrices with negative determinant is a matrix with positive determinant, b) the product of an odd number of matrices with negative determinant is a matrix with negative determinant.
8. Show that the product of matrices with unit determinant is a matrix with unit determinant.
9. Show that the inverse of a matrix with unit determinant is a matrix with unit determinant.
10. Show that SL(n, R) forms a group (called the special linear group or the unimodular group).
11. Show that the product of orthogonal matrices is an orthogonal matrix.
12. Show that the inverse of an orthogonal matrix is an orthogonal matrix.
13. Show that O(n) forms a group (called the orthogonal group).
14. Show that orthogonal matrices have determinant equal to either +1 or −1.
15. Show that the product of orthogonal matrices with unit determinant is an orthogonal ma-
trix with unit determinant.
16. Show that the inverse of an orthogonal matrix with unit determinant is an orthogonal
matrix with unit determinant.
17. Show that SO(n) forms a group (called the proper orthogonal group or the rotation group).
1.2 Vector Spaces

• A linear combination of vectors e_1, \dots, e_k with real coefficients a_1, \dots, a_k is the vector

a_1 e_1 + \cdots + a_k e_k.

• The vectors e_1, \dots, e_k are linearly independent if

a_1 e_1 + \cdots + a_k e_k = 0

implies a_1 = \cdots = a_k = 0; otherwise they are linearly dependent.

• Two non-zero vectors u and v which are linearly dependent are also called parallel, denoted by u ∥ v.

• The span of a subset A of a vector space E is the set of all finite linear combinations of vectors from A,

span A = \{v ∈ E \mid v = a_1 e_1 + \cdots + a_k e_k, \ e_i ∈ A, \ a_i ∈ R\}.

• Theorem 1.2.1 The span of any subset of a vector space is a vector space.

• In particular, the span of k vectors u_1, \dots, u_k is

S_k = span\{u_1, \dots, u_k\} = \{v ∈ E \mid v = t_1 u_1 + \cdots + t_k u_k, \ t_1, \dots, t_k ∈ R\}.
1.2.1 Exercises
1. Show that if λv = 0, then either v = 0 or λ = 0.
2. Prove that the span of a collection of vectors is a vector subspace.
1.3 Inner Product and Norm

• An inner product on a real vector space E is a map (·, ·) : E × E → R with the following properties: for all u, v, w ∈ E and a ∈ R,

1. (v, v) ≥ 0;
2. (v, v) = 0 if and only if v = 0;
3. (u, v) = (v, u);
4. (u + v, w) = (u, w) + (v, w);
5. (au, v) = (u, av) = a(u, v).

A real vector space equipped with an inner product is called a Euclidean space.
• The inner product is often called the dot product, or the scalar product, and is
denoted by
(u, v) = u · v .
• All spaces considered below are Euclidean spaces. Henceforth, E will denote an
n-dimensional Euclidean space if not specified otherwise.
• Two vectors u and v are called orthogonal if

(u, v) = 0.

• Every subspace S ⊂ E defines the orthogonal decomposition

E = S ⊕ S^⊥,

where S^⊥, the orthogonal complement of S, is the set of all vectors orthogonal to every vector in S.
1.3.1 Exercises
1. Show that the Euclidean norm has the following properties
(a) ||v|| ≥ 0, ∀v ∈ E;
(b) ||v|| = 0 if and only if v = 0;
• The Gram-Schmidt orthogonalization process converts linearly independent vectors u_1, \dots, u_k into mutually orthogonal vectors v_1, \dots, v_k defined by

v_1 = u_1,

v_j = u_j − \sum_{i=1}^{j−1} \frac{(v_i, u_j)}{||v_i||^2} v_i, \qquad 2 ≤ j ≤ k.
• For an orthonormal basis {e_i} in E the norm of a vector v is given by

||v||^2 = \sum_{i=1}^n (e_i, v)^2.
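• A short numpy sketch of the Gram-Schmidt process defined by the formula above:

import numpy as np

def gram_schmidt(us):
    # Orthogonalize the rows of `us`: v_j = u_j - sum_{i<j} (v_i, u_j)/||v_i||^2 v_i
    vs = []
    for u in us:
        v = u.astype(float)
        for w in vs:
            v = v - (w @ u) / (w @ w) * w
        vs.append(v)
    return np.array(vs)

us = np.array([[1.0, 1.0, 0.0],
               [1.0, 0.0, 1.0],
               [0.0, 1.0, 1.0]])
vs = gram_schmidt(us)
print(np.round(vs @ vs.T, 12))   # diagonal matrix: the v_j are mutually orthogonal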
1.4 Linear Operators

• The identity operator I and the null operator 0 on a vector space E are defined by

I v = v, ∀v ∈ E, \qquad 0 v = 0, ∀v ∈ E.
• The inverse image of a subset S ⊂ E under an operator L is the set

L^{−1}(S) = \{v ∈ E \mid L(v) ∈ S\}.

• The kernel Ker(L) (or the null space) of an operator L is the set of all vectors in E which are mapped to zero, that is,

Ker(L) = \{v ∈ E \mid L(v) = 0\}.

• Theorem 1.4.1 For any operator L the sets Im(L) and Ker(L) are vector subspaces.

• The dimension of the image Im(L) is called the rank of the operator L, and the dimension of the kernel Ker(L) of an operator L is called its nullity.
• The set L(E) of all linear operators on a vector space E is a vector space with the addition of operators and multiplication by scalars defined by

(A + B)v = Av + Bv, \qquad (aA)v = a(Av).
• The integer powers of an operator are defined as the multiple composition of the
operator with itself, i.e.
A^0 = I, \quad A^1 = A, \quad A^2 = AA, \ \dots
• Two operators A and B are called commuting if

AB = BA

and anti-commuting if

AB = −BA.

• Two operators A and B are said to be mutually orthogonal if

AB = BA = 0.
• An operator A is involutive if

A^2 = I,

idempotent if

A^2 = A,

and nilpotent if for some integer k

A^k = 0.
Selfadjoint Operators
• The adjoint A^* of an operator A is defined by the requirement that for all u, v ∈ E

(Au, v) = (u, A^* v).

The adjoint satisfies

(A^*)^* = A, \qquad (AB)^* = B^* A^*.

• An operator A is self-adjoint if

A^* = A

and anti-selfadjoint if

A^* = −A.
• Every operator A can be uniquely decomposed as the sum of its selfadjoint and anti-selfadjoint parts,

A = A_S + A_A, \qquad A_S = \frac{1}{2}(A + A^*), \quad A_A = \frac{1}{2}(A − A^*).

• An operator A is called unitary if

A A^* = A^* A = I.
Projection Operators
• Let S be a subspace of E and E = S ⊕ S ⊥ . Then for any u ∈ E there exist unique
v ∈ S and w ∈ S ⊥ such that
u = v + w.
The vector v is called the projection of u onto S .
• The operator P defined by

P u = v

is called the projection onto S; the operator P^⊥ defined by P^⊥ u = w is the projection onto S^⊥.
• The operators P and P⊥ are called complementary projections. They have the
properties:
P^* = P, \qquad (P^⊥)^* = P^⊥,

P + P^⊥ = I,

P^2 = P, \qquad (P^⊥)^2 = P^⊥,

P P^⊥ = P^⊥ P = 0.
• More generally, a family of projections P_1, \dots, P_k is called orthogonal if

P_i P_k = 0 \quad \text{if } i ≠ k,

and complete if

\sum_{i=1}^k P_i = P_1 + \cdots + P_k = I.
• Theorem 1.4.6 1. The dimensions of the subspaces E_i are equal to the ranks of the projections P_i,

dim E_i = rank P_i.

2. The dimensions of the subspaces E_i add up to the dimension of the whole space,

\sum_{i=1}^k dim E_i = dim E_1 + \cdots + dim E_k = dim E.
• Let A be a self-adjoint operator with the eigenvalues λ_1, \dots, λ_n and the spectral decomposition

A = \sum_{i=1}^n λ_i P_i,

where P_i are the one-dimensional projections onto the eigenspaces. Then one can define a function of the self-adjoint operator f(A) on E by

f(A) = \sum_{i=1}^n f(λ_i) P_i.
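• A numpy sketch of f(A) for a symmetric matrix, building the one-dimensional projections P_i = q_i q_i^T from the orthonormal eigenbasis returned by numpy.linalg.eigh:

import numpy as np

def operator_function(A, f):
    # f(A) = sum_i f(lambda_i) P_i with P_i = outer(q_i, q_i)
    lam, Q = np.linalg.eigh(A)    # columns of Q are an orthonormal eigenbasis
    return sum(f(l) * np.outer(q, q) for l, q in zip(lam, Q.T))

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])        # eigenvalues 1 and 3
sqrtA = operator_function(A, np.sqrt)
assert np.allclose(sqrtA @ sqrtA, A)   # the square root of A squares back to A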
• Theorem 1.4.9 Let U be a unitary operator on a real vector space E. Then there
exists an anti-selfadjoint operator A such that
U = exp A .
• Let A be a self-adjoint operator with the eigenvalues {λ_1, \dots, λ_n}. Then the trace and the determinant of the operator A are defined by

tr A = \sum_{i=1}^n λ_i, \qquad det A = λ_1 \cdots λ_n.
• Note that
tr I = n , det I = 1 .
• The trace of a projection P onto a vector subspace S is equal to its rank, or the
dimension of the vector subspace S ,
tr P = rank P = dim S .
If there are multiple eigenvalues, then each eigenvalue should be counted with
its multiplicity.
• For a positive self-adjoint operator A with eigenvalues λ_i > 0 one can define the zeta function

ζ(s) = tr A^{−s} = \sum_{i=1}^n λ_i^{−s}.

Then

ζ(0) = n,

and, since ζ'(s) = −\sum_{i=1}^n (\log λ_i) λ_i^{−s},

ζ'(0) = −\log(λ_1 \cdots λ_n) = −\log \det A.
Examples
• Let u be a unit vector and P_u be the projection onto the one-dimensional subspace (line) S_u spanned by u, defined by

P_u v = u (u, v).

The orthogonal complement S_u^⊥ is the hyperplane with the normal u. The operator J_u defined by

J_u = I − 2 P_u

is called the reflection operator with respect to the hyperplane S_u^⊥. The reflection operator is a self-adjoint involution, that is, it has the following properties:

J_u^* = J_u, \qquad J_u^2 = I.

The reflection operator has the eigenvalue −1 with multiplicity 1 and the eigenspace S_u, and the eigenvalue +1 with multiplicity (n − 1) and with the eigenspace S_u^⊥.
• Let u_1 and u_2 be an orthonormal system of two vectors and P_{u_1,u_2} be the projection operator onto the two-dimensional space (plane) S_{u_1,u_2} spanned by u_1 and u_2,

P_{u_1,u_2} v = u_1 (u_1, v) + u_2 (u_2, v).

Let N_{u_1,u_2} be the operator defined by

N_{u_1,u_2} v = u_2 (u_1, v) − u_1 (u_2, v).

Then

N_{u_1,u_2} P_{u_1,u_2} = P_{u_1,u_2} N_{u_1,u_2} = N_{u_1,u_2}

and

N_{u_1,u_2}^2 = −P_{u_1,u_2}.

A rotation operator R_{u_1,u_2}(θ) with the angle θ in the plane S_{u_1,u_2} is defined by

R_{u_1,u_2}(θ) = I − P_{u_1,u_2} + \cos θ \, P_{u_1,u_2} + \sin θ \, N_{u_1,u_2}.
• More generally, every unitary operator U on a real vector space E admits the canonical decomposition

U = P_+ − P_− + \sum_{i=1}^k (\cos θ_i \, P_i + \sin θ_i \, N_i),

where P_+ and P_− are the projections onto the subspaces on which U acts as +I and −I, and each pair P_i, N_i corresponds to a rotation by the angle θ_i in a two-dimensional invariant plane.
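• A numpy sketch of the rotation operator, assuming the sign convention N_{u_1,u_2} v = u_2 (u_1, v) − u_1 (u_2, v) used above:

import numpy as np

def rotation(u1, u2, theta):
    # R = I - P + cos(theta) P + sin(theta) N for an orthonormal pair u1, u2
    P = np.outer(u1, u1) + np.outer(u2, u2)    # projection onto span{u1, u2}
    N = np.outer(u2, u1) - np.outer(u1, u2)    # N v = u2 (u1, v) - u1 (u2, v)
    return np.eye(len(u1)) - P + np.cos(theta) * P + np.sin(theta) * N

u1 = np.array([1.0, 0.0, 0.0])
u2 = np.array([0.0, 1.0, 0.0])
R = rotation(u1, u2, np.pi / 2)

assert np.allclose(R @ R.T, np.eye(3))   # R is orthogonal (unitary)
assert np.allclose(R @ u1, u2)           # rotation by pi/2 maps u1 to u2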
1.4.1 Exercises
1. Prove that the range and the kernel of any operator are vector spaces.
2. Show that
3. Show that for any operator A the operators AA∗ and A + A∗ are selfadjoint.
4. Show that the product of two selfadjoint operators is selfadjoint if and only if they com-
mute.
5. Show that a polynomial p(A) of a selfadjoint operator A is a selfadjoint operator.
(A^{−1})^{−1} = A.
10. Show that the product AB of two invertible operators A and B is invertible and (AB)^{−1} = B^{−1} A^{−1}.
11. Prove that the adjoint A^* of any invertible operator A is invertible and (A^*)^{−1} = (A^{−1})^*.
12. Prove that the inverse A−1 of a selfadjoint invertible operator is selfadjoint.
13. An operator A on E is called isometric if ∀v ∈ E,
||Av|| = ||v|| .
P^* = P, \quad (P^⊥)^* = P^⊥, \quad P^⊥ + P = I, \quad P P^⊥ = P^⊥ P = 0.
20. Prove that an operator is projection if and only if it is idempotent and selfadjoint.
21. Give an example of an idempotent operator in R2 which is not a projection.
22. Show that any projection operator P is positive. Moreover, show that ∀v ∈ E
(Pv, v) = ||Pv||2 .
23. Prove that the sum P = P1 + P2 of two projections P1 and P2 is a projection operator if
and only if P1 and P2 are orthogonal.
24. Prove that the product P = P1 P2 of two projections P1 and P2 is a projection operator if
and only if P1 and P2 commute.
25. Find the eigenvalues of a projection operator.
26. Prove that the span of all eigenvectors corresponding to the eigenvalue λ of an operator A
is a vector space.
27. Let
E(λ) = Ker (A − λI) .
Show that: a) if λ is not an eigenvalue of A, then E(λ) = {0}, and b) if λ is an eigenvalue of A, then E(λ) is the eigenspace corresponding to the eigenvalue λ.
28. Show that the operator A − λI is invertible if and only if λ is not an eigenvalue of the
operator A.
29. Let T be a unitary operator. Then the operators A and
à = TAT−1
are called similar. Show that the eigenvalues of similar operators are the same.
30. Show that an operator similar to a selfadjoint operator is selfadjoint and an operator sim-
ilar to an anti-selfadjoint operator is anti-selfadjoint.
31. Show that all eigenvalues of a positive operator A are non-negative.
32. Show that the eigenvectors corresponding to distinct eigenvalues of a unitary operator are
orthogonal to each other.
33. Show that the eigenvectors corresponding to distinct eigenvalues of a selfadjoint operator
are orthogonal to each other.
34. Show that all eigenvalues of a unitary operator A have absolute value equal to 1.
35. Show that if A is a projection, then it can only have two eigenvalues: 1 and 0.
Vector and Tensor Algebra

• Let {e_i} = {e_1, \dots, e_n} be a basis in E. The real numbers

g_{ij} = (e_i, e_j)

are called the components of the metric tensor with respect to the basis {e_i} (also called the covariant components of the metric).
• Notice that the matrix G = (g_{ij}) is symmetric, that is,

g_{ij} = g_{ji}, \qquad G^T = G.

Moreover, G is positive definite; in particular,

det G > 0.
• The elements of the inverse matrix G^{−1} = (g^{ij}) are called the contravariant components of the metric. They satisfy the equations

\sum_{j=1}^n g^{ij} g_{jk} = δ^i_k,
and they are symmetric as well,

g^{ij} = g^{ji}.
• In an orthonormal basis

g_{ij} = g^{ij} = δ_{ij}, \qquad G = G^{−1} = I.
• The real numbers

v_i = (e_i, v)

are called the covariant components of the vector v. Notice that the covariant components of vectors are denoted by lower indices (subscripts).
• Theorem 2.1.2 Let v ∈ E be a vector. The covariant and the contravariant components of v are related by

v_i = \sum_{j=1}^n g_{ij} v^j, \qquad v^i = \sum_{j=1}^n g^{ij} v_j.
• Theorem 2.1.3 The metric determines the inner product and the norm by

(u, v) = \sum_{i=1}^n \sum_{j=1}^n g_{ij} u^i v^j = \sum_{i=1}^n \sum_{j=1}^n g^{ij} u_i v_j,

||v||^2 = \sum_{i=1}^n \sum_{j=1}^n g_{ij} v^i v^j = \sum_{i=1}^n \sum_{j=1}^n g^{ij} v_i v_j.
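• A numpy sketch of raising and lowering indices with a non-trivial metric; the matrix g below is an arbitrary symmetric positive definite example:

import numpy as np

g = np.array([[2.0, 1.0],
              [1.0, 3.0]])          # covariant metric g_ij
g_inv = np.linalg.inv(g)            # contravariant metric g^ij

v_up = np.array([1.0, 2.0])         # contravariant components v^i
v_down = g @ v_up                   # v_i = g_ij v^j
assert np.allclose(g_inv @ v_down, v_up)            # v^i = g^ij v_j

norm2 = v_up @ g @ v_up                             # ||v||^2 = g_ij v^i v^j
assert np.allclose(norm2, v_down @ g_inv @ v_down)  # = g^ij v_i v_j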
• The dual space E^* of a vector space E is the space of all linear functionals on E, that is, of all linear maps σ : E → R. The elements of the dual space E^* are also called covectors or 1-forms. In keeping with tradition we will denote covectors by Greek letters.

• Theorem 2.2.1 The dual space E^* of a real vector space E is a real vector space of the same dimension.
• Let {e_i} = {e_1, \dots, e_n} be a basis in E. The basis {ω^i} = {ω^1, \dots, ω^n} in E^* such that

⟨ω^i, e_j⟩ = δ^i_j

is called the dual basis. The components of a vector v and a covector σ with respect to these bases are then

v^i = ⟨ω^i, v⟩

and

σ_i = ⟨σ, e_i⟩.
That is,

v = \sum_{i=1}^n e_i ⟨ω^i, v⟩

and

σ = \sum_{i=1}^n ⟨σ, e_i⟩ ω^i.
• The metric defines the natural map

g : E → E^*

that assigns to each vector v ∈ E the covector g(v) such that

⟨g(v), u⟩ = (v, u).

Then

g(v) = \sum_{i=1}^n (v, e_i) ω^i.

In particular,

g(e_k) = \sum_{i=1}^n g_{ki} ω^i.
• The inverse map g^{−1} : E^* → E that assigns to each covector σ a vector g^{−1}(σ) such that

⟨σ, u⟩ = (g^{−1}(σ), u),

can be defined as follows. First, we define

g^{−1}(ω^k) = \sum_{i=1}^n g^{ki} e_i.

Then

g^{−1}(σ) = \sum_{k=1}^n \sum_{i=1}^n ⟨σ, e_k⟩ g^{ki} e_i.
• The inner product on the dual space E^* is defined so that for any two covectors α and σ

(α, σ) = ⟨α, g^{−1}(σ)⟩ = (g^{−1}(α), g^{−1}(σ)).

In particular,

(ω^i, σ) = \sum_{j=1}^n g^{ij} σ_j.
• The inverse map g^{−1} can be defined in terms of the inner product of covectors as

g^{−1}(σ) = \sum_{i=1}^n e_i (ω^i, σ).
• The Einstein summation convention consists of the following rules:

1. In any expression there are two types of indices: free indices and repeated indices.

2. Free indices appear only once in an expression; they are assumed to take all possible values from 1 to n. For example, in the expression

g_{ij} v^j

the index i is a free index.

3. The position of all free indices in all terms in an equation must be the same. For example,

g_{ij} v^j + α_i = σ_i

is a correct equation, while the equation

g_{ij} v^j + α^i = σ_i

is a wrong equation.

4. Repeated indices appear twice in an expression. It is assumed that there is a summation over each repeated pair of indices from 1 to n. The summation over a pair of repeated indices in an expression is called the contraction. For example, in the expression

g_{ij} v^j

there is a summation over the repeated index j. This is the result of the contraction of the indices k and l in the expression

g_{ik} v^l.

5. Repeated indices are dummy indices: they can be replaced by any other letter (not already used in the expression) without changing the meaning of the expression. For example,

g_{ij} v^j = g_{ik} v^k

just means

\sum_{j=1}^n g_{ij} v^j = g_{i1} v^1 + \cdots + g_{in} v^n,
• From now on we will use the Einstein summation convention. We will say that
an equation is written in tensor notation.
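• The summation convention maps directly onto numpy's einsum, which can serve as a quick sanity check of index expressions:

import numpy as np

n = 3
g = np.eye(n) + 0.1 * np.ones((n, n))    # some symmetric metric g_ij
v = np.array([1.0, 2.0, 3.0])            # v^j
u = np.array([3.0, 2.0, 1.0])            # u^i

v_down = np.einsum('ij,j->i', g, v)      # v_i = g_ij v^j (sum over the repeated j)
inner = np.einsum('ij,i,j->', g, u, v)   # (u, v) = g_ij u^i v^j
print(v_down, inner)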
Examples
• First, we list the equations we already obtained above
v_i = g_{ij} v^j, \qquad v^j = g^{ji} v_i,

(u, v) = g_{ij} u^i v^j = u^i v_i = u_i v^i = g^{ij} u_i v_j,

(α, β) = g^{ij} α_i β_j = α_i β^i = α^i β_i = g_{ij} α^i β^j,

g^{ij} g_{jk} = δ^i_k,
etc.
• Under a change of basis {e_i} → {e'_j} the bases transform as

e_i = Λ^j{}_i e'_j, \qquad e'_j = Λ̃^k{}_j e_k,

and the dual bases as

ω'^i = Λ^i{}_j ω^j, \qquad ω^i = Λ̃^i{}_j ω'^j.

• By using the second equation in the first and vice versa we obtain

Λ̃ Λ = I, \quad Λ Λ̃ = I, \quad \text{that is,} \quad Λ̃ = Λ^{−1}.
• The contravariant components of a vector transform as

v'^i = ⟨ω'^i, v⟩ = Λ^i{}_j ⟨ω^j, v⟩ = Λ^i{}_j v^j.

This leads to

g'_{ij} = Λ̃^k{}_i Λ̃^l{}_j g_{kl},

or, in matrix notation,

G' = (Λ^{−1})^T G Λ^{−1}

and

G'^{−1} = Λ G^{−1} Λ^T.
• More generally, a set of real numbers T^{i_1 \dots i_p}{}_{j_1 \dots j_q} is said to represent components of a tensor of type (p, q) (p times contravariant and q times covariant) if they transform under a change of the basis according to

T'^{i_1 \dots i_p}{}_{j_1 \dots j_q} = Λ^{i_1}{}_{l_1} \cdots Λ^{i_p}{}_{l_p} Λ̃^{m_1}{}_{j_1} \cdots Λ̃^{m_q}{}_{j_q} T^{l_1 \dots l_p}{}_{m_1 \dots m_q}.
• The symmetrization of a tensor of type (0, k) with components A_{i_1 \dots i_k} is another tensor of the same type with components

A_{(i_1 \dots i_k)} = \frac{1}{k!} \sum_{ϕ ∈ S_k} A_{i_{ϕ(1)} \dots i_{ϕ(k)}},

and the anti-symmetrization is the tensor with components

A_{[i_1 \dots i_k]} = \frac{1}{k!} \sum_{ϕ ∈ S_k} sign(ϕ) A_{i_{ϕ(1)} \dots i_{ϕ(k)}}.

A tensor is called symmetric if

A_{(i_1 \dots i_k)} = A_{i_1 \dots i_k}

and anti-symmetric if

A_{[i_1 \dots i_k]} = A_{i_1 \dots i_k}.
• The most general isotropic tensor of rank two has the form

A^i{}_j = a δ^i_j,

where a is a scalar, and the most general isotropic tensor of rank four has the form

A_{ijkl} = a δ_{ij} δ_{kl} + b δ_{ik} δ_{jl} + c δ_{il} δ_{jk},

where a, b, c are scalars.
• The Levi-Civita symbols ε_{i_1 \dots i_n} and ε^{i_1 \dots i_n} do not represent tensors! They have the same values in all bases.

• Theorem 2.3.1 The determinant of a matrix A = (A_{ij}) can be written as

det A = \sum_{ϕ ∈ S_n} sign(ϕ) A_{1ϕ(1)} \cdots A_{nϕ(n)} = ε^{j_1 \dots j_n} A_{1 j_1} \cdots A_{n j_n}.

• The contraction of two Levi-Civita symbols gives

ε^{i_1 \dots i_n} ε_{j_1 \dots j_n} = n! \, δ^{i_1}_{[j_1} \cdots δ^{i_n}_{j_n]}.

In particular,

ε^{m_1 \dots m_n} ε_{m_1 \dots m_n} = n!.
• Theorem 2.3.3 The sets of real numbers E_{i_1 \dots i_n} and E^{i_1 \dots i_n} defined by

E_{i_1 \dots i_n} = \sqrt{|g|} \, ε_{i_1 \dots i_n},

E^{i_1 \dots i_n} = \frac{1}{\sqrt{|g|}} \, ε^{i_1 \dots i_n},

where |g| = det(g_{ij}), define (pseudo)-tensors of type (0, n) and (n, 0) respectively.
• Theorem 2.3.4 Let {e_i} be a basis in E, {ω^i} be the dual basis, and {v_1, \dots, v_n} be a set of n vectors. Let V = (v^i{}_j) be the matrix of contravariant components of the vectors {v_j},

v^i{}_j = ⟨ω^i, v_j⟩,

and W = (v_{ij}) be the matrix of covariant components of the vectors {v_j},

v_{ij} = (e_i, v_j) = g_{ik} v^k{}_j.

• If the vectors {v_1, \dots, v_n} are linearly dependent, then their volume vanishes,

vol(v_1, \dots, v_n) = 0.

• If the vectors {v_1, \dots, v_n} are linearly independent, then the volume is a positive real number that does not depend on the orientation of the vectors.

• The sign of the signed volume depends on the orientation of the vectors {v_1, \dots, v_n}:

sign(v_1, \dots, v_n) = sign(det V) = \begin{cases} +1, & \text{if } \{v_1, \dots, v_n\} \text{ is positively oriented,} \\ −1, & \text{if } \{v_1, \dots, v_n\} \text{ is negatively oriented.} \end{cases}
• The duality (Hodge star) operator assigns to each anti-symmetric tensor A^{i_1 \dots i_k} the dual tensor with components

∗A_{j_1 \dots j_{n−k}} = \frac{1}{k!} E_{j_1 \dots j_{n−k} i_1 \dots i_k} A^{i_1 \dots i_k}.
• Applying the duality operator twice gives

∗∗α = (−1)^{k(n−k)} α.

That is,

∗∗ = (−1)^{k(n−k)}.
• The exterior (wedge) product is associative,

(A ∧ B) ∧ C = A ∧ (B ∧ C).
• The generalized cross product of (n − 1) vectors v_1, \dots, v_{n−1} is the covector

α = ∗(v_1 ∧ \cdots ∧ v_{n−1}),

or, in components,

α_j = E_{j i_1 \dots i_{n−1}} v^{i_1}{}_1 \cdots v^{i_{n−1}}{}_{n−1}.
• Every non-zero vector u can be written as u = e ||u||, where e is the unit vector in its direction.
• In R^3 the cross product of two vectors v and w is the vector

u = v × w = ∗(v ∧ w),

or, in components,

u_j = E_{jik} v^i w^k = \sqrt{|g|} \, ε_{jik} v^i w^k.
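• A numpy sketch building the cross product from the Levi-Civita symbol (with g = I, so E_{jik} = ε_{jik}), checked against numpy's built-in cross:

import numpy as np

eps = np.zeros((3, 3, 3))                # Levi-Civita symbol eps[i, j, k]
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0                   # even permutations of (0, 1, 2)
    eps[i, k, j] = -1.0                  # odd permutations

v = np.array([1.0, 2.0, 3.0])
w = np.array([0.0, 1.0, -1.0])

u = np.einsum('jik,i,k->j', eps, v, w)   # u_j = eps_jik v^i w^k
assert np.allclose(u, np.cross(v, w))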
• The matrix A = (A^i{}_j) of an operator A with respect to a basis {e_i} is defined by

A e_j = A^i{}_j e_i.
• Remark. Notice that the upper index, which is the first one, indicates the row
and the lower index is the second one indicating the column of the matrix. The
convenience of this notation comes from the fact that all upper indices (also
called contravariant indices) indicate the components of vectors and “belong”
to the vector space E while all lower indices (called covariant indices) indicate
components of covectors and “belong” to the dual space E ∗ .
• The matrix of the identity operator I is

I^i{}_j = δ^i_j.
• For any v ∈ E,

v = v^j e_j, \qquad v^j = ⟨ω^j, v⟩,

we have

A v = A^i{}_j v^j e_i.

That is, the components u^i of the vector u = Av are given by

u^i = A^i{}_j v^j.
• Under a change of the basis the matrix of an operator transforms as

A' = Λ A Λ^{−1}.
• The determinant and the trace of the matrix of an operator are invariant under the
change of the basis, that is
det A0 = det A , tr A0 = tr A .
• Therefore, one can define the determinant of the operator A and the trace of
the operator A by the determinant and the trace of its matrix, that is,
det A = det A , tr A = tr A .
• For self-adjoint operators these definitions are consistent with the definition in
terms of the eigenvalues given before.
• The matrix of the sum A + B of two operators A and B is the sum A + B of the matrices A and B of the operators A and B.
• The matrix of a scalar multiple cA is equal to cA, where A is the matrix of the
operator A and c ∈ R.
• Thus, the matrix of the product AB of the operators A and B is equal to the
product AB of matrices of these operators in the same order.
• The matrix of the inverse A−1 of an invertible operator A is equal to the inverse
A−1 of the matrix A of the operator A.
• The matrix of the adjoint operator A^* is

A^* = G^{−1} A^T G,

or, in components,

(A^*)^k{}_j = g^{ki} A^l{}_i g_{lj}, \qquad \text{that is,} \qquad g_{ik} (A^*)^k{}_j = A^l{}_i g_{lj}.

• Thus, an operator A is self-adjoint if its matrix satisfies

A = G^{−1} A^T G, \qquad \text{or} \qquad G A = A^T G,

and unitary if

G^{−1} A^T G A = I, \qquad \text{or} \qquad A^T G A = G.
• The equation of the plane through the point r_0 = (x_0, y_0, z_0) with the normal n = (a, b, c) is

(r − r_0) · n = 0,

or

a(x − x_0) + b(y − y_0) + c(z − z_0) = 0,

which can also be written as

ax + by + cz = d,

where

d = a x_0 + b y_0 + c z_0.
• The cross product of two vectors u and v in R^3 can be written as the formal determinant

w = u × v = det \begin{pmatrix} i & j & k \\ u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \end{pmatrix},

or, in components,

w^i = ε^{ijk} u_j v_k = \frac{1}{2} ε^{ijk} (u_j v_k − u_k v_j).

• In particular, for the standard basis,

e_i × e_j = ε_{ijk} e_k.
• The scalar triple product of three vectors is defined by

[u, v, w] = u · (v × w).

• The signed volume is zero if and only if the vectors are linearly dependent, that is, coplanar.

• For linearly independent vectors its sign is determined by the orientation of the triple {u, v, w},

sign(u, v, w) = \begin{cases} +1, & \text{if } \{u, v, w\} \text{ is positively oriented,} \\ −1, & \text{if } \{u, v, w\} \text{ is negatively oriented.} \end{cases}

• The triple product is invariant under cyclic permutations of the factors,

[u, v, w] = [v, w, u] = [w, u, v].

It is normalized so that

[i, j, k] = 1.
• Every vector v can be decomposed into its components parallel and orthogonal to a unit vector u,

v = u(u · v) − u × (u × v).
ε_{ijk} = ε_{jki} = ε_{kij},

ε_{ijk} ε^{mnl} = 6 δ^m_{[i} δ^n_j δ^l_{k]}
 = δ^m_i δ^n_j δ^l_k + δ^m_j δ^n_k δ^l_i + δ^m_k δ^n_i δ^l_j − δ^m_i δ^n_k δ^l_j − δ^m_j δ^n_i δ^l_k − δ^m_k δ^n_j δ^l_i,

ε_{ijk} ε^{mnk} = 2 δ^m_{[i} δ^n_{j]} = δ^m_i δ^n_j − δ^m_j δ^n_i,

ε_{ijk} ε^{mjk} = 2 δ^m_i,

ε_{ijk} ε^{ijk} = 6.
• This leads to many vector identities that express the double vector product in terms of the scalar product. For example,

u × (v × w) = (u · w) v − (u · v) w,

u × (v × w) + v × (w × u) + w × (u × v) = 0,

(u × v) × (w × n) = v [u, w, n] − u [v, w, n],

(u × v) · (w × n) = (u · w)(v · n) − (u · n)(v · w).
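• The first two identities are easy to verify numerically for random vectors:

import numpy as np

rng = np.random.default_rng(1)
u, v, w = rng.standard_normal((3, 3))

# "BAC-CAB" rule: u x (v x w) = (u.w) v - (u.v) w
assert np.allclose(np.cross(u, np.cross(v, w)),
                   np.dot(u, w) * v - np.dot(u, v) * w)

# Jacobi identity
jac = (np.cross(u, np.cross(v, w)) + np.cross(v, np.cross(w, u))
       + np.cross(w, np.cross(u, v)))
assert np.allclose(jac, 0.0)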
Geometry
• The point O with the zero coordinates (0, . . . , 0) is called the origin of the Carte-
sian coordinate system.
• Similarly, with every two points P and Q with the coordinates (xiP ) and (xiQ ) we
associate the vector
uPQ = r Q − r P = (xiQ − xiP )
that points from the point P to the point Q.
d(P, Q) = || r P − r Q || .
• The standard (orthonormal) basis {e1 , . . . , en } of Rn are the unit vectors that con-
nect the origin O with the points {(1, 0, . . . , 0), . . . , (0, . . . , 0, 1)} that have only
one nonzero coordinate which is equal to 1.
• The lines through the origin in the direction of the basis vectors e_i are the lines called the coordinate axes. There are n coordinate axes; they are mutually orthogonal and intersect at only one point, the origin O.
• Let a and b be real numbers such that a < b. The set [a, b] is a closed interval
in R. A parametrized curve C in Rn is a map C : [a, b] → Rn which assigns a
point in Rn
C : r (t) = (xi (t))
to each real number t ∈ [a, b].
• The point r (a) is the initial point and the point r (b) is the endpoint of the
curve.
• The curve (−C) is the parametrized curve with the opposite orientation. If the
curve C is parametrized by r (t), a ≤ t ≤ b, then the curve (−C) is parametrized
by
(−C) : r (−t + a + b) .
• The boundary of a curve consists of its endpoints taken with signs determined by the orientation,

∂C = C_1 − C_0.
• A curve C is continuous if all the functions xi (t) are continuous for any t on
[a, b].
• The boundary of a k-dimensional parametrized surface S is given by

∂S = \sum_{i=1}^k (−1)^i (S_{(i),0} − S_{(i),1}).

• The boundary operation satisfies

∂0 = 0

and

∂(∂S) = 0.
• Henceforth, we will consider only open sets and call them regions of space.
• A region S is called connected (or arc-wise connected) if for any two points P
and Q in S there is an arc joining P and Q that lies within S .
• A connected region, that is a connected open set, is called a domain.
• Let P be a point with Cartesian coordinates (xi ). Suppose that we assign another
n-tuple of real numbers (qi ) = (q1 , . . . , qn ) to the point P, so that
xi = f i (q) ,
• The matrix

J = \left( \frac{∂x^i}{∂q^j} \right)

is called the Jacobian matrix. The determinant of this matrix is called the Jacobian.
• A point P_0 at which the Jacobian matrix is invertible, that is, the Jacobian is not zero, det J ≠ 0, is called a nonsingular point of the new coordinate system (q^i). If P_0 is a nonsingular point, then for all points sufficiently close to P_0 there exist n smooth functions

q^i = h^i(x) = h^i(x^1, \dots, x^n),

giving the inverse coordinate transformation. The Jacobian matrix of the inverse transformation is the inverse of the Jacobian matrix of the direct transformation, i.e.

\frac{∂x^i}{∂q^j} \frac{∂q^j}{∂x^k} = δ^i_k \qquad \text{and} \qquad \frac{∂q^i}{∂x^j} \frac{∂x^j}{∂q^k} = δ^i_k.
• The curves C_i along which only one coordinate q^i is varied, while all the others are fixed, are called the coordinate curves, that is,

C_i : x^j = x^j(q^1_0, \dots, q^{i−1}_0, q^i, q^{i+1}_0, \dots, q^n_0).

• The vectors

e_i = \frac{∂r}{∂q^i}

are tangent to the coordinate curves.

• The surfaces S_{ij} along which only two coordinates, q^i and q^j, are varied, while all the others are fixed, are called the coordinate surfaces, that is,

S_{ij} : x^k = x^k(q^1_0, \dots, q^{i−1}_0, q^i, q^{i+1}_0, \dots, q^{j−1}_0, q^j, q^{j+1}_0, \dots, q^n_0).

• Theorem 3.3.2 For each point P there are n coordinate curves that pass through P. The set of tangent vectors {e_i} to these coordinate curves is linearly independent and forms a basis.
• The basis {ei } is not necessarily orthonormal.
• The metric tensor is defined as usual by

g_{ij} = e_i · e_j = \sum_{k=1}^n \frac{∂x^k}{∂q^i} \frac{∂x^k}{∂q^j}.
• The volume of the parallelepiped spanned by the vectors {e_1 dq^1, \dots, e_n dq^n}, called the volume element, is

dV = \sqrt{|g|} \, dq^1 \cdots dq^n.

• In an orthogonal coordinate system, with unit basis vectors ê_i = e_i/h_i and scale factors h_i = ||e_i||, the covariant and contravariant components of a vector coincide,

v_i = v^i = v · ê_i,

and the volume element takes the form

dV = h_1 \cdots h_n \, dq^1 \cdots dq^n.
• Let (q^i) and (q'^i) be two coordinate systems related by smooth transformations q' = f(q) and q = h(q'), such that

f^i(h(q')) = q'^i, \qquad h^i(f(q)) = q^i.

Under this change of coordinates the tangent vectors transform as

e'_i = \frac{∂q^j}{∂q'^i} e_j, \qquad e_j = \frac{∂q'^i}{∂q^j} e'_i.
They have the same orientation if the Jacobian of the change of coordinates is positive and opposite orientation if the Jacobian is negative.
• Thus, a set of real numbers T^{i_1 \dots i_p}{}_{j_1 \dots j_q} is said to represent components of a tensor of type (p, q) (p times contravariant and q times covariant) if they transform under a change of coordinates according to

T'^{i_1 \dots i_p}{}_{j_1 \dots j_q} = \frac{∂q'^{i_1}}{∂q^{l_1}} \cdots \frac{∂q'^{i_p}}{∂q^{l_p}} \frac{∂q^{m_1}}{∂q'^{j_1}} \cdots \frac{∂q^{m_q}}{∂q'^{j_q}} T^{l_1 \dots l_p}{}_{m_1 \dots m_q}.
3.3.2 Examples
• The polar coordinates in R^2 are introduced by

x^1 = ρ cos ϕ, \qquad x^2 = ρ sin ϕ.
• The cylindrical coordinates in R^3 are introduced by

x^1 = ρ cos ϕ, \qquad x^2 = ρ sin ϕ, \qquad x^3 = z.
The Jacobian is
det J = ρ .
Thus, the only singular point of the cylindrical coordinate system is the origin
ρ = 0. At all nonsingular points the change of variables is invertible and we have
ρ = \sqrt{(x^1)^2 + (x^2)^2}, \qquad ϕ = \cos^{−1}\!\left(\frac{x^1}{ρ}\right) = \sin^{−1}\!\left(\frac{x^2}{ρ}\right), \qquad z = x^3.
The coordinate curves of ρ are horizontal half-lines in the plane z = const going
through the z-axis. The coordinate curves of ϕ are circles in the plane z = const
of radius ρ centered at the z axis. The coordinate curves of z are vertical lines.
The coordinate surfaces of ρ, ϕ are horizontal planes. The coordinate surfaces of
ρ, z are vertical half-planes going through the z-axis. The coordinate surfaces of
ϕ, z are vertical cylinders centered at the origin.
• The spherical coordinates in R^3 are introduced by

x^1 = r sin θ cos ϕ, \qquad x^2 = r sin θ sin ϕ, \qquad x^3 = r cos θ.

The Jacobian is

det J = r^2 sin θ.
Thus, the singular points of the spherical coordinate system are the points where
either r = 0, which is the origin, or θ = 0 or θ = π, which is the whole z-axis. At
all nonsingular points the change of variables is invertible and we have
r = \sqrt{(x^1)^2 + (x^2)^2 + (x^3)^2},

ϕ = \cos^{−1}\!\left(\frac{x^1}{ρ}\right) = \sin^{−1}\!\left(\frac{x^2}{ρ}\right),

θ = \cos^{−1}\!\left(\frac{x^3}{r}\right),

where ρ = \sqrt{(x^1)^2 + (x^2)^2}.
The coordinate curves of r are half-lines going through the origin. The coordi-
nate curves of ϕ are circles of radius r sin θ centered at the z axis. The coordinate
curves of θ are vertical half-circles of radius r centered at the origin. The coor-
dinate surfaces of r, ϕ are half-cones around the z-axis going through the origin.
The coordinate surfaces of r, θ are vertical half-planes going through the z-axis.
The coordinates surfaces of ϕ, θ are spheres of radius r centered at the origin.
3.4 Vector Functions of a Single Variable

• A vector valued function v(t) has the limit v_0,

\lim_{t→t_0} v(t) = v_0,

if

\lim_{t→t_0} ||v(t) − v_0|| = 0.
• Let {ei } be a constant basis in E that does not depend on t. Then a vector valued
function v(t) is represented by its components
v(t) = v^i(t) e_i,

and its derivative is

\frac{dv}{dt} = \frac{dv^i}{dt} e_i.
• The derivative is linear,

\frac{d}{dt}(u + v) = \frac{du}{dt} + \frac{dv}{dt}, \qquad \frac{d}{dt}(cv) = c \frac{dv}{dt},

where c is a scalar constant.
• A curve
r = r0 + t u
is a straight line parallel to the vector u passing through the point r 0 .
• A curve

r = r_0 + a (cos t \, u + sin t \, v),

where u and v are orthonormal vectors, is a circle of radius a with the center at r_0 in the plane spanned by the vectors u and v.

• A curve

r = r_0 + b (cos t \, u + sin t \, v) + a t \, w

is a helix of radius b with the axis passing through the point r_0 and parallel to w.
• The vertical distance between the coils of the helix, equal to 2π|a|, is called the
pitch.
• The derivative

\frac{dr}{dt} = \frac{∂r}{∂q^i} \frac{dq^i}{dt}

of the vector valued function r(t) is called the tangent vector. If r(t) represents the position of a particle at the time t, then r' is the velocity of the particle.

• The norm

\left|\left| \frac{dr}{dt} \right|\right| = \sqrt{g_{ij}(q(t)) \frac{dq^i}{dt} \frac{dq^j}{dt}}

of the velocity is called the speed. Here, as usual,

g_{ij} = \frac{∂r}{∂q^i} · \frac{∂r}{∂q^j} = \sum_{k=1}^n \frac{∂x^k}{∂q^i} \frac{∂x^k}{∂q^j}.
• For a curve r : [a, b] → Rn , the possibility that r (a) = r (b) is allowed. Then it
is called a closed curve. A closed curve does not have a boundary.
• In the natural parametrization by the arc length s the tangent vector has unit norm,

\left|\left| \frac{dr}{ds} \right|\right| = 1,

and the unit tangent is

T = \frac{dr}{ds} = \frac{∂r}{∂q^i} \frac{dq^i}{ds}.

In the natural parametrization the length of the curve is simply

L = b − a.

That is why the parameter s is nothing but the length of the arc of the curve from the initial point r(a) to the current point r(t),

s(t) = \int_a^t \left|\left| \frac{dr}{dτ} \right|\right| dτ.
• The normalized rate of change of the unit tangent defines the principal normal

N = ρ \frac{dT}{ds} = \left|\left| \frac{dT}{dt} \right|\right|^{−1} \frac{dT}{dt},

where ρ = 1/κ is the radius of curvature and κ = ||dT/ds|| is the curvature.
• The unit tangent and the principal normal are orthogonal to each other. They
form an orthonormal system.
• Theorem 3.5.1 For any smooth curve r = r(t), the acceleration r'' lies in the plane spanned by the vectors T and N. The orthogonal decomposition of r'' with respect to T and N has the form

r'' = \frac{d||r'||}{dt} T + κ ||r'||^2 N.
• The vector

\frac{dN}{ds} + κ T

is orthogonal to both vectors T and N, and hence, to the plane spanned by these vectors. In a general space R^n this vector could be decomposed with respect to a basis in the (n − 2)-dimensional subspace orthogonal to this plane. We will restrict ourselves below to the case n = 3.

• The binormal is the unit vector

B = T × N.

• One can show that

r' × r'' = κ ||r'||^3 B.

Therefore,

κ = \frac{||r' × r''||}{||r'||^3}.
• The scalar quantity

τ = B · \frac{dN}{ds} = −N · \frac{dB}{ds}

is called the torsion of the curve.
• Theorem 3.5.2 (Frenet-Serret Equations) For any smooth curve in R^3 there hold

\frac{dT}{ds} = κ N,

\frac{dN}{ds} = −κ T + τ B,

\frac{dB}{ds} = −τ N.
• Theorem 3.5.3 Any two curves in R3 with identical curvature and torsion are
congruent.
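• A numerical sketch for the helix r(t) = (a cos t, a sin t, b t), whose curvature and torsion are the constants κ = a/(a² + b²) and τ = b/(a² + b²). The curvature uses the formula κ = ||r' × r''||/||r'||³ above; for the torsion the standard formula τ = [r', r'', r''']/||r' × r''||² (not derived in these notes) is assumed:

import numpy as np

a, b = 1.0, 0.5   # helix radius and pitch parameter

def r1(t): return np.array([-a*np.sin(t),  a*np.cos(t), b  ])   # r'
def r2(t): return np.array([-a*np.cos(t), -a*np.sin(t), 0.0])   # r''
def r3(t): return np.array([ a*np.sin(t), -a*np.cos(t), 0.0])   # r'''

t = 0.7
c = np.cross(r1(t), r2(t))
kappa = np.linalg.norm(c) / np.linalg.norm(r1(t))**3
tau = np.dot(c, r3(t)) / np.dot(c, c)

print(kappa, a / (a**2 + b**2))   # both 0.8
print(tau,   b / (a**2 + b**2))   # both 0.4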
3.6 Geometry of Surfaces

• A (two-dimensional) surface in R^n is described by a parametrization r = r(u, v). The parameters u and v are called the local coordinates on the surface.

• The curves r(u, v_0) and r(u_0, v), with one coordinate being fixed, are called the coordinate curves.
• The tangent vectors to the surface are

r_u = \frac{∂r}{∂u} = \frac{∂r}{∂q^i} \frac{∂q^i}{∂u} \qquad \text{and} \qquad r_v = \frac{∂r}{∂v} = \frac{∂r}{∂q^i} \frac{∂q^i}{∂v}.
• A surface is smooth if the tangent plane is well defined, that is, the tangent vectors are linearly independent (nonparallel) at every point of the surface, so that the tangent plane does not degenerate to a line or a point.

• The orientation of the surface is achieved by cutting it into small pieces and orienting the small pieces separately. If this can be done consistently for the whole surface, then the surface is called orientable.
• The boundary ∂S of the surface r = r (u, v), where u ∈ [a, b], v ∈ [c, d] consists
of the curves r (a, v), r (b, v), r (u, c) and r (u, d). A surface without boundary
is called closed.
• The unit normal to the surface is

n = ||r_u × r_v||^{−1} \, r_u × r_v.

Notice that

||r_u × r_v|| = \sqrt{||r_u||^2 ||r_v||^2 − (r_u · r_v)^2}.

In components,

(r_u × r_v)_i = \sqrt{g} \, ε_{ilm} \frac{∂q^l}{∂u} \frac{∂q^m}{∂v},

and

r_u · r_u = g_{ij} \frac{∂q^i}{∂u} \frac{∂q^j}{∂u}, \qquad r_v · r_v = g_{ij} \frac{∂q^i}{∂v} \frac{∂q^j}{∂v}, \qquad r_u · r_v = g_{ij} \frac{∂q^i}{∂u} \frac{∂q^j}{∂v}.
• The sign of the normal is determined by the orientation of the surface.
• For a smooth surface the unit normal vector field n varies smoothly over the
surface.
• The normal to a closed surface in R3 is usually oriented in the outward direction.
• In R3 a surface can also be described by a single equation
F(x, y, z) = 0 .
• The induced metric (the first fundamental form) on the surface has the components

h_{11} = r_u · r_u, \qquad h_{12} = h_{21} = r_u · r_v, \qquad h_{22} = r_v · r_v.
• The area of the parallelogram spanned by the vectors r_u du and r_v dv is called the area element,

dS = \sqrt{h} \, du \, dv,

where

h = det h_{ab} = ||r_u||^2 ||r_v||^2 − (r_u · r_v)^2.

Equivalently,

dS = ||r_u × r_v|| \, du \, dv.

• For a surface given as a graph,

x = u, \qquad y = v, \qquad z = f(u, v),

the area element is

dS = \sqrt{1 + \left(\frac{∂f}{∂x}\right)^2 + \left(\frac{∂f}{∂y}\right)^2} \, dx \, dy.
• By the implicit function theorem, the surface F(x, y, z) = 0 can be represented locally as such a graph z = f(x, y) if ∂F/∂z ≠ 0.
• More generally, a hypersurface S in R^n can be parametrized by (n − 1) local coordinates u^a, with tangent vectors

\frac{∂r}{∂u^a},

where a = 1, 2, \dots, (n − 1). The tangent space at a point P on the hypersurface is the hyperplane equal to the span of these vectors,

T = span\left\{ \frac{∂r}{∂u^1}, \dots, \frac{∂r}{∂u^{n−1}} \right\}.

The unit normal to the hypersurface S at the point P is the unit vector n orthogonal to T.
• A hypersurface can also be described by a single equation

F(x) = F(x^1, \dots, x^n) = 0;

its normal is proportional to the gradient of F, with the covariant components

(dF)_i = \frac{∂F}{∂x^i}.
• The induced metric on the hypersurface is

h_{ab} = g_{ij} \frac{∂q^i}{∂u^a} \frac{∂q^j}{∂u^b},

and the area element of the hypersurface is

dS = \sqrt{h} \, du^1 \cdots du^{n−1},

where h = det h_{ab}.
Vector Analysis
4.1 Vector Functions of Several Variables

• The flow lines of a vector field v = v^i(x) e_i can be found from the differential equations

\frac{dx^1}{v^1} = \cdots = \frac{dx^n}{v^n}.
4.2 Directional Derivative and the Gradient

• The directional derivatives in the direction of the basis vectors e_i are the partial derivatives

∇_{e_i} f = \frac{∂f}{∂x^i},

which are also denoted by

∂_i f = \frac{∂f}{∂x^i}.
• More generally, let r(s) be a parametrized curve in the natural parametrization and u = dr/ds be the unit tangent. Then

∇_u f = \frac{∂f}{∂x^i} \frac{dx^i}{ds}.

In curvilinear coordinates

∇_u f = \frac{∂f}{∂q^i} \frac{dq^i}{ds}.
• Therefore, the 1-forms dq^i form a basis in the dual space of covectors.
• The vector field corresponding to the 1-form df is called the gradient of the scalar field f and denoted by

grad f = ∇f = g^{ij} \frac{∂f}{∂q^i} e_j.
• The directional derivative is simply the action of the covector d f on the vector u
(or the inner product of the vectors grad f and u)
∇_u f = ⟨df, u⟩ = (grad f, u).

• Therefore,

∇_u f = ||grad f|| cos θ,
where θ is the angle between the gradient and the unit tangent u.
• The gradient of a scalar field points in the direction of the maximum rate of increase of the scalar field.
• The maximum value of the directional derivative at a fixed point is equal to the norm of the gradient,

\max_u ∇_u f = ||grad f||,

and the minimum value is equal to the negative norm of the gradient,

\min_u ∇_u f = −||grad f||.

• A vector field v is called a gradient field if there is a scalar field f such that

v = grad f.
• Recall that the duality operator ∗ assigns an (n − k)-vector to each k-form and an (n − k)-form to each k-vector,

∗ : Λ^k → Λ_{n−k}, \qquad ∗ : Λ_k → Λ^{n−k}.

• Therefore, the composition

∗d∗ : Λ_k → Λ_{k−1}

lowers the degree by one, and since d^2 = 0,

(∗d∗)^2 = 0.

• The metric defines the map

G : Λ_k → Λ^k.

That is, if A^{j_1 \dots j_k} are the components of a k-vector, then the corresponding k-form σ = GA has the components

σ_{j_1 \dots j_k} = g_{j_1 i_1} \cdots g_{j_k i_k} A^{i_1 \dots i_k}.

• The coderivative

δ = G ∗ dG∗ : Λ^k → Λ^{k−1}

satisfies

δ^2 = 0.

• The operator

L = dδ + δd

maps k-forms to k-forms,

L : Λ^k → Λ^k.
4.4 Divergence

• The divergence of a vector field v is a scalar field defined by

div v = (−1)^{n+1} ∗d∗ v,

or, in components, for the corresponding 1-form σ,

div σ = g^{−1/2} \frac{∂}{∂q^i} \left( g^{1/2} g^{ij} σ_j \right).

In Cartesian coordinates this reduces to

div v = ∂_i v^i.

• A vector field v is called solenoidal (divergence-free) if

div v = 0.
4.5 Curl

• Recall that the operator ∗d assigns an (n − k − 1)-vector to a k-form. In the case n = 3 and k = 1 this operator assigns a vector to a 1-form. This enables one to define the curl operator in R^3, which assigns a vector to a covector by

curl σ = ∗dσ,

or, in components,

(curl σ)^i = g^{−1/2} ε^{ijk} \frac{∂σ_k}{∂q^j}, \qquad curl σ = g^{−1/2} \det \begin{pmatrix} e_1 & e_2 & e_3 \\ \frac{∂}{∂q^1} & \frac{∂}{∂q^2} & \frac{∂}{∂q^3} \\ σ_1 & σ_2 & σ_3 \end{pmatrix}.
4.6 Laplacian

• The scalar Laplace operator (or the Laplacian) is the map ∆ : C^∞(R^n) → C^∞(R^n) that assigns a scalar field to a scalar field. In Cartesian coordinates it is defined by

∆f = ∂_i ∂^i f.

• The Laplacian of a k-form σ is defined by

∆σ = −(G ∗ dG ∗ d + dG ∗ dG∗)σ.

• In Cartesian coordinates the Laplacian of a vector field acts componentwise,

(∆v)^i = ∂_j ∂^j v^i.

• In R^3 it can be written as

∆v = grad div v − curl curl v.
4.7 Differential Vector Identities

• Let r = (x^i) be the radius vector, r = ||r||, and let a be a constant vector. Then the following identities hold (those involving curl refer to R^3):

div r = n,

curl r = 0,

grad (a · r) = a,

curl (a × r) = 2a,

grad r = \frac{r}{r},

grad f(r) = \frac{df}{dr} \frac{r}{r},

grad \frac{1}{r} = −\frac{r}{r^3},

grad r^k = k r^{k−2} r,

∆f(r) = f'' + \frac{(n − 1)}{r} f',

∆r^k = k(k + n − 2) r^{k−2},

∆\frac{1}{r^{n−2}} = 0 \qquad (r ≠ 0).

Here

∂_i x^k = δ^k_i, \qquad δ^i_i = n.
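• Two of these identities checked symbolically with sympy in R^3 (n = 3):

import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)
n = 3

def laplacian(f):
    return sum(sp.diff(f, s, 2) for s in (x, y, z))

k = 5   # any exponent works: Delta r^k = k (k + n - 2) r^(k-2)
assert sp.simplify(laplacian(r**k) - k*(k + n - 2)*r**(k - 2)) == 0

# Delta r^(2-n) = Delta(1/r) = 0 away from the origin
assert sp.simplify(laplacian(1/r)) == 0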
4.8 Orthogonal Curvilinear Coordinate Systems in R^3

• In an orthogonal curvilinear coordinate system the unit basis vectors are

ê_i = \frac{1}{h_i} \frac{∂r}{∂q^i},

where

h_i = \left|\left| \frac{∂r}{∂q^i} \right|\right|

are the scale factors. Then for any vector v = v^i ê_i the contravariant and the covariant components coincide,

v^i = v_i = ê_i · v.

• The displacement vector, the interval and the volume element in the orthogonal coordinate system are

dr = h_1 dq^1 ê_1 + h_2 dq^2 ê_2 + h_3 dq^3 ê_3,

ds^2 = h_1^2 (dq^1)^2 + h_2^2 (dq^2)^2 + h_3^2 (dq^3)^2,

dV = h_1 h_2 h_3 \, dq^1 dq^2 dq^3.
• The gradient, divergence, curl and Laplacian take the form

grad f = \frac{ê_1}{h_1} \frac{∂f}{∂q^1} + \frac{ê_2}{h_2} \frac{∂f}{∂q^2} + \frac{ê_3}{h_3} \frac{∂f}{∂q^3},

div v = \frac{1}{h_1 h_2 h_3} \left\{ \frac{∂}{∂q^1}(h_2 h_3 v_1) + \frac{∂}{∂q^2}(h_3 h_1 v_2) + \frac{∂}{∂q^3}(h_1 h_2 v_3) \right\},

curl v = \frac{ê_1}{h_2 h_3} \left[ \frac{∂}{∂q^2}(h_3 v_3) − \frac{∂}{∂q^3}(h_2 v_2) \right]
 + \frac{ê_2}{h_3 h_1} \left[ \frac{∂}{∂q^3}(h_1 v_1) − \frac{∂}{∂q^1}(h_3 v_3) \right]
 + \frac{ê_3}{h_1 h_2} \left[ \frac{∂}{∂q^1}(h_2 v_2) − \frac{∂}{∂q^2}(h_1 v_1) \right],

∆f = \frac{1}{h_1 h_2 h_3} \left\{ \frac{∂}{∂q^1}\left(\frac{h_2 h_3}{h_1} \frac{∂f}{∂q^1}\right) + \frac{∂}{∂q^2}\left(\frac{h_3 h_1}{h_2} \frac{∂f}{∂q^2}\right) + \frac{∂}{∂q^3}\left(\frac{h_1 h_2}{h_3} \frac{∂f}{∂q^3}\right) \right\}.
• Cylindrical coordinates: the scale factors are h_ρ = 1, h_ϕ = ρ, h_z = 1, so that

dV = ρ \, dρ \, dϕ \, dz,

grad f = ê_ρ ∂_ρ f + \frac{1}{ρ} ê_ϕ ∂_ϕ f + ê_z ∂_z f,

div v = \frac{1}{ρ} ∂_ρ (ρ v_ρ) + \frac{1}{ρ} ∂_ϕ v_ϕ + ∂_z v_z,

curl v = \frac{1}{ρ} \det \begin{pmatrix} ê_ρ & ρ ê_ϕ & ê_z \\ ∂_ρ & ∂_ϕ & ∂_z \\ v_ρ & ρ v_ϕ & v_z \end{pmatrix},

∆f = \frac{1}{ρ} ∂_ρ (ρ \, ∂_ρ f) + \frac{1}{ρ^2} ∂_ϕ^2 f + ∂_z^2 f.
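• A quick symbolic check of the cylindrical Laplacian: the function ρ^m cos(mϕ), the real part of (x + iy)^m, is harmonic, and the formula above indeed annihilates it:

import sympy as sp

rho, phi, z = sp.symbols('rho phi z', positive=True)
m = 4
f = rho**m * sp.cos(m*phi)    # real part of (x + i y)^m

lap = (sp.diff(rho*sp.diff(f, rho), rho)/rho      # (1/rho) d_rho (rho d_rho f)
       + sp.diff(f, phi, 2)/rho**2                # (1/rho^2) d_phi^2 f
       + sp.diff(f, z, 2))                        # d_z^2 f
assert sp.simplify(lap) == 0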
• Spherical coordinates: the scale factors are h_r = 1, h_θ = r, h_ϕ = r sin θ, so that

dV = r^2 sin θ \, dr \, dθ \, dϕ,

grad f = ê_r ∂_r f + \frac{1}{r} ê_θ ∂_θ f + \frac{1}{r sin θ} ê_ϕ ∂_ϕ f,

div v = \frac{1}{r^2} ∂_r (r^2 v_r) + \frac{1}{r sin θ} ∂_θ (sin θ \, v_θ) + \frac{1}{r sin θ} ∂_ϕ v_ϕ,

∆f = \frac{1}{r^2} ∂_r (r^2 ∂_r f) + \frac{1}{r^2 sin θ} ∂_θ (sin θ \, ∂_θ f) + \frac{1}{r^2 sin^2 θ} ∂_ϕ^2 f.
Integration
5.1 Line Integrals

• The expression

σ = v_i dq^i = v_1 dq^1 + \cdots + v_n dq^n

is called a differential 1-form. Each covector naturally defines a differential form. That is why it is also called a 1-form.
• If C is a closed curve, then the line integral of a vector field is denoted by

\oint_C v · dr

and is called the circulation of the vector field v about the closed curve C.
where u^1 = u, u^2 = v, h = det h_{ab}, and h_{ab} is the induced metric on the surface,

h_{ab} = g_{ij} \frac{∂q^i}{∂u^a} \frac{∂q^j}{∂u^b}.
where

J^{ij} = \frac{∂q^i}{∂u} \frac{∂q^j}{∂v} − \frac{∂q^j}{∂u} \frac{∂q^i}{∂v}.
• In R^3 every 2-form defines a dual vector. Therefore, one can integrate vectors over a surface. Let v be a vector field in R^3. Then the dual two-form is

A_{ij} = \sqrt{g} \, ε_{ijk} v^k,

or

A_{12} = \sqrt{g} \, v^3, \qquad A_{13} = −\sqrt{g} \, v^2, \qquad A_{23} = \sqrt{g} \, v^1.

Therefore,

α = \sqrt{g} \left( v^3 \, dq^1 ∧ dq^2 − v^2 \, dq^1 ∧ dq^3 + v^1 \, dq^2 ∧ dq^3 \right).

Then the surface integral of the vector field v, called the total flux of the vector field through the surface, is

\int_S α = \int_S v · n \, dS = \int_a^b \int_c^d [v, r_u, r_v] \, du \, dv,

where

n = ||r_u × r_v||^{−1} \, r_u × r_v

is the unit normal to the surface and

[v, r_u, r_v] = vol(v, r_u, r_v) = \sqrt{g} \, ε_{ijk} v^i \frac{∂q^j}{∂u} \frac{∂q^k}{∂v}.

More generally, the integration of a k-form over a k-dimensional surface involves the Jacobians

J^{i_1 \dots i_k} = k! \, \frac{∂q^{[i_1}}{∂u^1} \cdots \frac{∂q^{i_k]}}{∂u^k}.
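• A numerical sketch of the flux integral: for the unit sphere parametrized by (θ, ϕ) and the field v = r, the integrand [v, r_u, r_v] reduces to sin θ and the total flux is 4π (consistent with div r = 3 and Gauss's theorem):

import numpy as np

def r(theta, phi):
    return np.array([np.sin(theta)*np.cos(phi),
                     np.sin(theta)*np.sin(phi),
                     np.cos(theta)])

def integrand(theta, phi, h=1e-6):
    ru = (r(theta + h, phi) - r(theta - h, phi)) / (2*h)   # r_u
    rv = (r(theta, phi + h) - r(theta, phi - h)) / (2*h)   # r_v
    return np.dot(r(theta, phi), np.cross(ru, rv))         # [v, r_u, r_v], v = r

thetas = np.linspace(1e-4, np.pi - 1e-4, 200)
phis = np.linspace(0.0, 2*np.pi, 400, endpoint=False)
flux = sum(integrand(t, p) for t in thetas for p in phis) \
       * (thetas[1] - thetas[0]) * (phis[1] - phis[0])
print(flux, 4*np.pi)   # approximately equal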
• The surface integral over a closed surface S without boundary is denoted by

\oint_S α.

• Let n be the unit vector orthogonal to the hypersurface and v = ∗α be the vector field dual to the (n − 1)-form α. Then

\int_S α = \int_{a_{n−1}}^{b_{n−1}} \cdots \int_{a_1}^{b_1} v · n \, \sqrt{h} \, du^1 \cdots du^{n−1}.

This defines the total flux of the vector field v through the hypersurface S. The normal can be determined by

\sqrt{h} \, n_j = \frac{1}{(n−1)!} \sqrt{g} \, ε_{j i_1 \dots i_{n−1}} \frac{∂q^{i_1}}{∂u^1} \cdots \frac{∂q^{i_{n−1}}}{∂u^{n−1}},

where g = det(g_{ij}).
• The line integral of a conservative vector field does not depend on the path of integration but only on the endpoints of the curve.
Potential Theory
Basic Concepts of Differential Geometry
7.1 Manifolds
Applications
8.1 Mechanics
8.1.1 Inertia Tensor
8.1.2 Angular Momentum Tensor
8.2 Elasticity
8.2.1 Strain Tensor
8.2.2 Stress Tensor
8.5 Electrodynamics
8.5.1 Tensor of Electromagnetic Field
8.5.2 Maxwell Equations
8.5.3 Scalar and Vector Potentials
8.5.4 Wave Equations
8.5.5 D'Alembert Operator
8.5.6 Energy-Momentum Tensor
Bibliography

[1] A. I. Borisenko and I. E. Tarapov, Vector and Tensor Analysis, Dover, 1979.
[2] D. E. Bourne and P. C. Kendall, Vector Analysis and Cartesian Tensors, Nelson, 1977.
[3] H. F. Davis and A. D. Snider, Introduction to Vector Analysis, 7th Edition, Brown Publishers, 1995.
[4] J. H. Hubbard and B. B. Hubbard, Vector Calculus, Linear Algebra, and Differential Forms, 2nd Edition, Prentice Hall, 2001.
[5] H. M. Schey, Div, Grad, Curl, and All That: An Informal Text on Vector Calculus, W. W. Norton, 1997.
[6] Th. Shifrin, Multivariable Mathematics, Wiley, 2005.
[7] M. Spivak, Calculus on Manifolds: A Modern Approach to Classical Theorems of Advanced Calculus, HarperCollins Publishers, 1965.