Tensor Analysis and Differential Geometry
Tensor Analysis and Differential Geometry
discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/265407528
Article
CITATIONS READS
0 3,237
1 author:
SEE PROFILE
All content following this page was uploaded by Rene van Hassel on 04 September 2015.
Contents
1 Preface 4
2 Multilinear Algebra 5
2.1 Vector Spaces and Bases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.2 The Dual Space. The concept dual basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.3 The Kronecker tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.4 Linear Transformations. Index-gymnastics . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.5 Inner product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.6 Reciproke basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.7 Special Bases and Transformation Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.8 Tensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.8.1 General Definition 26
0
2.8.2 0-tensor = scalar = number 26
1
2.8.3 0-tensor = contravariant 1-tensor = vector 27
0
2.8.4 1-tensor = covariant 1-tensor = covector 27
0
2.8.5 2 -tensor = covariant 2-tensor =
linear transformation: V V 28
2
2.8.6 0 -tensor = contravariant 2-tensor =
linear transformation: V V 32
1
2.8.7 1 -tensor = mixed 2-tensor =
linear transformation: V V and V V 35
0
2.8.8 3 -tensor = covariant 3-tensor =
linear transformation: V (V V ) and (V V) V 38
2
2.8.9 2 -tensor = mixed 4-tensor =
linear transformation: (V V) (V V) = 39
2.8.10 Continuation of the general considerations about rs -tensors.
Contraction and . 42
2.8.11 Tensors on Vector Spaces provided with an inner product 45
2.9 Mathematical interpretation of the "Engineering tensor concept" . . . . . . . . 46
2.10 Symmetric and Antisymmetric Tensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
2.11 Vector Spaces with a oriented volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
2.12 The Hodge Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
2.13 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
2.14 RRvH: Identification V and V . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
2.15 RRvH: Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
2.16 RRvH: The four faces of bilinear maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
3 Tensor Fields on Rn 78
3.1 Curvilinear Coordinates and Tangent Spaces . . . . . . . . . . . . . . . . . . . . . . . . . 78
3.2 Definition of Tensor Fields on Rn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
3.3 Alternative Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
3.4 Examples of Tensor Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
2
5 Manifolds 136
5.1 Differentiable Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
5.2 Manifolds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
5.3 Riemannian manifolds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
5.4 Covariant derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
5.5 The curvature tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
3
6 Appendices 149
6.1 The General Tensor Concept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
6.2 The Stokes Equations in (Orthogonal) Curvilinear Coordinates . . . . . . . . 153
6.2.1 Introduction 153
6.2.2 The Stress Tensor and the Stokes equations in Cartesian Coordinates 154
6.2.3 The Stress Tensor and the Stokes equations in Arbitrary Coordinates 154
6.2.4 The Extended Divergence and Gradient in Orthogonal
Curvilinear Coordinates 154
6.2.4.1 The Extended Gradient 154
6.2.4.2 The Extended Divergence 154
6.3 The theory of special relativity according Einstein and Minovski . . . . . . . 155
6.4 Brief sketch about the general theory of special relativity . . . . . . . . . . . . . . 155
6.5 Lattices and Reciproke Bases. Piezoelectricity. . . . . . . . . . . . . . . . . . . . . . . . 155
6.6 Some tensors out of the continuum mechanics. . . . . . . . . . . . . . . . . . . . . . . 155
6.7 Thermodynamics and Differential Forms. . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
Index 158
4
Chapter 1 Preface
This is a translation of the lecture notes of Jan de Graaf about Tensor Calculation and
Differential Geometry. Originally the mathematical student W.A. van den Broek has
written these lecture notes in 1995. During the years, several appendices have been
added and all kind of typographical corrections have been done. Chapter 1 is rewritten
by Jan de Graaf.
To make this translation is not asked to me. I hope that this will be, also for me, a
good lesson in the Tensor Calculation and Differential Geometry. If you want to make
comments, let me here, see the front page of this translation,
In June 2010, the Mathematical Faculty asked me to translate the lecture notes of Jan de
Graaf.
Pieces of text characterised by "RRvH:" are additions made by myself.
The text is typed in ConTeXt, a variant of TeX, see context.
5
Starting Point(s):
Comment(s): 2.1.1
For every basis {ei } of V and for every vector x V there exists an unique
ordered set of real numbers {xi } such that x = xi ei .
Definition 2.1.1 The numbers {xi } are called the contravariant components of the
vector x with respect tot the basis {ei }.
Convention(s): 2.1.1
Ei = (0, , 0, 1, 0, , 0)T ,
where there becomes a number 1 on the i-th position, is a basis of Rn . This basis is
called the standard basis of Rn .
6
Notice(s): 2.1.1
X = xi Ei .
To every basis {ei } of V can be defined a bijective linear transformation
E : V Rn by Ex = X. Particularly holds that Eei = Ei . A bijective lin-
ear transformation is also called a isomorphism. By choosing a basis {ei } of
V and by defining the correspondent isomorphism E, the Vector Space V is
"mapped". With the help of Rn , V is provided with a "web of coordinates".
Comment(s): 2.1.2
For every pair of bases {ei } and {ei0 } of V, there exist an unique pair of ordered
0 0
real numbers Aii0 and Aii such that ei = Aii ei0 and ei0 = Aii0 ei .
Convention(s): 2.1.2
0 0 0
So A = [Aii ] with i0 the row index and i the column index. The matrix A is
the change-of-coordinates matrix from the basis {ei0 } to the basis {ei }.
The contravariant components of the vector x with respect to the basis {ei0 } are
0 0
notated by xi and the belonging columnvector by X .
7
Notice(s): 2.1.2
0 0 j
On the one hand holds ei = Aii ei0 = Aii Ai0 e j and on the other hand ei = ij e j ,
0 j
from what follows that Aii Ai0 = ij . On the same manner is to deduce that
j0 0 0
Aii0 Ai = ij0 . The ij and ij0 are the Kronecker deltas. Construct with them the
0 0 0 0 0
identity matrices I = [ij ] and I, = [ij0 ], then holds A A, = I, and A, A = I.
0 0
Evidently is that (A, )1 = A and (A )1 = A, .
Carefulness is needed by the conversion of an index notation to a matrix nota-
tion. In an index notation the order does not play any rule, on the other hand
in a matrix notation is the order of crucial importance.
0
For some vector x V holds on one hand x = xi ei = xi Aii ei0 and on the
0 0 0
other hand x = xi ei0 , from which follows that xi = xi Aii . This expresses
the relationship between the contravariant components of a vector x with re-
spect to the basis {ei } and the contravariant components with respect to the
0
basis {ei0 }. Analoguous is seen that xi = xi Aii0 . In the matrix notation this
0 0 0
correspondence is written as X = A, X and X = A X.
0 0
Putting the expression xi = xi Aii and ei0 = Aii, ei side by side, then is
0
seen that the coordinates xi "transform" with the inverse Aii of the change-
of-coordinates matrix Aii0 . That is the reason of the strange 19th century term
contravariant components.
i0
Out of the relation xi = xi Aii follows that x i = Aii .
0 0 0
Notation(s):
0 0
so xi
= Aii xi
0, which corresponds nicely with ei = Aii ei0 .
8
Example(s): 2.1.1 Let S = {e1 = (1, 0), e2 = (0, 1)} be the standard basis of R2
and let T = {e10 = 1 (1, 2)T , e20 = 1 (2, 1)T } be an orthonormal basis of R2 .
5 5
The coordinates of e and e are given with respect to the standard basis S.
10 20
The matrix
!
1 1 2
A, = e10 e20 = ,
5 2 1
Starting Point(s):
f(x + y) = b
b f(x) + b
f(y),
Definition 2.2.2 The Dual Space V belonging to the Vector Space V is the set of
all linear functions on V, equipt with the addition and the scalar multiplication.
For every pair linear functions bf and b
g on V, the function (bf +b g ) is defined by
(f +b
b g )(x) = f(x) + b
b g(x). For every linear function f and every real number ,
b
the linear function (b f)(x) = (b
f) is defined by (b f(x)). It is easy to control that
V is a Vector Space. The linear functions f V are called covectors or covariant
b
1-tensors.
Definition 2.2.3 To every basis {ei } of V and for every covector b f V is defined
an ordered set of real numbers { fi } by fi = b
f (ei ). These numbers fi are called the
covariant components of the covector f with respect to the basis {ei }.
b
Convention(s): 2.2.1
F = ( f1 , f2 , , fn )
b
i
Definition 2.2.4 The amount of n rowvectors b
E , defined by
i
E = (0, , 0, 1, 0, , 0),
b
where there becomes a number 1 on the i-th position, is a basis of Rn . This basis is
called the standard basis of Rn .
10
Notice(s): 2.2.1
i
F = fib
b E .
f(x) = b
b F X, where bF represents the rowvector of the covariant components of
the covector b f.
e k , defined by
To every basis {ei } of V belong n linear functions b
e k (x) = xk = (Ek )T Ex. Keep in mind, that every b
b e k most of the time is deter-
mined by the entire basis {ei }.
e i } forms a basis of V .
Lemma 2.2.1 The collection {b
g(xi ei ) = xib
g(x) = b
b e i (x)) = (gib
g(ei ) = gi xi = gi (b e i )(x),
g = gib
so b e i . The Dual Space V is spanned by the collection {b e i }. The only thing to
prove is that {b e i } are linear independent. Assume that {i } is a collection of numbers
such that ibe i = 0. For every j holds that ibe i (e j ) = i ij = j = 0. Hereby is proved
e i } is a basis of V .
that {b
Consequence(s):
Notice(s): 2.2.2
f(x) = Eb
b F(x) = b
F(Ex) = b
FX
11
Lemma 2.2.2 Let {e i } and {e i0 } be bases of V and consider their belonging dual
0
e i } and {b
bases {b e i }.
0
e i } and the basis {b
The transition between the basis {b e i } is given by the known
0
matrices A and A, , see Comment(s) 2.1.2. It goes on the following way:
0
e i } = Aii0 {b
{b e i0 } = Aii {b
e i0 } and {b e i }.
0 0
e i } and {b
Proof The transition matrices between the bases {b e i } are notated by [Bii ] and
0
e i (e j ) = ij and on the other hand holds
[Bii0 ]. On one side holds b
0 0 j0 j0 0 0
e i (e j ) = (Bii0 b
b e i )(A j e j0 ) = Bii0 A j ij0 = Bii0 Aij ,
0
such that Bii0 Aij = ij . It is obvious that Bii0 = Aii0 .
Notice(s): 2.2.3
y transform as follows
Changing of a basis means that the components of b
0 0 0 0 0
e i = yi0 Aii b
y = yi0 b
b e i = (Aii yi0 )b
e i = yib
e i = yi Aij0 b
e j = yi Aij0 b
ei .
In matrix notation
0
Y, = b
b YA, Y=b
b Y, A Y, (A, )1 )
(= b
Putting the expression yi0 = Aii0 yi and ei0 = Aii, ei side by side, then is seen
that the coordinates yi "transform" just as the basis vectors. That is the reason
of the strange 19th century term covariant components.
Notation(s):
0 0
The basisvector b e i is also written as d xi and the dual basisvector b
e i as d xi .
At this stage this is pure formal and there is not attached any special meaning
to it. Sometimes is spoken about "infinitesimal growing", look to the formal
similarity
0
i0 xi i i0 i0
dx = d x = A d x .
xi i
0 0
e i = Aii b
This corresponds nicely with b e i.
12
Starting Point(s):
Notation(s):
f : x 7 b
The covector b f(x) will henceforth be written as b f, x >.
f : x 7< b
Sometimes is written bf = <b f, >. The "argument" x leaves "blanc".
Notice(s): 2.3.1
Covectors can only be filled in at the first entrance of < , >, so elements out
of V . At the second entrance there can be filled in vectors, so elements out
of V. The Kronecker tensor is not an "inner product", because every entrance
can only receive only its own type of vector.
The Kronecker tensor is a linear functions in every separate variable. That
means that
b v V z V , R : < b
u,b u + b
v, z > = < b
u, z > + < b
v, z >, en
u V x,y V , R : < b
b u, x + y > = < b
u, x > + < b
u, y > .
The pairing between the basisvectors and the dual basisvectors provides:
1 if i = j,
i i
<b e , ej > = j = ,
0 if i 6= j,
a : V R by
Proof Choose a vector a V. Define the "evaluation function" b
b
a ( b
u) =<b
u, a > .
b
b
Comment(s): 2.3.1
Starting Point(s):
Notation(s):
j
Rei = Ri e j ,
j0
Rei0 = Ri0 e j0 . (2.1)
This means that the contravariant components of the vector Rei with respect
j
to the basis {e j } are notated by Ri and the contravariant components of the
j0
vector Rei0 with respect to the basis {e j0 } are notated by Ri0 .
These two unique labeled collection of numbers are organised in n n-
0 j 0 j0
matrices, notated by respectively R and R, . So R = [Ri ] and R, = [Ri0 ].
Hereby are j and j0 the rowindices and i and i0 are the columnindices.
Notice(s): 2.4.1
0
With the help of the transisition matrices A, and A there can be made a link
0
between the matrices R and R, . There holds namely
j j j0
Rei0 = R(Aii0 ei ) = Aii0 Rei = Aii0 Ri e j = Aii0 Ri A j e j0 .
j0 j j0
Compare this relation with ( 2.1) and it is easily seen that Ri0 = Aii0 Ri A j . In
0 0
matrix notation, this relation is written as R, = A RA, . Also is to deduce that
j 0 j0 j 0 0
Ri = Aii Ri0 A j0 , which in matrix notation is written as R = A, R, A .
The relations between the matrices, which represent the linear transformation
R with respect to the bases {ei } and {ei0 }, are now easily to deduce. There holds
namely
0 0 0 0
R, = (A, )1 RA, and R = (A )1 R, A .
The other linear types of linear transformations can also be treated on almost the same
way. The results are collected in figure 2.1.
15
x V b y, Rx > = < Pb
y V :< b y, x > holds exactly if P = R, so if Pij = Rij
holds exactly.
x V z V :< Gy, z > = < Gz, x > holds exactly if GT = G, so if g ji = gij
holds exactly. In such a case the linear transformation G : V V is called
symmetric.
Some of the linear transformations in the table can be composed by other lin-
ear transformations out of the table. If R = H G then is obvious
Rkj x j ek = Rx = (H G)x = hkl g jl x j ek . In matrix notation: RX = H(XT G)T =
HGT X.
e j = Py = G Hb
If P = G H then is obvious Pkj ykb e j . In matrix
y = hkl glj ykb
b = (HY
notation: YP bT )T G = YH
b T G.
16
0 i0 0
xV e i , x > xi = < b
xi = < b e ,x > X X
0 0 i 00
x = Ai x x = Ai xi , etc.
i i i i X = A X, etc.
0
y yi0 yi , with b e i = yi0b
y = yib e i , etc.
y V
b yi = < b
y, ei > y, ei0 >
yi0 = < b Y
b Y b,
yi0 = Aii0 yi yi = Aii yi , etc. b, A0 etc.
b= Y
Y
0
<b
y, x > R yi xi = yi0 xi R b, X0 R
b = Y
YX
R
0
Re j = Rij ei Re j0 = Rij0 ei0
R:VV e i , Re j >
Rij = < b R = [Rij ] , j is column index
i0 i0 i0 j 0 0
R j0
e , Re j0 > = Ai A j0 x j
=<b R = A, R, A
Rx V e i , Rx >
Rij x j = < b column(Rij x j ) = RX
i0 j0 i0 j
i0 0 0 0
R x j0
=<b
e , Rx > = Ai R j x j column(Rij0 x j ) = R, X
0 0
<b
y, Rx > R yi Rij x j = Rij0 x j yi0 R YRX b, R0 X0 R
= Y ,
b
P
0 0 0
e i = Pijb
Pb ej e i = Pij0b
Pb ej
P : V V e i, e j >
Pij = < Pb P = [Pij ] , j is column index
i0 i0 i0 j 0 0
Pj0
= < Pb
e , e j0 > = Ai A j0 Pij P, = A PA,
j j
Pb
y V Pi y j = < Pb
y,b
ei > row(Pi y j ) = YP b
j0 j j0 0
Pi0 y j0 = < Pb
y,b
ei0 > = Aii0 Pi y j row(P 0 y j0 ) = b
i
Y, P, = b
YPA
j j0 0
b, P0 X0 R
< Pb
y, x > R y j Pi xi = Pi0 y j0 xi R YPX
b = Y ,
G
0
e j Gei0 = gi0 j0b
Gei = gijb ej
G : V V gij = < Gei , e j > G = [gi j ] , j is column index
j
gi0 j0 = < Gei0 , e j0 > = Aii0 A j0 gij G,, = [gi0 j0 ] = (A, )T GA,
gi j xi = < Gx, e j > row(gi j xi ) = XT G
0 j 0 0
Gx V gi0 j0 xi = < Gx, e j0 > = A j0 gij xi row(gi0 j0 xi ) = (X )T G,,
0
= (X )T (A, )T GA, = XT GA,
0 0 0
< Gx, z > R gij xi z j = gi0 j0 xi z j R XT GZ = (X0 )T G,, Z R
H
k 0 0 0
Hb
e = hkl el e k = hk l el0
Hb
H : V V e k , Hb
hkl = < b el > H = [hkl ] , l is column index
k0 l0 k0 l0 k0 l0 00 0 0 0 0
h =<b
e , Hb
e > = Ak Al hkl H = [hk l ] = A H(A )T
e k , Hb
hkl yl = < b y> column(hkl yl ) = HY bT
0 0 k0 0 0 l0
Hb
y V hk l yl0 =<b
e , Hb
y >= Akk hkl yl column(h yl0 ) = H ( b
k Y,)T
0 0
= A H( b
Y,A )T
0 0
<b
u, Hb
y > R hkl uk yl = hk l uk0 yl0 R UH
b YbT = U b, )T R
b, H ( Y
Starting Point(s):
i. x, y V : (x, y) = (y, x)
ii. x, y, z V , R : (x + y, z) = (x, z) + (y, z)
iii. x V : x 6= 0 (x, x) > 0
Notice(s): 2.5.1 In the mathematics and physics there are often used other inner
products. They differ from the Euclidean inner product and it are variations on
the conditions of Definition 2.5.1 i and Definition 2.5.1 iii. Other possibilities are
Clarification(s): 2.5.1
Condition Def. 2.5.1 iii implies condition Ntc. 2.5.1 b. Condition Ntc. 2.5.1 b
is weaker then condition Def. 2.5.1 iii.
In the theory of relativity, the Lorentz inner product plays some rule and it
satisfies the conditions Def. 2.5.1 i, Def. 2.5.1 ii and Ntc. 2.5.1 b.
In the Hamiltonian mechanics an inner product is defined by the combination
of the conditions Ntc. 2.5.1 a, Def. 2.5.1 ii and Ntc. 2.5.1 b. The Vector Space
V is called a symplectic Vector Space. There holds that dimV = even. (Phase
space)
If the inner product satisfies condition Def. 2.5.1 i, the inner product is called
symmetric. If the inner product satisfies condition Ntc. 2.5.1 a then the inner
product is called antisymmetric.
18
Definition 2.5.2 let {ei } be a basis of V and define the numbers gij = (ei , e j ). The
matrix G = [gij ], where i is the row index and j is the column index, is called the
Gram matrix.
Notation(s):
Proof
a. Take a fixed u V and define the linear function x 7 (u, x). Then there exists a
u V such that for all x V holds: < b
b u, x > = (u, x). The addition u 7 b u seems
to be a linear transformation. This linear transformation is called G : V V . So
u = Gu.
b
Because dim(V) = dim(V ) < the bijectivity of G is proved by proving that G
is injective. Assume that there exists some v V, v 6= 0 such that Gv = 0. Then
holds for all x V that 0 = < Gv, x > = (v, x) and this is in contradiction with
Ntc. 2.5.1 b.
b. G is invertiblle if and only if GT is invertible. Assume that there is a columnvector
X Rn , X 6= O such that GT X = O. Then the rowvector XT G = O. With x =
E1 X 6= O follows that the covector E (XT G) = Gx = 0. This is contradiction with
the bijectivity of G.
c. The components of Gei are calculated by < Gei , ek > = (ei , ek ) = gik . There follows
that Gei = gikb e k and also that Gx = G(xi ei ) = xi gikbe k.
d. e k = lkb
Out of G(gli ei ) = gli Gei = gli gikb ek = be l follows that G1 b
e l = gli ei . At last
G1 b e l ) = yl gli ei .
y = G1 (ylb
19
Comment(s): 2.5.1
The numbers gi0 j0 are put in a matrix with the name G,, . So G,, = [gi0 j0 ] with
j0 the column index and i0 the row index, such that in matrix notation can be
written G,, = (A, )T GA, .
In the follow up, so also in the next paragraphs, the inner product is assumed
to satisfy the conditions Def. 2.5.1 i, Def. 2.5.1 ii and Ntc. 2.5.1 b, unless oth-
erwise specified. So the Gram matrix will always be symmetric.
Definition 2.5.3 In the case of an Euclidean inner product the length of a vector
x is notated by | x | and is defined by
p
| x | = (x, x).
Lemma 2.5.1 In the case of an Euclidean inner product holds for every pair of
vectors x and y
| (x, y) | | x | | y | .
Definition 2.5.4 In the case of an Euclidean inner product the angle between
the vectors x 6= 0 and y 6= 0 is defined by
(x, y)
= arccos .
| x || y |
Starting Point(s):
Starting Point(s):
Definition 2.6.1 To the basis {ei } in V is defined a second basis {ei } in V which is
e i = gij e j . This second basis is called the to the first basis
defined by {ei } = G1 b
belonging reciproke basis.
21
Comment(s): 2.6.1
j
Out of ei = gij e j follows that gki ei = gki gij e j = k e j = ek . These relations
express the relation between a basis and its belonging reciproke basis. It is
important to accentuate that the reciproke basis depends on the chosen inner
product on V. If there was chosen another inner product on V then there was
attached another reciproke basis to the same basis.
The to the reciproke basis belonging Gram matrix is easily to calculate
There holds that (ei , e j ) = gil (el , e j ) = gil glj = ij . In such a case is said that
the vectors ei and e j for every i 6= j are staying perpendicular to each other.
Lemma 2.6.1 Let {ei } and {ei0 } be bases of V and consider there belonging reci-
0
proke bases {ei } and {ei }. The transistion matrix from the basis {ei } to the basis {ei0 }
0
is given by A and the transistion matrix the other way around is given by A, . So
0 0 0
ei = Aii0 ei and ei = Aii ei .
Proof It follows direcly out of the transistions between dual bases, see Lemma 2.2.2.
The proof can be repeated but then without "dual activity".
0 0 0
Notate the transition matrices between the bases {ei } and {ei } by B, = [Bii ] and B =
0 0 0
[Bii0 ], so ei = Bii0 ei and ei = Bii ei . On one hand holds (ei , e j ) = ij and on the other
0 j0 j0 0 j0 j0
hand (ei , e j ) = (Bii0 ei , A j e j0 ) = Bii0 A j ij0 = Bii0 A j , so Bii0 A j = ij . Obviously are B, and
0 0
A each inverse. The inverse of A is given by A, , so B, = A, . Completely analogous is
0 0
to deduce that B = A .
Comment(s): 2.6.2
j
For the covariant components holds xi = x j i = x j (e j , ei ) = (x j e j , ei ) = (x, ei ).
For the contravariant components holds xi = x j ij = x j (ei , e j ) = (ei , x j e j ) =
(ei , x).
With respect to a second basis {ei0 } holds xi0 = (x, ei0 ) = (x, Aii0 ei ) =
Aii0 (x, ei ) = Aii0 xi .
The covariant components transform on the same way as the basisvectors, this
in contrast to the contravariant components. This explains the words covariant
and contravariant. The scheme beneath gives some clarification.
xi0 = Aii0 xi covariant case with A,
ei0 = Aii0 ei ( with: A, )
xi0 = Ai0 xi contravariant case with A0 = (A, )1
i
The mutual correspondence between the covariant and the contravariant com-
ponents is described with the help of the Gram matrix and its inverse. There
holds that xi = (x, ei ) = x j (e j , ei ) = g ji x j and for the opposite direction holds
xi = (x, ei ) = x j (e j , ei ) = g ji x j . With the help of the Gram matrix and its
inverse the indices can be shifted "up" and "down".
The inner product between two vectors x and y can be written on several man-
ners
x y gij = x yi
i j i
(x, y) =
x y gij = x yi .
i j i
Summarized:
xi0 = xi Aii0 X, = b
b XA,
xi = gij x j X = (GX)T
b
(x, y) = xi yi (x, y) = b
XY
0 0
(x, y) = xi0 yi (x, y) = b
X, Y
(x, y) = gij xi y j (x, y) = XT GY
0 0
0 T 0
(x, y) = gi0 j0 xi y j (x, y) = X G,, Y
23
Conclusion(s): IMPORTANT:
To a FIXED CHOSEN inner product (.) the concept of "dual space" can be ignored
without any problems. EVERY preliminary formula with hooks "< , >" in it,
gives a correct expression if the hooks "< , >" are replaced by "(, )" and if the
caps " b " are kept away. There can be calculated on the customary way such as
is done with inner products.
Starting Point(s):
Lemma 2.7.1 For every invertible symmetric n n-matrix Q there exists a whole
number p {0, , n} and an invertible matrix A such that AT QA = , with =
diag(1, , 1, 1, , 1). The matrix contains p times the number 1 and (n p)
j
times th number 1. Notate A = [Ai ], Q = [Qij ] and = [ij ], then holds
Aki Qkl Alj = ij in index notation.
Proof Because of the fact that Q is symmetric, there exists an orthogonal matrix F such
that
FT QF = = diag(1 , , n ).
with the searched matrix. The number of positive eigenvalues of the matrix Q gives
the nummer p.
24
Proof Let {ci } be a basis of V. Let Q be the corresponding Gram matrix, so Qij =
(ci , c j ). This matrix is symmetrix and invertible. Lemma 2.7.1 gives that there exists an
invertible matrix A such that AT QA = = diag(1, , 1, 1, , 1). Write A = [Aii0 ]
and define the set {ci0 } with ci0 = Aii0 ei . Since A is invertible, the set {ci0 } is a basis of V
and there holda that
j j
(ci0 , c j0 ) = Aii0 A j0 (ci , c j ) = Aii0 Qij A j0 = i0 j0 .
Comment(s): 2.7.1
Definition 2.7.1 The to Theorem 2.7.1 belonging basis {ei } is called an orthonormal
basis of the Vector Space V.
Notice(s): 2.7.1
O(p, q) = {A Rnn | AT A = }.
A special subgroup of it is
Example(s): 2.7.1
If the inner product on V has signature p = n then the group O(p, q) is exactly
equal to the set of orthogonal matrices. This group is called the orthogonal
group and is notated by O(n). An element out of the subgroup SO(n) trans-
forms an orthogonal basis to an orthogonal basis with the same "orientation".
Remember that the orthogonal matrices with determinent equal to 1 describe
rotations around the origin.
Let the dimension of V be equal to 4 and the inner product on V has signature
p = 1. Such an inner product space is called Minkowski Space. The
belonging group O(1, 3) is called the Lorentz group and elements out of this
group are called Lorentz transformations. Examples of Lorentz transforma-
tions are
cosh sinh 0 0
sinh cosh 0 0
A1 = ,
0 0 1 0
0 0 0 1
Starting Point(s):
r
Definition 2.8.1 The s -tensor on V, with r = 0, 1, 2, , s = 0, 1, 2, is a func-
tion
T : V V V V R,
| {z } | {z }
r times s times
u;b
b z ; v; w; ; y 7 R,
v; ;b
| {z } | {z }
r covectors V s vectors V
which is linear in each argument. This means that for every , R and each
"slot" holds that by way of example
u,b
T( b v, , b
z1 + b
z2 , v; w; ; y) =
T( b
u,b
v, ,b
z1 , v; w; ; y) + T( b
u,b
v, ,b
z2 , v; w; ; y).
For more specification there is said that T is contravariant of order r and is covariant
of order s. If holds that p = r+s then there is sometimes spoken about a p-tensor.
Comment(s): 2.8.1
u,b
The order of the covectors b v, ,b
z and the order ofthe vectors v; w; ; y is
of importance! Most of the time, the value of T changes if the order of two
covectors or vectors are changed. If a vector is changed with a covector the
result is a meaningless expression.
Sometimes a notation is used such that the covectors and the vectors are not
splitted up into two separated groups, but are placed on an agreed fixed posi-
tion. The order remains of importance and the previous remark is still valid!
0
2.8.2 0 -tensor = scalar = number
27
If p = 0 there are no vectors or covectors to fill in. The following definition is just a
convention.
1
2.8.3 0 -tensor = contravariant 1-tensor = vector
1
Definition 2.8.3 A 0 -tensor is a linear transformation of V to R.
Notice(s): 2.8.1
Write the tensor as by 7 T(by ). In accordeance with Lemma 2.3.1 there is some
y) =< b y V . The set of 10 -tensors is
y, a > for all b
vector a V such that T(b
exactly equal to the Vector Space V, the startingpoint.
For every basis {ei } of V a 10 -tensor T can be written as T = T(b e i )ei = Ti ei ,
with Ti ei = ai ei = a. For every b y V holds as known
e i ) = yi T(b
y ) = T(yib
T(b e i ) = T(b
e i) < b y, ai ei > = < b
y, ei > = < b y, a > .
Definition 2.8.4 The numbers Ti = T(b e i ) are called the contravariant components
of the tensor T with respect to the basis {ei }. This explains also the name "con-
travariant 1-tensor".
0
2.8.4 1 -tensor = covariant 1-tensor = covector
0
Definition 2.8.5 A 1 -tensor is a linear transformation of V to R.
28
Notice(s): 2.8.2
Write the tensor as x 7 F(x). In accordeance with Definition 2.2.1 the func-
tions F is a linear function on V and can be written as x 7 F(x) = < bf, x >, for
0
certain f V . The set of 1 -tensors is exactly equal to Dual Space V of the
b
e i = fib
with Fib ei = b
f. For every x V holds as known
x i ei ) = b
F(x) = T(b x i F(ei ) = F(ei ) < b
e i , x > = < fi b
e i, x > = < b
f, x > .
Definition 2.8.6 The numbers Fi = F(ei ) are called the covariant components of
the tensor F with respect to the basis {ei }. This explains also the name "covariant
1-tensor".
0
2.8.5 2 -tensor = covariant
2-tensor =
linear transformation: V V
Clarification(s): 2.8.1
0
For a 2 -tensor holds:
Comment(s): 2.8.2
0
p, b
Definition 2.8.9 For every b q V the p b
2 -tensor b q on V is defined by
q )(x, y) = < b
p b
(b p, x >< b
q, y > .
Notice(s): 2.8.3
p, b
If the system { b q } is linear independent then b q 6= b
p b q b
p.
If there are made good compromises then there exists an 1-1 correspondence between
the 02 -tensors and the linear transformations from V to V .
30
0
Theorem 2.8.1 Given: a 2 -tensor K.
e i , so Ku = K(u, ei )b
Explicitly: K = K(, ei )b e i.
There exists just one linear transformation K : V V such that
e i , so K w = K(ei , w)b
Explicitly: K = K(ei , )b e i.
Proof
Choose a fixed a V and look to the 01 -tensor x 7 K(a, x). Interpret anyhow
e i.
Notice 2.8.2, so K w = K(ei , w)b
0
The explicit representation after a basis transition holds K u = K(ei0 , u)b ei .
Notice(s): 2.8.4
defined by ij = (ei , e j ), are called the covariant components of the tensor with
respect to the basis {ei }.
31
Notice(s): 2.8.5
0
Definition 2.8.12 Let {ei } be basis on V. To every pair indices i and j the 2 -tensor
ei b
b e j is defined by
ei b
(b e j )(x, y) = b
e i (x)b
e j (y) = < b
e i , x >< b
e j, y > .
Lemma 2.8.1
ei b
The set {b ei b
e j } is a basis of T20 (V). There holds: = ij b e j.
If dim( V ) = n then dim( T20 (V) ) = n2 .
e i , x >< b
(x, y) = (xi ei , y j e j ) = ij (xi )(y j ) = ij < b e j , y > = ij (b
ei b
e j )(x, y),
ei b
or = i j b ei b
e j . The Vector Space T20 (V) is accordingly spanned by the set {b e j }.
ei b
The final part to prove is that the system {b e j } is linear independent. Suppose that
there are n2 numbers ij such that ij bei b
e j = 0. For every k and l holds that
j
ei b
0 = ij (b e j )(ek , el ) = ij ik l = kl .
2
2.8.6 0 -tensor = contravariant 2-tensor =
linear transformation: V V
Clarification(s): 2.8.2
2
For a 0 -tensor H holds:
x + b
H(b y, b
z ) = H(b z ) + H(b
x, b y, b
z ),
x, b
H(b y + b
z ) = H(b y ) + H(b
x, b x, b
z ),
y, b
x, b
for all b z V and for every , R.
Comment(s): 2.8.4
2
Definition 2.8.15 For every x, y V the 0 -tensor x y on V is defined by
u,b
(x y)( b v) =<b
u, x >< b
v, y > .
Notice(s): 2.8.6
y) =<b
x,b
H(b x, Hb
y >,
y) =<b
x,b
h(b y, Hb
x>
If there are made good compromises then there exists an 1-1 correspondence between
the 20 -tensors and the linear transformations from V to V.
2
Theorem 2.8.2 Given: a 0 -tensor H.
x V b
b y V : H(b y) =<b
x,b x, Hb
y>.
Explicitly: H = H(b e i , )e i , so Hb e i ,b
v = H(b v )ei .
There exists just one linear transformation H : V V such that
x V b
b y V : H(b y, H b
y) =<b
x,b x>.
e i )e i , so H b
Explicitly: H = H(,b v = H(b e i )e i .
v,b
Proof
Notice(s): 2.8.7
x V b
If b y V : H(b y ) = H(b
x,b y,bx ) then holds H = H .
x V b
If b y V : H(b y ) = H(b
x,b y,bx ) then holds H = H .
Notice(s): 2.8.8
2
Definition 2.8.18 Let {ei } be basis on V. To every pair indices i and j the 0 -tensor
ei e j is defined by
Lemma 2.8.2
(b e i , y jb
y ) = (xib
x,b e j ) = ij xi y j = ij < b y, e j > = ij (ei e j )(b
x, ei >< b x,b
y ),
35
or = i j ei e j .
The Vector Space T02 (V) is accordingly spanned by the set {ei e j }. The final part to
prove is that the system {ei e j } is linear independent. This part is done similarly as in
Lemma 2.8.1.
1
2.8.7 1 -tensor = mixed 2-tensor =
linear transformation: V V and V V
Clarification(s): 2.8.3
1
For a 1 -tensor R holds:
x + b
R(b y, z ) = R(b
x, z ) + R(b
y, z ),
x, y + z ) = R(b
R(b x, y) + R(b
x, z),
Comment(s): 2.8.5
1
Definition 2.8.20 For every pair x V, b
y V, the 1 -tensor x b
y on V is defined
by
u, v) = < b
y)( b
(x b u, x >< b
y, v > .
36
Definition 2.8.21
1
With a linear transformation R : V 7 V is associated a 1 -tensor:
x, y) = < b
R(b x, Ry > .
1
With a linear transformation P : V 7 V is associated a 1 -tensor:
x, y) = < Pb
P(b x, y > .
There exists an 1-1 correspondence between the 11 -tensors and the linear transforma-
tions from V to V. There exists an 1-1 correspondence between the 11 -tensors and the
linear transformations from V to V .
1
Theorem 2.8.3 Given: a 1 -tensor R.
x V y V : R(b
b x, y) = < b
x, R y > .
x V y V : R(b
b x, y) = < R b
x, y > .
e j , so R b
Explicitly: R = R(, e j )b u = R( b ej .
u, e j )b
Proof
Notice(s): 2.8.9
x and y in R(b
Shifting b x, y) gives a meaningless expression.
Notice(s): 2.8.10
1
Definition 2.8.23 Let {ei } be basis on V. To every pair indices i and j the 1 -tensor
ei b
e j is defined by
x, y ) = < b
e j )(b
(ei b e j, y > .
x, ei >< b
Lemma 2.8.3
(b e i , y j e j ) = ij xi y j = ij < b
x, y) = (xib e j , y > = ij (ei b
x, ei >< b e j )(b
x, y ),
or = ij ei b
e j.
The Vector Space T11 (V) is accordingly spanned by the set {ei b e j }. The final part to
e j } is linear independent. This part is done similarly as in
prove is that the system {ei b
Lemma 2.8.1.
38
0
2.8.8 3 -tensor = covariant
3-tensor =
linear transformation: V (V V) and (V
V) V
The meaning is an obvious expansion of the Clarification(s) 2.8.1, 2.8.2 and 2.8.3. See
also the general definition 2.8.1.
Comment(s): 2.8.6
0
Definition 2.8.26 u,b
For every b v, w
b V the u b
3 -tensor b v w
b on V is defined
by
u b
(b b )( x, y, z ) = < b
vw u, x >< b
v, y >< w
b, z > .
Lemma 2.8.4
e j b
The set {b e j b
e j } is a basis of T30 (V). There holds: = hij b
e j b
e j b
e j.
If dim( V ) = n then dim( T30 (V) ) = n3 .
Comment(s): 2.8.8
2
2.8.9 2 -tensor = mixed
4-tensor =
linear transformation: (V V) (V V) =
Comment(s): 2.8.9
2
Definition 2.8.29 For every set a, b V, b d V the
c, b 2 -tensor a b b
c b
d is
defined by
( a b b
c b u,b
d )( b v, x, y) = < b
u, a >< b
v, b >< b
c, x >< b
d, y > .
Notice(s): 2.8.11
c, b
Also here, if the system {b d } is linear independent then
c d 6= a b d b
a b b b b c.
2
Definition 2.8.30 Let {ei } be a basis of V and a 2 -tensor on V. The numbers
jk jk
hi , defined by hi
= (b e j ,b
e k , eh , ei ), are called the (mixed) components of the
tensor with respect to the basis {ei }. The collection of mixed components of a
2 jk
2 -tensor are organized in a 4-dimensional cubic matrix and is notated by [hi ].
Lemma 2.8.5
jk
e h b
The set {e j ek b e i } is a basis of T22 (V). There holds: = hi e j ek b
e h b
e i.
If dim( V ) = n then dim( T22 (V) ) = n4 .
41
(R) : V V : x 7 (R)x e j ,b
= Rlm (b e m , el , x)e j
0 0 0
e j ,b
= Rlm0 (b e m , el0 , x)e j0
(R) : V V : x 7 (R)b
b x = Rlm (b e m , el , e j )b
x,b ej
0 0 0
= Rlm0 (b e m , el0 , e j0 )b
x,b ej
This "game" can also be played with summations about other indices.
The case: (V V ) (V V ) and (V V) (V V).
ei b
Let K : V V be a linear transformation. Write K = Kijb e j . In this case
there is worked only with the index notation.
jk jk jk
[Khi ] 7 [hi K jk ], [xh ] 7 [hi K jk xh ], other choice: [xh ] 7 [hi K jk xi ]
With H : V V and H = H jk e j ek ,
jk jk jk
[H jk ] 7 [hi Hhi ], [x j ] 7 [hi Hhi x j ], other choice: [xk ] 7 [hi Hhi xk ].
Et cetera.
Contraction and .
<b
v, a > < b
z, d >< b
p, f > < b
u, k > .
For every choice of the covectors and vectors the righthand side is
a product of (r + s) real numbers!
Definition 2.8.32 For every pair of rs -tensors T and t, the rs -tensor T+t is defined
by (T+t)(b v, w b , ,b
z, f, g, , k) = T(b
v, w
b , ,b
z, f, g, , k)+t(bv, w
b , ,b
z, f, g, , k)
r
and for every R the s -tensor T is defined by (T)(b v, w
b , ,b
z, f, g, , k) =
T(bv, w
b , ,b
z, f, g, , k).
The proof of the following theorem goes the same as the foregoing lower order exam-
ples.
43
Theorem 2.8.4
The set of rs -tensors is a Vector Space over R, which is notated by Tsr (V).
Let {ei } be basis on V and dim(V) = n then a basis of Tsr (V) is given by
1 i1 n, , 1 ir n,
j1 j2 js
{ ei1 ei2 eir b
e b e b
e }, with
1 j1 n, , 1 js n.
So dimTsr = nr+s .
In the expansion
i i i
e j1 b
T = T j1 j2 jr ei1 ei2 eir b e j2 b
e js ,
1 2 s
The order of a tensor can be decreased with 2 points. The definition makes use of a
basis of V. The definition is independent of which basis is chosen.
Definition 2.8.33 Let {ei } be basis on V. Let T Tsr (V) with r 1 and s 1.
Consider the summation
0
e i , , ei , ) = T( ,b
T( ,b e i , , ei0 , ).
The dual basis vectors stay on a fixed chosen "covector place". The basis vectors
stay on a fixed chosen "vector place". The defined summation is a r1
s1 -tensor. The
corresponding linear transformation from Tsr (V) to Ts2
r1 (V) is called a contraction.
44
Example(s): 2.8.2
1 0
The contraction of a R is scalar. In index notation: Rii = Rii0 . This is
1 -tensor
the "trace of the matrix". In the special case of a 11 -tensor of the form a b
b:
0
bi ai = bi ai0 = < b
b, a >.
Consider the mixed 5-tensor , in index notation
A contraction over the first two indices (i and j) gives a 3-tensor , the
covariant components are given by
Pay attention to the order of how "the vectors and covectors are filled in"!
Comment(s): 2.8.11
Starting Point(s):
There is known out of the paragraphs 2.5 and 2.6 that if there is "chosen an inner prod-
uct on V", "V and V can be identified with each other". There exists a bijective linear
transformation G : V V with inverse G1 : V V. To every basis {ei } on V, there is
available the associated "reciprocal" basis {ei } in V, such that (ei , e j ) = ij .
This means that it is sufficient to work with 0p -tensors, the covariant tensors. Every
"slot" of some mixed rs -tensor, which is sensitive for covectors can be made sensitive
for a vector by transforming such a vector by the linear transformation G. Otherwise
every "slot" which is sensitive for vectors can be made sensitive for covectors by using
the linear transformation G1 .
Summarized: If there is chosen some fixed inner product, it is enough to speak about
p-tensors. Out of every p-tensor there can be constructed some type rs -tensor, with
r + s = p.
Conclusion(s): (IMPORTANT)
A FIXED CHOSEN inner product (, ) on V leads to:
calculations with the usual rules of an inner product, replace all angular hooks
< , > by round hooks (, ),
correct expressions, if the hats b are dropped in all the formules of paragraph
2.8.1 till 2.8.10.
Example(s): 2.8.3
Starting Point(s):
Comment(s): 2.9.1
Out of the linear Algebra, the 1-dimensional blocks of numbers (the rows and
columns) and the 2-dimensional blocks of numbers (the matrices) are well-
known. Within the use of these blocks of numbers is already made a differ-
ence between upper and lower indices. Here will be considered q-dimensional
blocks of numbers with upper, lower or mixed indices. These kind of "super
matrices" are also called "holors" 1. For instance the covariant components of
a 4-tensor leads to a 4-dimensional block of numbers with lower indices.
Notation(s):
T02 (Rn ) = Rnn is the Vector Space of all the nn-matrices with upper indices.
T11 (Rn ) = Rnn is the Vector Space of all the n n-matrices with mixed indices.
T20 (Rn ) = Rnn is the Vector Space of all the n n-matrices with lower indices.
T21 (Rn ) is the Vector Space of all the 3-dimensional cubic matrices with one
upper index and two lower indices.
Tsr (Rn ), with r, s {0, 1, 2, } fixed, is the Vector Space of all (r + s)-dimensional
holors with s lower indices and r upper indices.
Comment(s): 2.9.2
The Vector Space Tsr (Rn ) over R is of dimension n(r+s) and is isomorf with
(r+s)
Rn . If for instance the indices are lexicographic ordered then an identi-
(r+s)
fication with Rn can be achieved.
Notation(s):
Comment(s): 2.9.3
Notation(s):
0
Definition 2.9.2 A covariant 1-tensor, 1 -tensor or covector is a transformation
F : Bas(V) T10 (Rn ) with the property
F({ei }) = [x j ]
)
j
x j0 = A j0 x j .
F({ei0 }) = [x j0 ]
Comment(s): 2.9.4
0
e i = xi0b
x = xib
With every covariant 1-tensor corresponds a linear function b ei
which is independent of the chosen basis.
Notation(s):
Comment(s): 2.9.5
0
With every contravariant 1-tensor corresponds a vector x = xi ei = xi ei0
which is independent of the chosen basis.
49
Notation(s):
Comment(s): 2.9.6
Comment(s): 2.9.7
1
Definition 2.9.6 A mixed 2-tensor or 1 -tensor is a transformation
S : Bas(V) T11 (Rn ) with the property
S({ei }) = [Tlk ]
0 0 0
Tlk0 = Akk All Tlk .
0
S({ei0 }) = [Tl0 ]
k
Comment(s): 2.9.8
0
Definition 2.9.7 A covariant p-tensor or p -tensor is a transformation
S : Bas(V) Tp0 (Rn ) with the property
S({ei }) = [Ti1 ip ] i ip
Ti0 i0p = Ai10 Ai0 Ti1 ip .
S({ei0 }) = [Ti0 i0p ]
1 1 p
1
51
Comment(s): 2.9.9
Some examples.
52
Example(s): 2.9.1
and there follows that F1 is a mixed 2-tensor. This can also be seen in
matrix language. The argument is then:
0 0
There holds I, = A I A, for every invertible n n-matrix A, .
F1 is the Kronecker tensor.
Condider the transformation F2 which adds to every basis of V the matrix
[km ]. The question becomes if F2 is a covariant 2-tensor?
Holds "I,, = (A, )T I A, " forevery invertible n n-matrix? The answer is
"no", so F2 is not a covariant 2-tensor.
Condider the transformation F3 which adds to every basis of V the matrix
[km ]. The question becomes if F3 is a contravariant 2-tensor?
answer is "no", because "I,, = A I (A )T " is not valid for every invertible
0 0
n n-matrix.
If there should be a restriction to orthogonal transition matrices then F2
and F3 should be 2-tensors.
The to the mixed 2-tensor F1 belonging linear transformation from V to
V is given by x 7< b e i , x > ei = xi ei = x, the identitiy map on V.
Consider the transformation F which adds to every basis of V the matrix
diag(2, 1, 1). The question becomes if F is a covariant, contravariant or a
mixed 2-tensor? It is not difficult to find an invertible matrix A such that
diag(2, 1, 1) 6= A1 diag(2, 1, 1) A. So it follows immediately that F is not a
2-tensor of the type as asked.
53
Example(s): 2.9.2
({ei }) = [qkl ] = Q.
a scalar? If {ei } and {ei0 } are two arbitrary basis of V and ({ei }) = qkl and
0
({ei0 }) = qkl0 then holds
0 0
qkl0 = All0 Akk qkl ,
such that
0 0
qll0 = All0 Alk qkl = lk kl = ll ,
such that ({ei }) = ({ei0 }). is obviously a scalar. The argument in the
0
matrix language should be that trace((A, )1 Q A, ) = trace(Q, ) for every
invertible n n-matrix A, .
Consider a covariant 2-tensor . Write
({ei }) = [qkl ] = Q.
Example(s): 2.9.3
Given are the tensors qij , qij , qij . Is the change of indices a tensor operation or
in matrix language: "Is the transposition of a matrix a tensor operation?" For
matrices with mixed indices this is not the case, but for matrices with lower or
upper indices this is a tensor operation. The explanation will follow in matrix
language.
0 0
? Mixed indices: Notate Q = [qij ] and Q, = [qij0 ]. Then holds
0 0
Q, = (A, )1 Q A, . Hereby follows that (Q, )T = (A, )T QT (A, )T . There
0
follows that (Q, )T 6= (A, )1 QT A, , in general, so the transposition of a
matrix with mixed components is not a tensor operation.
? Lower indices: Notate Q = [qij ] and Q,, = [qi0 j0 ]. Then holds
Q,, = (A, )T Q A, . Hereby follows that (Q,, )T = (A, )T QT A, . This is a tensor
operation!
? Upper indices: Analogous as in the case of the lower indices.
Given are the tensors qij , qij , qij . Is the calculation of a determinant of a matrix
is a tensor operation? To matrices with mixed indices it is a tensor operation,
but to matrices with lower or upper indices it is not. This means that the
calculation of a determinant of a matrix with mixed indices defines a scalar.
0
Because det(Q, ) = det((A, )1 Q A, ) = det(Q), but det(Q,, ) 6= det((A, )T Q A, ),
in general.
Example(s): 2.9.4
Let n = 3. Given are the contravariant 1-tensors xi and y j . Calculate the cross
product z1 = x2 y3 x3 y2 , z2 = x3 y1 x1 y3 and z3 = x1 y2 x2 y1 . This is
not a tensor operation, so zk is not a contravariant 1-tensor. In other words: zk
is not a vector. To see that there is made use of the following calculation rule
Example(s): 2.9.5
Starting Point(s):
Comment(s): 2.10.1
Definition 2.10.2 Let Tk (V)( = Tk0 (V)). The tensor is called symmetric
if for every number of k vectors v1 , , vk V and for every Sk holds that
(v1 , , vk ) = (v(1) , , v(k) ). The tensor is called antisymmetric if for every
number of k vectors v1 , , vk V and for every Sk holds that (v1 , , vk ) =
sgn() (v(1) , , v(k) ).
56
Comment(s): 2.10.2
If k > n, then is every antisymmetric tensor equal to the tensor which adds 0
tot every elemenst of its domain.
The change of an arbitrary pair of "input"-vectors has no influence to a sym-
metric tensor, it gives a factor 1 to an antisymmetric tensor.
The sets of the symmetric and the antisymmetric k-tensors are subspaces of
Tk (V).
Notation(s):
Notice(s): 2.10.1
b g = b
f b g b
f, b f = 0 and b
f b f + b
g bg =b g for all R.
f b
e i b
For every basis {ei } in V the set {b e j | 1 i < j n} is linear independent.
57
Notice(s): 2.10.2
e j1 b
{b e jk | 1 j1 < j2 < < jk n}
Vk
a basis of (V).
Clarification(s): 2.10.1
Consequence(s):
!
Vk n
The dimension of (V) is equal to .
k
58
Comment(s): 2.10.3
In this definition is made use of the operator perm, which is called permanent.
Perm adds a number to a matrix. The calculation is almost the same as the
calculation of a determinant, the only difference is that there stays a plus sign
for every form instead of alternately a plus or minus sign.
Notice(s): 2.10.3
fb
b g =b gb f.
fb
b g = 0 b f = 0 and/or b
g = 0.
bf + b
g b g =b g + b
f b g bg =b g + b
fb gbg for all R.
e ib
For every basis {ei } of V is the set {b e j | 1 i j n} linear independent.
Notice(s): 2.10.4
The order in b f1 b
fk is not of importance, another order gives the same sym-
metric k-tensor.
fk = 0 if and only if there exists an index j such that b
f1 b
b fk = 0.
e j1 b
{b e jk | 1 j1 jk n}
Wk
a basis of (V).
Clarification(s): 2.10.2
Consequence(s):
n+k1
!
Wk
The dimension of (V) is equal to .
k
Example(s): 2.10.1
Notice(s): 2.10.5
i
e ik = k1!b e i1 b e ik .
A b e 1 b
i
e ik = k1!b e i1 b
e ik .
S b e 1 b
A 2-tensor can always be written as the sum of a symmetrical and an antisym-
metrical 2-tensor. Consider the covariant components of a 2-tensor on V,
i j = 12 (ij + ji ) + 12 (ij ji ).
n+k1
! !
n
For k > 2 then + < nk , such that the Vector Spaces
k k
Wk Vk
(V) and (V) together dont span the space Tk (V).
Vk Vl Vk+l
Definition 2.10.8 If (V) and (V) then the tensor (V) is
defined by
(k + l) !
= A( ).
k!l!
Comment(s): 2.10.4
Vk Vl Vm Vm
Theorem 2.10.1 For (V), (V), (V) and (V) holds that
= (1)(kl)
( ) = ( )
( + ) = + .
Clarification(s): 2.10.3
The proof of the given theorem is omitted. The proof is a no small accounting
and combinatorial issue, which can be found in
(Abraham et al., 2001) ,Manifolds, , page 387.
The practical calculations with the wedge product are done following obvi-
ous rules. For instance if k = 2, l = 1 and = buwb + b x, = b
v b x + b
z
, dan geldt = buw b b x + buw b bz + b
v b
x b
z.
62
Example(s): 2.10.2
! !
1 0
Consider R2 with basis {e1 , e2 }, given by e1 = and e1 = . The associ-
0 1
ated dual basis is given by {be 1 ,b
e 2 }. The following notations are here employed
e1 = , e2 = and be 1 = dx,b e 2 = dy.
x y
The Vector Space 1 (R2 ) = (R2 ) is 2-dimensional and a basis of this space
V
b
b = (1 dx + 2 dy) (1 dx + 2 dy)
= 1 2 dx dy + 2 1 dy dx = (1 2 2 1 )dx dy.
! !
a1 b1
Let a = ,b = R2 . The numbers a1 , a2 , b1 and b2 are the con-
a2 b2
travariant components of a and b with respect tot the basis { , }. There
x y
holds that
(dx dy)(a, b) = < dx, a >< dy, b > < dx, b >< dy, a > = 1 2 2 1 .
This number is the oriented surface of the parallelogram spanned by the vec-
tors a and b.
The Vector Space 2 (R2 ) is 1-dimensional and a basis is given by {dx dy}.
V
63
Example(s): 2.10.3
Consider R3 with basis {e1 , e2 , e3 } given by e1 = = (1, 0, 0)T , e2 = =
x y
(0, 1, 0)T and e3 = = (0, 0, 1)T . The corresponding dual basis is notated by
z
{dx, dy, dz}.
The basis of the 3-dimensional Vector Space 1 (R3 ) is {dx, dy, dz}. The basis
V
of the 3-dimensional Vector Space 2 (R3 ) is {dx dy, dx dz, dy dz}, and
V
(dy dz)(a, b) = a2 b3 b2 a3 .
This number is the oriented surface of the projection on the y, z-plane of the
parallelogram spanned by the vectors a and b. In addition holds that
(dx dy dz)(a, b, c) =
(dx dy dz)(a, b, c) + (dy dz dx)(a, b, c) +
(dz dx dy)(a, b, c) (dy dx dz)(a, b, c)
(dx dz dy)(a, b, c) (dz dy dx)(a, b, c) =
a1 b2 c3 + a2 b3 c1 + a3 b1 c2 a2 b1 c3 a1 b3 c2 a3 b2 c1 .
This number is the oriented volume of the parallelepiped spanned by the vec-
tors a, b and c.
64
Example(s): 2.10.4
Consider R4 with basis {e0 , e1 , e2 , e3 } given by e0 = = (1, 0, 0, 0)T , e1 =
t
= (0, 1, 0, 0)T , e2 = = (0, 0, 1, 0)T and e3 = = (0, 0, 0, 1)T . The
x y z
corresponding dual basis is notated by {dt, dx, dy, dz}.
The basis of the 4-dimensional Vector Space 1 (R4 ) is {dt, dx, dy, dz}.
V
= 01 23 dt dx dy dz.
(dt dz)(a, b) = a0 b3 b0 a3 .
This number is the oriented surface of the projection on the t, z-plane of the
parallelogram spanned by a and b.
0
b0 c0
a
(dt dy dz)(a, b, c) = det a2 b2 c2
3
a b3 c3
Comment(s): 2.10.5
Starting Point(s):
66
Comment(s): 2.11.1
e i , a j >,
because of the representation given in Clarification 2.4, where aij = < b
for i = 1, , n, and j = 1, , (n k), and xij = < b e i , x j >, for i = 1, , n, and
j = 1, , k. Developping this determinant to the first (n k) columns, then
becomes clear that ( a1 a(nk) )(x1 , , xk ) is writable as a linear
combination of nk k k determinants
i1 i
x1 xk1
. ..
det .
. . ,
i
x k xik
1 k
Example(s): 2.11.1
Consider R3 with basis {e1 , e2 , e3 }, given by e1 = (1, 0, 0)T , e2 = (0, 1, 0)T and
e3 = (0, 0, 1)T . Define the volume on V by = b e1 b e2 b e 3 . Then holds
that (e1 , e2 , e3 ) = 1.
Let a, b R3 then b 1 (R3 ) and there holds that
V
a
1
a b1 x1
such that
a e 1 + (a3 b1 a1 b3 )b
b = (a2 b3 a3 b2 )b e 2 + (a3 b1 a1 b3 )b
e 3.
V2
In addition a (R3 ) and there holds that
1
a x1 y1
a)(x, y) = det a x y
2 2 2
(
3 3 3
a x y
2 2
3 3
1
y1
x y x y x
= a1 det + a2 det + a3 det ,
x3 y3
1
x y1
2 2
x y
or
e2 b
a = a1 b e 3 + a2 b
e3 b
e 1 + a3 b
e1 b
e 2.
Notice(s): 2.11.1
If for the basis {ei } of V holds that (e1 , , en ) = 1, then holds that
=b e n.
e1 b
Moreover holds for every k {1, , (n 1)},
e1 e n.
e (k+1) b
ek = b
The j1 , , j(nk) are the indices which are left over and is the amount of
permutations to get the indices i1 , , ik , j1 , , j(nk) in their natural order
1, 2, , n.
68
Starting Point(s):
Clarification(s): 2.12.1
The startingpoint of an inner product means that the inner product is sym-
metric, see Def. 2.5.1 i. In this paragraph it is of importance.
Comment(s): 2.12.1
With the help of the inner product there can be made a bijection between V and
V . This bijection is notated by G, see Theorem 2.5.1. For every a, b, x, y V
there holds that
!
(a, x) (a, y)
(Ga Gb)(x, y) = b a b (x, y) = det
b
(b, x) (b, y)
(a1 , x1 ) (a1 , xk )
. .
det ..
.. .
(ak , x1 ) (ak , xk )
n n
Because
V of the
fact that
V k = , there holds that
nk
k (nk)
dim (V) = dim (V) . Through the choice of the inner product and
the volume it is apparently possible to define an isomorphism between
Vk
(V) and (nk) (V).
V
69
a k) =
a1 b
0 < j n : (b a1 ak , followed by linear expansion.
Example(s): 2.12.1
Consider R3 and the normal inner product and volume then holds that
1
e b
b e2 = e1 e2 = be 3.
Notice(s): 2.12.1
with r is the number of negative values in {(ei1 , ei1 ), , (eik , eik )}, see 2.2 and
is the amount of permutations to get the indices i1 , , ik , j1 , , j(nk) in their
natural order 1, 2, , n.
70
Example(s): 2.12.2
(X, Y) = x1 y1 + x2 y2 .
Let { x , y } be the standard basis of R2 and notate the corresponding dual
basis by {dx, dy}. The same notation as used in Example 2.10.2. Notice that
the standard basis is orthonormal. Define the oriented volume on V by =
dx dy and notice that ( x , y ) = 1. The isomorphism G is given by
G = dx dx + dy dy.
V0
Let (R2 ) then holds that
V2
= (1) = = dx dy (R2 ).
V1
Let (R2 ) then holds that
V1
= (1 dx + 2 dy) = 1 dx + 2 dy = 1 dy 2 dx = (R2 ).
V0
= (12 d dy) = 12 (d dy) = 12 (R2 ).
71
Example(s): 2.12.3
Consider R3 . The used notations are the same as in Example 2.10.3. Define
the inner product by
G = dx dx + dy dy + dz dz
dx = dy dz (dx dy) = dz
1 = dx dy dz dy = dx dz (dx dz) = dy (dx dy dz) = 1
dz = dx dy (dy dz) = dx
( ) = (2 3 3 2 ) dx + (3 1 1 3 ) dy + (1 2 2 1 ) dz.
G = dt dt dx dx dy dy dz dz
(dt dx) = dy dz
dt = dx dy dz (dt dy) = dx dz
1 = dt dx dy dz dx = dt dy dz (dt dz) = dx dy
dy = dt dx dz (dx dy) = dt dz
dz = dt dx dy (dx dz) = dt dy
(dy dz) = dt dx
(dt dx dy) = dz
(dt dx dz) = dy (dt dx dy dz) = 1
(dt dy dz) = dx
(dx dy dz) = dt
Note that the inner product has signature (+, , , ) or (, +, +, +). Which signa-
ture is used, is a matter of convention. But today the signature (+, , , ) is very
often used, becomes standard.
72
2. Let V be a symplectic vector space. Prove that the dimension of V is even and that
axiom (ii) of the inner product can never be satisfied.
Notice(s): 2.14.1
Identification of V with V .
Let the linear transformation : V 7 V be defined by ((x))() = (x)
then (x) (V ) = V . If ((x))() = 0 for every V then (x) = 0 for
every V and there follows that x = 0. So is injective, together with
dimV = dimV = dimV = n < gives that is a bijective map between V
and V .
Nowhere is used a "structure". Nowhere are used coordinates or something
like an inner product. is called a canonical or natural isomorphism.
Identification of V with V .
The sets V and V contain completely different objects.
There is needed some "structure" to identify V with V .
Example(s): 2.14.1
First of all < , >: V V R, the Kronecker tensor, see Section 2.3:
<b
u, x > = b
u(x).
r
The s -tensors, see Section 2.8.10:
a b d b
p b
q b v, w
u (b b , ,b
z, f, g, , k) =
| {z } | {z }
r covectors s vectors
<b
v, a > < b
z, d >< b
p, f > < b
u, k > Tsr (V),
The k-tensor, see Section 2.10.4, can be seen as a construction where the rs -tensors
are used,
< b f 1 , x 1 > < f
b1 , x k >
V
b b
f1 fk (x1 , , xk ) = det
.
.. .
.. k (V),
< fk , x1 > < fk , xk >
b
b
f1 , ,b
with b fk V .
This tensor is antisymmetric.
Another k-tensor, see Section 2.10.6 can also be seen as a construction where the
r
s -tensors are used,
< b f 1 , x 1 > < f
b1 , x k >
W
f1 fk (x1 , , xk ) = perm
.
. .
. k (V),
. .
b b
< fk , x1 > < fk , xk >
b
b
with bf1 , ,b
fk V . For the calculation of perm, see Comment 2.10.3. Another
notation for this tensor is b fk ( = b
f1 b f1 b
fk ).
This tensor is symmetric.
76
with aij = < b e i , a j >, for i = 1, , n, and j = 1, , (n k), and xij = < b
ei , x j >, for
i = 1, , n, and j = 1, , k, see Comment 2.11.1. This tensor is a linear combina-
tion of k-tensors and antisymmetric.
a k) =
a1 b
0 < j n : (b a1 ak , followed by linear expansion,
M
M
V V R V V R
(v, w) 7 v0 M w ( f, w) 7 f M w
M V V M V V
M = Ms t s t M = Mst bs t
Mst = gs u Mu t
G1 M
M
V V R M
V V R
(v, f ) 7 v0 M f 0 ( f, g) 7 f M g0
j
Mi = M(bi , j ) Mij = M(i , j )
M V V M V V
M = Mst s bt M = Ms t bs bt
Mst = Ms u gu t Ms t = gs u Mu v gv t
M G1 G1 M G1
78
Ei = (0, , 0, 1, 0, , 0)T ,
The map f is also called a chart map. The inverse map f : U is called a
parametrization of . The variables x j are functions of the variables ui . If there is
notated f = (g1 , , gn ) then holds that x j = g j (ui ). The chart map and the param-
etrization are often not to describe by simple functions. The inverse function theorem
tells that f is differentiable in every point of U and also that
fi n g j
(x , , x )
1
(u1 , , un ) = ik ,
x j uk
fi gl
" # " #
with ul = f l (x1 , , xn ). In corresponding points the matrices
and are the
x j uk
inverse of each other. The curves, which are described by the equations f i (x1 , , xn ) =
C, with C a constant, are called curvilinear coordinates belonging to the coordinate
curves.
79
Example(s): 3.1.1
With some effort, the corresponding chart map is to calculate. There holds
that
q
r(x, y) = x2 + y2 ,
x
y 0, x 6= 0
arccos
x2 +y2
(x, y) =
2 arccos 2 2
x
y 0, x 6= 0.
x +y
The subject of study in this chapter is tensor fields on Rn . Intuitive it means that at
every point X of Rn ( or of some open subset of it) there is added a tensor out of some
Tensor Space, belonging to that point X. The "starting" Vector Space, which is added
to every point X, is a copy of Rn . To distinguish all these copies of Rn , which belong to
the point X, they are notated by TX (Rn ). Such a copy is called the Tangent Space in X.
Let be an open subset of Rn . All these Tangent Spaces, which belong to the points
X , are joined together to the so-called tangent bundle of :
[
T() = TX (Rn ) = Rn = {(X, x) | X , x Rn }.
X
The origins of all the Tangent Spaces TX (Rn ), with X , form together against the
open subset .
X
Let xk = gk (u1 , , un ) be a parametrization of . The vectors ci = ui are tangent to
the coordinate curves in X. So there is formed, on a natural way, a with the curvilinear
80
The kernel index notation is used just as in the foregoing chapter. In stead of
0 0 0 0
ui = ui (x1 , , xn ) is written xi = xi (x1 , , xn ) = xi (xi ) and analogous xi = xi (xi ).
" 0 #
xi k
The matrix (x ) is invertible in every point X, because the determinant is sup-
xi
posed to be"not equal# to zero. Before is already noticed that the inverse in the point X
xi k0 0 0 0 0 0
is given by (x ) with xk = xk (xk ). Differentiation of xi = xi (xi ) to x j leads to
xi
0
0
0 xi k xi k0 k
ij0 = (x ) i0 (x (x )).
xi x
X
The basis of TX (Rn ), associate with the coordinates xi is notated by { } or shorter with
xi
0
{ }. If there is a transition to other coordinates xi there holds that
x i
xi
= ,
xi xi xi
0 0
such that the transition matrix of the basis { i } to the basis { i0 } is equal to the matrix
x x
" #
x i
. Consequently the transition matrix of the basis { i0 } to the basis { i } is given
xi
0
x x
" 0 #
xi
by the matrix .
xi
In Chapter 2 is still spoken about a general Vector Space V. In the remaining lecture
notes the Tangent Space TX (Rn ) plays at every point X Rn the rule of this general
Vector Space V. At every point X Rn is added besides the Tangent Space TX (Rn ) also
the Cotangent Space TX (Rn ) = (T (Rn )) and more general the Vector Space T r (Rn )
X Xs
of the tensors which are covariant of the order s and contravariant of the order r. But
it also possible to add subspaces such as the spaces of the symmetric or antisymmetric
tensors, notated by X (Rn ), respectively X (Rn ) at X.
W V
To every basis of TX (Rn ) belongs also a dual basis of TX (Rn ). This dual basis, associ-
at the end of Section 2.6 . The result agrees with the folklore of the
infinitesimal calculus!
There belongs to a vector field a on Rn , n functions ai on Rn , such that a(X) = ai (xk ) .
xi
0 0 0 0
In other (curvilinear) coordinates xi is written a(xk (xk )) = ai (xk ) 0 and there holds
xi
that
0
i0 xi k k0 i k k0
k0
a (x ) = (x (x )) a (x (x )),
xi
0
0 xi i
which is briefly written as ai = a.
xi
xi
which is briefly written as i0 = 0 i .
xi
r i i
There belongs to a s -tensor field on Rn , n(r+s) functions j1 jr , such that
1 s
i i
(X) = j1 jr (xk ) dx j1 dx js .
1 s xi1 xir
0
In other (curvilinear) coordinates xi is written
0 i0 i0 0 0 0
(xk (xk )) = j10 jr0 (xk ) i0 i0 dx j1 dx js ,
1 s
x 1 x r
and there holds that
0 0
i01 i0r k0 xi1 k k0 xir k k0 x j1 k0 x js k0 i1 ir k k0
j0 j0 (x ) = (x (x )) (x (x )) 0 (x ) 0 (x ) j j (x (x )),
1 s xi1 xir x j1 x js 1 s
A 0-form is a scalar field and an 1-form is a covector field. In fact every k-form is a 0k -
tensor field, see Definition 3.2.4. This class of tensor vector fields is important. That is
reason there is paid extra attention to these tensor fields.
To a k-form on Rn belong nk functions i1 ik , for 1 i1 ik n, on Rn such that
X
(X) = i1 ik (xl ) dxi1 dxik . (3.1)
1i1 << ik n
0
Lemma 3.2.1 If in other (curvilinear) coordinates xi is written
0 i0
0
X 0
(xl (xl )) = i0 i0 (xl ) dxi1 dx k , (3.2)
1 k
1i0 << i0 n
1 k
with
i i (xi1 , , xik )
J i10 ik0 = 0 i0
.
1 k (xi1 , , x k )
83
The terms in the summation of the right part of 3.4 are not equal to zero if the indices
j0p for p = 1, , k are not equal. Choose a fixed, ordered collection of indices i01 , , i0k
with 1 i01 < < i0k n. Choose now the terms in the summation of the right side of
3.4 such that the unordered collection j01 , , j0k is exactly the collection i01 , , i0k . Note
that there are k! possibilities. To every unordered collection j01 , , j0k there is exactly
one Sk such that j0p = i0(p) , for p = 1, , k. Out all of this follows
X X xi1 xik i0 i0
dxi1 dxik = i0
j0
dx (1) dx (k) .
1i0 << i0 n Sk x (1) x (k)
1 k
i0 i0 0 i0
To put the term dx (1) dx (k) into the order dxi1 dx k has to be corrected
with a factor sgn(), the factor obtained by the order of the permutation. So
X X x i1 x ik
0 i0
dxi1 dxik = sgn() i0 j0 dxi1 dx k .
x (1) x (k)
1i0 << i0 n Sk
1 k
i i
In the term between the brackets we recognize the determinant J i10 ik0 , such that
1 k
0 i0
X
i i
dxi1 dxik = J i10 ik0 dxi1 dx . k
1 k
1i0 << i0 n
1 k
With all of this follows that the representation in 3.1 can be written as
0 i0
X X 0
i i
(X) = i1 ik (xl ) J i10 ik0 (xl ) dxi1 dx k =
1 k
1i1 << ik n 1i0 << i0 n
1 k
0 i0
X X 0
i i
J i10 ik0 (xl ) i1 ik (X) dxi1 dx k .
1 k
1i0 << i0 n 1i1 << ik n
1 k
Compare this with 3.2 and immediately follows the relation 3.3.
All the given definitions of the tensor fields, in this section, are such that the tensor
fields are defined as maps on Rn . Often are tensor fields not defined on the whole Rn
but just on an open subset of it. The same calculation rules remain valid of course.
We assume that the elements of Fsr are smooth enough. The components of an element
i i
F Fsr we note by F j1 jr with 1 ik n, 1 jl n, 1 k r and 1 l s. For
1 s
coordinate systems we use both the notation f as {xi }.
This means that if, for some curvilinear coordinate system on , a nr+s number of func-
tions on f () are given, that there exists just one rs -tensor field on .
It has to be clear that the components of a tensor field out of definition 3.2.4 are the
same as the components of a tensor field out of definition 3.3.1, both with respect to the
same curvilinear coordinate system.
The alternative definition is important, because one wants to do algebraic and analyti-
cal operations, for instance differentation, without to be linked to a fixed chosen coor-
dinate system. If after these calculations a set of functons is obtained, it is the question
if these functions are the components of a tensor field. That is the case if they satisfy
the transformation rules. Sometimes they are already satisfied if there is satisfied to
these transformation rules inside a fixed chosen class ( a preferred class) of curvilin-
ear coordinate systems. An example of such a preferred class is the class of the affine
coordinate transformations. This class is described by
0 0 0
xi = bi + Lii xi , (3.5)
h 0i h 0i
with bi Rn and Lii Rnn invertible. Coordinates which according 3.5 are associ-
ated with the cartesian coordinates are called affine coordinates. Even more important
are certain subgroups of it:
h 0i
i. Lii orthogonal: Euclidean invariance, the "Principle of Objectivity in the
continuum
h 0i mechanics.
ii. i
L Lorentz: Lorentz invariance in the special theory of relativity.
h i0 i
iii. Lii symplectic: Linear canonical transformations in the classical mechanics.
If inside the preferred class the transformations are valid, than are the components of
the obtained tensor field outside the preferred class. An explicit formula is most of the
time not given or difficult to obtain.
All the treatments done in the previous chapter can also be done with the tensor fields.
They can be done pointswise for every X on the spaces TXsr (Rn ).
85
In section 2.3 we introduced the Kronecker tensor. This is the tensor which adds to
every basis the identity matrix with mixed indices. Now we define the Kronecker tensor
field as the 11 -tensor field that adds to every X Rn the Kronecker tensor in TX1 (Rn ).
1
Because of the fact that
0 0 0
0 0 xi k0 xi k k0 xi k0 xi k k0 x j k0 i k k0
ij0 (xk )
= (x ) = (x (x )) (x ) = (x (x )) j0 (x ) j (x (x ))
xi xi
0 0
x j x j x
Let (, ) be a symmetric inner product on Rn . Let v, w TX (Rn ) and define the inner
product (, )X by
(v, w)X = ij vi w j .
Here are vi and w j the components with respect to the elementary basis of TX (Rn ).
These are found on a natural way with the help of the cartesian coordinates. Let GX be
(Rn ), which belongs to the inner product (, ) . This
the isomorphism of TX (Rn ) to TX X
isomorphism is introduced in Theorem 2.5.1 and is there defined by
v, with < b
GX : v 7 b v, y >X = (v, y)X .
0
Definition 3.4.1 The fundamental tensor field g is the 2 -tensor field on Rn de-
fined by
p
There is used that dxp ( ) = s , see also Definition 2.8.12.
x s
1
Comment(s): 3.4.1 The length of the vectors and hi dxi (not summate!) are
hi xi
equal to one, because
s s
!
1 1 1 1
hi xi = , = gii = 1 (not summate!)
hi xi hi xi X h2i
and
q q
i
hi dx = (hi dx , hi dx )X = h2i gii = 1 (not summate!).
i i
1
( )
n o
The bases and h dx i are orthonormal bases to the corresponding
i
hi xi
tangent space and its dual.
Let {xi } be the cartesian coordinates on Rn and look to the differential form (n-form)
dx1 dxn , see Definition 3.2.5. The element X Rn is fixed and v1 , , vn TX (Rn ).
The number
(dx1 dxn )(v1 , , vn )
(x1 , , xn ) 10 n0
dx1 dxn = dx dx
(x1 , , xn )
0 0
0 0
such that in general (dx1 dxn )(v1 , , vn ) will give another volume than
(dx1 dxn )(v1 , , vn ). If we restrict ourselves to affine coordinate transformations
0 0 0 i (x1 , , xn )
x = b + Li x with det L = 1, than holds
i i i = 1. In such a case, the
(x1 , , xn )
0 0
so
0 0 (x1 , , xn )
0 (xk ) = (xk (xk )) 0 .
(x1 , , xn )
0
Notate the cartesian coordinates on R2 by x and y, and the polar coordinates by r and
. The cartesian coordinates depend on the polar coordinates by x = r cos and y =
r sin . There holds
x
x x
p y
r cos r sin x + y
!
2 2
= = ,
y y
sin r cos
y
p x
x + y
r 2 2
With the use of these transition matrices we find the following relations between the
bases and dual bases
88
x y sin
= p + p = cos
r x2 + y2 x x2 + y2 y x r
r
cos
y = sin r + r
= y +x
x y
x y
dr = p dx + p dy
dx = cos dr r sin d
x2 + y2 x2 + y2
(
y x dy = sin dr + r cos d
d = x2 + y2 dx + x2 + y2 dy
With the help of these relations are tensor fields, given in cartesian coordinates, to
rewrite in other coordinates, for instance polar coordinates. In polar coordinates is the
vectorfield
x y
+ 2
x + y x
2 2 x + y y
2
1
given by , the 2-form (x2 + y2 ) dx dy by r3 dr d and the volume form dx dy
r r
by r dr . The fundamental tensor field which belongs to the natural inner product
on R2 can be described in polar coordinates by
dx dx + dy dy = dr dr + r2 d d. (3.7)
where a and b, with a < b, are the radii of the inside and outside wall of the tube.
Further is some material constant.
x x x
cos sin cos cos sin sin
y y y
= sin sin sin cos cos sin
cos sin 0
z z z
x xz
p p y
x2 + y2 + z2 x2 + y2
y yz
x
= p 2 p .
x + y + z 2 2 x + y
2 2
z
q
p x + y
2 2 0
x2 + y2 + z2
cos sin sin sin cos
x y z
cos cos sin cos sin
=
x y z
sin cos
0
sin sin
x y z
x y z
p p p
x2 + y2 + z2 x2 + y2 + z2 x + y + z
2 2 2
p
x + y
2 2
xz yz
= .
2
(x2 + y2 + z2 ) px2 + y2 (x + y2 + z2 )
p
(x2 + y2 + z2 ) x2 + y2
y x
2 0
(x + y2 ) (x + y2 )
2
With the help of these two transition matrices the relations between bases and dual
bases can be shown. Tensor Fields expressed in cartesian coordinates can be rewritten in
spherical coordinates. So is the volume dx dy dz rewritten in spherical coordinates
equal to 2 sin d d d. The electrical field due to a point charge in the originis
given by cartesian coordinates by
3
!
2
(x + y + z )
2 2 2
x + y +z
x y z
90
1
and in spherical coordinates by the simple formula . Further transforms the fun-
2
damental tensor field, corresponding to the natural inner product on R3 , as follows
dx dx + dy dy + dz dz = d d + 2 d d + 2 sin2 d d.
The state of stress of a hollw ball under an internal pressure p is given by the contravari-
ant 2-tensor field
a3 p b3 b3 1 b3
! ! ! !
1
T = 3 1 3 + 1+ + 1+ ,
b a3 2 3 2 2 3 2 sin2
where a and b, with a < b, are the radii of the inside and outside wall of the ball.
Let f be a scalar field on Rn and let {xi } be a curvilinear coordinate system on Rn . Let
0
{xi } be another curvilinear coordinate system on Rn . Since
f xi f
=
xi xi xi
0 0
f
are the functions i f = the components of a covariant tensor field.
xi
Definition 3.6.1 The covariant tensor field d f = i f dxi is called the gradient
field of the scalar field f .
Let a be a vector field and let ai be the components of this vector field with respect to
the curvilinear coordinates xi . The functions ai j f form the components of a 11 tensor
field.
La f = < d f, a > = ai i f.
If there is defined an inner product on Rn , than there can be formed out of the gradient
field, the contravariant vectorfield
91
G1 d f = gki i f .
xk
Confusingly enough G1 d f is often called the gradient of f. If {xi } is an orthogonal
curvilinear coordinate system than we can write
1 f 1 1 f 1
G1 d f = + + .
h1 x h1 x
1 1 hn xn hn xn
(Lv w) j = wi i v j vi i w j .
0
Let {xi } be some other coordinate system, than holds
0 0 0 0 0
(Lv w) j = wi i v j vi i0 w j
0 0
i0 i k j j i0 i k j
= Ai w Ai0 k A j v Ai v Ai0 k A j w j
0 0
j j j
= w k A j v v k A j w j
k k
k j j0 j0 j k j j0 j0 j
= w v k A j + A j k v v w k A j + A j k w
j0 j0
= A j wk k v j vk k w j + wk v j vk w j k A j
j0 j0 j0
= A j (Lv w) j + w j vk j Ak k A j
j0
= A j (Lv w) j .
It seems that the functions (Lv w) j are the components of a contravariant vector field.
The vector field Lv w is called the Lie product of v and w. With this product the space of
vector fields forms a Lie algebra. For a nice geometrical interpretation of the Lie product
we refer to (Abraham et al., 2001) ,Manifolds, or (Misner et al., 1973) ,Gravitation.
( )
i
Definition 3.6.4 The n3 function defined by
j k
( )
i
j k X = i X,
j k
0
Let {xi } be another curvilinear coordinate system on Rn , than holds
0
i
0
= < dxi , j0 k0 X >
0 0
j k
0 j
= < Aii dxi , A j0 j Akk0 k X >
0 j j
= Aii < dxi , A j0 Akk0 j k X + A j0 j Akk0 k X >
0 j 0 j
= Aii A j0 Akk0 < dxi , j k X > + Aii A j0 j Akk0 < dxi , k X >
i0 j
k
i
i0 j
k
= Ai A j0 Ak0 + ik ,
A A A
i j0 j k 0
j k
i g jk + j gki k gij = i j X, k X + j X, i k X +
j k X, i X + k X, j i X +
k i X, j X i X, k j X
= 2 k X, i j X .
1
Multiply the obtained identity by gmk and then it turns out that
2
( )
m 1
= gmk i g jk + j gki k gij . (3.9)
i j 2
So we find that
2 2 1 1 2
= =
r and
2 r2 r
1 2
2 1
1
= r.
2 2
All the other Christoffel symbols, which belong to the polar coordinates, are equal to
zero.
Let a be a vector field on Rn and let {xi } be a curvilinear coordinate system on Rn . Write
0
a = ai i . Let there also be a second curvilinear coordinate system {xi } on Rn , then
x
holds
0 j
0 j 0 0
j0 ai = A j0 j Aii ai = A j0 Aii j ai + ai j0 Aii . (3.10)
The second term in Formula 3.10 is in general not equal to zero, so the functions j ai
are not the components of a 11 -tensor field.
1
Lemma 3.6.1 The functions 5 j ai form the components of a 1 -tensor field.
Proof Because of the transformation rule of the Christoffel symbols, see Formula 3.8,
and of Formula 3.10 holds
0
i
0 0
0
5 j0 ai = j0 ai +
k
0 0
a
j k
0 j
0
0 j i 0
0
= Aii A j0 j ai + j0 Aii ai + Aii A j0 Akk0 + Aii j0 Aik0 Akl al .
j k
0 j
i 0
+ 0 Ai0 ai + Ai0 0 Ai Ak0 al
= Aii A j0 j ai + Akk0 k l
A a
j i i j k0
l l
j k
0 j
i
k i0
i0 k0
= Aii A j0 j ai + + i
+ i
al
a A a A A A
0
j i j0
j k
i l k0
such that
95
0 0 j 0 0 0
5 j0 ai = Aii A j0 5 j ai + j0 Aii ai Aii Aik0 j0 Akl al
0 j 0 0 0
= Aii A j0 5 j ai + j0 Aii ai ik0 j0 Akl al
0 j
= Aii A j0 5 j ai .
Definition 3.6.6 The covariant derivative of a vector field a, notation 5a, is given
by the 11 -tensor field
5a = 5 j ai dx j ,
xi
where the components 5 j ai are given by Formula 3.11.
Let be a covector field on Rn . It is easy to see that the functions j i are not the com-
ponents of a 02 -tensor field. For covector fields we introduce therefore also a covariant
derivative.
5 = 5 j i dx j dxi ,
With the help of the covariant derivative of a vector field, there can be given a definition
of the divergence of a vector field.
Definition 3.6.8 The divergence of a vector field a is given by the scalar field 5i ai .
The functions ai are the components of a with respect to some arbitrary curvilinear
coordinate system.
96
Notice(s): 3.6.2 Because of the fact that the calculation of a covariant derivative
is a tensorial operation, it does not matter with respect of what coordinate system
the functions ai are calculated and subsequent to calculate the divergence of a. The
0
fact is that 5i0 ai = 5i ai .
With the help of covariant derivative, the gradient field and the fundamental tensor
field there can be given a definition of the Laplace operator
Definition 3.6.9 Let be a scalar field, the Laplace operator, notation 4, is de-
fined by
4 = 5 G1 d = 5i gij j .
Notice(s): 3.6.3 Again the observation that because of the tensorial actions of
the various operations, it does not matter what coordinate system is chosen for the
calculations.
Later on we come back to the classical vector operations grad, div, rot and 4. They are
looked from some other point of view.
A differential form of order k is a tensor field that adds to every point X Rn a anti-
symmetric covariant k-tensor in kX (Rn ). A differential form of order k is also called a
V
k-form or a antisymmetric k-tensor field, see Definition 3.2.5. There are (n + 1) types
of non-trival k-forms. These are the 0-forms ( the scalarfields), 1-forms ( the covector-
fields), . . . , n-forms. In Section 2.10 is already commented that antisymmetric k-tensors,
with k > n, are not interesting types, because they add 0 to every point. !
n
To an arbitrary curvilinear coordinate system {x } and a k-form , belong
i functions
k
i i , 1 i1 < < ik n, such that for every X Rn holds
1 k
X
(X) = i i (X) dxi1 dxik . (3.13)
1 k
1 i1 < < ik n
coordinates.
In this section we define a differentiation operator d, such that k-forms become
(k + 1)-forms, while n-forms become zero.
then holds
r+1
( f1 , , fr )
X !
l
(1) = 0. (3.14)
l=1
xl (x1 , , xl1 , xl+1 , , xr+1 )
Proof We give a sketch of the proof. Call F = ( f1 , , fr )T . The l-th sommand of the
summation in the left part of Formula 3.14 is than, on a factor 1, to write as
F F F F
!
det , , l1 , l+1 , , r+1 =
xl x1 x x x
2 F F F F
l 1 , , l1 , l+1 , , r+1 +
x x x x x
F 2 F F F F F 2 F F
+ 1 , , l1 l , l+1 , , r+1 + 1 , , l1 , l l+1 , , r+1 +
x x x x x x x x x x
F F F 2 F
+ 1 , , l1 , l+1 , , l r+1 .
x x x x x
Note that a summand, where r is equal to one of the i j s, is equal to zero. The sum
formed by the terms, where r is not equal to one of the i j s, is obviously to write as
X
d = (d) j j dx j1 dx jk+1
1 k+1
1 j1 < < jk+1 n
Note that the exterior derivative of a n-forms is indeed 0. At this moment, there is still
the question if d0 , this is the exterior derivative of with respect to the coordinates
0
{xi }, is the same as d.
Example(s): 3.6.1
Example(s): 3.6.2
1 1 1
!
d = dx + dy + dz dx +
x y z
2 2 2 3 3 3
! !
dx + dy + dz dy + dx + dy + dz dz =
x y z x y z
2 1 3 1 3 2
! ! !
dx dy + dx dz + dy dz
x y x z y z
12 13 23
d = dz dx dy + dy dx dz + dx dy dz
z y x
23 13 12
!
= + dx dy dz
x y z
0
Proof Let {xi } and {xi } be two coordinate systems. We prove the proposition for the
differential form
= dx1 dxk ,
where is an arbitrary function of the variables xi . The approach to prove the propo-
sition for an arbitrary differential form of the form dxi1 dik is analog. The
proposition follows by taking linear combinations.
The exterior derivative of with respect to the variables xi is given by
n
X r
d = dx dx1 dxk .
xr
r=1
100
0
On the basis of Lemma 3.2.1, can be written, with respect to the coordinates xi , as
X x1 , , xk i01 i0
= 0 i0 dx dx k .
i
1 i < < i n
0 0 x 1, , x k
1 k
0
The exterior derivative of with respect to the coordinates xi , notated by d0 , is given
by
X n
X x 1 , , xk
i01
i0
0 r0
d = k +
0 dx dx dx (3.15)
x x 1 , , x k
r0 i0 i
0
r =1 1 i < < i n
0 0
1 k
n
X x , , x
1 k
i01 i0
X
r0
dx dx dx k . (3.16)
xr
0 i0 i0
x 1, , x k
0 r =1 1 i0 < < i0 n
1 k
The first sum, see Formula 3.15, is to write as ( with the use of the index notation)
r0 k
0 dx dx dx =
1
x r
0
xr xr l
dx dx1 dxk =
xr xl xr
0
r
dx dx1 dxk ,
xr
and that we recognize as d. The second sum, see Formula 3.16, is to write as
X X x 1 , , xk
r0 i0 i0
0 dx dx 1 dx k
x r0 i i0
x 1, , x k
1 j < < j
0 0 n {r ,i < < i }={j < < j
0 0 0 0 0 }
1 k+1 1 k 1 k+1
(3.17)
where the inner sum is a summation over all possible combinations r0 , i01 < < i0k ,
a collection of (k + 1) natural numbers, which coincides with the collection j01 < <
j0k+1 . The inner sum of Formula 3.17 can then be written as
k+1
X x1 , , xk j0l
j01 j0 j0 j0
dx dx dx l1 dx l+1 dx k+1 .
j0 0 j0 j0 j0
x l x j1 , , x l1 , x l+1 , , x k+1
l=1
j0 0 j0 j0 j0
Put the obtained (k + 1)-form dx l dx j1 dx l1 dx l+1 dx k+1 in the order
0 j0
dx j1 dx k+1 . This costs a factor (1)(l+1) . With the use of Lemma 3.6.3 it follows
that the second sum, see Formula 3.16, is equal to zero.
Proof Because of Theorem 3.6.1, it is enough to prove the proposition just for one
coordinate system {xi }. The same as in the foregoing theorem it is enough to prove the
proposition for the k-form
= dx1 dxk ,
This summation exist out of n (n 1) terms, an even number of terms. These terms
become pairwise zero, because
2 2
= and dxr dxl = dxl dxr .
xl xr xr xl
where xi are arbitrary (curvilinear) coordinates and and are functions of the vari-
ables xi . There holds that
a b = dx1 dxl+m ,
such that
n
X !
d(a b) = p + p dxp dx1 dxl+m
x x
p=1
Furthermore holds
n
X
d = p dxp dx1 dxl+m
x
p=1
and
102
n
X
d = p dx1 dxl dxp dxl+1 dxl+m .
x
p=1
In this last expression it costs a factor (1)l to get dxp to the front of that expression.
Hereby is the theorem proven for the special case.
If there are chosen a symmetric inner product and a oriented volume on Rn , they can
be transferred to every tangent space ( see the Subsections 3.4.2 and 3.4.3). In every
point X can then the Hodge image (X) be considered. This Hodge image is then an
antisymmetric (n k)-tensor (see Section 2.12). In this section we consider combina-
tions of the algebraic operator and the differential operator d on differential forms.
Let x and y be Cartesian coordinates on R2 and take the natural inner product.
If = 1 dx + 2 dy, than holds
= 2 dx + 1 dy
1 2
!
d = + dx dy,
x y
1 2
d = + .
x y
f f
df = dx + dy,
x y
f f
df = dx + dy,
y x
2 f 2 f
!
d df = + dx dy,
x2 y2
2 f 2 f
d df = + .
x2 y2
This last result is the Laplace operator with respect to the natural inner product and
volume form on R2 .
103
Consider R3 with the Cartesian coordinates x, y and z and the usual inner product.
If = 1 dx + 2 dy + 3 dz, than holds
2 1 3 1 3 2
! ! !
d = dx dy + dx dz + dy dz,
x y x z y z
3 2 1 3 2 1
! ! !
d = dx + dy + dz,
y z z x x y
= 1 dy dz 2 dx dz + 3 dx dy,
1 2 3
!
d = + + dx dy dz,
x y z
1 2 3
d = + + .
x y z
Let f be scalar field, than holds
f f f
df = dx + dy + dz,
x y z
f f f
df = dx dy dx dz + dy dz,
z y x
2 f 2 f 2 f
!
d df = + + dx dy dz,
x2 y2 z2
2 f 2 f 2 f
d df = + + .
x2 y2 z2
Also in R3 , the operator d d seems to be the Laplace operator for scalar fields.
Notice(s): 3.7.1 All the combinations of d and are coordinate free and can be
written out in any desired coordinate system.
Let there are chosen a symmetric inner product and a matching oriented volume on Rn
and let be a k-form.
104
4 = (1)n k ( d d + (1)n d d ) .
Check this. Check furthermore that in R4 with the Lorentz inner product, for scalar
fields , 4 is the same as , where represents the dAlembertian.
Comment(s): 3.7.1 dAlemertian is also called the the Laplace operator of the
Minkowski space. In standard coordinates t, x, y and z and if the inner product
has the signature (+, , , ), it has the form
2 2 2 2
= ,
t2 x2 y2 z2
These classical vector operations are grad, div, curl and 4 have only to do with
scalar fields and vector fields. In this section we give coordinate free definitons of
these operations. Hereby will, beside the operators d and , also the isomorphism
GX : TX (Rn ) TX (Rn ) play a role of importance. This is determined by the cho-
sen inner product, see Subsection 3.4.2. We consider here the usual inner product on
R3 and the orthogonal coordinates {xi }. Furthermore we use the scale factors hi . With
the help of these scale factors, the components of the fundamental tensor field gij can
1
( )
n o
be written as gij = ij hi . Furthermore are the bases
2 and h i dx i orthonormal
hi xi
!
1
in every tangent space TX (Rn ) and its dual. Further holds that GX = hi dxi ,
hi xi
wherein may not be summed over i.
1
Apparently are the components, with respect to this base, given by .
hi xi
such that
2 h2 1 h1 3h 1h
! !
3 1
dG = dx1 dx2 + dx1 dx3 +
x1 x2 x1 x3
3 h3 2 h2
!
dx2 dx3 .
x2 x3
106
To calculate the Hodge image of dG , we want that the basis vectors are orthonormal.
1
Therefore we write dx1 dx2 = h1 dx1 h2 dx2 , than follows that dx1 dx2 =
h1 h2
h3
dx3 . With a similar notation for the dx1 dx3 and dx2 dx3 it follows that
h1 h2
3 h3 2 h2 1h 3h
! !
1 1 1 3
dG = h1 dx1 + h2 dx2
h2 h3 x2 x3 h1 h3 x3 x1
2 h2 1 h1
!
1
h3 dx3 ,
h1 h2 x1 x2
3 h3 2 h2 1 1 h1 3 h3 1
! !
1 1
curl = + +
h2 h3 x2 x3 h1 x1 h1 h3 x3 x1 h2 x2
2 h2 1 h1 1
!
1
.
h1 h2 x1 x2 h3 x3
We write again
1 1 1
= 1 + 2 + 3 ,
h1 x 1 h2 x 2 h3 x3
than holds
G = 1 h2 h3 dx2 dx3 2 h1 h3 dx1 dx3 + 3 h1 h2 dx1 dx2 ,
such that
1 h2 h3 2 h1 h3 3 h1 h2
!
d G = + + dx1 dx2 dx3
x 1 x 2 x 3
and so we get
1 h2 h3 2 h1 h3 3 h1 h2
!
1
div = + + .
h1 h2 h2 x1 x2 x3
107
This is the well-known formula, which will also be found, if the divergence of is
written out as given in Definition 3.6.8.
This differential operator we define here for scalar fields and for vector fields.
Definition 3.8.4 Let be a scalar field, than the Laplace operator for , notation
4 , is defined by
4 = div grad = d d .
Definition 3.8.5 Let be a vector field, than the Laplace operator for , notation
4 , is defined by
4 = G1 ( d d d d ) G .
Note that grad div = G1 d d G and that curl curl = G1 d dG, hereby
follows that the above given definition is consistent with the classical formula
All formulas out of the classical vector analysis are in such a way to prove. See
(Abraham et al., 2001) ,Manifolds, , page 379, Exercise 6.4B.
108
With respect to the standard basis {Ei } is every point X R3 to write as X = xi Ei . Let
now xi be real functions of a real parameter t, where t runs through a certain interval
I. We suppose that the functions xi are enough times differentiable, such that in the
future no difficulties arise with respect to differentiation. Further we assume that the
dxi
derivatives are not simultaneously equal to zero, for any value of t I.
dt
Definition 4.1.1 A space curve K is the set of points X = X(t) = xi (t) Ei . Hereby
runs t through the interval I. De map t 7 X(t) is injective and smooth enough.
We call the representation xi (t) of the space curve K a parameter representation. The
dxi
vector Ei is the tangent vector to the space curve at the point X, which will also be
dt
dX
written as . Another parametrisation of K can be obtained by replacing t by f (u).
dt
Hereby is f a monotonic function, such that f (u) runs through the interval I, if the
parameter u runs through some interval J. Also the function f is expected to be enough
times differentiable and such that the first order derivative is not equal to zero for any
value of u. We call the transition to another parametric representation, by the way of
t = f (u), a parameter transformation. Note that there are infinitely many parametric
representations of one and the same space curve K.
The arclength of a (finite) curve K ( a finite curve is also called arc) described by the
parametric representation xi (t), with t0 t, is given by
s
Z t !2 !2 !2
dx1 () dx2 () dx3 ()
s(t) = + + d. (4.1)
t0 d d d
111
The introduced function s, we want to use as parameter for space curves. The parameter
is then s and is called arclength. The integral given in 4.1 is most often difficult, or not
all, to determine. Therefore we use the arclength parametrisation of a space curve only
for theoretical purposes.
Henceforth we consider the Euclidean inner product on R3 . Out of the main theorem
of the integration follows than that
2
ds dX dX
= , . (4.2)
dt dt dt
The derivative of s to t is the length of the tangent vector. If s is chosen as parameter of
K, than holds
2 2 2
dX dX dt dX dX dt ds
, = , = = 1, (4.3)
ds ds ds dt dt ds dt
where we used Formula 4.2. Property 4.3 makes the use of the arclength as parameter
dX
so special. In future we use for the vector a special notation, namely X.
ds
Example(s): 4.1.2 Look to the circular helix, as introduced in Example 4.1.1, with
the start value t = 0. There
holds that s(t) = t a2 + h2 .
Note that indeed X, X = 1.
Definition 4.1.2 The tangent line to a curve K at the point X is straight line given
by the parametric representation
Y = X + X.
The parameter in this definition is in such a way that || gives the distance from the
tangent point X along this tangent line.
Definition 4.1.3 The osculation plane to a curve K at the point X is the plane that
is given by the parametric representation
Y = X + X + X.
112
Here we assumed that X and X are linear independent and in particular that X 6= 0.
Geometrically it means that X is no inflection point. Also in inflection points can be
defined an osculation plane. An equation of the osculation plane in a inflection point
X is given by
d3 X
!
det Y X, X, = 0.
ds3
For some arbitrary parametrisation of a space curve K, with parameter t, the parameter
representation of the tangent line and the osculation plane in X, with X 6= 0, are given
by
dX
Y = X +
dt
and
dX d2 X
Y = X + + 2 .
dt dt
h x1 sin t h x2 cos t + a x3 a h t = 0.
is called the curvature vector. We introduce the vectors n and b, both are unit vectors
on the straight lines of the principal normal and the binormal. We agree that n points
in the direction of X and that b points in the direction of X X. The vectors , n and b
are oriented on such a way that
b = n, n = b , = n b.
Let Y be a point in the neighbourhood of X at the space curve K. Let 4 be the angle
between the tangent lines in X and Y and let 4 be the angle between the binormals in
X and Y. Note that 4 is also the angle between the osculation planes in X and Y.
Definition 4.1.4 The curvature and the torsion of the space curve K in the
point X is defined by
!2 !2
d 4
=
2
= lim , (4.4)
ds 4s0 4s
!2 !2
d 4
=
2
= lim . (4.5)
ds 4s0 4s
Lemma 4.1.1 There holds that 2 = (, ) and 2 = b, b .
Proof Add in a neighbourhood of X to every point of the curve an unit vector a, such
that the map s 7 a(s) is sufficiently enough differentiable. The length of a is equal to
1, there holds that (a, a) = 1 and there follows that (a, a) = 0. Differentiate this last
equation to s and there follows that (a, a) + (a, a) = 0. Let 4 be the angle between
a(s) and a(s + 4s), where X(s + 4s) is point in the neighbourhood of X(s). There holds
that
cos (4) = (a(s), a(s + 4s)).
2
4
lim = (a, a).
4s 0 4s
Choose for a successively and b, and the statement follows.
The curvature of a curve is a measure for the change of direction of the tangent line.
1
R = is called the radius of curvature. So far we have confined ourselves till points at
K which are no inflection points. But it is easy to define the curvature in an inflection
point. In an inflection point is X = 0, such that with Lemma 4.1.1 follows that = 0.
The reverse is also true, the curvature in a point is equal to zero if and only if that point
is an inflection point.
For the torsion we have a similar geometrical characterisation. The torsion is zero if and
only if the curve belongs to a fixed plane. The torsion measures the speed of rotation
of the binormal vector at some point of the curve.
The three vectors , n and b form in each point a orthonormal basis. The consequence
is that the derivative of each of these vectors is a linear combination of the other two.
These relations we describe in the following theorem. They are called the formules of
Frenet.
= n, (4.6)
n = + b, (4.7)
b = n, (4.8)
The sign of is now also defined. The sign of has to be taken so that Equation 4.8
is satisfied.
Proof The definition of is such that it is a multiple of n. Out of Lemma 4.1.1 it follows
that the length of is equal to and so there follows directly Equation 4.6.
With the result of above we conclude that (, b) = 0. The fact that (b, ) = 0 there
follows that (b, ) = (b, ) = 0. Hereby follows that b is a multiple of n. Out of
Lemma 4.1.1 follows that || is the length of b. Because of the agreement about the sign
of , we have Equation 4.8.
Because (n, ) = 0 there follows that (n, ) = (n, ) = and because (n, b) = 0
there follows that (n, b) = (n, b) = , such that
n = (n, ) + (n, b) b = + b,
They call the positive oriented basis {, n, b} the Frenet frame or also the Frenet trihe-
dron, the repre mobile, and the moving frame. Build the of the arclength depend
matrix
115
F = (, n, b},
then holds that FT F = I and det = 1, so the matrix F is direct orthogonal (so, orthog-
onal and detF = 1). The formulas of Frenet can now be written as
0 0
d
F = F R, with R = 0 .
(4.9)
ds
0 0
Theorem 4.1.2 Two curves with the same curvature and torsion as functions
of the arclength are identical except for position and orientation in space. With a
translation and a rigid rotation one of the curves can be moved to coincide with
the other. The equations = (s) and = (s) are called the natural equations of
the curve.
Proof Let the given functions and be continuous functions of s [0, a), with
a some positive constant. To prove that there exists a curve K of which the curvature
and the torsion are given by respectively and . But also to prove that this curve K is
uniquely determined apart from a translation and a rigid rotation. The equations 4.9
can be interpreted as a linear coupled system of 9 ordinary differential equations. With
the existence and uniqueness results out of the theory of ordinary differential equations
follows that there exists just one continuous differentiable solution F(s) of differential
equation 4.9, to some initial condition F(0). This matrix F(0) is naturally a direct or-
thonormal matrix. The question is wether F(s) is for all s [0, a) a direct orthonor-
mal matrix? Out of F = F R follows that FT = RT FT = R FT . There holds that
d T
F FT = F R FT and F FT = F R FT , that means that F F = 0. The matrix F(s) FT (s)
ds
is constant and has to be equal to F(0) FT (0) = I. Out of the continuity of F(s) follows
that det F(s) = det F(0) = 1. So the matrix F(s) is indeed for every s [0, a) a direct or-
thonormal matrix. The matrix F(s) gives the vectors , n and b, from which the searched
curve follows.
The arclength parametrisation of this curve is given by
Z s
X(s) = a + ds,
0
where a is an arbitrary chosen vector. Out of this follows directly the freedom of trans-
lation of the curve.
Let F(0) be another initial condition and let F(s) be the associated solution. Because of
the fact that columns of F(0) form an orthonormal basis, there exist a rigid rotation to
transform F(0) into F(0). So there exists a constant direct orthogonal matrix S, such that
F(0) = S F(0). The associated solution is given by F(s) = S F(s), because
d d
(S F(s)) = S F(s) = S F(s) R.
ds ds
116
Based on the uniqueness, we conclude that F(s) can be found from the solution F(s) and
a rigid rotation.
A space curve is, apart from its place in the space, completely determined by the func-
tions (s) and (s). They characterize the space curve. This means that all properties of
the curve, as far they are independent of its place in the space, can be expressed through
relations in and . We shall give some characterisation of curves with the help of the
curvature and the torsion. Note that out of the equations of Frenet follows that
b + = 0. (4.10)
1. = 0.
Out of Equation 4.8 follows that b = 0, such that
d
(X(s), b) = (X, b) + (X, b) = (, b) = 0.
ds
Evidently is (X(s), b) a constant, say . Than holds that X(s) b for every s lies in
a plane perpendicular to b. The space curve lies in a fixed plane.
2. = 0.
Out of Equation 4.6 follows that = 0, such that
X(s) = a + s (0).
X u = b
follows that
!1
2
X u, u = = 1 + 2 (u, u), (4.11)
so
!
X u, u = 0.
2 + 2
Evidently is X 2 + 2
s u, u a constant and equal to (X(0), u). The vector X
s u lies for every s in a plane perpendicular to u and through X(0). We conclude
2 + 2
that the space curve is a cylindrical helix.
Notice(s): 4.1.1
The tangent vector X makes a constant angle with some fixed vector u,
see Equation 4.11.
The function h(s) = (X(s) X(0), u) tells how X(s) has "risen"in the direction
dh
u, since leaving X(0). And = X, u is constant, so h(s) rises at a constant
ds
rate relative to the arclength.
1 d2
Evidently is the vector X + X s u constant and equal to the
2 + 2 ds2 2 + 2
constant vector m, with m = 2 n(0) + X(0). We conclude that
+ 2
X(s) 2 s u m = n(s),
+ 2 2 + 2
such that
s u m = 2 .
X(s) 2
+ 2 + 2
118
The space curve is, as we already know, a cylindrical helix, especially a circular helix.
4.2.1 Surfaces
We consider again the standard basis {Ei } of R3 , with which every point X R3 can
be written as X = xi Ei . Let xi be real functions of the two real parameters u1 and
u2 , with (u1 , u2 ) R R2 and open. We suppose that that the functions xi are
enough times differentiable to both variables, such that in the remainder there will
be no difficulties with respect to differentiation. Further we assume that the matrix,
xi
formed by the partial derivatives , has rank two. This means that the vectors 1 X
u j
and 2 X are linear independent in every point X. (We use in this section the notation
j for ).
u j
u2 = constant can not fall together. Just as the curves in R3 , there are infinitely many
possibilities to describe the same surface S with the help of a parametric representa-
0 0 0 0
tion. With the help of the substitution u1 =" u1 (u# 1 , u2 ), u2 = u2 (u1 , u2 ), where we
ui
suppose that the determinant of the matrix 0 is nonzero, there is obtained a new
ui
0
parametric " representation of the surface S, with the coordinates ui . The assumption
ui
#
h i
that det =
6 0 is again guaranteed by the fact that the rank of the matrix j x i is
ui
0
ui
equal to two. From now on the notation of the partial derivatives i0 will be Aii0 .
u
Let S be a surface in R3 with the coordinates ui . A curve K at the surface S can be de-
scribed by ui = ui (t), where t is a parameter for K. The tangent vector in a point X at
this curve is given by
119
dX dui
= i X,
dt dt
which is a combination of the vectors 1 X and 2 X. The tangent lines in X to all the
curves through X at the surface lie in a plane. This plane is the tangent plane in X to
S, notation TX (S). This tangent plane is a 2dimensional linear subspace of the tangent
space TX (R3 ). The vectors 1 X and 2 X form on a natural way a basis of this subspace.
0
With the transition to other coordinates ui holds i0 X = Aii0 i X, this means that in the
tangent plane there is a transistion to another basis.
with 0 < < and 0 < < 2 . Note that the rank of the matrix formed by the
columns X and X is equal to two, if 6= 0 and 6= .
Definition 4.2.2 The first fundamental tensor field is the fundamental tensor field
that adds the inner product to TX (S). Out of convenience, we notate the first fun-
damental tensor field by g.
The components of the first fundamental tensor field, belonging to the coordinates ui ,
are given by
gij = (i X, j X)X ,
such that g can be written as g = gij dui du j . Hereby is {du1 , du2 } the reciprocal
basis, which belongs to the basis {1 X, 2 X}.
120
The vectors 1 X and 2 X are tangent vectors to the parameter curves u2 = C and u1 =
C. If the parameters curves intersect each other at an angle , than holds that
(1 X, 2 X)X g12
cos = = .
|1 X| |2 X| g11 g22
It is evident that the parameter curves intersect each other perpendicular, if g12 = 0. A
parametrisation ui of a surface S is called orthogonal if g12 = 0.
such that
( )
k
= gkl (i j X, l X).
i j
It is clear that
hij = (i j X, NX ). (4.13)
121
1
Lemma 4.2.1 The Christoffel symbols are not the components of a 2 -tensor field,
the functions hij are the components of a covariant 2-tensor field.
0
Proof Let xi (u j ) be a second parametric representation of the surface S. There holds
that
k0
( ) ( )
k0 l0 0 j k 0
0 0 = g (i0 j0 X, l0 X) = Akk A j0 Aii0 + Akk (i0 Akj0 ).
i j i j
The second term is in general not equal to zero, so the Christoffel symbols are not the
components of a tensor field. ( See also Formula 3.8, the difference is the inner product!)
Furthermore holds that
j j0 j
hi0 j0 = (i0 j0 X, NX ) = (i0 (A j0 j X), NX ) = ((i0 A j ) j X + A j0 (i0 j X), NX )
j j j
= A j0 (i0 j X, NX ) = A j0 Aii0 (i j X, NX ) = A j0 Aii0 hij ,
from which follows that the functions hij are the components of a tensor field.
Definition 4.2.3 The second fundamental tensor field is the covariant 2-tensor
field of which the components, with respect to the base uk , are given by hij , so
h = hi j dui du j .
Lemma 4.2.2 The Christoffel symbols are completely described with the help of
the components of the first fundamental tensor field. There holds that
( )
k 1
= gkl (i g jl + j gli l gij ). (4.14)
i j 2
i g jl = i ( j X, l X) = (i j X, l X) + ( j X, il X),
j gli = j (l X, i X) = ( jl X, i X) + (l X, ji X),
l gij = l (i X, j X) = (li X, j X) + (i X, l j X),
such that
( )
k
i g jl + j gli l gij = 2 (i j X, l X) = 2 glk ,
i j
Theorem 4.2.1 The intersection of a surface S with a flat plane, that lies in some
small neighbourhood of a point X at S and is parallel to TX (S), is in the first approx-
imation a hyperbola, ellipse or a pair of parallel lines and is completely determined
by the second fundamental tensor.
Proof We take Cartesian coordinates x, y and z in R3 such that X is the origin and the
tangent plane TX (S) coincides with the plane z = 0. In a sufficiently small neighbour-
hood of the origin, the surface S can be descibed by an equation of the form z = f (x, y).
A parametric representation of S is given by x1 = x, x2 = y and x3 = z = f (x, y). We
assume that the function f is enough times differentiable, such that in a neighbourhood
of the origin the equation of S can written as
f f 1
z = f (x, y) = f (0, 0) + (0, 0) + (0, 0) + (r x2 + 2 s x y + t y2 ) + h.o.t.
x y 2
1
= (r x2 + 2 s x y + t y2 ) + h.o.t.,
2
with
2 f 2 f 2 f
r = (0, 0), r = (0, 0) and t = (0, 0).
x2 x y y2
The abbreviation h.o.t. means higher order terms. A plane that lies close to X and is
parallel to the tangent plane to S at X is described by z = , with small enough. The
intersection of this plane with X is given in a first order approximation by the equation
r x2 + 2 s x y + t y2 = 2 .
The tangent vectors to the coordinate curves, with respect to the coordinates x and y,
in the origin, are given by
!T
f
x X = 1, 0,
x
and
!T
f
y X = 1, 0, ,
y
Furthermore holds in the origin X = 0 that NX = (0. 0 1)T . It is evident that h11 =
r, h12 = h21 = s, h22 = t, see Formula 4.13, such that the equation of the intersection is
given by
We conclude that the intersection is completely determined by the numbers hij and that
the intersection is an ellipse if det[hi j] > 0, a hyperbola if det[hi j] < 0 and a pair of
parallel lines if det[hi j] = 0.
This section will be closed with a handy formula to calculate the components of the
second fundamental tensor field. Note that 1 X 1 X = NX , with = |1 X 2 X|.
This can be represented with components of the first fundamental tensor field. There
holds
= |1 X| |2 X| sin,
with the angle between 1 X and 2 X, such that 0 < < . There follows that
s
q g212 q q
= g11 g22 (1 cos ) = g11 g22 (1
2 ) = g11 g22 g12 = det[gij ].
2
g11 g22
Furthermore is
1 1
hi j = (i j X, NX ) = (1 X 2 X, i j X) = p det(1 X, 2 X, i j X).
det[gij ]
= X = u j j X.
The curvature vector X is the vector along the principal normal of the curve K and
satisfies
l
X = = u j j X + u j uk k j X = u j j X + u j uk +
X h N
l k j X
k j
l
= ul + u j uk l X + u j uk h jk NX .
(4.15)
j k
The length of the curvature vector is given by the curvature in X ( see Lemma 4.1.1).
It is, with the help of Formula 4.15, easy to see that the geodesic curvature can be calu-
lated with the help of the formula
s ( ) ! ( ) !
i j
ui + ul uk u +
j u u gij .
p q (4.16)
l k p q
Note that the geodesic curvature only depends on components of the first fundamental
tensor field.
Out of Formula 4.15 follows that the principal curvature is given by u j uk h jk . Note that
the principal curvature only depends on the components of the second fundamental
tensor field and the values of ui . These last values determine the direction of . This
means that different curves at the surface S, with the same tangent vector in a point X
at S, have an equal principal curvature. This result is known as the theorem of Meusnier
Note that T has the value 1 in every point of K. We vary now K on a differentiable
way on the surface S, where we keep the points X0 and X1 fixed. So we obtain a new
curve K. This curve K can be represented by u j (s) + j (s), with j differentiable and
j (s0 ) = j (s1 ) = 0. The parameter s is not necessarily the arclength parameter of K.
Consequently the length of K is given by
s
d(ui + i ) d(u j + j
Z s1 Z s1
gij (u + )
k k )ds = T(uk + k , uk + k ) ds.
s0 ds ds s0
Because the length of K is minimal, the expression above has its minimum for = 0,
that means that
Z s1 !
d
T(uk + k , uk + k ) ds = 0.
d s0 =0
hereby is used partial integration and there is used that k (s0 ) = k (s1 ) = 0. Because
Formula 4.19 should apply to every function k , we find that
T d T
= 0. (4.20)
uk ds uk
Because of the fact that T in every point of K takes the value 1, it is no problem to replace
T by T2 in Formula 4.20 and the equations become
i j
d i j
g ij u u g ij u u = 0,
uk ds uk
or
d
ui u j k gij gki ui + gk j u j = 0,
ds
or
ui u j k gij 2ui gki + ( j gki + i gk j ) ui u j = 0,
or
2 gki ui + (i gk j + j gki k gij ) ui u j = 0,
126
or
( )
i k
u + ui u j = 0.
i j
This are exactly the equations for geodesic lines. Because of the fact that K satisfies
these equations, is K a geodesic through X0 and X1 .
Example(s): 4.2.3 The vector field formed by the tangent vector to K is a tangent
vector both to S and K. This tangent vector field has contravariant components
dui (t)
. There holds indeed that
dt
dX(u j (t)) du j (t)
= j X(t).
dt dt
Example(s): 4.2.4 The vector field formed by the basis vectors j X(t), with i fixed,
j
is a tangent vector field and the contravariant components are i .
Example(s): 4.2.5 The vector field formed by the reciproke basis vectors dui ,
j
with i fixed, is a tangent vector field and has covariant components i and con-
travariant components gij .
dv(t)
Let v be tangent vector field. In general, the derivative will not be an element of
dt
the tangent plane TX(t) (S). In the following definition we will give an definition of a
derivative which has that property.
127
v
Definition 4.2.7 The covariant derivative of v along K, notation ,
dt
is defined by
v dv
= PX ,
dt dt
with PX the projection at the tangent plane TX (S).
The covariant differentiation in a point X at S is a linear operation such that the tangent
vectors at S, which grasp at the curve K, is imaged at TX (S). For every scalar field f at
K holds
!
( f v) d( f v) df dv df v
= PX = PX v + f = v + f . (4.21)
dt dt dt dt dt dt
Example(s): 4.2.6 Consider the vector field out of Example 4.2.3. Call this vec-
tor field w. The covariant derivative of this vector field along the curve K can be
expressed by Christoffel symbols. There holds
d du j d2 u j du j d
!! !
w dw
= PX = PX jX = PX jX + jX
dt dt dt dt dt2 dt dt
d2 u j du j duk d2 u j l
du j duk
!
= PX + = + l X
X X X
j k j j
dt2 dt dt dt2 dt dt
k j
d2 u j j duk dul
= 2 + X,
k l dt dt j
dt
Example(s): 4.2.7 Consider the vector field out of Example 4.2.4. Also the covari-
ant derivative of this vector field is to express in Christoffel symbols.
j d j j d j du
k
l
i X = i j X = P X i j X + i j X = i l X
dt dt dt dt dt
k j
j duk
= X.
dt j
i k
Example(s): 4.2.8 Consider the vector field out of Example 4.2.5. There holds
d j d i i
0 = i = du , j X = du , j X + dui , j X ,
dt dt dt dt
where the rule of Leibniz 4.22 is used. Out of this result follows that
( ) k ! ( ) k
i i l du i du
i
du , j X = du , j X = du , l X = ,
dt dt j k dt j k dt
such that
( ) k
i i du
du = du j .
dt j k dt
In particular, we can execute the covariant differentiation along the parameter curves.
These are obtained by taking one of the variables uk as parameter and the other variables
u j , j 6= k fixed. Then follows out of Example 4.2.7 that for the covariant derivative of the
basis vectors along the parameter curves that
( ) l
j du
i X = j X. (uk is the parameter instead of t.)
du k i l duk
At the same way follows out Example 4.2.8 that the covariant derivatives of the reci-
proke basis vectors along the parameter curves are given by
( ) k ( )
i du i
dui = du j = du j . (4.23)
dul j k dul j l
In general the covariant derivative of a tangent vector field v along a parameter curve
is given by
( )!
j
j j j l j
v = v j X = k v j X + v j X = k v + v j X,
duk duk duk k l
and here we used Formula 4.21. The covariant derivative of a tangent vector field with
respect to the reciproke basis vectors is also easily to write as
129
( )!
j
j j l
v = v j du = k v j du + v j k du = k v j vl du j . (4.24)
du k duk du k j
Lemma 4.2.3 The functions k v j , given by Formula 4.25 are the components of a
1
1 -tensor field at S. This tensor field is called the covariant derivative of v at S.
j
Let ij , ij and i be the components of respectively a 02 -
Definition 4.2.9
tensor field, a 20 -tensor field and a 11 -tensor field. Then we define the functions
j
k i j , k ij and k i by
l l
k ij = k ij il ,
(4.27)
lj
k i
k j
ij ij
i
lj
j
il
k = k + + ,
(4.28)
k l
k l
j j
j
l
l
j
k i = k i + l .
(4.29)
k l
i
k i
130
Lemma 4.2.4 The components of the first fundamental vector field behave by
covariant differentiation like constants, or k gij = 0 and k gij = 0.
such that
( ) ( )
ij j il i
k g = g glj ,
l k l k
where is made use of Formula 4.23. With the help of Definition 4.28 follows then
k gi j = 0. In a similar way it is to see that k gij = 0.
Note that a curve K is a geodesic if and only if the tangent vector field of this curve is
parallel transported along this curve. If K is a geodesic than there holds that
dui
!
i X = 0.
dt dt
IMPORTANT NOTE
The system of differential equations for parallel transport in 2 dimensions reads
1
dul
1
dul
1
dt
dt
v1
l 1 l 2
v
d 0
2 + = .
dt v
2
2 dul 2 dul v 0
l 1 dt l 2 dt
3.
134
This section is just written to get some feeling about what the Christoffel symbols sym-
bolise. Let X be some point in space, with the curvilinear coordinates xi . The coordi-
nates depend on the variables j , so xi ( j ). The vector j X is tangent to the coordinate
curve of x j and the vectors {i X} form a basis. To this basis belongs the reciprocal basis
j
{y j }, that means that (y j , i X) = . If the standard inner product is used, they can be
i
calculated by taking the inverse of the matrix (1 X N X).
If the point X is moved, not only the coordinates xi change, but also the basis vectors
{i X} and the vectors of the reciprocal basis {y j }.
The vector j (i X) can be calculated and expanded in terms of the basis {i X}, the coe-
eficients of this expansion are the Christoffel symbols, so
( )
k
j i X = X,
j i k
( Definition
see also the similarity with ) 3.6.4. Another notation for the Christoffel sym-
k
bols is k . The symbols and k are often called Christoffel symbols of the second
ji j i ji
kind.
Most of the time the metric tensor is used to calculate the Christoffel symbols. The
metric tensor is
G = (1 X N X)T (1 X N X),
the matrix with all the inner products between the tangent vectors to the coordinate
axis. The inverse matrix of G is also needed and the derivatives of all these inner prod-
ucts. The inner products between different tangent vectors is notated by the coefficient
gi j = (i X, j X) of the matrix G.
135
Chapter 5 Manifolds
Section 5.1 Differentiable Functions
with
lim | r(h) | = 0.
|h| 0
Take cartesian hcoordinates ath Rni and Rm . Let f k be the k-th component function of f
j
i
and write a = ak . Let A = Ai be the matrix of A with respect to the standard bases
j
of Rn and Rm . Look in particular the component function f i and h = hi E j , than holds
It follows that
fi
Aij = (a).
x j
The linear transformation A is called the derivative in a and the matrix A is called the
df
functional matrix. For A, we use also the notation (a). If m = n, than can also be
dX
f 1, , f n
determined the determinant of A. This determinant is just (a), the Jacobi
x1 , , xn
determinant of f in a.
Let K be a curve in U with parameter t (, ), for a certain value of > 0. So
dX
K : t X(t). Let a = X(0). The tangent vector at K in the point a is given by (0). Let
dt
L be the image curve in V of K under f . So L : t Y(t) = f (X(t)). Call b = Y(0) = f (a).
dY df dX
The tangent vector at L in the point b is given by (0) = (a) (0).
dt dX dt
If two curves K1 and K2 through a at a have an identical tangent vector than it follows
that the image curves L1 and L2 of respectively K1 and K2 under f have also an identical
137
tangent vector.
The three curves K1 , K2 and K3 through a have at a tangent vectors, which form an
addition parallelogram. There holds than that the tangent vectors at the image curves
L1 , L2 and L3 also form an addition parallelogram.
Let M be a set 2 3.
Note that transition maps only concern points which occur in more than one chart and
they map open subsets in Rn into open subsets of Rn .
Definition 5.2.3 A collection of chart balls and there corresponding chart maps
{Ui , i } of M is called an atlas of M if M = i Ui and if every transition map is
differentiable in the points where they are defined.
In the remainder there is supposed that M is a manifold. Let U and U0 be chart balls
of M such that U U0 6= . Let also U and U0 be the corresponding charts, with the
2 RRvH: Problem is how to describe that set, for instance, with the help of Euclidean coordinates?
3 RRvH: It is difficult to translate Dutch words coined by the author. So I have searched for
English words, commonly used in English texts, with almost the same meaning. The book of
(Ivancevic and Invancevic, 2007) ,Applied Differential Geometry was very helpful.
138
Let K be curve at M such that a part of the curve lies at U U0 . That part is a curve that
appears at the chart U, at the chart U0 and is a curve in Rn . A point X(t0 ) U U0 ,
for a certain t0 I, can be found at both charts U and U0 . At these charts the tangent
vectors at K in X(t0 ) are given by
d( X) d((0 ) X)
(t0 ) and (t0 ).
dt dt
Let K1 : t 7 X(t), t I1 and K2 : 7 Y(), I2 be curves at M, which have a
point P in common in U U0 , say P = X(t0 ) = Y(0 ), for certain t0 I1 and 0 I2 .
Suppose that the tangent vectors on K1 and K2 in P at the chart U coincide. The tangent
vectors on K1 and K2 in P at the chart U0 also coincide, because by changing of chart
0
these tangent vectors transform with the transition matrix Aii .
Definition 5.2.6 Two curves K1 and K2 at M which both have the point P in
common are called equivalent in P, if the tangent vectors on K1 and K2 in P at a chart
U coincide. From the above it follows that this definition is chart independent.
Note that the tangent space is a vector space of dimension n. We notate these tangent
vectors by their description at the charts. A basis of the tangent space is formed by
the tangent vectors i in P at the parameter curves, which belong to the chart U. The
u
relationship of the tangent vectors i0 , which belong to the parameter curves of chart
u
U0 , is given by
i
0 = Ai0 .
u i ui
j f (ui0 ) = j g(ui0 ),
The cotangent space is a vector space of dimension n. The covectors dui in P of the
parameter functions ui , which belong to the chart U, form a basis of the cotangent space.
For two charts holds
0 0
dui = Aii dui .
d d dui
( f K) = f (ui (t)) = i f .
dt dt dt
This expression, which is chart independent, is called the directional derivative in
P of the function f with respect to the curve K and is conform the definition 3.6.2,
Subsection 3.6.1. In the directional derivative in P we recognize the covectors as lin-
ear functions at the tangent space and the tangent vectors as linear functions at the
cotangent space. The tangent space and the cotangent space in P can therefore be con-
sidered as each others dual.
The tangent vectors in a point P at the manifold M, we can also define as follows:
A tangent vector according definition 5.2.7 can be seen as such a linear transformation.
Let K be a curve and define
d
Df = f K,
dt
than D satisfies 5.1, because for constants and holds
dui dui dui dui dui dui
i ( f + g) = i f + i g, i ( f g) = f i g + g i f.
dt dt dt dt dt dt
Let M be a manifold.
In every tangent space an inner product can be introduced with which tensor algebra can be done. Unlike surfaces in $\mathbb{R}^3$ we do not have, in general, an a priori given inner product that applies simultaneously to all the tangent spaces. This missing link between the different tangent spaces of a manifold is supplied in the following definition.
$$\Phi(P;\,v, w) = g_{ij}\,v^i w^j,$$
with $g_{ij} = g_{ji}$ and $[g_{ij}]$ positive definite. In the tangent space $T_P(M)$, $\Phi(P;\,\cdot\,,\cdot\,)$ serves as fundamental tensor. Therefore we call the 2-tensor field that belongs to M the fundamental tensor field.
Their solutions are the geodesics of the Riemannian manifold. Just as in the preceding chapter, it can be proved that the shortest curves in a Riemannian manifold are geodesics.
When no outside forces act on the system, the system follows a geodesic orbit on the Riemannian manifold.
This gives us the idea to define the covariant derivative of a vector field $w = w^i\,\partial_i X$ along a curve K by
$$\frac{\nabla}{dt}\big(w^i\,\partial_i X\big) = \Big(\frac{dw^i}{dt} + \{^{i}_{jk}\}\,\frac{du^j}{dt}\,w^k\Big)\,\partial_i X. \tag{5.2}$$
Here $\frac{\nabla}{dt}(w^i\,\partial_i X)$ is a vector field along the curve K. This vector field is independent of the choice of the coordinates:
$$\frac{dw^{i'}}{dt} + \{^{i'}_{j'k'}\}\,\frac{du^{j'}}{dt}\,w^{k'} = \frac{d}{dt}\big(A^{i'}_i w^i\big) + \Big(A^{i'}_i A^j_{j'} A^k_{k'}\,\{^{i}_{jk}\} + A^{i'}_s\,\partial_{j'} A^s_{k'}\Big)\,A^{j'}_p\,\frac{du^p}{dt}\,A^{k'}_q\,w^q$$
$$= A^{i'}_i\Big(\frac{dw^i}{dt} + \{^{i}_{hk}\}\,\frac{du^h}{dt}\,w^k\Big) + \partial_p A^{i'}_i\,\frac{du^p}{dt}\,w^i + A^{i'}_s\,\partial_p A^s_{k'}\,A^{k'}_q\,\frac{du^p}{dt}\,w^q.$$
The last two terms cancel: out of $A^{i'}_s A^s_{k'} = \delta^{i'}_{k'}$ follows $A^{i'}_s\,\partial_p A^s_{k'}\,A^{k'}_q = -\partial_p A^{i'}_q$, so that the right-hand side reduces to $A^{i'}_i\,\frac{\nabla w^i}{dt}$; the components transform as those of a vector.
Let M be a (Riemannian) manifold and $\{U, \varphi\}$ a chart 5, with coordinates $u^i$ and Christoffel symbols $\{^{k}_{lm}\}$. Let K be a parametrised curve on the chart U and T an $\binom{r}{s}$-tensor field which is defined at least in every point of K. We want to introduce a differentiation operator $\frac{\nabla}{dt}$ along K such that $\frac{\nabla T}{dt}$ is an $\binom{r}{s}$-tensor field defined on K. $\frac{\nabla}{dt}$ is called the covariant derivative along K.
We consider first the case r = 1, s = 0. The covariant derivative of a tangent vector field a of M along K we define by Expression 5.2. If $\frac{\nabla a}{dt} = 0$, we call a pseudoparallel along the curve K. From the theory of ordinary differential equations it follows that a tangent vector $a_0$ on M, given at the beginning of the curve K, can be continued to a pseudoparallel vector field along K. In other words, $a_0$ can be parallel transported along the curve K.
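To make parallel transport concrete, here is a small numerical sketch (my own addition, not in the original text) that integrates the pseudoparallel condition $\frac{da^i}{dt} + \{^{i}_{jk}\}\frac{du^j}{dt}a^k = 0$ along a circle of latitude on the unit sphere, using the well-known Christoffel symbols of the sphere in $(\theta, \varphi)$ coordinates. After one full loop the transported vector is rotated by the holonomy angle $2\pi\cos\theta_0$:

```python
# Parallel transport along the latitude circle theta = theta0, phi = t.
import numpy as np
from scipy.integrate import solve_ivp

theta0 = np.pi / 3            # fixed latitude (arbitrary choice)

def rhs(t, a):
    # da^i/dt = -{i over j k} (du^j/dt) a^k with du^theta/dt = 0, du^phi/dt = 1
    a_th, a_ph = a
    da_th = np.sin(theta0) * np.cos(theta0) * a_ph   # -{theta over phi phi} = sin*cos
    da_ph = -np.cos(theta0) / np.sin(theta0) * a_th  # -{phi over theta phi} = -cot
    return [da_th, da_ph]

sol = solve_ivp(rhs, [0, 2 * np.pi], [1.0, 0.0], rtol=1e-10, atol=1e-12)
a_end = sol.y[:, -1]
# After one full loop the theta-component equals cos(2*pi*cos(theta0)):
print(a_end[0], np.cos(2 * np.pi * np.cos(theta0)))
```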
Notice that the geodesics are exactly those curves for which, with the use of the arclength parametrisation, the tangent vectors are pseudoparallel to the curve. There holds
5 RRvH: The definitions of a chart ball and a chart are not used consistently by the author. $\bar U = \varphi(U)$ is a chart of M, $\{U, \varphi\}$ is a chart ball of M. The cause of this confusion is the use of the lecture notes of Prof. Dr. J.J. Seidel, see (Seidel, 1980).
$$\frac{\nabla}{ds}\,\frac{du^k}{ds} = \frac{d^2 u^k}{ds^2} + \{^{k}_{ij}\}\,\frac{du^i}{ds}\,\frac{du^j}{ds} = 0.$$
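As a companion sketch (again my own, under the same sphere conventions as above), the geodesic equation can be integrated numerically; on the unit sphere the solution should trace a great circle, i.e. stay in the plane spanned by the initial position and velocity:

```python
# Integrate the geodesic equation on the unit sphere in (theta, phi) coordinates.
import numpy as np
from scipy.integrate import solve_ivp

def geodesic(t, y):
    th, ph, dth, dph = y
    ddth = np.sin(th) * np.cos(th) * dph**2           # -{theta over phi phi} dph dph
    ddph = -2.0 * np.cos(th) / np.sin(th) * dth * dph # -2 {phi over theta phi} dth dph
    return [dth, dph, ddth, ddph]

y0 = [np.pi / 2, 0.0, 0.3, 1.0]                       # start on the equator, tilted
sol = solve_ivp(geodesic, [0, 10], y0, rtol=1e-10, atol=1e-12)

# embed in R^3 and check the orbit stays in the plane of r0 and v0 (a great circle)
th, ph = sol.y[0], sol.y[1]
x = np.array([np.sin(th)*np.cos(ph), np.sin(th)*np.sin(ph), np.cos(th)])
r0 = np.array([1.0, 0.0, 0.0])
v0 = np.array([0.0, 1.0, -0.3])   # = dth * d_theta + dph * d_phi at t = 0
nrm = np.cross(r0, v0)
print(np.max(np.abs(nrm @ x)) / np.linalg.norm(nrm))  # ~ 0 up to integration error
```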
Next we consider the case r = 0, s = 1, that of a covector field $\omega = \omega_r\,du^r$ along K. Let a be an arbitrary pseudoparallel vector field along K; then
$$\frac{d}{dt}\big(a^r\,\omega_r\big) = \frac{da^r}{dt}\,\omega_r + a^r\,\frac{d\omega_r}{dt} = -\{^{r}_{jk}\}\,\frac{du^j}{dt}\,a^k\,\omega_r + a^r\,\frac{d\omega_r}{dt} = \Big(\frac{d\omega_k}{dt} - \{^{r}_{jk}\}\,\frac{du^j}{dt}\,\omega_r\Big)\,a^k.$$
If we want the Leibniz rule to hold when taking the covariant derivative, then we have to define
$$\frac{\nabla \omega_k}{dt} = \frac{d\omega_k}{dt} - \{^{r}_{jk}\}\,\frac{du^j}{dt}\,\omega_r, \tag{5.3}$$
along K.
It can be proved directly that under a change of chart holds
$$\frac{\nabla \omega_{k'}}{dt} = A^k_{k'}\,\frac{\nabla \omega_k}{dt}.$$
An arbitrary 2-tensor field is treated analogously. Take for instance r = 0, s = 2 and denote the components of $\Phi$ by $\Phi_{ij}$. Take two arbitrary pseudoparallel vector fields $a = a^i\,\partial_i X$ and $b = b^j\,\partial_j X$ along K and require that the Leibniz rule holds; then
$$\frac{d}{dt}\big(\Phi_{ij}\,a^i b^j\big) = \Phi_{ij}\,\frac{da^i}{dt}\,b^j + \Phi_{ij}\,a^i\,\frac{db^j}{dt} + \frac{d\Phi_{ij}}{dt}\,a^i b^j = \Big(\frac{d\Phi_{kl}}{dt} - \{^{j}_{mk}\}\,\frac{du^m}{dt}\,\Phi_{jl} - \{^{i}_{ml}\}\,\frac{du^m}{dt}\,\Phi_{ki}\Big)\,a^k b^l.$$
So we have to define
$$\frac{\nabla \Phi_{kl}}{dt} = \frac{d\Phi_{kl}}{dt} - \{^{m}_{jk}\}\,\frac{du^j}{dt}\,\Phi_{ml} - \{^{n}_{jl}\}\,\frac{du^j}{dt}\,\Phi_{kn}. \tag{5.4}$$
For a mixed 2-tensor field with components $\Phi^k_l$ one finds in the same way
$$\frac{\nabla \Phi^k_l}{dt} = \frac{d\Phi^k_l}{dt} + \{^{k}_{jp}\}\,\frac{du^j}{dt}\,\Phi^p_l - \{^{r}_{jl}\}\,\frac{du^j}{dt}\,\Phi^k_r. \tag{5.5}$$
For higher order tensors it goes the same way. Taking the covariant derivative along a parametrised curve means: first take the ordinary derivative and then add for every index a term with a Christoffel symbol.
For the curve K we choose a special curve, namely the h-th parameter curve. So $u^j = K^j + \delta^j_h\,t$, with $K^j$ constants, and thus $t = u^h - K^h$. With $\frac{\nabla}{dt} = \frac{\nabla}{du^h} = \nabla_h$ we find that
$$\nabla_h w^i = \partial_h w^i + \{^{i}_{hk}\}\,w^k,$$
$$\nabla_h \omega_k = \partial_h \omega_k - \{^{r}_{hk}\}\,\omega_r,$$
$$\nabla_h \Phi_{kl} = \partial_h \Phi_{kl} - \{^{r}_{hk}\}\,\Phi_{rl} - \{^{s}_{hl}\}\,\Phi_{ks},$$
$$\nabla_h \Phi^{jk}_i = \partial_h \Phi^{jk}_i - \{^{m}_{hi}\}\,\Phi^{jk}_m + \{^{j}_{hm}\}\,\Phi^{mk}_i + \{^{k}_{hm}\}\,\Phi^{jm}_i.$$
The covariant derivative along all parameter curves converts an $\binom{r}{s}$-tensor field into an $\binom{r}{s+1}$-tensor field on M. Most of the time the covariant derivative is interpreted in the latter way.
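The recipe "ordinary derivative plus one Christoffel term per index" is easy to automate. The sketch below is my own reconstruction, in the spirit of the Christoffel-symbol program cited in the references (van Hassel, 2010); it computes the symbols $\{^{i}_{jk}\}$ from a metric with sympy and forms $\nabla_h w^i$ for the unit sphere. The field components w1, w2 are placeholder functions:

```python
# Christoffel symbols {i over j k} from a metric, and nabla_h w^i.
import sympy as sp

u = sp.symbols('theta phi')
g = sp.diag(1, sp.sin(u[0])**2)          # metric of the unit sphere
ginv = g.inv()
n = len(u)

Gamma = [[[sum(ginv[i, m] * (sp.diff(g[m, j], u[k]) + sp.diff(g[m, k], u[j])
                             - sp.diff(g[j, k], u[m])) for m in range(n)) / 2
           for k in range(n)] for j in range(n)] for i in range(n)]

w = [sp.Function('w1')(*u), sp.Function('w2')(*u)]   # an arbitrary vector field

def cov_deriv(h, i):
    # nabla_h w^i = partial_h w^i + {i over h k} w^k
    return sp.simplify(sp.diff(w[i], u[h])
                       + sum(Gamma[i][h][k] * w[k] for k in range(n)))

print(cov_deriv(0, 1))   # = d w^2 / d theta + cot(theta) * w^2
```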
However, the second covariant derivative of a vector field is not symmetric. There holds
$$\nabla_h \nabla_i v^k = \partial_h\big(\nabla_i v^k\big) - \{^{m}_{hi}\}\,\nabla_m v^k + \{^{k}_{hm}\}\,\nabla_i v^m$$
$$= \partial_h \partial_i v^k + \partial_h\{^{k}_{ij}\}\,v^j + \{^{k}_{ij}\}\,\partial_h v^j - \{^{m}_{hi}\}\,\partial_m v^k - \{^{m}_{hi}\}\{^{k}_{mj}\}\,v^j + \{^{k}_{hm}\}\,\partial_i v^m + \{^{k}_{hm}\}\{^{m}_{ij}\}\,v^j.$$
Interchange the roles of h and i; from the difference with the latter it follows that
$$\nabla_h \nabla_i v^k - \nabla_i \nabla_h v^k = \Big(\partial_h\{^{k}_{ij}\} - \partial_i\{^{k}_{hj}\} + \{^{k}_{hm}\}\{^{m}_{ij}\} - \{^{k}_{im}\}\{^{m}_{hj}\}\Big)\,v^j.$$
The left-hand side is the difference of two $\binom{1}{2}$-tensor fields, and so the result is a $\binom{1}{2}$-tensor field. Because the $v^j$ are the components of a vector field, the expression between the brackets on the right-hand side consists of the components of a $\binom{1}{3}$-tensor field.
We write
$$\big(\nabla_h \nabla_i - \nabla_i \nabla_h\big)\,v^k = K^k_{hij}\,v^j. \tag{5.6}$$
Analogously one finds for a covector field and a mixed 2-tensor field
$$\big(\nabla_h \nabla_i - \nabla_i \nabla_h\big)\,w_j = -K^k_{hij}\,w_k,$$
$$\big(\nabla_h \nabla_i - \nabla_i \nabla_h\big)\,\Phi^k_j = K^k_{him}\,\Phi^m_j - K^m_{hij}\,\Phi^k_m.$$
In an analogous way one can deduce such relations for other types of tensor fields. With some tenacity the tensorial character of the Riemann-Christoffel curvature tensor, defined in 5.6, can be verified. This can be done only with the use of the transformation rule:
$$\{^{i'}_{j'k'}\} = A^{i'}_i A^j_{j'} A^k_{k'}\,\{^{i}_{jk}\} + A^{i'}_i\,\partial_{j'} A^i_{k'}.$$
In the case that $\{^{k}_{ij}\} = \{^{k}_{ji}\}$ there holds
$$K^k_{hij} + K^k_{jhi} + K^k_{ijh} = 0.$$
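With the same symbolic machinery as in the earlier sketch, the curvature components can be checked directly. The sketch below is mine; note that the overall sign of $K_{1212}$ depends on the ordering convention in $K^k_{hij}$, so only agreement up to sign with $\det[g_{ik}]$ is claimed in the final print:

```python
# K^k_hij = d_h {k over i j} - d_i {k over h j}
#           + {k over h m}{m over i j} - {k over i m}{m over h j};
# check the cyclic identity on the unit sphere.
import sympy as sp
from itertools import product

u = sp.symbols('theta phi'); n = 2
g = sp.diag(1, sp.sin(u[0])**2); ginv = g.inv()

Gamma = [[[sum(ginv[i, m]*(sp.diff(g[m, j], u[k]) + sp.diff(g[m, k], u[j])
               - sp.diff(g[j, k], u[m])) for m in range(n))/2
           for k in range(n)] for j in range(n)] for i in range(n)]

def K(k, h, i, j):
    return sp.simplify(sp.diff(Gamma[k][i][j], u[h]) - sp.diff(Gamma[k][h][j], u[i])
        + sum(Gamma[k][h][m]*Gamma[m][i][j] - Gamma[k][i][m]*Gamma[m][h][j]
              for m in range(n)))

# cyclic identity K^k_hij + K^k_jhi + K^k_ijh = 0 for a symmetric connection
for k, h, i, j in product(range(n), repeat=4):
    assert sp.simplify(K(k, h, i, j) + K(k, j, h, i) + K(k, i, j, h)) == 0

# in 2 dimensions one number: |K_1212| = det[g_ik] for the unit sphere
K1212 = sum(g[1, l]*K(l, 0, 1, 0) for l in range(n))
print(sp.simplify(K1212), sp.simplify(g.det()))
```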
With the fundamental tensor the upper index can be lowered:
$$K_{hijk} = g_{kl}\,K^l_{hij}.$$
Out of this follows that $K_{iijk} = K_{himm} = 0$ (no summation): the lowered curvature tensor is antisymmetric in its first and in its last index pair. In 2 dimensions the curvature tensor turns out to be given by just one number. In that case only the following components can be different from zero:
$$K_{1212} = -K_{1221} = -K_{2112} = K_{2121} = h_{11}h_{22} - h_{12}h_{21} = \det[g_{ik}]\,\det[h^k_i].$$
The last identity can easily be proved by choosing one smart chart, for instance the "tangent plane coordinates", as is done in the proof of Theorem 4.2.1.
By contraction of the curvature tensor we obtain the tensor field $K_{ij} = K^m_{mij}$. We examine this tensor field for symmetry. The first term remains unchanged if i and j are interchanged, and so does the combination of the third and the fourth term. Only the second term needs some further inspection. With
$$\{^{k}_{kj}\} = g^{km}\,[kj;m] = \tfrac{1}{2}\,g^{km}\,\partial_j g_{km}$$
we find that
$$\partial_i \{^{k}_{kj}\} = \tfrac{1}{2}\,\partial_i g^{km}\,\partial_j g_{km} + \tfrac{1}{2}\,g^{km}\,\partial_i \partial_j g_{km}.$$
The second term in the right-hand side is obviously symmetric in i and j. For the first term we write, with the help of $\partial_i g^{km} = -g^{kr} g^{lm}\,\partial_i g_{rl}$, the expression
$$\tfrac{1}{2}\,\partial_i g^{km}\,\partial_j g_{km} = -\tfrac{1}{2}\,g^{kr} g^{lm}\,\partial_i g_{rl}\,\partial_j g_{km}.$$
Also this expression is symmetric in i and j.
Out of this a scalar can be derived, given by
$$K = K_{hi}\,g^{hi}.$$
With the help of this scalar the Einstein tensor field is formed,
$$G_{hi} = K_{hi} - \tfrac{1}{2}\,K\,g_{hi}.$$
This tensor field depends only on the components of the fundamental tensor field and plays an important role in the general relativity theory. It also satisfies further properties; in particular it is divergence-free.
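A small symbolic check (my own; I assume the contraction convention $K_{ij} = K^m_{mij}$ used above) illustrates a known fact: in two dimensions the Einstein tensor field vanishes identically, here verified for the unit sphere:

```python
# Ricci tensor, curvature scalar and Einstein tensor of the unit sphere.
import sympy as sp

u = sp.symbols('theta phi'); n = 2
g = sp.diag(1, sp.sin(u[0])**2); ginv = g.inv()

Gamma = [[[sum(ginv[i, m]*(sp.diff(g[m, j], u[k]) + sp.diff(g[m, k], u[j])
               - sp.diff(g[j, k], u[m])) for m in range(n))/2
           for k in range(n)] for j in range(n)] for i in range(n)]

def K(k, h, i, j):
    return (sp.diff(Gamma[k][i][j], u[h]) - sp.diff(Gamma[k][h][j], u[i])
        + sum(Gamma[k][h][m]*Gamma[m][i][j] - Gamma[k][i][m]*Gamma[m][h][j]
              for m in range(n)))

Ric = sp.Matrix(n, n, lambda i, j: sum(K(m, m, i, j) for m in range(n)))
Kscal = sum(ginv[h, i]*Ric[h, i] for h in range(n) for i in range(n))
G = sp.simplify(Ric - Kscal*g/2)
print(G)   # zero matrix: in 2 dimensions the Einstein tensor vanishes identically
```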
Chapter 6 Appendices
Section 6.1 The General Tensor Concept
To the often heard question "What is a tensor really?" a sufficient answer, vague and also sufficiently general, is the following: "A tensor is a function T of a number of vector variables, which is linear in each of these variables separately. Furthermore, this function does not have to be real-valued; it may take values in another vector space."
Notation(s): Given
k vector spaces $E_1, E_2, \ldots, E_k$,
a vector space F,
a multilinear map
$$t : E_1 \times E_2 \times \cdots \times E_k \to F, \qquad (u_1, u_2, \ldots, u_k) \mapsto t(u_1, u_2, \ldots, u_k) \in F.$$
Comment(s): 6.1.1
a. Multilinear means that for every inlet, for instance the j-th one, holds that
$$t(u_1, \ldots, \alpha u_j + \beta v_j, \ldots, u_k) = \alpha\, t(u_1, \ldots, u_j, \ldots, u_k) + \beta\, t(u_1, \ldots, v_j, \ldots, u_k).$$
Exercise. If $\dim E_j = n_j$ and $\dim F = m$, calculate $\dim L^k(E_1, E_2, \ldots, E_k; F)$.
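Numerically, a multilinear map is fixed by its components, an array with one axis per inlet plus one axis for F; this array-shape picture also makes the dimension count in the exercise transparent. A minimal numpy illustration (my own, with arbitrary sizes):

```python
# A bilinear map t : E1 x E2 -> F is fixed by an array of shape (n1, n2, m);
# evaluation is a contraction, and linearity in each inlet is easily checked.
import numpy as np

rng = np.random.default_rng(0)
n1, n2, m = 3, 4, 2
T = rng.normal(size=(n1, n2, m))          # components of t

def t(u1, u2):
    return np.einsum('ijf,i,j->f', T, u1, u2)

u, v, w = rng.normal(size=n1), rng.normal(size=n1), rng.normal(size=n2)
a, b = 2.0, -3.0
assert np.allclose(t(a*u + b*v, w), a*t(u, w) + b*t(v, w))
```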
Exercise. Show that $L^k(E_1, \ldots, E_k; F)$, with $\dim F < \infty$, is basically the same as $L^{k+1}(E_1, \ldots, E_k, F^*; \mathbb{R})$.
Proof Take $\sigma \in L(E_k, L^{k-1}(E_1, \ldots, E_{k-1}; F))$ and define $\tilde\sigma \in L^k(E_1, \ldots, E_k; F)$ by $\tilde\sigma(u_1, \ldots, u_k) = (\sigma(u_k))(u_1, \ldots, u_{k-1})$. The assignment $\sigma \mapsto \tilde\sigma$ is an isomorphism, i.e. a bijective linear map. (You put a "fixed" vector $u_k \in E_k$ in the k-th position and you keep a multilinear function with $(k-1)$ inlets.)
$$(t_1 \otimes t_2)(\hat p^1, \ldots, \hat p^{r_1}, \hat q^1, \ldots, \hat q^{r_2}, x_1, \ldots, x_{s_1}, y_1, \ldots, y_{s_2}) = t_1(\hat p^1, \ldots, \hat p^{r_1}, x_1, \ldots, x_{s_1})\; t_2(\hat q^1, \ldots, \hat q^{r_2}, y_1, \ldots, y_{s_2}),$$
with $\hat p^j, \hat q^j \in E^*$ and $x_j, y_j \in E$ arbitrarily chosen.
Theorem 6.1.2 If $\dim(E) = n$ then $T^r_s(E)$ has the structure of an $n^{r+s}$-dimensional real vector space. The system
$$\{\, e_{i_1} \otimes \cdots \otimes e_{i_r} \otimes \hat e^{\,j_1} \otimes \cdots \otimes \hat e^{\,j_s} \mid 1 \le i_k \le n,\ 1 \le j_k \le n \,\}$$
is a basis of $T^r_s(E)$.
Proof We must show that the previously mentioned system is linearly independent in $T^r_s(E)$ and also spans $T^r_s(E)$. Suppose that
$$\tau^{\,i_1 \cdots i_r}_{\,j_1 \cdots j_s}\; e_{i_1} \otimes \cdots \otimes e_{i_r} \otimes \hat e^{\,j_1} \otimes \cdots \otimes \hat e^{\,j_s} = 0.$$
Fill in all systems $(\hat e^{\,k_1}, \ldots, \hat e^{\,k_r}, e_{l_1}, \ldots, e_{l_s})$, with $1 \le k_j \le n$, $1 \le l_j \le n$, in the mentioned $(r+s)$-tensor; then it follows with $\langle \hat e^{\,p}, e_q \rangle = \delta^p_q$ that all numbers $\tau^{\,i_1 \cdots i_r}_{\,j_1 \cdots j_s}$ have to be equal to zero. Finally, as to the span: every tensor $t \in T^r_s(E)$ can be written as
$$t = t(\hat e^{\,i_1}, \ldots, \hat e^{\,i_r}, e_{j_1}, \ldots, e_{j_s})\; e_{i_1} \otimes \cdots \otimes e_{i_r} \otimes \hat e^{\,j_1} \otimes \cdots \otimes \hat e^{\,j_s}.$$
Comment(s): 6.1.3 The numbers $t^{\,i_1 \cdots i_r}_{\,j_1 \cdots j_s} = t(\hat e^{\,i_1}, \ldots, \hat e^{\,i_r}, e_{j_1}, \ldots, e_{j_s})$ are called the components of the tensor t with respect to the basis $\{e_j\}$. Apparently these numbers are unambiguously determined.
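A small numpy sketch (my own) of this comment for a mixed $\binom{1}{1}$-tensor on $E = \mathbb{R}^3$: the components with respect to a basis and its dual basis determine the tensor completely.

```python
# Components t^i_j = t(e^i, e_j) of a (1,1)-tensor, and reconstruction of t.
import numpy as np

rng = np.random.default_rng(1)
E = rng.normal(size=(3, 3))        # columns e_j: a basis of R^3
Edual = np.linalg.inv(E)           # rows e^i: the dual basis, <e^i, e_j> = delta

A = rng.normal(size=(3, 3))        # a (1,1)-tensor as a map: t(p, x) = p.(A x)

def t(p, x):
    return p @ A @ x

comps = np.array([[t(Edual[i], E[:, j]) for j in range(3)] for i in range(3)])

# reconstruction: t(p, x) = comps[i, j] * <p, e_i> * <e^j, x>
p, x = rng.normal(size=3), rng.normal(size=3)
recon = np.einsum('ij,i,j->', comps, E.T @ p, Edual @ x)
assert np.isclose(t(p, x), recon)
```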
Example(s): 6.1.1 The Kronecker delta $\delta$ is the tensor in $T^1_1(E)$ which belongs to the identity transformation $I \in L(E; E)$ under the canonical isomorphism $T^1_1(E) \simeq L(E; E)$. I.e. $\forall x \in E\ \forall \hat p \in E^*:\ \delta(\hat p, x) = \langle \hat p, x \rangle$.
The components are $\delta^i_j$ with respect to every basis.
If $P \in L(E; F)$ then also, purely notationally, $P \in L(T^1_0(E), T^1_0(F))$. The "pull-back transformation", or simply "pull-back", $P^* \in L(F^*, E^*) = L(T^0_1(F); T^0_1(E))$ is defined by
$$\langle P^*(\hat f), x \rangle = \langle \hat f, P x \rangle$$
with $\hat f \in F^*$ and $x \in E$.
Sometimes it is "unhandy" that $P^*$ develops in the wrong direction, but this can be repaired if P is an isomorphism, so if $P^{-1} : F \to E$ exists.
Definition 6.1.2 Let $P : E \to F$ be an isomorphism. Then $P^r_s : T^r_s(E) \to T^r_s(F)$ is defined by
$$P^r_s(t)(\hat q^1, \ldots, \hat q^r, y_1, \ldots, y_s) = t(P^*\hat q^1, \ldots, P^*\hat q^r, P^{-1} y_1, \ldots, P^{-1} y_s).$$
The following theorem says that "lifting up the isomorphism P to the tensor spaces" has all the desired properties you would expect. Chicly expressed: the assignment $P \mapsto P^r_s$ is a covariant functor.
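On components, the lift of Definition 6.1.2 for a $\binom{1}{1}$-tensor amounts to $A \mapsto P A P^{-1}$, and the functoriality can be checked numerically; a sketch of mine:

```python
# Lifting isomorphisms to mixed (1,1)-tensors is functorial:
# on components, P^1_1 acts as A -> P A P^{-1}, and (P Q)^1_1 = P^1_1 Q^1_1.
import numpy as np

rng = np.random.default_rng(2)
P, Q = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))  # a.s. invertible
A = rng.normal(size=(3, 3))                              # components of a (1,1)-tensor

def lift(P, A):
    return P @ A @ np.linalg.inv(P)

assert np.allclose(lift(P @ Q, A), lift(P, lift(Q, A)))
```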
Section 6.2 The Stokes Equations
6.2.1 Introduction
The Stokes equations play an important role in the theory of incompressible viscous Newtonian fluid mechanics. The Stokes equations can be written as one vector-valued second order partial differential equation,
$$\operatorname{grad} p = \mu\,\Delta u, \tag{6.1}$$
with p the pressure, u the velocity field and $\mu$ the dynamic viscosity. The Stokes equations express the divergence-freeness of the stress tensor. This stress tensor, say S, is a $\binom{2}{0}$-tensor field, which can be written as
$$S = -p\,I + \mu\big(\nabla u + (\nabla u)^T\big). \tag{6.2}$$
The $\binom{2}{0}$-tensor field $\nabla u$ occurring herein (not to be confused with the covariant derivative of u) is called the velocity gradient field. But what exactly is the gradient of a velocity field, and likewise, what is the divergence of a $\binom{2}{0}$-tensor field? These differentiation operations are quite often not properly handled in the literature. In this appendix we put on the finishing touches.
We consider 6.1 and 6.2 on a domain $\Omega \subset \mathbb{R}^n$. Let $\{x^i\}$ be the Cartesian coordinates on $\Omega$.
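In Cartesian coordinates the objects in 6.1 and 6.2 can be written out componentwise. The following sympy sketch is my own illustration, using the index convention $(\nabla u)_{ij} = \partial_i u_j$ (one of the two conventions found in the literature); it verifies that $\operatorname{div} S = 0$ reproduces 6.1 for a Poiseuille-type flow:

```python
# Velocity gradient, stress tensor S and div S for a Poiseuille-type flow.
import sympy as sp

x, y, mu = sp.symbols('x y mu')
X = (x, y)
u = sp.Matrix([1 - y**2, 0])             # divergence-free velocity field
p = -2 * mu * x                          # the matching pressure

grad_u = sp.Matrix(2, 2, lambda i, j: sp.diff(u[j], X[i]))  # (grad u)_ij = d_i u_j
S = -p * sp.eye(2) + mu * (grad_u + grad_u.T)

div_S = sp.Matrix([sum(sp.diff(S[i, j], X[i]) for i in range(2)) for j in range(2)])
print(sp.simplify(div_S))                # zero vector: the Stokes equations hold
```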
References
1 Abraham, R., Marsden, J. and Ratiu, T. (2001). Manifolds, Tensor Analysis and Applications. Springer Verlag.
2 van Hassel, R. (2010). Program to calculate Christoffel symbols.
3 Ivancevic, V. and Ivancevic, T. (2007). Applied Differential Geometry. World Scientific.
4 Misner, C., Thorne, K. and Wheeler, J. (1973). Gravitation. Freeman.
5 Seidel, J. (1980). Tensorrekening. Technische Hogeschool Eindhoven.
Index

e
Einstein tensor field 148
equivalent function (manifold) 139
equivalent (of curves in point) 138
Euclidean inner product 17
exterior derivative 98

f
first fundamental tensor field 119
formulas of Frenet 114
Frenet frame 114
Frenet trihedron 114
functional matrix 136
function (manifold) 139
functor 152
fundamental tensor 29
fundamental tensor field 85

g
geodesic 124
geodesic curvature 123
geodesic line 124
gradient 105
gradient field 90
Gram matrix 18

h
helix, circular 110, 118

l
Lagrange (equations of) 142
Laplace operator 96
Laplace operator (Rn) 104
Laplace operator (scalar field) 107
Laplace operator (vector field) 107
Lie product 91
linear function 8
Lorentz group 25
Lorentz inner product 17
Lorentz transformation 25

m
manifold 137
Meusnier, theorem of 124
Minkowski inner product 71
Minkowski space 25
mixed 2-tensor 50
mixed components 37, 40
mixed (r,s)-tensor 51
moving frame 114

n
natural equations 115
natural isomorphism 73
non-degenerate 73
normal 112

o
oriented volume 65, 86
orthogonal curvilinear 86
orthogonal group 25
orthonormal basis 24
osculation plane 111

p
parameter representation 110
parameter transformation 110
parametric curve 118
parametric representation 118
parametrisation, orthogonal 120
parametrization 78
perm 58
permanent 58
permutation 55
permutation, even 55
permutation, odd 55
polar coordinates 79
principal curvature 124
principal normal 112
product tensor 44
pseudoparallel (along K) 143

r
radius of curvature 114
real number 27
reciproke basis 20
rectifying plane 112
Representation Theorem of Riesz 19
Riemannian manifold 141

s
scalar 27
scalar field 81
scale factors 86
signature 24
signature inner product 24
space curve 110
standard basis Rn 5, 9
surface 118
symmetric linear transformation 15
symmetrizing transformation 60

t
tangent bundle 79
tangent line 111
tangent plane 119
tangent space 79
tangent space (manifold) 139
tangent vector field 126
tangent vector (manifold) 139, 140
tensor, antisymmetric 55
tensor field (manifold) 140
tensorial 51
tensor product 44
tensor, symmetric 55
torsion 113
transformation group 25
transition maps 137

v
vector field 81