
Tensor Calculus

(à la Speedy Gonzales)

The following is a lightning introduction to Tensor Calculus. The presentation in Mathematical Methods for Physicists by G. Arfken is misleading. It does not distinguish between co- and contra-variant (cotangent and tangent) vectors in 7/9 of Chapter 3. Sections 3.8 and 3.9 finally do introduce "Noncartesian Tensors". This is about as pedagogical as (1) dividing the entire fauna into Hippopotami and "Nonhippopotamous animals", then (2) spending more than 3/4 of your time studying these water beasts and thereupon (3) spending less than 1/4 of your time generalizing the graceful characteristics of this species to the remainder of the animal kingdom.

1. Why and What With All This Jazz?!?

First of all, the whole purpose of the Tensor Calculus is to provide a unified framework
for dealing with quantities (and their derivatives and integrals) which depend on several
variables and need several numbers (a.k.a. components) per point in space to be fully
specified. As long as the number of these variables (generalized coordinates) and components is less than four, the usual human imagination and geometric intuition, plus a little pedantry, are often quite satisfactory. However, precious few are the problems which involve only this small number of variables 1).
The classic Tensor Calculus by J.L. Synge and A. Schild (Dover, New York, 1978) is
wholeheartedly recommended, because it is self-contained, inexpensive (about $7.–) and
because it covers some easy applications to Classical Mechanics, Hydrodynamics, Elasticity,
Electromagnetism, Relativity, . . . Finally, it treats Cartesian coordinates as a (very) special
case, which they indeed are.
—◦—

Having learned (and having been intimidated by) some “vector calculus”, with all the
many "vector identities", (multiple) curls, divergences and gradients, contour-, surface-
and volume-integrals. . . you may rightly be asking why tensors? (Never mind what they
are. . .)
Perhaps the most honest answer is "because they exist". Examples are not as abundant in daily experience as vectors and scalars, but they exist: the metric, which is used to define the line element (the differential along a given but arbitrary curve); the electromagnetic stress tensor; the conductivity in an anisotropic medium; the generalized Young modulus, which relates strains and shears to the stresses which created them; . . .
While it is admittedly comfortable to start with the (hopefully) well understood and already familiar Cartesian coordinate system, many things become literally trivial and obvious only in sufficient generality 2). This will be one major motivation for the study of tensors in general.

1) Even a single billiard (pool) ball requires five or occasionally six coordinates: three for rotations (spin) and two (three if it jumps) for translation! As for the whole game. . .
The other motivation is more in the method than in the madness of it; that is, to
some extent, the focus on the particular technique and squeezing as much purchase out of
it as possible will be an end unto itself. The technique relies on the fact that whatever
coordinates are being used are merely a figment of our mathematical description, and not
really essential; therefore, physically meaningful statements (equations, laws, . . .) should
be independent of any choice of coordinates whatsoever. Now, all physically meaningful
statements (equations, laws, . . .) are made in terms of physically meaningful quantities
(temperature, force, acceleration, coefficient of viscosity, electrostatic field, . . .). Thus,
we must first learn how these “building blocks” change when we (arbitrarily) change the
coordinate system, so as to be able to put these together into meaningful statements.
Thereupon, writing down physical laws will still not be as easy as building with A-B-C blocks, but ill-formed candidates will crumble most obviously. In fact, several of the "integration order reducing formulae", such as Gauss's and Stokes's theorems, can be proven by
simply observing that no other expression can be constructed with the required number of
derivatives and integrations, and the integrand at hand, subject to the fact that the left-
hand-side and the right-hand-side must be quantities of the same type (scalar, vector, . . .).
Many of the arguments of tensor calculus come as an evolution of this principle.

1.1. Coordinate systems are free to choose!


The fundamental idea in tensor calculus is the transformation of variables. Given a collection of linearly independent (generalized) coordinates 3) $x^i$, $i = 1, 2, \ldots, n$ to span (describe, parametrize, coordinatize) an $n$-dimensional space, any point in this $n$-dimensional space is unambiguously given by the ordered $n$-tuple $(x^1, x^2, \ldots, x^n)$.
No one may stop us from adopting another collection of (perhaps more suitable or simply nicer) coordinates
$$\tilde x^i = \tilde x^i(x^1, x^2, \ldots, x^n), \qquad i = 1, 2, \ldots, n. \tag{1.1}$$
Clearly, every point in the same $n$-dimensional space now must be unambiguously representable by an ordered $n$-tuple $(\tilde x^1, \tilde x^2, \ldots, \tilde x^n)$. To communicate back and forth with all those who prefer the original coordinate system $\{x^i\}$ and to ensure that the two coordinate systems are both worth the paper on which they are written, the $n$ equations (1.1) must be invertible. As everyone should know by now, that means that the Jacobian of the
transformation
$$J \;\stackrel{\text{def}}{=}\; \frac{\partial(\tilde x^1, \ldots, \tilde x^n)}{\partial(x^1, \ldots, x^n)} \;=\; \det\!\begin{pmatrix} \frac{\partial \tilde x^1}{\partial x^1} & \cdots & \frac{\partial \tilde x^1}{\partial x^n} \\ \vdots & \ddots & \vdots \\ \frac{\partial \tilde x^n}{\partial x^1} & \cdots & \frac{\partial \tilde x^n}{\partial x^n} \end{pmatrix} \tag{1.2}$$
is non-zero. We assume this from now on.
2) This is very, very much like the roundness of the Earth, which is difficult to comprehend unless one is willing to take a global view. But, when one does, things like the Coriolis "force" and its geographical effects become obvious.
3) Please note that $x^i$ is not the $i$-th power of the variable $x$, but the $i$-th variable in the collection. As both sub- and superscripts do occur eventually, there is no reason to obstinately insist on subscripting the coordinate variables.

Note the matrix in Eq. (1.2), called imaginatively the ‘transformation matrix’. The
name is indeed descriptive, as it is used—you guessed it—to transform things from one set
of coordinates to another. Consider the differential of each of the n equations (1.1):
$$d\tilde x^i = \sum_{j=1}^{n} \Bigl(\frac{\partial \tilde x^i}{\partial x^j}\Bigr)\, dx^j, \qquad \text{for each } i = 1, 2, \ldots, n. \tag{1.3}$$

Summations such as this occur all the time and present a tedious nuisance. Note also that the summation index, here $j$, occurs once as a superscript (on $dx^j$) and once as a subscript (on $\partial \tilde x^i/\partial x^j$). Of course, a superscript on something in the denominator counts as a subscript; also, a subscript on something in the denominator counts as a superscript. We therefore adopt the

Einstein Summation Convention: When an index is repeated—once as a superscript and once as a subscript—a summation over this index is understood, the range of the summation being $1, 2, \ldots, n$.

Thus, we can rewrite Eq. (1.3) as $d\tilde x^i = \frac{\partial \tilde x^i}{\partial x^j}\, dx^j$.
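For what it is worth, numpy's einsum implements precisely this convention; a sketch of our own, with an arbitrary made-up matrix standing in for $\partial\tilde x^i/\partial x^j$:

```python
import numpy as np

# Einstein summation in numpy: repeated indices are summed automatically.
# Here J[i, j] plays the role of d(xtilde^i)/d(x^j) and dx[j] of dx^j.
J = np.array([[1.0, 2.0],
              [0.5, 3.0]])
dx = np.array([0.1, -0.2])

dxt = np.einsum('ij,j->i', J, dx)   # dxt^i = J^i_j dx^j, summed over j
print(np.allclose(dxt, J @ dx))     # True: same as the matrix-vector product
```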

The coordinates $x^i$, $i = 1, 2, \ldots, n$ are components of the radius vector $\vec r = x^i \hat e_i$ (remember summation?), the one which points from the origin $(0, 0, \ldots, 0)$ to the point $(x^1, x^2, \ldots, x^n)$. Notice that while the coordinates $x^i$ (the components of the vector $\vec r$) carry a superscript, the basis vectors $\hat e_i$ carry a subscript. What is that all about?
Well, note that the components of a gradient have subscripts:
$$\bigl[\vec\nabla f\bigr]_i = \frac{\partial f}{\partial x^i}. \tag{1.4}$$
The textbook by Arfken is misleading by not distinguishing a vector with contra-variant components, such as $x^i$, from one with co-variant components, such as $\partial f/\partial x^i$. Let's see how these differ.

Let $\tilde x^i$ and $x^i$ be coordinates in two different coordinate systems. Then, for example, (note the summation)
$$d\tilde x^i = \Bigl(\frac{\partial \tilde x^i}{\partial x^j}\Bigr)\, dx^j, \qquad \text{for each } i = 1, 2, \ldots, n, \tag{1.5}$$
using the chain-rule. Straightforward, right? Well, equally straightforward should be the case of the gradient of a scalar function $f(x^i) = \tilde f(\tilde x^i)$, for which (note the summation)
$$\Bigl(\frac{\partial \tilde f}{\partial \tilde x^i}\Bigr) = \Bigl(\frac{\partial x^j}{\partial \tilde x^i}\Bigr)\Bigl(\frac{\partial f}{\partial x^j}\Bigr), \qquad \text{for each } i = 1, 2, \ldots, n. \tag{1.6}$$
Note the absolutely crucial fact that the components of $d\vec r$ transform with $\bigl(\frac{\partial \tilde x^i}{\partial x^j}\bigr)$, while the components of $\vec\nabla f$ transform with $\bigl(\frac{\partial x^j}{\partial \tilde x^i}\bigr)$ !!!
These transformation factors look quite opposite to each other. In fact, when viewed as matrices, they are inverses of one another. Writing all $n$ equations (1.5), we have
$$\begin{pmatrix} d\tilde x^1 \\ \vdots \\ d\tilde x^n \end{pmatrix} = \begin{pmatrix} \frac{\partial \tilde x^1}{\partial x^1} & \cdots & \frac{\partial \tilde x^1}{\partial x^n} \\ \vdots & \ddots & \vdots \\ \frac{\partial \tilde x^n}{\partial x^1} & \cdots & \frac{\partial \tilde x^n}{\partial x^n} \end{pmatrix} \begin{pmatrix} dx^1 \\ \vdots \\ dx^n \end{pmatrix}, \tag{1.7}$$
while
$$\begin{pmatrix} \frac{\partial \tilde f}{\partial \tilde x^1} \\ \vdots \\ \frac{\partial \tilde f}{\partial \tilde x^n} \end{pmatrix} = \begin{pmatrix} \frac{\partial x^1}{\partial \tilde x^1} & \cdots & \frac{\partial x^n}{\partial \tilde x^1} \\ \vdots & \ddots & \vdots \\ \frac{\partial x^1}{\partial \tilde x^n} & \cdots & \frac{\partial x^n}{\partial \tilde x^n} \end{pmatrix} \begin{pmatrix} \frac{\partial f}{\partial x^1} \\ \vdots \\ \frac{\partial f}{\partial x^n} \end{pmatrix}. \tag{1.8}$$
In fact, leaving the index $i$ to run freely over $1, 2, \ldots, n$ in Eqs. (1.5) and (1.6), those equations are equivalent to the above matrix equations. Also, it should be obvious that the matrices $\bigl[\frac{\partial \tilde x^i}{\partial x^j}\bigr]$ and $\bigl[\frac{\partial x^j}{\partial \tilde x^i}\bigr]$ are inverse to each other (do the calculation!):
$$\begin{pmatrix} \frac{\partial \tilde x^1}{\partial x^1} & \cdots & \frac{\partial \tilde x^1}{\partial x^n} \\ \vdots & \ddots & \vdots \\ \frac{\partial \tilde x^n}{\partial x^1} & \cdots & \frac{\partial \tilde x^n}{\partial x^n} \end{pmatrix} \begin{pmatrix} \frac{\partial x^1}{\partial \tilde x^1} & \cdots & \frac{\partial x^1}{\partial \tilde x^n} \\ \vdots & \ddots & \vdots \\ \frac{\partial x^n}{\partial \tilde x^1} & \cdots & \frac{\partial x^n}{\partial \tilde x^n} \end{pmatrix} = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}. \tag{1.9}$$
This same, rewritten "in the index notation", becomes:
$$\frac{\partial \tilde x^i}{\partial x^j}\,\frac{\partial x^j}{\partial \tilde x^k} = \frac{\partial \tilde x^i}{\partial \tilde x^k} = \delta^i_k, \qquad \delta^i_k = \begin{cases} 1 & i = k, \\ 0 & i \neq k, \end{cases} \tag{1.10}$$
since the $n$ variables $\tilde x^i$ are linearly independent. Of course, it is also true that $\frac{\partial x^i}{\partial \tilde x^j}\,\frac{\partial \tilde x^j}{\partial x^k} = \delta^i_k$.
The quantity $\delta^i_k$ is called the Kronecker symbol. To save spacetime and paper, we'll never write out things like (1.9) again.
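A quick numerical sanity check of Eqs. (1.9)–(1.10), again a sketch of our own using numpy and the polar/Cartesian pair of coordinate systems:

```python
import numpy as np

r, theta = 2.0, 0.7

# Jacobian of (x, y) with respect to (r, theta): rows are xtilde^i, columns x^j.
J_fwd = np.array([[np.cos(theta), -r * np.sin(theta)],
                  [np.sin(theta),  r * np.cos(theta)]])

x, y = r * np.cos(theta), r * np.sin(theta)
rho = np.hypot(x, y)

# Jacobian of (r, theta) with respect to (x, y), from the inverse map r = sqrt(x^2+y^2), theta = atan2(y, x).
J_inv = np.array([[ x / rho,     y / rho],
                  [-y / rho**2,  x / rho**2]])

# Their product is the Kronecker delta (the identity matrix), as in Eq. (1.9).
print(np.allclose(J_fwd @ J_inv, np.eye(2)))   # True
```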
—◦—

Thus we conclude that not all vectors transform equally with respect to an arbitrary
change of coordinates. Some will transform as in (1.6), and some as in (1.5). This turns
out to exhaust all possibilities, and so we in general have (1.5)-like and (1.6)-like vectors.
Historically, the following names became adopted:

Definition.
A vector is contra-variant if its components transform oppositely from the way those of $\vec\nabla$ do.
A vector is co-variant if its components transform the way those of $\vec\nabla$ do.
—◦—

As frivolous as it may seem, this notation easily provides for the following.
A vector $\vec A = A^i \hat e_i$ (describing the wind, the rotation of the Earth around the Sun, the gravitational field, the electrostatic field, . . .) couldn't care less which coordinates we choose to describe it. Thus, it must be true that $\tilde{\vec A} = \vec A$. Indeed:
$$\tilde{\vec A} = \tilde A^i\, \hat{\tilde e}_i = \Bigl[\Bigl(\frac{\partial \tilde x^i}{\partial x^j}\Bigr) A^j\Bigr]\Bigl[\Bigl(\frac{\partial x^k}{\partial \tilde x^i}\Bigr) \hat e_k\Bigr] = \Bigl[\Bigl(\frac{\partial \tilde x^i}{\partial x^j}\Bigr)\Bigl(\frac{\partial x^k}{\partial \tilde x^i}\Bigr)\Bigr] A^j \hat e_k = \delta^k_j\, \hat e_k\, A^j = A^k \hat e_k = \vec A, \tag{1.11}$$
where we have used that (recall summation) $\delta^k_j A^j = A^k$ (why?), and Eq. (1.9).

Just the same, if we have a contravariant vector $\vec A$ (with components $A^i$) and another, covariant vector $\vec B$ (with components $B_i$), we can form a product $(A^i B_i)$, which does not transform. Since precisely that—no transformation—is the key property of what is called a scalar, the product $(A^i B_i)$ has earned its name—the scalar product.
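Here, too, one can see the invariance numerically; the following little numpy sketch (our own illustration, with a made-up transformation matrix) contracts a contravariant with a covariant vector in both coordinate systems:

```python
import numpy as np

rng = np.random.default_rng(0)
L = rng.normal(size=(3, 3))          # stand-in for the matrix d(xtilde^i)/d(x^j)
L_inv = np.linalg.inv(L)             # then d(x^j)/d(xtilde^i)

A = rng.normal(size=3)               # contravariant components A^i
B = rng.normal(size=3)               # covariant components B_i

A_t = np.einsum('ij,j->i', L, A)       # A transforms with L, as in Eq. (1.5)
B_t = np.einsum('ji,j->i', L_inv, B)   # B transforms with the inverse, as in Eq. (1.6)

print(np.allclose(A @ B, A_t @ B_t))   # True: A^i B_i is a scalar
```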

More Names.
One also says that two indices are "contracted": in $A^i \hat e_i$, the two copies of $i$ are contracted and there is an implicit summation with $i$ running over $1, 2, \ldots, n$. A contracted index is also called a "dummy" index. It is sort-of the discrete version of a variable over which one has integrated.

1.2. More Products and More Indices


Suppose we are given two contravariant vectors, $\vec A$ and $\vec B$, with components $A^i$, $B^j$. Consider the product $A^i B^j$, with both $i$ and $j$ left to run freely, $i, j = 1, 2, \ldots, n$. Clearly, there are $n^2$ such quantities: $A^1 B^1, A^1 B^2, A^1 B^3, \ldots, A^n B^n$. How does such a composite quantity $C^{ij} = A^i B^j$ transform?
Straightforwardly:
$$\tilde C^{ij} = \tilde A^i \tilde B^j = \frac{\partial \tilde x^i}{\partial x^k}\frac{\partial \tilde x^j}{\partial x^l}\, A^k B^l = \frac{\partial \tilde x^i}{\partial x^k}\frac{\partial \tilde x^j}{\partial x^l}\, C^{kl}. \tag{1.12}$$
More generally, a quantity that has $p$ superscripts and $q$ subscripts may transform simply as a product of $p$ contravariant vector components and $q$ covariant vector components; that is, as
$$\tilde T^{i_1 \cdots i_p}_{j_1 \cdots j_q} = \frac{\partial \tilde x^{i_1}}{\partial x^{k_1}} \cdots \frac{\partial \tilde x^{i_p}}{\partial x^{k_p}}\; \frac{\partial x^{l_1}}{\partial \tilde x^{j_1}} \cdots \frac{\partial x^{l_q}}{\partial \tilde x^{j_q}}\; T^{k_1 \cdots k_p}_{l_1 \cdots l_q}. \tag{1.13}$$

Definition.
Any quantity like $T^{k_1 \cdots k_p}_{l_1 \cdots l_q}$, which transforms according to Eq. (1.13) for some $p, q$, is called a tensor.
The total number of transformation factors (= number of free indices) acting on a tensor $T^{k_1 \cdots k_p}_{l_1 \cdots l_q}$ is called the rank of the tensor ($= p+q$).
The type of a tensor is the pair of numbers $(p, q)$, specifying the transformation matrices $\partial\tilde x/\partial x$ and $\partial x/\partial\tilde x$ separately.
—◦—
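In numpy, the transformation law (1.13) is a one-liner with einsum. The following sketch is our own, with an arbitrary invertible matrix standing in for $\partial\tilde x/\partial x$; it transforms a type-(1,1) tensor and checks that its trace (a full contraction) is a scalar:

```python
import numpy as np

rng = np.random.default_rng(1)
L = rng.normal(size=(3, 3))    # d(xtilde^i)/d(x^j)
L_inv = np.linalg.inv(L)       # d(x^j)/d(xtilde^i)

T = rng.normal(size=(3, 3))    # a type-(1,1) tensor, components T^k_l

# Eq. (1.13) with p = q = 1:  Ttilde^i_j = (dxt^i/dx^k) (dx^l/dxt^j) T^k_l
T_tilde = np.einsum('ik,lj,kl->ij', L, L_inv, T)

# The trace T^k_k is a scalar (one upper index contracted with one lower one).
print(np.allclose(np.trace(T_tilde), np.trace(T)))   # True
```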

Section 3.3 introduces what is known as the “quotient rule” and is likely to confuse.
Here’s (one possible version of) the corrected list:
$$K^i A_i = B \tag{3.29a$'$}$$
$$K^i_j A_i = B_j \tag{3.29b$'$}$$
$$K^{ij} A_{jk} = B^i_{\ k} \tag{3.29c$'$}$$
$$K^{ij}_{kl} A_{ij} = B_{kl} \tag{3.29d$'$}$$
$$K_{ij} A^k = B_{ij}{}^{k} \tag{3.29e$'$}$$

Note that the contracted indices do not appear on the other side, while all the free ones
do. This is precisely the “quotient rule” (not that anything is being quotiented): the left-
and the right-hand side of any equation must transform the same and this can easily be
checked to be true for any of these or similar expressions. Sometimes, this is also called the
“index conservation rule”, and is simply an extension of the old saw about being careful
not to equate Apples with PC’s.
Let us check Eq. (3.29b$'$). In the twiddled coordinate system, it reads $\tilde K^i_j \tilde A_i = \tilde B_j$. Since the $\tilde B_j$ are components of a covariant vector, we have
$$\tilde K^i_j \tilde A_i = \tilde B_j = \frac{\partial x^l}{\partial \tilde x^j}\, B_l \overset{(3.29b')}{=} \frac{\partial x^l}{\partial \tilde x^j}\bigl(K^m_l A_m\bigr); \tag{1.14}$$
in using Eq. (3.29b$'$), the free index was $l$ and we labeled the dummy index by $m$ (it can be anything you please, as long as it cannot get confused with some other index 4)).
Now, $A_m$ is also covariant; transforming it back to the twiddled system and moving the r.h.s. to the left produces
$$\tilde K^i_j \tilde A_i - \frac{\partial x^l}{\partial \tilde x^j}\, K^m_l\, \frac{\partial \tilde x^i}{\partial x^m}\, \tilde A_i = 0, \tag{1.15}$$
which is easily rewritten as
$$\Bigl[\tilde K^i_j - \frac{\partial x^l}{\partial \tilde x^j}\frac{\partial \tilde x^i}{\partial x^m}\, K^m_l\Bigr] \tilde A_i = 0. \tag{1.16}$$
As this must be true for any $\tilde A_i$ (we only used its transformation properties, not the direction or magnitude), it must be that
$$\tilde K^i_j = \frac{\partial x^l}{\partial \tilde x^j}\frac{\partial \tilde x^i}{\partial x^m}\, K^m_l, \tag{1.17}$$
which says that $K^i_j$ is a rank-2 tensor, of type (1,1), as we could have read off directly from $K^i_j$ having two free indices, one superscript and one subscript—thus, a little care with the "index notation" makes the calculation (1.14)–(1.17) trivial.
Finally, note that by consistently keeping track of the upper and lower indices (super- and sub-scripts) of a tensor, we know precisely how it transforms under an arbitrary change of coordinates. Also, the footnote on p. 127 is unnecessary; you don't need to lose your mind over deciding which "cosine $a_{jl}$" to use.
—◦—

Consider the $i$-th component of the gradient of a scalar, $[\vec\nabla f]_i = \frac{\partial f}{\partial x^i}$. Since both the $i$-th component of $\vec\nabla f$ and $\hat e_i$ are components of covariant vectors, how can we contract them? Recall that this issue is not new: the components of $d\vec r$ are contravariant ($dx^i$), yet we can calculate its magnitude,
$$ds^2 \stackrel{\text{def}}{=} d\vec r \cdot d\vec r = g_{ij}\, dx^i dx^j, \tag{1.18}$$

4)
A dummy index is very much like the integration variable in a definite integral; do not ever
use a symbol which was already present in the expression for a new dummy index!!!

where $g_{ij}$ is called the metric. $ds^2$ is taken to mean the (physically measurable, squared) distance between two (infinitesimally near) points and should be independent of our choice of coordinates; $ds^2$ is defined to be a scalar, hence the scalar product. For the left- and the right-hand side of this defining equation to make sense, $g_{ij}$ must transform as a rank-2 covariant tensor. Moreover, $g_{ij} = g_{ji}$, since we can swap $dx^i$ and $dx^j$ and then relabel $i \leftrightarrow j$ without ever changing the overall sign:
$$g_{ij}\, dx^i dx^j = g_{ij}\, dx^j dx^i = g_{ji}\, dx^i dx^j, \qquad \bigl[g_{ij} - g_{ji}\bigr]\, dx^i dx^j = 0, \tag{1.19}$$
whence $g_{ij} = g_{ji}$.
$g_{ij}$ being a matrix, we define $g^{ij}$ to be the matrix-inverse of $g_{ij}$. That is, $g^{ij}$ is defined to be that matrix which satisfies
$$g^{ij} g_{jk} = \delta^i_k, \qquad \text{and} \qquad g_{ij} g^{jk} = \delta_i^{\ k}. \tag{1.20}$$
The uniqueness of $g^{ij}$ is an elementary fact of matrix algebra. The only minor point we need to make is that $g_{ij}$ is also a function of space, $g_{ij} = g_{ij}(x^1, \ldots, x^n)$, while $\delta^i_k$ is a constant. Clearly, therefore, $g^{ij} = g^{ij}(x^1, \ldots, x^n)$ is then such a matrix-valued function that the relations (1.20) hold point-by-point in all of the $(x^1, \ldots, x^n)$-space.
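For instance (a sketch of our own, not from the text): in polar coordinates on the plane, $ds^2 = dr^2 + r^2\, d\theta^2$, so the metric and its inverse are easy to write down and check against Eq. (1.20):

```python
import numpy as np

r = 2.0
g = np.diag([1.0, r**2])            # g_ij for (r, theta): ds^2 = dr^2 + r^2 dtheta^2
g_inv = np.linalg.inv(g)            # g^ij, the matrix inverse

print(np.allclose(np.einsum('ij,jk->ik', g_inv, g), np.eye(2)))   # Eq. (1.20): True

# The line element: a small displacement (dr, dtheta) has squared length g_ij dx^i dx^j.
dx = np.array([0.01, 0.02])
ds2 = np.einsum('ij,i,j->', g, dx, dx)
print(ds2)                          # 0.01**2 + r**2 * 0.02**2 = 0.0017
```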

Having introduced $g^{ij}$, it is easy to write
$$\vec\nabla f = \Bigl(\frac{\partial f}{\partial x^i}\Bigr) g^{ij}\, \hat e_j = \Bigl(\frac{\partial f}{\partial x^i}\Bigr) \hat e^i, \tag{1.21}$$
with
$$\hat e^i \stackrel{\text{def}}{=} g^{ij}\, \hat e_j. \tag{1.22}$$
More generally, just like $d\vec r \cdot d\vec r = g_{ij}\, dx^i dx^j$, we have
$$\vec A \cdot \vec B = g_{ij}\, A^i B^j, \qquad \vec C \cdot \vec D = g^{ij}\, C_i D_j, \qquad \vec A \cdot \vec C = A^i C_i, \tag{1.23}$$
and so on, for any two contravariant vectors $\vec A, \vec B$ and any two covariant vectors $\vec C, \vec D$.
The combination $(g_{ij} A^i) = A_j$ transforms as a covariant vector; the index on $A^i$ has been lowered. Similarly, the combination $g^{ij} C_i = C^j$ transforms as a contravariant vector; the index on $C_i$ has been raised. Thus, upon raising and lowering indices, contractions are performed just like in Eqs. (3.29$'$)—as corrected above.
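Again in numpy (our own illustration, reusing the polar-coordinate metric from the sketch above): lowering an index is a contraction with $g_{ij}$, raising one a contraction with $g^{ij}$:

```python
import numpy as np

r = 2.0
g = np.diag([1.0, r**2])                 # g_ij in polar coordinates
g_inv = np.linalg.inv(g)                 # g^ij

A_up = np.array([0.3, -0.1])             # contravariant components A^i
A_down = np.einsum('ij,j->i', g, A_up)   # A_j = g_ij A^i (index lowered)

# Raising the index again recovers the original components.
print(np.allclose(np.einsum('ij,j->i', g_inv, A_down), A_up))   # True

# The scalar product of Eq. (1.23), written two equivalent ways.
B_up = np.array([1.0, 0.5])
print(np.isclose(np.einsum('ij,i,j->', g, A_up, B_up), A_down @ B_up))   # True
```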
Of course, even if we have tensors of higher rank, the metric $g_{ij}$ or the inverse-metric $g^{ij}$ is used to contract indices. For example, the tensors $T^{ij}$, $U^{klm}$ may be contracted using the metric once:
$$g_{ik}\, T^{ij} U^{klm} = \bigl[T \cdot U\bigr]^{jlm}, \tag{1.24}$$
or twice
$$g_{ik}\, g_{jl}\, T^{ij} U^{klm} = \bigl[T \cdot U\bigr]^{m}. \tag{1.25}$$
Clearly, at this point, the arrow-and-dot notation becomes very confusing and is best abandoned. (For tensors of rank 5, you'd need to stack five arrows atop each other.)

1.3. The cross-product

The formulae (1.23) provide a perfectly general definition of the scalar product of any two
vectors; the expressions in (1.23) are invariant under absolutely any change of coordinates,
and in any number of dimensions.
How about the cross-product then? Well, recall the 'primitive' definition $\vec A \times \vec B = \hat n\, |\vec A|\, |\vec B| \sin\theta$, where $\theta$ is the angle between $\vec A$ and $\vec B$, and $\hat n$ is the unit vector perpendicular to both $\vec A$ and $\vec B$, chosen such that the triple $\vec A, \vec B, \hat n$ forms a right-handed triple. Already this reveals that the cross-product exists only in three dimensions! In two dimensions, there can be no $\hat n$ (in two dimensions, there is no third direction!), and in $n$ dimensions, there are $n-2$ linearly independent $\hat n$'s, all of which are orthogonal to the two given vectors.
Without much ado, the standard determinant formula for the cross-product of two three-dimensional covariant vectors is matched with the following index-notation formula:
$$(\vec A \times \vec B)^i = \epsilon^{ijk} A_j B_k, \tag{1.26}$$
where
$$\epsilon^{ijk} \stackrel{\text{def}}{=} \begin{cases} +1 & i,j,k \text{ is an even permutation of } 1,2,3; \\ -1 & i,j,k \text{ is an odd permutation of } 1,2,3; \\ \ \ 0 & \text{otherwise}, \end{cases} \tag{1.27}$$
is the Levi-Civita symbol, also called the totally antisymmetric symbol, or the alternating symbol. Indeed, $\epsilon^{ijk} = -\epsilon^{ikj} = -\epsilon^{jik} = -\epsilon^{kji} = +\epsilon^{jki} = +\epsilon^{kij}$, for $i, j, k = 1, 2, 3$.
Note that the cross product of two covariant vectors produced a contravariant one (1.26). Having learned, however, the trick that contraction with the metric (i.e., the inverse-metric) lowers (i.e., raises) indices, we can also define
$$(\vec A \times \vec C)^i = \epsilon^{ijk} A_j\, g_{kl} C^l, \tag{1.28}$$
and
$$(\vec C \times \vec D)^i = \epsilon^{ijk}\, g_{jm} C^m\, g_{kl} D^l. \tag{1.29}$$
Of course, it is equally reasonable to write
$$(\vec C \times \vec D)^i = g^{ij}\, \epsilon_{jkl}\, C^k D^l, \qquad \epsilon_{ijk} \stackrel{\text{def}}{=} g_{il}\, g_{jm}\, g_{kn}\, \epsilon^{lmn}. \tag{1.30}$$
A-ha: Cartesian coordinate systems are indeed very simple, in that there $g_{ik} = \delta_{ik}$ form the identity matrix. And, yes—this simplicity is preserved only by constant rotations, not by general coordinate transformations!
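A small numpy sketch (ours, not part of the original notes) of Eqs. (1.26)–(1.27): build the Levi-Civita symbol explicitly and check the index formula against the familiar component cross product, here in Cartesian coordinates where upper and lower indices coincide:

```python
import numpy as np
from itertools import permutations

# Levi-Civita symbol in three dimensions, Eq. (1.27).
eps = np.zeros((3, 3, 3))
for i, j, k in permutations(range(3)):
    # sign of the permutation (i, j, k) of (0, 1, 2)
    eps[i, j, k] = np.sign((j - i) * (k - i) * (k - j))

A = np.array([1.0, 2.0, 3.0])    # components A_j
B = np.array([-1.0, 0.5, 4.0])   # components B_k

cross = np.einsum('ijk,j,k->i', eps, A, B)   # Eq. (1.26)
print(np.allclose(cross, np.cross(A, B)))    # True
```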
In $n$ dimensions, the $\epsilon^{\cdots}$ symbol will have $n$ indices, but will be defined through a formula very much like (1.27). While its utility in writing the cross-product is limited to three dimensions, it can be used to write a perfectly general formula for the determinant of a matrix:
$$\det[L] = \tfrac{1}{n!}\, \epsilon^{i_1 \cdots i_n}\, \epsilon^{j_1 \cdots j_n}\, L_{i_1 j_1} \cdots L_{i_n j_n} = \epsilon^{i_1 \cdots i_n}\, L_{i_1 1} \cdots L_{i_n n}; \tag{1.31}$$
$$\det[M] = \tfrac{1}{n!}\, \epsilon^{i_1 \cdots i_n}\, \epsilon_{j_1 \cdots j_n}\, M^{j_1}_{i_1} \cdots M^{j_n}_{i_n} = \epsilon^{i_1 \cdots i_n}\, M^{1}_{i_1} \cdots M^{n}_{i_n}; \tag{1.32}$$
and
$$\det[N] = \tfrac{1}{n!}\, \epsilon_{i_1 \cdots i_n}\, \epsilon_{j_1 \cdots j_n}\, N^{i_1 j_1} \cdots N^{i_n j_n} = \epsilon_{i_1 \cdots i_n}\, N^{i_1 1} \cdots N^{i_n n}. \tag{1.33}$$
Here, $L$ is a twice-covariant matrix (rank-two tensor), $M$ a mixed one, and $N$ is a twice-contravariant matrix.
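Continuing the little sketch from above (same construction of the ε array, and again entirely ours), Eq. (1.31) can be checked directly against numpy's determinant; for a 3×3 matrix the $n!$-fold sum is tiny:

```python
import numpy as np
from itertools import permutations
from math import factorial

n = 3
eps = np.zeros((n, n, n))
for i, j, k in permutations(range(n)):
    eps[i, j, k] = np.sign((j - i) * (k - i) * (k - j))

L = np.array([[2.0, 1.0, 0.0],
              [0.5, 3.0, 1.0],
              [1.0, 0.0, 4.0]])

# det L = (1/n!) eps^{i1 i2 i3} eps^{j1 j2 j3} L_{i1 j1} L_{i2 j2} L_{i3 j3}, Eq. (1.31)
det_L = np.einsum('ijk,lmn,il,jm,kn->', eps, eps, L, L, L) / factorial(n)
print(np.isclose(det_L, np.linalg.det(L)))   # True
```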

1.4. Further stuff...


There is another contradiction in the textbook. On p. 158, it states that the vectors $\vec\varepsilon_1, \vec\varepsilon_2, \vec\varepsilon_3$ are not necessarily orthogonal. Yet, on the next page, Eq. (3.128), it states that $\vec\varepsilon_i = h_i \hat e_i$, for each $i$ and with no summation. But if each $\vec\varepsilon_i$ is simply proportional (by a factor $h_i$) to the $\hat e_i$—which were treated throughout as orthogonal—then the $\vec\varepsilon_i$ must be orthogonal also! Remember that the scaling coefficients $h_i$ were defined as the square-roots of the diagonal elements of the metric, $h_i = \sqrt{g_{ii}}$, and for orthogonal systems only! The definition of the basis vectors
$$\vec\varepsilon_i \stackrel{\text{def}}{=} \frac{\partial \vec r}{\partial x^i} \tag{1.34}$$
of course makes sense in general, and the formulae after Eq. (3.128) are correct in general. The paragraph between Eq. (3.127) and Eq. (3.128), including the latter, is correct only for orthogonal systems, since the $\hat e_i$ were always treated as orthogonal.
By contrast, throughout these notes, the basis vectors $\hat e_i$ were never accused of orthogonality or, Heavens forbid, of having their length fixed to 1; a set of basis vectors $\{\hat e_i\}$ we presume general until proven otherwise!
Finally, note that merely counting the free indices on a quantity does not necessarily tell how that quantity transforms (and this is not contradicting the statements above). Facing an unknown quantity, the burden of showing that it does transform as a respectable tensor lies not on the quantity but on you. Consider, for example, transforming the derivative of a vector:
$$\frac{\partial \tilde A^j}{\partial \tilde x^i} = \frac{\partial x^k}{\partial \tilde x^i}\frac{\partial}{\partial x^k}\Bigl(\frac{\partial \tilde x^j}{\partial x^l}\, A^l\Bigr) = \frac{\partial x^k}{\partial \tilde x^i}\frac{\partial \tilde x^j}{\partial x^l}\frac{\partial A^l}{\partial x^k} + \frac{\partial x^k}{\partial \tilde x^i}\frac{\partial^2 \tilde x^j}{\partial x^l \partial x^k}\, A^l. \tag{1.35}$$
The second term reveals that $\frac{\partial A^j}{\partial x^i}$—which we could naïvely think of as the gradient of the contravariant vector $\vec A$—is not a tensor; not even the contraction $\frac{\partial A^i}{\partial x^i}$ is.
The remedy for this is to replace the usual partial derivative with another one, the covariant derivative operator, but that and its consequences would make these notes considerably longer, which was not the intention. Suffice it merely to note that (ah, yes; rather confusingly) the components of this covariant derivative are denoted by $\nabla_i$. It acts as follows:
$$\nabla_i f = \frac{\partial f}{\partial x^i}, \tag{1.36a}$$
$$\nabla_i A^j = \frac{\partial A^j}{\partial x^i} + \Gamma^j_{ik}\, A^k, \tag{1.36b}$$
$$\nabla_i A_j = \frac{\partial A_j}{\partial x^i} - \Gamma^k_{ij}\, A_k, \tag{1.36c}$$
$$\nabla_i A_{jk} = \frac{\partial A_{jk}}{\partial x^i} - \Gamma^l_{ij}\, A_{lk} - \Gamma^l_{ik}\, A_{jl}, \tag{1.36d}$$
$$\nabla_i A^k_j = \frac{\partial A^k_j}{\partial x^i} - \Gamma^l_{ij}\, A^k_l + \Gamma^k_{il}\, A^l_j, \tag{1.36e}$$
and so on: an additional $\Gamma$-term is added per index, positive for superscripts, negative for subscripts. This $\Gamma$-object is defined entirely in terms of the metric (and its inverse) as
$$\Gamma^i_{jk} \stackrel{\text{def}}{=} \tfrac{1}{2}\, g^{il}\Bigl(\frac{\partial g_{jl}}{\partial x^k} + \frac{\partial g_{lk}}{\partial x^j} - \frac{\partial g_{jk}}{\partial x^l}\Bigr). \tag{1.37}$$
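As a closing illustration (again ours, not part of the original notes), Eq. (1.37) is easy to evaluate symbolically; for the polar-coordinate metric of the plane one recovers the familiar $\Gamma^r_{\theta\theta} = -r$ and $\Gamma^\theta_{r\theta} = \Gamma^\theta_{\theta r} = 1/r$:

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
coords = (r, theta)
g = sp.Matrix([[1, 0], [0, r**2]])   # g_ij for the plane in polar coordinates
g_inv = g.inv()                      # g^ij
n = len(coords)

def christoffel(i, j, k):
    """Gamma^i_{jk} from Eq. (1.37)."""
    return sp.simplify(sp.Rational(1, 2) * sum(
        g_inv[i, l] * (sp.diff(g[j, l], coords[k])
                       + sp.diff(g[l, k], coords[j])
                       - sp.diff(g[j, k], coords[l]))
        for l in range(n)))

print(christoffel(0, 1, 1))   # -r    (Gamma^r_{theta theta})
print(christoffel(1, 0, 1))   # 1/r   (Gamma^theta_{r theta})
```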
