Commentary 1.1 Tensors Matrices and Indexes
This all started because I was confused about the difference between co- and contra-
variant vectors and how to turn tensors into matrices. There seemed to be conflicting
definitions, most of which I have now resolved. The exploration took me over a month, when I had intended only a few days. Along the way I created some Word macros to help write all the equations, of which there might be 400. I have now found about 25 useful rules for tensors, vectors, indices and matrices. They come first, then many notes and examples, and finally a contents list at the end.
1) The rules
1. A tensor of type (or rank) (k, l) has k upper indices and l lower indices. (4b)
2. If we just say a tensor has rank n, we are being a bit vague: n = k + l.
3. A scalar is a type (0,0) tensor. Scalars are invariant under coordinate changes.
(4b)
4. A vector (or contravariant vector) is a type (1,0) tensor, e.g. . Each is a
component of the vector . (4b)
5. A dual vector (or one-form or covariant vector or covector) is a type (0,1) tensor,
e.g. (4b)
6. Upper indices are co↑travariant, lower indices are co↓ariant (3a Up-Down
Mnemonic)
7. Both kinds of vectors can be written as column or row matrices. Section 5b3 and
many other examples.
8. When we say a vector is covariant or contravariant we really mean its components
are covariant or contravariant. A vector is invariant under coordinate changes. Its
components are not. Therefore length and velocity are not real vectors in
Minkowski spacetime.
9. A rank 2 tensor can be written as a two dimensional matrix. Components must be
written so that the first index indicates row components and the second index
column components. (2c RC Mnemonic). Whether the indices are up or down is
irrelevant for this rule.
10. By 'contracting' two tensors we mean multiplying and adding
components, e.g. . We would write this which would produce . This
is most easily done by multiplying the matrices or . Subject to these
rules:
a. When converting tensors to matrices for multiplication, the summed indices
must be last and first (or adjacent): not . can be
calculated without matrices but it's horrid. Section 2a.
b. If you know the matrix for a rank 2 tensor, then the matrix of the
tensor with its indices reversed is the transpose of that matrix. Section 2b.
Footnote 1: Indexes or Indices? It does not matter. Following Carroll I have used the latter, except that the file name for this document uses the former.
c. You can only contract upper and lower indices. Contracting two upper or
two lower indices would give something that is not a proper tensor. (4e)
d. a,b,c even apply to the partial derivative matrix . (13) / Section 2e.
e. The metric is a type (0,2) tensor and it lowers an index.
The inverse metric raises an index. (4f)
$\eta^{\mu\nu}\eta_{\nu\lambda} = \delta^{\mu}{}_{\lambda}$, the Kronecker delta (identity matrix). Note C.
Replace $\eta$ by $g$ when in curved spacetime.
f. The order of the tensors does not matter (unlike matrices).
$A^{\mu}{}_{\nu} B^{\nu}{}_{\rho} = B^{\nu}{}_{\rho} A^{\mu}{}_{\nu}$ always. $AB = BA$ rarely.
The same applies for tensors of any rank including vectors.
11. Co-/contra-variant vector components combined give the correct magnitude (length)
and dot products. Section 5.
a. Norm (= length²): $|V|^2 = V^\mu V_\mu$.
b. Dot product: $U \cdot V = U^\mu V_\mu$.
c. a & b only work on flat manifolds (where all components of $g_{\mu\nu}$ are constant). Section 5c. (A short numerical sketch illustrating rules 9-11 follows the list.)
12. More tensor rules
a. Raising or lowering an index does not change its left-right order. Free
indices must be the same on both sides of the equation. Dummy indices
only appear on one side of the equation - they are being summed over.
(30)
Correct: .
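To make rules 9-11 concrete, here is a minimal numpy sketch (mine, not from the text). The component values are made-up illustrative numbers; the Minkowski signature (-,+,+,+) is assumed, following Carroll.

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])      # Minkowski metric eta_{mu nu}, assumed signature (-,+,+,+)
A = np.arange(16.0).reshape(4, 4)         # components A^mu_nu: first index = row, second = column (rule 9)
B = np.arange(16.0, 32.0).reshape(4, 4)   # components B^nu_rho

# Rule 10: the contraction A^mu_nu B^nu_rho sums the last index of A with the first of B,
# so it is just the matrix product (rule 10a: summed indices adjacent).
C = A @ B
assert np.allclose(C, np.einsum('mn,nr->mr', A, B))

# Rule 10b: swapping the two indices of a rank 2 tensor transposes its matrix.
assert np.allclose(np.einsum('mn->nm', A), A.T)

# Rule 10e: the metric lowers an index, V_mu = eta_{mu nu} V^nu.
V = np.array([2.0, 1.0, 0.0, 0.0])        # a contravariant vector V^mu
V_lower = eta @ V

# Rule 11a: the norm (= length squared) is V^mu V_mu.
print(V @ V_lower)                        # -3.0 here: -(2**2) + 1**2
```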
2) Indices, rows, columns
(1)
The notation above uses the Einstein summation convention. Therefore, in order for matrix multiplication to be defined, the dimensions of the matrices must satisfy
(2)
where $A_{m \times n}$ denotes a matrix with $m$ rows and $n$ columns. Writing out the product explicitly,
(3)
where
(4)
Fig 2.01
Fig 2.02
Similar strange things happen with other orders of indices in . Rather than changing the rules of matrix multiplication depending on that order, it is much easier to change the order of the terms on the RHS or use the fact that swapping indices is the same as transposing the matrix, which takes us to the next point.
2b) Transpose matrix same as swapping indices
(6)
$E_i$ are the electric field vector components and $B_i$ the magnetic field components.
.
This means that when we write out the tensor with its indices swapped as a matrix, we can always use the transpose of the original matrix. This is very useful for getting indices adjacent when calculating contractions. It's worth writing as a rule:
2c) RC Mnemonic
When a rank 2 tensor is written as a matrix, the first index gives the Row and the second the Column. Also, to multiply matrices you multiply all the elements of a Row in A by the corresponding elements of a Column in B and add them up. This gives you the element at that Row / Column in C. Once again, Roman Catholic is a mnemonic but the real business is done by the indices.
It is also usually mandatory to make summations only when the summed index is upper on one variable and lower on the other. So equation (1) should be
(8)
Matrix multiplication is not, in general, commutative: AB ≠ BA. When one is the identity
matrix, they clearly commute.
Unlike matrices, the order of terms when indices are used is not important:
(9)
But often it is useful to rearrange terms and swap indices / transpose matrices before writing them out as matrices.
(10)
is OK. It is also often written
(11)
That is
(12)
Which all makes sense for the upper / lower rule on summation indices. I have not come across a notation like this, but the implication would be obvious (but possibly see 4k 10C8).
Therefore like (7)
(14)
If we carefully multiply out the components as in (16)-(19), we end up with exactly the same problem as we had in section 2a. The solution is to swap the indices in (14) and write the matrix for the index-swapped tensor as the transpose of the original matrix.
Row 1, Col 1 (16): Row 1 contracts with row 1
Row 1, Col 2 (17): Row 1 contracts with row 2
Row 2, Col 1 (18): Row 2 contracts with row 1
Row 2, Col 2 (19): Row 2 contracts with row 2
3) Vectors
Here is a useful Wikipedia article on Covariance and contravariance of vectors. On
detailed investigation of some examples (Section 5) things turned out not to be so
simple.
Contravariant vectors
Carroll just calls these vectors.
From Wiki: "For a vector (such as a direction vector or velocity vector) to be basis-
independent, the components of the vector must contra-vary with a change of basis to
compensate. That is, the matrix that transforms the vector components must be the
inverse of the matrix that transforms the basis vectors. The components of vectors (as
opposed to those of covectors) are said to be contravariant. Examples of vectors with
contravariant components include the position of an object relative to an observer, or
any derivative of position with respect to time, including velocity, acceleration. In
Einstein notation, contravariant components are denoted with upper indices as in
(20)
"
Carroll writes that as (his equation 1.37)
(21)
The circumflex and the parentheses on the indices are to remind us that these are basis vectors, not vector components.
(23)
Immediately after that Carroll does acknowledge the names contra-/co-variant vectors, but says that "in this day and age these terms sound a little dated."
4) Tensors
These are notes taken from sections 1.6 and 1.7 of the book.
(24)
(25)
(26)
We could write out the values for U as two lots of (25) multiplied by e and two lots
multiplied by f. We could continue to pile up indices. The more we have, the more values
(or functions) we need to enumerate the tensor. We also notice that the tensor has
taken on co- and contra-variant indices and components.
After (1.57): A scalar is a type (0,0) tensor, a vector is a type (1,0) tensor, a dual vector
is a type (0,1) tensor.
(27)
Carroll writes that we can check that is a well-defined tensor and that it would not
be well defined if we contracted over two upper or two lower indices. I leave this for the
future.
At (1.71): The order of indices still matters:
(28)
in general.
10C6: "You shall not sum over two covariant indices. Nor shall you sum over two
contravariant indices."
(30)
Interestingly, Carroll is now using Greek letters for vectors ( ) and Latin for dual
vectors ( ). This reinforces our belief that it's the components, not the vectors, that are dual or not.
10C3: "You shall not have different free indices on opposite sides of an equality or in
different terms of the same expression."
10C5: "You shall not have more than two of any one index in an index expression."
(31)
4k) The 10 Commandments of Index Expressions and Tensor Calculus
I have marked some of the points above with 10Cx "....". These are taken from the "The
10 Commandments of Index Expressions and Tensor Calculus" by Orodruin on Physics
Forums at https://www.physicsforums.com/insights/the-10-commandments-of-index-
expressions-and-tensor-calculus/. There were a few more, which follow. I have omitted 1
& 2 on despair and handwriting.
10C7: "When there are two occurrences of an index in your expression, you shall not
rename them to something that already exists in your expression."
10C8: " You shall not differentiate with respect to an already existing index unless that
index is a free contravariant index and you are taking the divergence."
10C9: "You shall be free to compute scalar expressions in any coordinate system."
5a) Examples of co / contra vectors: Orthogonal coordinate system
We have a vector v in R² as shown in Fig 5a.1. Using the unprimed basis x, y, its components are (2, 3) and in the primed basis they are (1, 1). Or
(33)
Increasing the basis x, y → x', y', decreases the component values. That's contravariance. Or as Wikipedia states, "the matrix that transforms the vector components must be the inverse of the matrix that transforms the basis vectors". We can demonstrate that in this case. Loosely using the Einstein summation convention, we have
(34)
(36)
(37)
(38)
Yes, the coordinates transform as the inverse of the basis vectors.
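A tiny numpy check using the numbers of this example. The basis-change matrix below (new basis vectors 2 and 3 times the old ones) is my reconstruction, since equations (34)-(38) are not reproduced here; it is chosen so that the unprimed components (2, 3) become (1, 1).

```python
import numpy as np

# Assumed basis change: the primed basis vectors are 2x and 3y.
S = np.diag([2.0, 3.0])                   # transforms the basis vectors to the primed basis

v_unprimed = np.array([2.0, 3.0])         # components in the x, y basis (Fig 5a.1)
v_primed = np.linalg.inv(S) @ v_unprimed  # components transform with the INVERSE of S

print(v_primed)                           # [1. 1.] - as read off in the primed basis
```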
which is in terms of the primed basis, so the components of in the primed basis are
(43)
Again, the coordinates vary as the inverse of the basis vectors.
(43) looks a bit odd because the indices aren't quite right. Up to now we have been a bit
vague about them.
(44)
That can't be right. We know that the magnitude of a vector is a scalar and invariant.
What do we do with unruly components? We introduce a fair ruler and compensating
components , distinguished by using lower indices. See Fig 5a.2 below.
Our compensating coordinates must adjust the terms in (44) to get the correct, invariant,
answer. So (44) becomes
(46)
and must 'normalise' the two terms in (46) so that they are in units of the ruler. If
we had to scale by , to get ruler units, we would have to scale by a further to get
to . We would also have a scale factor in the Y direction. will be different in the
primed coordinate system. We can now calculate , .
(47)
We can use the same formula for the primed coordinate system. Looking at Fig 5a.2 we
can see
(48)
Considering the values we have carefully chosen, it is not surprising that on combining
(46)-(48) we find , which is also unsurprisingly the value we get when
using the ruler.
We now want to find out how our 'compensating' coordinates transform under a
coordinate transformation. That is, find in terms of .
Similar to (47) we have
(49)
Using (38) for
(50)
Using (47) for
(51)
Substitute values from (48)
(52)
Similarly we find
(53)
Writing that in another form we have
(54)
The non-diagonal components in the matrix must be zero to avoid skewing the primed
coordinates. The matrix is the same as (35) which transformed unprimed basis vectors to
primed basis vectors. So we might call our 'compensating coordinates' 'covariant
coordinates' because they vary the same way as the basis vectors. We shall do so
henceforth!
I leave it as an exercise (to myself in the distant future) to prove this more generally,
like (39) - (43). I got as far as
(55)
where
(57)
(58)
In this case it is easy to calculate:
(59)
We see that it produces co- from contra-variant coordinates and is only dependent on the scale factors. It's called the Metric. The inverse metric does the opposite. In addition to producing co's from contra's, it gives a measure of the size (and/or shape) of the coordinate system.
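Continuing with the same assumed basis change, a small numpy sketch (mine) of what this section describes: a diagonal metric built from the squared scale factors lowers the index, the covariant components transform like the basis vectors, and the norm comes out the same in both systems. For simplicity the unprimed basis is treated as the ruler here (in the text the ruler is a separate, third system), so the numbers are only illustrative.

```python
import numpy as np

S = np.diag([2.0, 3.0])                   # primed basis vectors in terms of the unprimed ones (as above)

# Metric in each system, g_{mu nu} = e_mu . e_nu (unprimed basis assumed orthonormal for illustration).
g_unprimed = np.eye(2)
g_primed = S.T @ g_unprimed @ S           # diag(4, 9): the squared scale factors

v_unprimed = np.array([2.0, 3.0])         # contravariant components
v_primed = np.linalg.inv(S) @ v_unprimed  # [1, 1] - they contra-vary

# The metric lowers the index: v_mu = g_{mu nu} v^nu.
v_unprimed_low = g_unprimed @ v_unprimed
v_primed_low = g_primed @ v_primed

# Covariant components co-vary, i.e. transform with S like the basis vectors do.
assert np.allclose(v_primed_low, S @ v_unprimed_low)

# The norm v^mu v_mu is the same invariant number in both systems.
print(v_unprimed @ v_unprimed_low, v_primed @ v_primed_low)   # 13.0 13.0
```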
(60)
where the indices range from 0 to 3 and the 0th component is time. So the scale factor for the time coordinate is .
System Components
Unprimed (61)
Primed
System Components
Unprimed (62)
Primed
We get over this problem similarly to the length (46): we now define the dot product in any coordinate system as
(63)
From (56)
(64)
Using (57), (48), (61)
(65)
(66)
In the primed system we have
(67)
and
(68)
(69)
(70)
The primed system (70) gives the same dot product as the unprimed (66).
We could have calculated the dot product using the ruler, but that would have been
cheating and spoiled the fun; the ruler is just another coordinate system (the ruler was
arbitrary, not fair, after all); and as we shall see in the next example it's not all about
linear scale.
(71)
where are our basis vectors and is the Kronecker delta, equivalent to the identity matrix. This changes our picture completely. In our Wikipedia-inspired system we had basis vectors . (These are .) We calculated covariant
coordinates of these basis vectors. Now, with Carroll, we add new dual (covariant) basis
vectors which balance with the contravariant basis vectors. We still need the ruler to tell
us what 1 means in .
We show this in Fig 5a.4 below. The Wikipedia basis vectors (the same as
) appear along the components of . are designed to satisfy (71). We
can read our old contravariant components of from the figure or using equations from
earlier in this example. The 'Wikipedia' covariant components cannot be read off and all
come from equations.
Fig 5a.4
(72)
similarly
(73)
It does work as it should because that is the way we set it up. But we have taken
liberties: Carroll only told us (his equation 1.44) that "the dual space is the space of
all linear maps from the original vector space to the real numbers; in math lingo, if
is a dual vector, then it acts as a map such that
(74)
where are vectors and are real numbers." It is not entirely clear that the
in equation (71) is the simple matrix multiplication that we have calculated in
(72), (73). However, it's a working hypothesis.
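A small numpy version of that working hypothesis: if the basis vectors are the columns of one matrix and the dual basis vectors the rows of another, (71) says the second matrix times the first is the identity, so the dual basis is just the matrix inverse of the basis. The basis below is a placeholder, not the one in Fig 5a.4.

```python
import numpy as np

# Placeholder basis vectors e_(1) = (2, 0), e_(2) = (0, 3) in 'ruler' (Cartesian) components.
E = np.array([[2.0, 0.0],
              [0.0, 3.0]])       # columns are the basis vectors

# Working hypothesis: theta^(mu)(e_(nu)) = delta^mu_nu is plain matrix multiplication,
# so the dual basis vectors are the rows of the inverse.
Theta = np.linalg.inv(E)         # rows are the dual basis vectors

print(Theta @ E)                 # the identity matrix, i.e. the Kronecker delta
```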
Now we have components of in four systems: primed, dual primed, unprimed and dual
unprimed. We also have two ways of getting them: Carroll or 'Wikipedia'. Reading off Fig
5a.4 ( is a bit tricky to see, I checked it with a ruler) and using other equations
we have
All the components are the same by design (Carroll) or by laborious calculation
(Wikipedia).
Carroll then mentions (1.72) that the metric lowers indices and so (1.73) can convert
vectors to dual vectors:
(76)
and (1.66) that the action of the metric on two vectors is called the inner/scalar/dot
product
(77)
and it is invariant. (77) is the same as our (63). Then he says that the norm
(magnitude² or length²) of a vector is its dot product with itself which gives our (46). The
norm is not always positive (or 'not always positive definite') in Minkowski space
(spacetime) and a vector with zero norm is not necessarily a zero vector. In this case it
would be along the light cone. See Note D.
So, slightly going backwards and forwards, Carroll agrees with our simple example.
Summary 5a
What has this example demonstrated?
1. Ordinary vector components vary inversely to basis vectors (38), (43). We call
them contravariant.
2. Contravariant vector components alone give incorrect results (45), (62).
3. We introduce covariant vector components:
a. They compensate for contravariant components and give correct results
(46), (66).
b. They vary in the same way as basis vectors (54).
c. We write contravariant with upper and covariant with lower indices
4. We have found a 2×2 matrix, the metric, which
a. Indicates the 'shape' (in our case scale) of the coordinate system,
b. Is a type (0,2) tensor,
c. Produces covariant from contravariant components (57),
d. In other words it lowers an index.
5. We briefly met the Minkowski metric (60).
6. We have got less vague about indices and seen that upper ones always sum with
lower ones.
7. Is it vectors or their components that are covariant or contravariant?
a. Our Wikipedia model says it's the components.
b. Carroll's model says it's both vectors and components.
c. Carroll's seemed more efficient.
Looking at the right angle triangle on the right, we see
QED
Consider Fig NA2. The dashed vectors are only shown to convince us that the third side
of the solid triangle is indeed the vector .
Fig NA2
QED
That immediately gives the required expression for
(78)
by calculating the area of the parallelogram in Fig NA3 in two ways: 1) as one side times its perpendicular distance from the equal, opposite side, and 2) by subtracting the areas of all the small triangles and rectangles from the big rectangle.
Fig NA3
From (79)
(80)
(81)
(82)
(83)
(84)
(85)
Which is
(86)
(87)
It is like the identity matrix. It is also the only tensor where the left-right order of the
indices does not matter.
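For reference, my reconstruction of what (86)-(87) express (the standard definition of the Kronecker delta and the way it acts):

$$
\delta^{\mu}{}_{\nu} =
\begin{cases} 1 & \mu = \nu \\ 0 & \mu \neq \nu \end{cases},
\qquad
\delta^{\mu}{}_{\nu}\,V^{\nu} = V^{\mu},
\qquad
g^{\mu\lambda}\,g_{\lambda\nu} = \delta^{\mu}{}_{\nu}.
$$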
Fig ND1
The metric is
(88)
The norms are
(89)
(90)
(91)
Note that
1. We had to write the vectors as column or row matrices.
2. We had to do the multiplication so that the contracted indices ( ) were next to
each other.
5b) Examples of co / contra vectors: A skewed coordinate system
Fig 5b.1 Vector in slanted system
The axes are separated by an angle . There are two ways of describing the components of the vector , whose Cartesian coordinates are :
1) by taking dotted lines parallel to the axes giving or
2) by taking dashed lines at right angles to the axes giving
Alone, neither is satisfactory, but together they are good! Let's work out what each is.
The first is easy:
(92)
We add the dashed line ED perpendicular to the axis, hitting the axis at and we
see that
(93)
And
(94)
and using
(95)
we have
(96)
Neither set of coordinates we used in the primed system gives the right answer. E.g.
(98)
(99)
(100)
Using (92)-(96)
(101)
(102)
(103)
Hurrah!
(104)
(We show this in Note A.) is also a scalar and must be invariant. It gives the direction
of the vector. Therefore the dot product of two vectors must be invariant. Once again the
normal rules for calculating the dot product do not work in our slanted coordinates. In
Cartesian we have
(105)
(106)
(107)
(108)
(109)
Hurrah!
(111)
(112)
(113)
Hurrah!
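Since the explicit forms of (92)-(103) are not reproduced here, the numpy sketch below uses an assumed set-up (the x' axis along the Cartesian x axis and the y' axis at angle α to it), which is consistent with the metric found later in 5b3 (1s on the diagonal, cos α off it). It checks that the contravariant and covariant components together give the invariant length.

```python
import numpy as np

alpha = np.deg2rad(60.0)                     # assumed angle between the slanted axes
x, y = 2.0, 1.0                              # illustrative Cartesian components of v

# Contravariant components (dotted lines parallel to the axes), assuming x' lies along x:
v_contra = np.array([x - y / np.tan(alpha), y / np.sin(alpha)])
# Covariant components (dashed lines perpendicular to the axes):
v_co = np.array([x, x * np.cos(alpha) + y * np.sin(alpha)])

# Neither alone gives the length, but v^mu v_mu does: it equals x**2 + y**2.
print(v_contra @ v_co, x**2 + y**2)          # 5.0 5.0 (up to rounding)
```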
5b3) Transformation matrices. Metric matrix.
We can now find the transformation matrices from components in the slanted system to Cartesian.
(114)
and
(115)
This is where the video stops suddenly. We go further and calculate the inverse of
at Note B3, which gives us the transformation from to . The opposite (inverse) of
(114). Combining the inverse of (114) and (115) we can write
(116)
Note that we have to write all the matrices as column matrices here, whether
they are co- or contra-variant.
(118)
Now we note that in the slanted (primed) system, if , then it becomes Cartesian
and that where is the identity matrix. Also the contra- and co-variant
coordinates would be the same. Introducing covariant Cartesian coordinates we would
have . So in the example up to now, which always used upper indices, we can
replace by whenever necessary.
At last we can get more grown up and write all those matrices as tensors and check that
they obey the rules.
For (114) we can either lower an index on and write
(119)
or
(120)
In either case the indices are in the correct RC order and we are summing only upper and lower indices, but (119) is converting covariant on the RHS to contravariant on the LHS, which is probably not a good idea. So we prefer (120). is written as a column matrix.
(116) becomes
(122)
or
(123)
In (122) the term is
(124)
i.e. we would have to multiply the matrices the wrong way round (columns times
columns). Therefore we use (123), avoiding the problems that we have seen before.
We notice that or , as we shall call it now, lowers an index and conclude that it is
the metric tensor.
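Under the same assumed set-up, a short numpy sketch (mine) of how the metric falls out of the transformation to Cartesian: with Λ the matrix whose columns are the slanted basis vectors in Cartesian components, g = ΛᵀΛ has 1s on the diagonal and cos α off it, and it lowers the index.

```python
import numpy as np

alpha = np.deg2rad(60.0)

# Columns are the slanted basis vectors written in Cartesian components (assumed set-up as before).
Lam = np.array([[1.0, np.cos(alpha)],
                [0.0, np.sin(alpha)]])

# Metric of the slanted system from the transformation to Cartesian: g = Lam^T Lam.
g = Lam.T @ Lam
print(np.round(g, 3))                        # [[1.  0.5] [0.5 1. ]] : 1s on the diagonal, cos(alpha) off it

# The metric lowers an index: g @ (contravariant components) = covariant components.
v_contra = np.array([2.0 - 1.0 / np.tan(alpha), 1.0 / np.sin(alpha)])
print(g @ v_contra)                          # ~ [2.0, 1.866], the covariant components from the previous sketch
```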
5b4) Basis vectors
In Fig 5b.2 we have added in basis vectors . If we double the basis vectors to , we
must halve all our contra- and co-variant vector components . Both types of vector
contra-vary.
I submitted this problem to Physics Forums. First, the diagram is wrong: the covariant basis vectors are not the same as the contravariant ones; second, it's still complicated to find the new vector components. The full workings and solution are in Note F.
Summary 5b
What we found
1. We have fortuitously found the co / contravariant coordinates because
a. They give vector magnitude correctly (103)
b. They give the dot product correctly (113)
2. Both types vary correctly with a change of scale. Section 5b4 / Note F.
3. So we understand why they were labelled co- or contra-.
4. We found that co- and contra- vector components can be written as row or
column matrices (116).
5. We found the metric by combining transformation tensors (117)
6. The metric had non-zero off diagonal components. They show how much the
plane was slanted (117).
7. Its diagonal components were 1 which shows it was not scaled (117). (It is using
the arbitrary ruler from Example 5a.)
8. Summed indices should always be next to each other, when converting to
matrices (124).
9. The metric lowers an index. . (123)
10. The inverse metric raises an index. . (125)
11. The placement of and are vital!
Or
(128)
So
(129)
Using (96)
(130)
Using (94)
So
(131)
B2) Cartesian → slanting covariant
Similar to the above, but using (92) and (93) with covariant coordinates on the LHS; the matrix to convert Cartesian to slanting covariant is
(132)
So
(133)
(134)
So
(135)
(136)
and
(137)
(138)
Therefore
(139)
or
(140)
(141)
(142)
Checking (142) we should have
(143)
(144)
For using (143) then (94) and (96) and the ever useful :
(145)
Therefore
(148)
(150)
(151)
And the second two
(152)
(153)
(154)
or
(155)
(156)
(157)
(158)
(159)
(160)
(161)
Which is (96) Hurrah.
And
(162)
(163)
(164)
Which is (94) Hurrah.
(165)
from the definition of
(166)
We can calculate the derivatives easily because we already have the inverse metric from
(125)
(167)
or using (118)
(168)
(169)
Assuming that are Cartesian basis vectors, then
(170)
(170) are the same as (92), (93), the first two equations in our example of skewed
coordinates. I'm not sure how we would get from (169) to (170).
(171)
Which is that inverse metric again! Putting that into (166) we get the familiar
(172)
The inverse metric converts contravariant basis vectors to covariant. That explains why
they put the indices in the 'wrong' up / down position when talking about basis vectors
instead of vector components. Moreover contracting (172) with , we get
(173)
where was the original vector in question. As we might expect, it can be expressed in terms of covariant components and bases as well as the more common contravariant.
Using (170) we can now calculate the covariant basis vectors in terms of Cartesian:
(175)
(176)
(177)
(178)
and
(179)
(180)
(181)
We can now draw , or as they are shown there, on our diagram. From (178)
it is clear that is below the Cartesian X axis. Therefore the angle between
and is a right angle, and the projected line from is parallel to as we should have
guessed and Orodruin intimated. Likewise is parallel to the projected line from and
always on the Cartesian Y axis. In addition .
Fig NF1
There is a graph plotter in Commentary 1.1 Tensors matrices and indexes.xlsx which
shows the graph too. It is also fairly clear from the diagram that as per
(174).
We now modify our notation from earlier and the skewed system, which was primed,
becomes unprimed, and the stretched skewed system becomes primed. This just reduces
the quantity of primes. In addition the Cartesian components of are not .
So (96) becomes
(182)
which give us
(184)
and
(185)
Getting back to the original question, what happens to if we double the contravariant
basis to ? We could leave alone and they would still give . We could
double and halve . Neither option would co-vary. Both would give incorrect values
for . We need to find the metric of the primed system so we can just calculate the
primed covariant coordinates and bases. We also cannot use the technique in section
5b3, because we don't know how to transform primed covariant coordinates to Cartesian
- we don't know what the primed coordinates are. Therefore we use the pullback
operator which we met in 5c Polar coordinates.
(186)
which give
(187)
Sticking (187) into (184), (185), the transformations from primed to Cartesian are
(188)
and
(189)
The pullback operator is
(190)
Contracting with the flat metric twice we get the skewed primed metric
(191)
(192)
(193)
or
(194)
(195)
5c) Examples of co / contra vectors: Polar coordinates
Another way to calculate the metric of a coordinate system is by pulling back the metric
from R². For this we use the pullback operator described in appendix A of the book and
in my 'Commentary App A Mapping S2 and R3.pdf'. The transformation from polar to
Cartesian coordinates is
(196)
(197)
rewriting (196)
(198)
(199)
We have to apply that twice to the type (0,2) metric for R² to get the polar metric
.
(200)
(201)
Using the 'swap indices and transpose' rule on the last term we get the indices correctly next to each other, and also using (199)
(202)
(203)
(204)
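The same pullback can be checked symbolically. A small sympy sketch (mine) of (200)-(204): take the map (196)-(197) from polar to Cartesian, form its Jacobian, and pull the flat R² metric back to polar coordinates.

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)

# The map from polar to Cartesian, as in (196)-(197).
x = r * sp.cos(th)
y = r * sp.sin(th)

# Jacobian d(x, y)/d(r, theta): the pullback operator.
J = sp.Matrix([x, y]).jacobian([r, th])

# Pull the flat R^2 metric (the identity) back: g' = J^T g J.
g_polar = sp.simplify(J.T * sp.eye(2) * J)
print(g_polar)                   # Matrix([[1, 0], [0, r**2]])
```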
Now we should be able to calculate the norm of a vector . It makes sense to
assume that are contravariant coordinates, because if we doubled the radial basis
vector we would have to halve the radial component. Likewise if we used degrees instead of radians, decreasing the angular basis vector, the angular components would increase.
(205)
Therefore
(206)
Note that we used instead of in (205) and we were liberal about writing the vector
as a row or column matrix. If we had not, we would have run into trouble when writing
the tensors as matrices in (206).
(207)
(205) becomes
(208)
Therefore
(209)
Incorrect again.
So (205) only works on a flat manifold - one where the metric is constant everywhere. In polar coordinates the metric is not constant because it varies with r. To find the distance between two points we would need to integrate along the straight line between them. In general we get
(211)
(212)
Fig 5c1
We can visualise (212) on the diagram, because is the hypotenuse of the small dotted
rectangle (it was too small to draw), so (212) is Pythagoras's theorem.
A simple special case in polar coordinates is a line from the origin when .
(213)
Therefore which is trivial to integrate along r and gives the right answer.
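A numeric illustration (mine, with made-up end points) of the integration just described: parametrise the straight line between two points, express it in polar coordinates, and sum ds from the polar metric; the result matches the ordinary Euclidean distance.

```python
import numpy as np

# Hypothetical end points, given here in Cartesian for convenience.
P, Q = np.array([1.0, 0.0]), np.array([0.0, 2.0])

# Parametrise the straight line between them and convert to polar coordinates.
t = np.linspace(0.0, 1.0, 100001)
xy = P + t[:, None] * (Q - P)
r = np.hypot(xy[:, 0], xy[:, 1])
theta = np.arctan2(xy[:, 1], xy[:, 0])

# ds^2 = dr^2 + r^2 dtheta^2, using the polar metric diag(1, r^2).
dr, dth = np.diff(r), np.diff(theta)
r_mid = 0.5 * (r[:-1] + r[1:])
ds = np.sqrt(dr**2 + (r_mid * dth)**2)

print(ds.sum(), np.linalg.norm(Q - P))   # both ~ 2.23607 = sqrt(5)
```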
Summary 5c
1. Once again we had to write both vector types as row and column matrices (206),
(209)
2. Carroll's formula for the norm, $V^\mu V_\mu$, after (1.66) only works on flat manifolds (a coordinate system where the metric, $g_{\mu\nu}$, is constant).
3. On a curved manifold we must use .
Links to resources
My blog: https://www.general-relativity.net/
Commentary 1.1 Tensors matrices and indexes.docx
Commentary 1.1 Tensors matrices and indexes.pptx
Multiplies and finds inverse of matrices: Matrix calculator.xlsx
Examples of matrix manipulation: Ex 1.07 Tensor and Vector.pdf
The 10 Commandments of Index Expressions and Tensor Calculus by Orodruin on Physics
Forums at https://www.physicsforums.com/insights/the-10-commandments-of-index-
expressions-and-tensor-calculus/
Wikipedia on Covariance and contravariance of vectors:
https://en.wikipedia.org/wiki/Covariance_and_contravariance_of_vectors
Polar metric: https://en.wikipedia.org/wiki/Metric_tensor
Using polar metric https://www.physicsforums.com/threads/is-the-metric-tensor-
constant-in-polar-coordinates.738548/
Law of Cosines https://proofwiki.org/wiki/Law_of_Cosines
Cosine formula for dot product:
https://proofwiki.org/wiki/Cosine_Formula_for_Dot_Product
Physics forums discussion on skewed coordinate system
https://www.physicsforums.com/threads/covariant-coordinates-dont-co-vary.959888/
Videos
On contravariant / covariant vectors: https://www.youtube.com/watch?v=8vBfTyBPu-4
Tensors Explained Intuitively: Covariant, Contravariant, Rank:
https://www.youtube.com/watch?v=CliW7kSxxWU
Proof that the transformation matrix for basis vectors is the inverse of the transformation
matrix for vector components for all n-dimensional manifolds:
https://youtu.be/kCEqVWvu3JA (20 minutes)
Part 2 is at https://youtu.be/6ckM1-huB3k
Index
Contents
1) The rules
2) Indices, rows, columns
Indices and multiplication
2a) Summed indices must be next to each other
2b) Transpose matrix same as swapping indices
2c) RC Mnemonic
2d) Associative, commutative
2e) Swapping indices on partial derivative
3) Vectors
Contravariant vectors
Covariant vectors (or covectors)
3a) Up-Down Mnemonic
4) Tensors
4a) At the beginning of 1.6
4b) Tensor type / rank
4c) Keep the indices straight
4d) Tensor product
4e) Contracting a tensor
4f) The metric
4g) Free and dummy indices
4h) Raise and lower dummy indices simultaneously
4i) Partial derivatives commute
4j) More jargon
4k) The 10 Commandments of Index Expressions and Tensor Calculus
5a) Examples of co / contra vectors: Orthogonal coordinate system
There's something wrong
There's one more thing!
Carroll's way of looking at it
Summary 5a
Note A) Proof about cosine between two vectors
Note C) Kronecker delta
Note D) Light like or null vector
5b) Examples of co / contra vectors: A skewed coordinate system
5b1) Length of vector must be invariant
5b2) Dot product of vector must be invariant
5b3) Transformation matrices. Metric matrix.
5b4) Basis vectors
Summary 5b
Note B) Calculating transformations
B1) Cartesian → slanting contravariant
B2) Cartesian → slanting covariant
B3) Slanting contravariant → Cartesian
B4) Slanting contravariant → covariant
B5) Slanting covariant → contravariant
Note F) Covariant bases in skewed coordinate system
5c) Examples of co / contra vectors: Polar coordinates
Summary 5c
For the future
Links to resources
Videos
Index