A.1 Cartesian Tensors

In index notation, the equation for a plane in three-dimensional space is

$$\sum_{i=1}^{3} a_i x_i = p. \qquad (A.1.1)$$
As a simplification, we add a rule due to A. Einstein: if an index is repeated once (and only once) in a term, summation over the full range of that index is implied, but the summation sign $\sum_i$ is omitted. For example, the previous equation for a plane is now written simply as $a_i x_i = p$.
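The summation convention has a direct numerical counterpart. As a minimal sketch (the arrays `a` and `x` are hypothetical test values), the repeated index in $a_i x_i$ corresponds to a contraction such as NumPy's `einsum`:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])   # coefficients a_i (hypothetical values)
x = np.array([4.0, 5.0, 6.0])   # coordinates x_i

# Explicit sum over the repeated index i:
p_explicit = sum(a[i] * x[i] for i in range(3))

# The same contraction written with the summation convention
# as an index string: a_i x_i -> scalar.
p_einsum = np.einsum('i,i->', a, x)

assert np.isclose(p_explicit, p_einsum)   # both give a_i x_i = 32
```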
Since tensor analysis is motivated by coordinate transformations, let us first look at the transformation law for a rotation of coordinates.
A.1.1 Transformation of a vector
Referring to Figure A.1.1, let the $\{x', y'\}$ axes differ from the $\{x, y\}$ axes by a simple rotation. Then the components of a vector $\vec{A}$ in the two coordinate systems are related by

$$A'_x = A_x \cos(x', x) + A_y \cos(x', y),$$
$$A'_y = A_x \cos(y', x) + A_y \cos(y', y), \qquad (A.1.2)$$

where $(x', y)$ denotes the angle between the $x'$ and $y$ axes. Using index notation, this can also be written as

$$A'_1 = A_1 \cos(x'_1, x_1) + A_2 \cos(x'_1, x_2), \qquad (A.1.3)$$
$$A'_2 = A_1 \cos(x'_2, x_1) + A_2 \cos(x'_2, x_2). \qquad (A.1.4)$$
With the summation convention, both equations are contained in

$$A'_i = C_{ik} A_k, \qquad (A.1.5)$$

where

$$C_{ik} = \cos(x'_i, x_k) \qquad (A.1.6)$$

is the transformation matrix.
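As a numerical illustration (a sketch; the rotation angle and the components of $\vec{A}$ are arbitrary test values), (A.1.5) is simply a matrix-vector product:

```python
import numpy as np

# Rotation of the axes by theta about the x3-axis;
# C[i, k] = cos(x'_i, x_k) as in (A.1.6).
theta = 0.3
c, s = np.cos(theta), np.sin(theta)
C = np.array([[  c,   s, 0.0],
              [ -s,   c, 0.0],
              [0.0, 0.0, 1.0]])

A = np.array([1.0, 2.0, 3.0])   # components A_k in S
A_prime = C @ A                 # A'_i = C_ik A_k, eq. (A.1.5)

# The length of the vector is unchanged by the rotation.
assert np.isclose(np.linalg.norm(A), np.linalg.norm(A_prime))
```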
Some properties of the matrix $C_{ik}$ are derived below.

(a) Orthogonality of $C_{ik}$. The length of the vector $\vec{A}$ must be invariant, i.e., the same in both coordinate systems:

$$A'_i A'_i = A_i A_i.$$

In view of (A.1.5),

$$A'_i A'_i = C_{ik} A_k \, C_{ij} A_j = C_{ik} C_{ij} A_k A_j = A_j A_j;$$

hence,

$$C_{ik} C_{ij} = \begin{cases} 1, & k = j \\ 0, & k \neq j. \end{cases} \qquad (A.1.7)$$

Defining the Kronecker delta

$$\delta_{kj} = \begin{cases} 1, & k = j \\ 0, & k \neq j, \end{cases}$$

we have

$$C_{ik} C_{ij} = \delta_{kj}. \qquad (A.1.8)$$
Notice that the summation is performed over the first indices. This property is called the orthogonality of the transformation matrix $C$, which is a generalization of a similar concept for vectors. Equation (A.1.8) for $k, j = 1, 2, 3$ represents six constraints among the nine components of $C_{ij}$; hence only three components of $C_{ij}$ are independent.
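The orthogonality relation (A.1.8) is easy to check numerically; a minimal sketch with the same kind of rotation matrix (the angle is arbitrary):

```python
import numpy as np

theta = 0.3
c, s = np.cos(theta), np.sin(theta)
C = np.array([[  c,   s, 0.0],
              [ -s,   c, 0.0],
              [0.0, 0.0, 1.0]])

# C_ik C_ij = delta_kj, eq. (A.1.8): summation over the FIRST index i.
delta = np.einsum('ik,ij->kj', C, C)
assert np.allclose(delta, np.eye(3))
```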
(b) The inverse transformation. Consider the vector $\vec{A}$ in system $S$ and system $S'$, which are related by rotation. Alongside the forward transformation (A.1.5), we may write

$$A_i = C'_{i\ell} A'_\ell, \qquad C'_{i\ell} = \cos(x_i, x'_\ell) = \cos(x'_\ell, x_i) = C_{\ell i}. \qquad (A.1.9)$$

Clearly,

$$C'_{ki} C'_{kj} = C_{ik} C_{jk} = \delta_{ij}. \qquad (A.1.10)$$

Comparing (A.1.8) and (A.1.10), we note that the summation in the latter is performed over the second indices of $C$.
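In matrix language, (A.1.9) and (A.1.10) say that the inverse of $C$ is its transpose; a short sketch (test values arbitrary):

```python
import numpy as np

theta = 0.3
c, s = np.cos(theta), np.sin(theta)
C = np.array([[  c,   s, 0.0],
              [ -s,   c, 0.0],
              [0.0, 0.0, 1.0]])

A = np.array([1.0, 2.0, 3.0])
A_prime = C @ A          # forward: A'_i = C_ik A_k

# Inverse transformation: C'_il = C_li, i.e. the transpose of C,
# recovers the original components, consistent with (A.1.9)-(A.1.10).
A_back = C.T @ A_prime
assert np.allclose(A_back, A)
```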
Remark: A general transformation is equal to a translation plus a rotation, but a vector is not affected by translation at all, since a vector depends only on the difference between its end points.
A.1.2 Definition of a tensor

A set of $3^r$ numbers $T_{ijk\ldots m}$ (with $r$ indices) forms a Cartesian tensor of rank $r$ if its components transform under the rotation (A.1.6) according to

$$T'_{ijk\ldots m} = C_{is} C_{jt} C_{ku} \cdots C_{mv} \, T_{stu\ldots v}. \qquad (A.1.11)$$
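For rank 2, (A.1.11) reads $T'_{ij} = C_{is} C_{jt} T_{st}$, i.e. $T' = C\,T\,C^{\mathsf T}$ in matrix form; a sketch with arbitrary test data:

```python
import numpy as np

theta = 0.3
c, s = np.cos(theta), np.sin(theta)
C = np.array([[  c,   s, 0.0],
              [ -s,   c, 0.0],
              [0.0, 0.0, 1.0]])

T = np.arange(9.0).reshape(3, 3)   # an arbitrary rank-2 tensor T_st in S

# T'_ij = C_is C_jt T_st, the r = 2 case of (A.1.11).
T_prime = np.einsum('is,jt,st->ij', C, C, T)

# Equivalent matrix form for rank 2: T' = C T C^T.
assert np.allclose(T_prime, C @ T @ C.T)
```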
A.1.3 The quotient law

A set of $3^r$ numbers forms the components of a tensor of rank $r$ if and only if its scalar product with another arbitrary tensor is again a tensor. This is called the quotient law and can be used as a litmus test of whether a set of numbers forms a tensor.
We only give a one-way proof, for a third-rank tensor. Consider a set of numbers $A_{ijk}$. Let $\xi_i$ be the components of an arbitrary vector. Then, if

$$A_{ijk}\,\xi_i = B_{jk}$$

is a tensor component, it must obey the transformation law of a second-rank tensor, i.e.,

$$B'_{ik}\ (= A'_{nik}\,\xi'_n) = C_{i\ell}\, C_{km}\, B_{\ell m}\ (= C_{i\ell}\, C_{km}\, A_{s\ell m}\,\xi_s).$$

But, by the inverse transformation (A.1.9),

$$\xi_s = C_{ns}\,\xi'_n;$$

hence,

$$A'_{nik}\,\xi'_n = \left( C_{ns}\, C_{i\ell}\, C_{km}\, A_{s\ell m} \right) \xi'_n.$$

Since $\xi'$ is arbitrary,

$$A'_{nik} = C_{ns}\, C_{i\ell}\, C_{km}\, A_{s\ell m},$$

which is precisely the transformation law (A.1.11); it follows that $A_{ijk}$ is a third-rank tensor.
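The structure of this argument can be mirrored numerically: contracting in $S$ and then transforming $B_{jk}$ must agree with transforming $A_{ijk}$ and $\xi_i$ first and contracting in $S'$. A sketch with random test data:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 0.3
c, s = np.cos(theta), np.sin(theta)
C = np.array([[  c,   s, 0.0],
              [ -s,   c, 0.0],
              [0.0, 0.0, 1.0]])

A = rng.standard_normal((3, 3, 3))   # a third-rank tensor A_ijk in S
xi = rng.standard_normal(3)          # an arbitrary vector xi_i

# Route 1: contract in S, then transform B_jk as a second-rank tensor.
B = np.einsum('ijk,i->jk', A, xi)
B_prime_1 = np.einsum('jl,km,lm->jk', C, C, B)

# Route 2: transform A by (A.1.11) and xi by (A.1.5), then contract in S'.
A_prime = np.einsum('is,jt,ku,stu->ijk', C, C, C, A)
xi_prime = C @ xi
B_prime_2 = np.einsum('ijk,i->jk', A_prime, xi_prime)

assert np.allclose(B_prime_1, B_prime_2)
```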
A.1.4 Tensor Algebra
(a) Addition. The sum of two tensors of equal rank is another tensor of the same rank. Let
us give the proof for second-rank tensors only.
Given two tensors $A_{ij}$ and $B_{ij}$, we define the sum in $S$ by

$$E_{ij} = A_{ij} + B_{ij}.$$

In the $S'$ system we have, by definition,

$$E'_{ij} = A'_{ij} + B'_{ij} = C_{i\ell} C_{jm} A_{\ell m} + C_{i\ell} C_{jm} B_{\ell m} = C_{i\ell} C_{jm} (A_{\ell m} + B_{\ell m}) = C_{i\ell} C_{jm} E_{\ell m};$$

hence, after using linearity, $E_{ij}$ is a tensor of second rank.
(b) Multiplication. (A tensor of rank $b$) times (a tensor of rank $c$) gives a tensor of rank $b + c$, with $3^{b+c}$ components:

$$E_{ij\ldots k\,rs\ldots t} = A_{ij\ldots k} B_{rs\ldots t}.$$

We only give the proof for the special case $E_{ijrs} = A_{ij} B_{rs}$. Defining $E'_{ijrs} = A'_{ij} B'_{rs}$ in $S'$, we have

$$E'_{ijrs} = (C_{ik} C_{j\ell} A_{k\ell})(C_{rm} C_{sn} B_{mn}) = C_{ik} C_{j\ell} C_{rm} C_{sn} A_{k\ell} B_{mn} = C_{ik} C_{j\ell} C_{rm} C_{sn} E_{k\ell mn};$$

hence $E_{ijrs}$ is a tensor of rank four.
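A sketch of this rank-4 case (test matrices arbitrary): transforming the product $E_{ijrs}$ with four $C$'s agrees with forming the product of the separately transformed factors.

```python
import numpy as np

theta = 0.3
c, s = np.cos(theta), np.sin(theta)
C = np.array([[  c,   s, 0.0],
              [ -s,   c, 0.0],
              [0.0, 0.0, 1.0]])

A = np.arange(9.0).reshape(3, 3)          # rank-2 tensor A_ij
B = np.arange(9.0).reshape(3, 3) + 1.0    # rank-2 tensor B_rs

E = np.einsum('ij,rs->ijrs', A, B)        # E_ijrs = A_ij B_rs, rank 4

# Transform E with four C's, per (A.1.11) ...
E_prime = np.einsum('ik,jl,rm,sn,klmn->ijrs', C, C, C, C, E)

# ... and compare with the product of the transformed factors.
A_prime = C @ A @ C.T
B_prime = C @ B @ C.T
assert np.allclose(E_prime, np.einsum('ij,rs->ijrs', A_prime, B_prime))
```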
A.1.5 Tensor Calculus
(a.1) Gradient. Taking the gradient of a tensor of rank $r$ gives rise to a tensor of rank $r + 1$.

Let $\phi(x_1, x_2, x_3)$ be a scalar (a tensor of rank 0). Is $V_i = \partial\phi/\partial x_i$ a tensor of rank 1 (a vector)?

Since $\vec{x} = (x_i)$ is a vector, it transforms like one: $x'_i = C_{ik} x_k$. Multiplying by $C_{ij}$ and using (A.1.8) gives the inverse relation $x_j = C_{ij} x'_i$, so that $\partial x_j / \partial x'_i = C_{ij}$. Now,

$$V'_i = \frac{\partial \phi}{\partial x'_i} = \frac{\partial \phi}{\partial x_j}\,\frac{\partial x_j}{\partial x'_i} = C_{ij}\,\frac{\partial \phi}{\partial x_j} = C_{ij} V_j,$$

where the third equality follows from (A.1.8) and the last from the definition of $V_j$. Hence $V_i$ is a vector component, and the gradient of a scalar is a vector. In other words, the gradient of a tensor of rank zero is a tensor of rank one.
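A finite-difference check of this transformation law (the scalar field, the evaluation point, and the rotation angle are all hypothetical choices):

```python
import numpy as np

theta = 0.3
c, s = np.cos(theta), np.sin(theta)
C = np.array([[  c,   s, 0.0],
              [ -s,   c, 0.0],
              [0.0, 0.0, 1.0]])

def phi(x):
    """A scalar field phi(x1, x2, x3) (hypothetical example)."""
    return np.sin(x[0]) + x[1] * x[2] ** 2

def grad(f, x, h=1e-6):
    """Central-difference gradient of f at x."""
    g = np.zeros(3)
    for i in range(3):
        e = np.zeros(3); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

x0 = np.array([0.4, -0.7, 1.3])
V = grad(phi, x0)                    # V_j = dphi/dx_j in S

# In S' the same field is phi'(x') = phi(C^T x'); its gradient at
# x0' = C x0 should equal C_ij V_j.
phi_prime = lambda xp: phi(C.T @ xp)
V_prime = grad(phi_prime, C @ x0)
assert np.allclose(V_prime, C @ V, atol=1e-5)
```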
In general, if $R_{ij\ldots k}$ is a tensor of rank $r$, then

$$T_{ij\ldots k\ell} = \frac{\partial R_{ij\ldots k}}{\partial x_\ell} \qquad (A.1.12)$$

is a tensor of rank $r + 1$.
(a.2) Divergence is contraction in differentiation. Taking the divergence of a tensor of rank $r$ gives rise to a tensor of rank $r - 1$.

Consider $\Theta = \partial V_i / \partial x_i$. Is $\Theta$ a scalar? Let us check the transformation law:

$$\Theta' = \frac{\partial V'_i}{\partial x'_i} = \frac{\partial (C_{ij} V_j)}{\partial x'_i} = C_{ij}\,\frac{\partial V_j}{\partial x_k}\,\frac{\partial x_k}{\partial x'_i} = C_{ij} C_{ik}\,\frac{\partial V_j}{\partial x_k} = \delta_{jk}\,\frac{\partial V_j}{\partial x_k} = \frac{\partial V_k}{\partial x_k} = \Theta,$$

using $\partial x_k / \partial x'_i = C_{ik}$ and the orthogonality relation (A.1.8). Hence $\Theta$ is a scalar, and $\Theta' = \Theta$.
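The invariance can also be checked by finite differences (the vector field and the evaluation point are hypothetical choices):

```python
import numpy as np

theta = 0.3
c, s = np.cos(theta), np.sin(theta)
C = np.array([[  c,   s, 0.0],
              [ -s,   c, 0.0],
              [0.0, 0.0, 1.0]])

def v(x):
    """A vector field v_i(x) (hypothetical example)."""
    return np.array([x[0] * x[1], np.cos(x[2]), x[0] + x[1] * x[2]])

def div(f, x, h=1e-6):
    """Central-difference divergence df_i/dx_i at x."""
    total = 0.0
    for i in range(3):
        e = np.zeros(3); e[i] = h
        total += (f(x + e)[i] - f(x - e)[i]) / (2 * h)
    return total

x0 = np.array([0.4, -0.7, 1.3])

# In S': components v'_i(x') = C_ij v_j(C^T x').
v_prime = lambda xp: C @ v(C.T @ xp)

# The divergence is the same number in both frames.
assert np.isclose(div(v, x0), div(v_prime, C @ x0), atol=1e-5)
```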
Problem: Prove that the strain components

$$e_{ij} = \frac{1}{2}\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right)$$

and the rotation components

$$\Omega_{ij} = \frac{1}{2}\left(\frac{\partial u_i}{\partial x_j} - \frac{\partial u_j}{\partial x_i}\right)$$

are tensors of rank two.
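Not a substitute for the requested proof, but a numerical sanity check of the claim for $e_{ij}$ (the displacement field and all test values are hypothetical):

```python
import numpy as np

theta = 0.3
c, s = np.cos(theta), np.sin(theta)
C = np.array([[  c,   s, 0.0],
              [ -s,   c, 0.0],
              [0.0, 0.0, 1.0]])

def u(x):
    """A displacement field u_i(x) (hypothetical example)."""
    return np.array([x[0] ** 2, x[0] * x[2], np.sin(x[1])])

def grad_u(f, x, h=1e-6):
    """G[i, j] = du_i/dx_j by central differences."""
    G = np.zeros((3, 3))
    for j in range(3):
        e = np.zeros(3); e[j] = h
        G[:, j] = (f(x + e) - f(x - e)) / (2 * h)
    return G

x0 = np.array([0.4, -0.7, 1.3])
G = grad_u(u, x0)
e_ij = 0.5 * (G + G.T)                 # strain components in S

# In S': u'_i(x') = C_ij u_j(C^T x'); its strain should be C e C^T,
# the rank-2 transformation law (A.1.11).
u_prime = lambda xp: C @ u(C.T @ xp)
G_p = grad_u(u_prime, C @ x0)
e_prime = 0.5 * (G_p + G_p.T)
assert np.allclose(e_prime, C @ e_ij @ C.T, atol=1e-5)
```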