
MCEN 5021: Introduction to Fluid Dynamics

Fall 2015, T.S. Lund

Introduction to Tensor Notation


Tensor notation provides a convenient and unified system for describing physical quantities.
Scalars, vectors, second rank tensors (sometimes referred to loosely as tensors), and higher
rank tensors can all be represented in tensor notation. In the most general representation, a
tensor is denoted by a symbol followed by a collection of subscripts, e.g.

$X_j, \; \sigma_{ij}, \; U_l, \; \beta_{klm}, \; a, \; b, \; \alpha_{ijkl}$

The number of subscripts attached to a tensor defines the rank of the tensor. (Note that the
number of subscripts ranges from zero to four in the above examples.) A tensor of rank zero
(no subscripts) is nothing more than a familiar scalar. A tensor of rank one (one subscript)
is simply a common vector. If we are working a problem in three-dimensional space, then
the vector will contain three components. In tensor notation, these three components are
represented by stepping the subscripted index through the values 1, 2, and 3. As an example,
suppose we are given the velocity vector in its common vector notation

$\vec{U} = u\,\hat{e}_x + v\,\hat{e}_y + w\,\hat{e}_z$

We may write this vector as a tensor of rank one as follows:

$U_j \;\; (j = 1, 2, 3) \quad \text{where} \quad U_1 = u, \;\; U_2 = v, \;\; U_3 = w$

In most instances it is assumed that the problem takes place in three dimensions, and the
clause $(j = 1, 2, 3)$ indicating the range of the index is omitted.
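As a concrete illustration, the component form translates directly into an indexed array in code. The following is a minimal sketch in Python with NumPy; the component values and the name U are placeholders chosen for this example (note that Python indexes from 0 rather than 1).

    import numpy as np

    # Velocity components (arbitrary sample values for illustration)
    u, v, w = 1.0, 2.0, 3.0

    # The rank-one tensor U_j: array entries 0, 1, 2 correspond to j = 1, 2, 3
    U = np.array([u, v, w])

    print(U[0], U[1], U[2])  # U_1 = u, U_2 = v, U_3 = w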
A tensor of rank two contains two free indices and thus bears some resemblance to a matrix.
The similarity between a tensor of rank two and a matrix should be limited to a visual one,
however, as each of the two objects has its own distinct mathematical properties.
A tensor of rank two is sometimes written in vector notation as a symbol with two arrows
above it. As we shall see, this usage should be limited to symmetric tensors. A symmetric
tensor is invariant under an interchange of indices; that is, $\sigma_{ij} = \sigma_{ji}$ for a symmetric tensor.
As an example, take the surface stress tensor. Since the surface stress is symmetric we may
write the equivalence

$\overset{\leftrightarrow}{\sigma} \equiv \sigma_{ij} \quad (i = 1, 2, 3; \; j = 1, 2, 3)$

A tensor of rank two with a range of three on both subscripts contains nine elements. These
nine elements are formed by all possible combinations of the values of the free indices. This is
accomplished systematically by cycling the subscript i through 1, 2, and 3. At each value of the
subscript i, the subscript j is cycled through 1, 2, and 3. As an illustration, again consider
the surface stress tensor

 

$$\overset{\leftrightarrow}{\sigma} \equiv \sigma_{ij} =
\begin{pmatrix}
\sigma_{11} & \sigma_{12} & \sigma_{13} \\
\sigma_{21} & \sigma_{22} & \sigma_{23} \\
\sigma_{31} & \sigma_{32} & \sigma_{33}
\end{pmatrix}$$
The top row is obtained by holding i fixed at 1 and cycling j through 1, 2, and 3. Similarly the
second row is obtained by holding i fixed at 2 and then cycling through j. The procedure
for the third row should now be obvious. In this case symmetry implies that $\sigma_{12} = \sigma_{21}$,
$\sigma_{13} = \sigma_{31}$, and $\sigma_{23} = \sigma_{32}$.
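The nine components and the symmetry property can be checked directly in code. Below is a minimal sketch in Python with NumPy; the numerical values of the stress components are arbitrary placeholders.

    import numpy as np

    # A sample symmetric stress tensor sigma_ij (values are arbitrary placeholders)
    sigma = np.array([[1.0, 0.5, 0.2],
                      [0.5, 2.0, 0.7],
                      [0.2, 0.7, 3.0]])

    # Symmetry means sigma_ij = sigma_ji, i.e. the array equals its transpose
    print(np.allclose(sigma, sigma.T))  # True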

Operations on Tensors
Contraction
The first fundamental operation on tensors is the contraction. Consider the common defini-
tion of a sum
$$\sum_{i=1}^{3} A_i B_i = A_1 B_1 + A_2 B_2 + A_3 B_3$$

If we take $A_i$ and $B_i$ to be tensors of rank one (i.e. vectors), then the above operation defines
a contraction over the repeated index i. Following a convention introduced by Einstein, the
summation symbol is usually omitted. Under this so-called "Einstein summation convention",
whenever a repeated index appears, a summation over that index is implied. Thus the above
example may be written as

$$A_i B_i = A_1 B_1 + A_2 B_2 + A_3 B_3$$

Note that the above operation is equivalent to the dot product of the vectors $\vec{A}$ and $\vec{B}$:

$$\vec{A} \cdot \vec{B} \equiv A_k B_k = A_1 B_1 + A_2 B_2 + A_3 B_3$$

The choice for the name of the index to be summed over is arbitrary. For this reason it is
sometimes referred to as a "dummy" index.
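A quick numerical check of this contraction can be made with NumPy's einsum, which implements the summation convention directly. This is a minimal sketch; the vectors A and B are arbitrary sample data.

    import numpy as np

    A = np.array([1.0, 2.0, 3.0])
    B = np.array([4.0, 5.0, 6.0])

    # Contraction A_i B_i: the repeated index i is summed over
    dot_einsum = np.einsum('i,i->', A, B)

    # Equivalent to the ordinary dot product
    print(dot_einsum, np.dot(A, B))  # both give 32.0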
A contraction may also be defined between two (or more) tensors of unequal rank. For
example, consider

$$w_i = u_j \sigma_{ij}$$

where

$$w_1 = u_1 \sigma_{11} + u_2 \sigma_{12} + u_3 \sigma_{13}$$
$$w_2 = u_1 \sigma_{21} + u_2 \sigma_{22} + u_3 \sigma_{23}$$
$$w_3 = u_1 \sigma_{31} + u_2 \sigma_{32} + u_3 \sigma_{33}$$

In general a contraction over the first index of a tensor of rank two will not equal a contraction
over the second index; that is, $u_i \sigma_{ij} \neq u_j \sigma_{ij}$. For the special case of a symmetric tensor,
$\sigma_{ij} = \sigma_{ji}$ and a contraction over either index yields the same result; that is, $u_i \sigma_{ij} = u_j \sigma_{ij}$
provided $\sigma_{ij}$ is symmetric. When this condition of symmetry exists, a vector notation may
be used to represent the contraction, as illustrated by the following example

$$(\vec{U} \cdot \overset{\leftrightarrow}{\sigma}) \equiv U_j \sigma_{ij} \quad \text{(provided } \sigma_{ij} \text{ is symmetric)}$$

It should be clear that the above definition relies on the symmetry of $\sigma_{ij}$, since the dot
product leaves an ambiguity as to which of the two indices the contraction is made over.
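The difference between contracting over the first and the second index, and its disappearance for a symmetric tensor, can be verified numerically. The sketch below again uses NumPy's einsum; the arrays u and sigma are arbitrary sample data.

    import numpy as np

    u = np.array([1.0, 2.0, 3.0])

    # A non-symmetric rank-two tensor (arbitrary values)
    sigma = np.array([[1.0, 0.5, 0.2],
                      [0.9, 2.0, 0.7],
                      [0.4, 0.1, 3.0]])

    w_first  = np.einsum('i,ij->j', u, sigma)   # contraction over the first index
    w_second = np.einsum('j,ij->i', u, sigma)   # contraction over the second index
    print(np.allclose(w_first, w_second))       # False in general

    # For a symmetric tensor the two contractions agree
    sigma_sym = 0.5 * (sigma + sigma.T)
    print(np.allclose(np.einsum('i,ij->j', u, sigma_sym),
                      np.einsum('j,ij->i', u, sigma_sym)))  # True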

Divergence of a Tensor
The divergence of a tensor is an application of index contraction. To see this, first define the
spatial vector

$$\vec{x} \equiv x_i \quad \text{where} \quad x_1 = x, \; x_2 = y, \; \text{and} \; x_3 = z$$

The divergence of the velocity vector may then be represented as

$$\nabla \cdot \vec{U} \equiv \frac{\partial u_i}{\partial x_i} = \frac{\partial u_1}{\partial x_1} + \frac{\partial u_2}{\partial x_2} + \frac{\partial u_3}{\partial x_3} = \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} + \frac{\partial w}{\partial z}$$
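As a numerical illustration, the divergence can be approximated with finite differences on a sampled velocity field. The sketch below is a rough check in Python with NumPy, using the analytic field U = (x, y, -2z), whose divergence is zero; the grid size and the field are assumptions made for this example.

    import numpy as np

    # Uniform grid (assumed for this example)
    n, L = 32, 1.0
    x = np.linspace(0.0, L, n)
    dx = x[1] - x[0]
    X, Y, Z = np.meshgrid(x, x, x, indexing='ij')

    # Sample velocity field U = (x, y, -2z); analytically div U = 1 + 1 - 2 = 0
    u1, u2, u3 = X, Y, -2.0 * Z

    # div U = du1/dx1 + du2/dx2 + du3/dx3, approximated with central differences
    div = (np.gradient(u1, dx, axis=0) +
           np.gradient(u2, dx, axis=1) +
           np.gradient(u3, dx, axis=2))

    print(np.max(np.abs(div)))  # close to zero (up to finite-difference error)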

The divergence of a higher rank tensor is likewise defined. If we restrict ourselves to sym-
metric tensors of rank two, the vector notation may also be used. Thus the divergence of
the stress tensor can be written as

$$(\nabla \cdot \overset{\leftrightarrow}{\sigma}) \equiv \frac{\partial \sigma_{ij}}{\partial x_j} = r_i$$

where
$$r_1 = \frac{\partial \sigma_{11}}{\partial x_1} + \frac{\partial \sigma_{12}}{\partial x_2} + \frac{\partial \sigma_{13}}{\partial x_3}$$
$$r_2 = \frac{\partial \sigma_{21}}{\partial x_1} + \frac{\partial \sigma_{22}}{\partial x_2} + \frac{\partial \sigma_{23}}{\partial x_3}$$
$$r_3 = \frac{\partial \sigma_{31}}{\partial x_1} + \frac{\partial \sigma_{32}}{\partial x_2} + \frac{\partial \sigma_{33}}{\partial x_3}$$

Note that the divergence operation lowers the rank of the tensor by one. Thus the divergence
of a vector is a scalar and the divergence of a tensor of rank two is a tensor of rank one,
which is a vector.
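The rank-lowering behaviour can also be seen numerically: the divergence of a rank-two tensor field produces a vector field, with one component per value of the remaining free index. The sketch below follows the same finite-difference approach as the earlier example; the stress field is a made-up placeholder.

    import numpy as np

    n, dx = 32, 1.0 / 31
    x = np.linspace(0.0, 1.0, n)
    X, Y, Z = np.meshgrid(x, x, x, indexing='ij')

    # A made-up symmetric stress field sigma_ij(x, y, z), shape (3, 3, n, n, n)
    sigma = np.zeros((3, 3, n, n, n))
    sigma[0, 0], sigma[1, 1], sigma[2, 2] = X**2, Y**2, Z**2
    sigma[0, 1] = sigma[1, 0] = X * Y

    # r_i = d(sigma_ij)/dx_j: differentiate along direction j and sum over j
    r = sum(np.gradient(sigma[:, j], dx, axis=1 + j) for j in range(3))

    print(r.shape)  # (3, n, n, n): a vector field, one rank lower than sigma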
As another example of the contraction, consider the following work term from the energy
equation
$$\nabla \cdot (\vec{U} \cdot \overset{\leftrightarrow}{\sigma}) \equiv \frac{\partial u_j \sigma_{ij}}{\partial x_i}
= \frac{\partial}{\partial x_1}(u_1 \sigma_{11} + u_2 \sigma_{12} + u_3 \sigma_{13})
+ \frac{\partial}{\partial x_2}(u_1 \sigma_{21} + u_2 \sigma_{22} + u_3 \sigma_{23})
+ \frac{\partial}{\partial x_3}(u_1 \sigma_{31} + u_2 \sigma_{32} + u_3 \sigma_{33})$$
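In code this work term is simply the two previous operations composed: a contraction over j followed by a divergence over i. A minimal sketch, reusing the kind of sampled placeholder fields assumed above:

    import numpy as np

    n, dx = 32, 1.0 / 31
    x = np.linspace(0.0, 1.0, n)
    X, Y, Z = np.meshgrid(x, x, x, indexing='ij')

    # Placeholder velocity and stress fields
    u = np.stack([X, Y, -2.0 * Z])                  # shape (3, n, n, n)
    sigma = np.zeros((3, 3, n, n, n))
    sigma[0, 0], sigma[1, 1], sigma[2, 2] = X, Y, Z

    # Contract over j: q_i = u_j sigma_ij, then take the divergence over i
    q = np.einsum('jxyz,ijxyz->ixyz', u, sigma)
    work = sum(np.gradient(q[i], dx, axis=i) for i in range(3))

    print(work.shape)  # (n, n, n): a scalar field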

Gradient of a Tensor
Unlike the divergence operation, the gradient operation increases the rank of the tensor by
one. Thus the gradient of a scalar is a vector, the gradient of a first rank tensor is a second
rank tensor, and so on. The application of the gradient operator is straightforward. Study
the following examples

$$\nabla \phi \equiv \frac{\partial \phi}{\partial x_i} = A_i$$

where

$$A_1 = \frac{\partial \phi}{\partial x_1}, \quad A_2 = \frac{\partial \phi}{\partial x_2}, \quad A_3 = \frac{\partial \phi}{\partial x_3}$$

$$\nabla \vec{U} \equiv \frac{\partial u_i}{\partial x_j} = \beta_{ij}$$

where

 
$$\beta_{ij} =
\begin{pmatrix}
\dfrac{\partial u_1}{\partial x_1} & \dfrac{\partial u_1}{\partial x_2} & \dfrac{\partial u_1}{\partial x_3} \\
\dfrac{\partial u_2}{\partial x_1} & \dfrac{\partial u_2}{\partial x_2} & \dfrac{\partial u_2}{\partial x_3} \\
\dfrac{\partial u_3}{\partial x_1} & \dfrac{\partial u_3}{\partial x_2} & \dfrac{\partial u_3}{\partial x_3}
\end{pmatrix}$$
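Numerically, the velocity gradient tensor can be assembled by differentiating each velocity component with respect to each coordinate. A minimal sketch in Python with NumPy, using an assumed sample field:

    import numpy as np

    n, dx = 32, 1.0 / 31
    x = np.linspace(0.0, 1.0, n)
    X, Y, Z = np.meshgrid(x, x, x, indexing='ij')

    # Placeholder velocity field u_i(x, y, z)
    u = np.stack([X * Y, Y * Z, Z * X])             # shape (3, n, n, n)

    # beta_ij = du_i/dx_j: differentiate component i along direction j
    beta = np.stack([np.stack([np.gradient(u[i], dx, axis=j) for j in range(3)])
                     for i in range(3)])

    print(beta.shape)  # (3, 3, n, n, n): the gradient raises the rank by one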

As a final example of both the contraction and the gradient operations, consider the convec-
tive term of the momentum equation

$$\vec{U} \cdot \nabla \vec{U} \equiv u_j \frac{\partial u_i}{\partial x_j} = s_i$$

where
$$s_1 = u_1 \frac{\partial u_1}{\partial x_1} + u_2 \frac{\partial u_1}{\partial x_2} + u_3 \frac{\partial u_1}{\partial x_3}$$
$$s_2 = u_1 \frac{\partial u_2}{\partial x_1} + u_2 \frac{\partial u_2}{\partial x_2} + u_3 \frac{\partial u_2}{\partial x_3}$$
$$s_3 = u_1 \frac{\partial u_3}{\partial x_1} + u_2 \frac{\partial u_3}{\partial x_2} + u_3 \frac{\partial u_3}{\partial x_3}$$
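Given the gradient tensor from the previous example, the convective term is a single contraction over the repeated index j. A minimal sketch with the same assumed sample field:

    import numpy as np

    n, dx = 32, 1.0 / 31
    x = np.linspace(0.0, 1.0, n)
    X, Y, Z = np.meshgrid(x, x, x, indexing='ij')

    # Placeholder velocity field u_i(x, y, z)
    u = np.stack([X * Y, Y * Z, Z * X])             # shape (3, n, n, n)

    # Gradient tensor beta_ij = du_i/dx_j
    beta = np.stack([np.stack([np.gradient(u[i], dx, axis=j) for j in range(3)])
                     for i in range(3)])

    # Convective term s_i = u_j du_i/dx_j: contraction over the repeated index j
    s = np.einsum('jxyz,ijxyz->ixyz', u, beta)

    print(s.shape)  # (3, n, n, n): a vector field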

The Kronecker Delta


A second rank tensor of great utility is known as the Kronecker delta. It is defined as follows


$$\overset{\leftrightarrow}{\delta} \equiv \delta_{ij} =
\begin{cases}
1 & i = j \\
0 & i \neq j
\end{cases}$$

The Kronecker delta has some interesting and useful unitary properties. One such property
is that the divergence of the Kronecker delta yields the gradient operator. That is

$$(\nabla \cdot \overset{\leftrightarrow}{\delta}) \equiv \frac{\partial \delta_{ij}}{\partial x_j} = \frac{\partial}{\partial x_i} \equiv \nabla$$
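In code the Kronecker delta is simply the identity matrix, and contracting it with another tensor substitutes one index for another. A minimal sketch:

    import numpy as np

    delta = np.eye(3)                      # delta_ij: 1 if i == j, else 0

    A = np.array([1.0, 2.0, 3.0])

    # Substitution property: delta_ij A_j = A_i
    print(np.einsum('ij,j->i', delta, A))  # [1. 2. 3.]

    # Contracting delta with itself: delta_ii = 3 (the trace)
    print(np.einsum('ii->', delta))        # 3.0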
