Tensor Notation
Operations on Cartesian components of vectors and tensors may be expressed very efficiently and clearly using index
notation.
2.1. Vector and tensor components.
Let x be a (three dimensional) vector and let S be a second order tensor. Let $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ be a Cartesian basis. Denote the components of x in this basis by $(x_1, x_2, x_3)$, and denote the components of S by $S_{ij}$. The symbol $x_i$ denotes the three components of x, and the symbol $S_{ij}$ denotes the nine components of S; thus
$$ \mathbf{x} = x_i\,\mathbf{e}_i, \qquad \mathbf{S} = S_{ij}\,\mathbf{e}_i\otimes\mathbf{e}_j. $$
Note that the Kronecker delta $\delta_{ij}$ has properties $\delta_{ij} = 1$ if $i = j$ and $\delta_{ij} = 0$ otherwise, so its matrix of components is the identity matrix.

Index notation follows three rules:
1. A free index appears exactly once in each term of an expression; expressions such as $x_i = y_j$ are meaningless.
2. A dummy index appears exactly twice in a term and implies summation over 1, 2, 3; expressions in which an index appears three or more times, such as $a_i b_i c_i$, are meaningless.
3. Free and dummy indices may be changed without altering the meaning of an expression, provided that rules 1 and 2 are not violated. Thus $x_i = C_{ik} z_k$ and $x_j = C_{jm} z_m$ have the same meaning.
In index notation, the basic vector operations are:

Dot product: $\mathbf{a}\cdot\mathbf{b} = a_i b_i$

Vector (cross) product: $(\mathbf{a}\times\mathbf{b})_i = \epsilon_{ijk}\,a_j b_k$

Dyadic product: $(\mathbf{a}\otimes\mathbf{b})_{ij} = a_i b_j$

Here $\epsilon_{ijk}$ is the permutation symbol. Let $\{\mathbf{e}_1,\mathbf{e}_2,\mathbf{e}_3\}$ be a Cartesian basis, and define $\epsilon_{ijk} = \mathbf{e}_i\cdot(\mathbf{e}_j\times\mathbf{e}_k)$, so that $\epsilon_{ijk} = +1$ when $(i,j,k)$ is an even permutation of $(1,2,3)$, $-1$ when it is an odd permutation, and $0$ when any index is repeated. Then a useful identity relating $\epsilon_{ijk}$ and $\delta_{ij}$ is
$$ \epsilon_{ijk}\,\epsilon_{ipq} = \delta_{jp}\,\delta_{kq} - \delta_{jq}\,\delta_{kp}. $$
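As a quick numerical sanity check (not part of the original notes; the example vectors are arbitrary), the sketch below verifies in numpy that the index-notation expressions $a_i b_i$ and $\epsilon_{ijk} a_j b_k$ reproduce the built-in dot and cross products:

```python
# Minimal sketch: the index-notation cross product (a x b)_i = eps_ijk a_j b_k
# agrees with numpy's built-in cross product; example vectors are arbitrary.
import numpy as np

# Build the permutation symbol eps_ijk explicitly.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0   # even permutations of (1,2,3)
    eps[i, k, j] = -1.0  # odd permutations

a = np.array([1.0, 2.0, 3.0])
b = np.array([-1.0, 0.5, 2.0])

cross_index = np.einsum('ijk,j,k->i', eps, a, b)     # eps_ijk a_j b_k
assert np.allclose(cross_index, np.cross(a, b))
assert np.isclose(a @ b, np.einsum('i,i->', a, b))   # dot product a_i b_i
```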
The principal tensor operations have the following index-notation forms:

Transpose: $(S^T)_{ij} = S_{ji}$

Scalar product of two tensors: $\mathbf{S}:\mathbf{T} = S_{ij}\,T_{ij}$

Product of a tensor and a vector: $(\mathbf{S}\mathbf{u})_i = S_{ij}\,u_j$

Product of two tensors: $(\mathbf{S}\mathbf{T})_{ij} = S_{ik}\,T_{kj}$

Determinant: $\det\mathbf{S} = \tfrac{1}{6}\,\epsilon_{ijk}\,\epsilon_{pqr}\,S_{ip}S_{jq}S_{kr}$

Inverse: $S^{-1}_{ik}\,S_{kj} = \delta_{ij}$
Example. Let a, b, c be vectors. Prove the identity
$$ \mathbf{a}\times(\mathbf{b}\times\mathbf{c}) = (\mathbf{a}\cdot\mathbf{c})\,\mathbf{b} - (\mathbf{a}\cdot\mathbf{b})\,\mathbf{c}. $$
Express the left hand side of the equation using index notation (check the rules for cross products and dot products of vectors to see how this is done):
$$ [\mathbf{a}\times(\mathbf{b}\times\mathbf{c})]_i = \epsilon_{ijk}\,a_j\,\epsilon_{klm}\,b_l c_m = (\delta_{il}\delta_{jm} - \delta_{im}\delta_{jl})\,a_j b_l c_m = (a_m c_m)\,b_i - (a_j b_j)\,c_i. $$
Therefore $\mathbf{a}\times(\mathbf{b}\times\mathbf{c}) = (\mathbf{a}\cdot\mathbf{c})\,\mathbf{b} - (\mathbf{a}\cdot\mathbf{b})\,\mathbf{c}$, as required.
As a second example, the stress-strain law for an isotropic, linear elastic material can be written
$$ \sigma_{ij} = \frac{E}{1+\nu}\left(\varepsilon_{ij} + \frac{\nu}{1-2\nu}\,\varepsilon_{kk}\,\delta_{ij}\right) $$
where $\sigma_{ij}$ and $\varepsilon_{ij}$ are the components of the stress and strain tensor, and E and $\nu$ denote Young's modulus and Poisson's ratio. Find an expression for strain in terms of stress.
Set $i = j$, and recall that $\delta_{kk} = 3$, to see that
$$ \sigma_{kk} = \frac{E}{1+\nu}\left(\varepsilon_{kk} + \frac{3\nu}{1-2\nu}\,\varepsilon_{kk}\right) = \frac{E}{1-2\nu}\,\varepsilon_{kk}. $$
Solve for $\varepsilon_{kk}$ in terms of $\sigma_{kk}$ and substitute back into the stress-strain law to see that
$$ \varepsilon_{ij} = \frac{1+\nu}{E}\,\sigma_{ij} - \frac{\nu}{E}\,\sigma_{kk}\,\delta_{ij}. $$
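The inversion above is easy to check numerically. The following sketch (my own, with illustrative values for E and $\nu$) applies the stress-strain law to an arbitrary symmetric strain and confirms that the inverted relation recovers it:

```python
# Sketch: round-trip check of the inverted stress-strain relation.
# E, nu and the random strain are illustrative values, not from the notes.
import numpy as np

E, nu = 210e9, 0.3           # illustrative Young's modulus and Poisson's ratio
eps = np.random.rand(3, 3)
eps = 0.5 * (eps + eps.T)    # the strain tensor must be symmetric
I = np.eye(3)

sig = E / (1 + nu) * (eps + nu / (1 - 2 * nu) * np.trace(eps) * I)
eps_back = (1 + nu) / E * sig - nu / E * np.trace(sig) * I
assert np.allclose(eps, eps_back)   # original strain recovered
```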
4. Let $\phi = S_{ij}\,x_i x_j$, where the $S_{ij}$ are constants. Calculate $\partial\phi/\partial x_k$. We can just apply the usual chain and product rules of differentiation, together with $\partial x_i/\partial x_k = \delta_{ik}$:
$$ \frac{\partial\phi}{\partial x_k} = S_{ij}\left(\delta_{ik}\,x_j + x_i\,\delta_{jk}\right) = \left(S_{kj} + S_{jk}\right)x_j. $$
5. Let $r = \sqrt{x_k x_k}$ denote the distance of a point from the origin. Calculate $\partial r/\partial x_i$:
$$ \frac{\partial r}{\partial x_i} = \frac{\partial}{\partial x_i}\left(x_k x_k\right)^{1/2} = \frac{1}{2}\left(x_k x_k\right)^{-1/2}\,2x_i = \frac{x_i}{r}. $$
A second order tensor S maps a vector u onto a new vector $\mathbf{v} = \mathbf{S}\mathbf{u}$; this is a general property of all second order tensors. A tensor is a linear mapping of a vector onto another vector. Two examples, together with the vectors they operate on, are:

The stress tensor $\boldsymbol{\sigma}$: the traction (force per unit area) acting on a surface is $\mathbf{t} = \boldsymbol{\sigma}\mathbf{n}$, where n is a unit vector normal to the surface.

The displacement gradient: let u be a vector field, and let $d\mathbf{u}$ denote the change in u produced by an infinitesimal change $d\mathbf{x}$ in position. Now, let $\{\mathbf{e}_1,\mathbf{e}_2,\mathbf{e}_3\}$ be a Cartesian basis, and express both du and dx as components. Then, calculate the components of du in terms of dx using the usual rules of calculus:
$$ du_i = \frac{\partial u_i}{\partial x_j}\,dx_j. $$
The tensor with components $\partial u_i/\partial x_j$ maps dx onto du.
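A small numerical sketch can make the linearization concrete. The vector field u(x) below is an arbitrary illustration (not from the notes); the code checks that $du_i = (\partial u_i/\partial x_j)\,dx_j$ holds to first order in $|d\mathbf{x}|$:

```python
# Sketch: verify du_i = (du_i/dx_j) dx_j for an arbitrary smooth field u(x).
import numpy as np

def u(x):
    # illustrative vector field chosen for this example
    return np.array([x[0] * x[1], np.sin(x[2]), x[0] + x[2] ** 2])

def grad_u(x, h=1e-6):
    # central-difference approximation of the tensor du_i/dx_j
    g = np.zeros((3, 3))
    for j in range(3):
        dx = np.zeros(3); dx[j] = h
        g[:, j] = (u(x + dx) - u(x - dx)) / (2 * h)
    return g

x0 = np.array([1.0, 2.0, 0.5])
dx = 1e-4 * np.array([1.0, -2.0, 0.3])   # small change in position
du_exact = u(x0 + dx) - u(x0)
du_linear = grad_u(x0) @ dx              # du_i = (du_i/dx_j) dx_j
assert np.allclose(du_exact, du_linear, atol=1e-6)
```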
You have probably already seen the matrix representation of stress and strain components in introductory courses.
Since S can be represented as a $3\times 3$ matrix, all operations that can be performed on a $3\times 3$ matrix can also be
performed on S. Examples include sums and products, the transpose, inverse, and determinant. One can also
compute eigenvalues and eigenvectors for tensors, and thus define the log of a tensor, the square root of a tensor, etc.
These tensor operations are summarized below.
Note that the numbers $S_{ij}$ depend on the basis $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$, just as the components of a vector depend on the basis used to represent the vector. However, just as the magnitude and direction of a vector are independent of the basis, so the properties of a tensor are independent of the basis. That is to say, if S is a tensor and u is a vector, then the vector $\mathbf{v} = \mathbf{S}\mathbf{u}$ has the same magnitude and direction, irrespective of the basis used to represent u, v, and S.
1.3 The difference between a matrix and a tensor
If a tensor can be represented as a matrix, why is a matrix not the same thing as a tensor? Well, although you can multiply the three components of a vector u by any $3\times 3$ matrix to produce three new numbers, the result represents a tensor operation only if those numbers change correctly due to a change of basis. That is to say, choose a new basis, calculate the new components of u in this basis, and calculate the new matrix in this basis (the new elements of the matrix will depend on how the matrix was defined. The elements may or may not change; if they don't, then the matrix cannot be the components of a tensor). Then, evaluate the matrix product to find a new left hand side, say $v'_i$. If $v'_i$ are related to $v_i$ by the same transformation that was used to calculate the new components of u, then $v_i$ are the components of a vector, and, therefore, the matrix represents the components of a tensor.
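The test described above is easy to carry out numerically. In the sketch below (the rotation about $\mathbf{e}_3$ and the random components are my own illustrative choices), the matrix is declared to transform as a tensor, $[M'] = [Q][M][Q]^T$, and the product is confirmed to transform as a vector:

```python
# Sketch of the matrix-vs-tensor test: if M transforms as M' = Q M Q^T while
# vectors transform as u' = Q u, then v = M u transforms as a vector.
import numpy as np

def rotation(theta):
    # change-of-basis matrix for a rotation about e3 (illustrative choice)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

M = np.random.rand(3, 3)    # candidate tensor components in the first basis
u = np.random.rand(3)
Q = rotation(0.7)

v = M @ u                   # v_i = M_ij u_j in the old basis
u_new = Q @ u               # vector components in the new basis
M_new = Q @ M @ Q.T         # tensor transformation rule
v_new = M_new @ u_new
assert np.allclose(v_new, Q @ v)   # v transforms like a vector, so M is a tensor
```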
1.4 Formal definition
Tensors are rather more general objects than the preceding discussion suggests. There are various ways to define a
tensor formally. One way is the following:
A tensor is a linear, vector valued function defined on the set of all vectors. More specifically, let $\mathbf{S}(\mathbf{u})$ denote a tensor S operating on a vector u. Linearity then requires that, for all vectors $\mathbf{u}, \mathbf{v}$ and scalars $\alpha, \beta$,
$$ \mathbf{S}(\alpha\mathbf{u} + \beta\mathbf{v}) = \alpha\,\mathbf{S}(\mathbf{u}) + \beta\,\mathbf{S}(\mathbf{v}). $$
Alternatively, one can define tensors as sets of numbers that transform in a particular way under a change of
coordinate system. In this case we suppose that n dimensional space can be parameterized by a set of n real
numbers $x^i$. A tensor of rank two is then a set of quantities that transforms according to a prescribed rule, involving the derivatives of the new coordinates with respect to the old, when the parameterization is changed.
Higher rank tensors can be defined in similar ways. In solid and fluid mechanics we nearly always use Cartesian
tensors (i.e. we work with the components of tensors in a Cartesian coordinate system), and this level of generality is not needed (and is rather mysterious). We might occasionally use a curvilinear coordinate system, in which we do express tensors in terms of covariant or contravariant components; this gives some sense of what these quantities mean. But since solid and fluid mechanics live in Euclidean space we don't see some of the subtleties that arise, e.g.
in the theory of general relativity.
1.5 Creating a tensor using a dyadic product of two vectors.
Let a and b be two vectors. The dyadic product of a and b is a second order tensor S, denoted by $\mathbf{S} = \mathbf{a}\otimes\mathbf{b}$, with the property
$$ (\mathbf{a}\otimes\mathbf{b})\,\mathbf{u} = (\mathbf{b}\cdot\mathbf{u})\,\mathbf{a} $$
for all vectors u. (Clearly, this maps u onto a vector parallel to a, with magnitude $|\mathbf{b}\cdot\mathbf{u}|\,|\mathbf{a}|$.) The components of $\mathbf{a}\otimes\mathbf{b}$ in a basis $\{\mathbf{e}_1,\mathbf{e}_2,\mathbf{e}_3\}$ are
$$ (\mathbf{a}\otimes\mathbf{b})_{ij} = a_i\,b_j. $$
Note that not all tensors can be constructed using a dyadic product of only two vectors (this is because $(\mathbf{a}\otimes\mathbf{b})\mathbf{u}$ always has to be parallel to a, and therefore the representation cannot map a vector onto an arbitrary vector). However, if a, b, and c are three independent vectors (i.e. they do not all lie in the same plane), then all tensors can be constructed as a sum of scalar multiples of the nine possible dyadic products of these vectors.
The representation of a tensor in terms of its components can also be expressed in dyadic form as
$$ \mathbf{S} = S_{ij}\,\mathbf{e}_i\otimes\mathbf{e}_j $$
where $S_{ij}$ are the components of S in the basis $\{\mathbf{e}_1,\mathbf{e}_2,\mathbf{e}_3\}$.
This representation is particularly convenient when using polar coordinates, or when using a general non-orthogonal
coordinate system.
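A minimal numpy sketch (the example vectors are mine) illustrates both properties of the dyad: its components are $a_i b_j$, and it maps any u onto a multiple of a, so a single dyad has rank one and cannot represent a general tensor:

```python
# Sketch: the dyadic product a (x) b has components a_i b_j and maps u
# onto (b . u) a; example vectors are arbitrary.
import numpy as np

a = np.array([1.0, 0.0, 2.0])
b = np.array([3.0, -1.0, 0.5])
u = np.random.rand(3)

S = np.outer(a, b)                        # S_ij = a_i b_j
assert np.allclose(S @ u, (b @ u) * a)    # (a x b) u = (b . u) a
assert np.linalg.matrix_rank(S) == 1      # one dyad cannot be a general tensor
```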
Addition
Let S and T be two tensors. Then $\mathbf{U} = \mathbf{S} + \mathbf{T}$ is also a tensor.
Denote the Cartesian components of U, S and T by matrices as defined above. The components of U are then related to the components of S and T by
$$ U_{ij} = S_{ij} + T_{ij}. $$
Product of a tensor and a vector
The product $\mathbf{v} = \mathbf{S}\mathbf{u}$ is also a vector. In component form,
$$ v_i = S_{ij}\,u_j $$
where $u_j$ and $v_i$ denote the components of u and v, or, in matrix form,
$$ \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} = \begin{bmatrix} S_{11} & S_{12} & S_{13} \\ S_{21} & S_{22} & S_{23} \\ S_{31} & S_{32} & S_{33} \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix}. $$
One can also define the product $\mathbf{v} = \mathbf{u}\mathbf{S}$, with components $v_i = u_j S_{ji}$. Observe that $\mathbf{S}\mathbf{u} \neq \mathbf{u}\mathbf{S}$ (unless S is symmetric).
Product of two tensors
The product $\mathbf{U} = \mathbf{S}\mathbf{T}$ is also a tensor. Denote the components of U, S and T by $3\times 3$ matrices. Then,
$$ U_{ij} = S_{ik}\,T_{kj}. $$
Note that tensor products, like matrix products, are not commutative; i.e. $\mathbf{S}\mathbf{T} \neq \mathbf{T}\mathbf{S}$ in general.
Transpose
Let S be a tensor. The transpose of S is denoted by $\mathbf{S}^T$, and is defined so that $\mathbf{u}\cdot(\mathbf{S}\mathbf{v}) = \mathbf{v}\cdot(\mathbf{S}^T\mathbf{u})$ for all vectors u and v. The components of the transpose are then
$$ (S^T)_{ij} = S_{ji}. $$
Trace
Let S be a tensor, and denote the components of S by a $3\times 3$ matrix. The trace of S is denoted by tr(S) or trace(S), and can be computed by summing the diagonals of the matrix of components:
$$ \operatorname{tr}(\mathbf{S}) = S_{11} + S_{22} + S_{33}. $$
In index notation,
$$ \operatorname{tr}(\mathbf{S}) = S_{kk}. $$
Observe that $\operatorname{tr}(\mathbf{I}) = 3$, where I is the identity tensor.
Outer product: Let S and T be two second order tensors. The outer product of S and T is a scalar, denoted by $\mathbf{S}:\mathbf{T}$. Represent S and T by their components in a basis. Then
$$ \mathbf{S}:\mathbf{T} = \operatorname{tr}(\mathbf{S}\mathbf{T}^T). $$
In index notation,
$$ \mathbf{S}:\mathbf{T} = S_{ij}\,T_{ij}. $$
Observe that $\mathbf{S}:\mathbf{T} = \mathbf{T}:\mathbf{S}$.
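These identities are easy to confirm numerically; the sketch below (with arbitrary random components) checks $\operatorname{tr}(\mathbf{S}) = S_{kk}$, $\mathbf{S}:\mathbf{T} = S_{ij}T_{ij} = \operatorname{tr}(\mathbf{S}\mathbf{T}^T)$, and $\mathbf{I}:\mathbf{S} = \operatorname{tr}(\mathbf{S})$:

```python
# Sketch: numerical check of the trace and outer-product identities
# tr(S) = S_kk and S : T = S_ij T_ij; components are arbitrary.
import numpy as np

S = np.random.rand(3, 3)
T = np.random.rand(3, 3)

assert np.isclose(np.trace(S), np.einsum('kk->', S))          # tr(S) = S_kk
assert np.isclose(np.einsum('ij,ij->', S, T), np.sum(S * T))  # S : T = S_ij T_ij
assert np.isclose(np.einsum('ij,ij->', S, T),
                  np.trace(S @ T.T))                          # S : T = tr(S T^T)
assert np.isclose(np.einsum('ij,ij->', np.eye(3), S),
                  np.trace(S))                                # I : S = tr(S)
```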
Determinant
The determinant of a tensor is defined as the determinant of the matrix of its components in a basis. For a second order tensor in three dimensions,
$$ \det\mathbf{S} = \epsilon_{ijk}\,S_{1i}\,S_{2j}\,S_{3k} = \tfrac{1}{6}\,\epsilon_{ijk}\,\epsilon_{pqr}\,S_{ip}\,S_{jq}\,S_{kr}. $$
Inverse
Let S be a second order tensor. The inverse of S exists if and only if $\det\mathbf{S} \neq 0$, and is defined by
$$ \mathbf{S}^{-1}\mathbf{S} = \mathbf{S}\mathbf{S}^{-1} = \mathbf{I} $$
where $\mathbf{S}^{-1}$ denotes the inverse of S and I is the identity tensor.
The inverse of a tensor may be computed by calculating the inverse of the matrix of its components. Formally, the inverse of a second order tensor can be written in a simple form using index notation as
$$ S^{-1}_{ij} = \frac{1}{2\,\det\mathbf{S}}\;\epsilon_{jkl}\,\epsilon_{imn}\,S_{km}\,S_{ln}. $$
In practice it is usually faster to compute the inverse using methods such as Gaussian elimination.
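The index-notation formula can be checked directly against a conventional matrix inverse; the sketch below (my own verification, using a random well-conditioned tensor) evaluates $\epsilon_{jkl}\epsilon_{imn}S_{km}S_{ln}/(2\det\mathbf{S})$ with einsum:

```python
# Sketch: the cofactor formula for the inverse agrees with np.linalg.inv;
# the shifted random tensor is an arbitrary well-conditioned example.
import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

S = np.random.rand(3, 3) + 3 * np.eye(3)   # shift keeps det(S) away from zero
Sinv = np.einsum('jkl,imn,km,ln->ij', eps, eps, S, S) / (2 * np.linalg.det(S))
assert np.allclose(Sinv, np.linalg.inv(S))
```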
Change of Basis.
Let S be a tensor, and let $\{\mathbf{e}_1,\mathbf{e}_2,\mathbf{e}_3\}$ be a Cartesian basis. Denote the components of S in the basis $\{\mathbf{e}_i\}$ by $S_{ij}$.
Now, suppose that we wish to compute the components of S in a second Cartesian basis, $\{\mathbf{m}_1,\mathbf{m}_2,\mathbf{m}_3\}$. Denote these components by $S'_{ij}$. To do so, first define the rotation matrix with components $Q_{ij} = \mathbf{m}_i\cdot\mathbf{e}_j$ (this is the same matrix you would use to transform vector components from $\{\mathbf{e}_i\}$ to $\{\mathbf{m}_i\}$). Then,
$$ S'_{ij} = Q_{ik}\,S_{kl}\,Q_{jl} \qquad\Longleftrightarrow\qquad [S'] = [Q][S][Q]^T $$
or, written out in full,
$$ S_{ij} = Q_{ki}\,S'_{kl}\,Q_{lj} \qquad\Longleftrightarrow\qquad [S] = [Q]^T[S'][Q]. $$
To see this, let u and v be vectors with $\mathbf{v} = \mathbf{S}\mathbf{u}$, so that $v_i = S_{ij}u_j$ and $v'_i = S'_{ij}u'_j$. Recall that vector components transform as $u'_i = Q_{ij}u_j$ and $v'_i = Q_{ij}v_j$. Substitute for $u'_i$ and $v'_i$ from above into the second of these two relations; we see that
$$ Q_{ik}\,v_k = S'_{ij}\,Q_{jl}\,u_l \;\Rightarrow\; v_p = Q_{ip}\,S'_{ij}\,Q_{jl}\,u_l \;\Rightarrow\; S_{pl} = Q_{ip}\,S'_{ij}\,Q_{jl}. $$
Another, perhaps cleaner, way to derive this result is to expand the two tensors as the appropriate dyadic products of
the basis vectors: $\mathbf{S} = S_{kl}\,\mathbf{e}_k\otimes\mathbf{e}_l = S'_{ij}\,\mathbf{m}_i\otimes\mathbf{m}_j$, and then take dot products with the basis vectors to extract the components.
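The transformation rule is easily exercised numerically. In the sketch below (the second basis is an arbitrary rotation of the first), $Q_{ij} = \mathbf{m}_i\cdot\mathbf{e}_j$ is assembled explicitly and the invariance of the trace and determinant under $[S'] = [Q][S][Q]^T$ is confirmed:

```python
# Sketch: change of basis for a tensor; the second basis (rotation about e3)
# and the tensor components are arbitrary illustrative choices.
import numpy as np

e = np.eye(3)                       # original basis e_1, e_2, e_3 (rows)
c, s = np.cos(0.4), np.sin(0.4)
m = np.array([[c, s, 0.0],          # second basis: e rotated about e_3
              [-s, c, 0.0],
              [0.0, 0.0, 1.0]])

Q = np.einsum('ik,jk->ij', m, e)    # Q_ij = m_i . e_j
S = np.random.rand(3, 3)
S_new = Q @ S @ Q.T                 # S'_ij = Q_ik S_kl Q_jl

assert np.allclose(Q @ Q.T, np.eye(3))            # Q is orthogonal
assert np.isclose(np.trace(S_new), np.trace(S))   # invariants are basis independent
assert np.isclose(np.linalg.det(S_new), np.linalg.det(S))
```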
Invariants
Invariants of a tensor are scalar functions of the tensor components which remain constant under a basis change. That is to say, the invariant has the same value when computed in two arbitrary bases $\{\mathbf{e}_i\}$ and $\{\mathbf{m}_i\}$. A second order tensor in three dimensions has three independent invariants, for example
$$ I_1 = \operatorname{tr}(\mathbf{S}), \qquad I_2 = \tfrac{1}{2}\left[\operatorname{tr}(\mathbf{S})^2 - \operatorname{tr}(\mathbf{S}^2)\right], \qquad I_3 = \det\mathbf{S}. $$
Eigenvalues and eigenvectors
Let S be a second order tensor. The scalars $\lambda$ and unit vectors m that satisfy
$$ \mathbf{S}\mathbf{m} = \lambda\,\mathbf{m} $$
are known as the eigenvalues and eigenvectors of S, or the principal values and principal directions of S. Note that $\lambda$ may be complex. For a second order tensor in three dimensions, there are generally three values of $\lambda$ and three unique unit vectors m which satisfy this equation. Occasionally, there may be only two or one distinct value of $\lambda$. If this
is the case, there are infinitely many possible vectors m that satisfy the equation. The eigenvalues of a tensor, and the
components of the eigenvectors, may be computed by finding the eigenvalues and eigenvectors of the matrix of
components.
The eigenvalues of a symmetric tensor are always real, and its eigenvectors are mutually perpendicular (these two
results are important and are proved below). The eigenvalues of a skew tensor are always pure imaginary or zero.
The eigenvalues of a second order tensor are computed using the condition
$$ \det(\mathbf{S} - \lambda\mathbf{I}) = 0. $$
This yields a cubic equation, which can be expressed as
$$ -\lambda^3 + I_1\,\lambda^2 - I_2\,\lambda + I_3 = 0 $$
where $I_1, I_2, I_3$ are the invariants defined above.
There are various ways to solve the resulting cubic equation explicitly; a solution for symmetric S is given below, but the results for a general tensor are too messy to be given here. The eigenvectors are then computed from the condition
$$ (\mathbf{S} - \lambda\mathbf{I})\,\mathbf{m} = \mathbf{0}. $$
Writing $\mathbf{T} = \mathbf{S} - \lambda\mathbf{I}$, the condition $\det\mathbf{T} = 0$ shows that the three equations $T_{ij}m_j = 0$ are not independent, so the system determines m only up to a scalar multiple; a normalization such as $m_i m_i = 1$ fixes its magnitude.
3 SPECIAL TENSORS
Identity tensor
The identity tensor I is the tensor such that, for any tensor S or vector v,
$$ \mathbf{I}\mathbf{S} = \mathbf{S}\mathbf{I} = \mathbf{S}, \qquad \mathbf{I}\mathbf{v} = \mathbf{v}. $$
Its components in any Cartesian basis are $\delta_{ij}$.

Symmetric tensors
A symmetric tensor S satisfies $\mathbf{S} = \mathbf{S}^T$, i.e. $S_{ij} = S_{ji}$, so that there are only six independent components of the tensor, instead of nine. Symmetric tensors have some nice properties:
The eigenvectors of a symmetric tensor with distinct eigenvalues are orthogonal. To see this, let $\mathbf{m}_1, \mathbf{m}_2$ be two eigenvectors, with corresponding eigenvalues $\lambda_1 \neq \lambda_2$. Then
$$ \mathbf{m}_2\cdot(\mathbf{S}\mathbf{m}_1) = \lambda_1\,\mathbf{m}_2\cdot\mathbf{m}_1, \qquad \mathbf{m}_1\cdot(\mathbf{S}\mathbf{m}_2) = \lambda_2\,\mathbf{m}_1\cdot\mathbf{m}_2. $$
Since S is symmetric, the two left hand sides are equal, so $(\lambda_1 - \lambda_2)\,\mathbf{m}_1\cdot\mathbf{m}_2 = 0$; because $\lambda_1 \neq \lambda_2$, it follows that $\mathbf{m}_1\cdot\mathbf{m}_2 = 0$.
The eigenvalues of a symmetric tensor are real. To see this, suppose that $\lambda, \mathbf{m}$ are a complex eigenvalue/eigenvector pair, and let $\bar{\lambda}, \bar{\mathbf{m}}$ denote their complex conjugates. Then, by definition, $\mathbf{S}\mathbf{m} = \lambda\mathbf{m}$, and hence $\mathbf{S}\bar{\mathbf{m}} = \bar{\lambda}\,\bar{\mathbf{m}}$. But note that for a symmetric tensor $\bar{\mathbf{m}}\cdot(\mathbf{S}\mathbf{m}) = \mathbf{m}\cdot(\mathbf{S}\bar{\mathbf{m}})$. Thus $\lambda\,\bar{\mathbf{m}}\cdot\mathbf{m} = \bar{\lambda}\,\mathbf{m}\cdot\bar{\mathbf{m}}$, so that $(\lambda - \bar{\lambda})\,\bar{\mathbf{m}}\cdot\mathbf{m} = 0$. Since $\bar{\mathbf{m}}\cdot\mathbf{m} > 0$, this shows $\lambda = \bar{\lambda}$, and therefore $\lambda$ is real.
The eigenvalues of a symmetric tensor can be computed by solving the characteristic cubic $\det(\mathbf{S} - \lambda\mathbf{I}) = 0$ in closed form; since the eigenvalues are real, the three roots can be written explicitly in trigonometric form. To find the eigenvector associated with an eigenvalue $\lambda$, solve $(\mathbf{S} - \lambda\mathbf{I})\mathbf{m} = \mathbf{0}$. Since the determinant of the matrix is zero, we can discard any row in the equation system and take any column over to the right hand side. For example, if the tensor has at least one eigenvector with $m_3 \neq 0$, then the values of $m_1$ and $m_2$ for this eigenvector can be found by discarding the third row, setting $m_3 = 1$, and writing
$$ \begin{bmatrix} S_{11} - \lambda & S_{12} \\ S_{21} & S_{22} - \lambda \end{bmatrix} \begin{bmatrix} m_1 \\ m_2 \end{bmatrix} = -\begin{bmatrix} S_{13} \\ S_{23} \end{bmatrix}. $$
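The procedure can be mirrored numerically. The sketch below (using an arbitrary random symmetric tensor, and assuming the chosen eigenvector has $m_3 \neq 0$) compares numpy's symmetric eigensolver with the discard-a-row construction described above:

```python
# Sketch: eigenvalues/eigenvectors of a symmetric tensor, plus the
# "discard a row" eigenvector construction; S is an arbitrary example
# (assumed to have m_3 != 0 for the chosen eigenvector).
import numpy as np

A = np.random.rand(3, 3)
S = 0.5 * (A + A.T)                     # symmetric tensor
lam, M = np.linalg.eigh(S)              # columns of M are unit eigenvectors
assert np.allclose(M.T @ M, np.eye(3))  # eigenvectors are mutually orthogonal

# Solve for m_1, m_2 with m_3 fixed at 1, discarding the third row.
l = lam[0]
B = (S - l * np.eye(3))[:2, :2]         # first two rows and columns
rhs = -(S - l * np.eye(3))[:2, 2]       # third column moved to the right side
m12 = np.linalg.solve(B, rhs)
m = np.array([m12[0], m12[1], 1.0])
assert np.allclose(S @ m, l * m)        # m is an (unnormalized) eigenvector
```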
Spectral decomposition of a symmetric tensor
Let S be a symmetric second order tensor, and let $\{\lambda_i, \mathbf{m}_i\}$ be the three eigenvalues and eigenvectors of S. Then S can be expressed as
$$ \mathbf{S} = \sum_{i=1}^{3}\lambda_i\;\mathbf{m}_i\otimes\mathbf{m}_i. $$
To see this, note that S can always be expanded as a sum of 9 dyadic products of an orthogonal basis, $\mathbf{S} = S_{ij}\,\mathbf{m}_i\otimes\mathbf{m}_j$. But since the $\mathbf{m}_i$ are eigenvectors, this expansion reduces to the diagonal form above for all vectors u it acts on. You can see this by noting that
$$ S_{ij} = \mathbf{m}_i\cdot(\mathbf{S}\mathbf{m}_j) = \lambda_{(j)}\,\mathbf{m}_i\cdot\mathbf{m}_j = \lambda_{(j)}\,\delta_{ij} \quad (\text{no sum on } j). $$
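A short check (the example tensor is arbitrary) confirms the spectral decomposition by rebuilding S from its eigenvalues and eigenvectors:

```python
# Sketch: rebuild a symmetric tensor from S = sum_i lambda_i m_i (x) m_i.
import numpy as np

A = np.random.rand(3, 3)
S = 0.5 * (A + A.T)                 # arbitrary symmetric tensor
lam, M = np.linalg.eigh(S)          # columns of M are unit eigenvectors

S_rebuilt = sum(lam[i] * np.outer(M[:, i], M[:, i]) for i in range(3))
assert np.allclose(S, S_rebuilt)
```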
Orthogonal Tensors
An orthogonal tensor R has the property
$$ \mathbf{R}\mathbf{R}^T = \mathbf{R}^T\mathbf{R} = \mathbf{I}; $$
a tensor with $\det\mathbf{R} = +1$ is known as a proper orthogonal tensor. Orthogonal tensors also have some interesting and useful properties:

Orthogonal tensors map a vector onto another vector with the same length. To see this, let u be an arbitrary vector. Then, note that $|\mathbf{R}\mathbf{u}|^2 = (\mathbf{R}\mathbf{u})\cdot(\mathbf{R}\mathbf{u}) = \mathbf{u}\cdot(\mathbf{R}^T\mathbf{R}\,\mathbf{u})$. By definition, $\mathbf{R}^T\mathbf{R} = \mathbf{I}$. Hence, $|\mathbf{R}\mathbf{u}| = |\mathbf{u}|$. Similarly, $|\mathbf{R}^T\mathbf{u}| = |\mathbf{u}|$.

The eigenvalues of an orthogonal tensor all have unit magnitude. Since the characteristic equation is cubic, there must be at most three eigenvalues, and at least one eigenvalue must be real.
Proper orthogonal tensors can be visualized physically as rotations. A rotation can also be represented in several other forms besides a proper orthogonal tensor. For example:

The Rodrigues representation quantifies a rotation as an angle of rotation $\theta$ (in radians) about some axis n (specified by a unit vector). Given R, there are various ways to compute n and $\theta$. For example, one way would be to find the eigenvalues and the real eigenvector. The real eigenvector (suitably normalized) must correspond to n; the complex eigenvalues give $\cos\theta \pm i\sin\theta$. Conversely, given n and $\theta$, the rotation tensor is
$$ \mathbf{R} = \cos\theta\;\mathbf{I} + (1 - \cos\theta)\;\mathbf{n}\otimes\mathbf{n} + \sin\theta\;\mathbf{W} $$
where W is the skew tensor that has n as its dual vector, i.e. $\mathbf{W}\mathbf{u} = \mathbf{n}\times\mathbf{u}$. In index notation, this formula is
$$ R_{ij} = \cos\theta\,\delta_{ij} + (1 - \cos\theta)\,n_i n_j - \sin\theta\,\epsilon_{ijk}\,n_k. $$
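The Rodrigues formula is straightforward to exercise numerically; in the sketch below the axis and angle are illustrative choices, and the assertions confirm that the resulting R is proper orthogonal, leaves n fixed, and satisfies $\operatorname{tr}(\mathbf{R}) = 1 + 2\cos\theta$:

```python
# Sketch: build R from the Rodrigues formula and check its properties;
# the axis n and angle t are arbitrary illustrative choices.
import numpy as np

n = np.array([1.0, 2.0, 2.0]) / 3.0       # unit rotation axis
t = 0.9                                   # rotation angle in radians

W = np.array([[0.0, -n[2], n[1]],         # skew tensor with dual vector n:
              [n[2], 0.0, -n[0]],         # W u = n x u
              [-n[1], n[0], 0.0]])
R = np.cos(t) * np.eye(3) + (1 - np.cos(t)) * np.outer(n, n) + np.sin(t) * W

assert np.allclose(R @ R.T, np.eye(3))             # orthogonal
assert np.isclose(np.linalg.det(R), 1.0)           # proper
assert np.allclose(R @ n, n)                       # axis is unchanged
assert np.isclose(np.trace(R), 1 + 2 * np.cos(t))  # recovers the angle
```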
Another useful result is the Polar Decomposition Theorem, which states that any invertible second order tensor F can be expressed as a product of a symmetric tensor with an orthogonal tensor:
$$ \mathbf{F} = \mathbf{R}\mathbf{U} = \mathbf{V}\mathbf{R}. $$
The tensor $\mathbf{C} = \mathbf{F}^T\mathbf{F}$ is symmetric and has positive eigenvalues (to see that it's symmetric, simply take the transpose, and to see that the eigenvalues are positive, note that $d\mathbf{x}\cdot(\mathbf{F}^T\mathbf{F})\,d\mathbf{x} = |\mathbf{F}\,d\mathbf{x}|^2 > 0$ for all vectors $d\mathbf{x} \neq \mathbf{0}$).
Let $\mathbf{C} = \mathbf{F}^T\mathbf{F}$, and let $\{\lambda_i, \mathbf{m}_i\}$ be its eigenvalues and eigenvectors. Since the $\lambda_i$ are positive, we may define
$$ \mathbf{U} = \sqrt{\mathbf{C}} = \sum_{i=1}^{3}\sqrt{\lambda_i}\;\mathbf{m}_i\otimes\mathbf{m}_i $$
and then set $\mathbf{R} = \mathbf{F}\mathbf{U}^{-1}$. Then
$$ \mathbf{R}^T\mathbf{R} = \mathbf{U}^{-1}\mathbf{F}^T\mathbf{F}\,\mathbf{U}^{-1} = \mathbf{U}^{-1}\mathbf{C}\,\mathbf{U}^{-1} = \mathbf{I}, $$
so R is orthogonal, and $\mathbf{F} = \mathbf{R}\mathbf{U}$ has a unique decomposition into an orthogonal tensor and a symmetric tensor with positive eigenvalues.
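The construction above translates directly into a few lines of numpy. The sketch below (with an arbitrary invertible F) forms $\mathbf{U} = \sqrt{\mathbf{F}^T\mathbf{F}}$ through the spectral decomposition and verifies that $\mathbf{R} = \mathbf{F}\mathbf{U}^{-1}$ is orthogonal:

```python
# Sketch: polar decomposition F = R U via the spectral square root of
# C = F^T F; the shifted random F is an arbitrary invertible example.
import numpy as np

F = np.random.rand(3, 3) + 2 * np.eye(3)    # shift keeps F invertible
C = F.T @ F                                  # symmetric, positive eigenvalues

lam, M = np.linalg.eigh(C)
U = M @ np.diag(np.sqrt(lam)) @ M.T          # U = sqrt(C), symmetric
R = F @ np.linalg.inv(U)

assert np.allclose(R @ R.T, np.eye(3))       # R is orthogonal
assert np.allclose(U, U.T)                   # U is symmetric
assert np.allclose(F, R @ U)                 # F = R U recovered
```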