Tensor Notation

1. Index notation provides an efficient way to express operations on vectors and tensors using their Cartesian components. Lower-case subscripts denote the components, and repeated indices imply summation.
2. A tensor can be represented by its components in a Cartesian basis as a multi-dimensional array. Common operations such as addition, multiplication, and transposition can be performed on tensors.
3. While a matrix can represent the components of a tensor, not all matrices are tensors. For a matrix to represent a tensor, its application to a vector must produce another vector that transforms properly under a change of basis.

2. Index Notation for Vector and Tensor Operations

Operations on Cartesian components of vectors and tensors may be expressed very efficiently and clearly using index
notation.
2.1. Vector and tensor components.
Let x be a (three-dimensional) vector and let S be a second-order tensor. Let $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ be a Cartesian basis. Denote the components of x in this basis by $(x_1, x_2, x_3)$, and denote the components of S by $S_{ij}$, $i, j \in \{1, 2, 3\}$.

Using index notation, we would express x and S as
$$\mathbf{x} = x_i\,\mathbf{e}_i, \qquad \mathbf{S} = S_{ij}\,\mathbf{e}_i \otimes \mathbf{e}_j .$$

2.2. Conventions and special symbols for index notation

Range Convention: Lower-case Latin subscripts (i, j, k, ...) have the range (1, 2, 3). The symbol $x_i$ denotes the three components of a vector, $(x_1, x_2, x_3)$. The symbol $S_{ij}$ denotes the nine components of a second-order tensor, $(S_{11}, S_{12}, S_{13}, S_{21}, \dots, S_{33})$.

Summation convention (Einstein convention): If an index is repeated in a product of vectors or tensors, summation is implied over the repeated index. Thus
$$\alpha = a_i b_i \equiv \sum_{i=1}^{3} a_i b_i = a_1 b_1 + a_2 b_2 + a_3 b_3,$$
$$b_i = A_{ik}\, c_k, \qquad C_{ij} = A_{ik}\, B_{kj} \;\Leftrightarrow\; [C] = [A][B] .$$
In the last two equations, [A], [B] and [C] denote the component matrices of A, B and C.
The Kronecker Delta: The symbol $\delta_{ij}$ is known as the Kronecker delta, and has the properties
$$\delta_{ij} = \begin{cases} 1 & i = j \\ 0 & i \neq j \end{cases}$$
thus
$$\delta_{11} = \delta_{22} = \delta_{33} = 1, \qquad \delta_{12} = \delta_{21} = \delta_{13} = \dots = 0 .$$
You can also think of $\delta_{ij}$ as the components of the identity tensor, or of a $3 \times 3$ identity matrix. Observe the following useful results:
$$\delta_{ii} = 3, \qquad \delta_{ij}\, v_j = v_i, \qquad \delta_{ij}\, S_{jk} = S_{ik}, \qquad \delta_{ij}\,\delta_{jk} = \delta_{ik} .$$

The Permutation Symbol: The symbol $\epsilon_{ijk}$ has properties
$$\epsilon_{ijk} = \begin{cases} 1 & (i,j,k) \in \{(1,2,3),\,(2,3,1),\,(3,1,2)\} \\ -1 & (i,j,k) \in \{(1,3,2),\,(3,2,1),\,(2,1,3)\} \\ 0 & \text{if any two of } i, j, k \text{ are equal} \end{cases}$$
thus
$$\epsilon_{123} = \epsilon_{231} = \epsilon_{312} = 1, \qquad \epsilon_{132} = \epsilon_{321} = \epsilon_{213} = -1 .$$
Note that
$$\epsilon_{ijk} = \mathbf{e}_i \cdot \left(\mathbf{e}_j \times \mathbf{e}_k\right), \qquad \epsilon_{ijk}\,\epsilon_{klm} = \delta_{il}\,\delta_{jm} - \delta_{im}\,\delta_{jl} .$$
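These symbols translate directly into NumPy arrays. The sketch below (illustrative values throughout) builds $\delta_{ij}$ and $\epsilon_{ijk}$ numerically and checks the $\epsilon$-$\delta$ identity quoted above:

```python
import numpy as np

# Kronecker delta as a 3x3 identity array
delta = np.eye(3)

# Permutation symbol as a 3x3x3 array
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0   # even permutations of (1,2,3)
    eps[i, k, j] = -1.0  # odd permutations

# Check: eps_ijk eps_klm = delta_il delta_jm - delta_im delta_jl
lhs = np.einsum('ijk,klm->ijlm', eps, eps)
rhs = np.einsum('il,jm->ijlm', delta, delta) - np.einsum('im,jl->ijlm', delta, delta)
assert np.allclose(lhs, rhs)

# delta_ii = 3 (a repeated index is summed)
assert np.einsum('ii->', delta) == 3.0
```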

2.3. Rules of index notation

1. The same index (subscript) may not appear more than twice in a product of two (or more) vectors or tensors. Thus
$$a_i = \epsilon_{ijk}\, b_j c_k, \qquad \alpha = S_{ij}\, a_i b_j$$
are valid, but
$$a_i = \epsilon_{ijk}\, b_j c_j, \qquad \alpha = S_{ij}\, a_i b_i c_i$$
are meaningless.
2. Free indices on each term of an equation must agree. Thus
$$a_i = b_i + c_i, \qquad A_{ij} = B_{ik}\, C_{kj}$$
are valid, but
$$a_i = b_j, \qquad a_i = A_{ij}\, b_j + c_k$$
are meaningless.
3. Free and dummy indices may be changed without altering the meaning of an expression, provided that rules 1 and 2 are not violated. Thus
$$a_i b_i = a_k b_k, \qquad A_{ik}\, b_k = A_{ij}\, b_j .$$

2.4. Vector operations expressed using index notation

Addition: $\mathbf{c} = \mathbf{a} + \mathbf{b} \;\Leftrightarrow\; c_i = a_i + b_i$

Dot Product: $\alpha = \mathbf{a} \cdot \mathbf{b} \;\Leftrightarrow\; \alpha = a_i b_i$

Vector (Cross) Product: $\mathbf{c} = \mathbf{a} \times \mathbf{b} \;\Leftrightarrow\; c_i = \epsilon_{ijk}\, a_j b_k$

Dyadic Product: $\mathbf{S} = \mathbf{a} \otimes \mathbf{b} \;\Leftrightarrow\; S_{ij} = a_i b_j$

Change of Basis: Let a be a vector. Let $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ be a Cartesian basis, and denote the components of a in this basis by $a_i$. Let $\{\mathbf{m}_1, \mathbf{m}_2, \mathbf{m}_3\}$ be a second Cartesian basis, and denote the components of a in this basis by $a'_i$. Then define
$$Q_{ij} = \mathbf{m}_i \cdot \mathbf{e}_j = \cos\theta_{ij},$$
where $\theta_{ij}$ denotes the angle between the unit vectors $\mathbf{m}_i$ and $\mathbf{e}_j$. Then
$$a'_i = Q_{ij}\, a_j, \qquad a_i = Q_{ji}\, a'_j .$$
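As a quick numerical illustration, the change-of-basis rule $a'_i = Q_{ij} a_j$ can be checked with NumPy. The second basis here is an assumed 30° rotation about $\mathbf{e}_3$; any orthonormal basis would do:

```python
import numpy as np

# Original basis: rows of the identity; second basis: rotated 30 degrees about e3
t = np.deg2rad(30.0)
e = np.eye(3)
m = np.array([[np.cos(t),  np.sin(t), 0.0],
              [-np.sin(t), np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])  # rows are m_1, m_2, m_3

Q = m @ e.T                       # transformation matrix Q_ij = m_i . e_j

a = np.array([1.0, 2.0, 3.0])     # components of a in the e-basis
a_prime = Q @ a                   # components in the m-basis: a'_i = Q_ij a_j

# The vector itself is unchanged: reassembling a'_i m_i recovers a_i e_i
assert np.allclose(a_prime @ m, a)
# Q is orthogonal, so the length of a is preserved
assert np.allclose(np.linalg.norm(a_prime), np.linalg.norm(a))
```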

2.5. Tensor operations expressed using index notation.

Addition: $\mathbf{C} = \mathbf{A} + \mathbf{B} \;\Leftrightarrow\; C_{ij} = A_{ij} + B_{ij}$

Transpose: $\mathbf{C} = \mathbf{A}^T \;\Leftrightarrow\; C_{ij} = A_{ji}$

Scalar Products: $\mathbf{A} : \mathbf{B} = A_{ij}\, B_{ij}$ and $\mathbf{A} \cdot\cdot\, \mathbf{B} = A_{ij}\, B_{ji}$

Product of tensor and vector: $\mathbf{b} = \mathbf{A}\,\mathbf{a} \;\Leftrightarrow\; b_i = A_{ij}\, a_j$

Product of two tensors: $\mathbf{C} = \mathbf{A}\,\mathbf{B} \;\Leftrightarrow\; C_{ij} = A_{ik}\, B_{kj}$

Determinant: $\det \mathbf{A} = \epsilon_{ijk}\, A_{1i} A_{2j} A_{3k} = \dfrac{1}{6}\,\epsilon_{ijk}\,\epsilon_{pqr}\, A_{ip} A_{jq} A_{kr}$

Inverse: $\mathbf{A}^{-1}\mathbf{A} = \mathbf{I} \;\Leftrightarrow\; \left(A^{-1}\right)_{ij} A_{jk} = \delta_{ik}$

Change of Basis: Let A be a second-order tensor. Let $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ be a Cartesian basis, and denote the components of A in this basis by $A_{ij}$. Let $\{\mathbf{m}_1, \mathbf{m}_2, \mathbf{m}_3\}$ be a second basis, and denote the components of A in this basis by $A'_{ij}$. Then define
$$Q_{ij} = \mathbf{m}_i \cdot \mathbf{e}_j = \cos\theta_{ij},$$
where $\theta_{ij}$ denotes the angle between the unit vectors $\mathbf{m}_i$ and $\mathbf{e}_j$. Then
$$A'_{ij} = Q_{ip}\, A_{pq}\, Q_{jq}, \qquad [A'] = [Q][A][Q]^T .$$
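A minimal NumPy sketch checking that the index expressions above agree with ordinary matrix algebra (the test matrices are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.random((3, 3)), rng.random((3, 3))

# C_ij = A_ik B_kj is the ordinary matrix product
assert np.allclose(np.einsum('ik,kj->ij', A, B), A @ B)

# det A = (1/6) eps_ijk eps_pqr A_ip A_jq A_kr
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0
det = np.einsum('ijk,pqr,ip,jq,kr->', eps, eps, A, A, A) / 6.0
assert np.isclose(det, np.linalg.det(A))
```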

2.6. Calculus using index notation

The derivative $\partial x_i / \partial x_j$ can be deduced by noting that
$$\frac{\partial x_1}{\partial x_1} = 1, \qquad \frac{\partial x_1}{\partial x_2} = 0,$$
and so on, since the coordinates $x_1, x_2, x_3$ are independent variables. Therefore
$$\frac{\partial x_i}{\partial x_j} = \delta_{ij} .$$
The same argument can be used for higher-order tensors:
$$\frac{\partial A_{ij}}{\partial A_{kl}} = \delta_{ik}\,\delta_{jl} .$$

2.7. Examples of algebraic manipulations using index notation

1. Let a, b, c, d be vectors. Prove that
$$(\mathbf{a} \times \mathbf{b}) \cdot (\mathbf{c} \times \mathbf{d}) = (\mathbf{a} \cdot \mathbf{c})(\mathbf{b} \cdot \mathbf{d}) - (\mathbf{a} \cdot \mathbf{d})(\mathbf{b} \cdot \mathbf{c}) .$$
Express the left-hand side of the equation using index notation (check the rules for cross products and dot products of vectors to see how this is done):
$$(\mathbf{a} \times \mathbf{b}) \cdot (\mathbf{c} \times \mathbf{d}) = \epsilon_{ijk}\, a_j b_k\; \epsilon_{ilm}\, c_l d_m .$$
Recall the identity
$$\epsilon_{ijk}\,\epsilon_{ilm} = \delta_{jl}\,\delta_{km} - \delta_{jm}\,\delta_{kl},$$
so
$$(\mathbf{a} \times \mathbf{b}) \cdot (\mathbf{c} \times \mathbf{d}) = \left(\delta_{jl}\,\delta_{km} - \delta_{jm}\,\delta_{kl}\right) a_j b_k c_l d_m .$$
Multiply out, and note that $\delta_{jl}\, a_j c_l = a_l c_l$ (multiplying by a Kronecker delta has the effect of switching indices), so
$$(\mathbf{a} \times \mathbf{b}) \cdot (\mathbf{c} \times \mathbf{d}) = a_l c_l\, b_m d_m - a_m d_m\, b_l c_l .$$
Finally, note that $a_l c_l = \mathbf{a} \cdot \mathbf{c}$, and similarly for other products with the same index, so that
$$(\mathbf{a} \times \mathbf{b}) \cdot (\mathbf{c} \times \mathbf{d}) = (\mathbf{a} \cdot \mathbf{c})(\mathbf{b} \cdot \mathbf{d}) - (\mathbf{a} \cdot \mathbf{d})(\mathbf{b} \cdot \mathbf{c}) .$$
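A quick numerical spot-check of the identity, using arbitrary vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, c, d = rng.random((4, 3))

lhs = np.dot(np.cross(a, b), np.cross(c, d))
rhs = np.dot(a, c) * np.dot(b, d) - np.dot(a, d) * np.dot(b, c)
assert np.isclose(lhs, rhs)
```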

2. The stress-strain relation for linear elasticity may be expressed as
$$\sigma_{ij} = \frac{E}{1+\nu}\left[\varepsilon_{ij} + \frac{\nu}{1-2\nu}\,\varepsilon_{kk}\,\delta_{ij}\right],$$
where $\sigma_{ij}$ and $\varepsilon_{ij}$ are the components of the stress and strain tensors, and E and $\nu$ denote Young's modulus and Poisson's ratio. Find an expression for strain in terms of stress.
Set i = j to see that
$$\sigma_{ii} = \frac{E}{1+\nu}\left[\varepsilon_{ii} + \frac{3\nu}{1-2\nu}\,\varepsilon_{kk}\right].$$
Recall that $\delta_{ii} = 3$, and notice that we can replace the remaining ii by kk (a dummy index may be renamed), so
$$\sigma_{kk} = \frac{E}{1-2\nu}\,\varepsilon_{kk} \quad\Rightarrow\quad \varepsilon_{kk} = \frac{1-2\nu}{E}\,\sigma_{kk} .$$
Now, substitute for $\varepsilon_{kk}$ in the given stress-strain relation:
$$\varepsilon_{ij} = \frac{1+\nu}{E}\,\sigma_{ij} - \frac{\nu}{E}\,\sigma_{kk}\,\delta_{ij} .$$
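The inversion can be spot-checked numerically. The sketch below applies the forward stress-strain relation to an arbitrary symmetric strain and confirms that the derived formula recovers it; the values of E and ν are illustrative only:

```python
import numpy as np

E, nu = 200.0e9, 0.3                      # illustrative (steel-like) values
delta = np.eye(3)

rng = np.random.default_rng(2)
strain = rng.random((3, 3))
strain = 0.5 * (strain + strain.T)        # arbitrary symmetric strain

# Forward: sigma_ij = E/(1+nu) [eps_ij + nu/(1-2 nu) eps_kk delta_ij]
stress = E / (1 + nu) * (strain + nu / (1 - 2 * nu) * np.trace(strain) * delta)

# Inverted: eps_ij = (1+nu)/E sigma_ij - nu/E sigma_kk delta_ij
strain_back = (1 + nu) / E * stress - nu / E * np.trace(stress) * delta
assert np.allclose(strain_back, strain)
```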

3. Solve the equation
$$\sigma_{ij} = \lambda\,\varepsilon_{kk}\,\delta_{ij} + 2\mu\,\varepsilon_{ij}$$
for $\varepsilon_{ij}$ in terms of $\sigma_{ij}$.
Multiply both sides by $\delta_{ij}$ and recall that $\delta_{ij}\,\delta_{ij} = \delta_{ii} = 3$ to see that
$$\sigma_{kk} = (3\lambda + 2\mu)\,\varepsilon_{kk} \quad\Rightarrow\quad \varepsilon_{kk} = \frac{\sigma_{kk}}{3\lambda + 2\mu} .$$
Substitute back into the equation given for $\sigma_{ij}$ to see that
$$\varepsilon_{ij} = \frac{1}{2\mu}\left[\sigma_{ij} - \frac{\lambda}{3\lambda + 2\mu}\,\sigma_{kk}\,\delta_{ij}\right].$$

4. Let $\phi = \sqrt{x_k x_k}$ (the magnitude of the position vector). Calculate $\partial\phi/\partial x_i$.
We can just apply the usual chain and product rules of differentiation:
$$\frac{\partial\phi}{\partial x_i} = \frac{1}{2}\left(x_k x_k\right)^{-1/2}\frac{\partial\left(x_k x_k\right)}{\partial x_i} = \frac{1}{2}\left(x_k x_k\right)^{-1/2}\left(\delta_{ki}\,x_k + x_k\,\delta_{ki}\right) = \frac{x_i}{\phi} .$$
5. Let $\psi = A_{ij}\, x_i x_j$, where A is a constant tensor. Calculate $\partial\psi/\partial x_k$.
Using the product rule,
$$\frac{\partial\psi}{\partial x_k} = A_{ij}\left(\delta_{ik}\,x_j + x_i\,\delta_{jk}\right) = \left(A_{kj} + A_{jk}\right) x_j .$$

A Brief Introduction to Tensors and their properties

1. BASIC PROPERTIES OF TENSORS

1.1 Examples of Tensors
The gradient of a vector field is a good example of a second-order tensor. Visualize a vector field: at every point in space, the field has a vector value $\mathbf{u}(\mathbf{x})$. Let $\mathbf{G} = \nabla\mathbf{u}$ represent the gradient of u. By definition, G enables you to calculate the change in u when you move from a point x in space to a nearby point at $\mathbf{x} + d\mathbf{x}$:
$$d\mathbf{u} = \mathbf{G}\, d\mathbf{x} .$$
G is a second-order tensor. From this example, we see that when you multiply a vector by a tensor, the result is another vector.

This is a general property of all second-order tensors. A tensor is a linear mapping of a vector onto another vector. Two examples, together with the vectors they operate on, are:

The stress tensor: $\mathbf{t} = \boldsymbol{\sigma}\,\mathbf{n}$, where n is a unit vector normal to a surface, $\boldsymbol{\sigma}$ is the stress tensor, and t is the traction vector acting on the surface.

The deformation gradient tensor: $d\mathbf{w} = \mathbf{F}\, d\mathbf{x}$, where dx is an infinitesimal line element in an undeformed solid, and dw is the vector representing the deformed line element.

1.2 Matrix representation of a tensor

To evaluate and manipulate tensors, we express them as components in a basis, just as for vectors. We can use the displacement gradient to illustrate how this is done. Let $\mathbf{u}(\mathbf{x})$ be a vector field, and let $\mathbf{G} = \nabla\mathbf{u}$ represent the gradient of u. Recall the definition of G:
$$d\mathbf{u} = \mathbf{G}\, d\mathbf{x} .$$
Now, let $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ be a Cartesian basis, and express both du and dx as components. Then, calculate the components of du in terms of dx using the usual rules of calculus:
$$du_i = \frac{\partial u_i}{\partial x_j}\, dx_j .$$
We could represent this as a matrix product:
$$\begin{bmatrix} du_1 \\ du_2 \\ du_3 \end{bmatrix} = \begin{bmatrix} \partial u_1/\partial x_1 & \partial u_1/\partial x_2 & \partial u_1/\partial x_3 \\ \partial u_2/\partial x_1 & \partial u_2/\partial x_2 & \partial u_2/\partial x_3 \\ \partial u_3/\partial x_1 & \partial u_3/\partial x_2 & \partial u_3/\partial x_3 \end{bmatrix} \begin{bmatrix} dx_1 \\ dx_2 \\ dx_3 \end{bmatrix} .$$
Alternatively, using index notation, $G_{ij} = \partial u_i/\partial x_j$.

From this example we see that G can be represented as a $3 \times 3$ matrix. The elements of the matrix are known as the components of G in the basis $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$. All second-order tensors can be represented in this form. For example, a general second-order tensor S could be written as
$$[S] = \begin{bmatrix} S_{11} & S_{12} & S_{13} \\ S_{21} & S_{22} & S_{23} \\ S_{31} & S_{32} & S_{33} \end{bmatrix} .$$
You have probably already seen the matrix representation of stress and strain components in introductory courses.
Since S can be represented as a matrix, all operations that can be performed on a $3 \times 3$ matrix can also be performed on S. Examples include sums and products, the transpose, inverse, and determinant. One can also compute eigenvalues and eigenvectors for tensors, and thus define the log of a tensor, the square root of a tensor, etc. These tensor operations are summarized below.

Note that the numbers $S_{11}$, $S_{12}$, ..., $S_{33}$ depend on the basis $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$, just as the components of a vector depend on the basis used to represent the vector. However, just as the magnitude and direction of a vector are independent of the basis, so the properties of a tensor are independent of the basis. That is to say, if S is a tensor and u is a vector, then the vector $\mathbf{v} = \mathbf{S}\,\mathbf{u}$ has the same magnitude and direction, irrespective of the basis used to represent u, v, and S.
1.3 The difference between a matrix and a tensor
If a tensor is a matrix, why is a matrix not the same thing as a tensor? Well, although you can multiply the three components of a vector u by any $3 \times 3$ matrix to obtain three new numbers,
$$v_i = A_{ij}\, u_j,$$
the resulting three numbers $(v_1, v_2, v_3)$ may or may not represent the components of a vector.
If they are the components of a vector, then the matrix represents the components of a tensor A; if not, then the matrix is just an ordinary old matrix.
To check whether $(v_1, v_2, v_3)$ are the components of a vector, you need to check how they change due to a change of basis. That is to say, choose a new basis, calculate the new components of u in this basis, and calculate the new matrix in this basis (the new elements of the matrix will depend on how the matrix was defined; the elements may or may not change. If they don't, then the matrix cannot be the components of a tensor). Then, evaluate the matrix product to find a new left-hand side, say $(v'_1, v'_2, v'_3)$. If $(v'_1, v'_2, v'_3)$ are related to $(v_1, v_2, v_3)$ by the same transformation that was used to calculate the new components of u, then $(v_1, v_2, v_3)$ are the components of a vector, and, therefore, the matrix represents the components of a tensor.
1.4 Formal definition
Tensors are rather more general objects than the preceding discussion suggests. There are various ways to define a tensor formally. One way is the following:

A tensor is a linear vector-valued function defined on the set of all vectors.

More specifically, let $\mathbf{S}(\mathbf{u})$ denote a tensor operating on a vector. Linearity then requires that, for all vectors $\mathbf{u}, \mathbf{v}$ and scalars $\alpha, \beta$,
$$\mathbf{S}(\alpha\,\mathbf{u} + \beta\,\mathbf{v}) = \alpha\,\mathbf{S}(\mathbf{u}) + \beta\,\mathbf{S}(\mathbf{v}) .$$
Alternatively, one can define tensors as sets of numbers that transform in a particular way under a change of coordinate system. In this case we suppose that n-dimensional space can be parameterized by a set of n real numbers $x^i$. We could change coordinate system by introducing a second set of real numbers $\bar{x}^i$ which are invertible functions of $x^i$. Tensors can then be defined as sets of real numbers that transform in a particular way under this change in coordinate system. For example:

A tensor of zeroth rank is a scalar that is independent of the coordinate system.

A covariant tensor of rank 1 is a vector that transforms as $\bar{A}_i = A_j\,\dfrac{\partial x^j}{\partial \bar{x}^i}$

A contravariant tensor of rank 1 is a vector that transforms as $\bar{A}^i = A^j\,\dfrac{\partial \bar{x}^i}{\partial x^j}$

A covariant tensor of rank 2 transforms as $\bar{A}_{ij} = A_{kl}\,\dfrac{\partial x^k}{\partial \bar{x}^i}\,\dfrac{\partial x^l}{\partial \bar{x}^j}$

A contravariant tensor of rank 2 transforms as $\bar{A}^{ij} = A^{kl}\,\dfrac{\partial \bar{x}^i}{\partial x^k}\,\dfrac{\partial \bar{x}^j}{\partial x^l}$

A mixed tensor of rank 2 transforms as $\bar{A}^i_{\;j} = A^k_{\;l}\,\dfrac{\partial \bar{x}^i}{\partial x^k}\,\dfrac{\partial x^l}{\partial \bar{x}^j}$

Higher-rank tensors can be defined in similar ways. In solid and fluid mechanics we nearly always use Cartesian tensors (i.e. we work with the components of tensors in a Cartesian coordinate system), and this level of generality is not needed (and is rather mysterious). We might occasionally use a curvilinear coordinate system, in which we do express tensors in terms of covariant or contravariant components; this gives some sense of what these quantities mean. But since solid and fluid mechanics live in Euclidean space, we don't see some of the subtleties that arise, e.g. in the theory of general relativity.
1.5 Creating a tensor using a dyadic product of two vectors.

Let a and b be two vectors. The dyadic product of a and b is a second-order tensor S, denoted by
$$\mathbf{S} = \mathbf{a} \otimes \mathbf{b},$$
with the property
$$\mathbf{S}\,\mathbf{u} = (\mathbf{a} \otimes \mathbf{b})\,\mathbf{u} = \mathbf{a}\,(\mathbf{b} \cdot \mathbf{u})$$
for all vectors u. (Clearly, this maps u onto a vector parallel to a, with magnitude $|\mathbf{a}|\,|\mathbf{b} \cdot \mathbf{u}|$.)

The components of $\mathbf{a} \otimes \mathbf{b}$ in a basis $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ are
$$(\mathbf{a} \otimes \mathbf{b})_{ij} = a_i\, b_j .$$
Note that not all tensors can be constructed using a dyadic product of only two vectors (this is because $(\mathbf{a} \otimes \mathbf{b})\,\mathbf{u}$ always has to be parallel to a, and therefore the representation cannot map a vector onto an arbitrary vector). However, if a, b, and c are three independent vectors (i.e. no two of them are parallel), then all tensors can be constructed as a sum of scalar multiples of the nine possible dyadic products of these vectors.
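In NumPy the dyadic product is np.outer; a minimal check that $(\mathbf{a}\otimes\mathbf{b})\,\mathbf{u} = \mathbf{a}\,(\mathbf{b}\cdot\mathbf{u})$, with arbitrary vectors:

```python
import numpy as np

a = np.array([1.0, 0.0, 2.0])
b = np.array([0.0, 3.0, 1.0])
u = np.array([2.0, 1.0, 1.0])

S = np.outer(a, b)                            # S_ij = a_i b_j
assert np.allclose(S @ u, a * np.dot(b, u))   # (a(x)b) u = a (b.u)
assert np.linalg.matrix_rank(S) == 1          # a single dyad has rank one
```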

2. OPERATIONS ON SECOND ORDER TENSORS

Tensor components.
Let $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ be a Cartesian basis, and let S be a second-order tensor. The components of S in $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ may be represented as a matrix
$$[S] = \begin{bmatrix} S_{11} & S_{12} & S_{13} \\ S_{21} & S_{22} & S_{23} \\ S_{31} & S_{32} & S_{33} \end{bmatrix}, \qquad \text{where } S_{ij} = \mathbf{e}_i \cdot \left(\mathbf{S}\,\mathbf{e}_j\right).$$
The representation of a tensor in terms of its components can also be expressed in dyadic form as
$$\mathbf{S} = S_{ij}\,\mathbf{e}_i \otimes \mathbf{e}_j .$$
This representation is particularly convenient when using polar coordinates, or when using a general non-orthogonal coordinate system.

Addition
Let S and T be two tensors. Then $\mathbf{U} = \mathbf{S} + \mathbf{T}$ is also a tensor.
Denote the Cartesian components of U, S and T by matrices as defined above. The components of U are then related to the components of S and T by
$$[U] = [S] + [T] .$$
In index notation we would write
$$U_{ij} = S_{ij} + T_{ij} .$$

Product of a tensor and a vector

Let u be a vector and S a second-order tensor. Then $\mathbf{v} = \mathbf{S}\,\mathbf{u}$ is a vector.
Let $u_i$ and $v_i$ denote the components of the vectors u and v in a Cartesian basis $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$, and denote the Cartesian components of S as described above. Then
$$[v] = [S][u] .$$
Alternatively, using index notation,
$$v_i = S_{ij}\, u_j .$$
The product $\mathbf{v} = \mathbf{u}\,\mathbf{S}$ is also a vector. In component form,
$$v_i = u_j\, S_{ji}, \qquad \text{or} \qquad [v] = [S]^T[u] .$$
Observe that $\mathbf{S}\,\mathbf{u} \neq \mathbf{u}\,\mathbf{S}$ (unless S is symmetric).

Product of two tensors

Let T and S be two second-order tensors. Then $\mathbf{U} = \mathbf{T}\,\mathbf{S}$ is also a tensor.
Denote the components of U, S and T by $3 \times 3$ matrices. Then
$$[U] = [T][S] .$$
Alternatively, using index notation,
$$U_{ij} = T_{ik}\, S_{kj} .$$
Note that tensor products, like matrix products, are not commutative; i.e. $\mathbf{T}\,\mathbf{S} \neq \mathbf{S}\,\mathbf{T}$ in general.

Transpose
Let S be a tensor. The transpose of S is denoted by $\mathbf{S}^T$ and is defined so that
$$\mathbf{u} \cdot \left(\mathbf{S}^T\,\mathbf{v}\right) = \mathbf{v} \cdot \left(\mathbf{S}\,\mathbf{u}\right) \quad \text{for all vectors } \mathbf{u}, \mathbf{v} .$$
Denote the components of S by a 3x3 matrix. The components of $\mathbf{S}^T$ are then
$$\left(S^T\right)_{ij} = S_{ji},$$
i.e. the rows and columns of the matrix are switched.

Note that, if A and B are two tensors, then
$$(\mathbf{A}\,\mathbf{B})^T = \mathbf{B}^T\,\mathbf{A}^T .$$

Trace
Let S be a tensor, and denote the components of S by a $3 \times 3$ matrix. The trace of S is denoted by tr(S) or trace(S), and can be computed by summing the diagonal terms of the matrix of components:
$$\mathrm{tr}(\mathbf{S}) = S_{11} + S_{22} + S_{33} .$$
More formally, let $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ be any Cartesian basis. Then
$$\mathrm{tr}(\mathbf{S}) = \mathbf{e}_i \cdot \left(\mathbf{S}\,\mathbf{e}_i\right).$$
The trace of a tensor is an example of an invariant of the tensor: you get the same value for trace(S) whatever basis you use to define the matrix of components of S.

In index notation, the trace is written $S_{kk}$.
Contraction.
Inner Product: Let S and T be two second-order tensors. The inner product of S and T is a scalar, denoted by $\mathbf{S} : \mathbf{T}$. Represent S and T by their components in a basis. Then
$$\mathbf{S} : \mathbf{T} = S_{11}T_{11} + S_{12}T_{12} + \dots + S_{33}T_{33} .$$
In index notation, $\mathbf{S} : \mathbf{T} = S_{ij}\, T_{ij}$.
Observe that $\mathbf{S} : \mathbf{T} = \mathrm{tr}\!\left(\mathbf{S}\,\mathbf{T}^T\right)$, and also that $\mathbf{S} : \mathbf{I} = \mathrm{tr}(\mathbf{S})$, where I is the identity tensor.

Outer product: Let S and T be two second-order tensors. The outer product of S and T is a scalar, denoted by $\mathbf{S} \cdot\cdot\, \mathbf{T}$. Represent S and T by their components in a basis. Then, in index notation,
$$\mathbf{S} \cdot\cdot\, \mathbf{T} = S_{ij}\, T_{ji} .$$
Observe that $\mathbf{S} \cdot\cdot\, \mathbf{T} = \mathrm{tr}(\mathbf{S}\,\mathbf{T})$.
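Both contractions are one-line einsum calls; a short numerical check of the trace identities quoted above:

```python
import numpy as np

rng = np.random.default_rng(3)
S, T = rng.random((3, 3)), rng.random((3, 3))

inner = np.einsum('ij,ij->', S, T)   # S : T
outer = np.einsum('ij,ji->', S, T)   # S .. T

assert np.isclose(inner, np.trace(S @ T.T))               # S : T = tr(S T^T)
assert np.isclose(outer, np.trace(S @ T))                 # S .. T = tr(S T)
assert np.isclose(np.einsum('ij,ij->', S, np.eye(3)),
                  np.trace(S))                            # S : I = tr(S)
```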
Determinant
The determinant of a tensor is defined as the determinant of the matrix of its components in a basis. For a second-order tensor S,
$$\det \mathbf{S} = \epsilon_{ijk}\, S_{1i}\, S_{2j}\, S_{3k} .$$
In index notation this would read
$$\det \mathbf{S} = \frac{1}{6}\,\epsilon_{ijk}\,\epsilon_{pqr}\, S_{ip}\, S_{jq}\, S_{kr} .$$
Note that if S and T are two tensors, then
$$\det(\mathbf{S}\,\mathbf{T}) = \det \mathbf{S}\; \det \mathbf{T} .$$

Inverse
Let S be a second-order tensor. The inverse of S exists if and only if $\det \mathbf{S} \neq 0$, and is defined by
$$\mathbf{S}^{-1}\,\mathbf{S} = \mathbf{S}\,\mathbf{S}^{-1} = \mathbf{I},$$
where $\mathbf{S}^{-1}$ denotes the inverse of S and I is the identity tensor.

The inverse of a tensor may be computed by calculating the inverse of the matrix of its components. Formally, the inverse of a second-order tensor can be written in a simple form using index notation as
$$\left(S^{-1}\right)_{ij} = \frac{1}{2\,\det \mathbf{S}}\,\epsilon_{jkl}\,\epsilon_{imn}\, S_{km}\, S_{ln} .$$
In practice it is usually faster to compute the inverse using methods such as Gaussian elimination.
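The index formula can be checked against a library inverse; the sketch below follows the sign and index arrangement given above (one of several equivalent arrangements):

```python
import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

rng = np.random.default_rng(4)
S = rng.random((3, 3)) + np.eye(3)    # shifted to keep det(S) away from zero

# (S^-1)_ij = (1 / 2 det S) eps_jkl eps_imn S_km S_ln
Sinv = np.einsum('jkl,imn,km,ln->ij', eps, eps, S, S) / (2.0 * np.linalg.det(S))
assert np.allclose(Sinv, np.linalg.inv(S))
```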
Change of Basis.
Let S be a tensor, and let $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ be a Cartesian basis. Suppose that the components of S in the basis $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ are known to be $S_{ij}$.
Now, suppose that we wish to compute the components of S in a second Cartesian basis, $\{\mathbf{m}_1, \mathbf{m}_2, \mathbf{m}_3\}$. Denote these components by $S'_{ij}$.
To do so, first compute the components of the transformation matrix [Q],
$$Q_{ij} = \mathbf{m}_i \cdot \mathbf{e}_j$$
(this is the same matrix you would use to transform vector components from $\{\mathbf{e}_i\}$ to $\{\mathbf{m}_i\}$). Then,
$$[S'] = [Q][S][Q]^T,$$
or, written out in full,
$$\begin{bmatrix} S'_{11} & S'_{12} & S'_{13} \\ S'_{21} & S'_{22} & S'_{23} \\ S'_{31} & S'_{32} & S'_{33} \end{bmatrix} = \begin{bmatrix} Q_{11} & Q_{12} & Q_{13} \\ Q_{21} & Q_{22} & Q_{23} \\ Q_{31} & Q_{32} & Q_{33} \end{bmatrix} \begin{bmatrix} S_{11} & S_{12} & S_{13} \\ S_{21} & S_{22} & S_{23} \\ S_{31} & S_{32} & S_{33} \end{bmatrix} \begin{bmatrix} Q_{11} & Q_{21} & Q_{31} \\ Q_{12} & Q_{22} & Q_{32} \\ Q_{13} & Q_{23} & Q_{33} \end{bmatrix} .$$
To prove this result, let u and v be vectors satisfying
$$\mathbf{v} = \mathbf{S}\,\mathbf{u} .$$
Denote the components of u and v in the two bases by $u_i, v_i$ and $u'_i, v'_i$, respectively. Recall that the vector components are related by
$$[u'] = [Q][u], \qquad [v'] = [Q][v] .$$
Now, we could express the tensor-vector product in either basis:
$$[v] = [S][u], \qquad [v'] = [S'][u'] . \tag{1}$$
Substitute for $[u']$ and $[v']$ from above into the second of these two relations, and we see that
$$[Q][v] = [S'][Q][u] .$$
Recall that $[Q]^T[Q] = [I]$, so multiplying both sides by $[Q]^T$ shows that
$$[v] = [Q]^T[S'][Q][u],$$
so, comparing with the first of equation (1),
$$[S] = [Q]^T[S'][Q] \quad\Leftrightarrow\quad [S'] = [Q][S][Q]^T,$$
as stated.
In index notation, we would write
$$S'_{ij} = Q_{ip}\, S_{pq}\, Q_{jq}, \qquad S_{ij} = Q_{pi}\, S'_{pq}\, Q_{qj} .$$
Another, perhaps cleaner, way to derive this result is to expand the two tensors as the appropriate dyadic products of the basis vectors:
$$\mathbf{S} = S_{ij}\,\mathbf{e}_i \otimes \mathbf{e}_j = S'_{ij}\,\mathbf{m}_i \otimes \mathbf{m}_j .$$
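A numerical check of $[S'] = [Q][S][Q]^T$: transform the components of S and of a vector u, and confirm that the product S u computed entirely in the new basis agrees:

```python
import numpy as np

# An arbitrary orthogonal Q, obtained from a QR factorization
rng = np.random.default_rng(5)
Q, _ = np.linalg.qr(rng.random((3, 3)))

S = rng.random((3, 3))
u = rng.random(3)

S_p = Q @ S @ Q.T     # tensor components in the new basis
u_p = Q @ u           # vector components in the new basis
v_p = Q @ (S @ u)     # components of v = S u in the new basis

# The product computed entirely in the new basis must agree
assert np.allclose(S_p @ u_p, v_p)
```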

Invariants
Invariants of a tensor are scalar functions of the tensor components which remain constant under a basis change. That is to say, the invariant has the same value when computed in two arbitrary bases $\{\mathbf{e}_i\}$ and $\{\mathbf{m}_i\}$. A symmetric second-order tensor always has three independent invariants.

Examples of invariants are:
1. The three eigenvalues
2. The determinant
3. The trace
4. The inner and outer products
These are not all independent; for example, any of 2-4 can be calculated in terms of 1.

In practice, the most commonly used invariants are:
$$I_1 = \mathrm{tr}(\mathbf{S}), \qquad I_2 = \frac{1}{2}\left[\left(\mathrm{tr}\,\mathbf{S}\right)^2 - \mathrm{tr}\left(\mathbf{S}^2\right)\right], \qquad I_3 = \det \mathbf{S} .$$
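Invariance under a change of basis is easy to confirm numerically; here Q is an arbitrary orthogonal matrix obtained from a QR factorization:

```python
import numpy as np

rng = np.random.default_rng(6)
S = rng.random((3, 3))
S = 0.5 * (S + S.T)                       # symmetric test tensor
Q, _ = np.linalg.qr(rng.random((3, 3)))   # arbitrary orthogonal basis change
S_p = Q @ S @ Q.T

def invariants(M):
    I1 = np.trace(M)
    I2 = 0.5 * (np.trace(M) ** 2 - np.trace(M @ M))
    I3 = np.linalg.det(M)
    return I1, I2, I3

assert np.allclose(invariants(S), invariants(S_p))
```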

Eigenvalues and Eigenvectors (Principal values and directions)

Let S be a second-order tensor. The scalars $\lambda$ and unit vectors m which satisfy
$$\mathbf{S}\,\mathbf{m} = \lambda\,\mathbf{m}$$
are known as the eigenvalues and eigenvectors of S, or the principal values and principal directions of S. Note that $\lambda$ may be complex. For a second-order tensor in three dimensions, there are generally three values of $\lambda$ and three unique unit vectors m which satisfy this equation. Occasionally, there may be only two or one distinct value of $\lambda$. If this is the case, there are infinitely many possible vectors m that satisfy the equation. The eigenvalues of a tensor, and the components of the eigenvectors, may be computed by finding the eigenvalues and eigenvectors of the matrix of components.

The eigenvalues of a symmetric tensor are always real, and its eigenvectors are mutually perpendicular (these two results are important and are proved below). The eigenvalues of a skew tensor are always pure imaginary or zero.

The eigenvalues of a second-order tensor are computed using the condition
$$\det(\mathbf{S} - \lambda\,\mathbf{I}) = 0 .$$
This yields a cubic equation, which can be expressed as
$$\lambda^3 - I_1\,\lambda^2 + I_2\,\lambda - I_3 = 0 .$$
There are various ways to solve the resulting cubic equation explicitly; a solution for symmetric S is given below, but the results for a general tensor are too messy to be given here. The eigenvectors are then computed from the condition
$$(\mathbf{S} - \lambda\,\mathbf{I})\,\mathbf{m} = \mathbf{0} .$$
The Cayley-Hamilton Theorem

Let S be a second-order tensor and let $I_1, I_2, I_3$ be the three invariants defined above. Then
$$\mathbf{S}^3 - I_1\,\mathbf{S}^2 + I_2\,\mathbf{S} - I_3\,\mathbf{I} = \mathbf{0}$$
(i.e. a tensor satisfies its characteristic equation). There is an obscure trick to show this. Consider the tensor $\mathbf{S} - \lambda\,\mathbf{I}$ (where $\lambda$ is an arbitrary scalar), and let T be the adjoint of $\mathbf{S} - \lambda\,\mathbf{I}$ (the adjoint is just the inverse multiplied by the determinant), which satisfies
$$\mathbf{T}\,(\mathbf{S} - \lambda\,\mathbf{I}) = \det(\mathbf{S} - \lambda\,\mathbf{I})\,\mathbf{I} = \left(-\lambda^3 + I_1\,\lambda^2 - I_2\,\lambda + I_3\right)\mathbf{I} .$$
Assume that T = $\mathbf{T}_0 + \lambda\,\mathbf{T}_1 + \lambda^2\,\mathbf{T}_2$ (the adjoint must be quadratic in $\lambda$). Substituting in the preceding equation and equating coefficients of each power of $\lambda$ shows that
$$\mathbf{T}_2 = \mathbf{I}, \qquad \mathbf{T}_2\,\mathbf{S} - \mathbf{T}_1 = I_1\,\mathbf{I}, \qquad \mathbf{T}_1\,\mathbf{S} - \mathbf{T}_0 = -I_2\,\mathbf{I}, \qquad \mathbf{T}_0\,\mathbf{S} = I_3\,\mathbf{I} .$$
Use these to substitute for $\mathbf{T}_0, \mathbf{T}_1, \mathbf{T}_2$ into the last relation:
$$\mathbf{T}_0 = \mathbf{S}^2 - I_1\,\mathbf{S} + I_2\,\mathbf{I} \quad\Rightarrow\quad \mathbf{S}^3 - I_1\,\mathbf{S}^2 + I_2\,\mathbf{S} - I_3\,\mathbf{I} = \mathbf{0} .$$
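A direct numerical verification of the theorem for an arbitrary tensor:

```python
import numpy as np

rng = np.random.default_rng(7)
S = rng.random((3, 3))

I1 = np.trace(S)
I2 = 0.5 * (np.trace(S) ** 2 - np.trace(S @ S))
I3 = np.linalg.det(S)

# S^3 - I1 S^2 + I2 S - I3 I should vanish
residual = S @ S @ S - I1 * (S @ S) + I2 * S - I3 * np.eye(3)
assert np.allclose(residual, 0.0)
```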

3 SPECIAL TENSORS

Identity tensor. The identity tensor I is the tensor such that, for any tensor S or vector v,
$$\mathbf{I}\,\mathbf{S} = \mathbf{S}\,\mathbf{I} = \mathbf{S}, \qquad \mathbf{I}\,\mathbf{v} = \mathbf{v} .$$
In any basis, the identity tensor has components
$$[I] = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad I_{ij} = \delta_{ij} .$$
Symmetric Tensor. A symmetric tensor S has the property
$$\mathbf{S} = \mathbf{S}^T, \qquad S_{ij} = S_{ji} .$$
The components of a symmetric tensor have the form
$$[S] = \begin{bmatrix} S_{11} & S_{12} & S_{13} \\ S_{12} & S_{22} & S_{23} \\ S_{13} & S_{23} & S_{33} \end{bmatrix},$$
so that there are only six independent components of the tensor, instead of nine. Symmetric tensors have some nice properties:

The eigenvectors of a symmetric tensor with distinct eigenvalues are orthogonal. To see this, let $\mathbf{m}_1, \mathbf{m}_2$ be two eigenvectors, with corresponding eigenvalues $\lambda_1 \neq \lambda_2$. Then
$$\mathbf{m}_2 \cdot (\mathbf{S}\,\mathbf{m}_1) = \lambda_1\,\mathbf{m}_1 \cdot \mathbf{m}_2, \qquad \mathbf{m}_1 \cdot (\mathbf{S}\,\mathbf{m}_2) = \lambda_2\,\mathbf{m}_1 \cdot \mathbf{m}_2 .$$
But for a symmetric tensor, $\mathbf{m}_2 \cdot (\mathbf{S}\,\mathbf{m}_1) = \mathbf{m}_1 \cdot (\mathbf{S}^T\mathbf{m}_2) = \mathbf{m}_1 \cdot (\mathbf{S}\,\mathbf{m}_2)$, so
$$(\lambda_1 - \lambda_2)\,\mathbf{m}_1 \cdot \mathbf{m}_2 = 0 \quad\Rightarrow\quad \mathbf{m}_1 \cdot \mathbf{m}_2 = 0 .$$

The eigenvalues of a symmetric tensor are real. To see this, suppose that $\lambda, \mathbf{m}$ are a complex eigenvalue/eigenvector pair, and let $\bar{\lambda}, \bar{\mathbf{m}}$ denote their complex conjugates. Then, by definition,
$$\mathbf{S}\,\mathbf{m} = \lambda\,\mathbf{m} \quad\Rightarrow\quad \bar{\mathbf{m}} \cdot (\mathbf{S}\,\mathbf{m}) = \lambda\,\bar{\mathbf{m}} \cdot \mathbf{m} .$$
And, taking complex conjugates (S is real),
$$\mathbf{S}\,\bar{\mathbf{m}} = \bar{\lambda}\,\bar{\mathbf{m}} \quad\Rightarrow\quad \mathbf{m} \cdot (\mathbf{S}\,\bar{\mathbf{m}}) = \bar{\lambda}\,\mathbf{m} \cdot \bar{\mathbf{m}} ;$$
hence $\bar{\mathbf{m}} \cdot (\mathbf{S}\,\mathbf{m}) - \mathbf{m} \cdot (\mathbf{S}\,\bar{\mathbf{m}}) = (\lambda - \bar{\lambda})\,\mathbf{m} \cdot \bar{\mathbf{m}}$. But note that for a symmetric tensor, $\bar{\mathbf{m}} \cdot (\mathbf{S}\,\mathbf{m}) = \mathbf{m} \cdot (\mathbf{S}\,\bar{\mathbf{m}})$. Thus $(\lambda - \bar{\lambda})\,\mathbf{m} \cdot \bar{\mathbf{m}} = 0$, and since $\mathbf{m} \cdot \bar{\mathbf{m}} > 0$, it follows that $\lambda = \bar{\lambda}$, i.e. $\lambda$ is real.
The eigenvalues of a symmetric tensor can be computed by solving the characteristic cubic $\lambda^3 - I_1\lambda^2 + I_2\lambda - I_3 = 0$ explicitly. The three (real) roots are
$$\lambda_k = \frac{I_1}{3} + \frac{2}{3}\sqrt{I_1^2 - 3 I_2}\,\cos\!\left(\frac{\theta + 2\pi(k-1)}{3}\right), \quad k = 1, 2, 3, \qquad \cos\theta = \frac{2 I_1^3 - 9 I_1 I_2 + 27 I_3}{2\left(I_1^2 - 3 I_2\right)^{3/2}} .$$
The eigenvectors can then be found by back-substitution into
$$(\mathbf{S} - \lambda\,\mathbf{I})\,\mathbf{m} = \mathbf{0} .$$
To do this, note that the matrix equation can be written as
$$\begin{bmatrix} S_{11} - \lambda & S_{12} & S_{13} \\ S_{12} & S_{22} - \lambda & S_{23} \\ S_{13} & S_{23} & S_{33} - \lambda \end{bmatrix} \begin{bmatrix} m_1 \\ m_2 \\ m_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} .$$
Since the determinant of the matrix is zero, we can discard any row in the equation system and take any column over to the right-hand side. For example, if the tensor has at least one eigenvector with $m_3 \neq 0$, then the values of $m_1, m_2$ for this eigenvector can be found by discarding the third row, and writing
$$\begin{bmatrix} S_{11} - \lambda & S_{12} \\ S_{12} & S_{22} - \lambda \end{bmatrix} \begin{bmatrix} m_1 / m_3 \\ m_2 / m_3 \end{bmatrix} = -\begin{bmatrix} S_{13} \\ S_{23} \end{bmatrix} .$$
Spectral decomposition of a symmetric tensor. Let S be a symmetric second-order tensor, and let $\lambda_i, \mathbf{m}_i$ (i = 1, 2, 3) be the three eigenvalues and eigenvectors of S. Then S can be expressed as
$$\mathbf{S} = \sum_{i=1}^{3} \lambda_i\,\mathbf{m}_i \otimes \mathbf{m}_i .$$
To see this, note that S can always be expanded as a sum of 9 dyadic products of an orthogonal basis,
$$\mathbf{S} = S_{ij}\,\mathbf{m}_i \otimes \mathbf{m}_j, \qquad S_{ij} = \mathbf{m}_i \cdot \left(\mathbf{S}\,\mathbf{m}_j\right).$$
But since the $\mathbf{m}_j$ are eigenvectors, it follows that $S_{ij} = \mathbf{m}_i \cdot \left(\lambda_j\,\mathbf{m}_j\right) = \lambda_j\,\delta_{ij}$ (no sum on j), which gives the result.
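The decomposition can be confirmed by reassembling S from the eigenpairs returned by a standard solver:

```python
import numpy as np

rng = np.random.default_rng(9)
S = rng.random((3, 3))
S = 0.5 * (S + S.T)

lam, m = np.linalg.eigh(S)    # columns of m are the (orthonormal) eigenvectors

# S = sum_i lam_i m_i (x) m_i
S_rebuilt = sum(lam[i] * np.outer(m[:, i], m[:, i]) for i in range(3))
assert np.allclose(S_rebuilt, S)
```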

Skew Tensor. A skew tensor S has the property
$$\mathbf{S} = -\mathbf{S}^T, \qquad S_{ij} = -S_{ji} .$$
The components of a skew tensor have the form
$$[S] = \begin{bmatrix} 0 & S_{12} & S_{13} \\ -S_{12} & 0 & S_{23} \\ -S_{13} & -S_{23} & 0 \end{bmatrix} .$$
Every second-order skew tensor has a dual vector w that satisfies
$$\mathbf{S}\,\mathbf{u} = \mathbf{w} \times \mathbf{u}$$
for all vectors u. You can see this by noting that
$$\mathbf{w} = -S_{23}\,\mathbf{e}_1 + S_{13}\,\mathbf{e}_2 - S_{12}\,\mathbf{e}_3$$
and expanding out the tensor and cross products explicitly. In index notation, we can also write
$$S_{ij} = -\epsilon_{ijk}\, w_k, \qquad w_i = -\frac{1}{2}\,\epsilon_{ijk}\, S_{jk} .$$
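A numerical check of the dual-vector relations, using the sign convention above:

```python
import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

w = np.array([1.0, -2.0, 0.5])
S = -np.einsum('ijk,k->ij', eps, w)     # S_ij = -eps_ijk w_k

assert np.allclose(S, -S.T)             # S is skew
u = np.array([0.3, 0.7, -1.1])
assert np.allclose(S @ u, np.cross(w, u))                     # S u = w x u
assert np.allclose(-0.5 * np.einsum('ijk,jk->i', eps, S), w)  # recover w
```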
Orthogonal Tensors. An orthogonal tensor R has the property
$$\mathbf{R}\,\mathbf{R}^T = \mathbf{R}^T\mathbf{R} = \mathbf{I} .$$
An orthogonal tensor must have $\det \mathbf{R} = \pm 1$; a tensor with $\det \mathbf{R} = +1$ is known as a proper orthogonal tensor. Orthogonal tensors also have some interesting and useful properties:

Orthogonal tensors map a vector onto another vector with the same length. To see this, let u be an arbitrary vector. Then, note that
$$|\mathbf{R}\,\mathbf{u}|^2 = (\mathbf{R}\,\mathbf{u}) \cdot (\mathbf{R}\,\mathbf{u}) = \mathbf{u} \cdot \left(\mathbf{R}^T\mathbf{R}\,\mathbf{u}\right) = \mathbf{u} \cdot \mathbf{u} = |\mathbf{u}|^2 .$$

The eigenvalues of an orthogonal tensor have unit modulus: they are $\pm 1$ or $e^{\pm i\theta}$ for some value of $\theta$. To see this, let m be an eigenvector, with corresponding eigenvalue $\lambda$. By definition, $\mathbf{R}\,\mathbf{m} = \lambda\,\mathbf{m}$. Hence, $|\mathbf{R}\,\mathbf{m}| = |\lambda|\,|\mathbf{m}| = |\mathbf{m}|$, so $|\lambda| = 1$. Similarly, complex eigenvalues must occur in conjugate pairs $e^{\pm i\theta}$. Since the characteristic equation is cubic, there must be at most three eigenvalues, and at least one eigenvalue must be real.

Proper orthogonal tensors can be visualized physically as rotations. A rotation can also be represented in several other forms besides a proper orthogonal tensor. For example:

The Rodriguez representation quantifies a rotation as an angle of rotation $\theta$ (in radians) about some axis n (specified by a unit vector). Given R, there are various ways to compute n and $\theta$. For example, one way would be to find the eigenvalues and the real eigenvector. The real eigenvector (suitably normalized) must correspond to n; the complex eigenvalues give $e^{\pm i\theta}$. A faster method is to note that
$$\mathrm{tr}(\mathbf{R}) = 1 + 2\cos\theta,$$
while the skew part of R has dual vector $\sin\theta\,\mathbf{n}$.

Alternatively, given n and $\theta$, R can be computed from
$$\mathbf{R} = \mathbf{I} + \sin\theta\,\mathbf{W} + (1 - \cos\theta)\,\mathbf{W}^2,$$
where W is the skew tensor that has n as its dual vector, i.e. $\mathbf{W}\,\mathbf{u} = \mathbf{n} \times \mathbf{u}$. In index notation, this formula is
$$R_{ij} = \cos\theta\,\delta_{ij} + (1 - \cos\theta)\, n_i n_j - \sin\theta\,\epsilon_{ijk}\, n_k .$$
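A short sketch of the Rodriguez formula: build R from an (n, θ) pair, confirm it is proper orthogonal, and recover θ from the trace:

```python
import numpy as np

def rotation(n, theta):
    """R = I + sin(theta) W + (1 - cos(theta)) W^2, with W the skew tensor dual to n."""
    n = n / np.linalg.norm(n)
    W = np.array([[0.0, -n[2], n[1]],
                  [n[2], 0.0, -n[0]],
                  [-n[1], n[0], 0.0]])   # W u = n x u
    return np.eye(3) + np.sin(theta) * W + (1.0 - np.cos(theta)) * (W @ W)

n, theta = np.array([1.0, 1.0, 0.0]), 0.9
R = rotation(n, theta)

assert np.allclose(R @ R.T, np.eye(3))                       # orthogonal
assert np.isclose(np.linalg.det(R), 1.0)                     # proper
assert np.isclose(np.trace(R), 1.0 + 2.0 * np.cos(theta))    # trace relation
```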

Another useful result is the Polar Decomposition Theorem, which states that any invertible second-order tensor A can be expressed as a product of a symmetric tensor with an orthogonal tensor:
$$\mathbf{A} = \mathbf{R}\,\mathbf{U} = \mathbf{V}\,\mathbf{R},$$
where R is orthogonal and U, V are symmetric.

Moreover, the tensors R, U, V are unique. To see this, note that $\mathbf{A}^T\mathbf{A}$ is symmetric and has positive eigenvalues (to see that it is symmetric, simply take the transpose; and to see that the eigenvalues are positive, note that $d\mathbf{x} \cdot \left(\mathbf{A}^T\mathbf{A}\, d\mathbf{x}\right) = |\mathbf{A}\,d\mathbf{x}|^2 > 0$ for all vectors $d\mathbf{x} \neq \mathbf{0}$).

Let $\lambda_i$ and $\mathbf{m}_i$ (i = 1, 2, 3) be the three eigenvalues and eigenvectors of $\mathbf{A}^T\mathbf{A}$. Since the eigenvectors are orthogonal, we can write
$$\mathbf{A}^T\mathbf{A} = \sum_{i=1}^{3} \lambda_i\,\mathbf{m}_i \otimes \mathbf{m}_i .$$
We can then set
$$\mathbf{U} = \sum_{i=1}^{3} \sqrt{\lambda_i}\,\mathbf{m}_i \otimes \mathbf{m}_i$$
and define $\mathbf{R} = \mathbf{A}\,\mathbf{U}^{-1}$. U is clearly symmetric, and also $\mathbf{U}\,\mathbf{U} = \mathbf{A}^T\mathbf{A}$. To see that R is orthogonal, note that:
$$\mathbf{R}^T\mathbf{R} = \mathbf{U}^{-T}\mathbf{A}^T\mathbf{A}\,\mathbf{U}^{-1} = \mathbf{U}^{-1}\,\mathbf{U}^2\,\mathbf{U}^{-1} = \mathbf{I} .$$

Given that U and R exist we can write
$$\mathbf{A} = \mathbf{R}\,\mathbf{U} = \left(\mathbf{R}\,\mathbf{U}\,\mathbf{R}^T\right)\mathbf{R},$$
so if we define $\mathbf{V} = \mathbf{R}\,\mathbf{U}\,\mathbf{R}^T$, then $\mathbf{A} = \mathbf{V}\,\mathbf{R}$. It is easy to show that V is symmetric.

To see that the decomposition is unique, suppose that $\mathbf{A} = \hat{\mathbf{R}}\,\hat{\mathbf{U}}$ for some other tensors $\hat{\mathbf{R}}, \hat{\mathbf{U}}$. Then
$$\mathbf{A}^T\mathbf{A} = \hat{\mathbf{U}}\,\hat{\mathbf{R}}^T\hat{\mathbf{R}}\,\hat{\mathbf{U}} = \hat{\mathbf{U}}^2 = \mathbf{U}^2 .$$
But a symmetric positive-definite tensor has a unique square root, so $\hat{\mathbf{U}} = \mathbf{U}$. The uniqueness of R follows immediately, since $\mathbf{R} = \mathbf{A}\,\mathbf{U}^{-1}$.
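The constructive proof translates directly into code: compute U from the eigendecomposition of $\mathbf{A}^T\mathbf{A}$, then R and V, and check A = RU = VR. The test tensor is arbitrary (shifted to stay comfortably invertible):

```python
import numpy as np

rng = np.random.default_rng(10)
A = rng.random((3, 3)) + np.eye(3)

lam, m = np.linalg.eigh(A.T @ A)          # eigenvalues of A^T A are positive
U = sum(np.sqrt(lam[i]) * np.outer(m[:, i], m[:, i]) for i in range(3))
R = A @ np.linalg.inv(U)
V = R @ U @ R.T

assert np.allclose(R @ R.T, np.eye(3))    # R is orthogonal
assert np.allclose(U, U.T)                # U is symmetric
assert np.allclose(A, R @ U)              # A = R U
assert np.allclose(A, V @ R)              # A = V R
```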
