Tensor Algebra and Tensor Analysis for Engineers
Mikhail Itskov
Series editors
Claus Hillermeier, Neubiberg, Germany
Jörg Schröder, Essen, Germany
Bernhard Weigand, Stuttgart, Germany
More information about this series at http://www.springer.com/series/8445
Mikhail Itskov
Department of Continuum Mechanics
RWTH Aachen University
Aachen
Germany
Preface to the Fourth Edition

In this edition some new examples dealing with the inertia tensor and the propagation of compression and shear waves in an isotropic linear-elastic medium are incorporated. Section 3.3 is completely revised and enriched by an example of thin membranes under hydrostatic pressure. The Laplace law derived there is illustrated by a thin-walled vessel of torus form under internal pressure. In Chap. 8, I introduce a section concerned with the deformation of a line, area and volume element, together with some accompanying kinematic identities. As in the previous editions, some new exercises and solutions are added.
Preface to the Third Edition
This edition is enriched by some new examples, problems and solutions, in particular concerned with simple shear. I also added an example with the derivation of constitutive relations and tangent moduli for hyperelastic materials with the isochoric-volumetric split of the strain energy function. Besides, Chap. 2 is completed with some new figures, for instance, illustrating spherical coordinates. These figures have again been prepared by Uwe Navrath. I also gratefully acknowledge Khiêm Ngoc Vu for careful proofreading of the manuscript. At this opportunity, I would also like to thank Springer-Verlag, and in particular Jan-Philip Schmidt, for the fast and friendly support in getting this edition published.
Preface to the Second Edition
Preface to the First Edition
Like many other textbooks the present one is based on a lecture course given by the
author for master students of the RWTH Aachen University. In spite of a somewhat
difficult matter those students were able to endure and, as far as I know, are still
fine. I wish the same for the reader of the book.
Although the present book can be referred to as a textbook, one finds little plain text inside. I tried to explain the matter in a brief way, nevertheless going into detail where necessary. I also avoided tedious introductions and lengthy remarks about the significance of one topic or another. A reader interested in tensor algebra and tensor analysis but preferring words to equations can close this book immediately after having read the preface.
The reader is assumed to be familiar with the basics of matrix algebra and
continuum mechanics and is encouraged to solve at least some of the numerous
exercises accompanying every chapter. Having read many other texts on mathematics and mechanics, I was always upset when vainly looking for solutions to the exercises that seemed most interesting to me. For this reason, all the exercises here are supplied with solutions, which amount to a substantial part of the book.
Without doubt, this part facilitates a deeper understanding of the subject.
As a research work this book is open for discussion, which will certainly contribute to improving the text for further editions. In this sense, I am very grateful for
comments, suggestions and constructive criticism from the reader. I already expect
such criticism, for example, with respect to the list of references which might be far
from complete. Indeed, throughout the book I only quote the sources indispensable
to follow the exposition and notation. For this reason, I apologize to colleagues
whose valuable contributions to the matter are not cited.
Finally, a word of acknowledgment is appropriate. I would like to thank Uwe
Navrath for having prepared most of the figures for the book. Further, I am grateful
to Alexander Ehret who taught me the first steps as well as some “dirty” tricks in
LaTeX, which were absolutely necessary to bring the manuscript to a printable
form. He and Tran Dinh Tuyen are also acknowledged for careful proofreading and
critical comments to an earlier version of the book. My special thanks go to
Springer-Verlag and in particular to Eva Hestermann-Beyerle and Monika Lempe
for their friendly support in getting this book published.
9 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
9.1 Exercises of Chap. 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
9.2 Exercises of Chap. 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
9.3 Exercises of Chap. 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
9.4 Exercises of Chap. 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
9.5 Exercises of Chap. 5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
9.6 Exercises of Chap. 6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
9.7 Exercises of Chap. 7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
9.8 Exercises of Chap. 8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
References. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
Chapter 1
Vectors and Tensors in a Finite-Dimensional Space
We start with the definition of the vector space over the field of real numbers R.
Definition 1.1 A vector space is a set V of elements called vectors satisfying the
following axioms.
[Fig. 1.1: illustration of vector addition (x + y = y + x), the negative vector −x, multiplication by scalars (2x, 2.5x) and the zero vector]
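The axioms (A) and (B) referred to below are the standard vector-space axioms. For the reader's convenience they can be summarized in the usual form (a sketch in the notation of Definition 1.1):

```latex
% (A) Axioms of vector addition:
\begin{itemize}
\item[(A.1)] $\boldsymbol{x}+\boldsymbol{y}=\boldsymbol{y}+\boldsymbol{x}$ \quad (commutativity),
\item[(A.2)] $\boldsymbol{x}+(\boldsymbol{y}+\boldsymbol{z})=(\boldsymbol{x}+\boldsymbol{y})+\boldsymbol{z}$ \quad (associativity),
\item[(A.3)] there exists a zero vector $\boldsymbol{0}$ such that $\boldsymbol{x}+\boldsymbol{0}=\boldsymbol{x}$,
\item[(A.4)] for every $\boldsymbol{x}$ there exists a negative vector $-\boldsymbol{x}$
             such that $\boldsymbol{x}+(-\boldsymbol{x})=\boldsymbol{0}$.
\end{itemize}
% (B) Axioms of multiplication by scalars $\alpha,\beta\in\mathbb{R}$:
\begin{itemize}
\item[(B.1)] $\alpha\left(\beta\boldsymbol{x}\right)=(\alpha\beta)\,\boldsymbol{x}$,
\item[(B.2)] $1\,\boldsymbol{x}=\boldsymbol{x}$,
\item[(B.3)] $\alpha\left(\boldsymbol{x}+\boldsymbol{y}\right)=\alpha\boldsymbol{x}+\alpha\boldsymbol{y}$,
\item[(B.4)] $(\alpha+\beta)\,\boldsymbol{x}=\alpha\boldsymbol{x}+\beta\boldsymbol{x}$.
\end{itemize}
```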
Indeed, the axioms (A) and (B) apply to the n-tuples if one defines addition,
multiplication by a scalar and finally the zero tuple, respectively, by
$$\boldsymbol{a} + \boldsymbol{b} = \begin{Bmatrix} a_1 + b_1 \\ a_2 + b_2 \\ \vdots \\ a_n + b_n \end{Bmatrix}, \quad \alpha \boldsymbol{a} = \begin{Bmatrix} \alpha a_1 \\ \alpha a_2 \\ \vdots \\ \alpha a_n \end{Bmatrix}, \quad \boldsymbol{0} = \begin{Bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{Bmatrix}.$$
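A minimal numerical sketch of this statement: with componentwise operations, n-tuples satisfy representative instances of the axioms (A) and (B). The helper names `add` and `scale` are illustrative, not from the text.

```python
def add(a, b):
    """Componentwise addition a + b of two n-tuples."""
    return [ai + bi for ai, bi in zip(a, b)]

def scale(alpha, a):
    """Multiplication of an n-tuple a by a scalar alpha."""
    return [alpha * ai for ai in a]

a, b = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
zero = [0.0, 0.0, 0.0]

assert add(a, b) == add(b, a)                                       # (A) commutativity
assert add(a, zero) == a                                            # (A) zero tuple
assert add(a, scale(-1.0, a)) == zero                               # (A) negative tuple
assert scale(2.0, add(a, b)) == add(scale(2.0, a), scale(2.0, b))   # (B) distributivity
```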
$$\sum_{i=1}^{n} \alpha_i \boldsymbol{x}_i = \boldsymbol{0}. \tag{1.1}$$
$$\boldsymbol{x} = \sum_{i=1}^{n} \alpha_i \boldsymbol{x}_i \tag{1.2}$$
$$\sum_{i=1}^{n} \alpha_i \boldsymbol{x}_i = \boldsymbol{0},$$
where not all αi are zero. Let αk (2 ≤ k ≤ n) be the last non-zero number, so that
αi = 0 (i = k + 1, . . . , n). Then,
$$\sum_{i=1}^{k} \alpha_i \boldsymbol{x}_i = \boldsymbol{0} \quad \Rightarrow \quad \boldsymbol{x}_k = \sum_{i=1}^{k-1} \frac{-\alpha_i}{\alpha_k} \boldsymbol{x}_i.$$
Theorem 1.2 All the bases of a finite-dimensional vector space V contain the same
number of vectors.
Proof Let G = g 1 , g 2 , . . . , g n and F = f 1 , f 2 , . . . , f m be two arbitrary bases
of V with different numbers of elements, say m > n. Then, every vector in V is a
linear combination of the following vectors:
f 1 , g1 , g2 , . . . , gn . (1.3)
These vectors are non-zero and linearly dependent. Thus, according to Theorem 1.1
we can find such a vector g k , which is a linear combination of the preceding ones.
Excluding this vector we obtain the set G′ given by
f 1 , g 1 , g 2 , . . . , g k−1 , g k+1 , . . . , g n
again with the property that every vector in V is a linear combination of the elements
of G′. Now, we consider the following vectors
f 1 , f 2 , g 1 , g 2 , . . . , g k−1 , g k+1 , . . . , g n
and repeat the excluding procedure just as before. We see that none of the vectors
f i can be eliminated in this way because they are linearly independent. As soon as
all g i (i = 1, 2, . . . , n) are exhausted we conclude that the vectors
f 1 , f 2 , . . . , f n+1
are linearly dependent. This contradicts, however, the previous assumption that they
belong to the basis F.
Definition 1.5 The dimension of a finite-dimensional vector space V is the number
of elements in a basis of V.
Theorem 1.3 Every set F = f 1 , f 2 , . . . , f n of linearly independent vectors in
an n-dimensional vector space V forms a basis of V. Every set of more than n
vectors is linearly dependent.
Proof The proof of this theorem is similar to the preceding one. Let G = g 1 , g 2 ,
. . . , g n be a basis of V. Then, the vectors (1.3) are linearly dependent and non-
zero. Excluding a vector g k we obtain a set of vectors, say G′, with the property
that every vector in V is a linear combination of the elements of G′. Repeating this
procedure we finally end up with the set F with the same property. Since the vectors
f i (i = 1, 2, . . . , n) are linearly independent they form a basis of V. Any further
vectors in V, say f n+1 , f n+2 , . . . are thus linear combinations of F. Hence, any set
of more than n vectors is linearly dependent.
Theorem 1.4 Every set F = f 1 , f 2 , . . . , f m of linearly independent vectors in
an n-dimensional vector space V can be extended to a basis.
$$\alpha \boldsymbol{x} + \alpha_1 \boldsymbol{f}_1 + \alpha_2 \boldsymbol{f}_2 + \ldots + \alpha_{m+k} \boldsymbol{f}_{m+k} = \boldsymbol{0},$$
$$\boldsymbol{x} = \sum_{i=1}^{n} x^i \boldsymbol{g}_i, \quad \forall \boldsymbol{x} \in \mathbb{V}. \tag{1.4}$$
Theorem 1.5 The representation (1.4) with respect to a given basis G is unique.
Proof Let
$$\boldsymbol{x} = \sum_{i=1}^{n} x^i \boldsymbol{g}_i \quad \text{and} \quad \boldsymbol{x} = \sum_{i=1}^{n} y^i \boldsymbol{g}_i$$
be two different representations of a vector x, where not all scalar coefficients x i and
y i (i = 1, 2, . . . , n) are pairwise identical. Then,
$$\boldsymbol{0} = \boldsymbol{x} + (-\boldsymbol{x}) = \boldsymbol{x} + (-1)\,\boldsymbol{x} = \sum_{i=1}^{n} x^i \boldsymbol{g}_i + \sum_{i=1}^{n} \left(-y^i\right) \boldsymbol{g}_i = \sum_{i=1}^{n} \left(x^i - y^i\right) \boldsymbol{g}_i,$$
where we use the identity $-\boldsymbol{x} = (-1)\,\boldsymbol{x}$ (Exercise 1.1). Thus, either the numbers $x^i$ and $y^i$ are pairwise equal, $x^i = y^i$ $(i = 1, 2, \ldots, n)$, or the vectors $\boldsymbol{g}_i$ are linearly dependent. The latter is, however, impossible because these vectors form a basis of V.
The summation of the form (1.4) is often used in tensor algebra. For this reason
it is usually represented without the summation symbol in a short form by
$$\boldsymbol{x} = \sum_{i=1}^{n} x^i \boldsymbol{g}_i = x^i \boldsymbol{g}_i. \tag{1.5}$$
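The summation convention of (1.5) can be made explicit in a few lines of code. This is a sketch with illustrative names: a repeated index, once as a component and once on a basis vector, stands for a sum over that index.

```python
def combine(coeffs, basis):
    """Return the vector x = x^i g_i, writing out the implied sum over i."""
    n = len(basis[0])
    x = [0.0] * n
    for c, g in zip(coeffs, basis):   # loop realizes the summation over i
        for k in range(n):
            x[k] += c * g[k]
    return x

# Components x^i = (2, -1) with respect to a (non-orthogonal) basis of E^2.
basis = [[1.0, 0.0], [1.0, 1.0]]
x = combine([2.0, -1.0], basis)   # -> [1.0, -1.0]
```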
The scalar product plays an important role in vector and tensor algebra. The properties
of the vector space essentially depend on whether and how the scalar product is
defined in this space.
Definition 1.6 The scalar (inner) product is a real-valued function x· y of two vectors
x and y in a vector space V, satisfying the following conditions.
(C.1) x · y = y · x (commutative rule),
x · y = 0. (1.7)
$$\boldsymbol{e}_i \cdot \boldsymbol{e}_j = \delta_{ij}, \quad i, j = 1, 2, \ldots, n, \tag{1.8}$$
where
$$\delta_{ij} = \delta^{ij} = \delta_i^j = \begin{cases} 1 & \text{for } i = j, \\ 0 & \text{for } i \neq j \end{cases} \tag{1.9}$$
$$\boldsymbol{e}_2' = \boldsymbol{x}_2 - (\boldsymbol{x}_2 \cdot \boldsymbol{e}_1)\,\boldsymbol{e}_1 \tag{1.11}$$
orthogonal to $\boldsymbol{e}_1$. This holds for the unit vector $\boldsymbol{e}_2 = \boldsymbol{e}_2' / \|\boldsymbol{e}_2'\|$ as well. It is also seen that $\|\boldsymbol{e}_2'\| = \sqrt{\boldsymbol{e}_2' \cdot \boldsymbol{e}_2'} \neq 0$ because otherwise $\boldsymbol{e}_2' = \boldsymbol{0}$ and thus $\boldsymbol{x}_2 = (\boldsymbol{x}_2 \cdot \boldsymbol{e}_1)\,\boldsymbol{e}_1 = (\boldsymbol{x}_2 \cdot \boldsymbol{e}_1) \|\boldsymbol{x}_1\|^{-1} \boldsymbol{x}_1$. However, the latter result contradicts the fact that the vectors $\boldsymbol{x}_1$ and $\boldsymbol{x}_2$ are linearly independent.
Further, we proceed to construct the vectors
$$\boldsymbol{e}_3' = \boldsymbol{x}_3 - (\boldsymbol{x}_3 \cdot \boldsymbol{e}_2)\,\boldsymbol{e}_2 - (\boldsymbol{x}_3 \cdot \boldsymbol{e}_1)\,\boldsymbol{e}_1, \quad \boldsymbol{e}_3 = \frac{\boldsymbol{e}_3'}{\left\|\boldsymbol{e}_3'\right\|} \tag{1.12}$$
orthogonal to e1 and e2 . Repeating this procedure we finally obtain the set of orthonormal vectors e1 , e2 , . . . , em . Since these vectors are non-zero and mutually orthogonal, they are linearly independent (see Exercise 1.6). In the case m = n, this set
represents, according to Theorem 1.3, the orthonormal basis (1.8) in En .
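The orthonormalization procedure of Eqs. (1.11)–(1.12) can be sketched in code. This is a minimal implementation under the stated assumptions (linearly independent input vectors); the helper names are illustrative.

```python
import math

def dot(a, b):
    """Scalar product of two vectors given by their components."""
    return sum(ai * bi for ai, bi in zip(a, b))

def gram_schmidt(vectors):
    """Gram-Schmidt procedure: orthonormalize linearly independent vectors."""
    basis = []
    for x in vectors:
        # subtract the projections onto the already constructed e_1, ..., e_{k-1}
        e = list(x)
        for q in basis:
            p = dot(x, q)
            e = [ei - p * qi for ei, qi in zip(e, q)]
        norm = math.sqrt(dot(e, e))  # non-zero by linear independence
        basis.append([ei / norm for ei in e])
    return basis

e1, e2 = gram_schmidt([[3.0, 4.0], [1.0, 0.0]])
assert abs(dot(e1, e2)) < 1e-12        # mutually orthogonal
assert abs(dot(e1, e1) - 1.0) < 1e-12  # unit length
```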
$$\boldsymbol{x} \cdot \boldsymbol{y} = x^1 y^1 + x^2 y^2 + \cdots + x^n y^n. \tag{1.13}$$
For the length of the vector x (1.6) we thus obtain the Pythagoras formula
$$\|\boldsymbol{x}\| = \sqrt{x^1 x^1 + x^2 x^2 + \cdots + x^n x^n}, \quad \boldsymbol{x} \in \mathbb{E}^n. \tag{1.14}$$
$$\boldsymbol{g}_i \cdot \boldsymbol{g}^j = \delta_i^j, \quad i, j = 1, 2, \ldots, n. \tag{1.15}$$
In the following we show that a set of vectors $\mathcal{G}' = \{\boldsymbol{g}^1, \boldsymbol{g}^2, \ldots, \boldsymbol{g}^n\}$ satisfying the conditions (1.15) always exists, is unique and forms a basis in $\mathbb{E}^n$.
Let E = {e1 , e2 , . . . , en } be an orthonormal basis in En . Since G also represents
a basis, we can write
$$\boldsymbol{e}_i = \alpha_i^j \boldsymbol{g}_j, \quad \boldsymbol{g}_i = \beta_i^j \boldsymbol{e}_j, \quad i = 1, 2, \ldots, n, \tag{1.16}$$
where $\alpha_i^j$ and $\beta_i^j$ $(i = 1, 2, \ldots, n)$ denote the components of $\boldsymbol{e}_i$ and $\boldsymbol{g}_i$, respectively.
Inserting the first relation (1.16) into the second one yields
$$\boldsymbol{g}_i = \beta_i^j \alpha_j^k \boldsymbol{g}_k \quad \Rightarrow \quad \boldsymbol{0} = \left(\beta_i^j \alpha_j^k - \delta_i^k\right) \boldsymbol{g}_k, \quad i = 1, 2, \ldots, n. \tag{1.17}$$
$$\beta_i^j \alpha_j^k = \delta_i^k, \quad i, k = 1, 2, \ldots, n. \tag{1.18}$$
Let further
$$\boldsymbol{g}^i = \alpha_j^i \boldsymbol{e}_j, \quad i = 1, 2, \ldots, n, \tag{1.19}$$
$$a_i \boldsymbol{g}^i = \boldsymbol{0},$$
where not all scalars ai (i = 1, 2, . . . , n) are zero. Multiplying both sides of this
relation scalarly by the vectors g j ( j = 1, 2, . . . , n) leads to a contradiction. Indeed,
using (1.170) (see Exercise 1.5) we obtain
$$0 = a_i \boldsymbol{g}^i \cdot \boldsymbol{g}_j = a_i \delta_j^i = a_j, \quad j = 1, 2, \ldots, n.$$
The next important question is whether the dual basis is unique. Let $\mathcal{G}' = \{\boldsymbol{g}^1, \boldsymbol{g}^2, \ldots, \boldsymbol{g}^n\}$ and $\mathcal{H} = \{\boldsymbol{h}^1, \boldsymbol{h}^2, \ldots, \boldsymbol{h}^n\}$ be two arbitrary non-coinciding bases in $\mathbb{E}^n$, both dual to $\mathcal{G} = \{\boldsymbol{g}_1, \boldsymbol{g}_2, \ldots, \boldsymbol{g}_n\}$. Then,
$$\boldsymbol{h}^i = h^i_j \boldsymbol{g}^j, \quad i = 1, 2, \ldots, n.$$
$$\boldsymbol{g}^i = g^{ij} \boldsymbol{g}_j, \quad \boldsymbol{g}_i = g_{ij} \boldsymbol{g}^j, \quad i = 1, 2, \ldots, n. \tag{1.21}$$
Inserting the second relation (1.21) into the first one yields
$$\boldsymbol{g}^i = g^{ij} g_{jk} \boldsymbol{g}^k, \quad i = 1, 2, \ldots, n. \tag{1.22}$$
Now, multiplying scalarly the first and second relation (1.21) by the vectors g j and
g j ( j = 1, 2, . . . , n), respectively, we obtain with the aid of (1.15) the following
important identities:
$$\boldsymbol{e}^i = \boldsymbol{e}_i, \quad \boldsymbol{e}_i \cdot \boldsymbol{e}^j = \delta_i^j, \quad i, j = 1, 2, \ldots, n. \tag{1.26}$$
With the aid of the dual bases one can represent an arbitrary vector in En by
$$\boldsymbol{x} = x^i \boldsymbol{g}_i = x_i \boldsymbol{g}^i, \quad \forall \boldsymbol{x} \in \mathbb{E}^n, \tag{1.27}$$
where
$$x^i = \boldsymbol{x} \cdot \boldsymbol{g}^i, \quad x_i = \boldsymbol{x} \cdot \boldsymbol{g}_i, \quad i = 1, 2, \ldots, n. \tag{1.28}$$
The components of a vector with respect to the dual bases are suitable for calculating the scalar product. For example, for two arbitrary vectors $\boldsymbol{x} = x^i \boldsymbol{g}_i = x_i \boldsymbol{g}^i$ and $\boldsymbol{y} = y^i \boldsymbol{g}_i = y_i \boldsymbol{g}^i$ we obtain
$$\boldsymbol{x} \cdot \boldsymbol{y} = x^i y^j g_{ij} = x_i y_j g^{ij} = x^i y_i = x_i y^i. \tag{1.29}$$
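A small numerical sketch of the relation (1.21) in $\mathbb{E}^2$: the dual vectors follow from $\boldsymbol{g}^i = g^{ij} \boldsymbol{g}_j$, where $[g^{ij}]$ is the inverse of the Gram matrix $g_{ij} = \boldsymbol{g}_i \cdot \boldsymbol{g}_j$. The 2×2 inverse is written out explicitly; the variable names are illustrative.

```python
def dot(a, b):
    """Scalar product of two vectors given by their components."""
    return sum(ai * bi for ai, bi in zip(a, b))

g1, g2 = [1.0, 0.0], [1.0, 1.0]            # a non-orthogonal basis of E^2
g11, g12, g22 = dot(g1, g1), dot(g1, g2), dot(g2, g2)
det = g11 * g22 - g12 * g12                 # det[g_ij] > 0 for a basis
G_inv = [[g22 / det, -g12 / det],
         [-g12 / det, g11 / det]]           # [g^{ij}] = [g_ij]^{-1}

# dual vectors g^i = g^{ij} g_j
dual = [[G_inv[i][0] * g1[k] + G_inv[i][1] * g2[k] for k in range(2)]
        for i in range(2)]

# duality condition (1.15): g_i . g^j = delta_i^j
assert abs(dot(g1, dual[0]) - 1.0) < 1e-12
assert abs(dot(g1, dual[1])) < 1e-12
assert abs(dot(g2, dual[1]) - 1.0) < 1e-12
```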
[abc] = (a × b) · c = (b × c) · a = (c × a) · b, (1.32)
where “×” denotes the vector (also called cross or outer) product of vectors. Consider
the following set of vectors:
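The cyclic identity (1.32) can be checked numerically in $\mathbb{E}^3$. This is a sketch with illustrative helper names, using the component formulas for the cross and scalar products.

```python
def cross(a, b):
    """Vector (cross) product a x b in E^3, in Cartesian components."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def dot(a, b):
    """Scalar product of two vectors given by their components."""
    return sum(ai * bi for ai, bi in zip(a, b))

def triple(a, b, c):
    """Scalar triple product [abc] = (a x b) . c."""
    return dot(cross(a, b), c)

a, b, c = [1.0, 2.0, 3.0], [0.0, 1.0, 4.0], [5.0, 6.0, 0.0]
# cyclic permutations leave the triple product unchanged, cf. (1.32)
assert triple(a, b, c) == triple(b, c, a) == triple(c, a, b)
```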