
Mathematical Engineering

Mikhail Itskov

Tensor Algebra and Tensor Analysis for Engineers
With Applications to Continuum Mechanics

Fourth Edition
Mathematical Engineering

Series editors
Claus Hillermeier, Neubiberg, Germany
Jörg Schröder, Essen, Germany
Bernhard Weigand, Stuttgart, Germany
More information about this series at http://www.springer.com/series/8445
Mikhail Itskov
Department of Continuum Mechanics
RWTH Aachen University
Aachen
Germany

ISSN 2192-4732 ISSN 2192-4740 (electronic)


Mathematical Engineering
ISBN 978-3-319-16341-3 ISBN 978-3-319-16342-0 (eBook)
DOI 10.1007/978-3-319-16342-0

Library of Congress Control Number: 2015934223

Springer Cham Heidelberg New York Dordrecht London


© Springer International Publishing Switzerland 2015
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or
dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt
from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, express or implied, with respect to the material contained
herein or for any errors or omissions that may have been made.

Printed on acid-free paper

Springer International Publishing AG Switzerland is part of Springer Science+Business Media


(www.springer.com)
Preface to the Fourth Edition

In this edition some new examples dealing with the inertia tensor and the propagation of compression and shear waves in an isotropic linear-elastic medium are incorporated. Section 3.3 is completely revised and enriched by an example of thin membranes under hydrostatic pressure. The Laplace law derived there is illustrated by a thin-walled vessel of torus form under internal pressure. In Chap. 8 I introduced a section concerned with the deformation of a line, area and volume element and some accompanying kinematic identities. As in the previous edition, some new exercises and solutions are added.

Aachen, December 2014    Mikhail Itskov
Preface to the Third Edition

This edition is enriched by some new examples, problems and solutions, in particular concerned with simple shear. I also added an example with the derivation of constitutive relations and tangent moduli for hyperelastic materials with the isochoric-volumetric split of the strain energy function. In addition, Chap. 2 is supplemented with some new figures, for instance illustrating spherical coordinates. These figures have again been prepared by Uwe Navrath. I also gratefully acknowledge Khiêm Ngoc Vu for careful proofreading of the manuscript. At this opportunity, I would also like to thank Springer-Verlag, and in particular Jan-Philip Schmidt, for the fast and friendly support in getting this edition published.

Aachen, February 2012    Mikhail Itskov
Preface to the Second Edition

This second edition is supplemented by a number of additional examples and exercises. In response to comments and questions of students using this book, solutions of many exercises have been improved for better understanding. Some changes and enhancements concern the treatment of skew-symmetric and rotation tensors in the first chapter. In addition, the text and formulae have been thoroughly reexamined and improved where necessary.

Aachen, January 2009    Mikhail Itskov
Preface to the First Edition

Like many other textbooks, the present one is based on a lecture course given by the author for master students of the RWTH Aachen University. In spite of a somewhat difficult subject matter, those students were able to endure and, as far as I know, are still fine. I wish the same for the reader of the book.

Although the present book can be referred to as a textbook, one finds only little plain text inside. I tried to explain the matter in a brief way, nevertheless going into detail where necessary. I also avoided tedious introductions and lengthy remarks about the significance of one topic or another. A reader interested in tensor algebra and tensor analysis but preferring, however, words instead of equations can close this book immediately after having read the preface.

The reader is assumed to be familiar with the basics of matrix algebra and continuum mechanics and is encouraged to solve at least some of the numerous exercises accompanying every chapter. Having read many other texts on mathematics and mechanics, I was always upset vainly looking for solutions to the exercises which seemed to be the most interesting for me. For this reason, all the exercises here are supplied with solutions amounting to a substantial part of the book. Without doubt, this part facilitates a deeper understanding of the subject.

As a research work, this book is open for discussion, which will certainly contribute to improving the text for further editions. In this sense, I am very grateful for comments, suggestions and constructive criticism from the reader. I already expect such criticism, for example, with respect to the list of references, which might be far from complete. Indeed, throughout the book I only quote the sources indispensable to follow the exposition and notation. For this reason, I apologize to colleagues whose valuable contributions to the matter are not cited.

Finally, a word of acknowledgment is appropriate. I would like to thank Uwe Navrath for having prepared most of the figures for the book. Further, I am grateful to Alexander Ehret, who taught me the first steps as well as some “dirty” tricks in LaTeX, which were absolutely necessary to bring the manuscript to a printable form. He and Tran Dinh Tuyen are also acknowledged for careful proofreading and critical comments on an earlier version of the book. My special thanks go to Springer-Verlag, and in particular to Eva Hestermann-Beyerle and Monika Lempe, for their friendly support in getting this book published.

Aachen, November 2006    Mikhail Itskov


Contents

1 Vectors and Tensors in a Finite-Dimensional Space . . . . . 1
1.1 Notion of the Vector Space . . . . . 1
1.2 Basis and Dimension of the Vector Space . . . . . 3
1.3 Components of a Vector, Summation Convention . . . . . 5
1.4 Scalar Product, Euclidean Space, Orthonormal Basis . . . . . 6
1.5 Dual Bases . . . . . 8
1.6 Second-Order Tensor as a Linear Mapping . . . . . 13
1.7 Tensor Product, Representation of a Tensor with Respect to a Basis . . . . . 18
1.8 Change of the Basis, Transformation Rules . . . . . 21
1.9 Special Operations with Second-Order Tensors . . . . . 22
1.10 Scalar Product of Second-Order Tensors . . . . . 28
1.11 Decompositions of Second-Order Tensors . . . . . 30
1.12 Tensors of Higher Orders . . . . . 32

2 Vector and Tensor Analysis in Euclidean Space . . . . . 37
2.1 Vector- and Tensor-Valued Functions, Differential Calculus . . . . . 37
2.2 Coordinates in Euclidean Space, Tangent Vectors . . . . . 39
2.3 Coordinate Transformation. Co-, Contra- and Mixed Variant Components . . . . . 43
2.4 Gradient, Covariant and Contravariant Derivatives . . . . . 45
2.5 Christoffel Symbols, Representation of the Covariant Derivative . . . . . 51
2.6 Applications in Three-Dimensional Space: Divergence and Curl . . . . . 54

3 Curves and Surfaces in Three-Dimensional Euclidean Space . . . . . 69
3.1 Curves in Three-Dimensional Euclidean Space . . . . . 69
3.2 Surfaces in Three-Dimensional Euclidean Space . . . . . 76
3.3 Application to Shell Theory . . . . . 84

4 Eigenvalue Problem and Spectral Decomposition of Second-Order Tensors . . . . . 97
4.1 Complexification . . . . . 97
4.2 Eigenvalue Problem, Eigenvalues and Eigenvectors . . . . . 99
4.3 Characteristic Polynomial . . . . . 102
4.4 Spectral Decomposition and Eigenprojections . . . . . 104
4.5 Spectral Decomposition of Symmetric Second-Order Tensors . . . . . 109
4.6 Spectral Decomposition of Orthogonal and Skew-Symmetric Second-Order Tensors . . . . . 112
4.7 Cayley-Hamilton Theorem . . . . . 116

5 Fourth-Order Tensors . . . . . 121
5.1 Fourth-Order Tensors as a Linear Mapping . . . . . 121
5.2 Tensor Products, Representation of Fourth-Order Tensors with Respect to a Basis . . . . . 122
5.3 Special Operations with Fourth-Order Tensors . . . . . 125
5.4 Super-Symmetric Fourth-Order Tensors . . . . . 128
5.5 Special Fourth-Order Tensors . . . . . 130

6 Analysis of Tensor Functions . . . . . 135
6.1 Scalar-Valued Isotropic Tensor Functions . . . . . 135
6.2 Scalar-Valued Anisotropic Tensor Functions . . . . . 139
6.3 Derivatives of Scalar-Valued Tensor Functions . . . . . 142
6.4 Tensor-Valued Isotropic and Anisotropic Tensor Functions . . . . . 152
6.5 Derivatives of Tensor-Valued Tensor Functions . . . . . 159
6.6 Generalized Rivlin’s Identities . . . . . 164

7 Analytic Tensor Functions . . . . . 169
7.1 Introduction . . . . . 169
7.2 Closed-Form Representation for Analytic Tensor Functions and Their Derivatives . . . . . 173
7.3 Special Case: Diagonalizable Tensor Functions . . . . . 176
7.4 Special Case: Three-Dimensional Space . . . . . 179
7.5 Recurrent Calculation of Tensor Power Series and Their Derivatives . . . . . 185

8 Applications to Continuum Mechanics . . . . . 191
8.1 Deformation of a Line, Area and Volume Element . . . . . 191
8.2 Polar Decomposition of the Deformation Gradient . . . . . 193
8.3 Basis-Free Representations for the Stretch and Rotation Tensor . . . . . 194
8.4 The Derivative of the Stretch and Rotation Tensor with Respect to the Deformation Gradient . . . . . 197
8.5 Time Rate of Generalized Strains . . . . . 201
8.6 Stress Conjugate to a Generalized Strain . . . . . 204
8.7 Finite Plasticity Based on the Additive Decomposition of Generalized Strains . . . . . 207

9 Solutions . . . . . 213
9.1 Exercises of Chap. 1 . . . . . 213
9.2 Exercises of Chap. 2 . . . . . 226
9.3 Exercises of Chap. 3 . . . . . 238
9.4 Exercises of Chap. 4 . . . . . 246
9.5 Exercises of Chap. 5 . . . . . 256
9.6 Exercises of Chap. 6 . . . . . 262
9.7 Exercises of Chap. 7 . . . . . 274
9.8 Exercises of Chap. 8 . . . . . 279

References . . . . . 281

Index . . . . . 285
Chapter 1
Vectors and Tensors in a Finite-Dimensional Space

1.1 Notion of the Vector Space

We start with the definition of the vector space over the field of real numbers R.

Definition 1.1 A vector space is a set V of elements called vectors satisfying the
following axioms.

A. To every pair x and y of vectors in V there corresponds a vector x + y, called the sum of x and y, such that
(A.1) x + y = y + x (addition is commutative),
(A.2) (x + y) + z = x + ( y + z) (addition is associative),
(A.3) there exists in V a unique vector zero 0, such that 0 + x = x, ∀x ∈ V,
(A.4) to every vector x in V there corresponds a unique vector −x such that
x + (−x) = 0.
B. To every pair α and x, where α is a scalar real number and x is a vector in V,
there corresponds a vector αx, called the product of α and x, such that
(B.1) α (βx) = (αβ) x (multiplication by scalars is associative),
(B.2) 1x = x,
(B.3) α (x + y) = αx + α y (multiplication by scalars is distributive with respect
to vector addition),
(B.4) (α + β) x = αx + βx (multiplication by scalars is distributive with respect
to scalar addition),
∀α, β ∈ R, ∀x, y ∈ V.

Examples of vector spaces.


(1) The set of all real numbers R.
(2) The set of all directional arrows in two or three dimensions. Applying the usual
definitions for summation, multiplication by a scalar, the negative and zero vector
(Fig. 1.1) one can easily see that the above axioms hold for directional arrows.

Fig. 1.1 Geometric illustration of vector axioms in two dimensions (vector addition, negative vector, zero vector, multiplication by a real scalar)

(3) The set of all n-tuples of real numbers:

$$a = \begin{Bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{Bmatrix}.$$

Indeed, the axioms (A) and (B) apply to the n-tuples if one defines addition, multiplication by a scalar and finally the zero tuple, respectively, by

$$a + b = \begin{Bmatrix} a_1 + b_1 \\ a_2 + b_2 \\ \vdots \\ a_n + b_n \end{Bmatrix}, \quad \alpha a = \begin{Bmatrix} \alpha a_1 \\ \alpha a_2 \\ \vdots \\ \alpha a_n \end{Bmatrix}, \quad 0 = \begin{Bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{Bmatrix}.$$

(4) The set of all real-valued functions defined on a real line.
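As a purely numerical illustration (an addition to this text, not part of the original exposition), the axioms (A) and (B) can be spot-checked for n-tuples with a few lines of NumPy; the particular tuples and scalars chosen below are arbitrary.

```python
import numpy as np

# Arbitrarily chosen 4-tuples standing in for elements of the vector space
a = np.array([1.0, -2.0, 0.5, 3.0])
b = np.array([4.0, 0.0, -1.0, 2.0])
alpha, beta = 2.0, -0.5

# (A.1) commutativity and (A.2) associativity of addition
assert np.allclose(a + b, b + a)
assert np.allclose((a + b) + b, a + (b + b))

# (A.3) zero tuple and (A.4) negative tuple
zero = np.zeros_like(a)
assert np.allclose(zero + a, a)
assert np.allclose(a + (-a), zero)

# (B.1)-(B.4) rules for multiplication by scalars
assert np.allclose(alpha * (beta * a), (alpha * beta) * a)
assert np.allclose(1.0 * a, a)
assert np.allclose(alpha * (a + b), alpha * a + alpha * b)
assert np.allclose((alpha + beta) * a, alpha * a + beta * a)
```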



1.2 Basis and Dimension of the Vector Space

Definition 1.2 A set of vectors $x_1, x_2, \ldots, x_n$ is called linearly dependent if there exists a set of corresponding scalars $\alpha_1, \alpha_2, \ldots, \alpha_n \in \mathbb{R}$, not all zero, such that

$$\sum_{i=1}^{n} \alpha_i x_i = 0. \quad (1.1)$$

Otherwise, the vectors $x_1, x_2, \ldots, x_n$ are called linearly independent. In this case, none of the vectors $x_i$ is the zero vector (Exercise 1.2).

Definition 1.3 The vector

$$x = \sum_{i=1}^{n} \alpha_i x_i \quad (1.2)$$

is called a linear combination of the vectors $x_1, x_2, \ldots, x_n$, where $\alpha_i \in \mathbb{R}$ $(i = 1, 2, \ldots, n)$.

Theorem 1.1 The set of $n$ non-zero vectors $x_1, x_2, \ldots, x_n$ is linearly dependent if and only if some vector $x_k$ $(2 \le k \le n)$ is a linear combination of the preceding ones $x_i$ $(i = 1, \ldots, k-1)$.

Proof If the vectors $x_1, x_2, \ldots, x_n$ are linearly dependent, then

$$\sum_{i=1}^{n} \alpha_i x_i = 0,$$

where not all $\alpha_i$ are zero. Let $\alpha_k$ $(2 \le k \le n)$ be the last non-zero number, so that $\alpha_i = 0$ $(i = k+1, \ldots, n)$. Then,

$$\sum_{i=1}^{k} \alpha_i x_i = 0 \quad \Rightarrow \quad x_k = \sum_{i=1}^{k-1} \frac{-\alpha_i}{\alpha_k}\, x_i.$$

Thereby, the case $k = 1$ is avoided because $\alpha_1 x_1 = 0$ implies that $x_1 = 0$ (Exercise 1.1). Thus, the sufficiency is proved. The necessity is evident.
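In numerical practice, linear dependence in the sense of Definition 1.2 is conveniently tested by the rank of the matrix whose columns are the given vectors. The following sketch is an illustration added here (the function name and the test vectors are arbitrary):

```python
import numpy as np

def linearly_independent(vectors):
    """Return True if the given equal-length vectors are linearly
    independent (the column matrix has full column rank)."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == A.shape[1]

x1 = np.array([1.0, 0.0, 0.0])
x2 = np.array([1.0, 1.0, 0.0])
x3 = x1 - 2.0 * x2            # a linear combination of x1 and x2

print(linearly_independent([x1, x2]))      # True
print(linearly_independent([x1, x2, x3]))  # False, by construction
```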

Definition 1.4 A basis in a vector space V is a set $G \subset V$ of linearly independent vectors such that every vector in V is a linear combination of elements of $G$. A vector space V is finite-dimensional if it has a finite basis.

Within this book, we restrict our attention to finite-dimensional vector spaces.


Although one can find for a finite-dimensional vector space an infinite number of
bases, they all have the same number of vectors.

Theorem 1.2 All the bases of a finite-dimensional vector space V contain the same number of vectors.

Proof Let $G = \{g_1, g_2, \ldots, g_n\}$ and $F = \{f_1, f_2, \ldots, f_m\}$ be two arbitrary bases of V with different numbers of elements, say $m > n$. Then, every vector in V is a linear combination of the following vectors:

$$f_1, g_1, g_2, \ldots, g_n. \quad (1.3)$$

These vectors are non-zero and linearly dependent. Thus, according to Theorem 1.1 we can find such a vector $g_k$, which is a linear combination of the preceding ones. Excluding this vector we obtain the set $G'$ given by

$$f_1, g_1, g_2, \ldots, g_{k-1}, g_{k+1}, \ldots, g_n,$$

again with the property that every vector in V is a linear combination of the elements of $G'$. Now, we consider the following vectors

$$f_1, f_2, g_1, g_2, \ldots, g_{k-1}, g_{k+1}, \ldots, g_n$$

and repeat the excluding procedure just as before. We see that none of the vectors $f_i$ can be eliminated in this way because they are linearly independent. As soon as all $g_i$ $(i = 1, 2, \ldots, n)$ are exhausted we conclude that the vectors

$$f_1, f_2, \ldots, f_{n+1}$$

are linearly dependent. This contradicts, however, the previous assumption that they belong to the basis $F$.
Definition 1.5 The dimension of a finite-dimensional vector space V is the number of elements in a basis of V.

Theorem 1.3 Every set $F = \{f_1, f_2, \ldots, f_n\}$ of linearly independent vectors in an n-dimensional vector space V forms a basis of V. Every set of more than n vectors is linearly dependent.

Proof The proof of this theorem is similar to the preceding one. Let $G = \{g_1, g_2, \ldots, g_n\}$ be a basis of V. Then, the vectors (1.3) are linearly dependent and non-zero. Excluding a vector $g_k$ we obtain a set of vectors, say $G'$, with the property that every vector in V is a linear combination of the elements of $G'$. Repeating this procedure we finally end up with the set $F$ with the same property. Since the vectors $f_i$ $(i = 1, 2, \ldots, n)$ are linearly independent they form a basis of V. Any further vectors in V, say $f_{n+1}, f_{n+2}, \ldots$, are thus linear combinations of $F$. Hence, any set of more than n vectors is linearly dependent.


Theorem 1.4 Every set $F = \{f_1, f_2, \ldots, f_m\}$ of linearly independent vectors in an n-dimensional vector space V can be extended to a basis.

Proof If $m = n$, then $F$ is already a basis according to Theorem 1.3. If $m < n$, then we try to find $n - m$ vectors $f_{m+1}, f_{m+2}, \ldots, f_n$, such that all the vectors $f_i$, that is, $f_1, f_2, \ldots, f_m, f_{m+1}, \ldots, f_n$, are linearly independent and consequently form a basis. Let us assume, on the contrary, that only $k < n - m$ such vectors can be found. In this case, for all $x \in V$ there exist scalars $\alpha, \alpha_1, \alpha_2, \ldots, \alpha_{m+k}$, not all zero, such that

$$\alpha x + \alpha_1 f_1 + \alpha_2 f_2 + \ldots + \alpha_{m+k} f_{m+k} = 0,$$

where $\alpha \neq 0$ since otherwise the vectors $f_i$ $(i = 1, 2, \ldots, m+k)$ would be linearly dependent. Thus, all the vectors $x$ of V are linear combinations of $f_i$ $(i = 1, 2, \ldots, m+k)$. Then, the dimension of V would be $m + k < n$, which contradicts the assumption of this theorem.

1.3 Components of a Vector, Summation Convention

Let $G = \{g_1, g_2, \ldots, g_n\}$ be a basis of an n-dimensional vector space V. Then,

$$x = \sum_{i=1}^{n} x^i g_i, \quad \forall x \in V. \quad (1.4)$$

Theorem 1.5 The representation (1.4) with respect to a given basis $G$ is unique.

Proof Let

$$x = \sum_{i=1}^{n} x^i g_i \quad \text{and} \quad x = \sum_{i=1}^{n} y^i g_i$$

be two different representations of a vector $x$, where not all scalar coefficients $x^i$ and $y^i$ $(i = 1, 2, \ldots, n)$ are pairwise identical. Then,

$$0 = x + (-x) = x + (-1)x = \sum_{i=1}^{n} x^i g_i + \sum_{i=1}^{n} \left(-y^i\right) g_i = \sum_{i=1}^{n} \left(x^i - y^i\right) g_i,$$

where we use the identity $-x = (-1)x$ (Exercise 1.1). Thus, either the numbers $x^i$ and $y^i$ are pairwise equal, $x^i = y^i$ $(i = 1, 2, \ldots, n)$, or the vectors $g_i$ are linearly dependent. The latter is impossible because these vectors form a basis of V.

The scalar numbers $x^i$ $(i = 1, 2, \ldots, n)$ in the representation (1.4) are called components of the vector $x$ with respect to the basis $G = \{g_1, g_2, \ldots, g_n\}$.

The summation of the form (1.4) is often used in tensor algebra. For this reason it is usually represented without the summation symbol in the short form

$$x = \sum_{i=1}^{n} x^i g_i = x^i g_i \quad (1.5)$$

referred to as Einstein’s summation convention. Accordingly, the summation is implied if an index appears twice in a multiplicative term, once as a superscript and once as a subscript. Such a repeated index (called a dummy index) takes the values from 1 to n (the dimension of the vector space in consideration). The sense of the index changes (from superscript to subscript or vice versa) if it appears under the fraction bar.
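Numerically, Einstein’s summation convention corresponds directly to index contraction as implemented, for example, by numpy.einsum. The following sketch, added here as an illustration (the basis vectors are chosen arbitrarily), recovers a vector from its components according to (1.5):

```python
import numpy as np

# Row i of G holds an arbitrarily chosen, linearly independent
# basis vector g_i of a three-dimensional vector space
G = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

x = np.array([2.0, -1.0, 3.0])   # a vector given in the standard basis

# Components x^i with respect to G follow from solving x = x^i g_i
xi = np.linalg.solve(G.T, x)

# Reconstruct x = x^i g_i; einsum contracts the dummy index i
x_back = np.einsum('i,ij->j', xi, G)
assert np.allclose(x_back, x)
```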

1.4 Scalar Product, Euclidean Space, Orthonormal Basis

The scalar product plays an important role in vector and tensor algebra. The properties
of the vector space essentially depend on whether and how the scalar product is
defined in this space.
Definition 1.6 The scalar (inner) product is a real-valued function $x \cdot y$ of two vectors $x$ and $y$ in a vector space V, satisfying the following conditions.

C. (C.1) $x \cdot y = y \cdot x$ (commutative rule),
(C.2) $x \cdot (y + z) = x \cdot y + x \cdot z$ (distributive rule),
(C.3) $\alpha (x \cdot y) = (\alpha x) \cdot y = x \cdot (\alpha y)$ (associative rule for the multiplication by a scalar), $\forall \alpha \in \mathbb{R}$, $\forall x, y, z \in V$,
(C.4) $x \cdot x \ge 0$ $\forall x \in V$, and $x \cdot x = 0$ if and only if $x = 0$.

An n-dimensional vector space furnished with a scalar product satisfying (C.1–C.4) is called a Euclidean space $\mathbb{E}^n$. On the basis of this scalar product one defines the Euclidean length (also called norm) of a vector $x$ by

$$\|x\| = \sqrt{x \cdot x}. \quad (1.6)$$

A vector whose length is equal to 1 is referred to as a unit vector.


Definition 1.7 Two non-zero vectors $x$ and $y$ are called orthogonal (perpendicular), denoted by $x \perp y$, if

$$x \cdot y = 0. \quad (1.7)$$

Of special interest is the so-called orthonormal basis of the Euclidean space.



Definition 1.8 A basis $E = \{e_1, e_2, \ldots, e_n\}$ of an n-dimensional Euclidean space $\mathbb{E}^n$ is called orthonormal if

$$e_i \cdot e_j = \delta_{ij}, \quad i, j = 1, 2, \ldots, n, \quad (1.8)$$

where

$$\delta_{ij} = \delta^{ij} = \delta_i^j = \begin{cases} 1 & \text{for } i = j, \\ 0 & \text{for } i \neq j \end{cases} \quad (1.9)$$

denotes the Kronecker delta.


Thus, the elements of an orthonormal basis represent pairwise orthogonal unit vectors. Of particular interest is the question of the existence of an orthonormal basis. Now, we are going to demonstrate that every set of $m \le n$ linearly independent vectors in $\mathbb{E}^n$ can be orthogonalized and normalized by means of a linear transformation (Gram-Schmidt procedure). In other words, starting from linearly independent vectors $x_1, x_2, \ldots, x_m$ one can always construct their linear combinations $e_1, e_2, \ldots, e_m$ such that $e_i \cdot e_j = \delta_{ij}$ $(i, j = 1, 2, \ldots, m)$. Indeed, since the vectors $x_i$ $(i = 1, 2, \ldots, m)$ are linearly independent they are all non-zero (see Exercise 1.2). Thus, we can define the first unit vector by

$$e_1 = \frac{x_1}{\|x_1\|}. \quad (1.10)$$

Next, we consider the vector

$$\tilde{e}_2 = x_2 - (x_2 \cdot e_1)\, e_1 \quad (1.11)$$

orthogonal to $e_1$. This holds for the unit vector $e_2 = \tilde{e}_2 / \|\tilde{e}_2\|$ as well. It is also seen that $\|\tilde{e}_2\| = \sqrt{\tilde{e}_2 \cdot \tilde{e}_2} \neq 0$ because otherwise $\tilde{e}_2 = 0$ and thus $x_2 = (x_2 \cdot e_1)\, e_1 = (x_2 \cdot e_1) \|x_1\|^{-1} x_1$. However, the latter result contradicts the fact that the vectors $x_1$ and $x_2$ are linearly independent.

Further, we proceed to construct the vectors

$$\tilde{e}_3 = x_3 - (x_3 \cdot e_2)\, e_2 - (x_3 \cdot e_1)\, e_1, \quad e_3 = \frac{\tilde{e}_3}{\|\tilde{e}_3\|} \quad (1.12)$$

orthogonal to $e_1$ and $e_2$. Repeating this procedure we finally obtain the set of orthonormal vectors $e_1, e_2, \ldots, e_m$. Since these vectors are non-zero and mutually orthogonal, they are linearly independent (see Exercise 1.6). In the case $m = n$, this set represents, according to Theorem 1.3, the orthonormal basis (1.8) in $\mathbb{E}^n$.
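The Gram-Schmidt procedure described above translates directly into code. The following sketch is an illustration added here (the function name and input vectors are arbitrary); it orthonormalizes linearly independent vectors with respect to the standard scalar product, following (1.10)–(1.12):

```python
import numpy as np

def gram_schmidt(xs):
    """Orthonormalize linearly independent vectors x_1, ..., x_m:
    subtract the projections onto the previously constructed unit
    vectors, then normalize, cf. (1.10)-(1.12)."""
    es = []
    for x in xs:
        e_tilde = x - sum((x @ e) * e for e in es)  # remove components along e_1 .. e_{k-1}
        norm = np.linalg.norm(e_tilde)
        if np.isclose(norm, 0.0):
            raise ValueError("vectors are linearly dependent")
        es.append(e_tilde / norm)
    return es

xs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
es = gram_schmidt(xs)

# Check e_i . e_j = delta_ij, cf. (1.8)
E = np.array(es)
assert np.allclose(E @ E.T, np.eye(3))
```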

With respect to an orthonormal basis the scalar product of two vectors $x = x^i e_i$ and $y = y^i e_i$ in $\mathbb{E}^n$ takes the form

$$x \cdot y = x^1 y^1 + x^2 y^2 + \cdots + x^n y^n. \quad (1.13)$$

For the length of the vector $x$ (1.6) we thus obtain the Pythagoras formula

$$\|x\| = \sqrt{x^1 x^1 + x^2 x^2 + \cdots + x^n x^n}, \quad x \in \mathbb{E}^n. \quad (1.14)$$

1.5 Dual Bases

Definition 1.9 Let $G = \{g_1, g_2, \ldots, g_n\}$ be a basis in the n-dimensional Euclidean space $\mathbb{E}^n$. Then, a basis $G' = \{g^1, g^2, \ldots, g^n\}$ of $\mathbb{E}^n$ is called dual to $G$, if

$$g_i \cdot g^j = \delta_i^j, \quad i, j = 1, 2, \ldots, n. \quad (1.15)$$

In the following we show that a set of vectors $G' = \{g^1, g^2, \ldots, g^n\}$ satisfying the conditions (1.15) always exists, is unique and forms a basis in $\mathbb{E}^n$.
Let $E = \{e_1, e_2, \ldots, e_n\}$ be an orthonormal basis in $\mathbb{E}^n$. Since $G$ also represents a basis, we can write

$$e_i = \alpha_i^j g_j, \quad g_i = \beta_i^j e_j, \quad i = 1, 2, \ldots, n, \quad (1.16)$$

where $\alpha_i^j$ and $\beta_i^j$ $(i = 1, 2, \ldots, n)$ denote the components of $e_i$ and $g_i$, respectively. Inserting the first relation (1.16) into the second one yields

$$g_i = \beta_i^j \alpha_j^k g_k \quad \Rightarrow \quad 0 = \left(\beta_i^j \alpha_j^k - \delta_i^k\right) g_k, \quad i = 1, 2, \ldots, n. \quad (1.17)$$

Since the vectors $g_i$ are linearly independent we obtain

$$\beta_i^j \alpha_j^k = \delta_i^k, \quad i, k = 1, 2, \ldots, n. \quad (1.18)$$

Let further

$$g^i = \alpha_j^i e^j, \quad i = 1, 2, \ldots, n, \quad (1.19)$$

where and henceforth we set $e^j = e_j$ $(j = 1, 2, \ldots, n)$ in order to take advantage of Einstein’s summation convention. By virtue of (1.8), (1.16) and (1.18) one finally finds

$$g_i \cdot g^j = \left(\beta_i^k e_k\right) \cdot \left(\alpha_l^j e^l\right) = \beta_i^k \alpha_l^j \delta_k^l = \beta_i^k \alpha_k^j = \delta_i^j, \quad i, j = 1, 2, \ldots, n. \quad (1.20)$$

Next, we show that the vectors $g^i$ $(i = 1, 2, \ldots, n)$ defined by (1.19) are linearly independent and for this reason form a basis of $\mathbb{E}^n$. Assume on the contrary that

$$a_i g^i = 0,$$

where not all scalars $a_i$ $(i = 1, 2, \ldots, n)$ are zero. Multiplying both sides of this relation scalarly by the vectors $g_j$ $(j = 1, 2, \ldots, n)$ leads to a contradiction. Indeed, using (1.170) (see Exercise 1.5) we obtain

$$0 = a_i g^i \cdot g_j = a_i \delta_j^i = a_j, \quad j = 1, 2, \ldots, n.$$

The next important question is whether the dual basis is unique. Let $G' = \{g^1, g^2, \ldots, g^n\}$ and $H' = \{h^1, h^2, \ldots, h^n\}$ be two arbitrary non-coinciding bases in $\mathbb{E}^n$, both dual to $G = \{g_1, g_2, \ldots, g_n\}$. Then,

$$h^i = h^i_j g^j, \quad i = 1, 2, \ldots, n.$$

Forming the scalar product with the vectors $g_j$ $(j = 1, 2, \ldots, n)$ we can conclude that the bases $G'$ and $H'$ coincide:

$$\delta_j^i = h^i \cdot g_j = h^i_k \left(g^k \cdot g_j\right) = h^i_k \delta_j^k = h^i_j \quad \Rightarrow \quad h^i = g^i, \quad i = 1, 2, \ldots, n.$$
Thus, we have proved the following theorem.


Theorem 1.6 To every basis in an Euclidean space En there exists a unique dual
basis.
Relation (1.19) enables to determine the dual basis. However, it can also be obtained
without any orthonormal basis. Indeed, let g i be a basis dual to g i (i = 1, 2, . . . , n).
Then

g i = g ij g j , g i = gij g j , i = 1, 2, . . . , n. (1.21)

Inserting the second relation (1.21) into the first one yields

$$g^i = g^{ij} g_{jk} g^k, \quad i = 1, 2, \ldots, n. \quad (1.22)$$

Multiplying scalarly with the vectors $g_l$ we have by virtue of (1.15)

$$\delta_l^i = g^{ij} g_{jk} \delta_l^k = g^{ij} g_{jl}, \quad i, l = 1, 2, \ldots, n. \quad (1.23)$$

Thus, we see that the matrices $\left[g_{kj}\right]$ and $\left[g^{kj}\right]$ are inverse to each other, such that

$$\left[g^{kj}\right] = \left[g_{kj}\right]^{-1}. \quad (1.24)$$

Now, multiplying scalarly the first and second relations (1.21) by the vectors $g^j$ and $g_j$ $(j = 1, 2, \ldots, n)$, respectively, we obtain with the aid of (1.15) the following important identities:

$$g^{ij} = g^{ji} = g^i \cdot g^j, \quad g_{ij} = g_{ji} = g_i \cdot g_j, \quad i, j = 1, 2, \ldots, n. \quad (1.25)$$

By definition (1.8) the orthonormal basis in $\mathbb{E}^n$ is self-dual, so that

$$e_i = e^i, \quad e_i \cdot e^j = \delta_i^j, \quad i, j = 1, 2, \ldots, n. \quad (1.26)$$

With the aid of the dual bases one can represent an arbitrary vector in $\mathbb{E}^n$ by

$$x = x^i g_i = x_i g^i, \quad \forall x \in \mathbb{E}^n, \quad (1.27)$$

where

$$x^i = x \cdot g^i, \quad x_i = x \cdot g_i, \quad i = 1, 2, \ldots, n. \quad (1.28)$$

Indeed, using (1.15) we can write

$$x \cdot g^i = \left(x^j g_j\right) \cdot g^i = x^j \delta_j^i = x^i,$$
$$x \cdot g_i = \left(x_j g^j\right) \cdot g_i = x_j \delta_i^j = x_i, \quad i = 1, 2, \ldots, n.$$

The components of a vector with respect to the dual bases are suitable for calculating the scalar product. For example, for two arbitrary vectors $x = x^i g_i = x_i g^i$ and $y = y^i g_i = y_i g^i$ we obtain

$$x \cdot y = x^i y^j g_{ij} = x_i y_j g^{ij} = x^i y_i = x_i y^i. \quad (1.29)$$

The length of the vector $x$ can thus be written as

$$\|x\| = \sqrt{x_i x_j g^{ij}} = \sqrt{x^i x^j g_{ij}} = \sqrt{x_i x^i}. \quad (1.30)$$
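Relations (1.21)–(1.30) can be verified numerically: compute the matrix of metric coefficients $g_{ij} = g_i \cdot g_j$, invert it to obtain $g^{ij}$, and form the dual basis $g^i = g^{ij} g_j$. The following sketch is an illustration added here (the basis vectors and variable names are chosen arbitrarily):

```python
import numpy as np

# Row i of G holds an arbitrarily chosen basis vector g_i in E^3
G = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

g_lower = G @ G.T                 # g_ij = g_i . g_j, cf. (1.25)
g_upper = np.linalg.inv(g_lower)  # [g^kj] = [g_kj]^{-1}, cf. (1.24)
G_dual = g_upper @ G              # g^i = g^ij g_j, cf. (1.21)

# Duality condition g_i . g^j = delta_i^j, cf. (1.15)
assert np.allclose(G @ G_dual.T, np.eye(3))

# Contra- and covariant components of some vector x, cf. (1.28)
x = np.array([2.0, -1.0, 3.0])
x_upper = G_dual @ x              # x^i = x . g^i
x_lower = G @ x                   # x_i = x . g_i

# Scalar product via (1.29): x . x = x^i x_i
assert np.allclose(x_upper @ x_lower, x @ x)
```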


Example 1.1 Dual basis in E3 . Let G = g 1 , g 2 , g 3 be a basis of the three-
dimensional Euclidean space and
 
g = g1 g2 g3 , (1.31)

where [• • •] denotes the mixed product of vectors. It is defined by

[abc] = (a × b) · c = (b × c) · a = (c × a) · b, (1.32)

where “×” denotes the vector (also called cross or outer) product of vectors. Consider
the following set of vectors:
