Tensor Algebra: Linear Transformations
Antonio Marmo
Instituto Tecnologico de Aeronautica
August 2019
Sometimes, we will leave out the parentheses and use the notation $Tu$ instead of $T(u)$. The definition
says that a mapping from V into W is a linear transformation if it respects the basic operations in vector
spaces, namely, addition and multiplication by a scalar. Examples of such transformations abound in
mathematics. For instance, the so-called projection from $\mathbb{R}^3$ to $\mathbb{R}^2$, defined by

$$P(v_1, v_2, v_3) := (v_1, v_2)\,, \quad \forall\, (v_1, v_2, v_3) \in \mathbb{R}^3$$
and some other well known transformations on the plane, like reflection in a line and rotation about a
point, are linear transformations. These are geometric examples. Two very important examples of
linear transformations between infinite dimensional spaces are the derivative and integration. More
precisely, if D maps a continuously differentiable function on $[a,b]$ onto its derivative, that is, if
$D : C^1[a,b] \to C[a,b]$ is the derivative map, then

$$D(f + g) = Df + Dg\,, \quad \forall\, f, g \in C^1[a,b]$$
$$D(\lambda f) = \lambda\, Df\,, \quad \forall\, f \in C^1[a,b]\,,\ \forall\, \lambda \in \mathbb{R}$$
and $S : C[a,b] \to C[a,b]$ defined by

$$(Sf)(x) := \int_a^x f(t)\,dt\,, \quad \forall\, f \in C[a,b]\,, \text{ with } a \le x \le b$$

is such that, for all $f, g \in C[a,b]$ and $\lambda \in \mathbb{R}$,

$$S(f + g)(x) = \int_a^x \big(f(t) + g(t)\big)\,dt = \int_a^x f(t)\,dt + \int_a^x g(t)\,dt$$

and

$$S(\lambda f)(x) = \int_a^x \lambda f(t)\,dt = \lambda \int_a^x f(t)\,dt$$
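As an illustration not in the original text, the linearity of these two maps can be checked numerically on a grid, with D and S replaced by finite-difference and trapezoid-rule stand-ins (a sketch; the grid and test functions are arbitrary choices):

```python
import numpy as np

# Grid on [a, b] = [0, 1]; D and S below are discrete stand-ins for the
# derivative and integration maps discussed above.
x = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]

def D(vals):
    # central-difference approximation of the derivative
    return np.gradient(vals, dx)

def S(vals):
    # cumulative trapezoid rule: (Sf)(x) ~ integral of f from a to x
    return np.concatenate(([0.0], np.cumsum((vals[1:] + vals[:-1]) / 2.0) * dx))

f, g, lam = np.sin(3.0 * x), x**2, 2.5

# Linearity of D and S: both respect addition and scalar multiplication.
assert np.allclose(D(f + g), D(f) + D(g))
assert np.allclose(D(lam * f), lam * D(f))
assert np.allclose(S(f + g), S(f) + S(g))
assert np.allclose(S(lam * f), lam * S(f))
```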
Theorem 3.1 Let V and W be (real) vector spaces. If $T \in L(V, W)$ is bijective, then $T^{-1} \in L(W, V)$.
Theorem 3.2 Let V, W and U be (real) vector spaces. If $T \in L(U, W)$ and $S \in L(V, U)$ are invertible,
then $TS \in L(V, W)$ is invertible and

$$(TS)^{-1} = S^{-1} T^{-1}$$

Proof. It is an easy task to check that $(TS)(S^{-1} T^{-1}) = (S^{-1} T^{-1})(TS) = I$.
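A quick numerical sanity check of theorem 3.2 (a sketch, not from the original text), with T and S represented by arbitrary invertible matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Matrices standing in for T and S; adding a multiple of the identity
# makes them comfortably invertible.
T = rng.standard_normal((4, 4)) + 4.0 * np.eye(4)
S = rng.standard_normal((4, 4)) + 4.0 * np.eye(4)

# (TS)^{-1} = S^{-1} T^{-1}
assert np.allclose(np.linalg.inv(T @ S),
                   np.linalg.inv(S) @ np.linalg.inv(T))
```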
Given the linear transformation T, the real numbers $T^\lambda_s$, $s = 1, 2, \ldots, n$ and $\lambda = 1, 2, \ldots, m$, that appear in
(3.5) are called the components of T in the basis $\{E^s_\lambda\}$ of $L(V, W)$. Note that, from (3.4),

$$T a_k = T^\lambda_k\, b_\lambda$$

These components can be used to create an $n \times m$ matrix

$$[T^j_i]_{n \times m} = \begin{pmatrix} T^1_1 & T^2_1 & \cdots & T^m_1 \\ T^1_2 & T^2_2 & \cdots & T^m_2 \\ \vdots & \vdots & & \vdots \\ T^1_n & T^2_n & \cdots & T^m_n \end{pmatrix}$$
called the matrix of the linear transformation T relative to the bases $\{a_1, a_2, \ldots, a_n\}$ and
$\{b_1, b_2, \ldots, b_m\}$. In the special case of the identity map $I_V \in L(V, V)$ defined by $I_V v = v$, its
corresponding matrix relative to any basis for V is the identity matrix

$$[I]_{n \times n} = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}$$
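In coordinates, the matrix of T is assembled from the images of the basis vectors. Here is a sketch using the projection example from earlier (note that the text stores the components $T^\lambda_k$ in an $n \times m$ array, the transpose of the $m \times n$ matrix below; this is only a layout choice):

```python
import numpy as np

def P(v):
    # the projection R^3 -> R^2 from the text: P(v1, v2, v3) = (v1, v2)
    return v[:2]

# Column k of M holds the components of P(a_k) in the target basis
# (standard bases on both sides here).
basis = np.eye(3)
M = np.column_stack([P(a) for a in basis])
print(M)   # [[1. 0. 0.]
           #  [0. 1. 0.]]

v = np.array([3.0, -1.0, 7.0])
assert np.allclose(M @ v, P(v))   # the matrix reproduces the transformation
```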
Exercise 3.1 Let $\{a_1, a_2, \ldots, a_n\}$ be a basis for V and take $v_i = \alpha^j_i a_j$. Then, $\{v_1, v_2, \ldots, v_n\}$ is linearly
independent if and only if the matrix $[\alpha^j_i]_{n \times n}$ is invertible.
The concept of linear transformation allows us to classify all finite dimensional vector spaces by
showing that every abstract (real) vector space V with $\dim V = n$ can be identified, in a precise sense,
with the more concrete space $\mathbb{R}^n$. In order to do this characterization, first we need to introduce some
definitions. Let V and W be (real) vector spaces and consider $T \in L(V, W)$. It can easily be shown that
$\ker T = \{v \in V \,;\, Tv = 0\}$, the kernel of T, and $\operatorname{im} T = \{Tv \,;\, v \in V\}$, the image of T, are
subspaces of V and W, respectively, and that $\dim \ker T + \dim \operatorname{im} T = n$ (v. Hoffmann-Kunze []).
Definition 3.2
(i) A linear transformation $T \in L(V, W)$ is called an isomorphism if T is bijective. In this case, we say
that V is isomorphic to W.
(ii) $T \in L(V, W)$ is called a nonsingular transformation if $\ker T = \{0\}$.
Theorem 3.4 Let V and W be (real) vector spaces with $\dim V = \dim W = n$ and consider $T \in L(V, W)$.
Then, the following statements are equivalent:
(i) T is an isomorphism.
(ii) T is invertible.
(iii) T is injective.
(iv) T is non-singular.
(v) $\operatorname{rank} T := \dim \operatorname{im} T = n$.
(vi) T is surjective.
(vii) If $\{v_1, v_2, \ldots, v_n\}$ is a basis for V, then $\{Tv_1, Tv_2, \ldots, Tv_n\}$ is a basis for W.
Proof. (i) $\Rightarrow$ (ii): If T is an isomorphism of V onto W, then clearly $T^{-1}$ is a bijective map from W onto
V and is linear by theorem 3.1.
(ii) $\Rightarrow$ (iii): Consider $v_1$ and $v_2$, two arbitrary elements of V, and suppose that $Tv_1 = Tv_2$. Applying
$T^{-1}$ to both sides of this equation gives $T^{-1}Tv_1 = T^{-1}Tv_2$ and, since $T^{-1}T = I_V$, we have that $v_1 = v_2$.
(iii) $\Rightarrow$ (iv): Notice that T nonsingular, that is, $\ker T = \{0\}$, is equivalent to the statement: $Tu = 0$
implies $u = 0$. Now, suppose $T \in L(V, W)$ is injective and $Tu = 0$. But $T0 = 0$, from which we have
that $Tu = T0$. Since T is injective, this implies that $u = 0$.
(iv) $\Rightarrow$ (v): If $\ker T = \{0\}$, then $\dim \ker T = 0$ and $\dim \operatorname{im} T = n - \dim \ker T = n$.
(v) $\Rightarrow$ (vi): Since $\dim \operatorname{im} T = n$, we have $\operatorname{im} T = W$.
(vi) $\Rightarrow$ (vii): Given any $w \in W$, there exists $v \in V$ such that $w = Tv$. Now, $v = \alpha^i v_i$ for some
$\alpha^1, \ldots, \alpha^n \in \mathbb{R}$. Hence, $w = Tv = T(\alpha^i v_i) = \alpha^i\, Tv_i$. So, $\{Tv_1, Tv_2, \ldots, Tv_n\}$ is a basis for W
because it generates W, has n elements and $\dim W = n$.
(vii) $\Rightarrow$ (i): Let $u = \alpha^i v_i$ and $v = \beta^i v_i$ be any two vectors in V such that $Tu = Tv$. Thus
$(\alpha^i - \beta^i)\, Tv_i = 0$. Since $\{Tv_1, Tv_2, \ldots, Tv_n\}$ is a basis for W, it follows that $\alpha^i = \beta^i$ so that $u = v$
and T is injective.
On the other hand, given any $w \in W$, there exist scalars $\beta^1, \ldots, \beta^n \in \mathbb{R}$ such that
$w = \beta^i\, Tv_i = T(\beta^i v_i)$. Since $\beta^i v_i \in V$, we have that T is surjective.
Now we are in a position to give the characterization of all finite dimensional (real) vector spaces.
Indeed, let $\{v_1, v_2, \ldots, v_n\}$ be a basis for an n-dimensional (real) vector space V. Define a transformation
$T : V \to \mathbb{R}^n$ by putting, for each $v \in V$, $v = \alpha^i v_i$, where $\alpha^i \in \mathbb{R}$, $i = 1, 2, \ldots, n$, are uniquely
determined,

$$Tv := (\alpha^1, \alpha^2, \ldots, \alpha^n)$$

Theorem 3.5 If V is any (real) vector space with $\dim V = n$, then V is isomorphic to $\mathbb{R}^n$.
Proof. T defined by the equation above is a well-defined map and one can easily check that it is a surjective
linear transformation. So, since $\dim V = n$, we have by theorem 3.4 that T is an isomorphism.
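A coordinate sketch of theorem 3.5 (illustrative; the basis below is an arbitrary invertible one): with the basis vectors as the columns of a matrix E, the components $\alpha^i$ of v solve $E\alpha = v$, so the coordinate map T is computable and invertible.

```python
import numpy as np

# Basis v_1, v_2, v_3 of R^3 (standing in for an abstract V) as columns of E.
E = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

def T(v):
    # components alpha with v = alpha^i v_i, i.e. the solution of E alpha = v
    return np.linalg.solve(E, v)

v = np.array([2.0, 3.0, 4.0])
alpha = T(v)
assert np.allclose(E @ alpha, v)   # alpha recovers v, so T is bijective
```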
$$A^{-1} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}.$$
DUAL SPACE
Let V be a vector space over the field $\mathbb{R}$. The vector space $L(V, \mathbb{R})$ of all linear transformations from
V into the vector space $\mathbb{R}$ turns out to be of great importance in the most diverse applications of
mathematics, because sometimes we can profit from converting relations in V into statements applying
to transformations in $L(V, \mathbb{R})$. Such transformations are called linear functionals and we refer to this
process as dualization.
Definition 3.3 Let V be a vector space (over the field $\mathbb{R}$). We say that f is a linear functional on V if
$f \in L(V, \mathbb{R})$.
We say that $L(V, \mathbb{R})$ is the dual space of V. It is usual to denote $L(V, \mathbb{R})$ by $V^*$ and its elements by
$u^*, v^*, w^*, \ldots$. Also, we may call the elements of V contravariant vectors to distinguish them from the
elements of $V^*$, which we may call covariant vectors. Or else, we can call the elements of V vectors
and the elements of $V^*$ covectors.
It is easy to check that the following maps are examples of linear functionals:
(i) $f : \mathbb{R}^n \to \mathbb{R}$ defined by

$$f(x_1, \ldots, x_n) := c_1 x_1 + \cdots + c_n x_n$$

where $c_1, \ldots, c_n \in \mathbb{R}$.
(ii) $f : C[0, 1] \to \mathbb{R}$ defined by

$$f(x) := \int_0^1 x(t)\,d\alpha(t)$$

where we remind that $C[0, 1]$ stands for the vector space of all continuous functions from the
interval $[0, 1]$ into $\mathbb{R}$ and $\alpha(\cdot)$ is a function of bounded variation on $[0, 1]$.
(iii) Let $y \in \mathbb{R}^n$. Define $f_y : \mathbb{R}^n \to \mathbb{R}$ by

$$f_y(x) := x \cdot y$$

where $\cdot$ denotes the dot product in $\mathbb{R}^n$.
In the case that V is finite dimensional, say, $\dim V = n$, we have from theorem 3.3 that
$\dim V^* = \dim L(V, \mathbb{R}) = n \cdot 1 = n$, from which it follows that:
Theorem 3.6 If V is an n-dimensional (real) vector space, then V and $V^*$ are isomorphic.
Proof. Since $\dim V^* = \dim V = n$, we have by theorem 3.5 that both V and $V^*$ are isomorphic to $\mathbb{R}^n$.
Let $\varphi$ be an isomorphism between V and $\mathbb{R}^n$ and $\psi$ be an isomorphism between $V^*$ and $\mathbb{R}^n$. Then,
obviously, $\psi^{-1}\varphi$ is a mapping from V to $V^*$ that is both one-to-one and onto, because it is the
composition of two bijective mappings.
Theorem 3.7 Let V be an n-dimensional vector space and $\{e_1, \ldots, e_n\}$ a basis in V. Then, to any given
set of scalars $\lambda_1, \ldots, \lambda_n$, there corresponds uniquely a mapping $w^* : V \to \mathbb{R}$, defined by

$$w^*(v) := v^1 \lambda_1 + \cdots + v^n \lambda_n\,, \quad \forall\, v = v^1 e_1 + \cdots + v^n e_n \in V$$

such that

$$w^* \in V^* \quad \text{and} \quad w^*(e_i) = \lambda_i\,, \quad i = 1, 2, \ldots, n$$

Proof. The defining equation above gives a linear functional, since, for all $u, v \in V$ and $\alpha \in \mathbb{R}$, we have

$$w^*(u + v) = (u^1 + v^1)\lambda_1 + \cdots + (u^n + v^n)\lambda_n = (u^1 \lambda_1 + \cdots + u^n \lambda_n) + (v^1 \lambda_1 + \cdots + v^n \lambda_n) = w^*(u) + w^*(v)$$

and

$$w^*(\alpha u) = (\alpha u^1)\lambda_1 + \cdots + (\alpha u^n)\lambda_n = \alpha (u^1 \lambda_1 + \cdots + u^n \lambda_n) = \alpha\, w^*(u)$$

So, for any given $v = v^1 e_1 + \cdots + v^n e_n \in V$,

$$w^*(v) = w^*(v^1 e_1 + \cdots + v^n e_n) = v^1 w^*(e_1) + \cdots + v^n w^*(e_n)$$

and the defining equation gives

$$v^1 \lambda_1 + \cdots + v^n \lambda_n = v^1 w^*(e_1) + \cdots + v^n w^*(e_n)$$
$$v^1 \big(w^*(e_1) - \lambda_1\big) + \cdots + v^n \big(w^*(e_n) - \lambda_n\big) = 0$$

This implies that

$$w^*(e_i) = \lambda_i\,, \quad i = 1, 2, \ldots, n$$

Indeed, if

$$w^*(e_{i_0}) - \lambda_{i_0} \neq 0 \quad \text{for some } i_0 \in \{1, 2, \ldots, n\}$$

then, for $v = e_{i_0}$ we have

$$1 \cdot \big(w^*(e_{i_0}) - \lambda_{i_0}\big) = 0$$

which is a contradiction.
On the other hand, since $(v^1, \ldots, v^n)$ is uniquely determined by the given basis $\{e_1, \ldots, e_n\}$, it follows
that $w^*$ is determined in a unique manner as well.
Theorem 3.8 Let $\{e_1, \ldots, e_n\}$ be a basis of an n-dimensional (real) vector space V. Then, there exists a
unique basis $\{e^{*1}, \ldots, e^{*n}\}$ in $V^*$ satisfying $e^{*i}(e_j) = \delta^i_j$.
Proof. It remains to prove that $\{e^{*1}, \ldots, e^{*n}\}$ is indeed a basis in $V^*$. For this, suppose $\lambda_1, \ldots, \lambda_n$ are
scalars such that

$$w^* = \lambda_1 e^{*1} + \cdots + \lambda_n e^{*n} = 0$$

Hence,

$$w^*(v) = \lambda_j\, e^{*j}(v) = 0\,, \quad \forall\, v \in V$$

In particular, for $v = e_i$, $i = 1, \ldots, n$, we have

$$w^*(e_i) = \lambda_j\, e^{*j}(e_i) = \lambda_j\, \delta^j_i = \lambda_i\,, \quad i = 1, \ldots, n$$

so that each $\lambda_i = 0$ and we have that $\{e^{*1}, \ldots, e^{*n}\}$ is linearly independent. On the other hand, given
any $w^* \in V^*$, we set $w^*(e_i) = \lambda_i$, $i = 1, \ldots, n$. So, for all $v = v^i e_i \in V$, we have

$$w^*(v) = v^i \lambda_i$$

and, in particular, from $e^{*j}(e_i) = \delta^j_i$, it follows that

$$e^{*j}(v) = v^i\, \delta^j_i = v^j$$

so that

$$w^*(v) = v^i \lambda_i = \lambda_i\, e^{*i}(v) = \big(\lambda_i e^{*i}\big)(v)\,, \quad \forall\, v \in V$$

and, consequently, $w^* = \lambda_i e^{*i}$. Thus, we also have that $\{e^{*1}, \ldots, e^{*n}\}$ generates $V^*$ and the proof is
complete.
From what we have just seen, to any basis in V there corresponds, in a unique manner, a basis in its dual
space by means of the conditions $e^{*i}(e_j) = \delta^i_j$:
Definition 3.4 Let $\{e_1, \ldots, e_n\}$ be a basis of an n-dimensional real vector space V. We define the dual basis
of $\{e_1, \ldots, e_n\}$ to be the basis $\{e^{*1}, \ldots, e^{*n}\}$ of $V^*$ such that the conditions $e^{*i}(e_j) = \delta^i_j$ hold.
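In coordinates the dual basis is computed by a matrix inversion: if the $e_i$ are the columns of a matrix E, the rows of $E^{-1}$ represent the covectors $e^{*i}$, since row i times column j gives $\delta^i_j$. A sketch (the basis is an arbitrary invertible one):

```python
import numpy as np

# Basis e_1, e_2, e_3 of R^3 as the columns of E.
E = np.array([[2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

Estar = np.linalg.inv(E)               # row i represents the covector e*^i

assert np.allclose(Estar @ E, np.eye(3))   # e*^i(e_j) = delta^i_j

# Any w* in V* satisfies w* = lambda_i e*^i with lambda_i = w*(e_i):
w_star = np.array([1.0, -2.0, 0.5])    # w* in the standard dual basis
lam = w_star @ E                       # lambda_i = w*(e_i)
assert np.allclose(lam @ Estar, w_star)
```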
We can have another useful representation for linear functionals. Let us start with the concept of
bilinear functions.
Definition 3.5 Let V, U and W be (real) vector spaces. A map $f : V \times U \to W$ is bilinear if

$$f(v_1 + v_2, w) = f(v_1, w) + f(v_2, w)$$
$$f(v, w_1 + w_2) = f(v, w_1) + f(v, w_2)$$

and

$$f(\alpha v, w) = f(v, \alpha w) = \alpha f(v, w) \quad \text{for all scalars } \alpha$$
In other words, a map $f = f(v, w)$ is bilinear if it is linear in v when w is held fixed, and is linear in w
when v is held fixed. The vector product in ordinary three dimensional space is a familiar example of
a bilinear map when $V = U = W$. Another example of a bilinear functional that we have already come
across is the inner product.
If in the definition we replace the vector space W by the scalar field $\mathbb{R}$, we say that f is a bilinear
functional, and we denote by $L(V \times U, \mathbb{R})$ the vector space of all bilinear functionals defined on $V \times U$,
with the usual operations of addition of functions and multiplication of functions by scalars.
A bilinear functional f defined on $V \times V$ is called symmetric if $f(u, v) = f(v, u)$ for all u, v in V and
antisymmetric if $f(u, v) = -f(v, u)$ for all u, v in V. We have that a bilinear functional $f : V \times V \to \mathbb{R}$
with the property that $f(u, u) = 0$ for all u in V is necessarily antisymmetric. Indeed,

$$0 = f(u + v, u + v) = f(u, u) + f(u, v) + f(v, u) + f(v, v) = f(u, v) + f(v, u)$$

for all $u, v \in V$.
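In coordinates every bilinear functional on $\mathbb{R}^n$ has the form $f(v, w) = v^{\mathsf T} B\, w$ for some matrix B, and the argument above says that $f(u, u) = 0$ for all u forces $B = -B^{\mathsf T}$. A small numerical sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

B = rng.standard_normal((3, 3))
B = B - B.T                      # force B antisymmetric: B = -B^T

def f(v, w):
    # bilinear functional in coordinates
    return v @ B @ w

u, v = rng.standard_normal(3), rng.standard_normal(3)
assert np.isclose(f(u, u), 0.0)          # f(u, u) = 0 for every u
assert np.isclose(f(u, v), -f(v, u))     # ... and f is antisymmetric
```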
Theorem 3.9 Let V be a Euclidean vector space. Then, V and $V^*$ are isomorphic.
Proof: We prove that the map G given by $\langle Gv, u \rangle := v \cdot u$ is an isomorphism. For this, suppose $v, w \in V$
are such that $Gv = Gw$. Then,

$$\langle Gv, u \rangle = \langle Gw, u \rangle\,, \quad \forall\, u \in V$$

so that

$$v \cdot u = w \cdot u$$
$$(v - w) \cdot u = 0$$

Since u is arbitrary, we have that $v = w$. So, G is injective. By theorem 3.4, it follows that G is an
isomorphism.
The following straightforward theorem relates the reciprocal and dual bases, showing that they can be
identified.
Theorem 3.10 Suppose $\{e^1, \ldots, e^n\}$ and $\{e^{*1}, \ldots, e^{*n}\}$ are, respectively, the reciprocal and the dual
bases of an arbitrary basis $\{e_1, \ldots, e_n\}$ in a Euclidean vector space V. Then

$$Ge^i = e^{*i}\,, \quad i = 1, \ldots, n$$

where G is the isomorphism defined by $\langle Gv, u \rangle := v \cdot u$.
Moreover, given $v^* = v^*_i e^{*i} \in V^*$, there exists a $v = v_i e^i \in V$ such that $v^*_i = v_i$.
Proof. From the definitions of reciprocal and dual bases, we have that $e^i \cdot e_j = \delta^i_j$ and $\langle e^{*i}, e_j \rangle = \delta^i_j$.
So, by the definition of G,

$$\langle Ge^i, e_j \rangle = e^i \cdot e_j = \delta^i_j = \langle e^{*i}, e_j \rangle$$

and $Ge^i = e^{*i}$ follows.
Furthermore, since G is surjective, to any $v^* = v^*_i e^{*i} \in V^*$ there corresponds a $v = v_i e^i \in V$ such that
$v^* = v^*_i e^{*i} = Gv = G(v_i e^i)$. Hence,

$$v^*_i e^{*i} = G(v_i e^i) = v_i\, Ge^i = v_i e^{*i}$$

so that $v^*_i = v_i$.
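A coordinate sketch of theorem 3.10 in $\mathbb{R}^n$ with the dot product (the basis is arbitrary): if the $e_i$ are the columns of E, the reciprocal vectors $e^i$ are the columns of $(E^{-1})^{\mathsf T}$, and pairing $v = v_i e^i$ against the $e_j$ returns exactly the components $v_i$, as the theorem asserts.

```python
import numpy as np

E = np.array([[1.0, 1.0, 0.0],   # basis e_1, e_2, e_3 as columns
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 1.0]])

R = np.linalg.inv(E).T           # reciprocal basis e^1, e^2, e^3 as columns

assert np.allclose(R.T @ E, np.eye(3))      # e^i . e_j = delta^i_j

# v = 2 e^1 - 3 e^3 has components v_1 = 2, v_2 = 0, v_3 = -3, and
# <Gv, e_j> = v . e_j recovers exactly those components:
v = 2.0 * R[:, 0] - 3.0 * R[:, 2]
assert np.allclose(v @ E, [2.0, 0.0, -3.0])
```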
Recall that for inner product spaces the pairing $\langle\,\cdot\,,\,\cdot\,\rangle$ is indeed the inner product. Thus, we can
write

$$\langle v^*, e_i \rangle = (v^*_j e^{*j}) \cdot e_i = v^*_j\, e^j \cdot e_i = v^*_i$$
$$\langle e^{*i}, v \rangle = e^{*i} \cdot (v^j e_j) = v^j\, e^i \cdot e_j = v^i$$
$$\langle v^*, v \rangle = (v^*_i e^{*i}) \cdot (v^j e_j) = v^*_i v^j\, e^i \cdot e_j = v^*_i v^i$$

and we have the following generalization from inner product spaces to vector spaces in general:

$$\langle v^*, e_i \rangle = v^*_i\,, \qquad \langle e^{*i}, v \rangle = v^i\,, \qquad \langle v^*, v \rangle = v^*_i v^i$$

for $v^* = v^*_i e^{*i}$ and $v = v^i e_i$.
We finish this section with an exercise that shows that the second dual space $V^{**}$ can always be identified
with V, regardless of any structure beyond that of a vector space.
Definition 3.6 Let V and U be two inner product spaces. Given a linear transformation $T \in L(V, U)$,
we say that the transpose of T, denoted $T^T$, is the linear transformation in $L(U, V)$ such that

$$(Tv) \cdot u = v \cdot (T^T u)\,, \quad \forall\, v \in V\,, \ \forall\, u \in U$$

Note that the inner products on the left-hand side and on the right-hand side are the ones in U and V, respectively.
Theorem 3.11 The transpose of $T \in L(V, U)$ is unique.
Proof. Suppose $T^{T_1}, T^{T_2} \in L(U, V)$ are transposes of $T \in L(V, U)$. Then, for all $v \in V$ and $u \in U$,

$$(Tv) \cdot u = v \cdot (T^{T_1} u) = v \cdot (T^{T_2} u)$$

Thus, $v \cdot \big((T^{T_1} - T^{T_2}) u\big) = 0$ and, since v and u are arbitrary, it follows that $(T^{T_1} - T^{T_2}) u = 0$ and
$T^{T_1} - T^{T_2} = 0$.
Theorem 3.12
(i) For all $T, S \in L(V, U)$ and $\alpha \in \mathbb{R}$, we have that $(T + S)^T, (\alpha T)^T \in L(U, V)$ are such that

$$(T + S)^T = T^T + S^T \quad \text{and} \quad (\alpha T)^T = \alpha\, T^T$$

(ii) For all $T \in L(V, U)$ and $S \in L(U, W)$, we have that $(ST)^T \in L(W, V)$ is such that

$$(ST)^T = T^T S^T$$

(iii) For all $T \in L(V, U)$,

$$(T^T)^T = T$$
Definition 3.7
(i) $T \in L(V, V)$ is said to be symmetric if $T = T^T$.
(ii) $T \in L(V, V)$ is said to be antisymmetric if $T = -T^T$.
Note that any linear transformation $T \in L(V, V)$ can always be expressed as the sum of a symmetric
linear transformation and an antisymmetric one, that is,

$$T = S + A$$

where

$$S = \tfrac{1}{2}\big(T + T^T\big)$$

is symmetric and

$$A = \tfrac{1}{2}\big(T - T^T\big)$$

is antisymmetric.
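In matrix form the decomposition is immediate; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
T = rng.standard_normal((4, 4))   # any linear transformation on R^4

S = (T + T.T) / 2.0               # symmetric part
A = (T - T.T) / 2.0               # antisymmetric part

assert np.allclose(S, S.T)
assert np.allclose(A, -A.T)
assert np.allclose(T, S + A)      # T = S + A
```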
TENSOR PRODUCT
One kind of bilinear mapping that will be of interest for us is what we call the tensor product.
Definition 3.8 Let V and U be two finite dimensional vector spaces. For any two vectors $v \in V$ and
$u \in U$, the mapping $v \otimes u \in L(U^*, V)$ defined by

$$v \otimes u : U^* \to V\,, \qquad u^* \mapsto (v \otimes u)(u^*) := u^*(u)\, v = \langle u^*, u \rangle\, v$$

is called the tensor product of v and u.
The notion of tensor product can be given a meaning in terms of bilinear functionals because we have
the following nice isomorphism.
Theorem 3.13 Let V and U be two finite dimensional vector spaces. Then $L(U^*, V)$ and $L(V^* \times U^*, \mathbb{R})$
are canonically isomorphic.
The next step is to characterize a basis for $V \otimes U$ out of the bases for the individual spaces involved.
Theorem 3.14 Let $\{e_i\}$ and $\{e^{*i}\}$, $i = 1, 2, \ldots, m$, be bases for V and $V^*$, respectively. Also, let $\{f_j\}$
and $\{f^{*j}\}$, $j = 1, 2, \ldots, n$, be bases for U and $U^*$, respectively. Then, the set $\{e_i \otimes f_j \,;\, i = 1, \ldots, m$ and
$j = 1, \ldots, n\}$ is a basis for the tensor product space $V \otimes U$.
The basis $\{e_i \otimes f_j \,;\, i = 1, \ldots, m$ and $j = 1, \ldots, n\}$ from theorem 3.14 above is called the product basis
for $V \otimes U$ and the coefficients $T^{ij}$ are called the components of T with respect to this basis. Moreover,
by theorem 3.3,

$$\dim(V \otimes U) = \dim L(V^* \times U^*, \mathbb{R}) = mn \cdot 1 = mn = \dim L(U^*, V)$$

Hence, $L(V^* \times U^*, \mathbb{R})$ and $L(U^*, V)$ have the same dimension, which we should have expected since
they are canonically isomorphic.
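In components, the rank-one tensor $v \otimes u$ is just the outer product, $T^{ij} = v^i u^j$, and the count mn is the number of entries of that array. A sketch:

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])     # v in V, dim V = m = 3
u = np.array([4.0, 5.0])          # u in U, dim U = n = 2

T = np.outer(v, u)                # components T^{ij} = v^i u^j
assert T.shape == (3, 2)          # mn = 6 components, matching dim(V ⊗ U)

# v ⊗ u applied to a covector u* in U* returns <u*, u> v:
u_star = np.array([1.0, -1.0])
assert np.allclose(T @ u_star, (u_star @ u) * v)
```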
Solution.
(i) We can write

$$V^* \otimes U^* \equiv L(V \times U, \mathbb{R}) \equiv L(U, V^*)$$
$$(v^* \otimes u^*)(v, u) = \langle v^*, v \rangle \langle u^*, u \rangle$$
$$(v^* \otimes u^*)(u) = \langle u^*, u \rangle\, v^*$$
$$T = T_{ij}\, e^{*i} \otimes f^{*j}\,, \quad \forall\, T \in V^* \otimes U^*$$

where $\{e^{*i} \otimes f^{*j} \,;\, i = 1, \ldots, m$ and $j = 1, \ldots, n\}$ is a basis for $V^* \otimes U^*$.
(ii)

$$V \otimes U^* \equiv L(V^* \times U, \mathbb{R}) \equiv L(U, V)$$
$$(v \otimes u^*)(v^*, u) = \langle v^*, v \rangle \langle u^*, u \rangle$$
$$(v \otimes u^*)(u) = \langle u^*, u \rangle\, v$$
$$T = T^i_j\, e_i \otimes f^{*j}\,, \quad \forall\, T \in V \otimes U^*$$

where $\{e_i \otimes f^{*j} \,;\, i = 1, \ldots, m$ and $j = 1, \ldots, n\}$ is a basis for $V \otimes U^*$.
(iii)

$$V^* \otimes U \equiv L(V \times U^*, \mathbb{R}) \equiv L(U^*, V^*)$$
$$(v^* \otimes u)(v, u^*) = \langle v^*, v \rangle \langle u^*, u \rangle$$
$$(v^* \otimes u)(u^*) = \langle u^*, u \rangle\, v^*$$
$$T = T_i^{\;j}\, e^{*i} \otimes f_j\,, \quad \forall\, T \in V^* \otimes U$$

where $\{e^{*i} \otimes f_j \,;\, i = 1, \ldots, m$ and $j = 1, \ldots, n\}$ is a basis for $V^* \otimes U$.
We can generalize the concept of tensor product to more than two vector spaces. For this, let
$V_1, V_2, \ldots, V_n$ be (real) vector spaces. A multilinear functional (also called n-linear functional) is a
mapping $T : V_1 \times V_2 \times \cdots \times V_n \to \mathbb{R}$ that is linear in each of its variables whilst the others are held
constant. We denote by $L(V_1 \times V_2 \times \cdots \times V_n, \mathbb{R})$ the set of all multilinear functionals defined on
$V_1 \times V_2 \times \cdots \times V_n$, with the vector space structure given by the obvious addition and multiplication by
scalars.
Definition 3.10 The vector space $L(V^*_1 \times V^*_2 \times \cdots \times V^*_n, \mathbb{R})$ is called the tensor product space of
$V_1, V_2, \ldots, V_n$ and we denote

$$L(V^*_1 \times V^*_2 \times \cdots \times V^*_n, \mathbb{R}) = V_1 \otimes V_2 \otimes \cdots \otimes V_n$$

We define the tensor product $v_1 \otimes v_2 \otimes \cdots \otimes v_n$ of n vectors $v_i \in V_i$, $i = 1, 2, \ldots, n$, by

$$(v_1 \otimes v_2 \otimes \cdots \otimes v_n)(v^*_1, v^*_2, \ldots, v^*_n) := \langle v^*_1, v_1 \rangle \langle v^*_2, v_2 \rangle \cdots \langle v^*_n, v_n \rangle\,,$$

for all $v^*_i \in V^*_i$, $i = 1, 2, \ldots, n$.
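In components the n-fold product is again elementwise multiplication of the factors, e.g. $(v_1 \otimes v_2 \otimes v_3)^{ijk} = v_1^i v_2^j v_3^k$; a sketch with einsum:

```python
import numpy as np

v1 = np.array([1.0, 2.0])
v2 = np.array([0.0, 1.0, 3.0])
v3 = np.array([2.0, -1.0])

T = np.einsum('i,j,k->ijk', v1, v2, v3)   # components of v1 ⊗ v2 ⊗ v3
assert T.shape == (2, 3, 2)

# Evaluating on covectors multiplies the three pairings, per the definition:
a = np.array([1.0, 1.0])
b = np.array([1.0, 0.0, 1.0])
c = np.array([0.0, 1.0])
assert np.isclose(np.einsum('ijk,i,j,k->', T, a, b, c),
                  (a @ v1) * (b @ v2) * (c @ v3))
```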
TENSORS
When the vector spaces $V_i$ in definition 3.10 are all equal to either the same space V or its dual, we
say that the multilinear functional is a tensor. More precisely,
Definition 3.11 Let V be an n-dimensional vector space and suppose p and q are positive integers.
(i) A tensor of order (p, q) on V is a multilinear functional T of the form

$$T : \underbrace{V^* \times \cdots \times V^*}_{p} \times \underbrace{V \times \cdots \times V}_{q} \to \mathbb{R}$$

We denote by $T^p_q(V)$ the space of all tensors of order (p, q) on V.
(ii) A contravariant tensor of order p, or simply a tensor of order (p, 0), is a p-linear functional

$$T : \underbrace{V^* \times \cdots \times V^*}_{p} \to \mathbb{R}$$

We denote by $T^p(V)$ the space of all such functionals.
(iii) A covariant tensor of order q, or simply a tensor of order (0, q), is a q-linear functional

$$T : \underbrace{V \times \cdots \times V}_{q} \to \mathbb{R}$$

We denote by $T_q(V)$ the space of all such functionals.
(iv) We define

$$T^0_0(V) = \mathbb{R}$$
$$T^1(V) = L(V^*, \mathbb{R}) = V^{**} \approx V$$
$$T_1(V) = L(V, \mathbb{R}) = V^*$$
In the case U = V we have, according to the isomorphism between $L(U^*, V)$ and $L(V^* \times U^*, \mathbb{R})$
established by theorem 3.13,

$$L(V^*, V) \approx L(V^* \times V^*, \mathbb{R}) = T^2(V) = V \otimes V$$

Thus, the tensor product is an example of a tensor of order (2, 0) or, equivalently, a contravariant
tensor of order 2 in $T^2(V)$.
If $v_1, \ldots, v_p \in V$ and $v^1, \ldots, v^q \in V^*$, an example of a tensor in $T^p_q(V)$ is the tensor product of $v_1, \ldots, v_p$
and $v^1, \ldots, v^q$,

$$T = v_1 \otimes \cdots \otimes v_p \otimes v^1 \otimes \cdots \otimes v^q : \underbrace{V^* \times \cdots \times V^*}_{p} \times \underbrace{V \times \cdots \times V}_{q} \to \mathbb{R}\,,$$

defined by

$$T(u^1, \ldots, u^p, u_1, \ldots, u_q) = \langle u^1, v_1 \rangle \cdots \langle u^p, v_p \rangle \langle v^1, u_1 \rangle \cdots \langle v^q, u_q \rangle$$

for all $u_1, \ldots, u_q \in V$ and $u^1, \ldots, u^p \in V^*$.
Exercise 3.5 Let $\{e_i\}$ and $\{e^i\}$, $i = 1, \ldots, n$, be dual bases of V and $V^*$, respectively. Then, the set

$$\{e_{i_1} \otimes \cdots \otimes e_{i_p} \otimes e^{j_1} \otimes \cdots \otimes e^{j_q}\}$$

is a basis for $T^p_q(V)$, called the product basis, and hence

$$\dim T^p_q(V) = n^{p+q}$$
Equating the two component forms of T and taking into account the transformation formulas for the
bases, we obtain

$$T^{i'_1 \cdots i'_p}_{j'_1 \cdots j'_q} = Q^{i'_1}_{i_1} \cdots Q^{i'_p}_{i_p}\, P^{j_1}_{j'_1} \cdots P^{j_q}_{j'_q}\, T^{i_1 \cdots i_p}_{j_1 \cdots j_q}$$

and

$$T^{i_1 \cdots i_p}_{j_1 \cdots j_q} = P^{i_1}_{i'_1} \cdots P^{i_p}_{i'_p}\, Q^{j'_1}_{j_1} \cdots Q^{j'_q}_{j_q}\, T^{i'_1 \cdots i'_p}_{j'_1 \cdots j'_q}$$

which are the transformation formulas for the components of any tensor $T \in T^p_q(V)$.
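For a (1,1)-tensor the formulas collapse to matrix form, $T' = Q\,T\,P$ with $Q = P^{-1}$; the sketch below assumes that relation between P and Q (the defining transformation equations are not reproduced here) and checks that transforming back recovers the original components.

```python
import numpy as np

rng = np.random.default_rng(3)

P = rng.standard_normal((3, 3)) + 3.0 * np.eye(3)   # basis-change matrix
Q = np.linalg.inv(P)                                # assumed: Q = P^{-1}

T = rng.standard_normal((3, 3))                     # components T^i_j

# T'^a_b = Q^a_i P^j_b T^i_j, i.e. T' = Q T P in matrix form:
T_new = np.einsum('ai,jb,ij->ab', Q, P, T)
assert np.allclose(T_new, Q @ T @ P)

# The inverse formula returns the original components:
T_back = np.einsum('ia,bj,ab->ij', P, Q, T_new)
assert np.allclose(T_back, T)
```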
Finally, we can introduce some algebraic operations for components of tensors. It turns out that
addition, tensorial multiplication, self contraction, product contraction, symmetrization and alternation
are the main operations when one deals with algebraic aspects of components of tensors. Here, we have
some words to say about each of these operations.
1. Addition. It applies only to components of tensors of the same order and type. If we are given two
systems of components of tensors of the same order and type, and if we add each component of the
first tensor to the corresponding component of the second, we obviously arrive at a set of components
of a tensor of the same order and type as the original components. This process is the operation of
addition and the resulting components are called the sum of the two components. Thus, if $S_{ab}^{\;\;cde}$ and $P_{ab}^{\;\;cde}$
are two sets of components of tensors of order 5, then the components of the sum $T = S + P$ of S and P
are defined by the equations

$$T_{ab}^{\;\;cde} = S_{ab}^{\;\;cde} + P_{ab}^{\;\;cde}$$
2. Tensorial multiplication. If we take two systems of components of tensors of any kind and multiply
each component of the first by each component of the second, we get a system of components of
tensors, called their product, whose order equals the sum of the orders of the two original systems. For
example, if $S_{ab}^{\;\;cd}$ are the components of a tensor S and $P_{ef}$ are the components of a tensor P, then their
product is the tensor T with components given by

$$T_{ab\;\;\;ef}^{\;\;cd} = S_{ab}^{\;\;cd}\, P_{ef}$$
This process can be extended to define the product of any number of systems of components of tensors.
3. Self-contraction. This operation allows one to create new tensors of lower order from any given
tensor. Such a process can be better explained by means of an example. Let us take a system of
components of a tensor of order k, say $T^a_{\;bc \ldots k}$, which has both upper and lower indices. If we put $a = b$,
we get the set $T^b_{\;bc \ldots k}$. Since b is now a repeated index, it must be summed from 1 to n, in accordance with
our summation convention. Thus, the new system obtained in this way, $S_{c \ldots k} = T^b_{\;bc \ldots k}$, defines a tensor S of
order $k - 2$. Of course, the operation can be repeated several times because we can contract with respect
to any pair of indices, one being a subscript and the other, a superscript.
4. Product contraction. For this operation, we first take the product of two systems of components of
tensors and then contract the result with respect to a subscript of one and a superscript of the other.
Thus, a product contraction of the two systems $P_{abd}$ and $S^{cd}$ is the tensor A whose system of components is
given by $A_{ab}^{\;\;c} = P_{abd}\, S^{cd}$.
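Both contractions are one-line einsum calls on component arrays (the shapes and index placement below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3

# Self-contraction: setting a = b in T^a_{bc} and summing gives S_c.
T = rng.standard_normal((n, n, n))       # stands in for T^a_{bc}
S = np.einsum('bbc->c', T)               # S_c = T^b_{bc}, order drops by 2
assert S.shape == (n,)

# Product contraction: A_{ab}^c = P_{abd} S^{cd}.
P_ = rng.standard_normal((n, n, n))      # stands in for P_{abd}
S_ = rng.standard_normal((n, n))         # stands in for S^{cd}
A = np.einsum('abd,cd->abc', P_, S_)
assert A.shape == (n, n, n)
```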
5. Symmetrization. This process is always applied to a number of upper or lower indices. In order to
perform such an operation over k indices, we form k! scalars by permuting these indices in all possible
ways, and then we average by taking the sum of these scalars and dividing by k!. The operation of
symmetrization is denoted Sym. For example,

$$\operatorname{Sym} T_{ab} = \frac{T_{ab} + T_{ba}}{2!}$$
6. Alternation. This operation, denoted either by Alt or by a pair of square brackets [ ], is performed in
the same way as the process of symmetrization, except that each term in the average is taken with the
positive sign if the permutation is even and the negative sign if the permutation is odd. Here are two examples:

$$\operatorname{Alt} T_{abc} = T_{[abc]} = \frac{T_{abc} + T_{bca} + T_{cab} - T_{acb} - T_{cba} - T_{bac}}{3!}$$
$$\operatorname{Alt} T_{ab} = T_{[ab]} = \frac{T_{ab} - T_{ba}}{2!}$$
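A direct implementation of Sym and Alt over all indices of a component array, averaging the k! permutations with the appropriate signs (a sketch; the parity is computed by counting inversions):

```python
import math
from itertools import permutations

import numpy as np

def perm_sign(p):
    # +1 for even permutations, -1 for odd, via the inversion count
    inv = sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))
    return -1 if inv % 2 else 1

def sym(T):
    # Sym T: plain average over all index permutations
    k = T.ndim
    return sum(np.transpose(T, p) for p in permutations(range(k))) / math.factorial(k)

def alt(T):
    # Alt T = T_[ab...]: signed average over all index permutations
    k = T.ndim
    return sum(perm_sign(p) * np.transpose(T, p)
               for p in permutations(range(k))) / math.factorial(k)

T = np.random.default_rng(5).standard_normal((3, 3))
assert np.allclose(sym(T), (T + T.T) / 2.0)   # matches Sym T_ab above
assert np.allclose(alt(T), (T - T.T) / 2.0)   # matches Alt T_ab above
```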
EXERCISES
1. Let $T \in V \otimes W$, $T \neq 0$. Show that in general there exist several representations of the form

$$T = \sum_i v_i \otimes w_i\,, \quad v_i \in V\,, \ w_i \in W$$

with $\{v_1, \ldots, v_n\}$ and $\{w_1, \ldots, w_n\}$ each linearly independent.
2. Let $a \in V$ and $b \in W$ be such that $b \otimes a \neq 0$. Prove that

$$b \otimes a = w \otimes v \quad \text{if and only if} \quad v = \lambda a \ \text{ and } \ w = \lambda^{-1} b\,, \ \text{for some } \lambda \neq 0$$
3. Show that a self-contraction over two contravariant or two covariant indices of a tensor does not
give another tensor. For example, $T_{bbc}$ (summed over b) is not a tensor.
4. Show that $L(V, L(V, V))$ is isomorphic to $T^1_2(V)$.
5. Prove that $g_{ij} A^i B^j$ is invariant, i.e., $\bar{g}_{ij} \bar{A}^i \bar{B}^j = g_{ij} A^i B^j$.
6. Associated to vectors $a = a^i e_i$, $b = b^j e_j$ and $c = c_k e^k$ it is possible to define the following
trilinear functional T by putting

$$T(a, b, c) = a^i b^j c_k\, T_{ij}^{\;\;k}$$

Show that this functional defines a tensor.
7. Associated to the standard basis $\{e_1, e_2, e_3\}$ of $\mathbb{R}^3$,

$$e_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \quad e_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \quad e_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}$$

there exists a set of scalars defined by

$$T_{ijk} = T(e_i, e_j, e_k)$$

where T is a given trilinear functional. Suppose another basis $\{\bar{e}_1, \bar{e}_2, \bar{e}_3\}$ is defined as

$$\bar{e}_1 = e_1 - 2e_2$$
$$\bar{e}_2 = 2e_1 + e_2$$
$$\bar{e}_3 = e_1 + e_2 + e_3$$

Find $\bar{T}_{ijk} = T(\bar{e}_i, \bar{e}_j, \bar{e}_k)$ in terms of $T_{ijk}$.
8. Prove that $T_{ij} v^i w^j$ is invariant.
9. Prove that if $a_j = T_{ij} v^i$ are the components of a covariant vector a for all contravariant vectors v,
then $T_{ij}$ is a covariant tensor of order 2.
10. If $v^i$ are the components of a contravariant vector, when can you say that the $a^i$ defined by
$a^i = \lambda v^i$, $\lambda \in \mathbb{R}$, are components of a vector a?