

TENSOR ALGEBRA
LINEAR TRANSFORMATIONS
Antonio Marmo (Instituto Tecnológico de Aeronáutica)
Conference Paper, August 2019
Definition 3.1 Let V and W be (real) vector spaces. A transformation T : V → W is said to be linear if, for all u, v ∈ V and α ∈ R,
T(u + v) = T(u) + T(v)    (3.1)
T(αu) = αT(u)    (3.2)

Sometimes, we will leave out the parentheses and use the notation Tu instead of T(u). The definition says that a mapping from V into W is a linear transformation if it respects the basic operations in vector spaces, namely, addition and multiplication by a scalar. Examples of such transformations abound in mathematics. For instance, the so-called projection from R^3 to R^2, defined by
P(v_1, v_2, v_3) := (v_1, v_2), ∀ (v_1, v_2, v_3) ∈ R^3
and some other well known transformations on the plane, like reflection in a line and rotation about a point, are linear transformations. These are geometric examples. Two very important examples of linear transformations between infinite dimensional spaces are differentiation and integration. More precisely, if D maps a continuously differentiable function on [a, b] onto its derivative, that is, if D : C^1[a, b] → C[a, b] is the derivative map, then
D(f + g) = D(f) + D(g), ∀ f, g ∈ C^1[a, b]
D(αf) = αD(f), ∀ f ∈ C^1[a, b], ∀ α ∈ R
and S : C[a, b] → C[a, b] defined by
S(f)(x) := ∫_a^x f(t) dt, ∀ f ∈ C[a, b], with a ≤ x ≤ b
is such that, for all f, g ∈ C[a, b] and α ∈ R,
S(f + g)(x) = ∫_a^x [f(t) + g(t)] dt = ∫_a^x f(t) dt + ∫_a^x g(t) dt
and
S(αf)(x) = ∫_a^x αf(t) dt = α ∫_a^x f(t) dt
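As a quick numerical check (not part of the original text), the linearity of D and S can be verified on polynomials, which form a subspace of C^1[a, b]; a minimal numpy sketch, assuming a = 0:

    from numpy.polynomial import Polynomial

    f = Polynomial([1., 2., 3.])      # f(x) = 1 + 2x + 3x^2
    g = Polynomial([0., 1., 0., 4.])  # g(x) = x + 4x^3
    alpha = 2.5

    # D(f + alpha g) = D(f) + alpha D(g)
    print((f + alpha * g).deriv() == f.deriv() + alpha * g.deriv())   # True

    # S(f)(x) = integral of f from 0 to x (lbnd is the lower bound a)
    lhs = (f + alpha * g).integ(lbnd=0.0)
    rhs = f.integ(lbnd=0.0) + alpha * g.integ(lbnd=0.0)
    print(lhs == rhs)                                                 # True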
Two basic properties of linear transformations are obtained by putting α = 0 and α = −1 in (3.2):
T(0) = 0 and T(−u) = −T(u), ∀ u ∈ V
Observe that we are using the same symbol for the zero vector in the space V as in W.
It can easily be shown that the set of all linear transformations from V to W, denoted L(V, W), is itself a (real) vector space, with addition and multiplication by a scalar defined by
(T + S)(u) := T(u) + S(u), ∀ T, S ∈ L(V, W) and u ∈ V
(αT)(u) := αT(u), ∀ T ∈ L(V, W), u ∈ V and α ∈ R
We can define an operation that takes each (T, S) ∈ L(X, W) × L(V, X) into TS ∈ L(V, W), called composition or product, by putting
TS := T ∘ S
that is, for all S ∈ L(V, X) and all T ∈ L(X, W),
(TS)(v) = (T ∘ S)(v) = T(S(v)) = TSv, ∀ v ∈ V
One can easily check that TS so defined is indeed linear. Also, it is immediate to show that the product satisfies the following properties:
(P1) For all linear transformations S ∈ L(V, X), T ∈ L(X, Y), U ∈ L(Y, W) on vector spaces, we have
(UT)S = U(TS)
(P2) Consider S, T ∈ L(V, X) and U ∈ L(X, V). Then,
U(S + T) = US + UT
(P3) Consider U ∈ L(V, X) and S, T ∈ L(X, W). Then,
(S + T)U = SU + TU
In general, ST ≠ TS, as shown by the case of S ∈ L(R^3, R^2) and T ∈ L(R^2, R^3) given, respectively, by
S(v_1, v_2, v_3) = (v_1, v_2) and T(v_1, v_2) = (v_1, v_2, v_2)
for all v_1, v_2, v_3 ∈ R. Indeed, to start with, ST : R^2 → R^2 and TS : R^3 → R^3. Moreover,
(ST)(v_1, v_2) = S(v_1, v_2, v_2) = (v_1, v_2)
and
(TS)(v_1, v_2, v_3) = T(v_1, v_2) = (v_1, v_2, v_2)
We know that if a map T : V → W is bijective (both injective and surjective), then it is invertible, i.e., there exists the inverse map T^{-1} : W → V, uniquely determined, such that T^{-1}T is the identity map on V, denoted I_V, and TT^{-1} is the identity map on W, denoted I_W. We have that if a linear transformation is invertible, then its inverse is also linear. More precisely,

Theorem 3.1 Let V and W be (real) vector spaces. If T ∈ L(V, W) is bijective, then T^{-1} ∈ L(W, V).

Proof. Let w_1, w_2 ∈ W and α ∈ R. Let v_i = T^{-1}(w_i), i = 1, 2, that is, v_i is the unique vector in V such that T(v_i) = w_i. Since T is linear, we have
T(v_1 + v_2) = T(v_1) + T(v_2) = w_1 + w_2
and
T(αv_1) = αT(v_1) = αw_1
So v_1 + v_2 and αv_1 are the unique vectors in V that are taken by T into w_1 + w_2 and αw_1, respectively. Thus,
T^{-1}(w_1 + w_2) = v_1 + v_2 = T^{-1}(w_1) + T^{-1}(w_2)
T^{-1}(αw_1) = αv_1 = αT^{-1}(w_1)
and T^{-1} is linear. ∎

Theorem 3.2 Let V, W and U be (real) vector spaces. If T ∈ L(U, W) and S ∈ L(V, U) are invertible, then TS ∈ L(V, W) is invertible and
(TS)^{-1} = S^{-1}T^{-1}
Proof. It is an easy task to check that (TS)(S^{-1}T^{-1}) = I_W and (S^{-1}T^{-1})(TS) = I_V. ∎

Theorem 3.3 Suppose V and W are finite dimensional spaces. Then,
dim L(V, W) = dim V · dim W

Proof. Let {a_1, a_2, …, a_n} be a basis for V and {b_1, b_2, …, b_m} be a basis for W. We define n × m linear transformations E^k_λ ∈ L(V, W) by putting
E^k_λ(a_p) := δ^k_p b_λ    (3.3)
for all k, p = 1, 2, …, n and λ = 1, 2, …, m.
On the other hand, given any T ∈ L(V, W), we may expand T(a_k) ∈ W as
T(a_k) = T^λ_k b_λ    (3.4)
Substituting (3.3) into (3.4), we obtain
T(a_k) = T^λ_s E^s_λ(a_k)
Since {a_1, a_2, …, a_n} is a basis for V, we can write
(T − T^λ_s E^s_λ)(u) = 0, ∀ u ∈ V
which means that
T = T^λ_s E^s_λ    (3.5)
that is, the n × m linear transformations E^s_λ generate L(V, W).
It remains to show that they are linearly independent. For this, T^λ_s E^s_λ = 0 implies that
0 = T^λ_s E^s_λ(a_k) = T^λ_k b_λ
so that the T^λ_k must all be equal to zero, since b_1, b_2, …, b_m are linearly independent. ∎

Given the linear transformation T, the real numbers T^λ_s, s = 1, 2, …, n and λ = 1, 2, …, m, that appear in (3.5) are called the components of T in the basis {E^s_λ} of L(V, W). Note that, from (3.4),
T(a_k) = T^λ_k b_λ
These components can be used to create an n × m matrix
                ⎡ T^1_1  T^2_1  ⋯  T^m_1 ⎤
[T^λ_k]_{n×m} = ⎢ T^1_2  T^2_2  ⋯  T^m_2 ⎥
                ⎢   ⋮      ⋮    ⋱    ⋮   ⎥
                ⎣ T^1_n  T^2_n  ⋯  T^m_n ⎦
called the matrix of the linear transformation T relative to the bases {a_1, a_2, …, a_n} and {b_1, b_2, …, b_m}. In the special case of the identity map I_V ∈ L(V, V) defined by I_V(v) = v, its corresponding matrix relative to any basis for V is the identity matrix
          ⎡ 1  0  ⋯  0 ⎤
I_{n×n} = ⎢ 0  1  ⋯  0 ⎥
          ⎢ ⋮  ⋮  ⋱  ⋮ ⎥
          ⎣ 0  0  ⋯  1 ⎦
Exercise 3.1 Let {a_1, a_2, …, a_n} be a basis for V and take v_i = λ^j_i a_j. Then, {v_1, v_2, …, v_n} is linearly independent if and only if the matrix [λ^j_i]_{n×n} is invertible.

Solution. Suppose α^i ∈ R, i = 1, 2, …, n, are such that α^i v_i = 0. Then,
0 = α^1 λ^j_1 a_j + ⋯ + α^n λ^j_n a_j
  = (α^i λ^1_i) a_1 + ⋯ + (α^i λ^n_i) a_n
Since {a_1, a_2, …, a_n} is linearly independent, we have λ^j_i α^i = 0, j = 1, 2, …, n. This is a homogeneous system of n linear equations in the n unknowns α^1, α^2, …, α^n which, by Cramer's rule, has only the trivial solution if and only if the matrix [λ^j_i]_{n×n} is nonsingular, and the result follows. ∎

The concept of linear transformation allows us to classify all finite dimensional vector spaces by showing that every abstract (real) vector space V with dim V = n can be identified, in a precise sense, with the more concrete space R^n. In order to carry out this characterization, we first need to introduce some definitions. Let V and W be (real) vector spaces and consider T ∈ L(V, W). It can easily be shown that ker T = {v ∈ V ; T(v) = 0}, the kernel of T, and im T = {T(v) ; v ∈ V}, the image of T, are subspaces of V and W, respectively, and that dim ker T + dim im T = n, where n = dim V (v. Hoffmann-Kunze).

Definition 3.2
(i) A linear transformation T ∈ L(V, W) is called an isomorphism if T is bijective. In this case, we say that V is isomorphic to W.
(ii) T ∈ L(V, W) is called a nonsingular transformation if ker T = {0}.

Note that if T ∈ LV, W is an isomorphism, then T −1 ∈ LW, V is also an isomorphism. So V is


isomorphic to W if and only if W is isomorphic to V and we can say simply that V and W are
isomorphic.
For a linear transformation on a finite dimensional vector space the concepts of isomorphism,
invertible and nonsingular are all equivalent, so that to prove that such a linear transformation is
invertible we only need verify that it is either injective or surjective. This is a direct consequence of the
following stronger result:

Theorem 3.4 Let V and W be (real) vector spaces with dim V = dim W = n and consider T ∈ L(V, W). Then, the following statements are equivalent:
(i) T is an isomorphism.
(ii) T is invertible.
(iii) T is injective.
(iv) T is nonsingular.
(v) rank T := dim im T = n.
(vi) T is surjective.
(vii) If {v_1, v_2, …, v_n} is a basis for V, then {T(v_1), T(v_2), …, T(v_n)} is a basis for W.

Proof. (i)⇒(ii): If T is an isomorphism of V onto W, then clearly T^{-1} is a bijective map from W onto V and is linear by theorem 3.1.
(ii)⇒(iii): Consider v_1 and v_2, two arbitrary elements of V, and suppose that T(v_1) = T(v_2). Applying T^{-1} to both sides of this equation gives T^{-1}T(v_1) = T^{-1}T(v_2) and, since T^{-1}T = I_V, we have that v_1 = v_2.
(iii)⇒(iv): Notice that T nonsingular, that is, ker T = {0}, is equivalent to the statement: T(u) = 0 implies u = 0. Now, suppose T ∈ L(V, W) is injective and T(u) = 0. But T(0) = 0, from which we have that T(u) = T(0). Since T is injective, this implies that u = 0.
(iv)⇒(v): If ker T = {0}, then dim ker T = 0 and dim im T = n − dim ker T = n.
(v)⇒(vi): Since dim im T = n = dim W, we have im T = W.
(vi)⇒(vii): Given any w ∈ W, there exists v ∈ V such that w = T(v). Now, v = α^i v_i for some α^1, …, α^n ∈ R. Hence, w = T(v) = T(α^i v_i) = α^i T(v_i). So, {T(v_1), T(v_2), …, T(v_n)} is a basis for W because it generates W, has n elements and dim W = n.
(vii)⇒(i): Let u = α^i v_i and v = β^i v_i be any two vectors in V such that T(u) = T(v). Thus (α^i − β^i) T(v_i) = 0. Since {T(v_1), T(v_2), …, T(v_n)} is a basis for W, it follows that α^i = β^i, so that u = v and T is injective.
On the other hand, given any w ∈ W, there exist scalars α^1, …, α^n ∈ R such that w = α^i T(v_i) = T(α^i v_i). Since α^i v_i ∈ V, we have that T is surjective. ∎

Now we are in a position to obtain the characterization of all finite dimensional (real) vector spaces. Indeed, let {v_1, v_2, …, v_n} be a basis for an n-dimensional (real) vector space V. Define a transformation T : V → R^n by putting, for each v ∈ V, v = α^i v_i, where the α^i ∈ R, i = 1, 2, …, n, are uniquely determined,
T(v) := (α^1, α^2, …, α^n)    (3.6)

Theorem 3.5 If V is any (real) vector space with dim V = n, then V is isomorphic to R^n.

Proof. T defined by (3.6) is a well-defined map and one can easily check that it is a surjective linear transformation. So, since dim V = n, we have by theorem 3.4 that T is an isomorphism. ∎

Exercise 3.2 Consider θ ∈ R and T_θ ∈ L(R^2, R^2) defined by
T_θ(v_1, v_2) = (v_1 cos θ − v_2 sin θ, v_1 sin θ + v_2 cos θ)
Show that T_θ is an isomorphism.

Solution. It is more convenient to express T_θ in the following manner:
T_θ ⎡ v_1 ⎤ = ⎡ v_1 cos θ − v_2 sin θ ⎤ = A_θ ⎡ v_1 ⎤
    ⎣ v_2 ⎦   ⎣ v_1 sin θ + v_2 cos θ ⎦       ⎣ v_2 ⎦
where A_θ is the 2 × 2 matrix
A_θ = ⎡ cos θ  −sin θ ⎤
      ⎣ sin θ   cos θ ⎦
To show that T_θ is an isomorphism is equivalent to showing that the matrix A_θ is invertible. But det A_θ = cos²θ + sin²θ = 1 ≠ 0, so that A_θ is invertible, and A_θ A_θ^{-1} = I_{2×2} gives
A_θ^{-1} = ⎡  cos θ  sin θ ⎤
           ⎣ −sin θ  cos θ ⎦   ∎
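A short numerical confirmation (added here, not in the original) that A_θ is invertible for every θ, and that its inverse is its transpose (rotation by −θ):

    import numpy as np

    theta = 0.7   # any angle
    A = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    print(np.linalg.det(A))                      # 1.0
    print(np.allclose(np.linalg.inv(A), A.T))    # True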
DUAL SPACE
Let V be a vector space over the field R. The vector space L(V, R) of all linear transformations from V into the vector space R turns out to be of great importance in the most diverse applications of mathematics, because we can often profit from converting relations in V into statements about transformations in L(V, R). Such transformations are called linear functionals, and we refer to this process as dualization.

Definition 3.3 Let V be a vector space (over the field R). We say that f is a linear functional on V if f ∈ L(V, R).

We say that L(V, R) is the dual space of V. It is usual to denote L(V, R) by V^* and its elements by u^*, v^*, w^*, …. Also, we may call the elements of V contravariant vectors to distinguish them from the elements of V^*, which we may call covariant vectors. Or else, we can call the elements of V vectors and the elements of V^* covectors.
It is easy to check that the following maps are examples of linear functionals:
(i) f : R^n → R defined by
f(x_1, …, x_n) := c_1 x_1 + ⋯ + c_n x_n
where c_1, …, c_n ∈ R.
(ii) f : C[0, 1] → R defined by
f(x) := ∫_0^1 x(t) dα(t)
where we recall that C[0, 1] stands for the vector space of all continuous functions from the interval [0, 1] into R, and α(·) is a function of bounded variation on [0, 1].
(iii) Let y ∈ R^n. Define f_y : R^n → R by
f_y(x) := x · y
where · denotes the dot product in R^n.

In the case that V is finite dimensional, say dim V = n, we have from theorem 3.3 that dim V^* = dim L(V, R) = n · 1 = n, from which it follows that:

Theorem 3.6 The spaces V and V^* are isomorphic.

Proof. Since dim V^* = dim V = n, we have by theorem 3.5 that both V and V^* are isomorphic to R^n. Let Φ be an isomorphism between V and R^n and Ψ be an isomorphism between V^* and R^n. Then, obviously, Ψ^{-1}Φ is a linear mapping from V to V^* that is both one-to-one and onto, because it is the composition of two bijective mappings. ∎

Theorem 3.7 Let V be an n-dimensional vector space and {e_1, …, e_n} a basis in V. Then, to any given set of scalars α_1, …, α_n, there corresponds uniquely a mapping w^* : V → R, defined by
w^*(v) := v_1 α_1 + ⋯ + v_n α_n, ∀ v = v_1 e_1 + ⋯ + v_n e_n ∈ V    (3.7)
such that
w^* ∈ V^* and w^*(e_i) = α_i, i = 1, 2, …, n

Proof. Equation (3.7) defines a linear functional, since, for all u, v ∈ V and λ ∈ R, we have
w^*(u + v) = (u_1 + v_1)α_1 + ⋯ + (u_n + v_n)α_n
= (u_1 α_1 + ⋯ + u_n α_n) + (v_1 α_1 + ⋯ + v_n α_n)
= w^*(u) + w^*(v)
and
w^*(λu) = λu_1 α_1 + ⋯ + λu_n α_n
= λ(u_1 α_1 + ⋯ + u_n α_n)
= λ w^*(u)
So, for any given v = v_1 e_1 + ⋯ + v_n e_n ∈ V,
w^*(v) = w^*(v_1 e_1 + ⋯ + v_n e_n)
= v_1 w^*(e_1) + ⋯ + v_n w^*(e_n)
and (3.7) gives
v_1 α_1 + ⋯ + v_n α_n = v_1 w^*(e_1) + ⋯ + v_n w^*(e_n)
v_1 (w^*(e_1) − α_1) + ⋯ + v_n (w^*(e_n) − α_n) = 0
This implies that
w^*(e_i) = α_i, i = 1, 2, …, n
Indeed, if
w^*(e_{i_0}) − α_{i_0} ≠ 0 for some i_0 ∈ {1, 2, …, n}
then, for v = e_{i_0}, we have
1 · (w^*(e_{i_0}) − α_{i_0}) = 0
which is a contradiction.
On the other hand, since (v_1, …, v_n) is uniquely determined by the given basis {e_1, …, e_n}, it follows that w^* is determined in a unique manner as well. ∎

Given any basis e 1 , … , e n  of a n-dimensional real vector space V, we consider e ∗1 , … , e ∗n  ⊂ V ∗


by putting
j
e ∗j e i    i #
for all i, j  1, … , n. It follows from theorem 3.7 that these conditions determine each e ∗j uniquely.
Moreover, we have the following result.

Theorem 3.8 Let e 1 , … , e n  a basis of a n-dimensional (real) vector space V. Then, there exists a
unique basis e ∗1 , … , e ∗n  in V ∗ satisfying (ref: dfndualb).

 Proof. It remains to prove that e ∗1 , … , e ∗n  is indeed a basis in V ∗ . For this, suppose  1 , … ,  n are
scalars such that
w ∗   1 e ∗1     n e ∗n  0
Hence,
w ∗ v   j e ∗j v  0 , ∀ v ∈ V
In particular, for v  e i , i  1, … , n , we have
j
w ∗ e i    j e ∗j e i    j  i   i , i  1, … , n
so that each  i  0 and we have that e ∗1 , … , e ∗n  is linearly independent. On the other hand, given
any w ∗ ∈ V ∗ , we set w ∗ e i    i , i  1, … , n. So, for all v  v i e i ∈ V, we have
w ∗ v  v i  i
and, in particular, from (ref: dfndualb), it follows that
j
e ∗j v  v i  i  v j
so that
w ∗ v  v i  i  e ∗i v i   i e ∗i v , ∀ v ∈ V
and, consequently, w ∗   i e ∗i . Thus, we also have that e ∗1 , … , e ∗n  generates V ∗ and the proof is
complete. 

From what we have just seen, to any basis in V there corresponds in a unique manner a basis in its dual space by means of (3.8):

Definition 3.4 Let {e_1, …, e_n} be a basis of an n-dimensional real vector space V. We define the dual basis of {e_1, …, e_n} to be the basis {e^{*1}, …, e^{*n}} of V^* such that the conditions (3.8) hold.

We can have another useful representation for linear functionals. Let us start with the concept of bilinear maps.

Definition 3.5 Let V, U and W be (real) vector spaces. A map f : V × U → W is bilinear if
f(v_1 + v_2, w) = f(v_1, w) + f(v_2, w)
f(v, w_1 + w_2) = f(v, w_1) + f(v, w_2)
and
f(αv, w) = f(v, αw) = α f(v, w) for every scalar α

In other words, a map f = f(v, w) is bilinear if it is linear in v when w is held fixed, and linear in w when v is held fixed. The vector product in ordinary three-dimensional space is a familiar example of a bilinear map, with V = U = W. Another example of a bilinear functional that we have already come across is the inner product.
If in the definition we replace the vector space W by the scalar field R, we say that f is a bilinear functional, and we denote by L(V × U, R) the vector space of all bilinear functionals defined on V × U, with the usual operations of addition of functions and multiplication of functions by scalars.
A bilinear functional f defined on V × V is called symmetric if f(u, v) = f(v, u) for all u, v in V, and antisymmetric if f(u, v) = −f(v, u) for all u, v in V. We have that a bilinear functional f : V × V → R with the property that f(u, u) = 0 for all u in V is necessarily antisymmetric. Indeed,
0 = f(u + v, u + v) = f(u, u) + f(u, v) + f(v, u) + f(v, v) = f(u, v) + f(v, u)
for all u, v ∈ V.

DUALITY AND INNER PRODUCT

The linearity, essential to any element v^* ∈ V^*, allows us to view v^*(u), for any u ∈ V, as the result of a bilinear functional, conveniently denoted ⟨· , ·⟩, acting on the pair (v^*, u) ∈ V^* × V by means of the following identification:
⟨· , ·⟩ : V^* × V → R
(v^*, u) ↦ ⟨v^*, u⟩ := v^*(u)    (3.9)
The bracket notation ⟨· , ·⟩ for the bilinear functional constructed above is convenient because it brings out the product structure that comes from the following properties enjoyed by the functional:
(i) ⟨αv^* + u^*, v⟩ = α⟨v^*, v⟩ + ⟨u^*, v⟩.
(ii) ⟨v^*, αv + u⟩ = α⟨v^*, v⟩ + ⟨v^*, u⟩.
(iii) For any given v^* ∈ V^*,
⟨v^*, v⟩ = 0, ∀ v ∈ V, if and only if v^* = 0
(iv) For any given v ∈ V,
⟨v^*, v⟩ = 0, ∀ v^* ∈ V^*, if and only if v = 0

Because of this connection between the elements of V and V^*, we say that these spaces are in duality. We should note that it is important to distinguish between the bracket operation ⟨· , ·⟩ on V^* × V and the inner product on V, which is a scalar valued bilinear mapping on V × V.
Now, let V be a (real) vector space with an inner product · and let v be a fixed element in V. It is an easy task to prove that the functional f_v : V → R defined by
f_v(u) = v · u, ∀ u ∈ V
is linear, so that f_v ∈ V^*. More conveniently, we can say that f_v = v^* for some v^* ∈ V^*. Taking into account (3.9), we have
f_v(u) = v^*(u) = ⟨v^*, u⟩
so that
⟨v^*, u⟩ = v · u    (3.10)
Therefore, to each given v ∈ V there corresponds an element v^* ∈ V^* such that (3.10) is satisfied for all u ∈ V. More than this, we can define the mapping G : V → V^* by putting
⟨G(v), u⟩ = v · u, ∀ v, u ∈ V    (3.11)
More precisely, (3.11) means that G is defined in the following way: for any v ∈ V, we have that G(v) ∈ V^* is the functional
G(v) : V → R
u ↦ G(v)(u) = ⟨G(v), u⟩ := v · u
So, we have:

Theorem 3.9 Let V be a Euclidean vector space. Then, V and V^* are isomorphic.

Proof. We prove that G defined by (3.11) is an isomorphism. For this, suppose v, w ∈ V are such that G(v) = G(w). Then,
⟨G(v), u⟩ = ⟨G(w), u⟩, ∀ u ∈ V
so that
v · u = w · u
(v − w) · u = 0
Since u is arbitrary, we have that v = w. So, G is injective. By theorem 3.4, it follows that G is an isomorphism. ∎
The following straightforward theorem relates the reciprocal and dual bases, showing that they can be identified.

Theorem 3.10 Suppose {e^1, …, e^n} and {e^{*1}, …, e^{*n}} are, respectively, the reciprocal and the dual bases of an arbitrary basis {e_1, …, e_n} in a Euclidean vector space V. Then
G(e^i) = e^{*i}, i = 1, …, n    (3.12)
where G is the isomorphism defined by (3.11).
Moreover, given v^* = v^*_i e^{*i} ∈ V^*, there exists a v = v_i e^i ∈ V such that v^*_i = v_i.

Proof. From the definitions of reciprocal and dual bases, we have that e^i · e_j = δ^i_j and ⟨e^{*i}, e_j⟩ = δ^i_j. So, by (3.11),
⟨G(e^i), e_j⟩ = e^i · e_j = δ^i_j = ⟨e^{*i}, e_j⟩
and (3.12) follows.
Furthermore, since G is surjective, to any v^* = v^*_i e^{*i} ∈ V^* there corresponds a v = v_i e^i ∈ V such that
v^* = v^*_i e^{*i} = G(v) = G(v_i e^i)
Hence, by (3.12),
v^*_i e^{*i} = G(v_i e^i) = v_i G(e^i) = v_i e^{*i}
so that v^*_i = v_i. ∎
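Numerically (an added sketch using a hypothetical basis of R^3 with the standard dot product), the reciprocal basis can be computed from the Gram matrix g_ij = e_i · e_j, and theorem 3.10 then shows up as the fact that the reciprocal vectors, read as rows, coincide with the dual basis (the rows of E^{-1}):

    import numpy as np

    E = np.array([[1., 1., 0.],        # columns are e_1, e_2, e_3
                  [0., 1., 1.],
                  [0., 0., 1.]])
    g = E.T @ E                        # Gram matrix g_ij = e_i . e_j
    E_recip = E @ np.linalg.inv(g)     # columns are the reciprocal basis e^i
    print(np.allclose(E_recip.T @ E, np.eye(3)))      # e^i . e_j = delta^i_j
    print(np.allclose(E_recip.T, np.linalg.inv(E)))   # True: G(e^i) = e*^i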

Recall that for inner product spaces the bracket ⟨· , ·⟩ is indeed the inner product. Thus, we can write
⟨v^*, e_i⟩ = (v^*_j e^j) · e_i = v^*_j (e^j · e_i) = v^*_i
⟨e^{*i}, v⟩ = e^i · (v^j e_j) = v^j (e^i · e_j) = v^i
⟨v^*, v⟩ = (v^*_i e^i) · (v^j e_j) = v^*_i v^j (e^i · e_j) = v^*_i v^i
and we have the following generalization from inner product spaces to vector spaces in general:
⟨v^*, e_i⟩ = v^*_i,  ⟨e^{*i}, v⟩ = v^i,  ⟨v^*, v⟩ = v^*_i v^i
for v^* = v^*_i e^{*i} and v = v^i e_i.

We finish this section with an exercise that shows that the second dual space V^{**} can always be identified with V, regardless of any extra structure beyond that of a vector space.

Exercise 3.3 Show that V and V^{**} are isomorphic.

Solution. The mapping
G : V → V^{**} = L(L(V, R), R)
v ↦ G(v)
where G(v) is defined as
G(v) : L(V, R) → R
v^* ↦ G(v)(v^*) := v^*(v)
is easily seen to be an isomorphism. ∎
TRANSPOSE
Any linear transformation T ∈ L(V, U) induces, in a unique manner, another transformation, which we call the transpose of T, by means of the following procedure. Define T^T on U^* by setting, for all u^* ∈ U^*,
T^T(u^*) := u^* ∘ T
that is,
(T^T u^*)(v) = u^*(T(v)), ∀ v ∈ V    (3.13)
Note that T^T u^* ∈ V^*, because it is the composition of two linear transformations. Moreover, T^T ∈ L(U^*, V^*), since
T^T(u^*_1 + αu^*_2) = (u^*_1 + αu^*_2) ∘ T
= u^*_1 ∘ T + α u^*_2 ∘ T
= T^T(u^*_1) + α T^T(u^*_2)
for all u^*_1, u^*_2 ∈ U^* and α ∈ R.
In view of the bracket notation introduced by (3.9), the expression (3.13) can be rewritten as
⟨u^*, T(v)⟩ = ⟨T^T(u^*), v⟩, ∀ v ∈ V, ∀ u^* ∈ U^*
When V and U are spaces with inner products, we have the following definition:

Definition 3.6 Let V and U be two inner product spaces. Given a linear transformation T ∈ L(V, U), we say that the transpose of T, denoted T^T, is the linear transformation in L(U, V) such that
T(v) · u = v · T^T(u), ∀ v ∈ V, ∀ u ∈ U

Note that the inner products on the left-hand side and on the right-hand side are the ones in U and V, respectively.

Theorem 3.11 The transpose of a linear transformation is unique.

Proof. Suppose T^T_1, T^T_2 ∈ L(U, V) are transposes of T ∈ L(V, U). Then, for all v ∈ V and u ∈ U,
⟨T(v), u⟩ = ⟨v, T^T_1(u)⟩ = ⟨v, T^T_2(u)⟩
Thus, ⟨v, (T^T_1 − T^T_2)(u)⟩ = 0 and, since v and u are arbitrary, it follows that (T^T_1 − T^T_2)(u) = 0 and T^T_1 − T^T_2 = 0. ∎

We have the following properties:

Theorem 3.12
(i) For all T, S ∈ L(V, U) and α ∈ R, we have that (T + S)^T, (αT)^T ∈ L(U, V) are such that
(T + S)^T = T^T + S^T and (αT)^T = α T^T
(ii) For all T ∈ L(V, U) and S ∈ L(U, W), we have that (ST)^T ∈ L(W, V) is such that
(ST)^T = T^T S^T
(iii) For all T ∈ L(V, U),
(T^T)^T = T

Proof. (i) We have
⟨v, (T + S)^T(u)⟩ = ⟨(T + S)(v), u⟩
= ⟨T(v), u⟩ + ⟨S(v), u⟩
= ⟨v, T^T(u)⟩ + ⟨v, S^T(u)⟩
= ⟨v, (T^T + S^T)(u)⟩
and
⟨v, (αT)^T(u)⟩ = ⟨(αT)(v), u⟩
= α⟨T(v), u⟩
= α⟨v, T^T(u)⟩
= ⟨v, (αT^T)(u)⟩
(ii) For any w ∈ W and v ∈ V, we have
⟨v, (ST)^T(w)⟩ = ⟨(ST)(v), w⟩
= ⟨S(T(v)), w⟩
= ⟨T(v), S^T(w)⟩
= ⟨v, T^T(S^T(w))⟩
(iii) For any v ∈ V and u ∈ U, we have ⟨u, (T^T)^T(v)⟩ = ⟨T^T(u), v⟩ = ⟨u, T(v)⟩. ∎

Definition 3.7
(i) T ∈ L(V, V) is said to be symmetric if T = T^T.
(ii) T ∈ L(V, V) is said to be antisymmetric if T = −T^T.

Note that any linear transformation T ∈ L(V, V) can always be expressed as the sum of a symmetric linear transformation and an antisymmetric one, that is,
T = S + A
where
S = (1/2)(T + T^T)
is symmetric and
A = (1/2)(T − T^T)
is antisymmetric.
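In matrix terms this decomposition is immediate; a small numerical sketch (added here, not from the text):

    import numpy as np

    T = np.random.default_rng(0).standard_normal((4, 4))  # matrix of some T in L(V, V)
    S = 0.5 * (T + T.T)    # symmetric part
    A = 0.5 * (T - T.T)    # antisymmetric part
    print(np.allclose(S, S.T), np.allclose(A, -A.T), np.allclose(S + A, T))
    # True True True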
TENSOR PRODUCT
One kind of bilinear mapping that will be of interest to us is what we call the tensor product.

Definition 3.8 Let V and U be two finite dimensional vector spaces. For any two vectors v ∈ V and u ∈ U, the mapping v ⊗ u ∈ L(U^*, V) defined by
v ⊗ u : U^* → V
u^* ↦ (v ⊗ u)(u^*) := u^*(u) v = ⟨u^*, u⟩ v    (3.14)
is called the tensor product of v and u.

The notion of tensor product can be given a meaning in terms of bilinear functionals, because we have the following nice isomorphism.

Theorem 3.13 Let V and U be two finite dimensional vector spaces. Then L(U^*, V) and L(V^* × U^*, R) are canonically isomorphic.

Proof. Consider the mapping
Φ : L(U^*, V) → L(V^* × U^*, R)
F ↦ Φ(F) := f
where
f : V^* × U^* → R
(v^*, u^*) ↦ f(v^*, u^*) := v^*(F(u^*))
We shall prove that Φ is an isomorphism.
For each F, G ∈ L(U^*, V) and α ∈ R, we have that Φ(F + αG) = f with
f(v^*, u^*) = v^*((F + αG)(u^*))
= v^*(F(u^*) + αG(u^*))
= v^*(F(u^*)) + αv^*(G(u^*)) for all (v^*, u^*) ∈ V^* × U^*
Thus, Φ(F + αG) = Φ(F) + αΦ(G).
On the other hand, suppose F, G ∈ L(U^*, V) are such that f = g, where f = Φ(F) and g = Φ(G). So, for each (v^*, u^*) ∈ V^* × U^* we have that
v^*(F(u^*)) = v^*(G(u^*))
v^*(F(u^*) − G(u^*)) = 0
Since v^* is arbitrary, it follows that F = G and, hence, Φ is injective. By theorem 3.4, we have that Φ is an isomorphism. Finally, because our proof did not depend upon the choice of any basis or other intrinsic resources of the spaces, the isomorphism is canonical. ∎

Now, applying the isomorphism Φ to v ⊗ u ∈ L(U^*, V), we have
Φ(v ⊗ u) = f ∈ L(V^* × U^*, R)
where f is given by
f(v^*, u^*) = v^*((v ⊗ u)(u^*))
= v^*(u^*(u) v)
= u^*(u) · v^*(v)
Thus, we can give the following alternative definition of the tensor product.

Definition 3.9 Let V and U be two finite dimensional vector spaces. Equivalently, the tensor product v ⊗ u of two vectors v ∈ V and u ∈ U is the bilinear functional v ⊗ u ∈ L(V^* × U^*, R) defined by
(v ⊗ u)(v^*, u^*) := v^*(v) · u^*(u), ∀ (v^*, u^*) ∈ V^* × U^*    (3.15)
Moreover, the space L(V^* × U^*, R) of all bilinear functionals over V^* × U^* is called the tensor product space of V and U and is denoted V ⊗ U.

The next step is to characterize a basis for V ⊗ U out of the bases for the individual spaces involved.

Theorem 3.14 Let {e_i} and {e^{*i}}, i = 1, 2, …, m, be bases for V and V^*, respectively. Also, let {f_j} and {f^{*j}}, j = 1, 2, …, n, be bases for U and U^*, respectively. Then, the set {e_i ⊗ f_j ; i = 1, …, m and j = 1, …, n} is a basis for the tensor product space V ⊗ U.

Proof. Assume α^{ij} e_i ⊗ f_j = 0. Then, for each k = 1, …, m and ℓ = 1, …, n, we have
α^{ij} (e_i ⊗ f_j)(e^{*k}, f^{*ℓ}) = 0
α^{ij} ⟨e^{*k}, e_i⟩ · ⟨f^{*ℓ}, f_j⟩ = 0
α^{ij} δ^k_i δ^ℓ_j = 0
α^{kℓ} = 0
so that {e_i ⊗ f_j} is linearly independent.
Given any T ∈ V ⊗ U, let T(e^{*i}, f^{*j}) = T^{ij}. Then, for all v^* ∈ V^* and u^* ∈ U^*, we have
T(v^*, u^*) = T(v_i e^{*i}, u_j f^{*j}) = v_i u_j T(e^{*i}, f^{*j}) = T^{ij} v_i u_j
On the other hand,
(e_i ⊗ f_j)(v^*, u^*) = (e_i ⊗ f_j)(v_k e^{*k}, u_ℓ f^{*ℓ}) = v_k u_ℓ (e_i ⊗ f_j)(e^{*k}, f^{*ℓ}) = v_i u_j
From this we have that T = T^{ij} e_i ⊗ f_j and {e_i ⊗ f_j} generates V ⊗ U. ∎

The basis e i ⊗ f j ; i  1, … , m and j  1, … , n from theorem 3.13 above is called the product basis
for V ⊗ U and the coefficients T ij are called the components of T with respect to this basis. Moreover,
by theorem 3.3,
dimV ⊗ U  dim LV ∗  U ∗ , R  mn. 1  mn  dim LU ∗ , V
Hence, LV ∗  U ∗ , R and LU ∗ , V have the same dimension, which we should have expected since
they are canonically isomorphic.
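For V = R^m and U = R^n with standard bases, definition 3.9 reduces to the familiar outer product; a brief illustrative sketch (added here):

    import numpy as np

    v = np.array([1., 2., 0.])        # v in V = R^3
    u = np.array([3., 1.])            # u in U = R^2
    T = np.outer(v, u)                # components T^{ij} = v^i u^j in the product basis
    vstar = np.array([1., 0., 1.])    # a covector v* on V (row of coefficients)
    ustar = np.array([0., 2.])        # a covector u* on U
    print(vstar @ T @ ustar)          # (v ⊗ u)(v*, u*) ...
    print((vstar @ v) * (ustar @ u))  # ... equals v*(v) u*(u): both print 2.0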

Exercise 3.4 Obtain spaces that are isomorphic to
(i) V^* ⊗ U^*,  (ii) V ⊗ U^*,  (iii) V^* ⊗ U
and define the corresponding tensor products.

Solution.
(i) We can write
V^* ⊗ U^* ≡ L(V × U, R) ≡ L(U, V^*)
(v^* ⊗ u^*)(v, u) = ⟨v^*, v⟩⟨u^*, u⟩
(v^* ⊗ u^*)(u) = ⟨u^*, u⟩ v^*
T = T_{ij} e^{*i} ⊗ f^{*j}, ∀ T ∈ V^* ⊗ U^*
where {e^{*i} ⊗ f^{*j} ; i = 1, …, m and j = 1, …, n} is a basis for V^* ⊗ U^*.
(ii)
V ⊗ U^* ≡ L(V^* × U, R) ≡ L(U, V)
(v ⊗ u^*)(v^*, u) = ⟨v^*, v⟩⟨u^*, u⟩
(v ⊗ u^*)(u) = ⟨u^*, u⟩ v
T = T^i_j e_i ⊗ f^{*j}, ∀ T ∈ V ⊗ U^*
where {e_i ⊗ f^{*j} ; i = 1, …, m and j = 1, …, n} is a basis for V ⊗ U^*.
(iii)
V^* ⊗ U ≡ L(V × U^*, R) ≡ L(U^*, V^*)
(v^* ⊗ u)(v, u^*) = ⟨v^*, v⟩⟨u^*, u⟩
(v^* ⊗ u)(u^*) = ⟨u^*, u⟩ v^*
T = T_i^j e^{*i} ⊗ f_j, ∀ T ∈ V^* ⊗ U
where {e^{*i} ⊗ f_j ; i = 1, …, m and j = 1, …, n} is a basis for V^* ⊗ U. ∎

We can generalize the concept of tensor product to more than two vector spaces. For this, let V_1, V_2, …, V_n be (real) vector spaces. A multilinear functional (also called an n-linear functional) is a mapping T : V_1 × V_2 × ⋯ × V_n → R that is linear in each of its variables whilst the others are held constant. We denote by L(V_1 × V_2 × ⋯ × V_n, R) the set of all multilinear functionals defined on V_1 × V_2 × ⋯ × V_n, with the vector space structure given by the obvious addition and multiplication by a scalar.

Definition 3.10 The vector space L(V^*_1 × V^*_2 × ⋯ × V^*_n, R) is called the tensor product space of V_1, V_2, …, V_n and we denote
L(V^*_1 × V^*_2 × ⋯ × V^*_n, R) = V_1 ⊗ V_2 ⊗ ⋯ ⊗ V_n
We define the tensor product v_1 ⊗ v_2 ⊗ ⋯ ⊗ v_n of n vectors v_i ∈ V_i, i = 1, 2, …, n, as
(v_1 ⊗ v_2 ⊗ ⋯ ⊗ v_n)(v^*_1, v^*_2, …, v^*_n) := ⟨v^*_1, v_1⟩⟨v^*_2, v_2⟩ ⋯ ⟨v^*_n, v_n⟩
for all v^*_i ∈ V^*_i, i = 1, 2, …, n.
TENSORS
When the vector spaces V_i in definition 3.10 are all equal to either the same space V or its dual, we say that the multilinear functional is a tensor. More precisely,

Definition 3.11 Let V be an n-dimensional vector space and suppose p and q are positive integers.
(i) A tensor of order (p, q) on V is a multilinear functional T of the form
T : V^* × V^* × ⋯ × V^* × V × V × ⋯ × V → R
    (p copies of V^*, q copies of V)
We denote by T^p_q(V) the space of all tensors of order (p, q) on V.
(ii) A contravariant tensor of order p, or simply a tensor of order (p, 0), is a p-linear functional
T : V^* × V^* × ⋯ × V^* → R    (p copies)
We denote by T^p(V) the space of all such functionals.
(iii) A covariant tensor of order q, or simply a tensor of order (0, q), is a q-linear functional
T : V × V × ⋯ × V → R    (q copies)
We denote by T_q(V) the space of all such functionals.
(iv) We define
T^0_0(V) = R
T^1(V) = L(V^*, R) = V^{**} ≈ V
T_1(V) = L(V, R) = V^*

In the case U  V we have, according to the isomorphism between LU ∗ , V and LV ∗  U ∗ , R
established by theorem 3.13,

LV ∗ , V ≈ LV ∗  V ∗ , R  T 2 V

V⊗V
Thus, the tensor product is an example of a tensor of order 2, 0, or equivalently, a contravariant
tensor of order 2 in T 2 V.

p
If v 1 , … , v p ∈ V and v 1 , … , v q ∈ V ∗ , an example of a tensor in T q V is the tensor product of v 1 , … , v p
and v 1 , … , v q
T  v1 ⊗  ⊗ vp ⊗ v1 ⊗  ⊗ vq : V∗    V∗  V    V  R
,
p q
defined by
Tu 1 , … , u p , u 1 , … , u q   u 1 , v 1    u p , v p  v 1 , u 1    v q , u q 
for all u 1 , … , u q ∈ V and u 1 , … , u p ∈ V ∗ .

Exercise 3.5 Let e i  and e i  , i  1, … , n , be dual bases of V and V ∗ , respectively. Then, the set
e i 1 ⊗  ⊗ e i p ⊗ e j 1 ⊗  ⊗ e j q 
p
is a basis for T q V, called the product basis, and hence
p
dim T q V  n pq

Solution: This is a straightforward generalization of theorem 3.14.


Thus, according to exercise 3.5, a tensor T ∈ T^p_q(V) has the following component representation
T = T^{i_1 ⋯ i_p}_{j_1 ⋯ j_q} e_{i_1} ⊗ ⋯ ⊗ e_{i_p} ⊗ e^{j_1} ⊗ ⋯ ⊗ e^{j_q}    (3.16)
Now, suppose we have new dual bases {e_{k′}} and {e^{k′}} of V and V^* related to the previous ones by
e_{k′} = P^j_{k′} e_j,  e^{k′} = Q^{k′}_q e^q    (3.17)
and
e_q = Q^{j′}_q e_{j′},  e^q = P^q_{k′} e^{k′}    (3.18)
where the coefficients satisfy the relations
P^j_{k′} Q^{k′}_k = δ^j_k and Q^{j′}_q P^q_{k′} = δ^{j′}_{k′}
The component form of the tensor T relative to the new bases reads
T = T^{i′_1 ⋯ i′_p}_{j′_1 ⋯ j′_q} e_{i′_1} ⊗ ⋯ ⊗ e_{i′_p} ⊗ e^{j′_1} ⊗ ⋯ ⊗ e^{j′_q}    (3.19)
Equating the component forms (3.16) and (3.19) and taking into account the transformation formulas (3.17) and (3.18), we obtain
T^{i′_1 ⋯ i′_p}_{j′_1 ⋯ j′_q} = Q^{i′_1}_{i_1} ⋯ Q^{i′_p}_{i_p} P^{j_1}_{j′_1} ⋯ P^{j_q}_{j′_q} T^{i_1 ⋯ i_p}_{j_1 ⋯ j_q}
and
T^{i_1 ⋯ i_p}_{j_1 ⋯ j_q} = P^{i_1}_{i′_1} ⋯ P^{i_p}_{i′_p} Q^{j′_1}_{j_1} ⋯ Q^{j′_q}_{j_q} T^{i′_1 ⋯ i′_p}_{j′_1 ⋯ j′_q}
which are the transformation formulas for the components of any tensor T ∈ T^p_q(V).
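These formulas can be checked numerically with einsum; a minimal sketch for a (1, 1) tensor (added here, with a random invertible change of basis assumed):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 3
    T = rng.standard_normal((n, n))   # components T^i_j
    P = rng.standard_normal((n, n))   # e_{k'} = P^j_{k'} e_j
    Q = np.linalg.inv(P)              # then Q^{k'}_j is the inverse matrix

    # T'^{a}_{b} = Q^{a}_{i} P^{j}_{b} T^{i}_{j}
    T_new = np.einsum('ai,jb,ij->ab', Q, P, T)
    # Transforming back with the second formula recovers the original components
    T_back = np.einsum('ia,bj,ab->ij', P, Q, T_new)
    print(np.allclose(T_back, T))     # True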

Finally, we can introduce some algebraic operations on components of tensors. It turns out that addition, tensorial multiplication, self-contraction, product contraction, symmetrization and alternation are the main operations when one deals with the algebraic aspects of components of tensors. Here, we have some words to say about each of these operations.

1. Addition. It applies only to components of tensors of the same order and type. If we are given two systems of components of tensors of the same order and type, and if we add each component of the first tensor to the corresponding component of the second, we obviously arrive at a set of components of a tensor of the same order and type as the original ones. This process is the operation of addition and the resulting components are called the sum of the two systems of components. Thus, if S_{ab}^{··cde} and P_{ab}^{··cde} are two sets of components of tensors of order 5, then the components of the sum T = S + P of S and P are defined by the equations
T_{ab}^{··cde} = S_{ab}^{··cde} + P_{ab}^{··cde}

2. Tensorial multiplication. If we take two systems of components of tensors of any kind and multiply each component of the first by each component of the second, we get a system of components of a tensor, called their product, whose order equals the sum of the orders of the two original systems. For example, if S_{ab}^{··cd} are the components of a tensor S and P^{ef} are the components of a tensor P, then their product is the tensor T with components given by
T_{ab}^{··cdef} = S_{ab}^{··cd} P^{ef}
This process can be extended to define the product of any number of systems of components of tensors.

3. Self-contraction. This operation allows one to create new tensors of lower order from any given tensor. Such a process is better explained by means of an example. Let us take a system of components of a tensor of order k, say T^a_{·bc…k}, which has both upper and lower indices. If we put a = b, we get the set T^b_{·bc…k}. Since b is now a repeated index, it must be summed from 1 to n, in accordance with our summation convention. Thus, the new system obtained in this way, S_{c…k} = T^b_{·bc…k}, defines a tensor S of order k − 2. Of course, the operation can be repeated several times, because we can contract with respect to any pair of indices, one being a subscript and the other a superscript.

4. Product contraction. For this operation, we first take the product of two systems of components of tensors and then contract the result with respect to a subscript of one and a superscript of the other. Thus, a product contraction of the two systems P_{abd} and S^{cd} is the tensor A whose system of components is given by A_{ab}^{··c} = P_{abd} S^{cd} (a numerical sketch of operations 3-6 is given after item 6 below).

5. Symmetrization. This process is always applied to a number of upper indices or lower indices. In order to perform such an operation over k indices, we form k! systems of components by permuting these indices in all possible ways, and then we average by taking the sum of these systems and dividing by k!. The operation of symmetrization is denoted Sym. For example,
Sym(T_{ab}) = (T_{ab} + T_{ba}) / 2!
Sym(T_{abc}) = (T_{abc} + T_{bca} + T_{cab} + T_{acb} + T_{cba} + T_{bac}) / 3!

6. Alternation. This operation, denoted either by Alt or by a pair of square brackets [ ], is performed in the same way as the process of symmetrization, except that each term in the average is taken with a positive sign if the permutation is even and a negative sign if the permutation is odd. Here are two examples:
Alt(T_{abc}) = T_{[abc]} = (T_{abc} + T_{bca} + T_{cab} − T_{acb} − T_{cba} − T_{bac}) / 3!
Alt(T_{ab}) = T_{[ab]} = (T_{ab} − T_{ba}) / 2!
EXERCISES
1. Let T ∈ V ⊗ W, T ≠ 0. Show that in general there exist several representations of the form
T = v_i ⊗ w_i, v_i ∈ V, w_i ∈ W
with {v_1, …, v_n} and {w_1, …, w_n} each linearly independent.
2. Let a ∈ V and b ∈ W be such that b ⊗ a ≠ 0. Prove that
b ⊗ a = w ⊗ v if and only if v = λa and w = λ^{-1}b, for some λ ≠ 0
3. Show that a self-contraction over two contravariant or two covariant indices of a tensor does not give another tensor. For example, T^{abb} (summed over b) is not a tensor.
4. Show that L(V, L(V, V)) is isomorphic to T^1_2(V).
5. Prove that g_{ij} A^i B^j is invariant, i.e., g_{i′j′} A^{i′} B^{j′} = g_{ij} A^i B^j.
6. Associated to the vectors a = a_i e^i, b = b_j e^j and c = c^k e_k it is possible to define the following trilinear functional T by putting
T(a, b, c) = a_i b_j c^k T^{ij}_k
Show that this functional defines a tensor.
7. Associated to the standard basis {e_1, e_2, e_3} of R^3,
e_1 = (1, 0, 0)^T,  e_2 = (0, 1, 0)^T,  e_3 = (0, 0, 1)^T
there exists a set of scalars defined by
T_{ijk} = T(e_i, e_j, e_k)
where T is a given trilinear functional. Suppose another basis is defined as
e_{1′} = e_1 − 2e_2
e_{2′} = 2e_1 + e_2
e_{3′} = e_1 + e_2
Find T_{i′j′k′} = T(e_{i′}, e_{j′}, e_{k′}) in terms of T_{ijk}.
8. Prove that T_{ij} v^i w^j is invariant.
9. Prove that if a_j = T_{ij} v^i are the components of a covariant vector a for all contravariant vectors v, then T_{ij} is a covariant tensor of order 2.
10. If v^i are the components of a contravariant vector, when can you say that the a^i defined by a^i = v^i + λv^i, λ ∈ R, are components of a vector a?
