head of the Department of Mathematics, who blessed me with his valuable suggestions, constant
encouragement, invaluable support and parental affection, and provided the necessary facilities
during the course of this project.
I consider myself fortunate to have completed this project report under the guidance of the
term-paper in-charge, Dr. Vivek Mehrotra.
His leadership, cooperativeness, absolving nature and beneficial attitude fill me with the
strength to stand tall. I bow in respect to my sir.
ABSTRACT
This report develops the elements of tensor algebra. Starting from vector spaces and their bases,
it introduces n-dimensional space, index notation and Einstein's summation convention, the
Kronecker delta, determinants and Cramer's rule, and then defines contravariant, covariant and
mixed tensors through their laws of transformation under a change of basis. The operations of
tensor product, contraction, inner and outer multiplication, and the quotient law are treated,
followed by special tensors: symmetric, skew-symmetric, reciprocal and associated tensors.
These notions form the working toolkit for applications of tensors in physics, engineering and
computer science.
Table of contents
1. Certificate
2. Acknowledgement
3. Abstract
4. Introduction
5. Vector space
6. Basis of a vector space
7. Definition n-Dimensional Space
8. Definition Superscript And Subscript
9. Einstein's Summation Convention
10. Definition Dummy Suffix
11. Definition Real Suffix
12. Kronecker Delta
13. Properties of Kronecker Delta
14. Determinant
15. Differentiation of a Determinant
16. Multiplication of Determinants
17. Linear Equations, Cramer's Rule
18. Functional Determinants
19. Tangent Vector
20. Contravariant Vectors
21. Law of Transformation of the Components of Contravariant Vectors
22. Covariant Vector
23. Law of Transformation of the Components of Covariant Vectors
24. Tensor Product of Two Vector Spaces
25. Law of Transformation of Components of a Contravariant Tensor of Rank 2
26. Law of Transformation of Components of a Covariant Tensor of Rank 2
27. Law of Transformation of Components of a Contravariant Tensor of Rank r
28. Law of Transformation of Components of a Covariant Tensor of Rank r
29. Law of Transformation of Components of a Mixed Tensor of Type (r, s)
30. Transformation Formula
31. Contraction
32. Inner product
33. Quotient Law
34. Special tensors
(a) Symmetric Tensor.
(b) Skew-Symmetric Tensor.
35. Reciprocal Symmetric Tensor
36. Associated Tensors
37. Conclusion
38. Reference
INTRODUCTION
Tensor algebra is a branch of mathematics that deals with the study of tensors: objects that
describe linear relations between vectors, scalars, and other tensors. In simpler terms, tensors
are multidimensional arrays of numbers that follow specific transformation rules under
coordinate changes. Tensor algebra involves operations such as addition, multiplication,
contraction, and outer products on tensors, making it a fundamental tool in fields like physics,
engineering, and computer science for describing and analyzing complex systems with many
dimensions and degrees of freedom.

Tensors generalize scalars, vectors and matrices to higher dimensions; a reader familiar with
basic linear algebra should have no trouble understanding what they are. In short, a
one-dimensional tensor can be represented as a vector. The term rank of a tensor extends the
notion of the rank of a matrix in linear algebra, although the term is also often used to mean
the order (or degree) of a tensor; the rank of a matrix is the minimum number of column vectors
needed to span the range of the matrix. A tensor is a vector or matrix of n dimensions that can
represent any type of data. All values in a tensor hold an identical data type with a known (or
partially known) shape, the shape being the dimensionality of the matrix or array; a tensor can
originate from the input data or from the result of a computation.

Tensors are simply mathematical objects that can be used to describe physical properties, just
like scalars and vectors. In fact, tensors are merely a generalization of scalars and vectors:
a scalar is a zero-rank tensor, and a vector is a first-rank tensor. Pressure itself is a scalar
quantity; the related tensor quantity often talked about is the stress tensor, and pressure is
the negative one-third of the sum of the diagonal components of the stress tensor (the Einstein
summation convention, in which repeated indices imply a sum, is used here). Tensors are to
multilinear functions as linear maps are to single-variable functions: if you want to apply the
techniques of linear algebra to problems that depend linearly on more than one variable (usually
problems that are more than one-dimensional), the objects you are studying are tensors.
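In indices, the relation between pressure and the stress tensor just described reads (a standard
continuum-mechanics identity, stated here for concreteness):
$$p = -\frac{1}{3}\,\sigma_{ii} = -\frac{1}{3}\left(\sigma_{11} + \sigma_{22} + \sigma_{33}\right),$$
where $\sigma_{ij}$ denotes the stress tensor and the repeated index $i$ implies summation.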
HISTORY
Tensor algebra has a rich history that spans several centuries, with roots in both mathematics
and physics. Here's a brief overview:
1. 18th and early 19th centuries: The concept of tensors began to emerge with the work
of mathematicians such as Euler and Cauchy, who developed the theory of surfaces and
introduced quantities that behave as multidimensional arrays of numbers.
2. 19th Century: The groundwork for tensor algebra was laid in the 19th century by
mathematicians like Gauss, Riemann, and Christoffel. They developed the theory of
curved spaces and introduced the notion of covariant and contravariant vectors.
3. Late 19th to Early 20th Century: The theory of tensors was further developed by
mathematicians like Ricci-Curbastro and Levi-Civita, who introduced the tensor calculus
notation and formalism. This culminated in the creation of the Levi-Civita symbol, which
is used to define the cross product in three dimensions.
4. Special and General Relativity: Tensor algebra gained prominence in the early 20th
century with the development of Albert Einstein's theory of general relativity. Einstein's
field equations, which describe the curvature of spacetime due to matter and energy, are
expressed using tensor notation.
5. Quantum Mechanics: Tensor algebra found applications in quantum mechanics,
particularly in the formulation of quantum field theory. Tensors are used to describe the
properties of particles and fields in this framework.
6. Modern Mathematics: In the latter half of the 20th century, tensor algebra found
applications in various fields of mathematics, including differential geometry, algebraic
geometry, and algebraic topology. Tensors are used to describe geometric objects such as
manifolds and vector bundles.
7. Engineering and Physics: Today, tensor algebra plays a crucial role in various branches
of science and engineering, including continuum mechanics, electromagnetism, fluid
dynamics, and computer graphics. Tensors are used to describe physical quantities such
as stress, strain, electromagnetic fields, and fluid flow.
Overall, tensor algebra has a rich and diverse history, with contributions from
mathematicians and physicists over several centuries. It continues to be an essential tool
for describing the geometry and physics of the universe.
Vector Space: Let V be a given set; we shall call the elements of V vectors. Let F be a field
whose elements will be called scalars. We say that V(F) is a vector space if the following
conditions are satisfied:

(A) There exists an internal binary operation '+' on V, called vector addition, such that (V, +)
is an abelian group. Hence
(i) $(\alpha + \beta) + \gamma = \alpha + (\beta + \gamma)$ for all $\alpha, \beta, \gamma \in V$;
(ii) $\alpha + 0 = 0 + \alpha = \alpha$ for all $\alpha \in V$, where 0 is called the identity or zero vector in V;
(iii) for each $\alpha \in V$ there exists $-\alpha \in V$ such that $\alpha + (-\alpha) = 0$;
(iv) $\alpha + \beta = \beta + \alpha$ for all $\alpha, \beta \in V$.

(B) There exists an external binary operation called scalar multiplication such that
$\lambda \in F,\ \alpha \in V \Rightarrow \lambda\alpha \in V$. The scalar multiplication satisfies the following axioms:
(i) $\lambda(\alpha + \beta) = \lambda\alpha + \lambda\beta$ for all $\alpha, \beta \in V$, $\lambda \in F$;
(ii) $(\lambda + \mu)\alpha = \lambda\alpha + \mu\alpha$ for all $\alpha \in V$, $\lambda, \mu \in F$;
(iii) $(\lambda\mu)\alpha = \lambda(\mu\alpha)$;
(iv) $1\alpha = \alpha$, where 1 is the unity of F.
EXAMPLE I: If R is the set of real numbers then (R, +, ·) is a field. If V is the set of all
n × n matrices over R, then V(R) is a vector space.
Solution: The internal binary operation on V is matrix addition. If A, B, C are arbitrary n × n
matrices of V, then:
(i) (A + B) + C = A + (B + C);
(ii) if 0 is the n × n null matrix, A + 0 = 0 + A = A for all A in V;
(iii) for each A there is a matrix B = -A with A + B = B + A = 0, so B is the inverse of A;
(iv) A + B = B + A for all A, B in V.
Also, for all A, B in V and λ, μ in R:
(i) λ(A + B) = λA + λB;
(ii) (λ + μ)A = λA + μA;
(iii) (λμ)A = λ(μA);
(iv) 1A = A, 1 in R.
Hence V(R) is a vector space.
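A quick numerical illustration of Example I (a minimal sketch using NumPy; the matrices A, B, C
and the scalars are arbitrary sample values, not taken from the text):

    import numpy as np

    n = 3
    rng = np.random.default_rng(0)
    A, B, C = rng.random((3, n, n))           # three arbitrary n x n real matrices
    lam, mu = 2.0, -1.5                       # arbitrary real scalars

    # (V, +) is an abelian group: associativity and commutativity
    assert np.allclose((A + B) + C, A + (B + C))
    assert np.allclose(A + B, B + A)
    # the zero matrix is the identity; -A is the additive inverse of A
    assert np.allclose(A + np.zeros((n, n)), A)
    assert np.allclose(A + (-A), np.zeros((n, n)))
    # the scalar multiplication axioms
    assert np.allclose(lam * (A + B), lam * A + lam * B)
    assert np.allclose((lam + mu) * A, lam * A + mu * A)
    assert np.allclose((lam * mu) * A, lam * (mu * A))
    assert np.allclose(1.0 * A, A)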
Basis of a vector space: Let V(F) be a vector space. A set S = {α₁, α₂, …, αₙ} of vectors of
V is called a basis for V if the following two conditions hold:
(i) S is linearly independent, i.e.,
$\lambda_1\alpha_1 + \lambda_2\alpha_2 + \dots + \lambda_n\alpha_n = 0 \Rightarrow \lambda_1 = 0,\ \lambda_2 = 0,\ \dots,\ \lambda_n = 0$, where $\lambda_1, \lambda_2, \dots, \lambda_n \in F$;
(ii) S spans V, i.e., every vector of V is a linear combination of the vectors of S.
Definition: The number of elements in a basis of a vector space is called the dimension of the
space. If a vector space has two finite bases, they contain the same number of elements.
Definition n-Dimensional Space: Consider an ordered set of n real variables
$(x^1, x^2, \dots, x^i, \dots, x^n)$. These variables are called co-ordinates. The space generated by all
points corresponding to different values of the co-ordinates is called n-dimensional space and
is denoted by $V_n$. A subspace $V_m$ (m < n) of $V_n$ is defined as the collection of points which
satisfy the n equations $x^i = x^i(u^1, u^2, \dots, u^m)$, (i = 1, 2, …, n).
The variables $u^1, u^2, \dots, u^m$ are the co-ordinates of $V_m$. The suffixes 1, 2, …, n serve as labels
only and do not possess any significance as power indices. Briefly,
$x^i = x^i(u)$, (i = 1, 2, …, n).
Definition Superscript and Subscript: The suffixes i and j in $A_j^i$ are called superscript
and subscript respectively. The upper position always denotes a superscript and the lower
position a subscript. We have seen that the suffix i in the co-ordinate $x^i$ does not have the
character of a power index. Powers will usually be denoted by brackets; thus $(x^i)^2$ means the
square of $x^i$.
Einstein's Summation Convention: Consider the sum $\sum_{i=1}^{n} a_i x^i$. The summation
convention means that we drop the sigma sign and adopt the convention
$$\sum_{i=1}^{n} a_i x^i = a_i x^i.$$
Hence by the summation convention we mean that if a suffix occurs twice in a term, once in the
lower position and once in the upper position, then that suffix implies a sum over the defined
range. If the range is not given, we assume that the range is from 1 to n.
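In code, the summation convention corresponds directly to NumPy's einsum, whose subscript
string names the indices just as in the text (a sketch with arbitrary sample values):

    import numpy as np

    n = 4
    a = np.arange(1.0, n + 1)                 # covariant components a_i
    x = np.arange(5.0, 5.0 + n)               # contravariant components x^i

    # a_i x^i : the repeated index i implies a sum from 1 to n
    s_convention = np.einsum('i,i->', a, x)
    s_explicit = sum(a[i] * x[i] for i in range(n))
    assert np.isclose(s_convention, s_explicit)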
Definition Dummy Suffix: If a suffix occurs twice in a term, once in the upper position and
once in the lower position, then that suffix is called a dummy suffix. For example, i is a dummy
suffix in $a_i^{\mu} x^i$.
Evidently, $a_i^{\mu} x^i = a_1^{\mu} x^1 + a_2^{\mu} x^2 + \dots + a_n^{\mu} x^n$ and
$a_j^{\mu} x^j = a_1^{\mu} x^1 + a_2^{\mu} x^2 + \dots + a_n^{\mu} x^n$.
The last two equations prove that $a_i^{\mu} x^i = a_j^{\mu} x^j$.
This shows that a dummy suffix can be replaced by another dummy suffix not already used in that
term. Also, two or more dummy suffixes can be interchanged. For example,
$$a_{\alpha\beta}\,\frac{\partial x^{\alpha}}{\partial x^{i'}}\,\frac{\partial x^{\beta}}{\partial x^{j'}} = a_{\beta\alpha}\,\frac{\partial x^{\beta}}{\partial x^{i'}}\,\frac{\partial x^{\alpha}}{\partial x^{j'}}.$$
Definition Real Suffix: A suffix which is not repeated is called a real or free suffix. For
example, $\mu$ is a real suffix in $a_i^{\mu} x^i$. A real suffix cannot be replaced by another real suffix:
$a_i^{\mu} x^i \neq a_i^{\nu} x^i$.
Kronecker Delta: It is denoted by the symbol $\delta_j^i$ and defined by
$$\delta_j^i = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases}$$
Properties of Kronecker Delta:
(1) $\dfrac{\partial x^i}{\partial x^j} = \delta_j^i$.
(2) $\delta_i^i = n$. By the summation convention,
$\delta_i^i = \delta_1^1 + \delta_2^2 + \delta_3^3 + \dots + \delta_n^n = 1 + 1 + 1 + \dots + 1 = n$.
(3) $\delta_j^1 A^{jk} = \delta_1^1 A^{1k} + \delta_2^1 A^{2k} + \delta_3^1 A^{3k} + \dots + \delta_n^1 A^{nk} = A^{1k} + 0 + 0 + \dots + 0 = A^{1k}$.
(4) $\delta_j^i\,\delta_k^j = \delta_k^i$. For
$\delta_j^i\,\delta_k^j = \delta_1^i\,\delta_k^1 + \delta_2^i\,\delta_k^2 + \delta_3^i\,\delta_k^3 + \dots + \delta_i^i\,\delta_k^i + \dots + \delta_n^i\,\delta_k^n = 0 + 0 + \dots + \delta_k^i + \dots + 0 = \delta_k^i$.
(5) $\delta_j^i\,\delta_i^j = n$. For
$\delta_j^i\,\delta_i^j = \delta_i^i = \delta_1^1 + \delta_2^2 + \dots + \delta_n^n = 1 + 1 + \dots + 1 = n$.
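The properties above are easy to verify numerically by identifying $\delta_j^i$ with the n × n
identity matrix (a sketch; A is an arbitrary sample array):

    import numpy as np

    n = 5
    delta = np.eye(n)                              # delta^i_j as the identity matrix
    A = np.random.default_rng(1).random((n, n))    # arbitrary A^{jk}

    assert np.isclose(np.trace(delta), n)                        # (2) delta^i_i = n
    assert np.allclose(np.einsum('j,jk->k', delta[0], A), A[0])  # (3) delta^1_j A^{jk} = A^{1k}
    assert np.allclose(delta @ delta, delta)                     # (4) delta^i_j delta^j_k = delta^i_k
    assert np.isclose(np.einsum('ij,ji->', delta, delta), n)     # (5) delta^i_j delta^j_i = n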
Determinant: Consider the determinant
$$a = \begin{vmatrix} a_1^1 & a_2^1 & \cdots & a_n^1 \\ a_1^2 & a_2^2 & \cdots & a_n^2 \\ \vdots & \vdots & & \vdots \\ a_1^n & a_2^n & \cdots & a_n^n \end{vmatrix}$$
Here $a_j^i$ may be taken as the general element of this determinant, which is frequently denoted
by $|a_j^i|$. The suffixes i and j denote the row and the column, respectively, to which the
element $a_j^i$ belongs. The cofactor of the element $a_j^i$ in the determinant is denoted by the
symbol $A_i^j$.
The determinant $|a_j^i|$ is said to be symmetric if $a_j^i = a_i^j$ for every i and j. The
determinant is said to be skew- (or anti-) symmetric if $a_j^i = -a_i^j$ for every i and j.
Differentiation of a Determinant: Let every element $a_j^i$ of the determinant $a = |a_j^i|$ be a
differentiable function of x. Differentiating column by column,
$$\frac{\partial a}{\partial x} = \begin{vmatrix} \frac{\partial a_1^1}{\partial x} & a_2^1 & \cdots & a_n^1 \\ \frac{\partial a_1^2}{\partial x} & a_2^2 & \cdots & a_n^2 \\ \vdots & \vdots & & \vdots \\ \frac{\partial a_1^n}{\partial x} & a_2^n & \cdots & a_n^n \end{vmatrix} + \begin{vmatrix} a_1^1 & \frac{\partial a_2^1}{\partial x} & \cdots & a_n^1 \\ \vdots & \vdots & & \vdots \\ a_1^n & \frac{\partial a_2^n}{\partial x} & \cdots & a_n^n \end{vmatrix} + \dots + \begin{vmatrix} a_1^1 & \cdots & \frac{\partial a_n^1}{\partial x} \\ \vdots & & \vdots \\ a_1^n & \cdots & \frac{\partial a_n^n}{\partial x} \end{vmatrix}.$$
Expanding the first determinant on the right by its first column,
$$\text{first determinant} = \frac{\partial a_1^1}{\partial x}A_1^1 + \frac{\partial a_1^2}{\partial x}A_2^1 + \dots + \frac{\partial a_1^n}{\partial x}A_n^1 = \frac{\partial a_1^i}{\partial x}A_i^1,$$
where $A_i^j$ is the cofactor of $a_j^i$ in the determinant $|a_j^i|$. Similarly,
$$\text{second determinant} = \frac{\partial a_2^1}{\partial x}A_1^2 + \frac{\partial a_2^2}{\partial x}A_2^2 + \dots + \frac{\partial a_2^n}{\partial x}A_n^2 = \frac{\partial a_2^i}{\partial x}A_i^2,$$
and the last determinant of the R.H.S. $= \dfrac{\partial a_n^i}{\partial x}A_i^n$. Finally,
$$\frac{\partial a}{\partial x} = \frac{\partial a_1^i}{\partial x}A_i^1 + \frac{\partial a_2^i}{\partial x}A_i^2 + \dots + \frac{\partial a_n^i}{\partial x}A_i^n = \frac{\partial a_j^i}{\partial x}A_i^j.$$
Thus $\dfrac{\partial a}{\partial x} = \dfrac{\partial a_j^i}{\partial x}A_i^j$. Similarly,
$\dfrac{\partial a}{\partial y} = \dfrac{\partial a_j^i}{\partial y}A_i^j$, $\dfrac{\partial a}{\partial z} = \dfrac{\partial a_j^i}{\partial z}A_i^j$, etc.
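This differentiation rule can be checked numerically against a finite difference. The sketch
below uses an arbitrary matrix-valued function M(x) of my own choosing, and the fact that the
cofactor matrix equals det(M)·(M⁻¹)ᵀ:

    import numpy as np

    def M(x):                                  # an arbitrary smooth matrix function of x
        return np.array([[x**2, 1.0, x],
                         [0.0,  x,   2.0],
                         [x,    3.0, x**3]])

    def dM(x):                                 # its elementwise derivative
        return np.array([[2*x, 0.0, 1.0],
                         [0.0, 1.0, 0.0],
                         [1.0, 0.0, 3*x**2]])

    x0, h = 1.3, 1e-6
    cof = np.linalg.det(M(x0)) * np.linalg.inv(M(x0)).T   # cofactor matrix
    jacobi = np.sum(cof * dM(x0))              # sum over i, j of cofactor_ij * da_ij/dx
    numeric = (np.linalg.det(M(x0 + h)) - np.linalg.det(M(x0 - h))) / (2*h)
    assert np.isclose(jacobi, numeric, rtol=1e-5)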
Linear Equations, Cramer's Rule: Consider the system of n linear equations
$a_j^i x^j = b^i$, i = 1, 2, …, n …(i), with determinant $a = |a_j^i|$. The cofactor of $a_j^i$ in the
determinant is denoted by $A_i^j$.
If a = 0, the transformation is said to be singular. If a ≠ 0, the transformation is said to be
non-singular.
The whole set of equations (i) is represented by the single equation $a_j^i x^j = b^i$.
Multiplying by $A_i^k$ and summing for integral values of i from 1 to n, we obtain
$$a_j^i x^j A_i^k = b^i A_i^k,$$
or $a\,\delta_j^k x^j = b^i A_i^k$ (using $a_j^i A_i^k = a\,\delta_j^k$),
or $a\,x^k = b^i A_i^k$,
or
$$x^k = \frac{b^i A_i^k}{a} \quad \text{if } a \neq 0.$$
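A sketch of this cofactor solution for an arbitrary sample system (np.linalg.solve is used only
as a cross-check, not as part of the method):

    import numpy as np

    a = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 4.0]])            # coefficients a^i_j (row i, column j)
    b = np.array([1.0, 2.0, 3.0])              # right-hand side b^i

    det = np.linalg.det(a)                     # must be non-zero (non-singular case)
    cof = det * np.linalg.inv(a).T             # cof[i, k] = cofactor A^k_i of a^i_k
    x = cof.T @ b / det                        # x^k = b^i A^k_i / a

    assert np.allclose(a @ x, b)
    assert np.allclose(x, np.linalg.solve(a, b))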
Functional Determinants: Suppose the n functions $z^i = z^i(y^k)$ are independent, so that
$\left|\dfrac{\partial z^i}{\partial y^k}\right| \neq 0$; in this case the n equations $z^i = z^i(y^k)$ are solvable for the z's in
terms of the y's. Similarly suppose that the n functions $y^i = y^i(x^1, x^2, \dots, x^n)$ are
independent functions, so that
$$\left|\frac{\partial y^i}{\partial x^k}\right| \neq 0.$$
Now we can write, by the chain rule,
$$\frac{\partial z^i}{\partial x^k} = \frac{\partial z^i}{\partial y^1}\frac{\partial y^1}{\partial x^k} + \frac{\partial z^i}{\partial y^2}\frac{\partial y^2}{\partial x^k} + \dots + \frac{\partial z^i}{\partial y^n}\frac{\partial y^n}{\partial x^k} = \frac{\partial z^i}{\partial y^j}\frac{\partial y^j}{\partial x^k}.$$
Taking determinants and using the multiplication rule for determinants,
$$\left|\frac{\partial z^i}{\partial x^k}\right| = \left|\frac{\partial z^i}{\partial y^j}\right| \cdot \left|\frac{\partial y^j}{\partial x^k}\right| \quad\dots\text{(i)}$$
connecting the functional determinants. Consider the particular case in which $z^i = x^i$. Now (i)
becomes
$$\left|\frac{\partial x^i}{\partial x^k}\right| = \left|\frac{\partial x^i}{\partial y^j}\right| \cdot \left|\frac{\partial y^j}{\partial x^k}\right|,$$
i.e.
$$1 = \left|\frac{\partial x^i}{\partial y^j}\right| \cdot \left|\frac{\partial y^j}{\partial x^k}\right| \quad\dots\text{(ii)}$$
for
$$\frac{\partial x^i}{\partial x^k} = \delta_k^i, \qquad \left|\delta_k^i\right| = \begin{vmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & & & \vdots \\ 0 & 0 & \cdots & 1 \end{vmatrix} = 1.$$
From (ii),
$$\left|\frac{\partial x^i}{\partial y^j}\right| = \frac{1}{\left|\dfrac{\partial y^i}{\partial x^j}\right|}.$$
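The multiplication rule for functional determinants can be illustrated with two concrete maps
(a sketch; the maps y(x) and z(y) below are arbitrary choices of mine):

    import numpy as np

    def Jy(x1, x2):                            # Jacobian of y = (x1*x2, x1 + x2**2)
        return np.array([[x2, x1],
                         [1.0, 2*x2]])

    def Jz(y1, y2):                            # Jacobian of z = (y1 + y2, y1*y2)
        return np.array([[1.0, 1.0],
                         [y2, y1]])

    x1, x2 = 0.7, 1.9
    y1, y2 = x1*x2, x1 + x2**2
    J_comp = Jz(y1, y2) @ Jy(x1, x2)           # chain rule: dz/dx = (dz/dy)(dy/dx)
    assert np.isclose(np.linalg.det(J_comp),
                      np.linalg.det(Jz(y1, y2)) * np.linalg.det(Jy(x1, x2)))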
Tangent Vector, Contravariant Vector: Let V(R) be a vector space and let m ∈ V. We denote by
$C^\infty(m)$ the set of all differentiable functions in the neighbourhood of m. A tangent vector λ
at m is a function on $C^\infty(m)$, mapping functions of $C^\infty(m)$ into real numbers, satisfying the
following axioms (for f, g ∈ $C^\infty(m)$ and a ∈ R):
(i) λ(f + g) = λ(f) + λ(g);
(ii) λ(fg) = f(m) λ(g) + g(m) λ(f);
(iii) λ(af) = a λ(f), a ∈ R.
Let $M_m$ be the set of all tangent vectors at m. We define vector addition and scalar
multiplication on $M_m$ as follows: for all λ, μ ∈ $M_m$, a ∈ R, f ∈ $C^\infty(m)$,
(λ + μ)(f) = λ(f) + μ(f),  (aλ)(f) = a λ(f).
It can be shown that $M_m(R)$ is a vector space under the above vector addition and scalar
multiplication. We call the vectors of $M_m$ contravariant vectors.
Let {e₁, e₂, …, eₙ} be a basis for $M_m$. If λ ∈ $M_m$, we can write
$$\lambda = \sum_{i=1}^{n} \lambda^i e_i \quad\dots\text{(ii)}$$
The repeated index i is called a dummy index. In tensor notation we do not put the summation
sign and simply write (ii) as λ = λⁱeᵢ.
Since $\lambda = \sum_{i=1}^n \lambda^i e_i = \sum_{j=1}^n \lambda^j e_j = \sum_{k=1}^n \lambda^k e_k = \dots$, the dummy index may be relabelled freely.
In what follows we denote the basis (e₁, e₂, …, eₙ) simply by $(e_i)$, and call $\lambda^i$ the
components of the contravariant vector λ relative to the basis $(e_i)$ of $M_m$.
Let $(e_{1'}, e_{2'}, \dots, e_{n'})$, or simply $(e_{i'})$, be another basis for $M_m$. We can write λ as a linear
combination of $(e_{i'})$. Expressing each old basis vector in terms of the new,
$$e_1 = p_1^{1'} e_{1'} + p_1^{2'} e_{2'} + \dots + p_1^{n'} e_{n'}$$
$$\vdots$$
$$e_n = p_n^{1'} e_{1'} + p_n^{2'} e_{2'} + \dots + p_n^{n'} e_{n'}$$
where the $p_i^{i'}$ are scalars of R for 1 ≤ i, i' ≤ n. In matrix notation we can write
$(e_i) = (p_i^{i'})(e_{i'})$, where $(e_i)$, $(e_{i'})$ are column matrices of order n × 1 and $(p_i^{i'})$ is the n × n
transformation matrix. We write this transformation equation more precisely as
$e_i = p_i^{i'} e_{i'}$ …(v).
Substituting (v) into $\lambda = \lambda^i e_i = \lambda^{i'} e_{i'}$ gives $(\lambda^{i'} - p_i^{i'}\lambda^i)\,e_{i'} = 0$. Thus a linear
combination of the basis vectors $e_{i'}$ is the zero vector. Since the basis $(e_{i'})$ is linearly
independent,
$$\lambda^{i'} - p_i^{i'}\lambda^i = 0, \quad\text{or}\quad \lambda^{i'} = p_i^{i'}\lambda^i.$$
This is the law of transformation of the components of a contravariant vector.
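Numerically, storing the transformation matrix as P[i', i] makes the law a single matrix-vector
product (a sketch; P is an arbitrary matrix assumed non-singular):

    import numpy as np

    rng = np.random.default_rng(2)
    P = rng.random((3, 3)) + np.eye(3)         # p_i^{i'}, stored as P[i', i]; assumed non-singular
    lam = np.array([1.0, -2.0, 0.5])           # components lambda^i in the old basis

    lam_new = P @ lam                          # lambda^{i'} = p_i^{i'} lambda^i
    assert np.allclose(np.linalg.inv(P) @ lam_new, lam)   # the inverse law recovers lambda^i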
Theorem 1: To show that there is no distinction between contravariant and covariant vectors
when we restrict ourselves to rectangular Cartesian transformations of co-ordinates.
Proof: Let (x, y) be the co-ordinates of a point P with respect to orthogonal Cartesian axes X
and Y, and let (x', y') be the co-ordinates of the same point P relative to orthogonal Cartesian
axes X' and Y'. Let (l₁, m₁) and (l₂, m₂) be the direction cosines of the axes X' and Y'
respectively. Then we have the relations
$$x' = l_1 x + m_1 y, \qquad y' = l_2 x + m_2 y \quad\dots\text{(i)}$$
$$x = l_1 x' + l_2 y', \qquad y = m_1 x' + m_2 y' \quad\dots\text{(ii)}$$
Let $x^1 = x$, $x^2 = y$. For a contravariant vector $A^i$,
$$A^{i'} = \frac{\partial x^{i'}}{\partial x^a}A^a = \frac{\partial x^{i'}}{\partial x^1}A^1 + \frac{\partial x^{i'}}{\partial x^2}A^2.$$
$$\therefore\quad A^{1'} = \frac{\partial x^{1'}}{\partial x^1}A^1 + \frac{\partial x^{1'}}{\partial x^2}A^2, \qquad A^{2'} = \frac{\partial x^{2'}}{\partial x^1}A^1 + \frac{\partial x^{2'}}{\partial x^2}A^2,$$
i.e., using (i),
$$A^{1'} = \frac{\partial x'}{\partial x}A^1 + \frac{\partial x'}{\partial y}A^2 = A^1 l_1 + A^2 m_1, \qquad A^{2'} = \frac{\partial y'}{\partial x}A^1 + \frac{\partial y'}{\partial y}A^2 = A^1 l_2 + A^2 m_2.$$
For a covariant vector $A_i$,
$$A_{i'} = \frac{\partial x^\alpha}{\partial x^{i'}}A_\alpha = \frac{\partial x^1}{\partial x^{i'}}A_1 + \frac{\partial x^2}{\partial x^{i'}}A_2.$$
$$\therefore\quad A_{1'} = \frac{\partial x^1}{\partial x^{1'}}A_1 + \frac{\partial x^2}{\partial x^{1'}}A_2, \qquad A_{2'} = \frac{\partial x^1}{\partial x^{2'}}A_1 + \frac{\partial x^2}{\partial x^{2'}}A_2,$$
i.e., using (ii),
$$A_{1'} = \frac{\partial x}{\partial x'}A_1 + \frac{\partial y}{\partial x'}A_2 = A_1 l_1 + A_2 m_1, \qquad A_{2'} = \frac{\partial x}{\partial y'}A_1 + \frac{\partial y}{\partial y'}A_2 = A_1 l_2 + A_2 m_2.$$
Thus the components of contravariant and covariant vectors transform by exactly the same
equations under rectangular Cartesian transformations, so no distinction arises between the two
kinds of vectors in this case.
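Theorem 1 can also be checked numerically: for a rotation, the matrix governing the contravariant
law (∂x'/∂x) and the one governing the covariant law ((∂x/∂x')ᵀ) coincide, because a rotation
matrix is orthogonal (a sketch with an arbitrary angle):

    import numpy as np

    t = 0.6                                    # arbitrary rotation angle
    R = np.array([[np.cos(t), np.sin(t)],      # rows are (l1, m1) and (l2, m2)
                  [-np.sin(t), np.cos(t)]])
    A = np.array([3.0, -1.0])                  # sample vector components

    contra = R @ A                             # A^{i'} = (dx^{i'}/dx^a) A^a
    co = np.linalg.inv(R).T @ A                # A_{i'} = (dx^a/dx^{i'}) A_a
    assert np.allclose(contra, co)             # identical for orthogonal R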
Theorem 2: To prove that the transformations of a contravariant vector form a group.
Proof: Let $A^i$ be a contravariant vector. Consider the co-ordinate transformations
$x^{i'} = x^{i'}(x^k)$, $x^{i''} = x^{i''}(x^{k'})$, that is,
$$x^i \to x^{i'} \to x^{i''}, \qquad A^i \to A^{i'} \to A^{i''}.$$
Then
$$A^{\alpha'} = \frac{\partial x^{\alpha'}}{\partial x^p}A^p \quad\dots\text{(i)}, \qquad A^{i''} = \frac{\partial x^{i''}}{\partial x^{\alpha'}}A^{\alpha'} \quad\dots\text{(ii)}.$$
Substituting (i) in (ii),
$$A^{i''} = \frac{\partial x^{i''}}{\partial x^{\alpha'}}\frac{\partial x^{\alpha'}}{\partial x^p}A^p = \frac{\partial x^{i''}}{\partial x^p}A^p \quad\dots\text{(iii)}.$$
This proves that composing transformation (i) with (ii) yields the same law of transformation
(iii). This property is expressed by saying that the transformations of a contravariant vector
form a group.
Covariant Vector: A real-valued linear function μ on $M_m$ is called a covariant vector at m; the
set of all covariant vectors at m is denoted $M_m^\star$, the dual space of $M_m$. We take n covariant
vectors $e^i \in M_m^\star$ defined by $e^i(\lambda) = \lambda^i$, so that $e^i(e_j) = \delta_j^i$. For any $\mu \in M_m^\star$ and
$\lambda = \lambda^i e_i$, putting $\mu_i = \mu(e_i)$ gives
$$\mu(\lambda) = \mu_i\, e^i(\lambda), \qquad \therefore\ \mu = \mu_i e^i = \mu_1 e^1 + \mu_2 e^2 + \dots + \mu_n e^n.$$
So an arbitrary covariant vector $\mu \in M_m^\star$ can be written as a linear combination of $(e^i)$;
hence $(e^i)$ spans $M_m^\star(R)$.
To show linear independence, take scalars $x_i$ such that
$$x_i e^i = 0 \quad\dots\text{(iv)}$$
Applying both sides to an arbitrary λ, $x_i \lambda^i = 0$, or
$x_1\lambda^1 + x_2\lambda^2 + \dots + x_n\lambda^n = 0$ for arbitrary $\lambda^1, \lambda^2, \dots, \lambda^n$; hence
$x_1 = x_2 = \dots = x_n = 0$. So $x_i e^i = 0 \Rightarrow x_i = 0$, i = 1, 2, …, n, and the set $(e^i)$ is
linearly independent. Hence $(e^i)$ is a basis (the dual basis) for $M_m^\star(R)$, and
$\dim M_m^\star(R) = n = \dim M_m(R)$.
Let $(e^{i'})$ be the dual basis induced by another basis $(e_{i'})$ of $M_m$, and write $e^{i'} = r_i^{i'} e^i$.
Then
$$\delta_{j'}^{i'} = e^{i'}(e_{j'}) = r_i^{i'}\, e^i\!\left(p_{j'}^j e_j\right) = r_i^{i'} p_{j'}^j\, e^i(e_j),$$
or $r_i^{i'} p_{j'}^i = \delta_{j'}^{i'}$. Multiplying by $p_k^{j'}$,
$$r_i^{i'} p_{j'}^i p_k^{j'} = \delta_{j'}^{i'} p_k^{j'}, \quad\text{or}\quad \delta_k^i\, r_i^{i'} = \delta_{j'}^{i'} p_k^{j'}, \quad\text{or}\quad r_k^{i'} = p_k^{i'}.$$
Thus the dual basis transforms with the same coefficients $p_k^{i'}$.
By the substitution property of the Kronecker delta, $\delta_j^i A_i B^j = A_j B^j$: the δ replaces the
dummy index i by j. Since j is again a dummy index, the summation runs over j from 1 to n. Hence
$$\delta_j^i A_i B^j = A_1 B^1 + A_2 B^2 + \dots + A_n B^n.$$
EXAMPLE III: Show that in 3-dimensional space with orthogonal Cartesian co-ordinates
$x^1, x^2, x^3$, the equation $\delta_{ij} x^i x^j = a^2$ represents a sphere.
Solution: In $\delta_{ij} x^i x^j$ both indices i and j are dummy, so we can write the given equation as
$$\delta_{11}x^1x^1 + \delta_{12}x^1x^2 + \delta_{13}x^1x^3 + \delta_{21}x^2x^1 + \delta_{22}x^2x^2 + \delta_{23}x^2x^3 + \delta_{31}x^3x^1 + \delta_{32}x^3x^2 + \delta_{33}x^3x^3 = a^2.$$
Since $\delta_{11} = \delta_{22} = \delta_{33} = 1$ and $\delta_{ij} = 0$ for i ≠ j, the equation takes the form
$$x^1x^1 + x^2x^2 + x^3x^3 = a^2,$$
which is a sphere of radius a centred at the origin.
The transformations of a covariant vector likewise compose. Consider co-ordinate transformations
$$x^i \to x^{i'} \to x^{i''}, \qquad A_i \to A_{i'} \to A_{i''}.$$
Then
$$A_{\alpha'} = \frac{\partial x^p}{\partial x^{\alpha'}}A_p \quad\dots\text{(i)}, \qquad A_{i''} = \frac{\partial x^{\alpha'}}{\partial x^{i''}}A_{\alpha'} \quad\dots\text{(ii)}.$$
Substituting (i) in (ii),
$$A_{i''} = \frac{\partial x^p}{\partial x^{\alpha'}}\frac{\partial x^{\alpha'}}{\partial x^{i''}}A_p = \frac{\partial x^p}{\partial x^{i''}}A_p \quad\dots\text{(iii)}.$$
This proves that composing transformation (i) with (ii) yields the same law of transformation
(iii). This property is expressed by saying that the transformations of a covariant vector are
transitive, i.e., they form a group.
Tensor Product of Two Vector Spaces: As stated earlier, $M_m(R)$ is the space of contravariant
vectors and $M_m^\star(R)$ that of covariant vectors. Let us now consider the Cartesian product set
$M_m^\star \times M_m^\star$ and let T be a function $T : M_m^\star \times M_m^\star \to R$ such that for all
$\alpha, \beta, \gamma \in M_m^\star$ and $a, b \in R$,
$$T(a\alpha + b\beta,\ \gamma) = a\,T(\alpha, \gamma) + b\,T(\beta, \gamma), \qquad T(\gamma,\ a\alpha + b\beta) = a\,T(\gamma, \alpha) + b\,T(\gamma, \beta).$$
Then T is called a real-valued bilinear function on $M_m^\star \times M_m^\star$. We denote the set of all
such real-valued bilinear functions by $M_m \otimes M_m$.
For all $S, T \in M_m \otimes M_m$ and $\alpha, \beta \in M_m^\star$, define $(S + T)(\alpha, \beta) = S(\alpha, \beta) + T(\alpha, \beta)$;
for all $a \in R$ and $T \in M_m \otimes M_m$, define $(aT)(\alpha, \beta) = a\,T(\alpha, \beta)$.
It can easily be shown that $M_m \otimes M_m$ forms a vector space over R under the vector addition and
scalar multiplication defined above. We call $(M_m \otimes M_m)(R)$ the tensor product of $M_m$ with $M_m$.
The basis $(e_i)$ for $M_m$ induces the dual basis $(e^i)$ for $M_m^\star$. If $\alpha, \beta \in M_m^\star$, we can write
$\alpha = \alpha_i e^i$, $\beta = \beta_j e^j$, where $\alpha_i$ and $\beta_j$ are real numbers, the components of the covariant
vectors α and β respectively. As $T \in M_m \otimes M_m$ is bilinear,
$$T(\alpha, \beta) = T(\alpha_i e^i,\ \beta_j e^j) = \alpha_i \beta_j\, T(e^i, e^j).$$
If we put $T^{ij} = T(e^i, e^j)$ and define $e_{ij} \in M_m \otimes M_m$ by $e_{ij}(\alpha, \beta) = \alpha_i \beta_j$ (so that
$e_{ij}(e^k, e^l) = \delta_i^k \delta_j^l$), then $T(\alpha, \beta) = T^{ij} e_{ij}(\alpha, \beta)$, i.e., $T = T^{ij} e_{ij}$; hence $(e_{ij})$ spans
$M_m \otimes M_m$.
To show that the set $(e_{ij})$ is linearly independent, take $n^2$ scalars $x^{ij} \in R$ such that
$x^{ij} e_{ij} = 0$. Applying both sides to $(e^k, e^l)$ gives $x^{ij}\,\delta_i^k \delta_j^l = 0$, or $x^{kl} = 0$.
Thus $(e_{ij})$ forms a basis for $M_m \otimes M_m$, and consequently the dimension of $M_m \otimes M_m$ is $n^2$.
We also denote $M_m \otimes M_m$ by $(M_m)^2$.
Tensors of different orders are elements of the vector spaces formed by repeated tensor products
of $M_m$ and $M_m^\star$. An element of $M_m \otimes M_m$, or $(M_m)^2$, is called a contravariant tensor of
second order. Elements of $M_m^\star \otimes M_m^\star$ are called covariant tensors of rank 2, and those of
$M_m^\star \otimes \dots \otimes M_m^\star$ (s copies), or $(M_m^\star)^s$, are covariant tensors of rank s. Elements of
$M_m \otimes \dots \otimes M_m$ (r copies) $\otimes\ M_m^\star \otimes \dots \otimes M_m^\star$ (s copies), or $(M_m)^r \otimes (M_m^\star)^s$, are
called mixed tensors of type (r, s).
As we have seen in the previous sections, $M_m(R)$ is the space of contravariant vectors and
$M_m^\star(R)$ that of covariant vectors.
The basis $(e_i)$ for $M_m$ induces a basis $(e_{i_1 i_2})$ for $(M_m)^2$. If $T \in (M_m)^2$ we can write
$$T = T^{i_1 i_2}\, e_{i_1 i_2} \quad\dots\text{(i)}$$
The right side of equation (i) is in fact the sum of $n^2$ terms,
$\sum_{i_1=1}^{n}\sum_{i_2=1}^{n} T^{i_1 i_2} e_{i_1 i_2}$; $i_1$ and $i_2$ are both dummy indices.
Let $(e_{i'})$ be another basis for $M_m$. It induces a unique basis $(e_{i_1' i_2'})$ for $(M_m)^2$, and we can
express T as
$$T = T^{i_1' i_2'}\, e_{i_1' i_2'} \quad\dots\text{(ii)}$$
We can express one set of basis vectors as linear combinations of the other; from
$e_i = p_i^{i'} e_{i'}$ …(iii) it follows that
$$e_{i_1 i_2} = p_{i_1}^{i_1'} p_{i_2}^{i_2'}\, e_{i_1' i_2'} \quad\dots\text{(iv)}$$
Substituting (iv) in (i) and comparing with (ii),
$$T^{i_1' i_2'}\, e_{i_1' i_2'} = p_{i_1}^{i_1'} p_{i_2}^{i_2'}\, T^{i_1 i_2}\, e_{i_1' i_2'}, \quad\text{or}\quad \left(T^{i_1' i_2'} - p_{i_1}^{i_1'} p_{i_2}^{i_2'} T^{i_1 i_2}\right) e_{i_1' i_2'} = 0.$$
Since the basis vectors $(e_{i_1' i_2'})$ are linearly independent, their scalar coefficients must
vanish. Hence
$$T^{i_1' i_2'} - p_{i_1}^{i_1'} p_{i_2}^{i_2'} T^{i_1 i_2} = 0, \quad\text{or}\quad T^{i_1' i_2'} = p_{i_1}^{i_1'} p_{i_2}^{i_2'} T^{i_1 i_2} \quad\dots\text{(v)}$$
Equation (v) is the law of transformation of the components of a contravariant tensor of rank 2.
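A sketch of law (v) with einsum, using an arbitrary non-singular transformation matrix stored as
P[i', i]; for a rank-2 contravariant tensor the law is equivalent to P T Pᵀ:

    import numpy as np

    rng = np.random.default_rng(3)
    P = rng.random((3, 3)) + 2*np.eye(3)       # p_i^{i'}; rows i', columns i
    T = rng.random((3, 3))                     # components T^{i1 i2}

    # T^{i1' i2'} = p_{i1}^{i1'} p_{i2}^{i2'} T^{i1 i2}
    T_new = np.einsum('ai,bj,ij->ab', P, P, T)
    assert np.allclose(T_new, P @ T @ P.T)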
The dual basis $(e^i)$ for $M_m^\star$ induces a unique basis $(e^{j_1 j_2})$ for $(M_m^\star)^2$. If
$T \in (M_m^\star)^2$, i.e., T is a covariant tensor of rank 2, we can write
$$T = T_{j_1 j_2}\, e^{j_1 j_2} \quad\dots\text{(i)}$$
The other basis $(e^{i'})$ for $M_m^\star$ induces a unique basis $(e^{j_1' j_2'})$ for $(M_m^\star)^2$, and
$$T = T_{j_1' j_2'}\, e^{j_1' j_2'} \quad\dots\text{(ii)}$$
Now
$$e^{j_1 j_2} = p_{j_1'}^{j_1} p_{j_2'}^{j_2}\, e^{j_1' j_2'} \quad\dots\text{(iii)}$$
Substituting (iii) in (i) and comparing with (ii),
$$T_{j_1' j_2'}\, e^{j_1' j_2'} = p_{j_1'}^{j_1} p_{j_2'}^{j_2}\, T_{j_1 j_2}\, e^{j_1' j_2'}, \quad\text{or}\quad \left(T_{j_1' j_2'} - p_{j_1'}^{j_1} p_{j_2'}^{j_2} T_{j_1 j_2}\right) e^{j_1' j_2'} = 0,$$
whence
$$T_{j_1' j_2'} = p_{j_1'}^{j_1} p_{j_2'}^{j_2}\, T_{j_1 j_2} \quad\dots\text{(iv)}$$
In a similar manner we can obtain the law of transformation of a tensor of type (1, 1) in the
form
$$T_{j'}^{i'} = p_i^{i'} p_{j'}^j\, T_j^i \quad\dots\text{(1)}$$
The basis $(e_i)$ for $M_m$ induces a unique basis $(e_{i_1 i_2 \dots i_r})$ for $(M_m)^r$. If $T \in (M_m)^r$,
i.e., T is a contravariant tensor of rank r, we can write, analogously to the above,
$$T = T^{i_1 i_2 \dots i_r}\, e_{i_1 i_2 \dots i_r} \quad\dots\text{(ii)}$$
Similarly, if $(e_{i'})$ is another basis for $M_m$, it induces a unique basis $(e_{i_1' i_2' \dots i_r'})$ for
$(M_m)^r$, and consequently
$$T = T^{i_1' i_2' \dots i_r'}\, e_{i_1' i_2' \dots i_r'} \quad\dots\text{(iii)}, \qquad e_{i_1 i_2 \dots i_r} = p_{i_1}^{i_1'} p_{i_2}^{i_2'} \cdots p_{i_r}^{i_r'}\, e_{i_1' i_2' \dots i_r'} \quad\dots\text{(iv)}$$
Thus, from equations (ii), (iii) and (iv), after a small calculation,
$$\left(T^{i_1' i_2' \dots i_r'} - p_{i_1}^{i_1'} p_{i_2}^{i_2'} \cdots p_{i_r}^{i_r'} T^{i_1 i_2 \dots i_r}\right) e_{i_1' i_2' \dots i_r'} = 0,$$
$$\text{or}\quad T^{i_1' i_2' \dots i_r'} = p_{i_1}^{i_1'} p_{i_2}^{i_2'} \cdots p_{i_r}^{i_r'}\, T^{i_1 i_2 \dots i_r} \quad\dots\text{(v)}$$
The law of transformation of a covariant tensor of rank s is obtained analogously in the form
$$T_{j_1' j_2' \dots j_s'} = p_{j_1'}^{j_1} p_{j_2'}^{j_2} \cdots p_{j_s'}^{j_s}\, T_{j_1 j_2 \dots j_s} \quad\dots\text{(i)}$$
For a general mixed tensor, we observe that $(e_i)$ induces the basis
$\left(e_{i_1 i_2 \dots i_r}^{j_1 j_2 \dots j_s}\right)$ for $(M_m)^r \otimes (M_m^\star)^s$. If T is an element of this tensor
product space, we can write
$$T = T_{j_1 j_2 \dots j_s}^{i_1 i_2 \dots i_r}\, e_{i_1 i_2 \dots i_r}^{j_1 j_2 \dots j_s} \quad\dots\text{(ii)}, \qquad T = T_{j_1' j_2' \dots j_s'}^{i_1' i_2' \dots i_r'}\, e_{i_1' i_2' \dots i_r'}^{j_1' j_2' \dots j_s'} \quad\dots\text{(iii)}$$
Also
$$e_{i_1 i_2 \dots i_r}^{j_1 j_2 \dots j_s} = p_{i_1}^{i_1'} p_{i_2}^{i_2'} \cdots p_{i_r}^{i_r'}\; p_{j_1'}^{j_1} p_{j_2'}^{j_2} \cdots p_{j_s'}^{j_s}\, e_{i_1' i_2' \dots i_r'}^{j_1' j_2' \dots j_s'} \quad\dots\text{(iv)}$$
Making use of equations (ii), (iii) and (iv), and in a way similar to the previous cases, we can
show:
$$T_{j_1' j_2' \dots j_s'}^{i_1' i_2' \dots i_r'} = p_{i_1}^{i_1'} p_{i_2}^{i_2'} \cdots p_{i_r}^{i_r'}\; p_{j_1'}^{j_1} p_{j_2'}^{j_2} \cdots p_{j_s'}^{j_s}\; T_{j_1 j_2 \dots j_s}^{i_1 i_2 \dots i_r} \quad\dots\text{(v)}$$
This is the law of transformation of the components of a mixed tensor of type (r, s).
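For a mixed (1, 1) tensor the upper index carries a factor $p_i^{i'}$ and the lower index an inverse
factor $p_{j'}^j$, so the law reduces to a similarity transformation (a sketch with arbitrary
sample values):

    import numpy as np

    rng = np.random.default_rng(4)
    P = rng.random((3, 3)) + 2*np.eye(3)       # p_i^{i'}, assumed non-singular
    Q = np.linalg.inv(P)                       # p_{j'}^j, the inverse factors
    T = rng.random((3, 3))                     # components T^i_j

    # T^{i'}_{j'} = p_i^{i'} p_{j'}^j T^i_j
    T_new = np.einsum('ai,jb,ij->ab', P, Q, T)
    assert np.allclose(T_new, P @ T @ Q)       # a similarity transformation
    assert np.isclose(np.trace(T_new), np.trace(T))   # the trace (full contraction) is invariant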
EXAMPLE IV: Show that the Kronecker delta is a mixed tensor of type (1, 1).
Solution: Relative to the basis $(e_i)$ for $M_m$, the components $\delta_j^i$ of the Kronecker delta are
given by $\delta_j^i = e^i(e_j)$ …(i). Relative to another basis,
$$\delta_{j'}^{i'} = e^{i'}(e_{j'}) = p_i^{i'}\, e^i\!\left(p_{j'}^j e_j\right) = p_i^{i'} p_{j'}^j\, e^i(e_j),$$
$$\text{or}\quad \delta_{j'}^{i'} = p_i^{i'} p_{j'}^j\, \delta_j^i \quad\dots\text{(ii)}$$
But (ii) is the law of transformation of a mixed tensor of type (1, 1). So $\delta_j^i$ is a (1, 1)
tensor.
EXAMPLE V: Show that the products $A^i B_j$ of the components of a contravariant and a covariant
vector form a mixed tensor of type (1, 1).
Solution: Let $(e_{i'})$ be another basis for $M_m$ relative to which the components of the vectors
are $A^{i'}$ and $B_{j'}$ respectively. Obviously
$$A^{i'} = p_i^{i'} A^i, \qquad B_{j'} = p_{j'}^j B_j.$$
Multiplying,
$$A^{i'} B_{j'} = p_i^{i'} p_{j'}^j\, A^i B_j,$$
which is the law of transformation of a mixed tensor of type (1, 1).
Contraction: Let T be a mixed tensor of type (r, s) with components
$T_{j_1 j_2 \dots j_s}^{i_1 i_2 \dots i_r}$. Put one upper index equal to one lower index (say $i_p = j_q$) and sum; the
resulting quantities are
$$T_{j_1 \dots j_{q-1}\, i_p\, j_{q+1} \dots j_s}^{i_1 \dots i_{p-1}\, i_p\, i_{p+1} \dots i_r}.$$
Since T is a tensor of type (r, s), if $(e_{i'})$ is another basis for $M_m$ relative to which the
components are $T_{j_1' \dots j_s'}^{i_1' \dots i_r'}$, we have
$$T_{j_1' \dots j_{q-1}'\, j_q'\, j_{q+1}' \dots j_s'}^{i_1' \dots i_{p-1}'\, i_p'\, i_{p+1}' \dots i_r'} = p_{i_1}^{i_1'} \cdots p_{i_{p-1}}^{i_{p-1}'}\, p_{i_p}^{i_p'}\, p_{i_{p+1}}^{i_{p+1}'} \cdots p_{i_r}^{i_r'}\; p_{j_1'}^{j_1} \cdots p_{j_{q-1}'}^{j_{q-1}}\, p_{j_q'}^{j_q}\, p_{j_{q+1}'}^{j_{q+1}} \cdots p_{j_s'}^{j_s}\; T_{j_1 \dots j_s}^{i_1 \dots i_r}.$$
Now put $j_q' = i_p'$ and sum. The two factors carrying these indices combine as
$$p_{i_p}^{i_p'}\, p_{i_p'}^{j_q} = \delta_{i_p}^{j_q},$$
and $\delta_{i_p}^{j_q}\, T_{\dots j_q \dots}^{\dots i_p \dots} = T_{\dots i_p \dots}^{\dots i_p \dots}$ (summed over $i_p$). Hence
$$T_{j_1' \dots j_{q-1}'\, k'\, j_{q+1}' \dots j_s'}^{i_1' \dots i_{p-1}'\, k'\, i_{p+1}' \dots i_r'} = p_{i_1}^{i_1'} \cdots p_{i_{p-1}}^{i_{p-1}'}\, p_{i_{p+1}}^{i_{p+1}'} \cdots p_{i_r}^{i_r'}\; p_{j_1'}^{j_1} \cdots p_{j_{q-1}'}^{j_{q-1}}\, p_{j_{q+1}'}^{j_{q+1}} \cdots p_{j_s'}^{j_s}\; T_{j_1 \dots j_{q-1}\, k\, j_{q+1} \dots j_s}^{i_1 \dots i_{p-1}\, k\, i_{p+1} \dots i_r}.$$
Thus the contracted quantities transform with one transformation factor fewer of each kind, i.e.
as the components of a mixed tensor of type (r - 1, s - 1). Contraction therefore reduces a
tensor of type (r, s) to a tensor of type (r - 1, s - 1).
EXAMPLE VI: Show that in a tensor of type (2, 1) with components $A_k^{ij}$, contraction reduces
it to a contravariant vector.
Solution: If $A_{k'}^{i'j'}$ are the components relative to the basis $(e_{i'})$ for $M_m$, we have
$$A_{k'}^{i'j'} = p_i^{i'} p_j^{j'} p_{k'}^k\, A_k^{ij} \quad\dots\text{(i)}$$
Putting k' = i' and summing,
$$A_{i'}^{i'j'} = p_i^{i'} p_j^{j'} p_{i'}^k\, A_k^{ij} = \left(p_i^{i'} p_{i'}^k\right) p_j^{j'}\, A_k^{ij} = \delta_i^k\, p_j^{j'}\, A_k^{ij},$$
$$\text{or}\quad A_{i'}^{i'j'} = p_j^{j'}\, A_i^{ij}.$$
So the $A_i^{ij}$ satisfy the law of transformation of the components of a contravariant vector.
Hence the $A_i^{ij}$ are the components of a contravariant vector.
EXAMPLE VII: Show that one application of contraction reduces the order of a tensor by two.
Solution: Consider a tensor $A_{lmn}^{ij}$ of type (2, 3). By the law of transformation,
$$A_{l'm'n'}^{i'j'} = p_k^{i'} p_t^{j'}\, p_{l'}^{\mu} p_{m'}^{v} p_{n'}^{\lambda}\, A_{\mu v \lambda}^{kt}.$$
Let j' = n'. Then
$$A_{l'm'j'}^{i'j'} = p_k^{i'}\, p_{l'}^{\mu} p_{m'}^{v} \left(p_t^{j'} p_{j'}^{\lambda}\right) A_{\mu v \lambda}^{kt} = p_k^{i'}\, p_{l'}^{\mu} p_{m'}^{v}\, \delta_t^{\lambda}\, A_{\mu v \lambda}^{kt} = p_k^{i'}\, p_{l'}^{\mu} p_{m'}^{v}\, A_{\mu v t}^{kt}.$$
This is the law of transformation of a tensor of type (1, 2); therefore contraction reduces the
order of the tensor by two.
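Example VII in code: contracting one upper with one lower index of a type (2, 3) tensor yields a
type (1, 2) tensor, dropping the order from 5 to 3 (a sketch with arbitrary components):

    import numpy as np

    rng = np.random.default_rng(5)
    A = rng.random((2, 2, 2, 2, 2))            # A^{ij}_{lmn}, a type (2, 3) tensor

    # set j = n and sum: B^i_{lm} = A^{ij}_{lmj}
    B = np.einsum('ijlmj->ilm', A)
    assert B.shape == (2, 2, 2)                # order reduced by two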
Inner Product: The product of a contravariant tensor with a covariant tensor with at least one
index in common is called their inner product.
For example, $A^{ij}B_{jk}$, $A^{ij}B_{ij}$, etc. are inner products of the tensors A and B. The products of
the components of A and B with all indices distinct are called outer products; $A^{ij}B_{kl}$, $A^iB_j$,
etc. are outer products.
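Inner and outer products in einsum notation (a sketch; A and B are arbitrary sample arrays):

    import numpy as np

    rng = np.random.default_rng(6)
    A = rng.random((3, 3))                     # A^{ij}
    B = rng.random((3, 3))                     # B_{jk}

    inner = np.einsum('ij,jk->ik', A, B)       # A^{ij} B_{jk}: one common index summed
    full_inner = np.einsum('ij,ij->', A, B)    # A^{ij} B_{ij}: both indices summed, a scalar
    outer = np.einsum('ij,kl->ijkl', A, B)     # A^{ij} B_{kl}: all indices distinct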
EXAMPLE VIII: Show that the inner product $T^{ij} S_{jh}$ of tensors $T^{ij}$ and $S_{jh}$ is a mixed tensor
of order two.
Solution: As $T^{ij}$ are the components of a contravariant tensor of second order, with respect to
the bases $(e_i)$ and $(e_{i'})$ of $M_m$ the law of transformation is
$$T^{i'j'} = p_i^{i'} p_j^{j'}\, T^{ij} \quad\dots\text{(i)}$$
Similarly, $S_{j'h'} = p_{j'}^l p_{h'}^h\, S_{lh}$ …(ii). Multiplying (i) and (ii),
$$T^{i'j'} S_{j'h'} = p_i^{i'} \left(p_j^{j'} p_{j'}^l\right) p_{h'}^h\, T^{ij} S_{lh}.$$
Since $p_j^{j'} p_{j'}^l = \delta_j^l$, we get $T^{i'j'} S_{j'h'} = p_i^{i'} p_{h'}^h\, T^{ij} S_{jh}$.
Thus the quantities $T^{ij} S_{jh}$ satisfy the law of transformation of a mixed tensor of type
(1, 1), i.e. of order two. So they form a mixed tensor of order two.
EXAMPLE IX: Show that the inner product of a contravariant and a covariant vector is a scalar,
or invariant.
Solution: Let $A^i$ and $B_j$ be the components of contravariant and covariant vectors relative to
the basis $(e_i)$ for $M_m$. If $(e_{i'})$ is another basis and the vectors have components $A^{i'}$ and
$B_{j'}$ relative to $(e_{i'})$, then
$$A^{i'} = p_i^{i'} A^i, \qquad B_{i'} = p_{i'}^j B_j.$$
Thus
$$A^{i'} B_{i'} = \left(p_i^{i'} p_{i'}^j\right) A^i B_j = \delta_i^j\, A^i B_j, \quad\text{or}\quad A^{i'} B_{i'} = A^i B_i.$$
EXAMPLE X: Show that $A^{ij} B_{jk}$ are the components of a tensor of type (1, 1).
Solution: As $A^{ij}$ are the components of a contravariant tensor of second order, with respect to
the bases $(e_i)$ and $(e_{i'})$ of $M_m$ the law of transformation is
$$A^{i'j'} = p_i^{i'} p_j^{j'}\, A^{ij} \quad\dots\text{(i)}$$
Similarly, $B_{j'k'} = p_{j'}^l p_{k'}^k\, B_{lk}$ …(ii). Multiplying (i) and (ii) side by side,
$$A^{i'j'} B_{j'k'} = p_i^{i'} \left(p_j^{j'} p_{j'}^l\right) p_{k'}^k\, A^{ij} B_{lk} = p_i^{i'} p_{k'}^k\, \delta_j^l\, A^{ij} B_{lk} = p_i^{i'} p_{k'}^k\, A^{ij} B_{jk}.$$
Thus the quantities $A^{ij} B_{jk}$ satisfy the law of transformation of a mixed tensor of type
(1, 1), i.e. of order two.
EXAMPLE XI: Show that the outer product and the inner product of tensors $T^{ij}$ and $S_h$ are
tensors of type (2, 1) and (1, 0) respectively.
Solution: As $T^{ij}$ are the components of a contravariant tensor of second order, with respect to
the bases $(e_i)$ and $(e_{i'})$ of $M_m$ the law of transformation is
$$T^{i'j'} = p_i^{i'} p_j^{j'}\, T^{ij} \quad\dots\text{(i)}, \qquad S_{h'} = p_{h'}^h\, S_h \quad\dots\text{(ii)}$$
For the outer product, multiplying (i) and (ii),
$$T^{i'j'} S_{h'} = p_i^{i'} p_j^{j'} p_{h'}^h\, T^{ij} S_h.$$
Thus the quantities $T^{ij} S_h$ satisfy the law of transformation of a mixed tensor of type (2, 1).
For the inner product,
$$T^{i'j'} S_{j'} = p_i^{i'} p_j^{j'} p_{j'}^l\, T^{ij} S_l = p_i^{i'} \left(p_j^{j'} p_{j'}^l\right) T^{ij} S_l = p_i^{i'}\, T^{ij} S_j.$$
Thus the quantities $T^{ij} S_j$ satisfy the law of transformation of a tensor of type (1, 0), i.e.
a contravariant vector.
Quotient Law: In order that the $n^{r+R+s+S}$ quantities $T_{j_1 j_2 \dots j_{s+S}}^{i_1 i_2 \dots i_{r+R}}$
relative to a basis $(e_i)$ for $M_m$ be the components of a mixed tensor of type (r + R, s + S), it
is necessary and sufficient that, corresponding to r arbitrary covariant vectors α, β, …, γ and
s arbitrary contravariant vectors λ, μ, …, ν, the quantities
$$T_{j_1 \dots j_{s+S}}^{i_1 \dots i_{r+R}}\;\alpha_{i_1}\beta_{i_2}\cdots\gamma_{i_r}\;\lambda^{j_1}\mu^{j_2}\cdots\nu^{j_s}$$
are the components of a mixed tensor of type (R, S).
Proof: Suppose first that $T_{j_1 \dots j_{s+S}}^{i_1 \dots i_{r+R}}$ are the components of a mixed tensor of type
(r + R, s + S). Hence if $(e_{i'})$ is another basis for $M_m$, the law of transformation is
$$T_{j_1' \dots j_{s+S}'}^{i_1' \dots i_{r+R}'} = p_{i_1}^{i_1'}\cdots p_{i_r}^{i_r'}\; p_{i_{r+1}}^{i_{r+1}'}\cdots p_{i_{r+R}}^{i_{r+R}'}\; p_{j_1'}^{j_1}\cdots p_{j_s'}^{j_s}\; p_{j_{s+1}'}^{j_{s+1}}\cdots p_{j_{s+S}'}^{j_{s+S}}\; T_{j_1 \dots j_{s+S}}^{i_1 \dots i_{r+R}} \quad\dots\text{(i)}$$
The arbitrary vectors transform as
$$\alpha_{i_1'} = p_{i_1'}^{l_1}\alpha_{l_1},\quad \beta_{i_2'} = p_{i_2'}^{l_2}\beta_{l_2},\quad\dots,\quad \gamma_{i_r'} = p_{i_r'}^{l_r}\gamma_{l_r} \quad\dots\text{(ii)}$$
$$\lambda^{j_1'} = p_{m_1}^{j_1'}\lambda^{m_1},\quad \mu^{j_2'} = p_{m_2}^{j_2'}\mu^{m_2},\quad\dots,\quad \nu^{j_s'} = p_{m_s}^{j_s'}\nu^{m_s} \quad\dots\text{(iii)}$$
Multiplying (i) by the transformed vector components (ii), (iii) and grouping the transformation
factors,
$$T_{j_1' \dots j_{s+S}'}^{i_1' \dots i_{r+R}'}\,\alpha_{i_1'}\beta_{i_2'}\cdots\gamma_{i_r'}\,\lambda^{j_1'}\mu^{j_2'}\cdots\nu^{j_s'} = \left(p_{i_1}^{i_1'}p_{i_1'}^{l_1}\right)\cdots\left(p_{i_r}^{i_r'}p_{i_r'}^{l_r}\right)\left(p_{j_1'}^{j_1}p_{m_1}^{j_1'}\right)\cdots\left(p_{j_s'}^{j_s}p_{m_s}^{j_s'}\right) p_{i_{r+1}}^{i_{r+1}'}\cdots p_{i_{r+R}}^{i_{r+R}'}\; p_{j_{s+1}'}^{j_{s+1}}\cdots p_{j_{s+S}'}^{j_{s+S}}\; T_{j_1 \dots j_{s+S}}^{i_1 \dots i_{r+R}}\,\alpha_{l_1}\cdots\gamma_{l_r}\,\lambda^{m_1}\cdots\nu^{m_s}.$$
Since $p_{i_1}^{i_1'}p_{i_1'}^{l_1} = \delta_{i_1}^{l_1}$, …, $p_{j_1'}^{j_1}p_{m_1}^{j_1'} = \delta_{m_1}^{j_1}$, …, the contracted pairs collapse, giving
$$T_{j_1' \dots j_{s+S}'}^{i_1' \dots i_{r+R}'}\,\alpha_{i_1'}\cdots\gamma_{i_r'}\,\lambda^{j_1'}\cdots\nu^{j_s'} = p_{i_{r+1}}^{i_{r+1}'}\cdots p_{i_{r+R}}^{i_{r+R}'}\; p_{j_{s+1}'}^{j_{s+1}}\cdots p_{j_{s+S}'}^{j_{s+S}}\; T_{j_1 \dots j_{s+S}}^{i_1 \dots i_{r+R}}\,\alpha_{i_1}\beta_{i_2}\cdots\gamma_{i_r}\,\lambda^{j_1}\mu^{j_2}\cdots\nu^{j_s}.$$
Thus the quantities $T_{j_1 \dots j_{s+S}}^{i_1 \dots i_{r+R}}\,\alpha_{i_1}\beta_{i_2}\cdots\gamma_{i_r}\,\lambda^{j_1}\mu^{j_2}\cdots\nu^{j_s}$ are the
components of a tensor of type (R, S). Hence the condition is necessary.
Conversely, suppose that for arbitrary vectors α, β, …, γ, λ, μ, …, ν the quantities
$T_{j_1 \dots j_{s+S}}^{i_1 \dots i_{r+R}}\,\alpha_{i_1}\beta_{i_2}\cdots\gamma_{i_r}\,\lambda^{j_1}\mu^{j_2}\cdots\nu^{j_s}$ are tensors of type (R, S).
Hence if $(e_{i'})$ is another basis for $M_m$, the law of transformation is
$$T_{j_1' \dots j_{s+S}'}^{i_1' \dots i_{r+R}'}\,\alpha_{i_1'}\beta_{i_2'}\cdots\gamma_{i_r'}\,\lambda^{j_1'}\mu^{j_2'}\cdots\nu^{j_s'} = p_{i_{r+1}}^{i_{r+1}'}\cdots p_{i_{r+R}}^{i_{r+R}'}\; p_{j_{s+1}'}^{j_{s+1}}\cdots p_{j_{s+S}'}^{j_{s+S}}\; T_{j_1 \dots j_{s+S}}^{i_1 \dots i_{r+R}}\,\alpha_{i_1}\cdots\gamma_{i_r}\,\lambda^{j_1}\cdots\nu^{j_s} \quad\dots\text{(iv)}$$
Since $\alpha_{i_1} = p_{i_1}^{i_1'}\alpha_{i_1'}$, …, $\gamma_{i_r} = p_{i_r}^{i_r'}\gamma_{i_r'}$ and
$\lambda^{j_1} = p_{j_1'}^{j_1}\lambda^{j_1'}$, …, $\nu^{j_s} = p_{j_s'}^{j_s}\nu^{j_s'}$, substituting these in the right side of (iv)
gives
$$T_{j_1' \dots j_{s+S}'}^{i_1' \dots i_{r+R}'}\,\alpha_{i_1'}\cdots\gamma_{i_r'}\,\lambda^{j_1'}\cdots\nu^{j_s'} = p_{i_1}^{i_1'}\cdots p_{i_r}^{i_r'}\, p_{i_{r+1}}^{i_{r+1}'}\cdots p_{i_{r+R}}^{i_{r+R}'}\; p_{j_1'}^{j_1}\cdots p_{j_s'}^{j_s}\, p_{j_{s+1}'}^{j_{s+1}}\cdots p_{j_{s+S}'}^{j_{s+S}}\; T_{j_1 \dots j_{s+S}}^{i_1 \dots i_{r+R}}\,\alpha_{i_1'}\cdots\gamma_{i_r'}\,\lambda^{j_1'}\cdots\nu^{j_s'}.$$
Since the quantities $\alpha_{i_1'}, \dots, \gamma_{i_r'}, \lambda^{j_1'}, \dots, \nu^{j_s'}$ are arbitrary, comparing
coefficients on both sides yields
$$T_{j_1' \dots j_{s+S}'}^{i_1' \dots i_{r+R}'} = p_{i_1}^{i_1'}\cdots p_{i_{r+R}}^{i_{r+R}'}\; p_{j_1'}^{j_1}\cdots p_{j_{s+S}'}^{j_{s+S}}\; T_{j_1 \dots j_{s+S}}^{i_1 \dots i_{r+R}}.$$
Hence $T_{j_1 \dots j_{s+S}}^{i_1 \dots i_{r+R}}$ are the components of a mixed tensor of type (r + R, s + S), and the
condition is sufficient.
EXAMPLE XII: If for all contravariant vectors $S^a$ the product $T_a S^a$ is invariant, show that
the $T_a$ are the components of a covariant vector.
Solution: As given, $T_{a'} S^{a'} = T_a S^a$ …(i). As $S^a$ is a contravariant vector,
$$S^a = p_{a'}^a S^{a'}.$$
So from (i), $T_{a'} S^{a'} = p_{a'}^a\, T_a\, S^{a'}$,
$$\text{or}\quad \left(T_{a'} - p_{a'}^a T_a\right) S^{a'} = 0.$$
Since $S^{a'}$ is arbitrary,
$$T_{a'} - p_{a'}^a T_a = 0, \quad\text{or}\quad T_{a'} = p_{a'}^a T_a,$$
showing that the $T_a$ are the components of a covariant vector.
EXAMPLE XIII: If $S(i, j, k)\, A_i B^j C^k$ is an invariant for arbitrary vectors $A_i$, $B^j$ and $C^k$,
show that the $S(i, j, k)$ are the components of a tensor of type (1, 2).
Solution: Since $S(i, j, k)\, A_i B^j C^k$ is an invariant, for the bases $(e_i)$ and $(e_{i'})$ of $M_m$ we
have
$$S(i', j', k')\, A_{i'} B^{j'} C^{k'} = S(i, j, k)\, A_i B^j C^k.$$
Now $A_i = p_i^{i'} A_{i'}$, $B^j = p_{j'}^j B^{j'}$ and $C^k = p_{k'}^k C^{k'}$, so
$$S(i', j', k')\, A_{i'} B^{j'} C^{k'} = p_i^{i'} p_{j'}^j p_{k'}^k\, S(i, j, k)\, A_{i'} B^{j'} C^{k'}.$$
Since the vectors are arbitrary,
$$S(i', j', k') = p_i^{i'} p_{j'}^j p_{k'}^k\, S(i, j, k),$$
which is the law of transformation of a tensor of type (1, 2).
SPECIAL TENSORS
Symmetric Tensor: A contravariant tensor of rank r is said to be symmetric in its pth and qth
places if interchanging those indices leaves the components unaltered:
$$T^{i_1 i_2 \dots i_p \dots i_q \dots i_r} = T^{i_1 i_2 \dots i_q \dots i_p \dots i_r} \quad\dots\text{(i)}$$
Let T be symmetric in the pth and qth places relative to the basis $(e_i)$ for $M_m$, so that (i)
holds. By the law of transformation,
$$T^{i_1' i_2' \dots i_p' \dots i_q' \dots i_r'} = p_{i_1}^{i_1'} p_{i_2}^{i_2'} \cdots p_{i_p}^{i_p'} \cdots p_{i_q}^{i_q'} \cdots p_{i_r}^{i_r'}\; T^{i_1 i_2 \dots i_p \dots i_q \dots i_r}.$$
Using (i) and interchanging the roles of the dummy indices $i_p$ and $i_q$,
$$T^{i_1' i_2' \dots i_p' \dots i_q' \dots i_r'} = p_{i_1}^{i_1'} p_{i_2}^{i_2'} \cdots p_{i_q}^{i_q'} \cdots p_{i_p}^{i_p'} \cdots p_{i_r}^{i_r'}\; T^{i_1 i_2 \dots i_q \dots i_p \dots i_r},$$
$$\text{or}\quad T^{i_1' i_2' \dots i_p' \dots i_q' \dots i_r'} = T^{i_1' i_2' \dots i_q' \dots i_p' \dots i_r'}.$$
Hence if T is symmetric with respect to the basis $(e_i)$, it is also symmetric relative to any
other basis $(e_{i'})$: symmetry is invariant under change of basis.
Skew-Symmetric Tensor: A tensor is said to be skew-symmetric (or antisymmetric) in its pth and
qth places if interchanging those indices changes the sign of the components. If α₁, α₂, …, αᵣ
are covariant vectors, we can write this in free-index form as
$$T(\alpha_1, \alpha_2, \dots, \alpha_p, \dots, \alpha_q, \dots, \alpha_r) = -\,T(\alpha_1, \alpha_2, \dots, \alpha_q, \dots, \alpha_p, \dots, \alpha_r).$$
As with symmetry, skew-symmetry is preserved under a change of basis.
EXAMPLE XIV: If $\Phi = a_{ij} x^i x^j$, where $a_{ij}$ are the components of a covariant tensor of
second order, show that $\Phi = b_{ij} x^i x^j$ where $b_{ij}$ is a symmetric tensor.
Solution: As given, $\Phi = a_{ij} x^i x^j$. Interchanging the dummy indices i and j,
$\Phi = a_{ji} x^j x^i = a_{ji} x^i x^j$. Adding the two expressions, $2\Phi = (a_{ij} + a_{ji})\, x^i x^j$, so
$$\Phi = \left(\frac{a_{ij} + a_{ji}}{2}\right) x^i x^j.$$
If we put $b_{ij} = \dfrac{a_{ij} + a_{ji}}{2}$ then $b_{ij} = b_{ji}$. Thus $\Phi = b_{ij} x^i x^j$ where $b_{ij}$ is a
symmetric tensor.
EXAMPLE XV: Show that every second-order tensor can be expressed as the sum of two tensors,
one symmetric and the other skew-symmetric, of second order.
Solution: Let $a_{ij}$ be the components of a second-order tensor relative to the basis $(e_i)$ for
$M_m$. We can write
$$a_{ij} = \frac{a_{ij} + a_{ji}}{2} + \frac{a_{ij} - a_{ji}}{2} \quad\dots\text{(i)}$$
Putting $S_{ij} = \dfrac{a_{ij} + a_{ji}}{2}$ and $T_{ij} = \dfrac{a_{ij} - a_{ji}}{2}$, obviously $S_{ij}$ is symmetric and
$T_{ij}$ skew-symmetric. To show that they are second-order tensors, we observe that since $a_{ij}$
is a covariant tensor of second order,
$$a_{i'j'} = p_{i'}^i p_{j'}^j\, a_{ij} \qquad\text{and}\qquad a_{j'i'} = p_{j'}^j p_{i'}^i\, a_{ji}.$$
Thus
$$S_{i'j'} = \frac{a_{i'j'} + a_{j'i'}}{2} = p_{i'}^i p_{j'}^j \left(\frac{a_{ij} + a_{ji}}{2}\right), \quad\text{or}\quad S_{i'j'} = p_{i'}^i p_{j'}^j\, S_{ij}.$$
Hence $S_{ij}$ is a covariant tensor of rank 2. Similarly $T_{ij}$ is also a covariant tensor of second
rank.
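Example XV in code (a minimal sketch; the tensor a is an arbitrary sample array):

    import numpy as np

    a = np.random.default_rng(7).random((4, 4))   # arbitrary second-order tensor a_ij
    S = (a + a.T) / 2                             # symmetric part
    T = (a - a.T) / 2                             # skew-symmetric part

    assert np.allclose(S, S.T) and np.allclose(T, -T.T)
    assert np.allclose(a, S + T)                  # the decomposition reconstructs a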
EXAMPLE XVI: Show that in n-dimensional space a symmetric second-order tensor has at most
$\dfrac{n(n+1)}{2}$ distinct components.
Solution: Let $A_{ij}$ be the components of a symmetric covariant tensor of second order; there
are $n^2$ components in all, of which n lie on the principal diagonal. Since $A_{ij} = A_{ji}$, of the
remaining $n^2 - n$ off-diagonal components only half are distinct: components on one side of the
principal diagonal are equal to the corresponding components on the other side. Hence the number
of distinct components is at most
$$n + \frac{n^2 - n}{2} = \frac{n(n+1)}{2}.$$
EXAMPLE XVII: Show that in n-dimensional space a skew-symmetric second-order tensor has at
most $\dfrac{n(n-1)}{2}$ distinct components.
Solution: Let $A_{ij}$ be the components of a skew-symmetric covariant tensor of second order,
with $n^2$ components in all, so that $A_{ij} = -A_{ji}$. The diagonal components satisfy
$A_{ii} = -A_{ii}$, hence each $A_{ii} = 0$. The number of distinct components corresponding to
distinct suffixes is $\dfrac{n^2 - n}{2}$; hence the total number of distinct non-zero components is
$$0 + \frac{n^2 - n}{2} = \frac{n(n-1)}{2}.$$
Theorem: If $a_{ij}$ are the components of a symmetric tensor of type (0, 2) such that
$a = |a_{ij}| \neq 0$, prove that it is possible to define a tensor whose components $a^{ij}$ satisfy
$a^{ij} a_{jk} = \delta_k^i$. Prove further that this tensor is a contravariant symmetric tensor of second
order, and that $a\, a^{ij} =$ the cofactor of $a_{ij}$ in a.
Proof: Consider a covariant symmetric tensor $a_{ij}$ of second order. Denote by a the determinant
$|a_{ij}|$, i.e., $a = |a_{ij}|$. We also denote the cofactor of $a_{ij}$ in the determinant $|a_{ij}|$ by $A^{ji}$;
the normalized cofactor of $a_{ij}$ in $|a_{ij}|$ is then $\dfrac{1}{a}A^{ji}$. We define
$$a^{ij} = \frac{\text{cofactor of } a_{ij} \text{ in the determinant } |a_{ij}|}{a} = \frac{A^{ji}}{a},$$
by our assumption $a \neq 0$. This declares that $a^{ij}$ is the normalized cofactor of $a_{ij}$ in
$|a_{ij}|$. Now we shall show that $a^{ij}$ is a second-rank contravariant symmetric tensor.
$$a_{ij} \text{ symmetric} \Rightarrow |a_{ij}| \text{ symmetric} \Rightarrow A^{ji} \text{ symmetric} \Rightarrow \tfrac{1}{a}A^{ji} \text{ symmetric} \Rightarrow a^{ij} \text{ symmetric}.$$
Let $u^i$ be an arbitrary contravariant vector. Since the product of two tensors is a tensor,
$u^i a_{ij}$ is a tensor. Write $B_j = u^i a_{ij}$; $B_j$ is an arbitrary covariant vector because $u^i$ is
arbitrary. Then
$$B_j\, a^{jk} = u^i a_{ij}\, a^{jk} = u^i a_{ij}\,\frac{A^{kj}}{a} = \frac{u^i\, a\, \delta_i^k}{a} \quad\text{(by a well-known property of determinants)}$$
$$= u^i \delta_i^k = u^k,$$
a contravariant tensor of rank one. Therefore $B_j a^{jk}$ is a tensor for an arbitrary covariant
vector $B_j$. This proves, by the quotient law, that $a^{jk}$ is a tensor of the type indicated by its
suffixes. Hence $a^{ij}$ is a contravariant second-order tensor, and we have already shown that it
is symmetric. Finally, $a^{ij}$ is a contravariant second-order symmetric tensor. The tensors $a_{ij}$
and $a^{ij}$ are said to be reciprocal to each other. They are also sometimes called conjugate
tensors.
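Numerically, the reciprocal tensor $a^{ij} = A^{ji}/a$ is exactly the matrix inverse of $(a_{ij})$
(a sketch with an arbitrary symmetric non-singular matrix):

    import numpy as np

    a = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 4.0]])             # symmetric a_ij with det != 0
    det = np.linalg.det(a)
    cof = det * np.linalg.inv(a).T              # cofactor matrix of a_ij
    a_up = cof.T / det                          # a^{ij} = A^{ji} / a

    assert np.allclose(a_up, np.linalg.inv(a))
    assert np.allclose(np.einsum('ij,jk->ik', a_up, a), np.eye(3))   # a^{ij} a_{jk} = delta^i_k
    assert np.allclose(a_up, a_up.T)            # the reciprocal tensor is symmetric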
Associated Tensor: With the help of the metric tensor $g_{ij}$, we can associate with a tensor of
type (r, s) tensors of type (r-1, s+1), (r-2, s+2), etc. For example, consider a tensor T of type
(2, 1) with components $T_k^{ij}$ relative to the basis $(e_i)$ for $M_m$. We define
$$T_{i\ k}^{\ j} = g_{il}\, T_k^{lj}.$$
It is a tensor of type (1, 2) associated with the original (2, 1) tensor $T_k^{ij}$. We say that
$T_{i\ k}^{\ j}$ is obtained from $T_k^{ij}$ by lowering the index i. Multiplying the equation
$T_{i\ k}^{\ j} = g_{il} T_k^{lj}$ by the reciprocal tensor $g^{hi}$, we get
$$g^{hi}\, T_{i\ k}^{\ j} = g^{hi} g_{il}\, T_k^{lj} = \delta_l^h\, T_k^{lj}, \quad\text{or}\quad T_k^{hj} = g^{hi}\, T_{i\ k}^{\ j}.$$
Equivalently, $T_k^{hj} = g^{hm}\, T_{m\ k}^{\ \ j}$, and relabelling, $T_k^{ij} = g^{im}\, T_{m\ k}^{\ \ j}$. Thus the original
tensor $T_k^{ij}$ may be recovered from $T_{i\ k}^{\ j}$ by means of the reciprocal metric tensor (raising
the index). All the tensors that can be obtained from $T_k^{ij}$ by raising or lowering indices are
the tensors associated with $T_k^{ij}$.
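Lowering an index with the metric and then raising it again with the reciprocal metric recovers
the original tensor (a sketch; g is an arbitrary symmetric positive-definite metric of my own
choosing):

    import numpy as np

    rng = np.random.default_rng(8)
    M = rng.random((3, 3))
    g = M @ M.T + 3*np.eye(3)                  # arbitrary metric g_ij (symmetric, non-singular)
    g_up = np.linalg.inv(g)                    # reciprocal metric g^{ij}
    T = rng.random((3, 3, 3))                  # components T^{ij}_k

    T_low = np.einsum('il,ljk->ijk', g, T)     # lower the first index: g_il T^{lj}_k
    T_back = np.einsum('hi,ijk->hjk', g_up, T_low)   # raise it again: g^{hi} (g_il T^{lj}_k)
    assert np.allclose(T_back, T)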
The length L of a vector with components $A^i$ is defined by $L^2 = A^i A_i$. With $A_i = g_{ij} A^j$,
$$L^2 = A^i g_{ij} A^j = g_{ij} A^i A^j \quad\dots\text{(i)}$$
Also, with $A^i = g^{ij} A_j$, we have $L^2 = g^{ij} A_i A_j$. The length is invariant under a change of
basis: since $A^{i'} = p_i^{i'} A^i$ and $A_{i'} = p_{i'}^j A_j$,
$$A^{i'} A_{i'} = \left(p_i^{i'} p_{i'}^j\right) A^i A_j = \delta_i^j\, A^i A_j = A^i A_i.$$
So $L^2 = A^i A_i = A^{i'} A_{i'}$.
CONCLUSION
Tensor algebra provides a powerful framework for mathematical operations involving tensors,
which are multidimensional arrays of numbers obeying definite laws of transformation. It is used
extensively in fields such as physics, engineering, computer science, and machine learning.
Tensor algebra enables us to manipulate and analyze complex data structures and physical
quantities efficiently, making it a fundamental tool in modern mathematics and scientific
computing.
REFERENCE
* Differential Geometry and Tensor Analysis, by Prof. Ram Nivas, Dr. C. P. Awasthi, Dr.
B. P. Singh.
* M. E. Gurtin: An introduction to continuum mechanics. Academic Press, 1981;
or also, in a similar style, the long article
* P. Podio-Guidugli: A primer in elasticity. Journal of Elasticity, v. 58: 1-104, 2000.
A short, effective introduction to tensor algebra and the differential geometry of curves can
be found in the following text of exercises on analytical mechanics:
* P. Biscari, C. Poggi, E. G. Virga: Mechanics notebook. Liguori Editore, 1999.
A classical textbook on linear algebra that is recommended is
* P. R. Halmos: Finite-dimensional vector spaces. Van Nostrand Reinhold, 1958.
In the previous textbooks, tensor algebra in curvilinear coordinates is not developed; an
introduction to this topic, especially intended for physicists and engineers, can be found in
* W. H. Müller: An expedition to continuum theory. Springer, 2014.
Two modern and application-oriented textbooks on differential geometry of curves and surfaces
are:
* V. A. Toponogov: Differential geometry of curves and surfaces - A concise guide. Birkhäuser,
2006.
* A. Pressley: Elementary differential geometry. Springer, 2010. A short introduction to the
differential geometry of surfaces, oriented toward the mechanics of shells, can be found in the
classical book
* V. V. Novozhilov: Thin shell theory. Noordhoff LTD., 1964. For what concerns the calculus
of variations, a still valid textbook on the matter (but not only) is
* R. Courant, D. Hilbert: Methods of mathematical physics. Interscience Publishers,
1953.
Two very good and classical textbooks with an introduction to the calculus of variations for
engineers are
* C. Lanczos: The variational principles of mechanics. University of Toronto Press, 1949.
* H. L. Langhaar: Energy methods in applied mechanics. Wiley, 1962.
* Bourbaki, Nicolas (1989). Algebra I. Chapters 1-3. Elements of Mathematics. Springer-
Verlag. ISBN 3-540-64243-9. (See Chapter 3 §5)
* Serge Lang (2002), Algebra, Graduate Texts in Mathematics, vol. 211 (3rd ed.), Springer
Verlag, ISBN 978-0-387-95385-4
*"Tensor Calculus" by J. L. Synge and A. Schild: This classic text provides a rigorous
introduction to tensor calculus and its applications in physics and engineering.
*"Introduction to Tensor Calculus and Continuum Mechanics" by J. H. Heinbockel: This book
covers the basics of tensor algebra and its applications in continuum mechanics, making it
suitable for engineering students.
*"Tensor Analysis on Manifolds" by Richard L. Bishop and Samuel I. Goldberg: This book
delves into tensor analysis in the context of differential geometry, offering a deeper
understanding of tensors on curved spaces.
* "A Geometric Approach to Differential Forms" by David Bachman: Although focused on
differential forms, this book also covers tensor algebra and provides a geometric perspective on
tensors and their applications.
*"Tensor Algebra and Tensor Analysis for Engineers: With Applications to Continuum
Mechanics" by Mikhail Itskov: This book is aimed at engineers and covers tensor algebra in the
context of continuum mechanics, emphasizing practical applications.
*Online resources such as lectures, notes, and tutorials from universities like MIT
OpenCourseWare, Khan Academy, and Coursera can also be valuable for learning tensor
algebra.
*These resources vary in depth and focus, so you can choose based on your background and
specific interests in tensor algebra and its applications.
* Altenbach H, Zhilin PA (1988) Osnovnye uravneniya neklassicheskoi teorii uprugikh
obolochek (Basic equations of a non-classical theory of elastic shells, in Russ.). Adv Mech
11:107-148
* Altenbach H, Naumenko K, L'vov GI, Pylypenko S (2003a) Numerical estimation of the elastic
properties of thin-walled structures manufactured from short-fiber reinforced thermoplastics.
Mech Compos Mater 39:221-234
* Altenbach H, Naumenko K, Zhilin P (2003b) A micro-polar theory for binary media with
application to phase-transitional flow of fiber suspensions. Continuum Mech Thermodyn
15:539-570
* Altenbach H, Naumenko K, Zhilin PA (2005) A direct approach to the formulation of
constitutive equations for rods and shells. In: Pietraszkiewicz W, Szymczak C (eds) Shell
structures: theory and applications, pp 87-90. Taylor and Francis, Leiden
* Altenbach H, Naumenko K, Zhilin P (2006) A note on transversely isotropic invariants.
ZAMM-J Appl Math Mech/Zeitschrift für Angewandte Mathematik und Mechanik 86:162-168
* Altenbach H, Naumenko K, Pylypenko S, Renner B (2007) Influence of rotary inertia on the
fiber dynamics in homogeneous creeping flows. ZAMM-J Appl Math Mech/Zeitschrift für
Angewandte Mathematik und Mechanik 87(2):81-93
* Betten J (1976) Plastic anisotropy and Bauschinger-effect: general formulation and comparison
with experimental yield curves. Acta Mech 25(1-2):79-94
* Betten J (1985) On the representation of the plastic potential of anisotropic solids. In:
Boehler J (ed) Plastic behavior of anisotropic solids, CNRS, Paris, pp 213-228
* Betten J (1987) Tensorrechnung für Ingenieure. Springer, Berlin
* Betten J (2001) Kontinuumsmechanik. Springer, Berlin
* Betten J (2008) Creep mechanics, 3rd edn. Springer, Berlin
* Bischoff-Beiermann B, Bruhns O (1994) A physically motivated set of invariants and tensor
generators in the case of transverse isotropy. Int J Eng Sci 32:1531-1552
* Boehler JP (ed) (1987) Application of tensor functions in solid mechanics. CISM Lecture
Notes No. 292, Springer, Wien
* Boehler JP, Sawczuk A (1977) On yielding of oriented solids. Acta Mech 27:185-206
* Bruhns O, Xiao H, Meyers A (1999) On representation of yield functions for crystals,
quasicrystals and transversely isotropic solids. Eur J Mech A/Solids 18:47-67
* Courant R, Hilbert D (1989) Methods of mathematical physics, vol 2: Partial differential
equations. Wiley Interscience Publication, New York
* Eringen AC (1999) Microcontinuum field theories, vol 1: Foundations and Solids. Springer,
New York
* Gariboldi E, Naumenko K, Ozhoga-Maslovskaja O, Zappa E (2016) Analysis of anisotropic
damage in forged Al-Cu-Mg-Si alloy based on creep tests, micrographs of fractured specimen and
digital image correlations. Mater Sci Eng: A 652:175-185
* Kröner C, Altenbach H, Naumenko K (2009) Coupling of a structural analysis and flow
simulation for short-fiber-reinforced polymers: property prediction and transfer of results.
Mech Compos Mater 45(3):249-256
* Lurie AI (1990) Nonlinear theory of elasticity. North-Holland, Dordrecht
* Mücke R, Bernhardi O (2003) A constitutive model for anisotropic materials based on Neuber's
rule. Comput Methods Appl Mech Eng 192:4237-4255
* Hirotachi Abo, Anna Seigal, and Bernd Sturmfels. Eigenconfigurations of tensors, 2017.
* E. Acar and B. Yener. Unsupervised multiway data analysis: A literature survey. IEEE
Transactions on Knowledge and Data Engineering, 21(1):6-20, January 2009.
* Elizabeth S. Allman, John A. Rhodes, Bernd Sturmfels, and Piotr Zwiernik. Tensors of
nonnegative rank two. Linear Algebra and its Applications, 473:37-53, May 2015.
* O. Alter, P. O. Brown, and D. Botstein. Singular value decomposition for genome-wide
expression data processing and modeling. Proceedings of the National Academy of Sciences,
97(18):10101-10106, August 2000.
* O. Alter and G. H. Golub. Integrative analysis of genome-scale data by using pseudoinverse
projection predicts novel correlation between DNA replication and RNA transcription.
Proceedings of the National Academy of Sciences, 101(47):16577-16582, November 2004.
* S.-I. Amari, O. E. Barndorff-Nielsen, R. E. Kass, S. L. Lauritzen, and C. R. Rao.
Differential geometry in statistical inference. Lecture Notes-Monograph Series, 10:i-240, 1987.
* Shun-ichi Amari. Information geometry and its applications. Springer, Japan, 2016.
* Shun-Ichi Amari and Hiroshi Nagaoka. Methods of information geometry. American Mathematical
Society, Providence, RI, 2000.
* Anima Anandkumar, Yuan Deng, Rong Ge, and Hossein Mobahi. Homotopy analysis for tensor PCA.
In Satyen Kale and Ohad Shamir, editors, Proceedings of the 2017 Conference on Learning Theory,
volume 65 of Proceedings of Machine Learning Research, pages 79-104, Amsterdam, Netherlands,
07-10 Jul 2017. PMLR.
* Anima Anandkumar, Prateek Jain, Yang Shi, and U. N. Niranjan. Tensor vs. matrix methods:
Robust tensor decomposition under block sparse perturbations. In Arthur Gretton and Christian
C. Robert, editors, Proceedings of the 19th International Conference on Artificial Intelligence
and Statistics, volume 51 of Proceedings of Machine Learning Research, pages 268-276, Cadiz,
Spain, 09-11 May 2016. PMLR.
* Animashree Anandkumar, Rong Ge, Daniel Hsu, Sham M. Kakade, and Matus Telgarsky. Tensor
decompositions for learning latent variable models. J. Mach. Learn. Res., 15(1):2773-2832,
January 2014.
* Animashree Anandkumar, Rong Ge, and Majid Janzamin. Learning overcomplete latent variable
models through tensor methods. In Peter Grünwald, Elad