PDF PPT Mathematical Physics Tensor Unit 7

This document provides an introduction to tensor algebra and tensor analysis. It discusses key topics such as: 1) Tensors generalize scalars, vectors, and matrices to higher dimensions. They can describe physical properties like stress and strain. 2) n-dimensional space is defined by an ordered set of n real variables that determine the coordinates of a point. 3) Coordinate transformations relate two sets of coordinate variables through transformation equations. 4) The indicial and summation conventions are introduced to simplify notation when working with tensors in multiple dimensions.


MATHEMATICAL PHYSICS

UNIT – 7
Tensor Algebra

PRESENTED BY: DR. RAJESH MATHPAL


ACADEMIC CONSULTANT
SCHOOL OF SCIENCES
U.O.U. TEENPANI, HALDWANI
UTTARAKHAND
MOB:9758417736,7983713112
STRUCTURE OF UNIT

 7.1. INTRODUCTION
 7.2. n-DIMENSIONAL SPACE
 7.3. CO-ORDINATE TRANSFORMATIONS
 7.4. INDICIAL AND SUMMATION CONVENTIONS
 7.5. DUMMY AND REAL INDICES
 7.6. KRONECKER DELTA SYMBOL
 7.7. SCALARS, CONTRAVARIANT VECTORS AND COVARIANT
VECTORS
 7.8. TENSORS OF HIGHER RANKS
 7.9. SYMMETRIC AND ANTISYMMETRIC TENSORS
 7.10. ALGEBRAIC OPERATIONS ON TENSORS
7.1. INTRODUCTION

 Tensors are mathematical objects that generalize scalars, vectors and matrices to higher dimensions. If you are familiar with basic linear algebra, you should have no trouble understanding what tensors are; in short, a one-dimensional tensor can be represented as a vector.
 The term rank of a tensor extends the notion of the rank of a matrix
in linear algebra, although the term is also often used to mean the
order (or degree) of a tensor. The rank of a matrix is the minimum
number of column vectors needed to span the range of the matrix.
 A tensor is a vector or matrix of n dimensions that can represent any type of data. All values in a tensor hold an identical data type with a known (or partially known) shape; the shape of the data is the dimensionality of the matrix or array. A tensor can originate from the input data or from the result of a computation.
 Tensors are simply mathematical objects that can be used to
describe physical properties, just like scalars and vectors. In
fact tensors are merely a generalisation of scalars and vectors; a
scalar is a zero rank tensor, and a vector is a first rank tensor.
 Pressure itself is looked upon as a scalar quantity; the related tensor quantity often talked about is the stress tensor: pressure is the negative one-third of the sum of the diagonal components of the stress tensor (the Einstein summation convention is implied here, in which repeated indices imply a sum).
 Tensors are to multilinear functions as linear maps are to single
variable functions. If you want to apply techniques in linear algebra
to problems depending on more than one variable linearly (usually
something like problems that are more than one-dimensional), the
objects you are studying are tensors.
 A tensor field has a tensor corresponding to each point of space.
An example is the stress on a material, such as a construction beam
in a bridge. Other examples of tensors include the strain tensor, the
conductivity tensor, and the inertia tensor.
 A tensor is a generalization of vectors and matrices and is easily understood as a multidimensional array. In machine learning, the training and operation of deep learning models can be described in terms of tensors.
 The tensor is a more generalized form of scalar and vector; the scalar and vector are special cases of a tensor. If a tensor has only magnitude and no direction (i.e., a rank-0 tensor), it is called a scalar. If a tensor has magnitude and one direction (i.e., a rank-1 tensor), it is called a vector.
 Tensors are a type of data structure used in linear algebra, and like vectors and matrices, you can calculate arithmetic operations with tensors. Tensors are a generalization of matrices and are represented using n-dimensional arrays.
 A tensor is a container which can house data in N dimensions. Often
and erroneously used interchangeably with the matrix (which is
specifically a 2-dimensional tensor), tensors are generalizations of
matrices to N-dimensional space. Mathematically
speaking, tensors are more than simply a data container, however.
 Stress is a tensor because it describes things happening in two directions simultaneously. Pressure is part of the stress tensor: the diagonal elements form the pressure. For example, σxx measures how much x-force pushes in the x-direction. Think of your hand pressing against the wall, i.e., applying pressure.
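The points above can be made concrete with a short NumPy sketch (an illustrative library choice; the stress components below are made-up numbers): the rank of a tensor is simply the number of indices, i.e. the number of array axes, and the pressure is minus one-third of the sum of the diagonal components of the stress tensor.

```python
import numpy as np

# Rank-0 tensor (scalar): no indices, a single number.
scalar = np.array(5.0)

# Rank-1 tensor (vector): one index, n components.
vector = np.array([1.0, 2.0, 3.0])

# Rank-2 tensor (e.g. a stress tensor): two indices, n**2 components.
stress = np.array([[-2.0, 0.5, 0.0],
                   [ 0.5, -1.0, 0.3],
                   [ 0.0, 0.3, -3.0]])

# Pressure as minus one-third the trace (sum of diagonal components).
pressure = -np.trace(stress) / 3.0

print(scalar.ndim, vector.ndim, stress.ndim)  # 0 1 2
print(pressure)  # 2.0
```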
7.2. n-DIMENSIONAL SPACE

 In three-dimensional space a point is determined by a set of three numbers called the co-ordinates of that point in a particular system. For example, (x, y, z) are the co-ordinates of a point in the rectangular Cartesian co-ordinate system. By analogy, if a point is represented by an ordered set of n real variables $(x_1, x_2, x_3, \ldots, x_i, \ldots, x_n)$, or more conveniently $(x^1, x^2, x^3, \ldots, x^i, \ldots, x^n)$ [here the suffixes 1, 2, 3, …, i, …, n denote variables and not powers of the variables involved], then all the points corresponding to all values of the co-ordinates (i.e., variables) are said to form an n-dimensional space, denoted by $V_n$.
 A curve in n-dimensional space ($V_n$) is defined as the collection of points which satisfy the n equations
$x^i = x^i(u), \quad (i = 1, 2, 3, \ldots, n)$
where u is a parameter and the $x^i(u)$ are n functions of u which satisfy certain continuity conditions.
 A sub-space $V_m$ (m < n) of $V_n$ is defined as the collection of points which satisfy the n equations
$x^i = x^i(u^1, u^2, \ldots, u^m), \quad (i = 1, 2, \ldots, n)$
where $u^1, u^2, \ldots, u^m$ are m parameters and the $x^i(u^1, u^2, \ldots, u^m)$ are n functions of the parameters which satisfy certain continuity conditions.
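A minimal sketch of these definitions, with an illustrative choice of curve and sub-space (not taken from the slides): a circular helix as a one-parameter curve in $V_3$, and a unit sphere as a two-parameter sub-space $V_2$ of $V_3$.

```python
import numpy as np

# A curve in V_3: x^i = x^i(u), i = 1, 2, 3 — here a circular helix.
def curve(u):
    return np.array([np.cos(u), np.sin(u), u])

# A sub-space V_2 of V_3: x^i = x^i(u1, u2) — here a unit sphere,
# parametrised by the polar angle u1 and azimuthal angle u2.
def surface(u1, u2):
    return np.array([np.sin(u1) * np.cos(u2),
                     np.sin(u1) * np.sin(u2),
                     np.cos(u1)])

point = surface(np.pi / 2, 0.0)   # ≈ [1, 0, 0], a point of V_2 inside V_3
```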
7.3. CO-ORDINATE TRANSFORMATIONS
 Tensor analysis is intimately connected with the subject of co-ordinate transformations.

 Consider two sets of variables $(x^1, x^2, x^3, \ldots, x^n)$ and $(\bar{x}^1, \bar{x}^2, \bar{x}^3, \ldots, \bar{x}^n)$ which determine the co-ordinates of a point in an n-dimensional space in two different frames of reference. Let the two sets of variables be related to each other by the transformation equations
$\bar{x}^1 = \phi^1(x^1, x^2, x^3, \ldots, x^n)$
$\bar{x}^2 = \phi^2(x^1, x^2, x^3, \ldots, x^n)$
… … … …
$\bar{x}^n = \phi^n(x^1, x^2, x^3, \ldots, x^n)$
or briefly $\bar{x}^\mu = \phi^\mu(x^1, x^2, x^3, \ldots, x^i, \ldots, x^n), \quad (i = 1, 2, 3, \ldots, n)$ …(7.1)
where the functions $\phi^\mu$ are single-valued, continuously differentiable functions of the co-ordinates. It is essential that the n functions $\phi^\mu$ be independent.
Equations (7.1) can be solved for the co-ordinates $x^i$ as functions of the $\bar{x}^\mu$ to yield
$x^i = \psi^i(\bar{x}^1, \bar{x}^2, \bar{x}^3, \ldots, \bar{x}^\mu, \ldots, \bar{x}^n)$ …(7.2)
Equations (7.1) and (7.2) are said to define a co-ordinate transformation.

From equations (7.1) the differentials $d\bar{x}^\mu$ transform as
$d\bar{x}^\mu = \frac{\partial \bar{x}^\mu}{\partial x^1}\,dx^1 + \frac{\partial \bar{x}^\mu}{\partial x^2}\,dx^2 + \cdots + \frac{\partial \bar{x}^\mu}{\partial x^n}\,dx^n = \sum_{i=1}^{n} \frac{\partial \bar{x}^\mu}{\partial x^i}\,dx^i, \quad (\mu = 1, 2, 3, \ldots, n)$ …(7.3)
7.4. INDICIAL AND SUMMATION CONVENTIONS
Let us now introduce the following two conventions :
(1) Indicial convention. Any index, used either as subscript or superscript, will take all values from 1 to n unless the contrary is specified. Thus equations (7.1) can be briefly written as
$\bar{x}^\mu = \phi^\mu(x^i)$ …(7.4)
The convention reminds us that there are n equations with $\mu$ = 1, 2, …, n and that the $\phi^\mu$ are functions of the n co-ordinates with i = 1, 2, …, n.
(2) Einstein's summation convention. If any index is repeated in a term, then a summation with respect to that index over the range 1, 2, 3, …, n is implied. This convention is called Einstein's summation convention.
According to this convention, instead of the expression $\sum_{i=1}^{n} a_i x^i$ we merely write $a_i x^i$.
Using the above two conventions, eqn. (7.3) is written as
$d\bar{x}^\mu = \frac{\partial \bar{x}^\mu}{\partial x^i}\,dx^i$ …(7.5a)
Thus the summation convention means dropping the sigma sign for an index appearing twice in a given term. In other words, the summation convention implies the sum of the term over the defined range for the index appearing twice in that term.
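In code, the dropped sigma of the summation convention corresponds to summing over a repeated index; `numpy.einsum` (an illustrative tool choice) uses exactly this notation:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])   # components a_i
x = np.array([4.0, 5.0, 6.0])   # components x^i

# a_i x^i: the repeated index i is summed from 1 to n, no sigma written.
s = np.einsum('i,i->', a, x)
assert s == sum(a[i] * x[i] for i in range(3)) == 32.0
```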

7.5. DUMMY AND REAL INDICES
Any index which is repeated in a given term, so that the summation convention applies, is called a dummy index, and it may be replaced freely by any other index not already used in the term. For example, i is a dummy index in $a^\mu_i x^i$. Also i is a dummy index in eqn. (7.5a), so that equation (7.5a) may equally be written as
$d\bar{x}^\mu = \frac{\partial \bar{x}^\mu}{\partial x^k}\,dx^k = \frac{\partial \bar{x}^\mu}{\partial x^\lambda}\,dx^\lambda.$ …(7.5b)
Also two or more dummy indices can be interchanged. In order to avoid confusion the same index must not be used more than twice in any single term. For example, $(a_i x^i)^2$ will not be written as $a_i x^i a_i x^i$ but rather as $a_i a_j x^i x^j$.
 Any index which is not repeated in a given term is called a real index. For example, $\mu$ is a real index in $a^\mu_i x^i$. A real index cannot be replaced by another real index, e.g.
$a^\mu_i x^i \neq a^\nu_i x^i$
7.6. KRONECKER DELTA SYMBOL
 The Kronecker delta symbol is defined as
$\delta^j_k = \begin{cases} 1 & \text{if } j = k \\ 0 & \text{if } j \neq k \end{cases}$ …(7.6)
 Some properties of the Kronecker delta:
(i) If $x^1, x^2, x^3, \ldots, x^n$ are independent variables, then
$\frac{\partial x^j}{\partial x^k} = \delta^j_k$ …(7.7)
(ii) An obvious property of the Kronecker delta symbol is
$\delta^j_k A_j = A_k$ …(7.8)
since by the summation convention the summation on the left-hand side is with respect to j, and by the definition of the Kronecker delta the only surviving term is that for which j = k.
(iii) If we are dealing with n dimensions, then
$\delta^j_j = \delta^k_k = n$ …(7.9)
By the summation convention
$\delta^j_j = \delta^1_1 + \delta^2_2 + \delta^3_3 + \cdots + \delta^n_n = 1 + 1 + 1 + \cdots + 1 = n$
(iv) $\delta^i_j \delta^j_k = \delta^i_k$ …(7.10)
By the summation convention
$\delta^i_j \delta^j_k = \delta^i_1 \delta^1_k + \delta^i_2 \delta^2_k + \delta^i_3 \delta^3_k + \cdots + \delta^i_i \delta^i_k + \cdots + \delta^i_n \delta^n_k = 0 + 0 + 0 + \cdots + 1 \cdot \delta^i_k + \cdots + 0 = \delta^i_k$
(v) $\frac{\partial x^j}{\partial \bar{x}^i} \frac{\partial \bar{x}^i}{\partial x^k} = \frac{\partial x^j}{\partial x^k} = \delta^j_k$ …(7.11)
Generalised Kronecker delta. The generalised Kronecker delta is symbolised as $\delta^{j_1 j_2 \ldots j_m}_{k_1 k_2 \ldots k_m}$ and defined as follows:
 The subscripts and superscripts can have any value from 1 to n.
 If at least two superscripts or at least two subscripts have the same value, or if the subscripts are not the same set of numbers as the superscripts, then the generalised Kronecker delta is zero. For example
$\delta^{ikk}_{jkl} = \delta^{ijk}_{lmm} = \delta^{ijk}_{klm} = 0.$
 If all the subscripts are separately different and the subscripts are the same set of numbers as the superscripts, then the generalised Kronecker delta has the value +1 or −1 according to whether an even or odd number of permutations is required to arrange the superscripts in the same order as the subscripts. For example
$\delta^{123}_{123} = \delta^{123}_{231} = \delta^{1452}_{4125} = +1$
and $\delta^{123}_{213} = \delta^{123}_{132} = \delta^{1452}_{4152} = -1.$
 It should be noted that the generalised Kronecker delta can be written as a determinant of ordinary deltas:
$\delta^{i_1 i_2 \ldots i_n}_{j_1 j_2 \ldots j_n} = \det\!\left(\delta^{i_a}_{j_b}\right).$
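The properties (7.7)–(7.10) are easy to verify numerically; a sketch (illustrative, with the identity matrix standing for $\delta^j_k$):

```python
import numpy as np

n = 4
delta = np.eye(n)                      # delta^j_k: 1 if j = k, else 0 (eq. 7.6)

A = np.array([3.0, 1.0, 4.0, 1.0])     # arbitrary components A_j
# delta^j_k A_j = A_k (eq. 7.8): the sum over j picks out the j = k term.
assert np.allclose(np.einsum('jk,j->k', delta, A), A)
# delta^j_j = n (eq. 7.9): the trace counts one 1 per dimension.
assert np.einsum('jj->', delta) == n
# delta^i_j delta^j_k = delta^i_k (eq. 7.10).
assert np.allclose(np.einsum('ij,jk->ik', delta, delta), delta)
```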
7.7. SCALARS, CONTRAVARIANT VECTORS AND COVARIANT VECTORS
(a) Scalars. Consider a function $\phi$ in a co-ordinate system of variables $x^i$ and let this function have the value $\bar{\phi}$ in another system of variables $\bar{x}^\mu$. Then if
$\phi = \bar{\phi}$
the function $\phi$ is said to be a scalar or invariant or a tensor of order zero.
The quantity
$\delta^i_i = \delta^1_1 + \delta^2_2 + \delta^3_3 + \cdots + \delta^n_n = n$
is a scalar or an invariant.
(b) Contravariant vectors. Consider a set of n quantities $A^1, A^2, A^3, \ldots, A^n$ in a system of variables $x^i$ and let these quantities have values $\bar{A}^1, \bar{A}^2, \bar{A}^3, \ldots, \bar{A}^n$ in another co-ordinate system of variables $\bar{x}^\mu$. If these quantities obey the transformation relation
$\bar{A}^\mu = \frac{\partial \bar{x}^\mu}{\partial x^i} A^i$ …(7.12)
then the quantities $A^i$ are said to be the components of a contravariant vector or a contravariant tensor of first rank.
Any n functions can be chosen as the components of a contravariant vector in a system of variables $x^i$, and equations (7.12) then determine the n components in the system of variables $\bar{x}^\mu$.
Multiplying equation (7.12) by $\frac{\partial x^j}{\partial \bar{x}^\mu}$ and taking the sum over the index $\mu$ from 1 to n, we get
$\frac{\partial x^j}{\partial \bar{x}^\mu} \bar{A}^\mu = \frac{\partial x^j}{\partial \bar{x}^\mu} \frac{\partial \bar{x}^\mu}{\partial x^i} A^i = \frac{\partial x^j}{\partial x^i} A^i = A^j$
or $A^j = \frac{\partial x^j}{\partial \bar{x}^\mu} \bar{A}^\mu.$ …(7.13)
Equations (7.13) represent the solution of equations (7.12).


The transformation of the differentials $dx^i$ and $d\bar{x}^\mu$ in the systems of variables $x^i$ and $\bar{x}^\mu$ respectively, from eqn. (7.5a), is given by
$d\bar{x}^\mu = \frac{\partial \bar{x}^\mu}{\partial x^i}\,dx^i$ …(7.14)
As equations (7.12) and (7.14) are similar transformation equations, we can say that the differentials $dx^i$ form the components of a contravariant vector, whose components in any other system are the differentials $d\bar{x}^\mu$ of that system. Also we conclude that the components of a contravariant vector are actually the components of a contravariant tensor of rank one.
Let us now consider a further change of variables from $\bar{x}^\mu$ to $x'^p$; then the new components $A'^p$ must be given by
$A'^p = \frac{\partial x'^p}{\partial \bar{x}^\mu} \bar{A}^\mu = \frac{\partial x'^p}{\partial \bar{x}^\mu} \frac{\partial \bar{x}^\mu}{\partial x^i} A^i$ (using 7.12)
$= \frac{\partial x'^p}{\partial x^i} A^i.$ …(7.15)
This equation has the same form as eqn. (7.12). This indicates that the transformations of contravariant vectors form a group.
Note. A single superscript is always used to indicate a contravariant vector unless the contrary is explicitly stated.
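A numerical sketch of (7.12) and (7.13) for a linear change of coordinates (our illustrative choice, so the Jacobian $\partial\bar{x}^\mu/\partial x^i$ is a constant matrix L):

```python
import numpy as np

L = np.array([[2.0, 1.0],      # L^mu_i = d(x_bar^mu)/d(x^i) for x_bar = L x
              [0.0, 3.0]])

A = np.array([1.0, 4.0])                    # components A^i
A_bar = np.einsum('mi,i->m', L, A)          # eq. (7.12): A_bar^mu = L^mu_i A^i

# Eq. (7.13): multiplying by the inverse Jacobian d(x^j)/d(x_bar^mu)
# recovers the original components A^j.
A_back = np.einsum('jm,m->j', np.linalg.inv(L), A_bar)
assert np.allclose(A_back, A)
```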
Covariant vectors. Consider a set of n quantities $A_1, A_2, A_3, \ldots, A_n$ in a system of variables $x^i$ and let these quantities have values $\bar{A}_1, \bar{A}_2, \bar{A}_3, \ldots, \bar{A}_n$ in another system of variables $\bar{x}^\mu$. If these quantities obey the transformation equations
$\bar{A}_\mu = \frac{\partial x^i}{\partial \bar{x}^\mu} A_i$ …(7.16)
then the quantities $A_i$ are said to be the components of a covariant vector or a covariant tensor of rank one.
Any n functions can be chosen as the components of a covariant vector in a system of variables $x^i$, and equations (7.16) determine the n components in the new system of variables $\bar{x}^\mu$. Multiplying equation (7.16) by $\frac{\partial \bar{x}^\mu}{\partial x^j}$ and taking the sum over the index $\mu$ from 1 to n, we get
$\frac{\partial \bar{x}^\mu}{\partial x^j} \bar{A}_\mu = \frac{\partial \bar{x}^\mu}{\partial x^j} \frac{\partial x^i}{\partial \bar{x}^\mu} A_i = \frac{\partial x^i}{\partial x^j} A_i = A_j$
thus $A_j = \frac{\partial \bar{x}^\mu}{\partial x^j} \bar{A}_\mu.$ …(7.17)
Equations (7.17) represent the solution of equations (7.16).
Let us now consider a further change of variables from $\bar{x}^\mu$ to $x'^p$. Then the new components $A'_p$ must be given by
$A'_p = \frac{\partial \bar{x}^\mu}{\partial x'^p} \bar{A}_\mu = \frac{\partial \bar{x}^\mu}{\partial x'^p} \frac{\partial x^i}{\partial \bar{x}^\mu} A_i = \frac{\partial x^i}{\partial x'^p} A_i.$ …(7.18)
This equation has the same form as eqn. (7.16). This indicates that the transformations of covariant vectors form a group.
As $\frac{\partial \psi}{\partial \bar{x}^\mu} = \frac{\partial \psi}{\partial x^i} \frac{\partial x^i}{\partial \bar{x}^\mu} = \frac{\partial x^i}{\partial \bar{x}^\mu} \frac{\partial \psi}{\partial x^i}$
it follows from (7.16) that the $\frac{\partial \psi}{\partial x^i}$ form the components of a covariant vector, whose components in any other system are the corresponding partial derivatives $\frac{\partial \psi}{\partial \bar{x}^\mu}$. This covariant vector is called grad $\psi$.
Note. A single subscript is always used to indicate a covariant vector unless the contrary is explicitly stated; an exception occurs in the notation of the co-ordinates themselves.
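That grad ψ transforms covariantly can be checked numerically. A sketch for ψ = x² + y² with $\bar{x}^\mu = (r, \theta)$ plane polar coordinates (an illustrative choice): in polar coordinates ψ = r², so the covariant components should come out as (2r, 0).

```python
import numpy as np

r, theta = 2.0, np.pi / 3
x, y = r * np.cos(theta), r * np.sin(theta)

grad_cart = np.array([2 * x, 2 * y])        # A_i = d(psi)/d(x^i), psi = x^2 + y^2

# d(x^i)/d(x_bar^mu) with x_bar^mu = (r, theta): row i, column mu.
dxdxbar = np.array([[np.cos(theta), -r * np.sin(theta)],
                    [np.sin(theta),  r * np.cos(theta)]])

# Eq. (7.16): A_bar_mu = d(x^i)/d(x_bar^mu) A_i gives (d psi/dr, d psi/d theta).
grad_polar = np.einsum('im,i->m', dxdxbar, grad_cart)
assert np.allclose(grad_polar, [2 * r, 0.0])   # psi = r^2 has gradient (2r, 0)
```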
7.8. TENSORS OF HIGHER RANKS
The laws of transformation of vectors are:
Contravariant … $\bar{A}^\mu = \frac{\partial \bar{x}^\mu}{\partial x^i} A^i$ …(7.12)
Covariant … $\bar{A}_\mu = \frac{\partial x^i}{\partial \bar{x}^\mu} A_i$ …(7.16)
(a) Contravariant tensors of second rank. Let us consider $n^2$ quantities $A^{ij}$ (here i and j take values from 1 to n independently) in a system of variables $x^i$, and let these quantities have values $\bar{A}^{\mu\nu}$ in another system of variables $\bar{x}^\mu$. If these quantities obey the transformation equations
$\bar{A}^{\mu\nu} = \frac{\partial \bar{x}^\mu}{\partial x^i} \frac{\partial \bar{x}^\nu}{\partial x^j} A^{ij}$ …(7.19)
then the quantities $A^{ij}$ are said to be the components of a contravariant tensor of second rank.
The transformation law represented by (7.19) is a generalisation of the transformation law (7.12). Any set of $n^2$ quantities can be chosen as the components of a contravariant tensor of second rank in a system of variables $x^i$, and equations (7.19) then determine the $n^2$ components in any other system of variables $\bar{x}^\mu$.
(b) Covariant tensor of second rank. If $n^2$ quantities $A_{ij}$ in a system of variables $x^i$ are related to another $n^2$ quantities $\bar{A}_{\mu\nu}$ in another system of variables $\bar{x}^\mu$ by the transformation equations
$\bar{A}_{\mu\nu} = \frac{\partial x^i}{\partial \bar{x}^\mu} \frac{\partial x^j}{\partial \bar{x}^\nu} A_{ij}$ …(7.20)
then the quantities $A_{ij}$ are said to be the components of a covariant tensor of second rank.
The transformation law (7.20) is a generalisation of (7.16). Any set of $n^2$ quantities can be chosen as the components of a covariant tensor of second rank in a system of variables $x^i$, and equations (7.20) then determine the $n^2$ components in any other system of variables $\bar{x}^\mu$.
(c) Mixed tensor of second rank. If $n^2$ quantities $A^i_j$ in a system of variables $x^i$ are related to another $n^2$ quantities $\bar{A}^\mu_\nu$ in another system of variables $\bar{x}^\mu$ by the transformation equations
$\bar{A}^\mu_\nu = \frac{\partial \bar{x}^\mu}{\partial x^i} \frac{\partial x^j}{\partial \bar{x}^\nu} A^i_j$ …(7.21)
then the quantities $A^i_j$ are said to be the components of a mixed tensor of second rank.
An important example of a mixed tensor of second rank is the Kronecker delta $\delta^i_j$.
(d) Tensors of higher ranks; rank of a tensor. The tensors of higher ranks are defined by similar laws. The rank of a tensor indicates the number of indices attached to each of its components. For example, the $A^{ijk}_l$ are the components of a mixed tensor of rank 4 (contravariant of rank 3 and covariant of rank 1) if they transform according to the equation
$\bar{A}^{\mu\nu\sigma}_\rho = \frac{\partial \bar{x}^\mu}{\partial x^i} \frac{\partial \bar{x}^\nu}{\partial x^j} \frac{\partial \bar{x}^\sigma}{\partial x^k} \frac{\partial x^l}{\partial \bar{x}^\rho} A^{ijk}_l$ …(7.22)
The number of dimensions raised to the power of the rank gives the number of components of the tensor: a tensor of rank r in n-dimensional space has $n^r$ components. Thus the rank of a tensor gives the number of modes of change of a physical quantity when passing from one system to another which is in rotation relative to the first. Obviously a quantity that remains unchanged when the axes are rotated is a tensor of zero rank. The tensors of zero rank are scalars or invariants, and similarly the tensors of rank one are vectors.
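A sketch of the second-rank contravariant law (7.19) for a linear change of coordinates with a constant Jacobian J (illustrative values): in matrix notation the same law reads $\bar{A} = J A J^{T}$.

```python
import numpy as np

J = np.array([[1.0, 2.0],      # J^mu_i = d(x_bar^mu)/d(x^i), constant here
              [0.0, 1.0]])
A = np.array([[1.0, 0.0],      # components A^{ij}
              [3.0, 4.0]])

# Eq. (7.19): A_bar^{mu nu} = J^mu_i J^nu_j A^{ij}.
A_bar = np.einsum('mi,nj,ij->mn', J, J, A)
assert np.allclose(A_bar, J @ A @ J.T)   # matrix form of the same law
```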
7.9. SYMMETRIC AND ANTISYMMETRIC TENSORS
If two contravariant or covariant indices can be interchanged without altering the tensor, the tensor is said to be symmetric with respect to these two indices. For example, if
$A^{ij} = A^{ji}$ or $A_{ij} = A_{ji}$ …(7.23)
then the contravariant tensor of second rank $A^{ij}$ or the covariant tensor $A_{ij}$ is said to be symmetric.
For a tensor of higher rank $A^{ijk}_l$, if
$A^{ijk}_l = A^{jik}_l$
then the tensor $A^{ijk}_l$ is said to be symmetric with respect to the indices i and j.
The symmetry property of a tensor is independent of the co-ordinate system used. So if a tensor is symmetric with respect to two indices in any co-ordinate system, it remains symmetric with respect to these two indices in any other co-ordinate system.
This can be seen as follows. If the tensor $A^{ijk}_l$ is symmetric with respect to its first two indices i and j, we have
$A^{ijk}_l = A^{jik}_l$ …(7.24)
We have $\bar{A}^{\mu\nu\sigma}_\rho = \frac{\partial \bar{x}^\mu}{\partial x^i} \frac{\partial \bar{x}^\nu}{\partial x^j} \frac{\partial \bar{x}^\sigma}{\partial x^k} \frac{\partial x^l}{\partial \bar{x}^\rho} A^{ijk}_l = \frac{\partial \bar{x}^\mu}{\partial x^i} \frac{\partial \bar{x}^\nu}{\partial x^j} \frac{\partial \bar{x}^\sigma}{\partial x^k} \frac{\partial x^l}{\partial \bar{x}^\rho} A^{jik}_l$ (using 7.24)
Now interchanging the dummy indices i and j, we get
$\bar{A}^{\mu\nu\sigma}_\rho = \frac{\partial \bar{x}^\mu}{\partial x^j} \frac{\partial \bar{x}^\nu}{\partial x^i} \frac{\partial \bar{x}^\sigma}{\partial x^k} \frac{\partial x^l}{\partial \bar{x}^\rho} A^{ijk}_l = \bar{A}^{\nu\mu\sigma}_\rho$
i.e., the given tensor is again symmetric with respect to its first two indices in the new co-ordinate system. This result can also be proved for covariant indices. Thus the symmetry property of a tensor is independent of the co-ordinate system.
Let $A^{ijk}_l$ be symmetric with respect to two indices, one contravariant (i) and the other covariant (l); then we have
$A^{ijk}_l = A^{ljk}_i$ …(7.25)
We have $\bar{A}^{\mu\nu\sigma}_\rho = \frac{\partial \bar{x}^\mu}{\partial x^i} \frac{\partial \bar{x}^\nu}{\partial x^j} \frac{\partial \bar{x}^\sigma}{\partial x^k} \frac{\partial x^l}{\partial \bar{x}^\rho} A^{ijk}_l = \frac{\partial \bar{x}^\mu}{\partial x^i} \frac{\partial \bar{x}^\nu}{\partial x^j} \frac{\partial \bar{x}^\sigma}{\partial x^k} \frac{\partial x^l}{\partial \bar{x}^\rho} A^{ljk}_i$ [using (7.25)]
Now interchanging the dummy indices i and l, we have
$\bar{A}^{\mu\nu\sigma}_\rho = \frac{\partial \bar{x}^\mu}{\partial x^l} \frac{\partial \bar{x}^\nu}{\partial x^j} \frac{\partial \bar{x}^\sigma}{\partial x^k} \frac{\partial x^i}{\partial \bar{x}^\rho} A^{ijk}_l$ …(7.26)
According to the tensor transformation law,
$\bar{A}^{\rho\nu\sigma}_\mu = \frac{\partial \bar{x}^\rho}{\partial x^i} \frac{\partial \bar{x}^\nu}{\partial x^j} \frac{\partial \bar{x}^\sigma}{\partial x^k} \frac{\partial x^l}{\partial \bar{x}^\mu} A^{ijk}_l$ …(7.27)
Comparing (7.26) and (7.27), we see that
$\bar{A}^{\mu\nu\sigma}_\rho \neq \bar{A}^{\rho\nu\sigma}_\mu$
i.e., the symmetry is not preserved after a change of co-ordinate system. But the Kronecker delta, which is a mixed tensor, is symmetric with respect to its indices.
(b) Antisymmetric tensors or skew-symmetric tensors. A tensor, each component of which alters in sign but not in magnitude when two contravariant or covariant indices are interchanged, is said to be skew-symmetric or antisymmetric with respect to these two indices. For example, if
$A^{ij} = -A^{ji}$ or $A_{ij} = -A_{ji}$ …(7.28)
then the contravariant tensor $A^{ij}$ or the covariant tensor $A_{ij}$ of second rank is antisymmetric; or, for a tensor of higher rank $A^{ijk}_l$, if
$A^{ijk}_l = -A^{ikj}_l$
then the tensor $A^{ijk}_l$ is antisymmetric with respect to the indices j and k.
The skew-symmetry property of a tensor is also independent of the choice of co-ordinate system. So if a tensor is skew-symmetric with respect to two indices in any co-ordinate system, it remains skew-symmetric with respect to these two indices in any other co-ordinate system.
If the tensor $A^{ijk}_l$ is antisymmetric with respect to its first two indices i and j, we have
$A^{ijk}_l = -A^{jik}_l$ …(7.29)
We have $\bar{A}^{\mu\nu\sigma}_\rho = \frac{\partial \bar{x}^\mu}{\partial x^i} \frac{\partial \bar{x}^\nu}{\partial x^j} \frac{\partial \bar{x}^\sigma}{\partial x^k} \frac{\partial x^l}{\partial \bar{x}^\rho} A^{ijk}_l = -\frac{\partial \bar{x}^\mu}{\partial x^i} \frac{\partial \bar{x}^\nu}{\partial x^j} \frac{\partial \bar{x}^\sigma}{\partial x^k} \frac{\partial x^l}{\partial \bar{x}^\rho} A^{jik}_l$ [using (7.29)]
Now interchanging the dummy indices i and j, we get
$\bar{A}^{\mu\nu\sigma}_\rho = -\frac{\partial \bar{x}^\mu}{\partial x^j} \frac{\partial \bar{x}^\nu}{\partial x^i} \frac{\partial \bar{x}^\sigma}{\partial x^k} \frac{\partial x^l}{\partial \bar{x}^\rho} A^{ijk}_l = -\bar{A}^{\nu\mu\sigma}_\rho$
i.e., the given tensor is again antisymmetric with respect to its first two indices in the new co-ordinate system. Thus the antisymmetry property is retained under co-ordinate transformation.
Antisymmetry property, like symmetry property, cannot be defined with respect to
two indices of which one is contravariant and the other covariant.
If all the indices of a contravariant or covariant tensor can be interchanged so that
its components change sign at each interchange of a pair of indices, the tensor is
said to be antisymmetric, i.e.,
$A^{ijk} = -A^{jik} = +A^{jki}.$
Thus we may state that a contravariant or covariant tensor is antisymmetric if its
components change sign under an odd permutation of its indices and do not
change sign under an even permutation of its indices.
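A numerical sketch (illustrative rotation, random components): any second-rank tensor splits into symmetric and antisymmetric parts, and the antisymmetry survives a change of coordinates, here a rotation applied via the rank-2 transformation law.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))          # arbitrary components A_{ij}

S = (A + A.T) / 2                        # symmetric part:      S_{ij} =  S_{ji}
K = (A - A.T) / 2                        # antisymmetric part:  K_{ij} = -K_{ji}
assert np.allclose(S, S.T) and np.allclose(K, -K.T) and np.allclose(S + K, A)

# Antisymmetry is retained under a co-ordinate change (a rotation R here).
t = 0.4
R = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])
K_bar = R @ K @ R.T                      # rank-2 covariant transformation law
assert np.allclose(K_bar, -K_bar.T)
```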
7.10. ALGEBRAIC OPERATIONS ON TENSORS
(i) Addition and subtraction: The addition and subtraction of tensors is defined only for tensors of the same rank and same type. Same type means the same number of contravariant and covariant indices. The addition or subtraction of two tensors, like that of vectors, involves the individual elements: to add or subtract two tensors the corresponding elements are added or subtracted.
The sum or difference of two tensors of the same rank and same type is also a tensor of the same rank and same type.
For example, if there are two tensors $A^{ij}_k$ and $B^{ij}_k$ of the same rank and same type, then the laws of addition and subtraction are given by
$A^{ij}_k + B^{ij}_k = C^{ij}_k$ (Addition) …(7.35)
$A^{ij}_k - B^{ij}_k = D^{ij}_k$ (Subtraction) …(7.36)
where $C^{ij}_k$ and $D^{ij}_k$ are tensors of the same rank and same type as the given tensors.
The transformation laws for the given tensors are
$\bar{A}^{\mu\nu}_\sigma = \frac{\partial \bar{x}^\mu}{\partial x^i} \frac{\partial \bar{x}^\nu}{\partial x^j} \frac{\partial x^k}{\partial \bar{x}^\sigma} A^{ij}_k$ …(7.37)
and $\bar{B}^{\mu\nu}_\sigma = \frac{\partial \bar{x}^\mu}{\partial x^i} \frac{\partial \bar{x}^\nu}{\partial x^j} \frac{\partial x^k}{\partial \bar{x}^\sigma} B^{ij}_k$ …(7.38)
Adding (7.37) and (7.38), we get
$\bar{C}^{\mu\nu}_\sigma = \frac{\partial \bar{x}^\mu}{\partial x^i} \frac{\partial \bar{x}^\nu}{\partial x^j} \frac{\partial x^k}{\partial \bar{x}^\sigma} C^{ij}_k$ …(7.39)
which is a transformation law for the sum and is similar to the transformation laws for $A^{ij}_k$ and $B^{ij}_k$ given by (7.37) and (7.38). Hence the sum $C^{ij}_k = A^{ij}_k + B^{ij}_k$ is itself a tensor of the same rank and same type as the given tensors.
Subtracting eqn. (7.38) from (7.37), we get
$\bar{A}^{\mu\nu}_\sigma - \bar{B}^{\mu\nu}_\sigma = \frac{\partial \bar{x}^\mu}{\partial x^i} \frac{\partial \bar{x}^\nu}{\partial x^j} \frac{\partial x^k}{\partial \bar{x}^\sigma} \left(A^{ij}_k - B^{ij}_k\right)$
or $\bar{D}^{\mu\nu}_\sigma = \frac{\partial \bar{x}^\mu}{\partial x^i} \frac{\partial \bar{x}^\nu}{\partial x^j} \frac{\partial x^k}{\partial \bar{x}^\sigma} D^{ij}_k$ …(7.40)
which is a transformation law for the difference and is again similar to the transformation law for $A^{ij}_k$ and $B^{ij}_k$. Hence the difference $D^{ij}_k = A^{ij}_k - B^{ij}_k$ is itself a tensor of the same rank and same type as the given tensors.
(ii) Equality of tensors: Two tensors of the same rank and same type are said to be equal if their components are equal one to one, i.e., if
$A^{ij}_k = B^{ij}_k$ for all values of the indices.
If two tensors are equal in one co-ordinate system, they will be equal in any other co-ordinate system. Thus if a particular equation is expressed in tensorial form, it will be invariant under co-ordinate transformations.
(iii) Outer product: The outer product of two tensors is a tensor whose rank is the sum of the ranks of the given tensors. Thus if r and r′ are the ranks of two tensors, their outer product will be a tensor of rank (r + r′).
For example, if $A^{ij}_k$ and $B^l_m$ are two tensors of ranks 3 and 2 respectively, then
$A^{ij}_k B^l_m = C^{ijl}_{km}$ (say) …(7.41)
is a tensor of rank 5 (= 3 + 2).
For proof of this statement we write the transformation equations of the given tensors as
$\bar{A}^{\mu\nu}_\sigma = \frac{\partial \bar{x}^\mu}{\partial x^i} \frac{\partial \bar{x}^\nu}{\partial x^j} \frac{\partial x^k}{\partial \bar{x}^\sigma} A^{ij}_k$ …(7.42)
$\bar{B}^\rho_\lambda = \frac{\partial \bar{x}^\rho}{\partial x^l} \frac{\partial x^m}{\partial \bar{x}^\lambda} B^l_m$ …(7.43)
Multiplying (7.42) and (7.43), we get
$\bar{A}^{\mu\nu}_\sigma \bar{B}^\rho_\lambda = \frac{\partial \bar{x}^\mu}{\partial x^i} \frac{\partial \bar{x}^\nu}{\partial x^j} \frac{\partial x^k}{\partial \bar{x}^\sigma} \frac{\partial \bar{x}^\rho}{\partial x^l} \frac{\partial x^m}{\partial \bar{x}^\lambda} A^{ij}_k B^l_m$
or $\bar{C}^{\mu\nu\rho}_{\sigma\lambda} = \frac{\partial \bar{x}^\mu}{\partial x^i} \frac{\partial \bar{x}^\nu}{\partial x^j} \frac{\partial \bar{x}^\rho}{\partial x^l} \frac{\partial x^k}{\partial \bar{x}^\sigma} \frac{\partial x^m}{\partial \bar{x}^\lambda} C^{ijl}_{km}$ …(7.44)
which is a transformation law for a tensor of rank 5. Hence the outer product of the two tensors $A^{ij}_k$ and $B^l_m$ is a tensor $C^{ijl}_{km}$ of rank (3 + 2 =) 5.
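A sketch of the outer product with explicit index bookkeeping (illustrative components, n = 2):

```python
import numpy as np

A = np.arange(8.0).reshape(2, 2, 2)    # A^{ij}_k, rank 3, axes ordered (i, j, k)
B = np.arange(4.0).reshape(2, 2)       # B^l_m,   rank 2, axes ordered (l, m)

# Eq. (7.41): C^{ijl}_{km} = A^{ij}_k B^l_m — rank 3 + 2 = 5.
C = np.einsum('ijk,lm->ijklm', A, B)
assert C.shape == (2,) * 5             # n**5 = 32 components
assert C[1, 0, 1, 1, 0] == A[1, 0, 1] * B[1, 0]
```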
(iv) Contraction of tensors: The algebraic operation by which the rank of a mixed tensor is lowered by 2 is known as contraction. In the process of contraction one contravariant index and one covariant index of a mixed tensor are set equal, and the repeated index is summed over; the result is a tensor of rank lower by two than the original tensor.
For example, consider a mixed tensor $A^{ijk}_{lm}$ of rank 5, with contravariant indices i, j, k and covariant indices l, m. The transformation law of the given tensor is
$\bar{A}^{\mu\nu\sigma}_{\rho\lambda} = \frac{\partial \bar{x}^\mu}{\partial x^i} \frac{\partial \bar{x}^\nu}{\partial x^j} \frac{\partial \bar{x}^\sigma}{\partial x^k} \frac{\partial x^l}{\partial \bar{x}^\rho} \frac{\partial x^m}{\partial \bar{x}^\lambda} A^{ijk}_{lm}$ …(7.45)
To apply the process of contraction, we put $\lambda = \sigma$ and obtain
$\bar{A}^{\mu\nu\sigma}_{\rho\sigma} = \frac{\partial \bar{x}^\mu}{\partial x^i} \frac{\partial \bar{x}^\nu}{\partial x^j} \frac{\partial \bar{x}^\sigma}{\partial x^k} \frac{\partial x^l}{\partial \bar{x}^\rho} \frac{\partial x^m}{\partial \bar{x}^\sigma} A^{ijk}_{lm} = \frac{\partial \bar{x}^\mu}{\partial x^i} \frac{\partial \bar{x}^\nu}{\partial x^j} \frac{\partial x^l}{\partial \bar{x}^\rho}\, \delta^m_k\, A^{ijk}_{lm}$
since the substitution operator gives $\frac{\partial \bar{x}^\sigma}{\partial x^k} \frac{\partial x^m}{\partial \bar{x}^\sigma} = \delta^m_k$,
i.e., $\bar{A}^{\mu\nu\sigma}_{\rho\sigma} = \frac{\partial \bar{x}^\mu}{\partial x^i} \frac{\partial \bar{x}^\nu}{\partial x^j} \frac{\partial x^l}{\partial \bar{x}^\rho} A^{ijk}_{lk}$ …(7.46)
which is a transformation law for a mixed tensor of rank 3. Hence $A^{ijk}_{lk}$ is a mixed tensor of rank 3 and may be denoted by $A^{ij}_l$. In this example we can further apply the contraction process and obtain the contravariant vector $A^{ij}_j$, i.e. $A^i$. Thus the process of contraction enables us to obtain a tensor of rank (r − 2) from a mixed tensor of rank r.
As another example consider the contraction of the mixed tensor $A^i_j$ of rank 2, whose transformation law is
$\bar{A}^\mu_\nu = \frac{\partial \bar{x}^\mu}{\partial x^i} \frac{\partial x^j}{\partial \bar{x}^\nu} A^i_j.$
To apply the contraction process we put $\nu = \mu$ and obtain
$\bar{A}^\mu_\mu = \frac{\partial \bar{x}^\mu}{\partial x^i} \frac{\partial x^j}{\partial \bar{x}^\mu} A^i_j = \delta^j_i A^i_j = A^i_i$
so the contracted quantity is an invariant.
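Contraction as code (illustrative components): setting one upper index equal to one lower index and summing drops the rank by two, and for a mixed rank-2 tensor the contraction $A^i_i$ is just the trace.

```python
import numpy as np

A = np.arange(32.0).reshape(2, 2, 2, 2, 2)   # A^{ijk}_{lm}, axes (i, j, k, l, m)

# Put m = k and sum (as in eq. 7.46): the rank drops from 5 to 3.
A3 = np.einsum('ijklk->ijl', A)
assert A3.shape == (2, 2, 2)

# For a mixed rank-2 tensor, contraction gives the invariant A^i_i = trace.
M = np.array([[1.0, 2.0],
              [3.0, 4.0]])
assert np.einsum('ii->', M) == np.trace(M) == 5.0
```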
(v) Inner product: The outer product of two tensors followed by a contraction results in a new tensor called an inner product of the two tensors, and the process is called the inner multiplication of the two tensors.
Example (a). Consider two tensors $A^{ij}_k$ and $B^l_m$. The outer product of these two tensors is
$A^{ij}_k B^l_m = C^{ijl}_{km}$ (say)
Applying the contraction process by setting m = i, we obtain
$A^{ij}_k B^l_i = C^{ijl}_{ki} = D^{jl}_k$ (a new tensor)
The new tensor $D^{jl}_k$ is the inner product of the two tensors $A^{ij}_k$ and $B^l_m$.
(b) As another example consider two tensors of rank 1, $A^i$ and $B_j$. The outer product of $A^i$ and $B_j$ is
$A^i B_j = C^i_j$
Applying the contraction process by setting i = j, we get
$A^i B_i = C^i_i$ (a scalar, or a tensor of rank zero).
Thus the inner product of two tensors of rank one is a tensor of rank zero (i.e., an invariant).
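Example (b) in code (illustrative components): the outer product $A^i B_j$ followed by the contraction i = j reduces to the familiar dot product, a rank-zero invariant.

```python
import numpy as np

A = np.array([1.0, 2.0, 3.0])    # A^i, contravariant
B = np.array([1.0, 0.0, 2.0])    # B_j, covariant

C = np.einsum('i,j->ij', A, B)   # outer product C^i_j, rank 2
s = np.einsum('ii->', C)         # contraction i = j: rank 0
assert s == A @ B == 7.0
```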
(vi) Quotient law: In tensor analysis it is often necessary to ascertain whether a given entity is a tensor or not. The direct method requires us to find out whether the given entity obeys the tensor transformation law or not. In practice this is troublesome, and a simpler test is provided by a law known as the quotient law, which states:
An entity whose inner product with an arbitrary tensor (contravariant or covariant) is a tensor, is itself a tensor.
(vii) Extension of rank: The rank of a tensor can be extended by differentiating each of its components with respect to the variables $x^i$.
As an example consider a simple case in which the original tensor is of rank zero, i.e., a scalar $S(x^i)$ whose derivatives relative to the variables $x^i$ are $\frac{\partial S}{\partial x^i}$. In another system of variables $\bar{x}^\mu$ the scalar is $\bar{S}(\bar{x}^\mu)$, such that
$\frac{\partial \bar{S}}{\partial \bar{x}^\mu} = \frac{\partial S}{\partial x^i} \frac{\partial x^i}{\partial \bar{x}^\mu} = \frac{\partial x^i}{\partial \bar{x}^\mu} \frac{\partial S}{\partial x^i}$ …(7.47)
This shows that $\frac{\partial S}{\partial x^i}$ transforms like the components of a (covariant) tensor of rank one. Thus the differentiation of a tensor of rank zero gives a tensor of rank one. In general we may say that the differentiation of a tensor with respect to the variables $x^i$ yields a new tensor of rank one greater than the original tensor.
The rank of a tensor can also be extended when a tensor depends upon another tensor and the differentiation with respect to that tensor is performed. As an example consider a tensor S of rank zero (i.e., a scalar) depending upon another tensor $A^{ij}$; then
$\frac{\partial S}{\partial A^{ij}} = B_{ij} = \text{a tensor of rank 2.}$ …(7.48)
Thus the rank of the tensor of rank zero has been extended by 2.
THANKS