UNIT 6
VECTOR SPACES
Structure
6.1 Introduction
    Expected Learning Outcomes
6.2 Introduction to Vector Spaces
    Linear Independence
    Linear Subspaces
6.3 Bases and Dimension
    Change of Bases
    Change in Components with Basis Change
    Direct Sum of Vector Spaces
6.4 Linear Operators
    Product of Operators
    Inner Product or Metric
6.5 Orthonormal Vectors
    Orthonormal Bases
    Construction of Orthonormal Bases
    Signature of the Metric
6.6 Complex Vector Spaces: Hilbert Space
    Hermitian Inner Product
    Pythagoras Theorem and Bessel's Inequality
    Schwarz and Triangle Inequalities
    Orthonormal Bases
6.7 Summary
6.8 Terminal Questions
6.9 Solutions and Answers
6.1 INTRODUCTION
You have learnt the concept of vectors and the operations of addition and
multiplication by scalars on them in your school and UG physics and
mathematics courses. In this unit, we generalise these operations to define
what is known as vector space or linear space.
You know that the concept of vectors is derived from the idea of translation or
displacement of points in space. If something is displaced from point A to B
and then again from B to point C, then the resultant displacement is from A to
C. We connect points A to B by a straight line arrow pointing towards B and
also a similar arrow from B to C. The resultant displacement is then defined as
the sum of these two arrow like quantities called displacement vectors. It is
usually written as:
$$\vec{AB} + \vec{BC} = \vec{AC}$$
There are three essentials in defining the displacement vector: the starting
point, the end point and the length of the displacement.
The concept of displacement vector can be generalized to situations where
there is no displacement of any kind. Very often a vector is defined to be some
physical quantity which has a length and direction.
This definition is not a precise definition as we shall see. There are vectors for
which there is no concept of 'direction'. And 'length' of a vector is not an essential quality of vectors but an additional feature which is to be separately defined.
In this unit, you will study the generalization from physical vectors to abstract
vectors, which are elements of vector spaces. In Sec. 6.2, we define vector spaces and then explain the related concepts of linear independence and linear subspaces. In Sec. 6.3, we discuss bases and dimension, change of bases, change in components with basis change and the direct sum of vector spaces. In Sec. 6.4, we explain linear operators in vector spaces, products of operators and the inner product or metric. Sec. 6.5 deals with orthonormal vectors, orthonormal bases, construction of orthonormal bases and the signature of the metric. Finally, in Sec. 6.6, we discuss complex vector spaces (in particular, the Hilbert space), the hermitian inner product, the Pythagoras theorem and Bessel's inequality, and the Schwarz and triangle inequalities, and explain the concept of orthonormal bases in Hilbert space.
In the next unit, we discuss matrices.
Expected Learning Outcomes
After studying this unit, you should be able to:
• define and identify a vector space, and define and ascertain linear independence of vectors;

• define a basis for a vector space and its dimension;

• change bases, determine the components with change in bases, and define the direct sum of vector spaces;

• define linear operators and determine products of operators, and define and determine the inner product and norm of elements in vector spaces;

• define orthogonal vectors, define and construct orthonormal bases, and define the signature of the metric;

• define complex vector spaces and Hilbert space, and determine the hermitian inner product;

• state and prove the Pythagoras theorem and Bessel's inequality, and the Schwarz and triangle inequalities; and

• define and determine orthonormal bases.
6.2 INTRODUCTION TO VECTOR SPACES
We first define real vector spaces. Let R be the set of all real numbers with its
usual properties. By definition, a real vector space V is a set whose
elements are called vectors and on which two operations are defined:
1. addition of vectors, and
2. multiplication of vectors by real numbers.
In this course, we will denote elements of the vector space by bold face symbols like $\mathbf{u}, \mathbf{v}, \mathbf{w}$, etc. The addition of two vectors (i.e., elements of a vector space; read the margin remark) is shown by a '+' sign. Multiplication of a vector $\mathbf{v}$ by a real number $a$ is written as $a\mathbf{v}$. We use parentheses '(' and ')'
to separate quantities when needed.

[Margin remark: Note that we are calling the elements of a vector space vectors. These may be directed line segments (vectors in the sense that you have learnt in your UG courses so far). A set of real numbers, complex numbers, matrices, functions, polynomials, etc. could also be a vector space if the conditions explained in this section are fulfilled.]

Elements of a vector space must satisfy the condition of closure, which requires that:

• Addition of any two elements $\mathbf{u}$ and $\mathbf{v}$ in $V$ will result in an element that belongs to $V$:

  If $\mathbf{u} \in V$ and $\mathbf{v} \in V$, then $\mathbf{u} + \mathbf{v} \in V$ \qquad (6.1a)

• If an element $\mathbf{u}$ of a vector space $V$ is multiplied by a scalar $a$, then the resulting element $a\mathbf{u}$ also belongs to $V$:

  If $\mathbf{u} \in V$ and $a$ is a scalar, then $a\mathbf{u} \in V$ \qquad (6.1b)

The operations of vector addition and scalar multiplication must satisfy the following eight conditions:

1. Addition is commutative: for any two vectors $\mathbf{u}, \mathbf{v}$ in $V$:

   $$\mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u} \qquad (6.1c)$$

2. Addition is associative: for any three vectors $\mathbf{u}, \mathbf{v}$ and $\mathbf{w}$ in $V$:

   $$(\mathbf{u} + \mathbf{v}) + \mathbf{w} = \mathbf{u} + (\mathbf{v} + \mathbf{w}) \qquad (6.1d)$$

3. There is a unique vector $\mathbf{0}$ in $V$ called the zero vector (additive identity) such that for every vector $\mathbf{v}$ belonging to the vector space $V$:

   $$\mathbf{v} + \mathbf{0} = \mathbf{0} + \mathbf{v} = \mathbf{v} \qquad (6.1e)$$

[Margin remark: Although every vector space contains a unique zero (identity) vector, and we should specify the zero vector of space $V$ by $\mathbf{0}_V$, that of another vector space $W$ by $\mathbf{0}_W$, and so on, this is not done in common practice. What is more, very often the symbol for the zero vector is written like the number zero (0). Usually there is no confusion because in an equation like $\mathbf{v} = 0$, if the left hand side is a vector, the right hand side cannot be the number 0 but only the zero vector of that vector space to which $\mathbf{v}$ belongs.]

4. For every vector $\mathbf{v}$ there exists a unique vector $-\mathbf{v}$ (additive inverse) such that

   $$\mathbf{v} + (-\mathbf{v}) = \mathbf{0} \qquad (6.1f)$$

5. Distributive property (with respect to addition of vectors): for any real number $a$ and any vectors $\mathbf{u}, \mathbf{v}$:

   $$a(\mathbf{u} + \mathbf{v}) = a\mathbf{u} + a\mathbf{v} \qquad (6.1g)$$

6. Distributive property (with respect to addition of real numbers): for any real numbers $a, b$ and any vector $\mathbf{u}$:

   $$(a + b)\,\mathbf{u} = a\mathbf{u} + b\mathbf{u} \qquad (6.1h)$$

7. Successive multiplication by real numbers: for any real numbers $a, b$ and any vector $\mathbf{u}$:

   $$a(b\mathbf{u}) = (ab)\,\mathbf{u} \qquad (6.1i)$$

8. Multiplication by the real numbers 0, 1 and $-1$: for any vector $\mathbf{u}$ in $V$:

   $$0\,\mathbf{u} = \mathbf{0} \qquad (6.1j)$$
   $$1\,\mathbf{u} = \mathbf{u} \quad \text{(multiplicative identity)} \qquad (6.1k)$$
   $$(-1)\,\mathbf{u} = -\mathbf{u} \qquad (6.1l)$$

Along with closure, these are the eight properties that elements of a vector space must possess or eight conditions that the elements must satisfy.
Some direct consequences of the definition are as follows:

Due to the associative and commutative properties, the sum of any finite number of vectors can be written in any order. For example,

$$\mathbf{u} + \mathbf{v} + \mathbf{w} = \mathbf{u} + (\mathbf{v} + \mathbf{w})$$

If $a$ is a real number, then

$$a\,\mathbf{0} = \mathbf{0}$$

Proof

$$a\,\mathbf{0} + a\,\mathbf{0} = a(\mathbf{0} + \mathbf{0}) = a\,\mathbf{0}$$

Adding $-a\,\mathbf{0}$ on both sides, the left hand side of this equation is:

$$a\,\mathbf{0} + a\,\mathbf{0} + (-a\,\mathbf{0}) = a\,\mathbf{0} + \mathbf{0} = a\,\mathbf{0}$$

while the right hand side is:

$$a\,\mathbf{0} + (-a\,\mathbf{0}) = \mathbf{0}$$

Therefore $a\,\mathbf{0} = \mathbf{0}$. As a special case, if we take $a = -1$, then $-\mathbf{0} = \mathbf{0}$.

Another simple consequence is that $\mathbf{v} + \mathbf{v} = 1\,\mathbf{v} + 1\,\mathbf{v} = (1 + 1)\,\mathbf{v} = 2\,\mathbf{v}$.

The set of all real numbers is a vector space and the set of $2 \times 2$ real matrices is also a vector space (SAQ 1a, b). To sum up, a vector space is a set whose elements satisfy the ten properties set out in Eqs. (6.1a to l). The elements of a vector space could be directed line segments (i.e., vectors, in the sense that you know from school or UG physics), functions, matrices, polynomials, etc.
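Before moving to examples, here is a minimal numerical sketch that spot-checks conditions (6.1c to l) on sample column vectors. NumPy arrays stand in for vectors (an illustrative choice on our part); a check on samples illustrates, but of course does not prove, the axioms.

```python
import numpy as np

# A minimal sketch: spot-check the vector-space conditions (6.1c to l)
# for columns of three real numbers, using NumPy arrays as vectors.
rng = np.random.default_rng(0)
u, v, w = rng.normal(size=(3, 3))
a, b = 2.5, -1.5
zero = np.zeros(3)

assert np.allclose(u + v, v + u)                  # (6.1c) commutativity
assert np.allclose((u + v) + w, u + (v + w))      # (6.1d) associativity
assert np.allclose(v + zero, v)                   # (6.1e) additive identity
assert np.allclose(v + (-v), zero)                # (6.1f) additive inverse
assert np.allclose(a * (u + v), a * u + a * v)    # (6.1g) distributivity
assert np.allclose((a + b) * u, a * u + b * u)    # (6.1h) distributivity
assert np.allclose(a * (b * u), (a * b) * u)      # (6.1i) successive multiplication
assert np.allclose(0 * u, zero)                   # (6.1j)
assert np.allclose(1 * u, u)                      # (6.1k)
assert np.allclose(-1 * u, -u)                    # (6.1l)
print("all conditions hold for these sample vectors")
```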
Let us now consider a couple of examples of real vector spaces.
Example 6.1
a) Cartesian space

Let us represent points in a coordinate system as a column of three real numbers (as you may have done in your school and UG courses):

$$\mathbf{u} = \begin{pmatrix} u^1 \\ u^2 \\ u^3 \end{pmatrix}, \quad \mathbf{v} = \begin{pmatrix} v^1 \\ v^2 \\ v^3 \end{pmatrix}, \text{ etc.} \qquad (6.2a)$$

The addition of vectors in this vector space is defined as:

$$\mathbf{u} + \mathbf{v} = \begin{pmatrix} u^1 + v^1 \\ u^2 + v^2 \\ u^3 + v^3 \end{pmatrix} \qquad (6.2b)$$

and the multiplication of a vector by a number $a$ as:

$$a\,\mathbf{u} = a \begin{pmatrix} u^1 \\ u^2 \\ u^3 \end{pmatrix} = \begin{pmatrix} a\,u^1 \\ a\,u^2 \\ a\,u^3 \end{pmatrix} \qquad (6.2c)$$

The set of all such points [represented by Eq. (6.2a)] with the operations in Eqs. (6.2b and c) is called the Cartesian space. You can verify that all the conditions [Eqs. (6.1a to l)] for a set to be a vector space are satisfied by elements of the Cartesian space.

The zero (identity) vector of the Cartesian space is:

$$\mathbf{0} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}, \qquad (6.2d)$$

and for $\mathbf{u}$, the unique vector $-\mathbf{u}$ (additive inverse) is:

$$-\mathbf{u} = \begin{pmatrix} -u^1 \\ -u^2 \\ -u^3 \end{pmatrix} \qquad (6.2e)$$

You can verify yourself that the remaining conditions explained above are satisfied. So, the Cartesian space is a vector space.
b) Space C[0, 1] of all continuous real valued functions on [0, 1]: $f : [0, 1] \to \mathbb{R}$

Let $f, g$, etc. be continuous real-valued functions on the interval [0, 1] of the real line. This means that for any $x \in [0, 1]$, there are unique real numbers $f(x), g(x)$, etc. defined by the functions. Let us define the addition of two functions as:

$$(f + g)(x) = f(x) + g(x) \quad \forall\, x \in [0, 1], \qquad (6.2f)$$

and multiplication by a number $a$ as

$$(a f)(x) = a\, f(x) \qquad (6.2g)$$

Then the set of all these functions forms a vector space. Note that in this case the set is made up of functions.

The zero vector of this space is the function $0(x) = 0$ for all $x \in [0, 1]$. Before studying further, you should verify that all the conditions given in Eqs. (6.1a to l) are satisfied by these functions.
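As a sketch of this construction, functions can be represented in Python as callables, with the operations of Eqs. (6.2f and g) defined pointwise:

```python
# A sketch of C[0, 1]: represent vectors as Python callables and define
# the operations of Eqs. (6.2f) and (6.2g) pointwise.
def add(f, g):
    return lambda x: f(x) + g(x)

def scale(a, f):
    return lambda x: a * f(x)

f = lambda x: x**2
g = lambda x: x**3
h = add(f, scale(-2.0, g))     # the function x -> x^2 - 2x^3

print(h(0.5))                  # 0.25 - 2*0.125 = 0.0
```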
We will end this discussion on the concept of a vector space by giving a counter example of a set that is not a vector space. Consider a set of column vectors with the number 3 in the first row:

$$\mathbf{u} = \begin{pmatrix} 3 \\ u^1 \end{pmatrix}, \quad \mathbf{v} = \begin{pmatrix} 3 \\ v^1 \end{pmatrix}, \text{ and so on.}$$

Is this set a vector space? What do we get when we add any two elements $\mathbf{u}$ and $\mathbf{v}$ of this set? We get,

$$\mathbf{u} + \mathbf{v} = \begin{pmatrix} 3 + 3 \\ u^1 + v^1 \end{pmatrix} = \begin{pmatrix} 6 \\ u^1 + v^1 \end{pmatrix}$$

But the set we started with has only the number 3 in the first row. Therefore, this element does not belong to the set and the set does not satisfy the closure property. Therefore, this set is not a vector space even though its elements are vectors.
SAQ 1
a) Show that the set of all real numbers R is a vector space.
b) Show that the set of all real $2 \times 2$ matrices is a vector space.

We now discuss the linear independence of elements of a vector space.
6.2.1 Linear Independence

Two non-zero vectors $\mathbf{u}$ and $\mathbf{v}$ are called dependent on each other if one of them can be obtained by multiplying the other by a real number:

$$\mathbf{u} = a\,\mathbf{v} \quad \text{or} \quad \mathbf{v} = (1/a)\,\mathbf{u}$$

Otherwise, they are called linearly independent.

This important concept of linear independence can be generalized to any finite number of vectors. But first a definition:

Let $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_r$ be $r$ non-zero vectors. A linear combination of these vectors is a vector of the type

$$\mathbf{w} = a_1 \mathbf{v}_1 + a_2 \mathbf{v}_2 + \ldots + a_r \mathbf{v}_r$$

where $a_1, a_2, \ldots, a_r$ are real numbers. These numbers are called coefficients.

The set of vectors $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_r$ is called linearly independent if none of the vectors can be written as a linear combination of the others.

An equivalent and practical way of saying it is this:

The vectors $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_r$ are linearly independent if and only if the linear combination

$$\mathbf{w} = a_1 \mathbf{v}_1 + a_2 \mathbf{v}_2 + \ldots + a_r \mathbf{v}_r \qquad (6.3a)$$

equated to the zero vector,

$$a_1 \mathbf{v}_1 + a_2 \mathbf{v}_2 + \ldots + a_r \mathbf{v}_r = \mathbf{0} \qquad (6.3b)$$

implies that all the coefficients are zero:

$$a_1 = a_2 = \ldots = a_r = 0 \qquad (6.3c)$$

Proof

If we assume the opposite, and some particular $a_i \neq 0$, then we can write

$$\mathbf{v}_i = -\frac{1}{a_i}\,(a_1 \mathbf{v}_1 + \ldots + a_{i-1} \mathbf{v}_{i-1} + a_{i+1} \mathbf{v}_{i+1} + \ldots + a_r \mathbf{v}_r)$$

which shows that $\mathbf{v}_i$ can be written as a linear combination of the other vectors, which is a contradiction. Even if all coefficients except some fixed $a_i$ are zero, it is still a contradiction because then $a_i \mathbf{v}_i = \mathbf{0}$ implies $\mathbf{v}_i = \mathbf{0}$, but $\mathbf{v}_i$ is a non-zero vector.

Another way to say the same thing, and which is quite useful, is this. If $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_r$ are linearly independent and if a vector $\mathbf{v}$ is a linear combination of these,

$$\mathbf{v} = a_1 \mathbf{v}_1 + a_2 \mathbf{v}_2 + \ldots + a_r \mathbf{v}_r$$

then the coefficients $a_i$ are unique. That is, if we were to write

$$\mathbf{v} = b_1 \mathbf{v}_1 + b_2 \mathbf{v}_2 + \ldots + b_r \mathbf{v}_r = a_1 \mathbf{v}_1 + a_2 \mathbf{v}_2 + \ldots + a_r \mathbf{v}_r$$

then $b_1 = a_1,\ b_2 = a_2,\ \ldots,\ b_r = a_r$.
Let us now take up an example, following which you can solve an SAQ.

Example 6.2

Show that the vectors

$$\mathbf{u} = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} \quad \text{and} \quad \mathbf{v} = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}$$

in the Cartesian space are linearly independent.

Solution: Following Eq. (6.3a), we can write a linear combination of $\mathbf{u}$ and $\mathbf{v}$ as:

$$a\,\mathbf{u} + b\,\mathbf{v} = a \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} + b \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} = \begin{pmatrix} a + b \\ b \\ a \end{pmatrix}$$

From Eq. (6.3b), it follows that $\mathbf{u}$ and $\mathbf{v}$ will be linearly independent only if

$$a\,\mathbf{u} + b\,\mathbf{v} = \mathbf{0} \;\Rightarrow\; \begin{pmatrix} a + b \\ b \\ a \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$$

This is true only if $a = 0,\ b = 0$. Therefore, according to Eq. (6.3c), $\mathbf{u}$ and $\mathbf{v}$ are linearly independent.
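In practice, linear independence of column vectors can also be tested numerically: stack them as columns of a matrix and compare its rank with the number of vectors. A short sketch in Python:

```python
import numpy as np

# Columns of A are the vectors u and v from Example 6.2.
A = np.column_stack(([1, 0, 1], [1, 1, 0]))

# The vectors are linearly independent iff the rank equals the number
# of columns (then a*u + b*v = 0 forces a = b = 0).
print(np.linalg.matrix_rank(A) == A.shape[1])   # True
```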
SAQ 2

In Example 6.1b, show that the vectors

$$f(x) = x^2 \quad \text{and} \quad g(x) = x^3$$

are linearly independent.

Hint: If $a f + b g = 0$, then it means that $a x^2 + b x^3 = 0$ for all values of $x$ in the interval [0, 1]. It is enough to choose just two values appropriately to find $a$ and $b$.
6.2.2 Linear Subspaces

A subset $V_1 \subset V$ is called a subspace of $V$ if all vectors in it satisfy the conditions of a vector space. In particular, the zero vector $\mathbf{0}$ of $V$ must be in $V_1$ also.

Given a finite number $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_r$ of linearly independent vectors in a vector space $V$, we denote by $L(\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_r)$ the set of all possible linear combinations

$$a_1 \mathbf{v}_1 + a_2 \mathbf{v}_2 + \ldots + a_r \mathbf{v}_r$$

of these vectors with arbitrary real numbers $a_1, a_2, \ldots, a_r$.

The set $W = L(\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_r)$ is a subspace of $V$, called the linear span of $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_r$.

In the next few subsections, we discuss a few more concepts related to vector spaces.
6.3 BASES AND DIMENSION

If a vector space has only one vector, that has to be the zero vector, because every vector space must have a zero vector. Thus the set $\{\mathbf{0}\}$ is by itself a vector space, called a trivial vector space.

A vector space $V$ with more than one element will actually have at least one non-zero element. Let us call it $\mathbf{u}_1$. We now look for a non-zero vector linearly independent of $\mathbf{u}_1$, that is, a vector which is not a multiple of $\mathbf{u}_1$. Let this other vector be $\mathbf{u}_2$. Next we look around for another non-zero vector which is linearly independent of both $\mathbf{u}_1$ and $\mathbf{u}_2$. If there is no such vector, then the process stops here and every vector in the vector space $V$ will be a linear combination of these two vectors. Otherwise there will be another non-zero vector $\mathbf{u}_3$ which is linearly independent of both $\mathbf{u}_1$ and $\mathbf{u}_2$. This process can go on indefinitely, or it may come to a stop.

If we reach an end to this process in $n$ steps, then there will be a set $\{\mathbf{u}_1, \mathbf{u}_2, \ldots, \mathbf{u}_n\}$ such that every vector of the space $V$ is a linear combination of these $n$ linearly independent vectors. In this case we call the set of non-zero vectors

$$\{\mathbf{u}_1, \mathbf{u}_2, \ldots, \mathbf{u}_n\}$$

a basis. The integer $n$ is called the dimension of the vector space $V$, and we say that the vector space is finite dimensional. If the process does not stop and we keep finding new linearly independent vectors indefinitely, then we say that the vector space is infinite dimensional.

In a finite dimensional space, no matter how we choose the basis, the number of basis vectors will always be the same.

Always remember the above statement, which we have given without proof as the proof is not in the syllabus. From school and UG physics, you know that $\{\hat{i}, \hat{j}, \hat{k}\}$ with $\hat{i} = (1, 0, 0)$, $\hat{j} = (0, 1, 0)$ and $\hat{k} = (0, 0, 1)$ is a basis of $\mathbb{R}^3$, the vector space of all 3-dimensional vectors. You may like to solve a couple of problems on linear independence and determination of bases.
SAQ 3

In the vector space of Example 6.1a, show that the vectors

$$\mathbf{u}_1 = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}, \quad \mathbf{u}_2 = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} \quad \text{and} \quad \mathbf{u}_3 = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}$$

form a basis, that is, (1) they are linearly independent and (2) any arbitrary vector

$$\mathbf{v} = \begin{pmatrix} v^1 \\ v^2 \\ v^3 \end{pmatrix}$$

can be uniquely written as a linear combination of the vectors $\mathbf{u}_1, \mathbf{u}_2$ and $\mathbf{u}_3$. What is the dimension of the vector space?

An elementary knowledge of matrix multiplication is required in Sec. 6.3.1. There can be more than one basis for a vector space (TQ 4). We now discuss change of bases and how the components of a vector change with a change in basis.
6.3.1 Change of Bases

Let $V$ be a vector space of dimension $n$ and let $E = \{\mathbf{e}_i\}_{i=1}^{n}$ be a basis in $V$. We write a vector $\mathbf{v} \in V$ as a linear combination of the basis vectors as follows:

$$\mathbf{v} = \sum_{i=1}^{n} x^i \mathbf{e}_i \qquad (6.4)$$

The numbers $x^i,\ i = 1, \ldots, n$ are called the components of the vector $\mathbf{v}$ with respect to the basis $E$. You must note that the components of a vector depend on the basis chosen.

Let $F = \{\mathbf{f}_i\}_{i=1}^{n}$ be another basis of $V$. Every one of the vectors $\mathbf{f}_i$ can be expanded in terms of the basis $E$ as:

$$\mathbf{f}_i = \sum_{j} T_{ij}\, \mathbf{e}_j \qquad (6.5a)$$

The numbers $T_{ij}$ can be treated as elements of a matrix $T$, with row index $i$ and column index $j$. We can do the opposite too. That is, we can write every vector of the $E$ basis as a linear combination of the basis vectors of $F$:

$$\mathbf{e}_i = \sum_{j} S_{ij}\, \mathbf{f}_j \qquad (6.5b)$$

We now ask: What is the relation of the numbers $S_{ij}$ to $T_{ij}$? Expressing each $\mathbf{e}_j$ in Eq. (6.5a) as a combination of the $\mathbf{f}_k$, we get

$$\mathbf{f}_i = \sum_{j} T_{ij}\, \mathbf{e}_j = \sum_{j,k} T_{ij} S_{jk}\, \mathbf{f}_k$$

or

$$\mathbf{f}_i = \sum_{k} (TS)_{ik}\, \mathbf{f}_k \qquad (6.5c)$$

[Margin remark: You may recall the matrix multiplication notation: $[AB]_{ik} = \sum_j A_{ij} B_{jk}$.]

You should note that we use the convention of writing the elements of the matrices as $T_{ij}$ and $S_{ij}$, respectively. The product matrix is $TS$, which connects the vectors $\mathbf{f}$ to themselves.

Note that we can write $\mathbf{f}_1$ as follows in the $F$ basis:

$$\mathbf{f}_1 = 1\,\mathbf{f}_1 + 0\,\mathbf{f}_2 + \ldots + 0\,\mathbf{f}_n \qquad (6.5d)$$

It follows (read the margin remark) that:

$$(TS)_{11} = 1, \quad (TS)_{12} = 0, \; \ldots, \; (TS)_{1n} = 0 \qquad (6.5e)$$

[Margin remark: From Eq. (6.5c), we have $\mathbf{f}_1 = (TS)_{11}\mathbf{f}_1 + (TS)_{12}\mathbf{f}_2 + \ldots + (TS)_{1n}\mathbf{f}_n$. On comparison of this expression with Eq. (6.5d), we get Eq. (6.5e).]

We get similar results for the other basis vectors $\mathbf{f}_2, \mathbf{f}_3, \ldots, \mathbf{f}_n$. The result is that $TS$ is equal to the identity matrix, or, in other words, the matrices $T$ and $S$ are inverses of each other. Remember that these are square matrices:

$$S = T^{-1} \qquad (6.6)$$
6.3.2 Change in Components with Basis Change

Let the components of a vector $\mathbf{v}$ with respect to the basis $E$ be $x^i$:

$$\mathbf{v} = \sum_{i=1}^{n} x^i \mathbf{e}_i$$

The same vector $\mathbf{v}$ with respect to the basis $F$ has components $y^i$:

$$\mathbf{v} = \sum_{i=1}^{n} y^i \mathbf{f}_i$$

We now ask: How are the components $x^i$ and $y^i$ related? Using Eq. (6.5a), we have:

$$\mathbf{v} = \sum_{j} x^j \mathbf{e}_j = \sum_{i} y^i \mathbf{f}_i = \sum_{i,j} y^i T_{ij}\, \mathbf{e}_j \qquad (6.7a)$$

or

$$\sum_{j} \Big( x^j - \sum_{i} y^i T_{ij} \Big)\, \mathbf{e}_j = \mathbf{0} \qquad (6.7b)$$

Since $\mathbf{e}_j,\ j = 1, 2, \ldots, n$ are linearly independent, each coefficient is zero. This determines the relation:

$$x^j = \sum_{i} y^i T_{ij} \qquad (6.7c)$$

and

$$y^i = \sum_{j} x^j (T^{-1})_{ji} = \sum_{j} \big[(T^{-1})^{\mathsf{T}}\big]_{ij}\, x^j \qquad (6.7d)$$

Eqs. (6.7c and d) are summarized by saying that when the basis changes by a matrix $T$, the components of the same vector change by the inverse transpose of that matrix.
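Here is a numerical sketch of Eqs. (6.5a) to (6.7d) in $\mathbb{R}^2$ (the basis $F$ below is chosen purely for illustration): the matrix $T$ is built from the new basis, $S = T^{-1}$ is verified, and the components are seen to transform by the inverse transpose.

```python
import numpy as np

# Basis E: standard basis of R^2; basis F: an illustrative second basis.
E = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
F = [np.array([1.0, 1.0]), np.array([-1.0, 1.0])]

# Rows of T are the components of the f_i in the E basis: f_i = sum_j T_ij e_j.
T = np.array(F)
S = np.linalg.inv(T)                          # Eq. (6.6): S = T^{-1}
print(np.allclose(T @ S, np.eye(2)))          # True: TS is the identity

v = np.array([3.0, 1.0])                      # components x^j in the E basis
y = np.linalg.inv(T.T) @ v                    # Eq. (6.7d): y = (T^T)^{-1} x

# Reconstruct v from the F basis to check: v = sum_i y^i f_i.
print(np.allclose(sum(yi * fi for yi, fi in zip(y, F)), v))   # True
```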
6.3.3 Direct Sum of Vector Spaces

Let $U_1 \subset V$ and $U_2 \subset V$ be two subspaces of $V$ such that

1. apart from the zero vector, which must belong to every subspace, there is no other vector common between $U_1$ and $U_2$, and

2. every vector $\mathbf{v} \in V$ can be uniquely written as a sum of two vectors $\mathbf{u}_1$ and $\mathbf{u}_2$ belonging to $U_1$ and $U_2$, respectively:

$$\mathbf{v} = \mathbf{u}_1 + \mathbf{u}_2, \quad \mathbf{u}_1 \in U_1, \quad \mathbf{u}_2 \in U_2 \qquad (6.8a)$$

In such a case, we say that the space $V$ is the direct sum ($\oplus$) of the subspaces $U_1$ and $U_2$, and write

$$V = U_1 \oplus U_2 \qquad (6.8b)$$

Let the dimension of the space $V$ be $n$, and those of $U_1$ and $U_2$ be $n_1$ and $n_2$, respectively. We choose the bases $\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_{n_1}$ and $\mathbf{f}_1, \mathbf{f}_2, \ldots, \mathbf{f}_{n_2}$, respectively, in the two subspaces. Since every vector $\mathbf{v}$ can be written as the sum of two vectors in the respective subspaces, and those vectors can be expanded in the two bases, we have:

$$\mathbf{v} = \mathbf{u}_1 + \mathbf{u}_2 = \sum_{i=1}^{n_1} x^i \mathbf{e}_i + \sum_{j=1}^{n_2} y^j \mathbf{f}_j \qquad (6.9a)$$

This shows that $\mathbf{e}_1, \ldots, \mathbf{e}_{n_1}, \mathbf{f}_1, \ldots, \mathbf{f}_{n_2}$ together form a basis in $V$ and therefore the dimension of $V$ is the sum of the dimensions of $U_1$ and $U_2$:

$$n = n_1 + n_2 \qquad (6.9b)$$
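A minimal sketch of Eqs. (6.8a) and (6.9b) in $\mathbb{R}^3$, taking $U_1$ to be the $xy$-plane and $U_2$ the $z$-axis (an illustrative choice; the decomposition is unique because these subspaces meet only in the zero vector):

```python
import numpy as np

v = np.array([2.0, -3.0, 5.0])
u1 = np.array([v[0], v[1], 0.0])   # component in U1 = xy-plane
u2 = np.array([0.0, 0.0, v[2]])    # component in U2 = z-axis

print(np.allclose(u1 + u2, v))     # True: v = u1 + u2, Eq. (6.8a)
# dim V = dim U1 + dim U2: 3 = 2 + 1, Eq. (6.9b)
```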
6.4 LINEAR OPERATORS

Linear operators are mappings between vector spaces (read the margin remark). Let $V$ and $W$ be two finite dimensional real vector spaces of dimensions $n$ and $m$, respectively.

[Margin remark: From your school and UG courses, you are familiar with the concept of functions. You know that the statement "y is a function of x", i.e., $y = f(x)$, means that for each value of $x$, the function $f$ assigns a unique number $f(x)$. You have also learnt in your UG mathematics courses that a function $F$ from a set $X$ into a set $Y$ is a rule which associates each element $x \in X$ with a unique element $y \in Y$, written $F : X \to Y$. The set $X$ is called the domain and the set $Y$ the range (or co-domain) of $F$. A function is also called a mapping. We say that each element of the set $X$ is mapped to a unique element of the set $Y$ by the mapping $F$. The elements of the sets can be any entities: numbers, matrices, functions, vectors, etc., as you have learnt in this unit.]

A linear mapping $T : V \to W$ from $V$ to $W$ assigns to every vector $\mathbf{v}$ of $V$ a vector $\mathbf{w} = T(\mathbf{v})$ of $W$ in such a manner that for every real number $a$

$$T(a\mathbf{v}) = a\,T(\mathbf{v}) \qquad (6.10a)$$

and if $\mathbf{v}_1, \mathbf{v}_2$ are in $V$, then

$$T(\mathbf{v}_1 + \mathbf{v}_2) = T(\mathbf{v}_1) + T(\mathbf{v}_2) \qquad (6.10b)$$

The essential point about linear mappings is that linear combinations go over to similar linear combinations:

$$T(a\mathbf{v}_1 + b\mathbf{v}_2) = a\,T(\mathbf{v}_1) + b\,T(\mathbf{v}_2) \qquad (6.10c)$$

Some immediate consequences of the definition are:

1. The zero vector of $V$ maps into the zero vector of $W$, because for any $\mathbf{v} \in V$

$$T(\mathbf{0}) = T(0\,\mathbf{v}) = 0\,T(\mathbf{v}) = \mathbf{0} \qquad (6.11a)$$

2. All vectors of $V$ which map into the zero vector of $W$ form a subspace $K \subset V$: if $T(\mathbf{v}_1) = \mathbf{0}$ and $T(\mathbf{v}_2) = \mathbf{0}$, then

$$T(a\mathbf{v}_1 + b\mathbf{v}_2) = a\,T(\mathbf{v}_1) + b\,T(\mathbf{v}_2) = a\,\mathbf{0} + b\,\mathbf{0} = \mathbf{0} \qquad (6.11b)$$

The subspace $K$ is called the kernel of the linear mapping $T$. If the kernel has only the single vector $\mathbf{0}$, then the linear mapping is one-to-one.

3. The subset of $W$ which is the image of $T$, that is, which contains those vectors in $W$ obtained as a result of the action of $T$ on some vector of $V$, forms a subspace of $W$. Let $\mathbf{w}_1$ and $\mathbf{w}_2$ in $W$ be such that there are $\mathbf{v}_1$ and $\mathbf{v}_2$ in $V$ with $T(\mathbf{v}_1) = \mathbf{w}_1$ and $T(\mathbf{v}_2) = \mathbf{w}_2$. Then,

$$a\mathbf{w}_1 + b\mathbf{w}_2 = a\,T(\mathbf{v}_1) + b\,T(\mathbf{v}_2) = T(a\mathbf{v}_1 + b\mathbf{v}_2) \qquad (6.11c)$$

This subspace is the image $R = T(V) \subset W$, or range of $T$ in $W$.

4. If the dimension of the space $V$ is $n$, the dimension of the kernel is $k$, and the dimension of the range $R \subset W$ is $r$, then

$$n = k + r \qquad (6.11d)$$

In particular, if the kernel has only the zero vector, then $k = 0$ (because the dimension of a zero vector space is zero) and the mapping is one-to-one. For such a linear operator $T$, the dimension of the range is the same as the dimension of $V$.

[Margin remark: A surjective function (also known as a surjection, or onto function) is a function $f$ that maps onto every element $y$; that is, for every $y$, there is an $x$ such that $f(x) = y$.]

If $T$ is also 'onto' (or surjective), that is, if the range $R$ is the whole space $W$, then $V$ and $W$ have the same dimension.

Let us consider an example.

Prove the statement that if the kernel has only the single vector 0 then the
linear mapping T is one-to-one.
Solution : To prove that

T is one-to-one (or injective), we should show that if

there are two vectors v 1 and v 2 both of which are mapped by T into the zero
An injective function  
vector of W, then they must be equal. But T v1  T v 2 means that
is a function f that
   
maps distinct T v 1  T v 2  T (v 1  v 2 )  0
elements to distinct  
elements; that is, But as 0 is the only vector in V mapped to 0 vector of W, we deduce that
f(x1) = f(x2) implies     
v1  v 2  0 or v1  v 2
x1 = x2.
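For a linear map given by a matrix, Eq. (6.11d) can be checked numerically: the rank of the matrix is the dimension of the range, and $n$ minus the rank is the dimension of the kernel. A sketch:

```python
import numpy as np

# T : R^3 -> R^2 given by a matrix; its kernel is spanned by (1, -2, 1).
T = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

n = T.shape[1]                       # dim V
r = np.linalg.matrix_rank(T)         # dim of the range of T
k = n - r                            # dim of the kernel, by Eq. (6.11d)
print(n == k + r, k)                 # True 1
```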
6.4.1 Product of Operators

Let $T$ be a linear operator mapping the vector space $U$ into a vector space $V$, and $S$ a linear operator mapping the vector space $V$ into a vector space $W$:

$$T : U \to V, \quad S : V \to W$$

We define the product of operators $ST : U \to W$ as the successive application of these mappings:

$$(ST)\,\mathbf{u} = S(T\mathbf{u}), \quad \text{for every } \mathbf{u} \in U \qquad (6.12a)$$

Very often, the spaces $U$, $V$, $W$, etc. are the same common space $V$. Then we can define products of operators any number of times because the mapped vector remains in the same space. We can define powers and polynomials of operators as well. For example, if $T : V \to V$, then

$$T^2 \mathbf{v} = T(T\mathbf{v}), \quad \text{and so on.} \qquad (6.12b)$$
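In finite dimensions, operators act as matrices (as you will see in the next unit), and the product of operators in Eq. (6.12a) then corresponds to matrix multiplication; a quick numerical check on sample matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.normal(size=(4, 3))          # T : U -> V  (dim U = 3, dim V = 4)
S = rng.normal(size=(2, 4))          # S : V -> W  (dim W = 2)
u = rng.normal(size=3)

# (ST) u = S(T u): composing the maps equals multiplying the matrices.
print(np.allclose((S @ T) @ u, S @ (T @ u)))   # True
```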
6.4.2 Inner Product or Metric

A real vector space is defined by the operations of the sum of its vectors (elements) and multiplication of its vectors by real numbers. An inner product or metric is an additional structure on a vector space. You have learnt about the dot product of vectors in school and UG physics courses. The generalization of the dot product to an arbitrary vector space is called an "inner product."

The inner product of any two vectors $\mathbf{v}$ and $\mathbf{w}$ in a real vector space $V$ is a real number denoted by $\langle \mathbf{v}, \mathbf{w} \rangle$. The function which defines the inner product should have the following three properties:

1. It is linear in the second argument, that is,

$$\langle \mathbf{u}, \mathbf{v} + \mathbf{w} \rangle = \langle \mathbf{u}, \mathbf{v} \rangle + \langle \mathbf{u}, \mathbf{w} \rangle, \quad \langle \mathbf{u}, a\mathbf{v} \rangle = a \langle \mathbf{u}, \mathbf{v} \rangle \qquad (6.13a)$$

for any $\mathbf{u}, \mathbf{v}, \mathbf{w} \in V$ and any real number $a$.

2. It is symmetric:

$$\langle \mathbf{u}, \mathbf{v} \rangle = \langle \mathbf{v}, \mathbf{u} \rangle \qquad (6.13b)$$

for any $\mathbf{u}, \mathbf{v} \in V$. With this property, we can see that the inner product is linear in the first argument as well:

$$\langle \mathbf{u} + \mathbf{v}, \mathbf{w} \rangle = \langle \mathbf{u}, \mathbf{w} \rangle + \langle \mathbf{v}, \mathbf{w} \rangle, \quad \langle a\mathbf{u}, \mathbf{v} \rangle = a \langle \mathbf{u}, \mathbf{v} \rangle \qquad (6.13c)$$

Functions linear in both arguments are called bi-linear.

3. And lastly, it is non-degenerate, that is, if $\langle \mathbf{u}, \mathbf{v} \rangle = 0$ for all $\mathbf{v} \in V$, then $\mathbf{u} = \mathbf{0}$. We can rephrase this by saying that the only vector whose inner product with all other vectors is zero is the zero vector.

Notation: The inner product is also called the metric or dot product. The dot product is usually written as $\mathbf{u} \cdot \mathbf{v}$.

A positive definite inner product is an inner product which, moreover, has the property that for every non-zero vector it is strictly positive:

$$\langle \mathbf{v}, \mathbf{v} \rangle \geq 0, \quad \text{and} \quad \langle \mathbf{v}, \mathbf{v} \rangle = 0 \text{ if and only if } \mathbf{v} = \mathbf{0}.$$

Note that in the special theory of relativity, the inner product of space-time position vectors is not positive definite, as you will learn in the second semester course entitled Classical Electrodynamics. In one convention, the inner product of 'time-like' vectors with themselves is negative, and that of 'space-like' vectors with themselves is positive. In addition, there are null vectors whose inner product with themselves is zero. (In another convention, the inner product of 'time-like' vectors with themselves is positive, and that of 'space-like' vectors with themselves is negative.) But in spite of this, the inner product is non-degenerate.
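The three defining properties are easy to illustrate numerically once the inner product is written with a metric matrix [as in Eq. (6.15b) of Sec. 6.5.1]; a sketch with the Euclidean metric:

```python
import numpy as np

g = np.eye(3)                              # Euclidean metric g_ij
def inner(u, v):
    return u @ g @ v                       # <u, v> = g_ij u^i v^j

rng = np.random.default_rng(2)
u, v, w = rng.normal(size=(3, 3))
a = 1.7

print(np.isclose(inner(u, v + w), inner(u, v) + inner(u, w)))  # (6.13a)
print(np.isclose(inner(u, a * v), a * inner(u, v)))            # (6.13a)
print(np.isclose(inner(u, v), inner(v, u)))                    # (6.13b)
```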
6.5 ORTHONORMAL VECTORS

Given an inner product, we can define the notion of orthogonality. Two vectors $\mathbf{v}$ and $\mathbf{u}$ in $V$ are called orthogonal if their inner product is zero, that is,

$$\langle \mathbf{v}, \mathbf{u} \rangle = 0 \qquad (6.14)$$

For any vector $\mathbf{v}$, the number $\langle \mathbf{v}, \mathbf{v} \rangle$ is its norm squared. When the inner product is positive definite, as in Euclidean space, the positive number $\sqrt{\langle \mathbf{v}, \mathbf{v} \rangle}$ is its norm or length. It is denoted by $\|\mathbf{v}\|$.

When the inner product is not positive definite, the norm squared can be positive, negative or zero. A vector with zero norm squared (that is, a vector which is orthogonal to itself) is called a null vector.

A vector with norm squared equal to $+1$ or $-1$ is called normalized. So, orthogonal vectors with norm squared equal to $+1$ or $-1$ are called orthonormal vectors.
Let us consider an example.

Example 6.4

Show that two non-null, orthogonal vectors are linearly independent.

Solution: Let $\mathbf{u}$ and $\mathbf{v}$ be two vectors with

$$\langle \mathbf{u}, \mathbf{u} \rangle = a \neq 0, \quad \langle \mathbf{v}, \mathbf{v} \rangle = b \neq 0 \quad \text{and} \quad \langle \mathbf{u}, \mathbf{v} \rangle = 0$$

To show that $\mathbf{u}$ and $\mathbf{v}$ are linearly independent, we must show that a linear combination equated to the zero vector requires both coefficients to be zero. Now, if

$$c\,\mathbf{u} + d\,\mathbf{v} = \mathbf{0} \qquad \text{(i)}$$

then we have to show that $c$ and $d$ are zero. Taking the inner product of Eq. (i) with $\mathbf{v}$ and using $\langle \mathbf{u}, \mathbf{v} \rangle = 0$, we get

$$d \langle \mathbf{v}, \mathbf{v} \rangle = d\,b = \langle \mathbf{0}, \mathbf{v} \rangle = 0$$

But since $b \neq 0$, therefore $d = 0$. Similarly, by taking the inner product with $\mathbf{u}$, we can show that $c = 0$. You can write the steps for this part yourself.

SAQ 4

In Example 6.4, show that the coefficient $c$ in Eq. (i) is zero.
6.5.1 Orthonormal Bases

Let $\mathbf{e}_i,\ i = 1, \ldots, n$ be a basis in a real $n$-dimensional vector space $V$. The components of the inner product or metric in this basis are:

$$g_{ij} = \langle \mathbf{e}_i, \mathbf{e}_j \rangle \qquad (6.15a)$$

This symmetric matrix contains all the information about the inner product, because if $\mathbf{v} = v^i \mathbf{e}_i$ and $\mathbf{u} = u^j \mathbf{e}_j$ are two vectors, then the bi-linearity of the product in its two arguments implies

$$\langle \mathbf{v}, \mathbf{u} \rangle = g_{ij}\, v^i u^j \qquad (6.15b)$$

The non-degeneracy of the inner product means that

$$g = \det g_{ij} \neq 0 \qquad (6.15c)$$

In other words, the matrix of metric tensor components in any basis is non-singular. We do not prove this statement here, but we will discuss it in the next two units on matrices.

In spite of the existence of null (zero-norm) vectors in the space, we can always choose a basis $\{\mathbf{n}_i\}_{i=1}^{n}$ such that $\langle \mathbf{n}_i, \mathbf{n}_j \rangle = 0$ if $i \neq j$ and the norm squared $\langle \mathbf{n}_i, \mathbf{n}_i \rangle$ is either $+1$ or $-1$. Such a basis is called an orthonormal basis. We construct such a basis in the next section.

The number of vectors with norm squared $+1$ and the number with norm squared $-1$ are fixed by the definition of the inner product. The number of positive norm squared vectors minus the number of negative norm squared vectors in an orthonormal basis is called the signature of the metric and is denoted by sig($V$).
There do exist bases which contain (non-zero) null vectors as basis vectors. But these bases are not orthonormal. You must know how to distinguish between the zero vector and a null vector. We explain it below.

The zero vector always has zero norm: $\langle \mathbf{0}, \mathbf{0} \rangle = 0$. In the Minkowski space of special relativity, there are non-zero null vectors that are called "light-like" vectors. A light-like vector $\mathbf{v}$ has $\langle \mathbf{v}, \mathbf{v} \rangle = 0$. This is possible because the metric of special relativity is not positive definite. There are also vectors for which $\langle \mathbf{v}, \mathbf{v} \rangle < 0$.

The metric of the 4-dimensional vector space of special relativity (called Minkowski space) is:

$$\eta = \begin{pmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$

An example of two different non-zero null vectors $\mathbf{n}, \mathbf{n}'$ that are not orthogonal to each other is:

$$\mathbf{n} = \begin{pmatrix} 1 \\ 1 \\ 0 \\ 0 \end{pmatrix}, \quad \mathbf{n}' = \begin{pmatrix} 1 \\ -1 \\ 0 \\ 0 \end{pmatrix}$$

But

$$\langle \mathbf{n}, \mathbf{n}' \rangle = \eta_{\mu\nu}\, n^{\mu} n'^{\nu} = (-1)(1)(1) + (1)(1)(-1) = -2 \neq 0$$
6.5.2 Construction of Orthonormal Bases

We go through the proof of the existence of orthonormal bases because of its fundamental importance. This construction is called the Gram-Schmidt process.

Let $V$ be a vector space which is not trivial, that is, not a space with just one vector, the zero vector. Because of the non-degenerate nature of the inner product, we cannot have all the vectors in $V$ with zero norm, that is, all the non-zero vectors cannot be null vectors. This is so because the only vector having zero inner product with all vectors (including itself) is the zero vector.

Therefore, there is a non-zero vector $\mathbf{a}$ with non-zero norm squared $\langle \mathbf{a}, \mathbf{a} \rangle \neq 0$. Let $\mathbf{n}_1 = \mathbf{a} / \sqrt{|\langle \mathbf{a}, \mathbf{a} \rangle|}$. Depending on the sign of the norm squared of $\mathbf{a}$,

$$\epsilon_1 = \langle \mathbf{n}_1, \mathbf{n}_1 \rangle \qquad (6.16)$$

is $+1$ or $-1$.

Let $V_1$ be the one-dimensional subspace spanned by $\mathbf{n}_1$, and let $V_2$ be the set of all vectors in $V$ orthogonal to every vector in $V_1$. Then $V_2$ is a vector subspace, and every vector $\mathbf{v} \in V$ can be decomposed as:

$$\mathbf{v} = \epsilon_1 \langle \mathbf{v}, \mathbf{n}_1 \rangle\, \mathbf{n}_1 + \big[\mathbf{v} - \epsilon_1 \langle \mathbf{v}, \mathbf{n}_1 \rangle\, \mathbf{n}_1\big]$$

where the first term is in $V_1$ and the second term is in $V_2$. The only vector common to $V_1$ and $V_2$ is the zero vector, and this again follows from non-degeneracy.

Therefore, recalling the definition of the direct sum [Eq. (6.8b)], we have:

$$V = V_1 \oplus V_2$$

And the inner product restricted to $V_2$ is again non-degenerate, because a vector in $V_2$ orthogonal to all other vectors of $V_2$ is moreover orthogonal to $V_1$, and hence is the zero vector.

We can now start with $V_2$ as the starting space and find a non-null vector $\mathbf{b} \in V_2$ such that $\langle \mathbf{b}, \mathbf{b} \rangle \neq 0$, and construct $\mathbf{n}_2 = \mathbf{b} / \sqrt{|\langle \mathbf{b}, \mathbf{b} \rangle|}$ with $\langle \mathbf{n}_2, \mathbf{n}_2 \rangle = \epsilon_2$ equal to $+1$ or $-1$. We proceed in this manner inductively until the whole basis is constructed.

Thus, we have a basis $\{\mathbf{n}_i\}$ with the metric components

$$(I_\epsilon)_{ij} = \langle \mathbf{n}_i, \mathbf{n}_j \rangle = \epsilon_i\, \delta_{ij} \quad \text{(no summation on } i\text{)}$$

or

$$I_\epsilon = \begin{pmatrix} \epsilon_1 & 0 & \cdots & 0 \\ 0 & \epsilon_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \epsilon_n \end{pmatrix} \qquad (6.17)$$
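For the positive definite case (all $\epsilon_i = +1$), the Gram-Schmidt process sketched above takes only a few lines of Python; this version uses the ordinary dot product:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors
    (positive definite case: the ordinary dot product)."""
    basis = []
    for v in vectors:
        # Subtract the components along the already constructed n_i.
        for n in basis:
            v = v - np.dot(v, n) * n
        basis.append(v / np.sqrt(np.dot(v, v)))   # normalize
    return basis

ns = gram_schmidt([np.array([1.0, 0.0, 1.0]),
                   np.array([1.0, 1.0, 0.0]),
                   np.array([0.0, 1.0, 0.0])])
G = np.array([[np.dot(a, b) for b in ns] for a in ns])
print(np.allclose(G, np.eye(3)))   # True: Eq. (6.17) with all eps_i = +1
```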
6.5.3 Signature of the Metric

Note that whichever route we take to choose orthonormal vectors for a basis, the number of vectors $n_+$ with norm squared $+1$ and the number of vectors $n_-$ with norm squared $-1$ are always the same. As $\dim V = n = n_+ + n_-$ is fixed, so is the number $t = n_+ - n_-$. The number $t$ is called the signature of the metric.

The Minkowski space of special relativity has one time-like unit vector with $\epsilon_0 = -1$ and three space-like vectors with $\epsilon_i = +1,\ i = 1, 2, 3$, in any orthonormal basis. Thus, it has signature $+2$.

Similarly, the number $\pm 1 = \epsilon_1 \epsilon_2 \cdots \epsilon_n = \det I_\epsilon$, the determinant of the matrix $I_\epsilon$ of metric components (in the orthonormal basis), is a characteristic of the metric. If $\{\mathbf{e}_j\}$ is any basis with $g_{ij} = \langle \mathbf{e}_i, \mathbf{e}_j \rangle$ as the metric components, then $g = \det g_{ij}$ (which is always non-zero) has the same sign as $\det I_\epsilon$. We write this number as sgn($g$) in general.

For space-time in general relativity, the sign of $g = \det g_{ij}$ is always negative because the number of vectors with negative norm squared is odd.

So far, you have studied real vector spaces. Complex vector spaces are equally important in physics and you need to learn about them too.
6.6 COMPLEX VECTOR SPACES: HILBERT SPACE

A complex vector space, like a real vector space, is a set of vectors which can be added to give another vector and which can be multiplied by a complex number (in place of a real number) to give another vector. Complex vector spaces occur in physics most prominently as Hilbert spaces, which are complex vector spaces with a hermitian inner product. The set of all complex numbers is a complex vector space (TQ 5).
6.6.1 Hermitian Inner Product

Let ℋ be a complex vector space with elements $f, g, h$, etc. Let $a, b, c$, etc. be complex numbers. Let us denote the zero vector by 0. Then for any vector $f$, multiplication by the number zero gives the zero vector: $0f = 0$. Also, for any complex number $c$, $c\,0 = 0$. Moreover, for any $f \in$ ℋ, $f + 0 = f$.

A hermitian inner product on ℋ associates to any two vectors $f, g \in$ ℋ a complex number denoted by $(f, g)$, which has the following properties: for any $f, g, h \in$ ℋ and any complex number $a$:

1. $(f, g + h) = (f, g) + (f, h)$ and $(f, ag) = a(f, g)$ \qquad (6.18a)

2. $(f, g) = (g, f)^*$, where the $*$ denotes the complex conjugate. \qquad (6.18b)

3. $(f, f) \geq 0$, where $(f, f) = 0$ if and only if $f = 0$. \qquad (6.18c)

Because of the first two properties above, $(f + g, h) = (f, h) + (g, h)$, $(af, g) = a^*(f, g)$ and $(af + g, h) = a^*(f, h) + (g, h)$ for any $f, g, h \in$ ℋ and any complex number $a$.

The positive square root of the non-negative number $(f, f)$ is called the norm of the vector $f \in$ ℋ and is denoted by $\|f\|$. Notice that the condition $(f, f) = 0$ is equivalent to $\|f\| = 0$. For vectors $f, g \in$ ℋ, the number $\|f - g\|$ satisfies all the properties of a distance function between $f$ and $g$ considered as points in the set ℋ.
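On $\mathbb{C}^n$, NumPy's `vdot` conjugates its first argument and so realizes a hermitian inner product of exactly this kind; a quick check of properties (6.18a to c) on sample vectors:

```python
import numpy as np

rng = np.random.default_rng(3)
f, g = rng.normal(size=(2, 4)) + 1j * rng.normal(size=(2, 4))
a = 0.5 - 2.0j

# np.vdot conjugates its first argument: (f, g) = sum_i conj(f_i) g_i.
print(np.isclose(np.vdot(f, a * g), a * np.vdot(f, g)))      # (6.18a)
print(np.isclose(np.vdot(f, g), np.conj(np.vdot(g, f))))     # (6.18b)
print(np.vdot(f, f).real >= 0)                               # (6.18c)
```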
6.6.2 Pythagoras Theorem and Bessel's Inequality

A vector $f$ in a Hilbert space ℋ is called a unit vector or normalised if $\|f\| = 1$. Two vectors $f$ and $g$ are called orthogonal if $(f, g) = 0$. The zero vector is orthogonal to every vector: $(0, f) = (0 + 0, f) = 2(0, f)$, and therefore $(0, f) = 0$. If a vector is orthogonal to itself, then it can only be the zero vector, which follows from the definition of the inner product.

For orthogonal vectors $f$ and $g$ with $(f, g) = 0$, the Pythagoras theorem holds true:

$$\|f + g\|^2 = \|f\|^2 + \|g\|^2 \qquad (6.19)$$

[Margin remark: Note that $(f, g) = 0 = (g, f)$ since $f$ and $g$ are orthogonal.]

To verify Eq. (6.19), we use the defining properties of the inner product:

$$\|f + g\|^2 = (f + g, f + g) = (f, f) + (f, g) + (g, f) + (g, g) = \|f\|^2 + \|g\|^2$$

Let $e_1, e_2, \ldots, e_n$ be unit vectors orthogonal to each other. This means that for every $i, j = 1, 2, \ldots, n$

$$(e_i, e_j) = \delta_{ij}, \quad \delta_{ij} = 1 \text{ if } i = j \text{ and } \delta_{ij} = 0 \text{ if } i \neq j \qquad (6.20)$$

Such a finite set of vectors is called an orthonormal set.

Let $f \in$ ℋ be an arbitrary vector and $\{e_1, e_2, \ldots, e_n\}$ an orthonormal set. Then

$$\|f\|^2 \geq \sum_{i=1}^{n} |(e_i, f)|^2 \qquad (6.21)$$

This inequality is called Bessel's inequality and we now give its proof.

Let us call $c_i = (e_i, f)$ the numbers occurring in the inequality, and let

$$\phi = \sum_{i=1}^{n} c_i e_i \qquad (6.22a)$$

which has the norm squared

$$\|\phi\|^2 = \sum_{i,j} (c_i e_i, c_j e_j) = \sum_{i,j} \bar{c}_i c_j \delta_{ij} = \sum_{i} |c_i|^2 \qquad (6.22b)$$

We can see that $\phi$ is orthogonal to $(f - \phi)$:

$$(f - \phi, \phi) = (f, \phi) - \|\phi\|^2 = \sum_i c_i (f, e_i) - \sum_i |c_i|^2 = 0 \qquad (6.22c)$$

because

$$(f, e_i) = \overline{(e_i, f)} = \bar{c}_i \qquad (6.22d)$$

Therefore, by the Pythagoras theorem and Eqs. (6.22c and d):

$$\|f\|^2 = \|(f - \phi) + \phi\|^2 = \|f - \phi\|^2 + \|\phi\|^2 \geq \|\phi\|^2 = \sum_i |(e_i, f)|^2$$

which proves the inequality $\|f\|^2 \geq \sum_{i=1}^{n} |(e_i, f)|^2$.

The proof also shows that the inequality becomes an equality when $f$ is exactly equal to $\sum_{i=1}^{n} c_i e_i$.

You should note the geometrical analogy in a Euclidean space. Here $e_1, e_2, \ldots$ are orthogonal basis vectors and $c_i = (e_i, f)$ are the components or expansion coefficients of $f$. By writing $\phi = \sum_{i=1}^{n} c_i e_i$, we are trying to 'reconstruct' $f$ from its components, and $\|f - \phi\|$ is a measure of the failure of the attempt.
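A numerical illustration of Bessel's inequality, taking the first two standard basis vectors of $\mathbb{C}^3$ as the orthonormal set:

```python
import numpy as np

rng = np.random.default_rng(4)
f = rng.normal(size=3) + 1j * rng.normal(size=3)
es = [np.array([1, 0, 0]), np.array([0, 1, 0])]   # orthonormal set, n = 2

bessel_sum = sum(abs(np.vdot(e, f))**2 for e in es)
print(np.vdot(f, f).real >= bessel_sum)           # True: Eq. (6.21)
```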
6.6.3 Schwarz and Triangle Inequalities

The full name of this inequality is the Cauchy-Schwarz-Buniakovski inequality. For any two vectors $f, g$ in a Hilbert space ℋ, the Schwarz inequality can be written as:

$$|(f, g)| \leq \|f\| \|g\| \quad \text{(Schwarz inequality)} \qquad (6.23)$$

This is a special case of the Bessel inequality. There is nothing to prove if one or both vectors are zero vectors. Therefore, we assume that $g \neq 0$. Let $e = g / \|g\|$. Then for this one-member orthonormal set $\{e\}$, Bessel's inequality says

$$\|f\|^2 \geq |(e, f)|^2 \qquad (6.24)$$

Substituting for $e$, we get

$$|(g/\|g\|, f)| = \frac{1}{\|g\|}\, |(g, f)| \leq \|f\|$$

which gives the Schwarz inequality [Eq. (6.23)].

The Schwarz inequality becomes an equality if the Bessel inequality becomes an equality. And this happens when

$$f = e\,(e, f) = \frac{(g, f)}{\|g\|^2}\, g$$

in other words, when $f$ and $g$ differ by a multiplicative complex number.

An immediate consequence of the Schwarz inequality is the triangle inequality for any two vectors $f, g \in$ ℋ:

$$\|f + g\| \leq \|f\| + \|g\| \quad \text{(Triangle inequality)}$$

Let us do the following calculation:

$$\|f + g\|^2 = (f + g, f + g) = \|f\|^2 + \|g\|^2 + (f, g) + \overline{(f, g)}$$

Now $(f, g) + \overline{(f, g)} = 2\,\text{Re}(f, g) \leq 2\,|(f, g)| \leq 2\|f\| \|g\|$, and therefore,

$$\|f + g\|^2 \leq \|f\|^2 + \|g\|^2 + 2\|f\| \|g\| = (\|f\| + \|g\|)^2$$

which proves the result. The name triangle inequality has an obvious connotation, as can be seen in the diagram.
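Both inequalities can be spot-checked numerically for sample vectors:

```python
import numpy as np

rng = np.random.default_rng(5)
f = rng.normal(size=4) + 1j * rng.normal(size=4)
g = rng.normal(size=4) + 1j * rng.normal(size=4)

norm = lambda v: np.sqrt(np.vdot(v, v).real)
print(abs(np.vdot(f, g)) <= norm(f) * norm(g))   # Schwarz, Eq. (6.23)
print(norm(f + g) <= norm(f) + norm(g))          # triangle inequality
```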
You may now like to solve an SAQ on this section.

SAQ 5

Prove the following for any $f, g \in$ ℋ:

a) $\big|\,\|f\| - \|g\|\,\big| \leq \|f - g\|$

b) $\|f + g\|^2 + \|f - g\|^2 = 2\|f\|^2 + 2\|g\|^2$

The equation in SAQ 5b is called the parallelogram law because of the geometrical analogy.

We can ask: why should we define an inner product on a space and then a norm? Why can we not start with a vector space and define a norm $\|\cdot\|$ as a positive valued function satisfying the properties:

1. $\|f\| \geq 0$, and $\|f\| = 0$ iff $f = 0$

2. $\|af\| = |a|\,\|f\|$

3. $\|f + g\| \leq \|f\| + \|g\|$ (Triangle inequality) \qquad (6.25)

The polarization identity (see TQ 6b) could then be used to define an inner product. In other words, isn't every normed space an inner product space? The answer is that unless the norm satisfies the parallelogram law, it is not possible to prove the linearity property of the inner product.
6.6.4 Orthonormal Bases

We defined a finite orthonormal (o.n.) set of mutually orthogonal vectors of unit norm while discussing Bessel's inequality. We now consider a general o.n. set (containing countably or even uncountably many vectors). All we need to check is that each vector in the set is of unit norm and any two distinct vectors in the set are orthogonal. An o.n. set in a Hilbert space ℋ is called complete if it is not a proper subset of another o.n. set.

A complete o.n. set has the property that the only vector orthogonal to all the members of the set is the zero vector 0. This is so because if $f \neq 0$ were such a vector, then we could enlarge the o.n. set by including the unit vector $e = f / \|f\|$, falsifying the claim that the set cannot be a proper subset of a larger o.n. set.

A complete o.n. set is called an orthonormal basis.

There are infinitely many choices of o.n. bases in a Hilbert space. In Quantum Mechanics, we encounter Hilbert spaces of a special type in which o.n. bases have countably many elements. Such spaces are called separable Hilbert spaces and we shall restrict ourselves only to separable spaces.

Let $\{e_1, \ldots, e_n, \ldots\}$ be an o.n. basis, and $f$ any vector. Then we can write ("expand") $f$ as:

$$f = \sum_{i=1}^{\infty} c_i e_i, \quad c_i = (e_i, f) \qquad (6.26)$$

The proof of this statement is as follows.

First, the meaning of an infinite series of vectors in the above expression is this. Let

$$\phi_n = \sum_{i=1}^{n} c_i e_i$$

Then the claim is that the real positive numbers $\|f - \phi_n\| \to 0$ as $n \to \infty$.

As seen in the proof of Bessel's inequality, $f - \phi_n$ is orthogonal to $\phi_n$, and by the Pythagoras theorem,

$$\|f - \phi_n\|^2 = \|f\|^2 - \|\phi_n\|^2 = \|f\|^2 - \sum_{i=1}^{n} |c_i|^2$$

On the right-hand side, the sum $\sum_{i=1}^{n} |c_i|^2$ keeps growing with $n$, and so the positive numbers $\|f - \phi_n\|^2$ keep decreasing with $n$. There are two possibilities: either it keeps decreasing indefinitely towards zero, in which case $\|f - \phi_n\| \to 0$, as was to be proved; or, from some value $i = N$ onwards, all coefficients $c_i$ are zero. In this case, we can see that for any integer $j$

$$\Big( e_j,\; f - \sum_{i=1}^{N} c_i e_i \Big) = c_j - \sum_{i=1}^{N} c_i\, \delta_{ij}$$

The right-hand side is zero for every $j$, because if $j \leq N$, it is $c_j - c_j = 0$, and for $j > N$ it is $c_j$, but all $c_j = 0$ for $j > N$. Thus, $f - \sum_{i=1}^{N} c_i e_i$ is orthogonal to the complete o.n. set and so is equal to the zero vector:

$$f - \sum_{i=1}^{N} c_i e_i = 0$$

Since $c_j = 0$ for $j > N$, we can again write $f = \sum_{i=1}^{\infty} c_i e_i$.

We can write the norm squared as:

$$\|f\|^2 = \sum_{i=1}^{\infty} |c_i|^2, \quad c_i = (e_i, f)$$
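A finite dimensional illustration of Eq. (6.26): expand a vector in an orthonormal basis of $\mathbb{C}^2$ (chosen here for illustration) and verify the reconstruction and the norm formula:

```python
import numpy as np

# An o.n. basis of C^2 and an arbitrary vector f.
e1 = np.array([1.0, 1.0]) / np.sqrt(2)
e2 = np.array([1.0, -1.0]) / np.sqrt(2)
f = np.array([2.0 + 1j, -3.0])

c = [np.vdot(e, f) for e in (e1, e2)]            # c_i = (e_i, f)
print(np.allclose(c[0] * e1 + c[1] * e2, f))     # Eq. (6.26): f = sum c_i e_i
print(np.isclose(sum(abs(ci)**2 for ci in c),
                 np.vdot(f, f).real))            # ||f||^2 = sum |c_i|^2
```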
We now summarise this unit.
6.7 SUMMARY

In this unit, we have covered the following concepts:

• Definition of a vector space (Sec. 6.2, [Eqs. (6.1a to l)]), linear independence of vectors (Sec. 6.2.1, [Eqs. (6.3a to c)]) and linear subspaces (Sec. 6.2.2).

• The concepts of basis and dimension of a vector space (Sec. 6.3), change of bases (Sec. 6.3.1, [Eqs. (6.5a to c)]), change of components with change in bases (Sec. 6.3.2, [Eqs. (6.7a to d)]) and direct sum of vector spaces (Sec. 6.3.3, [Eqs. (6.8a, b)]).

• Definition of linear operators (Sec. 6.4, [Eqs. (6.10a to c)]), product of operators (Sec. 6.4.1, [Eqs. (6.12a, b)]) and inner product or metric (Sec. 6.4.2, [Eqs. (6.13a to c)]).

• Definition of orthonormal vectors (Sec. 6.5, [Eq. (6.14)]), orthonormal bases (Sec. 6.5.1, [Eqs. (6.15a to c)]), construction of orthonormal bases (Sec. 6.5.2, [Eqs. (6.16 and 6.17)]) and signature of the metric (Sec. 6.5.3).

• Definition and properties of complex vector spaces and Hilbert space (Sec. 6.6), hermitian inner product (Sec. 6.6.1, [Eqs. (6.18a to c)]), Pythagoras theorem and Bessel's inequality (Sec. 6.6.2, [Eqs. (6.19 and 6.21)]), Schwarz and triangle inequalities (Sec. 6.6.3, [Eqs. (6.23 and 6.25)]) and orthonormal bases (Sec. 6.6.4, [Eq. (6.26)]).
6.8 TERMINAL QUESTIONS

1. For any positive integer $n$, show that the set

$$\mathbb{R}^n = \{(x_1, x_2, \ldots, x_n),\; x_i \in \mathbb{R}\}$$

is a vector space over $\mathbb{R}$ if we define vector addition and multiplication by a scalar as

$$(x_1, x_2, \ldots, x_n) + (y_1, y_2, \ldots, y_n) = (x_1 + y_1, x_2 + y_2, \ldots, x_n + y_n)$$

and $a(x_1, x_2, \ldots, x_n) = (a x_1, a x_2, \ldots, a x_n)$ for a scalar $a$.

2. In a vector space $V$, prove that:

a) for every element $\mathbf{u} \in V$, $(-1)(-\mathbf{u}) = \mathbf{u}$;

b) for every $\mathbf{u}$ and $\mathbf{v}$ in the vector space, $-(\mathbf{u} + \mathbf{v}) = -\mathbf{u} - \mathbf{v}$.

3. Check whether the following vectors are linearly independent:

a) $\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix},\; \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix},\; \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix}$

b) $\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix},\; \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix},\; \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$

4. Show that the vectors $\{\mathbf{v}_1, \mathbf{v}_2\}$ with $\mathbf{v}_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$ and $\mathbf{v}_2 = \begin{pmatrix} -1 \\ 1 \end{pmatrix}$ form a basis in $\mathbb{R}^2$, the vector space of two-dimensional vectors.

5. Show that the set of all complex numbers is a vector space.

6. Show that:

a) $\big|\,\|f\| - \|g\|\,\big| \leq \|f - g\|$

b) $\|f + g\|^2 - \|f - g\|^2 - i\,\|f + ig\|^2 + i\,\|f - ig\|^2 = 4(f, g)$

c) Given that for two non-zero vectors $f$ and $g$, $\|f + g\| = \|f\| + \|g\|$, show that there is a positive constant $a > 0$ such that $f = ag$.
6.9 SOLUTIONS AND ANSWERS

Self-Assessment Questions

1. a) All real numbers $\mathbb{R}$ form a vector space with ordinary addition and multiplication by another real number. The zero vector is the number 0, and for $a \in \mathbb{R}$ there is $-a = (-1)a$. It is a one-dimensional space where any non-zero number can be taken as a basis vector.

b) All real $2 \times 2$ matrices also form a real vector space, with addition and multiplication by a real number $e$ defined as

$$\begin{pmatrix} a & b \\ c & d \end{pmatrix} + \begin{pmatrix} a' & b' \\ c' & d' \end{pmatrix} = \begin{pmatrix} a + a' & b + b' \\ c + c' & d + d' \end{pmatrix}, \quad e \begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} ea & eb \\ ec & ed \end{pmatrix}$$

The zero vector is the matrix

$$\begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$$

This is a four-dimensional vector space where a basis can be chosen with four linearly independent matrices:

$$\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \quad \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$$

2. We use the hint given in the text: if $a x^2 + b x^3 = 0$ for all values of $x$ in the interval [0, 1], it is enough to choose $x = 1$, which gives $a + b = 0$, and, say, $x = 1/2$, which gives $a/4 + b/8 = 0$. These equations give $a = 0$ and $b = 0$, proving that $x^2$ and $x^3$ are linearly independent.

3. 

$$a \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} + b \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} + c \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} = \begin{pmatrix} a + b \\ b + c \\ a \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$$

implies

$$a + b = 0, \quad b + c = 0, \quad a = 0$$

These equations have the solution $a = 0$, $b = 0$, $c = 0$, proving that these vectors are linearly independent. The vector space is three-dimensional because any general vector can be expanded in these three basis vectors:

$$\begin{pmatrix} x \\ y \\ z \end{pmatrix} = z \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} + (x - z) \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} + (y - x + z) \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}$$

4. We take the inner product of the equation $c\,\mathbf{u} + d\,\mathbf{v} = \mathbf{0}$ with $\mathbf{u}$. Then

$$c \langle \mathbf{u}, \mathbf{u} \rangle + d \langle \mathbf{u}, \mathbf{v} \rangle = c \langle \mathbf{u}, \mathbf{u} \rangle = \langle \mathbf{u}, \mathbf{0} \rangle = 0$$

As $\langle \mathbf{u}, \mathbf{u} \rangle \neq 0$, it follows that $c = 0$.

5. a) From the triangle inequality,

$$\|f\| = \|(f - g) + g\| \leq \|f - g\| + \|g\| \quad \text{and} \quad \|g\| = \|(g - f) + f\| \leq \|g - f\| + \|f\|$$

so that both $\|f\| - \|g\|$ and $\|g\| - \|f\|$ are less than or equal to $\|f - g\|$, because for any real number $a$, we have $\|ag\| = |a|\,\|g\|$, so that $\|g - f\| = \|f - g\|$. Hence $\big|\,\|f\| - \|g\|\,\big| \leq \|f - g\|$.

b) Parallelogram law:

$$\|f + g\|^2 = (f + g, f + g) = (f, f) + (g, g) + (f, g) + (g, f) = \|f\|^2 + \|g\|^2 + (f, g) + (g, f)$$

$$\|f - g\|^2 = (f - g, f - g) = (f, f) + (g, g) - (f, g) - (g, f) = \|f\|^2 + \|g\|^2 - (f, g) - (g, f)$$

Adding the two equations, we get the result.
Terminal Questions

1. The vector space axioms are satisfied: the zero vector is (0, 0, …, 0), and the space is $n$-dimensional with a basis of $n$ linearly independent vectors: (1, 0, …, 0), (0, 1, …, 0), …, (0, 0, …, 1).

2. a) Using axioms (6.1l), (6.1i) and (6.1k):

$$(-1)(-\mathbf{u}) = (-1)\big((-1)\mathbf{u}\big) = \big((-1)(-1)\big)\mathbf{u} = (1)\mathbf{u} = \mathbf{u}$$

b) Using axioms (6.1l) and (6.1g):

$$-(\mathbf{u} + \mathbf{v}) = (-1)(\mathbf{u} + \mathbf{v}) = (-1)\mathbf{u} + (-1)\mathbf{v} = -\mathbf{u} - \mathbf{v}$$
3. a)

$$a \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} + b \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} + c \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix} = \begin{pmatrix} a \\ b + c \\ c \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$$

gives $a = 0$, $b + c = 0$, $c = 0$, so that $a = 0$, $b = 0$, $c = 0$, proving the linear independence of the three vectors.

b)

$$a \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} + b \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} + c \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} = \begin{pmatrix} a + b + c \\ b + c \\ c \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$$

gives $a + b + c = 0$, $b + c = 0$, $c = 0$, so that $a = 0$, $b = 0$, $c = 0$, proving the linear independence of the three vectors.
1   1  a  b   0 
4. a   b     
1  1  a  b 0
       

gives a  b = 0, a + b = 0, so that a = 0, b = 0 proving the linear


independence of the two vectors. Any general vector can be written as:
 x  x  y 1 y  x   1
     
y  2 1 2 1
     

Therefore, the two vectors form a basis in R 2 .


5. All complex numbers form a one-dimensional complex vector space,
because the laws of addition and multiplication of complex numbers follow
all the axioms or conditions required for a vector space.
30
6. a) Using the triangle inequality:

$$\|f\| = \|(f - g) + g\| \leq \|f - g\| + \|g\|$$

Therefore,

$$\|f\| - \|g\| \leq \|f - g\|$$

Similarly,

$$\|g\| = \|(g - f) + f\| \leq \|g - f\| + \|f\| = \|f - g\| + \|f\|$$

which gives

$$\|g\| - \|f\| \leq \|f - g\|$$

When a real number $x$ and its negative $-x$ are both less than or equal to a positive real number $y$, then $|x| \leq y$. Using this,

$$\big|\,\|f\| - \|g\|\,\big| \leq \|f - g\|$$
b)

$$\|f + g\|^2 = (f + g, f + g) = (f, f) + (g, g) + (f, g) + (g, f)$$

$$\|f - g\|^2 = (f - g, f - g) = (f, f) + (g, g) - (f, g) - (g, f)$$

$$\|f + ig\|^2 = (f + ig, f + ig) = (f, f) + (ig, ig) + (f, ig) + (ig, f)$$

$$\|f - ig\|^2 = (f - ig, f - ig) = (f, f) + (ig, ig) - (f, ig) - (ig, f)$$

Using

$$(g, f) = (f, g)^*, \quad (ig, ig) = (i)^*(i)(g, g) = (g, g),$$

$$(f, ig) = i(f, g), \quad (ig, f) = -i(f, g)^*$$

we get:

$$\|f + g\|^2 - \|f - g\|^2 = 2(f, g) + 2(f, g)^* = 4\,\text{Re}(f, g),$$

and

$$\|f + ig\|^2 - \|f - ig\|^2 = 2i(f, g) - 2i(f, g)^* = 2i\big[(f, g) - (f, g)^*\big] = -4\,\text{Im}(f, g)$$

Therefore,

$$\|f + g\|^2 - \|f - g\|^2 - i\big(\|f + ig\|^2 - \|f - ig\|^2\big) = 4\,\text{Re}(f, g) + 4i\,\text{Im}(f, g) = 4(f, g)$$

which is the polarization identity.
c) We first show that $\text{Re}(f, g) = \|f\| \|g\|$.

As $\|f + g\| = \|f\| + \|g\|$,

$$\|f + g\|^2 = (\|f\| + \|g\|)^2 = \|f\|^2 + \|g\|^2 + 2\|f\| \|g\|$$

On the other hand,

$$\|f + g\|^2 = \|f\|^2 + \|g\|^2 + 2\,\text{Re}(f, g)$$

This shows that

$$\text{Re}(f, g) = \|f\| \|g\|$$

Let $a > 0$ be the unknown constant. Then,

$$\|f - ag\|^2 = \|f\|^2 + a^2 \|g\|^2 - 2a\,\text{Re}(f, g) = \|f\|^2 + a^2 \|g\|^2 - 2a\,\|f\| \|g\|$$

This expression is zero if we choose $a = \|f\| / \|g\|$. Therefore,

$$\|f - ag\| = 0 \quad \text{or} \quad f = ag$$

Note: $\|f + g\| \leq \|f\| + \|g\|$ is the triangle inequality, showing that the length of the vector $f + g$, which represents one side of the triangle, cannot be greater than the sum of the lengths of the other two sides (see Fig. 6.1). When $\|f + g\| = \|f\| + \|g\|$, the triangle collapses to a straight line and the vectors $f$ and $g$ become proportional to each other in the same direction.

[Fig. 6.1: The triangle inequality. A triangle with sides $f$, $g$ and $f + g$.]

$\|f + g\| = \|f\| + \|g\|$ is possible if the triangle in Fig. 6.1 collapses. But $\|f - g\| = \|f\| + \|g\|$ is possible if the triangle in Fig. 6.2 collapses.

[Fig. 6.2: A triangle with sides $f$, $g$ and $f - g$.]