UNIT 6
VECTOR SPACES
6.1 INTRODUCTION
You have learnt the concept of vectors and the operations of addition and
multiplication by scalars on them in your school and UG physics and
mathematics courses. In this unit, we generalise these operations to define
what is known as a vector space, or linear space.
You know that the concept of vectors is derived from the idea of translation or
displacement of points in space. If something is displaced from point A to B
and then again from B to point C, then the resultant displacement is from A to
C. We connect points A to B by a straight line arrow pointing towards B and
also a similar arrow from B to C. The resultant displacement is then defined as
the sum of these two arrow-like quantities called displacement vectors. It is
usually written as:
AB + BC = AC (each two-letter symbol denoting the displacement vector from the first point to the second)
There are three essentials in defining the displacement vector: the starting
point, the end point and the length of the displacement.
The concept of displacement vector can be generalized to situations where
there is no displacement of any kind. Very often a vector is defined to be some
physical quantity which has a length and direction.
This definition is not a precise definition, as we shall see. There are vectors for which there is no concept of ‘direction’. And the ‘length’ of a vector is not an essential quality of vectors but an additional feature which is to be separately defined.
In this unit, you will study the generalization from physical vectors to abstract vectors, which are elements of vector spaces. In Sec. 6.2, we define vector spaces and then explain the related concepts of linear independence and linear subspaces. In Sec. 6.3, we discuss bases and dimension, change of bases, change in components with basis change and the direct sum of vector spaces.

In Sec. 6.4, we explain linear operators in vector spaces, products of operators and the inner product or metric. Sec. 6.5 deals with orthonormal vectors, orthonormal bases, construction of orthonormal bases and the signature of the metric. Finally, in Sec. 6.6, we discuss complex vector spaces (in particular, the Hilbert space), the hermitian inner product, the Pythagoras theorem and Bessel's inequality, the Schwarz and triangle inequalities, and the concept of orthonormal bases in Hilbert spaces.
In the next unit, we discuss matrices.
Expected Learning Outcomes
After studying this unit, you should be able to:
define and identify a vector space, define and ascertain linear
independence of vectors;
define a basis for a vector space and its dimension;
change bases, determine the components with change in bases, and
define direct sum of vector spaces;
define linear operators and determine products of operators, define and
determine the inner product and norm of elements in vector spaces;
define orthogonal vectors, define and construct orthonormal bases, and
define signature of the metric;
define complex vector spaces, Hilbert space, and determine Hermitian
inner product;
state and prove the Pythagorean theorem and Bessel’s inequality,
Schwarz and triangle inequalities; and
define and determine orthonormal bases.
In this course, we will denote elements of a vector space by bold face symbols like u, v, w, etc. The addition of two vectors (i.e., elements of a vector space; read the margin remark) is shown by a ‘+’ sign. Multiplication of a vector v by a real number a is written as av. We use parentheses ‘(’ and ‘)’ to separate quantities when needed.

[Margin remark: Note that we are calling the elements of a vector space vectors. These may be directed line segments (or vectors in the sense that you have learnt in your UG courses so far). A set of real numbers, complex numbers, matrices, functions, polynomials, etc. could also be a vector space if the conditions explained in this section are fulfilled.]

Elements of a vector space must satisfy the condition of closure, which requires that:

Addition of any two elements u and v in V will result in an element that belongs to V:

If u ∈ V and v ∈ V, then u + v ∈ V (6.1a)

If an element u of a vector space V is multiplied by a scalar a, then the resulting element au also belongs to V:

If u ∈ V and a is a scalar, then au ∈ V (6.1b)

The operations of vector addition and scalar multiplication must satisfy the following eight conditions:

1. Addition is commutative: for any two vectors u, v in V:

u + v = v + u (6.1c)

2. Addition is associative: for any three vectors u, v and w in V:

(u + v) + w = u + (v + w) (6.1d)
3. There is a unique vector 0 in V, called the zero vector (additive identity), such that for every vector v belonging to the vector space V:

v + 0 = 0 + v = v (6.1e)

[Margin remark: Although every vector space contains a unique zero (identity) vector, so that strictly we should specify the zero vector of a space V by 0_V, that of another vector space W by 0_W, and so on, this is not done in common practice. What is more, very often the symbol for the zero vector is written like the number zero (0). Usually there is no confusion because in an equation like v = 0, if the left hand side is a vector, the right hand side cannot be the number 0 but only the zero vector of that vector space to which v belongs.]

4. For every vector v there exists a unique vector −v (additive inverse) such that

v + (−v) = 0 (6.1f)

5. Distributive property (with respect to addition of vectors): for any real number a and any vectors u, v,

a(u + v) = au + av (6.1g)

6. Distributive property (with respect to addition of real numbers): for any real numbers a, b and any vector u,

(a + b)u = au + bu (6.1h)

7. Successive multiplication by real numbers: for any real numbers a, b and any vector u,

a(bu) = (ab)u (6.1i)

8. Multiplication by the real numbers 0, 1 and −1: for any vector u in V,

0u = 0 (6.1j)

1u = u (multiplicative identity) (6.1k)

(−1)u = −u (6.1l)
Along with closure, these are the eight properties that elements of a vector space must possess, or eight conditions that the elements must satisfy.
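As an aside, these conditions can be spot-checked numerically for the Cartesian space of the next example. The following minimal Python sketch (the helper name check_axioms is ours, not part of this unit) tests Eqs. (6.1c to l) on randomly sampled vectors of R³:

```python
import numpy as np

rng = np.random.default_rng(0)

def check_axioms(u, v, w, a, b):
    """Spot-check the vector space conditions (6.1c to l) for R^3."""
    zero = np.zeros_like(u)
    assert np.allclose(u + v, v + u)                 # 6.1c: commutativity
    assert np.allclose((u + v) + w, u + (v + w))     # 6.1d: associativity
    assert np.allclose(v + zero, v)                  # 6.1e: zero vector
    assert np.allclose(v + (-v), zero)               # 6.1f: additive inverse
    assert np.allclose(a * (u + v), a * u + a * v)   # 6.1g: distributivity
    assert np.allclose((a + b) * u, a * u + b * u)   # 6.1h: distributivity
    assert np.allclose(a * (b * u), (a * b) * u)     # 6.1i: successive mult.
    assert np.allclose(0 * u, zero)                  # 6.1j
    assert np.allclose(1 * u, u)                     # 6.1k
    assert np.allclose(-1 * u, -u)                   # 6.1l

u, v, w = rng.normal(size=(3, 3))
a, b = rng.normal(size=2)
check_axioms(u, v, w, a, b)
print("All sampled conditions hold.")
```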
Some direct consequences of the definition are as follows:

Due to the associative and commutative properties, the sum of any finite number of vectors can be written in any order. For example,

u + v + w = u + (v + w) = (u + v) + w

For any real number a, a0 = 0.

Proof

a0 + a0 = a(0 + 0) = a0

Adding −(a0) on both sides, the left hand side of this equation is:

a0 + a0 + (−(a0)) = a0 + 0 = a0

while the right hand side becomes a0 + (−(a0)) = 0. Hence a0 = 0.
The set of all real numbers is a vector space and the set of 2 × 2 real matrices is also a vector space (SAQ 1a, b). To sum up, a vector space is a set whose elements satisfy the ten properties set out in Eqs. (6.1a to l). The elements of a vector space could be directed line segments (i.e., vectors, in the sense that you know from school or UG physics), functions, matrices, polynomials, etc.
Let us now consider a couple of examples of real vector spaces.
Example 6.1
a) Cartesian space
Let us represent points in a coordinate system as a column of three real numbers (as you may have done in your school and UG courses), written here as transposed rows to save space:

u = (u^1, u^2, u^3)^T, v = (v^1, v^2, v^3)^T, etc. (6.2a)

with addition and multiplication by a real number defined component-wise [Eqs. (6.2b, c)]. The zero vector is:

0 = (0, 0, 0)^T (6.2d)

and for u, the unique vector −u (additive inverse) is:

−u = (−u^1, −u^2, −u^3)^T (6.2e)

You can verify yourself that the remaining conditions explained above are satisfied. So, the Cartesian space is a vector space.
b) Space C[0, 1] of all continuous real valued functions on [0, 1]: f : [0, 1] → ℝ

Let f, g, etc. be continuous real-valued functions on the interval [0, 1] of the real line. This means that for any x ∈ [0, 1], there are unique real numbers f(x), g(x), etc. defined by the functions. Let us define the addition of two functions as:

(f + g)(x) = f(x) + g(x) for all x ∈ [0, 1] (6.2f)

and multiplication by a real number a as (af)(x) = a f(x). Then the set of all these functions forms a vector space. Note that in this case the set is made up of functions.

The zero vector of this space is the function 0 defined by 0(x) = 0 for all x ∈ [0, 1]. Before studying further, you should verify that all the conditions given in Eqs. (6.1a to l) are satisfied by these functions.
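As an illustrative sketch (ours, with hypothetical helper names add and scale), elements of C[0, 1] can be modelled in Python as callables, with vector addition built point-wise exactly as in Eq. (6.2f):

```python
from typing import Callable

Vec = Callable[[float], float]   # an element of C[0, 1]

def add(f: Vec, g: Vec) -> Vec:
    """Vector addition: (f + g)(x) = f(x) + g(x), as in Eq. (6.2f)."""
    return lambda x: f(x) + g(x)

def scale(a: float, f: Vec) -> Vec:
    """Scalar multiplication: (af)(x) = a f(x)."""
    return lambda x: a * f(x)

zero: Vec = lambda x: 0.0        # the zero vector of C[0, 1]

f: Vec = lambda x: x ** 2
g: Vec = lambda x: x ** 3

h = add(f, scale(-2.0, g))           # the vector f - 2g
print(h(0.5))                        # 0.25 - 2*(0.125) = 0.0
print(add(f, zero)(0.5) == f(0.5))   # f + 0 = f at a sample point: True
```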
SAQ 1
a) Show that the set of all real numbers R is a vector space.
b) Show that the set of all real 2 × 2 matrices is a vector space.
Let us now take up an example, following which you can solve an SAQ.
Example 6.2
Show that the vectors

u = (1, 0, 1)^T and v = (1, 1, 0)^T

in the Cartesian space are linearly independent.

Solution: Following Eq. (6.3a), we can write a linear combination of u and v as:

au + bv = a(1, 0, 1)^T + b(1, 1, 0)^T = (a + b, b, a)^T

From Eq. (6.3b), it follows that u and v will be linearly independent only if

au + bv = (a + b, b, a)^T = (0, 0, 0)^T

that is, only if a + b = 0, b = 0 and a = 0. This is true only if a = 0, b = 0. Therefore, according to Eq. (6.3c), u and v are linearly independent.
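Numerically, linear independence of finitely many column vectors can be tested by the rank of the matrix having them as columns; this standard check (our sketch, not part of the unit) confirms the result:

```python
import numpy as np

u = np.array([1.0, 0.0, 1.0])
v = np.array([1.0, 1.0, 0.0])

# The vectors are linearly independent exactly when the matrix
# with u and v as columns has rank equal to the number of vectors.
M = np.column_stack([u, v])
print(np.linalg.matrix_rank(M) == M.shape[1])   # True
```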
SAQ 2
In Example 6.1b, show that the vectors

f(x) = x² and g(x) = x³

are linearly independent.

Hint: If af + bg = 0, then it means that ax² + bx³ = 0 for all values of x in the interval [0, 1]. It is enough to choose just two values of x appropriately to find a and b.
SAQ 3
In the vector space of Example 6.1a, show that the vectors

u₁ = (1, 0, 1)^T, u₂ = (1, 1, 0)^T and u₃ = (0, 1, 0)^T

form a basis, that is, (1) they are linearly independent and (2) any arbitrary vector

v = (v^1, v^2, v^3)^T

can be uniquely written as a linear combination of the vectors u₁, u₂ and u₃.
What is the dimension of the vector space?
An elementary knowledge of matrix multiplication is required in Sec. 6.3.1.
There can be more than one basis for a vector space (TQ 4). We now discuss
change of bases and how components of a vector change with a change in
basis.
6.3.1 Change of Bases
Let V be a vector space of dimension n and let E = {e_i}, i = 1, 2, ..., n, be a basis in V. We write a vector v ∈ V as a linear combination of the basis vectors as follows:

v = Σ_{i=1}^{n} x^i e_i (6.4)
Let F = {f_i} be another basis of V, with f_i = Σ_j T_ij e_j [Eq. (6.5a)] and, conversely, e_j = Σ_k S_jk f_k [Eq. (6.5b)]. We now ask: what is the relation of the numbers S_ij to T_ij? Substituting the expansion of e_j in the basis F into that of f_i, we get:

f_i = Σ_j T_ij e_j = Σ_{j,k} T_ij S_jk f_k

or

f_i = Σ_k (TS)_ik f_k (6.5c)

[Margin remark: You may recall the matrix multiplication notation: [AB]_ij = Σ_k A_ik B_kj.]

You should note that we use the convention of writing the elements of the matrices as T_ij and S_ij, respectively. The product matrix is TS, which connects the vectors f_i to themselves.

Note that we can write f_1 as follows in the F basis:

f_1 = 1 f_1 + 0 f_2 + ... + 0 f_n (6.5d)

From Eq. (6.5c), we have f_1 = (TS)_11 f_1 + (TS)_12 f_2 + ... + (TS)_1n f_n. On comparing this expression with Eq. (6.5d), it follows that:

(TS)_11 = 1, (TS)_12 = 0, ..., (TS)_1n = 0 (6.5e)

We get similar results for the other basis vectors f_2, f_3, ..., f_n. The result is that TS is equal to the identity matrix or, in other words, the matrices T and S are inverses of each other. Remember that these are square matrices:

S = T⁻¹ (6.6)
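As a quick numerical check of Eq. (6.6) (our sketch; the matrix T below is an arbitrary invertible example): take E as the standard basis of R³ stored as rows, build the new basis F from T, and recover S by expanding E back in F.

```python
import numpy as np

E = np.eye(3)                       # rows are the basis vectors e_i

T = np.array([[1.0, 1.0, 0.0],      # invertible matrix: f_i = sum_j T_ij e_j
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
F = T @ E                           # rows are the new basis vectors f_i

# S expands the old basis in the new one: e_j = sum_k S_jk f_k, i.e. S F = E.
S = np.linalg.solve(F.T, E.T).T     # solve S F = E without forming an inverse
print(np.allclose(S, np.linalg.inv(T)))   # True: S = T^{-1}, Eq. (6.6)
print(np.allclose(T @ S, np.eye(3)))      # True: TS = identity, Eq. (6.5e)
```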
6.3.2 Change in Components with Basis Change
Let the components of a vector v with respect to the basis E be x^i:

v = Σ_{i=1}^{n} x^i e_i

The same vector v with respect to the basis F has components y^i:

v = Σ_{i=1}^{n} y^i f_i

We now ask: how are the components x^i and y^i related? Using Eq. (6.5a), we have:

v = Σ_j x^j e_j = Σ_i y^i f_i = Σ_{i,j} y^i T_ij e_j (6.7a)

or

Σ_j [x^j − Σ_i y^i T_ij] e_j = 0 (6.7b)

Since the e_j, j = 1, 2, ..., n, are linearly independent, each coefficient must vanish. This determines the relation:

x^j = Σ_i y^i T_ij (6.7c)
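The transformation law of Eq. (6.7c) can be verified with the same example matrix T (our sketch): if y holds the components of v in the basis F, then x^j = Σ_i y^i T_ij, i.e. x = Tᵀy.

```python
import numpy as np

T = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
E = np.eye(3)                       # rows: e_i
F = T @ E                           # rows: f_i = sum_j T_ij e_j

y = np.array([2.0, -1.0, 3.0])      # components y^i of v in the basis F
v = y @ F                           # v = sum_i y^i f_i

x = T.T @ y                         # Eq. (6.7c): x^j = sum_i y^i T_ij
print(np.allclose(v, x @ E))        # True: the same vector expanded in E
```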
Let U₁ and U₂ be two subspaces of a vector space V such that:

1. apart from the zero vector, which must belong to every subspace, there is no other vector common between U₁ and U₂, and

2. every vector v ∈ V can be uniquely written as a sum of two vectors u₁ and u₂ belonging to U₁ and U₂, respectively:

v = u₁ + u₂, u₁ ∈ U₁, u₂ ∈ U₂ (6.8a)

In such a case, we say that the space V is the direct sum (⊕) of the subspaces U₁ and U₂, and write

V = U₁ ⊕ U₂ (6.8b)

If e₁, e₂, ..., e_r is a basis of U₁ and f₁, f₂, ..., f_s is a basis of U₂, then every v ∈ V can be written as

v = Σ_{i=1}^{r} x^i e_i + Σ_{j=1}^{s} y^j f_j (6.9a)

This shows that e₁, e₂, e₃, ..., e_r, f₁, f₂, f₃, ..., f_s form a basis in V and therefore the dimension of V is the sum of the dimensions of U₁ and U₂:

n = n₁ + n₂ (6.9b)
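As a concrete sketch (ours): in R³, take U₁ as the xy-plane and U₂ as the z-axis; every vector splits uniquely into a U₁ part and a U₂ part, so R³ = U₁ ⊕ U₂, and the dimensions add as in Eq. (6.9b).

```python
import numpy as np

B1 = np.array([[1.0, 0.0, 0.0],     # basis of U1 (the xy-plane)
               [0.0, 1.0, 0.0]])
B2 = np.array([[0.0, 0.0, 1.0]])    # basis of U2 (the z-axis)

v = np.array([3.0, -2.0, 5.0])

# Unique coefficients in v = sum x^i e_i + sum y^j f_j  [Eq. (6.9a)]:
B = np.vstack([B1, B2]).T           # columns form the combined basis of R^3
coeffs = np.linalg.solve(B, v)
u1 = coeffs[:2] @ B1                # the part of v in U1
u2 = coeffs[2:] @ B2                # the part of v in U2
print(u1, u2, np.allclose(u1 + u2, v))   # [ 3. -2.  0.] [0. 0. 5.] True
```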
If T : V → W is a linear operator, then the dimension n of V is the sum of the dimension k of the kernel of T and the dimension r of its range:

n = k + r (6.11d)
In particular, if the kernel has only the zero vector, then k = 0 (because the dimension of a zero vector space is zero) and the mapping is one-to-one. For such a linear operator T, the dimension of the range is the same as the dimension of V.

[Margin remark: A surjective function (also known as a surjection, or onto function) is a function f that maps some element x to every element y; that is, for every y, there is an x such that f(x) = y.]

If T is also ‘onto’ (or surjective), that is, the range R is the whole space W, then V and W have the same dimension.

Let us consider an example.
Example 6.3

Prove the statement that if the kernel has only the single vector 0, then the linear mapping T is one-to-one.

Solution: To prove that T is one-to-one (or injective), we should show that if two vectors v₁ and v₂ are mapped by T into the same vector of W, then they must be equal. But Tv₁ = Tv₂ means that

Tv₁ − Tv₂ = T(v₁ − v₂) = 0

[Margin remark: An injective function is a function f that maps distinct elements to distinct elements; that is, f(x₁) = f(x₂) implies x₁ = x₂.]

But as 0 is the only vector in V mapped to the 0 vector of W, we deduce that

v₁ − v₂ = 0 or v₁ = v₂
Very often, the spaces U, V, W, etc. are the same common space V. Then we can define the product of operators any number of times, because the mapped vector remains in the same space.

We can define powers and polynomials of operators as well. For example, if T : V → V, then

T²v = T(Tv), and so on. (6.12b)
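On a finite-dimensional space, a linear operator T : V → V can be represented by a square matrix, and products and powers of operators become matrix products. A minimal sketch (ours; the rank-2 matrix below is an arbitrary example) also illustrates n = k + r of Eq. (6.11d):

```python
import numpy as np

A = np.array([[1.0, 0.0, 1.0],      # matrix of T acting as Tv = Av
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])     # third row = first + second, so rank 2

# Kernel dimension via SVD: count the (near-)zero singular values.
s = np.linalg.svd(A, compute_uv=False)
k = int(np.sum(s < 1e-12))          # dimension of the kernel
r = np.linalg.matrix_rank(A)        # dimension of the range
print(k, r, k + r == 3)             # 1 2 True   [Eq. (6.11d)]

v = np.array([1.0, 2.0, 3.0])
print(np.allclose(A @ (A @ v), (A @ A) @ v))   # T^2 v = T(Tv): True
```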
Note that in the special theory of relativity, the inner product of space-time position vectors is not positive definite, as you will learn in the second semester course entitled Classical Electrodynamics.
In one convention, the inner product of ‘time-like’ vectors with themselves is
negative, and that of ‘space-like’ vectors with themselves is positive. In
addition, there are null vectors whose product with themselves is zero. (In
another convention, the inner product of ‘time-like’ vectors with themselves is
positive, and that of ‘space-like’ vectors with themselves is negative.)
But in spite of this, the inner product is non-degenerate.
When the inner product is not positive definite, the norm squared can be positive, negative or zero.

A vector with zero norm squared (that is, a vector which is orthogonal to itself) is called a null vector.

A vector with norm squared equal to +1 or −1 is called normalized. So, mutually orthogonal vectors with norm squared equal to +1 or −1 are called orthonormal vectors.
Let us consider an example.
Example 6.4
Show that two non-null, orthogonal vectors are linearly independent.
Solution: Let u and v be two vectors with

⟨u, u⟩ = a ≠ 0, ⟨v, v⟩ = b ≠ 0 and ⟨u, v⟩ = 0

To show that u and v are linearly independent, we must show that a linear combination equated to the zero vector requires both coefficients to be zero. Now, if

cu + dv = 0 (i)

then we have to show that c and d are zero. By taking the inner product of Eq. (i) with v and using ⟨u, v⟩ = 0, we get

d⟨v, v⟩ = db = ⟨0, v⟩ = 0

But since b ≠ 0, therefore d = 0. Similarly, by taking the inner product with u, we can show that c = 0. You can write the steps for this part yourself.
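For instance (our sketch), with the indefinite metric g = diag(1, −1, −1, −1) of special relativity, two non-null vectors orthogonal with respect to g are indeed linearly independent:

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])     # an indefinite metric

def inner(u, v):
    """Indefinite inner product <u, v> = g_ij u^i v^j."""
    return u @ g @ v

u = np.array([1.0, 0.0, 0.0, 0.0])       # <u, u> = +1 (non-null)
v = np.array([0.0, 1.0, 0.0, 0.0])       # <v, v> = -1 (non-null)
print(inner(u, u), inner(v, v), inner(u, v))   # 1.0 -1.0 0.0

M = np.column_stack([u, v])
print(np.linalg.matrix_rank(M) == 2)     # True: linearly independent
```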
SAQ 4
Given a basis {e_i}, the numbers g_ij = ⟨e_i, e_j⟩ [Eq. (6.15a)] form a symmetric matrix. This symmetric matrix contains all the information about the inner product, because if v = Σ_i v^i e_i and u = Σ_j u^j e_j are two vectors, then the bi-linearity of the product in its two arguments implies

⟨v, u⟩ = Σ_{i,j} g_ij v^i u^j (6.15b)
There do exist bases which contain (non-zero) null vectors as basis vectors. But such bases are not orthonormal. You must know how to distinguish between the zero vector and a null vector. We explain this below.

The zero vector always has zero norm: ⟨0, 0⟩ = 0. In the Minkowski space of special relativity, there are non-zero vectors that are called “light-like” vectors. A light-like vector v ≠ 0 has ⟨v, v⟩ = 0. This is possible because the metric of special relativity is not positive definite. There are also vectors for which ⟨v, v⟩ < 0.
The metric of the 4-dimensional vector space of special relativity (called Minkowski space) is:

    ( 1   0   0   0 )
η = ( 0  −1   0   0 )
    ( 0   0  −1   0 )
    ( 0   0   0  −1 )
An example of two different non-zero null vectors n₊ and n₋ which are not orthogonal to each other is:

n₊ = (1, 1, 0, 0)^T, n₋ = (1, −1, 0, 0)^T

Each of these is null: ⟨n₊, n₊⟩ = 1 − 1 = 0 and ⟨n₋, n₋⟩ = 1 − 1 = 0. But

⟨n₊, n₋⟩ = (1)(1)(1) + (−1)(1)(−1) = 2 ≠ 0
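This can be checked directly (our sketch):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric
inner = lambda u, v: u @ eta @ v

n_plus = np.array([1.0, 1.0, 0.0, 0.0])
n_minus = np.array([1.0, -1.0, 0.0, 0.0])

print(inner(n_plus, n_plus))     # 0.0: n+ is null
print(inner(n_minus, n_minus))   # 0.0: n- is null
print(inner(n_plus, n_minus))    # 2.0: null vectors need not be orthogonal
```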
To construct an orthonormal basis, we choose a non-null vector a (with ⟨a, a⟩ ≠ 0) and normalize it: n₁ = a/√|⟨a, a⟩|, so that ε₁ ≡ ⟨n₁, n₁⟩ is +1 or −1.

Let V₁ be the one-dimensional subspace spanned by n₁, and let V₂ be the set of all vectors in V orthogonal to every vector in V₁. Then V₂ is a vector subspace, and every vector v ∈ V can be decomposed as:

v = ε₁⟨v, n₁⟩ n₁ + [v − ε₁⟨v, n₁⟩ n₁]
where the first term is in V₁ and the second term is in V₂. The only vector common to V₁ and V₂ is the zero vector, and this again follows from non-degeneracy.

Therefore, recalling the definition of the direct sum [Eq. (6.8b)], we have:

V = V₁ ⊕ V₂

We can now start with V₂ as the starting space and find a non-null vector b ∈ V₂ such that ⟨b, b⟩ ≠ 0, and construct n₂ = b/√|⟨b, b⟩| with ⟨n₂, n₂⟩ = ε₂ equal to +1 or −1. We proceed in this manner inductively until the whole basis is constructed.
Thus, we have a basis {n_i} with the metric components

(I)_ij = ⟨n_i, n_j⟩ = ε_i δ_ij (no summation on i)

or

I = diag(ε₁, ε₂, ..., ε_n) (6.17)
Note that whichever route we take to choose orthonormal vectors for a basis, the number n₊ of vectors with norm squared +1 and the number n₋ of vectors with norm squared −1 is always the same. As dim V = n = n₊ + n₋ is fixed, so is the number t = n₊ − n₋. The number t is called the signature of the metric.

The Minkowski space of special relativity has one time-like unit vector with ε₀ = +1 and three space-like vectors with ε_i = −1 (i = 1, 2, 3) in any orthonormal basis. Thus, it has signature t = 1 − 3 = −2.
Similarly, the number ε₁ε₂ ... ε_n = det I, the determinant of the matrix I of metric components (in the orthonormal basis), is a characteristic of the metric. If {e_j} is any basis with g_ij = ⟨e_i, e_j⟩ as the metric components, then g ≡ det(g_ij) (which is always non-zero) has the same sign as det I. We write this sign as sgn(g) in general.
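Numerically, n₊ and n₋ can be read off from the signs of the eigenvalues of the symmetric matrix of metric components in any basis, since a change of basis cannot alter these counts (Sylvester's law of inertia). A minimal sketch (ours):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])      # Minkowski metric components

# Metric components g = P^T eta P in some other (random, invertible) basis:
rng = np.random.default_rng(1)
P = rng.normal(size=(4, 4))
g = P.T @ eta @ P

eigs = np.linalg.eigvalsh(g)                # real eigenvalues of symmetric g
n_plus, n_minus = np.sum(eigs > 0), np.sum(eigs < 0)
print(n_plus, n_minus, n_plus - n_minus)    # 1 3 -2: signature t = -2
print(np.sign(np.linalg.det(g)))            # -1.0 = sgn(g), same as det(eta)
```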
So far, you have studied about real vector spaces. Complex vector spaces are
equally important in physics and you need to learn about them too.
If f and g are orthogonal vectors of a Hilbert space, (f, g) = 0, then the Pythagoras theorem holds:

‖f + g‖² = ‖f‖² + ‖g‖² (6.19)
Let e₁, e₂, ..., e_n be unit vectors orthogonal to each other. This means that for every i, j = 1, 2, ..., n,

(e_i, e_j) = δ_ij, where δ_ij = 1 if i = j and δ_ij = 0 if i ≠ j (6.20)
Then, for any vector f,

‖f‖² ≥ Σ_{i=1}^{n} |(e_i, f)|² (6.21)

This inequality is called Bessel's inequality and we now give its proof.
Let us call c_i = (e_i, f) the numbers occurring in the inequality, and let

φ = Σ_{i=1}^{n} c_i e_i (6.22a)

Then

‖φ‖² = Σ_{i,j} (c_i e_i, c_j e_j) = Σ_{i,j} c_i* c_j δ_ij = Σ_i |c_i|² (6.22b)

Moreover, (f, φ) = Σ_i c_i (f, e_i) = Σ_i c_i c_i* = Σ_i |c_i|², and similarly (φ, f) = Σ_i |c_i|². Therefore,

0 ≤ ‖f − φ‖² = (f, f) − (f, φ) − (φ, f) + (φ, φ) = ‖f‖² − Σ_i |c_i|² (6.22c)

which proves the inequality:

‖f‖² ≥ Σ_{i=1}^{n} |(e_i, f)|²

The proof also shows that the inequality becomes an equality when f is exactly equal to φ = Σ_{i=1}^{n} c_i e_i.
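A numerical spot-check of Bessel's inequality (our sketch), with an orthonormal set in C⁵ obtained from a QR decomposition:

```python
import numpy as np

rng = np.random.default_rng(2)

# Columns of Q form an orthonormal set {e_1, e_2, e_3} in C^5:
Q, _ = np.linalg.qr(rng.normal(size=(5, 3)) + 1j * rng.normal(size=(5, 3)))

f = rng.normal(size=5) + 1j * rng.normal(size=5)

c = Q.conj().T @ f                  # c_i = (e_i, f)
lhs = np.linalg.norm(f) ** 2        # ||f||^2
rhs = np.sum(np.abs(c) ** 2)        # sum_i |c_i|^2
print(bool(lhs >= rhs - 1e-12))     # True: Bessel's inequality, Eq. (6.21)
```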
An important consequence is the Schwarz inequality:

|(f, g)| ≤ ‖f‖ ‖g‖ (Schwarz Inequality) (6.23)
This is a special case of the Bessel inequality. There is nothing to prove if one or both vectors are the zero vector. Therefore, we assume that g ≠ 0. Let e = g/‖g‖. Then for this one-member orthonormal set {e}, Bessel's inequality says

‖f‖² ≥ |(e, f)|² (6.24)

that is,

|(e, f)| = |(g/‖g‖, f)| = |(g, f)|/‖g‖ ≤ ‖f‖

which gives |(f, g)| ≤ ‖f‖ ‖g‖.

The triangle inequality states that

‖f + g‖ ≤ ‖f‖ + ‖g‖ (6.25)

To prove it, note that

‖f + g‖² = (f + g, f + g) = ‖f‖² + ‖g‖² + (f, g) + (f, g)*

Now (f, g) + (f, g)* = 2 Re (f, g) ≤ 2 |(f, g)| ≤ 2 ‖f‖ ‖g‖,

and therefore,

‖f + g‖² ≤ ‖f‖² + ‖g‖² + 2 ‖f‖ ‖g‖ = (‖f‖ + ‖g‖)²

which proves the result. The name triangle inequality has an obvious connotation, as can be seen in the diagram (Fig. 6.2).
SAQ 5
Prove the following for any f, g ∈ ℋ:

a) | ‖f‖ − ‖g‖ | ≤ ‖f − g‖

b) ‖f + g‖² + ‖f − g‖² = 2 ‖f‖² + 2 ‖g‖²
A norm also satisfies ‖af‖ = |a| ‖f‖ for any scalar a. The polarization identity can then be used to define an inner product. In other words, isn't every normed space an inner product space?

The answer is that unless the norm satisfies the parallelogram law, it is not possible to prove the linearity property of the inner product.
6.6.4 Orthonormal Bases
We defined a finite orthonormal (o.n.) set of mutually orthogonal vectors of unit
norm while discussing Bessel’s inequality.
We now consider a general o.n. set (containing countably or even uncountably
many vectors). All we need to check is that each vector in the set is of unit
norm and that any two distinct vectors in the set are orthogonal. An o.n. set in a Hilbert space ℋ is called complete if it is not a proper subset of another o.n. set.
A complete o.n. set has the property that the only vector orthogonal to all the members of the set is the zero vector 0. This is so because if f ≠ 0 were such a vector, then we could enlarge the o.n. set by including the unit vector e = f/‖f‖, falsifying the claim that the set cannot be a proper subset of a larger o.n. set.
A complete o.n. set is called an orthonormal basis.
There are infinitely many choices for o.n. bases in a Hilbert space. In Quantum
Mechanics, we encounter Hilbert spaces of a special type in which o.n. bases
have countably many elements. Such spaces are called separable Hilbert
spaces and we shall restrict ourselves only to separable spaces.
Let e₁, ..., e_n, ... be an o.n. basis, and f any vector. Then we can write (“expand”) f as:

f = Σ_{i=1}^{∞} c_i e_i, c_i = (e_i, f) (6.26)
The proof of this statement is as follows:

First, the meaning of the infinite series of vectors in the above expression is this. Let

φ_n = Σ_{i=1}^{n} c_i e_i

Then the claim is that the real positive numbers ‖f − φ_n‖ → 0 as n → ∞.

As seen in the proof of Bessel's inequality, f − φ_n is orthogonal to φ_n, and by the Pythagoras theorem,

‖f − φ_n‖² = ‖f‖² − ‖φ_n‖² = ‖f‖² − Σ_{i=1}^{n} |c_i|²
On the right-hand side, the sum Σ_{i=1}^{n} |c_i|² keeps growing with n, and so the positive numbers ‖f − φ_n‖² keep decreasing with n. There are two possibilities. Either ‖f − φ_n‖ decreases to zero, in which case ‖f − φ_n‖ → 0 as was to be proved. Or, from some value i = N onwards, all coefficients c_i are zero. In this case, we can see that for any integer j,

(e_j, f − Σ_{i=1}^{N} c_i e_i) = c_j − Σ_{i=1}^{N} c_i δ_ij = 0

so that f − φ_N is orthogonal to every member of the o.n. basis and must therefore be the zero vector. Since c_j = 0 for j > N, we can again write f = Σ_{i=1}^{∞} c_i e_i.
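The partial sums φ_n can be watched converging numerically (our sketch, in a finite-dimensional stand-in for a separable Hilbert space):

```python
import numpy as np

rng = np.random.default_rng(3)

# Columns of a random orthogonal matrix form an o.n. basis of R^8:
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))
f = rng.normal(size=8)

c = Q.T @ f                              # c_i = (e_i, f)
for n in (2, 4, 6, 8):
    phi_n = Q[:, :n] @ c[:n]             # phi_n = sum_{i<=n} c_i e_i
    print(n, np.linalg.norm(f - phi_n))  # ||f - phi_n|| decreases to 0
```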
6.7 SUMMARY
In this unit, we have covered the following concepts:
Definition of a vector space (Sec. 6.2, [Eqs. (6.1a to l)]), linear independence of vectors (Sec. 6.2.1, [Eqs. (6.3a to c)]) and linear subspaces (Sec. 6.2.2).
The concepts of basis and dimension of a vector space (Sec. 6.3),
change of bases (Sec. 6.3.1, [Eqs. (6.5a to c)]), change of components
with change in bases (Sec. 6.3.2, [Eqs. (6.7a to d)]), direct sum of vector
spaces (Sec. 6.3.3, [Eqs. (6.8a, b)]).
Definition of linear operators (Sec. 6.4, [Eqs. (6.10a to c)]), product of
operators (Sec. 6.4.1, [Eqs. (6.12a, b)]) and inner product or metric
(Sec. 6.4.2, [Eqs. (6.13a to c)]).
Definition of orthonormal vectors (Sec. 6.5, [Eq. (6.14)]), orthonormal bases (Sec. 6.5.1, [Eqs. (6.15a to c)]), construction of orthonormal bases (Sec. 6.5.2, [Eqs. (6.16 and 6.17)]) and signature of the metric (Sec. 6.5.3).

Definition and properties of complex vector spaces and Hilbert space (Sec. 6.6), Hermitian inner product (Sec. 6.6.1, [Eqs. (6.18a to c)]),
Pythagoras theorem and Bessel's inequality (Sec. 6.6.2, [Eqs. (6.19 and 6.21)]), Schwarz and triangle inequalities (Sec. 6.6.3, [Eqs. (6.23 and 6.25)]) and orthonormal bases (Sec. 6.6.4, [Eq. (6.26)]).
(x₁, x₂, ..., x_n) + (y₁, y₂, ..., y_n) = (x₁ + y₁, x₂ + y₂, ..., x_n + y_n)

and a(x₁, x₂, ..., x_n) = (ax₁, ax₂, ..., ax_n) for a scalar a.
show that there is a positive constant a > 0 such that f = ag.
b) All real 2 × 2 matrices also form a real vector space, with addition and multiplication by a real number e defined entry-wise (writing a matrix as (a b; c d), rows separated by a semicolon):

(a b; c d) + (a′ b′; c′ d′) = (a + a′ b + b′; c + c′ d + d′), e(a b; c d) = (ea eb; ec ed)

The zero vector is the zero matrix (0 0; 0 0), and the four matrices

(1 0; 0 0), (0 1; 0 0), (0 0; 1 0), (0 0; 0 1)

form a basis, so this vector space is 4-dimensional.
2. We use the hint given in the text: if ax² + bx³ = 0 for all values of x in the interval [0, 1], it is enough to choose x = 1, which gives a + b = 0, and, say, x = 1/2, which gives a/4 + b/8 = 0. These equations give a = 0 and b = 0, proving that x² and x³ are linearly independent.
3. a(1, 0, 1)^T + b(1, 1, 0)^T + c(0, 1, 0)^T = (a + b, b + c, a)^T = (0, 0, 0)^T

implies

a + b = 0, b + c = 0, a = 0

that is, a = b = c = 0, so the three vectors are linearly independent. An arbitrary vector (x, y, z)^T can be written as

(x, y, z)^T = z(1, 0, 1)^T + (x − z)(1, 1, 0)^T + (y − x + z)(0, 1, 0)^T

and the coefficients are uniquely determined. The dimension of the vector space is 3.
because for any real number a, we have ‖ag‖ = |a| ‖g‖.
b) Parallelogram law:

‖f + g‖² = (f + g, f + g) = (f, f) + (g, g) + (f, g) + (g, f) = ‖f‖² + ‖g‖² + (f, g) + (g, f)

‖f − g‖² = (f − g, f − g) = (f, f) + (g, g) − (f, g) − (g, f) = ‖f‖² + ‖g‖² − (f, g) − (g, f)

Adding the two equations gives ‖f + g‖² + ‖f − g‖² = 2 ‖f‖² + 2 ‖g‖².
Terminal Questions
1. The vector space axioms are satisfied: the zero vector is (0, 0, …, 0), and
the space is n-dimensional with a basis of n linearly independent vectors:
(1, 0, …, 0), (0, 1, …, 0), … (0, 0, …, 1).
a) By the triangle inequality, ‖f‖ = ‖(f − g) + g‖ ≤ ‖f − g‖ + ‖g‖. Therefore,

‖f‖ − ‖g‖ ≤ ‖f − g‖

Similarly,

‖g‖ = ‖(g − f) + f‖ ≤ ‖g − f‖ + ‖f‖

which gives

‖g‖ − ‖f‖ ≤ ‖f − g‖

When a real number x and its negative −x are both less than or equal to a positive real number y, then |x| ≤ y. Using this,

| ‖f‖ − ‖g‖ | ≤ ‖f − g‖
b) ‖f + g‖² = (f + g, f + g) = (f, f) + (g, g) + (f, g) + (g, f)

‖f − g‖² = (f − g, f − g) = (f, f) + (g, g) − (f, g) − (g, f)

Using

(g, f) = (f, g)*, (ig, ig) = (i)*(i)(g, g) = (g, g),

(f, ig) = i(f, g), (ig, f) = −i(f, g)*

we get:

‖f + g‖² − ‖f − g‖² = 2(f, g) + 2(f, g)* = 4 Re (f, g),

and

‖f + ig‖² − ‖f − ig‖² = 2i(f, g) − 2i(f, g)* = 2i[(f, g) − (f, g)*] = −4 Im (f, g)

Therefore,

‖f + g‖² − ‖f − g‖² − i[‖f + ig‖² − ‖f − ig‖²] = 4 Re (f, g) + 4i Im (f, g) = 4(f, g)

which is the polarization identity.
c) We first show that Re (f, g) ≤ ‖f‖ ‖g‖. As ‖f + g‖ ≤ ‖f‖ + ‖g‖,

‖f + g‖² ≤ (‖f‖ + ‖g‖)² = ‖f‖² + ‖g‖² + 2 ‖f‖ ‖g‖

But

‖f + g‖² = ‖f‖² + ‖g‖² + 2 Re (f, g)

This shows that

Re (f, g) ≤ ‖f‖ ‖g‖

If Re (f, g) = ‖f‖ ‖g‖ (the equality case), then for any real a,

‖f − ag‖² = ‖f‖² + a² ‖g‖² − 2a Re (f, g) = ‖f‖² + a² ‖g‖² − 2a ‖f‖ ‖g‖ = (‖f‖ − a ‖g‖)²

Choosing a = ‖f‖/‖g‖ > 0 gives

‖f − ag‖ = 0 or f = ag
[Fig. 6.2: A triangle whose two sides represent f and g and whose third side represents f + g, illustrating the triangle inequality ‖f + g‖ ≤ ‖f‖ + ‖g‖.]