Inner Product - Peter

The document defines and discusses inner product spaces. It begins by defining an inner product on a vector space as a form satisfying positivity, additivity, homogeneity, and conjugate symmetry; an inner product space is a vector space equipped with an inner product. Examples of inner products are provided, such as the Euclidean inner product on $\mathbb{R}^n$, followed by properties that follow from the definition, such as the Cauchy-Schwarz inequality. The document also introduces orthonormal bases and describes the Gram-Schmidt process for constructing an orthonormal basis from a linearly independent set of vectors.

Uploaded by Alecsandra Rusu

3 Inner product spaces

3.1 Basic definitions and results

Up to now we have studied vector spaces, linear maps, and special classes of linear maps. We can decide whether two vectors are equal, but we have no notion of length, so we cannot compare two vectors; moreover, we cannot say anything about their relative position.

In a vector space one can define the norm of a vector and the inner product of two vectors. The notion of norm permits us to measure the length of vectors and hence to compare them. The inner product, on the one hand, induces a norm, so lengths can be measured; on the other hand (at least in the case of real vector spaces), it lets us measure the angle between two vectors, so a full geometry can be constructed. In the case of complex vector spaces the angle between two vectors is not well defined, but orthogonality is.
Definition 3.1. An inner product on a vector space $V$ over the field $\mathbb{F}$ is a function $\langle\cdot,\cdot\rangle : V \times V \to \mathbb{F}$ with the properties:

(positivity and definiteness) $\langle v,v\rangle \ge 0$, and $\langle v,v\rangle = 0$ iff $v = 0$;

(additivity in the first slot) $\langle u+v, w\rangle = \langle u,w\rangle + \langle v,w\rangle$, for all $u,v,w \in V$;

(homogeneity in the first slot) $\langle \lambda v, w\rangle = \lambda\langle v,w\rangle$ for all $\lambda \in \mathbb{F}$ and $v,w \in V$;

(conjugate symmetry) $\langle v,w\rangle = \overline{\langle w,v\rangle}$ for all $v,w \in V$.

An inner product space is a pair $(V, \langle\cdot,\cdot\rangle)$, where $V$ is a vector space and $\langle\cdot,\cdot\rangle$ is an inner product on $V$.

The most important example of an inner product space is $\mathbb{F}^n$. Let $v = (v_1, \dots, v_n)$ and $w = (w_1, \dots, w_n)$ and define the inner product by
$$\langle v,w\rangle = v_1\overline{w_1} + \cdots + v_n\overline{w_n}.$$
This is the typical example of an inner product, called the Euclidean inner product, and when $\mathbb{F}^n$ is referred to as an inner product space, one should assume that the inner product is the Euclidean one, unless explicitly stated otherwise.

Example 3.2. Let $A = \begin{pmatrix} a & b \\ b & c \end{pmatrix} \in M_2(\mathbb{R})$ be a positive definite matrix, that is, $a > 0$ and $\det(A) > 0$. Then for every $u = (u_1, u_2)$, $v = (v_1, v_2) \in \mathbb{R}^2$ we define
$$\langle u,v\rangle = (v_1\ v_2)\, A \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}.$$
It can easily be verified that $\langle\cdot,\cdot\rangle$ is an inner product on the real linear space $\mathbb{R}^2$.

If $A = I_2$ we obtain the usual inner product $\langle u,v\rangle = u_1 v_1 + u_2 v_2$.
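As a quick numerical illustration (a minimal sketch: the helper name `ip` and the sample matrix below are ours, not part of the text), one can check a few of the axioms for this matrix-induced inner product:

```python
import math

# Hypothetical sketch of Example 3.2: <u, v> = (v1 v2) A (u1 u2)^T for a
# symmetric positive definite A = [[a, b], [b, c]] with a > 0, det A > 0.

def ip(u, v, A):
    """Inner product on R^2 induced by the symmetric matrix A."""
    return sum(v[i] * A[i][j] * u[j] for i in range(2) for j in range(2))

A = [[2.0, 1.0], [1.0, 3.0]]   # a = 2 > 0, det A = 5 > 0: positive definite
u, v, w = (1.0, 2.0), (3.0, -1.0), (0.5, 4.0)

assert ip(u, u, A) > 0                                   # positivity
assert math.isclose(ip(u, v, A), ip(v, u, A))            # symmetry (real case)
upw = (u[0] + w[0], u[1] + w[1])
assert math.isclose(ip(upw, v, A), ip(u, v, A) + ip(w, v, A))  # additivity
I2 = [[1.0, 0.0], [0.0, 1.0]]
assert math.isclose(ip(u, v, I2), u[0] * v[0] + u[1] * v[1])   # A = I_2 case
```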
From the definition one can easily deduce the following properties of an inner product:
$$\langle v, 0\rangle = \langle 0, v\rangle = 0,$$
$$\langle u, v+w\rangle = \langle u,v\rangle + \langle u,w\rangle,$$
$$\langle u, \lambda v\rangle = \overline{\lambda}\,\langle u,v\rangle,$$
for all $u,v,w \in V$ and $\lambda \in \mathbb{F}$.
Definition 3.3. Let $V$ be a vector space over $\mathbb{F}$. A function
$$\|\cdot\| : V \to \mathbb{R}$$
is called a norm on $V$ if:

(positivity) $\|v\| \ge 0$ for all $v \in V$, and $\|v\| = 0$ iff $v = 0$;

(homogeneity) $\|\lambda v\| = |\lambda|\,\|v\|$, $\forall \lambda \in \mathbb{F}$, $\forall v \in V$;

(triangle inequality) $\|u+v\| \le \|u\| + \|v\|$, $\forall u,v \in V$.

A normed space is a pair $(V, \|\cdot\|)$, where $V$ is a vector space and $\|\cdot\|$ is a norm on $V$.
Example 3.4. On the real linear space $\mathbb{R}^n$ one can define a norm in several ways. Indeed, for any $x = (x_1, x_2, \dots, x_n) \in \mathbb{R}^n$ define its norm as $\|x\| = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2}$. One can easily verify that the axioms in the definition of a norm are satisfied. This norm is called the Euclidean norm.

More generally, for any $p \in \mathbb{R}$, $p \ge 1$, we can define
$$\|x\|_p = \left(|x_1|^p + |x_2|^p + \cdots + |x_n|^p\right)^{\frac{1}{p}},$$
the so-called $p$-norm on $\mathbb{R}^n$.

Another way to define a norm on $\mathbb{R}^n$ is $\|x\|_\infty = \max\{|x_1|, |x_2|, \dots, |x_n|\}$. This is the so-called maximum norm.
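A minimal Python sketch of these three norms (function names are ours):

```python
import math

# The three norms of Example 3.4 on R^n, for the real (list-of-floats) case.

def euclidean_norm(x):
    return math.sqrt(sum(t * t for t in x))

def p_norm(x, p):
    """The p-norm (p >= 1); note the absolute values, needed for negative entries."""
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

def max_norm(x):
    return max(abs(t) for t in x)

x = [3.0, -4.0]
assert euclidean_norm(x) == 5.0                      # sqrt(9 + 16)
assert math.isclose(p_norm(x, 2), euclidean_norm(x)) # p = 2 recovers it
assert p_norm(x, 1) == 7.0                           # |3| + |-4|
assert max_norm(x) == 4.0
# triangle inequality spot-check
y = [1.0, 2.0]
s = [x[0] + y[0], x[1] + y[1]]
assert euclidean_norm(s) <= euclidean_norm(x) + euclidean_norm(y)
```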
Definition 3.5. Let $X$ be a nonempty set. A function $d : X \times X \to \mathbb{R}$ satisfying the following properties:

(positivity) $d(x,y) \ge 0$, $\forall x,y \in X$, and $d(x,y) = 0$ iff $x = y$;

(symmetry) $d(x,y) = d(y,x)$, $\forall x,y \in X$;

(triangle inequality) $d(x,y) \le d(x,z) + d(z,y)$, $\forall x,y,z \in X$;

is called a metric or distance on $X$. A set $X$ with a metric defined on it is called a metric space.
Example 3.6. Let $X$ be an arbitrary set. One can define a distance on $X$ by
$$d(x,y) = \begin{cases} 0, & \text{if } x = y \\ 1, & \text{otherwise.} \end{cases}$$
This metric is called the discrete metric on $X$. On $\mathbb{R}^n$ the Chebyshev distance is defined as
$$d(x,y) = \max_{1 \le i \le n} |x_i - y_i|, \quad x = (x_1, \dots, x_n),\ y = (y_1, \dots, y_n) \in \mathbb{R}^n.$$

In this course we are mainly interested in inner product spaces. But we should point out that an inner product on $V$ defines a norm, by $\|v\| = \sqrt{\langle v,v\rangle}$ for $v \in V$, and a norm on $V$ defines a metric, by $d(v,w) = \|w - v\|$ for $v,w \in V$.

For an inner product space $(V, \langle\cdot,\cdot\rangle)$ the following identity is true:
$$\left\langle \sum_{i=1}^{n} \alpha_i v_i,\ \sum_{j=1}^{m} \beta_j w_j \right\rangle = \sum_{i=1}^{n} \sum_{j=1}^{m} \alpha_i \overline{\beta_j}\, \langle v_i, w_j\rangle.$$

Definition 3.7. Two vectors $u, v \in V$ are said to be orthogonal if $\langle u,v\rangle = 0$.

Theorem 3.8. (Parallelogram law) Let $V$ be an inner product space and $u,v \in V$. Then
$$\|u+v\|^2 + \|u-v\|^2 = 2\left(\|u\|^2 + \|v\|^2\right).$$

Proof.
$$\|u+v\|^2 + \|u-v\|^2 = \langle u+v, u+v\rangle + \langle u-v, u-v\rangle$$
$$= \langle u,u\rangle + \langle u,v\rangle + \langle v,u\rangle + \langle v,v\rangle + \langle u,u\rangle - \langle u,v\rangle - \langle v,u\rangle + \langle v,v\rangle$$
$$= 2\left(\|u\|^2 + \|v\|^2\right).$$

Theorem 3.9. (Pythagorean Theorem) Let $V$ be an inner product space, and $u,v \in V$ orthogonal vectors. Then
$$\|u+v\|^2 = \|u\|^2 + \|v\|^2.$$

Proof.
$$\|u+v\|^2 = \langle u+v, u+v\rangle = \langle u,u\rangle + \langle u,v\rangle + \langle v,u\rangle + \langle v,v\rangle = \|u\|^2 + \|v\|^2,$$
since $\langle u,v\rangle = \langle v,u\rangle = 0$.

Now we are going to prove one of the most important inequalities in mathematics, namely the Cauchy-Schwarz inequality. There are several methods of proof for it; we will give one related to our aims.

Consider $u,v \in V$ with $v \ne 0$. We want to write $u$ as the sum of a vector collinear with $v$ and a vector orthogonal to $v$. Let $\lambda \in \mathbb{F}$ and write $u = \lambda v + (u - \lambda v)$. Imposing now the condition that $v$ be orthogonal to $u - \lambda v$, one obtains
$$0 = \langle u - \lambda v, v\rangle = \langle u,v\rangle - \lambda \|v\|^2,$$
so one has to choose $\lambda = \frac{\langle u,v\rangle}{\|v\|^2}$, and the decomposition is
$$u = \frac{\langle u,v\rangle}{\|v\|^2}\, v + \left( u - \frac{\langle u,v\rangle}{\|v\|^2}\, v \right).$$
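This decomposition is easy to check numerically; here is a small sketch with the Euclidean inner product on $\mathbb{R}^3$ (the variable names are ours):

```python
import math

# Sketch of the decomposition u = λv + (u - λv) with λ = <u,v>/||v||^2,
# using the Euclidean inner product on R^3.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

u = [2.0, 1.0, -1.0]
v = [1.0, 1.0, 0.0]

lam = dot(u, v) / dot(v, v)              # λ = <u,v> / ||v||^2
par = [lam * t for t in v]               # component collinear with v
perp = [a - b for a, b in zip(u, par)]   # component orthogonal to v

assert math.isclose(dot(perp, v), 0.0, abs_tol=1e-12)   # (u - λv) ⊥ v
# Cauchy-Schwarz then follows: |<u,v>| <= ||u|| ||v||
assert abs(dot(u, v)) <= math.sqrt(dot(u, u)) * math.sqrt(dot(v, v))
```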
Theorem 3.10. (Cauchy-Schwarz Inequality) Let $V$ be an inner product space and $u,v \in V$. Then
$$|\langle u,v\rangle| \le \|u\|\,\|v\|.$$
Equality holds iff one of $u$, $v$ is a scalar multiple of the other ($u$ and $v$ are collinear).

Proof. Let $u,v \in V$. If $v = 0$ both sides of the inequality are $0$ and the desired result holds. Suppose that $v \ne 0$ and write
$$u = \frac{\langle u,v\rangle}{\|v\|^2}\, v + \left( u - \frac{\langle u,v\rangle}{\|v\|^2}\, v \right).$$
Taking into account that the vectors $\frac{\langle u,v\rangle}{\|v\|^2}\, v$ and $u - \frac{\langle u,v\rangle}{\|v\|^2}\, v$ are orthogonal, by the Pythagorean theorem we obtain
$$\|u\|^2 = \left\| \frac{\langle u,v\rangle}{\|v\|^2}\, v \right\|^2 + \left\| u - \frac{\langle u,v\rangle}{\|v\|^2}\, v \right\|^2 = \frac{|\langle u,v\rangle|^2}{\|v\|^2} + \left\| u - \frac{\langle u,v\rangle}{\|v\|^2}\, v \right\|^2 \ge \frac{|\langle u,v\rangle|^2}{\|v\|^2},$$
an inequality equivalent to the one in the theorem. We have equality iff $u - \frac{\langle u,v\rangle}{\|v\|^2}\, v = 0$, that is, iff $u$ is a scalar multiple of $v$.

3.2 Orthonormal Bases

Definition 3.11. Let $(V, \langle\cdot,\cdot\rangle)$ be an inner product space and let $I$ be an arbitrary index set. A family of vectors $A = \{e_i \in V \mid i \in I\}$ is called an orthogonal family if $\langle e_i, e_j\rangle = 0$ for every $i,j \in I$, $i \ne j$. The family $A$ is called orthonormal if it is orthogonal and $\|e_i\| = 1$ for every $i \in I$.

One of the reasons one studies orthonormal families is that in such special bases the computations are much simpler.
Proposition 3.12. If $(e_1, e_2, \dots, e_m)$ is an orthonormal family of vectors in $V$, then
$$\|\alpha_1 e_1 + \alpha_2 e_2 + \cdots + \alpha_m e_m\|^2 = |\alpha_1|^2 + |\alpha_2|^2 + \cdots + |\alpha_m|^2$$
for all $\alpha_1, \alpha_2, \dots, \alpha_m \in \mathbb{F}$.

Proof. Apply the Pythagorean Theorem.
Corollary 3.13. Every orthonormal list of vectors is linearly independent.

Proof. Let $(e_1, e_2, \dots, e_m)$ be an orthonormal list of vectors in $V$ and $\alpha_1, \alpha_2, \dots, \alpha_m \in \mathbb{F}$ with
$$\alpha_1 e_1 + \alpha_2 e_2 + \cdots + \alpha_m e_m = 0.$$
It follows that $|\alpha_1|^2 + |\alpha_2|^2 + \cdots + |\alpha_m|^2 = 0$, that is, $\alpha_j = 0$ for $j = 1, \dots, m$.

An orthonormal basis of an inner product space $V$ is a basis of $V$ which is also an orthonormal list of $V$. It is clear that every orthonormal list of vectors of length $\dim V$ is an orthonormal basis (because it is linearly independent, being orthonormal).
Theorem 3.14. Let $(e_1, e_2, \dots, e_n)$ be an orthonormal basis of an inner product space $V$. If $v = \alpha_1 e_1 + \alpha_2 e_2 + \cdots + \alpha_n e_n \in V$, then $\alpha_i = \langle v, e_i\rangle$ for all $i \in \{1, 2, \dots, n\}$, and
$$\|v\|^2 = \sum_{i=1}^{n} |\langle v, e_i\rangle|^2.$$

Proof. Since $v = \alpha_1 e_1 + \alpha_2 e_2 + \cdots + \alpha_n e_n$, by taking the inner product of both sides with $e_i$ we have
$$\langle v, e_i\rangle = \alpha_1 \langle e_1, e_i\rangle + \alpha_2 \langle e_2, e_i\rangle + \cdots + \alpha_i \langle e_i, e_i\rangle + \cdots + \alpha_n \langle e_n, e_i\rangle = \alpha_i.$$
The second assertion comes from applying the Pythagorean Theorem several times.
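A small numerical illustration of the theorem in $\mathbb{R}^2$ (the orthonormal basis chosen below is our own example, not from the text):

```python
import math

# Theorem 3.14 in R^2 with the orthonormal basis e1 = (1,1)/√2, e2 = (1,-1)/√2:
# the coordinates of v are α_i = <v, e_i>, and ||v||^2 = Σ |<v, e_i>|^2.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

s = 1.0 / math.sqrt(2.0)
e1, e2 = [s, s], [s, -s]
v = [3.0, 1.0]

a1, a2 = dot(v, e1), dot(v, e2)                     # coordinates via inner products
recon = [a1 * e1[i] + a2 * e2[i] for i in range(2)] # rebuild v from them
assert all(math.isclose(r, t) for r, t in zip(recon, v))
assert math.isclose(a1 ** 2 + a2 ** 2, dot(v, v))   # norm identity
```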
By now we have a picture of the usefulness of orthonormal bases. But how does one find them? The next result gives an answer to this question. It is a well-known algorithm in linear algebra, called the Gram-Schmidt procedure: a method for turning a linearly independent list into an orthonormal one with the same span as the original one.
Theorem 3.15. (Gram-Schmidt) If $(v_1, v_2, \dots, v_m)$ is a linearly independent list of vectors in $V$, then there exists an orthonormal list of vectors $(e_1, \dots, e_m)$ in $V$ such that
$$\mathrm{span}(v_1, v_2, \dots, v_k) = \mathrm{span}(e_1, e_2, \dots, e_k)$$
for every $k \in \{1, 2, \dots, m\}$.

Proof. Let $(v_1, v_2, \dots, v_m)$ be a linearly independent list of vectors. The family of orthonormal vectors $(e_1, e_2, \dots, e_m)$ will be constructed inductively. Start with $e_1 = \frac{v_1}{\|v_1\|}$. Suppose now that $j > 1$ and that an orthonormal family $(e_1, e_2, \dots, e_{j-1})$ has been constructed such that
$$\mathrm{span}(v_1, v_2, \dots, v_{j-1}) = \mathrm{span}(e_1, e_2, \dots, e_{j-1}).$$
Consider
$$e_j = \frac{v_j - \langle v_j, e_1\rangle e_1 - \cdots - \langle v_j, e_{j-1}\rangle e_{j-1}}{\|v_j - \langle v_j, e_1\rangle e_1 - \cdots - \langle v_j, e_{j-1}\rangle e_{j-1}\|}.$$
Since the list $(v_1, v_2, \dots, v_m)$ is linearly independent, $v_j$ is not in $\mathrm{span}(v_1, v_2, \dots, v_{j-1})$, and thus not in $\mathrm{span}(e_1, e_2, \dots, e_{j-1})$. Hence the vector in the numerator is nonzero, $e_j$ is well defined, and $\|e_j\| = 1$. By direct computation it follows that for $1 \le k < j$ one has
$$\langle e_j, e_k\rangle = \left\langle \frac{v_j - \langle v_j, e_1\rangle e_1 - \cdots - \langle v_j, e_{j-1}\rangle e_{j-1}}{\|v_j - \langle v_j, e_1\rangle e_1 - \cdots - \langle v_j, e_{j-1}\rangle e_{j-1}\|},\ e_k \right\rangle = \frac{\langle v_j, e_k\rangle - \langle v_j, e_k\rangle}{\|v_j - \langle v_j, e_1\rangle e_1 - \cdots - \langle v_j, e_{j-1}\rangle e_{j-1}\|} = 0,$$
thus $(e_1, e_2, \dots, e_j)$ is an orthonormal family. By the definition of $e_j$ one can see that $v_j \in \mathrm{span}(e_1, e_2, \dots, e_j)$, which gives (together with our induction hypothesis)
$$\mathrm{span}(v_1, v_2, \dots, v_j) \subseteq \mathrm{span}(e_1, e_2, \dots, e_j).$$
Both lists being linearly independent (the first one by hypothesis and the second one by orthonormality), the generated subspaces have the same dimension $j$, so they are equal.
Remark 3.16. If in the Gram-Schmidt process we do not normalize the vectors, we obtain an orthogonal basis instead of an orthonormal one.
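The inductive construction in the proof can be sketched as a short routine for the Euclidean inner product (a minimal sketch; the function names are ours):

```python
import math

# Gram-Schmidt (Theorem 3.15) for the Euclidean inner product on R^n:
# turn a linearly independent list into an orthonormal one, span by span.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def gram_schmidt(vs):
    es = []
    for v in vs:
        # subtract the components along the already-built e_1, ..., e_{j-1}
        w = list(v)
        for e in es:
            c = dot(v, e)
            w = [wi - c * ei for wi, ei in zip(w, e)]
        n = math.sqrt(dot(w, w))          # nonzero by linear independence
        es.append([wi / n for wi in w])   # normalize
    return es

vs = [[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]
es = gram_schmidt(vs)
for i in range(3):
    for j in range(3):
        expected = 1.0 if i == j else 0.0
        assert math.isclose(dot(es[i], es[j]), expected, abs_tol=1e-12)
```

Dropping the normalization step (per Remark 3.16) would yield an orthogonal, rather than orthonormal, list.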
Now we can state the main results of this section.
Corollary 3.17. Every finite-dimensional inner product space has an orthonormal basis.

Proof. Choose a basis of $V$, apply the Gram-Schmidt procedure to it, and obtain an orthonormal list of length equal to $\dim V$. Being linearly independent, this list is a basis.

The next proposition shows that any orthonormal list can be extended to an orthonormal basis.


Proposition 3.18. Every orthonormal family of vectors can be extended to an orthonormal basis of $V$.

Proof. Suppose $(e_1, e_2, \dots, e_m)$ is an orthonormal family of vectors. Being linearly independent, it can be extended to a basis $(e_1, e_2, \dots, e_m, v_{m+1}, \dots, v_n)$. Applying now the Gram-Schmidt procedure to $(e_1, e_2, \dots, e_m, v_{m+1}, \dots, v_n)$, we obtain the list $(e_1, e_2, \dots, e_m, f_{m+1}, \dots, f_n)$ (note that the Gram-Schmidt procedure leaves the first $m$ entries unchanged, these being already orthonormal). Hence we have an extension to an orthonormal basis.
Corollary 3.19. Suppose that T P EndpV q. If T has an upper triangular
form with respect to some basis of V , then T has an upper triangular form
with respect to some orthonormal basis of V .
Corollary 3.20. Suppose that V is a complex vector space and
T P EndpV q. Then T has an upper triangular form with respect to some
orthonormal basis of V .

3.3 Orthogonal projections and minimization problems

Let $U \subseteq V$ be a subset of an inner product space $V$. The orthogonal complement of $U$, denoted by $U^\perp$, is the set of all vectors in $V$ which are orthogonal to every vector in $U$, i.e.
$$U^\perp = \{v \in V \mid \langle v,u\rangle = 0,\ \forall u \in U\}.$$
It can easily be verified that $U^\perp$ is a subspace of $V$, that $V^\perp = \{0\}$ and $\{0\}^\perp = V$, as well as that $U_1 \subseteq U_2 \Rightarrow U_2^\perp \subseteq U_1^\perp$.


Theorem 3.21. If $U$ is a subspace of $V$, then
$$V = U \oplus U^\perp.$$

Proof. Suppose that $U$ is a subspace of $V$. We will show that $V = U + U^\perp$. Let $\{e_1, \dots, e_m\}$ be an orthonormal basis of $U$ and $v \in V$. We have
$$v = \left(\langle v, e_1\rangle e_1 + \cdots + \langle v, e_m\rangle e_m\right) + \left(v - \langle v, e_1\rangle e_1 - \cdots - \langle v, e_m\rangle e_m\right).$$
Denote the first vector by $u$ and the second by $w$. Clearly $u \in U$. For each $j \in \{1, 2, \dots, m\}$ one has
$$\langle w, e_j\rangle = \langle v, e_j\rangle - \langle v, e_j\rangle = 0.$$
Thus $w$ is orthogonal to every vector in the basis of $U$, that is, $w \in U^\perp$; consequently
$$V = U + U^\perp.$$
We will show now that $U \cap U^\perp = \{0\}$. Suppose that $v \in U \cap U^\perp$. Then $v$ is orthogonal to every vector in $U$, hence $\langle v,v\rangle = 0$, that is, $v = 0$. The relations $V = U + U^\perp$ and $U \cap U^\perp = \{0\}$ imply the conclusion of the theorem.
Corollary 3.22. If $U_1$, $U_2$ are subspaces of $V$ then
$$U_1 = (U_1^\perp)^\perp,$$
$$(U_1 + U_2)^\perp = U_1^\perp \cap U_2^\perp,$$
$$(U_1 \cap U_2)^\perp = U_1^\perp + U_2^\perp.$$

Proof. Your job.


In a real inner product space we can define the angle between two vectors:
$$\widehat{(v,w)} = \arccos \frac{\langle v,w\rangle}{\|v\|\,\|w\|}.$$
We have
$$v \perp w \iff \widehat{(v,w)} = \frac{\pi}{2}.$$
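A quick numerical check of the angle formula (our code, for the Euclidean inner product on $\mathbb{R}^2$):

```python
import math

# Angle between two vectors in a real inner product space (Euclidean case).

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def angle(v, w):
    c = dot(v, w) / (math.sqrt(dot(v, v)) * math.sqrt(dot(w, w)))
    return math.acos(max(-1.0, min(1.0, c)))   # clamp against rounding errors

assert math.isclose(angle([1.0, 0.0], [0.0, 2.0]), math.pi / 2)        # orthogonal
assert math.isclose(angle([1.0, 1.0], [2.0, 2.0]), 0.0, abs_tol=1e-6)  # collinear
```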

3.4 Linear manifolds

Let $V$ be a vector space over the field $\mathbb{F}$.

Definition 3.23. A set $L = v_0 + V_L = \{v_0 + v \mid v \in V_L\}$, where $v_0 \in V$ is a vector and $V_L \subseteq V$ is a subspace of $V$, is called a linear manifold (or linear variety). The subspace $V_L$ is called the director subspace of the linear manifold.

Remark 3.24. One can easily verify the following:

if $v_0 \in V_L$ then $L = V_L$;

$v_0 \in L$, because $v_0 = v_0 + 0 \in v_0 + V_L$;

for $v_1, v_2 \in L$ we have $v_1 - v_2 \in V_L$;

for every $v_1 \in L$ we have $L = v_1 + V_L$;

$V_{L_1} = V_{L_2}$ if $L_1 = L_2$, i.e. the director subspace of a linear manifold is uniquely determined.
Definition 3.25. We would like to emphasize that:
1. The dimension of a linear manifold is the dimension of its director subspace.
2. Two linear manifolds $L_1$ and $L_2$ are called orthogonal if $V_{L_1} \perp V_{L_2}$.
3. Two linear manifolds $L_1$ and $L_2$ are called parallel if $V_{L_1} \subseteq V_{L_2}$ or $V_{L_2} \subseteq V_{L_1}$.

3.4.1 The equations of a linear manifold

Let $L = v_0 + V_L$ be a linear manifold in a finite-dimensional vector space $V$. For $\dim L = k \le n = \dim V$ one can choose in the director subspace $V_L$ a basis $\{v_1, \dots, v_k\}$. We have
$$L = \{v = v_0 + \lambda_1 v_1 + \cdots + \lambda_k v_k \mid \lambda_i \in \mathbb{F},\ i = 1, \dots, k\}.$$
We can consider an arbitrary (fixed) basis of $V$, let us say $E = \{e_1, \dots, e_n\}$, and if we use column vectors for the coordinates in this basis, i.e. $v_{[E]} = (x_1, \dots, x_n)^T$, $v_{0[E]} = (x_{01}, \dots, x_{0n})^T$, $v_{j[E]} = (x_{1j}, \dots, x_{nj})^T$, $j = 1, \dots, k$, one has the parametric equations of the linear manifold:
$$\begin{cases} x_1 = x_{01} + \lambda_1 x_{11} + \cdots + \lambda_k x_{1k} \\ \quad \vdots \\ x_n = x_{0n} + \lambda_1 x_{n1} + \cdots + \lambda_k x_{nk}. \end{cases}$$
The rank of the matrix $(x_{ij})_{i=1,\dots,n;\ j=1,\dots,k}$ is $k$, because the vectors $v_1, \dots, v_k$ are linearly independent.

It is worthwhile to mention that:
1. a linear manifold of dimension one is called a line.
2. a linear manifold of dimension two is called a plane.
3. a linear manifold of dimension $k$ is called a $k$-plane.
4. a linear manifold of dimension $n-1$ in an $n$-dimensional vector space is called a hyperplane.


Theorem 3.26. Let $V$ be an $n$-dimensional vector space over the field $\mathbb{F}$. Then any subspace of $V$ is the kernel of a surjective linear map.

Proof. Suppose $V_L$ is a subspace of $V$ of dimension $k$. Choose a basis $\{e_1, \dots, e_k\}$ of $V_L$ and complete it to a basis $\{e_1, \dots, e_k, e_{k+1}, \dots, e_n\}$ of $V$. Consider $U = \mathrm{span}\{e_{k+1}, \dots, e_n\}$ and let $T : V \to U$ be given by
$$T(e_1) = 0, \dots, T(e_k) = 0,\quad T(e_{k+1}) = e_{k+1}, \dots, T(e_n) = e_n.$$
Obviously,
$$T(\alpha_1 e_1 + \cdots + \alpha_n e_n) = \alpha_1 T(e_1) + \cdots + \alpha_n T(e_n) = \alpha_{k+1} e_{k+1} + \cdots + \alpha_n e_n$$
defines a linear map. It is also clear that $\ker T = V_L$, as well as that $T$ is surjective, i.e. $\mathrm{Im}\, T = U$.
Theorem 3.27. Let $V$, $U$ be two linear spaces over the same field $\mathbb{F}$. If $T : V \to U$ is a surjective linear map, then for every $u_0 \in U$ the set $L = \{v \in V \mid T(v) = u_0\}$ is a linear manifold.

Proof. $T$ being surjective, there exists $v_0 \in V$ with $T(v_0) = u_0$. We will show that $\{v - v_0 \mid v \in L\} = \ker T$.

Let $v \in L$. We have $T(v - v_0) = T(v) - T(v_0) = 0$, so $\{v - v_0 \mid v \in L\} \subseteq \ker T$.

Conversely, let $v_1 \in \ker T$, i.e. $T(v_1) = 0$. Write $v_1 = (v_1 + v_0) - v_0$. Since $T(v_1 + v_0) = u_0$, we have $v_1 + v_0 \in L$, hence $v_1 \in \{v - v_0 \mid v \in L\}$; in other words, $\ker T \subseteq \{v - v_0 \mid v \in L\}$.

Consequently $L = v_0 + \ker T$, which shows that $L$ is a linear manifold.
The previous theorems give rise to the next:

Theorem 3.28. Let $V$ be a linear space of dimension $n$. Then, for every linear manifold $L \subseteq V$ of dimension $\dim L = k \le n$, there exist an $(n-k)$-dimensional vector space $U$, a surjective linear map $T : V \to U$ and a vector $u \in U$ such that
$$L = \{v \in V \mid T(v) = u\}.$$

Proof. Indeed, consider $L = v_0 + V_L$, where the dimension of the director subspace $V_L$ is $k$. Choose a basis $\{e_1, \dots, e_k\}$ of $V_L$ and complete it to a basis $\{e_1, \dots, e_k, e_{k+1}, \dots, e_n\}$ of $V$. Consider $U = \mathrm{span}\{e_{k+1}, \dots, e_n\}$; obviously $\dim U = n - k$. According to the previous theorem, the linear map
$$T : V \to U, \quad T(\alpha_1 e_1 + \cdots + \alpha_k e_k + \alpha_{k+1} e_{k+1} + \cdots + \alpha_n e_n) = \alpha_{k+1} e_{k+1} + \cdots + \alpha_n e_n$$
is surjective and $\ker T = V_L$. Let $T(v_0) = u$. Then, according to the proof of the previous theorem, $L = \{v \in V \mid T(v) = u\}$.
Remark 3.29. If we choose bases in $V$ and $U$ and write the linear map in matrix notation, $M_T v = u$, we have the implicit equations of the linear manifold $L$:
$$\begin{cases} a_{11} v_1 + a_{12} v_2 + \cdots + a_{1n} v_n = u_1 \\ \quad \vdots \\ a_{p1} v_1 + a_{p2} v_2 + \cdots + a_{pn} v_n = u_p, \end{cases}$$
where $p = n - k = \dim U = \mathrm{rank}\,(a_{ij})_{i=1,\dots,p;\ j=1,\dots,n}$.

A hyperplane has only one equation,
$$a_1 v_1 + \cdots + a_n v_n = u_0.$$
The director subspace can be seen as
$$V_L = \{v = v_1 e_1 + \cdots + v_n e_n \mid f(v) = 0\} = \ker f,$$
where $f$ is the linear map (linear functional) $f : V \to \mathbb{R}$ with $f(e_1) = a_1, \dots, f(e_n) = a_n$.

If we think of the hyperplane as a linear manifold in the Euclidean space $\mathbb{R}^n$, the equation can be written as
$$\langle v, a\rangle = u_0, \quad \text{where } a = a_1 e_1 + \cdots + a_n e_n,\ u_0 \in \mathbb{R}.$$
The vector $a$ is called the normal vector to the hyperplane.
Generally, in a Euclidean space the equations of a linear manifold are
$$\begin{cases} \langle v, v_1\rangle = u_1 \\ \quad \vdots \\ \langle v, v_p\rangle = u_p, \end{cases}$$
where the vectors $v_1, \dots, v_p$ are linearly independent. The director subspace is given by
$$\begin{cases} \langle v, v_1\rangle = 0 \\ \quad \vdots \\ \langle v, v_p\rangle = 0, \end{cases}$$
so the vectors $v_1, \dots, v_p$ are orthogonal to the director subspace $V_L$.

3.5 Orthogonal projections. Distances.

In this section we will explain how to measure the distance between certain linear sets, namely linear manifolds.

Let $(V, \langle\cdot,\cdot\rangle)$ be an inner product space and consider the vectors $v_i \in V$, $i = 1, \dots, k$. The determinant
$$G(v_1, \dots, v_k) = \begin{vmatrix} \langle v_1, v_1\rangle & \langle v_1, v_2\rangle & \dots & \langle v_1, v_k\rangle \\ \langle v_2, v_1\rangle & \langle v_2, v_2\rangle & \dots & \langle v_2, v_k\rangle \\ \vdots & \vdots & \ddots & \vdots \\ \langle v_k, v_1\rangle & \langle v_k, v_2\rangle & \dots & \langle v_k, v_k\rangle \end{vmatrix}$$
is called the Gram determinant of the vectors $v_1, \dots, v_k$.


Proposition 3.30. In an inner product space the vectors $v_1, \dots, v_k$ are linearly independent iff $G(v_1, \dots, v_k) \ne 0$.

Proof. Let us consider the homogeneous system
$$G \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_k \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix},$$
where $G$ also denotes the matrix of the Gram determinant. This system can be written as
$$\begin{cases} \langle v_1, v\rangle = 0 \\ \quad \vdots \\ \langle v_k, v\rangle = 0, \end{cases} \quad \text{where } v = x_1 v_1 + \cdots + x_k v_k.$$
The following statements are equivalent: the vectors $v_1, \dots, v_k$ are linearly dependent; there exist $x_1, \dots, x_k \in \mathbb{F}$, not all zero, such that $v = 0$; the homogeneous system has a nontrivial solution; $\det G = 0$.
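A Python sketch of the proposition (the helpers are ours; the determinant is computed by Laplace expansion, which is fine for small $k$):

```python
import math

# Proposition 3.30: compute the Gram determinant on R^n and use
# G(v_1, ..., v_k) != 0 as a linear-independence test.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def det(m):
    """Determinant by Laplace expansion along the first row."""
    k = len(m)
    if k == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(k))

def gram(vs):
    return det([[dot(a, b) for b in vs] for a in vs])

indep = [[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]
dep = [[1.0, 2.0, 3.0], [2.0, 4.0, 6.0]]     # second vector = 2 * first

assert not math.isclose(gram(indep), 0.0)                 # independent ⇔ G ≠ 0
assert math.isclose(gram(dep), 0.0, abs_tol=1e-12)        # dependent  ⇔ G = 0
```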
Proposition 3.31. If $\{e_1, \dots, e_n\}$ are linearly independent vectors and $\{f_1, \dots, f_n\}$ are the vectors obtained from them by the Gram-Schmidt orthogonalization process, one has:
$$G(e_1, \dots, e_n) = G(f_1, \dots, f_n) = \|f_1\|^2 \cdots \|f_n\|^2.$$

Proof. In $G(f_1, \dots, f_n)$ replace $f_n$ by $e_n - a_1 f_1 - \cdots - a_{n-1} f_{n-1}$, and we obtain
$$G(f_1, \dots, f_n) = G(f_1, \dots, f_{n-1}, e_n).$$
By an inductive process the relation in the proposition follows. Obviously $G(f_1, \dots, f_n) = \|f_1\|^2 \cdots \|f_n\|^2$, because in this determinant the only nonzero entries are the diagonal ones, $\langle f_1, f_1\rangle, \dots, \langle f_n, f_n\rangle$.


Remark 3.32. Observe that:
$$\|f_k\| = \sqrt{\frac{G(e_1, \dots, e_k)}{G(e_1, \dots, e_{k-1})}}.$$
From $f_k = e_k - a_1 f_1 - \cdots - a_{k-1} f_{k-1} = e_k - v_k$ one obtains $e_k = f_k + v_k$, with $v_k \in \mathrm{span}\{e_1, \dots, e_{k-1}\}$ and $f_k \in \mathrm{span}\{e_1, \dots, e_{k-1}\}^\perp$, so $f_k$ is the orthogonal complement of $e_k$ with respect to the space generated by $\{e_1, \dots, e_{k-1}\}$.

3.5.1 Distance problems

The distance between a vector and a subspace

Let $U$ be a subspace of the inner product space $V$. The distance between a vector $v$ and the subspace $U$ is
$$d(v, U) = \inf_{w \in U} d(v, w) = \inf_{w \in U} \|v - w\|.$$

Proposition 3.33. The distance between a vector $v \in V$ and a subspace $U$ is given by
$$d(v, U) = \|v^\perp\| = \sqrt{\frac{G(e_1, \dots, e_k, v)}{G(e_1, \dots, e_k)}},$$
where $v = v_1 + v^\perp$, $v_1 \in U$, $v^\perp \in U^\perp$, and $e_1, \dots, e_k$ is a basis of $U$.

Proof. First we prove that $\|v^\perp\| = \|v - v_1\| \le \|v - u\|$, $\forall u \in U$. We have
$$\|v - u\|^2 = \langle v^\perp + v_1 - u,\ v^\perp + v_1 - u\rangle = \langle v^\perp, v^\perp\rangle + \langle v_1 - u,\ v_1 - u\rangle \ge \langle v^\perp, v^\perp\rangle = \|v^\perp\|^2.$$
The second part of the equality, i.e. $\|v^\perp\| = \sqrt{\frac{G(e_1, \dots, e_k, v)}{G(e_1, \dots, e_k)}}$, follows from the previous remark.
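A Python sketch of the formula in a concrete case (our code; $U$ is the $xy$-plane in $\mathbb{R}^3$, so the distance should be the absolute value of the third coordinate):

```python
import math

# Proposition 3.33 in R^3: distance from v to U = span{e1, e2} via
# d(v, U)^2 = G(e1, e2, v) / G(e1, e2).

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def gram(vs):
    g = [[dot(a, b) for b in vs] for a in vs]
    if len(vs) == 2:
        return g[0][0] * g[1][1] - g[0][1] * g[1][0]
    return det3(g)

e1, e2 = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]   # U = the xy-plane
v = [3.0, -2.0, 5.0]

d = math.sqrt(gram([e1, e2, v]) / gram([e1, e2]))
assert math.isclose(d, 5.0)    # distance to the xy-plane is |v_3|
```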


Definition 3.34. If $e_1, \dots, e_k$ are vectors in $V$, the volume of the $k$-parallelepiped constructed on the vectors $e_1, \dots, e_k$ is defined by
$$V_k(e_1, \dots, e_k) = \sqrt{G(e_1, \dots, e_k)}.$$
We have the following inductive relation:
$$V_{k+1}(e_1, \dots, e_k, e_{k+1}) = V_k(e_1, \dots, e_k)\, d(e_{k+1}, \mathrm{span}\{e_1, \dots, e_k\}).$$
The distance between a vector and a linear manifold

Let $L = v_0 + V_L$ be a linear manifold, and let $v$ be a vector in a finite-dimensional inner product space $V$. The distance induced by the norm is invariant under translations, that is, for all $v_1, v_2 \in V$ one has
$$d(v_1, v_2) = \|v_1 - v_2\| = \|(v_1 + v_0) - (v_2 + v_0)\| = d(v_1 + v_0, v_2 + v_0).$$
That means that we have
$$d(v, L) = \inf_{w \in L} d(v, w) = \inf_{v_L \in V_L} d(v, v_0 + v_L) = \inf_{v_L \in V_L} d(v - v_0, v_L) = d(v - v_0, V_L).$$
Finally,
$$d(v, L) = d(v - v_0, V_L) = \sqrt{\frac{G(e_1, \dots, e_k, v - v_0)}{G(e_1, \dots, e_k)}},$$
where $e_1, \dots, e_k$ is a basis of $V_L$.
Let us consider the hyperplane $H$ of equation
$$\langle v - v_0, n\rangle = 0.$$
The director subspace is $V_H = \{v \mid \langle v, n\rangle = 0\}$ and the distance is $d(v, H) = d(v - v_0, V_H)$.

One can decompose $v - v_0 = \lambda n + v_H$, where $v_H$ is the orthogonal projection of $v - v_0$ on $V_H$ and $\lambda n$ is the normal component of $v - v_0$ with respect to $V_H$. It means that
$$d(v, H) = \|\lambda n\|.$$
Let us compute a little now, taking into account the previous observations about the tangential and normal parts:
$$\langle v - v_0, n\rangle = \langle \lambda n + v_H, n\rangle = \lambda \langle n, n\rangle + \langle v_H, n\rangle = \lambda \|n\|^2 + 0.$$
So we obtain
$$\|\lambda n\| = |\lambda|\,\|n\| = \frac{|\langle v - v_0, n\rangle|}{\|n\|},$$
that is,
$$d(v, H) = \frac{|\langle v - v_0, n\rangle|}{\|n\|}.$$
In the case that we have an orthonormal basis at hand and the equation of the hyperplane $H$ is
$$a_1 x_1 + \cdots + a_n x_n + b = 0,$$
the relation becomes
$$d(v, H) = \frac{|a_1 v_1 + \cdots + a_n v_n + b|}{\sqrt{a_1^2 + \cdots + a_n^2}}.$$
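A minimal Python sketch of the point-to-hyperplane formula (our code; the sample plane and point are our own):

```python
import math

# d(v, H) = |a1*v1 + ... + an*vn + b| / sqrt(a1^2 + ... + an^2)
# for the hyperplane a1*x1 + ... + an*xn + b = 0 in R^n.

def dist_to_hyperplane(v, a, b):
    num = abs(sum(ai * vi for ai, vi in zip(a, v)) + b)
    return num / math.sqrt(sum(ai * ai for ai in a))

# the plane x + 2y + 2z - 3 = 0 in R^3 and the point (1, 1, 1)
d = dist_to_hyperplane([1.0, 1.0, 1.0], [1.0, 2.0, 2.0], -3.0)
assert math.isclose(d, 2.0 / 3.0)   # |1 + 2 + 2 - 3| / 3
```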

The distance between two linear manifolds

For $A$ and $B$ sets in a metric space, the distance between them is defined as
$$d(A, B) = \inf\{d(a, b) \mid a \in A,\ b \in B\}.$$
For two linear manifolds $L_1 = v_1 + V_1$ and $L_2 = v_2 + V_2$ it easily follows that
$$d(L_1, L_2) = d(v_1 + V_1,\ v_2 + V_2) = d(v_1 - v_2,\ V_1 - V_2) = d(v_1 - v_2,\ V_1 + V_2). \tag{3.1}$$
This gives us the next proposition.

Proposition 3.35. The distance between the linear manifolds $L_1 = v_1 + V_1$ and $L_2 = v_2 + V_2$ is equal to the distance between the vector $v_1 - v_2$ and the sum space $V_1 + V_2$.

If we choose a basis $e_1, \dots, e_k$ of $V_1 + V_2$, then the following formula holds:
$$d(L_1, L_2) = \sqrt{\frac{G(e_1, \dots, e_k, v_1 - v_2)}{G(e_1, \dots, e_k)}}.$$
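A Python sketch of this formula for two parallel lines in $\mathbb{R}^3$ (our code; here $V_1 + V_2 = \mathrm{span}\{d\}$, so the Gram quotient needs only $2 \times 2$ and $1 \times 1$ determinants):

```python
import math

# Proposition 3.35 for L1 = v1 + span{d} and L2 = v2 + span{d}:
# d(L1, L2)^2 = G(d, v1 - v2) / G(d).

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def gram2(a, b):
    """2x2 Gram determinant G(a, b)."""
    return dot(a, a) * dot(b, b) - dot(a, b) ** 2

d = [1.0, 0.0, 0.0]
v1, v2 = [0.0, 0.0, 0.0], [0.0, 3.0, 4.0]
diff = [x - y for x, y in zip(v1, v2)]

dist = math.sqrt(gram2(d, diff) / dot(d, d))
assert math.isclose(dist, 5.0)   # the lines are offset by (0, 3, 4)
```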

Some analytic geometry

In this section we are going to apply distance problems in Euclidean spaces. Consider the vector space $\mathbb{R}^n$ with the canonical inner product, that is, for $x = (x_1, \dots, x_n)$, $y = (y_1, \dots, y_n) \in \mathbb{R}^n$ the inner product is given by
$$\langle x, y\rangle = \sum_{k=1}^{n} x_k y_k.$$
Consider $D_1$, $D_2$ two lines (one-dimensional linear manifolds), $M$ a point (a zero-dimensional linear manifold, which we identify with the vector $x_M = \overrightarrow{0M}$), $P$ a two-dimensional linear manifold (a plane), and $H$ an $(n-1)$-dimensional linear manifold (a hyperplane). The equations of these linear manifolds are:
$$D_1 : x = x_1 + s d_1$$
$$D_2 : x = x_2 + t d_2$$
$$M : x = x_M$$
$$P : x = x_P + \lambda v_1 + \mu v_2$$
$$H : \langle x, n\rangle + b = 0,$$
where $s, t, \lambda, \mu, b \in \mathbb{R}$. Recall that two linear manifolds are parallel if the director space of one of them is included in the director space of the other. Now we can write down several formulas for distances between linear manifolds:
$$d(M, D_1) = \sqrt{\frac{G(x_M - x_1, d_1)}{G(d_1)}};$$
$$d(M, P) = \sqrt{\frac{G(x_M - x_P, v_1, v_2)}{G(v_1, v_2)}};$$
$$d(D_1, D_2) = \sqrt{\frac{G(x_1 - x_2, d_1, d_2)}{G(d_1, d_2)}} \quad \text{if } D_1 \nparallel D_2;$$
$$d(D_1, D_2) = \sqrt{\frac{G(x_1 - x_2, d_1)}{G(d_1)}} \quad \text{if } D_1 \parallel D_2;$$
$$d(M, H) = \frac{|\langle x_M, n\rangle + b|}{\|n\|};$$
$$d(D_1, P) = \sqrt{\frac{G(x_1 - x_P, d_1, v_1, v_2)}{G(d_1, v_1, v_2)}} \quad \text{if } D_1 \nparallel P.$$