Chapter 2

Approximation problem

In this chapter we study the best approximation problem in Hilbert and Banach spaces. In
the focus are the questions of existence and uniqueness. The metric projection operators
describe the best approximations. We will see in the next chapters that properties of the
metric projections correspond to geometric properties of Banach spaces described by the
duality mapping.
Metric projection operators are widely used in different areas of mathematics such
as functional and numerical analysis, theory of optimization and approximation and for
problems of optimal control; see Chapter 5.
The list of contributions to the theory of approximation is vast. Textbooks are, for
instance, Achieser [1], Butzer and Berens [7], Cheney [8], Deutsch [9], DeVore and
Lorentz [10], Holmes [12], Mhaskar and Pai [18], Powell [20], Singer [22]. We refer to
original articles at those places where they are used or where they are closely related to
the results presented.

2.1 Best approximation problem


Let (X, d) be a metric space and let C be a nonempty subset of X . For every x ∈ X, the
distance between the point x and the set C is denoted by dist(x, C) and is defined by

dist(x, C) = inf v∈C d(x, v) .                (2.1)

We call the problem (2.1) the best approximation problem associated to C, and we call
w ∈ C with d(x, w) = dist(x, C) a best approximation of x in C . We refer to the map
X ∋ x ↦ dist(x, C) ∈ R as the distance function of C . As a first result we have

Lemma 2.1. Let (X, d) be a metric space and let C be a nonempty subset of X . Then the
distance function dist(·, C) is Lipschitz continuous with Lipschitz constant L ≤ 1 .

Proof:
Let x, y ∈ X, w ∈ C . Then dist(x, C) ≤ d(x, w) ≤ d(x, y) + d(y, w) and hence
dist(x, C) − d(x, y) ≤ d(y, w) . Taking the infimum over w ∈ C we conclude
dist(x, C) − d(x, y) ≤ dist(y, C) . By symmetry, dist(y, C) − d(x, y) ≤ dist(x, C) .
Thus, |dist(x, C) − dist(y, C)| ≤ d(x, y) . □
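Lemma 2.1 can be checked numerically for a finite (hence closed) subset of the plane. The following Python sketch is illustrative only; the point set C and the sampling box are arbitrary choices.

```python
import math
import random

def dist_to_set(x, C):
    """dist(x, C) = inf_{v in C} d(x, v); for finite C the infimum is a minimum."""
    return min(math.dist(x, v) for v in C)

# An arbitrary finite subset of R^2.
C = [(0.0, 0.0), (2.0, 1.0), (-1.0, 3.0)]

random.seed(0)
for _ in range(1000):
    x = (random.uniform(-5, 5), random.uniform(-5, 5))
    y = (random.uniform(-5, 5), random.uniform(-5, 5))
    # Lipschitz estimate of Lemma 2.1 with constant L <= 1.
    assert abs(dist_to_set(x, C) - dist_to_set(y, C)) <= math.dist(x, y) + 1e-12
```

The bound holds with constant exactly 1 here; no smaller constant works in general, as the choice y ∈ C, x ∉ C shows.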

Remark 2.2. Let (X, d) be a metric space and let C be a nonempty subset of X . If C is
additionally bounded we may consider the concept of farthest points: for x ∈ C we call
z ∈ C a farthest point of x in C if

d(x, z) = far(x, C) := sup u∈C d(x, u) .

This induces the mapping FC : C ⇒ C, defined by

FC : C ∋ x ↦ {z ∈ C : d(x, z) = far(x, C)} .

It is easily proved that the farthest-distance function far(·, C) : C → R is nonexpansive. □

The basic questions in the best approximation problem concern the existence and uniqueness
of best approximations and the dependence of the best approximation on the data (X, d, C, x)
of the problem. We shall discuss these questions in this chapter. In the next chapter we
present various applications of the best approximation problem.

The metric projection operator PC is defined as follows:

PC (x) := {z ∈ C : d(x, z) = dist(x, C)} ,   x ∈ X .

This operator PC is a set-valued mapping from X into X with range in C . We write
PC : X ⇒ X . Obviously, PC (x) = {x} for all x ∈ C . The domain dom(PC ) is the set
{x ∈ X : PC (x) ≠ ∅} .
Clearly, PC (x) is a closed subset of C if C is closed. If PC (x) ≠ ∅ for every x ∈ X, then
C is called proximal. Obviously, a necessary condition for proximality is the closedness
of C . If PC (x) is a singleton for every x ∈ X, then C is said to be a Chebyshev set.
Clearly, in this situation PC is a projection: PC ◦ PC = PC .

A natural extension of the best approximation problem is to find a best approximating
pair relative to two sets C, D in a metric space, i.e.:
Given subsets C, D in the metric space (X, d) find (u, w) ∈ C × D with

dist(C, D) := inf u′∈C, w′∈D d(u′ , w′ ) = d(u, w) .                (2.2)

Best approximating pairs may not exist in general. If D reduces to a singleton {x} then
the problem (2.2) reduces to the problem (2.1). On the other hand, when the problem
(2.2) is consistent, i.e. C ∩ D ≠ ∅, then problem (2.2) reduces to the well known feasibility
problem for two sets and its solution set is {(x, x) : x ∈ C ∩ D} . The feasibility problem
captures a wide range of problems in applied mathematics and engineering. In Chapter 5
we shall study methods to solve the feasibility problem, namely the alternating projection
methods using the metric projections PC , PD .
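The alternating projection idea can be sketched in Python for two convex sets with nonempty intersection; the concrete sets below (a half-plane and a line in R², both hypothetical choices with closed-form projections) are for illustration only.

```python
import numpy as np

def proj_halfplane(x):
    """Metric projection onto A = {(u, v) : v >= 0}: clamp the second coordinate."""
    return np.array([x[0], max(x[1], 0.0)])

def proj_line(x):
    """Metric projection onto B = {(u, v) : u + v = 1}: subtract the normal component."""
    n = np.array([1.0, 1.0])
    return x - ((x @ n - 1.0) / (n @ n)) * n

# Alternating projections x_{k+1} = P_A(P_B(x_k)); for closed convex A, B with
# A ∩ B ≠ ∅ the iterates approach a point of the intersection.
x = np.array([3.0, -4.0])
for _ in range(100):
    x = proj_halfplane(proj_line(x))
```

Starting from (3, −4), the iterates converge to (1, 0), which lies in both sets.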

In the middle of the 19th century Chebyshev proved that in the Banach space C[0, 1]
of continuous functions on [0, 1] (endowed with the supremum norm) the subspace of poly-
nomials of degree ≤ n and the subset Rm,n of all rational functions

(a0 + a1 t + · · · + an t^n) / (b0 + b1 t + · · · + bm t^m)

for fixed m, n are Chebyshev sets. In finite-dimensional spaces each nonempty closed
subset is proximal due to the continuity of the norm and the Heine-Borel theorem. More-
over, in finite-dimensional Euclidean spaces, Chebyshev sets are completely described by
the theorem of Motzkin (see [19] and [6, 13, 16, 19, 23]): a nonempty closed subset C of
the Euclidean space Rn is a Chebyshev set if and only if C is convex. Therefore, convexity
is an important tool for the study of projection operators.
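One half of Motzkin's characterization is easy to see numerically: for the nonconvex two-point set below (an arbitrary toy example), points on the perpendicular bisector have two nearest points, so the set is proximal but not Chebyshev in R².

```python
import math

# A nonconvex closed subset of the Euclidean plane: two points.
C = [(-1.0, 0.0), (1.0, 0.0)]

def nearest_points(x, C, tol=1e-12):
    """Return PC(x), the set of best approximations of x in the finite set C."""
    dists = [math.dist(x, v) for v in C]
    d = min(dists)
    return [v for v, dv in zip(C, dists) if dv <= d + tol]

# On the bisector u = 0 the metric projection is not a singleton:
assert len(nearest_points((0.0, 5.0), C)) == 2
# Off the bisector it is:
assert len(nearest_points((0.3, 5.0), C)) == 1
```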

Convexity in a metric space needs some additional structure (geodesics). This is the
reason why we restrict ourselves to metric projection operators in Hilbert spaces and
Banach spaces. There we may use the concept of convexity in the obvious manner;
see the preliminaries in the preface. Additionally, differentiability of nonlinear
mappings can be considered in order to study necessary and sufficient conditions for
best approximations.

The general theory of best approximation may be considered as the mathematical


study that is motivated by the desire to find answers to the following basic questions:

Existence Which subsets are proximal?

Uniqueness Which subsets are Chebyshev sets?

Characterization How does one recognize when a given element is a best approxima-
tion?

Error of approximation How does one compute the error of approximations or at least
sharp upper bounds for it?

Computation of best approximations Can one describe some useful algorithms for
computing a best approximation?

Continuity of the best approximation process How does the metric projection vary
as a function of the element to be approximated?

Applications What are problems in the applied sciences which are important motiva-
tions for developing the theory further?

In this monograph not all questions are answered in the same completeness and sharpness.

2.2 Proximal sets


The term proximal set was proposed by Killgrove and first used by Phelps. Let us start
with some examples which illustrate different cases concerning proximality.

Example 2.3.

(1) X := R2 := {x = (u, v) : u, v ∈ R} endowed with the l2 -norm ‖(u, v)‖2 := (|u|² + |v|²)^{1/2} ,
C := {(u, v) : ‖(u, v)‖2 ≤ 1} . Then C is proximal and PC (x) = {x/‖x‖2 } for
x ∉ C .

(2) X := R2 endowed with the supremum norm ‖(u, v)‖∞ := max{|u|, |v|},
C := {(0, v) : v ∈ R} . Then C is proximal and PC (x) = {(0, v) : |v| ≤ 1}
for x = (1, 0) .

(3) X := R2 endowed with the l1 -norm ‖(u, v)‖1 := |u| + |v|,
C := {(u, v) : v = ±u} . Then C is proximal and PC (x) = {(t, t) : 0 ≤ t ≤ 1} ∪ {(−t, t) : 0 ≤ t ≤ 1}
for x = (0, 1) .
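The non-uniqueness claims in (2) and (3) can be checked by direct computation; the sketch below (parameter ranges chosen ad hoc) evaluates the relevant distances.

```python
# Example 2.3 (2): sup-norm distance from x = (1, 0) to the axis C = {(0, v)}.
def d_sup(x, y):
    return max(abs(x[0] - y[0]), abs(x[1] - y[1]))

x = (1.0, 0.0)
# Every (0, v) with |v| <= 1 is a best approximation: distance max(1, |v|) = 1.
assert all(d_sup(x, (0.0, v / 10)) == 1.0 for v in range(-10, 11))

# Example 2.3 (3): l1-distance from x = (0, 1) to the diagonals C = {v = ±u}.
def d_1(x, y):
    return abs(x[0] - y[0]) + abs(x[1] - y[1])

x = (0.0, 1.0)
# All points (t, t) and (-t, t) with 0 <= t <= 1 realize the distance 1.
assert all(abs(d_1(x, (t / 10, t / 10)) - 1.0) < 1e-12 for t in range(0, 11))
assert all(abs(d_1(x, (-t / 10, t / 10)) - 1.0) < 1e-12 for t in range(0, 11))
# No point of C is closer than 1 (triangle inequality: |u| + |1 - |u|| >= 1).
assert all(d_1(x, (u / 10, abs(u) / 10)) >= 1.0 - 1e-12 for u in range(-50, 51))
```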

Definition 2.4. Let (X, d) be a metric space and let C ⊂ X . C is called approximatively
compact if for each x ∈ X and for each sequence (un )n∈N in C with limn d(un , x) =
dist(x, C) there exist a subsequence (unk )k∈N and u ∈ C with limk unk = u . □

Theorem 2.5. Let (X, d) be a metric space and let C be a nonempty subset of X . Then
we have:

(a) If C is approximatively compact then C is proximal.

(b) If C is compact then C is proximal.

Proof:
Ad (a) Let x ∈ X and let (un )n∈N be a minimizing sequence, i.e. limn d(un , x) = dist(x, C) .
The point u in Definition 2.4 belongs to C and by the continuity of the metric u ∈ PC (x) .
Ad (b) Obviously, a compact set is approximatively compact. □

Theorem 2.6. Let X be a reflexive Banach space. Then every nonempty closed convex
subset C of X is proximal.

Proof:
Let x ∈ X and let (un )n∈N be a minimizing sequence in C : limn ‖x − un ‖ = dist(x, C), un ∈
C, n ∈ N . Then the sequence (un )n∈N is bounded. Since X is reflexive this sequence
contains a weakly convergent subsequence. Let u be a weak cluster point of (un )n∈N . Since
C is a closed convex set, C is weakly closed; see [4]. Therefore u ∈ C . Due to the fact that
the norm is sequentially weakly lower semicontinuous we have ‖x − u‖ = dist(x, C) . □

Remark 2.7. Theorem 2.6 can be reformulated in a stronger form, namely: A Banach
space X is reflexive if and only if every nonempty closed convex subset C of X is proximal.
See [17]. □

Example 2.8. Consider the Banach space l1 . For any n ∈ N, let en be the sequence
in l1 with n-th entry 1 and all other entries 0 . Let bn := ((n + 1)/n) en , n ∈ N, and let
C be the closed convex hull of {b1 , b2 , . . . } . Then C is a nonempty convex closed subset
of l1 which is not proximal. Notice that l1 is a non-reflexive Banach space (with dual
space l∞ ) . □

Lemma 2.9. Let X be a Banach space and let λ ∈ X ∗ \{θ} . Then the following conditions
are equivalent:

(a) ker(λ) is proximal.

(b) λ attains its norm, i.e. ⟨λ, x⟩ = ‖λ‖ for some x ∈ S1 .

Proof:
Set U := ker(λ) .
(a) =⇒ (b) Since λ ≠ θ there exists z ∈ X\U . Because U is proximal there exists u ∈ U
with ‖z − u‖ = dist(z, U) . Let x := (z − u)/‖z − u‖ . Clearly, ‖x‖ = 1 and

dist(x, U) = ‖z − u‖−1 dist(z − u, U) = ‖z − u‖−1 dist(z, U) = 1 .

Let y ∈ X . Then y = (⟨λ, y⟩/⟨λ, x⟩) x + v for some v ∈ U (note ⟨λ, x⟩ ≠ 0 since x ∉ U) .
Now, since U is a subspace we obtain

|⟨λ, y⟩|/|⟨λ, x⟩| = (|⟨λ, y⟩|/|⟨λ, x⟩|) dist(x, U) = inf z∈U ‖(⟨λ, y⟩/⟨λ, x⟩) x − z‖
= inf z∈U ‖(⟨λ, y⟩/⟨λ, x⟩) x + v − z‖ = inf z∈U ‖y − z‖
= dist(y, U) ≤ ‖y‖ .

Therefore, |⟨λ, y⟩| ≤ |⟨λ, x⟩| ‖y‖ for all y ∈ X . Hence, ‖λ‖ ≤ |⟨λ, x⟩| ≤ ‖λ‖ since
‖x‖ = 1 . Thus |⟨λ, x⟩| = ‖λ‖, and after replacing x by −x if necessary, ⟨λ, x⟩ = ‖λ‖ .
(b) =⇒ (a) Suppose λ attains its norm at x ∈ S1 , i.e. ⟨λ, x⟩ = ‖λ‖ . Let u ∈ U . Then

1 = ⟨λ, x⟩/‖λ‖ = ⟨λ, x − u⟩/‖λ‖ ≤ ‖λ‖ ‖x − u‖/‖λ‖ = ‖x − u‖ .

Therefore, 1 ≤ dist(x, U) ≤ ‖x − θ‖ = 1 . Thus θ ∈ PU (x) . Let y ∈ X . We can write
y = ax + u for some a ∈ R, u ∈ U . It follows that PU (y) = PU (ax + u) = aPU (x) + u ≠ ∅ .
Hence, U is proximal. □

Lemma 2.10. Let H be a Hilbert space and let C, C1 , . . . , Cm be nonempty closed convex
subsets of H . Then we have:

(1) PC (x) = PC−y (x − y) + y for all x, y ∈ H .

(2) PrC (x) = rPC ((1/r) x) for all x ∈ H, r ≠ 0 .

(3) (PC1 ◦ · · · ◦ PCm )n (x) = (PC1 −y ◦ · · · ◦ PCm −y )n (x − y) + y for all x, y ∈ H, n ∈ N .

Proof:
Ad (1) Let x, y ∈ H . Then for all u ∈ C

‖x − (PC−y (x − y) + y)‖ = ‖(x − y) − PC−y (x − y)‖ ≤ ‖(x − y) − (u − y)‖ = ‖x − u‖

which shows that PC (x) = PC−y (x − y) + y .
Ad (2) Let x ∈ H, r ≠ 0 . Then for all u ∈ C

‖x − rPC ((1/r) x)‖ = |r| ‖(1/r) x − PC ((1/r) x)‖ ≤ |r| ‖(1/r) x − u‖ = ‖x − ru‖

which shows that PrC (x) = rPC ((1/r) x) .
Ad (3) This is a consequence of (1) . □
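The translation and scaling rules (1) and (2) can be verified numerically whenever PC has a closed form, e.g. for the closed unit ball in R² (the standard radial-projection formula); the following sketch uses arbitrary random test points.

```python
import numpy as np

def proj_ball(x, center=(0.0, 0.0), radius=1.0):
    """Metric projection onto the closed ball {z : ||z - center||_2 <= radius}."""
    c = np.asarray(center, dtype=float)
    d = np.linalg.norm(x - c)
    return x.astype(float) if d <= radius else c + radius * (x - c) / d

rng = np.random.default_rng(1)
for _ in range(100):
    x = rng.normal(scale=3.0, size=2)
    y = rng.normal(scale=3.0, size=2)
    r = 2.5
    # (1) PC(x) = P_{C-y}(x - y) + y; here C - y is the ball with center -y.
    assert np.allclose(proj_ball(x), proj_ball(x - y, center=-y) + y)
    # (2) P_{rC}(x) = r PC(x / r); here rC is the ball with radius r.
    assert np.allclose(proj_ball(x, radius=r), r * proj_ball(x / r))
```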

2.3 Chebyshev sets

The term Chebyshev set was introduced by Stechkin in honor of the founder of best approx-
imation theory. Let us start with some examples which show different cases concerning
Chebyshev sets.

Example 2.11.

(1) X := R2 := {x = (u, v) : u, v ∈ R} endowed with the l2 -norm ‖(u, v)‖2 := (|u|² + |v|²)^{1/2} ,
C := {(u, v) : ‖(u, v)‖2 ≤ 1} . Then C is a Chebyshev set since PC (x) = {x/‖x‖2 }
for all x ∉ C and PC (x) = {x} for x ∈ C .

(2) X := R2 endowed with the supremum norm ‖(u, v)‖∞ := max{|u|, |v|},
C := {(0, v) : v ∈ R} . Then C is proximal but not a Chebyshev set.
In fact, PC (x) = {(0, v) : |v| ≤ 1} for x = (1, 0) .

(3) X := R2 endowed with the l1 -norm ‖(u, v)‖1 := |u| + |v|,
C := {(u, v) : v = ±u} . Then C is proximal but not a Chebyshev set. In fact,
PC (x) = {(t, t) : 0 ≤ t ≤ 1} ∪ {(−t, t) : 0 ≤ t ≤ 1} for x = (0, 1) .

(4) X := R2 endowed with the norm ‖(u, v)‖ := |u − v| + (u² + v²)^{1/2} ,
C := {(u, 0) : |u| ≤ 1} . Then C is a Chebyshev set.

Definition 2.12. A normed space (X , ‖ · ‖) is called strictly convex or rotund if for
all x, y ∈ X with ‖x‖ = ‖y‖ = 1, x ≠ y, we have ‖x + y‖ < 2 . □

Example 2.13. The Banach space (C[0, 1], ‖ · ‖∞ ) is not strictly convex. To show this,
choose x(t) := 1, y(t) := t , t ∈ [0, 1] . Then we have x ≠ y and ‖x‖∞ = ‖y‖∞ = 1 . But
it holds that ‖x + y‖∞ = 2 .
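A quick numerical check of Example 2.13, approximating the sup-norm on a grid (a sketch, not a proof; grid size chosen arbitrarily):

```python
# x(t) = 1 and y(t) = t on [0, 1], sampled on a fine grid.
ts = [k / 1000 for k in range(1001)]
x = [1.0 for t in ts]
y = [t for t in ts]

sup = lambda f: max(abs(v) for v in f)

assert sup(x) == 1.0 and sup(y) == 1.0            # ||x||_inf = ||y||_inf = 1
assert sup([a + b for a, b in zip(x, y)]) == 2.0  # but ||x + y||_inf = 2
assert x != y                                     # and x != y: not strictly convex
```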

Lemma 2.14. Let X be a Banach space. Then the following assertions are equivalent:

(a) X is strictly convex.

(b) If x, y ∈ X with ‖x‖ = ‖y‖ = 1, x ≠ y, then ‖tx + (1 − t)y‖ < 1 for all t ∈ (0, 1) .
(c) If u, v ∈ X and ‖u + v‖ = ‖u‖ + ‖v‖ then u = tv or v = tu for some t ≥ 0 .
(d) Every point in the boundary S1 of B1 is an extreme point of B1 .

Proof:
(a) =⇒ (b) Let x, y ∈ X with ‖x‖ = ‖y‖ = 1, x ≠ y . Assume that ‖t∗ x + (1 − t∗ )y‖ = 1
for some t∗ ∈ (0, 1) . Obviously (by (a)) t∗ ≠ 1/2 . Consider the case t∗ < 1/2 . Since
t∗ x + (1 − t∗ )y is the midpoint of y and z := 2t∗ x + (1 − 2t∗ )y with ‖z‖ ≤ 1, we obtain
the contradiction ‖t∗ x + (1 − t∗ )y‖ < 1 . The case t∗ > 1/2 is similar.
(b) =⇒ (c) Let u, v ∈ X with ‖u + v‖ = ‖u‖ + ‖v‖ . If u = θ or v = θ nothing has to
be proved. Let u, v ≠ θ . Then we have

‖ (‖u‖/(‖u‖ + ‖v‖)) u/‖u‖ + (‖v‖/(‖u‖ + ‖v‖)) v/‖v‖ ‖ = 1 .

Then by (b)

u/‖u‖ = v/‖v‖

and we have u = (‖u‖/‖v‖) v .
(c) =⇒ (a) Let x, y ∈ X with ‖x‖ = ‖y‖ = 1 . If ‖x + y‖ = 2 then ‖x + y‖ = ‖x‖ + ‖y‖ .
This implies x = ty or y = tx for some t ≥ 0 . Obviously t = 1 and we have x = y . Hence
‖x + y‖ < 2 whenever x ≠ y .
(a) ⇐⇒ (d) If x belongs to a segment [u, v] ⊂ S1 we may assume without loss of generality
that x = (1/2)(u + v) . Then the equivalence of (a) and (d) is obviously true. □
Theorem 2.15. Let X be a Banach space. Then the following conditions are equivalent:
(a) X is reflexive and strictly convex.
(b) Every nonempty convex closed subset C of X is a Chebyshev set.
Proof:
(a) =⇒ (b) We know from Theorem 2.6 already that every closed convex subset C
is proximal. Let x ∈ X, a := dist(x, C) and let z, z′ ∈ PC (x) . Then ‖x − z‖ = ‖x − z′ ‖ = a .
Since u := (1/2)(z + z′ ) ∈ C we have ‖x − u‖ ≥ a . Using the triangle inequality we obtain
‖x − u‖ ≤ a . This implies ‖x − u‖ = a and since X is strictly convex we conclude
z = z′ (if a = 0 then z = z′ = x trivially).
(b) =⇒ (a) We know from Remark 2.16 that X is a reflexive space. Applying (b) to
the set B1 we obtain that X is strictly convex. □
Remark 2.16. Theorem 2.15 can be reformulated in a stronger form, namely: a reflexive
Banach space X is strictly convex if and only if every nonempty closed convex subset C
of X is a Chebyshev set; see [17]. □

In a Hilbert space every nonempty closed convex subset is a Chebyshev set since a
Hilbert space is reflexive (see the preliminaries in the preface) and strictly convex due to
the parallelogram identity. We shall give a proof of the fact that a nonempty closed convex
subset is a Chebyshev set based on the parallelogram identity and the completeness only.
Theorem 2.17. Let H be a Hilbert space and let C be a nonempty convex closed subset
of H . Then C is a Chebyshev set. Additionally, each minimizing sequence for the
minimization of C ∋ y ↦ ‖x − y‖ ∈ R converges to PC (x) .
Proof:
We prove that PC (x) is not empty.
Let (yn )n∈N be a sequence in C with limn ‖x − yn ‖ = a := dist(x, C) . We want to show
that (yn )n∈N is a Cauchy sequence. By the parallelogram identity we have for m, n

‖yn − ym ‖² = ‖(x − ym ) − (x − yn )‖²
= −‖(x − ym ) + (x − yn )‖² + 2‖x − ym ‖² + 2‖x − yn ‖²
= −4‖(1/2)(ym + yn ) − x‖² + 2‖x − ym ‖² + 2‖x − yn ‖²
≤ −4a² + 2‖x − ym ‖² + 2‖x − yn ‖²

since (1/2)(ym + yn ) ∈ C by convexity. This shows that limm,n ‖yn − ym ‖ = 0 and
(yn )n∈N is a Cauchy sequence. Therefore there exists y := limn yn ; y ∈ C since C is
closed, and by the continuity of the norm we have ‖x − y‖ = a, i.e. y ∈ PC (x) .
We prove that PC (x) is a singleton.
Let u, v ∈ PC (x) : ‖x − u‖ = ‖x − v‖ = a . By the convexity of C we have (1/2)(u + v) ∈ C
and therefore

‖x − (1/2)(u + v)‖ ≤ (1/2)‖x − u‖ + (1/2)‖x − v‖ = a .

Hence

‖u − v‖² = ‖(x − u) − (x − v)‖² = 2‖x − u‖² + 2‖x − v‖² − 4‖x − (1/2)(u + v)‖² ≤ 2a² + 2a² − 4a² = 0

and we conclude u = v .
The additional assertion follows from the first part of the proof. □
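The Cauchy estimate in the proof can be observed numerically. In the sketch below, the half-plane and the particular minimizing sequence are arbitrary illustrative choices; every pair of members of the sequence satisfies the parallelogram bound, forcing convergence to PC(x).

```python
import math

# C = {(u, v) : u >= 1} in the Euclidean plane, x = (0, 0);
# then dist(x, C) = 1 and PC(x) = (1, 0).
a = 1.0
x = (0.0, 0.0)

# A minimizing sequence in C: y_n = (1, ±1/n), so ||x - y_n|| -> a.
ys = [(1.0, ((-1) ** n) / n) for n in range(1, 200)]

def norm(p):
    return math.hypot(p[0], p[1])

def sub(p, q):
    return (p[0] - q[0], p[1] - q[1])

for yn in ys:
    for ym in ys:
        # ||y_n - y_m||^2 <= -4 a^2 + 2 ||x - y_n||^2 + 2 ||x - y_m||^2
        lhs = norm(sub(yn, ym)) ** 2
        rhs = -4 * a ** 2 + 2 * norm(sub(x, yn)) ** 2 + 2 * norm(sub(x, ym)) ** 2
        assert lhs <= rhs + 1e-12

# Consequently the sequence converges to PC(x) = (1, 0):
assert norm(sub(ys[-1], (1.0, 0.0))) < 0.01
```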
Remark 2.18. There are some very surprising results concerning the Chebyshev prop-
erty: First, there exists no finite-dimensional Chebyshev subspace of L1 [0, 1] . Second, there
exists a separable reflexive Banach space without any finite-dimensional Chebyshev sub-
space.
A very important open problem is: Does there exist a non-convex Chebyshev subset of a
Hilbert space? This question goes back to Efimov, Klee and Stechkin, who adduced
plausible evidence to support the conjecture that there exist non-convex Chebyshev subsets;
see [2]. On the other hand, they gave sufficient conditions for Chebyshev sets in Hilbert
space to be convex. Here is a list of results concerning this question:
(1) If C is a Chebyshev set in R2 then C is convex (Bunt [6], 1934).
(2) If C is a Chebyshev set in Rn then C is convex (Kritikos [16], 1938).
(3) If C is a boundedly compact1 Chebyshev set, then C is convex (Efimov and Stechkin
[11], 1959).
(4) If C is a weakly closed Chebyshev set in a Hilbert space then C is convex (Klee [13],
1961).
(5) If a set C is a Chebyshev set in a Hilbert space and each half space intersects C in a
proximal set, then C is convex (Singer [21], 1967).
(6) If C is an approximately compact Chebyshev set in a Hilbert space, then C is convex
(Klee [14], 1967).
(7) If C is a Chebyshev set in a Hilbert space with a continuous metric projection then C
is convex (Asplund [2], 1969).

2.4 Well posedness of the approximation problem


Definition 2.19. Let X be a Banach space and let C be a nonempty subset of X . Then
the approximation problem (2.1) is called stable if every sequence (xn )n∈N in C with
limn ‖x − xn ‖ = dist(x, C) (minimizing sequence) satisfies limn dist(xn , PC (x)) = 0 . If in
addition the solution set PC (x) is a singleton, the approximation problem (2.1) is called
strongly solvable. □
1
C is called boundedly compact if for any r > 0 the set Br ∩ C is compact.

The property that the best approximation problem (2.1) is strongly solvable says
that the best approximation problem is well posed in the sense of Hadamard: there
exists a unique solution which depends continuously on the point being approximated.

Corollary 2.20. Let H be a Hilbert space and let C be a nonempty closed convex subset
of H . Then the best approximation problem (2.1) is strongly solvable.

Proof:
See Theorem 2.17. □

For the discussion of the well posedness-property in Banach spaces we have to introduce
some other facts concerning the geometric properties of Banach spaces.

Definition 2.21. A Banach space X is an E-space if the following conditions hold:

(1) X is reflexive.

(2) X is strictly convex.

(3) If x ∈ S1 is the weak limit of the sequence (xn )n∈N in S1 then x is the strong limit of
this sequence.

Definition 2.22. Let X be a Banach space. We say that X has a Kadec-Klee norm if
for every sequence (xn )n∈N in X with w − limn xn = x and limn ‖xn ‖ = ‖x‖ the sequence
(xn )n∈N converges in norm to x . □

A Banach space has a Kadec-Klee norm if and only if weak and strong convergence
coincide on the unit sphere. It is easy to check that property (3) in Definition 2.21 is
equivalent to the fact that X has the Kadec-Klee property.
Clearly, since weak convergence is equivalent to strong convergence in a finite-
dimensional normed space, every finite-dimensional normed space has a Kadec-Klee norm.
Each Hilbert space, each uniformly convex space and all lp - and Lp (Ω)-spaces, 1 < p < ∞,
have a Kadec-Klee norm.

Remark 2.23. A more suggestive definition of an E-space is the following: A Banach
space X is an E-space if it is strictly convex and every weakly closed subset is approximatively
compact; see Theorem 2.24 and [12, 15]. The equivalence of this definition with that given
in Definition 2.21 can be proved by using two deep results in functional analysis: firstly, a
Banach space is reflexive iff its closed unit ball is weakly compact; secondly, a nonempty
closed convex subset C of a Banach space X is weakly compact iff every λ ∈ X ∗ attains
its maximum on C (James' theorem). The latter result is very useful since it characterizes
weak compactness without using the weak topology. We know from the Bishop-Phelps theorem
(see [5]) that in a Banach space the set of linear functionals which attain their maximum
on a nonempty bounded closed convex subset is a (norm) dense subset of X ∗ . □

Theorem 2.24. Let X be an E-space and let C be a nonempty closed convex subset of
X . Then the best approximation problem (2.1) is strongly solvable.

Proof:
Clearly, C is a Chebyshev set by (1), (2) in Definition 2.21. We have to show that the
approximation problem (2.1) is stable.
Let x ∈ X and let (xn )n∈N be a sequence in C with limn ‖x − xn ‖ = dist(x, C) . Obviously,
the sequence (xn )n∈N is bounded. Since X is reflexive there exists a weakly convergent
subsequence (xnk )k∈N . Let z := w − limk xnk . Since C is closed and convex it is weakly
closed too and we have z ∈ C . Then x − z = w − limk (x − xnk ) and by the weak lower
semicontinuity of the norm

‖x − z‖ ≤ lim inf k ‖x − xnk ‖ = dist(x, C) .

This implies z = PC (x), and since this argument applies to every subsequence, (xn )n∈N
converges weakly to z = PC (x) .
If a := dist(x, C) = 0 then z = x and limn xn = x since limn ‖x − xn ‖ = a = 0 . Now
assume a > 0 . Then we may assume that ‖x − xn ‖ ≥ a/2 > 0, n ∈ N . Since
limn ‖x − xn ‖ = a and (x − xn )n∈N converges weakly to x − z we obtain

(x − z)/a = w − limn (x − xn )/‖x − xn ‖ .

We know ‖x − z‖ = a . Now we apply (3) in Definition 2.21 and obtain

(x − z)/a = limn (x − xn )/‖x − xn ‖

and finally, x − z = limn (x − xn ), i.e. limn xn = z . □

In Section 4.1 we will see that the property of being an E-space can be characterized by
the uniform convexity of the dual space. There we can give examples of E-spaces. The
crucial point is the verification of condition (3) in the definition of E-spaces.

2.5 Lower bounds for the approximation error


Here we are interested in upper and lower bounds for the "error" dist(x, C) when
approximating x by vectors in C . Of course, upper bounds are easily obtained from the
inequality

dist(x, C) ≤ ‖x − u‖ ,   u ∈ C .

Therefore, we have to develop an approach to obtain lower bounds. The idea is to write
the infimum dist(x, C) = inf u∈C ‖x − u‖ as a supremum over a set which can be handled.
Usually, such a way uses duality arguments.
Let X be a Banach space, let C be a nonempty closed convex subset and let x ∈ X \C .
Then we have

inf u∈C ‖x − u‖ = inf y∈X (‖x − y‖ + δC (y)) = inf y∈X (‖y‖ + δC (x − y)) ,

where δC is the indicator function of convex analysis:

δC (z) := 0 if z ∈ C ,   δC (z) := ∞ if z ∉ C .

Now, we may use the duality theorem for convex programs (see Theorem 10.62):

inf y∈X (‖y‖ + δC (x − y)) = sup λ∈X ∗ ,‖λ‖≤1 inf u∈C ⟨λ, x − u⟩                (2.3)

since we have

ν ∗ (λ) = δB1 (λ) ,   (δC (x − ·))∗ (λ) = sup u∈C ⟨λ, x − u⟩ ,   λ ∈ X ∗ ,

where ν : X ∋ z ↦ ‖z‖ ∈ R is the norm function.


Theorem 2.25. Let X be a Banach space, let C be a nonempty closed convex subset of
X and let x ∈ X \C . Then

dist(x, C) = max λ∈X ∗ ,‖λ‖≤1 (⟨λ, x⟩ − sup u∈C ⟨λ, u⟩) .                (2.4)

Proof:
We have to argue that the supremum in (2.3) is actually a maximum. This is a
consequence of the Banach-Alaoglu theorem: the closed unit ball of X ∗ is weak∗ compact,
and the function λ ↦ ⟨λ, x⟩ − sup u∈C ⟨λ, u⟩ is weak∗ upper semicontinuous, so it attains
its supremum. □
The result above may be interpreted as saying that the distance from a point x to a set C
is the maximum of the distances from x to hyperplanes that separate x and C ;
see Theorem 10.62.

Now, the formula in (2.4) provides us with an easy way of obtaining lower bounds
for dist(x, C):

If λ ∈ X ∗ with ‖λ‖ ≤ 1 then ⟨λ, x⟩ − sup u∈C ⟨λ, u⟩ ≤ dist(x, C) .                (2.5)
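For the Euclidean plane (where X∗ = X and ⟨λ, x⟩ is the dot product) the bound (2.5) is easy to experiment with; the set C and the trial functionals below are arbitrary illustrative choices.

```python
import math
import random

# C = closed unit ball in R^2; for ||x|| > 1 we know dist(x, C) = ||x|| - 1.
x = (3.0, 4.0)
dist_exact = math.hypot(*x) - 1.0               # = 4.0

def lower_bound(lam):
    """<lam, x> - sup_{u in C} <lam, u>; for the unit ball the sup equals ||lam||."""
    assert math.hypot(*lam) <= 1.0 + 1e-12      # feasibility: ||lam|| <= 1
    return lam[0] * x[0] + lam[1] * x[1] - math.hypot(*lam)

random.seed(0)
for _ in range(1000):
    t = random.uniform(0.0, 2 * math.pi)
    r = random.uniform(0.0, 1.0)
    lam = (r * math.cos(t), r * math.sin(t))
    # (2.5): every admissible lam yields a lower bound for dist(x, C).
    assert lower_bound(lam) <= dist_exact + 1e-12

# The choice lam = x/||x|| attains the maximum in (2.4):
lam_star = (x[0] / 5.0, x[1] / 5.0)
assert abs(lower_bound(lam_star) - dist_exact) < 1e-12
```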

2.6 Appendix: Convexity I


2.7 Conclusions and comments
[3]

2.8 Exercises
1.) Let X be a strictly convex Banach space and let C ⊂ X be a nonempty closed
subset of X . Suppose y ∈ PC (x) . Then

PC (tx + (1 − t)y) = {y} for all t ∈ (0, 1) .

2.) Let C be a nonempty subset of the euclidean space Rn . Show the equivalence of
the following conditions:
(a) C is a Chebyshev set
(b) C is closed and convex
3.) Let R2 be equipped with the l1 -norm and let C := {x = (x1 , x2 ) : x2 = ±x1 } .
Compute PC (x) for x = (0, 1) and show that C is not a Chebyshev set.

4.) Let R2 be equipped with the norm

‖(x1 , x2 )‖12 := |x1 − x2 | + (x1² + x2²)^{1/2} .

Then the unit ball C := {(x1 , x2 ) : ‖(x1 , x2 )‖12 ≤ 1} is a Chebyshev set.


5.) Consider the space C[0, 1] of continuous functions on [0, 1] equipped with the supre-
mum norm ‖ · ‖∞ . Find an equivalent norm ‖ · ‖ such that (C[0, 1], ‖ · ‖) is strictly
convex.
6.) Let X be a Banach space and let C be a nonempty subset of X . We set

Ĉ := {x ∈ X : ‖x‖ = inf y∈C ‖x − y‖} .

Show:
(a) C ∩ Ĉ = ∅ or C ∩ Ĉ = {θ} .
(b) Ĉ is closed.
(c) C is a Chebyshev set if and only if X = C ⊕ Ĉ .
7.) Let H be a Hilbert space and let C be a nonempty closed convex subset of H . We
define:

‖PC ‖ := sup x∈H\{θ} ‖PC (x)‖/‖x‖ .

Show:
(1) If θ ∈ C then ‖PC ‖ = 0 if C = {θ} and ‖PC ‖ = 1 if C ≠ {θ} .
(2) ‖PC ‖ = ∞ if θ ∉ C .
8.) Let H be a Hilbert space and let A, B be nonempty closed convex subsets of H .
With ‖PC ‖ defined as in Exercise 7, show:
(1) ‖PA ◦ PB ‖ = sup x∈B\{θ} ‖PA (x)‖/‖x‖ if B ≠ {θ}, and ‖PA ◦ PB ‖ = 0 if B = {θ} .
(2) If θ ∈ A ∩ B then ‖PA ◦ PB ‖ ≤ 1 .
9.) Let X be a normed space. Then the following conditions are equivalent:
(a) X is uniformly convex.
(b) If (xn )n∈N , (yn )n∈N are sequences in X with ‖xn ‖ = ‖yn ‖ = 1, n ∈ N, then we
have: from limn ‖(1/2)(xn + yn )‖ = 1 we conclude limn (xn − yn ) = θ .
(c) If (xn )n∈N , (yn )n∈N are sequences in X with lim supn ‖xn ‖ ≤ 1, lim supn ‖yn ‖ ≤ 1,
then we have: from limn ‖(1/2)(xn + yn )‖ = 1 we conclude limn (xn − yn ) = θ .

10.) Let X be a uniformly convex Banach space. If (xn )n∈N is a sequence in X with

α := limn ‖xn ‖ = limn,m ‖(1/2)(xn + xm )‖ ,

then this sequence is convergent.


11.) Let X be a uniformly convex Banach space and let (xn )n∈N be a sequence in X .
Then the following conditions are equivalent:
(a) limn xn = x .
(b) w − limn xn = x and limn ‖xn ‖ = ‖x‖ .
12.) Let H be a Hilbert space, let A, B be closed convex subsets of H with A ⊂ B and
let x ∈ H . Show: if PB (x) ∈ A, then PA (x) = PB (x) .
13.) Let X be a Banach space and let U ⊂ X be a closed subspace which is a Chebyshev
set. Show: PU−1 (x) = x + PU−1 (θ) for all x ∈ X .
14.) Let X be a Banach space and let U ⊂ X be a closed subspace which is a Chebyshev
set. Set Uθ := PU−1 (θ) . Show: X = U ⊕ Uθ .
15.) Let X be a Banach space and let H := Hλ,α := {x ∈ X : ⟨λ, x⟩ = α} be a hyperplane
(λ ∈ X ∗ , λ ≠ θ, α ∈ R). Show: dist(x, H) = |⟨λ, x⟩ − α| ‖λ‖−1 for all x ∈ X .
16.) Let X be a Banach space and let H := Hλ,α := {x ∈ X : ⟨λ, x⟩ = α} be a hyperplane
(λ ∈ X ∗ , λ ≠ θ, α ∈ R) which is a Chebyshev set. Show that PH is affine.
17.) Define f : R ∋ r ↦ (1/2) dist(r, 2Z) ∈ R . Set C := graph(f) and consider R2
endowed with the l1 -norm. Show that C is a nonconvex Chebyshev set.
18.) Let H be a Hilbert space and let C be a nonempty closed convex subset of H .
Suppose that U : H −→ H is a surjective linear isometry. Then U(C) is a
nonempty closed convex subset of H and PU(C) = U ◦ PC ◦ U∗ .
19.) Consider Rn endowed with the l2 -norm. Let

U := {x = (x1 , . . . , xn ) : x1 + · · · + xn = 0} .

(a) Show that U is a linear subspace of Rn with dim U = n − 1 .
(b) Compute PU (ei ), i = 1, . . . , n . (Here, ei = (δij )j=1,...,n .)
(c) Compute PU (x) for all x ∈ Rn .
20.) Consider Rn endowed with the l2 -norm. Let

C := {x = (x1 , . . . , xn ) : x1 ≤ x2 ≤ · · · ≤ xn } .

Show that C is a Chebyshev set and a convex cone. Compute PC (x) for all x ∈ Rn .

21.) Let X be a Banach space and let U be a linear subspace. Denote by U⊥ the set

{λ ∈ X ∗ : ⟨λ, u⟩ = 0 for all u ∈ U} .

Show: For all x ∈ X \U we have

dist(x, U) = max λ∈U⊥ ∩B1 |⟨λ, x⟩| .

22.) Consider the Banach space c0 of the real sequences converging to zero, endowed
with the supremum norm. Let

U := {x = (xn )n∈N : ∑n∈N 2^{−n} xn = 0} .

Show:
(a) U is a closed linear subspace of c0 .
(b) For every x ∈ c0 \U there exists no u ∈ U with ‖x − u‖ = dist(x, U) .
23.) Let A, B, C be convex subsets of Rn . Suppose that B is closed and C is bounded.
Show that A + C ⊂ B + C implies A ⊂ B .

Bibliography

[1] N.I. Achieser. Theory of Approximation. Dover Publications, New York, 1992.

[2] E. Asplund. Chebyshev sets in Hilbert space. Trans. Amer. Math. Soc., 144:235–240,
1969.

[3] A. Assadi, H. Haghshenas, and T.D. Narang. A look at proximal and Chebyshev
sets in Banach spaces. Le Matematiche, LXIX:71–87, 2014.
[4] J. Baumeister. Konvexe Analysis, 2014. Skriptum WiSe 2014/15, Goethe–Universität
Frankfurt/Main.

[5] J.M. Borwein and A.S. Lewis. Convex Analysis and Nonlinear Optimization. Theory and
Examples. Springer, New York, 2006.

[6] L.N.H. Bunt. Bijdrage tot de theorie der convexe puntverzamelingen. PhD thesis, University of Groningen, Amsterdam, 1934.

[7] P.P. Butzer and H. Berens. Semi-groups of operators and approximation. Springer, New
York, 1967.

[8] E.W. Cheney. Introduction to approximation theory, 2nd ed. AMS Publishing, Providence,
1982.

[9] F. Deutsch. Best approximation in inner product spaces. Springer, New York, 2001.

[10] R.A. DeVore and G.G. Lorentz. Constructive Approximation. Springer, Berlin, 1993.

[11] N.V. Efimov and S.B. Stechkin. Support properties of sets in Banach spaces and Chebyshev
sets. Dokl. Akad. Nauk SSSR, 127:254–257, 1959.

[12] R. Holmes. A Course on Optimization and Best Approximation. Springer, 1971.

[13] V. Klee. Convexity of Chebyshev sets. Math. Ann., 142:292–304, 1961.

[14] V. Klee. Remarks on nearest points in normed linear spaces. In Proc. Colloquium on
Convexity, pages 168–176. Univ. Mat. Inst. Copenhagen, 1967.

[15] P. Kosmol and D. Müller-Wichards. Optimization in function spaces: with stability con-
siderations in Orlicz spaces. de Gruyter, Berlin, 2011.

[16] M. Kritikos. Sur quelques propriétés des ensembles convexes. Bull. Math. de la Soc.
Roumaine des sciences, 40:87–92, 1938.

[17] J. Li. The metric projection and its applications to solving variational inequalities in
Banach spaces. Fixed point Theory, 5:285–298, 2004.

[18] H.N. Mhaskar and D.V. Pai. Fundamentals of approximation theory. CRC Press, Boca
Raton, 2000.

[19] T.S. Motzkin. Sur quelques propriétés caractéristiques des ensembles convexes. Rend.
Acad. dei Lincei (Roma), 21:562–567, 1935.

[20] M.J.D. Powell. Approximation theory and methods. Cambridge University Press, New
York, 1981.

[21] I. Singer. Some open problems on best approximations, 1967. Seminaire Choquet.

[22] I. Singer. The theory of best approximation and functional analysis, volume 13 of Series in
applied mathematics. SIAM, Philadelphia, 1974.

[23] I. Vlasov. Chebyshev sets in Banach spaces. Soviet Math. Dokl., 2:1373–1374, 1961.

