
Journal of Functional Analysis 261 (2011) 2083–2093

www.elsevier.com/locate/jfa

The infinite dimensional Lagrange multiplier rule for convex optimization problems
Maria Bernadette Donato
Department of Mathematics, University of Messina, Viale Ferdinando Stagno d’Alcontres, 31, 98166 Messina, Italy
Received 28 January 2011; accepted 6 June 2011
Available online 8 July 2011
Communicated by L.C. Evans

Abstract
In this paper an infinite dimensional generalized Lagrange multiplier rule for convex optimization problems is presented, and necessary and sufficient optimality conditions are given in order to guarantee strong duality. Furthermore, an application is presented; in particular, the existence of Lagrange multipliers associated with the bi-obstacle problem is obtained.
© 2011 Elsevier Inc. All rights reserved.

Keywords: Lagrange multiplier rule; Convex optimization problems; Strong duality; Assumption S; Bi-obstacle problem

1. Introduction

The convex optimization problem we are concerned with is the following.


Let X be a linear topological space, let Y be a real normed space ordered by a convex cone C
and let Z be a real normed space. Let S be a convex subset of X, let f : S → R be a given functional, let g : S → Y be a given mapping and let h : S → Z be an affine-linear mapping.
Setting
 
K = {x ∈ S: g(x) ∈ −C, h(x) = θZ},   (1)

where θZ is the zero element in the space Z, we consider the optimization problem

“find x0 ∈ K such that f(x0) = min_{x∈K} f(x)”   (2)


and, as usual, we consider its Lagrange dual problem


    
max_{u∈C∗, v∈Z∗} inf_{x∈S} {f(x) + ⟨u, g(x)⟩ + ⟨v, h(x)⟩}   (3)

where
 
C∗ = {u ∈ Y∗: ⟨u, y⟩ ≥ 0, ∀y ∈ C}

is the dual cone of C.
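To make the primal-dual pair (2)–(3) concrete, here is a small finite dimensional illustration of our own (it is not part of the paper's setting): X = S = R², Y = Z = R, C = [0, +∞), f(x) = (x1 − 2)² + x2², g(x) = x1 − 1 and h(x) = x2 − 1/2. The following Python sketch evaluates both extremal values numerically; every numerical choice in it is ours and is made only for illustration.

import numpy as np

# Toy instance of (2)-(3): minimize f over K = {x : g(x) <= 0, h(x) = 0}.
f = lambda x1, x2: (x1 - 2.0) ** 2 + x2 ** 2
g = lambda x1, x2: x1 - 1.0          # cone constraint g(x) in -C, C = [0, inf)
h = lambda x1, x2: x2 - 0.5          # affine equality constraint

# Primal value: brute-force search over a grid restricted to K (h fixes x2).
x1 = np.linspace(-3.0, 3.0, 60001)
primal = f(x1[g(x1, 0.5) <= 0.0], 0.5).min()

# Lagrange dual function q(u, v) = inf_x [f(x) + u*g(x) + v*h(x)], u >= 0.
# For this quadratic the infimum is attained at x1 = 2 - u/2, x2 = -v/2.
def q(u, v):
    x1s, x2s = 2.0 - u / 2.0, -v / 2.0
    return f(x1s, x2s) + u * g(x1s, x2s) + v * h(x1s, x2s)

u = np.linspace(0.0, 5.0, 501)       # dual variable for the cone constraint
v = np.linspace(-5.0, 5.0, 1001)     # multiplier of the equality constraint
U, V = np.meshgrid(u, v)
dual = q(U, V).max()

print(f"primal value ~ {primal:.4f}")   # ~1.25, attained at x0 = (1, 0.5)
print(f"dual value   ~ {dual:.4f}")     # ~1.25: the two extremal values coincide

Both printed values are approximately 1.25, attained at x0 = (1, 1/2) and at the dual pair (u, v) = (2, −1); this coincidence of extremal values is exactly the strong duality that Assumption S is designed to guarantee in the infinite dimensional case.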


In papers [22,15,2], the authors give sufficient conditions under which the strong duality between a convex optimization problem in an infinite dimensional space and its Lagrange dual problem is guaranteed, i.e., the extremal values of the two problems are equal.
It is worth remarking that these usual conditions rely on concepts of interior, core, intrinsic core or strong quasi-relative interior, which require the nonemptiness of the interior of the ordering cone defining the cone constraints in convex optimization and variational inequalities. Since in many infinite dimensional equilibrium problems the ordering cone has empty interior, these usual conditions cannot be used to guarantee the strong duality. This is the case of all optimization problems or variational inequalities connected with network equilibrium problems, the obstacle problems and the elastic plastic torsion problems (see [1,5–9,11–14,16,19,20,23]), which use the positive cones of Lᵖ(Ω) or of Sobolev spaces, whose interior is empty. Recently, in [9,8,10,18] the authors overcame this important difficulty by introducing a condition, called Assumption S, which ensures the strong duality.
Assumption S is the following. Firstly, we recall the concept of tangent cone.
Given a point x ∈ X and a subset C of X, the set

T_C(x) = {h ∈ X: h = lim_{n→∞} λn(xn − x), λn ∈ R and λn > 0, ∀n ∈ N, xn ∈ C, ∀n ∈ N and lim_{n→∞} xn = x}

is called the tangent cone to C at x. Of course, if T_C(x) ≠ ∅, then x ∈ cl C. If x ∈ cl C and C is convex, then we have

T_C(x) = cl cone(C − {x}),

where

cone(C) = {λx: x ∈ C, λ ∈ R, λ ≥ 0}

and cl denotes the closure.
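As a quick numerical illustration of the tangent cone formula for convex sets (our example, not the paper's), take C the closed unit disk of R² and x = (1, 0) on its boundary, so that T_C(x) = cl cone(C − {x}) is the closed half-plane {h: h1 ≤ 0}. The sketch below estimates the distance from a few directions h to cone(C − {x}) by sampling points of C; the sampling scheme and tolerances are arbitrary choices of ours.

import numpy as np

rng = np.random.default_rng(0)

# C = closed unit disk in R^2, x = (1, 0) a boundary point.  For convex C,
# T_C(x) = cl cone(C - {x}), which here is the half-plane {h : h[0] <= 0}.
x = np.array([1.0, 0.0])

# Sample points of C: interior points plus boundary points clustered near x,
# which matter for tangential directions such as h = (0, 1).
theta = rng.uniform(-np.pi, np.pi, 200000)
r = np.sqrt(rng.uniform(0.0, 1.0, 200000))
c = np.column_stack([r * np.cos(theta), r * np.sin(theta)])
tb = np.linspace(-0.5, 0.5, 5001)
c = np.vstack([c, np.column_stack([np.cos(tb), np.sin(tb)])])

d = c - x                                   # admissible directions c - x
d2 = np.einsum("ij,ij->i", d, d)
d, d2 = d[d2 > 1e-12], d2[d2 > 1e-12]       # drop samples too close to x

def dist_to_cone(h):
    # distance from h to cone(C - {x}), estimated over the sampled directions
    lam = np.maximum(0.0, d @ h / d2)       # best nonnegative scaling per sample
    return np.linalg.norm(h - lam[:, None] * d, axis=1).min()

for h in ([-1.0, 0.5], [0.0, 1.0], [0.3, 1.0]):
    h = np.array(h)
    print(h, "distance ~ %.4f" % dist_to_cone(h), "| in T_C(x) analytically:", h[0] <= 0.0)

Directions with negative first component give a distance close to zero, the tangential direction (0, 1) is recovered only in the closure (a small but nonzero estimate), and directions with positive first component stay at a distance bounded away from zero.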

Definition 1. Given three functions f, g, h and a set K as in (1), we say that Assumption S is fulfilled at a point x0 ∈ K if and only if

T_M̃(0, θY, θZ) ∩ (]−∞, 0[ × {θY} × {θZ}) = ∅,   (4)

where

M̃ = {(f(x) − f(x0) + α, g(x) + y, h(x)): x ∈ S \ K, α ≥ 0, y ∈ C}.

A clear geometrical meaning of Assumption S is that the tangent cone to the subset M̃ of R × Y × Z at the point (0, θY, θZ) does not contain ]−∞, 0[ × {θY} × {θZ}. M̃ is a particular type of conic extension of the image of the optimization problem (2) in the image space R × Y × Z. From an analytic point of view, Assumption S means that, whenever f(xn) − f(x0) + αn, with αn ≥ 0 for all n ∈ N, converges to zero with xn not belonging to K and the scaled constraint sequences λn(g(xn) + yn) and λn h(xn), with yn ∈ C and λn > 0 for all n ∈ N, vanish, the limit of λn(f(xn) − f(x0) + αn) cannot be negative. Then Assumption S essentially requires showing that a particular limit is nonnegative. After all, the calculation of a limit may not be an exorbitant price to pay, considering the importance of having a necessary and sufficient condition for the strong duality.
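The role of Assumption S can be seen on a one-dimensional toy problem of ours (not taken from the paper): minimize f(x) = x subject to g(x) = x² ≤ 0 on S = R, so that K = {0} and x0 = 0. Along xn = −1/n, αn = 0, yn = 0, λn = n the scaled constraint sequence λn(g(xn) + yn) vanishes while λn(f(xn) − f(x0) + αn) → −1 < 0, so Assumption S fails; consistently, the dual supremum sup_{u≥0} inf_{x∈R} (x + ux²) equals the primal value 0 but is not attained, so problem (3) has no maximizer. The short sketch below just prints these sequences.

import numpy as np

# Toy problem (ours): minimize f(x) = x subject to g(x) = x**2 <= 0 on S = R,
# so K = {0}, x0 = 0 and f(x0) = 0.  Slater-type conditions fail here, and so
# does Assumption S.
f = lambda x: x
g = lambda x: x ** 2

n = np.arange(1, 8)
xn, alphan, yn, lamn = -1.0 / n, 0.0, 0.0, n.astype(float)

# Components of lambda_n * ((f(xn) - f(0) + alpha_n, g(xn) + yn) - (0, 0)):
first = lamn * (f(xn) - f(0.0) + alphan)     # -> -1  (a negative limit!)
second = lamn * (g(xn) + yn)                 # -> 0   (constraint part vanishes)
print("lambda_n * (f-part):", np.round(first, 4))
print("lambda_n * (g-part):", np.round(second, 4))
# (-1, 0) lies in the tangent cone to M at (0, 0): Assumption S is violated.

# Consistently, the dual function q(u) = inf_x [x + u*x**2] = -1/(4u) for u > 0
# approaches the primal value 0 only as u -> +infinity: the max in (3) is not attained.
u = np.array([1.0, 10.0, 100.0, 1000.0])
print("q(u):", -1.0 / (4.0 * u))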
Now, we recall the main theorem on strong duality theory.

Theorem 1. (See [10].) Assume that the functions f : S → R, g : S → Y are convex and that h : S → Z is an affine-linear mapping. Assume that Assumption S is fulfilled at the optimal solution x0 ∈ K to (2). Then problem (3) is also solvable and, if ū ∈ C∗, v̄ ∈ Z∗ are the optimal solutions to (3), we have

⟨ū, g(x0)⟩ = 0

and the optimal values of the two problems coincide, namely

f(x0) = max_{u∈C∗, v∈Z∗} inf_{x∈S} {f(x) + ⟨u, g(x)⟩ + ⟨v, h(x)⟩}.

Assumption S is also a necessary condition for the strong duality; in fact, the following corollary holds.

Corollary 1. If the strong duality between problems (2) and (3) holds, then Assumption S is
fulfilled.

Proof. See Corollary 3.1 of [3]. □

An important consequence of the strong duality is the usual relationship between a saddle
point of the so-called Lagrange functional
   
L(x, u, v) = f(x) + ⟨u, g(x)⟩ + ⟨v, h(x)⟩, ∀x ∈ S, ∀u ∈ C∗, ∀v ∈ Z∗,

and the solutions to (2) and (3). In fact, one has the following theorem.

Theorem 2. (See [8] and [9].) Let the assumptions of Theorem 1 be fulfilled. Then x0 ∈ K is an optimal solution to problem (2) if and only if there exist ū ∈ C∗ and v̄ ∈ Z∗ such that (x0, ū, v̄) is a saddle point of the Lagrangean functional, namely

L(x0, u, v) ≤ L(x0, ū, v̄) ≤ L(x, ū, v̄), ∀x ∈ S, ∀u ∈ C∗, ∀v ∈ Z∗

and, moreover, it results that

⟨ū, g(x0)⟩ = 0.
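Returning to the toy instance introduced after (3) (ours: f(x) = (x1 − 2)² + x2², g(x) = x1 − 1, h(x) = x2 − 1/2, S = R²), the optimal point is x0 = (1, 1/2) with multipliers ū = 2, v̄ = −1. The sketch below samples the Lagrangean functional to check the saddle point inequality of Theorem 2 and the complementarity condition ⟨ū, g(x0)⟩ = 0.

import numpy as np

rng = np.random.default_rng(1)

# Toy data (ours): f(x) = (x1-2)^2 + x2^2, g(x) = x1 - 1, h(x) = x2 - 0.5.
def L(x, u, v):
    # Lagrangean L(x, u, v) = f(x) + u*g(x) + v*h(x)
    x1, x2 = x
    return (x1 - 2.0) ** 2 + x2 ** 2 + u * (x1 - 1.0) + v * (x2 - 0.5)

x0, u_bar, v_bar = (1.0, 0.5), 2.0, -1.0     # candidate saddle point
mid = L(x0, u_bar, v_bar)                     # = f(x0) = 1.25

# Left inequality: L(x0, u, v) <= L(x0, u_bar, v_bar) for all u >= 0, v in R.
left_ok = all(L(x0, u, v) <= mid + 1e-12
              for u, v in zip(rng.uniform(0, 10, 1000), rng.uniform(-10, 10, 1000)))

# Right inequality: L(x0, u_bar, v_bar) <= L(x, u_bar, v_bar) for all x in S = R^2.
right_ok = all(mid <= L((x1, x2), u_bar, v_bar) + 1e-12
               for x1, x2 in rng.uniform(-10, 10, (1000, 2)))

print("saddle value L(x0, u_bar, v_bar) =", mid)               # 1.25 = f(x0)
print("left and right saddle inequalities hold:", left_ok, right_ok)
print("complementarity u_bar * g(x0) =", u_bar * (x0[0] - 1.0))  # 0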

In Section 2 of this paper we investigate a generalized Lagrange multipliers rule for the op-
timization problem (2) and formulate a multiplier rule as necessary and sufficient optimality
conditions.
In detail, we will prove the following theorem.

Theorem 3. Let X be a linear topological space, let Y be a real normed space ordered by a convex cone C and let Z be a real normed space. Let S be a convex subset of X, let f : S → R be a given convex functional, let g : S → Y be a given convex mapping and let h : S → Z be an affine-linear mapping. Assume that f, g, h have a directional derivative at the solution x0 ∈ K to problem (2) in every direction x − x0 with arbitrary x ∈ S. Moreover, assume that Assumption S is fulfilled at the minimal point x0 ∈ K. Then there exist ū ∈ C∗, v̄ ∈ Z∗ such that

(f′(x0) + ū ∘ g′(x0) + v̄ ∘ h′(x0))(x − x0) ≥ 0, ∀x ∈ S   (5)

and

ū(g(x0)) = 0.   (6)

Vice versa, if (5) and (6) hold, then x0 is the minimal solution of problem (2) and Assumption S is verified.
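A one-dimensional illustration of ours (again not from the paper) shows why (5) is stated as a variational inequality rather than as an equation: take X = R, S = [0, 3/2], f(x) = (x − 3)², g(x) = x − 2 with C = [0, +∞), and drop the equality constraint. Then K = S, the minimizer x0 = 3/2 lies on the boundary of S, g is inactive at x0, so (6) forces ū = 0, while the combined derivative in (5) equals −3 and (5) holds only because every admissible direction x − x0 is nonpositive.

import numpy as np

# Toy instance (ours): X = R, S = [0, 1.5], f(x) = (x - 3)**2, g(x) = x - 2 <= 0,
# and no equality constraint.  Here K = S, the minimizer is x0 = 1.5 (a boundary
# point of S), g is inactive at x0, and complementarity (6) forces u_bar = 0.
x0, u_bar = 1.5, 0.0
fprime, gprime = 2.0 * (x0 - 3.0), 1.0       # f'(x0) = -3, g'(x0) = 1

print("u_bar * g(x0) =", u_bar * (x0 - 2.0))                  # (6): equals 0

# (5): (f'(x0) + u_bar*g'(x0))*(x - x0) >= 0 for every x in S, even though the
# combined derivative itself is -3: every admissible direction x - x0 is <= 0.
xs = np.linspace(0.0, 1.5, 7)
print("directional values:", np.round((fprime + u_bar * gprime) * (xs - x0), 3))
print("min of f on K (grid) =", ((xs - 3.0) ** 2).min(), "; f(x0) =", (x0 - 3.0) ** 2)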

It is worth comparing Theorem 3 with well-known results presented in the literature, as, for example, Theorem 5.3 and Corollary 5.4 of [17] for the necessary conditions and Theorem 5.14 of [17] for the sufficient conditions. In fact, let us observe that our main result, Theorem 3, generalizes Theorem 5.3 of [17] with regard to the case when h is an affine-linear mapping. Our assumptions are very general, and the Kurcyusz–Robinson–Zowe regularity condition (see [21] and [25]):

(g′(x0), h′(x0))[cone(S − {x0})] + cone((C + {g(x0)}) × {θZ}) = Y × Z,

is replaced in our theorem by Assumption S.


Finally, Section 3 is devoted to the application of Assumption S to the study of the bi-obstacle
problem.

2. Proof of Theorem 3

Let us start by remarking that, in virtue of Theorems 1 and 2, there exist ū ∈ C∗, v̄ ∈ Z∗ solutions to the dual problem (3) and one has that

⟨ū, g(x0)⟩ = 0

and

f(x0) = max_{u∈C∗, v∈Z∗} inf_{x∈S} {f(x) + ⟨u, g(x)⟩ + ⟨v, h(x)⟩}.

Moreover, setting

L(x, u, v) = f(x) + ⟨u, g(x)⟩ + ⟨v, h(x)⟩, ∀x ∈ S, ∀u ∈ C∗, ∀v ∈ Z∗,

it results that (x0, ū, v̄) is a saddle point of the Lagrangean functional, namely

L(x0, u, v) ≤ L(x0, ū, v̄) ≤ L(x, ū, v̄), ∀x ∈ S, ∀u ∈ C∗, ∀v ∈ Z∗.   (7)

Let us now consider the right-hand side of (7): we get

f(x0) + ⟨ū, g(x0)⟩ + ⟨v̄, h(x0)⟩ ≤ f(x) + ⟨ū, g(x)⟩ + ⟨v̄, h(x)⟩, ∀x ∈ S.

Taking into account that ⟨ū, g(x0)⟩ = 0 and h(x0) = θZ, we obtain

f(x) + ⟨ū, g(x)⟩ + ⟨v̄, h(x)⟩ ≥ f(x0), ∀x ∈ S.

So we have that x0 is a minimal point of the functional f(x) + ⟨ū, g(x)⟩ + ⟨v̄, h(x)⟩ on S. In virtue of well-known theorems (see for example Theorem 3.8 of [17]), since this functional has a directional derivative at x0 in every direction x − x0 with arbitrary x ∈ S, one has the thesis

(f′(x0) + ū ∘ g′(x0) + v̄ ∘ h′(x0))(x − x0) ≥ 0, ∀x ∈ S.

Vice versa, let us assume that (5) and (6) hold. The functional f(x) + ⟨ū, g(x)⟩ + ⟨v̄, h(x)⟩ is convex, in virtue of the assumptions on f, g, h, ū, v̄. In fact, f(x) is a convex functional and v̄(h(x)) is affine, because v̄ is linear and h is an affine-linear mapping. Moreover, g is a convex mapping with respect to the ordering cone C, namely, for all x, y ∈ S and all λ, μ ≥ 0 with λ + μ = 1, one has

g(λx + μy) − (λg(x) + μg(y)) ∈ −C.

Since ū ∈ C∗, we get

ū(g(λx + μy) − (λg(x) + μg(y))) ≤ 0.

Hence, it follows that ū(g(x)) is convex, because

ū(g(λx + μy)) ≤ ū(λg(x) + μg(y)) = λū(g(x)) + μū(g(y)).

In virtue of Theorem 3.8, case b, of [17], we have that x0 is a minimal point of the functional f(x) + ⟨ū, g(x)⟩ + ⟨v̄, h(x)⟩, namely:

f(x0) + ⟨ū, g(x0)⟩ + ⟨v̄, h(x0)⟩ = min_{x∈S} {f(x) + ⟨ū, g(x)⟩ + ⟨v̄, h(x)⟩}.

In particular, from (6), for every x ∈ K we get

f(x0) ≤ f(x) + ⟨ū, g(x)⟩ + ⟨v̄, h(x)⟩ ≤ f(x),   (8)

since ⟨ū, g(x)⟩ ≤ 0 and h(x) = θZ for all x ∈ K.



Now, let us show that the strong duality and Assumption S hold true. In fact, from the previous minimality property, together with (6) and h(x0) = θZ, one has

f(x0) ≤ inf_{x∈S} {f(x) + ⟨ū, g(x)⟩ + ⟨v̄, h(x)⟩}.   (9)

Furthermore, for all u ∈ C∗ and v ∈ Z∗, taking into account that ⟨u, g(x0)⟩ ≤ 0, we get

inf_{x∈S} {f(x) + ⟨u, g(x)⟩ + ⟨v, h(x)⟩} ≤ f(x0) + ⟨u, g(x0)⟩ + ⟨v, h(x0)⟩ ≤ f(x0) ≤ inf_{x∈S} {f(x) + ⟨ū, g(x)⟩ + ⟨v̄, h(x)⟩}.

Then

sup_{u∈C∗, v∈Z∗} inf_{x∈S} {f(x) + ⟨u, g(x)⟩ + ⟨v, h(x)⟩} ≤ f(x0) ≤ inf_{x∈S} {f(x) + ⟨ū, g(x)⟩ + ⟨v̄, h(x)⟩} ≤ sup_{u∈C∗, v∈Z∗} inf_{x∈S} {f(x) + ⟨u, g(x)⟩ + ⟨v, h(x)⟩},

namely

max_{u∈C∗, v∈Z∗} inf_{x∈S} {f(x) + ⟨u, g(x)⟩ + ⟨v, h(x)⟩} = inf_{x∈S} {f(x) + ⟨ū, g(x)⟩ + ⟨v̄, h(x)⟩} = f(x0) + ⟨ū, g(x0)⟩ + ⟨v̄, h(x0)⟩ = f(x0) = min_{x∈K} f(x).

So the strong duality holds and, in virtue of Corollary 1, also Assumption S is fulfilled. □

Corollary 2. If S = X, then

(f′(x0) + ū ∘ g′(x0) + v̄ ∘ h′(x0))(h) = 0, ∀h ∈ X.   (10)

Furthermore, if f and g are Gateaux differentiable on X, then we also get from (10) that

f′(x0) + ū ∘ g′(x0) + v̄ ∘ h′(x0) = 0.
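For the toy instance used above (ours), all the mappings are Gateaux differentiable on X = S = R², so Corollary 2 applies and the multiplier rule reduces to the equation (10) together with the complementarity condition (6). A brief numerical check:

import numpy as np

# Toy instance (ours): f(x) = (x1-2)^2 + x2^2, g(x) = x1 - 1, h(x) = x2 - 0.5,
# with X = S = R^2, minimizer x0 = (1, 0.5) and multipliers u_bar = 2, v_bar = -1.
x0 = np.array([1.0, 0.5])
u_bar, v_bar = 2.0, -1.0

grad_f = np.array([2.0 * (x0[0] - 2.0), 2.0 * x0[1]])   # f'(x0)
grad_g = np.array([1.0, 0.0])                           # g'(x0)
grad_h = np.array([0.0, 1.0])                           # h'(x0)

stationarity = grad_f + u_bar * grad_g + v_bar * grad_h
print("f'(x0) + u_bar g'(x0) + v_bar h'(x0) =", stationarity)   # [0. 0.]  -> (10)
print("u_bar * g(x0) =", u_bar * (x0[0] - 1.0))                  # 0.0      -> (6)

# (5) follows for every direction x - x0, since the combined derivative vanishes:
rng = np.random.default_rng(2)
dirs = rng.normal(size=(5, 2))
print("directional values:", dirs @ stationarity)               # all (numerically) 0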

3. Application to the bi-obstacle problem

Let Ω ⊂ Rⁿ be an open bounded domain, either convex or with C^{1,1} boundary. Let us consider the linear elliptic operator of second order


Lu = − Σ_{i,j=1}^n ∂/∂x_i (a_ij ∂u/∂x_j) + Σ_{i=1}^n b_i ∂u/∂x_i + cu   (11)

with associated bilinear form on H¹₀(Ω) × H¹₀(Ω) given by

a(u, v) = ∫_Ω ( Σ_{i,j=1}^n a_ij (∂u/∂x_j)(∂v/∂x_i) + Σ_{i=1}^n b_i (∂u/∂x_i) v + cuv ) dx   (12)

where
Σ_{i,j=1}^n a_ij(x) ξ_i ξ_j ≥ a|ξ|² a.e. on Ω, ∀ξ ∈ Rⁿ,

a > 0, a_ij ∈ C¹(Ω), b_i ∈ L∞(Ω),

c > 0 large enough that a(u, u) ≥ α‖u‖²_{H¹₀(Ω)}, α > 0, ∀u ∈ H¹₀(Ω).

Let ψ(x), ψ∗(x) ∈ H¹(Ω), ψ(x) ≤ ψ∗(x) a.e. in Ω, ψ(x) ≤ 0 ≤ ψ∗(x) a.e. on ∂Ω, and consider the set K = {u ∈ L²(Ω): ψ ≤ u ≤ ψ∗ a.e. in Ω}.
Then the following result holds true (see [4, Corollaire I.1] and [24]).

Theorem 4. Assume that Lψ and Lψ∗ are measures, with (Lψ − f)⁺ and (Lψ∗ − f)⁻ ∈ Lᵖ(Ω), p ≥ 2. Then, for every f ∈ Lᵖ(Ω) there exists a unique solution u ∈ K ∩ W^{2,p}(Ω) ∩ H¹₀(Ω) to the variational inequality

∫_Ω Lu(v − u) dx ≥ ∫_Ω f(v − u) dx, ∀v ∈ K,

such that

‖u‖_{W^{2,p}(Ω)} ≤ c(‖f‖_{Lᵖ} + ‖(Lψ − f)⁺‖_{Lᵖ} + ‖(Lψ∗ − f)⁻‖_{Lᵖ}).

Now, in this section, we would like to apply the infinite dimensional Lagrange multiplier rule
of the previous section to the variational inequality

∫_Ω (Lu − f)(v − u) dx ≥ 0, ∀v ∈ K   (13)

where
 
K = {u ∈ L²(Ω): ψ ≤ u ≤ ψ∗ a.e. in Ω}.

Firstly, let us show that Assumption S is verified. To this aim, let us rewrite the variational inequality (13) as an optimization problem. Setting

F(v) = ∫_Ω (Lu − f)(v − u) dx, ∀v ∈ L²(Ω),

we get

F(v) ≥ 0, ∀v ∈ K,

and u is a minimal point of the problem

min_{v∈K} F(v) = F(u) = 0.   (14)
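Before proving that Assumption S holds, it may help to see the structure of (13)–(14) on a discretized example of ours (it is not part of the paper): a one-dimensional, constant-coefficient analogue with Lu = −u″ + cu (no first-order terms), constant obstacles and a finite-difference grid, solved by projected Gauss–Seidel. The grid size, the data f and the obstacles below are arbitrary choices made only for illustration.

import numpy as np

# Discretized 1-D analogue (ours) of (13)-(14): Omega = (0,1), Lu = -u'' + c*u,
# constant obstacles psi <= u <= psi*, finite differences, projected Gauss-Seidel.
N, c = 49, 1.0                               # interior grid points, zero-order coefficient
h = 1.0 / (N + 1)
xg = np.linspace(h, 1.0 - h, N)
A = (np.diag(np.full(N, 2.0 / h**2 + c))
     + np.diag(np.full(N - 1, -1.0 / h**2), 1)
     + np.diag(np.full(N - 1, -1.0 / h**2), -1))
fvec = 30.0 * np.sin(3.0 * np.pi * xg)       # right-hand side f
psi, psi_star = -0.2 * np.ones(N), 0.2 * np.ones(N)

u = np.zeros(N)
for _ in range(5000):                        # projected Gauss-Seidel sweeps
    for i in range(N):
        r = fvec[i] - A[i] @ u + A[i, i] * u[i]
        u[i] = np.clip(r / A[i, i], psi[i], psi_star[i])

# F(v) = integral of (Lu - f)(v - u) dx, discretized: F(u) = 0 and F(v) >= 0 on K,
# in accordance with (14).
res = A @ u - fvec                           # discrete analogue of Lu - f
F = lambda v: h * res @ (v - u)
rng = np.random.default_rng(3)
print("F(u) =", F(u))
print("F(v), random v in K:", np.round([F(rng.uniform(psi, psi_star)) for _ in range(5)], 4))
print("lower/upper obstacle active somewhere:", bool((u == psi).any()), bool((u == psi_star).any()))

The printout shows F(u) = 0, F(v) ≥ 0 for sampled v ∈ K and that both obstacles are active somewhere, which is exactly the structure described by Lemma 1 and by the multipliers computed at the end of this section.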

We can show the following lemma.

Lemma 1. Let u ∈ K be a solution to the variational inequality (13). Let us set

Ω+ = {x ∈ Ω: u(x) = ψ(x)},
Ω0 = {x ∈ Ω: ψ(x) < u(x) < ψ∗(x)},
Ω− = {x ∈ Ω: u(x) = ψ∗(x)}.

Then one has

Lu − f ≥ 0 a.e. in Ω+,
Lu − f = 0 a.e. in Ω0,
Lu − f ≤ 0 a.e. in Ω−.

Proof. Let us observe that we have

F(v) = ∫_Ω (Lu − f)(v − u) dx
     = ∫_{Ω+} (Lu − f)(v − ψ) dx + ∫_{Ω0} (Lu − f)(v − u) dx + ∫_{Ω−} (Lu − f)(v − ψ∗) dx ≥ 0, ∀v ∈ K.

Let us choose as test function

v = w in Ω+, with ψ < w < ψ∗, ∀w ∈ L²(Ω+),
v = u in Ω0,
v = ψ∗ in Ω−;

then

F(w) = ∫_{Ω+} (Lu − f)(w − ψ) dx ≥ 0, ∀ψ < w < ψ∗.   (15)

Since w − ψ > 0 in Ω+, then Lu − f ≥ 0 a.e. in Ω+. In fact, if, by contradiction, there exists a subset E of Ω+ with m(E) > 0 such that

Lu − f < 0 in E,

then, choosing

w = ψ in Ω+ \ E,
w = s in E, with ψ < s < ψ∗,

we get

F(s) = ∫_E (Lu − f)(s − ψ) dx < 0,

which contradicts (15). Hence

Lu − f ≥ 0 a.e. in Ω+.

In the same way we can prove the other cases. □

Now we prove the following lemma.

Lemma 2. Problem (14) verifies Assumption S at the minimal point u ∈ K.

Proof. In our case, we have

X = S = L²(Ω), Y = L²(Ω) × L²(Ω),

the dual cone of the ordering cone C of Y is C∗ = {(α, β) ∈ L²(Ω) × L²(Ω): α ≥ 0, β ≥ 0 a.e. in Ω}, and g(v) = (g1(v), g2(v)) = (ψ − v, v − ψ∗). Of course, in our case C = C∗. Furthermore,

M̃ = {(F(v) + α, ψ − v + y1, v − ψ∗ + y2): v ∈ L²(Ω) \ K, α ≥ 0, y = (y1, y2) ∈ C}

and

T_M̃(0, θ_{L²(Ω)}, θ_{L²(Ω)}) = {y: y = lim_{n→+∞} λn [(F(vn) + αn, ψ − vn + y1n, vn − ψ∗ + y2n) − (0, θ_{L²(Ω)}, θ_{L²(Ω)})],
with λn > 0, lim_{n→+∞} (F(vn) + αn) = 0, lim_{n→+∞} λn(ψ − vn + y1n) = θ_{L²(Ω)},
lim_{n→+∞} λn(vn − ψ∗ + y2n) = θ_{L²(Ω)}, lim_{n→+∞} (ψ − vn + y1n) = θ_{L²(Ω)},
lim_{n→+∞} (vn − ψ∗ + y2n) = θ_{L²(Ω)}, vn ∈ L²(Ω) \ K, αn ≥ 0, yn = (y1n, y2n) ∈ C}.

In order to achieve Assumption S, we must show that, if

(l, θ_{L²(Ω)}, θ_{L²(Ω)}) = (lim_{n→+∞} λn(F(vn) + αn), lim_{n→+∞} λn(ψ − vn + y1n), lim_{n→+∞} λn(vn − ψ∗ + y2n))

belongs to T_M̃(0, θ_{L²(Ω)}, θ_{L²(Ω)}), then l must be nonnegative.

It results that

l = lim_{n→+∞} λn(F(vn) + αn) = lim_{n→+∞} λn [ ∫_Ω (Lu − f)(vn − u) dx + αn ]

  = lim_{n→+∞} λn [ ∫_{Ω+} (Lu − f)(vn − ψ) dx + ∫_{Ω0} (Lu − f)(vn − u) dx + ∫_{Ω−} (Lu − f)(vn − ψ∗) dx + αn ]

  = lim_{n→+∞} λn [ ∫_{Ω+} (Lu − f)(vn − ψ − y1n) dx + ∫_{Ω+} (Lu − f) y1n dx + ∫_{Ω−} (Lu − f)(vn − ψ∗ + y2n) dx + ∫_{Ω−} (Lu − f)(−y2n) dx + αn ],

where the integral over Ω0 has been dropped in the last step since Lu − f = 0 a.e. in Ω0 by Lemma 1. Taking into account that

lim_{n→+∞} ∫_{Ω+} (Lu − f) λn(vn − ψ − y1n) dx = 0,

lim_{n→+∞} ∫_{Ω−} (Lu − f) λn(vn − ψ∗ + y2n) dx = 0

and

(Lu − f) λn y1n ≥ 0 in Ω+,  (Lu − f) λn(−y2n) ≥ 0 in Ω−,

and αn ≥ 0, it follows that l ≥ 0. Hence Assumption S holds. □

Since the other assumptions required by Theorem 3 and Corollary 2 are fulfilled, there exists (λ, μ) ∈ C∗ such that:

(i) ∫_Ω λ(ψ − u) dx = 0 ⇔ λ(ψ − u) = 0 a.e. in Ω

and

∫_Ω μ(u − ψ∗) dx = 0 ⇔ μ(u − ψ∗) = 0 a.e. in Ω;

(ii) (Lu − f) − λ + μ = 0 a.e. in Ω.



In particular, we can obtain explicitly the values of the Lagrange multipliers λ and μ. In fact, when λ > 0 one has u = ψ, μ = 0 and λ = Lψ − f, whereas, when μ > 0, one has u = ψ∗, λ = 0 and μ = −(Lψ∗ − f).
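On the discretized one-dimensional sketch introduced after (14) (ours, with Lu = −u″ + cu and constant obstacles), these formulas can be checked directly: taking λ = (Lu − f)⁺ supported on {u = ψ} and μ = (Lu − f)⁻ supported on {u = ψ∗}, one recovers numerically the sign conditions of Lemma 1, the complementarity conditions (i) and the stationarity condition (ii). The discretization and all numerical values are again our own illustrative choices.

import numpy as np

# Same 1-D discretized bi-obstacle sketch as before (ours): Lu = -u'' + c*u on
# (0,1), constant obstacles, projected Gauss-Seidel; here we also recover the
# Lagrange multipliers lambda and mu and check (i), (ii) and Lemma 1.
N, c = 49, 1.0
h = 1.0 / (N + 1)
x = np.linspace(h, 1.0 - h, N)
A = (np.diag(np.full(N, 2.0 / h**2 + c))
     + np.diag(np.full(N - 1, -1.0 / h**2), 1)
     + np.diag(np.full(N - 1, -1.0 / h**2), -1))
fvec = 30.0 * np.sin(3.0 * np.pi * x)
psi, psi_star = -0.2 * np.ones(N), 0.2 * np.ones(N)

u = np.zeros(N)
for _ in range(5000):                        # projected Gauss-Seidel sweeps
    for i in range(N):
        r = fvec[i] - A[i] @ u + A[i, i] * u[i]
        u[i] = np.clip(r / A[i, i], psi[i], psi_star[i])

res = A @ u - fvec                           # discrete analogue of Lu - f
low, mid_set, up = (u == psi), (psi < u) & (u < psi_star), (u == psi_star)

# Lemma 1: Lu - f >= 0 where u = psi, = 0 in between, <= 0 where u = psi*.
print("min(Lu-f) on {u=psi}     :", res[low].min())
print("max|Lu-f| on {psi<u<psi*}:", np.abs(res[mid_set]).max())
print("max(Lu-f) on {u=psi*}    :", res[up].max())

# Multipliers: lambda = (Lu - f)^+ on {u = psi}, mu = (Lu - f)^- on {u = psi*}.
lam = np.where(low, np.maximum(res, 0.0), 0.0)
mu = np.where(up, np.maximum(-res, 0.0), 0.0)
print("complementarity max|lam*(psi-u)|, max|mu*(u-psi*)|:",
      np.abs(lam * (psi - u)).max(), np.abs(mu * (u - psi_star)).max())
print("stationarity max|(Lu-f) - lam + mu|:", np.abs(res - lam + mu).max())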

References

[1] A. Barbagallo, A. Maugeri, Duality theory for a dynamic oligopolistic market equilibrium problem, Optimization 60
(2011) 29–52.
[2] J.M. Borwein, A.S. Lewis, Partially finite convex programming, part I: Quasi relative interiors and duality theory,
Math. Program. 57 (1992) 15–48.
[3] R.I. Bot, E.R. Csetnek, A. Moldovan, Revisiting some dual theorems via the quasirelative interior in convex opti-
mization, J. Optim. Theory Appl. 139 (2008) 67–84.
[4] H. Brezis, Problèmes unilatéraux, J. Math. Pures Appl. 51 (1972) 1–168.
[5] M.G. Cojocaru, P. Daniele, A. Nagurney, Projected dynamical systems and evolutionary variational inequalities via
Hilbert spaces and applications, J. Optim. Theory Appl. 127 (2005) 549–563.
[6] P. Daniele, Dynamic Networks and Evolutionary Variational Inequalities, New Dimensions in Networks, Edward
Elgar Publishing, Cheltenham, UK, Northampton, MA, USA, 2006.
[7] P. Daniele, Evolutionary variational inequalities and applications to complex dynamic multi-level models, Transp.
Res. Part E 46 (2010) 855–880.
[8] P. Daniele, S. Giuffrè, General infinite dimensional duality and applications to evolutionary network equilibrium
problems, Optim. Lett. 1 (2007) 227–243.
[9] P. Daniele, S. Giuffrè, G. Idone, A. Maugeri, Infinite dimensional duality and applications, Math. Ann. (2007)
221–239.
[10] P. Daniele, S. Giuffrè, A. Maugeri, Remarks on general infinite dimensional duality with cone and equality con-
straints, Commun. Appl. Anal. 13 (4) (2009) 567–578.
[11] M.B. Donato, A. Maugeri, M. Milasi, C. Vitanza, Duality theory for a dynamic Walrasian pure exchange economy,
Pac. J. Optim. 4 (3) (2008) 537–547.
[12] M.B. Donato, M. Milasi, Lagrangean variables in infinite dimensional spaces for a dynamic economic equilibrium
problem, Nonlinear Anal. 74 (2011) 5048–5056.
[13] M.B. Donato, M. Milasi, C. Vitanza, Quasi-variational inequalities for a dynamic competitive economic equilibrium
problem, J. Inequal. Appl. (2009) 1–17.
[14] M.B. Donato, M. Milasi, C. Vitanza, A new contribution to a dynamic competitive equilibrium problem, Appl.
Math. Lett. 23 (2) (2010) 148–151.
[15] R.B. Holmes, Geometric Functional Analysis, Springer, Berlin, 1975.
[16] G. Idone, A. Maugeri, Generalized constraints qualification and infinite dimensional duality, Taiwanese J. Math. 13
(2009) 1711–1722.
[17] J. Jahn, Introduction to the Theory of Nonlinear Optimization, Springer-Verlag, Berlin, Heidelberg, New York,
1996.
[18] A. Maugeri, F. Raciti, On general infinite dimensional complementarity problems, Optim. Lett. 2 (2008) 71–90.
[19] A. Maugeri, C. Vitanza, Time dependent equilibrium problems, in: Chinchuluun, Migdalas, Pardalos, Pitsoulis
(Eds.), Pareto Optimality, Game Theory and Equilibria, in: Springer Optim. Appl., Springer, 2008, pp. 249–266.
[20] M. Milasi, C. Vitanza, Variational inequality and evolutionary market disequilibria: the case of quantity formulation,
in: F. Giannessi, A. Maugeri (Eds.), Variational Analysis and Applications, Springer, 2005, pp. 681–696.
[21] S.M. Robinson, Stability theory for systems of inequalities in nonlinear programming, part II: Differentiable non-
linear systems, SIAM J. Numer. Anal. 13 (1976) 497–513.
[22] R.T. Rockafellar, Conjugate Duality and Optimization, CBMS-NSF Regional Conf. Ser. in Appl. Math., vol. 16,
Society for Industrial and Applied Mathematics, Philadelphia, 1974.
[23] L. Scrimali, Infinite dimensional duality theory applied to investment strategies in environmental policy, submitted
for publication.
[24] G. Stampacchia, On a problem of numerical analysis connected with the theory of variational inequalities, in:
Sympos. Math., vol. X, Academic Press, 1972, pp. 281–293.
[25] J. Zowe, S. Kurcyusz, Regularity and stability for the mathematical programming problem in Banach spaces, Appl.
Math. Optim. 5 (1979) 49–62.
