CORE DISCUSSION PAPER

2007/76
Gradient methods for minimizing
composite objective function
Yu. Nesterov

September 2007
Abstract

In this paper we analyze several new methods for solving optimization problems with the objective function formed as a sum of two convex terms: one is smooth and given by a black-box oracle, and another is general but simple and its structure is known. Despite the bad properties of the sum, such problems, both in convex and nonconvex cases, can be solved with efficiency typical for the good part of the objective. For convex problems of the above structure, we consider primal and dual variants of the gradient method (which converge as O(1/k)), and an accelerated multistep version with convergence rate O(1/k²), where k is the iteration counter. For all methods, we suggest some efficient line search procedures and show that the additional computational work necessary for estimating the unknown problem class parameters can only multiply the complexity of each iteration by a small constant factor. We also present the results of preliminary computational experiments, which confirm the superiority of the accelerated scheme.

Keywords: Local Optimization, Convex Optimization, Nonsmooth Optimization, Complexity Theory, Black-box Model, Optimal Methods, Structural Optimization, l1-Regularization.

Center for Operations Research and Econometrics (CORE), Catholic University of Louvain (UCL), 34 voie du Roman Pays, 1348 Louvain-la-Neuve, Belgium; e-mail: [email protected].
The research results presented in this paper have been supported by a grant "Action de recherche concertée ARC 04/09-315" from the "Direction de la recherche scientifique - Communauté française de Belgique". The scientific responsibility rests with its author(s).
1 Introduction

Motivation. In recent years, several advances in Convex Optimization have been based on the development of different models for optimization problems. Starting from the theory of self-concordant functions [12], it became more and more clear that a proper use of the problem's structure can lead to very efficient optimization methods, which go far beyond the limitations of the black-box Complexity Theory (see Section 4.1 in [8] for a discussion). As recent examples, we can mention the development of the smoothing technique [9], or the special methods for minimizing a convex objective function up to a certain relative accuracy [10]. In both cases, the proposed optimization schemes strongly employ the particular structure of the corresponding optimization problem.

In this paper, we develop new optimization methods for approximating a global minimum of a composite convex objective function φ(x). Namely, we assume that

    φ(x) = f(x) + Ψ(x),    (1.1)

where f(x) is a differentiable convex function defined by a black-box oracle, and Ψ(x) is a general closed convex function. However, we assume that the function Ψ(x) is simple. This means that we are able to find a closed-form solution for minimizing the sum of Ψ with some simple auxiliary functions. Let us give several examples.
1. Constrained minimization. Let Q be a closed convex set. Define Ψ as the indicator function of the set Q:

    Ψ(x) = { 0, if x ∈ Q;  +∞, otherwise.

Then the unconstrained minimization of the composite function (1.1) is equivalent to minimizing the function f over the set Q. We will see that our assumption on the simplicity of the function Ψ reduces to the ability of finding, in closed form, the Euclidean projection of an arbitrary point onto the set Q.
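Indeed, with this choice of Ψ, the composite gradient mapping (2.6) introduced below takes the form

    T_L(y) = arg min_{x∈Q} { ⟨∇f(y), x − y⟩ + (L/2)‖x − y‖² } = π_Q( y − (1/L)B⁻¹∇f(y) ),

where π_Q denotes the Euclidean projection onto Q in the norm (1.3); that is, it is the usual projected gradient step with stepsize 1/L.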
2. Barrier representation of the feasible set. Assume that the objective function of the convex constrained minimization problem

    find f* = min_{x∈Q} f(x)

is given by a black-box oracle, but the feasible set Q is described by a ν-self-concordant barrier F(x) [12]. Define Ψ(x) = εF(x), φ(x) = f(x) + Ψ(x), and x* = arg min_{x∈Q} f(x). Then, for an arbitrary x̄ ∈ int Q, by the general properties of self-concordant barriers we get

    f(x̄) ≤ f(x*) + ⟨∇f(x̄), x̄ − x*⟩ ≤ f* + ⟨∇φ(x̄), x̄ − x*⟩ + ε⟨∇F(x̄), x* − x̄⟩
         ≤ f* + ‖∇φ(x̄)‖_* · ‖x̄ − x*‖ + εν.

Thus, a point x̄ with a small norm of the gradient of the function φ approximates well the solution of the constrained minimization problem. Note that the objective function φ does not belong to any standard class of convex problems formed by functions with bounded derivatives of a certain degree.
3. Sparse least squares. In many applications, it is necessary to minimize the following objective:

    φ(x) = (1/2)‖Ax − b‖₂² + ‖x‖₁ ≝ f(x) + Ψ(x),    (1.2)

where A is a matrix of corresponding dimension and ‖·‖_k denotes the standard l_k-norm. The presence of the additive l₁-term very often increases the sparsity of the optimal solution (see [1, 16]). This feature was observed a long time ago (see, for example, [2, 5, 14, 15]). Recently, this technique has become popular in signal processing and statistics [6, 17].¹⁾

From a formal point of view, the objective φ(x) in (1.2) is a nonsmooth convex function. Hence, the standard black-box gradient schemes need O(1/ε²) iterations for generating its ε-solution. The structural methods based on the smoothing technique [9] need O(1/ε) iterations. However, we will see that the same problem can be solved in O(1/ε^{1/2}) iterations of a special gradient-type scheme.
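To get a feeling for these bounds: for a target accuracy ε = 10⁻⁴, they suggest on the order of 10⁸, 10⁴ and 10² iterations, respectively, so the gap between the black-box treatment and the structural ones is dramatic.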
Contents. In Section 2 we introduce the composite gradient mapping. Its objective function is formed as a sum of the objective of the usual gradient mapping [7] and the general nonsmooth convex term Ψ. For the particular case (1.2), this construction was proposed in [18]. In this section, we present different properties of this object, which are important for the complexity analysis of optimization methods. In Section 3 we study the behavior of the simplest gradient scheme based on the composite gradient mapping. We prove that in the convex and nonconvex cases we have exactly the same complexity results as in the usual smooth situation (Ψ ≡ 0). For example, in the case of convex f with Lipschitz continuous gradient, the Gradient Method converges as O(1/k), where k is the iteration counter. It is important that our version of the Gradient Method has an adjustable stepsize strategy, which needs on average one additional computation of the function value per iteration.

In the next Section 4, we introduce a machinery of estimate sequences and apply it first for justifying the rate of convergence of the dual variant of the gradient method. Afterwards, we present an accelerated version, which converges as O(1/k²). As compared with the previous variants of accelerated schemes (e.g. [8], [9]), our new scheme can efficiently adjust the initial estimate of the unknown Lipschitz constant. In Section 5 we give examples of applications of the accelerated scheme. We show how to minimize functions with a known strong convexity parameter (Section 5.1), how to find a point with a small residual in the system of the first-order optimality conditions (Section 5.2), and how to approximate the unknown parameter of strong convexity (Section 5.3). In the last Section 6 we present the results of preliminary testing of the proposed optimization methods.
Notation. In what follows, E denotes a finite-dimensional real vector space, and E* the dual space, which is formed by all linear functions on E. The value of a function s ∈ E* at x ∈ E is denoted by ⟨s, x⟩. By fixing a positive definite self-adjoint operator B : E → E*, we can define the following Euclidean norms:

    ‖h‖ = ⟨Bh, h⟩^{1/2}, h ∈ E,
    ‖s‖_* = ⟨s, B⁻¹s⟩^{1/2}, s ∈ E*.    (1.3)

¹⁾ An interested reader can find a good survey of the literature, existing minimization techniques, and new methods in [3] and [4].

In the particular case of the coordinate vector space E = Rⁿ, we have E = E*. Then B is usually taken as the unit matrix, and ⟨s, x⟩ denotes the standard coordinate-wise inner product. Further, for a function f(x), x ∈ E, we denote by ∇f(x) its gradient at x:

    f(x + h) = f(x) + ⟨∇f(x), h⟩ + o(‖h‖), h ∈ E.

Clearly ∇f(x) ∈ E*. For a convex function Ψ we denote by ∂Ψ(x) its subdifferential at x. Finally, the directional derivative of a function φ is defined in the usual way:

    Dφ(y)[u] = lim_{α↓0} (1/α)·[φ(y + αu) − φ(y)].
2 Composite gradient mapping

In this paper, we consider the problem of approximating a local minimum of the function

    φ(x) ≝ f(x) + Ψ(x)    (2.1)

over a convex set Q, where the function f is differentiable, and the function Ψ is closed and convex on Q. For characterizing a solution to our problem, define the cone of feasible directions and the corresponding dual cone, which is called normal:

    T(y) = { u = τ(x − y) : x ∈ Q, τ ≥ 0 } ⊆ E,
    N(y) = { s : ⟨s, x − y⟩ ≥ 0 ∀x ∈ Q } ⊆ E*,  y ∈ Q.

Then the first-order necessary optimality conditions at a point of local minimum x* can be written as follows:

    φ′ ≝ ∇f(x*) + ξ* ∈ N(x*),    (2.2)

where ξ* ∈ ∂Ψ(x*). In other words,

    ⟨φ′, u⟩ ≥ 0 ∀u ∈ T(x*).    (2.3)

Since Ψ is convex, the latter condition is equivalent to the following:

    Dφ(x*)[u] ≥ 0 ∀u ∈ T(x*).    (2.4)

Note that in the case of convex f, any of the conditions (2.2)-(2.4) is sufficient for the point x* to be a point of global minimum of the function φ over Q.

The last variant of the first-order optimality conditions is convenient for defining an approximate solution to our problem.

Definition 1 The point x̄ ∈ Q satisfies the first-order optimality conditions of a local minimum of the function φ over the set Q with accuracy ε ≥ 0 if

    Dφ(x̄)[u] ≥ −ε ∀u ∈ T(x̄), ‖u‖ = 1.    (2.5)
Note that in the case T(x̄) = E with 0 ∉ ∇f(x̄) + ∂Ψ(x̄), this condition reduces to the following inequality:

    min_{‖u‖=1} Dφ(x̄)[u] = min_{‖u‖=1} max_{ξ∈∂Ψ(x̄)} ⟨∇f(x̄) + ξ, u⟩
        = min_{‖u‖≤1} max_{ξ∈∂Ψ(x̄)} ⟨∇f(x̄) + ξ, u⟩ = max_{ξ∈∂Ψ(x̄)} min_{‖u‖≤1} ⟨∇f(x̄) + ξ, u⟩
        = −min_{ξ∈∂Ψ(x̄)} ‖∇f(x̄) + ξ‖_*,

so that (2.5) becomes min_{ξ∈∂Ψ(x̄)} ‖∇f(x̄) + ξ‖_* ≤ ε.
For finding a point x̄ satisfying condition (2.5), we are going to use the composite gradient mapping. Namely, at any y ∈ Q define

    m_L(y; x) = f(y) + ⟨∇f(y), x − y⟩ + (L/2)‖x − y‖² + Ψ(x),
    T_L(y) = arg min_{x∈Q} m_L(y; x),    (2.6)

where L is a positive constant.²⁾ Then we can define a constrained analogue of the gradient direction of a smooth function, the vector

    g_L(y) = L·B(y − T_L(y)) ∈ E*.    (2.7)

(In case of an ambiguity with the objective function, we use the notation g_L(y)[φ].) It is easy to see that for Q ≡ E and Ψ ≡ 0 we get g_L(y) = ∇φ(y) ≡ ∇f(y) for any L > 0. Our assumption on the simplicity of the function Ψ means exactly the feasibility of the operation (2.6).
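For instance, when Ψ(x) = ‖x‖₁, Q = Rⁿ and B is the identity, the operation (2.6) separates over coordinates and is solved by component-wise soft-thresholding. A minimal sketch of this step (our own illustration in NumPy; the function names are ours):

```python
import numpy as np

def composite_step(y, grad_f_y, L):
    """T_L(y) of (2.6) for Psi(x) = ||x||_1, Q = R^n, B = I.

    Minimizing <grad_f(y), x - y> + (L/2)||x - y||^2 + ||x||_1
    separates over coordinates; each one-dimensional problem is solved
    by soft-thresholding the forward gradient point with threshold 1/L.
    """
    z = y - grad_f_y / L                      # forward gradient step
    return np.sign(z) * np.maximum(np.abs(z) - 1.0 / L, 0.0)

# The composite gradient direction (2.7) is then g_L(y) = L * (y - T_L(y)).
```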
Let us mention the main properties of the composite gradient mapping. Almost all of them follow from the first-order optimality condition for the problem (2.6):

    ⟨∇f(y) + L·B(T_L(y) − y) + ξ_L(y), x − T_L(y)⟩ ≥ 0, ∀x ∈ Q,    (2.8)

where ξ_L(y) ∈ ∂Ψ(T_L(y)). In what follows, we denote

    φ′(T_L(y)) = ∇f(T_L(y)) + ξ_L(y) ∈ ∂φ(T_L(y)).    (2.9)

We are going to show that the above subgradient inherits all important properties of the gradient of a smooth convex function.

From now on, we assume that the first part of the objective function (2.1) has Lipschitz-continuous gradient:

    ‖∇f(x) − ∇f(y)‖_* ≤ L_f·‖x − y‖, ∀x, y ∈ Q.    (2.10)

From (2.10) and the convexity of Q, one can easily derive the following useful inequality (see, for example, [13]):

    |f(x) − f(y) − ⟨∇f(y), x − y⟩| ≤ (L_f/2)‖x − y‖², x, y ∈ Q.    (2.11)

First of all, let us estimate the local variation of the function φ. Denote

    S_L(y) = ‖∇f(T_L(y)) − ∇f(y)‖_* / ‖T_L(y) − y‖ ≤ L_f.

²⁾ Recall that in the usual gradient mapping [7] we have Ψ(·) ≡ 0. Our modification is inspired by [18].
Theorem 1 At any y ∈ Q,

    φ(y) − φ(T_L(y)) ≥ ((2L − L_f)/(2L²))·‖g_L(y)‖_*²,    (2.12)

    ⟨φ′(T_L(y)), y − T_L(y)⟩ ≥ ((L − L_f)/L²)·‖g_L(y)‖_*².    (2.13)

Moreover, for any x ∈ Q, we have

    ⟨φ′(T_L(y)), x − T_L(y)⟩ ≥ −(1 + S_L(y)/L)·‖g_L(y)‖_*·‖T_L(y) − x‖
        ≥ −(1 + L_f/L)·‖g_L(y)‖_*·‖T_L(y) − x‖.    (2.14)
Proof:
For the sake of notation, denote T = T_L(y) and ξ = ξ_L(y). Then

    φ(T) ≤ f(y) + ⟨∇f(y), T − y⟩ + (L_f/2)‖T − y‖² + Ψ(T)    [by (2.10)]
         ≤ f(y) + ⟨L·B(T − y) + ξ, y − T⟩ + (L_f/2)‖T − y‖² + Ψ(T)    [by (2.8) with x = y]
         = f(y) − (L − L_f/2)‖T − y‖² + Ψ(T) + ⟨ξ, y − T⟩
         ≤ φ(y) − ((2L − L_f)/2)‖T − y‖²,

where in the last line we used ξ ∈ ∂Ψ(T), so that Ψ(T) + ⟨ξ, y − T⟩ ≤ Ψ(y). Taking into account the definition (2.7), which gives ‖T − y‖ = ‖g_L(y)‖_*/L, we get (2.12). Further,

    ⟨φ′(T), y − T⟩ = ⟨∇f(T) + ξ, y − T⟩ = ⟨∇f(y) + ξ, y − T⟩ − ⟨∇f(y) − ∇f(T), y − T⟩
        ≥ ⟨L·B(y − T), y − T⟩ − ⟨∇f(y) − ∇f(T), y − T⟩    [by (2.8) with x = y]
        ≥ (L − L_f)‖T − y‖²    [by (2.10)]
        = ((L − L_f)/L²)‖g_L(y)‖_*².    [by (2.7)]

Thus, we get (2.13). Finally,

    ⟨φ′(T), T − x⟩ = ⟨∇f(T) + ξ, T − x⟩ ≤ ⟨∇f(T), T − x⟩ + ⟨∇f(y) + L·B(T − y), x − T⟩    [by (2.8)]
        = ⟨∇f(T) − ∇f(y), T − x⟩ + ⟨g_L(y), T − x⟩
        ≤ (1 + S_L(y)/L)·‖g_L(y)‖_*·‖T − x‖,    [by (2.7)]

and (2.14) follows. □
Corollary 1 For any y ∈ Q, and any u ∈ T(T_L(y)), ‖u‖ = 1, we have

    Dφ(T_L(y))[u] ≥ −(1 + L_f/L)·‖g_L(y)‖_*.    (2.15)

In this respect, it is interesting to investigate the dependence of ‖g_L(y)‖_* on L.

Lemma 1 The norm of the gradient direction, ‖g_L(y)‖_*, is increasing in L, and the norm of the step, ‖T_L(y) − y‖, is decreasing in L.
Proof:
Indeed, consider the function

    ω(τ) = min_{x∈Q} { f(y) + ⟨∇f(y), x − y⟩ + (1/(2τ))‖x − y‖² + Ψ(x) }.

The objective function of this minimization problem is jointly convex in x and τ. Therefore, ω(τ) is convex in τ. Since the minimum of this problem is attained at a single point, ω(τ) is differentiable and

    ω′(τ) = −(1/2)·‖(1/τ)[T_{1/τ}(y) − y]‖² = −(1/2)·‖g_{1/τ}(y)‖_*².

Since ω(τ) is convex, ω′(τ) is an increasing function of τ. Hence, ‖g_{1/τ}(y)‖_* is a decreasing function of τ, that is, ‖g_L(y)‖_* is increasing in L.

The second statement follows from the concavity of the function

    ω̂(L) = min_{x∈Q} { f(y) + ⟨∇f(y), x − y⟩ + (L/2)‖x − y‖² + Ψ(x) },

whose derivative ω̂′(L) = (1/2)‖T_L(y) − y‖² must therefore be decreasing in L. □
Now let us look at the output of the composite gradient mapping from a global perspective.

Theorem 2 For any y ∈ Q we have

    m_L(y; T_L(y)) ≤ φ(y) − (1/(2L))·‖g_L(y)‖_*²,    (2.16)

    m_L(y; T_L(y)) ≤ min_{x∈Q} { φ(x) + ((L + L_f)/2)·‖x − y‖² }.    (2.17)

If the function f is convex, then

    m_L(y; T_L(y)) ≤ min_{x∈Q} { φ(x) + (L/2)·‖x − y‖² }.    (2.18)
Proof:
Note that the function m_L(y; x) is strongly convex in x with convexity parameter L. Hence,

    φ(y) − m_L(y; T_L(y)) = m_L(y; y) − m_L(y; T_L(y)) ≥ (L/2)‖y − T_L(y)‖² = (1/(2L))‖g_L(y)‖_*².

Further, if f is convex, then

    m_L(y; T_L(y)) = min_{x∈Q} { f(y) + ⟨∇f(y), x − y⟩ + (L/2)‖x − y‖² + Ψ(x) }
        ≤ min_{x∈Q} { f(x) + Ψ(x) + (L/2)‖x − y‖² } = min_{x∈Q} { φ(x) + (L/2)‖x − y‖² }.

For nonconvex f, we can plug into the same reasoning the following consequence of (2.11):

    f(y) + ⟨∇f(y), x − y⟩ ≤ f(x) + (L_f/2)‖x − y‖². □
Remark 1 In view of (2.10), for L ≥ L_f we have

    φ(T_L(y)) ≤ m_L(y; T_L(y)).    (2.19)

Hence, in this case inequality (2.18) guarantees

    φ(T_L(y)) ≤ min_{x∈Q} { φ(x) + (L/2)‖x − y‖² }.    (2.20)
Finally, let us prove a useful inequality for strongly convex Ψ.

Lemma 2 Let the function Ψ be strongly convex with convexity parameter μ_Ψ > 0. Then for any y ∈ Q we have

    ‖T_L(y) − x*‖ ≤ (1/μ_Ψ)·(1 + S_L(y)/L)·‖g_L(y)‖_* ≤ (1/μ_Ψ)·(1 + L_f/L)·‖g_L(y)‖_*,    (2.21)

where x* is the unique minimum of φ on Q.

Proof:
Indeed, in view of inequality (2.14), we have:

    (1 + L_f/L)·‖g_L(y)‖_*·‖T_L(y) − x*‖ ≥ (1 + S_L(y)/L)·‖g_L(y)‖_*·‖T_L(y) − x*‖
        ≥ ⟨φ′(T_L(y)), T_L(y) − x*⟩ ≥ μ_Ψ·‖T_L(y) − x*‖²,

and (2.21) follows. □

Now we are ready to analyze different optimization schemes based on the composite gradient mapping. In the next section, we describe the simplest one.
3 Gradient method

Define first the gradient iteration with the simplest backtracking strategy for the line search parameter (we call its termination condition the full relaxation).

    Gradient Iteration G(x, M)

    Set: L := M.
    Repeat: T := T_L(x);
            if φ(T) > m_L(x; T) then L := L·γ_u.
    Until: φ(T) ≤ m_L(x; T).
    Output: G(x, M).T = T, G(x, M).L = L, G(x, M).S = S_L(x).    (3.1)

If there exists an ambiguity in the objective function, we use the notation G_φ(x, M).

For running the gradient scheme, we need to choose an initial optimistic estimate L_0 for the Lipschitz constant L_f:

    0 < L_0 ≤ L_f,    (3.2)

and two adjustment parameters γ_u > 1 and γ_d ≥ 1. Let y_0 ∈ Q be our starting point. For k ≥ 0, consider the following iterative process (a runnable sketch of (3.1) and (3.3) is given after (3.5) below).

    Gradient Method GM(y_0, L_0)

    y_{k+1} = G(y_k, L_k).T,
    M_k = G(y_k, L_k).L,
    L_{k+1} = max{ L_0, M_k/γ_d }.    (3.3)
Thus, y_{k+1} = T_{M_k}(y_k). Since the function f satisfies inequality (2.11), in the loop (3.1) the value L can keep increasing only if L < L_f. Taking into account condition (3.2), we obtain the following bounds:

    L_0 ≤ L_k ≤ M_k ≤ γ_u·L_f.    (3.4)

Moreover, if γ_d ≥ γ_u, then

    L_k ≤ L_f, k ≥ 0.    (3.5)
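The following sketch implements the iteration (3.1) and the scheme (3.3). It is our own illustration, assuming the user supplies f, ∇f, Ψ and a prox-type routine t_step computing T_L(x) of (2.6) (for example, the soft-thresholding function from Section 2 when Ψ = ‖·‖₁ and B = I):

```python
import numpy as np

def gradient_iteration(x, M, f, grad_f, psi, t_step, gamma_u=2.0):
    """Gradient Iteration G(x, M) of (3.1): backtrack until full relaxation."""
    L = M
    fx, gx = f(x), grad_f(x)
    while True:
        T = t_step(x, gx, L)                        # T_L(x) from (2.6)
        m_L = fx + gx @ (T - x) + 0.5 * L * np.dot(T - x, T - x) + psi(T)
        if f(T) + psi(T) <= m_L:                    # full relaxation reached
            return T, L
        L *= gamma_u

def gradient_method(y0, L0, f, grad_f, psi, t_step,
                    gamma_u=2.0, gamma_d=2.0, n_iters=100):
    """Gradient Method GM(y0, L0) of (3.3) with the adjustable stepsize."""
    y, L = y0.copy(), L0
    for _ in range(n_iters):
        y, M = gradient_iteration(y, L, f, grad_f, psi, t_step, gamma_u)
        L = max(L0, M / gamma_d)                    # optimistic decrease of L
    return y
```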
Note that in (3.1) there is no explicit bound on the number of repetitions of the loop. However, it is easy to see that the total number of calls of the oracle, N_k, after k iterations of (3.3) cannot be too big.

Lemma 3 In the method (3.3), for any k ≥ 0 we have

    N_k ≤ (1 + ln γ_d/ln γ_u)·(k + 1) + (1/ln γ_u)·( ln( γ_u·L_f/(γ_d·L_0) ) )₊.    (3.6)

Proof:
Denote by n_i ≥ 1 the number of calls of the oracle at iteration i ≥ 0. Then

    L_{i+1} ≥ (1/γ_d)·L_i·γ_u^{n_i − 1}.

Thus,

    n_i ≤ 1 + ln γ_d/ln γ_u + (1/ln γ_u)·ln(L_{i+1}/L_i).

Hence, we can estimate

    N_k = Σ_{i=0}^{k} n_i ≤ (1 + ln γ_d/ln γ_u)·(k + 1) + (1/ln γ_u)·ln(L_{k+1}/L_0).

It remains to note that, by (3.4), L_{k+1} ≤ max{ L_0, (γ_u/γ_d)·L_f }. □

A reasonable choice of the adjustment parameters is as follows:

    γ_u = γ_d = 2  ⇒  N_k ≤ 2(k + 1) + log₂(L_f/L_0),  L_k ≤ L_f,    (3.7)

by (3.6) and (3.5). Thus, the performance of the Gradient Method (3.3) is well described by the estimates for the iteration counter. Therefore, in the remaining part of this section we will focus on estimating the rate of convergence of this method in different situations.
Let us start from the general nonconvex case. Denote

    Δ_k = min_{0≤i≤k} (1/(2M_i))·‖g_{M_i}(y_i)‖_*²,
    i_k = 1 + arg min_{0≤i≤k} (1/(2M_i))·‖g_{M_i}(y_i)‖_*².

Theorem 3 Let the function φ be bounded below on Q by some constant φ*. Then

    Δ_k ≤ (φ(y_0) − φ*)/(k + 1).    (3.8)

Moreover, for any u ∈ T(y_{i_k}) with ‖u‖ = 1 we have

    Dφ(y_{i_k})[u] ≥ −((1 + γ_u)·L_f/L_0^{1/2})·( 2(φ(y_0) − φ*)/(k + 1) )^{1/2}.    (3.9)
Proof:
Indeed, in view of the termination criterion in (3.1), we have

    φ(y_i) − φ(y_{i+1}) ≥ φ(y_i) − m_{M_i}(y_i; T_{M_i}(y_i)) ≥ (1/(2M_i))·‖g_{M_i}(y_i)‖_*²,    [by (2.16)]

Summing up these inequalities for i = 0, ..., k, we obtain (3.8).

Denote j_k = i_k − 1. Since y_{i_k} = T_{M_{j_k}}(y_{j_k}), for any u ∈ T(y_{i_k}) with ‖u‖ = 1 we have

    Dφ(y_{i_k})[u] ≥ −(1 + L_f/M_{j_k})·‖g_{M_{j_k}}(y_{j_k})‖_*    [by (2.15)]
        = −(1 + L_f/M_{j_k})·( 2M_{j_k}·Δ_k )^{1/2}
        ≥ −((M_{j_k} + L_f)/M_{j_k}^{1/2})·( 2(φ(y_0) − φ*)/(k + 1) )^{1/2}    [by (3.8)]
        ≥ −((1 + γ_u)·L_f/L_0^{1/2})·( 2(φ(y_0) − φ*)/(k + 1) )^{1/2}.    [by (3.4)] □
Let us now describe the behavior of the Gradient Method (3.3) in the convex case.

Theorem 4 Let the function f be convex on Q. Assume that it attains a minimum on Q at a point x* and that the level sets of φ are bounded:

    ‖y − x*‖ ≤ R ∀y ∈ Q : φ(y) ≤ φ(y_0).    (3.10)

If φ(y_0) − φ(x*) ≥ γ_u·L_f·R², then φ(y_1) − φ(x*) ≤ (γ_u·L_f·R²)/2. Otherwise, for any k ≥ 0 we have

    φ(y_k) − φ(x*) ≤ (2γ_u·L_f·R²)/(k + 2).    (3.11)

Moreover, for any u ∈ T(y_{i_k}) with ‖u‖ = 1 we have

    Dφ(y_{i_k})[u] ≥ −(4(1 + γ_u)·L_f·R/(k + 3))·(L_f/L_0)^{1/2}.    (3.12)
Proof:
Since φ(y_{k+1}) ≤ φ(y_k) for all k ≥ 0, the bound ‖y_k − x*‖ ≤ R is valid for all generated points. Consider

    y_k(α) = α·x* + (1 − α)·y_k ∈ Q, α ∈ [0, 1].

Then,

    φ(y_{k+1}) ≤ m_{M_k}(y_k; T_{M_k}(y_k)) ≤ min_{y∈Q} { φ(y) + (M_k/2)‖y − y_k‖² }    [by (2.18)]
        ≤ min_{0≤α≤1} { φ(α·x* + (1 − α)·y_k) + (M_k·α²/2)·‖y_k − x*‖² }    [taking y = y_k(α)]
        ≤ min_{0≤α≤1} { φ(y_k) − α·(φ(y_k) − φ(x*)) + (γ_u·L_f·R²/2)·α² }.    [by (3.4)]

If φ(y_0) − φ(x*) ≥ γ_u·L_f·R², then the optimal solution of the latter optimization problem is α = 1 and we get

    φ(y_1) − φ(x*) ≤ (γ_u·L_f·R²)/2.

Otherwise, the optimal solution is

    α = (φ(y_k) − φ(x*))/(γ_u·L_f·R²) ≤ (φ(y_0) − φ(x*))/(γ_u·L_f·R²) ≤ 1,

and we obtain

    φ(y_{k+1}) ≤ φ(y_k) − [φ(y_k) − φ(x*)]²/(2γ_u·L_f·R²).    (3.13)

From this inequality, denoting λ_k = 1/(φ(y_k) − φ(x*)), we get

    λ_{k+1} ≥ λ_k + λ_{k+1}/(2λ_k·γ_u·L_f·R²) ≥ λ_k + 1/(2γ_u·L_f·R²).

Hence, for k ≥ 0 we have

    λ_k ≥ 1/(φ(y_0) − φ(x*)) + k/(2γ_u·L_f·R²) ≥ (k + 2)/(2γ_u·L_f·R²).

Further, let us fix an integer m, 0 < m < k. Since

    φ(y_i) − φ(y_{i+1}) ≥ (1/(2M_i))·‖g_{M_i}(y_i)‖_*², i = 0, ..., k,

we have

    (k − m + 1)·Δ_k ≤ Σ_{i=m}^{k} (1/(2M_i))·‖g_{M_i}(y_i)‖_*² ≤ φ(y_m) − φ(y_{k+1})
        ≤ φ(y_m) − φ(x*) ≤ (2γ_u·L_f·R²)/(m + 2).    [by (3.11)]

Denote j_k = i_k − 1. Then, for any u ∈ T(y_{i_k}) with ‖u‖ = 1, we have

    Dφ(y_{i_k})[u] ≥ −(1 + L_f/M_{j_k})·‖g_{M_{j_k}}(y_{j_k})‖_*    [by (2.15)]
        = −(1 + L_f/M_{j_k})·( 2M_{j_k}·Δ_k )^{1/2}
        ≥ −2·((M_{j_k} + L_f)/M_{j_k}^{1/2})·( γ_u·L_f·R²/((m + 2)(k + 1 − m)) )^{1/2}    [by (3.11)]
        ≥ −2(1 + γ_u)·L_f·R·(L_f/L_0)^{1/2} / ((m + 2)(k + 1 − m))^{1/2}.    [by (3.4)]

Choosing m = ⌊k/2⌋, we get (m + 2)(k + 1 − m) ≥ ((k + 3)/2)², and (3.12) follows. □
Theorem 5 Let the function φ be strongly convex on Q with convexity parameter μ_φ. If μ_φ ≥ 2γ_u·L_f, then for any k ≥ 0 we have

    φ(y_k) − φ(x*) ≤ (γ_u·L_f/μ_φ)^k·(φ(y_0) − φ(x*)) ≤ (1/2^k)·(φ(y_0) − φ(x*)).    (3.14)

Otherwise,

    φ(y_k) − φ(x*) ≤ ( 1 − μ_φ/(4γ_u·L_f) )^k·(φ(y_0) − φ(x*)).    (3.15)
Proof:
Since φ is strongly convex, for any k ≥ 0 we have

    φ(y_k) − φ(x*) ≥ (μ_φ/2)·‖y_k − x*‖².    (3.16)

Denote y_k(α) = α·x* + (1 − α)·y_k ∈ Q, α ∈ [0, 1]. Then,

    φ(y_{k+1}) ≤ min_{0≤α≤1} { φ(α·x* + (1 − α)·y_k) + (M_k·α²/2)·‖y_k − x*‖² }    [by (2.18)]
        ≤ min_{0≤α≤1} { φ(y_k) − α·(φ(y_k) − φ(x*)) + (γ_u·L_f·α²/2)·‖y_k − x*‖² }    [by (3.4)]
        ≤ min_{0≤α≤1} { φ(y_k) − α·(1 − α·γ_u·L_f/μ_φ)·(φ(y_k) − φ(x*)) }.    [by (3.16)]

The minimum of the last expression is achieved for α* = min{ 1, μ_φ/(2γ_u·L_f) }. Hence, if μ_φ/(2γ_u·L_f) ≥ 1, then α* = 1 and we get

    φ(y_{k+1}) − φ(x*) ≤ (γ_u·L_f/μ_φ)·(φ(y_k) − φ(x*)) ≤ (1/2)·(φ(y_k) − φ(x*)).

If μ_φ/(2γ_u·L_f) ≤ 1, then α* = μ_φ/(2γ_u·L_f) and

    φ(y_{k+1}) − φ(x*) ≤ ( 1 − μ_φ/(4γ_u·L_f) )·(φ(y_k) − φ(x*)). □
Remark 2 1) In Theorem 5, the condition number L_f/μ_φ can be smaller than one.

2) For strongly convex φ, the bounds on the directional derivatives can be obtained by combining the inequalities (3.14), (3.15) with the estimate

    φ(y_k) − φ(x*) ≥ (1/(2L_f))·‖g_{L_f}(y_k)‖_*²    [by (2.12) with L = L_f]

and inequality (2.15). Thus, inequality (3.14) results in the bound

    Dφ(y_{k+1})[u] ≥ −2·(γ_u·L_f/μ_φ)^{k/2}·( 2L_f·(φ(y_0) − φ*) )^{1/2},    (3.17)

and inequality (3.15) leads to the bound

    Dφ(y_{k+1})[u] ≥ −2·( 1 − μ_φ/(4γ_u·L_f) )^{k/2}·( 2L_f·(φ(y_0) − φ*) )^{1/2},    (3.18)

which are valid for all u ∈ T(y_{k+1}) with ‖u‖ = 1.
4 Accelerated scheme

In the previous section, we have seen that, for convex f, the gradient method (3.3) converges as O(1/k). However, it is well known that on convex problems the usual gradient scheme can be accelerated (see, e.g., Chapter 2 in [8]). Let us show that the same acceleration can be achieved for the composite objective function.

Consider the problem

    min_{x∈E} [ φ(x) = f(x) + Ψ(x) ],    (4.1)

where the function f is convex and satisfies (2.10), and the function Ψ is closed and strongly convex on E with convexity parameter μ_Ψ ≥ 0. We assume this parameter to be known. The case μ_Ψ = 0 corresponds to convex Ψ. Denote by x* the optimal solution to (4.1).

In problem (4.1), we allow dom Ψ ≠ E. Therefore, the formulation (4.1) also covers constrained problem instances. Note that for (4.1), the first-order optimality conditions (2.8) defining the composite gradient mapping can be written in a simpler form:

    T_L(y) ∈ dom Ψ,
    ∇f(y) + ξ_L(y) = L·B(y − T_L(y)) ≡ g_L(y),    (4.2)

where ξ_L(y) ∈ ∂Ψ(T_L(y)).

For justifying the rate of convergence of different schemes as applied to (4.1), we will use the machinery of estimate functions in its newer variant [11]. Taking into account the special form of the objective in (4.1), we recursively update the following sequences:

- a minimizing sequence {x_k}_{k=0}^∞;
- a sequence of increasing scaling coefficients {A_k}_{k=0}^∞:

    A_0 = 0, A_k ≝ A_{k−1} + a_k, k ≥ 1;

- a sequence of estimate functions

    ψ_k(x) = l_k(x) + A_k·Ψ(x) + (1/2)‖x − x_0‖², k ≥ 0,    (4.3)

where x_0 ∈ dom Ψ is our starting point, and the l_k(x) are linear functions in x ∈ E.

However, as compared with [11], we add the possibility to update the estimates of the Lipschitz constant L_f, using the initial guess L_0 satisfying (3.2), and two adjustment parameters γ_u > 1 and γ_d ≥ 1.
For the above objects, we maintain recursively the following relations:

    R¹_k : A_k·φ(x_k) ≤ ψ*_k ≝ min_{x} ψ_k(x),
    R²_k : ψ_k(x) ≤ A_k·φ(x) + (1/2)‖x − x_0‖², ∀x ∈ E,
    k ≥ 0.    (4.4)

These relations clearly justify the following rate of convergence of the minimizing sequence:

    φ(x_k) − φ(x*) ≤ ‖x* − x_0‖²/(2A_k), k ≥ 1.    (4.5)

Denote v_k = arg min_{x∈E} ψ_k(x). Since the convexity parameter of ψ_k is at least 1, for any x ∈ E we have

    A_k·φ(x_k) + (1/2)‖x − v_k‖² ≤ ψ*_k + (1/2)‖x − v_k‖² ≤ ψ_k(x) ≤ A_k·φ(x) + (1/2)‖x − x_0‖²,

where the first and last inequalities are R¹_k and R²_k, respectively. Hence, taking x = x*, we get two useful consequences of (4.4):

    ‖x* − v_k‖ ≤ ‖x* − x_0‖, ‖v_k − x_0‖ ≤ 2‖x* − x_0‖, k ≥ 1.    (4.6)
Note that the relations (4.4) can be used for justifying the rate of convergence of a dual variant of the gradient method (3.3). Indeed, for v_0 ∈ dom Ψ define ψ_0(x) = (1/2)‖x − v_0‖², and choose L_0 satisfying condition (3.2).

    Dual Gradient Method DG(v_0, L_0), k ≥ 0:

    y_k = G(v_k, L_k).T, M_k = G(v_k, L_k).L,
    L_{k+1} = max{ L_0, M_k/γ_d }, a_{k+1} = 1/M_k,
    ψ_{k+1}(x) = ψ_k(x) + (1/M_k)·[ f(v_k) + ⟨∇f(v_k), x − v_k⟩ + Ψ(x) ].    (4.7)

Since Ψ is simple, the points v_k are easily computable.
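For the sparse least squares objective (1.2), for example, ψ_k keeps the form (accumulated linear term) + A_k‖x‖₁ + (1/2)‖x − v_0‖₂², so v_k is again available by soft-thresholding. A minimal sketch of (4.7) under these assumptions (our own illustration, reusing gradient_iteration from the sketch in Section 3; names are ours):

```python
import numpy as np

def dual_gradient_method(v0, L0, f, grad_f, psi, t_step,
                         gamma_u=2.0, gamma_d=2.0, n_iters=100):
    """Dual Gradient Method (4.7) for phi = f + ||.||_1, B = I.

    psi_k(x) = <s_k, x> + const + A_k*||x||_1 + 0.5*||x - v0||^2, so its
    minimizer v_k is obtained by soft-thresholding with threshold A_k.
    """
    v, L = v0.copy(), L0
    s = np.zeros_like(v0)            # accumulated linear part of psi_k
    A = 0.0                          # scaling coefficient A_k
    best_x, best_phi = v0.copy(), f(v0) + psi(v0)
    for _ in range(n_iters):
        y, M = gradient_iteration(v, L, f, grad_f, psi, t_step, gamma_u)
        if f(y) + psi(y) < best_phi:                 # x_k = best point so far
            best_x, best_phi = y, f(y) + psi(y)
        a = 1.0 / M
        s += a * grad_f(v)                           # update linear term
        A += a
        z = v0 - s                                   # unconstrained minimizer
        v = np.sign(z) * np.maximum(np.abs(z) - A, 0.0)   # v_{k+1}
        L = max(L0, M / gamma_d)
    return best_x
```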
Note that the relations R¹_0 and R²_k, k ≥ 0, are trivial. The relations R¹_k can be justified by induction. Define x_0 = y_0, φ*_k = min_{0≤i≤k−1} φ(y_i), and x_k : φ(x_k) = φ*_k for k ≥ 1. Then

    ψ*_{k+1} = min_x { ψ_k(x) + (1/M_k)·[ f(v_k) + ⟨∇f(v_k), x − v_k⟩ + Ψ(x) ] }
        ≥ A_k·φ*_k + min_x { (1/2)‖x − v_k‖² + (1/M_k)·[ f(v_k) + ⟨∇f(v_k), x − v_k⟩ + Ψ(x) ] }    [by R¹_k]
        = A_k·φ*_k + a_{k+1}·m_{M_k}(v_k; y_k)    [by (2.6)]
        ≥ A_k·φ*_k + a_{k+1}·φ(y_k) ≥ A_{k+1}·φ*_{k+1}.    [by (3.1)]

Thus, the relations R¹_k are valid for all k ≥ 0. Since the values M_k satisfy the bounds (3.4), for the method (4.7) we obtain the following rate of convergence:

    φ(x_k) − φ(x*) ≤ (γ_u·L_f/(2k))·‖x* − v_0‖², k ≥ 1.    (4.8)

Note that the constant in the right-hand side of this inequality is four times smaller than the constant in (3.11). However, each iteration of the dual method is two times more expensive than an iteration of the primal version (3.3).
However, the method (4.7) does not implement the best way of using the machinery of estimate functions. Let us look at an accelerated version of (4.7). As parameters, it has the starting point x_0 ∈ dom Ψ, a lower estimate L_0 > 0 for the Lipschitz constant L_f, and a lower estimate μ ∈ [0, μ_Ψ] for the convexity parameter of the function Ψ.

    Accelerated Method A(x_0, L_0, μ)

    Initial settings: ψ_0(x) = (1/2)‖x − x_0‖², A_0 = 0.

    Iteration k ≥ 0:

    Set: L := L_k.
    Repeat: Find a from the quadratic equation a²/(A_k + a) = 2(1 + μA_k)/L.    (*)
            Set y = (A_k·x_k + a·v_k)/(A_k + a), and compute T_L(y).
            If ⟨φ′(T_L(y)), y − T_L(y)⟩ < (1/L)·‖φ′(T_L(y))‖_*², then L := L·γ_u.
    Until: ⟨φ′(T_L(y)), y − T_L(y)⟩ ≥ (1/L)·‖φ′(T_L(y))‖_*².    (**)
    Define: y_k := y, M_k := L, a_{k+1} := a,
            L_{k+1} := M_k/γ_d, x_{k+1} := T_{M_k}(y_k),
            ψ_{k+1}(x) := ψ_k(x) + a_{k+1}·[ f(x_{k+1}) + ⟨∇f(x_{k+1}), x − x_{k+1}⟩ + Ψ(x) ].    (4.9)

As compared with the Gradient Iteration (3.1), we use the damped relaxation condition (**) as the stopping criterion of the internal cycle of (4.9).
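A minimal sketch of (4.9) for Ψ = ‖·‖₁ with B = I (so one takes μ = 0) follows; it is our own illustration, reusing the t_step interface from the previous sketches, and it should be read as a schematic rendering of the method rather than a reference implementation:

```python
import numpy as np

def accelerated_method(x0, L0, f, grad_f, psi, t_step,
                       mu=0.0, gamma_u=2.0, gamma_d=2.0, n_iters=100):
    """Accelerated method (4.9) for phi = f + ||.||_1, B = I (sketch)."""
    x, v, A, L = x0.copy(), x0.copy(), 0.0, L0
    s = np.zeros_like(x0)                 # accumulated linear part of psi_k
    for _ in range(n_iters):
        while True:
            # Positive root of the quadratic equation (*).
            c = 1.0 + mu * A
            a = (c + np.sqrt(c * c + 2.0 * L * c * A)) / L
            y = (A * x + a * v) / (A + a)
            gy = grad_f(y)
            T = t_step(y, gy, L)
            # phi'(T) = L*(y - T) + grad_f(T) - grad_f(y), cf. (4.10) below.
            dphi = L * (y - T) + grad_f(T) - gy
            if dphi @ (y - T) >= (dphi @ dphi) / L:   # condition (**)
                break
            L *= gamma_u
        x, M = T, L                       # x_{k+1} = T_{M_k}(y_k)
        s += a * grad_f(x)                # update psi: linear term and A_k
        A += a
        z = x0 - s
        v = np.sign(z) * np.maximum(np.abs(z) - A, 0.0)  # v_{k+1} = argmin psi
        L = M / gamma_d                   # L_{k+1} as in (4.9)
    return x
```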
Lemma 4 Condition (**) in (4.9) is satisfied for any L ≥ L_f.

Proof:
Denote T = T_L(y). Multiplying the representation

    φ′(T) = ∇f(T) + ξ_L(y) = L·B(y − T) + ∇f(T) − ∇f(y)    (4.10)

(which follows from (4.2)) by the vector y − T, we obtain

    ⟨φ′(T), y − T⟩ = L‖y − T‖² − ⟨∇f(y) − ∇f(T), y − T⟩
        = (1/L)·[ ‖φ′(T)‖_*² + 2L⟨∇f(y) − ∇f(T), y − T⟩ − ‖∇f(y) − ∇f(T)‖_*² ]
            − ⟨∇f(y) − ∇f(T), y − T⟩    [by (4.10)]
        = (1/L)·‖φ′(T)‖_*² + ⟨∇f(y) − ∇f(T), y − T⟩ − (1/L)·‖∇f(y) − ∇f(T)‖_*².

Since f is convex with Lipschitz continuous gradient (2.10), we have ⟨∇f(y) − ∇f(T), y − T⟩ ≥ (1/L_f)·‖∇f(y) − ∇f(T)‖_*². Hence, for L ≥ L_f condition (**) is satisfied. □
Thus, we can always guarantee

    L_k ≤ M_k ≤ γ_u·L_f.    (4.11)

If γ_d ≥ γ_u, then the upper bound (3.5) remains valid.

Let us establish a relation between the total number of calls of the oracle, N_k, after k iterations and the value of the iteration counter.

Lemma 5 In the method (4.9), for any k ≥ 0 we have

    N_k ≤ 2(1 + ln γ_d/ln γ_u)·(k + 1) + (2/ln γ_u)·ln( γ_u·L_f/(γ_d·L_0) ).    (4.12)
Proof:
Denote by n_i ≥ 1 the number of calls of the oracle at iteration i ≥ 0. At each cycle of the internal loop we call the oracle twice, for computing ∇f(y) and ∇f(T_L(y)). Therefore,

    L_{i+1} = (1/γ_d)·L_i·γ_u^{0.5·n_i − 1}.

Thus,

    n_i = 2·( 1 + ln γ_d/ln γ_u + (1/ln γ_u)·ln(L_{i+1}/L_i) ).

Hence, we can compute

    N_k = Σ_{i=0}^{k} n_i = 2(1 + ln γ_d/ln γ_u)·(k + 1) + (2/ln γ_u)·ln(L_{k+1}/L_0).

It remains to note that, by (4.11), L_{k+1} ≤ (γ_u/γ_d)·L_f. □
Thus, each iteration of (4.9) needs approximately two times more calls of the oracle than one iteration of the Gradient Method:

    γ_u = γ_d = 2  ⇒  N_k ≤ 4(k + 1) + 2·log₂(L_f/L_0),  L_k ≤ L_f.    (4.13)

However, we will see that the rate of convergence of (4.9) is much higher.

Let us start with two auxiliary statements.

Lemma 6 Assume μ ≤ μ_Ψ. Then the sequences {x_k}, {A_k} and {ψ_k}, generated by the method A(x_0, L_0, μ), satisfy the relations (4.4) for all k ≥ 0.
Proof:
Indeed, in view of the initial settings of (4.9), A_0 = 0 and ψ*_0 = 0. Hence, for k = 0, both relations (4.4) are trivial.

Assume now that the relations R¹_k, R²_k are valid for some k ≥ 0. In view of R²_k, for any x ∈ E we have

    ψ_{k+1}(x) ≤ A_k·φ(x) + (1/2)‖x − x_0‖² + a_{k+1}·[ f(x_{k+1}) + ⟨∇f(x_{k+1}), x − x_{k+1}⟩ + Ψ(x) ]
        ≤ (A_k + a_{k+1})·φ(x) + (1/2)‖x − x_0‖²,

and this is R²_{k+1}. Let us show that the relation R¹_{k+1} is also valid.

Indeed, in view of (4.3) and μ ≤ μ_Ψ, the function ψ_k(x) is strongly convex with convexity parameter 1 + μA_k. Hence, in view of R¹_k, for any x ∈ E we have

    ψ_k(x) ≥ ψ*_k + ((1 + μA_k)/2)·‖x − v_k‖² ≥ A_k·φ(x_k) + ((1 + μA_k)/2)·‖x − v_k‖².    (4.14)

Therefore, using the convexity of φ,

    ψ*_{k+1} = min_{x∈E} { ψ_k(x) + a_{k+1}·[ f(x_{k+1}) + ⟨∇f(x_{k+1}), x − x_{k+1}⟩ + Ψ(x) ] }
        ≥ min_{x∈E} { A_k·φ(x_k) + ((1 + μA_k)/2)·‖x − v_k‖²
            + a_{k+1}·[ φ(x_{k+1}) + ⟨φ′(x_{k+1}), x − x_{k+1}⟩ ] }    [by (4.14)]
        ≥ min_{x∈E} { (A_k + a_{k+1})·φ(x_{k+1}) + ⟨φ′(x_{k+1}), A_k·x_k + a_{k+1}·x − A_{k+1}·x_{k+1}⟩
            + ((1 + μA_k)/2)·‖x − v_k‖² }
        = min_{x∈E} { A_{k+1}·φ(x_{k+1}) + A_{k+1}·⟨φ′(x_{k+1}), y_k − x_{k+1}⟩
            + a_{k+1}·⟨φ′(x_{k+1}), x − v_k⟩ + ((1 + μA_k)/2)·‖x − v_k‖² },

where in the last step we used A_k·x_k + a_{k+1}·v_k = A_{k+1}·y_k, which is the definition of y in (4.9). Thus, we have proved the inequality

    ψ*_{k+1} ≥ A_{k+1}·φ(x_{k+1}) + A_{k+1}·⟨φ′(x_{k+1}), y_k − x_{k+1}⟩ − (a²_{k+1}/(2(1 + μA_k)))·‖φ′(x_{k+1})‖_*².

On the other hand, by the termination criterion (**) in (4.9), we have

    ⟨φ′(x_{k+1}), y_k − x_{k+1}⟩ ≥ (1/M_k)·‖φ′(x_{k+1})‖_*².

It remains to note that in (4.9) we choose a_{k+1} from the quadratic equation

    A_{k+1} ≝ A_k + a_{k+1} = M_k·a²_{k+1}/(2(1 + μA_k)).

Thus, R¹_{k+1} is valid. □
Thus, in order to use inequality (4.5) for deriving the rate of convergence of the method A(x_0, L_0, μ), we need to estimate the rate of growth of the scaling coefficients {A_k}_{k=0}^∞.

Lemma 7 For any μ ≥ 0, the scaling coefficients grow as follows:

    A_k ≥ k²/(2γ_u·L_f), k ≥ 0.    (4.15)

For μ > 0, the rate of growth is linear:

    A_k ≥ (1/(γ_u·L_f))·( 1 + (μ/(2γ_u·L_f))^{1/2} )^{2(k−1)}, k ≥ 1.    (4.16)
Proof:
Indeed, in view of the equation (*) in (4.9), we have:

    A_{k+1} ≤ A_{k+1}·(1 + μA_k) = (M_k/2)·(A_{k+1} − A_k)²
        = (M_k/2)·( A^{1/2}_{k+1} − A^{1/2}_k )²·( A^{1/2}_{k+1} + A^{1/2}_k )²
        ≤ 2A_{k+1}·M_k·( A^{1/2}_{k+1} − A^{1/2}_k )²
        ≤ 2A_{k+1}·γ_u·L_f·( A^{1/2}_{k+1} − A^{1/2}_k )².    [by (4.11)]

Thus, A^{1/2}_{k+1} − A^{1/2}_k ≥ 1/(2γ_u·L_f)^{1/2}, and for any k ≥ 0 we get A^{1/2}_k ≥ k/(2γ_u·L_f)^{1/2}, which is (4.15). If μ > 0, then, by the same reasoning as above, we obtain

    μ·A_k·A_{k+1} < A_{k+1}·(1 + μA_k) ≤ 2A_{k+1}·γ_u·L_f·( A^{1/2}_{k+1} − A^{1/2}_k )².

Hence, A^{1/2}_{k+1} ≥ A^{1/2}_k·( 1 + (μ/(2γ_u·L_f))^{1/2} ). Since A_1 = 2/M_0 ≥ 1/(γ_u·L_f) by (4.11), we come to (4.16). □
Now we can summarize all our observations.

Theorem 6 Let the gradient of the function f be Lipschitz continuous with constant L_f, and let the parameter L_0 satisfy condition (3.2). Then the rate of convergence of the method A(x_0, L_0, 0) as applied to the problem (4.1) can be estimated as follows:

    φ(x_k) − φ(x*) ≤ (γ_u·L_f·‖x* − x_0‖²)/k², k ≥ 1.    (4.17)

If in addition the function Ψ is strongly convex, then the sequence {x_k}_{k=1}^∞ generated by A(x_0, L_0, μ_Ψ) satisfies both (4.17) and the following inequality:

    φ(x_k) − φ(x*) ≤ (γ_u·L_f/2)·‖x* − x_0‖²·( 1 + (μ_Ψ/(2γ_u·L_f))^{1/2} )^{−2(k−1)}, k ≥ 1.    (4.18)
In the next section we will show how to apply this result in order to achieve some specific goals for different optimization problems.

5 Different minimization strategies

5.1 Strongly convex objective with known parameter

Consider the following convex constrained minimization problem:

    min_{x∈Q} f̄(x),    (5.1)

where Q is a closed convex set, and f̄ is a strongly convex function with Lipschitz continuous gradient. Assume the convexity parameter μ_{f̄} to be known. Denote by Ψ_Q(x) the indicator function of the set Q:

    Ψ_Q(x) = { 0, x ∈ Q;  +∞, otherwise.

We can solve the problem (5.1) by two different techniques.

1. Reallocating the prox-term in the objective. For μ ∈ (0, μ_{f̄}], define

    f(x) = f̄(x) − (μ/2)‖x − x_0‖², Ψ(x) = Ψ_Q(x) + (μ/2)‖x − x_0‖².    (5.2)
Note that the function f in (5.2) is convex and its gradient is Lipschitz continuous with L_f = L_{f̄} − μ. Moreover, the function Ψ(x) is strongly convex with convexity parameter μ. On the other hand,

    φ(x) = f(x) + Ψ(x) = f̄(x) + Ψ_Q(x).

Thus, the corresponding unconstrained minimization problem (4.1) coincides with the constrained problem (5.1). Since all conditions of Theorem 6 are satisfied, the method A(x_0, L_0, μ) has the following performance guarantees:

    f̄(x_k) − f̄(x*) ≤ (γ_u·(L_{f̄} − μ)·‖x* − x_0‖²/2)·( 1 + (μ/(2γ_u·(L_{f̄} − μ)))^{1/2} )^{−2(k−1)}, k ≥ 1.    (5.3)

This means that an ε-solution of problem (5.1) can be obtained by this technique in

    O( (L_{f̄}/μ_{f̄})^{1/2}·ln(1/ε) )    (5.4)

iterations. Note that the same problem can also be solved by the Gradient Method (3.3). However, in accordance with (3.15), its performance guarantee is much worse: it needs O( (L_{f̄}/μ_{f̄})·ln(1/ε) ) iterations.
2. Restart. For problem (5.1), define the following components of the composite objective function in (4.1):

    f(x) = f̄(x), Ψ(x) = Ψ_Q(x).    (5.5)

Let us fix an upper bound N ≥ 1 for the number of iterations in A. Consider the following two-level process (a sketch in code follows at the end of this subsection):

    Choose u_0 ∈ Q.
    Compute u_{k+1} as the result of N iterations of A(u_k, L_0, 0), k ≥ 0.    (5.6)

In view of the definition (5.5), we have

    f̄(u_{k+1}) − f̄(x*) ≤ (γ_u·L_{f̄}·‖x* − u_k‖²)/N²    [by (4.17)]
        ≤ (2γ_u·L_{f̄}/(μ_{f̄}·N²))·[ f̄(u_k) − f̄(x*) ].

Thus, taking N = 2·(γ_u·L_{f̄}/μ_{f̄})^{1/2}, we obtain

    f̄(u_{k+1}) − f̄(x*) ≤ (1/2)·[ f̄(u_k) − f̄(x*) ].

Hence, the performance guarantees of this technique are of the same order as (5.4).
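A minimal sketch of the restart strategy (5.6), reusing the accelerated_method routine sketched in Section 4 (our illustration; the parameters L_bar_f and mu_bar_f, i.e. L_{f̄} and μ_{f̄}, are assumed to be known):

```python
import numpy as np

def restart_strategy(u0, L0, L_bar_f, mu_bar_f, f, grad_f, psi, t_step,
                     n_stages=20, gamma_u=2.0):
    """Two-level process (5.6): each stage halves the residual f(u_k) - f*."""
    # Stage length from the analysis: N = 2*sqrt(gamma_u * L_f / mu_f).
    N = int(np.ceil(2.0 * np.sqrt(gamma_u * L_bar_f / mu_bar_f)))
    u = u0.copy()
    for _ in range(n_stages):
        u = accelerated_method(u, L0, f, grad_f, psi, t_step,
                               mu=0.0, n_iters=N)
    return u
```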
5.2 Approximating the first-order optimality conditions

In some applications, we are interested in finding a point with a small residual in the system of the first-order optimality conditions. Since

    Dφ(T_L(x))[u] ≥ −(1 + L_f/L)·‖g_L(x)‖_*    [by (2.15)]
        ≥ −(L + L_f)·( 2(φ(x) − φ(x*))/(2L − L_f) )^{1/2}    [by (2.12)]

for all u ∈ T(T_L(x)), ‖u‖ = 1,    (5.7)

the upper bounds on this residual can be obtained from the estimates on the rate of convergence of the method (4.9) in the form (4.17) or (4.18). However, in this case, the first of these inequalities does not give a satisfactory result. Indeed, it can only guarantee that the right-hand side of inequality (5.7) vanishes as O(1/k). This rate is typical for the Gradient Method (see (3.12)), and from the accelerated version (4.9) we can expect much more. Let us show how we can achieve a better result.
Consider the following constrained optimization problem:

    min_{x∈Q} f(x),    (5.8)

where Q is a closed convex set, and f is a convex function with Lipschitz continuous gradient. Let us fix a regularization parameter δ > 0 and a starting point x_0 ∈ Q. Define

    Ψ(x) = Ψ_Q(x) + (δ/2)‖x − x_0‖².

Consider now the unconstrained minimization problem (4.1) with the composite objective function φ(x) = f(x) + Ψ(x). Note that the function Ψ is strongly convex with parameter μ_Ψ = δ. Hence, in view of Theorem 6, the method A(x_0, L_0, δ) converges as follows:

    φ(x_k) − φ(x*) ≤ (γ_u·L_f/2)·‖x* − x_0‖²·( 1 + (δ/(2γ_u·L_f))^{1/2} )^{−2(k−1)}.    (5.9)

For simplicity, we can choose γ_u = γ_d in order to have L_k ≤ L_f for all k ≥ 0.
Let us now compute T_k = G(x_k, L_k).T and M_k = G(x_k, L_k).L. Then

    φ(x_k) − φ(x*) ≥ φ(x_k) − φ(T_k) ≥ (1/(2M_k))·‖g_{M_k}(x_k)‖_*², L_0 ≤ M_k ≤ γ_u·L_f,    [by (2.16)]

and we obtain the following estimate:

    ‖g_{M_k}(x_k)‖_* ≤ γ_u·L_f·‖x* − x_0‖·( 1 + (δ/(2γ_u·L_f))^{1/2} )^{1−k}.    (5.10)    [by (5.9)]

In our case, the first-order optimality conditions (4.2) for computing T_{M_k}(x_k) can be written as follows:

    ∇f(x_k) + δ·B(T_k − x_0) + ξ_k = g_{M_k}(x_k),    (5.11)

where ξ_k ∈ ∂Ψ_Q(T_k). Note that for any y ∈ Q we have

    0 = Ψ_Q(y) ≥ Ψ_Q(T_k) + ⟨ξ_k, y − T_k⟩ = ⟨ξ_k, y − T_k⟩.    (5.12)

Hence, for any direction u ∈ T(T_k) with ‖u‖ = 1 we obtain

    ⟨∇f(T_k), u⟩ ≥ ⟨∇f(x_k), u⟩ − (L_f/M_k)·‖g_{M_k}(x_k)‖_*    [by (2.10)]
        = ⟨g_{M_k}(x_k) − δ·B(T_k − x_0) − ξ_k, u⟩ − (L_f/M_k)·‖g_{M_k}(x_k)‖_*    [by (5.11)]
        ≥ −δ·‖T_k − x_0‖ − (1 + L_f/M_k)·‖g_{M_k}(x_k)‖_*.    [by (5.12)]
Assume now that the size of the set Q does not exceed R. Given a tolerance ε > 0, set δ = ε·L_0, and choose the number of iterations k from the inequality

    ( 1 + (ε·L_0/(2γ_u·L_f))^{1/2} )^{1−k} ≤ ε.

Then the residual of the first-order optimality conditions satisfies the following inequality:

    ⟨∇f(T_k), u⟩ ≥ −εR·( L_0 + γ_u·L_f·(1 + L_f/L_0) ), ∀u ∈ T(T_k), ‖u‖ = 1.    (5.13)

For that, the required number of iterations k is at most of the order O( (1/ε^{1/2})·ln(1/ε) ).
5.3 Unknown parameter of strongly convex objective

In Section 5.1 we discussed two efficient strategies for minimizing a strongly convex function with a known estimate of the convexity parameter μ_{f̄}. However, usually this information is not available. We can easily get only an upper estimate for this value, for example, by the inequality

    μ_{f̄} ≤ S_L(x) ≤ L_{f̄}, x ∈ Q.

Let us show that such a bound can also be used for designing an efficient optimization strategy for strongly convex functions.

For problem (5.1), assume that we have some guess μ for the parameter μ_{f̄} and a starting point u_0 ∈ Q. Denote φ_0(x) = f̄(x) + Ψ_Q(x). Let us choose

    x_0 = G_{φ_0}(u_0, L_0).T, M_0 = G_{φ_0}(u_0, L_0).L, S_0 = G_{φ_0}(u_0, L_0).S,

and minimize the composite objective (5.2) by the method A(x_0, M_0, μ), endowed with the following stopping criterion:

    Compute: v_k = G_{φ_0}(x_k, L_k).T, M_k = G_{φ_0}(x_k, L_k).L.
    Stop the stage: if (A): ‖g_{M_k}(x_k)[φ_0]‖_* ≤ (1/2)·‖g_{M_0}(u_0)[φ_0]‖_*,
                    or (B): (M_k/A_k)·( 1 + S_0/M_0 )² ≤ (1/4)·μ².    (5.14)

If the stage was terminated by Condition (A), then we call it successful. In this case, we run the next stage, taking v_k as the new starting point and keeping the estimate μ of the convexity parameter unchanged.
Suppose that the stage was terminated by Condition (B) (that is, an unsuccessful stage). If μ were a correct lower bound for the convexity parameter μ_{f̄}, then

    (1/(2M_k))·‖g_{M_k}(x_k)[φ_0]‖_*² ≤ f̄(x_k) − f̄(x*)    [by (2.16)]
        ≤ (1/(2A_k))·‖x_0 − x*‖²    [by (4.5)]
        ≤ (1/(2A_k·μ²))·( 1 + S_0/M_0 )²·‖g_{M_0}(u_0)[φ_0]‖_*².    [by (2.21)]

Hence, in view of Condition (B), in this case the stage must have been terminated by Condition (A). Since this did not happen, we conclude that μ > μ_{f̄}. Therefore, we redefine μ := (1/2)·μ and run the stage again, keeping the old starting point x_0.

We are not going to present all the details of the complexity analysis of the above strategy. It can be shown that, for generating an ε-solution of problem (5.1) with strongly convex objective, it needs

    O( κ^{1/2}_{f̄}·ln κ_{f̄} ) + O( κ^{1/2}_{f̄}·ln κ_{f̄}·ln(1/ε) ), κ_{f̄} ≝ L_{f̄}/μ_{f̄},

calls of the oracle. The first term in this bound corresponds to the total number of oracle calls at all unsuccessful stages. The factor κ^{1/2}_{f̄}·ln κ_{f̄} represents an upper bound on the length of any stage, independently of the variant of its termination.
6 Computational experiments

We tested the above-mentioned algorithms on a set of randomly generated Sparse Least Squares problems of the form

    Find φ* = min_{x∈R^n} { φ(x) ≝ (1/2)‖Ax − b‖₂² + ‖x‖₁ },    (6.1)

where A = (a_1, ..., a_n) is an m×n dense matrix with m < n. All problems were generated with known optimal solutions, which can be obtained from the dual representation of the initial primal problem (6.1):

    min_{x∈R^n} { (1/2)‖Ax − b‖₂² + ‖x‖₁ } = min_{x∈R^n} max_{u∈R^m} { ⟨u, b − Ax⟩ − (1/2)‖u‖₂² + ‖x‖₁ }
        = max_{u∈R^m} min_{x∈R^n} { ⟨b, u⟩ − (1/2)‖u‖₂² − ⟨A^T u, x⟩ + ‖x‖₁ }
        = max_{u∈R^m} { ⟨b, u⟩ − (1/2)‖u‖₂² : ‖A^T u‖_∞ ≤ 1 }.    (6.2)

Thus, the problem dual to (6.1) consists in finding a Euclidean projection of the vector b ∈ R^m onto the dual polytope

    D = { y ∈ R^m : ‖A^T y‖_∞ ≤ 1 }.

This interpretation explains the changing sparsity of the optimal solution x*(τ) of the following parametric version of problem (6.1):

    min_{x∈R^n} { φ_τ(x) ≝ (1/2)‖Ax − b‖₂² + τ‖x‖₁ }.    (6.3)
Indeed, for τ > 0, we have

    φ_τ(x) = τ²·( (1/2)‖A·(x/τ) − b/τ‖₂² + ‖x/τ‖₁ ).

Hence, in the dual problem, we project the vector b/τ onto the polytope D. The nonzero components of x*(τ) correspond to the active facets of D. Thus, for τ big enough, we have b/τ ∈ int D, which means x*(τ) = 0. When τ decreases, x*(τ) becomes more and more dense. Finally, if all facets of D are in general position, then x*(τ) has exactly m nonzero components as τ → 0.
In our computational experiments, we compare three minimization methods. Two of them maintain recursively the relations (4.4). This feature allows us to classify them as primal-dual methods. Indeed, denote

    φ*(u) = (1/2)‖u‖₂² − ⟨b, u⟩.

As we have seen in (6.2),

    φ(x) + φ*(u) ≥ 0 ∀x ∈ R^n, u ∈ D.    (6.4)

Moreover, the lower bound is achieved only at the optimal solutions of the primal and dual problems. For some sequence {z_i}_{i=1}^∞ and a starting point z_0 ∈ dom Ψ, the relations (4.4) ensure

    A_k·φ(x_k) ≤ min_{x∈R^n} { Σ_{i=1}^{k} a_i·[ f(z_i) + ⟨∇f(z_i), x − z_i⟩ ] + A_k·Ψ(x) + (1/2)‖x − z_0‖² }.    (6.5)
In our situation, f(x) = (1/2)‖Ax − b‖₂², Ψ(x) = ‖x‖₁, and we choose z_0 = 0. Denote u_i = b − Az_i. Then ∇f(z_i) = −A^T u_i, and therefore

    f(z_i) − ⟨∇f(z_i), z_i⟩ = (1/2)‖u_i‖₂² + ⟨A^T u_i, z_i⟩ = ⟨b, u_i⟩ − (1/2)‖u_i‖₂² = −φ*(u_i).

Denoting

    ū_k = (1/A_k)·Σ_{i=1}^{k} a_i·u_i,    (6.6)

we obtain

    A_k·[ φ(x_k) + φ*(ū_k) ] ≤ A_k·φ(x_k) + Σ_{i=1}^{k} a_i·φ*(u_i)
        ≤ min_{x∈R^n} { Σ_{i=1}^{k} a_i·⟨∇f(z_i), x⟩ + A_k·Ψ(x) + (1/2)‖x‖² }    [by (6.5)]
        ≤ 0,    [taking x = 0]

where the first inequality uses the convexity of φ*.
In view of (6.4), ū_k cannot be feasible:

    φ*(ū_k) ≤ −φ(x_k) ≤ −φ* = min_{u∈D} φ*(u).    (6.7)

Let us measure the level of infeasibility of these points. Note that the minimum of the optimization problem in (6.5) is achieved at x = v_k. Hence, the corresponding first-order optimality conditions ensure

    ‖ Σ_{i=1}^{k} a_i·A^T u_i − B·v_k ‖_∞ ≤ A_k.
Therefore, |⟨a_i, ū_k⟩| ≤ 1 + (1/A_k)·|(Bv_k)^{(i)}|, i = 1, ..., n. Assume that the matrix B in (1.3) is diagonal:

    B^{(i,j)} = { d_i, i = j;  0, otherwise.

Then |⟨a_i, ū_k⟩| − 1 ≤ (d_i/A_k)·|v_k^{(i)}|, and

    ρ(ū_k) ≝ ( Σ_{i=1}^{n} (1/d_i)·( |⟨a_i, ū_k⟩| − 1 )₊² )^{1/2} ≤ (1/A_k)·‖v_k‖ ≤ (2/A_k)·‖x*‖,    (6.8)

where the last inequality follows from (4.6) and (·)₊ = max{·, 0}. Thus, we can use the function ρ(·) as a dual infeasibility measure. In view of (6.7), it is a reasonable stopping criterion for our primal-dual methods.
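A small sketch of this dual bookkeeping for (6.1), assuming B = diag(d) (our own illustration): inside either primal-dual method one accumulates Σ a_i·u_i together with A_k, and evaluates ρ(ū_k) of (6.8) as a stopping test.

```python
import numpy as np

def dual_infeasibility(A_mat, u_bar, d):
    """rho(u_bar) of (6.8) for the dual polytope ||A^T u||_inf <= 1."""
    viol = np.maximum(np.abs(A_mat.T @ u_bar) - 1.0, 0.0)   # (|<a_i,u>| - 1)_+
    return np.sqrt(np.sum(viol ** 2 / d))

# Accumulation along the method (schematic):
#   u_i   = b - A_mat @ z_i
#   S    += a_i * u_i;  A_k += a_i
#   u_bar = S / A_k
#   stop when dual_infeasibility(A_mat, u_bar, d) <= tol
```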
For generating the random test problems, we apply the following strategy.

- Choose m* ≤ m, the number of nonzero components of the optimal solution x* of problem (6.1), and a parameter ρ > 0 responsible for the size of x*.
- Generate randomly a matrix B ∈ R^{m×n} with elements uniformly distributed in the interval [−1, 1].
- Generate randomly a vector v* ∈ R^m with elements uniformly distributed in [0, 1]. Define y* = v*/‖v*‖₂.
- Sort the entries of the vector B^T y* in decreasing order of their absolute values. For the sake of notation, assume that this is the natural ordering.
- For i = 1, ..., n, define a_i = α_i·b_i with α_i > 0 chosen in accordance with the following rule:

    α_i = { 1/|⟨b_i, y*⟩|, for i = 1, ..., m*;
            1, if |⟨b_i, y*⟩| ≤ 0.1 and i > m*;
            ξ_i/|⟨b_i, y*⟩|, otherwise,

  where the ξ_i are uniformly distributed in [0, 1].
- For i = 1, ..., n, generate the components of the primal solution:

    [x*]^{(i)} = { ξ̄_i·sign(⟨a_i, y*⟩), for i ≤ m*;
                   0, otherwise,

  where the ξ̄_i are uniformly distributed in [0, ρ/(m*)^{1/2}].
- Define b = y* + A·x*.

Thus, the optimal value of the randomly generated problem (6.1) can be computed as

    φ* = (1/2)‖y*‖₂² + ‖x*‖₁.

In the first series of tests, we use this value in the termination criterion.
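A sketch of this generator (our own rendering of the recipe above; it reproduces the construction but makes no claim about matching the authors' original code):

```python
import numpy as np

def generate_problem(n, m, m_star, rho, rng=np.random.default_rng(0)):
    """Random Sparse Least Squares instance (6.1) with known solution."""
    B = rng.uniform(-1.0, 1.0, size=(m, n))
    v = rng.uniform(0.0, 1.0, size=m)
    y_star = v / np.linalg.norm(v)
    # Sort columns by decreasing |<b_i, y*>| (the "natural ordering").
    s = B.T @ y_star
    B = B[:, np.argsort(-np.abs(s))]
    s = B.T @ y_star
    alpha = np.empty(n)
    alpha[:m_star] = 1.0 / np.abs(s[:m_star])
    for i in range(m_star, n):
        if np.abs(s[i]) <= 0.1:
            alpha[i] = 1.0
        else:
            alpha[i] = rng.uniform(0.0, 1.0) / np.abs(s[i])
    A = B * alpha                       # a_i = alpha_i * b_i
    x_star = np.zeros(n)
    xi = rng.uniform(0.0, rho / np.sqrt(m_star), size=m_star)
    x_star[:m_star] = xi * np.sign(A[:, :m_star].T @ y_star)
    b = y_star + A @ x_star
    phi_star = 0.5 * np.dot(y_star, y_star) + np.abs(x_star).sum()
    return A, b, x_star, phi_star
```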
Let us look first at the results of minimization of two typical random problem instances. The first problem is relatively easy.

Problem 1: n = 4000, m = 1000, m* = 100, ρ = 1.

| Gap   | PG: k | #Ax  | SpeedUp | DG: k | #Ax   | SpeedUp | AC: k | #Ax  | SpeedUp |
|-------|-------|------|---------|-------|-------|---------|-------|------|---------|
| 1     | 1     | 4    | 0.21%   | 1     | 4     | 0.85%   | 1     | 4    | 0.14%   |
| 2^-1  | 3     | 8    | 0.20%   | 3     | 12    | 0.81%   | 4     | 28   | 1.24%   |
| 2^-2  | 10    | 29   | 0.24%   | 8     | 38    | 0.89%   | 8     | 60   | 2.47%   |
| 2^-3  | 28    | 83   | 0.32%   | 25    | 123   | 1.17%   | 14    | 108  | 4.06%   |
| 2^-4  | 159   | 476  | 0.88%   | 156   | 777   | 3.45%   | 40    | 316  | 17.50%  |
| 2^-5  | 557   | 1670 | 1.53%   | 565   | 2824  | 6.21%   | 74    | 588  | 29.47%  |
| 2^-6  | 954   | 2862 | 1.31%   | 941   | 4702  | 5.17%   | 98    | 780  | 25.79%  |
| 2^-7  | 1255  | 3765 | 0.86%   | 1257  | 6282  | 3.45%   | 118   | 940  | 18.62%  |
| 2^-8  | 1430  | 4291 | 0.49%   | 1466  | 7328  | 2.01%   | 138   | 1096 | 12.73%  |
| 2^-9  | 1547  | 4641 | 0.26%   | 1613  | 8080  | 2.13%   | 156   | 1240 | 8.19%   |
| 2^-10 | 1640  | 4920 | 0.14%   | 1743  | 8713  | 0.61%   | 173   | 1380 | 4.97%   |
| 2^-11 | 1722  | 5167 | 0.07%   | 1849  | 9243  | 0.33%   | 188   | 1500 | 3.01%   |
| 2^-12 | 1788  | 5364 | 0.04%   | 1935  | 9672  | 0.17%   | 202   | 1608 | 1.67%   |
| 2^-13 | 1847  | 5539 | 0.02%   | 2003  | 10013 | 0.09%   | 216   | 1720 | 0.96%   |
| 2^-14 | 1898  | 5693 | 0.01%   | 2061  | 10303 | 0.05%   | 230   | 1836 | 0.55%   |
| 2^-15 | 1944  | 5831 | 0.01%   | 2113  | 10563 | 0.05%   | 248   | 1968 | 0.31%   |
| 2^-16 | 1987  | 5961 | 0.00%   | 2164  | 10817 | 0.03%   | 265   | 2112 | 0.19%   |
| 2^-17 | 2029  | 6085 | 0.00%   | 2217  | 11083 | 0.02%   | 279   | 2224 | 0.10%   |
| 2^-18 | 2072  | 6215 | 0.00%   | 2272  | 11357 | 0.01%   | 305   | 2432 | 0.06%   |
| 2^-19 | 2120  | 6359 | 0.00%   | 2331  | 11652 | 0.00%   | 314   | 2504 | 0.03%   |
| 2^-20 | 2165  | 6495 | 0.00%   | 2448  | 12238 | 0.00%   | 319   | 2544 | 0.02%   |
In this table, the column "Gap" shows the relative decrease of the initial residual. In the rest of the table, we can see the computational results of three methods:

- the Primal Gradient Method (3.3), abbreviated as PG;
- the Dual variant of the Gradient Method (4.7), abbreviated as DG;
- the Accelerated Gradient Method (4.9), abbreviated as AC.

In all methods, we use the following values of the parameters:

    γ_u = γ_d = 2, x_0 = 0, L_0 = max_{1≤i≤n} ‖a_i‖₂² ≤ L_f, μ = 0.

Let us explain the remaining columns of this table. For each method, the column "k" shows the number of iterations necessary for reaching the corresponding reduction of the initial gap in the function value. The column "#Ax" shows the necessary number of matrix-vector multiplications. Note that for computing the value f(x) we need one multiplication; if, in addition, we need to compute the gradient, we need one more multiplication. For example, in accordance with the estimate (4.13), each iteration of (4.9) needs four computations of the function/gradient pair. Hence, in this method we can expect eight matrix-vector multiplications per iteration. For the Gradient Method (3.3), we need on average two calls of the oracle. However, one of them is done in the line-search procedure (3.1) and requires only the function value. Hence, in this case we expect to have three matrix-vector multiplications per iteration. In the above table, we can observe a remarkable accuracy of these predictions. Finally, the column "SpeedUp" represents the absolute accuracy of the current approximate solution as a percentage of the worst-case estimate given by the corresponding rate of convergence. Since the exact L_f is unknown, we use L_0 instead.

We can see that all methods usually significantly outperform the theoretically predicted rate of convergence. However, for all of them, there are some parts of the trajectory where the worst-case predictions are quite accurate. This is even more evident from our second table, which corresponds to a more difficult problem instance.

Problem 2: n = 5000, m = 500, m* = 100, ρ = 1.
| Gap   | PG: k | #Ax   | SpeedUp | DG: k | #Ax   | SpeedUp | AC: k | #Ax  | SpeedUp |
|-------|-------|-------|---------|-------|-------|---------|-------|------|---------|
| 1     | 1     | 4     | 0.24%   | 1     | 4     | 0.96%   | 1     | 4    | 0.16%   |
| 2^-1  | 2     | 6     | 0.20%   | 2     | 8     | 0.81%   | 3     | 24   | 0.92%   |
| 2^-2  | 5     | 17    | 0.21%   | 5     | 24    | 0.81%   | 5     | 40   | 1.49%   |
| 2^-3  | 11    | 33    | 0.19%   | 11    | 45    | 0.77%   | 8     | 64   | 1.83%   |
| 2^-4  | 38    | 113   | 0.30%   | 38    | 190   | 1.21%   | 19    | 148  | 5.45%   |
| 2^-5  | 234   | 703   | 0.91%   | 238   | 1189  | 3.69%   | 52    | 416  | 20.67%  |
| 2^-6  | 1027  | 3081  | 1.98%   | 1026  | 5128  | 7.89%   | 106   | 848  | 43.08%  |
| 2^-7  | 2402  | 7206  | 2.31%   | 2387  | 11933 | 9.17%   | 160   | 1280 | 48.70%  |
| 2^-8  | 3681  | 11043 | 1.77%   | 3664  | 18318 | 7.05%   | 204   | 1628 | 39.54%  |
| 2^-9  | 4677  | 14030 | 1.12%   | 4664  | 23318 | 4.49%   | 245   | 1956 | 28.60%  |
| 2^-10 | 5410  | 16230 | 0.65%   | 5392  | 26958 | 2.61%   | 288   | 2300 | 19.89%  |
| 2^-11 | 5938  | 17815 | 0.36%   | 5879  | 29393 | 1.41%   | 330   | 2636 | 13.06%  |
| 2^-12 | 6335  | 19006 | 0.19%   | 6218  | 31088 | 0.77%   | 370   | 2956 | 8.20%   |
| 2^-13 | 6637  | 19911 | 0.10%   | 6471  | 32353 | 0.41%   | 402   | 3212 | 4.77%   |
| 2^-14 | 6859  | 20577 | 0.05%   | 6670  | 33348 | 0.21%   | 429   | 3424 | 2.71%   |
| 2^-15 | 7021  | 21062 | 0.03%   | 6835  | 34173 | 0.13%   | 453   | 3616 | 1.49%   |
| 2^-16 | 7161  | 21483 | 0.01%   | 6978  | 34888 | 0.05%   | 471   | 3764 | 0.83%   |
| 2^-17 | 7281  | 21842 | 0.01%   | 7108  | 35539 | 0.05%   | 485   | 3872 | 0.42%   |
| 2^-18 | 7372  | 22115 | 0.00%   | 7225  | 36123 | 0.03%   | 509   | 4068 | 0.24%   |
| 2^-19 | 7438  | 22313 | 0.00%   | 7335  | 36673 | 0.02%   | 525   | 4192 | 0.12%   |
| 2^-20 | 7492  | 22474 | 0.00%   | 7433  | 37163 | 0.01%   | 547   | 4372 | 0.07%   |
In this table, we can see that the Primal Gradient Method still significantly outperforms the theoretical predictions. This is not too surprising since it can, for example, automatically accelerate on strongly convex functions (see Theorem 5). All other methods require in this case some explicit changes in their schemes.

However, despite all these discrepancies, the main conclusion of our theoretical analysis seems to be confirmed: the accelerated scheme (4.9) significantly outperforms the primal and dual variants of the Gradient Method.

In the second series of tests, we studied the abilities of the primal-dual schemes (4.7) and (4.9) in decreasing the infeasibility measure ρ(·) (see (6.8)). This problem, at least for the Dual Gradient Method (4.7), appears to be much harder than the primal minimization problem (6.1). Let us look at the following results.

Problem 3: n = 500, m = 50, m* = 25, ρ = 1.
(For each method, the third column shows the corresponding residual φ(x_k) − φ* in the objective.)

| Gap   | DG: k | #Ax   | φ−φ*     | SpeedUp | AC: k | #Ax  | φ−φ*     | SpeedUp |
|-------|-------|-------|----------|---------|-------|------|----------|---------|
| 1     | 2     | 8     | 2.5·10^0 | 8.26%   | 2     | 16   | 3.6·10^0 | 2.80%   |
| 2^-1  | 5     | 25    | 1.4·10^0 | 9.35%   | 7     | 56   | 8.8·10^-1| 15.55%  |
| 2^-2  | 13    | 64    | 6.0·10^-1| 13.17%  | 11    | 88   | 5.3·10^-1| 20.96%  |
| 2^-3  | 26    | 130   | 3.9·10^-1| 12.69%  | 15    | 120  | 4.4·10^-1| 19.59%  |
| 2^-4  | 48    | 239   | 2.7·10^-1| 12.32%  | 21    | 164  | 3.1·10^-1| 19.21%  |
| 2^-5  | 103   | 514   | 1.6·10^-1| 13.28%  | 35    | 276  | 1.8·10^-1| 25.83%  |
| 2^-6  | 243   | 1212  | 8.3·10^-2| 15.64%  | 54    | 432  | 1.0·10^-1| 31.75%  |
| 2^-7  | 804   | 4019  | 3.0·10^-2| 25.93%  | 86    | 688  | 4.6·10^-2| 39.89%  |
| 2^-8  | 1637  | 8183  | 6.3·10^-3| 26.41%  | 122   | 976  | 1.8·10^-2| 40.22%  |
| 2^-9  | 3298  | 16488 | 4.6·10^-4| 26.6%   | 169   | 1348 | 5.3·10^-3| 38.58%  |
| 2^-10 | 4837  | 24176 | 1.8·10^-7| 19.33%  | 224   | 1788 | 7.7·10^-4| 34.28%  |
| 2^-11 | 4942  | 24702 | 1.2·10^-14| 9.97%  | 301   | 2404 | 8.0·10^-5| 30.88%  |
| 2^-12 | 5149  | 25734 | 1.3·10^-15| 5.16%  | 419   | 3352 | 2.7·10^-5| 29.95%  |
| 2^-13 | 5790  | 28944 | 1.3·10^-15| 2.92%  | 584   | 4668 | 5.3·10^-6| 29.11%  |
| 2^-14 | 6474  | 32364 | 0.0      | 2.67%   | 649   | 5188 | 4.1·10^-7| 29.48%  |
In this table we can see the computational cost for decreasing the initial value of ρ by a factor of 2^14 ≈ 10^4. Note that both methods require more iterations than for Problem 1, which was solved up to an accuracy in the objective function of the order 2^-20 ≈ 10^-6. Moreover, for reaching the required level of ρ, the method (4.7) has to decrease the residual in the objective up to machine precision, and the norm of the gradient mapping up to 10^-12. The accelerated scheme is more balanced: the final residual in φ is of the order 10^-6, and the norm of the gradient mapping was decreased only up to 1.3·10^-3.

Let us look at a bigger problem.

Problem 4: n = 1000, m = 100, m* = 50, ρ = 1.
| Gap   | DG: k | #Ax   | φ−φ*     | SpeedUp | AC: k | #Ax  | φ−φ*     | SpeedUp |
|-------|-------|-------|----------|---------|-------|------|----------|---------|
| 1     | 2     | 8     | 3.7·10^0 | 6.41%   | 2     | 12   | 4.2·10^0 | 1.99%   |
| 2^-1  | 5     | 24    | 2.0·10^0 | 7.75%   | 7     | 56   | 1.4·10^0 | 11.71%  |
| 2^-2  | 15    | 74    | 1.0·10^0 | 11.56%  | 12    | 96   | 8.7·10^-1| 15.49%  |
| 2^-3  | 37    | 183   | 6.9·10^-1| 14.73%  | 17    | 132  | 6.8·10^-1| 16.66%  |
| 2^-4  | 83    | 414   | 4.5·10^-1| 16.49%  | 26    | 208  | 4.7·10^-1| 20.43%  |
| 2^-5  | 198   | 989   | 2.4·10^-1| 19.79%  | 42    | 336  | 2.5·10^-1| 26.76%  |
| 2^-6  | 445   | 2224  | 7.8·10^-2| 22.28%  | 65    | 520  | 1.0·10^-1| 32.41%  |
| 2^-7  | 1328  | 6639  | 2.2·10^-2| 33.25%  | 91    | 724  | 3.6·10^-2| 31.50%  |
| 2^-8  | 2675  | 13373 | 4.1·10^-3| 33.48%  | 125   | 996  | 1.1·10^-2| 30.07%  |
| 2^-9  | 4508  | 22535 | 5.6·10^-5| 28.22%  | 176   | 1404 | 2.6·10^-3| 27.85%  |
| 2^-10 | 4702  | 23503 | 2.7·10^-10| 14.7%  | 240   | 1916 | 4.4·10^-4| 26.08%  |
| 2^-11 | 4869  | 24334 | 2.2·10^-15| 7.61%  | 328   | 2620 | 7.7·10^-5| 26.08%  |
| 2^-12 | 6236  | 31175 | 2.2·10^-15| 4.88%  | 465   | 3716 | 6.5·10^-6| 26.20%  |
| 2^-13 | 12828 | 64136 | 2.2·10^-15| 5.02%  | 638   | 5096 | 2.4·10^-6| 24.62%  |
| 2^-14 | 16354 | 81766 | 4.4·10^-15| 5.24%  | 704   | 5628 | 7.8·10^-7| 24.62%  |
As compared with Problem 3, in Problem 4 the sizes are doubled. This makes almost no difference for the accelerated scheme, but for the Dual Gradient Method the computational expenses grow substantially. A further increase of dimension makes the latter scheme impractical. Let us look at how these methods work on Problem 1 with ρ(·) being the termination criterion.

Problem 1a: n = 4000, m = 1000, m* = 100, ρ = 1.
| Gap   | DG: k | #Ax   | φ−φ*     | SpeedUp | AC: k | #Ax  | φ−φ*     | SpeedUp |
|-------|-------|-------|----------|---------|-------|------|----------|---------|
| 1     | 2     | 8     | 2.3·10^1 | 2.88%   | 2     | 12   | 2.4·10^1 | 0.99%   |
| 2^-1  | 5     | 24    | 1.2·10^1 | 3.44%   | 8     | 60   | 8.1·10^0 | 7.02%   |
| 2^-2  | 17    | 83    | 5.8·10^0 | 6.00%   | 13    | 100  | 4.6·10^0 | 10.12%  |
| 2^-3  | 44    | 219   | 3.5·10^0 | 7.67%   | 20    | 160  | 3.5·10^0 | 11.20%  |
| 2^-4  | 100   | 497   | 2.7·10^0 | 8.94%   | 28    | 220  | 2.9·10^0 | 12.10%  |
| 2^-5  | 234   | 1168  | 1.9·10^0 | 10.51%  | 44    | 348  | 2.1·10^0 | 14.79%  |
| 2^-6  | 631   | 3153  | 1.0·10^0 | 14.18%  | 78    | 620  | 1.0·10^0 | 23.46%  |
| 2^-7  | 1914  | 9568  | 1.0·10^-2| 21.50%  | 117   | 932  | 2.9·10^-1| 26.44%  |
| 2^-8  | 3704  | 18514 | 4.6·10^-7| 20.77%  | 157   | 1252 | 6.8·10^-2| 23.88%  |
| 2^-9  | 3731  | 18678 | 1.4·10^-14| 15.77% | 212   | 1688 | 5.3·10^-3| 21.63%  |
| 2^-10 | Line search failure ...            | 287   | 2288 | 2.0·10^-4| 19.87%  |
| 2^-11 |                                    | 391   | 3120 | 2.5·10^-5| 18.43%  |
| 2^-12 |                                    | 522   | 4168 | 7.0·10^-6| 16.48%  |
| 2^-13 |                                    | 693   | 5536 | 4.5·10^-7| 14.40%  |
| 2^-14 |                                    | 745   | 5948 | 3.8·10^-7| 13.76%  |
The reason for the failure of the Dual Gradient Method is quite interesting. In the end, it generates points with a very small residual in the value of the objective function. Therefore, the termination criterion in the gradient iteration (3.1) cannot work properly due to rounding errors. In the accelerated scheme (4.9), this does not happen, since the decrease of the objective function and of the dual infeasibility measure is much more balanced. In some sense, this situation is natural. We have seen that on the current test problems all methods converge faster in the end. On the other hand, the rate of convergence of the dual variables ū_k is limited by the rate of growth of the coefficients a_i in the representation (6.6). For the Dual Gradient Method, these coefficients are almost constant. For the accelerated scheme, they grow proportionally to the iteration counter.

We hope that the above numerical examples clearly demonstrate the advantages of the accelerated gradient method (4.9) with the adjustable line search strategy. It is interesting to check numerically how this method works in other situations. Of course, the first candidates to try are different applications of the smoothing technique [9]. However, even for the Sparse Least Squares problem (6.1) there are many potential improvements. Let us discuss one of them.

Note that we treated the problem (6.1) by the quite general model (2.1), ignoring the important fact that the function f is quadratic. The characteristic property of quadratic functions is that they have a constant second derivative. Hence, it is natural to select the operator B in the metric (1.3) taking into account the structure of the Hessian of the function f. Let us define B = diag(A^T A) ≡ diag(∇²f(x)). Then

    ‖e_i‖² = ⟨Be_i, e_i⟩ = ‖a_i‖₂² = ‖Ae_i‖₂² = ⟨∇²f(x)·e_i, e_i⟩, i = 1, ..., n,
where e_i is the i-th coordinate vector in R^n. Therefore,

    L_0 ≝ 1 ≤ L_f ≡ max_{‖u‖=1} ⟨∇²f(x)·u, u⟩ ≤ n.
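Computationally, working in the metric defined by B = diag(A^T A) only changes the prox step: (2.6) now soft-thresholds each coordinate with its own weight. A minimal sketch under these assumptions (B = diag(d) with d_i = ‖a_i‖₂²; our own illustration):

```python
import numpy as np

def composite_step_diag(y, grad_f_y, L, d):
    """T_L(y) of (2.6) for Psi = ||.||_1 in the metric B = diag(d).

    Minimizing <g, x-y> + (L/2)(x-y)^T B (x-y) + ||x||_1 is separable;
    coordinate i is soft-thresholded with threshold 1/(L*d_i).
    """
    z = y - grad_f_y / (L * d)
    return np.sign(z) * np.maximum(np.abs(z) - 1.0 / (L * d), 0.0)

# d = np.sum(A * A, axis=0) implements B = diag(A^T A); in this metric
# the bounds L_0 = 1 <= L_f <= n can initialize the line search.
```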
Thus, in this metric, we have very good lower and upper bounds for the Lipschitz constant L_f. Let us look at the corresponding computational results. We solve Problem 1 (with n = 4000, m = 1000, and ρ = 1) up to accuracy Gap = 2^-20 for different sizes m* of the support of the optimal vector, which are gradually increased from 100 to 1000.
Problem 1b.

| m*   | PG: k | #Ax  | AC: k | #Ax  |
|------|-------|------|-------|------|
| 100  | 42    | 127  | 58    | 472  |
| 200  | 53    | 160  | 61    | 496  |
| 300  | 69    | 208  | 70    | 568  |
| 400  | 95    | 286  | 77    | 624  |
| 500  | 132   | 397  | 84    | 680  |
| 600  | 214   | 642  | 108   | 872  |
| 700  | 330   | 993  | 139   | 1120 |
| 800  | 504   | 1513 | 158   | 1272 |
| 900  | 1149  | 3447 | 196   | 1576 |
| 1000 | 2876  | 8630 | 283   | 2272 |
Recall that the first line of this table corresponds to the previously discussed version of Problem 1. For the reader's convenience, in the next table we repeat the final results on the latter problem, adding the computational results for m* = 1000, both with no diagonal scaling.

| m*   | PG: k | #Ax    | AC: k | #Ax  |
|------|-------|--------|-------|------|
| 100  | 2165  | 6495   | 319   | 2544 |
| 1000 | 42509 | 127528 | 879   | 7028 |

Thus, for m* = 100, the diagonal scaling makes Problem 1 very easy. For easy problems, the simple and cheap methods have a definite advantage over more complicated strategies. When m* increases, the scaled problems become more and more difficult. Finally, we can see again the superiority of the accelerated scheme. Needless to say, at this moment we have no plausible explanation for this phenomenon.

Our last computational results clearly show that an appropriate complexity analysis of the Sparse Least Squares problem remains a challenging topic for future research.
References

[1] S. Chen, D. Donoho, and M. Saunders. Atomic decomposition by basis pursuit. SIAM Journal of Scientific Computation, 20, 33-61 (1998).
[2] J. Claerbout and F. Muir. Robust modelling of erratic data. Geophysics, 38, 826-844 (1973).
[3] M. Figueiredo, R. Nowak, and S. Wright. Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems. Submitted for publication.
[4] S.-J. Kim, K. Koh, M. Lustig, S. Boyd, and D. Gorinevsky. A method for large-scale l1-regularized least-squares problems with applications in signal processing and statistics. Research Report, Stanford University, March 20, 2007.
[5] S. Levy and P. Fullagar. Reconstruction of a sparse spike train from a portion of its spectrum and application to high-resolution deconvolution. Geophysics, 46, 1235-1243 (1981).
[6] A. Miller. Subset Selection in Regression. Chapman and Hall, London (2002).
[7] A. Nemirovsky and D. Yudin. Informational Complexity and Efficient Methods for Solution of Convex Extremal Problems. J. Wiley & Sons, New York (1983).
[8] Yu. Nesterov. Introductory Lectures on Convex Optimization. Kluwer, Boston (2004).
[9] Yu. Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming (A), 103 (1), 127-152 (2005).
[10] Yu. Nesterov. Rounding of convex sets and efficient gradient methods for linear programming problems. Accepted by Optimization Methods and Software (2007).
[11] Yu. Nesterov. Accelerating the cubic regularization of Newton's method on convex problems. Accepted by Mathematical Programming. DOI 10.1007/s10107-006-0089-x.
[12] Yu. Nesterov and A. Nemirovskii. Interior Point Polynomial Methods in Convex Programming: Theory and Applications. SIAM, Philadelphia (1994).
[13] J. Ortega and W. Rheinboldt. Iterative Solution of Nonlinear Equations in Several Variables. Academic Press, New York (1970).
[14] F. Santosa and W. Symes. Linear inversion of band-limited reflection seismograms. SIAM Journal of Scientific and Statistical Computing, 7, 1307-1330 (1986).
[15] H. Taylor, S. Banks, and J. McCoy. Deconvolution with the l1 norm. Geophysics, 44, 39-52 (1979).
[16] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society B, 58, 267-288 (1996).
[17] J. Tropp. Just relax: convex programming methods for identifying sparse signals. IEEE Transactions on Information Theory, 51, 1030-1051 (2006).
[18] S. Wright. Solving l1-regularized regression problems. Talk at the International Conference "Combinatorics and Optimization", Waterloo, June 2007.