
SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives

Aaron Defazio*
Ambiata
Australian National University, Canberra

Francis Bach
INRIA - Sierra Project-Team
École Normale Supérieure, Paris, France

Simon Lacoste-Julien
INRIA - Sierra Project-Team
École Normale Supérieure, Paris, France

Abstract

In this work we introduce a new optimisation method called SAGA in the spirit of SAG, SDCA, MISO and SVRG, a set of recently proposed incremental gradient algorithms with fast linear convergence rates. SAGA improves on the theory behind SAG and SVRG, with better theoretical convergence rates, and has support for composite objectives where a proximal operator is used on the regulariser. Unlike SDCA, SAGA supports non-strongly convex problems directly, and is adaptive to any inherent strong convexity of the problem. We give experimental results showing the effectiveness of our method.

1 Introduction
Remarkably, recent advances [1, 2] have shown that it is possible to minimise strongly convex
finite sums provably faster in expectation than is possible without the finite sum structure. This is
significant for machine learning problems as a finite sum structure is common in the empirical risk
minimisation setting. The requirement of strong convexity is likewise satisfied in machine learning
problems in the typical case where a quadratic regulariser is used.
In particular, we are interested in minimising functions of the form
    f(x) = (1/n) Σ_{i=1}^{n} f_i(x),

where x ∈ R^d, each f_i is convex and has Lipschitz continuous derivatives with constant L. We will also consider the case where each f_i is strongly convex with constant µ, and the "composite" (or proximal) case where an additional regularisation function is added:

    F(x) = f(x) + h(x),

where h : R^d → R is convex but potentially non-differentiable, and where the proximal operation of h is easy to compute — few incremental gradient methods are applicable in this setting [3][4].
Our contributions are as follows. In Section 2 we describe the SAGA algorithm, a novel incremental
gradient method. In Section 5 we prove theoretical convergence rates for SAGA in the strongly
convex case better than those for SAG [1] and SVRG [5], and a factor of 2 from the SDCA [2]
convergence rates. These rates also hold in the composite setting. Additionally, we show that, like SAG but unlike SDCA, our method is applicable to non-strongly convex problems without modification. We establish theoretical convergence rates for this case also. In Section 3 we discuss the relation between each of the fast incremental gradient methods, showing that each stems from a very small modification of another.

* The first author completed this work while under funding from NICTA. This work was partially supported by the MSR-Inria Joint Centre and a grant by the European Research Council (SIERRA project 239993).

2 SAGA Algorithm
We start with some known initial vector x^0 ∈ R^d and known derivatives f_i'(φ_i^0) ∈ R^d with φ_i^0 = x^0 for each i. These derivatives are stored in a table data-structure of length n, or alternatively an n × d matrix. For many problems of interest, such as binary classification and least-squares, only a single floating point value instead of a full gradient vector needs to be stored (see Section 4). SAGA is inspired both by SAG [1] and SVRG [5] (as we will discuss in Section 3). SAGA uses a step size of γ and makes the following updates, starting with k = 0:

SAGA Algorithm: Given the value of x^k and of each f_i'(φ_i^k) at the end of iteration k, the updates for iteration k + 1 are as follows:

1. Pick a j uniformly at random.

2. Take φ_j^{k+1} = x^k, and store f_j'(φ_j^{k+1}) in the table. All other entries in the table remain unchanged. The quantity φ_j^{k+1} is not explicitly stored.

3. Update x using f_j'(φ_j^{k+1}), f_j'(φ_j^k) and the table average:

    w^{k+1} = x^k − γ [ f_j'(φ_j^{k+1}) − f_j'(φ_j^k) + (1/n) Σ_{i=1}^n f_i'(φ_i^k) ],   (1)

    x^{k+1} = prox_γ^h (w^{k+1}).   (2)




The proximal operator we use above is defined as

    prox_γ^h (y) := argmin_{x ∈ R^d} { h(x) + (1/(2γ)) ||x − y||^2 }.   (3)
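To make the update concrete, the following is a minimal dense-vector sketch of one SAGA iteration, equations (1)-(3). It is an illustration only, not the implementation used in our experiments: the gradient oracle grad_f is a placeholder, and the proximal step is shown for the particular choice h(x) = λ||x||_1, whose prox is soft-thresholding.

import numpy as np

def prox_l1(y, gamma, lam):
    # Proximal operator (3) for h(x) = lam * ||x||_1 (soft-thresholding).
    return np.sign(y) * np.maximum(np.abs(y) - gamma * lam, 0.0)

def saga_step(x, grad_table, table_avg, grad_f, n, gamma, lam, rng):
    # One SAGA iteration on dense vectors.
    j = rng.integers(n)                              # 1. sample j uniformly
    g_new = grad_f(j, x)                             # f_j'(phi_j^{k+1}), since phi_j^{k+1} = x^k
    g_old = grad_table[j].copy()                     # stored f_j'(phi_j^k)
    w = x - gamma * (g_new - g_old + table_avg)      # equation (1)
    x_next = prox_l1(w, gamma, lam)                  # equation (2)
    grad_table[j] = g_new                            # 2. update the table entry...
    table_avg += (g_new - g_old) / n                 # ...and its running average, in O(d)
    return x_next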
In the strongly convex case, when a step size of γ = 1/(2(µn + L)) is chosen, we have the following convergence rate in the composite and hence also the non-composite case:

    E ||x^k − x^*||^2 ≤ (1 − µ/(2(µn + L)))^k [ ||x^0 − x^*||^2 + n/(µn + L) ( f(x^0) − ⟨f'(x^*), x^0 − x^*⟩ − f(x^*) ) ].

We prove this result in Section 5. The requirement of strong convexity can be relaxed from needing to hold for each f_i to just holding on average, but at the expense of a worse geometric rate (1 − µ/(6(µn + L))), requiring a step size of γ = 1/(3(µn + L)).
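As a purely illustrative calculation (the values here are hypothetical, not taken from our experiments): with n = 10^4, L = 1 and µ = 10^{-4}, the step size is γ = 1/(2(µn + L)) = 1/4 and the per-step contraction factor is 1 − µ/(2(µn + L)) = 1 − 2.5 × 10^{-5}, so one pass of n steps shrinks the expected squared distance to x^* by roughly a factor of e^{−1/4}.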

In the non-strongly convex case, we have established the convergence rate in terms of the average iterate, excluding step 0: x̄^k = (1/k) Σ_{t=1}^k x^t. Using a step size of γ = 1/(3L) we have

    E [F(x̄^k)] − F(x^*) ≤ (4n/k) [ (2L/n) ||x^0 − x^*||^2 + f(x^0) − ⟨f'(x^*), x^0 − x^*⟩ − f(x^*) ].

This result is proved in the supplementary material. Importantly, when this step size γ = 1/(3L) is used, our algorithm automatically adapts to the level of strong convexity µ > 0 naturally present, giving a convergence rate of (see the comment at the end of the proof of Theorem 1):

    E ||x^k − x^*||^2 ≤ (1 − min{1/(4n), µ/(3L)})^k [ ||x^0 − x^*||^2 + (2n/(3L)) ( f(x^0) − ⟨f'(x^*), x^0 − x^*⟩ − f(x^*) ) ].

Although any incremental gradient method can be applied to non-strongly convex problems via the addition of a small quadratic regularisation, the amount of regularisation is an additional tunable parameter which our method avoids.

3 Related Work
We explore the relationship between SAGA and the other fast incremental gradient methods in this
section. By using SAGA as a midpoint, we are able to provide a more unified view than is available
in the existing literature. A brief summary of the properties of each method considered in this section
is given in Figure 1. The method from [3], which handles the non-composite setting, is not listed as
its rate is of the slow type and can be up to n times smaller than the one for SAGA or SVRG [5].

                          SAGA   SAG   SDCA   SVRG   FINITO
Strongly Convex (SC)       ✓      ✓     ✓      ✓      ✓
Convex, Non-SC*            ✓      ✓     ✗      ?      ?
Prox Reg.                  ✓      ?     ✓[6]   ✓      ✗
Non-smooth                 ✗      ✗     ✓      ✗      ✗
Low Storage Cost           ✗      ✗     ✗      ✓      ✗
Simple(-ish) Proof         ✓      ✗     ✓      ✓      ✓
Adaptive to SC             ✓      ✓     ✗      ?      ?

Figure 1: Basic summary of method properties. Question marks denote unproven, but not experimentally ruled out cases. (*) Note that any method can be applied to non-strongly convex problems by adding a small amount of L2 regularisation; this row describes methods that do not require this trick.

SAGA: midpoint between SAG and SVRG/S2GD

In [5], the authors make the observation that the variance of the standard stochastic gradient (SGD)
update direction can only go to zero if decreasing step sizes are used, thus preventing a linear conver-
gence rate unlike for batch gradient descent. They thus propose to use a variance reduction approach
(see [7] and references therein for example) on the SGD update in order to be able to use constant
step sizes and get a linear convergence rate. We present the updates of their method called SVRG
(Stochastic Variance Reduced Gradient) in (6) below, comparing it with the non-composite form
of SAGA rewritten in (5). They also mention that SAG (Stochastic Average Gradient) [1] can be
interpreted as reducing the variance, though they do not provide the specifics. Here, we make this
connection clearer and relate it to SAGA.
We first review a slightly more generalized version of the variance reduction approach (we allow the updates to be biased). Suppose that we want to use Monte Carlo samples to estimate EX and that we can compute efficiently EY for another random variable Y that is highly correlated with X. One variance reduction approach is to use the following estimator θ_α as an approximation to EX: θ_α := α(X − Y) + EY, for a step size α ∈ [0, 1]. We have that Eθ_α is a convex combination of EX and EY: Eθ_α = αEX + (1 − α)EY. The standard variance reduction approach uses α = 1 and the estimate is unbiased: Eθ_1 = EX. The variance of θ_α is Var(θ_α) = α^2 [Var(X) + Var(Y) − 2 Cov(X, Y)], and so if Cov(X, Y) is big enough, the variance of θ_α is reduced compared to X, giving the method its name. By varying α from 0 to 1, we increase the variance of θ_α towards its maximum value (which usually is still smaller than the one for X) while decreasing its bias towards zero.
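The following small sketch illustrates this trade-off numerically on synthetic data (the variables and numbers here are our own illustration, not anything from the methods discussed): Y is built to be highly correlated with X and to have a known mean, so decreasing α trades variance for bias.

import numpy as np

rng = np.random.default_rng(0)
Z = rng.normal(size=1_000_000)
X = Z + 1.0 + 0.1 * rng.normal(size=Z.size)   # quantity of interest, EX = 1, Var(X) ~ 1.01
Y = Z + 0.1 * rng.normal(size=Z.size)         # highly correlated with X, EY = 0 known exactly
EY = 0.0

for alpha in [0.1, 0.5, 1.0]:
    theta = alpha * (X - Y) + EY              # the estimator theta_alpha
    bias = (alpha - 1.0) * 1.0                # E[theta_alpha] - EX = (alpha - 1) EX here
    print(f"alpha={alpha}: empirical variance={theta.var():.4f}, bias={bias:+.2f}")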
Both SAGA and SAG can be derived from such a variance reduction viewpoint: here X is the SGD direction sample f_j'(x^k), whereas Y is a past stored gradient f_j'(φ_j^k). SAG is obtained by using α = 1/n (update rewritten in our notation in (4)), whereas SAGA is the unbiased version with α = 1 (see (5) below). For the same φ's, the variance of the SAG update is 1/n^2 times the one of SAGA, but at the expense of having a non-zero bias. This non-zero bias might explain the complexity of the convergence proof of SAG and why the theory has not yet been extended to proximal operators. By using an unbiased update in SAGA, we are able to obtain a simple and tight theory, with better constants than SAG, as well as theoretical rates for the use of proximal operators.
" n
#
fj0 (xk ) − fj0 (φkj ) 1X 0 k
(SAG) x k+1 k
=x −γ + f (φ ) , (4)
n n i=1 i i
" n
#
1 X
(SAGA) xk+1 = xk − γ fj0 (xk ) − fj0 (φkj ) + f 0 (φk ) , (5)
n i=1 i i
" n
#
0 k 0 1X 0
(SVRG) x k+1 k
= x − γ fj (x ) − fj (x̃) + f (x̃) . (6)
n i=1 i

The SVRG update (6) is obtained by using Y = f_j'(x̃) with α = 1 (and is thus unbiased – we note that SAG is the only method that we present in the related work that has a biased update direction). The vector x̃ is not updated every step, but rather the loop over k appears inside an outer loop, where x̃ is updated at the start of each outer iteration. Essentially SAGA is at the midpoint between SVRG and SAG; it updates the φ_j value each time index j is picked, whereas SVRG updates all of the φ's as a batch. The S2GD method [8] has the same update as SVRG, just differing in how the number of inner loop iterations is chosen. We use SVRG henceforth to refer to both methods.

SVRG makes a trade-off between time and space. For the equivalent practical convergence rate it makes 2x-3x more gradient evaluations, but in exchange it does not need to store a table of gradients, only a single average gradient. The usage of SAG vs. SVRG is problem dependent. For example, for linear predictors where gradients can be stored as a reduced vector of dimension p − 1 for p classes, SAGA is preferred over SVRG both theoretically and in practice. For neural networks, where no theory is available for either method, the storage of gradients is generally more expensive than the additional backpropagations, but this is computer architecture dependent.
SVRG also has an additional parameter besides step size that needs to be set, namely the number of iterations per inner loop (m). This parameter can be set via the theory, or conservatively as m = n; however, doing so does not give anywhere near the best practical performance. Having to tune one parameter instead of two is a practical advantage for SAGA.

Finito/MISOµ

To make the relationship with other prior methods more apparent, we can rewrite the SAGA algorithm (in the non-composite case) in terms of an additional intermediate quantity u^k, with u^0 := x^0 + γ Σ_{i=1}^n f_i'(x^0), in addition to the usual x^k iterate as described previously:

SAGA: Equivalent reformulation for non-composite case: Given the value of u^k and of each f_i'(φ_i^k) at the end of iteration k, the updates for iteration k + 1 are as follows:

1. Calculate x^k:

    x^k = u^k − γ Σ_{i=1}^n f_i'(φ_i^k).   (7)

2. Update u with u^{k+1} = u^k + (1/n)(x^k − u^k).

3. Pick a j uniformly at random.

4. Take φ_j^{k+1} = x^k, and store f_j'(φ_j^{k+1}) in the table, replacing f_j'(φ_j^k). All other entries in the table remain unchanged. The quantity φ_j^{k+1} is not explicitly stored.

Eliminating u^k recovers the update (5) for x^k. We now describe how the Finito [9] and MISOµ [10] methods are closely related to SAGA. Both Finito and MISOµ use updates of the following form, for a step length γ:

    x^{k+1} = (1/n) Σ_i φ_i^k − γ Σ_{i=1}^n f_i'(φ_i^k).   (8)

The step size used is of the order of 1/(µn). To simplify the discussion of this algorithm we will introduce the notation φ̄ = (1/n) Σ_i φ_i^k.
SAGA can be interpreted as Finito, but with the quantity φ̄ replaced with u, which is updated in the same way as φ̄, but in expectation. To see this, consider how φ̄ changes in expectation:

    E[φ̄^{k+1}] = E[φ̄^k + (1/n)(x^k − φ_j^k)] = φ̄^k + (1/n)(x^k − φ̄^k).

The update is identical in expectation to the update for u, u^{k+1} = u^k + (1/n)(x^k − u^k). There are three advantages of SAGA over Finito/MISOµ. SAGA does not require strong convexity to work, it has support for proximal operators, and it does not require storing the φ_i values. MISO has proven support for proximal operators only in the case where impractically small step sizes are used [10]. The big advantage of Finito/MISOµ is that when using a per-pass re-permuted access ordering, empirical speed-ups of up to a factor of 2x have been observed. This access order can also be used with the other methods discussed, but with smaller empirical speed-ups. Finito/MISOµ is particularly useful when f_i is computationally expensive to compute compared to the extra storage costs required over the other methods.

SDCA
The Stochastic Dual Coordinate Ascent (SDCA) [2] method on the surface appears quite different
from the other methods considered. It works with the convex conjugates of the fi functions. How-
ever, in this section we show a novel transformation of SDCA into an equivalent method that only
works with primal quantities, and is closely related to the MISOµ method.

Consider the following algorithm:

SDCA algorithm in the primal

Step k + 1:

1. Pick an index j uniformly at random.

2. Compute φ_j^{k+1} = prox_γ^{f_j}(z), where γ = 1/(µn) and z = −γ Σ_{i≠j} f_i'(φ_i^k).

3. Store the gradient f_j'(φ_j^{k+1}) = (1/γ)(z − φ_j^{k+1}) in the table at location j. For i ≠ j, the table entries are unchanged (f_i'(φ_i^{k+1}) = f_i'(φ_i^k)).

At completion, return x^k = −γ Σ_i f_i'(φ_i^k).
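As an illustration only (least-squares losses are an assumption made here because their proximal operator has a simple closed form, and none of the helper names below come from the original description), a single step of this primal form of SDCA might be sketched as:

import numpy as np

def prox_least_squares(z, gamma, a, b):
    # prox_gamma^{f_j}(z) for f_j(x) = 0.5*(a^T x - b)^2, in closed form.
    return z - gamma * a * (a.dot(z) - b) / (1.0 + gamma * a.dot(a))

def sdca_primal_step(grad_table, A, b, mu, rng):
    # grad_table[i] holds f_i'(phi_i^k); A[i], b[i] define f_i(x) = 0.5*(A[i]^T x - b[i])^2.
    n = A.shape[0]
    gamma = 1.0 / (mu * n)
    j = rng.integers(n)
    z = -gamma * (grad_table.sum(axis=0) - grad_table[j])   # -gamma * sum_{i != j} f_i'(phi_i^k)
    phi_j = prox_least_squares(z, gamma, A[j], b[j])
    grad_table[j] = (z - phi_j) / gamma                      # store f_j'(phi_j^{k+1})
    return -gamma * grad_table.sum(axis=0)                   # the primal point x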

We claim that this algorithm is equivalent to the version of SDCA where exact block-coordinate maximisation is used on the dual.^1 Firstly, note that while SDCA was originally described for one-dimensional outputs (binary classification or regression), it has been expanded to cover the multi-class predictor case [11] (called Prox-SDCA there). In this case, the primal objective has a separate strongly convex regulariser, and the functions f_i are restricted to the form f_i(x) := ψ_i(X_i^T x), where X_i is a d × p feature matrix, and ψ_i is the loss function that takes a p-dimensional input, for p classes. To stay in the same general setting as the other incremental gradient methods, we work directly with the f_i(x) functions rather than the more structured ψ_i(X_i^T x). The dual objective to maximise then becomes

    D(α) = −(µ/2) || (1/(µn)) Σ_{i=1}^n α_i ||^2 − (1/n) Σ_{i=1}^n f_i^*(−α_i),

where the α_i's are d-dimensional dual variables. Generalising the exact block-coordinate maximisation update that SDCA performs to this form, we get the dual update for block j (with x^k the current primal iterate):

    α_j^{k+1} = α_j^k + argmax_{Δα_j ∈ R^d} { −f_j^*(−α_j^k − Δα_j) − (µn/2) || x^k + (1/(µn)) Δα_j ||^2 }.   (9)
In the special case where f_i(x) = ψ_i(X_i^T x), we can see that (9) gives exactly the same update as Option I of Prox-SDCA in [11, Figure 1], which operates instead on the equivalent p-dimensional dual variables α̃_i, with the relationship that α_i = X_i α̃_i.^2 As noted by Shalev-Shwartz & Zhang [11], the update (9) is actually an instance of the proximal operator of the convex conjugate of f_j. Our primal formulation exploits this fact by using a relation between the proximal operator of a function and its convex conjugate known as the Moreau decomposition:

    prox_{f^*}(v) = v − prox_f(v).

This decomposition allows us to compute the proximal operator of the conjugate via the primal proximal operator. As this is the only use in the basic SDCA method of the conjugate function, applying this decomposition allows us to completely eliminate the "dual" aspect of the algorithm, yielding the above primal form of SDCA. The dual variables are related to the primal representatives φ_i's through α_i = −f_i'(φ_i). The KKT conditions ensure that if the α_i values are dual optimal then x^k = γ Σ_i α_i as defined above is primal optimal. The same trick is commonly used to interpret Dijkstra's set intersection as a primal algorithm instead of a dual block coordinate descent algorithm [12].
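As a small numerical check of the Moreau decomposition (this example is ours, not the paper's): take f(x) = |x| coordinate-wise, so that prox_f is soft-thresholding and prox_{f^*} is projection onto [−1, 1].

import numpy as np

v = np.linspace(-3.0, 3.0, 7)
prox_f = np.sign(v) * np.maximum(np.abs(v) - 1.0, 0.0)   # prox of f(x) = |x|
prox_fstar = np.clip(v, -1.0, 1.0)                       # prox of f*, the indicator of [-1, 1]
assert np.allclose(prox_fstar, v - prox_f)               # Moreau: prox_{f*}(v) = v - prox_f(v)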
The primal form of SDCA differs from the other incremental gradient methods described in this section in that it assumes strong convexity is induced by a separate strongly convex regulariser, rather than each f_i being strongly convex. In fact, SDCA can be modified to work without a separate regulariser, giving a method that is at the midpoint between Finito and SDCA. We detail such a method in the supplementary material.

^1 More precisely, to Option I of Prox-SDCA as described in [11, Figure 1]. We will simply refer to this method as "SDCA" in this paper for brevity.
^2 This is because f_i^*(α_i) = inf_{α̃_i s.t. α_i = X_i α̃_i} ψ_i^*(α̃_i).

SDCA variants

The SDCA theory has been expanded to cover a number of other methods of performing the coordinate step [11]. These variants replace the proximal operation in our primal interpretation in the previous section with an update where φ_j^{k+1} is chosen so that

    f_j'(φ_j^{k+1}) = (1 − β) f_j'(φ_j^k) + β f_j'(x^k),

where x^k = −(1/(µn)) Σ_i f_i'(φ_i^k). The variants differ in how β ∈ [0, 1] is chosen. Note that φ_j^{k+1} does not actually have to be explicitly known, just the gradient f_j'(φ_j^{k+1}), which is the result of the above interpolation. Variant 5 by Shalev-Shwartz & Zhang [11] does not require operations on the conjugate function; it simply uses β = µn/(L + µn). The most practical variant performs a line search involving the convex conjugate to determine β. As far as we are aware, there is no simple primal equivalent of this line search. So in cases where we cannot compute the proximal operator from the standard SDCA variant, we can either introduce a tuneable parameter into the algorithm (β), or use a dual line search, which requires an efficient way to evaluate the convex conjugates of each f_i.
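A minimal sketch of the fixed-β interpolation variant (our illustration; the gradient oracle grad_f is a placeholder assumption, and β is set as in Variant 5) is:

import numpy as np

def sdca_variant5_step(grad_table, grad_f, mu, L, rng):
    # grad_table[i] holds f_i'(phi_i^k); the phi_i themselves are never stored.
    n = grad_table.shape[0]
    beta = mu * n / (L + mu * n)                 # Variant 5 choice of beta
    x = -grad_table.sum(axis=0) / (mu * n)       # x^k = -(1/(mu n)) sum_i f_i'(phi_i^k)
    j = rng.integers(n)
    grad_table[j] = (1 - beta) * grad_table[j] + beta * grad_f(j, x)
    return x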

4 Implementation
We briefly discuss some implementation concerns:
• For many problems each derivative f_i' is just a simple weighting of the ith data vector. Logistic regression and least squares have this property. In such cases, instead of storing the full derivative f_i' for each i, we need only to store the weighting constants. This reduces the storage requirements to be the same as the SDCA method in practice. A similar trick can be applied to multi-class classifiers with p classes by storing p − 1 values for each i.
• Our algorithm assumes that initial gradients are known for each fi at the starting point x0 .
Instead, a heuristic may be used where during the first pass, data-points are introduced one-
by-one, in a non-randomized order, with averages computed in terms of those data-points
processed so far. This procedure has been successfully used with SAG [1].
• The SAGA update as stated is slower than necessary when derivatives are sparse. A just-in-
time updating of u or x may be performed just as is suggested for SAG [1], which ensures
that only sparse updates are done at each iteration.
• We give the form of SAGA for the case where each f_i is strongly convex. However, in practice we usually have only convex f_i, with strong convexity in f induced by the addition of a quadratic regulariser. This quadratic regulariser may be split amongst the f_i functions evenly, to satisfy our assumptions. It is perhaps easier to use a variant of SAGA where the regulariser (µ/2)||x||^2 is explicit, such as the following modification of Equation (5):

    x^{k+1} = (1 − γµ) x^k − γ [ f_j'(x^k) − f_j'(φ_j^k) + (1/n) Σ_i f_i'(φ_i^k) ].

  For sparse implementations, instead of scaling x^k at each step, a separate scaling constant β^k may be scaled instead, with β^k x^k being used in place of x^k. This is a standard trick used with stochastic gradient methods (a sketch is given below).
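A minimal sketch of this scaling trick (our illustration with dense vectors, not the implementation from the supplementary material; grad_f is a placeholder oracle) is:

def saga_scaled_step(x, beta, grad_table, table_avg, grad_f, n, gamma, mu, rng):
    # The iterate is represented as x^k = beta * x; only the scalar beta absorbs
    # the (1 - gamma*mu) shrinkage, so x itself is touched only by the gradient step.
    j = rng.integers(n)
    g_new = grad_f(j, beta * x)                        # f_j'(x^k)
    g_old = grad_table[j].copy()
    beta *= (1.0 - gamma * mu)                         # the (1 - gamma*mu) x^k factor
    x -= gamma * (g_new - g_old + table_avg) / beta    # so that beta * x equals x^{k+1}
    grad_table[j] = g_new
    table_avg += (g_new - g_old) / n
    return x, beta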
For sparse problems with a quadratic regulariser the just-in-time updating can be a little intricate. In
the supplementary material we provide example python code showing a correct implementation that
uses each of the above tricks.

5 Theory
In this section, all expectations are taken with respect to the choice of j at iteration k + 1 and conditioned on x^k and each f_i'(φ_i^k) unless stated otherwise.
We start with two basic lemmas that just state properties of convex functions, followed by Lemma 3, which is specific to our algorithm. The proofs of each of these lemmas are in the supplementary material.
Lemma 1. Let f(x) = (1/n) Σ_{i=1}^n f_i(x). Suppose each f_i is µ-strongly convex and has Lipschitz continuous gradients with constant L. Then for all x and x^*:

    ⟨f'(x), x^* − x⟩ ≤ ((L − µ)/L) [f(x^*) − f(x)] − (µ/2) ||x^* − x||^2
                        − (1/(2Ln)) Σ_i ||f_i'(x^*) − f_i'(x)||^2 − (µ/L) ⟨f'(x^*), x − x^*⟩.
Lemma 2. We have that for all φ_i and x^*:

    (1/n) Σ_i ||f_i'(φ_i) − f_i'(x^*)||^2 ≤ 2L [ (1/n) Σ_i f_i(φ_i) − f(x^*) − (1/n) Σ_i ⟨f_i'(x^*), φ_i − x^*⟩ ].

Lemma 3. It holds that for any φ_i^k, x^*, x^k and β > 0, with w^{k+1} as defined in Equation (1):

    E ||w^{k+1} − x^k − γf'(x^*)||^2 ≤ γ^2 (1 + β^{−1}) E ||f_j'(φ_j^k) − f_j'(x^*)||^2 + γ^2 (1 + β) E ||f_j'(x^k) − f_j'(x^*)||^2
                                       − γ^2 β ||f'(x^k) − f'(x^*)||^2.

Theorem 1. With x^* the optimal solution, define the Lyapunov function T as:

    T^k := T(x^k, {φ_i^k}_{i=1}^n) := (1/n) Σ_i f_i(φ_i^k) − f(x^*) − (1/n) Σ_i ⟨f_i'(x^*), φ_i^k − x^*⟩ + c ||x^k − x^*||^2.

Then with γ = 1/(2(µn + L)), c = 1/(2γ(1 − γµ)n), and κ = 1/(γµ), we have the following expected change in the Lyapunov function between steps of the SAGA algorithm (conditional on T^k):

    E[T^{k+1}] ≤ (1 − 1/κ) T^k.
Proof. The first three terms in T^{k+1} are straightforward to simplify:

    E [ (1/n) Σ_i f_i(φ_i^{k+1}) ] = (1/n) f(x^k) + (1 − 1/n) (1/n) Σ_i f_i(φ_i^k),

    E [ −(1/n) Σ_i ⟨f_i'(x^*), φ_i^{k+1} − x^*⟩ ] = −(1/n) ⟨f'(x^*), x^k − x^*⟩ − (1 − 1/n) (1/n) Σ_i ⟨f_i'(x^*), φ_i^k − x^*⟩.

For the change in the last term of T^{k+1}, we apply the non-expansiveness of the proximal operator^3:

    c ||x^{k+1} − x^*||^2 = c ||prox_γ(w^{k+1}) − prox_γ(x^* − γf'(x^*))||^2
                          ≤ c ||w^{k+1} − x^* + γf'(x^*)||^2.

We expand the quadratic and apply E[w^{k+1}] = x^k − γf'(x^k) to simplify the inner product term:

    c E ||w^{k+1} − x^* + γf'(x^*)||^2 = c E ||x^k − x^* + w^{k+1} − x^k + γf'(x^*)||^2
      = c ||x^k − x^*||^2 + 2c E ⟨w^{k+1} − x^k + γf'(x^*), x^k − x^*⟩ + c E ||w^{k+1} − x^k + γf'(x^*)||^2
      = c ||x^k − x^*||^2 − 2cγ ⟨f'(x^k) − f'(x^*), x^k − x^*⟩ + c E ||w^{k+1} − x^k + γf'(x^*)||^2
      ≤ c ||x^k − x^*||^2 − 2cγ ⟨f'(x^k), x^k − x^*⟩ + 2cγ ⟨f'(x^*), x^k − x^*⟩ − cγ^2 β ||f'(x^k) − f'(x^*)||^2
        + (1 + β^{−1}) cγ^2 E ||f_j'(φ_j^k) − f_j'(x^*)||^2 + (1 + β) cγ^2 E ||f_j'(x^k) − f_j'(x^*)||^2.   (Lemma 3)


The value of β shall be fixed later. Now we apply Lemma 1 to bound −2cγ ⟨f'(x^k), x^k − x^*⟩ and Lemma 2 to bound E ||f_j'(φ_j^k) − f_j'(x^*)||^2:

    c E ||x^{k+1} − x^*||^2 ≤ (c − cγµ) ||x^k − x^*||^2 + ((1 + β) cγ^2 − cγ/L) E ||f_j'(x^k) − f_j'(x^*)||^2
      − (2cγ(L − µ)/L) [ f(x^k) − f(x^*) − ⟨f'(x^*), x^k − x^*⟩ ] − cγ^2 β ||f'(x^k) − f'(x^*)||^2
      + 2(1 + β^{−1}) cγ^2 L [ (1/n) Σ_i f_i(φ_i^k) − f(x^*) − (1/n) Σ_i ⟨f_i'(x^*), φ_i^k − x^*⟩ ].

^3 Note that the first equality below is the only place in the proof where we use the fact that x^* is an optimality point.

[Figure 2 shows function sub-optimality versus gradient evaluations / n for Finito (perm), Finito, SAGA, SVRG, SAG, SDCA and LBFGS.]

Figure 2: From left to right we have the MNIST, COVTYPE, IJCNN1 and MILLIONSONG datasets. Top row is the L2 regularised case, bottom row the L1 regularised case.

We can now combine the bounds that we have derived for each term in T, and pull out a fraction 1/κ of T^k (for any κ at this point). Together with the inequality −||f'(x^k) − f'(x^*)||^2 ≤ −2µ [ f(x^k) − f(x^*) − ⟨f'(x^*), x^k − x^*⟩ ] [13, Thm. 2.1.10], that yields:

    E[T^{k+1}] − T^k ≤ −(1/κ) T^k + ( 1/n − 2cγ(L − µ)/L − 2cγ^2 µβ ) [ f(x^k) − f(x^*) − ⟨f'(x^*), x^k − x^*⟩ ]
      + ( 1/κ + 2(1 + β^{−1}) cγ^2 L − 1/n ) [ (1/n) Σ_i f_i(φ_i^k) − f(x^*) − (1/n) Σ_i ⟨f_i'(x^*), φ_i^k − x^*⟩ ]
      + ( 1/κ − γµ ) c ||x^k − x^*||^2 + ( (1 + β)γ − 1/L ) cγ E ||f_j'(x^k) − f_j'(x^*)||^2.   (10)
Note that each of the terms in square brackets is positive, and it can be readily verified that our assumed values for the constants (γ = 1/(2(µn + L)), c = 1/(2γ(1 − γµ)n), and κ = 1/(γµ)), together with β = (2µn + L)/L, ensure that each of the quantities in round brackets is non-positive (the constants were determined by setting all the round brackets to zero except the second one — see [14] for the details).

Adaptivity to strong convexity result: Note that when using the γ = 1/(3L) step size, the same c as above can be used with β = 2 and 1/κ = min{1/(4n), µ/(3L)} to ensure non-positive terms.

Corollary 1. Note that c ||x^k − x^*||^2 ≤ T^k, and therefore by chaining the expectations, plugging in the constants explicitly and using µ(n − 0.5) ≤ µn to simplify the expression, we get:

    E ||x^k − x^*||^2 ≤ (1 − µ/(2(µn + L)))^k [ ||x^0 − x^*||^2 + n/(µn + L) ( f(x^0) − ⟨f'(x^*), x^0 − x^*⟩ − f(x^*) ) ].

Here the expectation is over all choices of index j^k up to step k.

6 Experiments
We performed a series of experiments to validate the effectiveness of SAGA. We tested a binary
classifier on MNIST, COVTYPE, IJCNN1 and a least squares predictor on MILLIONSONG. Details
of these datasets can be found in [9]. We used the same code base for each method, just changing the
main update rule. SVRG was tested with the recalibration pass used every n iterations, as suggested
in [8]. Each method had its step size parameter chosen so as to give the fastest convergence.
We tested with an L2 regulariser, which all methods support, and with an L1 regulariser on a subset of the methods. The results are shown in Figure 2. We can see that Finito (perm) performs the best on a per epoch equivalent basis, but it can be the most expensive method per step. SVRG is similarly fast on a per epoch basis, but since its number of gradient evaluations per epoch is double that of the other methods for this problem, it is middle of the pack. SAGA can be seen to perform similarly to the non-permuted Finito case, and to SDCA. Note that SAG is slower than the other methods at the beginning. To get the optimal results for SAG, an adaptive step size rule needs to be used rather than the constant step size we used. In general, these tests confirm that the choice of method should be made based on the properties discussed in Section 3, rather than on convergence rate alone.

References
[1] Mark Schmidt, Nicolas Le Roux, and Francis Bach. Minimizing finite sums with the stochastic
average gradient. Technical report, INRIA, hal-0086005, 2013.
[2] Shai Shalev-Shwartz and Tong Zhang. Stochastic dual coordinate ascent methods for regular-
ized loss minimization. JMLR, 14:567–599, 2013.
[3] Paul Tseng and Sangwoon Yun. Incrementally updated gradient methods for constrained and
regularized optimization. Journal of Optimization Theory and Applications, 160:832–853,
2014.
[4] Lin Xiao and Tong Zhang. A proximal stochastic gradient method with progressive variance re-
duction. Technical report, Microsoft Research, Redmond and Rutgers University, Piscataway,
NJ, 2014.
[5] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive vari-
ance reduction. NIPS, 2013.
[6] Taiji Suzuki. Stochastic dual coordinate ascent with alternating direction method of multipliers.
Proceedings of The 31st International Conference on Machine Learning, 2014.
[7] Evan Greensmith, Peter L. Bartlett, and Jonathan Baxter. Variance reduction techniques for
gradient estimates in reinforcement learning. JMLR, 5:1471–1530, 2004.
[8] Jakub Konečný and Peter Richtárik. Semi-stochastic gradient descent methods. ArXiv e-prints,
arXiv:1312.1666, December 2013.
[9] Aaron Defazio, Tiberio Caetano, and Justin Domke. Finito: A faster, permutable incremental
gradient method for big data problems. Proceedings of the 31st International Conference on
Machine Learning, 2014.
[10] Julien Mairal. Incremental majorization-minimization optimization with application to large-
scale machine learning. Technical report, INRIA Grenoble Rhône-Alpes / LJK Laboratoire
Jean Kuntzmann, 2014.
[11] Shai Shalev-Shwartz and Tong Zhang. Accelerated proximal stochastic dual coordinate ascent
for regularized loss minimization. Technical report, The Hebrew University, Jerusalem and
Rutgers University, NJ, USA, 2013.
[12] Patrick Combettes and Jean-Christophe Pesquet. Proximal Splitting Methods in Signal Pro-
cessing. In Fixed-Point Algorithms for Inverse Problems in Science and Engineering. Springer,
2011.
[13] Yu. Nesterov. Introductory Lectures On Convex Programming. Springer, 1998.
[14] Aaron Defazio. New Optimization Methods for Machine Learning. PhD thesis, (draft under
examination) Australian National University, 2014. http://www.aarondefazio.com/pubs.html.

Appendix

A   The SDCA/Finito Midpoint Algorithm

Using Lagrangian duality theory, SDCA can be shown at step k as minimising the following lower bound:

    A_k(x) = (1/n) f_j(x) + (1/n) Σ_{i≠j} [ f_i(φ_i^k) + ⟨f_i'(φ_i^k), x − φ_i^k⟩ ] + (µ/2) ||x||^2.

Instead of directly including the regulariser in this bound, we can use the standard strong convexity lower bound for each f_i, by removing (µ/2)||x||^2 and changing the expression in the summation to f_i(φ_i^k) + ⟨f_i'(φ_i^k), x − φ_i^k⟩ + (µ/2)||x − φ_i||^2. The transformation to having strong convexity within the f_i functions yields the following simple modification to the algorithm:

    φ_j^{k+1} = prox_{(µ(n−1))^{−1}}^{f_j}(z),  where:

    z = (1/(n − 1)) Σ_{i≠j} φ_i^k − (1/(µ(n − 1))) Σ_{i≠j} f_i'(φ_i^k).
It can be shown that after this update:

    x^{k+1} = φ_j^{k+1} = (1/n) Σ_i φ_i^{k+1} − (1/(µn)) Σ_i f_i'(φ_i^{k+1}).

Now the similarity to Finito is apparent if this equation is compared to Equation 8: x^{k+1} = (1/n) Σ_i φ_i^k − γ Σ_{i=1}^n f_i'(φ_i^k). The only difference is that the vectors on the right hand side of the equation are at their values at step k + 1 instead of k. Note that there is a circular dependency here, as φ_j^{k+1} := x^{k+1} but φ_j^{k+1} appears in the definition of x^{k+1}. Solving the proximal operator is the resolution of the circular dependency. This mid-point between Finito and SDCA is interesting in its own right, as it appears experimentally to have similar robustness to permuted orderings as Finito, but, like SDCA, it has no tunable parameters.

When the proximal operator above is fast to compute, say on the same order as just evaluating f_j, then SDCA can be the best method among those discussed. It is a little slower than the other methods discussed here, but it has no tunable parameters at all. It is also the only choice when each f_i is not differentiable. The major disadvantage of SDCA is that it cannot handle non-strongly convex problems directly, although, like most methods, adding a small amount of quadratic regularisation can be used to recover a convergence rate. It is also not adapted to use proximal operators for the regulariser in the composite objective case. The requirement of computing the proximal operator of each loss f_i initially appears to be a big disadvantage; however, there are variants of SDCA that remove this requirement, but they introduce additional downsides.

B   Lemmas

Lemma A1. Let f be µ-strongly convex and have Lipschitz continuous gradients with constant L. Then we have for all x and y:

    f(x) ≥ f(y) + ⟨f'(y), x − y⟩ + (1/(2(L − µ))) ||f'(x) − f'(y)||^2
            + (µL/(2(L − µ))) ||y − x||^2 + (µ/(L − µ)) ⟨f'(x) − f'(y), y − x⟩.

Proof. Define the function g as g(x) = f(x) − (µ/2)||x||^2. Then the gradient is g'(x) = f'(x) − µx. g has a Lipschitz gradient with constant L − µ. By convexity, we have [1, Thm. 2.1.5]:

    g(x) ≥ g(y) + ⟨g'(y), x − y⟩ + (1/(2(L − µ))) ||g'(x) − g'(y)||^2.

Substituting in the definition of g and g', and simplifying the terms gives the result.

Lemma 1. Let f(x) = (1/n) Σ_{i=1}^n f_i(x). Suppose each f_i is µ-strongly convex and has Lipschitz continuous gradients with constant L. Then for all x and x^*:

    ⟨f'(x), x^* − x⟩ ≤ ((L − µ)/L) [f(x^*) − f(x)] − (µ/2) ||x^* − x||^2 − (1/(2Ln)) Σ_i ||f_i'(x^*) − f_i'(x)||^2 − (µ/L) ⟨f'(x^*), x − x^*⟩.

Proof. This is a straight-forward corollary of Lemma A1, using y = x^*, and averaging over the f_i functions.
Lemma 2. We have that for all φ_i and x^*:

    (1/n) Σ_i ||f_i'(φ_i) − f_i'(x^*)||^2 ≤ 2L [ (1/n) Σ_i f_i(φ_i) − f(x^*) − (1/n) Σ_i ⟨f_i'(x^*), φ_i − x^*⟩ ].

Proof. Apply the standard inequality f(y) ≥ f(x) + ⟨f'(x), y − x⟩ + (1/(2L)) ||f'(x) − f'(y)||^2, with y = φ_i and x = x^*, for each f_i, and sum.
Lemma 3. It holds that for any φ_i^k, x^*, x^k and β > 0, with w^{k+1} as defined in Equation 1:

    E ||w^{k+1} − x^k − γf'(x^*)||^2 ≤ γ^2 (1 + β^{−1}) E ||f_j'(φ_j^k) − f_j'(x^*)||^2 + γ^2 (1 + β) E ||f_j'(x^k) − f_j'(x^*)||^2 − γ^2 β ||f'(x^k) − f'(x^*)||^2.

Proof. We follow a similar argument as occurs in the SVRG proof [2] for this term, but with a tighter argument. The tightening comes from using ||x + y||^2 ≤ (1 + β^{−1}) ||x||^2 + (1 + β) ||y||^2 instead of the simpler β = 1 case they use. The other key trick is the use of the standard variance decomposition E[||X − E[X]||^2] = E[||X||^2] − ||E[X]||^2 three times. Writing the quantity inside the norm below as γX, note that E[X] = f'(x^*) − f'(x^k):

    E ||w^{k+1} − x^k + γf'(x^*)||^2
      = E || −(γ/n) Σ_i f_i'(φ_i^k) + γf'(x^*) + γ ( f_j'(φ_j^k) − f_j'(x^k) ) ||^2
      = γ^2 E || [ f_j'(φ_j^k) − f_j'(x^*) − (1/n) Σ_i f_i'(φ_i^k) + f'(x^*) ] − [ f_j'(x^k) − f_j'(x^*) − f'(x^k) + f'(x^*) ] ||^2 + γ^2 ||f'(x^k) − f'(x^*)||^2
      ≤ γ^2 (1 + β^{−1}) E || f_j'(φ_j^k) − f_j'(x^*) − (1/n) Σ_i f_i'(φ_i^k) + f'(x^*) ||^2
        + γ^2 (1 + β) E || f_j'(x^k) − f_j'(x^*) − f'(x^k) + f'(x^*) ||^2 + γ^2 ||f'(x^k) − f'(x^*)||^2

    (use variance decomposition twice more):

      ≤ γ^2 (1 + β^{−1}) E ||f_j'(φ_j^k) − f_j'(x^*)||^2 + γ^2 (1 + β) E ||f_j'(x^k) − f_j'(x^*)||^2 − γ^2 β ||f'(x^k) − f'(x^*)||^2.

C   Non-strongly-convex Problems

Theorem 2. When each f_i is convex, using γ = 1/(3L), we have for x̄^k = (1/k) Σ_{t=1}^k x^t that:

    E [F(x̄^k)] − F(x^*) ≤ (4n/k) [ (2L/n) ||x^0 − x^*||^2 + f(x^0) − ⟨f'(x^*), x^0 − x^*⟩ − f(x^*) ].

Here the expectation is over all choices of index j^k up to step k.


Proof. A more detailed version of this proof is available in [3]. We proceed by using a similar argument as in Theorem 1, but we add an additional α ||x^k − x^*||^2 together with the existing c ||x^k − x^*||^2 term in the Lyapunov function.

We will bound α ||x^k − x^*||^2 in a different manner to c ||x^k − x^*||^2. Define Δ = −(1/γ)(w^{k+1} − x^k) − f'(x^k), the difference between our approximation to the gradient at x^k and the true gradient. Then instead of using the non-expansiveness property at the beginning, we use a result proved for prox-SVRG [4, 2nd eq. on p.12]:

    α E ||x^{k+1} − x^*||^2 ≤ α ||x^k − x^*||^2 − 2αγ E [F(x^{k+1}) − F(x^*)] + 2αγ^2 E ||Δ||^2.

Although their quantity Δ is different, they only use the property that E[Δ] = 0 to prove the above equation. A full proof of this property for the SAGA algorithm that follows their argument appears in [3].

To bound the Δ term, a small modification of the argument in Lemma 3 can be used, giving:

    E ||Δ||^2 ≤ (1 + β^{−1}) E ||f_j'(φ_j^k) − f_j'(x^*)||^2 + (1 + β) E ||f_j'(x^k) − f_j'(x^*)||^2.

Applying this gives:

    α E ||x^{k+1} − x^*||^2 ≤ α ||x^k − x^*||^2 − 2αγ E [F(x^{k+1}) − F(x^*)]
      + 2(1 + β^{−1}) αγ^2 E ||f_j'(φ_j^k) − f_j'(x^*)||^2 + 2(1 + β) αγ^2 E ||f_j'(x^k) − f_j'(x^*)||^2.

As in Theorem 1, we then apply Lemma 2 to bound E ||f_j'(φ_j^k) − f_j'(x^*)||^2. Combining with the rest of the Lyapunov function as was derived in Theorem 1 gives (we basically add the α terms to inequality (10) with µ = 0):

    E[T^{k+1}] − T^k
    ≤ ( 1/n − 2cγ ) [ f(x^k) − f(x^*) − ⟨f'(x^*), x^k − x^*⟩ ] − 2αγ E [F(x^{k+1}) − F(x^*)]
      + ( 4(1 + β^{−1}) αLγ^2 + 2(1 + β^{−1}) cLγ^2 − 1/n ) [ (1/n) Σ_i f_i(φ_i^k) − f(x^*) − (1/n) Σ_i ⟨f_i'(x^*), φ_i^k − x^*⟩ ]
      + ( (1 + β) cγ + 2(1 + β) αγ − c/L ) γ E ||f_j'(x^k) − f_j'(x^*)||^2.

As before, the terms in square brackets are positive by convexity. Given that our choice of step size is γ = 1/(3L) (to match the adaptive-to-strong-convexity step size), we can set the three round brackets to zero by using β = 1, c = 3L/(2n) and α = 3L/(8n). We thus obtain:

    E[T^{k+1}] − T^k ≤ −(1/(4n)) E [F(x^{k+1}) − F(x^*)].
These expectations are conditional on information from step k. We now take the expectation with respect to all previous steps, yielding E[T^{k+1}] − E[T^k] ≤ −(1/(4n)) E [F(x^{k+1}) − F(x^*)], where all expectations are unconditional. Further negating and summing for k from 0 to k − 1 results in telescoping of the T terms, giving:

    E [ (1/(4n)) Σ_{t=1}^k ( F(x^t) − F(x^*) ) ] ≤ T^0 − E[T^k].

We can drop the −E[T^k] term since T^k is always positive. Then we apply convexity to pull the summation inside of F, and multiply through by 4n/k, giving:

    E [ F( (1/k) Σ_{t=1}^k x^t ) − F(x^*) ] ≤ E [ (1/k) Σ_{t=1}^k ( F(x^t) − F(x^*) ) ] ≤ (4n/k) T^0.

We get a (c + α) = 15L/(8n) ≤ 2L/n term that we use in T^0 for simplicity.

D Example Code for Sparse Least Squares & Ridge Regression


The SAGA method is quite easy to implement for dense gradients; however, the implementation for sparse gradient problems can be tricky. The main complication is the need for just-in-time updating of the elements of the iterate vector. This is needed to avoid having to do any full dense vector operations at each iteration. We provide below a simple implementation for the case of least-squares problems that illustrates how to correctly do this. The code is in the compiled Python (Cython) language.

import random
import numpy as np
cimport numpy as np

cimport cython
from cython.view cimport array as cvarray

# Performs the lagged update of x by g.
cdef inline lagged_update(long k, double[:] x, double[:] g, unsigned long[:] lag,
                          long[:] yindices, int ylen, double[:] lag_scaling, double a):
    cdef unsigned int i
    cdef long ind
    cdef unsigned long lagged_amount = 0

    for i in range(ylen):
        ind = yindices[i]
        lagged_amount = k - lag[ind]
        lag[ind] = k
        x[ind] += lag_scaling[lagged_amount] * (a * g[ind])

# Performs x += a*y, where x is dense and y is sparse.
cdef inline add_weighted(double[:] x, double[:] ydata, long[:] yindices, int ylen, double a):
    cdef unsigned int i

    for i in range(ylen):
        x[yindices[i]] += a * ydata[i]

# Dot product of a dense vector with a sparse vector.
cdef inline spdot(double[:] x, double[:] ydata, long[:] yindices, int ylen):
    cdef unsigned int i
    cdef double v = 0.0

    for i in range(ylen):
        v += ydata[i] * x[yindices[i]]

    return v

def saga_lstsq(A, double[:] b, unsigned int maxiter, props):
    # temporaries
    cdef double[:] ydata
    cdef long[:] yindices
    cdef unsigned int i, j, epoch, lagged_amount
    cdef long indstart, indend, ylen, ind
    cdef double cnew, Aix, cchange, gscaling

    # Data points are stored in columns in CSC format.
    cdef double[:] data = A.data
    cdef long[:] indices = A.indices
    cdef long[:] indptr = A.indptr

    cdef unsigned int m = A.shape[0]  # dimensions
    cdef unsigned int n = A.shape[1]  # datapoints
    cdef double[:] xk = np.zeros(m)
    cdef double[:] gk = np.zeros(m)

    cdef double eta = props['eta']  # Inverse step size = 1/gamma
    cdef double reg = props.get('reg', 0.0)  # Default 0
    cdef double betak = 1.0  # Scaling factor for xk.

    # Tracks for each entry of x, what iteration it was last updated at.
    cdef unsigned long[:] lag = np.zeros(m, dtype='I')

    # Initialize gradients
    cdef double gd = -1.0/n
    for i in range(n):
        indstart = indptr[i]
        indend = indptr[i+1]
        ydata = data[indstart:indend]
        yindices = indices[indstart:indend]
        ylen = indend - indstart
        add_weighted(gk, ydata, yindices, ylen, gd*b[i])

    # This is just a table of the sums of the geometric series (1 - reg/eta).
    # It is used to correctly do the just-in-time updating when
    # L2 regularisation is used.
    cdef double[:] lag_scaling = np.zeros(n*maxiter + 1)
    lag_scaling[0] = 0.0
    lag_scaling[1] = 1.0
    cdef double geosum = 1.0
    cdef double mult = 1.0 - reg/eta
    for i in range(2, n*maxiter + 1):
        geosum *= mult
        lag_scaling[i] = lag_scaling[i-1] + geosum

    # For least-squares, we only need to store a single
    # double for each data point, rather than a full gradient vector.
    # The value stored is the A_i * betak * x product.
    cdef double[:] c = np.zeros(n)

    cdef unsigned long k = 0  # Current iteration number

    for epoch in range(maxiter):

        for j in range(n):
            if epoch == 0:
                i = j
            else:
                i = np.random.randint(0, n)

            # Selects the (sparse) column of the data matrix containing datapoint i.
            indstart = indptr[i]
            indend = indptr[i+1]
            ydata = data[indstart:indend]
            yindices = indices[indstart:indend]
            ylen = indend - indstart

            # Apply the missed updates to xk just-in-time.
            lagged_update(k, xk, gk, lag, yindices, ylen, lag_scaling, -1.0/(eta*betak))
            Aix = betak * spdot(xk, ydata, yindices, ylen)

            cnew = Aix
            cchange = cnew - c[i]
            c[i] = cnew
            betak *= 1.0 - reg/eta

            # Update xk with sparse step bit (with betak scaling).
            add_weighted(xk, ydata, yindices, ylen, -cchange/(eta*betak))

            k += 1

            # Perform the gradient-average part of the step.
            lagged_update(k, xk, gk, lag, yindices, ylen, lag_scaling, -1.0/(eta*betak))

            # Update the gradient average.
            add_weighted(gk, ydata, yindices, ylen, cchange/n)

    # Perform the just-in-time updates for the whole xk vector, so that all entries are up-to-date.
    gscaling = -1.0/(eta*betak)
    for ind in range(m):
        lagged_amount = k - lag[ind]
        lag[ind] = k
        xk[ind] += lag_scaling[lagged_amount]*gscaling*gk[ind]

    return betak * np.asarray(xk)

References
[1] Yu. Nesterov. Introductory Lectures On Convex Programming. Springer, 1998.

[2] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. NIPS, 2013.
[3] Aaron Defazio. New Optimization Methods for Machine Learning. PhD thesis, (draft under examination) Australian
National University, 2014. http://www.aarondefazio.com/pubs.html.
[4] Lin Xiao and Tong Zhang. A proximal stochastic gradient method with progressive variance reduction. Technical report,
Microsoft Research, Redmond and Rutgers University, Piscataway, NJ, 2014.
