Semi-stochastic coordinate descent

We propose a novel stochastic gradient method, semi-stochastic coordinate descent (S2CD), for the problem of minimizing a strongly convex function represented as the average of a large number of smooth convex functions: f(x) = (1/n) Σ_{i=1}^n f_i(x). Our method first performs a deterministic step (computation of the gradient of f at the starting point), followed by a large number of stochastic steps. The process is repeated a few times, with the last stochastic iterate becoming the new starting point where the deterministic step is taken. The novelty of our method is in how the stochastic steps are performed. In each such step, we pick a random function f_i and a random coordinate j, both using non-uniform distributions, and update a single coordinate of the decision vector only, based on the computation of the j-th partial derivative of f_i at two different points. Each random step of the method constitutes an unbiased estimate of the gradient of f and, moreover, the squared norm of the steps goes to zero in expectation, meaning that the stochastic estimate of the gradient progressively improves. The computational complexity of the method is the sum of two terms: O(n log(1/ǫ)) evaluations of gradients ∇f_i and O(κ̂ log(1/ǫ)) evaluations of partial derivatives ∇_j f_i, where κ̂ is a novel condition number.


Introduction
In this paper we study the problem of unconstrained minimization of a strongly convex function represented as the average of a large number of smooth convex functions:

f(x) = (1/n) Σ_{i=1}^n f_i(x).    (1)

Many computational problems in various disciplines are of this form. In machine learning, f_i(x) represents the loss/risk of classifier x ∈ R^d on data sample i, f represents the empirical risk (= average loss), and the goal is to find a predictor minimizing f. Often, an L2-regularizer of the form (µ/2)‖x‖², for µ > 0, is added to the loss, making it strongly convex and hence easier to minimize.
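As a concrete (illustrative, not from the paper) instance of this finite-sum structure, consider regularized least squares, where f_i(x) = (a_i^T x − b_i)²/2 + (µ/2)‖x‖²; the sketch below sets up such an objective and checks its gradient numerically:

```python
import numpy as np

# Hypothetical finite-sum instance: regularized least squares.
# f_i(x) = 0.5*(a_i^T x - b_i)^2 + 0.5*mu*||x||^2,  f(x) = (1/n) * sum_i f_i(x).
rng = np.random.default_rng(0)
n, d, mu = 5, 3, 0.1
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

def f_i(x, i):
    return 0.5 * (A[i] @ x - b[i]) ** 2 + 0.5 * mu * (x @ x)

def f(x):
    # f is the average of the n component functions.
    return sum(f_i(x, i) for i in range(n)) / n

def grad_f(x):
    # Average of the individual gradients: (1/n)*A^T(Ax - b) + mu*x.
    return A.T @ (A @ x - b) / n + mu * x

# Central-difference check that grad_f matches f.
x = rng.standard_normal(d)
e = np.eye(d)
eps = 1e-6
num = np.array([(f(x + eps * e[j]) - f(x - eps * e[j])) / (2 * eps) for j in range(d)])
assert np.allclose(num, grad_f(x), atol=1e-5)
```

The µ-term makes each f_i (and hence f) strongly convex, matching the assumption below.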
Assumptions. We assume that the functions f_i : R^d → R are differentiable and convex, with Lipschitz continuous partial derivatives. Formally, we assume that for each i ∈ [n] := {1, 2, ..., n} and j ∈ [d] := {1, 2, ..., d} there exists L_ij ≥ 0 such that for all x ∈ R^d and h ∈ R,

f_i(x + h e_j) ≤ f_i(x) + h ⟨∇f_i(x), e_j⟩ + (L_ij/2) h²,    (2)

where e_j is the j-th standard basis vector in R^d, ∇f_i(x) ∈ R^d is the gradient of f_i at point x and ⟨·, ·⟩ is the standard inner product. This assumption was recently used in the analysis of the accelerated coordinate descent method APPROX [3]. We further assume that f is µ-strongly convex. That is, we assume that there exists µ > 0 such that for all x, y ∈ R^d,

f(x) ≥ f(y) + ⟨∇f(y), x − y⟩ + (µ/2) ‖x − y‖².    (3)

Context. Batch methods such as gradient descent (GD) enjoy a fast (linear) convergence rate: to achieve ǫ-accuracy, GD needs O(κ log(1/ǫ)) iterations, where κ is a condition number. The drawback of GD is that in each iteration one needs to compute the gradient of f, which requires a pass through the entire dataset. This is prohibitive to do many times if n is very large. Stochastic gradient descent (SGD) in each iteration computes the gradient of a single randomly chosen function f_i only (this constitutes an unbiased, but noisy, estimate of the gradient of f) and makes a step in that direction [17, 9, 23]. The rate of convergence of SGD is slower, O(1/ǫ), but the cost of each iteration is independent of n. Variants with non-uniform selection probabilities were considered in [24]; a mini-batch variant (for SVMs with hinge loss) was analyzed in [21].
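For the squared-loss example above (an illustrative instance, not the paper's), the coordinate-wise smoothness assumption holds with L_ij = a_ij², and in fact with equality; the check below verifies the quadratic upper bound numerically:

```python
import numpy as np

# For f_i(x) = 0.5*(a_i^T x - b_i)^2, the j-th partial derivative is
# a_ij*(a_i^T x - b_i), which is Lipschitz in the j-th coordinate with
# constant L_ij = a_ij^2; the quadratic upper bound then holds with equality.
rng = np.random.default_rng(1)
n, d = 4, 3
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)
L = A ** 2                      # L[i, j] = L_ij

def f_i(x, i):
    return 0.5 * (A[i] @ x - b[i]) ** 2

def partial(x, i, j):
    return A[i, j] * (A[i] @ x - b[i])

x = rng.standard_normal(d)
e = np.eye(d)
for i in range(n):
    for j in range(d):
        for h in (-0.7, 0.3, 1.5):
            lhs = f_i(x + h * e[j], i)
            rhs = f_i(x, i) + h * partial(x, i, j) + 0.5 * L[i, j] * h ** 2
            assert lhs <= rhs + 1e-9    # coordinate-wise smoothness bound
```

Note how L_ij = 0 exactly when the i-th data row has a zero in coordinate j, which is the sparsity structure the method exploits later.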
Recently, there has been progress in designing algorithms that achieve the fast O(log(1/ǫ)) rate without the need to scan the entire dataset in each iteration. The first class of methods to have achieved this is that of stochastic/randomized coordinate descent methods.
When applied to (1), coordinate descent methods (CD) [11, 16] can, like SGD, be seen as an attempt to keep the benefits of GD (fast linear convergence) while reducing the complexity of each iteration. A CD method computes only a single partial derivative ∇_j f(x) at each iteration and updates a single coordinate of the vector x only. When the coordinate j is chosen uniformly at random, the vector d ∇_j f(x) e_j is also an unbiased estimate of the gradient. However, unlike the SGD estimate, its variance is low. Indeed, as one approaches the optimum, partial derivatives decrease to zero. While CD methods are able to obtain linear convergence, they typically need O((d/µ) log(1/ǫ)) iterations when applied to (1) directly. CD methods typically significantly outperform GD, especially on sparse problems with a very large number of variables/coordinates [11, 16].
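A minimal CD iteration on a strongly convex quadratic (a hypothetical test problem, with exact coordinate-wise stepsizes 1/Q_jj) can be sketched as:

```python
import numpy as np

# Randomized coordinate descent on f(x) = 0.5*x^T Q x - c^T x.
# Each step reads one partial derivative and updates one coordinate.
rng = np.random.default_rng(2)
d = 5
M = rng.standard_normal((d, d))
Q = M @ M.T + np.eye(d)          # positive definite Hessian
c = rng.standard_normal(d)
x_star = np.linalg.solve(Q, c)   # unique minimizer

x = np.zeros(d)
for _ in range(2000):
    j = rng.integers(d)          # uniform coordinate choice
    grad_j = Q[j] @ x - c[j]     # j-th partial derivative only
    x[j] -= grad_j / Q[j, j]     # Q_jj is the coordinate Lipschitz constant
assert np.linalg.norm(x - x_star) < 1e-6
```

Each step touches one row of Q, illustrating why the per-iteration cost is a factor ~d cheaper than a full gradient step.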
An alternative to applying CD to (1) is to apply it to the dual problem. This is possible under certain additional structural assumptions on the functions f_i. This is the strategy employed by stochastic dual coordinate ascent (SDCA) [20, 12], whose rate is O((n + κ) log(1/ǫ)). The condition number κ here is different from (and larger than) the condition number appearing in the rate of GD. Despite this, this is a vast improvement on the rates achieved by both GD and SGD, and the method indeed typically performs much better in practice. Accelerated [19] and mini-batch [21] variants of SDCA have also been proposed. We refer the reader to QUARTZ [12] for a general analysis involving the update of a random subset of dual coordinates, following an arbitrary distribution.
Recently, there has been progress in designing primal methods which match the fast rate of SDCA. Stochastic average gradient (SAG) [18] and, more recently, SAGA [1] move in a direction composed of old stochastic gradients. The semi-stochastic gradient descent (S2GD) [6, 8] and stochastic variance reduced gradient (SVRG) [5, 22] methods employ a different strategy: one first computes the gradient of f, followed by O(κ) steps where only stochastic gradients are computed. These are used to estimate the change of the gradient, and it is this direction, combining the old gradient and the new stochastic gradient information, which is used in the update.
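The variance-reduced direction shared by this family of methods can be illustrated as follows (a generic sketch, not the exact update rule of any one cited method):

```python
import numpy as np

# SVRG/S2GD-style direction: fresh stochastic gradient at y, corrected by the
# stored stochastic/full gradients at a snapshot point x_ref:
#   g = grad_f_i(y) - grad_f_i(x_ref) + grad_f(x_ref).
rng = np.random.default_rng(7)
n, d = 8, 3
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

def grad_i(x, i):
    return A[i] * (A[i] @ x - b[i])

def full_grad(x):
    return A.T @ (A @ x - b) / n

x_ref = rng.standard_normal(d)   # snapshot; its full gradient is computed once
stored = full_grad(x_ref)
y = rng.standard_normal(d)

# Averaging the direction over i recovers grad f(y): the estimator is unbiased,
# and its variance vanishes as y and x_ref both approach the minimizer.
avg = np.mean([grad_i(y, i) - grad_i(x_ref, i) + stored for i in range(n)], axis=0)
assert np.allclose(avg, full_grad(y))
```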
Main result. In this work we develop a new method, semi-stochastic coordinate descent (S2CD), for solving (1), enjoying a fast rate similar to methods such as SDCA, SAG, S2GD, SVRG, MISO, SAGA, mS2GD and QUARTZ. S2CD can be seen as a hybrid between S2GD and CD. In particular, the complexity of our method is the sum of two terms: O(n log(1/ǫ)) evaluations of ∇f_i (that is, O(log(1/ǫ)) evaluations of the gradient of f) and O(κ̂ log(1/ǫ)) evaluations of ∇_j f_i for randomly chosen functions f_i and randomly chosen coordinates j, where κ̂ is a new condition number which is larger than the condition number κ appearing in the aforementioned methods. However, note that κ̂ enters the complexity only in the term involving the cost of the evaluation of a partial derivative ∇_j f_i, which can be substantially smaller than the evaluation cost of ∇f_i. Hence, our complexity result can be both better or worse than previous results, depending on whether the increase of the condition number can or cannot be compensated by the lower cost of the stochastic steps based on the evaluation of partial derivatives.
Outline. The paper is organized as follows. In Section 2 we describe the S2CD algorithm, and in Section 3 we state a key lemma and our main complexity result. The proof of the lemma is provided in Section 4 and the proof of the main result in Section 5.

S2CD Algorithm
In this section we describe the Semi-Stochastic Coordinate Descent method (Algorithm 1).

Algorithm 1 Semi-Stochastic Coordinate Descent (S2CD)
parameters: m (max # of stochastic steps per epoch); h > 0 (stepsize parameter); starting point x_0 ∈ R^d
for k = 0, 1, 2, ... do
    Compute and store ∇f(x_k)
    Initialize the inner loop: y_{k,0} ← x_k
    Let t_k = t ∈ {1, 2, ..., m} with probability (1 − µh)^{m−t}/β
    for t = 0 to t_k − 1 do
        Pick coordinate j ∈ {1, 2, ..., d} with probability p_j
        Pick function index i from the set {i : L_ij > 0} with probability q_ij
        Update the j-th coordinate: y_{k,t+1} ← y_{k,t} − h p_j^{-1} (∇_j f(x_k) + (1/(n q_ij)) (∇_j f_i(y_{k,t}) − ∇_j f_i(x_k))) e_j
    end for
    Reset the starting point: x_{k+1} ← y_{k,t_k}
end for

The method has an outer loop (an "epoch"), indexed by counter k, and an inner loop, indexed by t. At the beginning of epoch k, we compute and store the gradient of f at x_k. Subsequently, S2CD enters the inner loop in which a sequence of vectors y_{k,t} for t = 0, 1, ..., t_k is computed in a stochastic way, starting from y_{k,0} = x_k. The number t_k of stochastic steps in the inner loop is random, following a geometric law:

P(t_k = t) = (1 − µh)^{m−t}/β,  t ∈ {1, 2, ..., m},  where β := Σ_{t=1}^m (1 − µh)^{m−t}.

In each step of the inner loop, we seek to compute y_{k,t+1}, given y_{k,t}. In order to do so, we sample coordinate j with probability p_j and subsequently sample i with probability q_ij, where the probabilities are given by

p_j := v_j / Σ_{l=1}^d v_l,   q_ij := ω_i L_ij / (n v_j),   v_j := (1/n) Σ_{i=1}^n ω_i L_ij,   ω_i := |{j : L_ij > 0}|.

Note that L_ij = 0 means that function f_i does not depend on the j-th coordinate of x. Hence, ω_i is the number of coordinates function f_i depends on, a measure of the sparsity of the data. It can be shown that f has a 1-Lipschitz gradient with respect to the weighted Euclidean norm with weights {v_j} ([3, Theorem 1]). Hence, we sample coordinate j proportionally to this weight v_j. Note that p_ij := p_j q_ij is the joint probability of choosing the pair (i, j).
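The construction of these sampling distributions can be sketched as follows; the specific matrix is made up, and the formulas are one instantiation consistent with the description above (ω_i counting nonzeros per row, v_j a weighted column aggregate, j sampled proportionally to v_j, and i given j proportionally to ω_i L_ij):

```python
import numpy as np

# Build p_j and q_ij from a hypothetical matrix of coordinate Lipschitz
# constants L[i, j]; zeros encode the sparsity pattern of the data.
L = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 1.0],
              [4.0, 0.0, 0.0]])
n, d = L.shape
omega = (L > 0).sum(axis=1)               # omega_i = #{j : L_ij > 0}
w = omega[:, None] * L                    # w[i, j] = omega_i * L_ij
v = w.sum(axis=0) / n                     # v_j = (1/n) * sum_i omega_i * L_ij
p = v / v.sum()                           # marginal over coordinates j
q = w / w.sum(axis=0)                     # q[i, j] = conditional of i given j

assert np.isclose(p.sum(), 1.0)
assert np.allclose(q.sum(axis=0), 1.0)    # for each j: sum_i q_ij = 1
# The joint probability p_j * q_ij is then proportional to omega_i * L_ij.
joint = p[None, :] * q
assert np.allclose(joint, w / w.sum())
```

Note that q_ij is automatically zero whenever L_ij = 0, so only functions that actually depend on coordinate j can be selected once j has been drawn.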
Having sampled coordinate j and function index i, we compute two partial derivatives: ∇_j f_i(x_k) and ∇_j f_i(y_{k,t}) (we compress the notation here by writing ∇_j f_i(x) instead of ⟨∇f_i(x), e_j⟩), and combine these with the pre-computed value ∇_j f(x_k) to form an update of the form

y_{k,t+1} ← y_{k,t} − h g^{ij}_{kt},

where

g^{ij}_{kt} := p_j^{-1} G^{ij}_{kt} e_j   and   G^{ij}_{kt} := ∇_j f(x_k) + (1/(n q_ij)) (∇_j f_i(y_{k,t}) − ∇_j f_i(x_k)).

Note that only a single coordinate of y_{k,t} is updated at each iteration.
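Putting the pieces together, one epoch of the method can be sketched as below. The problem instance (regularized least squares) and all constants are illustrative, and for simplicity the sketch runs all m inner steps rather than a random number t_k:

```python
import numpy as np

# Sketch of S2CD epochs on a hypothetical regularized least-squares problem,
# using the update y_j <- y_j - (h/p_j) * (g_full_j + (partial_j f_i(y) -
# partial_j f_i(x_k)) / (n*q_ij)) described in the text.
rng = np.random.default_rng(3)
n, d, mu = 20, 4, 0.5
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

def partial(x, i, j):                    # grad_j of f_i = 0.5*(a_i x - b_i)^2 + mu/2*|x|^2
    return A[i, j] * (A[i] @ x - b[i]) + mu * x[j]

def full_grad(x):
    return A.T @ (A @ x - b) / n + mu * x

L = A ** 2 + mu                          # coordinate Lipschitz constants (dense here)
w = ((L > 0).sum(axis=1))[:, None] * L   # omega_i * L_ij
p = w.sum(axis=0) / w.sum()              # p_j
q = w / w.sum(axis=0)                    # q_ij

h, m = 0.005, 1000
x_k = np.zeros(d)
for epoch in range(40):
    g_full = full_grad(x_k)              # deterministic step: store grad f(x_k)
    y = x_k.copy()
    for t in range(m):
        j = rng.choice(d, p=p)
        i = rng.choice(n, p=q[:, j])
        G = g_full[j] + (partial(y, i, j) - partial(x_k, i, j)) / (n * q[i, j])
        y[j] -= h * G / p[j]             # single-coordinate update
    x_k = y                              # last iterate becomes new starting point

x_star = np.linalg.solve(A.T @ A / n + mu * np.eye(d), A.T @ b / n)
assert np.linalg.norm(x_k - x_star) < 0.05
```

The stepsize and epoch length here were picked by hand for this toy instance; the theory below prescribes them in terms of µ, L̂ and the target accuracy.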
In the entire text (with the exception of the statement of Theorem 2 and a part of Section 5.3, where E denotes the total expectation) we will assume that all expectations are conditional on the entire history of the random variables generated up to the point when y_{k,t} was computed. With this convention, it is possible to think that there are only two random variables: j and i. By E we then mean the expectation with respect to both of these random variables, and by E_i we mean expectation with respect to i (that is, conditional on j). With this convention, we can write

E_i[G^{ij}_{kt}] = ∇_j f(x_k) + Σ_i q_ij (1/(n q_ij)) (∇_j f_i(y_{k,t}) − ∇_j f_i(x_k)) = ∇_j f(y_{k,t}),

which means that, conditioned on j, G^{ij}_{kt} is an unbiased estimate of the j-th partial derivative of f at y_{k,t}. An equally easy calculation reveals that the random vector g^{ij}_{kt} is an unbiased estimate of the gradient of f at y_{k,t}:

E[g^{ij}_{kt}] = Σ_j p_j p_j^{-1} ∇_j f(y_{k,t}) e_j = ∇f(y_{k,t}).

Hence, the update step performed by S2CD is a stochastic gradient step with fixed stepsize h. Before we describe our main complexity result in the next section, let us briefly comment on a few special cases of S2CD:

• If n = 1 (this can always be achieved simply by grouping all the functions in the average into a single function), S2CD reduces to a stochastic CD algorithm with importance sampling, as studied in [11, 16, 12], but written with many redundant computations. Indeed, the method in this case does not require the x_k iterates, nor does it need to compute the gradient of f, and instead takes the form: y_{0,t+1} ← y_{0,t} − h p_j^{-1} ∇_j f(y_{0,t}) e_j, where p_j = L_1j / Σ_l L_1l.
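Since j and i range over finite sets, the unbiasedness claims can be verified exactly by enumeration; the sketch below does this on the same kind of illustrative least-squares instance used earlier:

```python
import numpy as np

# Exact check of unbiasedness: summing g^{ij} over all (j, i) pairs with their
# probabilities p_j * q_ij recovers grad f(y) to machine precision.
rng = np.random.default_rng(4)
n, d, mu = 6, 3, 0.2
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

def partial(x, i, j):
    return A[i, j] * (A[i] @ x - b[i]) + mu * x[j]

def full_grad(x):
    return A.T @ (A @ x - b) / n + mu * x

L = A ** 2 + mu
w = ((L > 0).sum(axis=1))[:, None] * L
p = w.sum(axis=0) / w.sum()
q = w / w.sum(axis=0)

x_k = rng.standard_normal(d)
y = rng.standard_normal(d)
g_xk = full_grad(x_k)

expected = np.zeros(d)
for j in range(d):
    for i in range(n):
        G = g_xk[j] + (partial(y, i, j) - partial(x_k, i, j)) / (n * q[i, j])
        g = np.zeros(d)
        g[j] = G / p[j]                  # g^{ij} = p_j^{-1} * G * e_j
        expected += p[j] * q[i, j] * g
assert np.allclose(expected, full_grad(y))
```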
• It is possible to extend the S2CD algorithm and results to the case when coordinates are replaced by (non-overlapping) blocks of coordinates, as in [16]; we did not do so here for the sake of keeping the notation simple. In such a setting, we would obtain a semi-stochastic block coordinate descent. In the special case with all variables forming a single block, the algorithm reduces to the S2GD method described in [6], but with non-uniform probabilities for the choice of i, proportional to the Lipschitz constants of the gradients of the functions f_i (this is also studied in [22]). As in [22], the complexity result then depends on the average of these Lipschitz constants.
Note that the algorithm, as presented, assumes knowledge of µ. We have done this for simplicity of exposition: the method also works if µ is replaced by some lower bound, which can be set to 0 (see [6]). The change to the complexity results is only minor and all our conclusions hold. Likewise, it is possible to give an O(1/ǫ) complexity result in the case when f is not strongly convex, using standard regularization arguments (e.g., see [6]).

Complexity Result
In this section, we state and describe our complexity result; the proof is provided in Section 5.
An important step in our analysis is proving a good upper bound on the variance of the (unbiased) estimator g^{ij}_{kt} = p_j^{-1} G^{ij}_{kt} e_j of ∇f(y_{k,t}), one which diminishes to zero as the algorithm progresses. This is important for several reasons. First, as the method approaches the optimum, we wish g^{ij}_{kt} to be progressively closer to the true gradient, which in turn will be close to zero. Indeed, if this is the case, then close to the optimum S2CD behaves like gradient descent with fixed stepsize h. In particular, this indicates that using fixed stepsizes makes sense.
In light of the above discussion, the following lemma plays a key role in our analysis.

Lemma 1. The iterates of the S2CD algorithm satisfy the variance bound (9).

The proof of this lemma can be found in Section 4. Note that as y_{k,t} → x* and x_k → x*, the bound (9) decreases to zero. This is the main feature of modern fast stochastic gradient methods: the squared norm of the stochastic gradient estimate progressively diminishes to zero, in expectation, as the method progresses. Note that the standard SGD method does not have this property: indeed, there is no reason for E_i ‖∇f_i(x)‖² to be small even if x = x*.
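This contrast with SGD can be made concrete by computing the exact second moment of both estimators at the optimum of a small illustrative instance (same hypothetical least-squares setup as in the earlier sketches):

```python
import numpy as np

# E||g||^2 for the variance-reduced estimator, computed exactly by enumerating
# all (j, i) pairs; at (y, x_k) = (x*, x*) it vanishes, while plain SGD's
# E_i||grad f_i(x*)||^2 stays bounded away from zero.
rng = np.random.default_rng(5)
n, d, mu = 6, 3, 0.2
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

def partial(x, i, j):
    return A[i, j] * (A[i] @ x - b[i]) + mu * x[j]

def grad_i(x, i):
    return A[i] * (A[i] @ x - b[i]) + mu * x

def full_grad(x):
    return A.T @ (A @ x - b) / n + mu * x

L = A ** 2 + mu
w = ((L > 0).sum(axis=1))[:, None] * L
p = w.sum(axis=0) / w.sum()
q = w / w.sum(axis=0)
x_star = np.linalg.solve(A.T @ A / n + mu * np.eye(d), A.T @ b / n)

def second_moment(y, x_k):
    g_xk = full_grad(x_k)
    return sum(q[i, j] / p[j] *
               (g_xk[j] + (partial(y, i, j) - partial(x_k, i, j)) / (n * q[i, j])) ** 2
               for j in range(d) for i in range(n))

assert second_moment(x_star, x_star) < 1e-20        # variance-reduced: vanishes
sgd_moment = np.mean([grad_i(x_star, i) @ grad_i(x_star, i) for i in range(n)])
assert sgd_moment > 1e-6                            # plain SGD: does not
```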
We are now ready to state the main result of this paper.
Theorem 2 (Complexity of S2CD). If 0 < h < 1/(2L̂), then for all k ≥ 0 the following linear convergence bound holds:

By analyzing the above result (one can follow the steps in [6, Theorem 6]), we get the following useful corollary.

Corollary 3. Fix the number of epochs k ≥ 1 and an error tolerance ǫ ∈ (0, 1), and let ∆ := ǫ^{1/k} and κ̂ := L̂/µ. If we run Algorithm 1 with stepsize h and m set as

In particular, for k = ⌈log(1/ǫ)⌉ we have 1/∆ ≤ exp(1), and we can pick

If we run S2CD with the parameters set as in (13), then in each epoch the gradient of f is evaluated once (this is equivalent to n evaluations of ∇f_i), and the partial derivative of some function f_i is evaluated 2m ≈ 52κ̂ = O(κ̂) times. If we let C_grad be the average cost of evaluating the gradient ∇f_i and C_pd be the average cost of evaluating the partial derivative ∇_j f_i, then the total work of S2CD can be written as

(n C_grad + m C_pd) k = O((n C_grad + κ̂ C_pd) log(1/ǫ)).    (14)

The complexity results of methods such as S2GD/SVRG [6, 5, 22] and SAG/SAGA [18, 1], in a similar but not identical setup to ours (these papers assume f_i to be L_i-smooth), can be written in a similar form:

O((n + κ) C_grad log(1/ǫ)),    (15)

where κ = L/µ and either L = L_max := max_i L_i ([18, 5, 6, 1]) or L = L_avg := (1/n) Σ_i L_i ([22]). The difference between our result (14) and existing results (15) is in the term κ̂ C_pd; previous results have κ C_grad in that place. This difference constitutes a trade-off: while κ̂ ≥ κ (we comment on this below), we clearly have C_pd ≤ C_grad. The comparison of the quantities κ C_grad and κ̂ C_pd is not straightforward and is problem dependent.
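The trade-off between the two total-work expressions can be illustrated with back-of-the-envelope arithmetic; all numbers below are made up and only show that either bound can win:

```python
# Compare (n*C_grad + kappa_hat*C_pd) vs (n + kappa)*C_grad, up to the common
# log(1/eps) factor. All quantities are illustrative.
n = 10 ** 5
log_term = 14                      # roughly log(1/eps) for eps ~ 1e-6

def work_s2cd(kappa_hat, C_grad, C_pd):
    return (n * C_grad + kappa_hat * C_pd) * log_term

def work_prev(kappa, C_grad):
    return (n * C_grad + kappa * C_grad) * log_term

# If partial derivatives are 100x cheaper, S2CD wins even with a 10x larger
# condition number ...
assert work_s2cd(kappa_hat=10 ** 6, C_grad=100, C_pd=1) < work_prev(kappa=10 ** 5, C_grad=100)
# ... but loses if the condition-number blowup outpaces the per-step savings.
assert work_s2cd(kappa_hat=10 ** 8, C_grad=100, C_pd=1) > work_prev(kappa=10 ** 5, C_grad=100)
```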
Let us now comment on how the condition numbers κ̂ and κ_avg := L_avg/µ compare. It can be shown (see [16]) that L_i ≤ Σ_j L_ij and, moreover, this inequality can be tight. Since ω_i ≥ 1 for all i, we have κ_avg ≤ κ̂. Finally, κ̂ can be smaller or larger than κ_max := L_max/µ.

Footnote 5 (Theorem 2): It is possible to modify the argument slightly and replace the term L̂ appearing in the numerator by L̂ − µ max_s p_s. However, as this does not bring any significant improvement, we decided to present the result in this simplified form.

Proof of Lemma 1
We will prove the following stronger inequality (16); Lemma 1 then follows by dropping the negative term.
STEP 1. We first break down the left-hand side of (16) into d terms, each of which we will bound separately. By first taking expectation conditioned on j and then taking the full expectation, we can write:

STEP 2. We now further break each of these d terms into three pieces. That is, for each j = 1, ..., d we have:

STEP 3. In this step we bound the first two terms on the right-hand side of inequality (18). It will be useful to introduce the following notation:

Let us first examine the first term on the right-hand side of (18). Using the coordinate co-coercivity lemma (Lemma 4) with y = x*, we obtain the inequality

using which we get the bound:

Note that by (4) and (10), we have that for all s = 1, 2, ..., d,

Continuing from (21), we can therefore further write

The same reasoning applies to the second term on the right-hand side of inequality (18), and we have:

STEP 4. Next we bound the third term on the right-hand side of inequality (18). First note that since f is µ-strongly convex (see (3)), for all x ∈ R^d we have:

We can now write:

STEP 5. We conclude by combining (17), (18), (22), (23) and (25).

Proof of the Main Result
In this section we provide the proof of our main result. In order to present the proof in an organized fashion, we first establish two technical lemmas.

Coordinate co-coercivity
It is a well-known and widely used fact (see, e.g., [10]) that for a continuously differentiable function φ : R^d → R and constant L_φ > 0, the following two conditions are equivalent:

φ(x) ≤ φ(y) + ⟨∇φ(y), x − y⟩ + (L_φ/2) ‖x − y‖²   for all x, y ∈ R^d,

‖∇φ(x) − ∇φ(y)‖² ≤ 2 L_φ (φ(x) − φ(y) − ⟨∇φ(y), x − y⟩)   for all x, y ∈ R^d.

The second condition is often referred to by the name co-coercivity. Note that our assumption (2) on f_i is similar to the first inequality. In our first lemma we establish a coordinate-based co-coercivity result which applies to functions f_i satisfying (2).
Lemma 4 (Coordinate co-coercivity). For all x, y ∈ R^d and i = 1, ..., n, j = 1, ..., d, we have:

(∇_j f_i(x) − ∇_j f_i(y))² ≤ 2 L_ij (f_i(x) − f_i(y) − ⟨∇f_i(y), x − y⟩).

Proof. Fix any i, j and y ∈ R^d. Consider the function g_i : R^d → R defined by:

g_i(x) := f_i(x) − f_i(y) − ⟨∇f_i(y), x − y⟩.

Then since f_i is convex, we know that g_i(x) ≥ 0 for all x, with g_i(y) = 0. Hence, y minimizes g_i. We also know that for any x ∈ R^d:

∇_j g_i(x) = ∇_j f_i(x) − ∇_j f_i(y).    (27)

Since f_i satisfies (2), so does g_i, and hence for all x ∈ R^d and h ∈ R, we have

g_i(x + h e_j) ≤ g_i(x) + h ∇_j g_i(x) + (L_ij/2) h².

Minimizing both sides in h, we obtain

0 = g_i(y) ≤ min_h g_i(x + h e_j) ≤ g_i(x) − (∇_j g_i(x))² / (2 L_ij),

which together with (27) yields the result.
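As a sanity check, the coordinate co-coercivity inequality (∇_j f_i(x) − ∇_j f_i(y))² ≤ 2 L_ij (f_i(x) − f_i(y) − ⟨∇f_i(y), x − y⟩) can be tested numerically on a smooth convex function not covered by the earlier quadratic examples, here the logistic loss with the standard coordinate constants L_ij = a_ij²/4 (an illustrative instance):

```python
import numpy as np

# Numerical check of coordinate co-coercivity for logistic loss
# f_i(x) = log(1 + exp(-a_i^T x)), with L_ij = a_ij^2 / 4.
rng = np.random.default_rng(6)
n, d = 5, 4
A = rng.standard_normal((n, d))

def f_i(x, i):
    return np.log1p(np.exp(-A[i] @ x))

def grad_f_i(x, i):
    s = 1.0 / (1.0 + np.exp(A[i] @ x))   # sigmoid(-a_i^T x)
    return -s * A[i]

L = A ** 2 / 4.0
for trial in range(50):
    x = rng.standard_normal(d)
    y = rng.standard_normal(d)
    for i in range(n):
        # Bregman gap f_i(x) - f_i(y) - <grad f_i(y), x - y>, nonnegative by convexity.
        gap = f_i(x, i) - f_i(y, i) - grad_f_i(y, i) @ (x - y)
        for j in range(d):
            lhs = (grad_f_i(x, i)[j] - grad_f_i(y, i)[j]) ** 2
            assert lhs <= 2.0 * L[i, j] * gap + 1e-9
```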

Recursion
We now proceed to the final lemma, establishing a key recursion which ultimately yields the proof of the main theorem, which we present in Section 5.3.

Proof of Theorem 2
For simplicity, let us denote:

where the expectation now is with respect to the entire history. Then by Lemma 5 we have the following m inequalities, involving the factors (1 − µh)^t.