On the complexity of parallel coordinate descent

In this work we study the parallel coordinate descent method (PCDM) proposed by Richtárik and Takáč [Parallel coordinate descent methods for big data optimization, Math. Program. Ser. A (2015), pp. 1–52] for minimizing a regularized convex function. We adopt elements from the work of Lu and Xiao [On the complexity analysis of randomized block-coordinate descent methods, Math. Program. Ser. A 152(1–2) (2015), pp. 615–642], and combine them with several new insights, to obtain sharper iteration complexity results for PCDM than those presented by Richtárik and Takáč. Moreover, we show that PCDM is monotonic in expectation, which was not confirmed in that work, and we also derive the first high probability iteration complexity result for the case where the initial levelset is unbounded.


Introduction
Block coordinate descent methods are being thrust into the optimization spotlight because of a dramatic increase in the size of real-world problems, and because of the "big data" phenomenon. It is little wonder, when these seemingly simple methods, with low iteration costs and low memory requirements, can solve problems whose dimension exceeds one billion in a matter of hours [26].
There is an abundance of coordinate descent variants arising in the literature, including [4,6,9,11,12,15,16,22,24,27,28,31,32,33,34,35,36,37,38]. The main differences between these methods are the way in which the block of coordinates to update is chosen, and how the subproblem that determines the update to apply to a block of variables is solved. The current state-of-the-art block coordinate descent method is the Parallel (block) Coordinate Descent Method (PCDM) of Richtárik and Takáč [26]. This method selects the coordinates to update randomly, and the update is determined by minimizing an overapproximation of the objective function at the current point (see Section 3 for a detailed description). PCDM can be applied to a problem with a general convex composite objective, it is supported by strong iteration complexity results that guarantee the method's convergence, and it has been tested numerically on a wide range of problems to demonstrate its practical capabilities.
In this work we are interested in the following convex composite/regularized optimization problem

min_{x ∈ R^N} F(x) := f(x) + Ψ(x),    (1)

where we assume that f(x) is a continuously differentiable convex function, and Ψ(x) is assumed to be a (possibly nonsmooth) block separable convex regularizer. The Expected Separable Overapproximation (ESO) assumption introduced in [26] enabled the development of a unified theoretical framework that guarantees convergence of serial [25], parallel [26] and even distributed [2,14,23] versions of PCDM. To benefit from the ESO abstraction, we derive all the results in this paper under the assumption that f admits an ESO with respect to a uniform block sampling Ŝ. This concept will be precisely defined in Section 3.2. For now it is enough to say that updating a random set of τ coordinates (selected uniformly at random) is one particular uniform sampling, and the ESO enables us to overapproximate the expected value of the function at the next iteration by a separable function, which is easy to minimize in parallel.

Brief literature review
Nesterov [18] provided some of the earliest iteration complexity results for a serial Randomized Coordinate Descent Method (RCDM) for problems of the form (1), where Ψ ≡ 0, or is the indicator function for simple bound constraints. Later, this work was generalized to optimization problems with a composite objective of the form (1), where the function Ψ is any (possibly nonsmooth) convex (block) separable function [25,26].
One of the main advantages of randomized coordinate descent methods is that each iteration is extremely cheap, and can require as little as a few multiplications in some cases [22]. However, a large number of iterations may be required to obtain a sufficiently accurate solution, and for this reason, parallelization of coordinate descent methods is essential.
The SHOTGUN algorithm presented in [1] represents a naïve way of parallelizing RCDM, applied to functions of the form (1) where Ψ ≡ ‖·‖₁. The authors also present theoretical results to show that parallelization can lead to algorithm speedup. Unfortunately, their results show that only a small number of coordinates should be updated in parallel at each iteration, otherwise there is no guarantee of algorithm speedup.
The first true complexity analysis of Parallel RCDM (PCDM) was provided in [26] after the authors developed the concept of an Expected Separable Overapproximation (ESO) assumption, which was central to their convergence analysis. The ESO gives an upper bound on the expected value of the objective function after a parallel update of PCDM has been performed, and depends on both the objective function and the particular 'sampling' (the way that the coordinates are chosen) that was used. Moreover, several distributed PCDMs were considered in [2,14,23] and their convergence was proved simply by deriving the ESO parameters for particular distributed samplings.
In [3,10] the accelerated PCDM was presented, and its efficient distributed implementation was considered in [2]. Recently, there has also been a focus on PCDMs that use an arbitrary sampling of coordinates [19,20,21,24].

Summary of contributions
In this section we summarize the main contributions of this paper (not in order of significance).

1.
No need to enforce "monotonicity". PCDM in [26] was analyzed (for a general convex composite function of the form (1)) under a monotonicity assumption: if, at any iteration of PCDM, an update was computed that would lead to a higher objective value than the objective value at the current point, then that update was rejected. Hence, the PCDM presented in [26] included a step to force monotonicity of the function values at each iteration. In this paper we confirm that the monotonicity test is redundant, and can be removed from the algorithm.
2. First high-probability results for PCDM without levelset information. Currently, the high probability iteration complexity results for coordinate descent type methods require the levelset to be bounded. In this paper we derive the first high-probability result which does not rely on the size of the levelset. In particular, the analysis of PCDM in [26] assumes that the levelset {x ∈ R^N : F(x) ≤ F(x_0)} is bounded for the initial point x_0, and under this assumption, convergence is guaranteed. However, in this paper we show that PCDM will converge, in expectation, to the optimal solution even if the levelset is unbounded (see Section 5).
3. Sharper iteration complexity results. In this work we obtain sharper iteration complexity results for PCDM than those presented in [26], and Table 1 summarizes our findings.
A thorough discussion of the results can be found in Section 6.2. We briefly describe the variables used in the table (all will be properly defined in later sections). Variable c is a constant, k is the iteration counter, α ∈ [0, 1] is the expected proportion of coordinates updated at each iteration, ξ_0 = F(x_0) − F*, and v is a (vector) parameter of the method. Also, μ_f and μ_Ψ are the (strong) convexity constants of f and Ψ respectively (both with respect to ‖·‖_v for some v), and ǫ and ρ are the desired accuracy and confidence level respectively. (C = Convex, SC = Strongly Convex.)

Table 1: Comparison of the iteration complexity results for PCDM obtained in [26] and in this paper (columns: problem class F, result of Richtárik and Takáč [26], result of this paper, theorem). The analysis used in this paper provides a sharper iteration complexity result in both the convex and strongly convex cases when ǫ and/or ρ are small.
4. Improved convergence rates for PCDM. In this work we show that PCDM converges at a faster rate than that given in [26], in both the convex and strongly convex cases. Table 2 provides a summary of our results, and a thorough discussion can be found in Section 6.1.
Table 2: Comparison of the convergence rates for PCDM obtained in [26] and in this paper (columns: problem class F, rate of Richtárik and Takáč [26], rate of this paper, theorem). (C = Convex, SC = Strongly Convex.) The analysis used in this paper provides a better rate of convergence in both the convex and strongly convex cases when ǫ and/or ρ are small.

Paper outline
The remainder of this paper is structured as follows. In Section 2 we introduce the notation and assumptions that will be used throughout the paper. Section 3 describes the PCDM of Richtárik and Takáč [26] in detail. We also present a new convergence rate result for PCDM, which is sharper than that presented in [26]. The proof of the result is given in Section 4, along with several necessary technical lemmas.
In Section 5 we present several iteration complexity results, which show that PCDM will converge to an ǫ-optimal solution with high probability. In Section 5.1 we provide the first iteration complexity result for PCDM that does not require the assumption of a bounded levelset. The result shows that PCDM requires O(1/ρ) iterations, so we have devised a 'multiple run strategy' that achieves the classical O(log(1/ρ)) result. Moreover, in Section 5.2 we present a high probability iteration complexity result for PCDM, under a bounded levelset assumption, which is sharper than the result given in [26].
In Section 6 we give a comparison of the results derived in this work with the results given in [26]. Then, we present several numerical experiments in Section 7 to highlight the practical capabilities of PCDM under different ESO assumptions. The ESO assumptions are given in Appendix A, where we also provide a new ESO for doubly uniform samplings (see Theorem 19).

Notation and assumptions
In this section we introduce block structure and associated objects such as norms and projections.The parallel (block) coordinate descent method will operate on blocks instead of coordinates.

Block structure
The problem under consideration is assumed to have block structure, and this is modelled by decomposing the space R^N into n subspaces as follows. Let U ∈ R^{N×N} be a column permutation of the N × N identity matrix, and further let U = [U_1, U_2, . . ., U_n] be a decomposition of U into n submatrices, where U_i is N × N_i and Σ_{i=1}^n N_i = N. Note that U_i^T U_j = I_{N_i} when i = j and U_i^T U_j = 0 (where 0 is the N_i × N_j matrix of all zeros) when i ≠ j. Subsequently, any vector x ∈ R^N can be written uniquely as x = Σ_{i=1}^n U_i x^{(i)}, where x^{(i)} = U_i^T x ∈ R^{N_i}. For simplicity we will write x = (x^{(1)}, x^{(2)}, . . ., x^{(n)})^T. In what follows let ⟨·,·⟩ denote the standard Euclidean inner product; then we have ⟨x, y⟩ = Σ_{i=1}^n ⟨x^{(i)}, y^{(i)}⟩.

Norms. Further, we equip R^{N_i} with a pair of conjugate Euclidean norms ‖t‖_{(i)} = ⟨B_i t, t⟩^{1/2} and ‖t‖*_{(i)} = ⟨B_i^{−1} t, t⟩^{1/2}, where B_i ∈ R^{N_i×N_i} is a positive definite matrix. For fixed positive scalars v_1, v_2, . . ., v_n, let v = (v_1, . . ., v_n)^T and define a pair of conjugate norms in R^N by ‖x‖_v = (Σ_{i=1}^n v_i ‖x^{(i)}‖²_{(i)})^{1/2} and ‖y‖*_v = (Σ_{i=1}^n v_i^{−1} (‖y^{(i)}‖*_{(i)})²)^{1/2}.

Projection onto a set of blocks. Let ∅ ≠ S ⊆ {1, 2, . . ., n}. Then for x ∈ R^N we write x_[S] := Σ_{i∈S} U_i x^{(i)}; that is, x_[S] is the vector in R^N whose blocks i ∈ S are identical to those of x, but whose other blocks are zeroed out.
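The block decomposition above is easy to illustrate numerically. The sketch below is a minimal example with hypothetical sizes (N = 5 split into blocks of sizes 2 and 3); it checks the identities U_i^T U_i = I, U_i^T U_j = 0 for i ≠ j, and x = Σ_i U_i x^{(i)}.

```python
import numpy as np

# Hypothetical sizes: N = 5 decomposed into n = 2 blocks with N_1 = 2, N_2 = 3.
N, sizes = 5, [2, 3]
rng = np.random.default_rng(0)
U = np.eye(N)[:, rng.permutation(N)]  # column permutation of the identity matrix

# Split U into submatrices U_i of shape N x N_i.
offsets = np.cumsum([0] + sizes)
U_blocks = [U[:, offsets[i]:offsets[i + 1]] for i in range(len(sizes))]

x = np.arange(1.0, N + 1.0)
x_parts = [Ui.T @ x for Ui in U_blocks]                         # x^(i) = U_i^T x
x_rebuilt = sum(Ui @ xi for Ui, xi in zip(U_blocks, x_parts))   # x = sum_i U_i x^(i)
```

Because the columns of a permuted identity matrix are orthonormal, the reconstruction is exact regardless of the permutation chosen.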

Assumptions and strong convexity
Throughout this paper we make the following assumption regarding the block separability of the function Ψ.
Assumption 1 (Block separability). The nonsmooth function Ψ : R^N → R ∪ {+∞} is assumed to be block separable, i.e., it can be decomposed as Ψ(x) = Σ_{i=1}^n Ψ_i(x^{(i)}), where the functions Ψ_i : R^{N_i} → R ∪ {+∞} are proper, closed and convex.
In some of the results presented in this work we assume that F is strongly convex, and we denote by μ_F(v) the (strong) convexity parameter of F with respect to the norm ‖·‖_v. A function φ is strongly convex with convexity parameter μ_φ(v) ≥ 0 if for all x, y ∈ dom φ,

φ(y) ≥ φ(x) + ⟨φ′(x), y − x⟩ + (μ_φ(v)/2) ‖y − x‖²_v,    (8)

where φ′ is any subgradient of φ at x. The case with μ_φ = 0 reduces to convexity. Strong convexity of F may come from f or Ψ or both, and we will write μ_f (resp. μ_Ψ) for the strong convexity parameter of f (resp. Ψ). Following from (8), μ_F ≥ μ_f + μ_Ψ. From the first order optimality conditions for (1) we obtain ⟨F′(x*), x − x*⟩ ≥ 0 for all x ∈ dom F. Combining this with (8), used with y = x and x = x*, yields the standard inequality F(x) − F* ≥ (μ_F(v)/2) ‖x − x*‖²_v.

Parallel coordinate descent method

In this section we describe the Parallel Coordinate Descent Method (Algorithm 1) of Richtárik and Takáč [26]. We now present the algorithm, and a detailed discussion will follow.
Algorithm 1 PCDM: Parallel Coordinate Descent Method [26]
1: choose initial point x_0 ∈ R^N
2: for k = 0, 1, 2, . . . do
3:   randomly choose a set of blocks S_k ⊆ {1, . . ., n}
4:   for i ∈ S_k (in parallel) do
5:     compute the update h(x_k)^{(i)}
6:   end for
7:   apply the update: x_{k+1} = x_k + Σ_{i∈S_k} U_i h(x_k)^{(i)}
8: end for

The algorithm can be described as follows. At iteration k of Algorithm 1, a set of blocks S_k is chosen, corresponding to the (blocks of) coordinates that are to be updated. The set of blocks is selected via a sampling, which is described in detail in Section 3.1. Then, in Steps 4–6, the updates h(x_k)^{(i)}, for all i ∈ S_k, are computed in parallel, via a small/low dimensional minimization subproblem. (In Section 3.2, we describe the origin of this subproblem via an ESO.) Finally, in Step 7, the updates h(x_k)^{(i)} are applied to the current point x_k, to give the new point x_{k+1}. Notice that Algorithm 1 does not require knowledge of objective function values.
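To make the structure of the iteration concrete, here is a minimal serial simulation of the parallel update on a toy smooth problem. The objective f(x) = ½‖Ax − b‖² with Ψ ≡ 0, blocks of size one, and the choice v = βL with β = 1 + (ω − 1)(τ − 1)/max(1, n − 1) (the τ-nice ESO parameter reviewed in Appendix A) are illustrative assumptions for this sketch, not part of the algorithm statement itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: f(x) = 0.5 * ||A x - b||^2, Psi = 0, blocks of size one.
n, m, tau = 20, 30, 5
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

L = (A ** 2).sum(axis=0)                 # coordinate-wise Lipschitz constants
omega = n                                # A is dense: every row touches all blocks
beta = 1.0 + (omega - 1.0) * (tau - 1.0) / max(1, n - 1)
v = beta * L                             # assumed ESO parameter for a tau-nice sampling

x = np.zeros(n)
for k in range(500):
    S = rng.choice(n, size=tau, replace=False)   # Step 3: tau-nice sampling
    g = A.T @ (A @ x - b)                        # gradient of f at x_k
    x[S] -= g[S] / v[S]                          # Steps 4-7: h_i = -g_i / v_i, applied

f0 = 0.5 * np.linalg.norm(b) ** 2                # objective at x_0 = 0
fk = 0.5 * np.linalg.norm(A @ x - b) ** 2        # objective after 500 iterations
```

Each selected coordinate solves min_t g_i t + (v_i/2) t², giving the closed-form step −g_i/v_i; no function values are ever evaluated, matching the remark above.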
We now describe the key steps of Algorithm 1 (Steps 3 and 4-6) in more detail.

Step 3: Sampling
At the kth iteration of Algorithm 1, a set of indices S_k ⊆ {1, . . ., n} (corresponding to the blocks of x_k to be updated) is selected. Here we briefly explain several schemes for choosing the set of indices S_k; a thorough description can be found in [26]. Formally, S_k is a realisation of a random set-valued mapping Ŝ with values in the power set of {1, . . ., n}. Richtárik and Takáč [26] have coined the term sampling in reference to Ŝ.
In what follows, we will assume that all samplings are proper. That is, we assume that p_i > 0 for all blocks i, where p_i is the probability that the ith block of x is updated.
We state several sampling schemes now.
1. Uniform: A sampling Ŝ is uniform if all blocks have the same probability of being updated.

Doubly uniform:
A doubly uniform sampling is one that generates all sets of equal cardinality with equal probability. That is, P(Ŝ = S′) = P(Ŝ = S″) whenever |S′| = |S″|.

Nonoverlapping uniform:
A nonoverlapping uniform sampling is one that is uniform and assigns positive probabilities only to sets forming a partition of {1, . . ., n}.
In fact, doubly uniform and nonoverlapping uniform samplings are special cases of uniform samplings, so in this work all results are proved for uniform samplings. Other samplings, which are also special cases of uniform samplings, are presented in [26], but we omit the details of all except the τ-nice sampling, for brevity. We say that a sampling Ŝ is τ-nice if it generates only sets of cardinality τ, each with equal probability; that is, for any S ⊆ {1, 2, . . ., n} with |S| = τ we have P(Ŝ = S) = 1/(n choose τ), and P(Ŝ = S) = 0 otherwise.
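A τ-nice sampling is straightforward to implement and to check empirically. The sketch below (with hypothetical n = 5, τ = 2) verifies that every cardinality-τ subset appears with frequency close to 1/(n choose τ).

```python
import itertools
import random
from math import comb

n, tau = 5, 2

def tau_nice_sample(n, tau, rng):
    # Draw a subset of cardinality tau uniformly at random: a tau-nice sampling.
    return frozenset(rng.sample(range(n), tau))

rng = random.Random(0)
counts = {frozenset(S): 0 for S in itertools.combinations(range(n), tau)}
trials = 20000
for _ in range(trials):
    counts[tau_nice_sample(n, tau, rng)] += 1

p_hat = {S: c / trials for S, c in counts.items()}   # empirical probabilities
p_true = 1 / comb(n, tau)                            # = 1 / (n choose tau)
```

With n = 5 and τ = 2 there are 10 subsets, each with probability 0.1; the empirical frequencies concentrate around this value.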

Step 5: Computing the step-length
The block update h(x_k)^{(i)} is chosen in such a way that an upper bound on the expected function value at the next iterate is minimized, with respect to the particular sampling Ŝ that is used. The construction of the expected upper bound should be (block) separable to ensure efficient parallelizability. Before we focus on how to construct the expected upper bound on F, we state the definition of an ESO.
Definition 2 (Expected Separable Overapproximation; Definition 5 in [26]). Let v ∈ R^n_{++} and let Ŝ be a proper uniform sampling. We say that f : R^N → R admits an ESO with respect to the sampling Ŝ with parameter v if, for all x, h ∈ R^N, the following inequality holds:

E[f(x + h_[Ŝ])] ≤ f(x) + (E[|Ŝ|]/n) ( ⟨∇f(x), h⟩ + (1/2) ‖h‖²_v ).

We say that the ESO is monotonic if, for all S with P(Ŝ = S) > 0, applying the update does not increase the objective, i.e., F(x + (h(x))_[S]) ≤ F(x). In Appendix A, a review of different smoothness assumptions on f and the corresponding ESO parameters v for a doubly uniform sampling is given. In all that follows, we assume that f admits an ESO, that v is the ESO parameter and that Ŝ is a proper uniform sampling. Then, with α := E[|Ŝ|]/n,

E[F(x + h_[Ŝ])] ≤ (1 − α) F(x) + α H_v(x, h),    (13)

where we have used the fact that Ψ is block separable and that Ŝ is a proper uniform sampling (see [26, Theorem 4]). Now, it is easy to see that minimizing the right hand side of (13) in h is the same as minimizing the function H_v in h, where H_v is defined to be

H_v(x, h) := f(x) + ⟨∇f(x), h⟩ + (1/2) ‖h‖²_v + Ψ(x + h).

In view of (2), (5), and (7), we can write H_v(x, h) = f(x) + Σ_{i=1}^n ( ⟨∇_i f(x), h^{(i)}⟩ + (v_i/2) ‖h^{(i)}‖²_{(i)} + Ψ_i(x^{(i)} + h^{(i)}) ). Further, we define

h(x) := arg min_{h ∈ R^N} H_v(x, h),    (15)

which is the update used in Algorithm 1. Notice that the algorithm never evaluates function values.
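As a concrete instance of the subproblem (15): for blocks of size one with Ψ_i(t) = λ|t| and B_i = 1 (our illustrative assumptions), the block subproblem min_t g_i t + (v_i/2) t² + λ|x_i + t| has a closed-form solution via soft-thresholding. The grid search below is only a sanity check of that closed form.

```python
import numpy as np

def soft_threshold(z, r):
    return np.sign(z) * np.maximum(np.abs(z) - r, 0.0)

def block_update(x_i, g_i, v_i, lam):
    """Closed-form minimizer t of  g_i*t + (v_i/2)*t^2 + lam*|x_i + t|."""
    u = soft_threshold(x_i - g_i / v_i, lam / v_i)   # optimal new coordinate value
    return u - x_i                                   # the step h_i = u - x_i

# Sanity check against a fine grid search on the 1-D subproblem.
x_i, g_i, v_i, lam = 0.3, 1.7, 2.0, 0.5
t_star = block_update(x_i, g_i, v_i, lam)
ts = np.linspace(-3.0, 3.0, 200001)
obj = g_i * ts + 0.5 * v_i * ts ** 2 + lam * np.abs(x_i + ts)
t_grid = ts[np.argmin(obj)]
```

For these hypothetical numbers the minimizer is t = −0.6; the separability of H_v is exactly what makes such per-block closed forms (and hence cheap parallel updates) possible.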

Complexity of PCDM
We are now ready to present one of our main results, which is a generalization of Theorem 1 in [39]. The result shows that PCDM converges in expectation, and provides a sharper convergence rate than that given in [26]. The proof is provided in Section 4. Let us mention that a similar result was given independently in [15], but that result only holds for the particular ESO described in Theorem 21. However, even for that ESO, our result (Theorem 3) is still much better, because it depends on ‖x_0 − x*‖_v and not on the size of the initial levelset (which could even be unbounded). We state our result now.
Theorem 3. Let F* be the optimal value of problem (1), and let {x_k}_{k≥0} be the sequence of iterates generated by PCDM using a uniform sampling Ŝ. Let α = E[|Ŝ|]/n and suppose that f admits an ESO with respect to the sampling Ŝ with parameter v. Then for any k ≥ 0, the bounds (16) (convex case) and (17) (strongly convex case) hold.

Remark 4. Notice that Theorem 3 is a general result, in the sense that any ESO can be used for PCDM and the result holds.

Proof of the main result
In this section we provide a proof of our main convergence rate result, Theorem 3.However, first we will present several preliminary results, including the idea of a composite gradient mapping, and other technical lemmas.

Block composite gradient mapping
We now define the concept of a block composite gradient mapping [17,39]. By the first-order optimality conditions for problem (15), there exists a subgradient of Ψ at x + h(x); the block composite gradient mappings are defined accordingly in (19). From (18) and (19) we obtain (20). If we let g(x) := Σ_{i=1}^n U_i (g(x))^{(i)} (compare (2) and (19)), then, since Ψ is separable, (20) can be written in the aggregated form (21). Moreover, (22) relates g(x) and ⟨g(x), h(x)⟩. Finally, note that using (4), (5), (19) and (22), we obtain (23).

Main technical lemmas
The following result concerns the expected value of a block-separable function when a random subset of coordinates is updated.
Lemma 5 (Theorem 4 in [26]). Suppose that ψ : R^N → R ∪ {+∞} is block separable and that Ŝ is a proper uniform sampling with α = E[|Ŝ|]/n. Then for any x, h ∈ R^N we have

E[ψ(x + h_[Ŝ])] = (1 − α) ψ(x) + α ψ(x + h).

The following technical lemma plays a central role in our analysis. The result can be viewed as a generalization of Lemma 3 in [39], which considers the serial case (α = 1), to the parallel setting.

Lemma 6. Let x ∈ dom F and x⁺ = x + (h(x))_[Ŝ], where Ŝ is any uniform sampling. Then for any y ∈ dom F, inequality (26) holds. Moreover, (i) inequality (27) holds, and (ii) inequality (28) holds.

Proof. We first note that (29) holds. This is a special case of the identity in Lemma 5 (which holds for block separable functions ψ), with ψ(u) = ‖u‖²_v, u = x − y and h = h(x). Further, for any h for which x + h ∈ dom Ψ, inequality (30) holds; this was established in [26, Section 5]. The claim now follows by combining (30), used with h = h(x), with an upper estimate of H_v(x, h(x)) in terms of F(y). Part (i) follows by letting x = y and using (29) and (23). Part (ii) follows as a special case by choosing μ_f = μ_Ψ = 0. Property (i) means that the function values F(x_k) of PCDM are monotonically decreasing in expectation when conditioned on the previous iterate.

Proof of Theorem 3
Proof. Let x* be an arbitrary optimal solution of (1), and let ξ_k := F(x_k) − F*. By subtracting F* from both sides of (28) and taking expectations with respect to the whole history of realizations of S_l, l ≤ k, we obtain a recursion in E[ξ_k]. Applying this inequality recursively, and using the fact that E[F(x_j)] is monotonically decreasing for j = 0, 1, . . ., k + 1 (by (27)), we obtain (16). We now prove (17) under the strong convexity assumption μ_f + μ_Ψ > 0. From (26) we get (31). Notice that for any 0 ≤ γ ≤ 1 a simple interpolation bound holds; choosing γ as in (32) and combining the resulting inequality with (31) gives (33). It now only remains to take expectation in x_k on both sides of (33), and (17) follows.

High Probability Convergence Result
Theorem 3 showed that Algorithm 1 converges to the optimal solution in expectation. In this section we derive iteration complexity bounds for PCDM for obtaining an ǫ-optimal solution with high probability. Let us mention that all existing high-probability results [18,25,26,39] for serial or parallel CDMs require a bounded levelset, i.e. they assume that L(x_0) := {x ∈ R^N : F(x) ≤ F(x_0)} is bounded. In Section 5.1 we present the first high probability result for the case when the levelset can be unbounded (Corollary 9 and Corollary 11). Then in Section 5.2 we derive a sharper high-probability result for the PCDM of [26] if a bounded levelset is assumed (i.e. L(x_0) is bounded).

Case 1: Possibly unbounded levelset
We begin by presenting Lemma 7, which will allow us to state the first high-probability result (Corollary 9) for PCDM applied to a convex function without the assumption of a bounded levelset.
Lemma 7. Let x_0 be fixed and let {x_k}_{k=0}^∞ be a sequence of random vectors in R^N such that the conditional distribution of x_{k+1} given x_k is the same as the conditional distribution of x_{k+1} given the whole history {x_i}_{i=0}^k (hence the sequence is Markov). Define r_k = φ_r(x_k) and ξ_k = φ_ξ(x_k), where φ_r, φ_ξ : R^N → R are non-negative functions. Further, assume that the following two inequalities, (35) and (36), hold for any k with some known ζ ∈ (0, 1). Then, if K is of the order 1/(ǫρ), we have P(ξ_K ≥ ǫ) ≤ ρ.

Proof. Using (35) we obtain a bound on E[ξ_K]. Hence, from the Markov inequality, P(ξ_K ≥ ǫ) ≤ E[ξ_K]/ǫ ≤ ρ.
Naturally, the O(1/(ǫρ)) result is very pessimistic, and hence one may be concerned about the tightness of the lemma. The following example shows that Lemma 7 is indeed tight, i.e. the bound on K cannot be improved much. (We construct an example that, under assumptions (35) and (36) (i.e., using the analysis of [39]), requires O(1/(ǫρ)) iterations.)

Example 8 (Tightness of Lemma 7). Let us fix some small value of ρ ∈ (0, 1) and assume that (r_1, ξ_1) has the distribution given below, where ϑ is chosen in such a way that (35) is satisfied. Now we define, for k = 1, 2, 3, . . .
It is now easy to verify the claimed behaviour.

Corollary 9 (High probability result without bounded levelset). Applying Lemma 7 to the sequence generated by PCDM yields a high probability iteration complexity bound of order O(1/(ǫρ)). The negative aspect of Corollary 9 is the fact that one needs O(1/ρ) iterations, whereas classical results under the bounded levelset assumption require only O(log(1/ρ)) iterations.
Multiple run strategy. We now present a restarting strategy [25] that recovers the classical O(log(1/ρ)) high probability result.
Let {r_k}_{k=0}^∞ and {ξ_k}_{k=0}^∞ be the same as in Lemma 7. Assume that we observe r = ⌈log(1/ρ)⌉ different random and independent realisations of this sequence, each starting from x_0.

Proof. Because the realisations are independent, for any l ∈ {1, 2, . . ., r} we have from Lemma 7 that P(ξ_K^l ≥ ǫ) ≤ 1/e. Hence P(min_{l∈{1,2,...,r}} ξ_K^l ≥ ǫ) ≤ (1/e)^r ≤ ρ. Thus, if we run PCDM r times for K iterations each, then the best solution we get, indexed by l ∈ {1, 2, . . ., r}, satisfies the required bound.
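The multiple run strategy is simple to implement on top of any single-run solver. The sketch below uses a hypothetical `single_run` interface (the function name and signature are ours, not from [25]); a deterministic stub stands in for one run of PCDM.

```python
import math

def multiple_run(single_run, x0, K, rho):
    """Run the method r = ceil(log(1/rho)) times for K iterations each,
    always restarting from x0, and keep the best final objective value."""
    r = math.ceil(math.log(1.0 / rho))
    return min(single_run(x0, K) for _ in range(r))

# Demo: a stub returning pre-set "final objective values" for each run.
outcomes = iter([0.5, 0.3, 0.9, 0.1, 0.7])
best = multiple_run(lambda x0, K: next(outcomes), x0=None, K=100, rho=0.01)
```

With ρ = 0.01 the strategy performs r = ⌈log 100⌉ = 5 runs and keeps the minimum, 0.1; since each run independently fails with probability at most 1/e, the best of r runs fails with probability at most e^{−r} ≤ ρ.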

Case 2: Bounded levelset
The next result, Theorem 12, obtains the rate O(log(1/ρ)) under the assumption that the levelset is bounded. However, some results will hold only for a modified version of Algorithm 1; in particular, we now present Algorithm 2.
Distance to the optimal solution set. In some of the results derived in this section we need the distance to the optimal solution set, inside the levelset, to be finite, i.e. R_{v,0} < +∞, where R_{v,0} denotes the maximum distance, measured in the norm ‖·‖_v, from a point in the levelset L(x_0) to the optimal solution set X*.
Note that for any x* ∈ X* (where X* is the set of optimal solutions) it trivially holds that ‖x_0 − x*‖_v ≤ R_{v,0}. Moreover, for some problems the levelset can be unbounded, in which case R_{v,0} is infinite, whereas if X* ≠ ∅ then ‖x_0 − x*‖_v is always finite.
Theorem 12. Let {x_k}_{k≥0} be a sequence of iterates generated by Algorithm 2. Then (i) if F is convex and K is chosen appropriately for the convex case, or (ii) if F is strongly convex with μ_f + μ_Ψ > 0 and K is chosen as in (42), we have P(F(x_K) − F* ≤ ǫ) ≥ 1 − ρ.

Proof. The proof proceeds as in [25, Theorem 1]. For convenience, let ξ_k := F(x_k) − F* and define the truncated sequence ξ_k^ǫ. Using the Markov inequality, it suffices to find K such that (45) holds. Using an ESO and Lemma 17 in [26] gives (46). It is easy to verify that (46) and the definition of ξ_k^ǫ lead to a one-step contraction (see the proof of [25, Theorem 1]). Taking expectation with respect to x_k on both sides of the above we get (47). In addition, using (16) and the relation ξ_k^ǫ ≤ ξ_k, we have (48). Now for any t > 0, let K_1 = K_1(t). It follows from (48) that E[ξ^ǫ_{K_1}] ≤ tǫ, which together with (47) gives a bound on E[ξ_k^ǫ] for k ≥ K_1. Notice that, by (47), the sequence E[ξ_k^ǫ] is decreasing. Hence, (45) holds for all K ≥ K(t), where K(t) is minimized at some t*. Because K ≥ K(t*), we see that (45) holds and the proof of (i) is complete. Now we prove (ii). For convenience, set μ_Ψ ≡ μ_Ψ(w). Then from (17) we have a one-step geometric decrease, where 0 < γ ≤ 1 is defined in (32). Taking expectation in x_k (and using recursion) gives a geometric bound on E[ξ_K]. Finally, using the Markov inequality (44) and the K given in (42), the result follows.
In this section we have presented three new convergence results for PCDM. The first result shows that, using the analysis in [39], PCDM obtains a O(1/ρ) rate when the levelset is unbounded for a single run strategy. The second result shows that PCDM obtains a O(log(1/ρ)) rate for a restarting strategy.
On the other hand, if the levelset is bounded, we have shown that PCDM achieves a rate of O(log 1 ρ ).It is still an open problem to determine whether PCDM can achieve a rate of O(log 1 ρ ) for a single run strategy when the levelset is unbounded.

Comparison of the convergence rate results
We have the following remarks on comparing the results in Theorem 3 with those in [26].

Comparison in the convex case
For problem (1), an expected-value type of convergence rate is not presented explicitly in [25], although it can be derived from relation (56) (stated in [26] and proved in [25, Theorem 1]), where c is defined in (40). Taking expectation on both sides of (56) and using a similar argument as that in [18] gives (57). Let a and b denote the right hand sides of (16) and (57), respectively. The comparison then follows from the definition of c and relation (58).

Comparison in the strongly convex case
For the special case of (1) where at least one of f and Ψ is strongly convex (i.e., μ_f + μ_Ψ > 0), Richtárik and Takáč [26] showed that a geometric decrease of E[F(x_k)] − F* holds for all k ≥ 0. It is not hard to observe that our rate constant is smaller. Recall that γ is defined in (32). It then follows that for sufficiently large k our bound is strictly better.

Comparison of the iteration complexity results
Here we compare the results in Theorem 12 with those in [26].
Comparison in the convex case. For any 0 < ǫ < F(x_0) − F* and ρ ∈ (0, 1), Richtárik and Takáč [26] showed that (43) holds for all k at least as large as their iteration bound. Using the definition of c, and by comparing the two bounds, we have that for sufficiently small ǫ > 0 our bound K is no larger. In addition, ‖x_0 − x*‖_v can be much smaller than R_{v,0}, and thus our bound can be very small. It follows from the above that K can be significantly smaller than the bound of [26].

As one of the problems in our numerical experiments we consider support vector machine (SVM) training, i.e. finding a hyperplane that separates the samples into their corresponding classes. The optimization problem can be formulated as in (65), where y^{(i)} ∈ {−1, 1} is the label of the class to which sample a^{(i)} ∈ R^m belongs. While problem formulation (65) does not fit our framework (the nonsmooth part is nonseparable), the dual formulation (see [5,30,31]) does: see (66), where Q ∈ R^{N×N} with Q_{i,j} = y^{(i)} y^{(j)} ⟨a^{(i)}, a^{(j)}⟩. In particular, problem formulation (66) is the sum of a smooth term and the restriction x ∈ [0, 1]^N, which can be formulated as a (block separable) indicator function. In this dataset, each sample is normalized, hence L = (1, . . ., 1)^T. For any dual feasible point x we can obtain a primal feasible point w(x) = (1/(λn)) Σ_{i=1}^N x^{(i)} y^{(i)} a^{(i)}. Moreover, from strong duality we know that if x* is an optimal solution of (66), then w* = w(x*) is optimal for problem (65). Therefore, we can associate a gap G(x) = P(w(x)) − D(x) with each feasible point x, which measures the distance of the objective value from optimality. Clearly G(x*) = 0.
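The duality gap G(x) is cheap to compute. The sketch below assumes the standard hinge-loss primal P(w) = (1/n) Σ_i max(0, 1 − y^{(i)} ⟨a^{(i)}, w⟩) + (λ/2)‖w‖² and the corresponding dual D(x) = (1/n) Σ_i x^{(i)} − (λ/2)‖w(x)‖²; these explicit forms are our assumption, since the paper's displayed formulas (65)–(66) are not reproduced here, but they are consistent with the mapping w(x) = (1/(λn)) Σ_i x^{(i)} y^{(i)} a^{(i)} stated above.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, m = 50, 10
A = rng.standard_normal((n_samples, m))        # rows are samples a^(i)
y = rng.choice([-1.0, 1.0], size=n_samples)    # class labels y^(i)
lam = 0.1

def primal(w):
    margins = 1.0 - y * (A @ w)
    return np.mean(np.maximum(margins, 0.0)) + 0.5 * lam * (w @ w)

def w_of(x):
    # Primal point associated with a dual feasible x: w(x) = (1/(lam*n)) sum x_i y_i a_i.
    return (x * y) @ A / (lam * n_samples)

def dual(x):
    w = w_of(x)
    return np.mean(x) - 0.5 * lam * (w @ w)

def gap(x):
    # Duality gap G(x) = P(w(x)) - D(x); zero exactly at a dual optimum.
    return primal(w_of(x)) - dual(x)

x = rng.uniform(0.0, 1.0, size=n_samples)      # any dual-feasible point
```

By weak duality the gap is non-negative at every feasible x, so it serves as a computable optimality certificate during the run.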
Note that this smoothness assumption is more general than that made in [15] because of the possibility of choosing general norms of the form ‖·‖_{(J)}. Further, Assumption 15 generalizes the smoothness assumptions imposed in [1,23].
The third type of assumption we make is that each function in the sum (67) has coordinate-wise Lipschitz continuous gradient.
Assumption 16 ((Block) Coordinate-wise Lipschitz continuous gradient of sub-functions). The gradient of f_J, J ∈ J, is block Lipschitz, uniformly in x, with non-negative constants L_{J,1}, . . ., L_{J,n}. That is, for all x ∈ R^N, i = 1, . . ., n, J ∈ J and h ∈ R^{N_i} we have

‖∇_i f_J(x + U_i h) − ∇_i f_J(x)‖*_{(i)} ≤ L_{J,i} ‖h‖_{(i)}.

One can think of Assumptions 14 and 15 as being 'opposite' to each other in the following sense. If we associate the block coordinates with the columns, and the functions with the rows, we see that Assumption 14 captures the dependence column-wise, while Assumption 15 captures the dependence row-wise. Hence, Assumption 16 can be thought of as an element-wise smoothness assumption.
To make this more concrete, we present an example that demonstrates how to compute the Lipschitz constants for a quadratic function under each of the three smoothness assumptions stated above. In words, L_i is equal to the square of the ℓ2 norm of the ith column, L_J is equal to the square of the ℓ2 norm of the Jth row, and L_{J,i} is simply the square of the (J, i)th element of the matrix A.
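For Example 17's quadratic f(x) = ½‖Ax − b‖² with blocks of size one and Euclidean norms, the three families of constants are exactly the column, row, and entry-wise squared norms of A. A small numerical sketch with a hypothetical matrix:

```python
import numpy as np

# Hypothetical 3 x 2 data matrix for f(x) = 0.5 * ||A x - b||^2, blocks of size one.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [0.0, 5.0]])

L_col = (A ** 2).sum(axis=0)    # L_i:     squared l2 norm of the i-th column
L_row = (A ** 2).sum(axis=1)    # L_J:     squared l2 norm of the J-th row
L_elem = A ** 2                 # L_{J,i}: squared (J, i)-th element of A
```

Here L_col = [10, 45], L_row = [5, 25, 25], illustrating how the three assumptions slice the same matrix along different axes.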
One could be misled into believing that Assumption 16 is the best because it is the most restrictive. However, while this is true for the quadratic objective shown in Example 17, for a general convex function Assumption 16 can give Lipschitz constants that lead to worse ESO bounds (see Example 22 for further details).

A.2 Expected Separable Overapproximation (ESO)
Now, it is clear that the update h in Algorithm 1 depends on the ESO parameter v. This shows that the ESO is not just a technical tool; the parameters are actually used in Algorithm 1. Therefore we must be able to obtain/compute these parameters easily. We now present three theorems, namely Theorems 18, 20 and 21, that explain how to obtain the parameter v for a τ-nice sampling under different smoothness assumptions.
Theorem 18 (ESO for a τ-nice sampling; Theorem 14 in [26]). Let Assumption 14 hold with constants L_1, . . ., L_n, and let Ŝ be a τ-nice sampling. Then f : R^N → R admits an ESO with respect to the sampling Ŝ with parameter

v = (1 + (ω − 1)(τ − 1)/max(1, n − 1)) L,

where L = (L_1, . . ., L_n)^T.
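Under this formula, computing v for a τ-nice sampling is a one-liner. The helper below encodes v = (1 + (ω − 1)(τ − 1)/max(1, n − 1)) L, our reading of the stated result; the sanity checks cover the serial and fully parallel extremes.

```python
def eso_v(L, omega, tau, n):
    """ESO parameter v = beta * L for a tau-nice sampling, with
    beta = 1 + (omega - 1)(tau - 1) / max(1, n - 1)."""
    beta = 1.0 + (omega - 1.0) * (tau - 1.0) / max(1, n - 1)
    return [beta * Li for Li in L]
```

In the serial case (τ = 1) we get β = 1 and v = L regardless of ω, while in the fully parallel dense case (τ = n, ω = n) we get β = n, matching the intuition that more overlap between functions forces a more conservative overapproximation.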
The obvious disadvantage of Theorem 18 is the fact that v in the ESO depends on ω. (When ω is large, so too is v.) One can imagine a situation in which ω is much larger than the average cardinality of J ∈ J, resulting in a large v; for example, if |J| is small for all but one function J ∈ J.
With this in mind, we introduce a new theorem that shows how the ESO in Theorem 18 can be modified if we know that Assumption 16 holds. In this case, the role of ω is slightly suppressed.

Proof. From Theorem 15 in [26] we know that for each function f_J, J ∈ J, we have

Figure 1: Evolution of F(x_k) − F* for 5 different methods (left) and distribution of v (right).


Figure 2: Comparison of the evolution of G(x_k) for various methods and the distribution of v.

Example 17. Let f(x) = (1/2)‖Ax − b‖²₂ = (1/2) Σ_{j=1}^m ( b^{(j)} − Σ_{i=1}^n a_{j,i} x^{(i)} )², where A ∈ R^{m×n} and a_{j,i} is the (j, i)th element of the matrix A. Let us fix all the norms ‖·‖_{(J)} from Assumption 15 to be standard Euclidean norms. Then one can easily verify that equations (68), (70) and (71) are satisfied with the following choice of constants: L_i = Σ_{j=1}^m a²_{j,i}, L_J = Σ_{i=1}^n a²_{J,i}, and L_{J,i} = a²_{J,i}.