Coordinate descent with arbitrary sampling II: expected separable overapproximation

The design and complexity analysis of randomized coordinate descent methods, and in particular of variants which update a random subset (sampling) of coordinates in each iteration, depend on the notion of expected separable overapproximation (ESO). This refers to an inequality involving the objective function and the sampling, capturing in a compact way certain smoothness properties of the function in a random subspace spanned by the sampled coordinates. ESO inequalities were previously established for special classes of samplings only, almost invariably for uniform samplings. In this paper we develop a systematic technique for deriving these inequalities for a large class of functions and for arbitrary samplings. We demonstrate that one can recover existing ESO results using our general approach, which is based on the study of eigenvalues associated with samplings and the data describing the function.


Introduction
Coordinate descent methods have been popular with practitioners for many decades due to their inherent conceptual simplicity and the ease with which one can produce a working code. However, up to a few exceptions [30,13], they were largely ignored in the optimization community until recently, when a renewed interest in coordinate descent was sparked by several reports of their remarkable success in certain applications [2,31,21]. An additional, and perhaps more significant, reason behind the recent flurry of research activity in the area of coordinate descent comes from breakthroughs in our theoretical understanding of these methods through the introduction of randomization in the iterative process [15,24,23,22,26,27,29,28,6,19,4,14,12,8,5,3,9,11,10,17,16,7]. Traditional variants of coordinate descent rely on cyclic or greedy rules for the selection of the next coordinate to be updated.

Expected Separable Overapproximation
It has recently become increasingly clear that the design and complexity analysis of randomized coordinate descent methods is intimately linked with, and can be better understood through, the notion of expected separable overapproximation (ESO) [22,27,6,28,19,5,4,20,17] and [16]. This refers to an inequality involving the objective function and the sampling (a random set-valued mapping describing the law with which subsets of coordinates are selected at each iteration), capturing in a compact way certain smoothness properties of the function in a random subspace spanned by the sampled coordinates.
A (coordinate) sampling Ŝ is a random set-valued mapping with values being subsets of [n] def= {1, 2, . . ., n}. It will be useful to write

p_i def= P(i ∈ Ŝ), p def= (p_1, . . ., p_n)^⊤ ∈ R^n. (1)

Definition 1.1 (Expected Separable Overapproximation). Let f : R^n → R be a differentiable function and Ŝ a sampling. We say that f admits an expected separable overapproximation (ESO) with respect to sampling Ŝ with parameters v = (v_1, . . ., v_n) > 0 if the following inequality holds¹ for all x, h ∈ R^n:

E[f(x + h_[Ŝ])] ≤ f(x) + Σ_{i=1}^n p_i (∇_i f(x) h_i + (v_i/2) h_i²), (2)

where h_[Ŝ] def= Σ_{i∈Ŝ} h_i e_i. We will compactly write (f, Ŝ) ∼ ESO(v).
In this definition, e_i is the i-th unit coordinate vector in R^n and ∇_i f(x) = (∇f(x))^⊤ e_i is the i-th partial derivative of f at x. In the context of block coordinate descent, the above definition refers to the case when all blocks correspond to coordinates. For simplicity of exposition, we focus on this case. However, all our results can be extended to the more general block setup.
Instead of the above general definition, it will be useful to the reader to instead think about the form of this inequality in the simple case when f(x) = (1/2)‖Ax‖², where ‖·‖ is the L2 norm, at the point x = 0. Letting A = [A_1, . . ., A_n], in this case inequality (2) takes the form

E[‖A h_[Ŝ]‖²] ≤ h^⊤ Diag(p ∘ v) h,

where p ∘ v denotes the Hadamard product of the vectors p = (p_1, . . ., p_n) and v = (v_1, . . ., v_n); that is, p ∘ v = (p_1 v_1, . . ., p_n v_n) ∈ R^n, and Diag(p ∘ v) is the n-by-n diagonal matrix with the vector p ∘ v on the diagonal. The term on the left hand side is a convex quadratic function of h, and so is the term on the right hand side; however, the latter function has a diagonal Hessian. Hence, for quadratics, finding the ESO parameter v reduces to an eigenvalue problem.
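To make the eigenvalue reduction concrete, here is a small numerical sketch (the matrix A, the serial uniform sampling and the particular choice of v are all illustrative assumptions, not taken from the paper): for a quadratic, any v with p_i v_i ≥ λ(P ∘ A^⊤A) makes Diag(p ∘ v) − P ∘ A^⊤A positive semidefinite.

```python
import numpy as np

# Small data matrix and a serial uniform sampling over n = 3 coordinates
# (hypothetical numbers, chosen only for illustration).
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])
n = A.shape[1]
p = np.full(n, 1.0 / n)          # P(i in S^) = 1/n (serial uniform sampling)
P = np.diag(p)                   # serial sampling: off-diagonal entries are 0

H = P * (A.T @ A)                # Hadamard product P o A^T A
lam = np.linalg.eigvalsh(H).max()

# One sufficient (not necessarily tight) choice: p_i * v_i >= lambda_max.
v = lam / p

# Verify that Diag(p o v) - P o A^T A is positive semidefinite.
gap = np.diag(p * v) - H
assert np.linalg.eigvalsh(gap).min() >= -1e-12
```

This is a crude choice of v; the paper's point is precisely that finer (diagonal) bounds can be obtained by exploiting the structure of P and A.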
The ESO inequality is of key importance for randomized coordinate descent methods for several reasons:
• The parameters v = (v_1, . . ., v_n) for which ESO holds are needed² to run coordinate descent.
Indeed, they are used to set the stepsizes to a suitable value.
¹ This definition can in a straightforward way be extended to the case when coordinates are replaced by blocks of coordinates [22]. In such a case, h_i would be allowed to be a vector of size larger than one, e_i would be replaced by a column submatrix of the identity matrix (usually denoted U_i in the literature) and h_i² would be replaced by the squared norm of h_i (it is often useful to design this norm based on properties of f).
² All existing parallel coordinate descent methods for which a complexity analysis has been performed are designed with fixed stepsizes. Designing a line-search procedure in such a setup is a nontrivial task, and to the best of our knowledge, only a single paper in the literature deals with this issue [7]. Certainly, a properly designed line search has the potential to improve the practical performance of these methods.
Table 1: Complexity of randomized coordinate descent methods which were analyzed for an arbitrary sampling (λ is a strong convexity constant, x_0 is the starting point and x_* the optimal point).
• The size of these parameters directly influences the complexity of the method (see Table 1).
• There are problems for which updating more coordinates in each iteration, as opposed to updating just one, may not lead to fewer iterations [22] (which suggests that perhaps the resources should instead be utilized in some other way). Whether this happens or not can be understood through a careful study of the complexity result and its dependence, through the vectors p and v, on the number of coordinates updated in each iteration [22,27,19,4,6,17].
• The ESO assumption is generic in the sense that as soon as a function f and a sampling Ŝ satisfy it, the complexity result follows. This leads to a natural dichotomy in the study of coordinate descent: i) the search for new variants of coordinate descent (e.g., parallel, accelerated, distributed) and the study of their complexity under the ESO assumption, and ii) the search for pairs (f, Ŝ) for which one can compute v such that (f, Ŝ) ∼ ESO(v). Our current study follows this dichotomy: in [16] we deal with the algorithmic and complexity aspects, and in this paper we deal with the ESO aspect.

Complexity of coordinate descent
As mentioned above, the complexity of coordinate descent methods depends in a crucial way on the optimization problem, the sampling employed, and on the ESO parameters v = (v_1, . . ., v_n). In Table 1 we summarize all known complexity results³ which hold for an arbitrary sampling. Note that in all cases, the vectors p and v appear in the complexity bound. The bounds are not directly comparable as they apply to different optimization problems.
For instance, the NSync bound⁴ in Table 1 applies to the problem of unconstrained minimization of a smooth strongly convex function. It was in [20] where the general form of the ESO inequality used in this paper was first mentioned and used to derive a complexity result for a coordinate descent method with arbitrary sampling.
The Quartz algorithm [17], on the other hand, applies to a considerably more challenging problem, one of key importance in machine learning. In particular, it applies to the regularized empirical risk minimization problem, where the loss functions are convex and have Lipschitz gradients and the regularizer is strongly convex and possibly nonsmooth. Coordinate ascent is applied to the dual of this problem, and the bound appearing in Table 1 applies to the duality gap⁵.
The APPROX method was first proposed in [5] and then generalized to an arbitrary sampling (among other things) in [16]. In its accelerated variant it enjoys an O(1/√ε) rate, whereas its non-accelerated variant has a slower O(1/ε) rate. Again, the complexity of the method explicitly depends on the vector of probabilities p and the ESO parameter v.

Historical remarks
The ESO relation (2) was first introduced by Richtárik and Takáč [22] in the special case of uniform samplings, i.e., samplings for which P(i ∈ Ŝ) = P(j ∈ Ŝ) for all coordinates i, j ∈ {1, 2, . . ., n}. The uniformity condition is satisfied by a large variety of samplings; we refer the reader to [22] for a basic classification of uniform samplings (including overlapping, non-overlapping, doubly uniform, binomial, nice, serial and parallel samplings) and to [19,4,17] for further examples (e.g., the "distributed sampling"). The study of non-uniform samplings has until recently been confined to serial samplings only, i.e., to samplings which pick a single coordinate at a time. In [20] the authors propose a particular example of a parallel non-uniform sampling, where "parallel" refers to samplings for which P(|Ŝ| > 1) > 0, and "non-uniform" simply means not uniform. Further, they derive an ESO inequality for their sampling and a partially separable function. The proposed sampling is easy to generate (note that in general a sampling is described by assigning distinct probabilities to all 2^n subsets of [n], and hence most samplings will necessarily be hard to generate), and leads to strong ESO bounds which predict nearly linear speedup for NSync on sparse problems. A further example of a non-uniform sampling, the so-called "product sampling", was given in [17], and an associated ESO inequality was derived. Intuitively speaking, this sampling samples sets of "independent" coordinates, which leads to complexity scaling linearly with the size of the sampled sets. To the best of our knowledge, this is the state of the art: no further non-uniform samplings were proposed nor associated ESO inequalities derived.

Contributions
We now briefly list the contributions of this work.

ESO inequalities were previously established for special classes of samplings only, almost invariably for uniform samplings [22,19,6,4,5], and often using seemingly disparate approaches. We give the first systematic study of ESO inequalities for arbitrary samplings.
3. Our approach to deriving ESO inequalities is via the study of random principal submatrices of a positive semidefinite matrix. In particular, we give bounds on the largest eigenvalue of the mean of the random submatrix. This may be of independent interest.

Outline of the paper
Our paper is organized as follows. In Section 2 we describe the class of functions (f) we consider in this paper and briefly establish some basic terminology related to samplings (Ŝ). In Section 3 we study probability matrices associated with samplings (P(Ŝ)), in Section 4 we study eigenvalues of these probability matrices (λ(P(Ŝ)) and λ'(P(Ŝ))) and in Section 5 we design a general technique for computing the parameter v = (v_1, . . ., v_n) for which the ESO inequality holds (i.e., for which (f, Ŝ) ∼ ESO(v)). We illustrate the use of these techniques in Section 6 and conclude with Section 7.

Functions and samplings
Recall that in this paper we are concerned with establishing inequality (2), which we succinctly write as (f, Ŝ) ∼ ESO(v). In Section 2.1 we describe the class of functions f we consider in this paper and in Section 2.2 we briefly review several elementary facts related to samplings.

Functions
We assume in this paper that f : R^n → R is differentiable and that it satisfies the following assumption (functions will not reappear until Section 5).

Assumption 2.1.
There is an m-by-n matrix A such that for all x, h ∈ R^n,

f(x + h) ≤ f(x) + ⟨∇f(x), h⟩ + (1/2) h^⊤ A^⊤A h. (3)

In the subsequent text, we shall often refer to the set of columns of A for which the entry in the j-th row of A is nonzero:

J_j def= {i ∈ [n] : A_ji ≠ 0}. (4)

Assumption 2.1 holds for many functions of interest in optimization and machine learning. Coordinate descent methods for functions f explicitly required to satisfy Assumption 2.1 were studied in [1,19,4].
The following simple observation will help us relate the above assumption to the standing assumptions considered in various papers on randomized coordinate descent methods.
Proposition 2.1. Let f(x) = Σ_{j=1}^s φ_j(M_j x), where for each j, M_j ∈ R^{d×n} and the function φ_j : R^d → R has a γ_j-Lipschitz continuous gradient (with respect to the L2 norm). Then f satisfies Assumption 2.1 for the matrix A obtained by stacking the matrices √γ_1 M_1, . . ., √γ_s M_s on top of each other (so that A^⊤A = Σ_{j=1}^s γ_j M_j^⊤ M_j).

Proof. Since φ_j is γ_j-smooth, we have φ_j(M_j(x + h)) ≤ φ_j(M_j x) + ⟨∇φ_j(M_j x), M_j h⟩ + (γ_j/2)‖M_j h‖². It remains to add these inequalities for j = 1, . . ., s.
By I we denote the n-by-n identity matrix, and for S ⊆ [n] we will use the notation I_[S] for the n-by-n matrix obtained from I by retaining the elements I_ii for which i ∈ S and zeroing out all other elements.
We now apply Proposition 2.1 to several special cases:
1. Partially separable functions. Let

f(x) = Σ_{j=1}^s φ_j(x_[C_j]), (6)

where for each j, φ_j depends on the coordinates of x belonging to the set C_j only. By Proposition 2.1, f satisfies (3), where A is the n-by-n diagonal matrix with diagonal entries A_ii = (Σ_{j : i∈C_j} γ_j)^{1/2}. Functions of the form (6) (i.e., partially separable functions) were considered in the context of parallel coordinate descent methods in [22]. However, in [22] the authors only assume the sum f to have a Lipschitz gradient (which is more general, but somewhat complicates the analysis), whereas we assume that all component functions {φ_j}_j have a Lipschitz gradient.
2. Linear transformation of variables. Let s = 1. Then f is of the form

f(x) = φ(Mx), (7)

and by Proposition 2.1, f satisfies (3) with A = √γ M, where γ is the Lipschitz constant of ∇φ. Functions of the form (7) appear in the dual problem of the standard primal-dual formulation to which stochastic dual coordinate ascent methods are applied [26,25,32,9,17].
3. Sum of scalar functions depending on x through an inner product. Let d = 1 and M_j = e_j^⊤ M, where M ∈ R^{m×n} and e_j is the j-th unit coordinate vector in R^m. Then f is of the form

f(x) = Σ_{j=1}^m φ_j(e_j^⊤ Mx), (8)

and by Proposition 2.1, f satisfies (3) with A = Diag(√γ_1, . . ., √γ_m) M. Functions of the form (8) play an important role in the design of efficiently implementable accelerated coordinate descent methods [5,16]. These functions also appear in the primal problem of the standard primal-dual formulation to which stochastic dual coordinate ascent methods are applied.

Samplings
As defined in the introduction, by a sampling we mean a random set-valued mapping with values in 2^[n] (the set of subsets of [n]).
Classification of samplings. Following the terminology established in [22], we say that a sampling Ŝ is proper if p_i = P(i ∈ Ŝ) > 0 for all i ∈ [n]. We shall focus our attention on proper samplings, as otherwise there is a coordinate which is never chosen (and hence never updated by the coordinate descent method). Of key importance in this paper are elementary samplings, defined next.
Definition 2.1 (Elementary samplings). The elementary sampling associated with S ⊆ [n] is the sampling which selects the set S with probability one. We will denote it by Ê_S; that is, P(Ê_S = S) = 1.

By the image of a sampling Ŝ we mean the collection of sets which are chosen with positive probability: {S ⊆ [n] : P(Ŝ = S) > 0}. We say that Ŝ is nonoverlapping if no two sets in its image intersect. We say that the sampling is uniform if P(i ∈ Ŝ) = P(j ∈ Ŝ) for all i, j ∈ [n]. The class of uniform samplings is large; for examples (and properties) of notable subclasses, we refer the reader to [22] and [19].
We say that a sampling Ŝ is doubly uniform if it satisfies the following condition: if |S'| = |S''|, then P(Ŝ = S') = P(Ŝ = S''). Necessarily, every doubly uniform sampling is uniform [22]. The definition postulates an additional "uniformity" property ("equal cardinality implies equal probability"), whence the name. As described in [22], doubly uniform samplings are special in the sense that "good" ESO results can be proved for them. A notable subclass of the class of doubly uniform samplings are the τ-nice samplings for 1 ≤ τ ≤ n. The τ-nice sampling is obtained by picking a subset of cardinality τ uniformly at random (we give a precise definition below). This sampling is by far the most common in stochastic optimization, and corresponds to standard minibatching. It arises as a special case of the (c, τ)-distributed sampling (which, as its name suggests, can be used to design distributed variants of coordinate descent [19,4]), which we define next:

Definition 2.2 ((c, τ)-distributed sampling; [19,4,17]). Let P_1, . . ., P_c be a partition of {1, 2, . . ., n} such that |P_l| = s for all l. That is, sc = n. Now let Ŝ_1, . . ., Ŝ_c be independent τ-nice samplings from P_1, . . ., P_c, respectively. Then the sampling Ŝ = Ŝ_1 ∪ · · · ∪ Ŝ_c is called the (c, τ)-distributed sampling.
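As a quick numerical sanity check of the τ-nice sampling just described (the values of n and τ below are a hypothetical toy instance, not from the paper), one can enumerate its image and verify that every coordinate is selected with probability τ/n:

```python
import numpy as np
from itertools import combinations

# The tau-nice sampling picks every subset of [n] of cardinality tau with
# equal probability; its image therefore has C(n, tau) elements.
n, tau = 5, 2
image = list(combinations(range(n), tau))
prob = 1.0 / len(image)                      # uniform over the image

marginal = np.zeros(n)                       # marginal[i] = P(i in S^)
for S in image:
    for i in S:
        marginal[i] += prob

assert len(image) == 10                      # C(5, 2) = 10
assert np.allclose(marginal, tau / n)        # P(i in S^) = tau/n for all i
```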
The τ-nice sampling arises as a special case of the (c, τ)-distributed sampling (for c = 1).

Definition 2.3 (τ-nice sampling; [22,27,28,6,5]). A sampling Ŝ is called τ-nice if it picks only subsets of [n] of cardinality τ, uniformly at random. More formally, it is defined by P(Ŝ = S) = 1/C(n, τ) if |S| = τ, and P(Ŝ = S) = 0 otherwise, where C(n, τ) = n!/(τ!(n − τ)!).

Operations with samplings. We now define several basic operations with samplings (convex combination, intersection and restriction).
Definition 2.4 (Convex combination of samplings; [22]). Let Ŝ_1, . . ., Ŝ_k be samplings and let q_1, . . ., q_k be nonnegative scalars summing to 1. By Σ_{t=1}^k q_t Ŝ_t we denote the sampling obtained as follows: we first pick t ∈ {1, . . ., k} with probability q_t, and then sample according to Ŝ_t. More formally, Ŝ = Σ_{t=1}^k q_t Ŝ_t is defined by

P(Ŝ = S) = Σ_{t=1}^k q_t P(Ŝ_t = S), S ⊆ [n]. (11)

Note that (11) indeed defines a sampling, since Σ_{S⊆[n]} P(Ŝ = S) = Σ_t q_t Σ_{S⊆[n]} P(Ŝ_t = S) = Σ_t q_t = 1. Each sampling is a convex combination of elementary samplings. Indeed, for each Ŝ we have

Ŝ = Σ_{S⊆[n]} P(Ŝ = S) Ê_S. (12)

We now show that each doubly uniform sampling arises as a convex combination of τ-nice samplings.
Proposition 2.2. Let Ŝ be a doubly uniform sampling and let Ŝ_τ be the τ-nice sampling, for τ = 0, 1, . . ., n. Then Ŝ = Σ_{τ=0}^n P(|Ŝ| = τ) Ŝ_τ.

Proof. Fix any S ⊆ [n] and let q_τ = P(|Ŝ| = τ). Note that P(Ŝ = S) = q_{|S|} · (number of subsets of cardinality |S|)^{−1} = q_{|S|} P(Ŝ_{|S|} = S), where the last equality follows from the definition of doubly uniform and τ-nice samplings. The statement then follows from (11) (i.e., by the definition of a convex combination of samplings).
It will be useful to define two more operations with samplings: intersection and restriction.

Definition 2.5 (Intersection of samplings). For two samplings Ŝ_1 and Ŝ_2 we define the intersection Ŝ def= Ŝ_1 ∩ Ŝ_2 as the sampling whose realizations are the intersections of the realizations of Ŝ_1 and Ŝ_2: P(Ŝ = S) = P(Ŝ_1 ∩ Ŝ_2 = S).

Definition 2.6 (Restriction of a sampling). Let Ŝ be a sampling and J ⊆ [n]. By the restriction of Ŝ to J we mean the sampling Ê_J ∩ Ŝ. By abuse of notation we will also write this sampling as J ∩ Ŝ.
Graph sampling. Let G = (V, E) be an undirected graph with |V| = n vertices, where (i, i') is an edge in E if and only if there is j ∈ [m] such that {i, i'} ⊆ J_j. If S is an independent set of the graph G, then necessarily max_j |J_j ∩ S| ≤ 1. Denote by T the collection of all independent sets of the graph G. We now define graph sampling as follows:

Definition 2.7 (Graph sampling). A graph sampling associated with the graph G is any sampling Ŝ for which P(Ŝ = S) = 0 if S ∉ T. In other words, a graph sampling can only assign positive weights to independent sets of G.
Let Ŝ be a graph sampling. In view of (12), Ŝ = Σ_{S∈T} q_S Ê_S for some nonnegative constants q_S adding up to 1. Note that, necessarily, q_S = P(Ŝ = S) for all S ∈ T.

Probability matrix associated with a sampling
In this section we define the notion of a probability matrix associated with a sampling. As we shall see in later sections, this matrix encodes all information about Ŝ which is relevant for the development of ESO inequalities.

Definition 3.1 (Probability matrix). With each sampling Ŝ we associate an n-by-n "probability matrix" P = P(Ŝ) defined by

P_ij def= P({i, j} ⊆ Ŝ).

We shall write P(Ŝ) when it is important to indicate which sampling is behind the probability matrix; otherwise we simply write P.
For two matrices M_1 and M_2 of the same size, we denote by M_1 ∘ M_2 their Hadamard (i.e., elementwise) product. We use the same notation for the Hadamard product of vectors. For an arbitrary matrix M ∈ R^{n×n} and S ⊆ [n] we will use the notation M_[S] for the n-by-n matrix obtained from M by retaining the elements M_ij for which both i ∈ S and j ∈ S and zeroing out all other elements. In what follows, by E we denote the n-by-n matrix of all ones and by I we denote the n-by-n identity matrix. For any h = (h_1, . . ., h_n) ∈ R^n and S ⊆ [n] we will write

h_[S] def= Σ_{i∈S} h_i e_i,

where e_1, . . ., e_n are the standard basis vectors in R^n. Also note that h_[S] = I_[S] h. Using the notation we have just established, the probability matrices of elementary samplings are given by

P(Ê_S) = E_[S] = e_[S] e_[S]^⊤,

where e ∈ R^n is the vector of all ones. In particular, the matrix is rank-one and positive semidefinite.
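The rank-one structure is easy to verify numerically. The following sketch (the set J is an arbitrary illustration) builds P(Ê_J) = e_[J] e_[J]^⊤ and checks that it is rank-one and positive semidefinite:

```python
import numpy as np

# Probability matrix of the elementary sampling E^_J for J = {0, 2}, n = 4:
# P(E^_J) = E_[J], the all-ones matrix restricted to rows/columns in J.
n = 4
J = [0, 2]
e_J = np.zeros(n)
e_J[J] = 1.0                     # e_[J]: indicator vector of J
P = np.outer(e_J, e_J)           # E_[J] = e_[J] e_[J]^T

assert np.linalg.matrix_rank(P) == 1        # rank one
assert np.linalg.eigvalsh(P).min() >= -1e-12   # positive semidefinite
assert P[0, 2] == 1.0 and P[0, 1] == 0.0    # P({0,2} in S^) = 1, P({0,1}) = 0
```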

Representation of probability matrices
We now establish a simple but particularly insightful result, leading to many useful identities.
Theorem 3.1. For each sampling Ŝ we have

P(Ŝ) = E[E_[Ŝ]] = Σ_{S⊆[n]} P(Ŝ = S) E_[S]. (16)

In particular: (i) The set of probability matrices is the convex hull of the probability matrices corresponding to elementary samplings. (ii) P(Ŝ) is positive semidefinite.
Proof. The (i, j) element of the matrix on the right hand side is Σ_{S⊆[n]} P(Ŝ = S) (E_[S])_ij = Σ_{S : {i,j}⊆S} P(Ŝ = S) = P({i, j} ⊆ Ŝ) = P_ij, which establishes (16). Claim (i) follows from (16) since E_[S] = P(Ê_S). Claim (ii) follows from (16) since each E_[S] = e_[S] e_[S]^⊤ is positive semidefinite, and a convex combination of positive semidefinite matrices is positive semidefinite.

We have the following useful corollary:⁶

Corollary 3.1. Let Ŝ be any sampling, P = P(Ŝ), M ∈ R^{n×n} be an arbitrary matrix and h ∈ R^n. Then the following identities hold:

E[M ∘ E_[Ŝ]] = M ∘ P, (17)
E[h_[Ŝ]^⊤ M h_[Ŝ]] = h^⊤ (M ∘ P) h, (18)
E[(Σ_{i∈Ŝ} h_i)²] = h^⊤ P h, (19)
E[Σ_{i∈Ŝ} h_i²] = Σ_{i=1}^n p_i h_i², (20)
E[|Ŝ|²] = Σ_{i,j} P_ij, (21)
E[|Ŝ|] = Σ_{i=1}^n p_i. (22)

Proof. Since multiplying a matrix in the Hadamard sense by a fixed matrix is a linear operation, identity (17) follows by taking expectations in M ∘ E_[Ŝ] and applying Theorem 3.1. Next, identity (18) follows from (17), since h_[Ŝ]^⊤ M h_[Ŝ] = h^⊤ (M ∘ E_[Ŝ]) h. Identity (19) follows from (18) by setting M = E. Identity (20) holds since E[Σ_{i∈Ŝ} h_i²] = Σ_i P(i ∈ Ŝ) h_i². Finally, (21) (resp. (22)) follows from (19) (resp. (20)) by setting h = e.

⁶ Identities (19)–(22) were already established in [22], in a different way, without relying on Theorem 3.1, which is new. However, in this paper a key role is played by identities (17)–(18), which are also new. It was while proving these identities that we realized the fundamental nature of Theorem 3.1 as a vehicle for obtaining all the identities in Corollary 3.1 as a consequence. The identities will be needed in further development.

Operations with samplings
We now give formulas for the probability matrix of the sampling arising as a convex combination, intersection or a restriction, in terms of the probability matrices of the constituent samplings.
Convex combination of samplings. We have seen in (12) that each sampling is a convex combination of elementary samplings. In view of Theorem 3.1, the probability matrices of the samplings are related in the same way:

P(Ŝ) = Σ_{S⊆[n]} P(Ŝ = S) P(Ê_S).

More generally, as formalized in the following lemma, the probability matrix of a convex combination of samplings is equal to the convex combination of the probability matrices of these samplings.
Lemma 3.1. Let Ŝ_1, . . ., Ŝ_k be samplings and q_1, . . ., q_k be nonnegative scalars summing up to 1. Then

P(Σ_{t=1}^k q_t Ŝ_t) = Σ_{t=1}^k q_t P(Ŝ_t).

Proof. Let Ŝ be the convex combination of the samplings Ŝ_1, . . ., Ŝ_k and fix any i, j ∈ [n]. Then (P(Ŝ))_ij = P({i, j} ⊆ Ŝ) = Σ_{t=1}^k q_t P({i, j} ⊆ Ŝ_t) = Σ_t q_t (P(Ŝ_t))_ij.

Intersection of samplings. The probability matrix of the intersection of two independent samplings is equal to the Hadamard product of the probability matrices of these samplings. This is formalized in the following lemma.
Lemma 3.2. Let Ŝ_1, Ŝ_2 be independent samplings. Then

P(Ŝ_1 ∩ Ŝ_2) = P(Ŝ_1) ∘ P(Ŝ_2).

Restriction. By Lemma 3.2, the probability matrix of the restriction of an arbitrary sampling Ŝ to J ⊆ [n] is given by (we give several alternative ways of writing the result):

P(J ∩ Ŝ) = P(Ê_J) ∘ P(Ŝ) = E_[J] ∘ P(Ŝ) = (P(Ŝ))_[J].

Note that P(J ∩ Ŝ) is the matrix obtained from P(Ŝ) by keeping only the elements with i, j ∈ J and zeroing out all the rest. Furthermore, by combining the formulas derived above, we get

P(J ∩ Σ_{t=1}^k q_t Ŝ_t) = Σ_{t=1}^k q_t P(J ∩ Ŝ_t). (27)

Probability matrix of special samplings
The probability matrix of the (c, τ )-distributed samplings is computed in the following lemma.
Lemma 3.3. Let Ŝ be the (c, τ)-distributed sampling associated with the partition {P_1, . . ., P_c} (recall Definition 2.2). Then

P(Ŝ) = (τ/s) [(1 − β_1) I + (β_1 − β_2) B + β_2 E],

where β_1 = (τ − 1)/max(s − 1, 1), β_2 = τ/s, and B is the 0-1 matrix with B_ij = 1 if and only if i, j belong to the same partition.
Proof. Let P = P(Ŝ). It is easy to see that P_ij = τ/s if i = j; P_ij = τ(τ − 1)/(s(s − 1)) if i ≠ j and i, j belong to the same partition; and P_ij = (τ/s)² otherwise. The claimed formula follows by collecting these three cases into the terms involving I, B and E.

As a corollary of the above, in the c = 1 case we obtain the probability matrix of the τ-nice sampling:

Lemma 3.4. Fix 1 ≤ τ ≤ n and let Ŝ be the τ-nice sampling. Then

P(Ŝ) = (τ/n) [(1 − β) I + β E],

where β = (τ − 1)/max(n − 1, 1). If τ = 0, then P(Ŝ) is the zero matrix.
Proof. For τ ≥ 1 this follows from Lemma 3.3 in the special case when c = 1 (note that then s = n and B = E).

Finally, we compute the probability matrix of a doubly uniform sampling.

Lemma 3.5. Let Ŝ be a doubly uniform sampling and assume it is not nil (i.e., assume that P(Ŝ = ∅) ≠ 1). Then

P(Ŝ) = (E[|Ŝ|]/n) [(1 − β) I + β E],

where β = (E[|Ŝ|²] − E[|Ŝ|]) / (max(n − 1, 1) E[|Ŝ|]).

Proof. Letting q_τ = P(|Ŝ| = τ), by Proposition 2.2 we can write Ŝ = Σ_{τ=0}^n q_τ Ŝ_τ, where Ŝ_τ is the τ-nice sampling. It only remains to combine Lemma 3.1 and Lemma 3.4 and rearrange the result.
Note that Lemma 3.4 is a special case of Lemma 3.5 (covering the case when P(| Ŝ| = τ ) = 1 for some τ ).
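The closed form for the τ-nice probability matrix can be checked by brute force on a small instance (the values of n and τ below are illustrative): averaging E_[S] over all subsets of cardinality τ reproduces (τ/n)((1 − β)I + βE).

```python
import numpy as np
from itertools import combinations

n, tau = 6, 3
# Exact P(S^) of the tau-nice sampling, by averaging E_[S] over its image.
subsets = list(combinations(range(n), tau))
P = np.zeros((n, n))
for S in subsets:
    idx = np.array(S)
    P[np.ix_(idx, idx)] += 1.0 / len(subsets)

# Closed form from Lemma 3.4: P = (tau/n) * ((1 - beta) I + beta E).
beta = (tau - 1) / max(n - 1, 1)
closed = (tau / n) * ((1 - beta) * np.eye(n) + beta * np.ones((n, n)))
assert np.allclose(P, closed)
```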

Largest eigenvalues of the probability matrix
For an n × n positive semidefinite matrix M we denote by λ(M) the largest eigenvalue of M:

λ(M) def= max { h^⊤ M h : h^⊤ h ≤ 1 }.

For a vector v ∈ R^n, let Diag(v) be the diagonal matrix with v on the diagonal. For an n-by-n matrix M, Diag(M) denotes the diagonal matrix containing the diagonal of M. By λ'(M) we shall denote the "normalized" largest eigenvalue of M:

λ'(M) def= max { h^⊤ M h : h^⊤ Diag(M) h ≤ 1 }.

Note that 1 ≤ λ'(M) ≤ n.
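In code, λ and λ' can be computed as follows. We use the equivalent formulation λ'(M) = λ(Diag(M)^{−1/2} M Diag(M)^{−1/2}), which is our reading of the normalized maximization above and is valid when the diagonal of M is positive (the matrix M below is an arbitrary example):

```python
import numpy as np

def lam(M):
    # lambda(M): largest eigenvalue of a symmetric matrix M.
    return np.linalg.eigvalsh(M).max()

def lam_prime(M):
    # lambda'(M): largest eigenvalue after symmetric diagonal scaling,
    # lambda(Diag(M)^{-1/2} M Diag(M)^{-1/2}); assumes diag(M) > 0.
    d = np.sqrt(np.diag(M))
    return lam(M / np.outer(d, d))

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
assert np.isclose(lam(M), 3.0)            # eigenvalues of M are 3 and 1
assert np.isclose(lam_prime(M), 1.5)      # eigenvalues of [[1,.5],[.5,1]]
assert 1.0 <= lam_prime(M) <= M.shape[0]  # 1 <= lambda'(M) <= n
```

Note that λ' is invariant under positive rescaling of M, which is why it is a natural measure of the off-diagonal "mass" of a probability matrix.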
In this section we study the (standard and normalized) largest eigenvalues of the probability matrix associated with a sampling. Recall that by Theorem 3.1, P(Ŝ) is positive semidefinite for each sampling Ŝ. For convenience, we write λ(Ŝ) (resp. λ'(Ŝ)) instead of λ(P(Ŝ)) (resp. λ'(P(Ŝ))). We study these quantities since, as we will show in later sections, they are useful in computing the parameter v = (v_1, . . ., v_n) for which ESO holds.

Elementary samplings
In the case of elementary samplings the situation is simple. Indeed, for any J ⊆ [n] we have

λ(Ê_J) = λ(e_[J] e_[J]^⊤) = ‖e_[J]‖² = |J| and λ'(Ê_J) = |J|. (38)

This can, in fact, be seen as a consequence of a more general identity⁷ for arbitrary symmetric rank-one matrices: for any x ∈ R^n, we have λ(xx^⊤) = ‖x‖² and λ'(xx^⊤) = |{i : x_i ≠ 0}|. Since P(Ê_J) = E_[J] and Diag(E_[J]) = I_[J], (38) can equivalently be written as

max { h^⊤ E_[J] h : h^⊤ I_[J] h ≤ 1 } = |J|,

and we add that the bound is tight.

Bounds for arbitrary samplings
In the first result of this section we give sharp bounds on λ'(Ŝ) for an arbitrary sampling Ŝ.

Theorem 4.1. Let Ŝ be an arbitrary proper sampling. Then: (i) λ'(Ŝ) ≥ E[|Ŝ|²]/E[|Ŝ|] ≥ 1. (ii) If |Ŝ| ≤ τ with probability one, then λ'(Ŝ) ≤ τ. (iii) If |Ŝ| = τ with probability one, then λ'(Ŝ) = τ.
Proof. (i) For simplicity, let P = P(Ŝ). If e ∈ R^n is the vector of all ones, then λ'(Ŝ) ≥ e^⊤Pe / (e^⊤ Diag(P) e) = e^⊤Pe / Tr(P) ≥ 1, where the last inequality holds since Tr(P) is upper bounded by the sum of all elements of P. It remains to apply identities (21) and (22).
(iii) The result follows by combining the upper and lower bounds.
In the next result we study the quantity λ(Ŝ).
Theorem 4.2. The following statements hold: (i) Lower and upper bounds. For any sampling Ŝ we have

E[|Ŝ|²]/n ≤ λ(Ŝ) ≤ E[|Ŝ|]. (41)

(ii) Sharper upper bound. If Ŝ is uniform and |Ŝ| ≤ τ with probability one, then the upper bound can be improved to λ(Ŝ) ≤ τ E[|Ŝ|]/n. (iii) Identity. If Ŝ is uniform and |Ŝ| = τ with probability one, then λ(Ŝ) = τ²/n.

Proof. (i) The upper bound holds since λ(Ŝ) is the maximal eigenvalue of P(Ŝ) and is hence bounded by the trace: by (23), E[|Ŝ|] = Tr(P(Ŝ)). The lower bound follows from λ(Ŝ) ≥ e^⊤ P(Ŝ) e / (e^⊤ e) = E[|Ŝ|²]/n. (ii) By combining (37) and Theorem 4.1(ii) we obtain λ(Ŝ) = (E[|Ŝ|]/n) λ'(Ŝ) ≤ τ E[|Ŝ|]/n. (iii) The result follows by combining the lower bound from (i) with the upper bound in (ii).
A natural lower bound for λ(Ŝ) (the largest eigenvalue of P(Ŝ)) is E[|Ŝ|]/n (the average of the eigenvalues of P(Ŝ)). Notice that the lower bound in (41) is better than this. Moreover, observe that both bounds in (41) are tight. Indeed, in view of (38), the upper bound is achieved for any elementary sampling. The lower bound is also tight, in view of part (iii) of the theorem.

Bounds for restrictions of selected samplings
In this part we study the normalized eigenvalue associated with the restriction of a few selected samplings (or families of samplings). In particular, we first give a (necessarily rough) bound that holds for arbitrary samplings, followed by a bound for the (c, τ)-distributed sampling and the τ-nice sampling (both are specific uniform samplings). Finally, we give a bound for the family of doubly uniform samplings.

Proposition 4.1. Let Ŝ be an arbitrary sampling and let τ be such that |Ŝ| ≤ τ with probability 1. Then for all ∅ ≠ J ⊆ [n] we have

λ'(J ∩ Ŝ) ≤ min{|J|, τ}. (42)

Proof.⁸ Note that |J ∩ Ŝ| ≤ min{|J|, τ} with probability 1. We only need to apply the upper bound in Theorem 4.1 to the restriction sampling J ∩ Ŝ.
We now specialize the above result to the c = 1 case, obtaining a formula for λ'(J ∩ Ŝ) in the case when Ŝ is the τ-nice sampling (recall Definition 2.3).

Proposition 4.3. Let Ŝ be the τ-nice sampling. Then for all ∅ ≠ J ⊆ [n],

λ'(J ∩ Ŝ) = 1 + ((|J| − 1)(τ − 1)) / max(n − 1, 1). (47)

Proof. Let ∅ ≠ J ⊆ [n]. Since the τ-nice sampling is the (1, τ)-distributed sampling, applying Proposition 4.2 gives the upper bound. Next, by direct calculation we can verify that E[|J ∩ Ŝ|²] / E[|J ∩ Ŝ|] = 1 + (|J| − 1)(τ − 1)/max(n − 1, 1), which together with the lower bound established in Theorem 4.1 yields the claimed identity.
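The identity (47) can also be verified by brute force on a small instance (the values of n, τ and J below are arbitrary illustrations): build the exact probability matrix of the τ-nice sampling, restrict it to J, and compare the normalized largest eigenvalue with the formula.

```python
import numpy as np
from itertools import combinations

def lam_prime_on_support(M, J):
    # Normalized largest eigenvalue of M restricted to rows/columns in J.
    sub = M[np.ix_(J, J)]
    d = np.sqrt(np.diag(sub))
    return np.linalg.eigvalsh(sub / np.outer(d, d)).max()

n, tau = 6, 3
J = [0, 1, 4]                                   # a hypothetical support set
subsets = list(combinations(range(n), tau))
P = np.zeros((n, n))
for S in subsets:
    idx = np.array(S)
    P[np.ix_(idx, idx)] += 1.0 / len(subsets)   # exact P(S^) for tau-nice

predicted = 1 + (len(J) - 1) * (tau - 1) / max(n - 1, 1)
assert np.isclose(lam_prime_on_support(P, J), predicted)
```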
Note that (47) is much better (i.e., smaller) than the right hand side of (42). This is to be expected, as the bound (42) applies to all samplings (which have size at most τ with probability 1).

Expected Separable Overapproximation
In this section we develop a general technique for computing parameters v = (v_1, . . ., v_n) for which the ESO inequality (2) holds.

General technique
We will write M_1 ⪰ M_2 (resp. M_1 ⪯ M_2) to indicate that M_1 − M_2 (resp. M_2 − M_1) is positive semidefinite. It is a well known fact that the Hadamard product of two positive semidefinite matrices is positive semidefinite:

M_1 ⪰ 0, M_2 ⪰ 0 ⟹ M_1 ∘ M_2 ⪰ 0. (49)

The reason for defining and studying the probability matrices P(Ŝ) is motivated by the following result, which for functions satisfying Assumption 2.1 reduces the ESO assumption (f, Ŝ) ∼ ESO(v) to the problem of bounding the Hadamard product of the probability matrix P(Ŝ) and the data matrix A^⊤A from above by a diagonal matrix. Note that because P(Ŝ) ⪰ 0, in view of (49), the Hadamard product P(Ŝ) ∘ A^⊤A is positive semidefinite.
Theorem 5.1. Let f satisfy Assumption 2.1 and let Ŝ be a proper sampling. If

P(Ŝ) ∘ A^⊤A ⪯ Diag(p ∘ v) (50)

for some vector v ∈ R^n_{++}, where p is the vector of probabilities defined in (1), then (f, Ŝ) ∼ ESO(v).

Proof. Let us substitute h ← h_[Ŝ] into (3) and take expectations in Ŝ on both sides. Applying (18), we obtain

E[f(x + h_[Ŝ])] ≤ f(x) + Σ_{i=1}^n p_i ∇_i f(x) h_i + (1/2) h^⊤ (P(Ŝ) ∘ A^⊤A) h.

It remains to apply assumption (50).
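A numerical sketch of Theorem 5.1 follows (all data is randomly generated for illustration; the serial uniform sampling is used so that the expectation is exactly computable): once v is chosen so that (50) holds, the ESO inequality (2) can be checked directly for the quadratic f(x) = (1/2)‖Ax‖².

```python
import numpy as np

# Check "P(S^) o A^T A <= Diag(p o v)  =>  ESO" for f(x) = (1/2)||Ax||^2
# and the uniform serial sampling (illustrative data, not from the paper).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
n = A.shape[1]
G = A.T @ A
f = lambda x: 0.5 * np.dot(A @ x, A @ x)
grad = lambda x: G @ x

p = np.full(n, 1.0 / n)
P = np.diag(p)                       # serial sampling: P_ij = 0 for i != j
H = P * G                            # Hadamard product P o A^T A
v = np.linalg.eigvalsh(H).max() / p  # makes Diag(p o v) - P o G PSD

x = rng.standard_normal(n)
h = rng.standard_normal(n)

# E[f(x + h_[S^])]: average over the n equally likely singleton outcomes.
lhs = np.mean([f(x + h[i] * np.eye(n)[i]) for i in range(n)])
rhs = f(x) + np.sum(p * grad(x) * h) + 0.5 * np.sum(p * v * h ** 2)
assert lhs <= rhs + 1e-12            # the ESO inequality (2) holds
```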
We next focus on the problem of finding a vector v for which (50) holds. The following consequence of the definition of λ' will be helpful in this regard: for any M ⪰ 0,

M ⪯ λ'(M) Diag(M). (52)

In particular, (52), combined with (49), can be used to establish the first part of the following useful lemma.

Lemma 5.2. Let M_1 ⪰ 0 and M_2 ⪰ 0. Then

M_1 ∘ M_2 ⪯ λ'(M_1) Diag(M_1) Diag(M_2). (53)

ESO I: no coupling between the sampling and data
By applying Lemma 5.2, Eq. (53), to M_1 = P(Ŝ) and M_2 = A^⊤A, we obtain a formula for v satisfying (50):

v_i = λ'(Ŝ) Σ_{j=1}^m A_ji², i ∈ [n]. (56)
A refined approach couples the sampling with the data through the sets J_j: one can show (Theorem 5.2) that (50), and hence the ESO, also holds with

v_i = Σ_{j=1}^m λ'(J_j ∩ Ŝ) A_ji², i ∈ [n]. (57)

The benefit of this approach is twofold. First, if the data matrix A is sparse, the sets J_j have small cardinality, and from Proposition 4.1 (or other results in Section 4.3, depending on the sampling Ŝ used) we conclude that λ'(J_j ∩ Ŝ) is small. Hence, the parameters v_i obtained through (57) get better (i.e., smaller) with sparser data. Second, the formula for v_i does not involve the need to compute an eigenvalue associated with the data matrix. On the other hand, instead of having to compute λ'(Ŝ) (which, as we have seen, is equal to τ if |Ŝ| = τ with probability 1), we now need to compute the normalized largest eigenvalues of m restrictions of Ŝ, namely λ'(J_j ∩ Ŝ) for j = 1, 2, . . ., m. However, for this there is a good upper bound available through Proposition 4.1 for an arbitrary sampling, and refined bounds can be derived for specific samplings (for examples, see Section 4.3).
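The following sketch illustrates the data-coupled formula (57) for the τ-nice sampling (the sparse matrix A is an illustrative assumption): the values λ'(J_j ∩ Ŝ) come from Proposition 4.3, and the sufficient condition (50) is then verified by brute force.

```python
import numpy as np
from itertools import combinations

# ESO parameters without a data eigenvalue computation: for the tau-nice
# sampling, v_i = sum_j lambda'(J_j n S^) A_ji^2, with
# lambda'(J_j n S^) = 1 + (|J_j| - 1)(tau - 1)/max(n - 1, 1).
A = np.array([[1.0, 2.0, 0.0, 0.0],
              [0.0, 1.0, 3.0, 0.0],
              [0.0, 0.0, 1.0, 2.0]])   # hypothetical sparse data
m, n = A.shape
tau = 2
beta = (tau - 1) / max(n - 1, 1)

J = [np.flatnonzero(A[j]) for j in range(m)]           # supports of the rows
v = np.array([sum((1 + (len(J[j]) - 1) * beta) * A[j, i] ** 2
                  for j in range(m)) for i in range(n)])

# Verify the sufficient condition P(S^) o A^T A <= Diag(p o v).
subsets = list(combinations(range(n), tau))
P = np.zeros((n, n))
for S in subsets:
    idx = np.array(S)
    P[np.ix_(idx, idx)] += 1.0 / len(subsets)
p = np.diag(P)                                         # equals tau/n
gap = np.diag(p * v) - P * (A.T @ A)
assert np.linalg.eigvalsh(gap).min() >= -1e-12         # PSD, so (50) holds
```

Because each J_j here has cardinality at most 2, the factors 1 + (|J_j| − 1)β stay close to 1, illustrating how sparsity shrinks the parameters v_i.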

ESO without eigenvalues
In this section we illustrate the use of the techniques developed in the preceding sections to derive ESO inequalities, for selected samplings, which do not depend on any eigenvalues and lead to easily computable ESO parameters v = (v_1, . . ., v_n). The techniques can be used to derive similar ESO inequalities for other samplings as well.

Proposition 6.1. Let f satisfy Assumption 2.1 and let the sets J_1, . . ., J_m be defined as in (4). Then (f, Ŝ) ∼ ESO(v) provided that the sampling Ŝ and the vector v are chosen in any of the following ways:

(i) Ŝ is an arbitrary sampling such that |Ŝ| ≤ τ with probability 1, and

v_i = Σ_{j=1}^m min{|J_j|, τ} A_ji². (59)

(ii) Ŝ is the (c, τ)-distributed sampling and v_i = Σ_{j=1}^m λ'(J_j ∩ Ŝ) A_ji², with λ'(J_j ∩ Ŝ) bounded as in Proposition 4.2.

(iii) Ŝ is the τ-nice sampling (for τ ≥ 1) and

v_i = Σ_{j=1}^m (1 + ((|J_j| − 1)(τ − 1)) / max(n − 1, 1)) A_ji². (61)

(iv) Ŝ is a doubly uniform sampling (which is not nil) and v_i = Σ_{j=1}^m λ'(J_j ∩ Ŝ) A_ji², with λ'(J_j ∩ Ŝ) bounded as in Proposition 4.4.

(v) Ŝ is a graph sampling and

v_i = Σ_{j=1}^m A_ji². (63)

(vi) Ŝ is a serial sampling (i.e., a sampling for which |Ŝ| = 1 with probability 1) and v = (v_1, . . ., v_n) is defined as in (63).
Proof. (i) A direct consequence of Theorem 5.2 and Proposition 4.1.
(ii) A direct consequence of Theorem 5.2 and Proposition 4.2.
(iii) This is a special case of part (ii) for c = 1.
(iv) A direct consequence of Theorem 5.2 and Proposition 4.4.
(v) For a graph sampling it is clear that |J_j ∩ Ŝ| ≤ 1 with probability 1 for all j ∈ [m]. The result then follows from Theorem 5.2.
(vi) A special case of (v). Indeed, a single vertex is an independent set of a graph.
Remarks: Note that part (i) of Proposition 6.1 is a strict improvement on (56). It is also a strict improvement, both in the quality of the bound and in the generality of the sampling, on the result in [22], which was proved for uniform samplings only and where the bound involved max_j |J_j| instead of |J_j|. Part (ii) should be compared with the results obtained in [4] and part (iii) with those in [5,22].

Optimal sampling
Proposition 6.1 should be understood in the context of complexity results for randomized coordinate descent, such as those in Table 1. For instance, in view of (59), for an arbitrary sampling Ŝ such that |Ŝ| ≤ τ with probability 1, the accelerated coordinate descent method developed in [16] has the complexity (64). Naturally, the bound improves if we use a specialized sampling, such as the τ-nice sampling (since the constants v_i become smaller). Sometimes, one can find a sampling which minimizes the complexity bound. For instance, if we restrict our attention to serial samplings only (samplings picking a single coordinate at a time), then one can find probabilities p_1, . . ., p_n, which uniquely define such a sampling, minimizing the complexity bound; the optimal probabilities are given in (65), where w_i = Σ_j A_ji². Note that if the i-th coordinate is optimal at the starting point (i.e., if x⁰_i = x*_i), then the prediction is to choose p_i = 0 (i.e., to never update coordinate i); this is what one would expect. Using the serial sampling defined by (65), the complexity (64) takes the form (66), where d ∈ R^n with d_i = w_i^{1/6} (x⁰_i − x*_i)^{1/3} and ‖d‖_q = (Σ_{i=1}^n d_i^q)^{1/q}. However, if the uniform serial sampling is used instead (each coordinate is chosen with probability p_i = 1/n), then the complexity (64) has the form (67). While ‖d‖_6 ≤ ‖d‖_2 for all d, these quantities can be equal, in which case C_opt is n times better than C_unif.

Conclusion
We have conducted a systematic study of ESO inequalities for a large class of functions (those satisfying Assumption 2.1) and arbitrary samplings. These inequalities are crucial in the design and complexity analysis of randomized coordinate descent methods. This led us to study the standard and normalized largest eigenvalues of the Hadamard product of the probability matrix associated with a sampling and a certain positive semidefinite matrix containing the data defining the function. Using our approach we have established new ESO results and also re-derived ESO results already established in the literature (in the case of uniform samplings) via different techniques. Our approach can be used to derive further bounds for specific samplings and can potentially be of interest outside the domain of randomized coordinate descent.

Table 2: Notation appearing frequently in the paper.

Samplings:
P = P(Ŝ) : n-by-n probability matrix, P_ij = P({i, j} ⊆ Ŝ) (Sec 3)
p_i : p_i = P_ii = P(i ∈ Ŝ) (1)
p : p = (p_1, . . ., p_n)^⊤ ∈ R^n (1)

Matrices and vectors:
e : the n-by-1 vector of all ones
e_i : the i-th unit coordinate vector in R^n
h_[S] : for h ∈ R^n and S ⊆ [n], defined by h_[S] = Σ_{i∈S} h_i e_i
A : m-by-n data matrix defining f (3)
J_j : the set of i ∈ [n] for which A_ji ≠ 0 (4)
I : n-by-n identity matrix
E : n-by-n matrix of all ones
Diag : outputs a diagonal matrix based on its argument (matrix or vector)
∘ : Hadamard (elementwise) product of two matrices or vectors
M_[S] : restriction of matrix M ∈ R^{n×n} to rows and columns indexed by S