## COLT 2016 videos

This is a one-sentence post for the set {blog followers}\{Twitter/Facebook followers} to advertise the COLT 2016 videos which have just been posted!

## Notes on least-squares, part II

As promised at the end of Part I, I will give some intuition for the Bach and Moulines analysis of constant step size SGD for least-squares. Let us recall the setting. Let be a pair of random variables with where . Let be the convex quadratic function defined by

Observe that, with the notation and one has

We assume that (in particular this implies ) and . For instance these assumptions are respectively satisfied with almost surely, and independent of . Recall that constant step-size SGD on is defined by the following iteration (where is a sequence of i.i.d. copies of ), for ,

Our goal is to show that for one has (for a sequence we denote by its average )
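Since the formulas in this post were stated abstractly, here is a minimal numerical sketch of the object under study: constant step-size SGD on a random least-squares instance, with averaging of the iterates. All concrete values below (dimension, step size, noise level, number of samples, feature distribution) are illustrative assumptions, not taken from the Bach and Moulines analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, eta = 5, 20000, 0.1               # dimension, samples, constant step size (illustrative)
w_star = rng.standard_normal(d)          # hypothetical minimizer of f

w = np.zeros(d)                          # SGD iterate, started at 0
w_bar = np.zeros(d)                      # running average of the iterates
for t in range(1, n + 1):
    x = rng.standard_normal(d) / np.sqrt(d)        # feature vector with E||x||^2 = 1
    y = x @ w_star + 0.1 * rng.standard_normal()   # noisy linear response
    w = w - eta * (x @ w - y) * x                  # stochastic gradient step on the squared loss
    w_bar += (w - w_bar) / t                       # online average of the iterates

err = np.linalg.norm(w_bar - w_star)     # the averaged iterate should be close to w_star
```

The point of the sketch is the iteration itself: a constant step size (no decaying schedule) combined with averaging of the iterates, which is exactly the scheme whose rate is analyzed in this post.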

Decomposition into a noise-free SGD and a pure noise SGD

With the notation , , and (with the convention ) one can write an iteration of SGD as

Interestingly, since , we have with ,

In other words the convergence analysis decomposes into two terms: which corresponds to a noise-free SGD (it is as if we were running SGD with ) and which corresponds to a pure noise SGD (it is as if we were running SGD with ).

Noise-free SGD

The analysis of the noise-free SGD is the usual drill: let and recall that (recall that in the noise-free case one has ), thus

which directly yields and thus for one gets which finally yields (by convexity of )

Thus from now on we can restrict our attention to the pure noise case, and the goal is to show that in this case one has .

The pure noise case is the difficult part of the proof (this was to be expected: in the SGD analysis 101 one usually cancels the effect of the noise by taking a very small step size). For this case Bach and Moulines (following in the footsteps of Polyak and Juditsky [1992]) suggest first studying a “full gradient” method, that is, replacing the term by in the SGD iteration. More precisely the “full gradient” analysis would amount to studying the sequence defined by (recall that we are in the pure noise case and thus ) and

We expect that will manage to stay pretty close to the optimum since we are doing full gradient steps, and thus it should only suffer the inherent variance of the problem (recall that the statistical lower bound for this problem is ). It is actually easy to prove that this intuition is correct: first note that

(1)

and thus for (which implies )

which implies (the cross terms vanish since is centered, and also recall that )

(2)

Difference between full gradient and SGD

Let us now see how different the sequence is from our real target :

(3)

In words we see that the difference between the two sequences satisfies exactly the same recursion as SGD, except that the noise term is replaced by . This is pretty good news since we expect to be of order : indeed is a full gradient method starting from , so it should stay close to up to the noise term ; again this intuition is easy to make precise since with (X) one easily sees that . In other words while corresponds to SGD with noise (the covariance of this noise is bounded by ), we see that corresponds to SGD with noise (now the covariance of this noise is bounded by ). Thus we see that the new noise term is a factor smaller. In particular we expect to have

(this comes from the usual decomposition of the “average regret” for SGD as the distance to the optimum over –which is here– plus times the average of the norm squared of the noise –which is .)

Not quite there but close

So far we have obtained for the pure noise case:

We fall short of what we were hoping to prove (which is ) because is not close enough to . Morally the above shows that can be viewed as a order approximation to , with a remainder term of order . What about continuing this expansion of as a polynomial in ? Let us see what the next term should look like. First we hope that this term will also satisfy (2) and thus it should correspond to a full gradient method. On the other hand the noise term in this full gradient method should absorb the noise term in (3) so that we can hope that the noise term in the new (3) will be smaller. This leads to:

(4)

and thus (1) rewrites as

In fact there is no reason to stop there, and for any one can define

Denoting for a positive number such that the covariance matrix of (which is the noise term in the “full gradient with noise” corresponding to both and ) is bounded by , one gets with the same arguments as above

and

In particular to conclude it suffices to show that is geometrically decreasing. In fact an easy induction shows that one can take (we already did the base case ).

That’s it for today. In part III I will discuss how to use random projections to speed up computations for least squares in both the large sample regime and in the high-dimensional regime .

## Bandit theory, part II

These are the lecture notes for the second part of my minicourse on bandit theory (see here for Part 1).

The linear bandit problem, Auer [2002]

We will mostly study the following far-reaching extension of the -armed bandit problem.

Known parameters: compact action set , adversary’s action set , number of rounds .

Protocol: For each round , the adversary chooses a loss vector and simultaneously the player chooses based on past observations and receives a loss/observation .

Other models: In the i.i.d. model we assume that there is some underlying such that . In the Bayesian model we assume that we have a prior distribution over the sequence (in this case the expectation in is also over ). Alternatively we could assume a prior over .

Example: Part 1 was about and . Another simple example is path planning: say you have a graph with edges, and at each step one has to pick a path from some source node to a target node. The action set can be represented as a subset of the hypercube . The adversary chooses delays on the edges, and the delay of the chosen path is the sum of the delays on the edges that the path visits (this is indeed a linear loss).

Assumption: unless specified otherwise we assume .

Other feedback model: in the case where one can assume that the loss is observed at every coordinate where . This is the so-called semi-bandit feedback.

Thompson Sampling for linear bandit after RVR14

Assume . Recall from Part 1 that TS satisfies

where and .

Writing , , and we want to show that

Using the eigenvalue formula for the trace and the Frobenius norm one can see that . Moreover the rank of is at most since where (the row of is and for it is ).

Let us make some observations.

1. TS satisfies . To appreciate the improvement recall that without the linear structure one would get a regret of order and that can be exponential in the dimension (think of the path planning example).
2. Provided that one can efficiently sample from the posterior on (or on ), TS just requires at each step one linear optimization over .
3. TS’s regret bound is optimal in the following sense. W.l.o.g. one can assume and thus TS satisfies for any action set. Furthermore one can show that there exists an action set and a prior such that for any strategy one has , see Dani, Hayes and Kakade [2008], Rusmevichientong and Tsitsiklis [2010], and Audibert, Bubeck and Lugosi [2011, 2014].
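To make the observations above concrete, here is a minimal sketch of Thompson Sampling for a linear bandit with a Gaussian prior/posterior over the unknown parameter and a toy finite action set. Every numerical value (dimension, horizon, noise level, the hidden parameter, the regularization) is an illustrative assumption; rewards are used instead of losses for readability.

```python
import numpy as np

rng = np.random.default_rng(1)
d, T, sigma, lam = 3, 2000, 0.1, 1.0
actions = np.eye(d)                        # toy finite action set: the canonical basis
theta_star = np.array([0.3, 0.5, 0.1])     # hypothetical hidden parameter (mean rewards)

V = lam * np.eye(d)                        # regularized design matrix
b = np.zeros(d)
pulls = np.zeros(d)
for t in range(T):
    Vinv = np.linalg.inv(V)
    mean = Vinv @ b                        # posterior mean of theta
    theta = rng.multivariate_normal(mean, sigma**2 * Vinv)  # one posterior sample
    i = int(np.argmax(actions @ theta))    # linear optimization over the action set
    a = actions[i]
    r = a @ theta_star + sigma * rng.standard_normal()
    V += np.outer(a, a)                    # rank-one update of the design matrix
    b += r * a
    pulls[i] += 1
```

Note how each round only requires one posterior sample and one linear optimization over the action set, which is exactly the computational appeal of TS mentioned in observation 2.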

Recall from Part 1 that exponential weights satisfies for any such that and ,

DHK08 proposed the following (beautiful) unbiased estimator for the linear case:

Again, amazingly, the variance is automatically controlled:

Up to the issue that can take negative values this suggests the “optimal” regret bound.

1. The non-negativity issue of is a manifestation of the need for added exploration. DHK08 used a suboptimal exploration which led to an additional in the regret. This was later improved in Bubeck, Cesa-Bianchi, and Kakade [2012] with an exploration based on John’s ellipsoid (the smallest ellipsoid containing ). You can check this video for some more details on this.
2. Sampling the exp. weights is usually computationally difficult, see Cesa-Bianchi and Lugosi [2009] for some exceptions.
3. Abernethy, Hazan and Rakhlin [2008] proposed an alternative (beautiful) strategy based on mirror descent. The key idea is to use a -self-concordant barrier for as a mirror map and to sample points uniformly in Dikin ellipses. This method’s regret is suboptimal by a factor and the computational efficiency depends on the barrier being used.
4. Bubeck and Eldan [2014]‘s entropic barrier allows for a much more information-efficient sampling than AHR08. This gives another strategy with optimal regret which is efficient when is convex (and one can do linear optimization on ). You can check this video for some more on the entropic barrier.

Adversarial combinatorial bandit after Audibert, Bubeck and Lugosi [2011, 2014]

Combinatorial setting: , , .

1. Full information case goes back to the end of the 90’s (Warmuth and co-authors), semi-bandit and bandit were introduced in Audibert, Bubeck and Lugosi [2011] (following several papers that studied specific sets ).
2. This is a natural setting to study FPL-type (Follow the Perturbed Leader) strategies, see e.g. Kalai and Vempala [2004] and more recently Devroye, Lugosi and Neu [2013].
3. ABL11: Exponential weights is provably suboptimal in this setting! This is in sharp contrast with the case where .
4. Optimal regret in the semi-bandit case is and it can be achieved with mirror descent and the natural unbiased estimator for the semi-bandit situation.
5. For the bandit case the bound for exponential weights from the previous slides gives (you should read this as “range of the loss times square root of the dimension times time times log-size of the set”). However the lower bound from ABL14 is , which is conjectured to be tight.

Preliminaries for the i.i.d. case: a primer on least squares

Assume where is an i.i.d. sequence of centered and sub-Gaussian real-valued random variables. The (regularized) least squares estimator for based on is, with and :

Observe that we can also write where so that

A basic martingale argument (see e.g., Abbasi-Yadkori, Pal and Szepesvari [2011]) shows that w.p. , ,

Note that (w.l.o.g. we assumed ).
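A minimal numerical sketch of the regularized least squares estimator just defined, in the closed form given above (the dimension, sample size, regularization and noise level below are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, lam, sigma = 4, 500, 1.0, 0.1
theta_star = rng.standard_normal(d)
theta_star /= np.linalg.norm(theta_star)   # normalize so that ||theta|| <= 1

X = rng.standard_normal((n, d))            # one played action per row
y = X @ theta_star + sigma * rng.standard_normal(n)  # sub-Gaussian noise

V = lam * np.eye(d) + X.T @ X              # regularized design matrix
theta_hat = np.linalg.solve(V, X.T @ y)    # ridge estimate: V^{-1} X^T y

err = np.linalg.norm(theta_hat - theta_star)
```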

i.i.d. linear bandit after DHK08, RT10, AYPS11

Let , and . We showed that w.p. one has for all .

The appropriate generalization of UCB is to select: (this optimization is NP-hard in general, more on that next slide). Then one has on the high-probability event:

To control the sum of squares we observe that:

so that (assuming )

Putting things together we see that the regret is .
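The optimistic selection rule above becomes tractable when the action set is a small finite set, and a toy sketch may help. The fixed confidence-width parameter `beta` and all problem sizes below are illustrative assumptions (in the actual analysis the width grows slowly with time); rewards are used instead of losses.

```python
import numpy as np

rng = np.random.default_rng(3)
d, T, sigma, lam, beta = 3, 2000, 0.1, 1.0, 1.0
actions = np.vstack([np.eye(d), -np.eye(d)])  # toy finite action set on the unit sphere
theta_star = np.array([0.2, -0.4, 0.1])       # hypothetical hidden parameter

V = lam * np.eye(d)
b = np.zeros(d)
opt = (actions @ theta_star).max()            # best achievable mean reward
regret = 0.0
for t in range(T):
    Vinv = np.linalg.inv(V)
    theta_hat = Vinv @ b                      # regularized least squares estimate
    # optimistic index: estimated reward plus exploration bonus ||a||_{V^{-1}}
    widths = np.sqrt(np.einsum('ij,jk,ik->i', actions, Vinv, actions))
    a = actions[np.argmax(actions @ theta_hat + beta * widths)]
    r = a @ theta_star + sigma * rng.standard_normal()
    regret += opt - a @ theta_star
    V += np.outer(a, a)                       # rank-one design update
    b += r * a
```

The exploration bonus shrinks along directions that have been played often, which is exactly the mechanism controlled by the sum-of-squares argument above.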

What’s the point of i.i.d. linear bandit?

So far we did not get any real benefit from the i.i.d. assumption (the regret guarantee we obtained is the same as for the adversarial model). To me the key benefit is in the simplicity of the i.i.d. algorithm which makes it easy to incorporate further assumptions.

1. Sparsity of : instead of regularization with -norm to define one could regularize with -norm, see e.g., Johnson, Sivakumar and Banerjee [2016] (see also Carpentier and Munos [2012] and Abbasi-Yadkori, Pal and Szepesvari [2012]).
2. Computational constraint: instead of optimizing over to define one could optimize over an -ball containing (this would cost an extra in the regret bound).
3. Generalized linear model: for some known increasing , see Filippi, Cappe, Garivier and Szepesvari [2011].
4. -regime: if is finite (note that a polytope is effectively finite for us) one can get regret:

Some non-linear bandit problems

1. Lipschitz bandit: Kleinberg, Slivkins and Upfal [2008, 2016], Bubeck, Munos, Stoltz and Szepesvari [2008, 2011], Magureanu, Combes and Proutiere [2014]
2. Gaussian process bandit: Srinivas, Krause, Kakade and Seeger [2010]
3. Convex bandit: see the videos by myself and Ronen Eldan here and our arxiv paper.

Contextual bandit

We now make the game-changing assumption that at the beginning of each round a *context* is revealed to the player. The ideal notion of regret is now:

Sometimes it makes sense to restrict the mapping from contexts to actions, so that the infimum is taken over some *policy set* .

As far as I can tell the contextual bandit problem is an infinite playground and there is no canonical solution (or at least not yet!). Thankfully all we have learned so far can give useful guidance in this challenging problem.

Linear model after embedding

A natural assumption in several application domains is to suppose linearity in the loss after a correct embedding. Say we know mappings such that for some unknown (or in the adversarial case that ).

This is nothing but a linear bandit problem where the action set is changing over time. All the strategies we described are robust to this modification and thus in this case one can get a regret of (and for the stochastic case one can efficiently get ).

A much more challenging case is when the correct embedding is only known to belong to some class . Without further assumptions on we are basically back to the general model. Also note that a natural impulse is to run “bandits on top of bandits”, that is first select some and then select based on the assumption that is correct. We won’t get into this here, but let us investigate a related idea.

Exp4, Auer, Cesa-Bianchi, Freund and Schapire [2001]

One can play exponential weights on the set of policies with the following unbiased estimator (obvious notation: , , and )

Easy exercise: (indeed the relative entropy term is smaller than while the variance term is exactly ).
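Here is a minimal sketch of Exp4 with a tiny hypothetical policy set: two contexts, two arms, and the four possible deterministic policies. All numerical values (horizon, loss means, the policy table) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
K, T = 2, 5000
# hypothetical policies: policies[m, s] = arm chosen by policy m in context s
policies = np.array([[0, 0], [1, 1], [0, 1], [1, 0]])
M = len(policies)
loss_means = np.array([[0.2, 0.7],    # arm 0 mean loss in contexts 0 and 1
                       [0.6, 0.3]])   # arm 1 mean loss in contexts 0 and 1
eta = np.sqrt(2 * np.log(M) / (T * K))

L = np.zeros(M)                       # cumulative loss estimates per policy
total = 0.0
for t in range(T):
    s = rng.integers(2)               # context revealed to the player
    w = np.exp(-eta * (L - L.min()))  # exponential weights over policies
    q = w / w.sum()
    p = np.zeros(K)
    for m in range(M):
        p[policies[m, s]] += q[m]     # induced distribution over arms
    a = rng.choice(K, p=p)
    loss = float(rng.random() < loss_means[a, s])
    total += loss
    est = np.zeros(K)
    est[a] = loss / p[a]              # importance-weighted unbiased loss estimate
    L += est[policies[:, s]]          # charge each policy its arm's estimated loss

# policy [0, 1] is best here, with mean loss (0.2 + 0.3) / 2 = 0.25
regret = total - 0.25 * T
```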

The only issue with this strategy is that the computational complexity is linear in the size of the policy space, which might be huge. A year and a half ago a major paper by Agarwal, Hsu, Kale, Langford, Li and Schapire was posted, with a strategy obtaining the same regret as Exp4 (in the i.i.d. model) but which is also computationally efficient given an oracle for the offline problem (i.e., ). Unfortunately the algorithm is not simple enough yet to be included in these slides.

The statistician perspective, after Goldenshluger and Zeevi [2009, 2011], Perchet and Rigollet [2011]

Let , , i.i.d. from some absolutely continuous w.r.t. Lebesgue. The reward for playing arm under context is drawn from some distribution on with mean function which is assumed to be -Hölder smooth. Let be the “gap” function.

A key parameter is the proportion of contexts with a small gap. The margin assumption is that for some , one has

One can achieve a regret of order , which is optimal at least in the dependency on . It can be achieved by running Successive Elimination on an adaptively refined partition of the space, see Perchet and Rigollet [2011] for the details.

The online multi-class classification perspective after Kakade, Shalev-Shwartz, and Tewari [2008]

Here the loss is assumed to be of the following very simple form: . In other words using the context one has to predict the best action (which can be interpreted as a class) .

KSST08 introduces the banditron, a bandit version of the multi-class perceptron for this problem. While with full information the online multi-class perceptron can be shown to satisfy a “regret” bound on of order , the banditron attains only a regret of order . See also Chapter 4 in Bubeck and Cesa-Bianchi [2012] for more on this.

1. The optimal regret for the linear bandit problem is . In the Bayesian context Thompson Sampling achieves this bound. In the i.i.d. case one can use an algorithm based on the optimism in face of uncertainty together with concentration properties of the least squares estimator.
2. The i.i.d. algorithm can easily be modified to be computationally efficient, or to deal with sparsity in the unknown vector .
3. Extensions/variants: semi-bandit model, non-linear bandit (Lipschitz, Gaussian process, convex).
4. Contextual bandit is still a very active subfield of bandit theory.
5. Many important things were omitted. Example: knapsack bandit, see Badanidiyuru, Kleinberg and Slivkins [2013].

Some open problems we discussed

2. Guha and Munagala [2014] conjecture: for product priors, TS is a 2-approximation to the optimal Bayesian strategy for the objective of minimizing the number of pulls on suboptimal arms.
3. Find a “simple” strategy achieving the Bubeck and Slivkins [2012] best of both worlds result (see Seldin and Slivkins [2014] for some partial progress on this question).
4. For the combinatorial bandit problem, find a strategy with regret at most (current best is ).
5. Is there a computationally efficient strategy for i.i.d. linear bandit with optimal gap-free regret and with gap-based regret?
6. Is there a natural framework to think about “bandits on top of bandits” (while keeping -regret)?

## Bandit theory, part I

This week I’m giving two 90-minute lectures on bandit theory at MLSS Cadiz. Despite my 2012 survey with Nicolo I thought it would be a good idea to post my lecture notes here. Indeed while much of the material is similar, the style of a mini-course is quite different from the style of a survey. Also, bandit theory has progressed surprisingly much since 2012 and many things can now be explained better. Finally in the survey we completely omitted the Bayesian model as we thought that we didn’t have much to add on this topic compared to existing sources (such as the 2011 book by Gittins, Glazebrook, Weber). For a mini-course this concern is irrelevant so I quickly discuss the famous Gittins index and its proof of optimality.

i.i.d. multi-armed bandit, Robbins [1952]

Known parameters: number of arms and (possibly) number of rounds .

Unknown parameters: probability distributions on with mean (notation: ).

Protocol: For each round , the player chooses based on past observations and receives a reward/observation (independently from the past).

Performance measure: The cumulative regret is the difference between the player’s accumulated reward and the maximum the player could have obtained had she known all the parameters,

This problem models the fundamental tension between exploration and exploitation (one wants to pick arms that performed well in the past, yet one needs to make sure that no good option has been missed). Almost every week new applications are found that fit this simple framework and I’m sure you already have some in mind (the most popular one being ad placement on the internet).

i.i.d. multi-armed bandit: fundamental limitations

How small can we expect to be? Consider the -armed case where and where is unknown. Recall from Probability 101 (or perhaps 102) that with expected observations from the second arm there is a probability at least to make the wrong guess on the value of . Now let be the expected number of pulls of arm up to time when . One has

We refer to Bubeck, Perchet and Rigollet [2013] for the details. The important message is that for fixed the lower bound is , while for the worst (which is of order ) it is . In the -armed case this worst-case lower bound becomes (see Auer, Cesa-Bianchi, Freund and Schapire [1995]). The -lower bound is slightly “harder” to generalize to the -armed case (as far as I know there is no known finite-time lower bound of this type), but thankfully it was already all done 30 years ago. First some notation: let and the number of pulls of arm up to time . Note that one has . For let

Theorem [Lai and Robbins [1985]]

Consider a strategy s.t. , we have if . Then for any Bernoulli distributions,

Note that so up to a variance-like term the Lai and Robbins lower bound is . This lower bound holds more generally than just for Bernoulli distributions, see for example Burnetas and Katehakis [1996].

i.i.d. multi-armed bandit: fundamental strategy

Hoeffding’s inequality teaches us that with probability at least , ,

The UCB (Upper Confidence Bound) strategy (Lai and Robbins [1985], Agrawal [1995], Auer, Cesa-Bianchi and Fischer [2002]) is:

The regret analysis is straightforward: on a probability event one has

so that and in fact
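The UCB strategy just analyzed can be sketched in a few lines. The Bernoulli means, horizon, and the `sqrt(2 log t / N)` confidence width below are illustrative choices (the post notes that the numerical constant can be improved).

```python
import numpy as np

rng = np.random.default_rng(4)
means = np.array([0.2, 0.6, 0.35])   # hypothetical Bernoulli arms
K, T = len(means), 5000

counts = np.zeros(K)
sums = np.zeros(K)
for t in range(T):
    if t < K:
        a = t                        # initialization: pull each arm once
    else:
        # empirical mean plus Hoeffding-style confidence width
        ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
        a = int(np.argmax(ucb))
    r = float(rng.random() < means[a])   # Bernoulli reward
    counts[a] += 1
    sums[a] += r

regret = T * means.max() - (counts * means).sum()
```

The regret grows only logarithmically in the horizon because each suboptimal arm stops being pulled once its confidence width falls below its gap.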

i.i.d. multi-armed bandit: going further

• The numerical constant in the UCB regret bound can be replaced by (which is the best one can hope for), and more importantly by slightly modifying the derivation of the UCB one can obtain the Lai and Robbins variance-like term (that is replacing by ): see Cappe, Garivier, Maillard, Munos and Stoltz [2013].
•  In many applications one is merely interested in finding the best arm (instead of maximizing cumulative reward): this is the best arm identification problem. For the fundamental strategies see Even-Dar, Mannor and Mansour [2006] for the fixed-confidence setting (see also Jamieson and Nowak [2014] for a recent short survey) and Audibert, Bubeck and Munos [2010] for the fixed budget setting. Key takeaway: one needs of order rounds to find the best arm.
• The UCB analysis extends to sub-Gaussian reward distributions. For heavy-tailed distributions, say with moment for some , one can get a regret that scales with (instead of ) by using a robust mean estimator, see Bubeck, Cesa-Bianchi and Lugosi [2012].

Adversarial multi-armed bandit, Auer, Cesa-Bianchi, Freund and Schapire [1995, 2001]

For , the player chooses based on previous observations, and simultaneously an adversary chooses a loss vector . The player’s loss/observation is . The regret and pseudo-regret are defined as:

Obviously and there is equality in the oblivious case (i.e., when the adversary’s choices are independent of the player’s choices). The case where is an i.i.d. sequence corresponds to the i.i.d. model we just studied. In particular we already know that is a lower bound on the attainable pseudo-regret.

The exponential weights strategy for the full information case where is observed at the end of round is defined by: play at random from where

In five lines one can show with and a well-chosen learning rate (recall that ):

For the bandit case we replace by in the exponential weights strategy, where

The resulting strategy is called Exp3. The key property of is that it is an unbiased estimator of :

Furthermore with the analysis described above one gets

It only remains to control the variance term, and quite amazingly this is straightforward:

Thus with one gets .
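The Exp3 strategy described above can be sketched as follows. The adversary is simulated here by fixed Bernoulli loss means (so it is really an i.i.d. instance), and the horizon and learning-rate formula are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)
K, T = 3, 10000
eta = np.sqrt(2 * np.log(K) / (T * K))      # learning rate suggested by the analysis
loss_means = np.array([0.5, 0.3, 0.6])      # hypothetical (oblivious) loss sequence

L = np.zeros(K)                             # cumulative importance-weighted loss estimates
total_loss = 0.0
for t in range(T):
    w = np.exp(-eta * (L - L.min()))        # exponential weights (shifted for stability)
    p = w / w.sum()
    a = rng.choice(K, p=p)                  # play an arm at random from p
    loss = float(rng.random() < loss_means[a])
    total_loss += loss
    L[a] += loss / p[a]                     # unbiased estimate: observed loss / p[a]

pseudo_regret = total_loss - T * loss_means.min()
```

The single line `L[a] += loss / p[a]` is the unbiased estimator whose variance, as noted above, is automatically controlled.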

• With the modified loss estimate one can prove high probability bounds on , and by integrating the deviations one can show .
• The extraneous logarithmic factor in the pseudo-regret upper bound can be removed, see Audibert and Bubeck [2009]. Conjecture: one cannot remove the log factor for the expected regret, that is for any strategy there exists an adaptive adversary such that .
• can be replaced by various measures of “variance” in the loss sequence, see e.g., Hazan and Kale [2009].
• There exist strategies which guarantee simultaneously in the adversarial model and in the i.i.d. model, see Bubeck and Slivkins [2012].
• Many interesting variants: the graph feedback structure of Mannor and Shamir [2011] (there is a graph on the set of arms, and when an arm is played one observes the loss for all its neighbors), regret with respect to the best sequence of actions with at most switches, switching cost (interestingly in this case the best regret is , see Dekel, Ding, Koren and Peres [2013]), and much more!

Bayesian multi-armed bandit, Thompson [1933]

Here we assume a set of “models” and a prior distribution over . The Bayesian regret is defined as

where simply denotes the regret for the i.i.d. model when the underlying reward distributions are . In principle the strategy minimizing the Bayesian regret can be computed by dynamic programming on the potentially huge state space . The celebrated Gittins index theorem gives a sufficient condition to dramatically reduce the computational complexity of implementing the optimal Bayesian strategy under a strong product assumption on . Notation: denotes the posterior distribution on at time .

Theorem [Gittins [1979]]

Consider the product and -discounted case: , , , and furthermore one is interested in maximizing . The optimal Bayesian strategy is to pick at time the arm maximizing the Gittins index:

where the expectation is over drawn from with , and the supremum is taken over all stopping times .

Note that the stopping time in the Gittins index definition gives the optimal strategy for a 2-armed game, where one arm’s reward distribution is while the other arm’s reward distribution is with as a prior for .

Proof: The following exquisite proof was discovered by Weber [1992]. Let

be the Gittins index of arm at time , which we interpret as the maximum charge one is willing to pay to play arm given the current information. The prevailing charge is defined as (i.e. whenever the prevailing charge is too high we just drop it to the fair level). We now make three simple observations which together conclude the proof:

• The discounted sum of prevailing charges for played arms is an upper bound (in expectation) on the discounted sum of rewards. Indeed the times at which the prevailing charge is updated are stopping times, and so between two such times the expected sum of discounted rewards is smaller than the discounted sum of the fair charge at time , which is equal to the prevailing charge at any time in .
• Since the prevailing charge is nonincreasing, the discounted sum of prevailing charges is maximized if we always pick the arm with maximum prevailing charge. Also note that the sequence of prevailing charges does not depend on the algorithm.
• The Gittins index strategy does exactly 2 (since we stop playing an arm only at times at which the prevailing charge is updated), and in this case 1. is an equality. Q.E.D.

For much more (implementation for exponential families, interpretation as a multitoken Markov game, …) see Dumitriu, Tetali and Winkler [2003], Gittins, Glazebrook, Weber [2011], Kaufmann [2014].

Bayesian multi-armed bandit, Thompson Sampling (TS)

In machine learning we want (i) strategies that can deal with complicated priors, and (ii) guarantees for misspecified priors. This is why we have to go beyond the Gittins index theory.

In his 1933 paper Thompson proposed the following strategy: sample and play .

Theoretical guarantees for this highly practical strategy have long remained elusive. Recently Agrawal and Goyal [2012] and Kaufmann, Korda and Munos [2012] proved that TS with Bernoulli reward distributions and uniform prior on the parameters achieves (note that this is the frequentist regret!). We also note that Liu and Li [2015] takes some steps in analyzing the effect of misspecification for TS.
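Thompson’s 1933 strategy, in the Bernoulli-rewards-with-uniform-prior setting analyzed by Agrawal and Goyal [2012] and Kaufmann, Korda and Munos [2012], can be sketched in a few lines (the arm means and horizon below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
means = np.array([0.3, 0.6])        # hypothetical Bernoulli arms
K, T = len(means), 3000

alpha = np.ones(K)                  # Beta(1, 1) = uniform prior on each mean
beta = np.ones(K)
counts = np.zeros(K)
for t in range(T):
    theta = rng.beta(alpha, beta)   # one sample from each arm's posterior
    a = int(np.argmax(theta))       # play the arm that is best under the sample
    r = float(rng.random() < means[a])
    alpha[a] += r                   # conjugate Beta-Bernoulli posterior update
    beta[a] += 1 - r
    counts[a] += 1
```

Exploration is driven entirely by posterior uncertainty: an arm with few pulls has a wide posterior and occasionally produces the largest sample, with no explicit confidence bonus.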

Let me also mention a beautiful conjecture of Guha and Munagala [2014]: for product priors, TS is a 2-approximation to the optimal Bayesian strategy for the objective of minimizing the number of pulls on suboptimal arms.

Bayesian multi-armed bandit, Russo and Van Roy [2014] information ratio analysis

Assume a prior in the adversarial model, that is a prior over , and let denote the posterior distribution (given ). We introduce

The key observation is that (recall that )

Indeed, equipped with Pinsker’s inequality and basic information theory concepts one has (we denote for the mutual information conditionally on everything up to time , also denotes the law of conditionally on everything up to time ):

Thus which gives the claim thanks to a telescopic sum. We will use this key observation as follows (we give a sequence of implications leading to a regret bound so that all that is left to do is to check that the first statement in this sequence is true for TS):

Thus writing and we have

For TS the following shows that one can take :

Thus TS always satisfies . Side note: by the minimax theorem this implies the existence of a strategy for the oblivious adversarial model with regret (of course we already proved that such a strategy exists, in fact we even constructed one via exponential weights, but the point is that the proof here does not require any “miracle” –yes exponential weights are kind of a miracle, especially when you consider how the variance of the unbiased estimator gets automatically controlled).

Summary of basic results

• In the i.i.d. model UCB attains a regret of and by Lai and Robbins’ lower bound this is optimal (up to a multiplicative variance-like term).
• In the adversarial model Exp3 attains a regret of and this is optimal up to the logarithmic term.
• In the Bayesian model, Gittins index gives an optimal strategy for the case of product priors. For general priors Thompson Sampling is a more flexible strategy. Its Bayesian regret is controlled by the entropy of the optimal decision. Moreover TS with an uninformative prior has frequentist guarantees comparable to UCB.

## COLT 2016 accepted papers

Like previous years (2014, 2015) I have compiled the list of accepted papers for this year’s edition of COLT together with links to the arxiv submission whenever I could find one. These 63 papers were selected from about 200 submissions (a healthy 11% increase in terms of submissions from last year).

COLT 2016 accepted papers


## Notes on least-squares, part I

These are mainly notes for myself, but I figured that they might be of interest to some of the blog readers too. Comments on what is written below are most welcome!

Let be a pair of random variables, and let be the convex quadratic function defined by

We are interested in finding a minimizer of (we will also be satisfied with such that ). Denoting for the covariance matrix of the features (we refer to as a feature vector) and , one has (if is not invertible then the inverse should be understood as the Moore-Penrose pseudo-inverse, in which case is the smallest-norm element of the affine subspace of minimizers of ). The model of computation we are interested in corresponds to large-scale problems where one has access to an unlimited stream of i.i.d. copies of . Thus the only constraint to find is our computational ability to process samples. Furthermore we want to be able to deal with large-dimensional problems where can be extremely ill-conditioned (for instance the smallest eigenvalue can be , or very close to ), and thus bounds that depend on the condition number of (the ratio of the largest to the smallest eigenvalue) are irrelevant for us.

Let us introduce a bit more notation before diving into the comparison of different algorithms. With the well-specified model in mind (i.e., where is independent of ) we define the noise level of the problem by assuming that . For simplicity we will assume (whenever necessary) that almost surely one has and . All our iterative algorithms will start at . We will also not worry too much about numerical constants and write for an inequality true up to a numerical constant. Finally for the computational complexity we will count the number of elementary -dimensional vector operations that one has to do (e.g. adding/scaling such vectors, or requesting a new i.i.d. copy of ).

ERM

The most natural approach (also the most terrible as it will turn out) is to take a large sample and then minimize the corresponding empirical version of , or in other words compute where is the column vector and is the matrix whose column is . Since is a quadratic function, we can use the conjugate gradient algorithm (see for example Section 2.4 here) to get in iterations. However each iteration (basically computing a gradient of ) costs elementary vector operations, thus leading overall to elementary vector operations (which, perhaps surprisingly, is also the cost of just forming the empirical covariance matrix ). What number of samples should one take? The answer (at least for the well-specified model) comes from a standard statistical analysis (e.g. with the Rademacher complexity described here, see also these nice lecture notes by Philippe Rigollet) which yields:

(Note that in the above bound we evaluate on instead of , see the comments section for more details on this issue. Since this is not the main point of the post I overlook this issue in what follows.) Wrapping up we see that this “standard” approach solves our objective by taking samples and performing elementary vector operations. This number of elementary vector operations completely dooms this approach when is very large. We note however that the statistical bound above can be shown to be minimax optimal, that is any algorithm has to use at least samples (in the worst case over all ) in order to satisfy our objective. Thus the only hope to do better than the above approach is to somehow process the data more efficiently.
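The ERM-via-conjugate-gradient approach can be sketched as follows on a synthetic instance (dimension, sample size, and noise level are illustrative; this is the textbook CG iteration applied to the empirical quadratic, which for a quadratic converges in at most `d` steps up to rounding):

```python
import numpy as np

rng = np.random.default_rng(7)
d, n = 6, 2000
w_star = rng.standard_normal(d)
X = rng.standard_normal((n, d))
y = X @ w_star + 0.01 * rng.standard_normal(n)

# minimize the empirical squared loss via conjugate gradient on the normal equations
A = X.T @ X / n                  # empirical covariance
b = X.T @ y / n
w = np.zeros(d)                  # start at 0, as in the post
r = b - A @ w                    # residual = negative gradient
p = r.copy()                     # search direction
for _ in range(d):               # at most d iterations for a d-dimensional quadratic
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)   # exact line search along p
    w = w + alpha * p
    r_new = r - alpha * Ap
    if np.sqrt(r_new @ r_new) < 1e-12:
        break                    # residual negligible: done
    p = r_new + ((r_new @ r_new) / (r @ r)) * p  # conjugate direction update
    r = r_new

err = np.linalg.norm(w - w_star)
```

Each iteration costs one multiplication by the empirical covariance, which is the source of the per-iteration cost discussed above.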

A potential idea to trade a bit of the nice statistical properties of for some computational ease is to add a regularization term to in order to make the problem better conditioned (recall that conjugate gradient reaches an -optimal point in iterations, where is the condition number).

In other words we would like to minimize, for some values of and to be specified,

One can write the minimizer as , and this point is now obtained at a cost of elementary vector operations ( denotes the spectral norm of and we used the fact that with high probability ). Here one can be even smarter and use accelerated SVRG instead of conjugate gradient (for SVRG see Section 6.3 here, and here or here for how to accelerate it), which will reduce the number of elementary vector operations to . Also observe that again an easy argument based on Rademacher complexity leads to

Thus we want to take of order which in terms of is of order leading to an overall number of vector operations of

Note that, perhaps surprisingly, anything less smart than accelerated SVRG would lead to a worse dependency on than the one obtained above (which is also the same as the one obtained by the “standard” approach). Finally we observe that at a high level the two terms in the above bound can be viewed as “variance” and “bias” terms. The variance term comes from the inherent statistical limitation while the bias comes from the approximate optimization. In particular it makes sense that appears in this bias term as it is the initial distance to the optimum for the optimization procedure.
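A quick numerical illustration of why the regularizer helps conditioning (the sizes and the value of the regularization parameter below are made up): adding a multiple of the identity to the empirical covariance caps the condition number, which is what conjugate gradient's sqrt(kappa)·log(1/eps) iteration count feeds on.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 50, 40                        # few samples relative to d: ill-conditioned
X = rng.standard_normal((d, n))
H = X @ X.T / n                      # empirical covariance

lam = 0.1
kappa = np.linalg.cond(H)            # large (H is even singular when n < d)
kappa_reg = np.linalg.cond(H + lam * np.eye(d))

# Regularization caps the condition number at roughly (lambda_max + lam) / lam,
# so conjugate gradient on the regularized problem needs far fewer iterations.
assert kappa_reg < kappa
```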

Standard SGD

Another approach altogether, which is probably the most natural approach for anyone trained in machine learning, is to run the stochastic gradient algorithm. Writing (this is a stochastic gradient, that is it satisfies ) we recall that SGD can be written as:

The most basic guarantee for SGD (see Theorem 6.1 here) gives that with a step size of order , and assuming to simplify (see below for how to get around this) that is known up to a numerical constant so that if the iterate steps outside the ball of that radius then we project it back onto it, one has (we also note that , and we denote ):

This means that roughly with SGD the computational cost for our main task corresponds to vector operations. This is a fairly weak result compared to accelerated SVRG, mainly because of: (i) the dependency on instead of , and (ii) instead of . We will fix both issues, but let us also observe that on the positive side the term in accelerated SVRG is replaced by the less conservative quantity (that is, one replaces the largest eigenvalue of by the average eigenvalue of ).
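As a sketch of this basic guarantee in action, here is SGD with a step size decaying like 1/sqrt(t), together with iterate averaging, on a synthetic well-specified least-squares instance (the dimension, step constant, and noise level are made up, and the projection step is skipped since the iterates stay bounded here):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 5
w_star = rng.standard_normal(d)

T = 20000
w = np.zeros(d)
avg = np.zeros(d)
for t in range(1, T + 1):
    x = rng.standard_normal(d)                    # fresh i.i.d. sample
    y = x @ w_star + 0.1 * rng.standard_normal()  # well-specified model
    g = (x @ w - y) * x                           # unbiased gradient estimate
    w -= 0.1 / np.sqrt(t) * g                     # step size of order 1/sqrt(t)
    avg += (w - avg) / t                          # running average of the iterates
```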

Let us quickly check whether one can improve the performance of SGD by regularization. That is, we consider applying SGD on with stochastic gradient , which leads to (see Theorem 6.2 here):

Optimizing over we see that we get the same rate as without regularization. Thus, unsurprisingly, standard SGD fares poorly compared to the more sophisticated approach based on accelerated SVRG described above. However we will see next that there is a very simple fix to make SGD competitive with accelerated SVRG, namely just tune up the learning rate!

Constant step size SGD (after Bach and Moulines 2013)

SGD with a constant step size of order a constant times satisfies

We see that this result gives a similar bias/variance decomposition as the one we saw for accelerated SVRG. In particular the variance term is minimax optimal, while the bias term matches what one would expect for a basic first order method (this term is not quite optimal, as one would expect that a decrease in is possible since this is the optimal rate for first-order optimization of a smooth quadratic function (without strong convexity), and indeed a recent paper of Dieuleveut, Flammarion and Bach tackles this issue).
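A minimal sketch of the constant step size scheme on the same kind of synthetic well-specified instance (the choice of a step of order 1/R^2 with R^2 = E||X||^2 follows the statement above; the dimension and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
d = 5
w_star = rng.standard_normal(d)
R2 = float(d)                        # E||X||^2 for X ~ N(0, I_d)
step = 1.0 / (2 * R2)                # constant step size of order 1/R^2

T = 20000
w = np.zeros(d)
avg = np.zeros(d)
for t in range(1, T + 1):
    x = rng.standard_normal(d)
    y = x @ w_star + 0.1 * rng.standard_normal()
    w -= step * (x @ w - y) * x      # SGD step on the quadratic loss
    avg += (w - avg) / t             # online average of the iterates

# avg is close to w_star even though w itself keeps fluctuating at the noise level
```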

In part II I hope to give the intuition/proof of the above result, and in part III I will discuss other aspects of the least squares problem (dual methods, random projections, sparsity).

Posted in Optimization | 3 Comments

## AlphaGo is born

Google DeepMind is making the front page of Nature (again) with a new AI for Go, named AlphaGo (see also this Nature YouTube video). Computer Go is a notoriously difficult problem, and until now AIs were faring very badly compared to good human players. In their paper the DeepMind team reports that AlphaGo won 5-0 against the best European player, Fan Hui!!! This is truly a jump in performance: the previous best AI, Crazy Stone, needed several handicap stones to compete with pro players. Congratulations to the DeepMind team for this breakthrough!

How did they do it? From a very high level point of view they simply combined the previous state of the art (Monte Carlo Tree Search) with the new deep learning techniques. Recall that MCTS is a technique inspired from multi-armed bandits to efficiently explore the tree of possible action sequences in a game, for more details see this very nice survey by my PhD advisor Remi Munos: From Bandits to Monte-Carlo Tree Search: The Optimistic Principle Applied to Optimization and Planning. Now in MCTS there are two key elements beyond the bandit part (i.e., how to deal with exploration vs. exploitation): one needs a way to combine all the information collected to produce a value function for each state (this value is the key term in the upper confidence bound used by the bandit algorithm); and one needs a reasonable random policy to carry out the random rollouts once the bandit strategy has gone deep enough in the tree. In AlphaGo the initial random rollout strategy is learned via supervised deep learning on a dataset of human expert games, and the value function is learned (online, with MCTS guiding the search) via convolutional neural networks (in some sense this corresponds to a very natural inductive bias for this game).
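For concreteness, the bandit ingredient of MCTS reduces to an upper-confidence rule for picking which child of a tree node to explore next. A minimal UCT-style selection function might look as follows (this is the generic UCB1 bonus, not AlphaGo's actual rule, which also mixes in the policy network's prior):

```python
import math

def uct_select(children, c=1.4):
    """Pick the child maximizing (value estimate + exploration bonus).

    children: list of dicts with keys 'visits' and 'total_reward'.
    c: exploration constant trading off exploitation vs. exploration.
    """
    parent_visits = sum(ch["visits"] for ch in children)
    best, best_score = None, -float("inf")
    for ch in children:
        if ch["visits"] == 0:
            return ch                # always try unvisited moves first
        mean = ch["total_reward"] / ch["visits"]          # empirical value
        bonus = c * math.sqrt(math.log(parent_visits) / ch["visits"])
        if mean + bonus > best_score:
            best, best_score = ch, mean + bonus
    return best
```

A rarely-visited child can win the selection even with a lower empirical mean, which is exactly the exploration behavior the survey above analyzes.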

Of course there is much more to AlphaGo than what I described above and you are invited to take a look at the paper (see this reddit thread to find the paper)!

Posted in Uncategorized | 1 Comment

## On the spirit of NIPS 2015 and OpenAI

I just came back from NIPS 2015, which was a clear success in terms of numbers (note that this growth is not all due to deep learning: only about 10% of the papers were on this topic, which is about double the share of convex optimization, for example):

In this post I want to talk about some of the new emerging directions that the NIPS community is taking. Of course my view is completely biased as I am more representative of COLT than NIPS (though obviously the two communities have a large overlap). Also I only looked in detail at about 25% of the papers, so perhaps I missed the juiciest breakthrough. In any case below you will find a short summary of each of these new directions with pointers to some of the relevant papers. Before going into the fun math I wanted to first share some thoughts about the big announcement of yesterday.

Obvious disclaimer: the opinions expressed here represent my own and not those of my employer (or previous employer hosting this blog). Now, for those of you who missed it, yesterday Elon Musk and friends made a huge announcement: they are giving $1 billion to create a non-profit organization whose goal is the advancement of AI (see here for the official statement, and here for the New York Times coverage). This is just absolutely wonderful news, and I really feel like we are watching history in the making. There are very, very few places in the world solely dedicated to basic research and with that kind of money. Examples are useful to get some perspective: the Perimeter Institute for Theoretical Physics was funded with $100 million (I believe it has had a major impact in the field), the Institute for Advanced Study was founded with a similar-sized gift (a simple statistic gives an idea of the impact: 41 out of 57 Fields medalists have been affiliated with IAS), and more recently, and perhaps closer to us, the Simons Institute for the Theory of Computing was created with $60 million and its influence on the field keeps growing (it was certainly a very influential place in my own career). Looking at what those places are doing with 1/10 of OpenAI's budget sets the bar extremely high for OpenAI, and I am very excited to see what direction they take and what their long term plans are!

I wish the best of luck to OpenAI and their members. The game-changing potential of this organization puts a lot of responsibility on them, and I sincerely hope that they will try to seriously explore different paths to AI rather than chase local-in-time advertisement (please don't just solve Go with deep nets!!!).

Now time for some of the cool stuff that happened at NIPS.

Scaling up sampling

When I’m a grown-up I want to do non-convex optimization!

With deep nets in mind all the rage is about non-convex optimization. One direction in that space is to develop more efficient algorithms for specific problems where we already know polynomial-time methods under reasonable assumptions, such as low rank estimation (see “A Convergent Gradient Descent Algorithm for Rank Minimization and Semidefinite Programming from Random Linear Measurements“) and phase retrieval (see “Solving Random Quadratic Systems of Equations Is Nearly as Easy as Solving Linear Systems“). The nice thing about those new results is that they essentially show that gradient descent with a spectral initialization will work (similar evidence had already been shown for alternating minimization, see also “A Nonconvex Optimization Framework for Low Rank Matrix Estimation“). Another direction in non-convex optimization is to slowly extend the class of functions that one can solve efficiently, see “Beyond Convexity: Stochastic Quasi-Convex Optimization“. Finally a thought-provoking paper which is worth mentioning is “Matrix Manifold Optimization for Gaussian Mixtures” (it comes without provable guarantees but maybe something can be done there…).

Convex optimization strikes back

As I said non-convex optimization is all the rage, yet there are still many things about convex optimization that we don’t understand (an interesting example is given in this paper “Information-theoretic lower bounds for convex optimization with erroneous oracles“). I blogged recently about a new understanding of Nesterov’s acceleration, but this said nothing about Nesterov’s accelerated gradient descent itself. The paper “Accelerated Mirror Descent in Continuous and Discrete Time” builds on (and refines) recent advances on understanding the relation of AGD and Mirror Descent, as well as the differential equations underlying them. Talking about Mirror Descent, I was happy to see it applied to deep nets optimization in “End-to-end Learning of LDA by Mirror-Descent Back Propagation over a Deep Architecture“. Another interesting trend is the revival of second-order methods (e.g., Newton’s method) by using various low-rank approximations to the Hessian, see “Convergence rates of sub-sampled Newton methods“, “Newton-Stein Method: A Second Order Method for GLMs via Stein’s Lemma“, and “Natural Neural Networks“.

Other topics

There are a few other topics that caught my attention but I am running out of stamina. These include many papers on the analysis of cascades in networks (I am particularly curious about the COEVOLVE model), papers that further our understanding of random features, adaptive data analysis (see this), and a very healthy list of bandit papers (or Bayesian optimization as some like to call it).

Posted in Conference/workshop | 14 Comments

## Convex Optimization: Algorithms and Complexity

I am thrilled to announce that my short introduction to convex optimization has just come out in the Foundations and Trends in Machine Learning series (free version on arxiv). This project started on this blog in 2013 with the lecture notes “The complexities of optimization”, it then morphed a year later into a first draft titled “Theory of Convex Optimization for Machine Learning”, and finally after one more iteration it has found the more appropriate name: “Convex Optimization: Algorithms and Complexity”. Notable additions since the last version include: a short treatment of conjugate gradient, an almost self-contained analysis of Vaidya’s 1989 cutting plane method (which attains the best of both center of gravity and ellipsoid method in terms of oracle complexity and computational complexity), and finally an algorithm with a simple geometric intuition which attains the rate of convergence of Nesterov’s accelerated gradient descent.

## Crash course on learning theory, part 2

It might be useful to refresh your memory on the concepts we saw in part 1 (particularly the notions of VC dimension and Rademacher complexity). In this second and last part we will discuss two of the most successful algorithmic paradigms in learning: boosting and SVM. Note that, just like last time, each topic we cover has entire books dedicated to it (see for example the boosting book by Schapire and Freund, and the SVM book by Scholkopf and Smola). Finally we conclude our short tour of the basics of learning theory with a simple observation: stable algorithms generalize well.

Boosting

Say that given a distribution supported on points one can find (efficiently) a classifier such that (here we are in the context of classification with the zero-one loss). Can we “boost” this weak learning algorithm into a strong learning algorithm with arbitrarily small for large enough? It turns out that this is possible, and even simple. The idea is to build a linear combination of hypotheses in with a greedy procedure. That is, at time step our hypothesis is (the sign of) , and we are now looking to add with an appropriate weight . A natural guess is to optimize over to minimize the training error of on our sample . This might be a difficult computational problem (how do you optimize over ?), and furthermore we would like to make use of our efficient weak learning algorithm. The key trick is that . More precisely:

where . From this we see that we would like to be a good predictor for the distribution . Thus we can pass to the weak learning algorithm, which in turn gives us with . Thus we now have:

Optimizing the above expression one finds that leads to (using )

The procedure we just described is called AdaBoost (introduced by Freund and Schapire) and we proved that it satisfies

(1)

In particular we see that our weak learner assumption implies that is realizable (and in fact realizable with margin , see next section for the definition of margin) with the hypothesis class:

This class can be thought of as a neural network with one (infinite size) hidden layer. To realize how expressive is compared to , it is a useful exercise to think about the very basic case of decision stumps (for which empirical risk minimization can be implemented very efficiently):

To derive a bound on the true risk of AdaBoost it remains to calculate the VC dimension of the class where the size of the hidden layer is . This follows from more general results on the VC dimension of neural networks, and up to logarithmic factors one obtains that is of order . Putting this together with (1) we see that when is a constant, one should run AdaBoost for rounds, and then one gets .
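The greedy procedure above is short enough to sketch end to end. Below is a minimal AdaBoost with decision stumps as the weak learner; the function names, the tiny toy dataset, and the clamping constant are illustrative, not from the post.

```python
import numpy as np

def stump_weak_learner(X, y, D):
    """Best one-feature threshold classifier under the weighted 0-1 loss D."""
    best = (np.inf, None)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = sign * np.where(X[:, j] <= thr, 1, -1)
                err = D[pred != y].sum()
                if err < best[0]:
                    best = (err, (j, thr, sign))
    return best

def adaboost(X, y, T):
    n = len(y)
    D = np.full(n, 1.0 / n)          # distribution over the sample
    hyps = []
    for _ in range(T):
        err, (j, thr, sign) = stump_weak_learner(X, y, D)
        err = max(err, 1e-12)        # clamp to avoid division by zero
        alpha = 0.5 * np.log((1 - err) / err)   # the optimal weight derived above
        pred = sign * np.where(X[:, j] <= thr, 1, -1)
        D = D * np.exp(-alpha * y * pred)       # up-weight the mistakes
        D /= D.sum()
        hyps.append((alpha, j, thr, sign))
    return hyps

def predict(hyps, X):
    score = sum(a * s * np.where(X[:, j] <= thr, 1, -1) for a, j, thr, s in hyps)
    return np.sign(score)

# Toy run on a 1-D threshold problem
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([-1, -1, 1, 1])
hyps = adaboost(X, y, T=3)
```

Note that the reweighting line is exactly the distribution update discussed above: correctly classified points are scaled down and mistakes scaled up, so the next weak learner focuses on the hard examples.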

Margin

We consider to be the set of distributions such that there exists with (again we are in the context of classification with the zero-one loss, and this assumption means that the data is almost surely realizable). The SVM idea is to search for with minimal Euclidean norm and such that . Effectively this is doing empirical risk minimization over the following data dependent hypothesis class (which we write in terms of the set of admissible weight vectors):

The key point is that we can now use the contraction lemma to bound the Rademacher complexity of this class. Indeed replacing the zero-one loss by the (Lipschitz!) “ramp loss” makes no difference for the optimum , and our estimated weight still has training error while its true loss is only overestimated. Using the argument from previous sections we see that the Rademacher complexity of our hypothesis class (with respect to the ramp loss) is bounded by (assuming the examples are normalized to be in the Euclidean ball). Now it is easy to see that the existence of with (and ) exactly corresponds to a geometric margin of between positive and negative examples (indeed the margin is exactly ). To summarize we just saw that under the -margin condition the SVM algorithm has a sample complexity of order . This suggests that from an estimation perspective one should map the points into a high-dimensional space so that one could hope to have the separability condition (with margin). However this raises computational challenges, as the QP given by the SVM can suddenly look daunting. This is where kernels come into the picture.

Kernels

So let’s go overboard and map the points to an infinite dimensional Hilbert space (as we will see in the next subsection this notation will be consistent with being the hypothesis class). Denote for this map, and let be the kernel associated with it. The key point is that we are not using all the dimensions of our Hilbert space in the SVM optimization problem, but rather we are effectively working in the subspace spanned by (this is because we are only working with inner products with those vectors, and we are trying to minimize the norm of the resulting vector). Thus we can restrict our search to (this fact is called Mercer representer theorem and the previous sentence is the proof…). The beauty is that now we only need to compute the Gram matrix as we only need to consider and . In particular we never need to compute the points (which anyway could be infinite dimensional, so we couldn’t really write them down…). Note that the same trick would work with soft SVM (i.e., regularized hinge loss). To drive the point home let’s see an example: leads to . I guess it doesn’t get much better than this :). Despite all this beauty, one should note that we now have to manipulate an object of size (the kernel matrix ) and in our big data days this can be a disaster. We really want to focus on methods with computational complexity linear in , and thus one is led to the problem of kernel approximation, which we will explore below. But first let us explore a bit further what kernel SVM is really doing.

RKHS and the inductive bias of kernels

As we just saw in kernel methods the true central element is the kernel rather than the embedding . In particular since the only things that matter are inner products we might as well assume that , and is the completion of where the inner product is defined by (and the definition is extended to by linearity). Assuming that is positive definite (that is Gram matrices built from are positive definite) one obtains a well-defined Hilbert space . Furthermore this Hilbert space has a special property: for any , . In other words is a reproducing kernel Hilbert space (RKHS), and in fact any RKHS can be obtained with the above construction (this is a simple consequence of the Riesz representation theorem). Now observe that we can rewrite the kernel SVM problem

as

While the first formulation is computationally more effective, the second sheds light on what we are really doing: simply searching the consistent (with margin) hypothesis in with smallest norm. In other words, thinking in terms of inductive bias, one should choose a kernel for which the norm represents the kind of smoothness one expects in the mapping from input to output (more on that next).

It should also be clear now that one can “kernelize” any regularized empirical risk minimization, that is instead of the boring (note that here the loss is defined on instead of )

one can consider the much more exciting

since this can be equivalently written as

This gives the kernel ridge regression, kernel logistic regression, etc…
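As a concrete instance of such a kernelized regularized ERM, kernel ridge regression with the Gaussian kernel fits in a few lines. This is a sketch: the bandwidth, regularization value, and toy data below are made up.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    """Gram matrix K[i, j] = exp(-||A[i] - B[j]||^2 / (2 sigma^2))."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

def kernel_ridge_fit(X, y, lam=1e-3, sigma=1.0):
    # By the representer theorem the minimizer lives in span{k(x_i, .)},
    # so we only solve for the n coefficients alpha.
    n = len(y)
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def kernel_ridge_predict(alpha, X_train, X_test, sigma=1.0):
    return gaussian_kernel(X_test, X_train, sigma) @ alpha

# Toy run: fit a smooth function from samples
X = np.linspace(0, 3, 30)[:, None]
y = np.sin(X).ravel()
alpha = kernel_ridge_fit(X, y, lam=1e-3, sigma=1.0)
pred = kernel_ridge_predict(alpha, X, X, sigma=1.0)
```

Note that, as discussed above, training manipulates the n x n Gram matrix and never touches the (here infinite-dimensional) feature map.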

Translation invariant kernels and low-pass filtering

We will now investigate a bit further the RKHS that one obtains with translation invariant kernels, that is . A beautiful theorem of Bochner characterizes the continuous maps (with ) for which such a is a positive definite kernel: it is necessary and sufficient that is the characteristic function of a probability measure , that is

An important example in practice is the Gaussian kernel: (this corresponds to mapping to the function ). One can check that in this case is itself a Gaussian (centered and with covariance ).

Now let us restrict our attention to the case where has a density with respect to the Lebesgue measure, that is . A standard calculation then shows that

which implies in particular

Note that for the Gaussian kernel one has , that is the high frequencies in are severely penalized in the RKHS norm. Also note that smaller values of correspond to less regularization, which is what one would have expected from the feature map representation (indeed the features are more localized around the data point for larger values of ).

To summarize, SVM with a translation invariant kernel corresponds to some kind of soft low-pass filtering, where the exact form of the penalization of higher frequencies depends on the specific kernel being used (smoother kernels lead to more penalization).

Random features

Let us now come back to computational issues. As we pointed out before, the vanilla kernel method has at least a quadratic cost in the number of data points. A common approach to reduce this cost is to use a low rank approximation of the Gram matrix (indeed thanks to the i.i.d. assumption there is presumably a lot of redundancy in the Gram matrix), or to resort to online algorithms (see for example the forgetron of Dekel, Shalev-Shwartz and Singer). Another idea is to approximate the feature map itself (a priori this doesn’t sound like a good idea, since as we explained above the beauty of kernels is that we avoid computing this feature map). We now describe an elegant and powerful approximation of the feature map (for translation invariant kernels) proposed by Rahimi and Recht which is based on random features.

Let be a translation invariant kernel and its corresponding probability measure. Let’s rewrite in a convenient form, using and Bochner’s theorem,

A simple idea is thus to build the following random feature map: given and , i.i.d. draws from respectively and , let be defined by where

For it is an easy exercise to verify that with probability at least (provided has a second moment at most polynomial in ) one will have for any in some compact set (with diameter at most polynomial in ),

The SVM problem can now be approximated by:

This optimization problem is potentially much simpler than the vanilla kernel SVM when is much bigger than (essentially replaces for most computational aspects of the problem, including space/time complexity of prediction after training).
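A minimal sketch of the Rahimi and Recht construction for the Gaussian kernel (the number of features and the test sizes below are made up): drawing the frequencies from the Gaussian measure given by Bochner's theorem and the phases uniformly, inner products of the random feature maps concentrate around the kernel value.

```python
import numpy as np

def random_features(X, m, sigma=1.0, rng=None):
    """Map each row x to sqrt(2/m) * cos(W x + b).

    W has rows drawn from N(0, I/sigma^2) (the measure given by Bochner's
    theorem for the Gaussian kernel) and b is uniform on [0, 2*pi], so that
    phi(x) . phi(x') approximates exp(-||x - x'||^2 / (2 sigma^2)).
    """
    rng = rng or np.random.default_rng(0)
    d = X.shape[1]
    W = rng.standard_normal((m, d)) / sigma   # random frequencies
    b = rng.uniform(0, 2 * np.pi, m)          # random phases
    return np.sqrt(2.0 / m) * np.cos(X @ W.T + b)

# Compare the approximate Gram matrix to the exact Gaussian kernel
rng = np.random.default_rng(4)
X = rng.standard_normal((20, 3))
Phi = random_features(X, m=5000, sigma=1.0, rng=rng)
K_approx = Phi @ Phi.T
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-sq / 2.0)
err = np.abs(K_approx - K_exact).max()
```

Any linear method can then be run directly on the m-dimensional features `Phi`, so m replaces n in the space/time complexity, as noted above.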

Stability

We conclude our tour of the basic topics in statistical learning with a different point of view on generalization that was put forward by Bousquet and Elisseeff.

Let’s start with a simple observation. Let be an independent copy of , and denote . Then one can write, using the slight abuse of notation ,

This last quantity can be interpreted as a stability notion, and we see that controlling it would in turn control how different the true risk is from the empirical risk. Thus stable methods generalize.

We will now show that regularization can be interpreted as a stabilizer. Precisely we show that

is -stable for a convex and -Lipschitz loss. Denote for the above objective function, then one has

and thus by Lipschitzness

On the other hand by strong convexity one has

and thus with the above we get which implies (by Lipschitzness)

or in other words regularized empirical risk minimization () is stable. In particular denoting for the minimizer of we have:

Assuming and optimizing over we recover the bound we obtained previously via Rademacher complexity.