Author Archives: Sebastien Bubeck

Crash course on learning theory, part 2

It might be useful to refresh your memory on the concepts we saw in part 1 (particularly the notions of VC dimension and Rademacher complexity). In this second and final part we will discuss two of the most successful algorithmic paradigms in … Continue reading

Posted in Machine learning, Optimization, Probability theory

Crash course on learning theory, part 1

This week and next week I’m giving 90-minute lectures at MSR on the fundamentals of learning theory. Below you will find my notes for the first course, where we covered the basic setting of statistical learning theory, Glivenko-Cantelli classes, Rademacher complexity, VC … Continue reading
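
As a quick refresher (this is the standard definition, recalled here for convenience): the empirical Rademacher complexity of a function class $\mathcal{F}$ on a sample $x_1, \dots, x_n$ is

$$\widehat{\mathcal{R}}_n(\mathcal{F}) = \mathbb{E}_{\epsilon} \left[ \sup_{f \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^{n} \epsilon_i f(x_i) \right],$$

where $\epsilon_1, \dots, \epsilon_n$ are i.i.d. uniform random signs. It measures how well the class can correlate with pure noise, and it controls the uniform deviation between empirical and population averages over $\mathcal{F}$.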

Posted in Machine learning, Optimization, Probability theory | 9 Comments

Some stuff I learned this week

This week I had the pleasure of spending 3 days at a wonderful workshop (disclaimer: I was part of the organization, with Ofer Dekel, Yuval Peres and James Lee, so I might be a bit biased). Below you will find … Continue reading

Posted in Conference/workshop, Probability theory

A solution to bandit convex optimization

Ronen Eldan and I have just uploaded to the arXiv our newest paper which finally proves that for online learning with bandit feedback, convex functions are not much harder than linear functions. The quest for this result started in 2004 … Continue reading
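
To put “not much harder” in perspective (the linear-case benchmark is standard, and the convex bound is stated at the coarse poly(n) level): for bandit linear optimization in dimension $n$ the minimax regret over $T$ rounds is $\tilde{\Theta}(n \sqrt{T})$, and the new result shows that general convex losses admit regret

$$R_T = \tilde{O}\big(\mathrm{poly}(n) \, \sqrt{T}\big),$$

that is, the optimal $\sqrt{T}$ dependence on the time horizon, which was previously known only in dimension one or under extra assumptions such as strong convexity and smoothness.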

Posted in Optimization | 2 Comments

Revisiting Nesterov’s Acceleration

Nesterov’s accelerated gradient descent (AGD) is hard to understand. Since Nesterov’s 1983 paper, people have tried to explain “why” acceleration is possible, with the hope that the answer would go beyond the mysterious (but beautiful) algebraic manipulations of the original … Continue reading
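
For reference, those manipulations revolve around the standard two-step update (the textbook form of AGD for a $\beta$-smooth convex function $f$, recalled here for context):

$$x_{t+1} = y_t - \frac{1}{\beta} \nabla f(y_t), \qquad y_{t+1} = x_{t+1} + \frac{t-1}{t+2} \, (x_{t+1} - x_t),$$

which attains an $O(\beta / t^2)$ optimality gap after $t$ steps, versus $O(\beta / t)$ for plain gradient descent; the puzzle is why the momentum term buys this speedup.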

Posted in Optimization | 7 Comments

COLT 2015 accepted papers and some cool videos

Like last year, I compiled a list of the COLT 2015 accepted papers together with links to the arXiv version whenever I could find one. These papers were selected from 180 submissions, a number that keeps rising in the recent … Continue reading

Posted in Conference/workshop | 3 Comments

Deep stuff about deep learning?

Unless you live a secluded life without internet (in which case you’re not reading these lines), odds are that you have heard and read about deep learning (such as in this 2012 article in the New York Times, or the … Continue reading

Posted in Machine learning | 19 Comments

Some pictures in geometric probability

As I discussed in a previous blog post, I have recently been interested in models of randomly growing networks. As a starting point I focused my attention on the preferential attachment rule and its variants, in part because its ubiquity … Continue reading
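
For readers who have not seen the rule (standard definition, recalled here rather than quoted from the post): in preferential attachment, each arriving node links to an existing vertex $v$ with probability proportional to its current degree,

$$\mathbb{P}(\text{attach to } v) = \frac{d_v}{\sum_{u} d_u},$$

so well-connected vertices accumulate links faster, producing the heavy-tailed degree distributions seen in many real networks.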

Posted in Probability theory, Random graphs | 2 Comments

Guest post by Sasho Nikolov: Beating Monte Carlo

If you work long enough in any mathematical science, at some point you will need to estimate an integral that does not have a simple closed form. Maybe your function is really complicated. Maybe it’s really high-dimensional. Often you … Continue reading
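
The baseline being beaten is worth stating (a standard fact, included for context): plain Monte Carlo draws i.i.d. samples $x_1, \dots, x_N$ from $\mu$ and estimates $I = \int f \, d\mu$ by

$$\hat{I}_N = \frac{1}{N} \sum_{i=1}^{N} f(x_i), \qquad \mathbb{E}\big[ (\hat{I}_N - I)^2 \big]^{1/2} = \frac{\sigma(f)}{\sqrt{N}},$$

so the error decays like $N^{-1/2}$ regardless of dimension; “beating Monte Carlo” means designing sample points that do better than this rate.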

Posted in Theoretical Computer Science | 7 Comments

The entropic barrier: a simple and optimal universal self-concordant barrier

Ronen Eldan and I have just uploaded our new paper to the arXiv (it should appear tomorrow; for the moment you can see it here). The abstract reads as follows: We prove that the Fenchel dual of the log-Laplace transform … Continue reading
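
Spelling out the objects named in the abstract (standard definitions; the barrier parameter below is the one behind the “optimal” in the title): for a convex body $K \subseteq \mathbb{R}^n$, the log-Laplace transform is

$$f(\theta) = \log \int_K e^{\langle \theta, x \rangle} \, dx,$$

and the entropic barrier is its Fenchel dual $f^*(x) = \sup_{\theta} \, \langle \theta, x \rangle - f(\theta)$, a self-concordant barrier on $K$ with parameter $(1 + o(1)) \, n$, essentially the best possible.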

Posted in Optimization | 1 Comment