Author Archives: Sebastien Bubeck

On the influence of the seed graph in the preferential attachment model

    The preferential attachment model, introduced in 1992 by Mahmoud and popularized in 1999 by Barabási and Albert, has attracted a lot of attention in the last decade. In its simplest form it describes the evolution of a random tree. Formally we denote by … Continue reading
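
Since the excerpt above is cut off, here is a minimal sketch of the simplest form of the model: a preferential attachment tree grown from a seed graph, where each new vertex attaches to an existing vertex with probability proportional to its current degree. The function name and the default single-edge seed are illustrative assumptions, not taken from the post.

```python
import random

def preferential_attachment_tree(n, seed_edges=None):
    """Grow a preferential attachment tree on n vertices.

    Each new vertex attaches to an existing vertex chosen with
    probability proportional to its current degree. Starting from a
    given seed graph rather than a single edge is exactly the setting
    the post studies. (Illustrative sketch, not the post's code.)
    """
    # Seed graph as a list of edges; default is a single edge {0, 1}.
    edges = list(seed_edges) if seed_edges else [(0, 1)]
    # Each vertex appears in this list once per unit of degree, so a
    # uniform draw from it is a degree-proportional draw of a vertex.
    endpoints = [v for e in edges for v in e]
    start = max(endpoints) + 1
    for t in range(start, n):
        target = random.choice(endpoints)  # degree-proportional choice
        edges.append((t, target))
        endpoints.extend([t, target])
    return edges

# Example: grow a 10-vertex tree from the default single-edge seed.
print(preferential_attachment_tree(10))
```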

Posted in Random graphs | 2 Comments

Nesterov’s Accelerated Gradient Descent for Smooth and Strongly Convex Optimization

    About a year ago I described Nesterov’s Accelerated Gradient Descent in the context of smooth optimization. As I mentioned previously, this has been by far the most popular post on this blog. Today I have decided to revisit this post to give a … Continue reading
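
For readers landing here from the archive, a minimal numerical sketch of the scheme the post analyses: Nesterov's method in its constant-momentum form for a beta-smooth, alpha-strongly convex function. The function names and the quadratic test problem are illustrative assumptions, not taken from the post.

```python
import numpy as np

def nesterov_agd(grad, x0, alpha, beta, iters=100):
    """Nesterov's accelerated gradient descent, constant-momentum form,
    for an alpha-strongly convex and beta-smooth function.

    grad  : gradient oracle of f
    x0    : starting point
    alpha : strong convexity parameter
    beta  : smoothness (Lipschitz constant of the gradient)
    """
    kappa = beta / alpha                                    # condition number
    momentum = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)  # fixed momentum weight
    x, y = x0, x0
    for _ in range(iters):
        y_next = x - grad(x) / beta           # gradient step of size 1/beta
        x = y_next + momentum * (y_next - y)  # extrapolation (momentum) step
        y = y_next
    return y

# Example: f(x) = 0.5 * x^T A x, so alpha = 1 and beta = 10 below.
A = np.diag([1.0, 10.0])
sol = nesterov_agd(lambda x: A @ x, np.array([5.0, 5.0]), alpha=1.0, beta=10.0, iters=200)
print(sol)  # close to the minimizer (0, 0)
```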

Posted in Optimization | Leave a comment

COLT deadline next week

The COLT deadline is approaching fast: don’t forget to submit your awesome learning theory paper(s) before Friday the 7th! Also recall that if you get a paper into COLT it will (i) give you an excuse to spend a few days in … Continue reading

Posted in Conference/workshop | Leave a comment

One year of blogging

A year ago I started this blog. I did not expect this experience to be so rewarding. For instance, seeing week after week the growing interest in the optimization posts gave me the stamina to pursue this endeavor to … Continue reading

Posted in Uncategorized | 5 Comments

A good NIPS!

This year’s edition of NIPS was a big success. As you probably already know, we had a surprise visit from Mark Zuckerberg (see here for the reason behind this visit). More interestingly (or perhaps less interestingly, depending on who you are), here … Continue reading

Posted in Conference/workshop | 3 Comments

The hunter and the rabbit

In this post I will tell you the story of the hunter and the rabbit. To keep you interested, let me say that in the story we will see a powerful (yet trivial) inequality that I learned today from Yuval … Continue reading

Posted in Probability theory | 2 Comments

Guest post by Dan Garber and Elad Hazan: The Conditional Gradient Method, A Linearly Convergent Algorithm – Part II/II

The goal of this post is to develop a Conditional Gradient method that converges exponentially fast while still solving only a linear minimization problem over the domain at each iteration. To this end, we consider the following relaxation of the … Continue reading

Posted in Optimization | Leave a comment

Guest post by Dan Garber and Elad Hazan: The Conditional Gradient Method, A Linearly Convergent Algorithm – Part I/II

    In a previous post Sebastien presented and analysed the conditional gradient method for minimizing a smooth convex function $f$ over a compact and convex domain $\mathcal{K}$. The update step of the method is as follows, $x_{t+1} = x_t + \eta_t (v_t - x_t)$, where $v_t \in \operatorname{argmin}_{v \in \mathcal{K}} \langle \nabla f(x_t), v \rangle$ and $\eta_t \in [0,1]$. The … Continue reading
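
To make the update above concrete, here is a minimal sketch of the basic conditional gradient (Frank–Wolfe) method with the standard step size. The l1-ball example and the oracle names are my own illustration, not code from the guest post.

```python
import numpy as np

def conditional_gradient(grad, linear_min, x0, iters=100):
    """Basic conditional gradient (Frank-Wolfe) sketch.

    grad       : gradient oracle of the smooth convex objective f
    linear_min : oracle returning argmin_{v in K} <g, v> for a given g,
                 i.e. a linear minimization over the domain K
    x0         : a starting point in K
    """
    x = x0
    for t in range(iters):
        v = linear_min(grad(x))   # solve the linear subproblem over K
        eta = 2.0 / (t + 2.0)     # standard step size eta_t = 2 / (t + 2)
        x = x + eta * (v - x)     # move toward the linear minimizer
    return x

# Example: minimize f(x) = ||x - c||^2 over the l1 ball K = {||x||_1 <= 1}.
# The linear oracle over the l1 ball returns a signed coordinate vector.
c = np.array([0.3, 0.9])

def l1_ball_linear_min(g):
    v = np.zeros_like(g)
    i = np.argmax(np.abs(g))
    v[i] = -np.sign(g[i])
    return v

x_star = conditional_gradient(lambda x: 2 * (x - c), l1_ball_linear_min, np.zeros(2))
print(x_star)  # approximately the projection of c onto the l1 ball, (0.2, 0.8)
```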

Posted in Optimization | Leave a comment

5 announcements

    First, two self-centered announcements. A new problem around subgraph densities: Nati Linial and I have just uploaded our first paper together, titled ‘On the local profiles of trees’. Some background on the paper: recently there has been a lot of … Continue reading

Posted in Conference/workshop | 1 Comment

First Big Data workshop at the Simons Institute

This week at the Simons Institute we had the first Big Data workshop on Succinct Data Representations and Applications. Here I would like to briefly talk about one of the ‘stars’ of this workshop: the squared-length sampling technique. I will illustrate this method … Continue reading
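
As a teaser for the technique named above, here is a minimal sketch assuming the standard definition of squared-length sampling: rows of a matrix are drawn with probability proportional to their squared Euclidean norm, the primitive behind classical randomized low-rank approximation. The function name and example are illustrative, not code from the workshop.

```python
import numpy as np

def squared_length_sample(A, k, rng=None):
    """Sample k row indices of A with probability proportional to the
    squared Euclidean length of each row (squared-length sampling).
    Illustrative sketch of the standard primitive.
    """
    rng = rng or np.random.default_rng()
    sq_norms = np.sum(A * A, axis=1)    # squared length of each row
    probs = sq_norms / sq_norms.sum()   # normalize into a distribution
    return rng.choice(A.shape[0], size=k, replace=True, p=probs)

# Example: rows with larger norm are picked far more often.
A = np.array([[10.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
idx = squared_length_sample(A, k=5)
print(idx)  # mostly row 0, whose squared length dominates the total
```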

Posted in Conference/workshop | 2 Comments