Author Archives: Sebastien Bubeck

Guest post by Dan Garber and Elad Hazan: The Conditional Gradient Method, A Linearly Convergent Algorithm – Part I/II

In a previous post Sebastien presented and analysed the conditional gradient method for minimizing a smooth convex function $f$ over a compact and convex domain $\mathcal{X}$. The update step of the method is as follows,

    $x_{t+1} = x_t + \gamma_t (v_t - x_t)$,

where $v_t \in \operatorname{argmin}_{v \in \mathcal{X}} \langle \nabla f(x_t), v \rangle$, $\gamma_t = \frac{2}{t+1}$. The … Continue reading
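The update above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: it assumes the feasible set is the probability simplex, where the linear-minimization oracle simply returns the vertex (standard basis vector) with the smallest gradient coordinate.

```python
import numpy as np

def frank_wolfe(grad, linear_min, x0, T=500):
    """Conditional gradient (Frank-Wolfe) sketch: each step moves toward
    the point of the domain minimizing the linearized objective, so the
    iterate stays feasible as a convex combination."""
    x = x0
    for t in range(1, T + 1):
        g = grad(x)
        v = linear_min(g)        # v_t = argmin_{v in X} <grad f(x_t), v>
        gamma = 2.0 / (t + 1)    # the step size from the post
        x = x + gamma * (v - x)  # x_{t+1} = x_t + gamma_t (v_t - x_t)
    return x

# Toy example (my choice, not from the post): minimize ||x - c||^2
# over the probability simplex, where c lies in the simplex.
c = np.array([0.1, 0.5, 0.4])
grad = lambda x: 2.0 * (x - c)
# Linear oracle on the simplex: the best vertex is a basis vector.
linear_min = lambda g: np.eye(len(g))[np.argmin(g)]
x = frank_wolfe(grad, linear_min, np.ones(3) / 3.0, T=500)
```

Since every iterate is a convex combination of simplex vertices, no projection step is ever needed — the appeal of the method when projections onto $\mathcal{X}$ are expensive.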

Posted in Optimization | Leave a comment

5 announcements

First two self-centered announcements: A new problem around subgraph densities Nati Linial and I have just uploaded our first paper together, titled ‘On the local profiles of trees’. Some background on the paper: recently there has been a lot of … Continue reading

Posted in Conference/workshop | 1 Comment

First Big Data workshop at the Simons Institute

This week at the Simons Institute we had the first Big Data workshop on Succinct Data Representations and Applications. Here I would like to briefly talk about one of the ‘stars’ of this workshop: the squared-length sampling technique. I will illustrate this method … Continue reading
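A minimal sketch of squared-length sampling, in the spirit of its classical use for approximating matrix products (the specifics below are illustrative assumptions, not taken from the post): rows of a matrix are sampled with probability proportional to their squared Euclidean length and rescaled so the small sketch is an unbiased estimator of $A^\top A$.

```python
import numpy as np

def squared_length_sample(A, s, rng):
    """Sample s rows of A with probability proportional to their squared
    length, rescaled by 1/sqrt(s * p_i) so that E[S^T S] = A^T A."""
    row_norms_sq = np.einsum('ij,ij->i', A, A)
    p = row_norms_sq / row_norms_sq.sum()
    idx = rng.choice(A.shape[0], size=s, p=p)
    return A[idx] / np.sqrt(s * p[idx])[:, None]

rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 5))     # toy data, chosen for illustration
S = squared_length_sample(A, 200, rng)
# S has only 200 rows, yet S^T S approximates the full A^T A.
```

The point of biasing toward long rows is variance reduction: rows carrying most of the Frobenius mass are sampled more often, which keeps the variance of the estimate proportional to $\|A\|_F^4 / s$.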

Posted in Conference/workshop | 2 Comments

First week of activity at the Simons Institute

This first week at the Simons Institute was a lot of fun! I attended the first workshop in the Real Analysis program, which was about Testing, Learning and Inapproximability. There were plenty of good talks and I learned a lot of … Continue reading

Posted in Theoretical Computer Science | 4 Comments

Random-Approx 2013

Last week I attended the Random-Approx conference at Berkeley. I missed quite a few talks as I was also settling into my new office for the semester at the Simons Institute, so I will just report on the three invited talks: Luca Trevisan gave a … Continue reading

Posted in Random graphs, Theoretical Computer Science | 2 Comments

COLT 2013/ICML 2013 videos

The videos for COLT 2013 were just published, and the videos for ICML 2013 are also available online.

Posted in Uncategorized | 3 Comments

Two summer readings on Big Data and Deep Learning

This is the first (short) post dedicated to the Big Data program of the Simons Institute. We received from the program organizer Mike Jordan our first reading assignment which is a report published by the National Academy of Sciences on the “Frontiers in Massive … Continue reading

Posted in Uncategorized | 2 Comments

ICML and isotropic position of a convex body

ICML 2013 just finished a few days ago. The presentation that inspired me the most was the invited talk by Santosh Vempala. He talked about the relations between sampling, optimization, integration, learning, and rounding. I strongly recommend Vempala’s short survey … Continue reading

Posted in Uncategorized | 5 Comments

COLT (with bombs, deep learning and other exciting stuff)

The 2013 edition of COLT at Princeton just finished a few days ago and I’m happy to report that everything went smoothly! We had a somewhat stressful start (at least for the organizers…) since on the day before the … Continue reading

Posted in Uncategorized | 4 Comments

Embeddings of finite metric spaces in Hilbert space

In this post we discuss the following notion: Let $(X, d_X)$ and $(Y, d_Y)$ be two metric spaces. One says that $X$ embeds into $Y$ with distortion $D \geq 1$ if there exist a map $f : X \to Y$ and a scaling factor $s > 0$ such that for any $x, y \in X$,

    $s \, d_X(x, y) \leq d_Y(f(x), f(y)) \leq D \, s \, d_X(x, y).$

We write this as $X \hookrightarrow_D Y$. Note that if … Continue reading
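The definition can be checked numerically on a small example. The sketch below (my illustration, not from the post) computes the distortion of the identity correspondence between two finite metric spaces given as distance matrices, using the classical example of the 4-cycle $C_4$ embedded at the corners of the unit square, which achieves distortion $\sqrt{2}$.

```python
import itertools
import numpy as np

def distortion(dX, dY):
    """Distortion of the correspondence i -> i between two finite metric
    spaces given as distance matrices: the best D is the ratio of the
    largest to the smallest expansion over all pairs of points."""
    n = dX.shape[0]
    ratios = [dY[i, j] / dX[i, j]
              for i, j in itertools.combinations(range(n), 2)]
    return max(ratios) / min(ratios)

# Shortest-path metric of the 4-cycle C4.
dX = np.array([[0, 1, 2, 1],
               [1, 0, 1, 2],
               [2, 1, 0, 1],
               [1, 2, 1, 0]], dtype=float)
# Embed the four vertices at the corners of the unit square in R^2.
pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
dY = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
D = distortion(dX, dY)  # sqrt(2): adjacent pairs expand by 1,
                        # opposite pairs contract to sqrt(2)/2
```

Taking the best scaling $s$ into account amounts exactly to dividing the maximal expansion by the minimal one, which is what the function returns.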

Posted in Theoretical Computer Science | Leave a comment