Author Archives: Sebastien Bubeck

Komlos conjecture, Gaussian correlation conjecture, and a bit of machine learning

Today I would like to talk (somewhat indirectly) about a beautiful COLT 2014 paper by Nick Harvey and Samira Samadi. The problem studied in this paper goes as follows: imagine that you have a bunch of data points in with a certain … Continue reading

Posted in Optimization, Probability theory, Theoretical Computer Science | 2 Comments

A zest of number theory

I just encountered an amazing number-theoretic result. It is probably very well known, but for those who never saw it, it’s quite something, so I thought I would share it. Let n be a positive integer. A partition of n is … Continue reading
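The excerpt cuts off before the definition. As a quick refresher of the standard notion (my own illustration, not the result the post is about): a partition of n writes n as a sum of positive integers, disregarding order. A minimal Python sketch that counts partitions by dynamic programming:

```python
# Count the partitions of n: ways to write n as a sum of positive integers
# where order does not matter (e.g. 4 = 3+1 = 2+2 = 2+1+1 = 1+1+1+1,
# so p(4) = 5, counting 4 itself).
def partition_count(n):
    # dp[m] = number of partitions of m using only the parts allowed so far
    dp = [1] + [0] * n
    for part in range(1, n + 1):      # allow parts 1, 2, ..., n in turn
        for m in range(part, n + 1):
            dp[m] += dp[m - part]
    return dp[n]

if __name__ == "__main__":
    print([partition_count(n) for n in range(1, 11)])
    # -> [1, 2, 3, 5, 7, 11, 15, 22, 30, 42]
```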

Posted in Theoretical Computer Science | Leave a comment

Probability in high dimension

The Barcelona events have just ended, and I’m happy to report that everything went very smoothly. In my opinion the quality of the work presented at COLT and at the Foundations of Learning Theory workshop was truly outstanding. I hope … Continue reading

Posted in Probability theory | Leave a comment

Theory of Convex Optimization for Machine Learning

I am extremely happy to release the first draft of my monograph based on the lecture notes published last year on this blog. (Comments on the draft are welcome!) The abstract reads as follows: This monograph presents the main mathematical … Continue reading

Posted in Optimization | 13 Comments

COLT 2014 accepted papers

The accepted papers for COLT 2014 have just been posted! This year we had a record number of 140 submissions, out of which 52 were accepted (38 for a 20-minute presentation and 14 for a 5-minute presentation). In my opinion … Continue reading

Posted in Conference/workshop | 2 Comments

On the influence of the seed graph in the preferential attachment model

The preferential attachment model, introduced in 1992 by Mahmoud and popularized in 1999 by Barabási and Albert, has attracted a lot of attention in the last decade. In its simplest form, it describes the evolution of a random tree. Formally, we denote by … Continue reading
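For readers new to the model, here is a rough sketch of its simplest form, a random tree (my own illustration under the standard construction, not code from the post): the tree starts from a single edge, and each new vertex attaches to an existing vertex chosen with probability proportional to its current degree.

```python
import random

def preferential_attachment_tree(n, seed=None):
    """Grow a random tree on n >= 2 vertices: vertex t attaches to an
    existing vertex chosen with probability proportional to its degree."""
    rng = random.Random(seed)
    edges = [(0, 1)]                 # start from a single edge on vertices 0 and 1
    # Each vertex appears in this list once per unit of degree, so a uniform
    # draw from it is a degree-proportional draw over vertices.
    degree_urn = [0, 1]
    for t in range(2, n):
        target = rng.choice(degree_urn)
        edges.append((target, t))
        degree_urn.extend([target, t])
    return edges

if __name__ == "__main__":
    print(preferential_attachment_tree(10, seed=0))
```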

Posted in Random graphs | 6 Comments

Nesterov’s Accelerated Gradient Descent for Smooth and Strongly Convex Optimization

About a year ago I described Nesterov’s Accelerated Gradient Descent in the context of smooth optimization. As I mentioned previously, this has been by far the most popular post on this blog. Today I have decided to revisit this post to give a … Continue reading
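For context, below is a minimal sketch of the accelerated scheme for a beta-smooth and alpha-strongly convex function (my own illustration of the standard two-sequence updates, not the post's code; the gradient oracle grad_f and the test quadratic are assumptions for the demo):

```python
import numpy as np

def nesterov_agd_strongly_convex(grad_f, x0, alpha, beta, iterations=300):
    """Accelerated gradient descent for an alpha-strongly convex,
    beta-smooth function: a gradient step followed by a momentum step."""
    kappa = beta / alpha                                   # condition number
    momentum = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)
    x = np.asarray(x0, dtype=float)
    y_prev = x.copy()
    for _ in range(iterations):
        y = x - grad_f(x) / beta                           # gradient step from x
        x = (1 + momentum) * y - momentum * y_prev         # momentum extrapolation
        y_prev = y
    return y_prev

if __name__ == "__main__":
    # Quadratic f(x) = 0.5 * x^T A x with spectrum in [1, 100]:
    # alpha = 1, beta = 100, minimizer at the origin.
    A = np.diag(np.linspace(1.0, 100.0, 20))
    sol = nesterov_agd_strongly_convex(lambda x: A @ x, np.ones(20),
                                       alpha=1.0, beta=100.0)
    print(np.linalg.norm(sol))  # should be close to 0
```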

Posted in Optimization | 4 Comments

COLT deadline next week

The COLT deadline is approaching fast; don’t forget to submit your awesome learning theory paper(s) before Friday the 7th! Also recall that if you get a paper into COLT it will (i) give you an excuse to spend a few days in … Continue reading

Posted in Conference/workshop | Leave a comment

One year of blogging

A year ago I started this blog. I did not expect this experience to be so rewarding. For instance, seeing week after week the growing interest in the optimization posts gave me the stamina to pursue this endeavor to … Continue reading

Posted in Uncategorized | 5 Comments

A good NIPS!

This year’s edition of NIPS was a big success. As you probably already know, we had the surprise visit of Mark Zuckerberg (see here for the reason behind this visit). More interestingly (or perhaps less interestingly, depending on who you are), here … Continue reading

Posted in Conference/workshop | 3 Comments