Category Archives: Optimization

Nemirovski’s acceleration

I will describe here the very first (to my knowledge) acceleration algorithm for smooth convex optimization, which is due to Arkadi Nemirovski (dating back to the end of the 70's). The algorithm relies on a 2-dimensional plane-search subroutine (which, in … Continue reading
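
To make the plane-search idea concrete, here is a minimal numpy sketch: at each step one minimizes f exactly over a 2-dimensional plane through the current iterate. The spanning directions (current gradient and an aggregate of past gradients) and the linearly increasing weights below are illustrative assumptions, not necessarily the exact choices in Nemirovski's method; see the post for those.

```python
import numpy as np
from scipy.optimize import minimize

# Smooth convex test objective: f(x) = 0.5 x^T A x - b^T x.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 20))
A = M.T @ M / 50 + 0.1 * np.eye(20)
b = rng.standard_normal(20)
f = lambda x: 0.5 * x @ (A @ x) - b @ x
grad = lambda x: A @ x - b

def plane_search(x, u, v):
    """Minimize f over the 2-dimensional plane x + span{u, v}."""
    obj = lambda c: f(x + c[0] * u + c[1] * v)
    c = minimize(obj, np.zeros(2), method="Nelder-Mead").x
    return x + c[0] * u + c[1] * v

x = np.zeros(20)
agg = np.zeros(20)          # aggregated past gradients (the "memory" direction)
for t in range(1, 51):
    g = grad(x)
    agg = agg + t * g       # increasing weights: an illustrative choice
    x = plane_search(x, g, agg)
print(f(x))                 # should approach the minimum -0.5 * b @ np.linalg.solve(A, b)
```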

Posted in Optimization | 17 Comments

A short proof for Nesterov’s momentum

Yesterday I posted the following picture on Twitter and it quickly became my most visible tweet ever (by far). I thought this would be a good opportunity to revisit the proof of Nesterov’s momentum, especially since, as it … Continue reading
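
For reference, here is a minimal numpy sketch of one standard form of the update for an L-smooth convex function (the k/(k+3) extrapolation weight is one common choice of momentum schedule; the proof in the post uses its own parameterization):

```python
import numpy as np

def nesterov_momentum(grad_f, x0, L, T):
    """One standard form of Nesterov's momentum for an L-smooth convex f:
    a gradient step followed by extrapolation with weight k/(k+3)."""
    x = x0.copy()
    y_prev = x0.copy()
    for k in range(T):
        y = x - grad_f(x) / L                  # gradient step
        x = y + (k / (k + 3)) * (y - y_prev)   # momentum / extrapolation
        y_prev = y
    return y_prev                              # enjoys the accelerated O(L / T^2) rate
```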

Posted in Optimization | 20 Comments

Smooth distributed convex optimization

A couple of months ago we (Kevin Scaman, Francis Bach, Yin Tat Lee, Laurent Massoulie and myself) uploaded a new paper on distributed convex optimization. We came up with a pretty clean picture for the optimal oracle complexity of this … Continue reading

Posted in Optimization | Leave a comment

Geometry of linearized neural networks

This week we had the pleasure of hosting Tengyu Ma from Princeton University, who told us about the recent progress he has made with co-authors on understanding various linearized versions of neural networks. I will describe here two such results, … Continue reading

Posted in Machine learning, Optimization | 2 Comments

Local max-cut in smoothed polynomial time

Omer Angel, Yuval Peres, Fan Wei, and myself have just posted to the arXiv our paper showing that local max-cut is in smoothed polynomial time. In this post I briefly explain what the problem is, and I give a short … Continue reading
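
The local search dynamics in question is the classical FLIP algorithm: start from any cut and keep flipping single vertices as long as a flip strictly increases the cut weight. A minimal sketch of that dynamics (the point of the paper is that, under small random perturbations of the weights, it terminates after polynomially many flips):

```python
import numpy as np

def flip_local_search(W, rng=None):
    """FLIP dynamics for max-cut. W: symmetric weight matrix, zero diagonal.
    The cut is encoded by s in {-1,+1}^n; flipping vertex i changes the cut
    weight by s_i * <W_i, s>, so we flip while some flip strictly helps."""
    rng = rng or np.random.default_rng(0)
    s = rng.choice([-1, 1], size=W.shape[0])
    while True:
        gains = s * (W @ s)          # gain of flipping each vertex
        i = int(np.argmax(gains))
        if gains[i] <= 0:            # local max: no single flip improves
            return s
        s[i] = -s[i]
```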

Posted in Optimization, Probability theory | 2 Comments

Kernel-based methods for convex bandits, part 3

(This post absolutely requires having read Part 1 and Part 2.) A key assumption at the end of Part 1 was that, after rescaling space so that the current exponential weights distribution is isotropic, one has (1)   for … Continue reading
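
The rescaling step referred to above is the standard whitening map. A minimal sketch, assuming for illustration that the exponential weights distribution is represented by samples (the rows of X):

```python
import numpy as np

def isotropic_rescaling(X):
    """Return the affine map x -> Sigma^{-1/2}(x - mu) that puts the
    empirical distribution of the rows of X in isotropic position
    (zero mean, identity covariance). Assumes full-rank covariance."""
    mu = X.mean(axis=0)
    Sigma = np.cov(X, rowvar=False)
    w, V = np.linalg.eigh(Sigma)               # Sigma is symmetric PSD
    Sigma_inv_sqrt = (V / np.sqrt(w)) @ V.T    # V diag(1/sqrt(w)) V^T
    return lambda x: Sigma_inv_sqrt @ (x - mu)
```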

Posted in Optimization, Theoretical Computer Science | Leave a comment

Kernel-based methods for convex bandits, part 2

The goal of this second lecture is to explain how to do the variance calculation we talked about at the end of Part 1 for the case where the exponential weights distribution is non-Gaussian. We will lose here a factor … Continue reading

Posted in Optimization, Theoretical Computer Science | Leave a comment

Kernel-based methods for bandit convex optimization, part 1

A month ago Ronen Eldan, Yin Tat Lee and myself posted our latest work on bandit convex optimization. I’m quite happy with the result (the first polynomial-time method with poly(dimension) regret) but I’m even more excited by the techniques we developed. Next … Continue reading

Posted in Optimization, Theoretical Computer Science | Leave a comment

Notes on least-squares, part II

As promised at the end of Part I, I will give some intuition for the Bach and Moulines analysis of constant step size SGD for least-squares. Let us recall the setting. Let (X, Y) be a pair of random variables with … where … Continue reading
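
To fix ideas, here is a minimal sketch of the algorithm being analyzed: constant step size SGD on the square loss, with Polyak-Ruppert averaging of the iterates (it is the averaged iterate that the Bach and Moulines bound is about). The stream interface below is just a convenient way to model i.i.d. samples.

```python
import numpy as np

def averaged_sgd_least_squares(stream, d, gamma, T):
    """Constant step size SGD on the square loss 0.5 * (<theta, x> - y)^2,
    with Polyak-Ruppert averaging of the iterates. `stream` yields i.i.d.
    pairs (x, y)."""
    theta = np.zeros(d)
    theta_bar = np.zeros(d)
    for t in range(1, T + 1):
        x, y = next(stream)
        theta -= gamma * (theta @ x - y) * x    # stochastic gradient step
        theta_bar += (theta - theta_bar) / t    # running average of iterates
    return theta_bar
```

Note the design choice: gamma is never decreased; roughly, in their analysis it is a constant of order 1/R^2 (with R a bound on the norm of X), and the averaging is what makes this work.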

Posted in Machine learning, Optimization | Leave a comment

Bandit theory, part II

These are the lecture notes for the second part of my minicourse on bandit theory (see here for Part 1). The linear bandit problem (Auer [2002]): we will mostly study the following far-reaching extension of the multi-armed bandit problem. Known … Continue reading
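
As a reminder of the interaction protocol, here is a minimal sketch with a uniformly random learner standing in as a placeholder for an actual algorithm (such as Auer's confidence-ellipsoid approach): the learner picks an action vector from a known set and observes only a noisy scalar value of the unknown linear loss.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 5, 1000
theta = rng.standard_normal(d)            # unknown loss vector
actions = rng.standard_normal((20, d))    # known action set (finite here)

best = (actions @ theta).min()            # benchmark: best fixed action in hindsight
regret = 0.0
for t in range(T):
    a = actions[rng.integers(len(actions))]      # placeholder: play at random
    loss = a @ theta + rng.standard_normal()     # only this noisy scalar is observed
    regret += a @ theta - best
print(regret / T)   # random play does not drive the average regret to zero
```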

Posted in Optimization, Probability theory | 4 Comments