Category Archives: Optimization
Optimal bound for stochastic bandits with corruption
Guest post by Mark Sellke. In the comments of the previous blog post we asked whether the new viewpoint on best of both worlds could be used to get clean “interpolation” results. The context is as follows: in a STOC … Continue reading
Amazing progress in adversarially robust stochastic multi-armed bandits
In this post I briefly discuss some stunning recent progress on robust bandits (for more background on bandits see these two posts, part 1 and part 2; in particular, what is described below gives a solution to Open Problem 3 … Continue reading
Nemirovski’s acceleration
I will describe here the very first (to my knowledge) acceleration algorithm for smooth convex optimization, due to Arkadi Nemirovski (dating back to the late 1970s). The algorithm relies on a 2-dimensional plane-search subroutine (which, in … Continue reading
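As a pointer for readers skimming this archive, a generic plane-search step can be written as below. The particular pair of search directions (the current gradient and a running weighted combination of past gradients) is an illustrative assumption on my part; the post itself spells out Nemirovski's exact construction.

```latex
% Sketch of a 2-dimensional plane-search step (illustrative; the post gives
% Nemirovski's precise choice of the search plane). With current iterate x_t
% and an aggregated direction v_t = \sum_{s \le t} \lambda_s \nabla f(x_s),
% the next iterate minimizes f over a 2-dimensional affine plane:
x_{t+1} \in \operatorname*{argmin}_{x \,\in\, x_t + \operatorname{span}\{\nabla f(x_t),\, v_t\}} f(x).
```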
A short proof for Nesterov’s momentum
Yesterday I posted a picture on Twitter and it quickly became my most visible tweet ever (by far). I thought this would be a good opportunity to revisit the proof of Nesterov’s momentum, especially since, as it … Continue reading
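For reference while skimming, here is the standard form of the iteration whose proof the post revisits, for a β-smooth convex function f; the notation below is the usual textbook one, not necessarily the post's.

```latex
% Nesterov's momentum for a \beta-smooth convex f, in its standard form
% (start from y_1 = x_1; step size 1/\beta):
\begin{align*}
  x_{s+1} &= y_s - \tfrac{1}{\beta}\, \nabla f(y_s), \\
  y_{s+1} &= x_{s+1} + \frac{s-1}{s+2}\, \bigl( x_{s+1} - x_s \bigr).
\end{align*}
% This iteration guarantees f(x_s) - f(x^*) = O( \beta \|x_1 - x^*\|^2 / s^2 ).
```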
Smooth distributed convex optimization
A couple of months ago we (Kevin Scaman, Francis Bach, Yin Tat Lee, Laurent Massoulie, and I) uploaded a new paper on distributed convex optimization. We came up with a pretty clean picture for the optimal oracle complexity of this … Continue reading
Geometry of linearized neural networks
This week we had the pleasure of hosting Tengyu Ma from Princeton University, who told us about the recent progress he and his co-authors have made in understanding various linearized versions of neural networks. I will describe here two such results, … Continue reading
Local max-cut in smoothed polynomial time
Omer Angel, Yuval Peres, Fan Wei, and I have just posted to the arXiv our paper showing that local max-cut is in smoothed polynomial time. In this post I briefly explain what the problem is, and I give a short … Continue reading
Kernel-based methods for convex bandits, part 3
(This post absolutely requires having read Part 1 and Part 2.) A key assumption at the end of Part 1 was that, after rescaling space so that the current exponential weights distribution is isotropic, one has (1) for … Continue reading
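For concreteness, the rescaling mentioned at the start of the excerpt is ordinary whitening; the symbols μ_t and Σ_t below (the mean and covariance of the exponential weights distribution p_t) are my notation, not the post's.

```latex
% Whitening the exponential weights distribution p_t (assumed notation:
% \mu_t and \Sigma_t are its mean and covariance). After the change of
% variables below, the pushforward of p_t has mean 0 and identity
% covariance, i.e. it is isotropic:
\tilde{x} = \Sigma_t^{-1/2} (x - \mu_t), \qquad
\mathbb{E}_{p_t}[\tilde{x}] = 0, \qquad \operatorname{Cov}_{p_t}(\tilde{x}) = I .
```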
Kernel-based methods for convex bandits, part 2
The goal of this second lecture is to explain how to carry out the variance calculation from the end of Part 1 in the case where the exponential weights distribution is non-Gaussian. Here we will lose a factor … Continue reading
Kernel-based methods for bandit convex optimization, part 1
A month ago Ronen Eldan, Yin Tat Lee, and I posted our latest work on bandit convex optimization. I’m quite happy with the result (the first polynomial-time method with poly(dimension)-regret), but I’m even more excited by the techniques we developed. Next … Continue reading