Category Archives: Optimization

The entropic barrier: a simple and optimal universal self-concordant barrier

Ronen Eldan and I have just uploaded our new paper to the arXiv (it should appear tomorrow; for the moment you can see it here). The abstract reads as follows: We prove that the Fenchel dual of the log-Laplace transform … Continue reading
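
For readers who do not click through, the central object is simple to state (my paraphrase of the abstract, so see the paper for the precise constants). For a convex body $K \subset \mathbb{R}^n$, let $f(\theta) = \log \int_K e^{\langle \theta, x \rangle} \, dx$ be the logarithm of the Laplace transform of the uniform measure on $K$, and let $f^*(x) = \sup_{\theta \in \mathbb{R}^n} \left( \langle \theta, x \rangle - f(\theta) \right)$ be its Fenchel dual. The paper shows that $f^*$ is a $(1+o(1))\, n$-self-concordant barrier on $K$, which is optimal.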

Posted in Optimization | 1 Comment

Komlós conjecture, Gaussian correlation conjecture, and a bit of machine learning

Today I would like to talk (somewhat indirectly) about a beautiful COLT 2014 paper by Nick Harvey and Samira Samadi. The problem studied in this paper goes as follows: imagine that you have a bunch of data points in $\mathbb{R}^n$ with a certain … Continue reading
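
For context, the Komlós conjecture mentioned in the title asserts (in one standard formulation) that there is a universal constant $C$ such that for any vectors $v_1, \dots, v_m \in \mathbb{R}^n$ with $\|v_i\|_2 \leq 1$, one can choose signs $\varepsilon_1, \dots, \varepsilon_m \in \{-1, +1\}$ with $\left\| \sum_{i=1}^m \varepsilon_i v_i \right\|_\infty \leq C$.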

Posted in Optimization, Probability theory, Theoretical Computer Science | 5 Comments

Theory of Convex Optimization for Machine Learning

I am extremely happy to release the first draft of my monograph based on the lecture notes published last year on this blog. (Comments on the draft are welcome!) The abstract reads as follows: This monograph presents the main mathematical … Continue reading

Posted in Optimization | 13 Comments

Nesterov’s Accelerated Gradient Descent for Smooth and Strongly Convex Optimization

About a year ago I described Nesterov’s Accelerated Gradient Descent in the context of smooth optimization. As I mentioned previously, this has been by far the most popular post on this blog. Today I have decided to revisit this post to give a … Continue reading
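
For concreteness, here is a minimal numpy sketch of the constant-momentum form of the scheme (the function and parameter names are mine; beta and alpha denote the smoothness and strong convexity constants):

```python
import numpy as np

def nesterov_agd(grad, x0, beta, alpha, n_iter=100):
    """Nesterov's accelerated gradient descent for a beta-smooth,
    alpha-strongly convex function (constant-momentum form)."""
    kappa = beta / alpha                          # condition number
    momentum = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)
    x, y = x0.copy(), x0.copy()
    for _ in range(n_iter):
        y_next = x - grad(x) / beta               # gradient step
        x = y_next + momentum * (y_next - y)      # momentum step
        y = y_next
    return y

# Toy usage: minimize 0.5 * x^T A x with eigenvalues in [1, 10],
# so alpha = 1 and beta = 10.
A = np.diag([1.0, 10.0])
x_min = nesterov_agd(lambda x: A @ x, np.array([5.0, 5.0]), beta=10.0, alpha=1.0)
```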

Posted in Optimization | 5 Comments

Guest post by Dan Garber and Elad Hazan: The Conditional Gradient Method, A Linearly Convergent Algorithm – Part II/II

The goal of this post is to develop a Conditional Gradient method that converges exponentially fast while basically solving only a linear minimization problem over the domain at each iteration. To this end we consider the following relaxation of the … Continue reading

Posted in Optimization | Leave a comment

Guest post by Dan Garber and Elad Hazan: The Conditional Gradient Method, A Linearly Convergent Algorithm – Part I/II

In a previous post Sebastien presented and analysed the conditional gradient method for minimizing a smooth convex function $f$ over a compact and convex domain $\mathcal{X}$. The update step of the method is as follows: $x_{t+1} = (1 - \gamma_t) x_t + \gamma_t y_t$, where $y_t \in \operatorname{argmin}_{y \in \mathcal{X}} \langle \nabla f(x_t), y \rangle$, $\gamma_t = \frac{2}{t+1}$. The … Continue reading
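
As a companion, here is a minimal numpy sketch of this basic method; the l1-ball oracle and the least-squares instance below are illustrative choices of mine, not taken from the post:

```python
import numpy as np

def conditional_gradient(grad, lmo, x0, n_iter=100):
    """Conditional gradient (Frank-Wolfe): at each step, minimize a linear
    function over the domain via the oracle `lmo`, then take a convex
    combination with step size gamma_t = 2 / (t + 2)."""
    x = x0.copy()
    for t in range(n_iter):
        y = lmo(grad(x))                 # y in argmin_{z in X} <grad f(x), z>
        gamma = 2.0 / (t + 2.0)
        x = (1 - gamma) * x + gamma * y
    return x

def l1_ball_lmo(g, r=1.0):
    """Linear minimization oracle for the l1-ball of radius r:
    the minimizer is a signed vertex along the largest coordinate of g."""
    i = np.argmax(np.abs(g))
    y = np.zeros_like(g)
    y[i] = -r * np.sign(g[i])
    return y

# Toy usage: least squares over the unit l1-ball (a made-up instance).
rng = np.random.default_rng(0)
M, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
x_hat = conditional_gradient(lambda x: M.T @ (M @ x - b), l1_ball_lmo,
                             np.zeros(5), n_iter=200)
```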

Posted in Optimization | Leave a comment

The aftermath of ORF523 and the final

It has been two weeks since my last post on the blog, and I have to admit that it felt really good to take a break from my 2-posts-per-week regime of the previous months. Now that ORF523 is over the … Continue reading

Posted in Optimization | Leave a comment

ORF523: Optimization with bandit feedback

In this last lecture we consider the case where one can only access the function with a noisy $0^{th}$-order oracle (see this lecture for a definition). This assumption models the ‘physical reality’ of many practical problems (in contrast to … Continue reading
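
A classical tool in this bandit setting is the one-point gradient estimator of Flaxman, Kalai, and McMahan; the sketch below is a generic illustration of that estimator (the step sizes, projection radius, and toy function are assumptions of mine, not necessarily the construction of the lecture):

```python
import numpy as np

def one_point_gradient_estimate(f_noisy, x, delta, rng):
    """One-point gradient estimator for bandit (noisy 0th-order) feedback:
    query the function once at a random perturbation of x and rescale.
    In expectation this is a gradient of a smoothed version of f."""
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)                   # uniform direction on the sphere
    return (x.size / delta) * f_noisy(x + delta * u) * u

# Toy usage: noisy evaluations of f(x) = ||x||^2, with projected descent
# (projecting onto a ball keeps the high-variance iterates bounded).
rng = np.random.default_rng(1)
f_noisy = lambda x: x @ x + 0.01 * rng.standard_normal()
x = np.ones(5)
for t in range(1, 5001):
    g_hat = one_point_gradient_estimate(f_noisy, x, delta=0.1, rng=rng)
    x = x - 0.001 / np.sqrt(t) * g_hat
    x /= max(1.0, np.linalg.norm(x) / 2.0)   # project onto the ball of radius 2
```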

Posted in Optimization | Leave a comment

ORF523: Acceleration by randomization for a sum of smooth and strongly convex functions

In this lecture we are interested in the following problem: $\min_{x \in \mathbb{R}^n} \frac{1}{m} \sum_{i=1}^m f_i(x)$, where each $f_i$ is $\beta$-smooth and $\alpha$-strongly convex. We denote by $\kappa = \beta / \alpha$ the condition number of these functions (which is also the condition number of the average). Using a gradient descent … Continue reading
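
One representative randomized method for such finite sums is the stochastic average gradient (SAG) algorithm of Le Roux, Schmidt, and Bach; the lecture may analyze a different scheme, so the sketch below (with a made-up step size and toy instance) is just one concrete instance of acceleration by randomization:

```python
import numpy as np

def sag(grads, x0, step, n_iter=1000, rng=None):
    """Stochastic average gradient for min_x (1/m) sum_i f_i(x): keep a
    table of the last gradient seen for each f_i, refresh one random
    entry per iteration, and step along the average of the table."""
    rng = rng or np.random.default_rng()
    m = len(grads)
    x = x0.copy()
    table = np.zeros((m, x.size))        # stored per-function gradients
    avg = np.zeros(x.size)               # running average of the table
    for _ in range(n_iter):
        i = rng.integers(m)
        g_new = grads[i](x)
        avg += (g_new - table[i]) / m    # update the average in O(dim)
        table[i] = g_new
        x -= step * avg
    return x

# Toy usage: m quadratics f_i(x) = 0.5 * ||x - c_i||^2, whose average is
# minimized at the mean of the centers.
rng = np.random.default_rng(2)
centers = rng.standard_normal((10, 3))
grads = [lambda x, c=c: x - c for c in centers]
x_hat = sag(grads, np.zeros(3), step=0.1, n_iter=2000)
```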

Posted in Optimization | 2 Comments

ORF523: Noisy oracles

Today we start the third (and last) part of our class, titled ‘Optimization and Randomness’. An important setting that we will explore is the one of optimization with noisy oracles: Noisy $0^{th}$-order oracle: given $x$ it outputs $\tilde{f}(x)$ such that $\mathbb{E}[\tilde{f}(x)] = f(x)$. Noisy … Continue reading
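
As a small illustration of the noisy first-order setting, here is a minimal numpy sketch of stochastic gradient descent with iterate averaging under a noisy gradient oracle (the $1/\sqrt{t}$ step size and the toy quadratic are my choices):

```python
import numpy as np

def sgd_noisy_oracle(noisy_grad, x0, step, n_iter=1000):
    """Stochastic gradient descent with a noisy first-order oracle: at each
    point x the oracle returns g with E[g | x] a (sub)gradient of f at x.
    Averaging the iterates controls the variance of the noise."""
    x = x0.copy()
    x_sum = np.zeros_like(x)
    for t in range(1, n_iter + 1):
        x = x - (step / np.sqrt(t)) * noisy_grad(x)
        x_sum += x
    return x_sum / n_iter                # averaged iterate

# Toy usage: f(x) = 0.5 * ||x||^2 with additive Gaussian gradient noise.
rng = np.random.default_rng(3)
noisy_grad = lambda x: x + rng.standard_normal(x.shape)
x_hat = sgd_noisy_oracle(noisy_grad, np.ones(4), step=1.0, n_iter=5000)
```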

Posted in Optimization | 1 Comment