In this lecture we consider the same setting as in the previous post (that is, we want to minimize a smooth convex function over $\mathbb{R}^n$). Previously we saw that the plain Gradient Descent algorithm has a rate of convergence of order $1/t$ after $t$ steps, while the lower bound that we proved is of order $1/t^2$.
We present now a beautiful algorithm due to Nesterov, called Nesterov’s Accelerated Gradient Descent, which attains a rate of order $1/t^2$. First we define the following sequences:

$$\lambda_0 = 0, \qquad \lambda_s = \frac{1 + \sqrt{1 + 4 \lambda_{s-1}^2}}{2}, \qquad \gamma_s = \frac{1 - \lambda_s}{\lambda_{s+1}}.$$
(Note that $\gamma_s \leq 0$.) Now the algorithm is simply defined by the following equations, with an arbitrary initial point $x_1 = y_1$,

$$y_{s+1} = x_s - \frac{1}{\beta} \nabla f(x_s),$$

$$x_{s+1} = (1 - \gamma_s) y_{s+1} + \gamma_s y_s.$$
In other words, Nesterov’s Accelerated Gradient Descent performs a simple step of gradient descent to go from $x_s$ to $y_{s+1}$, and then it ‘slides’ a little bit further than $y_{s+1}$ in the direction given by the previous point $y_s$.
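To make the iterations concrete, here is a minimal NumPy sketch of the algorithm as defined above (the names `nesterov_agd` and `grad_f`, and the choice to return only $y_t$, are mine and not part of the lecture):

```python
import numpy as np

def nesterov_agd(grad_f, x1, beta, t):
    # Sketch of Nesterov's Accelerated Gradient Descent as defined above.
    # grad_f : gradient oracle for the beta-smooth convex function f
    # x1     : arbitrary starting point (x_1 = y_1)
    # beta   : smoothness parameter of f
    # t      : number of iterations; returns y_t
    x = np.asarray(x1, dtype=float)
    y = x.copy()
    lam = 1.0  # lambda_1 = (1 + sqrt(1 + 4*lambda_0^2)) / 2 = 1, since lambda_0 = 0
    for _ in range(t - 1):                                       # computes y_2, ..., y_t
        lam_next = (1.0 + np.sqrt(1.0 + 4.0 * lam ** 2)) / 2.0   # lambda_{s+1}
        gamma = (1.0 - lam) / lam_next                           # gamma_s <= 0
        y_next = x - grad_f(x) / beta                            # gradient step: y_{s+1}
        x = (1.0 - gamma) * y_next + gamma * y                   # 'slide' step: x_{s+1}
        y = y_next
        lam = lam_next
    return y
```

For instance, for the least-squares objective $f(x) = \frac{1}{2}\|Ax - b\|^2$ one may take `grad_f = lambda x: A.T @ (A @ x - b)` and `beta` equal to the largest eigenvalue of $A^\top A$.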
The intuition behind the algorithm is quite difficult to grasp, and unfortunately the analysis will not be very enlightening either. Nonetheless Nesterov’s Accelerated Gradient is an optimal method (in terms of oracle complexity) for smooth convex optimization, as shown by the following theorem.
Theorem (Nesterov 1983) Let $f$ be a convex and $\beta$-smooth function, then Nesterov’s Accelerated Gradient Descent satisfies

$$f(y_t) - f(x^*) \leq \frac{2 \beta \|x_1 - x^*\|^2}{t^2}.$$
We follow here the proof by Beck and Teboulle from the paper ‘A fast iterative shrinkage-thresholding algorithm for linear inverse problems’.
Proof: We start with the following observation, that makes use of Lemma 1 and Lemma 2 from the previous lecture: let $x, y \in \mathbb{R}^n$, then

$$f\left(x - \frac{1}{\beta} \nabla f(x)\right) - f(y) \leq \nabla f(x)^\top (x - y) - \frac{1}{2 \beta} \|\nabla f(x)\|^2.$$
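For the reader's convenience, here is one way this observation follows (a sketch, assuming Lemma 1 is the quadratic upper bound implied by $\beta$-smoothness and Lemma 2 is the first-order convexity inequality from the previous lecture):

```latex
% Assumed content of the two lemmas:
%   Lemma 1 (beta-smoothness): f(x - (1/beta) grad f(x)) - f(x) <= -(1/(2 beta)) ||grad f(x)||^2,
%   Lemma 2 (convexity):       f(x) - f(y) <= grad f(x)^T (x - y).
% Adding the two inequalities gives the observation:
\begin{align*}
f\Big(x - \tfrac{1}{\beta}\nabla f(x)\Big) - f(y)
  &= \Big[f\big(x - \tfrac{1}{\beta}\nabla f(x)\big) - f(x)\Big] + \big[f(x) - f(y)\big] \\
  &\leq -\tfrac{1}{2\beta}\|\nabla f(x)\|^2 + \nabla f(x)^\top (x - y).
\end{align*}
```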
Now let us apply this inequality to $x = x_s$ and $y = y_s$, which gives

$$f(y_{s+1}) - f(y_s) \leq \nabla f(x_s)^\top (x_s - y_s) - \frac{1}{2 \beta} \|\nabla f(x_s)\|^2. \qquad (1)$$
Similarly we apply it to $x = x_s$ and $y = x^*$ which gives

$$f(y_{s+1}) - f(x^*) \leq \nabla f(x_s)^\top (x_s - x^*) - \frac{1}{2 \beta} \|\nabla f(x_s)\|^2. \qquad (2)$$
Now multiplying (1) by $(\lambda_s - 1)$ and adding the result to (2), one obtains with $\delta_s = f(y_s) - f(x^*)$,

$$\lambda_s \delta_{s+1} - (\lambda_s - 1) \delta_s \leq \nabla f(x_s)^\top \big( \lambda_s x_s - (\lambda_s - 1) y_s - x^* \big) - \frac{\lambda_s}{2 \beta} \|\nabla f(x_s)\|^2.$$
Multiplying this inequality by $\lambda_s$ and using that by definition $\lambda_{s-1}^2 = \lambda_s^2 - \lambda_s$ one obtains

$$\lambda_s^2 \delta_{s+1} - \lambda_{s-1}^2 \delta_s \leq \frac{\beta}{2} \left( 2 \frac{\lambda_s}{\beta} \nabla f(x_s)^\top \big( \lambda_s x_s - (\lambda_s - 1) y_s - x^* \big) - \left\| \frac{\lambda_s}{\beta} \nabla f(x_s) \right\|^2 \right). \qquad (3)$$
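Here is a short verification of the relation $\lambda_{s-1}^2 = \lambda_s^2 - \lambda_s$ used above (my computation, directly from the definition of the sequence):

```latex
% From the definition lambda_s = (1 + sqrt(1 + 4 lambda_{s-1}^2)) / 2:
\begin{align*}
2 \lambda_s - 1 = \sqrt{1 + 4 \lambda_{s-1}^2}
\;\Longrightarrow\; 4 \lambda_s^2 - 4 \lambda_s + 1 = 1 + 4 \lambda_{s-1}^2
\;\Longrightarrow\; \lambda_s^2 - \lambda_s = \lambda_{s-1}^2.
\end{align*}
```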
Now one can verify that

$$2 \frac{\lambda_s}{\beta} \nabla f(x_s)^\top \big( \lambda_s x_s - (\lambda_s - 1) y_s - x^* \big) - \left\| \frac{\lambda_s}{\beta} \nabla f(x_s) \right\|^2 = \| \lambda_s x_s - (\lambda_s - 1) y_s - x^* \|^2 - \| \lambda_s y_{s+1} - (\lambda_s - 1) y_s - x^* \|^2. \qquad (4)$$
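The verification is elementary (a sketch of the computation, which is not spelled out in the lecture): apply the identity $2 a^\top b - \|a\|^2 = \|b\|^2 - \|b - a\|^2$ with $a = \frac{\lambda_s}{\beta} \nabla f(x_s)$ and $b = \lambda_s x_s - (\lambda_s - 1) y_s - x^*$, and then rewrite $b - a$ using the gradient step:

```latex
% With a = (lambda_s/beta) grad f(x_s) and b = lambda_s x_s - (lambda_s - 1) y_s - x^*,
% the left-hand side of (4) equals 2 a^T b - ||a||^2 = ||b||^2 - ||b - a||^2, and
\begin{align*}
b - a = \lambda_s \Big( x_s - \tfrac{1}{\beta} \nabla f(x_s) \Big) - (\lambda_s - 1) y_s - x^*
      = \lambda_s y_{s+1} - (\lambda_s - 1) y_s - x^*.
\end{align*}
```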
Next remark that, by definition, one has

$$x_{s+1} = y_{s+1} + \gamma_s (y_s - y_{s+1})$$

$$\Leftrightarrow \quad \lambda_{s+1} x_{s+1} = \lambda_{s+1} y_{s+1} + (1 - \lambda_s)(y_s - y_{s+1})$$

$$\Leftrightarrow \quad \lambda_{s+1} x_{s+1} - (\lambda_{s+1} - 1) y_{s+1} = \lambda_s y_{s+1} - (\lambda_s - 1) y_s. \qquad (5)$$
Putting together (3), (4) and (5) one gets with $u_s = \lambda_s x_s - (\lambda_s - 1) y_s - x^*$,

$$\lambda_s^2 \delta_{s+1} - \lambda_{s-1}^2 \delta_s \leq \frac{\beta}{2} \left( \|u_s\|^2 - \|u_{s+1}\|^2 \right).$$
Summing these inequalities from $s = 1$ to $s = t-1$ one obtains (both sides telescope; note that $\lambda_0 = 0$ and, since $\lambda_1 = 1$ and $x_1 = y_1$, one has $u_1 = x_1 - x^*$):

$$\delta_t \leq \frac{\beta \|x_1 - x^*\|^2}{2 \lambda_{t-1}^2}.$$
By induction it is easy to see that $\lambda_{t-1} \geq \frac{t}{2}$, which concludes the proof.
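For completeness, here is a sketch of that induction (my addition): the base case is $\lambda_1 = 1 \geq \frac{2}{2}$, and each step increases $\lambda$ by at least $\frac{1}{2}$ since $\sqrt{1 + 4 \lambda_{s-1}^2} \geq 2 \lambda_{s-1}$.

```latex
% Inductive step, assuming lambda_{s-1} >= s/2:
\begin{align*}
\lambda_s = \frac{1 + \sqrt{1 + 4 \lambda_{s-1}^2}}{2}
  \;\geq\; \frac{1 + 2 \lambda_{s-1}}{2}
  \;=\; \lambda_{s-1} + \frac{1}{2}
  \;\geq\; \frac{s+1}{2}.
\end{align*}
% Taking s = t - 1 gives lambda_{t-1} >= t/2, hence
% delta_t <= beta ||x_1 - x^*||^2 / (2 lambda_{t-1}^2) <= 2 beta ||x_1 - x^*||^2 / t^2.
```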