In the following table we summarize our findings from previous lectures in terms of oracle convergence rates.
Note that in the last two lines the upper and lower bounds do not match. In both cases one can show that the lower bound is in fact tight (using respectively Nesterov's Accelerated Gradient Descent and the Center of Gravity Method).
Thus in terms of oracle complexity the picture is now complete. However this is not quite the end of the story and the algorithms can be improved in various ways:
- The projection step in Projected Subgradient Descent can be expensive in terms of computational complexity. We will see an algorithm that avoids this projection, called Conditional Gradient Descent.
- The black-box model is essentially an abstraction (note that this is true for $1^{\mathrm{st}}$-order oracles, but the situation is different for $0^{\mathrm{th}}$-order oracles). In practice one often knows the function to be optimized entirely. In that context it seems very wasteful to drop all this information and focus on black-box procedures. However, by doing so we were able to derive linear-time algorithms, while the 'structural' Interior Point Methods (which use the form of the function to be optimized by deriving an appropriate self-concordant barrier) are not linear time. We will see one example of an algorithm that makes partial use of the structural information on the function to be optimized, while still being essentially a gradient-based procedure with linear time complexity. More specifically we will study the FISTA algorithm, which can optimize functions of the form $f + g$, where $g$ is known and $f$ can be accessed with a $1^{\mathrm{st}}$-order oracle. Another very interesting idea in that direction is Nesterov's smoothing, but unfortunately we will not have time to cover it.
- The rates of convergence we proved did not explicitly involve the dimension $n$. However, in specific instances some parameters may involve a dependency on the dimension. For instance consider the case where the function $f$ is such that the magnitude of its subgradients is controlled coordinate-wise, say $\|g\|_{\infty} \leq 1$ for any subgradient $g$. In that case the best one can say is that $\|g\|_2 \leq \sqrt{n}$, which leads to a rate of convergence for Projected Subgradient Descent of order $\sqrt{n/t}$ (see the small numerical check after this list). We will see that this situation can be dramatically improved and that one can obtain in this setting a rate of order $\sqrt{\log(n)/t}$. More generally we will describe the Mirror Descent algorithm, which is adapted to convex functions that are $L$-Lipschitz in an arbitrary norm $\|\cdot\|$ (not necessarily the Euclidean norm $\|\cdot\|_2$).
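Here is that small numerical check (mine, not part of the original notes): a vector whose coordinates are all of magnitude $1$ has $\ell_\infty$-norm $1$ but $\ell_2$-norm $\sqrt{n}$, which is exactly the gap between the two rates mentioned above. The numbers used below are arbitrary.

```python
# Small numerical check (illustration only): a vector with coordinates in
# {-1, +1} has l-infinity norm 1 but l-2 norm sqrt(n), which is the source of
# the sqrt(n) factor in the Projected Subgradient Descent rate mentioned above.
import numpy as np

n, t = 10_000, 100
rng = np.random.default_rng(0)
g = np.where(rng.standard_normal(n) >= 0, 1.0, -1.0)  # coordinates in {-1, +1}
print(np.max(np.abs(g)), np.linalg.norm(g))            # 1.0 and sqrt(n) = 100.0
print(np.sqrt(n / t), np.sqrt(np.log(n) / t))          # sqrt(n/t) vs sqrt(log(n)/t)
```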
We start this program with a projection-free algorithm for black-box constrained optimization.
Conditional Gradient Descent (a.k.a. Frank-Wolfe algorithm)
Conditional Gradient Descent is an algorithm designed to minimize a smooth convex function $f$ over a compact convex set $\mathcal{X}$. It performs the following update for $t \geq 1$, where $(\gamma_t)_{t \geq 1}$ is a sequence of step sizes in $[0,1]$:

$$y_t \in \operatorname*{argmin}_{y \in \mathcal{X}} \langle \nabla f(x_t), y \rangle, \qquad x_{t+1} = (1 - \gamma_t) x_t + \gamma_t y_t .$$
This algorithm goes back to Frank and Wolfe (1956), and the following result is extracted from Jaggi (2013). We consider here functions which are $\beta$-smooth in some arbitrary norm $\|\cdot\|$, that is

$$\|\nabla f(x) - \nabla f(y)\|_* \leq \beta \|x - y\|, \quad \forall\, x, y \in \mathcal{X},$$

where the dual norm is defined as $\|g\|_* = \sup_{x : \|x\| \leq 1} \langle g, x \rangle$.
Theorem. Let $f$ be a $\beta$-smooth function w.r.t. some norm $\|\cdot\|$, $R = \sup_{x, y \in \mathcal{X}} \|x - y\|$, and $\gamma_s = \frac{2}{s+1}$ for $s \geq 1$. Then for any $t \geq 2$, one has

$$f(x_t) - f(x^*) \leq \frac{2 \beta R^2}{t+1} .$$
Proof: The following sequence of inequalities holds true, using respectively $\beta$-smoothness (which implies $f(x) - f(y) \leq \langle \nabla f(y), x - y \rangle + \frac{\beta}{2} \|x - y\|^2$), the definition of $x_{s+1}$, the definition of $y_s$, and the convexity of $f$:

$$\begin{aligned}
f(x_{s+1}) - f(x^*) & \leq f(x_s) - f(x^*) + \gamma_s \langle \nabla f(x_s), y_s - x_s \rangle + \frac{\beta}{2} \gamma_s^2 \|y_s - x_s\|^2 \\
& \leq f(x_s) - f(x^*) + \gamma_s \langle \nabla f(x_s), x^* - x_s \rangle + \frac{\beta}{2} \gamma_s^2 R^2 \\
& \leq (1 - \gamma_s) \big( f(x_s) - f(x^*) \big) + \frac{\beta}{2} \gamma_s^2 R^2 .
\end{aligned}$$

Rewriting this inequality in terms of $\delta_s = f(x_s) - f(x^*)$ one obtains

$$\delta_{s+1} \leq (1 - \gamma_s) \delta_s + \frac{\beta}{2} \gamma_s^2 R^2 .$$

A simple induction using that $\gamma_s = \frac{2}{s+1}$ finishes the proof (note that the initialization is done at step $2$ with the above inequality yielding $\delta_2 \leq \frac{\beta R^2}{2}$).
The rate of convergence we just proved for Conditional Gradient Descent is no better than the rate of Projected Gradient Descent in the same setting. However, in contrast to the latter algorithm, where a quadratic programming problem over $\mathcal{X}$ has to be solved (the projection step), here we only need to solve a linear problem over $\mathcal{X}$, and in some cases this can be much simpler.
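To make the update concrete, here is a minimal Python sketch of Conditional Gradient Descent for the special case where $\mathcal{X}$ is the simplex, so that the linear minimization step amounts to picking the coordinate with the smallest gradient entry. The quadratic objective, problem sizes and function names are illustrative choices of mine, not part of the original lecture.

```python
# Minimal sketch of Conditional Gradient Descent (Frank-Wolfe) over the
# simplex {x >= 0, sum(x) = 1}, with the step sizes gamma_t = 2/(t+1) used
# in the theorem above.  The quadratic objective is only for illustration.
import numpy as np

def frank_wolfe_simplex(grad_f, n, T):
    x = np.ones(n) / n                       # start at the barycenter
    for t in range(1, T + 1):
        g = grad_f(x)
        i = np.argmin(g)                     # linear minimization over the simplex:
        y = np.zeros(n)                      # the best point is a vertex e_i
        y[i] = 1.0
        gamma = 2.0 / (t + 1)
        x = (1 - gamma) * x + gamma * y      # convex combination update
    return x

# Illustration: f(x) = ||A x - b||_2^2, so grad f(x) = 2 A^T (A x - b).
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 50)), rng.standard_normal(20)
x_T = frank_wolfe_simplex(lambda x: 2 * A.T @ (A @ x - b), n=50, T=200)
print(x_T.sum(), np.count_nonzero(x_T), np.linalg.norm(A @ x_T - b) ** 2)
```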
While being projection-free is an important property of Conditional Gradient Descent, a perhaps even more important property is that it produces sparse iterates, in the following sense. Assume that $\mathcal{X}$ is a polytope with sparse vertices, that is, the number of non-zero coordinates (which we denote by the $\ell_0$-norm $\|\cdot\|_0$, which is not a norm despite the name) of any fixed vertex is small, say of order $s$. Then at each iteration the new iterate $x_{t+1}$ will increase its number of non-zero coordinates by at most $s$. In particular one has $\|x_t\|_0 \leq \|x_1\|_0 + s (t-1)$. We will now see an example of application where this property is critical to obtain a computationally tractable algorithm.
Note that the property described above proves in particular that for a smooth function $f$ on the simplex $\Delta_n = \{x \in \mathbb{R}_+^n : \sum_{i=1}^n x(i) = 1\}$ there always exist sparse approximate solutions. More precisely there must exist a point $x$ with only $t$ non-zero coordinates and such that $f(x) - f(x^*) = O(1/t)$. Clearly this is the best one can hope for in general, as can be seen with the function $f(x) = \|x\|_2^2$ (since by Cauchy-Schwarz one has $\|x\|_1 \leq \sqrt{\|x\|_0} \, \|x\|_2$, which implies on the simplex $\|x\|_2^2 \geq 1 / \|x\|_0$).
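As a small numerical illustration of this sparsity phenomenon (my own check, not part of the original text), one can run the Frank-Wolfe update on $f(x) = \|x\|_2^2$ over the simplex starting from a vertex: after $t$ steps the iterate has at most $t$ non-zero coordinates while its value is within $O(1/t)$ of the optimum $1/n$.

```python
# Illustration (not from the original text): sparse approximate minimizers of
# f(x) = ||x||_2^2 on the simplex.  Starting from a vertex, the iterate x_t
# has at most t non-zero coordinates while f(x_t) - 1/n decays like O(1/t).
import numpy as np

n, T = 1000, 50
x = np.zeros(n)
x[0] = 1.0                                   # start at a vertex of the simplex
for t in range(1, T + 1):
    g = 2 * x                                # gradient of ||x||_2^2
    i = np.argmin(g)                         # linear minimization: vertex e_i
    gamma = 2.0 / (t + 1)
    x *= 1 - gamma
    x[i] += gamma
    print(t, np.count_nonzero(x), x @ x - 1.0 / n)   # sparsity and optimality gap
```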
An application of Conditional Gradient Descent: Least-squares regression with structured sparsity
This section is inspired by an open problem from this paper by Gabor Lugosi (what is described below solves the open problem). Consider the problem of approximating a signal $y \in \mathbb{R}^n$ by a 'small' combination of elementary dictionary elements $d_1, \ldots, d_N \in \mathbb{R}^n$. One way to do this is to consider a LASSO type problem in dimension $N$ of the following form (we denote here $x(i)$ for the $i^{\mathrm{th}}$ coordinate of a vector $x$, and $\lambda > 0$ is a fixed regularization parameter):

$$\min_{x \in \mathbb{R}^N} \Big\| y - \sum_{i=1}^N x(i) d_i \Big\|_2^2 + \lambda \|x\|_1 .$$
Let $D \in \mathbb{R}^{n \times N}$ be the dictionary matrix with $i^{\mathrm{th}}$ column given by $d_i$. Instead of considering the penalized version of the problem one could look at the following constrained problem (with $s > 0$ fixed), on which we will focus now:

$$\min_{x \in \mathbb{R}^N : \|x\|_1 \leq s} \|y - D x\|_2^2 .$$
We will now make some assumptions on the dictionary. We are interested in cases where the size $N$ of the dictionary can be very large, potentially exponential in the ambient dimension $n$. Nonetheless we want to restrict our attention to algorithms that run in reasonable time with respect to the ambient dimension $n$, that is we want polynomial time algorithms in $n$. Of course in general this is impossible, and we need to assume that the dictionary has some structure that can be exploited. Here we make the assumption that one can do linear optimization over the dictionary in polynomial time in $n$. More precisely we assume that we can solve in time $p(n)$ (where $p$ is a polynomial) the following problem for any $y \in \mathbb{R}^n$:

$$\min_{1 \leq i \leq N} y^{\top} d_i .$$
This assumption is met for many combinatorial dictionaries. For instance the dictionary elements could be the incidence vectors of spanning trees of some fixed graph, in which case the linear optimization problem can be solved with a greedy algorithm (as sketched below).
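For concreteness, here is a hedged sketch of such a linear optimization oracle in the spanning tree case: minimizing $y^{\top} d_i$ over all spanning tree incidence vectors is exactly a minimum weight spanning tree problem with edge weights given by $y$, which the greedy (Kruskal) algorithm solves. The graph encoding and function names are illustrative choices of mine.

```python
# Hedged sketch (names and graph format are illustrative): linear optimization
# over the dictionary of spanning-tree incidence vectors.  Minimizing <y, d_i>
# over all spanning trees is a minimum spanning tree problem with edge weights
# y, solved greedily (Kruskal with union-find).
def min_spanning_tree_oracle(num_vertices, edges, y):
    """edges: list of (u, v); y: list of weights, one per edge.
    Returns the 0/1 incidence vector of a minimum weight spanning tree."""
    parent = list(range(num_vertices))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]    # path halving
            a = parent[a]
        return a

    incidence = [0] * len(edges)
    for e in sorted(range(len(edges)), key=lambda e: y[e]):   # greedy scan
        ru, rv = find(edges[e][0]), find(edges[e][1])
        if ru != rv:                         # edge joins two components
            parent[ru] = rv
            incidence[e] = 1
    return incidence

# Tiny usage example on a 4-cycle with a chord.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(min_spanning_tree_oracle(4, edges, y=[1.0, 2.0, 0.5, 3.0, -1.0]))
```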
Finally we assume that the $\ell_2$-norm of the dictionary elements is controlled by some $m > 0$, that is $\|d_i\|_2 \leq m$ for all $i \in [N]$.
Recall that one wants to minimize the function $f(x) = \|y - D x\|_2^2$ on the $\ell_1$-ball of radius $s$ in $\mathbb{R}^N$ in polynomial time in $n$. Note that at first sight this task may seem completely impossible, as we are not even allowed to write down entirely a vector $x \in \mathbb{R}^N$ (since this would take time linear in $N$). The key property that will save us is that this function admits sparse approximate minimizers, as we discussed in the previous section, and this will be exploited by the Conditional Gradient Descent. First note that

$$\nabla f(x) = 2 D^{\top} (D x - y) .$$
This form of the gradient, together with our computational assumption on the dictionary, implies that Conditional Gradient Descent can be run in polynomial time. Indeed, assume that $z_t = D x_t - y \in \mathbb{R}^n$ is already computed; then the first step of Conditional Gradient Descent is to find the coordinate $i_t \in [N]$ that maximizes $|[\nabla f(x_t)](i)|$, which can be done by maximizing $d_i^{\top} z_t$ and $- d_i^{\top} z_t$. Thus this first step takes time $O(p(n))$. Computing $x_{t+1}$ from $x_t$ and $i_t$ takes time $O(t)$ since $\|x_t\|_0 \leq t$, and computing $z_{t+1}$ from $z_t$ and $d_{i_t}$ takes time $O(n)$. Thus the overall time complexity of $t$ steps is (assuming $p(n) = \Omega(n)$)

$$O\big(t \, p(n) + t^2\big) .$$
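To make the bookkeeping explicit, here is a hedged Python sketch of these steps; the oracle interface (returning both an index and the corresponding dictionary column) and all names are illustrative assumptions of mine, while the sparse representation of $x_t$ and the maintenance of $z_t = D x_t - y$ follow the accounting above.

```python
# Hedged sketch (oracle interface and names are illustrative): Conditional
# Gradient Descent for min_{||x||_1 <= s} ||y - D x||_2^2, where the columns
# of D are only accessible through a linear optimization oracle.
import numpy as np

def cgd_dictionary_least_squares(y, oracle, s, T):
    """oracle(w) returns (i, d_i) with d_i minimizing <w, d_i> over the
    dictionary.  The iterate is kept as a sparse dict {index: coefficient}
    and z = D x - y is maintained explicitly, so each step costs
    O(p(n) + t + n)."""
    x = {}                                    # sparse iterate, x_1 = 0
    z = -y.astype(float)                      # z_1 = D x_1 - y = -y
    for t in range(1, T + 1):
        i_neg, d_neg = oracle(z)              # minimizes d_i^T z
        i_pos, d_pos = oracle(-z)             # maximizes d_i^T z
        # coordinate with the largest |gradient|, since grad f(x)(i) = 2 d_i^T z
        if abs(d_neg @ z) >= abs(d_pos @ z):
            i_star, d_star = i_neg, d_neg
        else:
            i_star, d_star = i_pos, d_pos
        sign = -1.0 if d_star @ z > 0 else 1.0   # Frank-Wolfe vertex s*sign*e_i
        gamma = 2.0 / (t + 1)
        for j in x:                           # O(t) rescaling of the sparse iterate
            x[j] *= 1 - gamma
        x[i_star] = x.get(i_star, 0.0) + gamma * s * sign
        z = (1 - gamma) * z + gamma * (s * sign * d_star - y)   # O(n) update
    return x
```

For instance, `oracle` could wrap the spanning tree routine sketched earlier, returning the incidence vector it computes together with an identifier for the tree.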
To derive a rate of convergence it remains to study the smoothness of $f$. This can be done as follows:

$$\|\nabla f(x) - \nabla f(x')\|_{\infty} = 2 \|D^{\top} D (x - x')\|_{\infty} = 2 \max_{1 \leq i \leq N} \Big| d_i^{\top} \sum_{j=1}^N d_j \big( x(j) - x'(j) \big) \Big| \leq 2 m^2 \|x - x'\|_1 ,$$

which means that $f$ is $2 m^2$-smooth with respect to the $\ell_1$-norm. Thus we get the following rate of convergence:

$$f(x_t) - f(x^*) \leq \frac{16 m^2 s^2}{t+1} .$$
In other words we proved that one can get an $\epsilon$-optimal solution to our original problem with a computational effort of

$$O\Big( \frac{m^2 s^2 p(n)}{\epsilon} + \frac{m^4 s^4}{\epsilon^2} \Big)$$

using Conditional Gradient Descent.