Embeddings of finite metric spaces in Hilbert space

In this post we discuss the following notion: Let (X,d_X) and (Y,d_Y) be two metric spaces. One says that X embeds into Y with distortion D \geq 1 if there exist \lambda >0 and f : X \rightarrow Y such that for any x,y \in X,

    \[1 \leq \frac{d_Y(f(x), f(y))}{\lambda \ d_X(x,y)} \leq D .\]

We write this as X \overset{D}\hookrightarrow Y.

Note that if Y is a normed vector space (over the reals) then the scaling parameter \lambda in the above definition is unnecessary since one can always perform this scaling with the embedding f (indeed in this case d_Y(\lambda f(x), \lambda f(y)) = \lambda d_Y(f(x), f(y))).

Informally if X embeds into Y with distortion D then up to an ‘error’ of order D one can view X (once mapped with the embedding f) as a subset of Y. This is particularly useful when Y has a specific known structure that we may want to exploit, such as Y = \ell_p.

The case p=2 is of particular interest since then one embeds into a Hilbert space which comes with a lot of ‘structure’. Thus we mainly focus on embeddings in \ell_2, and we also focus on (X,d) being a finite metric space with n = |X|.

First observe that the case p=\infty (embeddings in \ell_{\infty}) is trivial:

Proposition X \overset{1}\hookrightarrow \ell_{\infty}^n

Proof: Consider the embedding f(x) = (d(x,z))_{z \in X} \in \mathbb{R}^n. It is obvious that

    \[\|(d(x,z) - d(y,z))_{z \in X}\|_{\infty} \geq d(x,y) ,\]

indeed the coordinate z = y already gives |d(x,y) - d(y,y)| = d(x,y), and the reverse inequality follows from the triangle inequality for d. Thus one has \|f(x) - f(y)\|_{\infty} = d(x,y) which concludes the proof.

\Box
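To make the construction concrete, here is a minimal numerical sketch of the embedding f(x) = (d(x,z))_{z \in X} above (the 4-point metric used in the next paragraph serves as the illustration):

```python
import numpy as np

def linf_embedding(D):
    """Given the n x n distance matrix D of a finite metric space,
    return the Frechet embedding f(x) = (d(x, z))_{z in X} in R^n.
    Row i of the returned matrix is f(x_i)."""
    return np.array(D, dtype=float)  # f(x_i) is simply the i-th row of D

# Sanity check on a 4-point metric (the tree r, x, y, z discussed below):
# d(r, leaf) = 1 and the three leaves are at pairwise distance 2.
D = np.array([[0, 1, 1, 1],
              [1, 0, 2, 2],
              [1, 2, 0, 2],
              [1, 2, 2, 0]])
F = linf_embedding(D)
for i in range(4):
    for j in range(4):
        # The proposition asserts ||f(x_i) - f(x_j)||_inf = d(x_i, x_j).
        assert np.isclose(np.abs(F[i] - F[j]).max(), D[i, j])
```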

Could it be that the situation is that simple for p=2? It seems very unlikely because \ell_2 is a much more structured space than an arbitrary metric space. Consider for example X=\{r,x,y,z\} with d(r,x) = d(r,y) = d(r,z) = 1 and d(x,y) = d(x,z) = d(z,y) = 2 (that is X is a tree with root r and three leaves x,y,z). It is easy to see that it is impossible to embed X into \ell_2 with distortion 1. Indeed f(x), f(y) and f(z) would lie on a Euclidean sphere of radius 1 centered at f(r) and be at pairwise distance 2, so each pair would have to be antipodal, which is impossible for 3 distinct points.

So what can we say in general about embeddings into \ell_2? The following beautiful result gives the answer:

Theorem (Bourgain 1985) X \overset{O(\log n)}\hookrightarrow \ell_{2}

It can be shown that this result is unimprovable in general. However for specific classes of finite metric spaces one can hope for some improvement and we refer to the survey by Linial for more details on this.

 

Embeddings in \ell_2 with small distortion

While Bourgain’s theorem is a very strong statement in the sense that it applies to any finite metric space, it is also quite weak in the sense that the distortion is large, of order \log n. For some applications it is desirable to have a bounded distortion that does not grow with the size of X, or even a distortion of order 1+\epsilon with small \epsilon. As we said previously the only way to obtain such a statement is to make further assumptions on X. We will not take this route in this post but let me just mention the Johnson-Lindenstrauss lemma which should be well-known to machine learners and the like:

Lemma (Johnson-Lindenstrauss 1984) Let X \subset \ell_2 then X\overset{1+\epsilon}\hookrightarrow \ell_{2}^{O(\log(n) / \epsilon^2)}.
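One standard way to realize such an embedding is via a random linear map; here is a minimal sketch with a Gaussian random projection (the target dimension k is left as a free parameter, to be taken of order \log(n) / \epsilon^2, and the data are placeholders):

```python
import numpy as np

def gaussian_projection(X, k, rng=np.random.default_rng(0)):
    """Project the rows of X (n points in R^d) to R^k with a random Gaussian
    matrix scaled by 1/sqrt(k); with k of order log(n)/eps^2 all pairwise
    distances are preserved up to a factor 1+eps with high probability."""
    n, d = X.shape
    G = rng.normal(size=(d, k)) / np.sqrt(k)
    return X @ G

# Example: 100 points in R^1000 projected to R^50.
X = np.random.default_rng(1).normal(size=(100, 1000))
Y = gaussian_projection(X, k=50)
```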

Now what can we say in general about embeddings with small distortion into \ell_2? There is a well-known result in convex geometry with the same flavor as this question, namely Dvoretzky’s theorem: any symmetric convex body in \mathbb{R}^n has a section of dimension c(\epsilon) \log n which is Euclidean up to an error of order \epsilon. The analogue in the theory of metric embeddings is called the non-linear Dvoretzky theorem and it goes as follows:

Theorem (Bourgain–Figiel–Milman 1986) Let \epsilon > 0. There exists Y \subset X with |Y| \geq c(\epsilon) \log n and Y\overset{1+\epsilon}\hookrightarrow \ell_{2}.

Again this theorem has a weakness: while the distortion is small it also only applies to a small subset of the original metric space (of size O(\log n)). In some sense we now have two extreme results: Bourgain’s theorem, which applies to the entire metric space but with a large distortion, and Bourgain–Figiel–Milman, which applies to a small subset but with a small distortion. A natural question at this point is to try to ‘interpolate’ between these two results. Such a statement was first obtained by Bartal, Linial, Mendel and Naor (2002) and later improved in Mendel and Naor (2006):

Theorem (Mendel and Naor 2006) Let \epsilon > 0. There exists Y \subset X with |Y| \geq n^{1-\epsilon} and Y\overset{O(1/\epsilon)}\hookrightarrow \ell_{2}.

 

In future posts I will try to prove some of these theorems. This might take a while as in the coming weeks I will be traveling to various conferences (including COLT 2013 and ICML 2013) and I will try to blog about some of the stuff that I learn there.

Posted in Theoretical Computer Science | Leave a comment

The aftermath of ORF523 and the final

It has been two weeks since my last post on the blog, and I have to admit that it felt really good to take a break from my 2-posts-per-week regime of the previous months. Now that ORF523 is over the blog will probably be much more quiet in the near future, though I have a few cool topics that I want to discuss in the coming weeks and I’m also hoping to get interesting guest posts. In any case the next period of intense activity will probably be next Fall, as I will be visiting the newly created Simons Institute for the Theory of Computing at UC Berkeley and I plan to blog about the new tricks that I will learn from the program on the Theoretical Foundations of Big Data Analysis and the one on Real Analysis in Computer Science. More details on this to come soon!

For those of you who followed my lectures assiduously, I thought that you might want to take a look at the final that I gave to my students this morning. The first part is based on this paper by Alon and Naor, and the second part is based on this paper by Nesterov (how could I not finish with a result from Nesterov??). Enjoy!

 

An SDP relaxation based on Grothendieck’s inequality

In this problem we consider the space of rectangular matrices \mathbb{R}^{m \times n} for m, n \geq 1. We denote by \langle \cdot, \cdot \rangle the Frobenius inner product on \mathbb{R}^{m \times n}, that is

    \[\langle A, B \rangle = \sum_{i=1}^m \sum_{j=1}^n A_{i,j} B_{i,j} .\]

Let p, p' \in \mathbb{R}_{++} \cup \{\infty\}. We consider the following norm on \mathbb{R}^{m \times n}:

    \[\|A\|_{p \rightarrow p'} = \max_{x \in \mathbb{R}^n : \|x\|_p \leq 1} \|A x\|_{p'} .\]

In other words this is the operator norm of A when we view it as a mapping from \ell_p^n to \ell_{p'}^m. In this problem we are interested in computing this operator norm for a given matrix.

1. Is the problem of computing the operator norm of A a convex problem?

2. For what values of p and p' can you easily propose a polynomial-time algorithm?

3. Let q' be such that 1/p' + 1/q' =1. Recall why one has

    \[\|A\|_{p \rightarrow p'} = \max_{x \in \mathbb{R}^n : \|x\|_p \leq 1, y \in \mathbb{R}^m : \|y\|_{q'} \leq 1} \langle A , y x^{\top} \rangle .\]

We will now focus on the case p=\infty and p'=1, that is

    \[\|A\|_{\infty \rightarrow 1} = \max_{x \in \mathbb{R}^n : \|x\|_{\infty} \leq 1, y \in \mathbb{R}^m : \|y\|_{\infty} \leq 1} \langle A , y x^{\top} \rangle.\]

We also consider the following quantity:

    \begin{eqnarray*} \text{SDP}(A) = & \max & \langle A , V U^{\top} \rangle \\ & \text{subject to} & U \in \mathbb{R}^{n \times (n+m)}, V \in \mathbb{R}^{m \times (n+m)} \\ & & \text{with unit norm rows (in} \; \ell_2). \end{eqnarray*}

4. Show that

    \[\|A\|_{\infty \rightarrow 1} \leq \text{SDP}(A) .\]

5. Show that the problem of computing \text{SDP}(A) for a given matrix A is an SDP.

Grothendieck’s inequality says that there exists a constant K_G \in [1.67, 1.79] such that

    \[\text{SDP}(A) \leq K_G \|A\|_{\infty \rightarrow 1} .\]

6. What is the implication of Grothendieck’s inequality for the problem of computing the norm \|A\|_{\infty \rightarrow 1}?
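As an aside, here is a minimal sketch of \text{SDP}(A) written as a semidefinite program over the Gram matrix of the rows of U and V, assuming cvxpy with an SDP-capable solver is available (the small matrix A is just a placeholder):

```python
import numpy as np
import cvxpy as cp

def sdp_relaxation(A):
    """SDP(A) for the infinity->1 norm: optimize over the Gram matrix G of the
    rows of U and V (unit-norm rows become a unit diagonal, and G >> 0)."""
    m, n = A.shape
    G = cp.Variable((n + m, n + m), PSD=True)
    # Objective <A, V U^T> = sum_ij A_ij <v_i, u_j>, with the u's indexed
    # 0..n-1 and the v's indexed n..n+m-1 in the Gram matrix.
    obj = cp.sum(cp.multiply(A, G[n:, :n]))
    prob = cp.Problem(cp.Maximize(obj), [cp.diag(G) == 1])
    prob.solve()
    return prob.value

A = np.array([[1.0, -2.0], [0.5, 1.0]])
print(sdp_relaxation(A))  # upper bounds ||A||_{inf -> 1}, within a factor K_G
```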

 

Coordinate Descent

Let f be a differentiable convex function on \mathbb{R}^n that admits a minimizer x^*. We denote by \partial_i f(x) the partial derivative of f at x in direction e_i (it is the i^{th} coordinate of \nabla f(x)). In this problem we consider the Coordinate Descent algorithm to solve the problem \min_{x \in \mathbb{R}^n} f(x): Let x_1 \in \mathbb{R}^n and for t \geq 1 let

    \begin{eqnarray*} i_t & = & \mathrm{argmax}_{1 \leq i \leq n} | \partial_i f(x_t) | , \\ x_{t+1} & = & x_t - \eta \ \partial_{i_t} f (x_t) e_{i_t} . \end{eqnarray*}

We say that f is coordinate-wise \beta-smooth if for any x \in \mathbb{R}^n, h \in \mathbb{R}, i \in [n],

    \[|\partial_i f(x+h e_i) - \partial_i f(x) | \leq \beta |h| .\]
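As an aside, a minimal numpy sketch of the update rule above on a toy quadratic (the objective and the constants are placeholders, not part of the problem):

```python
import numpy as np

def greedy_coordinate_descent(grad_f, x1, beta, T):
    """Coordinate Descent as defined above: at each step pick the coordinate
    with the largest partial derivative (in absolute value) and move along it
    with step size eta = 1/beta."""
    x = x1.astype(float).copy()
    for _ in range(T):
        g = grad_f(x)
        i = int(np.argmax(np.abs(g)))   # i_t
        x[i] -= g[i] / beta             # x_{t+1} = x_t - (1/beta) d_i f(x_t) e_i
    return x

# Toy example: f(x) = 1/2 x^T Q x, for which the coordinate-wise smoothness
# constant is max_i Q_ii = 2.
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
grad_f = lambda x: Q @ x
print(greedy_coordinate_descent(grad_f, np.array([1.0, -1.0]), beta=2.0, T=100))
```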

1. Is coordinate-wise smoothness a stronger or weaker assumption than standard smoothness?

2. Assume that f is coordinate-wise \beta-smooth and let \eta = \frac{1}{\beta}. Show first that

    \[f(x_{t+1}) - f(x_t) \leq - \frac{1}{2 \beta n} \| \nabla f(x_t) \|^2_2 .\]

Let R = \max \{\|x - x^*\|_2 : x \in \mathbb{R}^n \; \text{and} \; f(x) \leq f(x_1) \}, show next that

    \[f(x_{t+1}) - f(x_t) \leq - \frac{1}{2 \beta R^2 n} (f(x_t) - f(x^*))^2 .\]

Conclude by proving that

    \[f(x_t) - f(x^*) \leq \frac{2 \beta R^2 n}{t+3} .\]

Hint: to deal with the term f(x_1) - f(x^*) you can first show that f has to be (\beta n)-smooth.

3. Can you use this algorithm and the convergence result described above to show existence of sparse minimizers in the spirit of what we did with the Conditional Gradient Descent algorithm?

4. In general how does the computational complexity of Coordinate Descent compare to the computational complexity of Gradient Descent to attain an \epsilon-optimal point for a \beta-smooth function? What happens if you assume access to an oracle to compute i_t?

5. Same question but in the special case where f(x) = \sum_{i=1}^n f_i(x(i)) + \|A x - b \|_2^2 with A \in \mathbb{R}^{m \times n} and b \in \mathbb{R}^m.

6. Can you propose a variant of Coordinate Descent where the computation of i_t would be much faster and yet this new i_t would still allow us to prove the rate of convergence of question 2.? Hint: use randomization!

Posted in Optimization | Leave a comment

ORF523: Optimization with bandit feedback

In this last lecture we consider the case where one can only access the function with a noisy 0^{th}-order oracle (see this lecture for a definition). This assumption models the ‘physical reality’ of many practical problems (in contrast to the case of a 1^{st}-order oracle, which was essentially used as an ‘approximation’). Indeed there are many situations where the function f to be optimized is not known, but one can simulate the function. Imagine for example that we are trying to find the correct dosage of a few chemical elements in order to produce the most effective drug. We can imagine that given a specific dosage one can produce the corresponding drug, and then test it to estimate its effectiveness. This corresponds precisely to assuming that we have a noisy 0^{th}-order oracle for the function \mathrm{dosage} \mapsto \mathrm{effectiveness}.

As one can see with the above example, the type of applications for 0^{th}-order optimization is fundamentally different from the type of applications for 1^{st}-order optimization. In particular in 1^{st}-order optimization the function f is known (we need to compute gradients), and thus it must have been directly engineered by humans (who wrote down the function!). This observation shows that in these applications one has some flexibility in the choice of f, and since we know that (roughly speaking) convex functions can be optimized efficiently, one has a strong incentive to engineer convex functions. On the other hand in 0^{th}-order optimization the function is produced by ‘Nature’. As it turns out, Nature does not care that we can only solve convex problems, and in most applications of 0^{th}-order optimization I would argue that it does not make sense to make a convexity assumption.

As you can imagine 0^{th}-order optimization appears in various contexts, from experiments in physics or biology, to the design of Artificial Intelligence for games, and the list goes on. In particular different communities have been looking at this problem, with different terminology and different assumptions. Being completely biased by my work on the multi-armed bandit problem, I believe that the bandit terminology is the nicest one to describe 0^{th}-order optimization and I will now refer to it as optimization with bandit feedback (for more explanation on this you can check my survey with Cesa-Bianchi on bandit problems). In order to simplify the discussion we will focus in this lecture on optimization over a finite set, that is |\mathcal{X}| < + \infty (we give some references on the continuous case at the end of the post).

 

Discrete optimization with bandit feedback

We want to solve the problem:

    \[\max_{i = 1, \hdots, K} \mu_i ,\]

where the \mu_i's are unknown quantities on which we can obtain information as follows: if one makes a query to action i (we will also say that one pulls arm i), then one receives an independent random variable Y such that \mathbb{E} Y = \mu_i. We will also assume that the distribution of Y is subgaussian (for example this includes Gaussian distributions with variance at most 1 and distributions supported on an interval of length at most 2). The precise definition is not important; we will only use the fact that if one receives Y_1, \hdots, Y_s from pulling arm i s times then \hat{\mu}_{i,s} = \frac{1}{s} \sum_{t=1}^s Y_t satisfies:

(1)   \begin{equation*}  \mathbb{P}(\hat{\mu}_{i,s} - \mu_i > u) \leq \exp \left(- \frac{s u^2}{2} \right) . \end{equation*}

For sake of simplicity we assume that \mathrm{argmax}_{1 \leq i \leq K} \mu_i = \{i^*\}. An important parameter in our analysis is the suboptimality gap for arm i: \Delta_i = \mu_{i^*} - \mu_i (we also denote \Delta = \min_{i \neq i^*} \Delta_i).

 

We will be interested in two very different models:

  • In the PAC (Probably Approximately Correct) setting the algorithm can make as many queries as necessary, so that when it stops querying it can output an arm J \in \{1, \hdots, K\} such that \mathbb{P}(\mu_J < \mu_{i^*} - \epsilon) \leq \delta where \epsilon and \delta are prespecified accuracies. We denote by \mathcal{N} the number of queries that the algorithm made before stopping, and the objective is to have (\epsilon,\delta)-PAC algorithms with \mathcal{N} as small as possible. This formulation goes back to Bechhofer (1954).
  • In the fixed budget setting the algorithm can make at most n queries, and then it has to output an arm J \in \{1, \hdots, K\}. The goal here is to minimize the optimization error \mu_{i^*} - \mu_J. Strangely this formulation is much more recent: it was proposed in this paper that I wrote with Munos and Stoltz.

While at first sight the two models might seem similar, we will see that in fact there is a key fundamental difference.

 

Trivial algorithms

A trivial (\epsilon,\delta)-PAC algorithm would be to query each arm of order of \frac{1}{\epsilon^2} \log \frac{K}{\delta} times and then output the empirical best arm. Using (1) it is obvious that this algorithm is indeed (\epsilon,\delta)-PAC, and furthermore it satisfies \mathcal{N} \leq c \frac{K}{\epsilon^2} \log \frac{K}{\delta}.

 

In the fixed budget setting a trivial algorithm is to divide the budget evenly among the K arms. Using (1) it is immediate that this strategy satisfies \mathbb{P}(\mu_{i^*} - \mu_J \geq \epsilon) \leq K \exp( - c \frac{n}{K} \epsilon^2), which equivalently states that to have \mathbb{P}(\mu_{i^*} - \mu_J \geq \epsilon) \leq \delta one needs the budget n to be at least of order of \frac{K}{\epsilon^2} \log \frac{K}{\delta}.
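For concreteness, here is a minimal sketch of this trivial fixed budget strategy (the Gaussian reward model below is just one instance of the subgaussian assumption, and all constants are placeholders):

```python
import numpy as np

def uniform_allocation(pull, K, n):
    """Trivial fixed budget strategy: split the budget n evenly over the K arms
    and return the empirical best arm; pull(i) returns one noisy sample of mu_i."""
    per_arm = n // K
    means = np.array([np.mean([pull(i) for _ in range(per_arm)]) for i in range(K)])
    return int(np.argmax(means))

# Example with Gaussian rewards.
mu = np.array([0.5, 0.4, 0.3])
rng = np.random.default_rng(1)
pull = lambda i: mu[i] + rng.normal()
print(uniform_allocation(pull, K=3, n=3000))
```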

 

We will now see that these trivial algorithms can be dramatically improved by taking into account the potential heterogeneity in the \mu_i's. For sake of simplicity we focus now on finding the best arm i^*, that is in the PAC setting we take \epsilon = 0, and in the fixed budget setting we consider \mathbb{P}(J \neq i^*). The critical quantity will be the hardness measure:

    \[H = \sum_{i \neq i^*} \frac{1}{\Delta_i^2} ,\]

that we introduced with Audibert and Munos in this paper (though it appeared previously in many other papers). We will show that, roughly speaking, while the trivial algorithms need of order of \frac{K}{\Delta^2} queries to find the optimal arm, our improved algorithms can find it with only H queries. Note that for information-theoretic reasons H is a lower bound on the number of queries that one has to make to find the optimal arm (see this paper by Mannor and Tsitsiklis for a lower bound in the PAC setting, and my paper with Audibert and Munos for a lower bound in the fixed budget setting).

 

Successive Elimination (SE) for the PAC setting

The Successive Elimination algorithm was proposed in this paper by Even-Dar, Mannor and Mansour. The idea is very simple: start with a set of active arms A_1 = \{1, \hdots, K\}. At each round t, pull once each arm in A_t. Now construct confidence intervals of size c \sqrt{\frac{\log (K t / \delta)}{t}} for each arm, and build A_{t+1} from A_t by eliminating the arms in A_t whose confidence interval does not overlap with the confidence interval of the currently best empirical arm in A_t. The algorithm stops when |A_t| = 1, and it outputs the single element of A_t. Using a union bound it is an easy exercise to verify that this algorithm is \delta-PAC, and furthermore with probability at least 1- \delta one has \mathcal{N} \leq c H \log \frac{K}{\delta \Delta^2}.
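Here is a minimal sketch of SE; the constant c in the confidence width is left unoptimized and the reward oracle `pull` is an assumption of the snippet:

```python
import numpy as np

def successive_elimination(pull, K, delta, c=2.0, max_rounds=100000):
    """Sketch of Successive Elimination: pull every active arm once per round,
    keep confidence intervals of width c*sqrt(log(K*t/delta)/t) and drop the
    arms whose upper confidence bound falls below the best lower bound."""
    active = list(range(K))
    sums = np.zeros(K)
    for t in range(1, max_rounds + 1):
        for i in active:
            sums[i] += pull(i)
        means = sums / t
        width = c * np.sqrt(np.log(K * t / delta) / t)
        best_lcb = max(means[i] for i in active) - width
        active = [i for i in active if means[i] + width >= best_lcb]
        if len(active) == 1:
            return active[0]
    return max(active, key=lambda i: sums[i])

mu = [0.5, 0.3, 0.1]
rng = np.random.default_rng(4)
print(successive_elimination(lambda i: mu[i] + rng.normal(), K=3, delta=0.1))
```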

 

Successive Rejects (SR) for the fixed budget setting

The Successive Elimination algorithm cannot be easily adapted to the fixed budget setting. The reason is that in the fixed budget framework we do not know a priori what is a reasonable value for the confidence parameter \delta. Indeed in the end an optimal algorithm should have a probability of error of order \exp(- c n / H), which depends on the unknown hardness parameter H. As a result one cannot use a strategy based on confidence intervals in the fixed budget setting unless one knows H (note that estimating H from data is basically as hard as finding the best arm). With Audibert and Munos we proposed an alternative to SE for the fixed budget that we called Successive Rejects.

The idea is to divide the budget n into K-1 chunks n_1, \hdots, n_{K-1} such that n_1 + \hdots + n_{K-1} = n. The algorithm then runs in K-1 phases. Let A_k be the set of active arms in phase k (with A_1 = \{1, \hdots, K\}). In phase k each arm in A_k is sampled n_k - n_{k-1} times (with n_0 = 0), and at the end of the phase the arm i with the worst empirical performance in A_k is rejected, that is A_{k+1} = A_k \setminus \{i\}. The output of the algorithm is the unique element of A_K.
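Here is a minimal sketch of SR, with n_k proportional to \frac{1}{K+1-k} as discussed below; the exact normalization of the n_k's and the reward model are placeholders:

```python
import numpy as np

def successive_rejects(pull, K, n):
    """Sketch of Successive Rejects: K-1 phases; in phase k every active arm is
    pulled n_k - n_{k-1} times (n_k proportional to 1/(K+1-k)) and the arm with
    the worst empirical mean is rejected."""
    norm = 0.5 + sum(1.0 / i for i in range(2, K + 1))   # one natural normalization
    n_k = lambda k: int(np.ceil((n - K) / (norm * (K + 1 - k)))) if k >= 1 else 0
    active = list(range(K))
    sums, counts = np.zeros(K), np.zeros(K)
    for k in range(1, K):
        pulls = n_k(k) - n_k(k - 1)
        for i in active:
            for _ in range(pulls):
                sums[i] += pull(i)
            counts[i] += pulls
        worst = min(active, key=lambda i: sums[i] / counts[i])
        active.remove(worst)
    return active[0]

mu = [0.5, 0.45, 0.3, 0.1]
rng = np.random.default_rng(2)
print(successive_rejects(lambda i: mu[i] + rng.normal(), K=4, n=8000))
```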

Let \Delta_{(1)} \leq \Delta_{(2)} \leq \hdots \leq \Delta_{(K)} be the ordered \Delta_i‘s. Remark that in phase k, one of the k worst arms must still be in the active set, and thus using a trivial union bound one obtains that:

    \[\mathbb{P}(J \neq i^*) \leq \sum_{k=1}^{K-1} 2 k \exp\left( - n_k \frac{\Delta_{(K-k+1)}^2}{8} \right) .\]

Now the key observation is that by taking n_k proportional to \frac{1}{K+1-k} one obtains a bound of the form \exp( - \tilde{O} (n / H)). Precisely let n_k = \alpha \frac{n}{K+1 - k} where \alpha is such that the n_k‘s sum to n (morally \alpha is of order 1 / \log K). Then we have

    \[n_k \Delta_{(K-k+1)}^2 \geq \frac{\alpha n}{\max_{2 \leq i \leq K} i \Delta_{(i)}^{-2}} \geq \frac{\alpha n}{\sum_{i=2}^K \frac{1}{\Delta_{(i)}^2}} \geq c \frac{n}{\log(K) H} ,\]

for some numerical constant c>0. Thus we proved that SR satisfies

\mathbb{P}(J \neq i^*) \leq K^2 \exp \left(- c \frac{n}{\log(K) H} \right), which equivalently states that SR finds the best arm with probability at least 1-\delta provided that the budget is at least of order H \log(K) \log(K / \delta).

 

The continuous case

Many questions are still open in the continuous case. As I explained at the beginning of the post, convexity might not be the best assumption from a practical point of view, but it is nonetheless a very natural mathematical problem to try to understand the best rate of convergence in this case. This is still an open problem, and you can see the state of the art for upper bounds in this paper by Agarwal, Foster, Hsu, Kakade and Rakhlin, and for lower bounds in this paper by Shamir. In my opinion a more ‘practical’ assumption on the function is simply to assume that it is Lipschitz in some metric. The HOO (Hierarchical Optimistic Optimization) algorithm attains interesting performance when the metric is known, see this paper by myself, Munos, Stoltz and Szepesvari (similar results were obtained independently by Kleinberg, Slivkins and Upfal in this paper). Recently progress has been made for the case where the metric is unknown, see this paper by Slivkins, this paper by Munos, and this one by Bull. Finally let me remark that this Lipschitzian assumption is very weak, and thus one cannot hope to solve high-dimensional problems with the above algorithms. In fact, these algorithms are designed for low-dimensional problems (say dimension less than 5 or so). In standard optimization one can solve problems in very large dimension because of the convexity assumption. For 0^{th}-order optimization I think that we still don’t have a natural assumption that would allow us to scale up the algorithms to higher-dimensional problems.

Posted in Optimization | Leave a comment

ORF523: Acceleration by randomization for a sum of smooth and strongly convex functions

In this lecture we are interested in the following problem:

    \[\min_{x \in \mathbb{R}^n} \frac{1}{m} \sum_{i=1}^m f_i(x) ,\]

where each f_i is \beta-smooth and \alpha-strongly convex. We denote by Q=\frac{\beta}{\alpha} the condition number of these functions (which is also the condition number of the average).

Using a gradient descent directly on the sum with step-size \frac{2}{\alpha + \beta} one obtains a rate of convergence of order \exp( - c t / Q) for some numerical constant c>0 (see this lecture). In particular one can reach an \epsilon-optimal point in time O(m Q N \log(1/\epsilon)) with this method, where O(N) is the time to compute the gradient of a single function f_i(x). Using Nesterov’s Accelerated Gradient Descent (see this lecture) one can reduce the time complexity to O(m \sqrt{Q} N \log(1/\epsilon)).

Using the SGD method described in the previous lecture (with the step-size/averaging described here) one can reach an \epsilon-optimal point in time O( \log(m) N / (\alpha \epsilon)) which is much worse than the basic gradient descent if one is looking for a fairly accurate solution. We are interested in designing a randomized algorithm in the spirit of SGD that would leverage the smoothness of the function to attain a linear rate of convergence (i.e., the dependency in \epsilon would be of the form \log(1/\epsilon)).

 

Stochastic Average Gradient (SAG)

We first describe the SAG method proposed in this paper by Le Roux, Schmidt and Bach (2012). The method is simple and elegant: let y_0 = 0, x_1 \in \mathbb{R}^n and for t \geq 1 let i_t be drawn uniformly at random in \{1, \hdots, m\} and

    \[y_t(i) = \left\{ \begin{array}{ccc} \nabla f_i(x_t) & \text{if} & i=i_t \\ y_{t-1}(i) & \text{if} & i\neq i_t \end{array} \right. ; \qquad x_{t+1} = x_t - \frac{\eta}{m} \sum_{i=1}^m y_t(i) .\]
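Here is a minimal sketch of this update (the toy quadratics and the step size are placeholders; in particular the step size below is not the \frac{1}{2 m \alpha} of the paper, nor is x_1 obtained from SGD iterates):

```python
import numpy as np

def sag(grads, x1, eta, T, rng=np.random.default_rng(0)):
    """Sketch of SAG: keep a table y of the last seen gradient of each f_i,
    refresh one random entry per step, and move along the average of the table."""
    m = len(grads)
    x = x1.astype(float).copy()
    y = np.zeros((m, x.size))        # y_0 = 0
    avg = np.zeros(x.size)           # running value of (1/m) sum_i y(i)
    for _ in range(T):
        i = rng.integers(m)          # i_t uniform in {1, ..., m}
        g = grads[i](x)
        avg += (g - y[i]) / m        # maintain the average in O(dim) time
        y[i] = g
        x -= eta * avg               # x_{t+1} = x_t - (eta/m) sum_i y_t(i)
    return x

# Toy example: m quadratics f_i(x) = 1/2 (x - a_i)^2 in dimension 1.
a = np.linspace(-1.0, 1.0, 20)
grads = [lambda x, ai=ai: x - ai for ai in a]
print(sag(grads, np.array([5.0]), eta=0.05, T=2000))  # approaches mean(a) = 0
```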

They show that in the regime where m \geq 8 Q, SAG (with step-size \eta = \frac{1}{2 m \alpha} and with x_1 obtained as the average of m iterates of SGD) has a rate of convergence of order \exp( - c t / m) (see Proposition 2 in their paper for a more precise statement). Since each step has a time-complexity of O(N \log(m)) this results in an overall time-complexity of O(m N \log(m) \log(1/\epsilon)), which is much better than the complexity of the plain gradient descent (or even of Nesterov’s Accelerated Gradient Descent). Unfortunately the proof of this result is quite difficult. We will now describe another strategy with similar performance but with a much simpler analysis.

 

Short detour through Fenchel’s duality

Before describing the next strategy we first recall a few basic facts about Fenchel duality. First the Fenchel conjugate of a function f: \mathbb{R}^n \rightarrow \mathbb{R} is defined by:

    \[f^*(u) = \sup_{x \in \mathbb{R}^n} u^{\top} x - f(x) .\]

Note that f^* is always a convex function (possibly taking value +\infty). If f is convex, then for any u \in \partial f(x) one clearly has f^*(u) + f(x) = u^{\top} x. Finally one can show that if f is \beta-smooth then f^* is \frac{1}{\beta}-strongly convex (one can use the second Lemma from this lecture).

 

Stochastic Dual Coordinate Ascent (SDCA)

We describe now the SDCA method proposed in this paper by Shalev-Shwartz and Zhang. We need to make a few more assumptions on the functions in the sum, and precisely we will now assume that we want to solve the following problem (note that all norms here are Euclidean):

    \[\min_{w \in \mathbb{R}^n} \frac{1}{m} \sum_{i=1}^m f_i(w^{\top} x_i) + \frac{\alpha}{2} \|w\|^2 ,\]

where

  • f_i : \mathbb{R} \rightarrow \mathbb{R} is convex, \beta-smooth.
  • f_i(s) \geq 0, \forall s \in \mathbb{R} and f_i(0) \leq 1.
  • \|x_i\| \leq 1.

Denote

    \[P(w) = \frac{1}{m} \sum_{i=1}^m f_i(w^{\top} x_i) + \frac{\alpha}{2} \|w\|^2\]

the primal objective. For the dual objective we introduce a dual variable \nu(i) for each data point x_i (as well as a temporary primal variable z(i)):

    \begin{eqnarray*} \min_{w \in \mathbb{R}^n} P(w) & = & \min_{z \in \mathbb{R}^m, w} \max_{\nu \in \mathbb{R}^m} \frac{1}{m} \sum_{i=1}^m \bigg( f_i(z(i)) + \nu(i) (z(i) - w^{\top} x_i) \bigg) + \frac{\alpha}{2} \|w\|^2 \\ & = & \max_{\nu \in \mathbb{R}^m} \min_{z \in \mathbb{R}^m, w} \frac{1}{m} \sum_{i=1}^m \bigg( f_i(z(i)) + \nu(i) (z(i) - w^{\top} x_i) \bigg) + \frac{\alpha}{2} \|w\|^2 \\ & = & \max_{\nu \in \mathbb{R}^m} \min_{w \in \mathbb{R}^n} \frac{1}{m} \sum_{i=1}^m \bigg( - f_i^*(- \nu(i)) - \nu(i) w^{\top} x_i \bigg) + \frac{\alpha}{2} \|w\|^2 . \end{eqnarray*}

Next observe that by the KKT conditions one has at the optimum (w^*, \nu^*):

    \[w^* = \frac{1}{\alpha m} \sum_{i=1}^m \nu^*(i) x_i ,\]

and thus denoting

    \[D(\nu) = \frac{1}{m} \sum_{i=1}^m - f_i^*(- \nu(i)) - \frac{\alpha}{2} \left\|\frac{1}{\alpha m} \sum_{i=1}^m \nu(i) x_i \right\|^2 ,\]

we just proved

    \[\min_{w \in \mathbb{R}^n} P(w) = \max_{\nu \in \mathbb{R}^m} D(\nu) .\]

 

We can now describe SDCA: let \nu_0 = 0 and for t \geq 0 let i_t be drawn uniformly at random in \{1, \hdots, m\} and

    \[\nu_{t+1}(i) = \left\{ \begin{array}{ccc} (1-\eta) \nu_t(i) - \eta f_i'(w_t^{\top} x_i) & \text{if} & i=i_t , \\ \nu_t(i) & \text{if} & i \neq i_t . \end{array} \right.\]

Let w_t = \frac{1}{\alpha m} \sum_{i=1}^m \nu_t(i) x_i be the corresponding primal variable. The following result justifies the procedure.

Theorem (Shalev-Shwartz and Zhang 2013) SDCA with \eta = \frac{m}{Q + m} satisfies:

    \[\mathbb{E} P(w_t) - \min_{w \in \mathbb{R}^n} P(w) \leq (Q+m) \exp\left( - \frac{t}{Q+m} \right) .\]

The above convergence rate yields an overall time-complexity of O((Q+m) N \log(m) \log((Q+m)/\epsilon)) for SDCA (where O(N) is the complexity of computing a gradient of w \mapsto f_i(w^{\top} x_i)), which can be much better than the O(m Q N \log(1/\epsilon)) complexity of the plain gradient descent.
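Before turning to the proof, here is a minimal sketch of the SDCA iteration described above on toy logistic losses (the data, the regularization level \alpha, and the horizon are placeholders):

```python
import numpy as np

def sdca(fprime, X, alpha, Q, T, rng=np.random.default_rng(0)):
    """Sketch of SDCA: maintain the dual variables nu and the primal variable
    w = (1/(alpha*m)) sum_i nu(i) x_i, and at each step update one random dual
    coordinate with eta = m/(Q+m)."""
    m, d = X.shape
    eta = m / (Q + m)
    nu = np.zeros(m)
    w = np.zeros(d)
    for _ in range(T):
        i = rng.integers(m)
        new_nu_i = (1 - eta) * nu[i] - eta * fprime[i](w @ X[i])
        w += (new_nu_i - nu[i]) / (alpha * m) * X[i]   # keep w consistent with nu
        nu[i] = new_nu_i
    return w

# Toy data: logistic losses f_i(s) = log(1 + exp(-y_i s)), which are 1/4-smooth,
# nonnegative, satisfy f_i(0) <= 1, and we normalize the x_i to have norm <= 1.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 5))
X /= np.maximum(1.0, np.linalg.norm(X, axis=1, keepdims=True))
y = np.sign(X @ np.ones(5) + 0.1 * rng.normal(size=50))
fprime = [lambda s, yi=yi: -yi / (1 + np.exp(yi * s)) for yi in y]
alpha, beta = 0.1, 0.25
w = sdca(fprime, X, alpha=alpha, Q=beta / alpha, T=5000)
```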

Proof: The key property that we will prove is that the primal/dual gap is controlled by the improvement in the dual objective, precisely:

(1)   \begin{equation*} \mathbb{E} \left( P(w_t) - D(\nu_t) \right) \leq (Q+m) \mathbb{E} \left( D(\nu_{t+1}) - D(\nu_t) \right) . \end{equation*}

Let us see how the above inequality yields the result. First we clearly have D(\nu) \leq P(w), and thus the above inequality immediately yields:

    \[\mathbb{E} \left( D(\nu^*) - D(\nu_{t+1}) \right) \leq \left(1 - \frac{1}{Q+m}\right) \mathbb{E} \left( D(\nu^*) - D(\nu_{t}) \right) ,\]

which by induction gives (using D(0) \geq 0 and D(\nu^*) \leq 1):

    \[\mathbb{E} \left( D(\nu^*) - D(\nu_{t+1}) \right) \leq \left(1 - \frac{1}{Q+m}\right)^{t+1} \leq \exp\left( - \frac{t+1}{Q+m} \right) ,\]

and thus using once more (1):

    \[\mathbb{E} P(w_t) - P(w^*) \leq (Q+m) \exp\left( - \frac{t}{Q+m} \right) .\]

Thus it only remains to prove (1). First observe that the primal/dual gap can be written as:

    \[P(w_t) - D(\nu_t) = \frac{1}{m} \sum_{i=1}^m \bigg( f_i(w_t^{\top} x_i) + f_i^*(- \nu_t(i)) \bigg) + \alpha \|w_t\|^2 .\]

Let us now evaluate the improvement in the dual objective assuming that coordinate i is updated in the (t+1)^{th} step:

    \begin{align*} & m \left( D(\nu_{t+1}) - D(\nu_t) \right) \\ & = - f_i^*(-\nu_{t+1}(i)) - \frac{\alpha m}{2} \|w_{t+1}\|^2 - \left(- f_i^*(-\nu_t(i)) - \frac{\alpha m}{2} \|w_t\|^2\right) \\ & = - f_i^*(-(1-\eta) \nu_t(i) + \eta f_i'(w_t^{\top} x_i)) - \frac{\alpha m}{2} \left\|w_t - \frac{\eta}{\alpha m} (f_i'(w_t^{\top} x_i) + \nu_t(i)) x_i \right\|^2 \\ & \qquad - \left(- f_i^*(-\nu_t(i)) - \frac{\alpha m}{2} \|w_t\|^2\right) . \end{align*}

Now using that f_i^* is \frac{1}{\beta}-strongly convex one has

    \begin{eqnarray*} f_i^*(-(1-\eta) \nu_t(i) + \eta f_i'(w_t^{\top} x_i)) & \leq& (1 -\eta) f_i^*(- \nu_t(i)) + \eta f_i^*(f_i'(w_t^{\top} x_i)) \\ & & \;- \frac{1}{2 \beta} \eta (1-\eta) (f_i'(w_t^{\top} x_i) + \nu_t(i))^2 , \end{eqnarray*}

which yields

    \begin{align*} & m \left( D(\nu_{t+1}) - D(\nu_t) \right) \\ & \geq \eta f_i^*(- \nu_t(i)) - \eta f_i^*(f_i'(w_t^{\top} x_i)) + \frac{\eta (1-\eta)}{2 \beta} (f_i'(w_t^{\top} x_i) + \nu_t(i))^2 \\ & + \eta (f_i'(w_t^{\top} x_i) + \nu_t(i)) w_t^{\top} x_i - \frac{\eta^2}{2\alpha m} (f_i'(w_t^{\top} x_i) + \nu_t(i))^2 \|x_i\|^2 \\ & \geq \eta f_i^*(- \nu_t(i)) - \eta f_i^*(f_i'(w_t^{\top} x_i)) + \eta (f_i'(w_t^{\top} x_i) + \nu_t(i)) w_t^{\top} x_i , \end{align*}

where in the last line we used that \eta = \frac{m}{Q+m} and \|x_i\| \leq 1. Observe now that

    \[f_i^*(f_i'(w_t^{\top} x_i)) = (w_t^{\top} x_i) f_i'(w_t^{\top} x_i) - f_i(w_t^{\top} x_i) ,\]

which finally gives

    \[m \left( D(\nu_{t+1}) - D(\nu_t) \right) \geq \eta \bigg(f_i^*(- \nu_t(i)) + f_i(w_t^{\top} x_i) + \nu_t(i) w_t^{\top} x_i \bigg),\]

and thus taking expectation one obtains

    \begin{align*} & (Q+m) \mathbb{E}_{i_t} \left( D(\nu_{t+1}) - D(\nu_t) \right) \\ & \geq \frac{1}{m} \sum_{i=1}^m \bigg( f_i(w_t^{\top} x_i) + f_i^*(- \nu_t(i)) + \nu_t(i) w_t^{\top} x_i \bigg)\\ & = P(w_t) - D(\nu_t), \end{align*}

which concludes the proof of (1) and the proof of the theorem.

\Box

Posted in Optimization | 2 Comments

ORF523: Noisy oracles

Today we start the third (and last) part of our class titled: ‘Optimization and Randomness’. An important setting that we will explore is the one of optimization with noisy oracles:

  • Noisy 0^{th}-order oracle: given x \in \mathcal{X} it outputs Y such that \mathbb{E}(Y | x) = f(x).
  • Noisy 1^{st}-order oracle: given x \in \mathcal{X} it outputs \tilde{g} such that \mathbb{E}(\tilde{g} | x) \in \partial f(x).

Somewhat surprisingly several of the first-order algorithms we studied so far are robust to noise and attain the same performance if one feeds them with the answers of a noisy 1^{st} order oracle. This was first observed by Robbins and Monro in the early Fifties, and the resulting set of techniques is often referred to as stochastic approximation methods. For instance let us consider the Stochastic Mirror Descent:

    \begin{align*} & \nabla \Phi(y_{t+1}) = \nabla \Phi(x_{t}) - \eta \tilde{g}_t, \ \text{where} \ \mathbb{E}(\tilde{g_t} | x_t) \in \partial f(x_t) , \\ & x_{t+1} \in \mathrm{argmin}_{x \in \mathcal{X} \cap \mathcal{D}} D_{\Phi}(x,y_{t+1}) . \end{align*}

The analysis of this algorithm follows the exact same lines as the standard version. The key observation is that:

    \begin{eqnarray*} \mathbb{E} \ f(x_t) - f(x) & \leq & \mathbb{E} \ g_t^{\top} (x_t - x), \ \text{where} \ g_t \in \partial f(x_t) \\ & = & \mathbb{E} \left( \mathbb{E} (\tilde{g}_t|x_t)^{\top} (x_t - x) \right), \\ & = & \mathbb{E} \ \tilde{g}_t^{\top} (x_t - x) , \end{eqnarray*}

and now one can repeat the standard proof since \tilde{g}_t was used in the algorithm. This yields the following result.

Theorem. Let \Phi be a mirror map \kappa-strongly convex on \mathcal{X} \cap \mathcal{D} with respect to \|\cdot\|, and let R^2 = \sup_{x \in \mathcal{X} \cap \mathcal{D}} \Phi(x) - \Phi(x_1). Assume that f is convex and that the noisy oracle is such that \mathbb{E} \|\tilde{g}\|_*^2 \leq \sigma^2. Then Stochastic Mirror Descent with \eta = \frac{R}{\sigma} \sqrt{\frac{2 \kappa}{t}} satisfies

    \[\mathbb{E} f\bigg(\frac{1}{t} \sum_{s=1}^t x_s \bigg) - \min_{x \in \mathcal{X}} f(x) \leq R \sigma \sqrt{\frac{2}{\kappa t}} .\]

Of course the exact same reasoning applies to the Mirror Descent for Saddle Points. One can also get the rate 1/t for strongly convex functions using the idea described in this lecture. High-probability results can be obtained using Markov’s inequality, and stronger concentration results can be proved under a subgaussian assumption on the tails of the estimates \tilde{g}, see Proposition 2.2. here. 

Let us now see some applications of this noisy framework.

 

Statistical Learning

We are interested in solving the following problem:

    \[\min_{w \in \mathcal{W}} \mathbb{E} \ \ell(w, Z) ,\]

where \ell : \mathcal{W} \times \mathcal{Z} \rightarrow \mathbb{R} is a known loss function such that \ell(\cdot, z) is a convex function \forall z \in \mathcal{Z}, and Z is a random variable whose distribution is unknown. While the distribution of Z is unknown, we assume that we have access to a sequence of m i.i.d. examples Z_1, \hdots, Z_m. A typical example within this framework is the case where z=(x,y) represents a pair of (input, output) and the function \ell is one of the loss functions we saw in this lecture, for example the logistic loss \ell(w, (x,y)) = \log(1 + \exp(- y w^{\top} x)) with \mathcal{W} \subset \mathbb{R}^n, \mathcal{Z} = \mathbb{R}^n \times \{-1,1\}.

Let g(w,z) be a subgradient of \ell(\cdot, z) at w, and let f(w) = \mathbb{E} \ \ell(w, Z). We assume that \mathbb{E} g(w, Z) \in \partial f(w). Note that we cannot perform a gradient descent on f(w) as we would need to know the distribution of Z to compute a subgradient of f. On the other hand one has access to unbiased estimates of these subgradients with the i.i.d. examples Z_1, \hdots, Z_m. Thus one can use the Stochastic Mirror Descent, where at each iteration a new example is used to compute the noisy subgradient, precisely we take \tilde{g}_t = g(w_t, Z_t). In particular with m examples one can only run the algorithm for m steps, and one gets to a precision of order 1/\sqrt{m}. If in addition f is strongly convex then one can in fact obtain a precision of order 1/m with the step sizes/averaging described in this lecture.
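Here is a minimal sketch of this procedure in the Euclidean (‘ball’) setup, i.e. plain SGD on the logistic loss with one pass over the examples; the step sizes, the absence of a projection step, and the synthetic data are simplifying assumptions:

```python
import numpy as np

def sgd_logistic(data):
    """Stochastic mirror descent with the Euclidean mirror map (plain SGD) on the
    logistic loss: each i.i.d. example is used once to form a noisy subgradient,
    and the average of the iterates is returned."""
    w = np.zeros(data[0][0].size)
    avg = np.zeros_like(w)
    for t, (x, y) in enumerate(data, start=1):
        g = -y * x / (1.0 + np.exp(y * (w @ x)))   # noisy subgradient g(w, Z_t)
        w -= 0.5 / np.sqrt(t) * g                  # step size of order 1/sqrt(t)
        avg += (w - avg) / t                       # running average of the iterates
    return avg

# m i.i.d. examples from a distribution (synthetic here, unknown in general).
rng = np.random.default_rng(2)
w_star = np.array([1.0, -1.0, 0.5])
data = [(x, np.sign(x @ w_star)) for x in rng.normal(size=(1000, 3))]
print(sgd_logistic(data))
```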

 

Acceleration by randomization

In the previous example there was no alternative to the use of noisy subgradients as the distribution of Z was unknown. However in some cases, even if the distribution of Z is known, it might be beneficial to artificially sample examples from the distribution in order to avoid computing difficult integrals. Consider for example the problem of minimizing an average of convex functions:

    \[\min_{x \in \mathcal{X}} \frac{1}{m} \sum_{i=1}^m f_i(x) .\]

We have seen several important examples of this form where m might be very large. Note that computing a subgradient of f(x) = \frac{1}{m} \sum_{i=1}^m f_i(x) takes O(m N) time where O(N) is the time to compute the subgradient of a single function f_i(x). Thus a standard gradient descent algorithm will take O(m N / \epsilon^2) time to reach an \epsilon-optimal point. On the other hand one can simply sample (with replacement) an index i \in [m] at every iteration (which takes time O(\log m)), and perform a step of gradient descent using only the subgradient of f_i. This method will reach an \epsilon-optimal point (in expectation) in time O(\log(m) N / \epsilon^2), which is much cheaper than the requirement for GD!

Note that if f is also strongly convex, then GD needs O(m N / \epsilon) time to reach an \epsilon-optimal point, and SGD needs only O(\log(m) N / \epsilon) time (with appropriate step sizes/averaging as discussed above). The situation becomes more favorable to GD if in addition f is smooth (recall that for strongly convex and smooth functions one has an exponential rate of convergence for GD), and we will come back to this in the next lecture.

 

Let us consider another example which we studied at the end of the previous lecture:

    \[\min_{x \in \Delta_n} \max_{y \in \Delta_n} x^{\top} A y ,\]

where A \in \mathbb{S}^n (the symmetry assumption is just for sake of notation). It is clear that the rate of convergence of MD-SP for this problem is of order \|A\|_{\max} \sqrt{\frac{\log n}{t}}. Furthermore each step requires computing a subgradient in x and y, which are of the form A y and Ax. Thus the time complexity of one step is of order O(n^2) (which is the time complexity of a matrix-vector multiplication). Overall to reach an \epsilon-optimal point one needs O(\|A\|_{\max}^2 \frac{n^2 \log n}{\epsilon^2}) elementary operations. Now the key observation is that the matrix-vector multiplication can easily be randomized, especially since x, y \in \Delta_n. Let A_1, \hdots, A_n be the column vectors of A, then A x = \sum_{i=1}^n x_i A_i, and if I is a random variable such that \mathbb{P}(I=i) = x_i one has \mathbb{E} A_I = A x. Using this randomization in a Stochastic MD-SP one obtains the exact same rate of convergence (in expectation) but now each step has time complexity of order O(n), resulting in an overall complexity to reach an \epsilon-optimal point (in expectation) of O(\|A\|_{\max}^2 \frac{n \log n}{\epsilon^2}). This result is especially amazing since we obtain a sublinear time algorithm (the complexity is \tilde{O}(n) while the ‘size’ of the problem is O(n^2)). One can propose an even better algorithm based on Mirror Prox with a complexity of \tilde{O}(n / \epsilon) and we refer the reader to this book chapter by Juditsky and Nemirovski for more details.
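The randomized matrix-vector product at the heart of this sublinear algorithm takes only a few lines (a minimal sketch; the small random matrix is only there for a sanity check of unbiasedness):

```python
import numpy as np

def sample_column_product(A, x, rng):
    """Unbiased O(n)-time estimate of A x when x is in the simplex:
    draw I with P(I = i) = x_i and return the column A_I, so E[A_I] = A x."""
    i = rng.choice(len(x), p=x)
    return A[:, i]

rng = np.random.default_rng(3)
A = rng.normal(size=(4, 4))
x = np.array([0.1, 0.2, 0.3, 0.4])
est = np.mean([sample_column_product(A, x, rng) for _ in range(20000)], axis=0)
print(np.abs(est - A @ x).max())  # should be small
```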

Posted in Optimization | 1 Comment

ORF523: Mirror Prox

Today I’m going to talk about a variant of Mirror Descent invented by Nemirovski in 2004 (see this paper). This new algorithm is called Mirror Prox and it is designed to minimize a smooth convex function f over a compact convex set \mathcal{X} \subset \mathbb{R}^n. We assume that the smoothness is measured in some arbitrary norm \|\cdot\|, precisely we assume that \forall x, y \in \mathcal{X}, \|\nabla f(x) - \nabla f(y) \|_* \leq \beta \|x - y\|.

Let \Phi : \mathcal{D} \rightarrow \mathbb{R} be a mirror map on \mathcal{X} (see this post for a definition) and let x_1 \in \mathrm{argmin}_{x \in \mathcal{X} \cap \mathcal{D}} \Phi(x). Mirror Prox is described by the following equations:

    \begin{align*} & \nabla \Phi(y_{t+1}') = \nabla \Phi(x_{t}) - \eta \nabla f(x_t), \\ \notag \\ & y_{t+1} \in \mathrm{argmin}_{x \in \mathcal{X} \cap \mathcal{D}} D_{\Phi}(x,y_{t+1}') , \\ \notag \\ & \nabla \Phi(x_{t+1}') = \nabla \Phi(x_{t}) - \eta \nabla f(y_{t+1}), \\ \notag\\ & x_{t+1} \in \mathrm{argmin}_{x \in \mathcal{X} \cap \mathcal{D}} D_{\Phi}(x,x_{t+1}') . \end{align*}

In words the algorithm first makes a step of Mirror Descent to go from x_t to y_{t+1}, and then it makes a similar step to obtain x_{t+1}, starting again from x_t but this time using the gradient of f evaluated at y_{t+1} (instead of x_t). The following result justifies the procedure.

Theorem. Let \Phi be a mirror map \kappa-strongly convex on \mathcal{X} \cap \mathcal{D} with respect to \|\cdot\|. Let R^2 = \sup_{x \in \mathcal{X} \cap \mathcal{D}} \Phi(x) - \Phi(x_1) and f be convex and \beta-smooth w.r.t. \|\cdot\|. Then Mirror Prox with \eta = \frac{\kappa}{\beta} satisfies (with y_1 = x_1)

    \[f\bigg(\frac{1}{t} \sum_{s=1}^t y_s \bigg) - f(x^*) \leq \frac{\beta R^2}{\kappa t} .\]

Proof: Let x \in \mathcal{X} \cap \mathcal{D}. We write

    \begin{align*} & f(y_{t+1}) - f(x) \\ & \leq \nabla f(y_{t+1})^{\top} (y_{t+1} - x) \\ & = \nabla f(y_{t+1})^{\top} (x_{t+1} - x) + \nabla f(x_t)^{\top} (y_{t+1} - x_{t+1}) \\ & \qquad + (\nabla f(y_{t+1}) - \nabla f(x_t))^{\top} (y_{t+1} - x_{t+1}) . \end{align*}

We will now bound separately these three terms. For the first one we have (by definition of the method, first-order optimality condition of the projection, and the definition of the Bregman divergence)

    \begin{align*} & \eta \nabla f(y_{t+1})^{\top} (x_{t+1} - x) \\ & = ( \nabla \Phi(x_t) - \nabla \Phi(x_{t+1}'))^{\top} (x_{t+1} - x) \\ & = ( \nabla \Phi(x_t) - \nabla \Phi(x_{t+1}))^{\top} (x_{t+1} - x) \\ & = D_{\Phi}(x,x_t) - D_{\Phi}(x, x_{t+1}) - D_{\Phi}(x_{t+1}, x_t) . \end{align*}

For the second term, using the same properties as above and the strong convexity of the mirror map, one has

    \begin{align*} & \eta \nabla f(x_t)^{\top} (y_{t+1} - x_{t+1}) \\ & = ( \nabla \Phi(x_t) - \nabla \Phi(y_{t+1}'))^{\top} (y_{t+1} - x_{t+1}) \\ & = ( \nabla \Phi(x_t) - \nabla \Phi(y_{t+1}))^{\top} (y_{t+1} - x_{t+1}) \\ & = D_{\Phi}(x_{t+1},x_t) - D_{\Phi}(x_{t+1}, y_{t+1}) - D_{\Phi}(y_{t+1}, x_t) \\ & \leq D_{\Phi}(x_{t+1},x_t) - \frac{\kappa}{2} \|x_{t+1} - y_{t+1} \|^2 - \frac{\kappa}{2} \|y_{t+1} - x_t\|^2 . \end{align*}

Finally for the last term, using Hölder’s inequality, \beta-smoothness, and the fact that 2 ab \leq a^2 + b^2, one has

    \begin{align*} & (\nabla f(y_{t+1}) - \nabla f(x_t))^{\top} (y_{t+1} - x_{t+1}) \\ & \leq \|\nabla f(y_{t+1}) - \nabla f(x_t)\|_* \cdot \|y_{t+1} - x_{t+1} \| \\ & \leq \beta \|y_{t+1} - x_t\| \cdot \|y_{t+1} - x_{t+1} \| \\ & \leq \frac{\beta}{2} \|y_{t+1} - x_t\|^2 + \frac{\beta}{2} \|y_{t+1} - x_{t+1} \|^2 . \end{align*}

Thus summing up these three terms and using that \eta = \frac{\kappa}{\beta} one gets

    \[f(y_{t+1}) - f(x) \leq \frac{D_{\Phi}(x,x_t) - D_{\Phi}(x,x_{t+1})}{\eta} ,\]

and this inequality is also true for t=0 (see the bound for the first term). The proof is then concluded as usual.

\Box
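To illustrate, here is a minimal sketch of Mirror Prox with the Euclidean mirror map \Phi(x) = \frac{1}{2}\|x\|_2^2, for which the mirror steps reduce to projected gradient steps (the toy objective, the ball constraint, and the horizon are placeholders):

```python
import numpy as np

def mirror_prox_euclidean(grad_f, project, x1, beta, T):
    """Sketch of Mirror Prox in the Euclidean case: an extra-gradient step
    through y_{t+1}, with eta = kappa/beta and kappa = 1."""
    eta = 1.0 / beta
    x = x1.astype(float).copy()
    ys = []
    for _ in range(T):
        y = project(x - eta * grad_f(x))    # step from x_t with the gradient at x_t
        x = project(x - eta * grad_f(y))    # step from x_t with the gradient at y_{t+1}
        ys.append(y)
    return np.mean(ys, axis=0)              # the guarantee is on the average of the y's

# Toy example: minimize ||x - a||^2 over the Euclidean unit ball (beta = 2).
a = np.array([2.0, 0.0])
grad_f = lambda x: 2 * (x - a)
project = lambda z: z if np.linalg.norm(z) <= 1 else z / np.linalg.norm(z)
print(mirror_prox_euclidean(grad_f, project, np.zeros(2), beta=2.0, T=200))
```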

 

Mirror Prox for Saddle Points

Following the generalization of Mirror Descent for Saddle Points (MD-SP, see this post) one can propose a version of Mirror Prox for Saddle Points (MP-SP). In fact the importance of Mirror Prox lies primarily in its application to computing the saddle point of a smooth convex-concave function. Indeed, the key observation is that many non-smooth convex functions admit a smooth saddle-point representation (see examples below). With such a representation one can then use MP-SP to obtain a rate of convergence of order 1/t, despite the fact that the original function is non-smooth! Recall that we proved that for non-smooth convex functions the optimal black-box rate is of order 1/\sqrt{t}. However MP-SP is not a black-box method in the sense that it makes use of the structure of the function (by representing it as a saddle-point). Nonetheless MP-SP still enjoys the nice time-complexity property of black-box methods since it is a first-order method. It is a win-win situation!

Let us now describe MP-SP more precisely. Let \mathcal{X} \subset \mathbb{R}^n be a compact and convex set endowed with some norm \|\cdot\|_{\mathcal{X}} and a mirror map \Phi_{\mathcal{X}} (defined on \mathcal{D_{\mathcal{X}}}), and let \mathcal{Y} \subset \mathbb{R}^m be a compact and convex set endowed with some norm \|\cdot\|_{\mathcal{Y}} and a mirror map \Phi_{\mathcal{Y}} (defined on \mathcal{D_{\mathcal{Y}}}). Let \phi : \mathcal{X} \times \mathcal{Y} \rightarrow \mathbb{R} be a continuously differentiable function such that \phi(\cdot, y) is convex and \phi(x, \cdot) is concave. We assume that \phi is (\beta_{11}, \beta_{12}, \beta_{22}, \beta_{21})-smooth in the sense that:

    \begin{align*} & \|\nabla_x \phi(x,y) - \nabla_x \phi(x',y) \|_{\mathcal{X},*} \leq \beta_{11} \|x-x'\|_{\mathcal{X}} , \\ & \|\nabla_x \phi(x,y) - \nabla_x \phi(x,y') \|_{\mathcal{X},*} \leq \beta_{12} \|y-y'\|_{\mathcal{Y}} , \\ & \|\nabla_y \phi(x,y) - \nabla_y \phi(x,y') \|_{\mathcal{Y},*} \leq \beta_{22} \|y-y'\|_{\mathcal{Y}} , \\ & \|\nabla_y \phi(x,y) - \nabla_y \phi(x',y) \|_{\mathcal{Y},*} \leq \beta_{21} \|x-x'\|_{\mathcal{X}} , \\ \end{align*}

We consider the mirror map \Phi((x,y)) = \Phi_{\mathcal{X}}(x) + \Phi_{\mathcal{Y}}(y) on \mathcal{Z} = \mathcal{X} \times \mathcal{Y} (defined on \mathcal{D} = \mathcal{D_{\mathcal{X}}} \times \mathcal{D_{\mathcal{Y}}}). Let z_1 \in \mathrm{argmin}_{z \in \mathcal{Z} \cap \mathcal{D}} \Phi(z). Then for t \geq 1, let z_t=(x_t, y_t) and w_t=(u_t,v_t) be defined by the following equations (with w_1=z_1):

    \begin{align*} & \nabla \Phi(w_{t+1}') = \nabla \Phi(z_{t}) - \eta (\nabla_x \phi(x_t, y_t), - \nabla_y \phi(x_t, y_t)), \\ \notag \\ & w_{t+1} \in \mathrm{argmin}_{z \in \mathcal{Z} \cap \mathcal{D}} D_{\Phi}(z,w_{t+1}') , \\ \notag \\ & \nabla \Phi(z_{t+1}') = \nabla \Phi(z_{t}) - \eta (\nabla_x \phi(u_t, v_t), - \nabla_y \phi(u_t, v_t)), \\ \notag\\ & z_{t+1} \in \mathrm{argmin}_{z \in \mathcal{Z} \cap \mathcal{D}} D_{\Phi}(z,z_{t+1}') . \end{align*}

 

Theorem. Assume that \Phi_{\mathcal{X}} is 1-strongly convex on \mathcal{X} \cap \mathcal{D}_{\mathcal{X}} with respect to \|\cdot\|_{\mathcal{X}} and let R^2_{\mathcal{X}} = \sup_{x \in \mathcal{X} \cap \mathcal{D}} \Phi_{\mathcal{X}}(x) - \inf_{x \in \mathcal{X} \cap \mathcal{D}} \Phi_{\mathcal{X}}(x) (with a similar definition for \Phi_{\mathcal{Y}}). Let \beta = \max(\beta_{11}, \beta_{12}, \beta_{22}, \beta_{21}). Then MP-SP with \eta = \frac{1}{2 \beta} satisfies

    \[\max_{y \in \mathcal{Y}} \phi\left( \frac1{t} \sum_{s=1}^t u_s,y \right) - \min_{x \in \mathcal{X}} \phi\left(x, \frac1{t} \sum_{s=1}^t v_s \right) \leq \frac{2 \beta (R^2_{\mathcal{X}} + R^2_{\mathcal{Y}})}{t}.\]

Proof: First we endow \mathcal{Z} with the norm defined by

    \[\|z\|_{\mathcal{Z}} = \sqrt{\|x\|_{\mathcal{X}}^2 + \|y\|_{\mathcal{Y}}^2} .\]

It is immediate that \Phi is 1-strongly convex with respect to \|\cdot\|_{\mathcal{Z}} on \mathcal{Z} \cap \mathcal{D}. Furthermore one can easily check that

    \[\|z\|_{\mathcal{Z},*} = \sqrt{\|x\|_{\mathcal{X},*}^2 + \|y\|_{\mathcal{Y},*}^2} ,\]

and thus the vector fields (\nabla_x \phi(x, y), - \nabla_y \phi(x,y)) used in MP-SP satisfy:

    \[\|(\nabla_x \phi(x, y), - \nabla_y \phi(x,y)) - (\nabla_x \phi(x', y'), - \nabla_y \phi(x',y'))\|_{\mathcal{Z},*}^2 \leq 4 \beta^2 \|(x,y) - (x',y') \|_{\mathcal{Z}}^2 .\]

It suffices now to use the exact same proof as for the standard Mirror Prox, starting with:

    \begin{eqnarray*} \phi\left( \frac1{t} \sum_{s=1}^t u_s,y \right) - \phi\left(x, \frac1{t} \sum_{s=1}^t v_s \right) & \leq & \frac{1}{t} \sum_{s=1}^t \bigg( \phi\left(u_s,y \right) - \phi\left(x, v_s \right) \bigg) \\ & \leq & \frac{1}{t} \sum_{s=1}^t g_s^{\top} (w_s - z) , \end{eqnarray*}

where g_s = (\nabla_x \phi(u_s, v_s), - \nabla_y \phi(u_s, v_s)).

\Box

 

Smooth saddle-point representation of non-smooth functions

Assume that the function f to be optimized is of the form:

    \[f(x) = \max_{1 \leq i \leq m} f_{i} (x) ,\]

where the functions f_i(x) are convex, \beta-smooth and L-Lipschitz in some norm \|\cdot\|. Minimizing f over \mathcal{X} is equivalent to finding a saddle point over \mathcal{X} \times \Delta_{m} since one has:

    \[f(x) = \max_{y \in \Delta_{m}} y^{\top} (f_1(x), \hdots, f_{m}(x)) .\]

Thus we are looking for the saddle point of \phi(x,y) = y^{\top} (f_1(x), \hdots, f_{m}(x)) which is convex in x, and concave (in fact linear!) in y. Furthermore \phi is smooth and it is an easy exercise to verify that in this case one has \beta_{22} = 0, \beta_{21} = L, \beta_{11} = \beta, and \beta_{12} = L. Thus a rate of convergence (to the minimizer of f) of order (\beta + L) (R^2_{\mathcal{X}} + \log(m)) /t can be obtained by applying MP-SP with appropriate mirror maps (in particular the update in the y-space has to be done with the negative entropy). Note that this rate is particularly striking since the oracle lower bound of 1/\sqrt{t} was proved precisely for this type of function! Here we are able to get around this lower bound by making use of the structure of the function. Remark also that if you think about what this algorithm is doing to optimize the function f you get a very elegant procedure for which it is highly non-trivial a priori to prove anything about its rate of convergence.

Finally note that if \beta and L are of different order of magnitudes one can obtain a better result by considering the mirror map \Phi((x,y)) = a \Phi_{\mathcal{X}}(x) + b \Phi_{\mathcal{Y}}(y) and optimizing over the values of a and b.

 

Another interesting example arises when \phi(x,y) = x^{\top} A y with A \in \mathbb{R}^{n \times m}, \mathcal{X} = \Delta_n and \mathcal{Y} = \Delta_m. With the \ell_1-norm on both spaces it is easy to see that \beta_{12} = \beta_{21} = \| A \|_{\mathrm{max}} (the entrywise max norm), and of course \beta_{11} = \beta_{22} = 0. Thus we obtain here a convergence rate of order \| A \|_{\mathrm{max}} (\log(n) + \log(m)) / t. Again if n and m are of different order of magnitudes one can improve the result to obtain a rate of order \| A \|_{\mathrm{max}} \sqrt{\log(n) \log(m)} / t by considering the mirror map \Phi((x,y)) = a \Phi_{\mathcal{X}}(x) + b \Phi_{\mathcal{Y}}(y) and optimizing over the values of a and b.

Many more interesting examples can be derived and we refer the reader to the original paper of Nemirovski as well as this book chapter by Juditsky and Nemirovski.

Posted in Optimization | 2 Comments

COLT 2013 accepted papers

The accepted papers for COLT 2013 have just been revealed. You can also find the list below with a link to the arxiv version when I could find one (if you want me to add a link just send me an email).

I also take the opportunity of this post to remind my student readers that you can apply for a ‘student travel award’ to come participate in COLT 2013. The instructions can be found here. So far I received very few applications so I highly encourage you to give it a try! You can check out this blog post that gives a few reasons to make the effort to travel and participate in conferences.

Update with a message from Philippe Rigollet: Sebastien Bubeck and I are local organizers of this year’s edition of the Conference On Learning Theory (COLT 2013) on June 12-14.
This means that it will take place at Princeton (in Friend 101), and of course, that it’ll be awesome.
The list of accepted papers is very exciting and has been posted on the official COLT 2013 website:
http://orfe.princeton.edu/conferences/colt2013 (see also below).

Thanks to generous sponsors, we have been able to lower the registration costs to a historic minimum (regular: $300, students: $150).
These costs include:
-a cocktail party followed by a banquet on Thursday night
-coffee breaks (with poster presentations)
-lunches (with poster presentation)
-47 amazing presentations
-3 Keynote lectures by Sanjeev Arora (Princeton), Ralf Herbrich (amazon.com) and Yann LeCun (NYU) respectively

The deadline for early registration is May 13 (even if you’re local, help us support the conference by registering and not just crashing the party).
To register, go to:
http://orfe.princeton.edu/conferences/colt2013/?q=for-participants/registration

Please help us spread the word by forwarding this email to anyone interested in Learning Theory and by using your favorite social media.

 

COLT 2013 accepted papers

Posted in Uncategorized | Leave a comment

ORF523: Mirror Descent, part II/II

We start with some of the standard setups for the Mirror Descent algorithm we described in the previous post.

Standard setups for Mirror Descent

‘Ball’ setup. The simplest version of Mirror Descent is obtained by taking \Phi(x) = \frac{1}{2} \|x\|^2_2 on \mathcal{D} = \mathbb{R}^n. The function \Phi is a mirror map 1-strongly convex w.r.t. \|\cdot\|_2, and furthermore the associated Bregman divergence is given by D_{\Phi}(x,y) = \frac{1}{2} \|x - y\|_2^2. Thus in that case Mirror Descent is exactly equivalent to Projected Subgradient Descent, and the rate of convergence obtained in the previous post recovers our earlier result on Projected Subgradient Descent.

‘Simplex’ setup. A more interesting choice of a mirror map is given by the negative entropy

    \[\Phi(x) = \sum_{i=1}^n x_i \log x_i,\]

on \mathcal{D} = \mathbb{R}_{++}^n. In that case the gradient update \nabla \Phi(y_{t+1}) = \nabla \Phi(x_t) - \eta \nabla f(x_t) can be written equivalently as

    \[y_{t+1}(i) = x_{t}(i) \exp\big(- \eta [\nabla f(x_t) ]_i \big) , \ i=1, \hdots, n.\]

The Bregman divergence of this mirror map is given by D_{\Phi}(x,y) = \sum_{i=1}^n x_i \log \frac{x_i}{y_i} - \sum_{i=1}^n (x_i - y_i) (also known as the generalized Kullback-Leibler divergence). It is easy to verify that the projection with respect to this Bregman divergence on the simplex \Delta_n = \{x \in \mathbb{R}_+^n : \sum_{i=1}^n x_i = 1\} amounts to a simple renormalization y \mapsto y / \|y\|_1. Furthermore it is also easy to verify that \Phi is 1-strongly convex w.r.t. \|\cdot\|_1 on \Delta_n (this result is known as Pinsker’s inequality). Note also that one has x_1 = (1/n, \hdots, 1/n) and R^2 = \log n when \mathcal{X} = \Delta_n. 

The above observations imply that when minimizing on the simplex \Delta_n a function f with subgradients bounded in \ell_{\infty}-norm, Mirror Descent with the negative entropy achieves a rate of convergence of order \sqrt{\frac{\log n}{t}}. On the other hand the regular Subgradient Descent achieves only a rate of order \sqrt{\frac{n}{t}} in this case!
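Here is a minimal sketch of the simplex setup (the multiplicative update followed by an \ell_1 renormalization); the fixed step size and the linear objective are placeholders:

```python
import numpy as np

def entropic_mirror_descent(grad_f, n, eta, T):
    """Sketch of Mirror Descent in the simplex setup: multiplicative update
    followed by an l1 renormalization (the Bregman projection onto Delta_n)."""
    x = np.full(n, 1.0 / n)          # x_1 = (1/n, ..., 1/n)
    avg = np.zeros(n)
    for t in range(1, T + 1):
        y = x * np.exp(-eta * grad_f(x))
        x = y / y.sum()              # projection = renormalization
        avg += (x - avg) / t         # average of the iterates
    return avg

# Toy example: minimize f(x) = c^T x over the simplex (mass moves to argmin c).
c = np.array([0.3, 0.1, 0.5])
print(entropic_mirror_descent(lambda x: c, n=3, eta=0.5, T=500))
```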

‘Spectrahedron’ setup. We consider here functions defined on matrices, and we are interested in minimizing a function f on the spectrahedron \mathcal{S}_n defined as:

    \[\mathcal{S}_n = \left\{X \in \mathbb{S}_+^n : \mathrm{Tr}(X) = 1 \right\} .\]

In this setting we consider the mirror map on \mathcal{D} = \mathbb{S}_{++}^n given by the negative von Neumann entropy:

    \[\Phi(X) = \sum_{i=1}^n \lambda_i(X) \log \lambda_i(X) ,\]

where \lambda_1(X), \hdots, \lambda_n(X) are the eigenvalues of X. It can be shown that the gradient update \nabla \Phi(Y_{t+1}) = \nabla \Phi(X_t) - \eta \nabla f(X_t) can be written equivalently as

    \[Y_{t+1} = \exp\big(\log X_t - \eta \nabla f(X_t) \big) ,\]

where the matrix exponential and matrix logarithm are defined as usual. Furthermore the projection on \mathcal{S}_n is a simple trace renormalization.

With highly non-trivial computation one can show that \Phi is \frac{1}{2}-strongly convex with respect to the Schatten 1-norm defined as

    \[\|X\|_1 = \sum_{i=1}^n |\lambda_i(X)|.\]

It is easy to see that one has X_1 = \frac{1}{n} I_n and R^2 = \log n for \mathcal{X} = \mathcal{S}_n. In other words the rate of convergence for optimization on the spectrahedron is the same as on the simplex!

 

Mirror Descent for Saddle Points

We consider now the following saddle-point problem: Let \mathcal{X} \subset \mathbb{R}^n, \mathcal{Y} \subset \mathbb{R}^m be compact convex sets, and \phi : \mathcal{X} \times \mathcal{Y} \rightarrow \mathbb{R} be a continuous function such that \phi(\cdot, y) is convex and \phi(x, \cdot) is concave. We are interested in finding z^*=(x^*, y^*) \in \mathcal{X} \times \mathcal{Y} =: \mathcal{Z} such that

    \[\phi(x^*,y^*) = \inf_{x \in \mathcal{X}} \sup_{y \in \mathcal{Y}} \phi(x,y) = \sup_{y \in \mathcal{Y}} \inf_{x \in \mathcal{X}} \phi(x,y) .\]

(Note that the above equality uses Sion’s minimax theorem from this lecture.) We measure the quality of a candidate solution z=(x,y) through the primal/dual gap

    \[\max_{y' \in \mathcal{Y}} \phi(x,y') - \min_{x' \in \mathcal{X}} \phi(x',y) .\]

The key observation now is that for any x,y,x',y', one has with g_x \in \partial_x \phi(x,y) and -g_y \in \partial_y (- \phi(x,y)),

    \[\phi(x,y) - \phi(x',y) \leq g_x^{\top} (x-x'),\]

and

    \[- \phi(x,y) - (- \phi(x,y')) \leq (- g_y)^{\top} (y-y') .\]

In particular, denoting g_z = (g_x, - g_y) we just proved

    \[\max_{y' \in \mathcal{Y}} \phi(x,y') - \min_{x' \in \mathcal{X}} \phi(x',y) \leq g_z^{\top} (z - z') ,\]

for some z' \in \mathcal{Z}. This inequality suggests that one can solve the saddle-point problem with a Mirror Descent in the space \mathcal{Z} using the ‘vector field’ given by g_z. Precisely this gives the following SP Mirror Descent:

Let \Phi_{\mathcal{X}} be a mirror map on \mathcal{X} (defined on \mathcal{D_{\mathcal{X}}}) and let \Phi_{\mathcal{Y}} be a mirror map on \mathcal{Y} (defined on \mathcal{D_{\mathcal{Y}}}). We consider now the mirror map \Phi((x,y)) = \Phi_{\mathcal{X}}(x) + \Phi_{\mathcal{Y}}(y) on \mathcal{Z} (defined on \mathcal{D} = \mathcal{D_{\mathcal{X}}} \times \mathcal{D_{\mathcal{Y}}}).

Let z_1 \in \mathrm{argmin}_{z \in \mathcal{Z} \cap \mathcal{D}} \Phi(z). Then for t \geq 1, let w_{t+1} \in \mathcal{D} such that

    \[\nabla \Phi(w_{t+1}) = \nabla \Phi(z_{t}) - \eta g_t,\]

where g_t = (g_x, - g_y) with g_x \in \partial_x \phi(x_t,y_t) and -g_y \in \partial_y (- \phi(x_t,y_t)). Finally let

    \[z_{t+1} \in \mathrm{argmin}_{z \in \mathcal{Z} \cap \mathcal{D}} D_{\Phi}(z,w_{t+1}) .\]

The following result describes the performance of SP Mirror Descent.

Theorem. Assume that \Phi_{\mathcal{X}} is \kappa_{\mathcal{X}}-strongly convex on \mathcal{X} \cap \mathcal{D}_{\mathcal{X}} with respect to \|\cdot\|_{\mathcal{X}}, and let R^2_{\mathcal{X}} = \sup_{x \in \mathcal{X} \cap \mathcal{D}_{\mathcal{X}}} \Phi_{\mathcal{X}}(x) - \Phi_{\mathcal{X}}(x_1). Assume that for any y \in \mathcal{Y}, \phi(\cdot, y) is L_{\mathcal{X}}-Lipschitz w.r.t. \|\cdot\|_{\mathcal{X}}. Define similarly \kappa_{\mathcal{Y}}, R^2_{\mathcal{Y}}, L_{\mathcal{Y}}. Then SP Mirror Descent with \eta = \sqrt{\frac{R^2_{\mathcal{X}} + R^2_{\mathcal{Y}}}{\frac{L_{\mathcal{X}}^2}{\kappa_{\mathcal{X}}} + \frac{L_{\mathcal{Y}}^2}{\kappa_{\mathcal{Y}}}}} \sqrt{\frac{2}{t}} satisfies

    \[\max_{y \in \mathcal{Y}} \phi\left( \frac1{t} \sum_{s=1}^t x_s,y \right) - \min_{x \in \mathcal{X}} \phi\left(x, \frac1{t} \sum_{s=1}^t y_s \right) \leq \sqrt{\frac{2 \left( R^2_{\mathcal{X}} + R^2_{\mathcal{Y}} \right) \left(\frac{L_{\mathcal{X}}^2}{\kappa_{\mathcal{X}}} + \frac{L_{\mathcal{Y}}^2}{\kappa_{\mathcal{Y}}} \right) }{t}}.\]

Proof: First we endow \mathcal{Z} with the norm defined by

    \[\|z\|_{\mathcal{Z}} = \sqrt{\kappa_{\mathcal{X}} \|x\|_{\mathcal{X}}^2 + \kappa_{\mathcal{Y}} \|y\|_{\mathcal{Y}}^2} .\]

It is immediate that \Phi is 1-strongly convex with respect to \|\cdot\|_{\mathcal{Z}} on \mathcal{Z} \cap \mathcal{D}. Furthermore one can easily check that

    \[\|z\|_{\mathcal{Z},*} = \sqrt{\frac1{\kappa_{\mathcal{X}}} \|x\|_{\mathcal{X},*}^2 + \frac1{\kappa_{\mathcal{Y}}} \|y\|_{\mathcal{Y},*}^2} ,\]

and thus the vector field (g_t) used in the SP mirror descent satisfies:

    \[\|g_t\|_{\mathcal{Z},*} \leq \sqrt{\frac{L_{\mathcal{X}}^2}{\kappa_{\mathcal{X}}} + \frac{L_{\mathcal{Y}}^2}{\kappa_{\mathcal{Y}}}} .\]

It now suffices to use the exact same proof as for the standard Mirror Descent, starting with:

    \begin{eqnarray*} \phi\left( \frac1{t} \sum_{s=1}^t x_s,y \right) - \phi\left(x, \frac1{t} \sum_{s=1}^t y_s \right) & \leq & \frac{1}{t} \sum_{s=1}^t \bigg( \phi\left(x_s,y \right) - \phi\left(x, y_s \right) \bigg) \\ & \leq & \frac{1}{t} \sum_{s=1}^t g_s^{\top} (z_s - z) . \end{eqnarray*}

\Box
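
For concreteness, here is a minimal numpy sketch of SP Mirror Descent for a matrix game \phi(x,y) = x^{\top} A y on \Delta_n \times \Delta_m, taking the negative entropy as mirror map on each factor (so \kappa_{\mathcal{X}} = \kappa_{\mathcal{Y}} = 1, R^2_{\mathcal{X}} = \log n, R^2_{\mathcal{Y}} = \log m and L_{\mathcal{X}}, L_{\mathcal{Y}} \leq \max_{i,j} |A_{i,j}|); the function name and the exact step size are just one possible instantiation of the scheme above:

    import numpy as np

    def sp_md_matrix_game(A, T):
        """SP Mirror Descent sketch for phi(x, y) = x^T A y on Delta_n x Delta_m,
        with the negative entropy mirror map on each factor."""
        n, m = A.shape
        x, y = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
        x_avg, y_avg = np.zeros(n), np.zeros(m)
        L = max(np.abs(A).max(), 1e-12)                      # L_X = L_Y <= max |A_ij|
        eta = np.sqrt((np.log(n) + np.log(m)) / (L ** 2 * T))
        for _ in range(T):
            x_avg += x / T
            y_avg += y / T
            gx = A @ y                                       # g_x, subgradient in x
            gy = A.T @ x                                     # g_y, supergradient in y
            x = x * np.exp(-eta * gx); x /= x.sum()          # descent step in x
            y = y * np.exp(eta * gy); y /= y.sum()           # the z-update uses -g_y, i.e. ascent in y
        return x_avg, y_avg                                  # duality gap of order L * sqrt(log(n m) / T)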


ORF523: Mirror Descent, part I/II

In this post we are interested in minimizing a convex function f over a compact convex set \mathcal{X} \subset \mathbb{R}^n under the assumption that f is L-Lipschitz on \mathcal{X} with respect to some arbitrary norm \|\cdot\| (precisely we assume that \forall x \in \mathcal{X}, g \in \partial f(x), \|g\|_* \leq L). As we have seen in this post, using the correct norm to measure the variation of a function can make an enormous difference. We describe now a method invented by Nemirovski and Yudin to deal with this situation.

Let us for a moment abstract the situation a little bit and forget that we are trying to do optimization in finite dimension. We already observed that Projected Subgradient Descent works in an arbitrary Hilbert space \mathcal{H} (think of \mathcal{H} = \ell_2). However we are now interested in the more general situation where we are optimizing the function in some Banach space \mathcal{B} (for example \mathcal{B} = \ell_1). In that case the Gradient Descent strategy does not even make sense: indeed the gradients (more formally the Fréchet derivatives) \nabla f(x) are elements of the dual space \mathcal{B}^* and thus one cannot perform the computation x - \eta \nabla f(x) (the difference between a point of \mathcal{B} and an element of \mathcal{B}^* is not defined). We did not have this problem for optimization in a Hilbert space \mathcal{H} since by the Riesz representation theorem \mathcal{H}^* is isometric to \mathcal{H}. The great insight of Nemirovski and Yudin is that one can still do a gradient descent by first mapping the point x \in \mathcal{B} into the dual space \mathcal{B}^*, then performing the gradient update in the dual space, and finally mapping back the resulting point to the primal space \mathcal{B}. Of course the new point in the primal space might lie outside of the constraint set \mathcal{X} \subset \mathcal{B} and thus we need a way to project back the point on the constraint set \mathcal{X}. The rest of this lecture will make this idea more precise.

 

Preliminaries

We consider a non-empty convex open set \mathcal{D} \subset \mathbb{R}^n such that the constraint set \mathcal{X} is included in its closure, that is \mathcal{X} \subset \overline{\mathcal{D}}, and such that \mathcal{X} \cap \mathcal{D} \neq \emptyset.

Now we consider a continuously differentiable and strictly convex function \Phi defined on \mathcal{D}. This function \Phi will play the role of the mirror map between the primal and dual space. Precisely a point x \in \mathcal{X} \cap \mathcal{D} in the primal is mapped to \nabla \Phi(x) in the ‘dual’ (note that here all points lie in \mathbb{R}^n and the notions of ‘primal’ and ‘dual’ space only have an intuitive meaning). In other words the gradient update of Mirror Descent with the mirror map \Phi will take the form \nabla \Phi(x) - \eta \nabla f(x). Next this dual point needs to be mapped back to the primal space, which is always possible under the condition that \nabla \Phi(\mathcal{D}) = \mathbb{R}^n (weaker assumptions are possible but we will not need them), that is one can find y \in \mathcal{D} such that

    \[\nabla \Phi(y) = \nabla \Phi(x) - \eta \nabla f(x) .\]

Now this point y might lie outside of the constraint set \mathcal{X}. To project back onto the constraint set we use the Bregman divergence associated to \Phi, defined by

    \[D_{\Phi}(x,y) = \Phi(x) - \Phi(y) - \nabla \Phi(y)^{\top} (x - y) .\]

The projection of y \in \mathcal{D} onto \mathcal{X} then takes the following form:

    \[\mathrm{argmin}_{x \in \mathcal{X} \cap \mathcal{D}} D_{\Phi}(x,y) .\]

To ensure the existence of this projection we assume now that \Phi satisfies

    \[\lim_{x \rightarrow \partial \mathcal{D}} \|\nabla \Phi(x)\| = + \infty .\]

With this assumption, x \mapsto D_{\Phi}(x,y) is locally increasing on the boundary of \mathcal{D}; together with the compactness of \mathcal{X} and the strict convexity of x \mapsto D_{\Phi}(x,y), this implies the existence and uniqueness of the projection defined above.

We say that \Phi : \mathcal{D} \rightarrow \mathbb{R} is a mirror map if it satisfies all the above assumptions.

 

Mirror Descent

We can now describe the Mirror Descent strategy based on a mirror map \Phi. Let x_1 \in \mathrm{argmin}_{x \in \mathcal{X} \cap \mathcal{D}} \Phi(x). Then for t \geq 1, let y_{t+1} \in \mathcal{D} such that

    \[\nabla \Phi(y_{t+1}) = \nabla \Phi(x_{t}) - \eta g_t, \ \text{where} \ g_t \in \partial f(x_t) ,\]

and

    \[x_{t+1} \in \mathrm{argmin}_{x \in \mathcal{X} \cap \mathcal{D}} D_{\Phi}(x,y_{t+1}) .\]
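
Before stating the guarantee, here is a minimal generic sketch of this scheme in Python; grad, mirror_grad, mirror_grad_inv and bregman_proj are placeholders for a subgradient oracle of f, the map \nabla \Phi, its inverse, and the Bregman projection onto \mathcal{X} \cap \mathcal{D} (with \Phi(x) = \frac{1}{2} \|x\|_2^2 one recovers Projected Subgradient Descent):

    import numpy as np

    def mirror_descent(grad, mirror_grad, mirror_grad_inv, bregman_proj, x1, eta, T):
        """Generic Mirror Descent sketch. The guarantee below applies to the averaged iterate."""
        x = x1
        avg = np.zeros_like(x1, dtype=float)
        for _ in range(T):
            avg = avg + x / T
            u = mirror_grad(x) - eta * grad(x)   # gradient step performed in the 'dual' space
            y = mirror_grad_inv(u)               # map back to the primal space, y in D
            x = bregman_proj(y)                  # projection onto X cap D w.r.t. D_Phi
        return avg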

 

Theorem Let \Phi be a mirror map. Assume also that \Phi is \alpha-strongly convex on \mathcal{X} \cap \mathcal{D} with respect to \|\cdot\|, that is

    \[\forall x, y \in \mathcal{X} \cap \mathcal{D}, \Phi(x) - \Phi(y) \leq \nabla \Phi(x)^{\top}(x-y) - \frac{\alpha}{2} \|x-y\|^2.\]

Let R^2 = \sup_{x \in \mathcal{X} \cap \mathcal{D}} \Phi(x) - \Phi(x_1) and f be convex and L-Lipschitz w.r.t. \|\cdot\|, then Mirror Descent with \eta = \frac{R}{L} \sqrt{\frac{2 \alpha}{t}} satisfies

    \[f\bigg(\frac{1}{t} \sum_{s=1}^t x_s \bigg) - \min_{x \in \mathcal{X}} f(x) \leq RL \sqrt{\frac{2}{\alpha t}} .\]

 

Proof: Let x \in \mathcal{X} \cap \mathcal{D}. The claimed bound will be obtained by taking a limit x \rightarrow x^*. Now by convexity of f, the definition of Mirror Descent, and the definition of the Bregman divergence one has

    \begin{eqnarray*} f(x_s) - f(x) & \leq & g_s^{\top} (x_s - x) \\ & = & \frac{1}{\eta} (\nabla \Phi(x_s) - \nabla \Phi(y_{s+1}))^{\top} (x_s - x) \\ & = & \frac{1}{\eta} \bigg( D_{\Phi}(x, x_s) + D_{\Phi}(x_s, y_{s+1}) - D_{\Phi}(x, y_{s+1}) \bigg). \end{eqnarray*}

Now the key observation is that by writing down the first order optimality condition for x_{s+1} one obtains

    \[(\nabla \Phi(x_{s+1}) - \nabla \Phi(y_{s+1}))^{\top} (x_{s+1} - x) \leq 0 ,\]

which can be written equivalently as

    \[D_{\Phi}(x, y_{s+1}) \geq D_{\Phi}(x, x_{s+1}) + D_{\Phi}(x_{s+1}, y_{s+1}) .\]

Thus the term D_{\Phi}(x, x_s) - D_{\Phi}(x, x_{s+1}) will lead to a telescopic sum when summing over s=1 to s=t, and it remains to bound the other term as follows:

    \begin{align*} & D_{\Phi}(x_s, y_{s+1}) - D_{\Phi}(x_{s+1}, y_{s+1}) \\ & = \Phi(x_s) - \Phi(x_{s+1}) - \nabla \Phi(y_{s+1})^{\top} (x_{s} - x_{s+1}) \\ & \leq (\nabla \Phi(x_s) - \nabla \Phi(y_{s+1}))^{\top} (x_{s} - x_{s+1}) - \frac{\alpha}{2} \|x_s - x_{s+1}\|^2 \\ & = \eta g_s^{\top} (x_{s} - x_{s+1}) - \frac{\alpha}{2} \|x_s - x_{s+1}\|^2 \\ & \leq \eta L \|x_{s+1} - x_s\| - \frac{\alpha}{2} \|x_s - x_{s+1}\|^2 \\ & \leq \frac{(\eta L)^2}{2 \alpha}. \end{align*}

We proved

    \[\sum_{s=1}^t \bigg(f(x_s) - f(x)\bigg) \leq \frac{D_{\Phi}(x,x_1)}{\eta} + \eta \frac{L^2 t}{2 \alpha},\]

which concludes the proof up to trivial computation.

\Box

 

Mirror Descent as a proximal method

One can rewrite Mirror Descent as follows:

    \begin{eqnarray*} x_{t+1} & = & \mathrm{argmin}_{x \in \mathcal{X} \cap \mathcal{D}} \ D_{\Phi}(x,y_{t+1}) \\ & = & \mathrm{argmin}_{x \in \mathcal{X} \cap \mathcal{D}} \ \Phi(x) - \nabla \Phi(y_{t+1})^{\top} x \\ & = & \mathrm{argmin}_{x \in \mathcal{X} \cap \mathcal{D}} \ \Phi(x) - (\nabla \Phi(x_{t}) - \eta g_t)^{\top} x \\ & = & \mathrm{argmin}_{x \in \mathcal{X} \cap \mathcal{D}} \ \eta g_t^{\top} x + D_{\Phi}(x,x_t) . \\ \end{eqnarray*}

Nowadays this last expression is usually taken as the definition of Mirror Descent (see this paper by Beck and Teboulle). It gives a proximal point of view on Mirror Descent: the algorithm is trying to minimize the local linearization of the function while not moving too far away from the previous point (with distances measured via the Bregman divergence of the mirror map).
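
For instance, with the negative entropy \Phi(x) = \sum_{i=1}^n x_i \log x_i on the simplex \Delta_n, this last minimization problem can be solved in closed form (again by writing the first order optimality condition with a Lagrange multiplier) and one recovers the multiplicative weights update

    \[x_{t+1,i} = \frac{x_{t,i} \exp(- \eta g_{t,i})}{\sum_{j=1}^n x_{t,j} \exp(- \eta g_{t,j})} .\]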

 

Next time we will see some of the ‘standard setups’ for Mirror Descent, that is pairs of (constraint set, mirror map) that ‘go well together’.


ORF523: ISTA and FISTA

As we said in the previous lecture it seems stupid to consider that we are in a black-box situation when in fact we know the function to be optimized entirely. Consider for instance the LASSO objective \| X w - Y \|_2^2 + \lambda \|w\|_1, which one wants to minimize over w \in \mathbb{R}^n. By resorting to black-box procedures one would solve this problem with Subgradient Descent, and the rate of convergence would be of order 1/\sqrt{t} as this function is non-smooth (and potentially non-strongly convex). However we will now see that one can take advantage of the form of the LASSO objective, which is a sum of a smooth part and a simple non-smooth part, to obtain rates as fast as 1/t^2.

In this lecture (which follows this paper by Beck and Teboulle) we consider the unconstrained minimization of a sum of two functions f and g that satisfy the following requirements:

(i) f+g admits a minimizer x^* on \mathbb{R}^n.

(ii) f and g are convex, and f is \beta-smooth.

(iii) g is known and f is accessible with a 1^{st}-order oracle.

As we will see next, for the proposed algorithm to be computationally efficient g also needs to be ‘simple’. For instance a separable function (i.e., g(x) = \sum_{i=1}^n g_i(x_i)) will be considered as a simple function. Our prime example will be g(x) = \|x\|_1.

 

ISTA (Iterative Shrinkage-Thresholding Algorithm)

Recall that the Gradient Descent algorithm to optimize the smooth function f is simply given by

    \[x_{t+1} = x_t - \eta \nabla f(x_t) ,\]

which can be written in the proximal form as

    \[x_{t+1} = \mathrm{argmin}_{x \in \mathbb{R}^n} \ f(x_t) + \nabla f(x_t)^{\top} (x-x_t) + \frac{1}{2\eta} \|x - x_t\|^2_2 .\]

Now here one wants to minimize f+g, and g is assumed to be known and ‘simple’. It seems very natural to consider the following iterative procedure known as ISTA:

    \begin{eqnarray*} x_{t+1} & = & \mathrm{argmin}_{x \in \mathbb{R}^n} \ f(x_t) + \nabla f(x_t)^{\top} (x-x_t) + \frac{1}{2\eta} \|x - x_t\|^2_2 + g(x) \\ & = & \mathrm{argmin}_{x \in \mathbb{R}^n} \ g(x) + \frac{1}{2\eta} \|x - (x_t - \eta \nabla f(x_t)) \|_2^2 . \end{eqnarray*}

It is not too hard to show that ISTA achieves the same convergence rate on f+g as Gradient Descent on f; more precisely, with \eta=\frac{1}{\beta} one has

    \[f(x_t) + g(x_t) - (f(x^*) + g(x^*)) \leq \frac{\beta \|x_1 - x^*\|^2_2}{2 t} .\]

This improved convergence rate over Subgradient Descent comes at a price: computing x_{t+1} may be a difficult optimization problem by itself in general, and this is why one needs to assume that g is ‘simple’. For instance if g can be written as g(x) = \sum_{i=1}^n g_i(x_i) then one can compute x_{t+1} by solving n convex problems in dimension 1. In the case where g(x) = \lambda \|x\|_1 this one-dimensional problem is given by:

    \[\min_{x \in \mathbb{R}} \ \lambda |x| + \frac{1}{2 \eta}(x - x_0)^2, \ \text{where} \ x_0 \in \mathbb{R} .\]

Elementary computations show that this problem has an analytical solution given by \tau_{\lambda \eta}(x_0), where \tau is the shrinkage operator defined by

    \[\tau_{\alpha}(x) = (|x|-\alpha)_+ \mathrm{sign}(x) .\]
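
Putting the pieces together, here is a minimal numpy sketch of ISTA for the LASSO objective \|Xw - Y\|_2^2 + \lambda \|w\|_1; the function names are illustrative, and the smoothness constant is taken as \beta = 2 \|X\|_{op}^2 (twice the squared spectral norm of X):

    import numpy as np

    def soft_threshold(x, alpha):
        """The shrinkage operator tau_alpha applied componentwise."""
        return np.sign(x) * np.maximum(np.abs(x) - alpha, 0.0)

    def ista(X, Y, lam, T):
        """ISTA sketch for f(w) + g(w) with f(w) = ||X w - Y||_2^2 and g(w) = lam * ||w||_1."""
        w = np.zeros(X.shape[1])
        beta = 2.0 * np.linalg.norm(X, 2) ** 2                 # f is beta-smooth with beta = 2 ||X||_op^2
        eta = 1.0 / beta
        for _ in range(T):
            grad_f = 2.0 * X.T @ (X @ w - Y)                   # gradient of the smooth part
            w = soft_threshold(w - eta * grad_f, lam * eta)    # prox step on g = lam * ||.||_1
        return w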

 

FISTA (Fast ISTA)

As we have seen in this lecture, the optimal rate of convergence for smooth functions can be obtained with Nesterov’s Accelerated Gradient Descent. Combining this idea with ISTA one gets FISTA which is described as follows. Let

    \[\lambda_0 = 0, \ \lambda_{s} = \frac{1 + \sqrt{1+ 4 \lambda_{s-1}^2}}{2}, \ \text{and} \ \gamma_s = \frac{1 - \lambda_s}{\lambda_{s+1}}.\]

Let x_1 = y_1 an arbitrary initial point, and

    \begin{eqnarray*} y_{s+1} & = & \mathrm{argmin}_{x \in \mathbb{R}^n} \ g(x) + \frac{\beta}{2} \|x - (x_s - \frac{1}{\beta} \nabla f(x_s)) \|_2^2 , \\ x_{s+1} & = & (1 - \gamma_s) y_{s+1} + \gamma_s y_s . \end{eqnarray*}

Again it is not hard to show that the rate of FISTA is similar to the one of Nesterov’s Accelerated Gradient Descent, more precisely:

    \[f(y_t) + g(y_t) - (f(x^*) + g(x^*)) \leq \frac{2 \beta \|x_1 - x^*\|^2}{t^2} .\]
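
Again for concreteness, here is a minimal numpy sketch of FISTA on the same LASSO objective, following the recursion above; the same caveats as for the ISTA sketch apply:

    import numpy as np

    def soft_threshold(x, alpha):
        return np.sign(x) * np.maximum(np.abs(x) - alpha, 0.0)

    def fista(X, Y, lam, T):
        """FISTA sketch for f(w) = ||X w - Y||_2^2 and g(w) = lam * ||w||_1."""
        w_x = w_y = np.zeros(X.shape[1])                                # x_1 = y_1
        beta = 2.0 * np.linalg.norm(X, 2) ** 2
        lam_s = 1.0                                                     # lambda_1 = 1 since lambda_0 = 0
        for _ in range(T):
            lam_next = (1.0 + np.sqrt(1.0 + 4.0 * lam_s ** 2)) / 2.0    # lambda_{s+1}
            gamma = (1.0 - lam_s) / lam_next                            # gamma_s (non-positive)
            grad_f = 2.0 * X.T @ (X @ w_x - Y)
            y_next = soft_threshold(w_x - grad_f / beta, lam / beta)    # ISTA step from x_s
            w_x = (1.0 - gamma) * y_next + gamma * w_y                  # x_{s+1}
            w_y, lam_s = y_next, lam_next
        return w_y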
