In optimization we are ‘given’ a set $latex {\mathcal{X} \subset {\mathbb R}^n}&fg=000000$, and a real-valued function $latex {f}&fg=000000$ defined on this set. The objective is to find a point $latex {x^* \in \mathcal{X}}&fg=000000$ which minimizes $latex {f}&fg=000000$ on $latex {\mathcal{X}}&fg=000000$.

This goal is ambitious. It encompasses an enormous variety of problems, from engineering to politics. Here is an example that immediately suggests the difficulty of the problem: building an AI (Artificial Intelligence) for a game is an optimization problem, since one wants to find a strategy that maximizes the chances of winning.

In this course we are going to explore the frontiers of what we know about mathematical optimization. We will discuss both positive results (how one can solve such and such optimization problem) and negative ones (statements of the form: ‘this family of problems is too hard to be solved in reasonable time’). The course will be divided into three parts, each of which explores a different aspect of mathematical optimization.

**Part 1: Computational complexity of optimization**

The idea here is to look at optimization from the most fundamental perspective: how many ‘elementary operations’ does one need to perform to find $latex {x^*}&fg=000000$? We will see that this question is very far from being resolved, and that there might be good reasons why it is so difficult. Here is a sample of the contents of this chapter:

- The ellipsoid method
- Conic Programming and Interior point methods
- P vs. NP
- SDP relaxation of MAX-CUT
- Extended formulations and Yannakakis’ Theorem
- Dynamic programming

**Part 2: Oracle complexity of optimization**

The first part of the course focuses on complexity in terms of ‘elementary operations’. Here we take another point of view: we simply count the number of ‘questions’ we ask about the function $latex {f}&fg=000000$. More formally, we assume that we have an oracle that can give us the value and gradient of $latex {f}&fg=000000$ at any point $latex {x \in \mathcal{X}}&fg=000000$. The oracle complexity is then the number of calls to this oracle one needs to make to find $latex {x^*}&fg=000000$. While somewhat less fundamental than computational complexity, oracle complexity allows for an almost ‘complete’ theory. Here is a partial list of topics for this part:

- Oracle complexity of Lipschitz optimization
- Gradient Descent for smooth convex optimization (respectively smooth and strongly convex)
- Oracle complexity lower bounds for smooth convex optimization (respectively smooth and strongly convex)
- Optimal methods: Nesterov’s Accelerated Gradient Descent, FISTA
- Mirror Descent
- Mirror Descent for saddle points
- Coordinate Descent, Conditional Gradient Method (aka Frank-Wolfe)

**Part 3: Statistical complexity of stochastic optimization**

In many situations one does not have perfect knowledge of the function $latex {f}&fg=000000$. In this last part of the course we explore what happens when the oracle gives noisy answers. Partial list of topics:

- Mirror Descent for stochastic optimization
- Lower bounds for stochastic optimization
- Randomized methods for deterministic optimization
- $latex {0^{th}}&fg=000000$ order stochastic optimization

In most of this course we focus on a convex constraint set $latex {\mathcal{X}}&fg=000000$ and a convex function $latex {f}&fg=000000$. I would like to review here the basic definitions and results related to convexity. I will also show why, ‘in some sense’, we can restrict our attention to convexity without loss of generality.

**Convex sets**

Definition 1. $latex {\mathcal{X}}&fg=000000$ is convex if for every pair of points $latex {x, y \in \mathcal{X}}&fg=000000$ the segment $latex {[x,y]}&fg=000000$ is included in $latex {\mathcal{X}}&fg=000000$, that is, $latex \displaystyle \forall \gamma \in [0,1], (1-\gamma) x + \gamma y \in \mathcal{X}.&fg=000000$

$latex {\mathcal{X}}&fg=000000$ is a convex body if it is convex, compact, and with non-empty interior.
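As a quick numerical sanity check (a sketch in Python with numpy, not part of the notes; all names are illustrative), one can verify the definition on the closed Euclidean unit ball:

```python
import numpy as np

rng = np.random.default_rng(0)

def in_unit_ball(z, tol=1e-9):
    # membership test for the closed Euclidean unit ball (with a small
    # tolerance for floating-point round-off)
    return np.linalg.norm(z) <= 1.0 + tol

# Draw random pairs x, y in the ball and check that every sampled point of
# the segment [x, y] stays inside, as Definition 1 requires.
convex_check = True
for _ in range(100):
    x = rng.normal(size=3)
    x /= max(1.0, np.linalg.norm(x))   # project into the ball
    y = rng.normal(size=3)
    y /= max(1.0, np.linalg.norm(y))
    for gamma in np.linspace(0.0, 1.0, 11):
        convex_check &= in_unit_ball((1.0 - gamma) * x + gamma * y)
```

Of course this only tests finitely many pairs; here it merely illustrates the definition, it does not prove convexity.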

Definition 2. The convex hull $latex {\mathrm{conv}(\mathcal{X})}&fg=000000$ of $latex {\mathcal{X}}&fg=000000$ is defined as the set of convex combinations of points from $latex {\mathcal{X}}&fg=000000$, that is, $latex \displaystyle \mathrm{conv}(\mathcal{X}) = \left\{\sum_{i=1}^N p_i x_i , \; N \geq 1, x_1, \hdots, x_N \in \mathcal{X}, p_1, \hdots, p_N \geq 0 \; \text{s.t.} \; \sum_{i=1}^N p_i =1\right\} .&fg=000000$

It is easy to check that the convex hull can also be defined as the minimal (for inclusion) convex set that contains $latex {\mathcal{X}}&fg=000000$. A basic result about convex sets that we shall use extensively is the Separation Theorem.
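To illustrate Definition 2 numerically (again a hedged sketch, not part of the notes): the convex hull of the four corners of the unit square is the square itself, so every convex combination of the corners must land in $latex {[0,1]^2}&fg=000000$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Corners of the unit square in R^2; their convex hull is the whole square.
corners = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])

hull_check = True
for _ in range(200):
    p = rng.dirichlet(np.ones(4))   # weights p_i >= 0 with sum_i p_i = 1
    z = p @ corners                 # the convex combination sum_i p_i x_i
    hull_check &= bool(np.all(z >= -1e-12) and np.all(z <= 1.0 + 1e-12))
```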

Theorem 3 (Separation Theorem). Let $latex {\mathcal{X} \subset {\mathbb R}^n}&fg=000000$ be a closed convex set, and $latex {x_0 \in {\mathbb R}^n \setminus \mathcal{X}}&fg=000000$. Then, there exists $latex {w \in {\mathbb R}^n}&fg=000000$ and $latex {t \in {\mathbb R}}&fg=000000$ such that $latex \displaystyle w^{\top} x_0 < t, \; \text{and} \; \forall x \in \mathcal{X}, w^{\top} x \geq t.&fg=000000$

Note that if $latex {\mathcal{X}}&fg=000000$ is not closed then one can only guarantee that $latex {w^{\top} x_0 \leq w^{\top} x, \forall x \in \mathcal{X}}&fg=000000$ (and $latex {w \neq 0}&fg=000000$). A related result is the Supporting Hyperplane Theorem.

Theorem 4 (Supporting Hyperplane Theorem). Let $latex {\mathcal{X} \subset {\mathbb R}^n}&fg=000000$ be a convex set, and $latex {x_0 \in \partial \mathcal{X}}&fg=000000$. Then, there exists $latex {w \in {\mathbb R}^n, w \neq 0}&fg=000000$ such that $latex \displaystyle \forall x \in \mathcal{X}, w^{\top} x \geq w^{\top} x_0.&fg=000000$

**Convex functions**

Definition 5. Let $latex {\mathcal{X} \subset {\mathbb R}^n}&fg=000000$ be convex, and $latex {f : \mathcal{X} \rightarrow {\mathbb R}}&fg=000000$. $latex {f}&fg=000000$ is convex if it always lies below its chords, that is, $latex \displaystyle \forall (x,y,\gamma) \in \mathcal{X}^2 \times [0,1], f((1-\gamma) x + \gamma y) \leq (1-\gamma)f(x) + \gamma f(y) .&fg=000000$

It is obvious that a function $latex {f}&fg=000000$ is convex if and only if its epigraph

$latex \displaystyle \mathrm{epi}(f) = \{(x,t) \in \mathcal{X} \times {\mathbb R} : t \geq f(x) \} ,&fg=000000$

is a convex set.
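The chord inequality of Definition 5 is easy to probe numerically; here is a small sketch (not part of the notes) checking it on a grid for the classical convex function $latex {f = \exp}&fg=000000$:

```python
import numpy as np

f = np.exp  # a classical convex function on the real line

grid = np.linspace(-2.0, 2.0, 41)
gammas = np.linspace(0.0, 1.0, 21)

chord_check = True
for x in grid:
    for y in grid:
        for g in gammas:
            # chord inequality: f((1-g)x + g y) <= (1-g) f(x) + g f(y),
            # up to floating-point tolerance
            lhs = f((1.0 - g) * x + g * y)
            rhs = (1.0 - g) * f(x) + g * f(y)
            chord_check &= bool(lhs <= rhs + 1e-12)
```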

Definition 6. Let $latex {\mathcal{X} \subset {\mathbb R}^n}&fg=000000$, and $latex {f : \mathcal{X} \rightarrow {\mathbb R}}&fg=000000$. Then $latex {g \in {\mathbb R}^n}&fg=000000$ is a subgradient of $latex {f}&fg=000000$ at $latex {x \in \mathcal{X}}&fg=000000$ if for any $latex {y \in \mathcal{X}}&fg=000000$ one has $latex \displaystyle f(x) - f(y) \leq g^{\top} (x - y) .&fg=000000$

The set of subgradients of $latex {f}&fg=000000$ at $latex {x}&fg=000000$ is denoted $latex {\partial f (x)}&fg=000000$.
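A standard example, probed numerically (a sketch under the obvious caveat that a grid check is not a proof): at $latex {x=0}&fg=000000$ the absolute value is not differentiable, but its subdifferential is the whole interval $latex {[-1,1]}&fg=000000$. Slopes inside the interval pass the subgradient test of Definition 6, slopes outside fail:

```python
import numpy as np

def is_subgradient(g, x, f, test_points, tol=1e-12):
    # check Definition 6, f(x) - f(y) <= g * (x - y), on a grid of points y
    return all(f(x) - f(y) <= g * (x - y) + tol for y in test_points)

ys = np.linspace(-5.0, 5.0, 201)

# candidate slopes inside [-1, 1] versus outside of it, for f = |.| at x = 0
inside = all(is_subgradient(g, 0.0, abs, ys) for g in (-1.0, -0.5, 0.0, 0.5, 1.0))
outside = any(is_subgradient(g, 0.0, abs, ys) for g in (-1.5, 1.5))
```

Here `inside` comes out `True` and `outside` comes out `False`, matching $latex {\partial |\cdot| (0) = [-1,1]}&fg=000000$.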

Proposition 7. Let $latex {\mathcal{X} \subset {\mathbb R}^n}&fg=000000$ be convex, and $latex {f : \mathcal{X} \rightarrow {\mathbb R}}&fg=000000$. If $latex {\forall x \in \mathcal{X}, \partial f(x) \neq \emptyset}&fg=000000$ then $latex {f}&fg=000000$ is convex. Conversely, if $latex {f}&fg=000000$ is convex then for any $latex {x \in \mathrm{int}(\mathcal{X}), \partial f(x) \neq \emptyset}&fg=000000$. Furthermore if $latex {f}&fg=000000$ is convex and differentiable at $latex {x}&fg=000000$ then $latex {\nabla f(x) \in \partial f(x)}&fg=000000$.

*Proof:* The first claim is almost trivial: let $latex {g \in \partial f((1-\gamma) x + \gamma y)}&fg=000000$, then by definition one has

$latex \displaystyle \begin{array}{rcl} & & f((1-\gamma) x + \gamma y) \leq f(x) + \gamma g^{\top} (y - x) , \\ & & f((1-\gamma) x + \gamma y) \leq f(y) + (1-\gamma) g^{\top} (x - y) , \end{array} &fg=000000$

which clearly shows that $latex {f}&fg=000000$ is convex by adding the two (appropriately rescaled) inequalities.

Now let us prove that a convex function $latex {f}&fg=000000$ has subgradients in the interior of $latex {\mathcal{X}}&fg=000000$. We build a subgradient by using a supporting hyperplane to the epigraph of the function. Let $latex {x \in \mathcal{X}}&fg=000000$. Then clearly $latex {(x,f(x)) \in \partial \ \mathrm{epi}(f)}&fg=000000$, and $latex {\mathrm{epi}(f)}&fg=000000$ is a convex set. Thus by using the Supporting Hyperplane Theorem, there exists $latex {(a,b) \in {\mathbb R}^n \times {\mathbb R}}&fg=000000$ such that

$latex \displaystyle a^{\top} x + b f(x) \geq a^{\top} y + b t, \forall (y,t) \in \mathrm{epi}(f) . \ \ \ \ \ (1)&fg=000000$

Clearly, by letting $latex {t}&fg=000000$ tend to infinity, one can see that $latex {b \leq 0}&fg=000000$. Now let us assume that $latex {x}&fg=000000$ is in the interior of $latex {\mathcal{X}}&fg=000000$. Then for $latex {\epsilon > 0}&fg=000000$ small enough, $latex {y=x + \epsilon a \in \mathcal{X}}&fg=000000$, which implies that $latex {b}&fg=000000$ cannot be equal to $latex {0}&fg=000000$ (indeed, with $latex {b=0}&fg=000000$, (1) would give $latex {a^{\top} x \geq a^{\top}(x+\epsilon a)}&fg=000000$, forcing $latex {a=0}&fg=000000$ and contradicting $latex {(a,b) \neq 0}&fg=000000$). Thus rewriting (1) for $latex {t=f(y)}&fg=000000$ one obtains

$latex \displaystyle f(x) - f(y) \leq \frac{1}{|b|} a^{\top} (x - y) .&fg=000000$

Thus $latex {a / |b| \in \partial f(x)}&fg=000000$ which concludes the proof of the second claim.

Finally, let $latex {f}&fg=000000$ be convex and differentiable at $latex {x}&fg=000000$. Then by convexity, for any $latex {\gamma \in (0,1]}&fg=000000$:

$latex \displaystyle \begin{array}{rcl} f(y) & \geq & \frac{f((1-\gamma) x + \gamma y) - (1- \gamma) f(x)}{\gamma} \\ & = & f(x) + \frac{f(x + \gamma (y - x)) - f(x)}{\gamma} \\ & \rightarrow_{\gamma \rightarrow 0} & f(x) + \nabla f(x)^{\top} (y-x), \end{array} &fg=000000$

which shows that $latex {\nabla f(x) \in \partial f(x)}&fg=000000$. $latex \Box&fg=000000$
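The last claim of Proposition 7 can also be checked numerically; here is a sketch (not part of the notes) for the convex differentiable function $latex {f(z) = \|z\|^2}&fg=000000$, whose gradient is $latex {\nabla f(z) = 2z}&fg=000000$:

```python
import numpy as np

rng = np.random.default_rng(2)

def f(z):
    return float(z @ z)       # f(z) = ||z||^2, convex and differentiable

def grad_f(z):
    return 2.0 * z            # its gradient, nabla f(z) = 2z

# Proposition 7: at every point the gradient is a subgradient, i.e.
# f(x) - f(y) <= <nabla f(x), x - y> for all y.
grad_check = True
for _ in range(500):
    x = rng.normal(size=4)
    y = rng.normal(size=4)
    grad_check &= bool(f(x) - f(y) <= grad_f(x) @ (x - y) + 1e-9)
```

For this particular $latex {f}&fg=000000$ the inequality reduces to $latex {0 \leq \|x-y\|^2}&fg=000000$, so the check is guaranteed to pass.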

**Why convexity?**

This section is neither meant to provide a formal argument, nor to give an algorithmic idea. However it shows that, intuitively, assuming that $latex {f}&fg=000000$ and $latex {\mathcal{X}}&fg=000000$ are convex comes ‘without loss of generality’. In particular this should be taken as a word of caution: an optimization problem with convex constraints and a convex objective is **not** easy in general. Somehow the difficulty of the problem is encoded in the geometry of $latex {\mathcal{X}}&fg=000000$: if $latex {\mathcal{X}}&fg=000000$ is ‘nice’, then minimizing convex objectives over $latex {\mathcal{X}}&fg=000000$ is ‘easy’. The rest of the class will make this statement much more precise. Let us now show why convexity is ‘universal’. First note that we can rewrite our problem as follows:

$latex \displaystyle \inf_{x \in \mathcal{X}} f(x) = \inf_{(x,t) \in \mathrm{epi}(f)} t ,&fg=000000$

and a solution $latex {(x^*,t^*)}&fg=000000$ to the latter problem immediately gives a solution to the former. Thus we have reduced the case of a general function $latex {f}&fg=000000$ to that of a simple linear function of the form $latex {f(x) = c^{\top} x}&fg=000000$ for some $latex {c \in {\mathbb R}^n}&fg=000000$. Now remark that we always have

$latex \displaystyle \inf_{x \in \mathcal{X}} c^{\top} x = \inf_{x \in \mathrm{conv}(\mathcal{X})} c^{\top} x .&fg=000000$

Indeed, the inequality $latex {\inf_{x \in \mathrm{conv}(\mathcal{X})} c^{\top} x \leq \inf_{x \in \mathcal{X}} c^{\top} x}&fg=000000$ is clear since $latex {\mathcal{X} \subset \mathrm{conv}(\mathcal{X})}&fg=000000$, and the reverse inequality follows from the following computation:

$latex \displaystyle \begin{array}{rcl} \inf_{x \in \mathrm{conv}(\mathcal{X})} c^{\top} x & = & \inf_{N \geq 1, x_1, \hdots, x_N \in \mathcal{X}, p_1, \hdots, p_N \geq 0 \; \text{s.t.} \; \sum_{i=1}^N p_i =1} \sum_{i=1}^N p_i (c^{\top} x_i) \\ & \geq & \inf_{N \geq 1, x_1, \hdots, x_N \in \mathcal{X}, p_1, \hdots, p_N \geq 0 \; \text{s.t.} \; \sum_{i=1}^N p_i =1} \sum_{i=1}^N p_i (\min_j c^{\top} x_j) \\ & = & \inf_{x \in \mathcal{X}} c^{\top} x . \end{array} &fg=000000$

Here a solution to $latex {\inf_{x \in \mathrm{conv}(\mathcal{X})} c^{\top} x}&fg=000000$ gives a solution to $latex {\inf_{x \in \mathcal{X}} c^{\top} x}&fg=000000$ if one is computationally able to decompose $latex {x \in \mathrm{conv}(\mathcal{X})}&fg=000000$ into a convex combination of points in $latex {\mathcal{X}}&fg=000000$ (this can be a very difficult task in some cases!). In any case we showed that

$latex \displaystyle \inf_{x \in \mathcal{X}} f(x) = \inf_{(x,t) \in \mathrm{conv}(\mathrm{epi}(f))} t ,&fg=000000$

and thus we reduced an arbitrary optimization problem to a linear optimization over a convex set.
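The identity $latex {\inf_{x \in \mathcal{X}} c^{\top} x = \inf_{x \in \mathrm{conv}(\mathcal{X})} c^{\top} x}&fg=000000$ can be illustrated with a short numerical sketch (not part of the notes; the finite random set is just an example of a highly non-convex $latex {\mathcal{X}}&fg=000000$):

```python
import numpy as np

rng = np.random.default_rng(3)

# A finite (hence highly non-convex) feasible set X in R^2 and a linear
# objective c^T x.
X = rng.normal(size=(20, 2))
c = np.array([1.0, -2.0])

best_on_X = min(float(c @ x) for x in X)

# Random points of conv(X) are convex combinations of the rows of X; by the
# computation above, none of them can beat the best point of X itself.
conv_check = True
for _ in range(300):
    p = rng.dirichlet(np.ones(len(X)))
    z = p @ X
    conv_check &= bool(float(c @ z) >= best_on_X - 1e-9)
```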

## By Kunal Chaudhury March 11, 2015 - 6:41 am

Nice post!

I was wondering if requiring X to have a non-empty interior is somewhat restrictive. It appears that the analysis on this page (and the subsequent analysis of the ellipsoid method) would go through (with minor changes) if one instead required X to have a non-empty relative interior.

## By mbkv September 28, 2013 - 6:07 am

I got here from Amir’s guest post on SOS. First of all, thanks for your nice post. So, the take-home point from this is that “any arbitrary optimization problem, regardless of the nature of the objective f(x) we want to minimize and regardless of the nature of the set (the set can be semialgebraic?) over which we want to minimize, can be reduced to a linear optimization over a convex set”.

Have I understood and stated it correctly?

## By Sebastien Bubeck September 28, 2013 - 2:25 pm

Yes, this is correct, but note that the reduction is absolutely non-algorithmic; this is only a formal rewriting!

## By haipeng February 12, 2013 - 10:10 am

Proposition 7 says that, for a convex function, every interior point has subgradients. What about points on the boundary?

## By Sebastien Bubeck February 12, 2013 - 4:20 pm

Convex functions are really nice in the interior of the domain, but they can have basically arbitrary behavior on the boundary. Think for example of a convex function defined on the disk $latex x^2 + y^2 \leq 1$. The only constraint for the function on the circle $latex x^2 + y^2 = 1$ is basically that it has to take values larger than those it takes inside the disk. Apart from that the function can do anything on the circle.

If you add continuity to your function, then things cannot be too wild, but subgradients can still be empty: think for example of $latex x \mapsto - \sqrt{x}$ on $latex [0,1]$.

## By Emre February 5, 2013 - 10:49 pm

First!

Also, really good lecture notes. Please keep this up.

## By Sebastien Bubeck February 6, 2013 - 8:17 am

Thanks Emre ;).