Monthly Archives: February 2013
ORF523: Turing Machines
So far we have seen several algorithms to solve various optimization problems. For example, for LPs we have seen that the ellipsoid method has a computational complexity of roughly $O(n^4)$, and we improved this bound with Interior Point Methods … Continue reading
ORF523: Classification, SVM, Kernel Learning
In this post I will go over some simple examples of applications of the optimization techniques that we have seen in the previous lectures. The general theme of these examples is the canonical Machine Learning problem of classification. Since the … Continue reading
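To fix ideas before reading the examples, here is a minimal sketch of one classical approach to classification: training a linear soft-margin SVM by stochastic subgradient descent (a Pegasos-style update). All data and parameter choices below are hypothetical, chosen only to make the sketch run; the post itself may set up the problem differently.

import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data: two Gaussian clouds, labels in {-1, +1}
# (hypothetical data, only to make the sketch runnable).
n, d = 200, 2
X = np.vstack([rng.normal(+2.0, 1.0, size=(n // 2, d)),
               rng.normal(-2.0, 1.0, size=(n // 2, d))])
y = np.hstack([np.ones(n // 2), -np.ones(n // 2)])

lam = 0.1           # regularization parameter in lam/2 ||w||^2 + hinge loss
w = np.zeros(d)     # linear classifier x -> sign(<w, x>) (no bias, for brevity)

for t in range(1, 2001):
    eta = 1.0 / (lam * t)                 # standard decreasing step size
    i = rng.integers(n)                   # sample one training point
    margin = y[i] * X[i].dot(w)
    # Subgradient of lam/2 ||w||^2 + max(0, 1 - y_i <w, x_i>) at w
    grad = lam * w - (y[i] * X[i] if margin < 1 else 0.0)
    w -= eta * grad

print("training accuracy:", np.mean(np.sign(X.dot(w)) == y))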
Guest post by Philippe Rigollet: Estimating structured graphs
Last Wednesday, Sourav Chatterjee was speaking in the Mathematics Colloquium here at Princeton. While he is most famous for his contributions to probability, especially using Stein’s method, this talk was rather statistical. The problem was interesting and its solution simple enough that … Continue reading
ORF523: Lagrangian duality
In this post I review very basic facts on Lagrangian duality. The presentation is extracted from Chapter 5 of Boyd and Vandenberghe. We consider the following problem: $\displaystyle \inf_{x \in \mathbb{R}^n : f_i(x) \leq 0, h_j(x) = 0, … Continue reading
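For orientation, here is a sketch of the standard definitions from Boyd and Vandenberghe, in notation that may differ slightly from the full post: with objective $f$, inequality constraints $f_i(x) \leq 0$ and equality constraints $h_j(x) = 0$, the Lagrangian and the dual function are

$\displaystyle \mathcal{L}(x, \lambda, \mu) = f(x) + \sum_i \lambda_i f_i(x) + \sum_j \mu_j h_j(x), \qquad g(\lambda, \mu) = \inf_{x \in \mathbb{R}^n} \mathcal{L}(x, \lambda, \mu),$

and weak duality says that for any $\lambda \geq 0$ and any $\mu$, $g(\lambda, \mu)$ lower bounds the optimal value of the primal problem.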
ORF523: IPMs for LPs and SDPs
We have seen that, roughly, the complexity of Interior Point Methods to optimize linear functions on a convex set $\mathcal{X}$ (with non-empty interior) with a $\nu$-self-concordant barrier $F$ is of order $O\left(M \sqrt{\nu} \log \frac{\nu}{\epsilon} \right)$ … Continue reading
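To make the bound concrete (a standard instance, not spelled out in this excerpt): for an LP whose feasible set is the polyhedron $\{x : a_i^{\top} x \leq b_i, \ i = 1, \ldots, m\}$, the logarithmic barrier $F(x) = -\sum_{i=1}^m \log(b_i - a_i^{\top} x)$ is $m$-self-concordant, so the guarantee reads $O\left(M \sqrt{m} \log \frac{m}{\epsilon}\right)$.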
ORF523: Interior Point Methods
Khachiyan’s ellipsoid method was such a breakthrough that it made it to The New York Times in 1979. The next time that optimization theory was featured in The New York Times was only a few years later, in 1984, and … Continue reading
ORF523: LP, SDP, and Conic Programming
Recall from the previous lecture that we can ‘solve’ the problem $\min_{x \in \mathcal{X}} f(x)$ (with $\mathcal{X}$ a convex body in $\mathbb{R}^n$ and $f$ a convex function) in ‘time’ $O( \max(M, n^2) n^2 )$ … Continue reading
ORF523: the ellipsoid method
In this lecture we describe the wonderful ellipsoid method. Recall that an ellipsoid is a convex set of the form $\displaystyle \mathcal{E} = \{x \in \mathbb{R}^n : (x - c)^{\top} H^{-1} (x - c) \leq 1 \},$ where … Continue reading
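As a small illustration of the definition (my own sketch; the center $c$ and matrix $H$ below are arbitrary values, not from the lecture), one can test membership in $\mathcal{E}$ numerically:

import numpy as np

# Ellipsoid E = {x : (x - c)^T H^{-1} (x - c) <= 1} with an arbitrary
# center c and positive definite matrix H (illustrative values only).
c = np.array([1.0, 0.0])
H = np.array([[4.0, 1.0],
              [1.0, 2.0]])

def in_ellipsoid(x, c, H):
    # Evaluate (x - c)^T H^{-1} (x - c) without forming H^{-1} explicitly.
    v = x - c
    return float(v @ np.linalg.solve(H, v)) <= 1.0

print(in_ellipsoid(np.array([1.5, 0.5]), c, H))  # True: close to the center
print(in_ellipsoid(np.array([5.0, 5.0]), c, H))  # False: far outside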
ORF523 (‘Advanced Optimization’): Introduction
In optimization we are ‘given’ a set $\mathcal{X} \subset \mathbb{R}^n$, and a real-valued function $f$ defined on this set. The objective is to find a point $x^* \in \mathcal{X}$ which minimizes $f$ on $\mathcal{X}$. … Continue reading
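A toy instance (my own illustration, not from the lecture): take $\mathcal{X} = \{x \in \mathbb{R}^n : \|x\|_2 \leq 1\}$ and $f(x) = \|x - y\|_2^2$ for some fixed $y$ with $\|y\|_2 > 1$; the minimizer is then the Euclidean projection of $y$ onto the ball, namely $x^* = y / \|y\|_2$.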