In this post I review very basic facts on Lagrangian duality. The presentation is extracted from Chapter 5 of Boyd and Vandenberghe.

We consider the following problem:

$latex \displaystyle \inf_{x \in {\mathbb R}^n : f_i(x) \leq 0, h_j(x) = 0, i=1,\hdots,m, j=1,\hdots,p} f(x) . \ \ \ \ \ (1)&fg=000000$

Let $latex {\mathcal{D}}&fg=000000$ be the non-empty domain of definition for this problem (that is, the intersection of the domains of definition of $latex {f, f_1, \hdots, f_m, h_1, \hdots, h_p}&fg=000000$). For $latex {x \in \mathcal{D}, \lambda \in {\mathbb R}_+^m, \nu \in {\mathbb R}^p}&fg=000000$, the Lagrangian associated with the above problem is defined by

$latex \displaystyle L(x, \lambda, \nu) = f(x) + \sum_{i=1}^m \lambda_i f_i(x) + \sum_{i=1}^p \nu_i h_i(x) .&fg=000000$

**Weak duality**

First note that, if $latex {x \in \mathcal{D}}&fg=000000$ is not a valid point for the *primal problem* (1), then

$latex \displaystyle \sup_{\lambda \in {\mathbb R}_+^m, \nu \in {\mathbb R}^p} L(x, \lambda, \nu) = + \infty,&fg=000000$

whereas for $latex {x \in \mathcal{D}}&fg=000000$ which is a valid point for (1) one has

$latex \displaystyle \sup_{\lambda \in {\mathbb R}_+^m, \nu \in {\mathbb R}^p} L(x, \lambda, \nu) = f(x) .&fg=000000$

Taking the infimum over $latex {x \in \mathcal{D}}&fg=000000$ in the two cases above thus gives

$latex \displaystyle \inf_{x\in \mathcal{D}} \sup_{\lambda \in {\mathbb R}_+^m, \nu \in {\mathbb R}^p} L(x, \lambda, \nu) = \inf_{x \in {\mathbb R}^n : f_i(x) \leq 0, h_j(x) = 0, i=1,\hdots,m, j=1,\hdots,p} f(x) . \ \ \ \ \ (2)&fg=000000$
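To make this concrete, here is a small Python sanity check on a toy problem of my own (not from the text): minimize $latex {x^2}&fg=000000$ subject to $latex {1 - x \leq 0}&fg=000000$. It illustrates how the supremum over the multipliers distinguishes valid points from invalid ones:

```python
# Toy problem (my own illustrative example): minimize f(x) = x^2
# subject to f_1(x) = 1 - x <= 0, i.e. x >= 1.
def L(x, lam):
    """Lagrangian L(x, lambda) = f(x) + lambda * f_1(x)."""
    return x**2 + lam * (1 - x)

lams = [0.0, 10.0, 100.0, 1000.0]

# Invalid point (f_1(0.5) = 0.5 > 0): L grows linearly in lam,
# so the sup over lam >= 0 is +infinity.
print(max(L(0.5, lam) for lam in lams))  # 500.25, still growing with lam

# Valid point (f_1(2.0) = -1 <= 0): the constraint term is non-positive,
# so the sup is attained at lam = 0 and equals f(2.0).
print(max(L(2.0, lam) for lam in lams))  # 4.0
```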

On the other hand note that for any $latex {x \in \mathcal{D}}&fg=000000$ which is a valid point for (1), and $latex {\lambda \in {\mathbb R}_+^m, \nu \in {\mathbb R}^p}&fg=000000$, one also has

$latex \displaystyle L(x, \lambda, \nu) \leq f(x) ,&fg=000000$

so that

$latex \displaystyle \inf_{x \in \mathcal{D}} L(x, \lambda, \nu) \leq \inf_{x \in {\mathbb R}^n : f_i(x) \leq 0, h_j(x) = 0, i=1,\hdots,m, j=1,\hdots,p} f(x),&fg=000000$

and taking the supremum over $latex {\lambda \in {\mathbb R}_+^m, \nu \in {\mathbb R}^p}&fg=000000$ yields

$latex \displaystyle \sup_{\lambda \in {\mathbb R}_+^m, \nu \in {\mathbb R}^p} \inf_{x \in \mathcal{D}} L(x, \lambda, \nu) \leq \inf_{x \in {\mathbb R}^n : f_i(x) \leq 0, h_j(x) = 0, i=1,\hdots,m, j=1,\hdots,p} f(x) . \ \ \ \ \ (3)&fg=000000$

Putting together (2) and (3) we arrive at the statement of *weak duality*:

$latex \displaystyle \sup_{\lambda \in {\mathbb R}_+^m, \nu \in {\mathbb R}^p} \inf_{x \in \mathcal{D}} L(x, \lambda, \nu) \leq \inf_{x \in \mathcal{D}} \sup_{\lambda \in {\mathbb R}_+^m, \nu \in {\mathbb R}^p} L(x, \lambda, \nu). \ \ \ \ \ (4)&fg=000000$

As we have seen in (2), the right-hand side of the above inequality corresponds to the primal problem given in (1). We now refer to the left-hand side as the *dual problem*, which corresponds to the maximization over $latex {\lambda \in {\mathbb R}_+^m, \nu \in {\mathbb R}^p}&fg=000000$ of the following function:

$latex \displaystyle g(\lambda, \nu) := \inf_{x \in \mathcal{D}} L(x, \lambda, \nu) . \ \ \ \ \ (5)&fg=000000$
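As an illustration (again a toy example of my own, not from the text), for the one-dimensional problem of minimizing $latex {x^2}&fg=000000$ subject to $latex {1 - x \leq 0}&fg=000000$ the dual function can be computed in closed form, and weak duality checked on a grid:

```python
# Dual function of the toy problem: minimize x^2 subject to 1 - x <= 0.
# For lam >= 0, inf_x [x^2 + lam*(1 - x)] is attained at x = lam/2,
# which gives g(lam) = lam - lam**2/4.
def g(lam):
    return lam - lam**2 / 4

f_star = 1.0  # primal optimum: x* = 1, f(x*) = 1

dual_values = [g(k / 100) for k in range(401)]  # lam on a grid over [0, 4]
assert all(v <= f_star for v in dual_values)    # weak duality: g(lam) <= f(x*)
print(max(dual_values))  # 1.0, attained at lam = 2: no duality gap here
```

(The absence of a gap in this example is consistent with the next section: the problem is convex and Slater's condition holds, e.g. at $latex {x = 2}&fg=000000$.)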

**Strong duality**

Weak duality always holds (note that we have not made any convexity assumption so far); it corresponds to the fact that a ‘max-min’ is always smaller than or equal to a ‘min-max’. We say that *strong duality* holds if the inequality (4) holds with equality. The next two results give different conditions under which a ‘max-min’ is equal to a ‘min-max’.

**Theorem 1** Assume that $latex {f, f_1, \hdots, f_m}&fg=000000$ are convex functions, $latex {h_1, \hdots, h_p}&fg=000000$ are affine functions, and that there exists a point $latex {x}&fg=000000$ in the relative interior of the convex set $latex {\mathcal{D}}&fg=000000$ (that is, in the interior of $latex {\mathcal{D}}&fg=000000$ when $latex {\mathcal{D}}&fg=000000$ is viewed as a subset of the affine subspace generated by $latex {\mathcal{D}}&fg=000000$) such that

$latex \displaystyle f_i(x) < 0, i =1,\hdots, m \; \text{(Slater’s condition)} .&fg=000000$

Then strong duality holds, that is (4) holds with equality.

**Theorem 2 (Sion’s minimax)** Let $latex {\mathcal{X} \subset {\mathbb R}^n, \mathcal{Y} \subset {\mathbb R}^m}&fg=000000$ be convex sets such that one of them is compact. Let $latex {\phi : \mathcal{X} \times \mathcal{Y} \rightarrow {\mathbb R}}&fg=000000$ be a continuous function such that $latex {\phi(\cdot, y)}&fg=000000$ is convex and $latex {\phi(x, \cdot)}&fg=000000$ is concave. Then:

$latex \displaystyle \sup_{y \in \mathcal{Y}} \inf_{x \in \mathcal{X}} \phi(x,y) = \inf_{x \in \mathcal{X}} \sup_{y \in \mathcal{Y}} \phi(x,y) .&fg=000000$
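A crude numeric illustration of the theorem (my own check, not from the text): take the bilinear function $latex {\phi(x,y) = xy}&fg=000000$ on $latex {[-1,1]^2}&fg=000000$, which is convex (indeed linear) in $latex {x}&fg=000000$ and concave in $latex {y}&fg=000000$, on compact sets. Evaluating both sides on a grid:

```python
# Grid check of Sion's minimax theorem for phi(x, y) = x*y on [-1,1]^2
# (linear, hence convex, in x; linear, hence concave, in y; compact sets).
grid = [i / 10 - 1 for i in range(21)]  # -1.0, -0.9, ..., 1.0

minmax = min(max(x * y for y in grid) for x in grid)  # inf_x sup_y phi
maxmin = max(min(x * y for x in grid) for y in grid)  # sup_y inf_x phi

# Both sides equal 0: the saddle point sits at the origin.
print(minmax == 0.0, maxmin == 0.0)  # True True
```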

**KKT conditions**

Now let us assume that we are in the conditions of Theorem 1 (with a convex objective, convex inequality constraints, affine equality constraints, and Slater’s condition). Furthermore we assume from now on that all functions in play are differentiable.

Let $latex {x^*}&fg=000000$ be a primal solution (that is a solution to the optimization problem (1)), and let $latex {(\lambda^*, \nu^*)}&fg=000000$ be a dual solution (that is a solution of the maximization of $latex {g(\lambda, \nu)}&fg=000000$ defined in (5) over $latex {\lambda \in {\mathbb R}_+^m, \nu \in {\mathbb R}^p}&fg=000000$).

The following holds true (the first line follows by strong duality, the second by definition, the third is trivial, and the last one by the fact that $latex {x^*}&fg=000000$ and $latex {(\lambda^*, \nu^*)}&fg=000000$ are valid points for their respective optimization problems):

$latex \displaystyle \begin{array}{rcl} f(x^*) & = & g(\lambda^*, \nu^*) \\ & = & \inf_{x \in \mathcal{D}} L(x, \lambda^*, \nu^*) \\ & \leq & L(x^*, \lambda^*, \nu^*) \\ & \leq & f(x^*) . \end{array} &fg=000000$

Thus both inequalities are in fact equalities. Since $latex {h_j(x^*) = 0}&fg=000000$ for all $latex {j}&fg=000000$ ($latex {x^*}&fg=000000$ is a valid point for (1)), the second equality implies that $latex {\sum_{i=1}^m \lambda_i^* f_i(x^*) = 0}&fg=000000$, but since each term is non-positive, this means

$latex \displaystyle \forall i =1, \hdots, m, \; \lambda_i^* f_i(x^*) = 0 \; \text{(Complementary slackness at optimum).}&fg=000000$

On the other hand the first inequality now says that $latex {x^*}&fg=000000$ is a minimizer over $latex {\mathcal{D}}&fg=000000$ of $latex {L(x, \lambda^*, \nu^*)}&fg=000000$, thus by first order optimality one has

$latex \displaystyle \nabla f(x^*) + \sum_{i=1}^m \lambda_i^* \nabla f_i(x^*) + \sum_{i=1}^p \nu_i^* \nabla h_i(x^*) = 0 .&fg=000000$

Complementary slackness, together with the above equality and the fact that $latex {x^*}&fg=000000$ and $latex {(\lambda^*, \nu^*)}&fg=000000$ are valid points for their respective optimization problems are called the KKT (Karush-Kuhn-Tucker) conditions. We just proved that these conditions are necessary (and for this we did not need convexity, just strong duality), but in fact (thanks to convexity) the KKT conditions are also sufficient for optimality.
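On the toy problem of minimizing $latex {x^2}&fg=000000$ subject to $latex {1 - x \leq 0}&fg=000000$ (my own running example), the KKT conditions can be solved by hand and then verified directly:

```python
# KKT check for: minimize x^2 subject to f_1(x) = 1 - x <= 0.
# Stationarity 2x - lam = 0 combined with lam*(1 - x) = 0 and
# feasibility gives x* = 1, lam* = 2.
x_star, lam_star = 1.0, 2.0

stationarity = 2 * x_star + lam_star * (-1.0)  # grad f + lam* grad f_1
comp_slack   = lam_star * (1 - x_star)         # lam* f_1(x*)
feasible     = (1 - x_star <= 0) and (lam_star >= 0)

print(stationarity, comp_slack, feasible)  # 0.0 0.0 True
```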

**Duality gap**

By weak duality one always has $latex {g(\lambda, \nu) \leq f(x^*)}&fg=000000$ for any $latex {\lambda \in {\mathbb R}_+^m, \nu \in {\mathbb R}^p}&fg=000000$, and thus, for any $latex {x}&fg=000000$,

$latex \displaystyle f(x) - f(x^*) \leq f(x) - g(\lambda, \nu) .&fg=000000$

The right-hand side in the above inequality is called the *duality gap*, and it gives an upper bound on how suboptimal the point $latex {x}&fg=000000$ is for the primal problem. In other words, a dual point $latex {(\lambda, \nu)}&fg=000000$ gives a certificate of how good a primal point $latex {x}&fg=000000$ is.
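On the toy problem of minimizing $latex {x^2}&fg=000000$ subject to $latex {1 - x \leq 0}&fg=000000$ (my own example, with optimum $latex {f(x^*) = 1}&fg=000000$ at $latex {x^* = 1}&fg=000000$), one can see the certificate at work: the gap is computable without knowing $latex {x^*}&fg=000000$, and it upper-bounds the true suboptimality.

```python
# Duality gap as a suboptimality certificate on the toy problem
# minimize x^2 subject to 1 - x <= 0 (optimum f(x*) = 1 at x* = 1).
def f(x):
    return x**2

def g(lam):
    # dual function: g(lam) = lam - lam**2/4 for lam >= 0
    return lam - lam**2 / 4

x, lam = 1.5, 1.5                 # a feasible primal point and a dual point
gap = f(x) - g(lam)               # computable without knowing x*
true_suboptimality = f(x) - 1.0   # requires knowledge of f(x*)

print(gap, true_suboptimality)    # 1.3125 1.25: the gap upper-bounds it
```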

Interior Point Methods can be turned into an optimization procedure for *both* the primal and the dual, by tracing a central path in each problem. This is particularly useful, as by doing so one can evaluate the suboptimality gap in the primal at any given time.