# Local max-cut in smoothed polynomial time

Omer Angel, Yuval Peres, Fan Wei, and I have just posted to the arXiv our paper showing that local max-cut is in smoothed polynomial time. In this post I briefly explain what the problem is, and I give a short proof of the previous state of the art on this problem, a paper by Etscheid and Röglin showing that local max-cut is in smoothed quasi-polynomial time.

## Local max-cut and smoothed analysis

Let $G = (V, E)$ be a connected graph with $n$ vertices and $w : E \rightarrow [-1, 1]$ be an edge weight function. The local max-cut problem asks to find a partition of the vertices $\sigma : V \rightarrow \{-1, 1\}$ whose total cut weight

$$\frac{1}{2} \sum_{\{u,v\} \in E} w(\{u,v\}) \left(1 - \sigma(u) \sigma(v)\right)$$

is locally maximal, in the sense that one cannot increase the cut weight by changing the value of $\sigma$ at a single vertex (recall that actually finding the global maximum is NP-hard). See the papers linked to above for motivation on this problem.

There is a simple local search algorithm for this problem, sometimes referred to as “FLIP”: start from some initial $\sigma$ and iteratively flip vertices (i.e., change the sign of $\sigma$ at a vertex) to improve the cut weight, until reaching a local maximum. It is easy to build instances where FLIP takes exponential time, yet in “practice” FLIP seems to always converge quickly. This motivates the smoothed analysis of FLIP: we want to understand the typical number of steps of FLIP when the edge weights are perturbed by a small amount of noise. Formally, we now assume that the weight on edge $e \in E$ is given by a random variable $X_e \in [-1, 1]$ which has a density with respect to the Lebesgue measure bounded from above by $\phi$ (for example this forbids $X_e$ to be too close to a point mass). We assume that these random variables are independent.
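As a concrete illustration, here is a minimal Python sketch of FLIP (all names are illustrative, and the graph is given as a dictionary of edge weights; this is not the paper's code):

```python
import random

def cut_weight(weights, sigma):
    """Total weight of the edges crossing the partition sigma: V -> {-1, +1}."""
    return sum(w for (u, v), w in weights.items() if sigma[u] != sigma[v])

def flip(weights, sigma):
    """Greedy local search: flip any vertex that strictly increases the cut
    weight, until no single flip improves (i.e., a local maximum)."""
    vertices = {u for e in weights for u in e}
    steps, improved = 0, True
    while improved:
        improved = False
        for v in vertices:
            # Gain of flipping v: sum over edges incident to v of
            # w_e * sigma(u) * sigma(v), since flipping v toggles the
            # crossing status of exactly those edges.
            gain = sum(w * sigma[a] * sigma[b]
                       for (a, b), w in weights.items() if v in (a, b))
            if gain > 0:
                sigma[v] *= -1
                steps += 1
                improved = True
    return sigma, steps
```

Each accepted flip strictly increases the (bounded) cut weight, so the loop terminates; the question addressed in this post is how many steps that takes.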

Theorem (Etscheid and Röglin [2014]): With probability $1 - o(1)$, FLIP terminates in at most $\phi \, n^{C \log n}$ steps, for some universal constant $C > 0$.

We improve this result from quasi-polynomial to polynomial, assuming that we put some noise on the interaction between every pair of vertices, or in other words assuming that the graph is complete.

Theorem (Angel, Bubeck, Peres, Wei [2016]): Let $G$ be the complete graph. With probability $1 - o(1)$, FLIP terminates in $O(\phi^5 n^{15.1})$ steps.

I will now prove the Etscheid and Röglin result.

## Proof strategy

To simplify notation let us introduce the Hamiltonian

$$H(\sigma) = -\frac{1}{2} \sum_{\{u,v\} \in E} X_{\{u,v\}} \, \sigma(u) \sigma(v),$$

which equals the cut weight minus the constant $\frac{1}{2} \sum_{e \in E} X_e$. We want to find a local max of $H$. For any $\sigma \in \{-1,1\}^V$ and $v \in V$, we denote by $\sigma^{(v)}$ the state equal to $\sigma$ except for the coordinate corresponding to $v$ which is flipped. For such $\sigma, v$ there exists a vector $x_{\sigma, v} \in \{-1, 0, 1\}^E$ such that

$$H(\sigma^{(v)}) - H(\sigma) = \langle x_{\sigma, v}, X \rangle.$$

More specifically $x_{\sigma, v}$ is defined by

$$x_{\sigma, v}(\{u, u'\}) = \sigma(u) \sigma(u') \ \text{ if } \ v \in \{u, u'\}, \qquad x_{\sigma, v}(\{u, u'\}) = 0 \ \text{ otherwise}.$$

We say that flipping a vertex is a move, and it is an improving move if the value of $H$ strictly improves. We say that a sequence of moves $v_1, \ldots, v_\ell$ is $\epsilon$-slow from $\sigma_0$ if, with $\sigma_t := \sigma_{t-1}^{(v_t)}$,

$$H(\sigma_t) - H(\sigma_{t-1}) \in (0, \epsilon] \quad \text{for all } t \in [\ell].$$

It is sufficient to show that with high probability there is no $\epsilon$-slow sequence with say $\ell = 2n$ and $\epsilon = n^{-C \log_2 n}$ (indeed in this case every $2n$ consecutive steps of FLIP increase $H$ by more than $\epsilon$, so after $2n \cdot n^2 / \epsilon$ steps FLIP must have stopped, for otherwise the value of $H$ would exceed its maximal possible value, the range of $H$ being at most $n^2$ since the weights lie in $[-1,1]$). We will do this in three main steps: a probability step, a linear algebra step, and a combinatorial step.
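The identity $H(\sigma^{(v)}) - H(\sigma) = \langle x_{\sigma, v}, X \rangle$ can be checked numerically; the sketch below (illustrative code, assuming the conventions above for $H$ and $x_{\sigma,v}$) sets up both objects for a small random complete graph:

```python
import random

# Conventions (a sketch of the notation, not the paper's code):
#   H(sigma)          = -1/2 * sum over edges {u,v} of X_{uv} sigma(u) sigma(v)
#   x_{sigma,v}(e)    = sigma(u) sigma(u') if v belongs to e = {u,u'}, else 0

def hamiltonian(X, sigma):
    return -0.5 * sum(w * sigma[u] * sigma[v] for (u, v), w in X.items())

def move_vector(X, sigma, v):
    return {e: (sigma[e[0]] * sigma[e[1]] if v in e else 0) for e in X}
```

With these two functions, flipping any vertex $v$ changes `hamiltonian` by exactly the inner product of `move_vector` with the weight vector.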

## Probability step

Lemma: Let $x_1, \ldots, x_k$ be linearly independent vectors in $\mathbb{Z}^E$, and recall that the coordinates of $X \in \mathbb{R}^E$ are independent with density bounded by $\phi$. Then one has

$$\mathbb{P}\left( \forall i \in [k], \ \langle x_i, X \rangle \in (0, \epsilon] \right) \leq (\phi \, \epsilon)^k.$$

Proof: The inequality follows from a simple change of variables. Since the $x_i$ are linearly independent, one can pick a subset $S \subseteq E$ of $k$ coordinates such that the $k \times k$ matrix $A$ obtained by restricting $x_1, \ldots, x_k$ to the coordinates in $S$ is non-singular. Condition on the values of $X$ outside $S$: the map $X_S \mapsto (\langle x_i, X \rangle)_{i \in [k]}$ is then an affine map with matrix $A$. The key observation is that since $A$ has integer coefficients, its determinant must be an integer too, and since it is non-zero one has $|\det(A)| \geq 1$. Thus, denoting by $f$ the conditional density of $X_S$ (which satisfies $f \leq \phi^k$ by independence) and by $g$ the conditional density of $(\langle x_i, X \rangle)_{i \in [k]}$, one has $g(y) = f(A^{-1}(y - b)) / |\det(A)| \leq \phi^k$ for the appropriate shift $b$, and one gets:

$$\mathbb{P}\left( \forall i \in [k], \ \langle x_i, X \rangle \in (0, \epsilon] \right) \leq \int_{(0, \epsilon]^k} \phi^k \, dy = (\phi \, \epsilon)^k.$$
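The $k = 1$ case of the lemma can be sanity-checked by simulation (illustrative numbers, not from the paper): take $X$ uniform on $[-1,1]^2$, so each coordinate has density $1/2 \leq \phi$, and $x = (1, 1)$.

```python
import random

# Monte Carlo check of P( <x, X> in (0, eps] ) <= phi * eps for k = 1:
# X uniform on [-1,1]^2 (coordinate density 1/2 = phi), x = (1, 1).
random.seed(0)
phi, eps, N = 0.5, 0.5, 200_000
hits = sum(1 for _ in range(N)
           if 0 < random.uniform(-1, 1) + random.uniform(-1, 1) <= eps)
estimate = hits / N
# The sum has the triangular density (2 - |s|)/4 on [-2, 2], so the exact
# probability is the integral over (0, 1/2], namely 0.21875 <= phi*eps = 0.25.
```

The estimate lands near the exact value $0.21875$, comfortably below the bound $\phi \epsilon = 0.25$.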

## Linear algebra step

Lemma: Consider a sequence of improving moves $v_1, \ldots, v_\ell$ with $k$ distinct vertices (say $u_1, \ldots, u_k$) that repeat at least twice in this sequence. Let $x_1, \ldots, x_\ell$ be the corresponding move coefficients (that is, $x_t = x_{\sigma_{t-1}, v_t}$), and for each $i \in [k]$ let $t_i$ (respectively $s_i$) be the first (respectively second) time at which $u_i$ moves. Then the vectors $x_{t_i} + x_{s_i}$, $i \in [k]$, are linearly independent. Furthermore for any $u$ that did not move between the times $t_i$ and $s_i$ one has $(x_{t_i} + x_{s_i})(\{u_i, u\}) = 0$ (and $(x_{t_i} + x_{s_i})(e) = 0$ for any $e$ not adjacent to $u_i$).

Proof: The last sentence of the lemma is obvious (between the times $t_i$ and $s_i$ the vertex $u_i$ flips exactly once while $u$ does not flip, so the two terms cancel). For the linear independence let $\mu \in \mathbb{R}^k$ be such that $\sum_{i=1}^k \mu_i (x_{t_i} + x_{s_i}) = 0$. Consider a new graph $G'$ with vertex set $\{u_1, \ldots, u_k\}$ and such that $u_i$ is connected to $u_j$ if $u_j$ appears an odd number of times between the times $t_i$ and $s_i$. This defines an oriented graph; however, if $u_i$ is connected to $u_j$ but $u_j$ is not connected to $u_i$, then one has $(x_{t_i} + x_{s_i})(\{u_i, u_j\}) \in \{-2, 2\}$ while $(x_{t_j} + x_{s_j})(\{u_i, u_j\}) = 0$ (and furthermore $(x_{t_m} + x_{s_m})(\{u_i, u_j\}) = 0$ for any $m \notin \{i, j\}$) and thus $\mu_i = 0$. In other words we can consider a subset of $\{u_1, \ldots, u_k\}$ where $G'$ is an undirected graph, and outside of this subset $\mu$ is identically zero. To reduce notation we simply assume that $G'$ is undirected. Next we observe that if $u_i$ and $u_j$ are connected then one must have $(x_{t_i} + x_{s_i})(\{u_i, u_j\}) = -(x_{t_j} + x_{s_j})(\{u_i, u_j\})$ (this uses the fact that we look at the *first* consecutive times at which the vertices move) and in particular (again using that $(x_{t_m} + x_{s_m})(\{u_i, u_j\}) = 0$ for any $m \notin \{i, j\}$) one must have $\mu_i = \mu_j$. Now let $C$ be some connected component of $G'$, and let $\lambda$ be the unique value of $\mu$ on $C$. Noting that the vectors corresponding to different components of $G'$ have disjoint support (more precisely with $z_C := \sum_{i : u_i \in C} (x_{t_i} + x_{s_i})$ one has $z_C(e) = 0$ for any $e$ not adjacent to $C$, and the supports of $z_C$ and $z_{C'}$ are disjoint for $C \neq C'$) one obtains $\lambda z_C = \sum_{i : u_i \in C} \mu_i (x_{t_i} + x_{s_i}) = 0$. On the other hand since the sequence of moves is improving one must have $\langle z_C, X \rangle > 0$, which implies $z_C \neq 0$ and finally $\lambda = 0$ (thus concluding the proof of the linear independence).
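The conclusion of the lemma can be checked numerically on simulated improving sequences (illustrative code: run greedy improving flips on a random complete graph, form the vectors $x_{t_i} + x_{s_i}$ for the repeating vertices, and verify that they have full rank):

```python
import random
from fractions import Fraction

def improving_run(X, sigma):
    """Perform improving flips, recording each moved vertex together with its
    move coefficient x_{sigma,v} (listed in a fixed edge order)."""
    edges = list(X)
    vertices = sorted({u for e in edges for u in e})
    moves = []
    while True:
        for v in vertices:
            gain = sum(X[e] * sigma[e[0]] * sigma[e[1]] for e in edges if v in e)
            if gain > 1e-12:
                moves.append((v, [sigma[e[0]] * sigma[e[1]] if v in e else 0
                                  for e in edges]))
                sigma[v] *= -1
                break
        else:
            return moves

def rank(rows):
    """Rank of a list of integer vectors, by Gaussian elimination over the
    rationals (exact arithmetic via Fraction)."""
    rows = [[Fraction(c) for c in row] for row in rows]
    r = 0
    for col in range(len(rows[0]) if rows else 0):
        piv = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(r + 1, len(rows)):
            if rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r
```

For every improving run, the lemma predicts that the number of repeating vertices equals the rank of the corresponding collection of vectors.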

## Combinatorial step

Lemma: Let $v_1, \ldots, v_\ell$ be a sequence of vertices with $\ell = 2n$. There exist $1 \leq i \leq j \leq \ell$ such that the number of vertices that repeat at least twice in the segment $(v_i, \ldots, v_j)$ is at least $\frac{j - i + 1}{2 \log_2(2n)}$.

Proof (from ABPW): Define the surplus of a sequence to be the difference between the number of elements and the number of distinct elements in the sequence. Let $S_m$ be the maximum surplus in any segment of length $m$ in $v_1, \ldots, v_\ell$. Observe that $S_\ell \geq \ell - n = n$. Let us now assume that for any segment of length $m$, the number of vertices that repeat at least twice is at most $\epsilon m$. Since the surplus of a segment is the sum of the surpluses of its two halves plus the number of distinct vertices common to both halves (and such vertices repeat at least twice in the segment), one has by induction

$$S_m \leq 2 S_{m/2} + \epsilon m \leq \epsilon m \log_2(m).$$

This shows that $\epsilon$ has to be at least $\frac{n}{\ell \log_2 \ell} = \frac{1}{2 \log_2(2n)}$, which concludes the proof.
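The splitting identity behind the induction can be checked directly (illustrative code): for any split of a sequence into a prefix $A$ and suffix $B$, the surplus of the whole equals the surplus of $A$ plus the surplus of $B$ plus the number of distinct elements common to both parts, and every such common element repeats at least twice in the whole sequence.

```python
import random

def surplus(seq):
    """Number of elements minus number of distinct elements."""
    return len(seq) - len(set(seq))

# Identity: surplus(A + B) = surplus(A) + surplus(B) + |set(A) & set(B)|,
# since the distinct elements of A + B are counted once on each side.
```

This is exactly the step that produces the $2 S_{m/2} + \epsilon m$ recursion in the proof.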

## Putting things together

We want to show that with $\epsilon = n^{-C \log_2 n}$ one has

$$\mathbb{P}\left( \exists \, \sigma_0 \in \{-1,1\}^V \text{ and a sequence } (v_1, \ldots, v_{2n}) \text{ that is } \epsilon\text{-slow from } \sigma_0 \right) = o(1).$$

By the combinatorial lemma we know that it is enough to show that:

$$\sum_{m=1}^{2n} \mathbb{P}\left( \exists \, \sigma_0 \text{ and an } \epsilon\text{-slow sequence } (v_1, \ldots, v_m) \text{ with at least } \tfrac{m}{2 \log_2(2n)} \text{ vertices that repeat at least twice} \right) = o(1).$$

Now using the probability lemma together with the linear algebra lemma (note that each $\langle x_{t_i} + x_{s_i}, X \rangle$ lies in $(0, 2\epsilon]$ for an $\epsilon$-slow sequence, and observe that, critically, the vectors $x_{t_i} + x_{s_i}$ only depend on the value of $\sigma_0$ at the vertices appearing in the sequence, so that the union bound over $\sigma_0$ only gives a factor $2^m$ instead of $2^n$) one obtains that the above probability is bounded by

$$\sum_{m=1}^{2n} n^m \, 2^m \, (2 \phi \epsilon)^{\frac{m}{2 \log_2(2n)}} = o(1),$$

which concludes the proof of the Etscheid and Röglin result.
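To see why the union bound closes (a rough computation, treating $\phi$ as a constant and not optimizing any of the exponents): with $\epsilon = n^{-C \log_2 n}$ and $k = \frac{m}{2 \log_2(2n)}$ linearly independent vectors for a segment of length $m$, one has

$$n^m \, 2^m \, (2 \phi \epsilon)^{\frac{m}{2 \log_2(2n)}} \leq \left( 2 n \max(1, 2\phi) \, n^{-C/4} \right)^m,$$

since $\frac{\log_2 n}{2 \log_2(2n)} \geq \frac{1}{4}$ for $n \geq 2$; summing the resulting geometric series over $m$ gives $o(1)$ as soon as the universal constant $C$ is large enough.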

Note that a natural route to get a polynomial-time bound from the above proof would be to remove the logarithmic factor in the combinatorial lemma, but we show in our paper that this is impossible. Our result comes essentially from improvements to the linear algebra step (this is more difficult than it may sound, as the Etscheid and Röglin linear algebra lemma is particularly friendly for the union bound step, so we had to find another way to do the union bound).
