## Lecture 5. Proof of Kadison-Singer (3)

This is the last installment of the proof of the Kadison-Singer theorem. After several plot twists, we have finally arrived at the following formulation of the problem. If you do not recall how we got here, this might be a good time to go back and read the previous posts.

**Theorem 1.** Let $B_1,\ldots,B_n$ be positive semidefinite matrices in $\mathbb{C}^{d\times d}$ such that

$$\sum_{i=1}^n B_i = I \qquad\text{and}\qquad \mathrm{Tr}\,B_i \le \varepsilon \quad\text{for all } i.$$

Define the multivariate polynomial

$$p(z_1,\ldots,z_n) = \det\Bigg(\sum_{i=1}^n z_i B_i\Bigg).$$

Then

$$\Bigg(\prod_{i=1}^n \bigg(1 - \frac{\partial}{\partial z_i}\bigg)\Bigg)\, p(z_1,\ldots,z_n) \ne 0 \qquad\text{whenever } \min_i z_i \ge (1+\sqrt{\varepsilon})^2.$$
The entire goal of this lecture is to prove the above theorem.

Let us recall that we already proved such a result *without* the derivatives (this is almost trivial).

**Lemma 1.** $p(z_1,\ldots,z_n) > 0$ whenever $\min_i z_i > 0$.
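Readers who like to experiment can check Lemma 1 numerically. The following Python sketch (an illustration, not part of the proof) samples random positive semidefinite matrices, normalizes them so that they sum to the identity, and verifies that $p(z) = \det(\sum_i z_i B_i)$ is strictly positive on the positive orthant:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 4

# Random PSD matrices A_i, then conjugate so that sum_i B_i = I.
A = [M @ M.T for M in rng.standard_normal((n, d, d))]
S = sum(A)
L_inv = np.linalg.inv(np.linalg.cholesky(S))   # S = L L^T, so L^{-1} S L^{-T} = I
B = [L_inv @ Ai @ L_inv.T for Ai in A]
assert np.allclose(sum(B), np.eye(d))

# p(z) = det(sum_i z_i B_i) should be positive whenever all z_i > 0,
# since sum_i z_i B_i >= (min_i z_i) I is positive definite.
for _ in range(100):
    z = rng.uniform(0.01, 10.0, size=n)
    p = np.linalg.det(sum(zi * Bi for zi, Bi in zip(z, B)))
    assert p > 0
print("p(z) > 0 on the positive orthant (sampled)")
```

The normalization step is the same reduction used earlier in these lectures: any family of positive semidefinite matrices with invertible sum can be brought to the case $\sum_i B_i = I$ by a congruence transformation.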

The key challenge is to understand what happens to the region without roots under operations of the form $p \mapsto (1-\partial_{z_i})p$. To this end, we introduced in the previous lecture the *barrier function*

$$\Phi^i_p(z) = \frac{\partial_{z_i} p(z)}{p(z)} = \partial_{z_i} \log p(z).$$

It is easily seen that the roots of $(1-\partial_{z_i})p$ coincide with the points where the barrier function equals one, as $(1-\partial_{z_i})p = p\,(1 - \Phi^i_p)$. As the barrier function is easily bounded in our setting using the Jacobi formula for determinants, we can immediately bound the roots of $(1-\partial_{z_i})p$.
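The identity $(1-\partial_{z_i})p = p\,(1-\Phi^i_p)$ behind this observation is pure algebra, and can be confirmed symbolically on a sample univariate polynomial (a sketch, using sympy; not part of the argument):

```python
import sympy as sp

z = sp.symbols('z')
p = (z - 1) * (z - 3) * (z - 4)      # a sample real-rooted polynomial
phi = sp.diff(p, z) / p              # barrier function Phi_p = p'/p

# (1 - d/dz) p == p * (1 - Phi_p): away from the roots of p,
# the roots of (1 - d/dz) p are exactly the points where Phi_p = 1.
lhs = p - sp.diff(p, z)
rhs = sp.cancel(p * (1 - phi))
assert sp.simplify(lhs - rhs) == 0
print("(1 - d/dz)p = p (1 - Phi_p) verified")
```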

Unfortunately, this simple argument is only sufficient to apply the operation $p \mapsto (1-\partial_{z_i})p$ once, while we must apply $n$ such operations. The difficulty is that while it is easy to control the barrier function of the polynomial $p$, we do not know how to control directly the barrier function of the polynomial $(1-\partial_{z_1})p$. To solve this problem, we must develop a more sophisticated version of the barrier argument that provides control not only of the roots of the derivative polynomial, but also of its barrier function: once we accomplish this, we will be able to iterate this argument to complete the proof.

**The multivariate barrier argument**

In the multivariate barrier argument, we will keep track of an octant in which the multivariate polynomial of interest has no roots. We will use the following terminology.

**Definition.** A point $y \in \mathbb{R}^n$ is said to *bound the roots* of the multivariate polynomial $q$ if $q(z) \ne 0$ whenever $z_1 \ge y_1$, $z_2 \ge y_2$, $\ldots$, $z_n \ge y_n$.

This notion is illustrated in the following figure:

*(figure: the octant above the point $y$ contains no roots of $q$)*

We can now formulate the multivariate barrier argument that is at the heart of the proof.

**Theorem 2.** Let $p$ be a real stable polynomial in $n$ variables. Suppose that $y \in \mathbb{R}^n$ bounds the roots of $p$, and

$$\Phi^j_p(y) \le 1 - \frac{1}{\delta}$$

for some $\delta > 0$ and $1 \le j \le n$. Then $y + \delta e_j$ bounds the roots of $(1-\partial_{z_j})p$, and

$$\Phi^i_{(1-\partial_{z_j})p}(y + \delta e_j) \le \Phi^i_p(y)$$

for all $i$.

Note that the barrier function assumption

would already be enough to ensure that bounds the roots of ; this is essentially what we proved in the last lecture (combined with a basic monotonicity property of the barrier function that we will prove below). The key innovation in Theorem 2 is that we do not only bound the roots of , but we also control *its* barrier function . This allows us to iterate this theorem over and over again to add more derivatives to the polynomial. To engineer this property, we must build some extra room (a gap of ) into our bound on the barrier function of . Once we understand the proof, we will see that this idea arises very naturally.

Up to the proof of Theorem 2, we now have everything we need to prove Theorem 1 (and therefore the Kadison-Singer theorem). Let us complete this proof first, so that we can concentrate for the remainder of this lecture on proving Theorem 2.

**Proof of Theorem 1.** In the previous lecture, we used the Jacobi formula to show that

$$\Phi^i_p(z) = \mathrm{Tr}\Bigg[\Bigg(\sum_{j=1}^n z_j B_j\Bigg)^{-1} B_i\Bigg] \le \frac{\varepsilon}{t}$$

for all $i$ and all $z$ with $\min_j z_j \ge t$. To start the barrier argument, let us therefore choose $t > 0$ and $\delta > 0$ such that

$$\frac{\varepsilon}{t} + \frac{1}{\delta} \le 1.$$

Initially, by Lemma 1, we see that $(t,\ldots,t)$ bounds the roots of $p$ and that the barrier function satisfies $\Phi^i_p(t,\ldots,t) \le \varepsilon/t \le 1 - 1/\delta$ for all $i$, which is the assumption of Theorem 2. That is, we start in the following situation:

*(figure: the point $(t,\ldots,t)$ bounds the roots of $p$, with barrier function at most $1 - 1/\delta$ in every direction)*

Applying Theorem 2 with $j = 1$, we find that $(t+\delta, t, \ldots, t)$ still bounds the roots of $(1-\partial_{z_1})p$. Moreover, we have control over the barrier function of this derivative polynomial, however at a point that lies above $(t,\ldots,t)$:

$$\Phi^i_{(1-\partial_{z_1})p}(t+\delta, t, \ldots, t) \le \Phi^i_p(t,\ldots,t) \le 1 - \frac{1}{\delta}$$

for all $i$. This is illustrated in the following figure:

*(figure: the shifted point $(t+\delta, t, \ldots, t)$ bounds the roots of $(1-\partial_{z_1})p$)*

We now have everything we need to apply Theorem 2 again to the polynomial $(1-\partial_{z_1})p$ (recall that $p$ and its derivative polynomials are all real stable, as we proved early on in these lectures). In the next step, we control the barrier in the $z_2$-direction to obtain, in a picture:

*(figure: after the second step, $(t+\delta, t+\delta, t, \ldots, t)$ bounds the roots of $(1-\partial_{z_2})(1-\partial_{z_1})p$)*
We can now repeat this process in the $z_3$-direction, etc. After $n$ iterations, we have evidently proved that

$$\Bigg(\prod_{i=1}^n \bigg(1 - \frac{\partial}{\partial z_i}\bigg)\Bigg)\, p(z_1,\ldots,z_n) \ne 0 \qquad\text{whenever } \min_i z_i \ge t + \delta.$$

All that remains is to choose $t$ and $\delta$. To optimize the bound, we minimize $t + \delta$ subject to the constraint $\varepsilon/t + 1/\delta \le 1$. This is achieved by $t = \sqrt{\varepsilon}\,(1+\sqrt{\varepsilon})$ and $\delta = 1 + \sqrt{\varepsilon}$, in which case $t + \delta = (1+\sqrt{\varepsilon})^2$, which completes the proof. $\square$
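The arithmetic of the last step is easy to verify: at the stated choice the constraint is tight and $t+\delta = (1+\sqrt{\varepsilon})^2$, and a crude grid search finds no feasible pair doing better. A Python sketch (purely illustrative; the grid search is not a proof of optimality):

```python
import numpy as np

eps = 0.25
t_star = np.sqrt(eps) * (1 + np.sqrt(eps))   # t = sqrt(eps)(1 + sqrt(eps))
d_star = 1 + np.sqrt(eps)                    # delta = 1 + sqrt(eps)

# The constraint eps/t + 1/delta <= 1 is tight at the optimum,
# and t + delta equals (1 + sqrt(eps))^2.
assert np.isclose(eps / t_star + 1 / d_star, 1.0)
assert np.isclose(t_star + d_star, (1 + np.sqrt(eps)) ** 2)

# Crude grid search: no feasible (t, delta) beats t_star + d_star.
ts = np.linspace(0.01, 5, 800)
deltas = np.linspace(0.01, 5, 800)
T, D = np.meshgrid(ts, deltas)
feasible = eps / T + 1 / D <= 1
best = (T + D)[feasible].min()
assert best >= t_star + d_star - 1e-9        # grid cannot beat the true minimum
assert best <= t_star + d_star + 0.05        # and gets close to it
print("grid minimum:", best, " closed form:", t_star + d_star)
```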

Of course, it remains to prove Theorem 2. It turns out that this is quite easy, once we develop some (nontrivial!) properties of the barrier functions of real-stable polynomials.

**Some properties of barrier functions**

Throughout this section, let $p$ be a real stable polynomial in the variables $z_1,\ldots,z_n$. As $p$ is real stable, the univariate polynomial $z_1 \mapsto p(z_1, z_2, \ldots, z_n)$ is real-rooted for every fixed $z_2,\ldots,z_n \in \mathbb{R}$, and we denote its roots by

$$r_1(z_2,\ldots,z_n) \ge r_2(z_2,\ldots,z_n) \ge \cdots \ge r_m(z_2,\ldots,z_n).$$

We can then represent the polynomial $p$ as

$$p(z) = c \prod_{k=1}^m \big(z_1 - r_k(z_2,\ldots,z_n)\big)$$

(where the leading coefficient $c$ may depend on $z_2,\ldots,z_n$ but not on $z_1$), and therefore the barrier function as

$$\Phi^1_p(z) = \frac{\partial_{z_1} p(z)}{p(z)} = \sum_{k=1}^m \frac{1}{z_1 - r_k(z_2,\ldots,z_n)}.$$

Some simple properties of the barrier function follow easily from this expression.

**Lemma 2.** Suppose that $y$ bounds the roots of $p$. Then the barrier function $z_1 \mapsto \Phi^1_p(z)$ is positive, decreasing and convex for $z \ge y$.

**Proof.** As $y$ bounds the roots of $p$, we have $z_1 > r_k(z_2,\ldots,z_n)$ for all $k$ whenever $z \ge y$. Thus clearly the barrier function is positive. To show that it is decreasing, we note that

$$\frac{\partial}{\partial z_1} \Phi^1_p(z) = -\sum_{k=1}^m \frac{1}{\big(z_1 - r_k(z_2,\ldots,z_n)\big)^2} < 0.$$

Likewise, convexity follows as

$$\frac{\partial^2}{\partial z_1^2} \Phi^1_p(z) = \sum_{k=1}^m \frac{2}{\big(z_1 - r_k(z_2,\ldots,z_n)\big)^3} > 0$$

when $y$ bounds the roots of $p$. $\square$

The main property that we need in order to prove Theorem 2 is that these monotonicity and convexity properties of the barrier function also hold when we vary other coordinates. This seems innocuous, but is actually much harder to prove (and requires a clever idea!).

**Lemma 3.** Suppose that $y$ bounds the roots of $p$. Then the barrier function $z_2 \mapsto \Phi^1_p(z)$ is positive, decreasing and convex for $z \ge y$.

**Remark.** There is of course nothing special about the use (for notational simplicity) of the first two coordinates in Lemmas 2 and 3. We can identically consider the barrier function $\Phi^i_p$ in any direction $z_i$ and obtain monotonicity and convexity in any other direction $z_j$, as will be needed in the proof of Theorem 2. In fact, as the remaining coordinates are frozen, Lemmas 2 and 3 are really just statements about the properties of *bivariate* real stable polynomials.

The difficulty in proving Lemma 3 is that while $\Phi^1_p$ behaves very nicely as a function of $z_1$, it is much less clear how it behaves as a function of $z_2$: to understand this, we must understand how the roots $r_k(z_2,\ldots,z_n)$ vary as a function of $z_2$. In general, this might seem like a hopeless task. Surprisingly, however, the roots of real stable polynomials exhibit some remarkable behavior.

**Lemma 4.** Let $q(z_1, z_2)$ be a bivariate real stable polynomial, and let $r_1(z_2) \ge \cdots \ge r_m(z_2)$ be the roots of $z_1 \mapsto q(z_1, z_2)$. Then each $r_k$ is a decreasing function of $z_2$.
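One can watch this phenomenon in the determinantal examples that pervade these lectures: $q(z_1, z_2) = \det(z_1 I + z_2 B + C)$ with $B$ positive semidefinite and $C$ symmetric is real stable, and its roots in $z_1$ are the eigenvalues of $-(z_2 B + C)$, which decrease as $z_2$ grows. A Python sketch (illustration only; the monotonicity here also follows directly from Weyl's inequality):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6

G = rng.standard_normal((d, d))
B = G @ G.T                              # positive semidefinite
C = rng.standard_normal((d, d))
C = (C + C.T) / 2                        # symmetric

def roots_in_z1(z2):
    # q(z1, z2) = det(z1 I + z2 B + C) vanishes iff z1 is an eigenvalue of -(z2 B + C)
    return np.sort(np.linalg.eigvalsh(-(z2 * B + C)))[::-1]

z2_grid = np.linspace(-2.0, 2.0, 81)
prev = roots_in_z1(z2_grid[0])
for z2 in z2_grid[1:]:
    cur = roots_in_z1(z2)
    assert np.all(cur <= prev + 1e-9)    # every root is non-increasing in z2
    prev = cur
print("all roots of z1 -> q(z1, z2) are non-increasing in z2")
```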

This is enough to prove Lemma 3.

**Proof of Lemma 3.** That the barrier function is positive follows precisely as in the proof of Lemma 2 (using the fact that if $y$ bounds the roots of $p$, then any $z \ge y$ also bounds the roots of $p$).

It remains to establish monotonicity and convexity. There is no loss of generality in assuming that $p$ is a bivariate polynomial in $(z_1, z_2)$ only, and that we can write

$$p(z_1, z_2) = c \prod_{k=1}^m \big(z_1 - r_k(z_2)\big)$$

for roots $r_1(z_2) \ge \cdots \ge r_m(z_2)$. We can therefore write

$$\frac{\partial}{\partial z_2} \Phi^1_p(z_1, z_2) = \frac{\partial}{\partial z_2} \sum_{k=1}^m \frac{1}{z_1 - r_k(z_2)} = \sum_{k=1}^m \frac{r_k'(z_2)}{\big(z_1 - r_k(z_2)\big)^2}.$$

But as $y$ bounds the roots of $p$, we have $z_1 > r_k(z_2)$ for all $k$. As each $r_k$ is also decreasing by Lemma 4, every term in this sum is nonpositive, which establishes monotonicity. For convexity we argue in the same manner, but with the roles of the two coordinates exchanged: writing $s_1(z_1) \ge \cdots \ge s_{m'}(z_1)$ for the roots of $z_2 \mapsto p(z_1, z_2)$, and using that the mixed partial derivatives of $\log p$ commute,

$$\frac{\partial^2}{\partial z_2^2} \Phi^1_p(z_1, z_2) = \frac{\partial}{\partial z_1}\, \frac{\partial^2}{\partial z_2^2} \log p(z_1, z_2) = \frac{\partial}{\partial z_1}\Bigg( -\sum_{k=1}^{m'} \frac{1}{\big(z_2 - s_k(z_1)\big)^2} \Bigg) = -\sum_{k=1}^{m'} \frac{2\, s_k'(z_1)}{\big(z_2 - s_k(z_1)\big)^3} \ge 0,$$

as $z_2 > s_k(z_1)$ and each $s_k$ is decreasing by Lemma 4 (applied with the coordinates exchanged). This is precisely what we set out to show. $\square$

We must still prove Lemma 4. This seems to be quite a miracle: why should such a property be true? To get some intuition, let us first consider an apparently very special case where $q$ is a polynomial of degree one, that is, $q(z_1, z_2) = a + b z_1 + c z_2$. The root of the polynomial $z_1 \mapsto q(z_1, z_2)$ is clearly given by

$$r(z_2) = -\frac{a + c z_2}{b}, \qquad r'(z_2) = -\frac{c}{b}.$$

Suppose that $r$ is nondecreasing, that is, that $b$ and $c$ have opposite sign. Then $q$ cannot be real stable! Indeed, for any real root $(z_1, z_2)$, the point $(z_1 - ic/b,\ z_2 + i)$ is also a root, which violates real stability if $b$ and $c$ have opposite sign (as both coordinates have strictly positive imaginary parts). Thus for polynomials of degree one, real stability trivially implies that $r$ is decreasing.
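This degree-one computation can be spelled out symbolically: for sample coefficients with $b$ and $c$ of opposite sign, setting $z_2 = i$ produces a root $z_1$ with strictly positive imaginary part, so both coordinates lie in the open upper half-plane. A sympy sketch (illustration only):

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
a, b, c = 1, 2, -3                           # b and c of opposite sign
q = a + b * z1 + c * z2

# Set z2 = i and solve for z1: the root is z1 = -(a + c*i)/b.
root_z1 = sp.solve(q.subs(z2, sp.I), z1)[0]
assert sp.simplify(q.subs({z1: root_z1, z2: sp.I})) == 0

# Im z1 = -c/b = 3/2 > 0, and Im z2 = 1 > 0: both coordinates are in the
# open upper half-plane, so q is not real stable.
assert sp.im(root_z1) == sp.Rational(3, 2)
print("upper half-plane root:", root_z1)
```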

While this is very intuitive, it also seems at first sight like it does not help much in understanding nontrivial polynomials of higher degree. Nonetheless, this simple observation proves to be the key to understanding the case of general polynomials. The idea is that the property that $r_k$ is decreasing is *local*. If there is a point at which this property is violated, then we can Taylor expand the polynomial around that point to reduce to the degree one case, and thereby obtain a contradiction.

**Proof of Lemma 4.** By the implicit function theorem, the maps $z_2 \mapsto r_k(z_2)$ are continuous and differentiable everywhere except at a finite number of points (where roots of $z_1 \mapsto q(z_1, z_2)$ collide). Therefore, if the conclusion fails, then there must exist a root $r_k$ and a (nondegenerate) point $z_2^*$ such that

$$\frac{dr_k}{dz_2}(z_2^*) > 0.$$

We will use this to bring ourselves to a contradiction.

Let us write $z_1^* = r_k(z_2^*)$, so $(z_1^*, z_2^*)$ is a root of $q$. We Taylor expand $q$ around this point. Note that

$$b := \frac{\partial q}{\partial z_1}(z_1^*, z_2^*) \ne 0,$$

as $z_1^*$ is a simple root of $z_1 \mapsto q(z_1, z_2^*)$ at the nondegenerate point $z_2^*$. Differentiating the identity $q(r_k(z_2), z_2) = 0$ at $z_2^*$, we obtain

$$\frac{dr_k}{dz_2}(z_2^*) = -\frac{c}{b} > 0, \qquad c := \frac{\partial q}{\partial z_2}(z_1^*, z_2^*),$$

so that $b$ and $c$ are real and have opposite sign, exactly as in the degree one case above. As $q(z_1^*, z_2^*) = 0$, we obtain the Taylor expansion

$$\big| q(z_1^* + h_1,\ z_2^* + h_2) - b\, h_1 - c\, h_2 \big| \le C\, \|h\|^2$$

for a suitable constant $C < \infty$ and where $\|h\| := |h_1| + |h_2| \le 1$.

We now conclude by a perturbation argument. Define the univariate polynomials

$$q_t(w) = \frac{1}{t}\, q(z_1^* + i t w,\ z_2^* + i t), \qquad t > 0.$$

Letting $t \downarrow 0$, we obtain $q_0(w) = i\,(b w + c)$, which evidently has a root $w = -c/b > 0$ with strictly positive real part. By the continuity of roots of polynomials, $q_t$ must still have a root $w_t$ with strictly positive real part when $t$ is sufficiently small (this follows readily using Rouché's theorem). But this implies that $q$ has a root $(z_1^* + i t w_t,\ z_2^* + i t)$ with $\mathrm{Im}(z_1^* + i t w_t) = t\, \mathrm{Re}\, w_t > 0$ and $\mathrm{Im}(z_2^* + i t) = t > 0$, contradicting real stability. $\square$

**Conclusion of the proof**

All that remains is to finally prove Theorem 2. The hard work is behind us: with the monotonicity and convexity properties of Lemmas 2 and 3 in hand, the proof of Theorem 2 is straightforward.

**Proof of Theorem 2.** As the barrier function $\Phi^j_p$ is coordinatewise decreasing (by Lemmas 2 and 3), the assumption

$$\Phi^j_p(y) \le 1 - \frac{1}{\delta}$$

implies that

$$\Phi^j_p(z) \le 1 - \frac{1}{\delta} < 1 \qquad\text{for all } z \ge y.$$

It follows immediately that $y$, and therefore also $y + \delta e_j$, bounds the roots of $(1-\partial_{z_j})p = p\,(1 - \Phi^j_p)$.

Let us now control the barrier function of $(1-\partial_{z_j})p$. As $1 - \Phi^j_p$ is positive above the roots of $p$,

$$\Phi^i_{(1-\partial_{z_j})p} = \partial_{z_i} \log\big(p\,(1 - \Phi^j_p)\big) = \Phi^i_p - \frac{\partial_{z_i} \Phi^j_p}{1 - \Phi^j_p}.$$

We can therefore write the barrier function as

$$\Phi^i_{(1-\partial_{z_j})p}(y + \delta e_j) = \Phi^i_p(y + \delta e_j) + \frac{-\partial_{z_j} \Phi^i_p(y + \delta e_j)}{1 - \Phi^j_p(y + \delta e_j)},$$

where we have used that $\partial_{z_i} \Phi^j_p = \partial_{z_i} \partial_{z_j} \log p = \partial_{z_j} \Phi^i_p$. Note that as the barrier function is decreasing, the numerator $-\partial_{z_j} \Phi^i_p(y + \delta e_j)$ in this expression is positive. Moreover, by the assumption of Theorem 2 and monotonicity of the barrier function, we have $1 - \Phi^j_p(z) \ge \frac{1}{\delta}$ whenever $z \ge y$. We can therefore estimate

$$\Phi^i_{(1-\partial_{z_j})p}(y + \delta e_j) \le \Phi^i_p(y + \delta e_j) - \delta\, \partial_{z_j} \Phi^i_p(y + \delta e_j) \le \Phi^i_p(y),$$

where we have used convexity of the barrier function in the last inequality (that is, we used the first-order condition $f(x) \ge f(x+\delta) - \delta f'(x+\delta)$ for the convexity of a function $f$). And Kadison-Singer is now proved. $\square$
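The key algebraic identity in this proof, $\Phi^i_{(1-\partial_{z_j})p} = \Phi^i_p - \partial_{z_i}\Phi^j_p/(1-\Phi^j_p)$, can be checked symbolically on a sample bivariate real stable polynomial (a product of linear factors with positive slopes). A sympy sketch, purely as a sanity check:

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
# A sample real stable polynomial: product of real linear factors
# a*z1 + b*z2 + c with a, b > 0.
p = (z1 + z2 - 1) * (z1 + 2 * z2 - 3) * (2 * z1 + z2)

def barrier(poly, var):
    return sp.diff(poly, var) / poly     # Phi^var = d(log poly)/d(var)

q = p - sp.diff(p, z2)                   # q = (1 - d/dz2) p

lhs = barrier(q, z1)
phi2 = barrier(p, z2)
rhs = barrier(p, z1) - sp.diff(phi2, z1) / (1 - phi2)

assert sp.cancel(lhs - rhs) == 0
print("barrier identity for (1 - d/dz2)p verified")
```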

*Epilogue: our presentation of the proof of the Kadison-Singer theorem has largely followed the approach from the blog post by T. Tao, which simplifies some of the arguments in the original paper by A. W. Marcus, D. A. Spielman, and N. Srivastava. Both are very much worth reading!*