## Lecture 4. Proof of Kadison-Singer (2)

Recall that we are in the middle of proving the following result.

**Theorem.** Let $v_1,\dots,v_n$ be independent random vectors in $\mathbb{C}^d$ whose distribution has finite support. Suppose that

$$\sum_{i=1}^n \mathbb{E}[v_iv_i^*] = I \qquad\text{and}\qquad \mathbb{E}\|v_i\|^2 \le \varepsilon \quad\text{for all } i.$$

Then

$$\bigg\|\sum_{i=1}^n v_iv_i^*\bigg\| \le (1+\sqrt{\varepsilon})^2$$

with positive probability.

Define the random matrix $M$ and its characteristic polynomial $p$ as

$$M := \sum_{i=1}^n v_iv_i^*, \qquad p(x) := \det(xI - M).$$

As $M$ is positive semidefinite, one representation of the matrix norm is $\|M\| = \mathrm{maxroot}(p)$. The proof of the above theorem consists of two main parts:

- Show that $\mathrm{maxroot}(p) \le \mathrm{maxroot}(\mathbb{E}p)$ with positive probability.
- Show that $\mathrm{maxroot}(\mathbb{E}p) \le (1+\sqrt{\varepsilon})^2$.

The first statement was proved in the previous two lectures. The goal of this lecture and the next is to prove the second statement.

**Mixed characteristic polynomials**

The key tool that was used in the previous lectures is the *mixed characteristic polynomial* of the matrices $B_1,\dots,B_n$, which we defined as follows:

$$\mu[B_1,\dots,B_n](x) := \prod_{i=1}^n\left(1-\frac{\partial}{\partial z_i}\right)\det\bigg(xI + \sum_{i=1}^n z_iB_i\bigg)\Bigg|_{z_1=\dots=z_n=0}.$$
We recall the two crucial observations used in the previous lectures:

- $\mu[B_1,\dots,B_n](x)$ is multilinear in $B_1,\dots,B_n$.
- If $B_1,\dots,B_n$ have rank one (this is essential!), then $\mu[B_1,\dots,B_n](x) = \det\big(xI - \sum_{i=1}^n B_i\big)$, the characteristic polynomial of $B_1+\dots+B_n$.

As in our case $v_1v_1^*,\dots,v_nv_n^*$ are *independent* random matrices of rank one, this implies

$$\mathbb{E}p(x) = \mathbb{E}\,\mu[v_1v_1^*,\dots,v_nv_n^*](x) = \mu[B_1,\dots,B_n](x),$$

where we define for simplicity $B_i := \mathbb{E}[v_iv_i^*]$.
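The rank-one identity underlying this step is easy to test symbolically. The following small sanity check (not part of the proof; the tiny dimensions and integer vectors are arbitrary choices of this illustration) verifies that for explicit rank-one matrices, the mixed characteristic polynomial coincides with the characteristic polynomial of the sum:

```python
import numpy as np
import sympy as sp

rng = np.random.default_rng(0)
d, n = 2, 3  # dimension and number of vectors, kept tiny for symbolic work

x = sp.symbols('x')
z = sp.symbols(f'z1:{n + 1}')  # z1, ..., zn

# rank-one matrices v_i v_i^T built from small integer vectors
vs = rng.integers(-2, 3, size=(n, d))
A = [sp.Matrix(np.outer(v, v)) for v in vs]

# mixed characteristic polynomial: apply (1 - d/dz_i) for each i to
# det(xI + sum_i z_i A_i), then set all z_i = 0
mu = (x * sp.eye(d) + sum((z[i] * A[i] for i in range(n)), sp.zeros(d, d))).det()
for zi in z:
    mu = mu - sp.diff(mu, zi)
mu = sp.expand(mu.subs({zi: 0 for zi in z}))

# characteristic polynomial of the sum of the rank-one matrices
charpoly = sp.expand((x * sp.eye(d) - sum(A, sp.zeros(d, d))).det())

print(sp.simplify(mu - charpoly))  # prints 0: the two polynomials agree
```

The expectation identity then follows from this deterministic identity by multilinearity and independence.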

Note that the matrices $B_i$ are no longer of rank one (in particular, it is not true that $\mu[B_1,\dots,B_n]$ is the characteristic polynomial of $B_1+\dots+B_n$: that would make the remainder of the proof quite trivial!) However, by assumption, the matrices $B_i$ satisfy

$$\sum_{i=1}^n B_i = I \qquad\text{and}\qquad \operatorname{Tr}B_i = \mathbb{E}\|v_i\|^2 \le \varepsilon \quad\text{for all } i.$$

To complete the proof, our aim will be to show that for any matrices $B_1,\dots,B_n$ with these properties, the maximal root of $\mu[B_1,\dots,B_n]$ can be at most $(1+\sqrt{\varepsilon})^2$.

**Theorem.** Let $B_1,\dots,B_n$ be positive semidefinite matrices in $\mathbb{C}^{d\times d}$. Suppose that

$$\sum_{i=1}^n B_i = I \qquad\text{and}\qquad \operatorname{Tr}B_i \le \varepsilon \quad\text{for all } i.$$

Then

$$\mathrm{maxroot}\big(\mu[B_1,\dots,B_n]\big) \le (1+\sqrt{\varepsilon})^2.$$

As the matrices $B_1,\dots,B_n$ sum to the identity $I$, it will be convenient to rewrite the definition of the mixed characteristic polynomial slightly using this property:

$$\mu[B_1,\dots,B_n](x) = \prod_{i=1}^n\left(1-\frac{\partial}{\partial z_i}\right)q(z_1,\dots,z_n)\Bigg|_{z_1=\dots=z_n=x},$$

where we defined the multivariate polynomial

$$q(z_1,\dots,z_n) := \det\bigg(\sum_{i=1}^n z_iB_i\bigg).$$

Our aim is to show that $\prod_{i=1}^n\big(1-\frac{\partial}{\partial z_i}\big)q(z) \ne 0$ whenever $\min_i z_i > (1+\sqrt{\varepsilon})^2$.
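This rewriting can be checked symbolically in a toy case. In the sketch below (my own illustration; the matrices $B_1,B_2$ are hypothetical choices that sum to the identity), both forms of the mixed characteristic polynomial are computed and compared:

```python
import sympy as sp

x, z1, z2 = sp.symbols('x z1 z2')

# two positive semidefinite matrices that sum to the identity
B1 = sp.Matrix([[sp.Rational(1, 2), sp.Rational(1, 4)],
                [sp.Rational(1, 4), sp.Rational(1, 2)]])
B2 = sp.eye(2) - B1

# original definition: (1 - d/dz1)(1 - d/dz2) det(xI + z1 B1 + z2 B2) at z = 0
f = (x * sp.eye(2) + z1 * B1 + z2 * B2).det()
f = f - sp.diff(f, z1)
f = f - sp.diff(f, z2)
mu_orig = f.subs({z1: 0, z2: 0})

# rewritten form: (1 - d/dz1)(1 - d/dz2) q(z1, z2) at z1 = z2 = x,
# where q(z1, z2) = det(z1 B1 + z2 B2)
q = (z1 * B1 + z2 * B2).det()
q = q - sp.diff(q, z1)
q = q - sp.diff(q, z2)
mu_rewritten = q.subs({z1: x, z2: x})

print(sp.expand(mu_orig - mu_rewritten))  # prints 0: the two forms agree
```

The underlying reason is simply that $\det(xI + \sum_i z_iB_i) = q(x+z_1,\dots,x+z_n)$ when the $B_i$ sum to the identity, and differentiation commutes with this shift of variables.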

How should we go about proving such a property? The first observation we should make is that the polynomial $q$ itself has no roots in an even larger region.

**Lemma.** $q(z) \ne 0$ whenever $\min_i z_i > 0$.

**Proof.** Note that

$$\sum_{i=1}^n z_iB_i \succeq \Big(\min_i z_i\Big)\sum_{i=1}^n B_i = \Big(\min_i z_i\Big)I \succ 0$$

as $B_i \succeq 0$ and $\sum_{i=1}^n B_i = I$. Thus $\sum_{i=1}^n z_iB_i$ is nonsingular whenever $\min_i z_i > 0$.

One might now hope that we can follow a similar idea to what we did in the previous lectures. There, we wanted to show that the mixed characteristic polynomial is real-rooted. To this end, we first showed that the determinant in the definition of the mixed characteristic polynomial is real stable (the appropriate multivariate generalization of real-rootedness), and then we showed that this property is preserved when we take derivatives of the form $1-\frac{\partial}{\partial z_i}$.

In an ideal world, we would show precisely the same thing here: if the polynomial $q$ has no roots in the region $\{z : \min_i z_i > 0\}$, then perhaps $(1-\frac{\partial}{\partial z_1})q$ has no roots there as well, etc. If this were true, then we would in fact have shown that $\mu[B_1,\dots,B_n]$ has no roots in $\{x > 0\}$. Unfortunately, this is clearly false: that would imply an upper bound of zero for the matrix norm in our original problem (impossible in general!) Nonetheless, this general strategy proves to be the right approach. It is indeed not true that the region with no roots is preserved by the operation $1-\frac{\partial}{\partial z_i}$. However, it turns out that we will be able to control by how much this region shrinks in successive applications of $1-\frac{\partial}{\partial z_i}$. The control that we will develop will prove to be sufficiently sharp in order to obtain the desired result.

Evidently, the key problem that we face is to understand what happens to the roots of a polynomial when we apply an operation of the form $1-\frac{\partial}{\partial z_i}$. In the remainder of this lecture, we will investigate this problem in a *univariate* toy setting. This univariate approach will not be sufficiently sharp to yield the desired result, but it will give us significant insight into the ingredients that are needed in the proof. In the following lecture, we will complete the proof by developing a multivariate version of these ideas.

**Barrier functions: a toy problem**

Let us put aside for the moment our original problem, and consider the following simplified setting. Let $p$ be a univariate polynomial with real roots $x_1,\dots,x_n$. What can we say about the locations of the roots of the polynomial $(1-\frac{d}{dx})p = p - p'$? Clearly, the latter polynomial has a root at a point $x$ with $p(x) \ne 0$ if and only if

$$\frac{p'(x)}{p(x)} = 1.$$

The function $\Phi(x) := \frac{p'(x)}{p(x)}$ is very interesting; let us investigate what it looks like. Note that we can always represent a polynomial in terms of its roots as

$$p(x) = c\prod_{i=1}^n(x - x_i)$$

for some constant $c$. We can therefore readily compute

$$\Phi(x) = \frac{p'(x)}{p(x)} = \sum_{i=1}^n \frac{1}{x - x_i}.$$
As we assumed that the roots of $p$ are real, the function $\Phi$ looks something like this:

*(Figure omitted: the graph of $\Phi$ has a vertical asymptote at each root $x_i$ of $p$; between consecutive roots it decreases from $+\infty$ to $-\infty$, and to the right of the maximal root it decreases from $+\infty$ to $0$.)*

The function $\Phi$ blows up at the roots of $p$; this function is therefore referred to as the *barrier function*, in analogy with a similar notion in optimization. The values of $x$ where $\Phi(x) = 1$ determine the locations of the roots of $p - p'$. It follows immediately from the shape of the barrier function that the roots of our two polynomials are interlaced, as can be seen by inspecting the above figure. Note, moreover, that $\mathrm{maxroot}(p-p') > \mathrm{maxroot}(p)$: to the right of the maximal root of $p$, the barrier function decreases continuously from $+\infty$ to $0$, so it crosses the value $1$ exactly once, strictly to the right of $\mathrm{maxroot}(p)$. It is therefore unfortunately the case that the maximal root of a polynomial increases under the operation $1-\frac{d}{dx}$. However, we can control the location of the maximal root if we are able to control the barrier function.
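The interlacing picture, and the rightward shift of the maximal root, can be observed numerically. A quick sketch (the roots below are arbitrary choices for illustration):

```python
import numpy as np

x_roots = np.array([0.0, 1.0, 3.0])        # roots of p
p = np.poly(x_roots)                       # monic p with these roots
p_minus_dp = np.polysub(p, np.polyder(p))  # the polynomial p - p'

y_roots = np.sort(np.roots(p_minus_dp).real)
print(y_roots)

# the barrier function equals 1 at every root of p - p'
for y in y_roots:
    assert abs(np.sum(1.0 / (y - x_roots)) - 1.0) < 1e-8

# interlacing, and the maximal root has moved to the right
assert x_roots[0] < y_roots[0] < x_roots[1] < y_roots[1] < x_roots[2] < y_roots[2]
```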

Let us now return to our matrix problem and try to see how this idea can be useful. We will consider the first step towards establishing the desired result: what happens to the locations of the roots of the polynomial $q$ when we apply a single operation $1-\frac{\partial}{\partial z_1}$? (We will ultimately want to iterate such an argument for every coordinate to control the roots of the mixed characteristic polynomial.)

To this end, let us fix $z_2,\dots,z_n > 0$ for the time being, and define the univariate polynomial

$$r(z_1) := q(z_1,z_2,\dots,z_n).$$

We have already shown that $r(z_1) \ne 0$ whenever $z_1 > 0$. In order to control the roots of $r - r' = (1-\frac{\partial}{\partial z_1})q$, we need to control the barrier function $\frac{r'(z_1)}{r(z_1)} = \frac{\partial}{\partial z_1}\log q(z)$. The stroke of luck that we have at this point is that the derivative of the logarithm of a determinant is a remarkably nice object.

**Lemma (Jacobi formula).** If $A(t)$ is invertible, then $\frac{d}{dt}\log\det A(t) = \operatorname{Tr}\big[A(t)^{-1}\frac{dA(t)}{dt}\big]$.

**Proof.** First, we note that

$$\frac{d}{dt}\log\det A(t) = \frac{d}{d\epsilon}\log\det\Big(A + \epsilon\frac{dA}{dt}\Big)\bigg|_{\epsilon=0}, \qquad \det\Big(A + \epsilon\frac{dA}{dt}\Big) = \det(A)\det\Big(I + \epsilon A^{-1}\frac{dA}{dt}\Big).$$

It therefore suffices to prove that

$$\frac{d}{d\epsilon}\det(I + \epsilon C)\bigg|_{\epsilon=0} = \operatorname{Tr}C$$

for any matrix $C$ (the logarithm is harmless here, as $\det(I + \epsilon C) = 1$ at $\epsilon = 0$). To this end, we use directly the definition of the determinant:

$$\det(I + \epsilon C) = \sum_\sigma \operatorname{sgn}(\sigma)\prod_{i=1}^d (I + \epsilon C)_{i\sigma(i)},$$

where the sum is over permutations $\sigma$ of $\{1,\dots,d\}$. Setting $\epsilon = 0$ after differentiating once, we see that the only term that survives is the one that satisfies $\sigma(i) = i$ for all $i$, that is, the identity permutation: any other permutation has at least two off-diagonal factors $\epsilon C_{i\sigma(i)}$, so its contribution is of order $\epsilon^2$. We therefore obtain (as $\operatorname{sgn}(\mathrm{id}) = 1$)

$$\frac{d}{d\epsilon}\det(I + \epsilon C)\bigg|_{\epsilon=0} = \frac{d}{d\epsilon}\prod_{i=1}^d(1 + \epsilon C_{ii})\bigg|_{\epsilon=0} = \sum_{i=1}^d C_{ii} = \operatorname{Tr}C.$$

This completes the proof.
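The Jacobi formula is easy to test numerically against a finite-difference derivative. A small sketch (the matrices and step size are arbitrary choices of this illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4

G = rng.standard_normal((d, d))
A0 = G @ G.T + np.eye(d)          # symmetric positive definite, so det > 0
A1 = rng.standard_normal((d, d))  # direction of the perturbation A(t) = A0 + t A1

def logdet(t):
    _, ld = np.linalg.slogdet(A0 + t * A1)
    return ld

h = 1e-6
finite_diff = (logdet(h) - logdet(-h)) / (2 * h)  # d/dt log det A(t) at t = 0
jacobi = np.trace(np.linalg.solve(A0, A1))        # Tr[A(0)^{-1} A'(0)]

print(finite_diff, jacobi)
assert abs(finite_diff - jacobi) < 1e-5  # agree up to finite-difference error
```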

Using the Jacobi formula, we immediately find

$$\frac{r'(z_1)}{r(z_1)} = \frac{\partial}{\partial z_1}\log\det\bigg(\sum_{i=1}^n z_iB_i\bigg) = \operatorname{Tr}\Bigg[\bigg(\sum_{i=1}^n z_iB_i\bigg)^{-1}B_1\Bigg].$$

It therefore follows that

$$\frac{r'(z_1)}{r(z_1)} \le \frac{\operatorname{Tr}B_1}{\min_i z_i} \le \frac{\varepsilon}{\min_i z_i} < 1 \qquad\text{whenever}\quad \min_i z_i > \varepsilon,$$

where we used that $\sum_i z_iB_i \succeq (\min_i z_i)I$. In particular, $(1-\frac{\partial}{\partial z_1})q(z) \ne 0$ whenever $\min_i z_i > \varepsilon$. This brings us one step closer to the desired conclusion!
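The bound on the barrier function can likewise be checked numerically. In this sketch (my own construction, not from the lecture), random positive semidefinite matrices are normalized so that they sum to the identity, and the barrier $\operatorname{Tr}[(\sum_i z_iB_i)^{-1}B_1]$ is compared with $\varepsilon/\min_i z_i$:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 3, 6

# random positive semidefinite matrices, normalized so they sum to the identity
C = []
for _ in range(n):
    G = rng.standard_normal((d, d))
    C.append(G @ G.T)
S = sum(C)
w, V = np.linalg.eigh(S)
S_inv_half = V @ np.diag(w ** -0.5) @ V.T  # S^{-1/2}
B = [S_inv_half @ Ci @ S_inv_half for Ci in C]

eps = max(np.trace(Bi) for Bi in B)  # chosen so that Tr B_i <= eps for all i
z = rng.uniform(1.0, 5.0, size=n)    # an arbitrary point with min_i z_i > 0

M = sum(zi * Bi for zi, Bi in zip(z, B))
barrier = np.trace(np.linalg.solve(M, B[0]))  # Tr[(sum_i z_i B_i)^{-1} B_1]

print(barrier, eps / z.min())
assert barrier <= eps / z.min() + 1e-12  # the barrier bound holds
```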

We have seen above that we can control the roots of $q$ and of $(1-\frac{\partial}{\partial z_1})q$. Of course, the next step will be to control the roots of $(1-\frac{\partial}{\partial z_2})(1-\frac{\partial}{\partial z_1})q$. Unfortunately, a direct application of the barrier function method does not lend itself well to iteration. The problem is that while we could easily control the barrier function of $q$ using the Jacobi formula, it is not so obvious how to control the barrier function of $(1-\frac{\partial}{\partial z_1})q$.

Instead, we are going to develop in the following lecture a multivariate version of the barrier argument. In each consecutive application of $1-\frac{\partial}{\partial z_i}$, we will control the region of points $(z_1,\dots,z_n)$ in which the polynomial has no roots; at the same time, we will also obtain control over the barrier function of the polynomial in the next stage. When implemented properly, this procedure will allow us to iterate the barrier argument without requiring explicit control of the barrier function except at the first stage.

*Many thanks to Mark Cerenzia for scribing this lecture!*