Guest post by Philippe Rigollet: Estimating structured graphs

Last Wednesday, Sourav Chatterjee spoke in the Mathematics Colloquium here at Princeton. While he is most famous for his contributions to probability, especially using Stein’s method, this talk was rather statistical. The problem was interesting and its solution simple enough to fit in this blog post. Sourav’s full paper can be found on arXiv.

I present a slightly simpler proof of his main result. It yields a bound that depends on a more natural quantity, one that may be much smaller than the nuclear-norm quantity appearing in the original paper, Chatterjee (2012).


The problem

Let $latex {P=\{p_{i,j}\}}&fg=000000$ be an $latex {n \times n}&fg=000000$ symmetric matrix with entries $latex {p_{i,j} \in [0,1]}&fg=000000$. Let $latex {G}&fg=000000$ be the undirected random graph on $latex {n}&fg=000000$ vertices obtained by placing each edge $latex {\{i,j\}}&fg=000000$ independently with probability $latex {p_{i,j}}&fg=000000$.

The goal is to produce an estimator of $latex {P}&fg=000000$, i.e., an $latex {n \times n}&fg=000000$ matrix $latex {\hat P}&fg=000000$ that is measurable with respect to $latex {G}&fg=000000$ and such that the Frobenius norm $latex {\|\hat P - P\|_F}&fg=000000$ is small with high probability.
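As a concrete illustration, the model can be simulated in a few lines of Python. The particular $latex {P}&fg=000000$ below is an arbitrary choice, and, following the proof later in the post, the diagonal entries are sampled as well:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100

# An arbitrary symmetric matrix P with entries in [0, 1] (illustrative choice).
B = rng.uniform(size=(n, n))
P = (B + B.T) / 2

# One Bernoulli(P[i, j]) draw per pair {i, j} with i <= j, then symmetrize.
draws = (rng.uniform(size=(n, n)) < P).astype(float)
A = np.triu(draws, 1) + np.triu(draws, 1).T + np.diag(np.diag(draws))
```

Here $latex {A}&fg=000000$ plays the role of the adjacency matrix of $latex {G}&fg=000000$, with $latex {A_{i,j}=1}&fg=000000$ exactly when the edge $latex {\{i,j\}}&fg=000000$ is present.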

The title of this post suggests that we need some structure on $latex {P}&fg=000000$. Sourav’s article points to a variety of examples where structure appears in the spectrum of $latex {P}&fg=000000$. As a result, the method and analysis described below are spectral.


Thresholded eigen-decomposition

Let $latex {A}&fg=000000$ denote the adjacency matrix of $latex {G}&fg=000000$ and observe that the matrix $latex {A-P}&fg=000000$ is symmetric, has zero mean, and its elements $latex {\xi_{i,j}}&fg=000000$ are independent and lie in the interval $latex {[-1,1]}&fg=000000$ almost surely. This is not quite a Wigner matrix (the variances are not identical), but up to a multiplicative constant its operator norm $latex {\|A-P\|}&fg=000000$ is of order $latex {\sqrt{n}}&fg=000000$, just like that of a Wigner matrix. Formally, the following lemma holds; its proof is postponed to the end of the post.

Lemma With probability at least $latex {1-\delta}&fg=000000$, it holds that

$latex \displaystyle \|A-P\|^2 \le 2\log(6)n +\log(4/\delta)=:C_\delta^2(n)\,. &fg=000000$
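As a quick numerical sanity check (not a proof), one can compare $latex {\|A-P\|}&fg=000000$ with $latex {C_\delta(n)}&fg=000000$ on simulated data; the choice of $latex {P}&fg=000000$ and of $latex {\delta}&fg=000000$ below is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
n, delta = 200, 0.05

# Arbitrary symmetric P with entries in [0, 1].
B = rng.uniform(size=(n, n))
P = (B + B.T) / 2

# C_delta(n) from the lemma: C^2 = 2 log(6) n + log(4 / delta).
C = np.sqrt(2 * np.log(6) * n + np.log(4 / delta))

# Sample the adjacency matrix and compare operator norms.
draws = (rng.uniform(size=(n, n)) < P).astype(float)
A = np.triu(draws, 1) + np.triu(draws, 1).T + np.diag(np.diag(draws))
op_norm = np.linalg.norm(A - P, ord=2)  # spectral norm of the noise

print(op_norm <= C)  # holds with probability at least 1 - delta
```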


Since we can decompose the observed matrix as $latex {A=P+(A-P)}&fg=000000$, the eigenvalues of $latex {P}&fg=000000$ that are smaller than $latex {\|A-P\|}&fg=000000$ should disappear in the noise, while those that are larger than $latex {\|A-P\|}&fg=000000$ should be visible in the spectrum of $latex {A}&fg=000000$.

Consider the eigen-decompositions of $latex {P}&fg=000000$ and $latex {A}&fg=000000$:

$latex \displaystyle P=\sum_{i=1}^n \mu_i u_iu_i^\top\,,\quad A=\sum_{i=1}^n \lambda_i v_iv_i^\top\,, &fg=000000$

where the eigenvalues are ordered in decreasing order of magnitude, and define the following estimator:

$latex \displaystyle \hat P=\sum_{i=1}^n \lambda_i \mathbf{1}( |\lambda_i|>2C_\delta(n)) v_i v_i^\top\,. &fg=000000$

Note that Sourav Chatterjee uses the threshold $latex {1.001 \sqrt{n}}&fg=000000$ instead of $latex {2C_\delta(n)}&fg=000000$, but his proof requires much more complicated arguments for essentially the same result (see the original paper). Also, his result holds in expectation, while ours allows a tunable level of confidence.
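In code, the estimator amounts to one eigendecomposition followed by thresholding. The sketch below is self-contained; the rank-one $latex {P}&fg=000000$ used for the test is an arbitrary illustrative choice:

```python
import numpy as np

def estimate_P(A, delta):
    """Thresholded eigendecomposition: keep eigenvalues with |lambda_i| > 2 C_delta(n)."""
    n = A.shape[0]
    C = np.sqrt(2 * np.log(6) * n + np.log(4 / delta))
    lam, V = np.linalg.eigh(A)        # eigenvalues/eigenvectors of the symmetric A
    keep = np.abs(lam) > 2 * C
    return (V[:, keep] * lam[keep]) @ V[:, keep].T

# Illustrative check on a strongly structured P: constant 3/4, rank one.
rng = np.random.default_rng(2)
n = 300
P = np.full((n, n), 0.75)
draws = (rng.uniform(size=(n, n)) < P).astype(float)
A = np.triu(draws, 1) + np.triu(draws, 1).T + np.diag(np.diag(draws))

P_hat = estimate_P(A, delta=0.05)
rel_err = np.linalg.norm(P_hat - P) / np.linalg.norm(P)  # relative Frobenius error
print(rel_err)
```

On this example the single large eigenvalue of $latex {P}&fg=000000$ (equal to $latex {3n/4}&fg=000000$) survives the threshold while the noise eigenvalues are discarded, so $latex {\hat P}&fg=000000$ is essentially a rank-one matrix close to $latex {P}&fg=000000$.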


Theorem With probability at least $latex {1-\delta}&fg=000000$, the estimator $latex {\hat P}&fg=000000$ satisfies

$latex \displaystyle \|\hat P -P\|_F \le 12\sqrt{ \sum_{i=1}^n \{\mu_i^2 \wedge C_\delta^2(n)\}}\,. &fg=000000$

Proof: Assume that we are on the event $latex {\{\|A-P\|\le C_\delta(n)\}}&fg=000000$, which holds with probability at least $latex {1-\delta}&fg=000000$ by the lemma. We begin with a simple consequence of this bound. Let $latex {S =\{ i\,:\, |\lambda_i|>2C_\delta(n)\}}&fg=000000$ and define the matrix

$latex \displaystyle P_{S}=\sum_{i \in {S}}\mu_iu_iu_i^\top\,. &fg=000000$

The lemma together with Weyl’s inequality implies that

$latex \displaystyle |\lambda_i -\mu_i|\le \|A-P\| \le C_\delta(n)\,, \forall\, i \ \ \ \ \ (1)&fg=000000$

It yields $latex {|\lambda_i| \le |\mu_i| + C_\delta(n)}&fg=000000$ so that $latex {|\mu_i|>C_\delta(n)}&fg=000000$ for all $latex {i \in S}&fg=000000$ (by definition of $latex {S}&fg=000000$). Therefore, $latex {S \subset \{ i\,:\, |\mu_i|>C_\delta(n)\}}&fg=000000$. Similarly, we find that the complement of $latex {S}&fg=000000$ is such that $latex {S^c\subset \{ i\,:\, |\mu_i|\le 3C_\delta(n)\}}&fg=000000$.

We now return to the main part of the proof. By the triangle inequality, we have

$latex \displaystyle \|\hat P-P\|_F\le \|\hat P- P_{S}\|_F+\|P_{S}-P\|_F\,, &fg=000000$

and we bound each term separately. For the first term, note that

$latex \displaystyle \|\hat P-P_{S}\|_F \le \sqrt{\mathrm{rank}(\hat P-P_{S})}\|\hat P-P_{S}\| &fg=000000$

On the one hand, $latex {\mathrm{rank}(\hat P-P_{S}) \le 2|S|}&fg=000000$, since both $latex {\hat P}&fg=000000$ and $latex {P_{S}}&fg=000000$ have rank at most $latex {|S|}&fg=000000$. On the other hand,

$latex \displaystyle \begin{array}{rcl} \|\hat P-P_{S}\|&\le &\|\hat P-A\|+\|A- P\|+\|P-P_S\| \\ &\le &2C_\delta(n)+C_\delta(n)+\|P-P_S\| \\ &= &3C_\delta(n)+\max_{i \in S^c} |\mu_i|\le 6C_\delta(n)\,. \end{array} &fg=000000$

where we used the definition of $latex {\hat P}&fg=000000$ (the discarded eigenvalues satisfy $latex {|\lambda_i|\le 2C_\delta(n)}&fg=000000$), the lemma, and the inclusion $latex {S^c\subset \{ i\,:\, |\mu_i|\le 3C_\delta(n)\}}&fg=000000$ established above. Together, the above three displays yield

$latex \displaystyle \|\hat P-P\|_F\le 6\sqrt{2\sum_{i \in S}C_\delta^2(n)} + \sqrt{\sum_{i\in S^c}\mu_i^2} \le (6\sqrt{2}+3)\sqrt{\sum_{i=1}^n \{\mu_i^2 \wedge C_\delta^2(n)\}}\,, &fg=000000$

where the last inequality uses $latex {C_\delta^2(n)=\mu_i^2 \wedge C_\delta^2(n)}&fg=000000$ for $latex {i \in S}&fg=000000$ and $latex {\mu_i^2 \le 9\{\mu_i^2 \wedge C_\delta^2(n)\}}&fg=000000$ for $latex {i \in S^c}&fg=000000$.

$latex \Box&fg=000000$



This is not exactly the bound from Chatterjee (2012). Indeed, the main result of Chatterjee (2012), Theorem 2.1, yields

$latex \displaystyle \mathbb{E}\|\hat P -P\|_F^2\le C \|P\|_*\sqrt{n}+ Ce^{-Cn}\,, &fg=000000$

where $latex {\|P\|_*=\sum_{i=1}^n|\mu_i|}&fg=000000$ is the nuclear norm of $latex {P}&fg=000000$.

Note first that, taking $latex {\delta=1/2}&fg=000000$ (so that $latex {C_{1/2}^2(n)\le Cn}&fg=000000$), our theorem yields a bound on the median $latex {\mathbb{M}}&fg=000000$:

$latex \displaystyle \mathbb{M}\|\hat P -P\|_F^2 \le C \sum_{i=1}^n \{\mu_i^2 \wedge n\}\,. &fg=000000$

How do the two bounds compare? Observe that our bound is always at least as good as Chatterjee’s:

$latex \displaystyle \sum_{i=1}^n \{\mu_i^2 \wedge n\}\le \sum_{i:|\mu_i|\le \sqrt{n}}\mu_i^2 + \sum_{i:|\mu_i|> \sqrt{n}}n\le \|P\|_*\sqrt{n} &fg=000000$

Moreover, there may be a large gap between the two quantities. For example, if $latex {P=I_n}&fg=000000$ is the identity matrix, then

$latex \displaystyle \|P\|_*\sqrt{n} =n^{3/2}\,, \qquad \text{and} \qquad \sum_{i=1}^n \{\mu_i^2 \wedge n\}=\sum_{i=1}^n\mu_i^2 =n\,. &fg=000000$

This can also be the case for low-rank matrices. Take the all-ones matrix $latex {P=\mathbf{1}_n\mathbf{1}_n^\top}&fg=000000$, where $latex {\mathbf{1}_n \in \mathbb{R}^n}&fg=000000$ is the vector of ones. In this case, $latex {n}&fg=000000$ is the only nonzero eigenvalue and

$latex \displaystyle \|P\|_*\sqrt{n} =n^{3/2}\,, \qquad \text{and} \qquad \sum_{i=1}^n \{\mu_i^2 \wedge n\}=n\,. &fg=000000$
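Both computations are easy to verify numerically; the value $latex {n=400}&fg=000000$ below is arbitrary:

```python
import numpy as np

n = 400

# Example 1: P = I_n, so all n eigenvalues equal 1.
mu = np.ones(n)
nuclear_bound = np.sum(np.abs(mu)) * np.sqrt(n)  # ||P||_* sqrt(n) = n^{3/2}
new_bound = np.sum(np.minimum(mu ** 2, n))       # sum of mu_i^2 /\ n = n
print(nuclear_bound, new_bound)  # 8000.0 400.0

# Example 2: P = 1 1^T, single nonzero eigenvalue equal to n.
mu = np.zeros(n)
mu[0] = n
nuclear_bound = np.sum(np.abs(mu)) * np.sqrt(n)  # again n^{3/2}
new_bound = np.sum(np.minimum(mu ** 2, n))       # n^2 /\ n = n
print(nuclear_bound, new_bound)  # 8000.0 400.0
```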

Moreover, the ratio between these quantities can be made arbitrarily close to 0 by taking $latex {\|P\|/\sqrt{n}}&fg=000000$ arbitrarily close to 0. Indeed,

$latex \displaystyle \sum_{i=1}^n \{\mu_i^2 \wedge n\}\le \big( \max_i|\mu_i| \big) \sum_{i=1}^n |\mu_i| = \frac{\|P\|}{\sqrt{n}} \|P\|_*\sqrt{n}\,. &fg=000000$


Proof of the lemma

Let $latex {\mathcal{N}}&fg=000000$ be a $latex {\frac12}&fg=000000$-net of the unit Euclidean ball $latex {\mathcal{B}}&fg=000000$ of $latex {\mathbb{R}^n}&fg=000000$. Using a volume packing argument (see Vershynin (2011)), it is not hard to show that we can always take $latex {\mathcal{N}}&fg=000000$ of cardinality $latex {|\mathcal{N}| \le 6^n}&fg=000000$. Moreover, for any vector $latex {t \in \mathbb{R}^n}&fg=000000$,

$latex \displaystyle \sup_{x \in \mathcal{B}}x^\top t\le 2\sup_{x \in \mathcal{N}}x^\top t\,. &fg=000000$
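(To see this, let $latex {x^* \in \mathcal{B}}&fg=000000$ attain the supremum and pick $latex {z \in \mathcal{N}}&fg=000000$ with $latex {\|x^*-z\|\le 1/2}&fg=000000$; then

$latex \displaystyle x^{*\top} t = z^\top t + (x^*-z)^\top t \le \sup_{x \in \mathcal{N}}x^\top t + \frac12\sup_{x \in \mathcal{B}}x^\top t\,, &fg=000000$

and rearranging gives the factor 2.)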

This implies

$latex \displaystyle \|A-P\|=\sup_{(x,y) \in \mathcal{B}^2} x^\top(A-P)y\le 4\sup_{(x,y) \in \mathcal{N}^2} x^\top(A-P)y\,. &fg=000000$

Using a union bound, we find for any $latex {\varepsilon>0}&fg=000000$,

$latex \displaystyle \mathbb{P}(\|A-P\|> \varepsilon)\le 4\sum_{(x,y) \in \mathcal{N}^2}\mathbb{P}(x^\top(A-P)y>\varepsilon) &fg=000000$

We conclude with a Chernoff bound: for any $latex {s>0}&fg=000000$,

$latex \displaystyle \begin{array}{rcl} \mathbb{P}(x^\top(A-P)y>\varepsilon)&\le &\mathbb{E}(e^{sx^\top(A-P)y}) e^{-s\varepsilon}\\ &= &\prod_{i}\mathbb{E}(e^{sx_iy_i\xi_{i,i}}) \prod_{i<j}\mathbb{E}(e^{2sx_iy_j\xi_{i,j}}) e^{-s\varepsilon}\,. \end{array} &fg=000000$

Using Hoeffding’s lemma, since $latex {|\xi_{i,j}|\le 1}&fg=000000$, we get

$latex \displaystyle \prod_{i}\mathbb{E}(e^{sx_iy_i\xi_{i,i}}) \prod_{i<j}\mathbb{E}(e^{2sx_iy_j\xi_{i,j}}) \le e^{\frac{s^2}{4}} &fg=000000$

Putting together the above three displays and optimizing over $latex {s}&fg=000000$ yields

$latex \displaystyle \mathbb{P}(\|A-P\|> \varepsilon)\le 4|\mathcal{N}|^2 e^{-\varepsilon^2}\le 4e^{-\varepsilon^2+2n\log(6)}\,. &fg=000000$
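For completeness, the optimization over $latex {s}&fg=000000$ is explicit: the exponent $latex {\frac{s^2}{4}-s\varepsilon}&fg=000000$ is minimized at $latex {s=2\varepsilon}&fg=000000$, where it equals $latex {-\varepsilon^2}&fg=000000$. Choosing $latex {\varepsilon^2 = 2\log(6)n+\log(4/\delta)=C_\delta^2(n)}&fg=000000$ then makes the right-hand side equal to

$latex \displaystyle 4e^{-C_\delta^2(n)+2n\log(6)} = 4e^{-\log(4/\delta)}=\delta\,, &fg=000000$

which is exactly the statement of the lemma.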

This completes the proof of the lemma.
