Discrepancy algorithm inspired by gradient descent and multiplicative weights; after Levy, Ramadas and Rothvoss

A week or so ago at our Theory Lunch we had the pleasure to listen to Harishchandra Ramadas (student of Thomas Rothvoss) who told us about their latest discrepancy algorithm. I think the algorithm is quite interesting as it combines ideas from gradient descent and multiplicative weights in a non-trivial (yet very simple) way. Below I reprove Spencer’s 6 deviations theorem with their machinery (in the actual paper Levy, Ramadas and Rothvoss do more than this).

First let me remind you of the setting (see also this previous blog post for some motivation on discrepancy and a bit more context; by the way it is funny to read the comments in that post after this): given v_1, \hdots, v_n \in \mathbb{S}^{n-1} one wants to find x \in \{-1,1\}^n (think of it as a “coloring” of the coordinates) such that \max_{i \in [n]} |x \cdot v_i| \leq C for some numerical constant C>0 (when v_i is a normalized vector of 1‘s and 0‘s the quantity |x \cdot v_i| represents the unbalancedness of the coloring in the set corresponding to v_i). Clearly it suffices to give a method to find x \in [-1,1]^n with at least half of its coordinates equal to -1 or 1 and such that \max_{i \in [n]} |x \cdot v_i| \leq C' for some numerical constant C'>0 (indeed one can then simply recurse on the coordinates not yet set to -1 or 1; this is the so-called “partial coloring” argument). Note also that one can drop the absolute value by considering both v_i and -v_i (the number of constraints then becomes 2n, but this is easy to deal with and we ignore it here for the sake of simplicity).

The algorithm

Let x_0 = 0, w_0 = 1 \in \mathbb{R}^n. We run an iterative algorithm which keeps at every time step t \in \mathbb{N} a subspace U_t of valid update directions and then proceeds as follows. First find (using for instance a basis for U_t) z_t \in \mathbb{S}^{n-1} \bigcap U_t such that

(1)   \begin{equation*}  \sum_{i=1}^n \frac{w_t(i)}{\|w_t\|_1} (v_i \cdot z_t)^2 \leq \frac{1}{\mathrm{dim}(U_t)} . \end{equation*}

Then update x_{t+1}= x_t + \lambda_t z_t where \lambda_t \in [0,1] is maximal so that x_{t+1} remains in [-1,1]^n. Finally update the exponential weights by w_{t+1}(i) = w_t(i) \exp( v_i \cdot (x_{t+1} - x_t) ).


It remains to describe the subspace U_t. For this we introduce the set I_t \subset [n] containing the indices of the n/16 largest coordinates of w_t (the “inactive” coordinates) and the set F_t \subset [n] containing the coordinates of x_t equal to -1 or 1 (the “frozen” coordinates). The subspace U_t is now described as the set of points orthogonal to (i) x_t, (ii) e_j, j \in F_t, (iii) v_i, i \in I_t, (iv) \sum_{i=1}^n w_t(i) v_i. The intuition for (i) and (ii) is rather clear: for (i) one simply wants to ensure that the method keeps making progress towards the boundary of the cube (i.e., |x_{t+1}| > |x_t|) while for (ii) one wants to make sure that coordinates which are already “colored” (i.e., set to -1 or 1) are not updated. In particular (i) and (ii) together ensure that at each step either the squared norm of x_t increases by 1 (in particular \lambda_t=1) or one of the coordinates gets fixed forever to -1 or 1. This means that after at most 3 n /2 iterations one will have a partial coloring (i.e., half of the coordinates set to -1 or 1, which was our objective). Property (iii) ensures that we stop walking in the directions where we are not making good progress (there are many ways to do this and this precise form will make sense towards the end of the analysis). Property (iv) is closely related, and while it might be only a technical condition it can also be understood as ensuring that locally one is not increasing the softmax of the constraints: indeed (iv) exactly says that one should move orthogonally to the gradient of \log(\sum_{i=1}^n \exp(x \cdot v_i)).
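To make the algorithm concrete, here is a numpy sketch of one iteration (my own illustrative implementation, not the code of Levy, Ramadas and Rothvoss; in particular the null-space computation via SVD and the tolerance constants are arbitrary choices). The existence of a z_t satisfying (1) follows by averaging over an orthonormal basis of U_t, so the sketch simply takes the best basis vector:

```python
import numpy as np

def lrr_step(V, x, w):
    """One iteration of the update described above (illustrative sketch).

    V : (n, n) array whose rows are the unit vectors v_i.
    x : current point of the walk in [-1, 1]^n.
    w : current vector of exponential weights.
    """
    n = V.shape[0]
    frozen = np.isclose(np.abs(x), 1.0)            # (ii) coordinates set to -1 or 1
    inactive = np.argsort(w)[-max(1, n // 16):]    # (iii) the n/16 largest weights
    # Vectors the update direction must be orthogonal to: (i)-(iv).
    constraints = np.vstack([x[None, :],
                             np.eye(n)[frozen],
                             V[inactive],
                             (w @ V)[None, :]])    # sum_i w(i) v_i
    # Orthonormal basis of U_t = null space of the constraint matrix.
    _, s, Vt = np.linalg.svd(constraints)
    basis = Vt[np.sum(s > 1e-10):]                 # rows span U_t
    if basis.shape[0] == 0:
        return x, w
    # Averaging shows some basis vector satisfies (1); take the best one.
    p = w / w.sum()
    scores = ((basis @ V.T) ** 2) @ p              # weighted quadratic form q(u)
    z = basis[np.argmin(scores)]
    # Largest lambda in [0, 1] keeping x + lambda * z inside the cube.
    mask = np.abs(z) > 1e-12
    lam = min(1.0, np.min((np.sign(z[mask]) - x[mask]) / z[mask])) if mask.any() else 1.0
    x_new = np.clip(x + lam * z, -1.0, 1.0)
    w_new = w * np.exp(V @ (x_new - x))            # multiplicative weights update
    return x_new, w_new
```

Running this for a few steps from x_0 = 0, w_0 = 1 keeps x in the cube and the weights positive, as the analysis below requires.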

The analysis

Let Z_t = \sum_{i=1}^n w_t(i). Note that since z_t is on the sphere and \lambda_t \in [0,1] one has that |v_i \cdot (x_{t+1} - x_t)| \leq 1. Thus using \exp(x) \leq 1 + x + x^2 for x \in [-1,1], as well as property (iv) (i.e., \sum_{i=1}^n w_t(i) v_i \cdot z_t = 0) and \lambda_t \in [0,1] one obtains:

    \[Z_{t+1} = \sum_{i=1}^n w_t(i) \exp(v_i \cdot (x_{t+1} - x_t)) \leq \sum_{i=1}^n w_t(i) (1 + (v_i \cdot z_t)^2) .\]

Observe now that the subspace U_t has dimension at least n/4 (say for n \geq 16) and thus by (1) and the above inequalities one gets:

    \[Z_{t+1} \leq (1+ 4/n) Z_t .\]

In particular for any t \leq 2n, Z_{t} \leq C n for some numerical constant C >0. It only remains to observe that this ensures w_{2n}(i) = O(1) for any i \in [n] (this concludes the proof since we already observed that at time 2 n at least half of the coordinates are colored). For this last implication we simply use property (iii). Indeed assume that some coordinate i satisfies at some time t \leq 2n, w_t(i) > c e for some c>0. Since each update increases the weights (multiplicatively) by at most e it means that there is a previous time (say s) where this weight was larger than c and yet it got updated, meaning that it was not in the top n/16 weights, and in particular one had Z_s \geq c n / 16 which contradicts Z_{s} \leq C n for c large enough (namely c > 16 C).

Posted in Theoretical Computer Science

New journal: Mathematical Statistics and Learning

I am thrilled to announce the launch of a new journal, “Mathematical Statistics and Learning”, to be edited by the European Mathematical Society. The goal of the journal is to be the natural home for the top works addressing the mathematical challenges that arise from the current data revolution (as well as breakthroughs on classical data analysis problems!). I personally wish such a journal had existed for at least a decade and I am very happy to be part of this endeavor as an associate editor. Please consider submitting your very best mathematical work in statistics and learning to this journal!

Some more details provided by Gabor Lugosi and Shahar Mendelson on behalf of the Editorial Board:

The journal is devoted to research articles of the highest quality in all aspects of Mathematical Statistics and Learning, including those studied in traditional areas of Statistics and in Machine Learning as well as in Theoretical Computer Science and Signal Processing. We believe that at this point in time there is no venue for top level mathematical publications in those areas, and our aim is to make the new journal such a venue.

The journal’s Editorial Board consists of the Editors,

Luc Devroye (McGill),

Gabor Lugosi (UPF Barcelona),

Shahar Mendelson (Technion and ANU),

Elchanan Mossel (MIT),

Mike Steele (U. Pennsylvania),

Alexandre Tsybakov (ENSAE),

Roman Vershynin (U. Michigan),


and the Associate Editors,

Sebastien Bubeck (Microsoft Research),

Andrea Montanari (Stanford),

Jelani Nelson (Harvard),

Philippe Rigollet (MIT),

Sara van de Geer (ETH – Zurich),

Ramon van Handel (Princeton),

Rachel Ward (UT – Austin).

The success of the journal depends entirely on our community;  we need your help and support in making it the success we believe it can be. We therefore ask that you consider submitting to the journal results you think are of a very high quality.

The first issue of the journal is scheduled to appear in early 2018.

Posted in Announcement

STOC 2017 accepted papers

The list of accepted papers to STOC 2017 has just been released. Following the trend in recent years there are quite a few learning theory papers! I have already blogged about the kernel-based convex bandit algorithm, as well as the smoothed poly-time local max-cut (a.k.a. asynchronous Hopfield network). Some of the other learning papers that caught my attention: yet again a new viewpoint on acceleration for convex optimization; some progress on the complexity of finding stationary points of non-convex functions; a new twist on tensor decomposition for poly-time learning of latent variable models; an approximation algorithm for low-rank approximation in \ell_1 norm; a new framework to learn from adversarial data; some progress on the trace reconstruction problem (amazingly the exact same result was discovered independently by two teams, see here and here); a new sampling technique for graphical models; a new relevant statistical physics result; faster submodular minimization; and finally some new results on nearest neighbor search.

Posted in Conference/workshop, Theoretical Computer Science

Guest post by Miklos Racz: Confidence sets for the root in uniform and preferential attachment trees

In the final post of this series (see here for the previous posts) we consider yet another point of view for understanding networks. In the previous posts we studied random graph models with community structure and also models with an underlying geometry. While these models are important and lead to fascinating problems, they are also static in time. Many real-world networks are constantly evolving, and their understanding requires models that reflect this. This point of view brings about a host of new interesting and challenging statistical inference questions that concern the temporal dynamics of these networks. In this post we study such questions for two canonical models of randomly growing trees: uniform attachment and preferential attachment.

Models of growing graphs

A natural general model of randomly growing graphs can be defined as follows. For n \geq k \geq 1 and a graph S on k vertices, define the random graph G(n,S) by induction. First, set G(k,S) = S; we call S the seed of the graph evolution process. Then, given G(n,S), G(n+1, S) is formed from G(n,S) by adding a new vertex and some new edges according to some adaptive rule. If S is a single vertex, we write simply G(n) instead of G(n,S).

There are several rules one can consider; here we study perhaps the two most natural ones: uniform attachment and preferential attachment (denoted \mathrm{UA} and \mathrm{PA} in the following). Moreover, for simplicity we focus on the case of growing trees, where at every time step a single edge is added. Uniform attachment trees are defined recursively as follows: given \mathrm{UA}(n,S), \mathrm{UA}(n+1,S) is formed from \mathrm{UA}(n,S) by adding a new vertex u and adding a new edge uv where the vertex v is chosen uniformly at random among vertices of \mathrm{UA} \left( n, S \right), independently of all past choices. Preferential attachment trees are defined similarly, except that v is chosen with probability proportional to its degree:

    \[ \mathbb{P}\left( v = i \, \middle| \, \mathrm{PA}(n, S) \right) = \frac{d_{\mathrm{PA}(n, S)}(i)}{2 \left( n - 1 \right)}, \]

where for a tree T we denote by d_{T} (u) the degree of vertex u in T.
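These two growth rules are easy to simulate; here is a small self-contained Python sketch (my own illustrative code, with made-up names). For PA it uses the classical trick that picking a uniformly random endpoint of a uniformly random edge selects a vertex with probability exactly d(v) / (2(n-1)), which also shows why starting PA from a single edge is natural:

```python
import random

def grow_tree(n, rule):
    """Grow a random tree on vertices 0..n-1 by uniform ('UA') or preferential
    ('PA') attachment, starting from the single edge (0, 1)."""
    edges = [(0, 1)]
    deg = {0: 1, 1: 1}
    for u in range(2, n):
        if rule == 'UA':
            v = random.choice(list(deg))            # uniform over existing vertices
        else:
            # uniform edge + uniform endpoint = vertex w.p. deg(v) / (2 * #edges)
            v = random.choice(random.choice(edges))
        edges.append((u, v))
        deg[u] = 1
        deg[v] += 1
    return edges, deg
```

For instance, the maximum degree of grow_tree(10**4, 'PA') is typically much larger than that of the corresponding UA tree, in line with the rich-get-richer discussion below.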

Questions: detection and estimation

The most basic questions to consider are those of detection and estimation. Can one detect the influence of the initial seed graph? If so, is it possible to estimate the seed? Can one find the root if the process was started from a single node? We introduce these questions in the general model of randomly growing graphs described above, even though we study them in the special cases of uniform and preferential attachment trees later.

The detection question can be rephrased in the terminology of hypothesis testing. Given two potential seed graphs S and T, and an observation R which is a graph on n vertices, one wishes to test whether R \sim G(n, S) or R \sim G(n, T). The question then boils down to whether one can design a test with asymptotically (in n) nonnegligible power. This is equivalent to studying the total variation distance between G(n, S) and G(n, T), so we naturally define

    \begin{equation*} \delta(S, T) := \lim_{n \to \infty} \mathrm{TV}(G(n, S), G(n, T)), \end{equation*}

where G(n,S) and G(n,T) are random elements in the finite space of unlabeled graphs with n vertices. This limit is well-defined because \mathrm{TV}(G(n, S), G(n, T)) is nonincreasing in n (since if G(n,S) = G(n,T), then the evolution of the random graphs can be coupled such that G(n', S) = G(n', T) for all n' \geq n) and always nonnegative.

If the seed has an influence, it is natural to ask whether one can estimate S from G(n,S) for large n. If so, can the subgraph corresponding to the seed be located in G(n,S)? We study this latter question in the simple case when the process starts from a single vertex called the root. (In the case of preferential attachment, starting from a single vertex is not well-defined; in this case we start the process from a single edge and the goal is to find one of its endpoints.) A root-finding algorithm is defined as follows. Given G(n) and a target accuracy \epsilon \in (0,1), a root-finding algorithm outputs a set H\left( G(n), \epsilon \right) of K(\epsilon) vertices such that the root is in H\left( G(n), \epsilon \right) with probability at least 1-\epsilon (with respect to the random generation of G(n)).

An important aspect of this definition is that the size of the output set is allowed to depend on \epsilon, but not on the size n of the input graph. Therefore it is not clear that root-finding algorithms exist at all. Indeed, there are examples when they do not exist: consider a path that grows by picking one of its two ends at random and extending it by a single edge. However, it turns out that in many interesting cases root-finding algorithms do exist. In such cases it is natural to ask for the best possible value of K(\epsilon).

The influence of the seed

Consider distinguishing between a PA tree started from a star with 10 vertices, S_{10}, and a PA tree started from a path with 10 vertices, P_{10}. Since the preferential attachment mechanism incorporates the rich-get-richer phenomenon, one expects the degree of the center of the star in \mathrm{PA}(n,S_{10}) to be significantly larger than the degree of any of the initial vertices in the path in \mathrm{PA}(n,P_{10}). This intuition guided Bubeck, Mossel, and Racz when they initiated the theoretical study of the influence of the seed in PA trees. They showed that this intuition is correct: the limiting distribution of the maximum degree of the PA tree indeed depends on the seed. Using this they were able to show that for any two seeds S and T with at least 3 vertices and different degree profiles we have

    \[\delta_{\mathrm{PA}} (S,T) > 0.\]

However, statistics based solely on degrees cannot distinguish all pairs of nonisomorphic seeds. This is because if S and T have the same degree profiles, then it is possible to couple \mathrm{PA}(n,S) and \mathrm{PA}(n,T) such that they have the same degree profiles for every n. In order to distinguish between such seeds, it is necessary to incorporate information about the graph structure into the statistics that are studied. This was done successfully by Curien, Duquesne, Kortchemski, and Manolescu, who analyzed statistics that measure the geometry of large degree nodes. These results can be summarized in the following theorem.

Theorem: The seed has an influence in PA trees in the following sense. For any trees S and T that are nonisomorphic and have at least 3 vertices, we have

    \[\delta_{\mathrm{PA}}(S,T) > 0.\]

In the case of uniform attachment, degrees do not play a special role, so initially one might even think that the seed has no influence in the limit. However, it turns out that the right perspective is not to look at degrees but rather the sizes of appropriate subtrees (we shall discuss such statistics later). By extending the approach of Curien et al. to deal with such statistics, Bubeck, Eldan, Mossel, and Racz showed that the seed has an influence in uniform attachment trees as well.

Theorem: The seed has an influence in UA trees in the following sense. For any trees S and T that are nonisomorphic and have at least 3 vertices, we have

    \[\delta_{\mathrm{UA}}(S,T) > 0.\]

These results, together with a lack of examples showing opposite behavior, suggest that for most models of randomly growing graphs the seed has an influence.

Question: How common is the phenomenon observed in Theorems 1 and 2? Is there a natural large class of randomly growing graphs for which the seed has an influence? That is, models where for any two seeds S and T (perhaps satisfying an extra condition), we have \delta (S,T) > 0. Is there a natural model where the seed has no influence?

Finding Adam

These theorems about the influence of the seed open up the problem of finding the seed. Here we present the results of Bubeck, Devroye, and Lugosi who first studied root-finding algorithms in the case of UA and PA trees.

They showed that root-finding algorithms indeed exist for PA trees and that the size of the best confidence set is polynomial in 1/\epsilon.

Theorem: There exists a polynomial time root-finding algorithm for PA trees with

    \[K(\epsilon) \leq c \frac{\log^{2} (1/\epsilon)}{\epsilon^{4}}\]

for some finite constant c. Furthermore, there exists a positive constant c' such that any root-finding algorithm for PA trees must satisfy

    \[K(\epsilon) \geq \frac{c'}{\epsilon}.\]

They also showed the existence of root-finding algorithms for UA trees. In this model, however, there are confidence sets whose size is subpolynomial in 1/\epsilon. Moreover, the size of any confidence set has to be at least superpolylogarithmic in 1/\epsilon.

Theorem: There exists a polynomial time root-finding algorithm for UA trees with

    \[K(\epsilon) \leq \exp \left( c \tfrac{\log (1/\epsilon)}{\log \log (1/\epsilon)} \right)\]

for some finite constant c. Furthermore, there exists a positive constant c' such that any root-finding algorithm for UA trees must satisfy

    \[K(\epsilon) \geq \exp \left( c' \sqrt{\log (1/\epsilon)} \right).\]

These theorems show an interesting quantitative difference between the two models: finding the root is exponentially more difficult in PA than in UA. While this might seem counter-intuitive at first, the reason behind this can be traced back to the rich-get-richer phenomenon: the effect of a rare event where not many vertices attach to the root gets amplified by preferential attachment, making it harder to find the root.

Proofs using Polya urns

We now explain the basic ideas that go into proving Theorems 3 and 4 and prove some simpler cases. While UA and PA are arguably the most basic models of randomly growing graphs, the evolution of various simple statistics, such as degrees or subtree sizes, can be described using even simpler building blocks: Pólya urns. In this post we assume familiarity with Pólya urns; we refer the interested reader to the lecture notes for a primer on Pólya urns.

A root-finding algorithm based on the centroid

We start by presenting a simple root-finding algorithm for UA trees. This algorithm is not optimal, but its analysis is simple and highlights the basic ideas.

For a tree T, if we remove a vertex v \in V(T), then the tree becomes a forest consisting of disjoint subtrees of the original tree. Let \psi_{T} \left( v \right) denote the size (i.e., the number of vertices) of the largest component of this forest. A vertex v that minimizes \psi_{T} \left( v \right) is known as a centroid of T; one can show that there can be at most two centroids. We define the confidence set H_{\psi} by taking the set of K vertices with smallest \psi values.
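The statistic \psi_{T} and the confidence set H_{\psi} can be computed in linear time with a single rooted traversal: rooting the tree anywhere, the components left after deleting v are exactly the subtrees of its children plus the component on the parent side. Here is an illustrative Python sketch (my own code, with hypothetical names):

```python
from collections import defaultdict

def centroid_confidence_set(edges, K):
    """Return the K vertices with the smallest psi values, where psi(v) is the
    size of the largest component left after deleting v from the tree."""
    adj = defaultdict(list)
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    n = len(adj)
    root = next(iter(adj))
    # Iterative DFS to get a traversal order and parent pointers.
    order, parent, stack = [], {root: None}, [root]
    while stack:
        v = stack.pop()
        order.append(v)
        for u in adj[v]:
            if u != parent.get(v):
                parent[u] = v
                stack.append(u)
    # Subtree sizes by accumulating in reverse (post-order) direction.
    size = {v: 1 for v in adj}
    for v in reversed(order):
        if parent[v] is not None:
            size[parent[v]] += size[v]
    psi = {}
    for v in adj:
        comps = [size[u] for u in adj[v] if parent.get(u) == v]  # child subtrees
        comps.append(n - size[v])        # the component on the parent side
        psi[v] = max(comps)
    return sorted(adj, key=lambda v: psi[v])[:K]
```

On a star the center is the unique centroid, while on a path the middle vertex is.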

Theorem: The centroid-based H_{\psi} defined above is a root-finding algorithm for the UA tree. More precisely, if

    \[K \geq \frac{5}{2} \frac{\log \left( 1 / \epsilon \right)}{\epsilon},\]


then

    \[ \liminf_{n \to \infty} \mathbb{P} \left( 1 \in H_{\psi} \left( \mathrm{UA} \left( n \right)^{\circ} \right) \right) \geq 1 - \frac{4\epsilon}{1-\epsilon}, \]

where 1 denotes the root, and \mathrm{UA} \left( n \right)^{\circ} denotes the unlabeled version of \mathrm{UA} \left( n \right).

Proof: We label the vertices of the UA tree in chronological order. We start by introducing some notation that is useful throughout the proof. For 1 \leq i \leq k, denote by T_{i,k} the tree containing vertex i in the forest obtained by removing in \mathrm{UA}\left( n \right) all edges between vertices \left\{ 1, \dots, k \right\}. See the figure for an illustration.


Let \left| T \right| denote the size of a tree T, i.e., the number of vertices it contains. Note that the vector

    \[\left( \left| T_{1, k} \right|, \dots, \left| T_{k,k} \right| \right)\]

evolves according to the classical Pólya urn with k colors, with initial state \left( 1, \dots, 1 \right). Therefore the normalized vector

    \[\left( \left| T_{1, k} \right|, \dots, \left| T_{k,k} \right| \right) / n\]

converges in distribution to a Dirichlet distribution with parameters \left( 1, \dots, 1 \right).
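As an aside, this urn description is easy to check numerically. The following sketch (my own illustrative code) simulates the k-color urn driven by uniform attachment and compares the empirical mean of \left| T_{1,k} \right| / n with the Dirichlet marginal \mathrm{Beta}(1, k-1), whose mean is 1/k:

```python
import random

def mean_first_subtree_fraction(n, k, trials, seed=0):
    """Simulate the k-color Polya urn for (|T_{1,k}|, ..., |T_{k,k}|) in UA(n),
    starting from one vertex per subtree, and return the empirical mean of
    |T_{1,k}| / n over independent trials."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        counts = [1] * k
        for m in range(k, n):
            r = rng.randrange(m)   # uniform attachment to one of the m vertices
            j = 0
            while r >= counts[j]:  # the chosen vertex lies in subtree j
                r -= counts[j]
                j += 1
            counts[j] += 1
        total += counts[0] / n
    return total / trials
```

By exchangeability the exact mean is 1/k for every n, so only sampling error remains.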

Now observe that

    \[ \mathbb{P} \left( 1 \notin H_{\psi} \right) \leq \mathbb{P} \left( \exists i > K : \psi \left( i \right) \leq \psi \left( 1 \right) \right) \leq \mathbb{P} \left( \psi \left( 1 \right) \geq \left( 1 - \epsilon \right) n \right) + \mathbb{P} \left( \exists i > K : \psi \left( i \right) \leq \left( 1 - \epsilon \right) n \right). \]

We bound the two terms appearing above separately, starting with the first one. Note that

    \[\psi \left( 1 \right) \leq \max \left\{ \left| T_{1,2} \right|, \left| T_{2,2} \right| \right\},\]

and both \left| T_{1,2} \right| / n and \left| T_{2,2} \right| / n converge in distribution to a uniform random variable in \left[ 0, 1 \right]. Hence a union bound gives us that

    \[ \limsup_{n \to \infty} \mathbb{P} \left( \psi \left( 1 \right) \geq \left( 1 - \epsilon \right) n \right) \leq 2 \lim_{n \to \infty} \mathbb{P} \left( \left| T_{1,2} \right| \geq \left( 1 - \epsilon \right) n \right) = 2 \epsilon. \]

For the other term, first observe that for any i > K we have

    \[ \psi \left( i \right) \geq \min_{1 \leq k \leq K} \sum_{j = 1, j \neq k}^{K} \left| T_{j,K} \right|. \]

Now using results on Pólya urns we have that for every k such that 1 \leq k \leq K, the random variable

    \[\frac{1}{n} \sum_{j = 1, j \neq k}^{K} \left| T_{j,K} \right|\]

converges in distribution to the \mathrm{Beta} \left( K - 1, 1 \right) distribution. Hence by a union bound we have that

    \begin{align*} \limsup_{n \to \infty} \mathbb{P} \left( \exists i > K : \psi \left( i \right) \leq \left( 1 - \epsilon \right) n \right) &\leq \lim_{n \to \infty} \mathbb{P} \left( \exists 1 \leq k \leq K : \sum_{j = 1, j \neq k}^{K} \left| T_{j,K} \right| \leq \left( 1 - \epsilon \right) n \right) \\ &\leq K \left( 1 - \epsilon \right)^{K - 1}. \end{align*}

Putting together the two bounds gives that

    \[ \limsup_{n \to \infty} \mathbb{P} \left( 1 \notin H_{\psi} \right) \leq 2 \epsilon + K \left( 1 - \epsilon \right)^{K-1}, \]

which concludes the proof due to the assumption on K.



The same estimator H_{\psi} works for the preferential attachment tree as well, if one takes

    \[K \geq C \frac{\log^{2} \left( 1 / \epsilon \right)}{\epsilon^{4}}\]

for some positive constant C. The proof mirrors the one above, but involves a few additional steps; we refer to Bubeck et al. for details.

For uniform attachment the bound on K given by Theorem 5 is not optimal. It turns out that it is possible to write down the maximum likelihood estimator (MLE) for the root in the UA model; we do not do so here, see Bubeck et al. One can view the estimator H_{\psi} based on the centroid as a certain “relaxation” of the MLE. By constructing a certain “tighter” relaxation of the MLE, one can obtain a confidence set with size subpolynomial in 1/\epsilon as described in Theorem 4. The analysis of this is the most technical part of Bubeck et al. and we refer to this paper for more details.

Lower bounds

As mentioned above, the MLE for the root can be written down explicitly. This aids in showing a lower bound on the size of a confidence set. In particular, Bubeck et al. define a set of trees whose probability of occurrence under the UA model is not too small, yet the MLE provably fails, giving the lower bound described in Theorem 4. We refer to Bubeck et al. for details.

On the other hand, for the PA model it is not necessary to use the structure of the MLE to obtain a lower bound. A simple symmetry argument suffices to show the lower bound in Theorem 3, which we now sketch.

First observe that the probability of error for the optimal procedure is non-decreasing with n, since otherwise one could simulate the process to obtain a better estimate. Thus it suffices to show that the optimal procedure must have a probability of error of at least \epsilon for some finite n. We show that there is some finite n such that with probability at least 2\epsilon, the root is isomorphic to at least 2c / \epsilon vertices in \mathrm{PA}(n). Thus if a procedure outputs at most c/\epsilon vertices, then it must make an error at least half the time (so with probability at least \epsilon).

Observe that the probability that the root is a leaf in \mathrm{PA}(n) is

    \[\frac{1}{2} \times \frac{3}{4} \times \dots \times \left( 1 - \frac{1}{2n} \right) = \Theta \left( 1 / \sqrt{n} \right).\]

By choosing n = \Theta \left( 1 / \epsilon^{2} \right), this happens with probability \Theta \left( \epsilon \right). Furthermore, conditioned on the root being a leaf, with constant probability vertex 2 is connected to \Theta \left( \sqrt{n} \right) = \Theta \left( 1 / \epsilon \right) leaves, which are then isomorphic to the root.
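The \Theta(1/\sqrt{n}) claim for the displayed product can be checked directly: the product equals \binom{2N}{N}/4^N for N = n-1, which is asymptotically 1/\sqrt{\pi N}. A quick sketch (illustrative code, my own naming):

```python
import math

def prob_root_stays_leaf(n):
    """P(the root is still a leaf in PA(n)): every newcomer avoids the
    degree-1 root, giving the product (1/2)(3/4)...(1 - 1/(2(n-1)))."""
    p = 1.0
    for k in range(1, n):
        p *= 1 - 1 / (2 * k)
    return p

# The product equals binom(2N, N) / 4^N for N = n-1, which is ~ 1/sqrt(pi*N),
# matching the Theta(1/sqrt(n)) claim in the text.
ratio = prob_root_stays_leaf(10001) * math.sqrt(math.pi * 10000)
```

The ratio above is already within about 10^{-5} of 1 at N = 10^4.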

Open problems

There are many open problems and further directions that one can pursue; the four papers we have discussed contain 20 open problems and conjectures alone, and we urge the reader to have a look and try to solve them!

Posted in Probability theory, Random graphs

Geometry of linearized neural networks

This week we had the pleasure to host Tengyu Ma from Princeton University who told us about the recent progress he has made with co-authors to understand various linearized versions of neural networks. I will describe here two such results, one for Residual Neural Networks and one for Recurrent Neural Networks.

Some properties to look for in non-convex optimization

We will say that a function f admits first order optimality (respectively second order optimality) if all critical points (respectively all local minima) of f are global minima (of course first order optimality implies second order optimality for smooth functions). In particular with first order optimality one has that gradient descent converges to the global minimum, and with second order optimality this is also true provided that one avoids saddle points. To obtain rates of convergence it can be useful to make more quantitative statements. For example we say that f is \alpha-Polyak if

    \[\|\nabla f(x)\|^2 \geq \alpha (f(x) - f^*) .\]

Clearly \alpha-Polyak implies first order optimality, but more importantly it also implies linear convergence rate for gradient descent on f. A variant of this condition is \alpha-weak-quasi-convexity:

    \[\langle \nabla f(x), x-x^*\rangle \geq \alpha (f(x) - f^*) ,\]

in which case gradient descent converges at the slow non-smooth rate 1/\sqrt{t} (and in this case it is also robust to noise, i.e. one can write a stochastic gradient descent version). The proofs of these statements just mimic the usual convex proofs. For more on these conditions see for instance this paper.
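As a small sanity check of the linear convergence claim, here is an illustrative experiment (my own sketch, not from the cited paper): take f(x) = \|Ax - b\|^2 with a wide, rank-deficient A, so that f is not strongly convex (it has a whole affine space of minimizers) yet satisfies the Polyak condition, one can check with \alpha = 4 \sigma_{\min}(A)^2 where \sigma_{\min} is the smallest nonzero singular value; gradient descent still converges linearly to the global minimum f^* = 0:

```python
import numpy as np

def gradient_descent(grad, x0, eta, steps):
    """Plain gradient descent with constant step size eta."""
    x = x0
    for _ in range(steps):
        x = x - eta * grad(x)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 10))   # wide matrix: many global minimizers, f* = 0
b = rng.standard_normal(5)
f = lambda x: np.sum((A @ x - b) ** 2)
grad = lambda x: 2 * A.T @ (A @ x - b)
eta = 1 / (2 * np.linalg.norm(A, 2) ** 2)   # 1 / (smoothness constant of f)
x = gradient_descent(grad, np.zeros(10), eta, 5000)
```

Despite the flat directions (the null space of A), f(x) decreases geometrically to 0, exactly as the Polyak condition predicts.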

Linearized Residual Networks

Recall that a neural network is just a map x \in \mathbb{R}^n \mapsto \sigma \circ A_L \circ \sigma \circ A_{L-1} \circ \hdots \circ \sigma \circ A_0 (x) where A_0,\hdots, A_{L} are linear maps (i.e. they are the matrices parametrizing the neural network) and \sigma is some non-linear map (the most popular one, ReLU, is just the coordinate-wise positive part). Alternatively you can think of a neural network as a sequence of hidden states h_0, h_1,\hdots,h_L where h_0=x and h_{t+1} = \sigma(A_t h_{t}). In 2015 a team of researchers at MSR Asia introduced the concept of a residual neural network where the hidden states are now updated as before for t even but for t odd we set h_{t+1} = h_{t-1} + \sigma(A_t h_t). Apparently this trick allowed them to train much deeper networks, though it is not clear why this would help from a theoretical point of view (the intuition is that at least when the network is initialized with all matrices being 0 it still does something non-trivial, namely it computes the identity).

In their most recent paper Moritz Hardt and Tengyu Ma try to explain why adding this “identity connection” could be a good idea from a geometric point of view. They consider an (extremely) simplified model where there is no non-linearity, i.e. \sigma is the identity map. A neural network is then just a product of matrices. In particular the landscape we are looking at for least-squares with such a model is of the form:

    \[g(A_0,\hdots, A_L) = \mathbb{E}_{(x,y) \sim \nu} \|\prod_{i=0}^L A_i x - y\|^2 ,\]

which is of course a non-convex function (just think of the function (a,b) \mapsto (ab-1)^2 and observe that on the segment a=b it gives the non-convex function (a^2-1)^2). However it actually satisfies the second-order optimality condition:

Proposition [Kawaguchi 2016]

Assume that x has a full rank covariance matrix and that y=Rx for some deterministic matrix R. Then all local minima of g are global minima.

I won’t give the proof of this result as it requires taking the second derivative of g, which is a bit annoying (I will give the first-derivative computation below). Now in this linearized setting the residual network version (where the identity connection is added at every layer) corresponds simply to a reparametrization around the identity, in other words we consider now the following function:

    \[f(A_0,\hdots, A_L) = \mathbb{E}_{(x,y) \sim \nu} \|\prod_{i=0}^L (A_i+\mathrm{I}) x - y\|^2 .\]

Proposition [Hardt and Ma 2016]

Assume that x has a full rank covariance matrix and that y= Rx for some deterministic matrix R. Then f has first order optimality on the set \{(A_0, \hdots, A_L) : \|A_i\| <1\}.

Thus adding the identity connection makes the objective function better behaved around the starting point with all-zeros matrices (in the sense that gradient descent doesn’t have to worry about avoiding saddle points). The proof is just a few lines of standard calculations for derivatives of functions with matrix-valued inputs.

Proof: One has with E = R - \prod_{i=0}^L (A_i+\mathrm{I}) and \Sigma = \mathbb{E}\, x x^{\top},

    \[f = \mathbb{E}\, (E x)^{\top} E x = \mathrm{Tr}(E \Sigma E^{\top}) =: \|E\|_{\Sigma}^2 ,\]

so with E_{<i} = \prod_{j <i} (A_j + \mathrm{I}) and E_{>i}=\prod_{j >i} (A_j + \mathrm{I}),

    \begin{eqnarray*} f(A_0,\hdots, A_i + V, \hdots, A_L) & = & \|R - \prod_{j <i} (A_j + \mathrm{I}) \times (A_i + V +\mathrm{I}) \times \prod_{j >i} (A_j + \mathrm{I})\|_{\Sigma}^2 \\ & = & \|E + E_{<i} V E_{>i}\|_{\Sigma}^2 \\ & = & \|E\|_{\Sigma}^2 + 2 \langle \Sigma E, E_{<i} V E_{>i} \rangle + \|E_{<i} V E_{>i}\|_{\Sigma}^2 , \end{eqnarray*}

which exactly means that the derivative of f with respect to A_i is equal to E_{>i}^{\top} \Sigma E E_{<i}^{\top}. On the set under consideration one has that E_{>i} and E_{<i} are invertible (and so is \Sigma by assumption), and thus if this derivative is equal to 0 it must be that E=0 and thus f=0 (which is the global minimum).
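Sign and ordering conventions in such matrix-derivative computations are easy to get wrong, so here is a numerical check (my own sketch, with a fixed convention where the product means (A_L+\mathrm{I})\cdots(A_0+\mathrm{I}); in this convention the derivative in A_i works out to -2 E_{\mathrm{left}}^{\top} E \Sigma E_{\mathrm{right}}^{\top}, with E_{\mathrm{left}} the product over j>i and E_{\mathrm{right}} the product over j<i). The formula is compared against central finite differences:

```python
import numpy as np

rng = np.random.default_rng(0)
n, L = 3, 2
I = np.eye(n)
R = rng.standard_normal((n, n))
M = rng.standard_normal((n, n))
Sigma = M @ M.T + I                          # an SPD covariance E[x x^T]
As = [0.3 * rng.standard_normal((n, n)) for _ in range(L + 1)]  # small matrices

def residual(As):
    """E = R - (A_L + I) ... (A_0 + I), product applied right-to-left."""
    P = I
    for A in As:
        P = (A + I) @ P
    return R - P

def f(As):
    E = residual(As)
    return np.trace(E @ Sigma @ E.T)         # the population risk ||E||_Sigma^2

def grad_i(As, i):
    """Analytic derivative of f in A_i: -2 E_left^T E Sigma E_right^T."""
    E_right, E_left = I, I
    for A in As[:i]:
        E_right = (A + I) @ E_right
    for A in As[i + 1:]:
        E_left = (A + I) @ E_left
    return -2 * E_left.T @ residual(As) @ Sigma @ E_right.T

# Central finite differences over every entry of A_i.
i, eps = 1, 1e-5
G = grad_i(As, i)
G_fd = np.zeros((n, n))
for a in range(n):
    for b in range(n):
        Ap = [A.copy() for A in As]
        Am = [A.copy() for A in As]
        Ap[i][a, b] += eps
        Am[i][a, b] -= eps
        G_fd[a, b] = (f(Ap) - f(Am)) / (2 * eps)
```

The two gradients agree to finite-difference accuracy, confirming that the derivative is a product of the E_{<i}, E_{>i} factors sandwiching E \Sigma, as in the proof above.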

Linearized recurrent neural networks

The simplest version of a recurrent neural network is as follows. It is a mapping of the form (x_1,\hdots,x_T) \mapsto (y_1,\hdots,y_T) (we are thinking of doing sequence to sequence prediction). In these networks the hidden state is updated as h_{t+1} = \sigma_1(A h_{t} + B x_{t}) (with h_1=0) and the output is y_t = \sigma_2(C h_t + D x_t). I will now describe a paper by Hardt, Ma and Recht (see also this blog post) that tries to understand the geometry of least-squares for this problem in the linearized version where \sigma_1 = \sigma_2 = \mathrm{Id}. That is we are looking at the function:

    \[f(\hat{A}, \hat{B}, \hat{C}, \hat{D})=\mathbb{E}_{(x_t)} \frac{1}{T} \sum_{t=1}^T \|\hat{y}_t - y_t\|^2 ,\]

where (y_t) is obtained from (x_t) via some unknown recurrent neural network with parameters A,B,C,D. First observe that by induction one can easily see that h_{t+1} = \sum_{k=1}^t A^{t-k} B x_k and y_t = D x_t + \sum_{k=1}^{t-1} C A^{t-1-k} B x_k. In particular, assuming that (x_t) is an i.i.d. isotropic sequence one obtains

    \[\E \|y_t - \hat{y}_t\|_2^2 = \|D-\hat{D}\|_F^2 + \sum_{k=1}^{t-1} \|\hat{C} \hat{A}^{t-1-k} \hat{B} - C A^{t-1-k} B\|_F^2 ,\]

and thus

    \[f(\hat{A}, \hat{B}, \hat{C}, \hat{D})=\|D-\hat{D}\|_F^2 + \sum_{k=1}^{T-1} (1- \frac{k}{T}) \|\hat{C} \hat{A}^{k-1} \hat{B} - C A^{k-1} B\|_F^2 .\]
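The unrolled expressions for h_{t+1} and y_t used above can be checked numerically (a minimal numpy sketch; dimensions, horizon and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
dh, dx, dy, T = 3, 2, 2, 6

A = 0.4 * rng.standard_normal((dh, dh))
B = rng.standard_normal((dh, dx))
C = rng.standard_normal((dy, dh))
D = rng.standard_normal((dy, dx))
xs = [None] + [rng.standard_normal(dx) for _ in range(T)]  # xs[1..T], 1-indexed

# recursion: h_1 = 0, y_t = C h_t + D x_t, h_{t+1} = A h_t + B x_t
h = np.zeros(dh)
ys_rec = []
for t in range(1, T + 1):
    ys_rec.append(C @ h + D @ xs[t])
    h = A @ h + B @ xs[t]

# closed form: y_t = D x_t + sum_{k=1}^{t-1} C A^{t-1-k} B x_k
ys_closed = [D @ xs[t] + sum((C @ np.linalg.matrix_power(A, t - 1 - k) @ B @ xs[k]
                              for k in range(1, t)), np.zeros(dy))
             for t in range(1, T + 1)]

err = max(np.max(np.abs(a - b)) for a, b in zip(ys_rec, ys_closed))
print(err)
```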

In particular we see that the effect of \hat{D} is decoupled from the other variables and that it appears as a convex function, thus we will just ignore it. Next we make the natural assumption that the spectral radius of A is less than 1 (for otherwise the influence of the initial input x_1 grows over time, which doesn’t seem natural) and thus up to some small error term (for large T) one can consider the idealized risk:

    \[(\hat{A},\hat{B}, \hat{C}) \mapsto \sum_{k=0}^{+\infty} \|\hat{C} \hat{A}^{k} \hat{B} - C A^{k} B\|_F^2 .\]

The next idea is a cute one which makes the above expression more tractable. Consider the series r_k = C A^k B and its Fourier transform:

    \[G(\theta) = \sum_{k=0}^{+\infty} r_k \exp(i k \theta) = C (\sum_{k=0}^{+\infty} (\exp(i \theta) A)^k) B = C(\mathrm{I} - \exp(i \theta) A)^{-1} B .\]
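This closed form can be sanity-checked against a truncated series numerically (a numpy sketch; rescaling A to spectral radius 1/2 is just an illustrative way to guarantee convergence):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n))
A = 0.5 * A / np.max(np.abs(np.linalg.eigvals(A)))  # spectral radius 1/2
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

z = np.exp(1j * 0.7)  # z = exp(i theta) for an arbitrary theta

# closed form C (I - z A)^{-1} B vs. the truncated series sum_k C A^k B z^k
G_closed = (C @ np.linalg.inv(np.eye(n) - z * A) @ B)[0, 0]
G_series = sum((C @ np.linalg.matrix_power(A, k) @ B)[0, 0] * z ** k
               for k in range(300))
print(abs(G_closed - G_series))
```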

By Parseval’s theorem the idealized risk is equal to the L_2 distance between G and \hat{G} (i.e. \int_{[-\pi, \pi]} \|G(\theta)-\hat{G}(\theta)\|_F^2 d\theta). We will now show that under appropriate further assumptions, for any \theta, \|G(\theta) - \hat{G}(\theta)\|_F^2 is weakly-quasi-convex in (\hat{A},\hat{B},\hat{C}) (in particular this shows that the idealized risk is weakly-quasi-convex). The big assumption that Hardt, Ma and Recht make is that the system is a “single-input single-output” model, that is both x_t and y_t are scalar. In this case it turns out that control theory shows that there is a “canonical controllable form” where B=(0,\hdots,0,1), C=(c_1,\hdots,c_n) and A has zeros everywhere except on the upper diagonal where it has ones and on the last row where it has a_n,\hdots, a_1 (I don’t know the proof of this result, if some reader has a pointer for a simple proof please share in the comments!). Note that with this form the system is simple to interpret as one has Ah = (h(2),\hdots, h(n), \langle a,h\rangle) and Ah+Bx =(h(2),\hdots, h(n), \langle a,h\rangle+x). Now with just a few lines of algebra:

    \[G(\theta) = \frac{c_1 + c_2 z + \hdots + c_{n} z^{n-1}}{z^n + a_1 z^{n-1} + \hdots + a_n}, \ \text{where} \ z=\exp(i \theta) .\]

Thus we are just asking to check the weak-quasi-convexity of

    \[(\hat{a}, \hat{c}) \mapsto |\frac{\hat{c}_1 + \hat{c}_2 z + \hdots + \hat{c}_{n} z^{n-1}}{z^n + \hat{a}_1 z^{n-1} + \hdots + \hat{a}_n} - \frac{u}{v}|^2 .\]

Weak-quasi-convexity is preserved by linear functions, so we just need to understand the map

    \[(\hat{u},\hat{v}) \in \mathbb{C} \times \mathbb{C} \mapsto |\frac{\hat{u}}{\hat{v}} - \frac{u}{v}|^2 ,\]

which is weak-quasi-convex provided that \hat{v} has a positive inner product with v. In particular we just proved the following:

Theorem [Hardt, Ma, Recht 2016]

Let C(a) := \{z^n + a_1 z^{n-1} + \hdots + a_n , z \in \mathbb{C}, |z|=1\} and assume there is some cone \mathcal{C} \subset \mathbb{C} of angle less than \pi/2-\alpha such that C(a) \subset \mathcal{C}. Then the idealized risk is \alpha-weakly-quasi-convex on the set of \hat{a} such that C(\hat{a}) \subset \mathcal{C}.

(In the paper they specifically pick the cone \mathcal{C} where the imaginary part is larger than the real part.) This theorem naturally suggests that by overparametrizing the network (i.e. adding dimensions to a and c) one could have a nicer landscape (indeed in this case the above condition can be easier to check), see the paper for more details!


Local max-cut in smoothed polynomial time

Omer Angel, Yuval Peres, Fan Wei, and myself have just posted to the arXiv our paper showing that local max-cut is in smoothed polynomial time. In this post I briefly explain what the problem is, and I give a short proof of the previous state of the art result on this problem, a paper by Etscheid and Roglin showing that local max-cut is in quasi-polynomial time.

Local max-cut and smoothed analysis

Let G = (V,E) be a connected graph with n vertices and w: E\rightarrow [-1,1] be an edge weight function. The local max-cut problem asks to find a partition of the vertices \sigma: V\rightarrow \{-1,1\} whose total cut weight

    \[\frac12 \sum_{uv \in E} w(uv) \big(1-\sigma(u)\sigma(v)\big) ,\]

is locally maximal, in the sense that one cannot increase the cut weight by changing the value of \sigma at a single vertex (recall that actually finding the global maximum is NP-hard). See the papers linked to above for motivation on this problem.

There is a simple local search algorithm for this problem, sometimes referred to as “FLIP”: start from some initial \sigma_0 and iteratively flip vertices (i.e. change the sign of \sigma at a vertex) to improve the cut weight until reaching a local maximum. It is easy to build instances (\sigma_0, w) where FLIP takes exponential time, however in “practice” it seems that FLIP always converges quickly. This motivates the smoothed analysis of FLIP, that is we want to understand the typical number of steps of FLIP when the edge weights are perturbed by a small amount of noise. Formally we now assume that the weight on edge e \in E is given by a random variable X_e \in [-1,1] which has a density with respect to the Lebesgue measure bounded from above by \phi (for example this forbids X_e from being too close to a point mass). We assume that these random variables are independent.
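To make FLIP concrete, here is a minimal Python implementation on a small complete graph with uniformly perturbed weights (uniform on [-1,1] has density bounded by \phi = 1/2); the sizes and seed are illustrative:

```python
import random

random.seed(0)
n = 12
V = list(range(n))
# smoothed weights on the complete graph, keyed by (u, v) with u < v
w = {(u, v): random.uniform(-1, 1) for u in V for v in V if u < v}

def cut_weight(sigma):
    return 0.5 * sum(w[u, v] * (1 - sigma[u] * sigma[v]) for (u, v) in w)

def gain(sigma, v):
    # change in cut weight from flipping v: each edge at v flips its cut status
    return sigma[v] * sum(w[min(u, v), max(u, v)] * sigma[u]
                          for u in V if u != v)

def flip(sigma):
    steps = 0
    while True:
        improving = [v for v in V if gain(sigma, v) > 0]
        if not improving:
            return sigma, steps       # local maximum reached
        sigma[improving[0]] = -sigma[improving[0]]
        steps += 1

sigma = {v: random.choice([-1, 1]) for v in V}
before = cut_weight(sigma)
sigma, steps = flip(sigma)
print(steps, cut_weight(sigma) - before)  # number of flips, total improvement
```

Termination is guaranteed since each flip strictly increases the cut weight and there are finitely many colorings; the question studied below is how many flips this takes.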

Theorem (Etscheid and Roglin [2014]): With probability 1-o(1) FLIP terminates in O(\phi^c n^{c \log(n)}) steps for some universal constant c>0.

We improve this result from quasi-polynomial to polynomial, assuming that we put some noise on the interaction between every pair of vertices, or in other words assuming that the graph is complete.

Theorem (Angel, Bubeck, Peres, Wei [2016]): Let G be complete. With probability 1-o(1) FLIP terminates in O(\phi^5 n^{15.1}) steps.

I will now prove the Etscheid and Roglin result.

Proof strategy

To simplify notation let us introduce the Hamiltonian

    \[H(\sigma) =-\frac{1}{2} \sum_{uv \in E} X_{uv} \sigma(u) \sigma(v).\]

We want to find a local max of H. For any \sigma \in \{-1,1\}^V and v \in V, we denote by \sigma^{-v} the state equal to \sigma except for the coordinate corresponding to v which is flipped. For such \sigma,v there exists a vector \alpha = \alpha(\sigma,v) \in \{-1,0,1\}^E such that

    \[H(\sigma^{-v}) = H(\sigma) + \langle \alpha, X \rangle .\]

More specifically \alpha=(\alpha_{uw})_{uw \in E} is defined by

    \[\left\{\begin{array}{ll} \alpha_{uv} = \sigma(v) \sigma(u) & \forall u \neq v \\ \alpha_{uw} = 0 & \text{if} \ v \not\in \{u,w\} \end{array}\right.\]
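As a sanity check, the identity H(\sigma^{-v}) = H(\sigma) + \langle \alpha, X \rangle with this choice of \alpha can be verified numerically (a minimal Python sketch on a small complete graph; sizes and seed are arbitrary):

```python
import random

random.seed(1)
n = 8
V = list(range(n))
E = [(u, v) for u in V for v in V if u < v]
X = {e: random.uniform(-1, 1) for e in E}

def H(sigma):
    return -0.5 * sum(X[u, v] * sigma[u] * sigma[v] for (u, v) in E)

sigma = {u: random.choice([-1, 1]) for u in V}
v = 3
sigma_flip = dict(sigma)
sigma_flip[v] = -sigma[v]          # the state sigma^{-v}

# alpha(sigma, v): sigma(v) sigma(u) on edges containing v, 0 elsewhere
alpha = {e: 0 for e in E}
for u in V:
    if u != v:
        alpha[min(u, v), max(u, v)] = sigma[v] * sigma[u]

lhs = H(sigma_flip)
rhs = H(sigma) + sum(alpha[e] * X[e] for e in E)
print(abs(lhs - rhs))
```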

We say that flipping a vertex is a move, and it is an improving move if the value of H strictly improves. We say that a sequence of moves L=(v_1,\hdots, v_{\ell}) is \epsilon-slow from \sigma_0 if

    \[H(\sigma_{i}) -  H(\sigma_{i-1}) \in (0, \epsilon], \forall i \in [\ell].\]

It is sufficient to show that with high probability there is no \epsilon-slow sequence with say \ell=2n and \epsilon = 1/(\phi n^{\log(n)})^c (indeed in this case after 2n \times (\phi n^{\log(n)})^c \times n^2 steps FLIP must have stopped, for otherwise the H value would exceed the maximal possible value of n^2). We will do this in three main steps: a probability step, a linear algebra step, and a combinatorial step.

Probability step

Lemma: Let \alpha_1, \hdots, \alpha_k be k linearly independent vectors in \mathbb{Z}^E. Then one has

    \[\mathbb{P}(\forall i \in [k], \langle \alpha_i, X \rangle \in (0,\epsilon]) \leq (\phi \epsilon)^{k} .\]

Proof: The inequality follows from a simple change of variables. Let A \in \mathbb{Z}^{|E| \times |E|} be a full rank matrix whose first k rows are \alpha_1, \hdots, \alpha_k, completed so that A is the identity on the subspace orthogonal to \alpha_1, \hdots, \alpha_k. Let g be the density of A X, and f=\prod_{i=1}^{|E|} f_i the density of X. One has g(y) = |\mathrm{det}(A^{-1})| f(A^{-1} y) and the key observation is that since A has integer coefficients, its determinant must be an integer too, and since it is non-zero one has |\mathrm{det}(A^{-1})|=|\mathrm{det}(A)^{-1}| \leq 1 . Thus one gets:

    \begin{align*} \mathbb{P}(\forall i \in [k], \langle \alpha_i, X \rangle \in (0,\epsilon]) & = \int_{(0,\epsilon]^k \times \mathbb{R}^{|E|-k}} g(y) dy \\ & \leq \int_{(0,\epsilon]^k \times \mathbb{R}^{|E|-k}} \prod_{i=1}^{|E|} f_i((A^{-1} y)_i) dy \\ & \leq (\phi \epsilon)^k \int_{\mathbb{R}^{|E|-k}} \prod_{i=k+1}^{|E|} f_i(y_i) dy = (\phi \epsilon)^k. \end{align*}

Linear algebra step

Lemma: Consider a sequence of \ell improving moves with k distinct vertices (say v_1, \hdots, v_k) that repeat at least twice in this sequence. Let \alpha_1, \hdots, \alpha_{\ell} \in \{-1,0,1\}^E be the corresponding move coefficients, and for each i \in [k] let s_i (respectively t_i) be the first (respectively second) time at which v_i moves. Then the vectors \tilde{\alpha}_i = \alpha_{s_i} + \alpha_{t_i} \in \mathbb{Z}^E, i \in [k], are linearly independent. Furthermore for any v that did not move between the times s_i and t_i one has \tilde{\alpha}_i(vv_i) = 0 (and for any e \not\ni v_i, \tilde{\alpha}_i(e) = 0).

Proof: The last sentence of the lemma is obvious. For the linear independence let \lambda \in \mathbb{R}^k be such that \sum_{i=1}^k \lambda_i \tilde{\alpha}_i = 0. Consider a new graph H with vertex set [k] and such that i is connected to j if v_i appears an odd number of times between the times s_j and t_j. This defines an oriented graph, however if i is connected to j but j is not connected to i then one has \tilde{\alpha}_j(v_iv_j) \in \{-2,2\} while \tilde{\alpha}_i(v_iv_j)=0 (and furthermore for any m \not\in \{i,j\}, \tilde{\alpha}_m(v_iv_j)=0) and thus \lambda_j=0. In other words we can consider a subset of [k] where H is an undirected graph, and outside of this subset \lambda is identically zero. To reduce notation we simply assume that H is undirected. Next we observe that if i and j are connected then one must have \tilde{\alpha}_j(v_iv_j)= - \tilde{\alpha}_i(v_iv_j) (this uses the fact that we look at the first consecutive times at which the vertices move) and in particular (again using that for any m \not\in \{i,j\}, \tilde{\alpha}_m(v_iv_j)=0) one must have \lambda_i=\lambda_j. Now let C be some connected component of H, and let \lambda_C be the unique value of \lambda on C. Noting that the \tilde{\alpha}‘s corresponding to different components of H have different supports (more precisely with C_E := \cup_{j \in C} \{e \ni j\} one has for any i \in C, \tilde{\alpha}_i |_{C_E^c} = 0 and for any i \not\in C, \tilde{\alpha}_i |_{C_E}=0) one obtains \lambda_C \sum_{i \in C} \tilde{\alpha}_i = 0. On the other hand since the sequence of moves is improving one must have \sum_{i \in C} \tilde{\alpha}_i \neq 0, which implies \lambda_C = 0 and finally \lambda = 0 (thus concluding the proof of the linear independence).

Combinatorial step

Lemma: Let v_1, \hdots, v_{2n} \in V. There exists \ell \in \mathbb{N} and i \in [2n - \ell] such that the number of vertices that repeat at least twice in the segment v_i, \hdots, v_{i + \ell} is at least \ell / (2 \log_2(n)).

Proof (from ABPW): Define the surplus of a sequence to be the difference between the number of elements and the number of distinct elements in the sequence. Let s_{\ell} be the maximum surplus in any segment of length \ell in v_1, \hdots, v_{2n}. Observe that s_{2n} \geq n. Let us now assume that for any segment of length \ell, the number of vertices that repeat at least twice is at most \epsilon \ell. Then one has by induction

    \[s_{2n} \leq 2 s_{n} + \epsilon 2n \leq 2 \epsilon n \log_2(n) .\]

This shows that \epsilon has to be greater than 1/(2 \log_2(n)) which concludes the proof.
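The lemma can be brute-force checked on small instances (a Python sketch; n, the seed and the number of trials are arbitrary — the lemma guarantees the assertion for every sequence, so random sequences are just a spot check):

```python
import math
import random

def repeating(seg):
    # number of vertices occurring at least twice in the segment
    return sum(1 for v in set(seg) if seg.count(v) >= 2)

def lemma_holds(seq, n):
    # is there a segment whose repeating-vertex count is >= length / (2 log2 n)?
    N = len(seq)
    thresh = 1 / (2 * math.log2(n))
    return any(repeating(seq[i:j + 1]) >= (j - i + 1) * thresh
               for i in range(N) for j in range(i, N))

random.seed(3)
n = 16
ok = all(lemma_holds([random.randrange(n) for _ in range(2 * n)], n)
         for _ in range(10))
print(ok)
```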

Putting things together

We want to show that with \epsilon = 1/(\phi n^{\log(n)})^c one has

    \[\mathbb{P}(\exists (\sigma_0, L) \; \epsilon-\text{slow and} \; |L| = 2n) = o(1) .\]

By the combinatorial lemma we know that it is enough to show that:

    \begin{align*} & \sum_{\ell=1}^{2n} \mathbb{P}(\exists (\sigma_0, L) \; \epsilon-\text{slow}, \; |L| = \ell, \; \text{and} \\ & \text{L has at least} \; \ell/(2 \log_2(n)) \; \text{repeating vertices}) = o(1) . \end{align*}

Now using the probability lemma together with the linear algebra lemma (and observing that critically \tilde{\alpha} only depends on the value of \sigma at the vertices in L, and thus the union bound over \sigma_0 only gives a 2^{\ell} factor instead of 2^{n}) one obtains that the above probability is bounded by

    \[2^{\ell} n^{\ell} (\phi \epsilon)^{\ell/(2 \log_2(n))} ,\]

which concludes the proof of the Etscheid and Roglin result.

Note that a natural route to get a polynomial-time bound from the above proof would be to remove the \log(n) term in the combinatorial lemma but we show in our paper that this is impossible. Our result comes essentially from improvements to the linear algebra step (this is more difficult as the Etscheid and Roglin linear algebra lemma is particularly friendly for the union bound step, so we had to find another way to do the union bound).


Prospective graduate student? Consider University of Washington!

This post is targeted at prospective graduate students, especially foreigners from outside the US, who are primarily interested in optimization but also have a taste for probability theory (basically readers of this blog!). As a foreigner myself I remember that during my undergraduate studies my view of the US was essentially restricted to the usual suspects, Princeton, Harvard, MIT, Berkeley, Stanford. These places are indeed amazing, but I would like to try to raise awareness that, in terms of the interface optimization/probability, University of Washington (and especially the theory of computation group there) has a reasonable claim for the place with the most amazing opportunities right now in this space:

  1. The junior faculty (Shayan Oveis Gharan, Thomas Rothvoss, and Yin Tat Lee) are all doing groundbreaking (and award-winning) work at the interface optimization/probability. In my opinion the junior faculty roster is a key element in the choice of grad school, as typically junior faculty have much more time to dedicate to students. In particular I know that Yin Tat Lee is looking for graduate students starting next Fall.
  2. Besides the theory of computation group, UW has lots of resources in optimization such as TOPS (Trends in Optimization Seminar), and many optimization faculty in various departments (Maryam Fazel, Jim Burke, Dmitriy Drusvyatskiy, Jeff Bilmes, Zaid Harchaoui) which means many interesting classes to take!
  3. The Theory Group at Microsoft Research is just a bridge away from UW, and we have lots of activities on optimization/probability there too. In fact I am also looking for one graduate student, to be co-advised with a faculty from UW.

Long story short, if you are a talented young mathematician interested in making a difference in optimization then you should apply to the CS department at UW, and here is the link to do so.


Guest post by Miklos Racz: Entropic central limit theorems and a proof of the fundamental limits of dimension estimation

In this post we give a proof of the fundamental limits of dimension estimation in random geometric graphs, based on the recent work of Bubeck and Ganguly. We refer the reader to the previous post for a detailed introduction; here we just recall the main theorem we will prove.

Theorem [Bubeck and Ganguly]

If the distribution \mu is log-concave, i.e., if it has density f(\cdot) = e^{-\varphi(\cdot)} for some convex function \varphi, and if \frac{d}{n^{3} \log^{2} \left( d \right)} \to \infty, then

(1)   \begin{equation*} \mathrm{TV} \left( \mathcal{W}_{n,d}, \mathcal{G}_{n} \right) \to 0, \end{equation*}

where \mathcal{W}_{n,d} is an appropriately scaled Wishart matrix coming from vectors having i.i.d. \mu entries and \mathcal{G}_{n} is a GOE matrix, both with the diagonal removed.

The proof hinges on a high-dimensional entropic central limit theorem, so a large part of the post is devoted to entropic central limit theorems and ways of proving them. Without further ado let us jump right in.


Pinsker’s inequality: from total variation to relative entropy

Our goal is now to bound \mathrm{TV} \left( \mathcal{W}_{n,d}, \mathcal{G}_{n} \right) from above. In the general setting considered here there is no nice formula for the density of the Wishart ensemble, so \mathrm{TV} \left( \mathcal{W}_{n,d}, \mathcal{G}_{n} \right) cannot be computed directly. Coupling these two random matrices also seems challenging.

In light of these observations, it is natural to switch to a different metric on probability distributions that is easier to handle in this case. Here we use Pinsker’s inequality to switch to relative entropy:

(2)   \begin{equation*} \mathrm{TV} \left( \mathcal{W}_{n,d}, \mathcal{G}_{n} \right)^{2} \leq \frac{1}{2} \mathrm{Ent} \left( \mathcal{W}_{n,d} \, \| \, \mathcal{G}_{n} \right), \end{equation*}

where \mathrm{Ent} \left( \mathcal{W}_{n,d} \, \| \, \mathcal{G}_{n} \right) denotes the relative entropy of \mathcal{W}_{n,d} with respect to \mathcal{G}_{n}. We next take a detour to entropic central limit theorems and techniques involved in their proof, before coming back to bounding the right hand side in (2).
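As a quick illustration of Pinsker’s inequality (in a toy setting, not the matrix setting of the theorem), one can check \mathrm{TV}^2 \leq \tfrac{1}{2}\mathrm{Ent} on pairs of Bernoulli distributions, where both sides have closed forms:

```python
import math

def tv(p, q):
    # total variation between Bernoulli(p) and Bernoulli(q)
    return abs(p - q)

def kl(p, q):
    # relative entropy Ent(Bernoulli(p) || Bernoulli(q))
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

pairs = [(0.5, 0.1), (0.3, 0.7), (0.9, 0.8), (0.01, 0.5)]
for p, q in pairs:
    print(p, q, tv(p, q) ** 2, 0.5 * kl(p, q))  # left value <= right value
```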


An introduction to entropic CLTs

Let \phi denote the density of \gamma_{n}, the n-dimensional standard Gaussian distribution, and let f be an isotropic density with mean zero, i.e., a density for which the covariance matrix is the identity I_{n}. Then

    \begin{align*} 0 \leq \mathrm{Ent} \left( f \, \| \, \phi \right) &= \int f \log f - \int f \log \phi \\ &= \int f \log f - \int \phi \log \phi = \mathrm{Ent} \left( \phi \right) - \mathrm{Ent} \left( f \right), \end{align*}

where the second equality follows from the fact that \log \phi \left( x \right) is quadratic in x, and the first two moments of f and \phi are the same by assumption. We thus see that the standard Gaussian maximizes entropy among isotropic densities. It turns out that much more is true.

The central limit theorem states that if Z_{1}, Z_{2}, \dots are i.i.d. real-valued random variables with zero mean and unit variance, then S_{m} := \left( Z_{1} + \dots + Z_{m} \right) / \sqrt{m} converges in distribution to a standard Gaussian random variable as m \to \infty. There are many other senses in which S_{m} converges to a standard Gaussian, the entropic CLT being one of them.

Theorem [Entropic CLT]

Let Z_{1}, Z_{2}, \dots be i.i.d. real-valued random variables with zero mean and unit variance, and let S_{m} := \left( Z_{1} + \dots + Z_{m} \right) / \sqrt{m}. If \mathrm{Ent} \left( Z_{1} \, \| \, \phi \right) < \infty, then

    \[ \mathrm{Ent} \left( S_{m} \right) \nearrow \mathrm{Ent} \left( \phi \right) \]

as m \to \infty. Moreover, the entropy of S_{m} increases monotonically, i.e., \mathrm{Ent} \left( S_{m} \right) \leq \mathrm{Ent} \left( S_{m+1} \right) for every m \geq 1.

The condition \mathrm{Ent} \left( Z_{1} \, \| \, \phi \right) < \infty is necessary for an entropic CLT to hold; for instance, if the Z_{i} are discrete, then \mathrm{Ent} \left( S_{m} \right) = - \infty for all m.

The entropic CLT originates with Shannon in the 1940s and was first proven by Linnik (without the monotonicity part of the statement). The first proofs that gave explicit convergence rates were given independently and at roughly the same time by Artstein, Ball, Barthe, and Naor, and Johnson and Barron in the early 2000s, using two different techniques.

The fact that \mathrm{Ent} \left( S_{1} \right) \leq \mathrm{Ent} \left( S_{2} \right) follows from the entropy power inequality, which goes back to Shannon in 1948. This implies that \mathrm{Ent} \left( S_{m} \right) \leq \mathrm{Ent} \left( S_{2m} \right) for all m \geq 1, and so it was naturally conjectured that \mathrm{Ent} \left( S_{m} \right) increases monotonically. However, proving this turned out to be challenging. Even the inequality \mathrm{Ent} \left( S_{2} \right) \leq \mathrm{Ent} \left( S_{3} \right) was unknown for over fifty years, until Artstein, Ball, Barthe, and Naor proved in general that \mathrm{Ent} \left( S_{m} \right) \leq \mathrm{Ent} \left( S_{m+1} \right) for all m \geq 1.

In the following we sketch some of the main ideas that go into the proof of these results, in particular following the technique introduced by Ball, Barthe, and Naor.


From relative entropy to Fisher information

Our goal is to show that some random variable Z, which is a convolution of many i.i.d. random variables, is close to a Gaussian G. One way to approach this is to interpolate between the two. There are several ways of doing this; for our purposes interpolation along the Ornstein-Uhlenbeck semigroup is most useful. Define

    \[ P_{t} Z := e^{-t} Z + \sqrt{1 - e^{-2t}} G \]

for t \in [0,\infty), and let f_{t} denote the density of P_{t} Z. We have P_{0} Z = Z and P_{\infty} Z = G. This semigroup has several desirable properties. For instance, if the density of Z is isotropic, then so is f_{t}. Before we can state the next desirable property that we will use, we need to introduce a few more useful quantities.

For a density function f : \mathbb{R}^{n} \to \mathbb{R}_{+}, let

    \[ \mathcal{I} \left( f \right) := \int \frac{\nabla f (\nabla f)^{T}}{f} = \E \left[ \left( \nabla \log f \right) \left( \nabla \log f \right)^{T} \right] \]

be the Fisher information matrix. The Cramér-Rao bound states that

    \[ \mathrm{Cov} \left( f \right) \succeq \mathcal{I} \left( f \right)^{-1}. \]

More generally this holds for the covariance of any unbiased estimator of the mean. The Fisher information is defined as

    \[ I \left( f \right) := \mathrm{Tr} \left( \mathcal{I} \left( f \right) \right). \]

It is sometimes more convenient to work with the Fisher information distance, defined as J(f) := I(f) - I(\phi) = I(f) - n. Similarly to the discussion above, one can show that the standard Gaussian minimizes the Fisher information among isotropic densities, and hence the Fisher information distance is always nonnegative.

Now we are ready to state the De Bruijn identity, which characterizes the change of entropy along the Ornstein-Uhlenbeck semigroup via the Fisher information distance:

    \[ \partial_{t} \mathrm{Ent} \left( f_{t} \right) = J \left( f_{t} \right). \]

This implies that the relative entropy between f and \phi—which is our quantity of interest—can be expressed as follows:

(3)   \begin{equation*} \mathrm{Ent} \left( f \, \| \, \phi \right) = \mathrm{Ent} \left( \phi \right) - \mathrm{Ent} \left( f \right) = \int_{0}^{\infty} J \left( f_{t} \right) dt. \end{equation*}

Thus our goal is to bound the Fisher information distance J(f_{t}).


Bounding the Fisher information distance

We first recall a classical result by Blachman and Stam that shows that Fisher information decreases under convolution.

Theorem [Blachman; Stam]

Let Y_{1}, \dots, Y_{d} be independent random variables taking values in \mathbb{R}, and let a \in \mathbb{R}^{d} be such that \left\| a \right\|_{2} = 1. Then

    \[ I \left( \sum_{i=1}^{d} a_{i} Y_{i} \right) \leq \sum_{i=1}^{d} a_{i}^{2} I \left( Y_{i} \right). \]

In the i.i.d. case, since \left\| a \right\|_{2} = 1, this bound becomes I \left( Y_{1} \right): Fisher information does not increase under such convolutions.

Ball, Barthe, and Naor gave the following variational characterization of the Fisher information, which gives a particularly simple proof of the Blachman-Stam theorem. (See Bubeck and Ganguly for a short proof.)

Theorem [Variational characterization of Fisher information]

Let w : \mathbb{R}^{d} \to \left( 0, \infty \right) be a sufficiently smooth density on \mathbb{R}^{d}, let a \in \mathbb{R}^{d} be a unit vector, and let h be the marginal of w in direction a. Then we have

(4)   \begin{equation*} I \left( h \right) \leq \int_{\mathbb{R}^{d}} \left( \frac{\mathrm{div} \left( pw \right)}{w} \right)^{2} w \end{equation*}

for any continuously differentiable vector field p : \mathbb{R}^{d} \to \mathbb{R}^{d} with the property that for every x, \left\langle p \left( x \right), a \right\rangle = 1. Moreover, if w satisfies \int \left\| x \right\|^{2} w \left( x \right) < \infty, then there is equality for some suitable vector field p.

The Blachman-Stam theorem follows from this characterization by taking the constant vector field p \equiv a. Then we have \mathrm{div} \left( pw \right) = \left\langle \nabla w, a \right\rangle, and so the right hand side of (4) becomes a^{T} \mathcal{I} \left( w \right) a, where recall that \mathcal{I} is the Fisher information matrix. In the setting of the Blachman-Stam theorem the density w of \left( Y_{1}, \dots, Y_{d} \right) is a product density: w \left( x_{1}, \dots, x_{d} \right) = f_{1} \left( x_{1} \right) \times \dots \times f_{d} \left( x_{d} \right), where f_{i} is the density of Y_{i}. Consequently the Fisher information matrix is a diagonal matrix, \mathcal{I} \left( w \right) = \mathrm{diag} \left( I \left( f_{1} \right), \dots, I \left( f_{d} \right) \right), and thus a^{T} \mathcal{I} \left( w \right) a = \sum_{i=1}^{d} a_{i}^{2} I \left( f_{i} \right), concluding the proof of the Blachman-Stam theorem from the variational characterization.

Given this variational characterization, one need not take the vector field to be constant; one can obtain more by optimizing over the vector field. Doing this leads to the following theorem, which gives a rate of decrease of the Fisher information distance under convolutions.

Theorem [Artstein, Ball, Barthe, and Naor]

Let Y_{1}, \dots, Y_{d} be i.i.d. random variables with a density having a positive spectral gap c. (We say that a random variable has spectral gap c if for every sufficiently smooth g, we have \mathrm{Var} \left( g \right) \leq \tfrac{1}{c} \E g'^{2}. In particular, log-concave random variables have a positive spectral gap, see Bobkov (1999).) Then for any a \in \mathbb{R}^{d} with \left\| a \right\|_{2} = 1 we have that

    \[ J \left( \sum_{i=1}^{d} a_{i} Y_{i} \right) \leq \frac{2 \left\| a \right\|_{4}^{4}}{c + (2-c) \left\| a \right\|_{4}^{4}} J \left( Y_{1} \right). \]

When a = \frac{1}{\sqrt{d}} \mathbf{1}, then \frac{2 \left\| a \right\|_{4}^{4}}{c + (2-c) \left\| a \right\|_{4}^{4}} = O \left( 1 / d \right), and thus using (3) we obtain a rate of convergence of O \left( 1 / d \right) in the entropic CLT.

A similar result was proven independently and roughly at the same time by Johnson and Barron using a different approach involving score functions.


A high-dimensional entropic CLT

The techniques of Artstein, Ball, Barthe, and Naor generalize to higher dimensions, as was recently shown by Bubeck and Ganguly.

A result similar to the theorem of Artstein, Ball, Barthe, and Naor can be proven, from which a high-dimensional entropic CLT follows, together with a rate of convergence, by using (3) again.

Theorem [Bubeck and Ganguly]

Let Y \in \mathbb{R}^{d} be a random vector with i.i.d. entries from a distribution \nu with zero mean, unit variance, and spectral gap c \in (0,1]. Let A \in \mathbb{R}^{n \times d} be a matrix such that AA^{T} = I_{n}, the n \times n identity matrix. Let

    \[\varepsilon = \max_{i \in [d]} \left( A^T A \right)_{i,i}\]


    \[\zeta = \max_{i,j \in [d], i \neq j} \left| \left( A^T A \right)_{i,j} \right|.\]

Then we have that

    \[ \mathrm{Ent} \left( A Y \, \| \, \gamma_{n} \right) \leq n \min \left\{ 2 \left( \varepsilon + \zeta^{2} d \right) / c, 1 \right\} \mathrm{Ent} \left( \nu \, \| \, \gamma_{1} \right), \]

where \gamma_{n} denotes the standard Gaussian measure in \mathbb{R}^{n}.

To interpret this result, consider the case where the matrix A is built by picking rows one after the other uniformly at random on the Euclidean sphere in \mathbb{R}^{d}, conditionally on being orthogonal to previous rows (to satisfy the isotropicity condition AA^{T} = I_{n}). We then expect to have \varepsilon \simeq n/d and \zeta \simeq \sqrt{n} / d (we leave the details as an exercise for the reader), and so the theorem tells us that \mathrm{Ent} \left( A Y \, \| \, \gamma_{n} \right) \lesssim n^{2}/d.
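This exercise can be checked numerically (a numpy sketch; orthonormalizing Gaussian rows via QR is one standard way to sample such an A, and the asserted orders of magnitude are only checked up to generous constants):

```python
import numpy as np

rng = np.random.default_rng(5)
n, d = 5, 2000

# rows of A: orthonormal basis of the span of n random Gaussian vectors in R^d
Q, _ = np.linalg.qr(rng.standard_normal((d, n)))  # d x n, orthonormal columns
A = Q.T                                           # n x d with A A^T = I_n

M = A.T @ A                                       # d x d
eps = np.max(np.diag(M))
zeta = np.max(np.abs(M - np.diag(np.diag(M))))

print(eps, n / d)              # eps is of order n/d
print(zeta, np.sqrt(n) / d)    # zeta is of order sqrt(n)/d
```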


Back to Wishart and GOE

We now turn our attention back to bounding the relative entropy \mathrm{Ent} \left( \mathcal{W}_{n,d} \, \| \, \mathcal{G}_{n} \right); recall (2). Since the Wishart matrix contains the (scaled) inner products of n vectors in \mathbb{R}^{d}, it is natural to relate \mathcal{W}_{n+1,d} and \mathcal{W}_{n,d}, since the former comes from the latter by adding an additional d-dimensional vector to the n vectors already present. Specifically, we have the following:

    \[ \mathcal{W}_{n+1,d} = \begin{pmatrix} \mathcal{W}_{n,d} & \frac{1}{\sqrt{d}} \mathbb{X} X \\ \frac{1}{\sqrt{d}} \left( \mathbb{X} X \right)^{T} & 0 \end{pmatrix}, \]

where X is a d-dimensional random vector with i.i.d. entries from \mu, which are also independent from \mathbb{X}. Similarly we can write the matrix \mathcal{G}_{n+1} using \mathcal{G}_{n}:

    \[ \mathcal{G}_{n+1} = \begin{pmatrix} \mathcal{G}_{n} & \gamma_{n} \\ \gamma_{n}^{T} & 0 \end{pmatrix}. \]
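For concreteness, here is how one can sample the two ensembles being compared (a numpy sketch; \mu uniform on [-\sqrt{3},\sqrt{3}] is one convenient zero-mean unit-variance choice, and the 1/\sqrt{d} scaling makes the off-diagonal Wishart entries have unit variance, matching the GOE):

```python
import numpy as np

rng = np.random.default_rng(7)
n, d = 4, 50

# scaled Wishart with diagonal removed: entries <X_i, X_j> / sqrt(d), i != j
X = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(n, d))  # n i.i.d. rows from mu
W = X @ X.T / np.sqrt(d)
np.fill_diagonal(W, 0)

# GOE with diagonal removed: symmetric, N(0,1) entries above the diagonal
Z = rng.standard_normal((n, n))
G = (Z + Z.T) / np.sqrt(2)
np.fill_diagonal(G, 0)

print(W)
print(G)
```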

This naturally suggests to use the chain rule for relative entropy and bound \mathrm{Ent} \left( \mathcal{W}_{n,d} \, \| \, \mathcal{G}_{n} \right) by induction on n. We get that

    \[ \mathrm{Ent} \left( \mathcal{W}_{n+1,d} \, \| \, \mathcal{G}_{n+1} \right) = \mathrm{Ent} \left( \mathcal{W}_{n,d} \, \| \, \mathcal{G}_{n} \right) + \mathbb{E}_{W_{n,d}} \left[ \mathrm{Ent} \left( \tfrac{1}{\sqrt{d}} \mathbb{X} X \, | \, \mathcal{W}_{n,d} \, \| \, \gamma_{n} \right) \right]. \]

By convexity of the relative entropy we also have that

    \[ \mathbb{E}_{W_{n,d}} \left[ \mathrm{Ent} \left( \tfrac{1}{\sqrt{d}} \mathbb{X} X \, | \, \mathcal{W}_{n,d} \, \| \, \gamma_{n} \right) \right] \leq \mathbb{E}_{\mathbb{X}} \left[ \mathrm{Ent} \left( \tfrac{1}{\sqrt{d}} \mathbb{X} X \, | \, \mathbb{X} \, \| \, \gamma_{n} \right) \right]. \]

Thus our goal is to understand and bound \mathrm{Ent} \left( A X \, \| \, \gamma_{n} \right) for A \in \mathbb{R}^{n \times d}, and then apply the bound to A = \tfrac{1}{\sqrt{d}} \mathbb{X} (followed by taking expectation over \mathbb{X}). This is precisely what the high-dimensional entropic CLT above does for A satisfying AA^T = I_{n}. Since A = \tfrac{1}{\sqrt{d}} \mathbb{X} does not necessarily satisfy AA^T = I_{n}, we have to correct for the lack of isotropicity. This is the content of the following lemma, the proof of which we leave as an exercise for the reader.


Lemma

Let A \in \mathbb{R}^{n \times d} and Q \in \mathbb{R}^{n \times n} be such that QA \left( QA \right)^{T} = I_{n}. Then for any isotropic random variable X taking values in \mathbb{R}^{d} we have that

(5)   \begin{equation*} \mathrm{Ent} \left( A X \, \| \, \gamma_{n} \right) = \mathrm{Ent} \left( QA X \, \| \, \gamma_{n} \right) + \frac{1}{2} \mathrm{Tr} \left( A A^{T} \right) - \frac{n}{2} + \log \left| \det \left( Q \right) \right|. \end{equation*}

We then apply this lemma with A = \tfrac{1}{\sqrt{d}} \mathbb{X} and Q = \left( \tfrac{1}{d} \mathbb{X} \mathbb{X}^{T} \right)^{-1/2}. Observe that

    \[\mathbb{E} \mathrm{Tr} \left( A A^{T} \right) = \tfrac{1}{d} \mathbb{E} \mathrm{Tr} \left( \mathbb{X} \mathbb{X}^{T} \right) = \tfrac{1}{d} \times n \times d = n,\]

and hence in expectation the middle two terms of the right hand side of (5) cancel each other out.

The last term in (5),

    \[- \tfrac{1}{2} \log \det \left( \tfrac{1}{d} \mathbb{X} \mathbb{X}^{T} \right),\]

should be understood as the relative entropy between a centered Gaussian with covariance given by \tfrac{1}{d} \mathbb{X} \mathbb{X}^{T} and a standard Gaussian in \mathbb{R}^{n}. Controlling the expectation of this term requires studying the probability that \mathbb{X} \mathbb{X}^{T} is close to being non-invertible, which requires bounds on the left tail of the smallest singular value of \mathbb{X}. Understanding the extreme singular values of random matrices is a fascinating topic, but it is outside of the scope of these notes, and so we refer the reader to Bubeck and Ganguly for more details on this point.
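To make this interpretation concrete, recall the standard formula for the relative entropy of a centered Gaussian with respect to a standard one:

```latex
\mathrm{Ent} \left( \mathcal{N}(0, \Sigma) \, \| \, \gamma_n \right)
  = \tfrac{1}{2} \left( \mathrm{Tr}(\Sigma) - n - \log \det \Sigma \right).
```

With \Sigma = \tfrac{1}{d} \mathbb{X} \mathbb{X}^{T}, the log-determinant part of this expression corresponds to the last term in (5), while the trace part was already handled in expectation.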

Finally, the high-dimensional entropic CLT can now be applied to see that

    \[\mathrm{Ent} \left( QA X \, \| \, \gamma_{n} \right) \lesssim n^{2} / d.\]

From the induction on n we get another factor of n, arriving at

    \[\mathrm{Ent} \left( \mathcal{W}_{n,d} \, \| \, \mathcal{G}_{n} \right) \lesssim n^{3} / d.\]

We conclude that the dimension threshold is d \approx n^{3}, and the information-theoretic proof that we have outlined sheds light on why this threshold is n^{3}.

Posted in Probability theory, Random graphs

Guest post by Miklos Racz: The fundamental limits of dimension estimation in random geometric graphs

This post is a continuation of the previous one, where we explored how to detect geometry in a random geometric graph. We now consider the other side of the coin: when is it impossible to detect geometry and what are techniques for proving this? We begin by discussing the G(n,p,d) model introduced in the previous post and then turn to a more general setup, proving a robustness result on the threshold dimension for detection. The proof of the latter result also gives us the opportunity to learn about the fascinating world of entropic central limit theorems.

Barrier to detecting geometry: when Wishart becomes GOE

Recall from the previous post that G(n,p,d) is a random geometric graph where the underlying metric space is the d-dimensional unit sphere \mathbb{S}^{d-1} = \left\{ x \in \mathbb{R}^d : \left\| x \right\|_2 = 1 \right\}, and where the latent labels of the nodes are i.i.d. uniform random vectors in \mathbb{S}^{d-1}. Our goal now is to show the impossibility result of Bubeck, Ding, Eldan, and Racz: if d \gg n^{3}, then it is impossible to distinguish between G(n,p,d) and the Erdos-Renyi random graph G(n,p). More precisely, we have that

(1)   \begin{equation*} \mathrm{TV} \left( G(n,p), G(n,p,d) \right) \to 0 \end{equation*}

when d \gg n^{3} and n \to \infty, where \mathrm{TV} denotes total variation distance.

There are essentially three main ways to bound the total variation of two distributions from above: (i) if the distributions have nice formulas associated with them, then exact computation is possible; (ii) through coupling the distributions; or (iii) by using inequalities between probability metrics to switch the problem to bounding a different notion of distance between the distributions. Here, while the distribution of G(n,p,d) does not have a nice formula associated with it, the main idea is to view this random geometric graph as a function of an n \times n Wishart matrix with d degrees of freedom—i.e., a matrix of inner products of n d-dimensional Gaussian vectors—denoted by W(n,d). It turns out that one can view G(n,p) as (essentially) the same function of an n \times n GOE random matrix—i.e., a symmetric matrix with i.i.d. Gaussian entries on and above the diagonal—denoted by M(n). The upside of this is that both of these random matrix ensembles have explicit densities that allow for explicit computation. We explain this connection here in the special case of p = 1/2 for simplicity; see Bubeck et al. for the case of general p.

Recall that if Y_{1} is a standard normal random variable in \mathbb{R}^d, then Y_1 / \left\| Y_1 \right\| is uniformly distributed on the sphere \mathbb{S}^{d-1}. Consequently we can view G\left( n, 1/2, d \right) as a function of an appropriate Wishart matrix, as follows. Let Y be an n \times d matrix where the entries are i.i.d. standard normal random variables, and let W \equiv W (n,d) = YY^T be the corresponding n \times n Wishart matrix. Note that W_{ii} = \left\langle Y_i, Y_i \right\rangle = \left\| Y_i \right\|^2 and so \left\langle Y_i / \left\| Y_i \right\|, Y_j / \left\| Y_j \right\| \right\rangle = W_{ij} / \sqrt{W_{ii} W_{jj}}. Thus the n \times n matrix A defined as

    \[ A_{i,j} = \begin{cases} 1 & \text{if } W_{ij} \geq 0 \text{ and } i \neq j\\ 0 & \text{otherwise} \end{cases} \]

has the same law as the adjacency matrix of G\left(n,1/2,d\right). Denote the map that takes W to A by H, i.e., A = H \left( W \right).

In a similar way we can view G \left( n, 1/2 \right) as a function of an n \times n matrix drawn from the Gaussian Orthogonal Ensemble (GOE). Let M\left( n \right) be a symmetric n \times n random matrix where the diagonal entries are i.i.d. normal random variables with mean zero and variance 2, and the entries above the diagonal are i.i.d. standard normal random variables, with the entries on and above the diagonal all independent. Then B = H \left( M(n) \right) has the same law as the adjacency matrix of G(n,1/2). Note that B only depends on the sign of the off-diagonal elements of M\left(n \right), so in the definition of B we can replace M\left( n \right) with M \left( n, d \right) := \sqrt{d} M \left( n \right) + d I_n, where I_n is the n \times n identity matrix.

We can thus conclude that

    \begin{align*} \mathrm{TV} \left( G(n,1/2,d), G(n,1/2) \right) &= \mathrm{TV} \left( H \left( W(n,d) \right), H \left( M(n,d) \right) \right) \\ &\leq \mathrm{TV} \left( W(n,d), M(n,d) \right). \end{align*}
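As a sanity check, the two constructions and the map H can be sketched in a few lines; a minimal simulation (pure Python, the helper names are ours):

```python
import random

rng = random.Random(42)

def wishart(n, d):
    # W = Y Y^T where Y is n x d with i.i.d. standard normal entries
    Y = [[rng.gauss(0.0, 1.0) for _ in range(d)] for _ in range(n)]
    return [[sum(Y[i][k] * Y[j][k] for k in range(d)) for j in range(n)]
            for i in range(n)]

def goe_shifted(n, d):
    # M(n, d) = sqrt(d) M(n) + d I_n, where M(n) has N(0, 2) diagonal
    # entries and N(0, 1) entries above the diagonal, all independent
    M = [[0.0] * n for _ in range(n)]
    for i in range(n):
        M[i][i] = d ** 0.5 * rng.gauss(0.0, 2 ** 0.5) + d
        for j in range(i + 1, n):
            M[i][j] = M[j][i] = d ** 0.5 * rng.gauss(0.0, 1.0)
    return M

def H(W):
    # edge between i and j iff the off-diagonal entry W_ij is nonnegative
    n = len(W)
    return [[1 if i != j and W[i][j] >= 0 else 0 for j in range(n)]
            for i in range(n)]

A = H(wishart(6, 200))      # distributed as the adjacency matrix of G(6, 1/2, 200)
B = H(goe_shifted(6, 200))  # distributed as the adjacency matrix of G(6, 1/2)
```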

The densities of these two random matrix ensembles are explicit and well known (although we do not state them here), which allow for explicit calculations. The outcome of these calculations is the following result, proven independently and simultaneously by Bubeck et al. and Jiang and Li.

Theorem [Bubeck, Ding, Eldan, and Racz; Jiang and Li]

Define the random matrix ensembles W\left( n, d \right) and M \left( n, d \right) as above. If d / n^3 \to \infty, then

    \begin{equation*}\label{eq:Wishart_GOE} \mathrm{TV} \left( W \left( n, d \right), M \left( n, d \right) \right) \to 0. \end{equation*}

We conclude that it is impossible to detect underlying geometry whenever d \gg n^{3}.

The universality of the threshold dimension

How robust is the result presented above? We have seen that the detection threshold is intimately connected to the threshold of when a Wishart matrix becomes GOE. Understanding the robustness of this result on random matrices is interesting in its own right, and this is what we will pursue in the remainder of this post, which is based on a recent paper by Bubeck and Ganguly.

Let \mathbb{X} be an n \times d random matrix with i.i.d. entries from a distribution \mu that has mean zero and variance 1. The n \times n matrix \mathbb{X} \mathbb{X}^{T} is known as the Wishart matrix with d degrees of freedom. As we have seen above, this arises naturally in geometry, where \mathbb{X} \mathbb{X}^{T} is known as the Gram matrix of inner products of n points in \mathbb{R}^{d}. The Wishart matrix also appears naturally in statistics as the sample covariance matrix, where d is the number of samples and n is the number of parameters. (Note that in statistics the number of samples is usually denoted by n, and the number of parameters is usually denoted by p; here our notation is taken with the geometric perspective in mind.)

We consider the Wishart matrix with the diagonal removed, and scaled appropriately:

    \[ \mathcal{W}_{n,d} = \frac{1}{\sqrt{d}} \left( \mathbb{X} \mathbb{X}^{T} - \mathrm{diag} \left( \mathbb{X} \mathbb{X}^{T} \right) \right). \]

In many applications—such as to random graphs as above—the diagonal of the matrix is not relevant, so removing it does not lose information. Our goal is to understand how large does the dimension d have to be so that \mathcal{W}_{n,d} is approximately like \mathcal{G}_{n}, which is defined as the n \times n Wigner matrix with zeros on the diagonal and i.i.d. standard Gaussians above the diagonal. In other words, \mathcal{G}_{n} is drawn from the Gaussian Orthogonal Ensemble (GOE) with the diagonal replaced with zeros.
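A minimal sketch of this construction, taking for \mu the uniform distribution on [-\sqrt{3}, \sqrt{3}] (log-concave, mean zero, variance one); the helper names are ours:

```python
import random

rng = random.Random(7)

def scaled_wishart_offdiag(n, d):
    # W_{n,d} = (X X^T - diag(X X^T)) / sqrt(d), with i.i.d. entries of X
    # drawn from mu; here mu is uniform on [-sqrt(3), sqrt(3)], a log-concave
    # distribution with mean zero and variance one
    a = 3 ** 0.5
    X = [[rng.uniform(-a, a) for _ in range(d)] for _ in range(n)]
    W = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            W[i][j] = W[j][i] = sum(X[i][k] * X[j][k] for k in range(d)) / d ** 0.5
    return W

# each off-diagonal entry has mean 0 and variance 1, matching the
# corresponding entry of the Wigner matrix G_n
samples = [scaled_wishart_offdiag(2, 50)[0][1] for _ in range(4000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```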

A simple application of the multivariate central limit theorem gives that if n is fixed and d \to \infty, then \mathcal{W}_{n,d} converges to \mathcal{G}_{n} in distribution. The main result of Bubeck and Ganguly establishes that this holds as long as d \, \widetilde{\gg} \, n^{3} under rather general conditions on the distribution \mu.

Theorem [Bubeck and Ganguly]

If the distribution \mu is log-concave, i.e., if it has density f(\cdot) = e^{-\varphi(\cdot)} for some convex function \varphi, and if \frac{d}{n^{3} \log^{2} \left( d \right)} \to \infty, then

(2)   \begin{equation*} \mathrm{TV} \left( \mathcal{W}_{n,d}, \mathcal{G}_{n} \right) \to 0. \end{equation*}

On the other hand, if \mu has a finite fourth moment and \frac{d}{n^{3}} \to 0, then

(3)   \begin{equation*} \mathrm{TV} \left( \mathcal{W}_{n,d}, \mathcal{G}_{n} \right) \to 1. \end{equation*}

This result extends Theorem 1 from the previous post and Theorem 1 from above, and establishes n^{3} as the universal critical dimension (up to logarithmic factors) for sufficiently smooth measures \mu: \mathcal{W}_{n,d} is approximately Gaussian if and only if d is much larger than n^{3}. For random graphs, as seen above, this is the dimension barrier to extracting geometric information from a network: if the dimension is much greater than the cube of the number of vertices, then all geometry is lost. In the setting of statistics this means that the Gaussian approximation of a Wishart matrix is valid as long as the sample size is much greater than the cube of the number of parameters. Note that for some statistics of a Wishart matrix the Gaussian approximation is valid for much smaller sample sizes; for example, the largest eigenvalue behaves as in the Gaussian case even when the number of parameters is of the same order as the sample size (Johnstone, 2001).

To distinguish the random matrix ensembles, we have seen in the previous post that signed triangles work up until the threshold dimension in the case when \mu is standard normal. It turns out that the same statistic works in this more general setting; when the entries of the matrices are centered, this statistic can be written as A \mapsto \mathrm{Tr} \left( A^{3} \right). We leave the details as an exercise for the reader.
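Part of the exercise is the identity relating the sum over unordered triples to \mathrm{Tr}(A^{3}) when the matrix is symmetric with zero diagonal; a quick numerical check (the helper names are ours):

```python
import itertools
import random

rng = random.Random(3)

n = 6
# a symmetric matrix with zero diagonal (e.g. a centered adjacency matrix)
M = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        M[i][j] = M[j][i] = rng.gauss(0.0, 1.0)

# the signed-triangle sum over unordered triples...
triple_sum = sum(M[i][j] * M[i][k] * M[j][k]
                 for i, j, k in itertools.combinations(range(n), 3))

# ...equals Tr(M^3) / 6: the trace sums over ordered triples, the zero
# diagonal kills repeated indices, and each unordered triple appears 6 times
trace_cubed = sum(M[i][j] * M[j][k] * M[k][i]
                  for i in range(n) for j in range(n) for k in range(n))
```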

We note that for (2) to hold it is necessary to have some smoothness assumption on the distribution \mu. For instance, if \mu is purely atomic, then so is the distribution of \mathcal{W}_{n,d}, and thus its total variation distance to \mathcal{G}_{n} is 1. The log-concave assumption gives this necessary smoothness, and it is an interesting open problem to understand how far this can be relaxed.

We will see the proof (and in particular the connection to entropic CLT!) in the next post.

Posted in Uncategorized

Guest post by Miklos Racz: Estimating the dimension of a random geometric graph on a high-dimensional sphere

Following the previous post in which we studied community detection, in this post we study the fundamental limits of inferring geometric structure in networks. Many networks coming from physical considerations naturally have an underlying geometry, such as the network of major roads in a country. In other networks this stems from a latent feature space of the nodes. For instance, in social networks a person might be represented by a feature vector of their interests, and two people are connected if their interests are close enough; this latent metric space is referred to as the social space. We are particularly interested in the high-dimensional regime, which brings about a host of new questions, such as estimating the dimension.

A simple random geometric graph model and basic questions

We study perhaps the simplest model of a random geometric graph, where the underlying metric space is the d-dimensional unit sphere \mathbb{S}^{d-1} = \left\{ x \in \mathbb{R}^d : \left\| x \right\|_2 = 1 \right\}, and where the latent labels of the nodes are i.i.d. uniform random vectors in \mathbb{S}^{d-1}. More precisely, the random geometric graph G \left( n, p, d \right) is defined as follows. Let X_1, \dots, X_n be independent random vectors, uniformly distributed on \mathbb{S}^{d-1}. In G\left( n, p, d \right), distinct nodes i \in \left[n\right] and j \in \left[n \right] are connected by an edge if and only if \left\langle X_i, X_j \right\rangle \geq t_{p,d}, where the threshold value t_{p,d} \in \left[-1,1\right] is such that \mathbb{P} \left( \left\langle X_1, X_2 \right\rangle \geq t_{p,d} \right) = p. For example, when p = 1/2 we have t_{p,d} = 0.
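For p = 1/2 the threshold is t_{1/2,d} = 0, so the model is easy to simulate; a minimal sketch (the helper names are ours):

```python
import random

rng = random.Random(1)

def uniform_sphere_point(d):
    # normalize a standard Gaussian vector to get a uniform point on S^{d-1}
    g = [rng.gauss(0.0, 1.0) for _ in range(d)]
    norm = sum(x * x for x in g) ** 0.5
    return [x / norm for x in g]

def sample_geometric_graph(n, d):
    # G(n, 1/2, d): connect i and j iff <X_i, X_j> >= t_{1/2,d} = 0
    X = [uniform_sphere_point(d) for _ in range(n)]
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if sum(X[i][k] * X[j][k] for k in range(d)) >= 0:
                A[i][j] = A[j][i] = 1
    return A

A = sample_geometric_graph(40, 20)
density = sum(map(sum, A)) / (40 * 39)  # each edge is present with probability 1/2
```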

The most natural random graph model without any structure is the standard Erdos-Renyi random graph G(n,p), where any two of the n vertices are independently connected with probability p.

We can thus formalize the question of detecting underlying geometry as a simple hypothesis testing question. The null hypothesis is that the graph is drawn from the Erdos-Renyi model, while the alternative is that it is drawn from G(n,p,d). In brief:

(1)   \begin{equation*} H_{0} : G \sim G(n,p), \qquad \qquad H_{1} : G \sim G(n,p,d). \end{equation*}

To understand this question, the basic quantity we need to study is the total variation distance between the two distributions on graphs, G(n,p) and G(n,p,d), denoted by \mathrm{TV} \left( G(n,p), G(n,p,d) \right); recall that the total variation distance between two probability measures P and Q is defined as \mathrm{TV} \left( P, Q \right) = \tfrac{1}{2} \left\| P - Q \right\|_{1} = \sup_{A} \left| P(A) - Q(A) \right|. We are interested in particular in the case when the dimension d is large, growing with n.

It is intuitively clear that if the geometry is too high-dimensional, then it is impossible to detect it, while a low-dimensional geometry will have a strong effect on the generated graph and will be detectable. How fast can the dimension grow with n while still being able to detect it? Most of this post will focus on this question.

If we can detect geometry, then it is natural to ask for more information. Perhaps the ultimate goal would be to find an embedding of the vertices into an appropriate dimensional sphere that is a true representation, in the sense that the geometric graph formed from the embedded points is indeed the original graph. More modestly, can the dimension be estimated? We touch on this question at the end of the post.

The dimension threshold for detecting underlying geometry

The high-dimensional setting of the random geometric graph G(n,p,d) was first studied by Devroye, Gyorgy, Lugosi, and Udina, who showed that geometry is indeed lost in high dimensions: if n is fixed and d \to \infty, then \mathrm{TV} \left( G(n,p), G(n,p,d) \right) \to 0. More precisely, they show that this convergence happens when d \gg n^{7} 2^{n^2 / 2}, but this is not tight. The dimension threshold for dense graphs was recently found by Bubeck, Ding, Eldan, and Racz, and it turns out that it is d \approx n^3, in the following sense.

Theorem [Bubeck, Ding, Eldan, and Racz 2014]

Let p \in (0,1) be fixed. Then

    \[\mathrm{TV} \left( G(n,p), G(n,p,d) \right) \to \begin{cases} 0 & \text{ if } \;\; d \gg n^3 \qquad \qquad (2) \\ 1 & \text{ if } \;\; d \ll n^3 \qquad \qquad (3) \end{cases}\]

Moreover, in the latter case there exists a computationally efficient test to detect underlying geometry (with running time O\left( n^{3} \right)).


Most of this post is devoted to understanding (3), that is, how the two models can be distinguished; the impossibility result of (2) will be discussed in a future post. At the end we will also consider this same question for sparse graphs (where p = c/n), where determining the dimension threshold is an intriguing open problem.

The triangle test

A natural test to uncover geometric structure is to count the number of triangles in G. Indeed, in a purely random scenario, vertex u being connected to both v and w says nothing about whether v and w are connected. On the other hand, in a geometric setting this implies that v and w are close to each other due to the triangle inequality, thus increasing the probability of a connection between them. This, in turn, implies that the expected number of triangles is larger in the geometric setting, given the same edge density. Let us now compute what this statistic gives us.


Given that u is connected to both v and w, v and w are more likely to be connected under G(n,p,d) than under G(n,p).


For a graph G, let A denote its adjacency matrix. Then

    \[T_{G} \left( i,j,k \right) := A_{i,j} A_{i,k} A_{j,k}\]

is the indicator variable that three vertices i, j, and k form a triangle, and so the number of triangles in G is

    \[T(G) := \sum_{\{i,j,k\} \in \binom{[n]}{3}} T_{G} \left( i,j,k \right).\]
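In code, the statistic and its expectation under G(n,p) can be checked with a small simulation (the helper names are ours):

```python
import itertools
import random

rng = random.Random(5)

def triangle_count(A):
    # T(G): number of triangles, summing the indicator over unordered triples
    n = len(A)
    return sum(A[i][j] * A[i][k] * A[j][k]
               for i, j, k in itertools.combinations(range(n), 3))

def erdos_renyi(n, p):
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            A[i][j] = A[j][i] = 1 if rng.random() < p else 0
    return A

# E[T(G(n,p))] = C(n,3) p^3; for n = 10, p = 1/2 this is 120/8 = 15,
# so the empirical average below should be close to 15
avg = sum(triangle_count(erdos_renyi(10, 0.5)) for _ in range(500)) / 500
```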

By linearity of expectation, for both models the expected number of triangles is \binom{n}{3} times the probability of a triangle between three specific vertices. For the Erdős-Rényi random graph the edges are independent, so the probability of a triangle is p^3, and thus we have

    \[ \mathbb{E} \left[ T \left( G(n,p) \right) \right] = \binom{n}{3} p^3. \]

For G(n,p,d) it turns out that for any fixed p \in \left( 0, 1 \right) we have

(4)   \begin{equation*} \mathbb{P} \left( T_{G(n,p,d)} \left( 1, 2, 3 \right) = 1 \right) \approx p^{3} \left( 1 + \frac{C_{p}}{\sqrt{d}} \right) \end{equation*}

for some constant C_{p} > 0, which gives that

    \[ \mathbb{E} \left[ T \left( G(n,p,d) \right) \right] \geq \binom{n}{3} p^3 \left( 1 + \frac{C_{p}}{\sqrt{d}} \right). \]

Showing (4) is somewhat involved, but in essence it follows from the concentration of measure phenomenon on the sphere, namely that most of the mass on the high-dimensional sphere is located in a band of O \left( 1 / \sqrt{d} \right) around the equator. We sketch here the main intuition for p=1/2, which is illustrated in the figure below.

Let X_1, X_2, and X_3 be independent uniformly distributed points in \mathbb{S}^{d-1}. Then

    \begin{multline*} \mathbb{P} \left( T_{G(n,1/2,d)} \left( 1, 2, 3 \right) = 1 \right) \\ \begin{aligned} &= \mathbb{P} \left( \langle X_1, X_2 \rangle \geq 0, \langle X_1, X_3 \rangle \geq 0, \langle X_2, X_3 \rangle \geq 0 \right) \\ &= \mathbb{P} \left( \langle X_2, X_3 \rangle \geq 0 \, \middle| \, \langle X_1, X_2 \rangle \geq 0, \langle X_1, X_3 \rangle \geq 0 \right) \mathbb{P} \left( \langle X_1, X_2 \rangle \geq 0, \langle X_1, X_3 \rangle \geq 0 \right) \\ &= \frac{1}{4} \times \mathbb{P} \left( \langle X_2, X_3 \rangle \geq 0 \, \middle| \, \langle X_1, X_2 \rangle \geq 0, \langle X_1, X_3 \rangle \geq 0 \right), \end{aligned} \end{multline*}

where the last equality follows by independence. So what remains is to show that this latter conditional probability is approximately 1/2 + c / \sqrt{d}. To compute this conditional probability what we really need to know is the typical angle between X_1 and X_2. By rotational invariance we may assume that X_1 = (1,0,0, \dots, 0), and hence \langle X_1, X_2 \rangle = X_{2} (1), the first coordinate of X_{2}. One way to generate X_2 is to sample a d-dimensional standard Gaussian and then normalize it by its length. Since the norm of a d-dimensional standard Gaussian is very well concentrated around \sqrt{d}, it follows that X_{2}(1) is on the order of 1/\sqrt{d}. Conditioned on X_{2}(1) \geq 0, this typical angle gives the boost in the conditional probability that we see.




If X_{1} and X_{2} are two independent uniform points on \mathbb{S}^{d-1}, then their inner product \left\langle X_1, X_2 \right\rangle is on the order of 1/\sqrt{d} due to the concentration of measure phenomenon on the sphere.
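The 1/\sqrt{d} scaling of the inner product can be checked with a quick Monte Carlo sketch (the helper names are ours):

```python
import math
import random

rng = random.Random(2)

def uniform_sphere_point(d):
    # normalize a standard Gaussian vector to get a uniform point on S^{d-1}
    g = [rng.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(x * x for x in g))
    return [x / norm for x in g]

d = 100
inner = [sum(a * b for a, b in zip(uniform_sphere_point(d),
                                   uniform_sphere_point(d)))
         for _ in range(2000)]
# E|<X_1, X_2>| is roughly sqrt(2 / (pi d)), so rescaling by sqrt(d)
# should give a quantity of constant order (about 0.8)
scaled_mean_abs = math.sqrt(d) * sum(abs(x) for x in inner) / len(inner)
```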



Thus we see that the boost in the number of triangles in the geometric setting is \Theta \left( n^{3} / \sqrt{d} \right) in expectation:

    \[\mathbb{E} \left[ T \left( G(n,p,d) \right) \right] - \mathbb{E} \left[ T \left( G(n,p) \right) \right] \geq \binom{n}{3} \frac{C_p}{\sqrt{d}}.\]

To be able to tell apart the two graph distributions based on the number of triangles, the boost in expectation needs to be much greater than the standard deviation. A simple calculation shows that

    \[\mathrm{Var} \left( T \left( G \left( n, p \right) \right) \right) = \binom{n}{3} \left( p^{3} - p^{6} \right) + \binom{n}{4} \binom{4}{2} \left( p^{5} - p^{6} \right)\]

and also

    \[\mathrm{Var} \left( T \left( G \left( n, p, d \right) \right) \right) \leq n^4.\]

Thus we see that \mathrm{TV} \left( G(n,p), G(n,p,d) \right) \to 1 if n^{3} / \sqrt{d} \gg \sqrt{n^4}, which is equivalent to d \ll n^2.

Signed triangles are more powerful

While triangles detect geometry up until d \ll n^2, are there even more powerful statistics that detect geometry for larger dimensions? One can check that longer cycles also only work when d \ll n^2, as do several other natural statistics. Yet the underlying geometry can be detected even when d \ll n^{3}.

The simple idea that leads to this improvement is to consider signed triangles. We have already noticed that triangles are more likely in the geometric setting than in the purely random setting. This also means that induced wedges (i.e., when there are exactly two edges among the three possible ones) are less likely in the geometric setting. Similarly, induced single edges are more likely, and induced independent sets on three vertices are less likely in the geometric setting. The following figure summarizes these observations.


The signed triangles statistic incorporates these observations by giving the different patterns positive or negative weights. More precisely, we define

    \[ \tau \left( G \right) := \sum_{\{i,j,k\} \in \binom{[n]}{3}} \left( A_{i,j} - p \right) \left( A_{i,k} - p \right) \left( A_{j,k} - p \right). \]
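A direct implementation of the statistic (the helper name is ours):

```python
import itertools

def signed_triangles(A, p):
    # tau(G): sum of centered edge-indicator products over unordered triples
    n = len(A)
    return sum((A[i][j] - p) * (A[i][k] - p) * (A[j][k] - p)
               for i, j, k in itertools.combinations(range(n), 3))

# sanity check on the empty graph with 4 vertices and p = 1/2: each of the
# C(4,3) = 4 terms equals (-1/2)^3, so tau = -4/8 = -0.5
empty = [[0] * 4 for _ in range(4)]
tau_empty = signed_triangles(empty, 0.5)
```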

The key insight motivating this definition is that the variance of signed triangles is much smaller than the variance of triangles, due to the cancellations introduced by the centering of the adjacency matrix: the \Theta \left( n^{4} \right) term vanishes, leaving only the \Theta \left( n^{3} \right) term. It is a simple exercise to show that

    \[\mathbb{E} \left[ \tau \left( G(n,p) \right) \right] = 0\]

and

    \[\mathrm{Var} \left( \tau \left( G(n,p) \right) \right) = \binom{n}{3} p^{3} \left( 1 - p \right)^{3}.\]

On the other hand it can be shown that

(5)   \begin{equation*} \mathbb{E} \left[ \tau \left( G(n,p,d) \right) \right] \geq c_{p} n^{3} / \sqrt{d}, \end{equation*}

so the gap between the expectations remains. Furthermore, it can also be shown that the variance also decreases for G(n,p,d) and we have

    \[\mathrm{Var} \left( \tau \left( G(n,p,d) \right) \right) \leq n^{3} + \frac{3 n^{4}}{d}.\]

Putting everything together we get that \mathrm{TV} \left( G(n,p), G(n,p,d) \right) \to 1 if n^{3} / \sqrt{d} \gg \sqrt{n^3 + n^{4} / d}, which is equivalent to d \ll n^3. This concludes the proof of (3) from Theorem 1.
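The first two moments of \tau under G(n,p) can be verified exactly for small n by enumerating all graphs; a sketch for n = 4 (the helper names are ours):

```python
import itertools

def signed_triangles(A, p):
    # tau(G): sum over unordered triples of products of centered entries
    n = len(A)
    return sum((A[i][j] - p) * (A[i][k] - p) * (A[j][k] - p)
               for i, j, k in itertools.combinations(range(n), 3))

n, p = 4, 0.3
edges = list(itertools.combinations(range(n), 2))

mean = second_moment = 0.0
# enumerate all 2^6 graphs on 4 vertices, each weighted by its G(n,p) probability
for included in itertools.product((0, 1), repeat=len(edges)):
    A = [[0] * n for _ in range(n)]
    for (i, j), present in zip(edges, included):
        A[i][j] = A[j][i] = present
    weight = p ** sum(included) * (1 - p) ** (len(edges) - sum(included))
    t = signed_triangles(A, p)
    mean += weight * t
    second_moment += weight * t * t
var = second_moment - mean ** 2

# predicted: mean 0 and variance C(4,3) p^3 (1-p)^3
predicted_var = 4 * p ** 3 * (1 - p) ** 3
```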

Estimating the dimension

Until now we discussed detecting geometry. However, the insights gained above allow us to also touch upon the more subtle problem of estimating the underlying dimension d.

Dimension estimation can also be done by counting the “number” of signed triangles as above. However, here it is necessary to have a bound on the difference of the expected number of signed triangles between consecutive dimensions; the lower bound on \mathbb{E} \left[ \tau \left( G(n,p,d) \right) \right] in (5) is not enough. Still, we believe that the lower bound gives the correct order of the expected value for an appropriate constant c_{p}, and hence we expect to have that

(6)   \begin{equation*} \mathbb{E} \left[ \tau \left( G(n,p,d) \right) \right] - \mathbb{E} \left[ \tau \left( G(n,p,d+1) \right) \right] = \Theta \left( \frac{n^{3}}{d^{3/2}} \right). \end{equation*}

Thus, using the variance bound from above, we get that dimension estimation should be possible using signed triangles whenever n^{3} / d^{3/2} \gg \sqrt{n^3 + n^{4} / d}, which is equivalent to d \ll n. Showing (6) for general p seems involved; Bubeck et al. showed that it holds for p = 1/2, which can be considered as a proof of concept.

Theorem [Bubeck, Ding, Eldan, and Racz 2014]

There exists a universal constant C > 0 such that for all integers n and d_1 < d_2, one has

    \[ \mathrm{TV} \left( G (n, 1/2, d_1), G(n, 1/2, d_2) \right) \geq 1 - C \left( \frac{d_1}{n} \right)^{2}. \]

This result is tight, as demonstrated by a result of Eldan, which implies that G(n,1/2,d) and G(n,1/2,d+1) are indistinguishable when d \gg n.

The mysterious sparse regime

We conclude this post by discussing an intriguing conjecture for sparse graphs. It is again natural to consider the number of triangles as a way to distinguish between G(n,c/n) and G(n, c/n,d). Bubeck et al. show that this statistic works whenever d \ll \log^{3} \left( n \right), and conjecture that this is tight.

Conjecture [Bubeck, Ding, Eldan, and Racz 2014]

Let c > 0 be fixed and assume d / \log^{3} \left( n \right) \to \infty. Then

    \[ \mathrm{TV} \left( G \left( n, \frac{c}{n} \right), G \left( n, \frac{c}{n}, d \right) \right) \to 0. \]

The main reason for this conjecture is that, when d \gg \log^{3} \left( n \right), G(n, c/n) and G(n, c/n, d) seem to be locally equivalent; in particular, they both have the same Poisson number of triangles asymptotically. Thus the only way to distinguish between them would be to find an emergent global property which is significantly different under the two models, but this seems unlikely to exist. Proving or disproving this conjecture remains a challenging open problem. The best known bound is n^{3} from (2) (which holds uniformly over p), which is very far from \log^{3} \left( n \right)!

Posted in Probability theory, Random graphs