## Discrepancy algorithm inspired by gradient descent and multiplicative weights; after Levy, Ramadas and Rothvoss

A week or so ago at our Theory Lunch we had the pleasure of listening to Harishchandra Ramadas (a student of Thomas Rothvoss), who told us about their latest discrepancy algorithm. I find the algorithm quite interesting as it combines ideas from gradient descent and multiplicative weights in a non-trivial (yet very simple) way. Below I reprove Spencer's six standard deviations theorem with their machinery (in the actual paper Levy, Ramadas and Rothvoss do more than this).

First let me remind you of the setting (see also this previous blog post for some motivation on discrepancy and a bit more context; by the way, it is funny to read the comments in that post after this one): given one wants to find (think of it as a “coloring” of the coordinates) such that for some numerical constant (when is a normalized vector of ‘s and ‘s, the quantity represents the unbalancedness of the coloring in the set corresponding to ). Clearly it suffices to give a method that finds with at least half of its coordinates equal to or and such that for some numerical constant (indeed, one can then simply recurse on the coordinates not yet set to or ; this is the so-called “partial coloring” argument). Note also that one can drop the absolute value by taking and (the number of constraints then becomes , but this is easy to deal with and we ignore it here for the sake of simplicity).

The algorithm

Let , . We run an iterative algorithm which keeps at every time step a subspace of valid update directions and then proceeds as follows. First find (using for instance a basis for ) such that

(1)

Then update where is maximal so that remains in . Finally update the exponential weights by .

It remains to describe the subspace . For this we introduce the set containing the largest coordinates of (the “inactive” coordinates) and the set containing the coordinates of equal to or (the “frozen” coordinates). The subspace is now described as the set of points orthogonal to (i) , (ii) , (iii) , (iv) . The intuition for (i) and (ii) is rather clear: (i) simply ensures that the method keeps making progress towards the boundary of the cube (i.e., ), while (ii) makes sure that coordinates which are already “colored” (i.e., set to or ) are not updated. In particular (i) and (ii) together ensure that at each step either the squared norm increases by (in particular ) or one of the coordinates gets fixed forever to or . This means that after at most iterations one obtains a partial coloring (i.e., half of the coordinates set to or , which was our objective). Property (iii) ensures that we stop walking in directions where we are not making good progress (there are many ways to achieve this, and this precise form will make sense towards the end of the analysis). Property (iv) is closely related, and while it might be only a technical condition, it can also be understood as ensuring that locally one does not increase the softmax of the constraints; indeed (iv) exactly says that one should move orthogonally to the gradient of .
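Here is a rough numpy sketch of the walk described above. Everything specific below (the step-size rule, the number of discarded top-weight constraints, the value of the learning rate eta, the iteration budget) is my own guess for illustration, not the paper's choices; the direction in (1) is computed as a smallest eigenvector of the weighted quadratic form restricted to a basis of the subspace.

```python
import numpy as np

def partial_coloring(A, T=None, eta=None):
    """Sketch of a Levy-Ramadas-Rothvoss style walk (my parameter choices).

    A: m x n matrix whose rows are the constraint vectors.
    Returns x in [-1,1]^n with at least n/2 coordinates frozen at -1 or +1.
    """
    m, n = A.shape
    T = T if T is not None else 16 * n
    eta = eta if eta is not None else 1.0 / np.sqrt(n)
    x = np.zeros(n)
    for _ in range(T):
        frozen = np.isclose(np.abs(x), 1.0)
        if frozen.sum() >= n // 2:
            break
        w = np.exp(eta * (A @ x))                 # one exponential weight per constraint
        top = np.argsort(w)[-max(1, n // 4):]     # "inactive" top-weight constraints
        # U_t: orthogonal to x, to the frozen coordinates e_j,
        # to the top-weight rows, and to the softmax gradient A^T w
        C = np.vstack([x[None, :], np.eye(n)[frozen],
                       A[top], (A.T @ w)[None, :]])
        _, s, Vt = np.linalg.svd(C)
        B = Vt[int((s > 1e-10).sum()):].T         # orthonormal basis of U_t
        if B.shape[1] == 0:
            break
        # direction in U_t minimizing the weighted quadratic form of (1)
        M = B.T @ (A.T * w) @ A @ B
        y = B @ np.linalg.eigh(M)[1][:, 0]
        # maximal step keeping x inside the cube: freezes one more coordinate
        pos, neg = y > 1e-12, y < -1e-12
        lam = np.inf
        if pos.any():
            lam = min(lam, np.min((1.0 - x[pos]) / y[pos]))
        if neg.any():
            lam = min(lam, np.min((-1.0 - x[neg]) / y[neg]))
        if not np.isfinite(lam):
            break
        x = np.clip(x + lam * y, -1.0, 1.0)
    return x
```

Since the step size is maximal, each iteration freezes at least one new coordinate, so the loop stops after a linear number of steps.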

The analysis

Let . Note that since is on the sphere and one has that . Thus using for , as well as property (iv) (i.e., ) and one obtains:

Observe now that the subspace has dimension at least (say for ) and thus by (1) and the above inequalities one gets:

In particular for any , for some numerical constant . It only remains to observe that this ensures for any (this concludes the proof since we already observed that at time at least half of the coordinates are colored). For this last implication we simply use property (iii). Indeed, assume that some coordinate satisfies at some time , for some . Since each update increases the weights (multiplicatively) by at most , there must be an earlier time (say ) at which this weight was larger than and yet got updated, meaning that it was not among the top weights; in particular one had , which contradicts for large enough (namely ).

## New journal: Mathematical Statistics and Learning

I am thrilled to announce the launch of a new journal, “Mathematical Statistics and Learning”, to be published by the European Mathematical Society. The goal of the journal is to be the natural home for the top works addressing the mathematical challenges that arise from the current data revolution (as well as breakthroughs on classical data analysis problems!). I personally wish such a journal had existed for at least a decade, and I am very happy to be part of this endeavor as an associate editor. Please consider submitting your very best mathematical work in statistics and learning to this journal!

Some more details provided by Gabor Lugosi and Shahar Mendelson on behalf of the Editorial Board:

The journal is devoted to research articles of the highest quality in all aspects of Mathematical Statistics and Learning, including those studied in traditional areas of Statistics and in Machine Learning as well as in Theoretical Computer Science and Signal Processing. We believe that at this point in time there is no venue for top level mathematical publications in those areas, and our aim is to make the new journal such a venue.

The journal’s Editorial Board consists of the Editors,

Luc Devroye (McGill),

Gabor Lugosi (UPF Barcelona),

Shahar Mendelson (Technion and ANU),

Elchanan Mossel (MIT),

Mike Steele (U. Pennsylvania),

Alexandre Tsybakov (ENSAE),

Roman Vershynin (U. Michigan),

and the Associate Editors,

Sebastien Bubeck (Microsoft Research),

Andrea Montanari (Stanford),

Jelani Nelson (Harvard),

Philippe Rigollet (MIT),

Sara van de Geer (ETH – Zurich),

Ramon van Handel (Princeton),

Rachel Ward (UT – Austin).

The success of the journal depends entirely on our community; we need your help and support in making it the success we believe it can be. We therefore ask that you consider submitting to the journal results you think are of a very high quality.

The first issue of the journal is scheduled to appear in early 2018.

Posted in Announcement | 2 Comments

## STOC 2017 accepted papers

The list of accepted papers to STOC 2017 has just been released. Following the trend of recent years there are quite a few learning theory papers! I have already blogged about the kernel-based convex bandit algorithm, as well as the smoothed poly-time local max-cut (a.k.a. asynchronous Hopfield network). Some of the other learning papers that caught my attention: yet again a new viewpoint on acceleration for convex optimization; some progress on the complexity of finding stationary points of non-convex functions; a new twist on tensor decomposition for poly-time learning of latent variable models; an approximation algorithm for low-rank approximation in the ℓ1 norm; a new framework to learn from adversarial data; some progress on the trace reconstruction problem (amazingly, the exact same result was discovered independently by two teams, see here and here); a new sampling technique for graphical models; a new relevant statistical physics result; faster submodular minimization; and finally some new results on nearest neighbor search.

## Guest post by Miklos Racz: Confidence sets for the root in uniform and preferential attachment trees

In the final post of this series (see here for the previous posts) we consider yet another point of view for understanding networks. In the previous posts we studied random graph models with community structure and also models with an underlying geometry. While these models are important and lead to fascinating problems, they are also static in time. Many real-world networks are constantly evolving, and their understanding requires models that reflect this. This point of view brings about a host of new interesting and challenging statistical inference questions that concern the temporal dynamics of these networks. In this post we study such questions for two canonical models of randomly growing trees: uniform attachment and preferential attachment.

Models of growing graphs

A natural general model of randomly growing graphs can be defined as follows. For and a graph on vertices, define the random graph by induction. First, set ; we call the seed of the graph evolution process. Then, given , is formed from by adding a new vertex and some new edges according to some adaptive rule. If is a single vertex, we write simply instead of .

There are several rules one can consider; here we study perhaps the two most natural ones: uniform attachment and preferential attachment (denoted and in the following). Moreover, for simplicity we focus on the case of growing trees, where at every time step a single edge is added. Uniform attachment trees are defined recursively as follows: given , is formed from by adding a new vertex and adding a new edge where the vertex is chosen uniformly at random among vertices of , independently of all past choices. Preferential attachment trees are defined similarly, except that is chosen with probability proportional to its degree:

where for a tree we denote by the degree of vertex in .
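Both growth rules can be simulated in a few lines. The sketch below is my own illustration; the endpoint trick for preferential attachment (attach to a uniform endpoint of a uniform existing edge) is a standard way to sample a vertex with probability proportional to its degree.

```python
import random

def grow_tree(n, rule="uniform", seed=0):
    """Grow a random tree on vertices 1..n, labeled in order of arrival.

    uniform: the new vertex attaches to a uniformly chosen existing vertex.
    preferential: it attaches to a uniform endpoint of a uniform existing
    edge, i.e. to a vertex chosen proportionally to its degree.
    """
    rng = random.Random(seed)
    edges = []
    for t in range(2, n + 1):
        if rule == "uniform":
            parent = rng.randrange(1, t)
        else:
            # no edge exists yet at t = 2, so attach to the root
            parent = rng.choice(rng.choice(edges)) if edges else 1
        edges.append((parent, t))
    return edges
```

Either way the result is a tree: exactly one edge is added per new vertex.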

Questions: detection and estimation

The most basic questions to consider are those of detection and estimation. Can one detect the influence of the initial seed graph? If so, is it possible to estimate the seed? Can one find the root if the process was started from a single node? We introduce these questions in the general model of randomly growing graphs described above, even though we study them in the special cases of uniform and preferential attachment trees later.

The detection question can be rephrased in the terminology of hypothesis testing. Given two potential seed graphs and , and an observation which is a graph on vertices, one wishes to test whether or . The question then boils down to whether one can design a test with asymptotically (in ) nonnegligible power. This is equivalent to studying the total variation distance between and , so we naturally define

where and are random elements in the finite space of unlabeled graphs with vertices. This limit is well-defined because is nonincreasing in (since if , then the evolution of the random graphs can be coupled such that for all ) and always nonnegative.

If the seed has an influence, it is natural to ask whether one can estimate from for large . If so, can the subgraph corresponding to the seed be located in ? We study this latter question in the simple case when the process starts from a single vertex called the root. (In the case of preferential attachment, starting from a single vertex is not well-defined; in this case we start the process from a single edge and the goal is to find one of its endpoints.) A root-finding algorithm is defined as follows. Given and a target accuracy , a root-finding algorithm outputs a set of vertices such that the root is in with probability at least (with respect to the random generation of ).

An important aspect of this definition is that the size of the output set is allowed to depend on , but not on the size of the input graph. Therefore it is not clear that root-finding algorithms exist at all. Indeed, there are examples when they do not exist: consider a path that grows by picking one of its two ends at random and extending it by a single edge. However, it turns out that in many interesting cases root-finding algorithms do exist. In such cases it is natural to ask for the best possible value of .

The influence of the seed

Consider distinguishing between a PA tree started from a star with vertices, , and a PA tree started from a path with vertices, . Since the preferential attachment mechanism incorporates the rich-get-richer phenomenon, one expects the degree of the center of the star in to be significantly larger than the degree of any of the initial vertices in the path in . This intuition guided Bubeck, Mossel, and Racz when they initiated the theoretical study of the influence of the seed in PA trees. They showed that this intuition is correct: the limiting distribution of the maximum degree of the PA tree indeed depends on the seed. Using this they were able to show that for any two seeds and with at least vertices and different degree profiles we have
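The rich-get-richer intuition is easy to probe by simulation. The sketch below grows PA trees from a star seed and from a path seed and compares the average maximum degree; the seed size, horizon, and number of trials are arbitrary choices of mine, not values from the papers.

```python
import random

def pa_tree(seed_edges, n, rng):
    """Grow a preferential attachment tree from the given seed up to n
    vertices: each new vertex attaches to a uniform endpoint of a uniform
    existing edge, i.e. proportionally to degree."""
    edges = list(seed_edges)
    first_new = max(max(e) for e in edges) + 1
    for t in range(first_new, n + 1):
        edges.append((rng.choice(rng.choice(edges)), t))
    return edges

def max_degree(edges):
    deg = {}
    for a, b in edges:
        deg[a] = deg.get(a, 0) + 1
        deg[b] = deg.get(b, 0) + 1
    return max(deg.values())

rng = random.Random(0)
k, n, trials = 10, 400, 100
star = [(1, j) for j in range(2, k + 1)]      # star on k vertices, center 1
path = [(j, j + 1) for j in range(1, k)]      # path on k vertices
avg_star = sum(max_degree(pa_tree(star, n, rng)) for _ in range(trials)) / trials
avg_path = sum(max_degree(pa_tree(path, n, rng)) for _ in range(trials)) / trials
```

In this experiment the star-seeded trees have a markedly larger maximum degree on average, matching the intuition above.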

However, statistics based solely on degrees cannot distinguish all pairs of nonisomorphic seeds. This is because if and have the same degree profiles, then it is possible to couple and such that they have the same degree profiles for every . In order to distinguish between such seeds, it is necessary to incorporate information about the graph structure into the statistics that are studied. This was done successfully by Curien, Duquesne, Kortchemski, and Manolescu, who analyzed statistics that measure the geometry of large degree nodes. These results can be summarized in the following theorem.

Theorem: The seed has an influence in PA trees in the following sense. For any trees and that are nonisomorphic and have at least vertices, we have

In the case of uniform attachment, degrees do not play a special role, so initially one might even think that the seed has no influence in the limit. However, it turns out that the right perspective is not to look at degrees but rather the sizes of appropriate subtrees (we shall discuss such statistics later). By extending the approach of Curien et al. to deal with such statistics, Bubeck, Eldan, Mossel, and Racz showed that the seed has an influence in uniform attachment trees as well.

Theorem: The seed has an influence in UA trees in the following sense. For any trees and that are nonisomorphic and have at least vertices, we have

These results, together with a lack of examples showing opposite behavior, suggest that for most models of randomly growing graphs the seed has influence.

Question: How common is the phenomenon observed in Theorems 1 and 2? Is there a natural large class of randomly growing graphs for which the seed has an influence? That is, models where for any two seeds and (perhaps satisfying an extra condition), we have . Is there a natural model where the seed has no influence?

These theorems about the influence of the seed open up the problem of finding the seed. Here we present the results of Bubeck, Devroye, and Lugosi who first studied root-finding algorithms in the case of UA and PA trees.

They showed that root-finding algorithms indeed exist for PA trees and that the size of the best confidence set is polynomial in .

Theorem: There exists a polynomial time root-finding algorithm for PA trees with

for some finite constant . Furthermore, there exists a positive constant such that any root-finding algorithm for PA trees must satisfy

They also showed the existence of root-finding algorithms for UA trees. In this model, however, there are confidence sets whose size is subpolynomial in . Moreover, the size of any confidence set has to be at least superpolylogarithmic in .

Theorem: There exists a polynomial time root-finding algorithm for UA trees with

for some finite constant . Furthermore, there exists a positive constant such that any root-finding algorithm for UA trees must satisfy

These theorems show an interesting quantitative difference between the two models: finding the root is exponentially more difficult in PA than in UA. While this might seem counter-intuitive at first, the reason behind this can be traced back to the rich-get-richer phenomenon: the effect of a rare event where not many vertices attach to the root gets amplified by preferential attachment, making it harder to find the root.

Proofs using Polya urns

We now explain the basic ideas that go into proving Theorems 3 and 4 and prove some simpler cases. While UA and PA are arguably the most basic models of randomly growing graphs, the evolution of various simple statistics, such as degrees or subtree sizes, can be described using even simpler building blocks: Pólya urns. In this post we assume familiarity with Pólya urns; we refer the interested reader to the lecture notes for a primer.

A root-finding algorithm based on the centroid

We start by presenting a simple root-finding algorithm for UA trees. This algorithm is not optimal, but its analysis is simple and highlights the basic ideas.

For a tree , if we remove a vertex , then the tree becomes a forest consisting of disjoint subtrees of the original tree. Let denote the size (i.e., the number of vertices) of the largest component of this forest. A vertex that minimizes is known as a centroid of ; one can show that there can be at most two centroids. We define the confidence set by taking the set of vertices with smallest values.
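A minimal sketch of this estimator, writing psi(v) for the size of the largest component obtained when v is removed (the function name and the 1-indexed labeling are my choices). A single DFS suffices, since the components at v are the subtrees of its children plus the complement of v's own subtree:

```python
def centroid_confidence_set(edges, n, K):
    """Return the K vertices minimizing psi(v), the number of vertices in
    the largest component left after deleting v. Vertices are 1..n."""
    adj = {v: [] for v in range(1, n + 1)}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    # one DFS from vertex 1 gives all subtree sizes
    parent, order, stack, seen = {1: 0}, [], [1], {1}
    while stack:
        v = stack.pop()
        order.append(v)
        for u in adj[v]:
            if u not in seen:
                seen.add(u)
                parent[u] = v
                stack.append(u)
    size = {v: 1 for v in adj}
    for v in reversed(order[1:]):      # accumulate sizes leaves-first
        size[parent[v]] += size[v]
    def psi(v):
        # components at v: child subtrees, plus everything above v
        comps = [size[u] for u in adj[v] if parent.get(u) == v]
        if v != 1:
            comps.append(n - size[v])
        return max(comps, default=0)
    return sorted(adj, key=psi)[:K]
```

On the path 1-2-3-4-5, for instance, the middle vertex is the unique centroid and comes out first.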

Theorem: The centroid-based defined above is a root-finding algorithm for the UA tree. More precisely, if

then

where denotes the root, and denotes the unlabeled version of .

Proof: We label the vertices of the UA tree in chronological order. We start by introducing some notation that is useful throughout the proof. For , denote by the tree containing vertex in the forest obtained by removing in all edges between vertices . See the figure for an illustration.

Let denote the size of a tree , i.e., the number of vertices it contains. Note that the vector

evolves according to the classical Pólya urn with colors, with initial state . Therefore the normalized vector

converges in distribution to a Dirichlet distribution with parameters .

Now observe that

We bound the two terms appearing above separately, starting with the first one. Note that

and both and converge in distribution to a uniform random variable in . Hence a union bound gives us that

For the other term, first observe that for any we have

Now using results on Pólya urns we have that for every such that , the random variable

converges in distribution to the distribution. Hence by a union bound we have that

Putting together the two bounds gives that

which concludes the proof due to the assumption on .

The same estimator works for the preferential attachment tree as well, if one takes

for some positive constant . The proof mirrors the one above, but involves a few additional steps; we refer to Bubeck et al. for details.

For uniform attachment the bound on given by Theorem 5 is not optimal. It turns out that it is possible to write down the maximum likelihood estimator (MLE) for the root in the UA model; we do not do so here, see Bubeck et al. One can view the estimator based on the centroid as a certain “relaxation” of the MLE. By constructing a certain “tighter” relaxation of the MLE, one can obtain a confidence set with size subpolynomial in as described in Theorem 4. The analysis of this is the most technical part of Bubeck et al. and we refer to this paper for more details.

Lower bounds

As mentioned above, the MLE for the root can be written down explicitly. This aids in showing a lower bound on the size of a confidence set. In particular, Bubeck et al. define a set of trees whose probability of occurrence under the UA model is not too small, yet the MLE provably fails, giving the lower bound described in Theorem 4. We refer to Bubeck et al. for details.

On the other hand, for the PA model it is not necessary to use the structure of the MLE to obtain a lower bound. A simple symmetry argument suffices to show the lower bound in Theorem 3, which we now sketch.

First observe that the probability of error for the optimal procedure is non-decreasing with , since otherwise one could simulate the process to obtain a better estimate. Thus it suffices to show that the optimal procedure must have a probability of error of at least for some finite . We show that there is some finite such that with probability at least , the root is isomorphic to at least vertices in . Thus if a procedure outputs at most vertices, then it must make an error at least half the time (so with probability at least ).

Observe that the probability that the root is a leaf in is

By choosing , this happens with probability . Furthermore, conditioned on the root being a leaf, with constant probability vertex is connected to leaves, which are then isomorphic to the root.

Open problems

There are many open problems and further directions that one can pursue; the four papers we have discussed contain 20 open problems and conjectures alone, and we urge the reader to have a look and try to solve them!

Posted in Probability theory, Random graphs | 2 Comments

## Geometry of linearized neural networks

This week we had the pleasure to host Tengyu Ma from Princeton University who told us about the recent progress he has made with co-authors to understand various linearized versions of neural networks. I will describe here two such results, one for Residual Neural Networks and one for Recurrent Neural Networks.

Some properties to look for in non-convex optimization

We will say that a function admits first order optimality (respectively second order optimality) if all critical points (respectively all local minima) of are global minima (of course first order optimality implies second order optimality for smooth functions). In particular with first order optimality one has that gradient descent converges to the global minimum, and with second order optimality this is also true provided that one avoids saddle points. To obtain rates of convergence it can be useful to make more quantitative statements. For example we say that is -Polyak if

Clearly -Polyak implies first order optimality, but more importantly it also implies a linear convergence rate for gradient descent on . A variant of this condition is -weak-quasi-convexity:

in which case gradient descent converges at the slow non-smooth rate (and in this case it is also robust to noise, i.e. one can write a stochastic gradient descent version). The proofs of these statements just mimic the usual convex proofs. For more on these conditions see for instance this paper.
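As a tiny illustration of these conditions at work, here is plain gradient descent on the standard example f(x) = x^2 + 3 sin^2(x) (used e.g. by Karimi, Nutini and Schmidt): the function is non-convex, since f''(x) = 2 + 6 cos(2x) changes sign, yet its only stationary point is the global minimum x = 0, so gradient descent with a small enough step size still converges to it.

```python
import math

def f(x):
    # non-convex but gradient dominated; global minimum f(0) = 0
    return x * x + 3 * math.sin(x) ** 2

def fprime(x):
    return 2 * x + 3 * math.sin(2 * x)

x = 3.0
for _ in range(200):
    x -= 0.1 * fprime(x)   # step size below 2/L with L = sup f'' = 8
```

The iterates contract geometrically once near the minimum, as the linear-rate discussion above predicts.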

Linearized Residual Networks

Recall that a neural network is just a map where are linear maps (i.e., the matrices parametrizing the neural network) and is some non-linear map (the most popular one, ReLU, is just the coordinate-wise positive part). Alternatively you can think of a neural network as a sequence of hidden states where and . In 2015 a team of researchers at MSR Asia introduced the concept of a residual neural network, where the hidden states are now updated as before for even but for odd we set . Apparently this trick allowed them to train much deeper networks, though it is not clear why this would help from a theoretical point of view (the intuition is that, at least when the network is initialized with all matrices equal to , it still does something non-trivial, namely it computes the identity).

In their most recent paper Moritz Hardt and Tengyu Ma try to explain why adding this “identity connection” could be a good idea from a geometric point of view. They consider an (extremely) simplified model where there is no non-linearity, i.e. is the identity map. A neural network is then just a product of matrices. In particular the landscape we are looking at for least-squares with such a model is of the form:

which is of course a non-convex function (just think of the function and observe that on the segment it gives the non-convex function ). However it actually satisfies the second-order optimality condition:

Proposition [Kawaguchi 2016]

Assume that has a full rank covariance matrix and that for some deterministic matrix . Then all local minima of are global minima.

I won’t give the proof of this result as it requires taking the second derivative of , which is a bit annoying (I will give below the first-derivative computation). Now in this linearized setting the residual network version (where the identity connection is added at every layer) corresponds simply to a reparametrization around the identity; in other words we now consider the following function:

Proposition [Hardt and Ma 2016]

Assume that has a full rank covariance matrix and that for some deterministic matrix . Then has first order optimality on the set .

Thus adding the identity connection makes the objective function better behaved around the starting point with all-zero matrices (in the sense that gradient descent doesn’t have to worry about avoiding saddle points). The proof is just a few lines of standard calculations for taking derivatives of functions with matrix-valued inputs.

Proof: One has with and ,

so with and ,

which exactly means that the derivative of with respect to is equal to . On the set under consideration one has that and are invertible (and so is by assumption), and thus if this derivative is equal to it must be that and thus (which is the global minimum).
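One can also check this picture numerically: gradient descent on the reparametrized two-layer objective, started from the all-zeros matrices, drives the least-squares risk to zero. This is only a sketch; the dimensions, data, target matrix, and step size below are arbitrary choices of mine.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m = 3, 200
X = rng.standard_normal((d, m))              # data with full-rank covariance
R = np.eye(d) + 0.1 * rng.standard_normal((d, d))   # target near the identity
Sigma = X @ X.T / m
I = np.eye(d)

A1, A2 = np.zeros((d, d)), np.zeros((d, d))  # all-zeros init: network = identity
lr = 0.05
for _ in range(500):
    P = (I + A2) @ (I + A1)
    G = 2 * (P - R) @ Sigma                  # gradient of the risk w.r.t. P
    A2 -= lr * G @ (I + A1).T                # chain rule through the product
    A1 -= lr * (I + A2).T @ G

P = (I + A2) @ (I + A1)
loss = np.trace((P - R) @ Sigma @ (P - R).T)
```

With this initialization the matrices stay small, so the first-order optimality of the proposition applies along the whole trajectory and the loss goes to zero.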

Linearized recurrent neural networks

The simplest version of a recurrent neural network is as follows. It is a mapping of the form (we are thinking of doing sequence-to-sequence prediction). In these networks the hidden state is updated as (with ) and the output is . I will now describe a paper by Hardt, Ma and Recht (see also this blog post) that tries to understand the geometry of least-squares for this problem in the linearized version where . That is, we are looking at the function:

where is obtained from via some unknown recurrent neural network with parameters . First observe that by induction one can easily see that and . In particular, assuming that is an i.i.d. isotropic sequence one obtains

and thus

In particular we see that the effect of is decoupled from the other variables and that it appears as a convex function, so we will simply ignore it. Next we make the natural assumption that the spectral radius of is less than (for otherwise the influence of the initial input grows over time, which does not seem natural), and thus up to a small error term (for large ) one can consider the idealized risk:

The next idea is a cute one which makes the above expression more tractable. Consider the series and its Fourier transform:

By Parseval’s theorem the idealized risk is equal to the distance between and (i.e., ). We will now show that, under appropriate further assumptions, for any , is weakly-quasi-convex in (in particular this shows that the idealized risk is weakly-quasi-convex). The big assumption that Hardt, Ma and Recht make is that the system is “single-input single-output”, that is both and are scalar. In this case it turns out that control theory shows that there is a “canonical controllable form” where , and has zeros everywhere except on the upper diagonal, where it has ones, and on the last row, where it has (I don’t know the proof of this result; if some reader has a pointer to a simple proof, please share in the comments!). Note that with this form the system is simple to interpret, as one has and . Now with just a few lines of algebra:

Thus we are just asking to check the weak-quasi-convexity of

Weak-quasi-convexity is preserved by linear functions, so we just need to understand the map

which is weak-quasi-convex provided that has a positive inner product with . In particular we just proved the following:

Theorem [Hardt, Ma, Recht 2016]

Let and assume there is some cone of angle less than such that . Then the idealized risk is -weakly-quasi-convex on the set of such that .

(In the paper they specifically pick the cone where the imaginary part is larger than the real part.) This theorem naturally suggests that by overparametrizing the network (i.e. adding dimensions to and ) one could have a nicer landscape (indeed in this case the above condition can be easier to check), see the paper for more details!
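For concreteness, the canonical controllable form mentioned above is the classical companion matrix, whose characteristic polynomial can be read off its last row. A quick numerical check (the polynomial coefficients are my arbitrary choice):

```python
import numpy as np

def companion(a):
    """Companion matrix of p(x) = x^d + a[d-1] x^{d-1} + ... + a[0]:
    ones on the superdiagonal and -a on the last row."""
    d = len(a)
    A = np.zeros((d, d))
    A[np.arange(d - 1), np.arange(1, d)] = 1.0
    A[-1, :] = -np.asarray(a, dtype=float)
    return A

A = companion([2.0, -3.0, 0.5])   # p(x) = x^3 + 0.5 x^2 - 3 x + 2
coeffs = np.poly(A)               # characteristic polynomial, leading first
```

So the last row of the companion matrix carries exactly the coefficients of the characteristic polynomial, which is what makes this parametrization convenient for the Fourier-domain argument above.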

Posted in Optimization | 1 Comment

## Local max-cut in smoothed polynomial time

Omer Angel, Yuval Peres, Fan Wei, and I have just posted to the arXiv our paper showing that local max-cut is in smoothed polynomial time. In this post I briefly explain what the problem is, and I give a short proof of the previous state-of-the-art result on this problem, a paper by Etscheid and Röglin showing that local max-cut is in quasi-polynomial time.

Local max-cut and smoothed analysis

Let be a connected graph with vertices and be an edge weight function. The local max-cut problem asks to find a partition of the vertices whose total cut weight

is locally maximal, in the sense that one cannot increase the cut weight by changing the value of at a single vertex (recall that actually finding the global maximum is NP-hard). See the papers linked to above for motivation on this problem.

There is a simple local search algorithm for this problem, sometimes referred to as “FLIP”: start from some initial and iteratively flip vertices (i.e. change the sign of at a vertex) to improve the cut weight until reaching a local maximum. It is easy to build instances where FLIP takes exponential time, but in “practice” it seems that FLIP always converges quickly. This motivates the smoothed analysis of FLIP: we want to understand the typical number of steps of FLIP when the edge weights are perturbed by a small amount of noise. Formally we now assume that the weight on edge is given by a random variable which has a density with respect to the Lebesgue measure bounded from above by (for example this forbids from being too close to a point mass). We assume that these random variables are independent.
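A minimal sketch of FLIP on a complete graph with Gaussian (“smoothed”) edge weights; the graph size and weight distribution are my choices for illustration. At termination no single flip has positive gain, i.e. the cut is a local maximum:

```python
import itertools
import random

def flip_local_search(n, w, seed=0):
    """FLIP local search for local max-cut. w maps frozenset({u, v}) to the
    edge weight; start from a random cut sigma in {-1,+1}^n and flip any
    improving vertex until no single flip increases the cut weight."""
    rng = random.Random(seed)
    sigma = {v: rng.choice([-1, 1]) for v in range(n)}

    def gain(v):
        # flipping v cuts the edges to v's own side and un-cuts the others
        return sum(w[frozenset((v, u))] * (1 if sigma[u] == sigma[v] else -1)
                   for u in range(n) if u != v)

    improved = True
    while improved:
        improved = False
        for v in range(n):
            if gain(v) > 0:
                sigma[v] = -sigma[v]
                improved = True
    return sigma

# complete graph on 8 vertices with Gaussian edge weights
rng = random.Random(1)
w = {frozenset(p): rng.gauss(0, 1) for p in itertools.combinations(range(8), 2)}
cut = flip_local_search(8, w)
gains = [sum(w[frozenset((v, u))] * (1 if cut[u] == cut[v] else -1)
             for u in range(8) if u != v) for v in range(8)]
```

Each improving flip strictly increases the cut weight and there are finitely many cuts, so the search always terminates; the smoothed analysis question is how fast.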

Theorem (Etscheid and Röglin [2014]): With probability , FLIP terminates in steps for some universal constant .

We improve this result from quasi-polynomial to polynomial, assuming that we put some noise on the interaction between every pair of vertices, or in other words assuming that the graph is complete.

Theorem (Angel, Bubeck, Peres, Wei [2016]): Let be complete. With probability , FLIP terminates in steps.

I will now prove the Etscheid and Röglin result.

Proof strategy

To simplify notation let us introduce the Hamiltonian

We want to find a local max of . For any and , we denote by the state equal to except for the coordinate corresponding to which is flipped. For such there exists a vector such that

More specifically is defined by

We say that flipping a vertex is a move, and it is an improving move if the value of strictly improves. We say that a sequence of moves is -slow from if

It is sufficient to show that with high probability there is no -slow sequence with, say, and (indeed in this case after steps FLIP must have stopped, for otherwise the value would exceed the maximal possible value of ). We will do this in three main steps: a probability step, a linear algebra step, and a combinatorial step.

Probability step

Lemma: Let be linearly independent vectors in . Then one has

Proof: The inequality follows from a simple change of variables. Let be a full rank matrix whose first rows are , completed so that is the identity on the subspace orthogonal to . Let be the density of , and the density of . One has , and the key observation is that since has integer coefficients, its determinant must be an integer too; since it is non-zero one has . Thus one gets:

Linear algebra step

Lemma: Consider a sequence of improving moves with distinct vertices (say ) that repeat at least twice in this sequence. Let be the corresponding move coefficients, and for each let (respectively ) be the first (respectively second) time at which moves. Then the vectors , , are linearly independent. Furthermore for any that did not move between the times and one has (and for any ).

Proof: The last sentence of the lemma is obvious. For the linear independence, let be such that . Consider a new graph with vertex set and such that is connected to if appears an odd number of times between the times and . This defines an oriented graph; however if is connected to but is not connected to , then one has while (and furthermore for any ) and thus . In other words we can consider a subset of where is an undirected graph, and outside of this subset is identically zero. To reduce notation we simply assume that is undirected. Next we observe that if and are connected then one must have (this uses the fact that we look at the *first* consecutive times at which the vertices move) and in particular (again using that for any ) one must have . Now let be some connected component of , and let be the unique value of on . Noting that the ‘s corresponding to different components of have disjoint supports (more precisely, with , one has for any and for any ) one obtains . On the other hand since the sequence of moves is improving one must have , which implies and finally (thus concluding the proof of the linear independence).

Combinatorial step

Lemma: Let . There exists and such that the number of vertices that repeat at least twice in the segment is at least .

Proof (from ABPW): Define the surplus of a sequence to be the difference between the number of elements and the number of distinct elements in the sequence. Let be the maximum surplus in any segment of length in . Observe that . Let us now assume that for any segment of length , the number of vertices that repeat at least twice is at most . Then one has by induction

This shows that has to be greater than which concludes the proof.
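To make the combinatorial definitions concrete, here is a small Python sketch of the surplus of a sequence and of the set of vertices repeating at least twice in a segment (function names are mine, not from the paper):

```python
from collections import Counter

def surplus(seq):
    # Surplus = (number of elements) - (number of distinct elements).
    return len(seq) - len(set(seq))

def repeating(seq):
    # Vertices that appear at least twice in the segment.
    return {v for v, c in Counter(seq).items() if c >= 2}

segment = [1, 2, 1, 3, 2, 4, 1]
print(surplus(segment))            # 7 elements, 4 distinct -> 3
print(sorted(repeating(segment)))  # [1, 2]
```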

Putting things together

We want to show that with one has

By the combinatorial lemma we know that it is enough to show that:

Now using the probability lemma together with the linear algebra lemma (and observing that critically only depends on the value of at the vertices in , and thus the union bound over only gives a factor instead of ) one obtains that the above probability is bounded by

which concludes the proof of the Etscheid and Roglin result.

Note that a natural route to get a polynomial-time bound from the above proof would be to remove the term in the combinatorial lemma but we show in our paper that this is impossible. Our result comes essentially from improvements to the linear algebra step (this is more difficult as the Etscheid and Roglin linear algebra lemma is particularly friendly for the union bound step, so we had to find another way to do the union bound).


## Prospective graduate student? Consider University of Washington!

This post is targeted at prospective graduate students, especially foreigners from outside the US, who are primarily interested in optimization but also have a taste for probability theory (basically readers of this blog!). As a foreigner myself I remember that during my undergraduate studies my view of the US was essentially restricted to the usual suspects, Princeton, Harvard, MIT, Berkeley, Stanford. These places are indeed amazing, but I would like to try to raise awareness that, in terms of the interface optimization/probability, University of Washington (and especially the theory of computation group there) has a reasonable claim to being the place with the most amazing opportunities right now in this space:

1. The junior faculty (Shayan Oveis Gharan, Thomas Rothvoss, and Yin Tat Lee) are all doing groundbreaking (and award-winning) work at the interface optimization/probability. In my opinion the junior faculty roster is a key element in the choice of grad school, as typically junior faculty have much more time to dedicate to students. In particular I know that Yin Tat Lee is looking for graduate students starting next Fall.
2. Besides the theory of computation group, UW has lots of resources in optimization such as TOPS (Trends in Optimization Seminar), and many optimization faculty in various departments (Maryam Fazel, Jim Burke, Dmitriy Drusvyatskiy, Jeff Bilmes, Zaid Harchaoui), which means many interesting classes to take!
3. The Theory Group at Microsoft Research is just a bridge away from UW, and we have lots of activities on optimization/probability there too. In fact I am also looking for one graduate student, to be co-advised with a faculty from UW.

Long story short, if you are a talented young mathematician interested in making a difference in optimization then you should apply to the CS department at UW, and here is the link to do so.


## Guest post by Miklos Racz: Entropic central limit theorems and a proof of the fundamental limits of dimension estimation

In this post we give a proof of the fundamental limits of dimension estimation in random geometric graphs, based on the recent work of Bubeck and Ganguly. We refer the reader to the previous post for a detailed introduction; here we just recall the main theorem we will prove.

Theorem [Bubeck and Ganguly]

If the distribution is log-concave, i.e., if it has density proportional to $e^{-\varphi}$ for some convex function $\varphi$, and if , then

(1)

where is an appropriately scaled Wishart matrix coming from vectors having i.i.d. entries and is a GOE matrix, both with the diagonal removed.

The proof hinges on a high-dimensional entropic central limit theorem, so a large part of the post is devoted to entropic central limit theorems and ways of proving them. Without further ado let us jump right in.

Pinsker’s inequality: from total variation to relative entropy

Our goal is now to bound from above. In the general setting considered here there is no nice formula for the density of the Wishart ensemble, so cannot be computed directly. Coupling these two random matrices also seems challenging.

In light of these observations, it is natural to switch to a different metric on probability distributions that is easier to handle in this case. Here we use Pinsker’s inequality to switch to relative entropy:

(2)

where denotes the relative entropy of with respect to . We next take a detour to entropic central limit theorems and techniques involved in their proof, before coming back to bounding the right hand side in (2).
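For reference, the form of Pinsker's inequality referred to in (2), with $\mathrm{Ent}(\mu \,\|\, \nu)$ denoting the relative entropy (Kullback-Leibler divergence) of $\mu$ with respect to $\nu$, is:

```latex
\mathrm{TV}(\mu, \nu) \le \sqrt{\tfrac{1}{2}\, \mathrm{Ent}(\mu \,\|\, \nu)},
\qquad \text{where } \mathrm{Ent}(\mu \,\|\, \nu) = \int \log\!\left(\frac{d\mu}{d\nu}\right) d\mu .
```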

An introduction to entropic CLTs

Let denote the density of , the -dimensional standard Gaussian distribution, and let be an isotropic density with mean zero, i.e., a density for which the covariance matrix is the identity . Then

where the second equality follows from the fact that is quadratic in , and the first two moments of and are the same by assumption. We thus see that the standard Gaussian maximizes entropy among isotropic densities. It turns out that much more is true.
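In symbols, with $h(f) = -\int f \log f$ denoting differential entropy, the computation sketched above is the standard one:

```latex
h(\gamma_n) - h(f)
= \int f \log f - \int \gamma_n \log \gamma_n
= \int f \log f - \int f \log \gamma_n
= \mathrm{Ent}(f \,\|\, \gamma_n) \ge 0,
```

where the middle equality uses that $\log \gamma_n$ is quadratic in $x$ and the first two moments of $f$ and $\gamma_n$ agree.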

The central limit theorem states that if $X_1, X_2, \dots$ are i.i.d. real-valued random variables with zero mean and unit variance, then $S_n := \frac{1}{\sqrt{n}} \sum_{i=1}^{n} X_i$ converges in distribution to a standard Gaussian random variable as $n \to \infty$. There are many other senses in which $S_n$ converges to a standard Gaussian, the entropic CLT being one of them.

Theorem [Entropic CLT]

Let $X_1, X_2, \dots$ be i.i.d. real-valued random variables with zero mean and unit variance, and let $S_n = \frac{1}{\sqrt{n}} \sum_{i=1}^{n} X_i$. If $h(X_1) > -\infty$, then

$h(S_n) \to h(G)$

as $n \to \infty$, where $G$ is a standard Gaussian. Moreover, the entropy of $S_n$ increases monotonically, i.e., $h(S_n) \le h(S_{n+1})$ for every $n$.

The condition $h(X_1) > -\infty$ is necessary for an entropic CLT to hold; for instance, if the $X_i$ are discrete, then $h(S_n) = -\infty$ for all $n$.

The entropic CLT originates with Shannon in the 1940s and was first proven by Linnik (without the monotonicity part of the statement). The first proofs that gave explicit convergence rates were given independently and at roughly the same time by Artstein, Ball, Barthe, and Naor, and Johnson and Barron in the early 2000s, using two different techniques.

The fact that $h(S_2) \ge h(S_1)$ follows from the entropy power inequality, which goes back to Shannon in 1948. This implies that $h(S_{2n}) \ge h(S_n)$ for all $n$, and so it was naturally conjectured that $h(S_n)$ increases monotonically. However, proving this turned out to be challenging. Even the inequality $h(S_3) \ge h(S_2)$ was unknown for over fifty years, until Artstein, Ball, Barthe, and Naor proved in general that $h(S_{n+1}) \ge h(S_n)$ for all $n$.

In the following we sketch some of the main ideas that go into the proof of these results, in particular following the technique introduced by Ball, Barthe, and Naor.

From relative entropy to Fisher information

Our goal is to show that some random variable , which is a convolution of many i.i.d. random variables, is close to a Gaussian . One way to approach this is to interpolate between the two. There are several ways of doing this; for our purposes interpolation along the Ornstein-Uhlenbeck semigroup is most useful. Define

for , and let denote the density of . We have and . This semigroup has several desirable properties. For instance, if the density of is isotropic, then so is . Before we can state the next desirable property that we will use, we need to introduce a few more useful quantities.
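Concretely, the Ornstein-Uhlenbeck interpolation between $X$ (with density $f$) and a standard Gaussian $G$ drawn independently of $X$ can be written as (writing $f_t$ for the density of $X_t$; the notation is mine):

```latex
X_t = e^{-t} X + \sqrt{1 - e^{-2t}}\, G, \qquad t \in [0, \infty),
```

so that $X_0 = X$ and $X_t \to G$ in distribution as $t \to \infty$.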

For a density function , let

be the Fisher information matrix. The Cramer-Rao bound states that

More generally this holds for the covariance of any unbiased estimator of the mean. The Fisher information is defined as

It is sometimes more convenient to work with the Fisher information distance, defined as . Similarly to the discussion above, one can show that the standard Gaussian minimizes the Fisher information among isotropic densities, and hence the Fisher information distance is always nonnegative.
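For reference, the quantities in this paragraph can be written as follows, for a sufficiently smooth density $f$ on $\mathbb{R}^n$ (a standard presentation; the symbols $\mathcal{J}$, $I$, $J$ are my notation):

```latex
\mathcal{J}(f) = \int_{\mathbb{R}^n} \frac{\nabla f \, (\nabla f)^{\top}}{f},
\qquad
\mathrm{Cov}(f) \succeq \mathcal{J}(f)^{-1},
\qquad
I(f) = \mathrm{Tr}\,\mathcal{J}(f),
\qquad
J(f) = I(f) - n ,
```

where the second relation is the Cramer-Rao bound; since $I(\gamma_n) = n$ for the standard Gaussian, the Fisher information distance $J(f)$ is nonnegative for isotropic $f$.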

Now we are ready to state the De Bruijn identity, which characterizes the change of entropy along the Ornstein-Uhlenbeck semigroup via the Fisher information distance:

This implies that the relative entropy between and —which is our quantity of interest—can be expressed as follows:

(3)

Thus our goal is to bound the Fisher information distance .

Bounding the Fisher information distance

We first recall a classical result by Blachman and Stam that shows that Fisher information decreases under convolution.

Theorem [Blachman; Stam]

Let be independent random variables taking values in , and let be such that . Then

In the i.i.d. case, with all coefficients equal, this bound says that the Fisher information of the normalized sum is at most the Fisher information of a single summand.

Ball, Barthe, and Naor gave the following variational characterization of the Fisher information, which gives a particularly simple proof of Theorem 3. (See Bubeck and Ganguly for a short proof.)

Theorem [Variational characterization of Fisher information]

Let be a sufficiently smooth density on , let be a unit vector, and let be the marginal of in direction . Then we have

(4)

for any continuously differentiable vector field with the property that for every , . Moreover, if satisfies , then there is equality for some suitable vector field .

The Blachman-Stam theorem follows from this characterization by taking the constant vector field . Then we have , and so the right hand side of (4) becomes , where recall that is the Fisher information matrix. In the setting of Theorem 3 the density of is a product density: , where is the density of . Consequently the Fisher information matrix is a diagonal matrix, , and thus , concluding the proof of Theorem 3 using Theorem 4.

Given the characterization of Theorem 4, one need not take the vector field to be constant; one can obtain more by optimizing over the vector field. Doing this leads to the following theorem, which gives a rate of decrease of the Fisher information distance under convolutions.

Theorem [Artstein, Ball, Barthe, and Naor]

Let be i.i.d. random variables with a density having a positive spectral gap . (We say that a random variable has spectral gap if for every sufficiently smooth , we have . In particular, log-concave random variables have a positive spectral gap, see Bobkov (1999).) Then for any with we have that

When the coefficients are all equal to $1/\sqrt{n}$, the bound above decays at rate $1/n$, and thus using (3) we obtain a rate of convergence of order $1/n$ in the entropic CLT.

A result similar to Theorem 5 was proven independently and roughly at the same time by Johnson and Barron using a different approach involving score functions.

A high-dimensional entropic CLT

The techniques of Artstein, Ball, Barthe, and Naor generalize to higher dimensions, as was recently shown by Bubeck and Ganguly.

A result similar to Theorem 5 can be proven, from which a high-dimensional entropic CLT follows, together with a rate of convergence, by using (3) again.

Theorem [Bubeck and Ganguly]

Let be a random vector with i.i.d. entries from a distribution with zero mean, unit variance, and spectral gap . Let be a matrix such that , the identity matrix. Let

and

Then we have that

where denotes the standard Gaussian measure in .

To interpret this result, consider the case where the matrix is built by picking rows one after the other uniformly at random on the Euclidean sphere in , conditionally on being orthogonal to previous rows (to satisfy the isotropicity condition ). We then expect to have and (we leave the details as an exercise for the reader), and so Theorem 7 tells us that .

Back to Wishart and GOE

We now turn our attention back to bounding the relative entropy ; recall (2). Since the Wishart matrix contains the (scaled) inner products of vectors in , it is natural to relate and , since the former comes from the latter by adding an additional -dimensional vector to the vectors already present. Specifically, we have the following:

where is a -dimensional random vector with i.i.d. entries from , which are also independent from . Similarly we can write the matrix using :

This naturally suggests to use the chain rule for relative entropy and bound

by induction on . We get that

By convexity of the relative entropy we also have that

Thus our goal is to understand and bound for , and then apply the bound to (followed by taking expectation over ). This is precisely what was done in Theorem 6, the high-dimensional entropic CLT, for satisfying . Since does not necessarily satisfy , we have to correct for the lack of isotropicity. This is the content of the following lemma, the proof of which we leave as an exercise for the reader.
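For reference, the chain rule for relative entropy used in the induction over the rows reads, for joint laws $\mu$ and $\nu$ of a pair $(X, Y)$:

```latex
\mathrm{Ent}\left( \mu_{(X,Y)} \,\|\, \nu_{(X,Y)} \right)
= \mathrm{Ent}\left( \mu_X \,\|\, \nu_X \right)
+ \mathbb{E}_{x \sim \mu_X}\, \mathrm{Ent}\left( \mu_{Y \mid X = x} \,\|\, \nu_{Y \mid X = x} \right).
```

Applied to the last row of the Wishart matrix given the previous ones, this reduces the problem to the conditional laws discussed above.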

Lemma

Let and be such that . Then for any isotropic random variable taking values in we have that

(5)

We then apply this lemma with and . Observe that

and hence in expectation the middle two terms of the right hand side of (5) cancel each other out.

The last term in (5),

should be understood as the relative entropy between a centered Gaussian with covariance given by and a standard Gaussian in . Controlling the expectation of this term requires studying the probability that is close to being non-invertible, which requires bounds on the left tail of the smallest singular value of . Understanding the extreme singular values of random matrices is a fascinating topic, but it is outside of the scope of these notes, and so we refer the reader to Bubeck and Ganguly for more details on this point.

Finally, the high-dimensional entropic CLT can now be applied to see that

From the induction on we get another factor of , arriving at

We conclude that the dimension threshold is $n^3$, and the information-theoretic proof that we have outlined sheds light on why this threshold is $n^3$.

## Guest post by Miklos Racz: The fundamental limits of dimension estimation in random geometric graphs

This post is a continuation of the previous one, where we explored how to detect geometry in a random geometric graph. We now consider the other side of the coin: when is it impossible to detect geometry and what are techniques for proving this? We begin by discussing the model introduced in the previous post and then turn to a more general setup, proving a robustness result on the threshold dimension for detection. The proof of the latter result also gives us the opportunity to learn about the fascinating world of entropic central limit theorems.

Barrier to detecting geometry: when Wishart becomes GOE

Recall from the previous post that $G(n, p, d)$ is a random geometric graph where the underlying metric space is the unit sphere $S^{d-1}$, and where the latent labels of the nodes are i.i.d. uniform random vectors in $S^{d-1}$. Our goal now is to show the impossibility result of Bubeck, Ding, Eldan, and Racz: if $d \gg n^3$, then it is impossible to distinguish between $G(n, p, d)$ and the Erdos-Renyi random graph $G(n, p)$. More precisely, we have that

(1)

when $n \to \infty$ and $d / n^3 \to \infty$, where $\mathrm{TV}$ denotes total variation distance.

There are essentially three main ways to bound the total variation of two distributions from above: (i) if the distributions have nice formulas associated with them, then exact computation is possible; (ii) through coupling the distributions; or (iii) by using inequalities between probability metrics to switch the problem to bounding a different notion of distance between the distributions. Here, while the distribution of does not have a nice formula associated with it, the main idea is to view this random geometric graph as a function of an Wishart matrix with degrees of freedom—i.e., a matrix of inner products of -dimensional Gaussian vectors—denoted by . It turns out that one can view as (essentially) the same function of an GOE random matrix—i.e., a symmetric matrix with i.i.d. Gaussian entries on and above the diagonal—denoted by . The upside of this is that both of these random matrix ensembles have explicit densities that allow for explicit computation. We explain this connection here in the special case of for simplicity; see Bubeck et al. for the case of general .

Recall that if $Y$ is a standard normal random variable in $\mathbb{R}^d$, then $Y / \|Y\|$ is uniformly distributed on the sphere $S^{d-1}$. Consequently we can view $G(n, 1/2, d)$ as a function of an appropriate Wishart matrix, as follows. Let $Y$ be an $n \times d$ matrix where the entries are i.i.d. standard normal random variables, and let $W = Y Y^{\top}$ be the corresponding Wishart matrix. Note that $W_{ij} = \langle Y_i, Y_j \rangle$ has the same sign as $\langle Y_i / \|Y_i\|, Y_j / \|Y_j\| \rangle$, and so for $p = 1/2$ the adjacency can be read off the signs of the off-diagonal entries of $W$. Thus the matrix defined as

has the same law as the adjacency matrix of . Denote the map that takes to by , i.e., .

In a similar way we can view as a function of an matrix drawn from the Gaussian Orthogonal Ensemble (GOE). Let be a symmetric random matrix where the diagonal entries are i.i.d. normal random variables with mean zero and variance 2, and the entries above the diagonal are i.i.d. standard normal random variables, with the entries on and above the diagonal all independent. Then has the same law as the adjacency matrix of . Note that only depends on the sign of the off-diagonal elements of , so in the definition of we can replace with , where is the identity matrix.
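A minimal Python sketch of the $p = 1/2$ construction above, mapping a Wishart matrix to the adjacency matrix of the random geometric graph (variable names and parameters are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 5

# n i.i.d. standard Gaussian vectors in R^d; W = Y Y^T is Wishart.
Y = rng.standard_normal((n, d))
W = Y @ Y.T

# Normalizing the rows of Y gives uniform points on the sphere, and for
# p = 1/2 (threshold 0) the edge indicator is just the sign of the
# inner product, i.e., of the off-diagonal Wishart entry.
A = (W >= 0).astype(int)
np.fill_diagonal(A, 0)
print(A)  # adjacency matrix of a sample of G(n, 1/2, d)
```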

We can thus conclude that

The densities of these two random matrix ensembles are explicit and well known (although we do not state them here), which allow for explicit calculations. The outcome of these calculations is the following result, proven independently and simultaneously by Bubeck et al. and Jiang and Li.

Define the random matrix ensembles and as above. If , then

We conclude that it is impossible to detect underlying geometry whenever $d \gg n^3$.

The universality of the threshold dimension

How robust is the result presented above? We have seen that the detection threshold is intimately connected to the threshold of when a Wishart matrix becomes GOE. Understanding the robustness of this result on random matrices is interesting in its own right, and this is what we will pursue in the remainder of this post, which is based on a recent paper by Bubeck and Ganguly.

Let be an random matrix with i.i.d. entries from a distribution that has mean zero and variance . The matrix is known as the Wishart matrix with degrees of freedom. As we have seen above, this arises naturally in geometry, where is known as the Gram matrix of inner products of points in . The Wishart matrix also appears naturally in statistics as the sample covariance matrix, where is the number of samples and is the number of parameters. (Note that in statistics the number of samples is usually denoted by , and the number of parameters is usually denoted by ; here our notation is taken with the geometric perspective in mind.)

We consider the Wishart matrix with the diagonal removed, and scaled appropriately:

In many applications—such as to random graphs as above—the diagonal of the matrix is not relevant, so removing it does not lose information. Our goal is to understand how large does the dimension have to be so that is approximately like , which is defined as the Wigner matrix with zeros on the diagonal and i.i.d. standard Gaussians above the diagonal. In other words, is drawn from the Gaussian Orthogonal Ensemble (GOE) with the diagonal replaced with zeros.

A simple application of the multivariate central limit theorem gives that if is fixed and , then converges to in distribution. The main result of Bubeck and Ganguly establishes that this holds as long as $d \gg n^3$ (up to logarithmic factors) under rather general conditions on the distribution.
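As a quick numerical sanity check of the entrywise statement (not, of course, of the total variation claim itself), one can verify that an off-diagonal entry of the scaled Wishart matrix is approximately standard Gaussian for large $d$; a sketch with parameters of my choosing:

```python
import numpy as np

rng = np.random.default_rng(1)
d, trials = 500, 4000

# An off-diagonal entry of X X^T / sqrt(d), for X with i.i.d. N(0,1)
# entries, is (1/sqrt(d)) * sum_k X_{1k} X_{2k}: a normalized sum of
# d i.i.d. products of independent standard Gaussians.
x = rng.standard_normal((trials, d))
y = rng.standard_normal((trials, d))
entries = (x * y).sum(axis=1) / np.sqrt(d)

print(entries.mean(), entries.var())  # mean approx 0, variance approx 1
```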

Theorem [Bubeck and Ganguly]

If the distribution is log-concave, i.e., if it has density proportional to $e^{-\varphi}$ for some convex function $\varphi$, and if , then

(2)

On the other hand, if has a finite fourth moment and , then

(3)

This result extends Theorem 1 from the previous post and Theorem 1 from above, and establishes as the universal critical dimension (up to logarithmic factors) for sufficiently smooth measures : is approximately Gaussian if and only if is much larger than . For random graphs, as seen above, this is the dimension barrier to extracting geometric information from a network: if the dimension is much greater than the cube of the number of vertices, then all geometry is lost. In the setting of statistics this means that the Gaussian approximation of a Wishart matrix is valid as long as the sample size is much greater than the cube of the number of parameters. Note that for some statistics of a Wishart matrix the Gaussian approximation is valid for much smaller sample sizes (e.g., the largest eigenvalue behaves as in the limit even when the number of parameters is on the same order as the sample size (Johnstone, 2001)).

To distinguish the random matrix ensembles, we have seen in the previous post that signed triangles work up until the threshold dimension in the case when the entry distribution is standard normal. It turns out that the same statistic works in this more general setting; when the entries of the matrices are centered, this statistic can be written as $\sum_{i<j<k} A_{ij} A_{jk} A_{ki}$. We leave the details as an exercise for the reader.

We note that for (2) to hold it is necessary to have some smoothness assumption on the distribution . For instance, if is purely atomic, then so is the distribution of , and thus its total variation distance to is 1. The log-concave assumption gives this necessary smoothness, and it is an interesting open problem to understand how far this can be relaxed.

We will see the proof (and in particular the connection to entropic CLT!) in the next post.

## Guest post by Miklos Racz: Estimating the dimension of a random geometric graph on a high-dimensional sphere

Following the previous post in which we studied community detection, in this post we study the fundamental limits of inferring geometric structure in networks. Many networks coming from physical considerations naturally have an underlying geometry, such as the network of major roads in a country. In other networks this stems from a latent feature space of the nodes. For instance, in social networks a person might be represented by a feature vector of their interests, and two people are connected if their interests are close enough; this latent metric space is referred to as the social space. We are particularly interested in the high-dimensional regime, which brings about a host of new questions, such as estimating the dimension.

A simple random geometric graph model and basic questions

We study perhaps the simplest model of a random geometric graph, where the underlying metric space is the unit sphere $S^{d-1} \subset \mathbb{R}^d$, and where the latent labels of the nodes are i.i.d. uniform random vectors in $S^{d-1}$. More precisely, the random geometric graph $G(n, p, d)$ is defined as follows. Let $X_1, \dots, X_n$ be independent random vectors, uniformly distributed on $S^{d-1}$. In $G(n, p, d)$, distinct nodes $i$ and $j$ are connected by an edge if and only if $\langle X_i, X_j \rangle \ge t_{p,d}$, where the threshold value $t_{p,d}$ is such that $\mathbb{P}\left( \langle X_1, X_2 \rangle \ge t_{p,d} \right) = p$. For example, when $p = 1/2$ we have $t_{p,d} = 0$.

The most natural random graph model without any structure is the standard Erdos-Renyi random graph $G(n, p)$, where any two of the $n$ vertices are independently connected with probability $p$.

We can thus formalize the question of detecting underlying geometry as a simple hypothesis testing question. The null hypothesis is that the graph is drawn from the Erdos-Renyi model, while the alternative is that it is drawn from . In brief:

(1)

To understand this question, the basic quantity we need to study is the total variation distance between the two distributions on graphs, $G(n, p)$ and $G(n, p, d)$; recall that the total variation distance between two probability measures $P$ and $Q$ is defined as $\mathrm{TV}(P, Q) = \sup_{A} |P(A) - Q(A)|$. We are interested in particular in the case when the dimension $d$ is large, growing with $n$.

It is intuitively clear that if the geometry is too high-dimensional, then it is impossible to detect it, while a low-dimensional geometry will have a strong effect on the generated graph and will be detectable. How fast can the dimension $d$ grow with $n$ while the geometry still remains detectable? Most of this post will focus on this question.

If we can detect geometry, then it is natural to ask for more information. Perhaps the ultimate goal would be to find an embedding of the vertices into an appropriate dimensional sphere that is a true representation, in the sense that the geometric graph formed from the embedded points is indeed the original graph. More modestly, can the dimension be estimated? We touch on this question at the end of the post.

The dimension threshold for detecting underlying geometry

The high-dimensional setting of the random geometric graph was first studied by Devroye, Gyorgy, Lugosi, and Udina, who showed that geometry is indeed lost in high dimensions: if $n$ is fixed and $d \to \infty$, then . More precisely, they show that this convergence happens when , but this is not tight. The dimension threshold for dense graphs was recently found by Bubeck, Ding, Eldan, and Racz, and it turns out that it is $n^3$, in the following sense.

Theorem [Bubeck, Ding, Eldan, and Racz 2014]

Let $p \in (0, 1)$ be fixed. Then

(2) $\mathrm{TV}\left( G(n, p), G(n, p, d) \right) \to 0$ if $d / n^3 \to \infty$,

(3) $\mathrm{TV}\left( G(n, p), G(n, p, d) \right) \to 1$ if $d / n^3 \to 0$.

Moreover, in the latter case there exists a computationally efficient test to detect underlying geometry (with polynomial running time).

Most of this post is devoted to understanding (3), that is, how the two models can be distinguished; the impossibility result of (2) will be discussed in a future post. At the end we will also consider this same question for sparse graphs (where ), where determining the dimension threshold is an intriguing open problem.

The triangle test

A natural test to uncover geometric structure is to count the number of triangles in . Indeed, in a purely random scenario, vertex being connected to both and says nothing about whether and are connected. On the other hand, in a geometric setting this implies that and are close to each other due to the triangle inequality, thus increasing the probability of a connection between them. This, in turn, implies that the expected number of triangles is larger in the geometric setting, given the same edge density. Let us now compute what this statistic gives us.

Given that a vertex is connected to both of two other vertices, those two vertices are more likely to be connected under the geometric model than under the Erdos-Renyi model.

For a graph , let denote its adjacency matrix. Then

is the indicator variable that three vertices , , and form a triangle, and so the number of triangles in is

By linearity of expectation, for both models the expected number of triangles is $\binom{n}{3}$ times the probability of a triangle between three specific vertices. For the Erdos-Renyi random graph the edges are independent, so the probability of a triangle is $p^3$, and thus we have

For it turns out that for any fixed we have

(4)

for some constant , which gives that

Showing (4) is somewhat involved, but in essence it follows from the concentration of measure phenomenon on the sphere, namely that most of the mass on the high-dimensional sphere is located in a band of width on the order of $1/\sqrt{d}$ around the equator. We sketch here the main intuition for $p = 1/2$, which is illustrated in the figure below.

Let $X_1$, $X_2$, and $X_3$ be independent uniformly distributed points in $S^{d-1}$. Then

where the last equality follows by independence. So what remains is to show that this latter conditional probability is approximately . To compute this conditional probability what we really need to know is the typical angle between $X_1$ and $X_2$. By rotational invariance we may assume that $X_2 = (1, 0, \dots, 0)$, and hence $\langle X_1, X_2 \rangle = X_1(1)$, the first coordinate of $X_1$. One way to generate $X_1$ is to sample a $d$-dimensional standard Gaussian and then normalize it by its length. Since the norm of a $d$-dimensional standard Gaussian is very well concentrated around $\sqrt{d}$, it follows that $X_1(1)$ is on the order of $1/\sqrt{d}$. Conditioned on such a typical value, this slight correlation gives the boost in the conditional probability that we see.
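The $1/\sqrt{d}$ scale is easy to check numerically; a small sketch (parameters and names are mine):

```python
import numpy as np

rng = np.random.default_rng(2)
d, trials = 1000, 2000

# Uniform points on the sphere: normalize standard Gaussian vectors.
u = rng.standard_normal((trials, d))
u /= np.linalg.norm(u, axis=1, keepdims=True)
v = rng.standard_normal((trials, d))
v /= np.linalg.norm(v, axis=1, keepdims=True)

# Inner products of independent uniform points: standard deviation
# approximately 1/sqrt(d).
dots = (u * v).sum(axis=1)
print(np.std(dots) * np.sqrt(d))  # close to 1
```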

If $X_1$ and $X_2$ are two independent uniform points on $S^{d-1}$, then their inner product is on the order of $1/\sqrt{d}$ due to the concentration of measure phenomenon on the sphere.

Thus we see that the boost in the number of triangles in the geometric setting is in expectation:

To be able to tell apart the two graph distributions based on the number of triangles, the boost in expectation needs to be much greater than the standard deviation. A simple calculation shows that

and also

Thus we see that the triangle statistic succeeds if the boost $n^3 / \sqrt{d}$ is much larger than the standard deviation $n^2$, which is equivalent to $d \ll n^2$.
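A toy simulation comparing triangle counts in the two models for a very low dimension (a sketch; the parameters are mine and chosen just to make the gap visible):

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, p, trials = 40, 3, 0.5, 200

def triangle_count(A):
    # Number of triangles = trace(A^3) / 6 for a 0/1 adjacency matrix.
    return np.trace(A @ A @ A) / 6

er, geo = [], []
for _ in range(trials):
    # Erdos-Renyi G(n, 1/2):
    U = np.triu(rng.random((n, n)) < p, k=1).astype(float)
    er.append(triangle_count(U + U.T))
    # Geometric graph G(n, 1/2, d): connect when the inner product is >= 0.
    X = rng.standard_normal((n, d))
    A = (X @ X.T >= 0).astype(float)
    np.fill_diagonal(A, 0)
    geo.append(triangle_count(A))

print(np.mean(er), np.mean(geo))  # the geometric model has more triangles
```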

Signed triangles are more powerful

While triangles detect geometry up until $d \approx n^2$, are there even more powerful statistics that detect geometry for larger dimensions? One can check that longer cycles also only work when $d \ll n^2$, as do several other natural statistics. Yet the underlying geometry can be detected even when $d \gg n^2$.

The simple idea that leads to this improvement is to consider signed triangles. We have already noticed that triangles are more likely in the geometric setting than in the purely random setting. This also means that induced wedges (i.e., when there are exactly two edges among the three possible ones) are less likely in the geometric setting. Similarly, induced single edges are more likely, and induced independent sets on three vertices are less likely in the geometric setting. The following figure summarizes these observations.

The signed triangles statistic incorporates these observations by giving the different patterns positive or negative weights. More precisely, we define

The key insight motivating this definition is that the variance of signed triangles is much smaller than the variance of triangles, due to the cancellations introduced by the centering of the adjacency matrix: the covariance term coming from pairs of triples sharing an edge, of order $n^4$, vanishes, leaving only the order $n^3$ term. It is a simple exercise to show that

and

On the other hand it can be shown that

(5)

so the gap between the expectations remains. Furthermore, it can also be shown that the variance also decreases for and we have

Putting everything together we get that the signed triangles statistic succeeds if the boost $n^3 / \sqrt{d}$ is much larger than the standard deviation $n^{3/2}$, which is equivalent to $d \ll n^3$. This concludes the proof of (3) from Theorem 1.
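A sketch of the signed triangles statistic itself in Python (the centering convention follows the description above; implementation details are mine):

```python
import numpy as np

def signed_triangles(A, p):
    # Sum over vertex triples of (A_ij - p)(A_jk - p)(A_ki - p);
    # with B the centered adjacency matrix (diagonal set to 0),
    # this equals trace(B^3) / 6.
    B = A - p
    np.fill_diagonal(B, 0.0)
    return np.trace(B @ B @ B) / 6

# Triangle on 3 vertices with p = 1/2: each centered edge weight is 1/2,
# so the statistic is (1/2)^3 = 0.125.
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
print(signed_triangles(A, 0.5))  # 0.125
```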

Estimating the dimension

Until now we discussed detecting geometry. However, the insights gained above allow us to also touch upon the more subtle problem of estimating the underlying dimension .

Dimension estimation can also be done by counting the “number” of signed triangles as above. However, here it is necessary to have a bound on the difference of the expected number of signed triangles between consecutive dimensions; the lower bound on in (5) is not enough. Still, we believe that the lower bound should give the true value of the expected value for an appropriate constant , and hence we expect to have that

(6)

Thus, using the variance bound from above, we get that dimension estimation should be possible using signed triangles whenever $d \ll n$. Showing (6) for general seems involved; Bubeck et al. showed that it holds for , which can be considered as a proof of concept.

Theorem [Bubeck, Ding, Eldan, and Racz 2014]

There exists a universal constant such that for all integers and , one has

This result is tight, as demonstrated by a result of Eldan, which implies that $G(n, 1/2, d)$ and $G(n, 1/2, d+1)$ are indistinguishable when $d$ is of larger order than $n$.

The mysterious sparse regime

We conclude this post by discussing an intriguing conjecture for sparse graphs. It is again natural to consider the number of triangles as a way to distinguish between and . Bubeck et al. show that this statistic works whenever $d \ll \log^3 n$, and conjecture that this is tight.

Conjecture [Bubeck, Ding, Eldan, and Racz 2014]

Let $c > 0$ be fixed and assume $p = c/n$. Then $\mathrm{TV}\left( G(n, p), G(n, p, d) \right) \to 0$ if $d / \log^3 n \to \infty$, and $\mathrm{TV}\left( G(n, p), G(n, p, d) \right) \to 1$ if $d / \log^3 n \to 0$.

The main reason for this conjecture is that, when $d \gg \log^3 n$, the two models seem to be locally equivalent; in particular, they both have the same Poisson number of triangles asymptotically. Thus the only way to distinguish between them would be to find an emergent global property which is significantly different under the two models, but this seems unlikely to exist. Proving or disproving this conjecture remains a challenging open problem. The best known bound is $d \gg n^3$ from (2) (which holds uniformly over $p$), which is very far from $\log^3 n$!