Lecture 2. Basics / law of small numbers

Due to scheduling considerations, we postpone the proof of the entropic central limit theorem. In this lecture, we discuss basic properties of the entropy and illustrate them by proving a simple version of the law of small numbers (Poisson limit theorem). The next lecture will be devoted to Sanov’s theorem. We will return to the entropic central limit theorem in Lecture 4.

Conditional entropy and mutual information

We begin by introducing two definitions related to entropy. The first definition is a notion of entropy under conditioning.

Definition. If X and Y are two discrete random variables with probability mass functions p_X and p_Y, then the conditional entropy of X given Y is defined as

    \[H(X|Y) := - \mathbb{E} [\log{p_{X|Y}(X|Y)} ]\]

where p_{X|Y}(x|y) = p_{(X,Y)}(x,y)/p_Y(y) is the conditional probability mass function of X given Y.

Remark. If X and Y are absolutely continuous random variables, the conditional differential entropy h(X|Y) is defined analogously (where the probability mass functions are replaced by the corresponding probability densities with respect to Lebesgue measure).

Note that

    \begin{equation*} \begin{split}  H(X|Y)  &=  - \sum_{x,y} p_{(X,Y)}(x,y)\log{p_{X|Y}(x|y)} \\          &= - \sum_y p_Y(y) \sum_x  p_{X|Y}(x|y)\log{p_{X|Y}(x|y)} \\          & = \sum_y p_Y(y) H(X|Y=y). \end{split} \end{equation*}

That is, the conditional entropy H(X|Y) is precisely the expectation (with respect to the law of Y) of the entropy of the conditional distribution of X given Y.
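To make the decomposition concrete, here is a minimal numerical sketch (in Python with numpy; the joint pmf is a made-up example) computing H(X|Y) both from the definition and from the decomposition above.

    import numpy as np

    # Made-up joint pmf p_{(X,Y)} on a 3 x 2 alphabet (rows: values of X, columns: values of Y).
    p_xy = np.array([[0.10, 0.20],
                     [0.25, 0.05],
                     [0.15, 0.25]])
    p_y = p_xy.sum(axis=0)                    # marginal pmf of Y

    # Definition: H(X|Y) = -E[log p_{X|Y}(X|Y)].
    p_x_given_y = p_xy / p_y                  # column j is the conditional pmf of X given Y=j
    H_def = -(p_xy * np.log(p_x_given_y)).sum()

    # Decomposition: H(X|Y) = sum_y p_Y(y) H(X|Y=y).
    H_dec = sum(p_y[j] * -(p_x_given_y[:, j] * np.log(p_x_given_y[:, j])).sum()
                for j in range(len(p_y)))

    assert np.isclose(H_def, H_dec)
    print(H_def)                              # conditional entropy in nats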

We now turn to the second definition, the mutual information. It describes the degree of dependence between two random variables.

Definition. The mutual information between two random variables X and Y is defined as

    \[I(X,Y) := D( \mathcal{L}(X,Y) || \mathcal{L}(X) \otimes \mathcal{L}(Y)),\]

where \mathcal{L}(X,Y), \mathcal{L}(X) and \mathcal{L}(Y) denote the distributions of (X,Y), X and Y, respectively.

Conditional entropy and mutual information are closely related. For example, if (X,Y) has density f_{(X,Y)} with respect to Lebesgue measure, then

    \begin{equation*} \begin{split} I(X,Y) & = \int f_{(X,Y)}(x,y) \log{\frac{f_{(X,Y)}(x,y)}{f_X(x)f_Y(y)}} \,dx \,dy \\        & = \mathbb{E} \log{\frac{f_{(X,Y)}(X,Y)}{f_X(X)f_Y(Y)}} \\        & = \mathbb{E} \log{\frac{f_{X|Y}(X|Y)}{f_X(X)}}  \\        & = h(X)-h(X|Y). \end{split} \end{equation*}

In particular, since I(X,Y) is always nonnegative (because it is a relative entropy), we have just shown that h(X|Y) \leq h(X), that is, conditioning reduces entropy. The same result holds for discrete random variables when we replace h by H.
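In the discrete case the corresponding identity I(X,Y)=H(X)-H(X|Y) is easy to check numerically; here is a small sketch (reusing the made-up joint pmf from the previous snippet) that computes I(X,Y) directly as a relative entropy and via the entropy difference.

    import numpy as np

    p_xy = np.array([[0.10, 0.20],
                     [0.25, 0.05],
                     [0.15, 0.25]])
    p_x, p_y = p_xy.sum(axis=1), p_xy.sum(axis=0)

    # I(X,Y) = D( L(X,Y) || L(X) x L(Y) )
    I = (p_xy * np.log(p_xy / np.outer(p_x, p_y))).sum()

    # I(X,Y) = H(X) - H(X|Y)
    H_x = -(p_x * np.log(p_x)).sum()
    H_x_given_y = -(p_xy * np.log(p_xy / p_y)).sum()

    assert I >= 0 and np.isclose(I, H_x - H_x_given_y)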

Chain rules

Chain rules are formulas that relate the entropy of multiple random variables to the conditional entropies of these random variables. The most basic version is the following.

Chain rule for entropy. H(X_1, X_2, ..., X_n) = \sum_{i=1}^n H(X_i|X_1,...,X_{i-1}). In particular, H(X_2|X_1)=H(X_1, X_2)-H(X_1).

Proof. Note that

    \[p_{(X_1,...,X_n)}(x_1,...,x_n) = \prod_{i=1}^n p_{X_i|X_1,...,X_{i-1}}(x_i|x_1,...,x_{i-1}).\]

Thus,

    \[\log{ p_{(X_1,...,X_n)}(x_1,...,x_n)} = \sum_{i=1}^n \log{p_{X_i|X_1,...,X_{i-1}}(x_i|x_1,...,x_{i-1})}.\]

Taking expectations on both sides with (x_1,...,x_n) \sim (X_1,...,X_n) gives the desired result. \qquad\square

Corollary. Entropy is sub-additive, that is, H(X_1, X_2, ..., X_n) \leq \sum_{i=1}^n H(X_i).

Proof. Combine the chain rule with H(X_i|X_1,...,X_{i-1}) \leq H(X_i). \qquad\square

There is also a chain rule for relative entropy.

Chain rule for relative entropy.

    \[D(\mathcal{L}(X,Y) || \mathcal{L}(X^{'},Y^{'})) =  D(\mathcal{L}(X)||\mathcal{L}(X^{'})) + \mathbb{E}_{x \sim X} [ D(\mathcal{L}(Y|X=x) || \mathcal{L}(Y^{'}|X^{'}=x))].\]

The following identity will be useful later.

Lemma.

    \begin{multline*}D(\mathcal{L}(X_1,...,X_n) || \mathcal{L}(Y_1) \otimes \cdots   \otimes  \mathcal{L}(Y_n)) = \\  \sum_{i=1}^n  D(\mathcal{L}(X_i) || \mathcal{L}(Y_i)) + D(\mathcal{L}(X_1,...,X_n) || \mathcal{L}(X_1) \otimes \cdots  \otimes  \mathcal{L}(X_n)).\end{multline*}

Proof. Note that

    \begin{equation*} \begin{split} & D(\mathcal{L}(X_1,...,X_n) || \mathcal{L}(Y_1) \otimes \cdots   \otimes  \mathcal{L}(Y_n)) \\ & =  \mathbb{E} \log{\frac{p_{(X_1,...,X_n)}(X_1,...,X_n)}{p_{Y_1}(X_1)\cdots p_{Y_n}(X_n)}} \\ & =  \mathbb{E} \log{\frac{p_{(X_1,...,X_n)}(X_1,...,X_n)}{p_{X_1}(X_1)\cdots p_{X_n}(X_n)}}        + \sum_{i=1}^n \mathbb{E}\log{\frac{p_{X_i}(X_i)}{p_{Y_i}(X_i)}} \\  & = D(\mathcal{L}(X_1,...,X_n) || \mathcal{L}(X_1) \otimes \cdots  \otimes  \mathcal{L}(X_n)) + \sum_{i=1}^n  D(\mathcal{L}(X_i) || \mathcal{L}(Y_i)) . \qquad\square \end{split} \end{equation*}

Data processing and convexity

Two important properties of the relative entropy can be obtained as consequences of the chain rule.

Data processing inequality. Let P and Q be two probability measures on \mathcal{A} and suppose T:\mathcal{A} \rightarrow \mathcal{A}^{'} is measurable. Then D(PT^{-1}||QT^{-1}) \leq D(P||Q), where PT^{-1} is the distribution of T(X) when X \sim P.

The data processing inequality tells us that if we process the data X (which might come from one of the two distributions P and Q), then the relative entropy decreases. In other words, it becomes harder to identify the source distribution after processing the data. The same result (with the same proof) holds also if P and Q are transformed by a transition kernel, rather than by a function.

Proof. Denote by \mathsf{P} and \mathsf{Q} the joint laws of (X,T(X)) and (Y,T(Y)) when X\sim P and Y\sim Q. By the chain rule and nonnegativity of relative entropy

    \[D(PT^{-1}||QT^{-1}) = D(\mathsf{P}||\mathsf{Q}) -    \mathbb{E}_{t \sim PT^{-1}} [ D(\mathcal{L}(X|T(X)=t) || \mathcal{L}(Y|T(Y)=t))] \le D(\mathsf{P}||\mathsf{Q}).\]

On the other hand, using again the chain rule,

    \[D(\mathsf{P}||\mathsf{Q}) = D(P||Q) + \mathbb{E}_{x\sim P} [ D(\mathcal{L}(T(X)|X=x) || \mathcal{L}(T(Y)|Y=x))] =  D(P||Q),\]

where we used \mathcal{L}(T(X)|X=x) = \mathcal{L}(T(Y)|Y=x). Putting these together completes the proof. \qquad\square
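As a quick numerical sanity check of the data processing inequality, the following sketch (made-up distributions on a three-letter alphabet; T merges the last two symbols) verifies that merging symbols can only decrease the relative entropy.

    import numpy as np

    def kl(p, q):
        """Relative entropy D(P||Q) in nats for pmfs with full support."""
        return (p * np.log(p / q)).sum()

    P = np.array([0.5, 0.3, 0.2])
    Q = np.array([0.2, 0.3, 0.5])

    # T : {0,1,2} -> {0,1} with T(0)=0, T(1)=T(2)=1; PT^{-1} is the pushforward of P.
    PT = np.array([P[0], P[1] + P[2]])
    QT = np.array([Q[0], Q[1] + Q[2]])

    assert kl(PT, QT) <= kl(P, Q)
    print(kl(P, Q), kl(PT, QT))    # ~0.275 vs ~0.223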

Convexity of relative entropy. D(\cdot || \cdot) is jointly convex in its arguments, that is, if P_1, P_2, Q_1, Q_2 are probability measures and 0\leq \lambda \leq 1, then

    \[D(\lambda P_1 + (1-\lambda)P_2 || \lambda Q_1 + (1-\lambda)Q_2 ) \leq \lambda D(P_1 || Q_1) + (1-\lambda)D(P_2||Q_2).\]

Proof. Let T be a random variable that takes value 1 with probability \lambda and 2 with probability 1-\lambda. Conditionally on T=i, draw X\sim P_i and Y\sim Q_i. Then \mathcal{L}(X)=\lambda P_1+(1-\lambda)P_2 and \mathcal{L}(Y)=\lambda Q_1+(1-\lambda)Q_2. Using the chain rule twice, we obtain

    \[D(\mathcal{L}(X)||\mathcal{L}(Y)) \le    D(\mathcal{L}(X,T)||\mathcal{L}(Y,T)) =    \mathbb{E}_{t\sim \mathcal{L}(T)}[D(\mathcal{L}(X|T=t)||\mathcal{L}(Y|T=t))],\]

and the right hand side is precisely \lambda D(P_1 || Q_1) + (1-\lambda)D(P_2||Q_2). \qquad\square

Corollary. The entropy function H is concave.

Proof for a finite alphabet. When the alphabet \mathcal{A} is finite, the corollary can be proven by noting that H(P)=\log{|\mathcal{A}|} - D(P||\mathrm{Unif}(\mathcal{A})). \qquad\square

Relative entropy and total variation distance

Consider the hypothesis testing problem of testing the null hypothesis H_0: X_1,\ldots,X_n \sim P against the alternative hypothesis H_1: X_1,\ldots,X_n \sim Q on the basis of n i.i.d. observations. A test is a measurable function T:\mathcal{A}^n \rightarrow \{0,1\}. Under the constraint P^{\otimes n}(T=1) \leq \alpha, it can be shown (Stein's lemma) that the optimal rate of decay of the error probability Q^{\otimes n}(T=0) as a function of the sample size n is of the order of \exp{(-n\cdot D(P||Q))}. In this sense, D(P||Q) measures how well one can distinguish between Q and P on the basis of data.

We will not prove this fact, but only mention it to motivate that the relative entropy D is, in some sense, a measure of distance between probability measures. However, it is not a metric: D(P||Q) \neq D(Q||P) in general, and the triangle inequality does not hold. So in what sense does the relative entropy represent a distance? In fact, it controls several bona fide metrics on the space of probability measures. One example of such a metric is the total variation distance.

Definition. Let P and Q be probability measures on \mathcal{A}. The total variation distance is defined as d_{\text{TV}}(P,Q)=\sup_{A \in \mathcal{B}(\mathcal{A})} |P(A)-Q(A)|.

The following are some simple facts about the total variation distance.

  1. 0 \leq d_{\text{TV}}(P,Q) \leq 1.
  2. If P and Q have probability density functions p and q with respect to some common probability measure \lambda, then d_{\text{TV}}(P,Q)= \frac{1}{2}||p-q||_{L^{1}(\lambda)}. To see this, define A=\{x\in \mathcal{A}, p(x)>q(x) \}. Then

        \begin{equation*} \begin{split} ||p-q||_{L^{1}(\lambda)} & =  \int_{A}(p(x)-q(x))\lambda(dx) + \int_{A^c}(q(x)-p(x))\lambda(dx)  \\  & = P(A) - Q(A) + (1-Q(A)-1+P(A)) \\  & = 2(P(A)-Q(A)) = 2 d_{\text{TV}}(P,Q)  \\ \end{split} \end{equation*}

     The last equality uses that the supremum in the definition of d_{\text{TV}} is attained at A: for any measurable B, P(B)-Q(B)=\int_B (p-q)\,d\lambda \leq \int_{B\cap A}(p-q)\,d\lambda \leq P(A)-Q(A).

  3. d_{\text{TV}}(P,Q)= \inf_{X \sim P, Y \sim Q}  \mathbb{P}(X\neq Y).

The following inequality shows that the total variation distance is controlled by the relative entropy. In this sense, the relative entropy is a strong notion of distance.

Pinsker’s inequality. d_{\text{TV}}(P,Q)^2 \leq \frac{1}{2} D(P||Q).

Proof. Without loss of generality, we may assume that P and Q have probability density functions p and q with respect to some common probability measure \lambda on \mathcal{A}. Let A=\{x\in \mathcal{A} : p(x)>q(x) \} and T=1_{A}.

Step 1: Prove the inequality by a direct calculation in the case where \mathcal{A} contains at most 2 elements.

Step 2: Note that PT^{-1} and QT^{-1} are defined on the space \{0,1\}. So Pinsker’s inequality applies to PT^{-1} and QT^{-1}. Thus,

    \begin{equation*} \begin{split} D(P||Q) & \geq  D(PT^{-1}||QT^{-1}) \geq 2 d_{\text{TV}}(PT^{-1},QT^{-1})^2   \\  & = 2(P(A)-Q(A))^2 = 2 d_{\text{TV}}(P,Q)^2. \qquad\square \end{split} \end{equation*}
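A minimal numerical check of Pinsker's inequality on randomly drawn pmfs (Python; d_{\text{TV}} is computed via the identity d_{\text{TV}} = \frac{1}{2}||p-q||_{L^1} from the previous section):

    import numpy as np

    rng = np.random.default_rng(0)

    for _ in range(10000):
        p = rng.dirichlet(np.ones(5))        # random pmf on a 5-letter alphabet
        q = rng.dirichlet(np.ones(5))
        d_tv = 0.5 * np.abs(p - q).sum()
        D = (p * np.log(p / q)).sum()
        assert d_tv**2 <= 0.5 * D + 1e-12    # Pinsker's inequality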

Law of small numbers

As a first illustration of an application of entropy to probability, let us prove a simple quantitative law of small numbers. An example of the law of small numbers is the well known fact that Bin(n,\frac{\lambda}{n})  \rightarrow Po(\lambda) in distribution as n goes to infinity. More generally, if X_1,...,X_n are Bernoulli random variables with X_i \sim Bern(p_i), if X_1,...,X_n are weakly dependent, and if none of the p_i dominates the rest, then \mathcal{L}(\sum_{i=1}^n X_i) \approx Po(\lambda) where \lambda = \sum_{i=1}^{n} p_i. This idea can be quantified easily using relative entropy.

Theorem. If X_i \sim Bern(p_i) and X_1,...,X_n may be dependent, then

    \[D(\mathcal{L}(\bar X) || Po(\lambda)) \leq \sum_{i=1}^n p_i^2 + D(\mathcal{L}(X_1,...,X_n) || \mathcal{L}(X_1) \otimes \cdots \otimes \mathcal{L}(X_n) )\]

where \bar X = \sum_{i=1}^nX_i and \lambda = \sum_{i=1}^n p_i.

Proof. Let Z_1,...,Z_n be independent random variables with Z_i \sim Po(p_i). Then \bar Z = \sum_{i=1}^n Z_i \sim Po(\lambda). We have

    \begin{equation*} \begin{split} D(\mathcal{L}(\bar X) || Po(\lambda) )  & =  D(\mathcal{L}(\bar X) || \mathcal{L}(\bar Z)) \\  & \leq   D(\mathcal{L}(X_1,...,X_n) || \mathcal{L}(Z_1,...,Z_n) )\\  & = \sum_{i=1}^{n} D(\mathcal{L}(X_i) || \mathcal{L}(Z_i) ) + D(\mathcal{L}(X_1,...,X_n) || \mathcal{L}(X_1) \otimes \cdots \otimes \mathcal{L}(X_n) ). \\ \end{split} \end{equation*}

To conclude, it is enough to note that

    \begin{equation*} \begin{split} D(\mathcal{L}(X_i) || \mathcal{L}(Z_i) )  & =  (1-p_i)\log{\frac{1-p_i}{e^{-p_i}}} + p_i\log{\frac{p_i}{p_i e^{-p_i}}}  \\  & =  p_i^2 + (1-p_i)(p_i+\log{(1-p_i)})  \\  & \leq p_i^2 .  \qquad\square \end{split} \end{equation*}

Remark. If p_1= \cdots = p_n = \frac{\lambda}{n} and X_1,...,X_n are independent, then the inequality in the theorem becomes D(Bin(n,\frac{\lambda}{n}) || Po(\lambda) ) \leq \frac{\lambda^2}{n}. However, this rate of convergence is not optimal: one can show that under the same conditions D(Bin(n,\frac{\lambda}{n}) || Po(\lambda) ) = o(\frac{1}{n}), using tools similar to those that will be used later to prove the entropic central limit theorem. Note that it is much harder to prove D(\mathcal{L}(S_n)|| \mathcal{N}(0,1)) \rightarrow 0 in the entropic central limit theorem, even without a rate of convergence!
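Since D(Bin(n,\frac{\lambda}{n}) || Po(\lambda)) is a finite sum, it can be evaluated exactly. The following sketch (using scipy for the pmfs, assumed available) compares it with the bound \frac{\lambda^2}{n} from the theorem; the printed values of n\cdot D are consistent with the o(\frac{1}{n}) rate mentioned in the remark.

    import numpy as np
    from scipy.stats import binom, poisson

    lam = 1.0
    for n in [10, 100, 1000]:
        k = np.arange(n + 1)
        p = binom.pmf(k, n, lam / n)      # pmf of Bin(n, lambda/n)
        q = poisson.pmf(k, lam)           # pmf of Po(lambda) on the same support
        mask = (p > 0) & (q > 0)          # guard against underflow in the far tail
        D = (p[mask] * np.log(p[mask] / q[mask])).sum()
        print(n, D, lam**2 / n, n * D)    # n*D -> 0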

Lecture by Mokshay Madiman | Scribed by Che-yu Liu

03. October 2013 by Ramon van Handel

Lecture 1. Introduction

What is information theory?

The first question that we want to address is: “What is information?” Although there are several ways in which we might think of answering this question, the main rationale behind our approach is to distinguish information from data. We think of information as something abstract that we want to convey, while we think of data as a representation of information, something that is storable/communicable. This is best understood by some examples.

Example. Information is a certain idea, while data is the words we use to describe this idea.

Example. Information is 10^{1,000,000,000}. Possible data describing this information are: 10^{1,000,000,000}, 10\cdots 0, 10^{10^9}, 1 followed by a billion zeros.

As we are in a mathematical setting we want to rely on a quantitative approach. The main question that arises naturally is: “How can we measure information?” Making sense of this question requires us to have a model for how data is produced. Throughout this seminar we will consider the probabilistic model which we now introduce.

Definition (Probabilistic model). Data is a random variable X taking values in the space A (alphabet) having distribution P (source distribution). We write X\sim P.

To be precise, with the above we mean that there exists a probability space (\Omega,\mathcal{F},\mathbb{P}) and a measurable space (A,\mathcal{G}) with some measurable function X:\Omega\rightarrow A such that we have \mathbb{P}\circ X^{-1} = P.

Remarks.

  1. While this set-up is very similar to what is done in statistics, the focus in information theory is different. In statistics it is assumed that the data X comes from one of a family of distributions (statistical model), and the goal is to infer something about the particular distribution generating the data. On the other hand, in information theory the distribution of the data X might be known or not, and the goal is to compress or communicate X.
  2. In the probabilistic model we assume that the data is generated by a certain random source. This is a particular modeling assumption and it is not necessarily an expression of belief in how data are actually produced. This is a reasonable modeling assumption to make and it allows us to draw reasonable conclusions (for example, text data is clearly not randomly produced, but you can still do useful things by making the modeling assumption that the data was produced by a stochastic source).
  3. The original motivation behind the development of information theory as based on the probabilistic model came from a practical engineering problem (how to compress/communicate data), and not from the idea of how we measure information (although this aspect was also part of the motivation). The whole field of study was created by the 1948 paper of Claude Shannon.
  4. We are going to use the probabilistic model throughout this seminar, but other models exist as well. A popular model used in theoretical computer science is the algorithmic model (which defines the field of algorithmic information theory, as opposed to probabilistic information theory). In this model it is assumed that data is the output of some computer program (running on a Turing machine). While this approach could be thought of as a generalization of the probabilistic model (in fact, one way in which computers can work is to simulate from some probability distribution), many of the basic quantities in algorithmic information theory (like Kolmogorov complexity) are not computable. This is the reason why this field is suitable for theoretical insights, but it is not necessarily suitable for practical purposes.

How do we measure information in the probabilistic model?

In what follows we assume that the information to be conveyed coincides with the data X itself (and we now assume that X takes values in some countable set A), meaning that there is no universal hidden meaning that we are trying to convey apart from the data itself. For example, assume that the information we want to convey (a particular realization of it) is the text “This is the SAS”. A natural way of measuring the amount of information contained in this data is to look for other representations of this information, and to look for the smallest (in some sense that needs to be specified) representation. As we are in the probabilistic framework, we do not know in advance which data is going to be produced by the random source, so we look for a procedure that takes the random outcome and gives us on average the smallest or most compact representation of the information in that data.

Since the data is random, also the size of a particular realization (encoded data) is random. One way to take into account the randomness is to consider a representation (encoding scheme) that minimizes the expected length/size of the encoded data (and that is uniquely decodable). That is, we measure the amount of information in a given data X as

    \[\min_{\text{valid encoding schemes}} \mathbb{E}_P [\text{length of encoded data}].\]

If we set up things in this way, the measure of information is some functional H of the source distribution P, since P is the only quantity governing the data. This functional H is called the entropy and it is defined as follows.

Definition (Entropy). If P is a discrete distribution, then the entropy of P is defined as

    \[H(P) := \sum_{x\in A} p(x) \log\frac{1}{p(x)} ,\]

where we write p(x)=P(\{x\}) for the probability mass function of P.

While it can be shown that H(P) is the minimal expected length of validly encoded data, we do not proceed this way (the ideas behind this fact are covered in the first couple of lectures of an information theory class). Instead, we will give some intuition on why H(P) is a good measure of information.

We first provide some intuition for why the information in the statement X=a should be decreasing as a function of P(X=a). Recall that presently we assume that the source distribution P is known. If we know P, how informative is a particular outcome x from the source distribution? If P is a point mass at a, being informed that the random outcome is a is not informative at all. On the other hand, if P(X=a)=10^{-10}, being informed that the outcome is a is extremely informative/significant (something very rare has happened).

The relevant decreasing function turns out to be the following:

    \[p(x) \longrightarrow \log\frac{1}{p(x)}.\]

In this respect, \log\frac{1}{p(x)} corresponds to the information that we get from the statement X=x. So the average amount of information in the random outcome X is given by

    \[\mathbb{E}_P \left[ \log\frac{1}{p(X)} \right] = \sum_{x\in A} p(x) \log\frac{1}{p(x)} 	= H(P).\]
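A minimal sketch (Python; log base 2, so entropy is measured in bits): a fair coin has entropy 1 bit, a biased coin has less, and the uniform distribution on 8 symbols has entropy 3 bits.

    import numpy as np

    def H(p):
        """Shannon entropy (in bits) of a pmf given as an array."""
        p = np.asarray(p, dtype=float)
        p = p[p > 0]                   # convention: terms with p(x)=0 contribute 0
        return (p * np.log2(1 / p)).sum()

    print(H([0.5, 0.5]))               # fair coin: 1.0
    print(H([0.9, 0.1]))               # biased coin: ~0.47
    print(H(np.ones(8) / 8))           # uniform on 8 symbols: 3.0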

Connection between information theory and statistics

While the connection between information theory and statistics is not surprising as both fields rely on the probabilistic model, this correspondence is very strong and natural. We give some examples.

  1. Maximum likelihood estimators (MLE) can be seen as minimal codelength estimators. In a statistical model we assume that X\sim P_{\theta}, with \theta\in \Theta for some parameter space \Theta, and the goal is to find the parameter \theta that generated the data. A popular estimator is the MLE since it is plausible to assume that the parameter that generated the data X is the parameter \tilde\theta whose corresponding distribution would have given maximal probability to X, that is,

        \[\hat\theta := \mathop{\mathrm{argmax}}_{\tilde\theta\in\Theta} P_{\tilde\theta}(X).\]

    Note that we can rewrite the above as

        \[\hat\theta = \mathop{\mathrm{argmin}}_{\tilde\theta\in\Theta} \log \frac{1}{P_{\tilde\theta}(X)},\]

    which can be seen to correspond to the minimal number of bits required to represent X assuming that it was generated by P_{\tilde\theta} (codelength). Hence the connection between the MLE in statistics and the minimal codelength estimator in information theory. In this setting we assume that we do not know the distribution generating the data and we try to find a good code to encode the data. The problem of finding a good code is in some sense equivalent to the problem of finding the distribution itself, since once you know the distribution you know the best code (in some sense). Also, we mention that many penalized-MLE estimators (where we take into account the complexity of the model by adding a penalty term to the likelihood) can be motivated from an information-theoretic point of view in terms of analogous coding problems; this is the idea behind the “Minimum Description Length” principle. (A small numerical illustration follows this list.)

  2. In hypothesis testing, the optimal error exponents are information-theoretic quantities.
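As a toy illustration of point 1 (a sketch with made-up Bernoulli data; the grid search merely stands in for a generic optimizer), minimizing the codelength \log\frac{1}{P_{\tilde\theta}(X)} over a grid recovers the MLE, which for Bernoulli data is the sample mean.

    import numpy as np

    X = np.array([1, 0, 1, 1, 0, 1, 1, 1, 0, 1])   # made-up i.i.d. Bernoulli data
    thetas = np.linspace(0.01, 0.99, 99)

    def codelength(theta):
        """log 1/P_theta(X) in nats: the codelength of X under Bernoulli(theta)."""
        k = X.sum()
        return -(k * np.log(theta) + (len(X) - k) * np.log(1 - theta))

    best = thetas[np.argmin([codelength(t) for t in thetas])]
    print(best, X.mean())                          # both ~0.7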

These are not just coincidental connections, but examples of basic relationships between fundamental limits of statistical inference on the one hand, and fundamental limits of communication and compression of data on the other hand.

We now turn to the main topic of this seminar, that is, the connection between information theory and probability theory.

Connection between information theory and probability theory

Since we are using a probabilistic model it is clear that probability theory is the language of information theory. However, it is not so obvious that information theory can say something fundamental about probability theory. In fact, in the past half century or so, it has been realized that information theory captures many fundamental concepts in probability theory. Before turning to one key example of such connection (the entropic central limit theorem) which will serve as motivation for the initial few lectures of the seminar, we introduce some relevant quantities.

Definition (Differential or Boltzmann-Shannon entropy). If X\in\mathbb{R}^n, X\sim P and \frac{dP}{d \text{Leb}}=f (i.e., X has a density f with respect to the Lebesgue measure), then the differential entropy of P (equivalently, differential entropy of f) is defined as

    \[h(P) := h(f) := - \int_{\mathbb{R}^n} f(x) \log f(x) dx,\]

with the conventions 0\log 0 = 0 and dx = \text{Leb}(dx).

While we can think of h as a measure of disorder (particularly motivated by the setting introduced by Boltzmann in physics), h is not a measure of information in the same sense as H is. The reason is that in the present context of “continuous” data (recall that we are in \mathbb{R}^n and a possible outcome of X is a real number) we need infinitely many bits to encode each outcome of X, so it is not meaningful to talk of the amount of information in an outcome as this is generally infinity. Nonetheless, the differential entropy represents a crucial quantity in information theory, and shows up for example both when considering communication over channels with continuous noise distributions, and when considering lossy data compression (the only kind of data compression possible with sources like \mathbb{R}^n, where one accepts some slight distortion of the data in order to be able to encode it with finitely many bits).

The notion that unifies the continuous entropy h with the discrete entropy H previously introduced is the relative entropy which we now define.

Definition (Relative entropy). If P is a probability measure and Q is a \sigma-finite measure on A, then the relative entropy between P and Q is defined as

    \[D(P || Q) :=  	\begin{cases} 		\int f \log f \, dQ	&\text{if $P\ll Q$ with $\frac{dP}{dQ}=f$},\\ 		\infty		&\text{otherwise}. 	\end{cases}\]

Typically P and Q have respective densities p and q with respect to a given reference measure \lambda. Then the relative entropy reads

    \[D(P || Q) = \int p(x) \log \frac{p(x)}{q(x)} \,\lambda(dx).\]

The following examples show how the relative entropy relates h and H.

  1. If A is a countable set and \lambda is the counting measure, then

        \[D(P || \lambda) = \sum_{x\in A} p(x) \log p(x) = - H(P).\]

  2. If A=\mathbb{R}^n and \lambda is the Lebesgue measure, then

        \[D(P || \lambda) = \int p(x) \log p(x) \, dx = - h(P).\]

The following property of relative entropy is the most important inequality in information theory.

Lemma. Let P be a probability measure on A, and Q be a sub-probability measure on A (i.e., Q is a nonnegative, countably additive measure with 0<Q(A)\le 1). Then D(P || Q) \ge 0.

Proof. We only need to consider the case where P\ll Q. Let f=\frac{dP}{dQ} and R=Q/Q(A). Then we have

    \begin{align*} 	D(P || Q)  	&= \int f(x) \log f(x) \,Q(dx) 	= Q(A)\,\mathbb{E}_R\left[f(X)\log f(X)\right]\\ 	&\ge Q(A)\, \mathbb{E}_R[f(X)]\log\mathbb{E}_R[f(X)] = \log\frac{1}{Q(A)} \ge 0, \end{align*}

where we have applied Jensen’s inequality (which holds as R is a probability measure) using that x\mapsto x\log x is convex, and used that \mathbb{E}_R[f(X)]=P(A)/Q(A)=1/Q(A) and that Q(A)\le 1. \square

As a consequence of this result we can now show that the Gaussian distribution maximizes the entropy under constraints on the first two moments.

Lemma. Let \mathcal{P}_{\mu,\sigma^2} be the class of all probability densities on \mathbb{R} (with respect to Lebesgue measure) with mean \mu and variance \sigma^2 and define

    \[g_{\mu,\sigma^2}(x) := \frac{1}{\sqrt{2\pi\sigma^2}} \, e^{-\frac{(x-\mu)^2}{2\sigma^2}}.\]

Then h(g_{\mu,\sigma^2}) \ge h(f) for any f\in\mathcal{P}_{\mu,\sigma^2}.

Proof. First of all note that

    \begin{align*} 	\int g_{\mu,\sigma^2}(x) \log g_{\mu,\sigma^2}(x) \, dx  	= \int f(x) \log g_{\mu,\sigma^2}(x) \, dx  \end{align*}

as \log g_{\mu,\sigma^2} is a quadratic function and, consequently, only the first two moments are involved in computing its expectation. Hence, we have

    \begin{align*} 	h(g_{\mu,\sigma^2}) - h(f)  	&= - \int g_{\mu,\sigma^2}(x) \log g_{\mu,\sigma^2}(x) \, dx + \int f(x) \log f(x) \, dx\\ 	&= \int f(x) \log \frac{f(x)}{g_{\mu,\sigma^2}(x)} \,dx = D(f||g_{\mu,\sigma^2}) \ge 0. \qquad\square \end{align*}
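A numerical sketch of the lemma (Python, with entropies computed by a Riemann sum on a grid): among three densities with mean 0 and variance 1 (Gaussian, uniform on [-\sqrt{3},\sqrt{3}], Laplace with scale 1/\sqrt{2}), the Gaussian has the largest differential entropy.

    import numpy as np

    dx = 1e-4
    x = np.arange(-20, 20, dx)

    def h(f):
        """Differential entropy (nats) by Riemann sum; terms with f(x)=0 contribute 0."""
        mask = f > 0
        return -(f[mask] * np.log(f[mask])).sum() * dx

    gauss = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
    unif = np.where(np.abs(x) <= np.sqrt(3), 1 / (2 * np.sqrt(3)), 0.0)
    lapl = np.exp(-np.sqrt(2) * np.abs(x)) / np.sqrt(2)   # variance 1

    print(h(gauss))   # 0.5*log(2*pi*e) ~ 1.4189, the maximum
    print(h(unif))    # log(2*sqrt(3)) ~ 1.2425
    print(h(lapl))    # 1 + log(sqrt(2)) ~ 1.3466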

We are now ready to present the first example of cross-fertilization where information-theoretic concepts can be used to capture fundamental properties in probability theory. Let us first recall the classical central limit theorem (CLT).

Theorem (CLT). If X_1,X_2,\ldots are i.i.d. real-valued random variables with mean 0 and variance 1, then

    \[S_n=\frac{1}{\sqrt{n}} \sum_{i=1}^n X_i \stackrel{\mathcal{D}}{\longrightarrow} N(0,1),\]

that is,

    \[\mathbb{P}\{S_n\in A\} \longrightarrow \frac{1}{\sqrt{2 \pi}} \int_A e^{-\frac{x^2}{2}} \,dx\]

for nice enough sets A.

If we denote by f_{S_n} the density of the normalized partial sum S_n introduced in the statement of the theorem above, we note the following.

  1. For each n\ge 1 we have f_{S_n}\in\mathcal{P}_{0,1}. This follows immediately from basic properties of expected values.
  2. From the previous lemma it follows immediately that

        \[\mathop{\mathrm{argmax}}_{f\in \mathcal{P}_{0,1}} h(f) = g_{0,1}.\]

So, the CLT tells us that the sequence f_{S_1}, f_{S_2}, \ldots\in\mathcal{P}_{0,1} converges to the maximizer g_{0,1} of the entropy in \mathcal{P}_{0,1}. In fact, it turns out that the convergence in the central limit theorem can be studied in terms of the entropy and that the CLT is an expression of increasing entropy, as the following entropic central limit theorem describes.

Theorem (Entropic CLT). Let X_1,X_2,\ldots be i.i.d. real-valued random variables with mean 0 and variance 1, and assume the distribution of X_1 has a density (with respect to Lebesgue measure). Under minimal assumptions (specifically, that h(f_{S_n})>-\infty for some n), we have

    \[h(f_{S_n}) \uparrow h(g_{0,1}),\]

or, equivalently,

    \[D(f_{S_n}||g_{0,1}) \downarrow 0.\]

The entropic central limit theorem is remarkable as usually limit theorems do not come with an associated monotonicity statement. This suggests that the relative entropy is a natural tool to analyze the CLT.
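The monotone increase of entropy can be observed numerically. The sketch below (a grid illustration, not a proof) takes X_i uniform on [-\sqrt{3},\sqrt{3}] (mean 0, variance 1), computes the density of the n-fold sum by repeated convolution on a grid, rescales using h(aX)=h(X)+\log a, and prints h(f_{S_n}); the values increase toward h(g_{0,1})=\frac{1}{2}\log(2\pi e)\approx 1.4189.

    import numpy as np

    dx = 1e-3
    a = np.sqrt(3.0)
    f1 = np.full(int(2 * a / dx), 1 / (2 * a))   # density of one summand on a grid

    def h(f):
        mask = f > 0
        return -(f[mask] * np.log(f[mask])).sum() * dx

    f_sum = f1.copy()                            # density of X_1 + ... + X_n
    for n in range(1, 9):
        if n > 1:
            f_sum = np.convolve(f_sum, f1) * dx
        # S_n = n^{-1/2} (X_1 + ... + X_n), so h(f_{S_n}) = h(f_sum) - (1/2) log n
        print(n, h(f_sum) - 0.5 * np.log(n))

    print("h(g_{0,1}) =", 0.5 * np.log(2 * np.pi * np.e))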

Of course, a natural question that presents itself is whether other limit theorems in probability can be understood from a similar information-theoretic point of view.

Plan for future lectures

In the next lecture or two we will present a full proof of the entropic central limit theorem, and also discuss briefly how other limit theorems can be analogously understood from this information-theoretic point of view. Later, we will look at finer behavior than limit theorems, for instance we may look at how information theory can provide insights into large deviations and concentration inequalities.

Lecture by Mokshay Madiman | Scribed by Patrick Rebeschini

25. September 2013 by Ramon van Handel

Fall 2013: Information theoretic methods

While information theory has traditionally been based on probabilistic methods, ideas and methods from information theory have recently played an increasingly important role in various areas of probability theory itself (as well as in statistics, combinatorics, and other areas of mathematics). The goal of these informal lectures is to introduce some topics and tools at the intersection of probability theory and information theory.  No prior knowledge of information theory will be assumed.  Potential topics include: entropic central limit theorems, entropic Poisson approximation, and related information-theoretic and probabilistic inequalities; connections to logarithmic Sobolev inequalities and Stein’s method; entropic inequalities for additive, matroidal, and tree structures, and their applications; transportation cost-information inequalities and their relation to concentration of measure; basic large deviations theory.

Prerequisites: Probability at the level of ORF 526 is assumed.

Time and location: Thursdays, 4:30-6:00, Bendheim Center classroom 103.
The first lecture will be on September 19.


07. September 2013 by Ramon van Handel

End of Spring 2013 seminar

That was it for the Spring 2013 stochastic analysis seminar! We will be back in the Fall. The topic and location/time slot will be announced in September.

03. May 2013 by Ramon van Handel

Lectures 7/8. Games on random graphs

The following two lectures by Rene Carmona are on games on random graphs.
Many thanks to Patrick Rebeschini for scribing these lectures!

In the following we discuss some results from the paper “Connectivity and equilibrium in random games” by Daskalakis, Dimakis, and Mossel. We define a random game on a random graph and we characterize the graphs that are likely to exhibit Nash equilibria for this game. We show that if the random graph is drawn from the Erdös-Rényi distribution, then in the high connectivity regime the law of the number of pure Nash equilibria converges toward a Poisson distribution, asymptotically, as the size of the graph is increased.

Let G=(V,E) be a simple (that is, undirected and with no self-edges) graph, and for each v\in V denote by N(v) the set of neighbors of v, that is, N(v):=\{v'\in V : (v,v')\in E\}. We think of each vertex in V as a player in the game that we are about to introduce. At the same time, we think of each edge (v,v')\in E as a strategic interaction between players v and v'.

Definition (Game on a graph). For each v\in V let S_v represent the set of strategies for player v, assumed to be a finite set. We naturally extend this definition to include families of players: for each A\subseteq V, let S_A= \times_{v\in A} S_v be the set of strategies for each player in A. For each v\in V, denote by u_v:  S_v\times S_{N(v)}\ni(\sigma_v,\sigma_{N(v)})\rightarrow u_v(\sigma_v,\sigma_{N(v)}) \in \mathbb{R} the reward function for player v. A game is a collection (S_v,u_v)_{v\in V}.

The above definition describes a game that is static, in the sense that the game is played only once, and local, in the sense that the reward function of each player depends only on its own strategy and on the strategies of the players in its neighborhood. We now introduce the notion of pure Nash equilibrium.

Definition (Pure Nash equilibrium). We say that \sigma \in S_V is a pure Nash equilibrium (PNE) if for each v\in V we have

    \[u_v(\sigma_v,\sigma_{N(v)}) \ge u_v(\tau,\sigma_{N(v)}) 	\qquad\text{for each $\tau\in S_v$}.\]

A pure Nash equilibrium represents a state where no player can be better off by changing his own strategy if he is the only one who is allowed to do so. In order to investigate the existence of a pure Nash equilibrium it suffices to study the best response function defined below.

Definition (Best response function). Given a reward function u_v for player v\in V, we define the best response function \operatorname{BR}_v for v as

    \[\operatorname{BR}_v(\sigma_v,\sigma_{N(v)}) := 	\begin{cases}  		1	& \text{if } \sigma_v\in\arg\sup_{\tau\in S_v} u_v(\tau,\sigma_{N(v)}),\\ 		0	& \text{otherwise}. 	\end{cases}\]

Clearly, \sigma is a pure Nash equilibrium if and only if \operatorname{BR}_v(\sigma_v,\sigma_{N(v)})=1 for each v\in V. We now define the type of random games that we will be interested in; in order to do so, we need to specify the set of strategies and the reward function for each player.

Definition (Random game on a fixed graph). For a graph G=(V,E) and an atomless probability measure \mu on \mathbb{R}, let \mathcal{D}_{G,\mu} be the associated random game defined as follows:

  1. S_v=\{0,1\} for each v\in V;
  2. \{ u_v(\sigma_v,\sigma_{N(v)}) \}_{v\in V, \sigma_v \in S_v, \sigma_{N(v)} \in S_{N(v)}} is a collection of independent identically distributed random variables with distribution \mu.

Remark. For each game \mathcal{D}_{G,\mu} the family \{\operatorname{BR}_v(0,\sigma_{N(v)})\}_{v\in V, \sigma_{N(v)}\in S_{N(v)}} is a collection of independent random variables that are uniformly distributed in \{0,1\}, and for each v\in V, \sigma_{N(v)}\in S_{N(v)} we have \operatorname{BR}_v(1,\sigma_{N(v)}) = 1-\operatorname{BR}_v(0,\sigma_{N(v)}) almost surely. In fact, note that \operatorname{BR}_v(0,\sigma_{N(v)})=1 if and only if u_v(0,\sigma_{N(v)}) \ge u_v(1,\sigma_{N(v)}) and this event has probability 1/2 since the two random variables appearing on both sides of the inequality sign are independent with the same law \mu and \mu is atomless. As far as the analysis of the existence of pure Nash equilibria is concerned, we could take the present notion of best response functions as the definition of our random game on a fixed graph. In fact, note that the choice of \mu in \mathcal{D}_{G,\mu} does not play a role in our analysis, and we would obtain the same results by choosing different (atomless) distributions for sampling (independently) the reward function of each player.
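A brute-force sketch of these definitions (Python; the 4-cycle and the normal rewards are arbitrary choices, and by the remark any atomless law would do): sample one game \mathcal{D}_{G,\mu} on a small fixed graph and count its pure Nash equilibria by enumerating all 2^n strategy profiles.

    import itertools
    import numpy as np

    rng = np.random.default_rng(1)

    def count_pne(n, edges):
        """Number of pure Nash equilibria of one sampled game on G=(V,E)."""
        nbrs = [sorted({w for e in edges if v in e for w in e} - {v}) for v in range(n)]
        # u[v] maps (own strategy, neighborhood profile) to an i.i.d. N(0,1) reward
        u = [{(s, t): rng.standard_normal()
              for s in (0, 1)
              for t in itertools.product((0, 1), repeat=len(nbrs[v]))}
             for v in range(n)]
        count = 0
        for sigma in itertools.product((0, 1), repeat=n):
            prof = [tuple(sigma[w] for w in nbrs[v]) for v in range(n)]
            if all(u[v][(sigma[v], prof[v])] >= u[v][(1 - sigma[v], prof[v])]
                   for v in range(n)):
                count += 1
        return count

    print(count_pne(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))   # PNE count on a 4-cycle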

Denote by G(n,p) the distribution of a Erdös-Rényi random graph with n vertices where each edge is present independently with probability p. We now introduce the notion of a random game on a random graph.

Definition (Random game on a random graph). For each n\in\mathbb{N}, p\in(0,1) and each probability measure \mu on \mathbb{R}, do the following:

  1. choose a graph G from G(n,p);
  2. choose a random game from \mathcal{D}_{G,\mu} for the graph G.

Henceforth, given a random variable X let \mathcal{L}(X) represent its distribution. Given two measures \mu and \nu on a measurable space, define the total variation distance between \mu and \nu as

    \[\| \mu - \nu \|_{TV} := \sup_{f: \| f\| \le 1} | \mu f - \nu f |,\]

where the supremum is taken over measurable functions such that the supremum norm is less than or equal to 1.

We are now ready to state the main theorem that we will prove in the following (Theorem 1.9 in Daskalakis, Dimakis, and Mossel).

Theorem 1 (High connectivity regime). Let Z^{n,p} be the number of pure Nash equilibria in the random game on a random graph defined above. Define the high-connectivity regime as

    \[p(n)= \frac{(2+\varepsilon(n)) \log n}{n},\]

where \varepsilon:\mathbb{N}\rightarrow\mathbb{R} satisfies the following two properties:

    \begin{align*} 	\varepsilon (n) &> c &\text{for each $n\in\mathbb{N}$, for some $c>0$,}\\ 	\varepsilon (n) &\le \frac{n}{\log n} - 2 &\text{for each $n\in\mathbb{N}$.} \end{align*}

Then, we have

    \[\mathbf{P}_{G(n,p)} \{ \| \mathcal{L}(Z^{n,p(n)}) - \mathcal{L}(N_1) \|_{TV}  	\le O(n^{-\varepsilon/8}) + e^{-\Omega(n)} \}  	\ge 1- \frac{2}{n^{\varepsilon/8}},\]

where \mathbf{P}_{G(n,p)} denotes the conditional probability given the graph G(n,p) and N_1 is a Poisson random variable with mean 1. In particular,

    \[\lim_{n\rightarrow\infty} \mathcal{L}(Z^{n,p(n)}) = \mathcal{L}(N_1),\]

which shows that in this regime a pure Nash equilibrium exists with probability converging to 1-\frac{1}{e} as the size of the network increases.

Remark. Using the terminology of statistical mechanics, the first result in Theorem 1 represents a quenched-type result since it involves the conditional distribution of a system (i.e., the game) given its environment (i.e., the graph). On the other hand, the second result represents an annealed-type result, where the unconditional probability is considered.

In order to prove Theorem 1 we need the following lemma on Poisson approximations. The lemma is adapted from the results of R. Arratia, L. Goldstein, and L. Gordon (“Two moments suffice for Poisson approximations: the Chen-Stein method”, Ann. Probab. 17, 9-25, 1989) and it shows how the total variation distance between the law of a sum of Bernoulli random variables and a Poisson distribution can be bounded by the first and second moments of the Bernoulli random variables. This result is a particular instance of Stein’s method in probability theory.

Lemma 2 (Arratia, Goldstein, and Gordon, 1989). Let \{X_i\}_{i=0,1,\ldots,N} be a collection of Bernoulli random variables with p_i:=\mathbf{P}\{X_i=1\}. For each i\in\{0,1,\ldots,N\} let B_i\subseteq \{0,1,\ldots,N\} be such that \{X_j\}_{j\in B^c_i} is independent of X_i. Define

    \begin{align*} 	b_1 := \sum_{i=0}^N \sum_{j\in B_i} p_ip_j,\qquad\qquad 	b_2 := \sum_{i=0}^N \sum_{j\in B_i\setminus \{i\}} p_{ij}, \end{align*}

where p_{ij} := \mathbf{P}\{X_i=1,X_j=1\}. Define Z:=\sum_{i=0}^N X_i and \lambda := \mathbf{E} Z = \sum_{i=0}^N p_i. If N_{\lambda} is a Poisson random variable with mean \lambda, then

    \[\| \mathcal{L} (Z) - \mathcal{L} (N_\lambda) \|_{TV} \le 2 (b_1+b_2).\]

Proof. We define the following operators that act on each function f : \mathbb{N}\rightarrow\mathbb{R}:

    \begin{align*} 	[Df](n) &:= f(n+1)-f(n)&\text{for each $n\in\mathbb{N}$},\\ 	[Tf](n) &:= nf(n)-\lambda f(n+1)&\text{for each $n\in\mathbb{N}$},\\ 	[Sf](n+1) &:= - \frac{\mathbf{E}[f(N_\lambda) \mathbf{1}_{N_\lambda \le n}]} 	{\lambda \mathbf{P}\{N_\lambda = n\}} &\text{for each $n\in\mathbb{N}$}, 	& &[Sf](0):=0. \end{align*}

We point out that T characterizes N_\lambda in the sense that \mathbf{E} [[Tf](M)] = 0 for every bounded f : \mathbb{N}\rightarrow\mathbb{R} if and only if M is a Poisson random variable with mean \lambda; T is an example of a Stein operator. First of all, we show that for each f : \mathbb{N}\rightarrow\mathbb{R} we have TSf = f. In fact, if n=0 we have

    \[[TSf](0) = T[Sf](0) = - \lambda [Sf](1)  	= \frac{\mathbf{E}[f(N_\lambda) \mathbf{1}_{N_\lambda \le 0}]} 	{\mathbf{P}\{N_\lambda = 0\}} = f(0)\]

and if n\ge 1 we have

    \begin{align*} 	[TSf](n) &= T[Sf](n) =  n[Sf](n) - \lambda [Sf](n+1)\\ 	&= -n \frac{\mathbf{E}[f(N_\lambda) \mathbf{1}_{N_\lambda \le n-1}]} 	{\lambda \mathbf{P}\{N_\lambda = n-1\}}  	+ \frac{\mathbf{E}[f(N_\lambda) \mathbf{1}_{N_\lambda \le n}]} 	{\mathbf{P}\{N_\lambda = n\}}\\ 	&= \frac{\mathbf{E}[f(N_\lambda) \mathbf{1}_{N_\lambda \le n}] 	- \mathbf{E}[f(N_\lambda) \mathbf{1}_{N_\lambda \le n-1}]} 	{\mathbf{P}\{N_\lambda = n\}}\\ 	&= f(n). \end{align*}

For each i\in\{0,1,\ldots,N\} define \tilde Z_i := \sum_{j\in B^c_i}X_j and Y_i := Z-X_i = \sum_{j\in\{0,1,\ldots,N\}\setminus \{i\}} X_j. The following properties hold:

  1. \tilde Z_i \le Y_i \le Z,
  2. X_i f(Z) = X_i f(Y_i+1),
  3. f(Y_i+1)-f(Z+1) = X_i[f(Y_i+1)-f(Y_i+2)].

In what follows, consider any given function h:\mathbb{N}\rightarrow \mathbb{R} such that \| h\| := \sup_{n\in\mathbb{N}} |h(n)| \le 1. Define the function \tilde h as \tilde h(n):=h(n)-\mathbf{E}h(N_\lambda) for each n\in\mathbb{N}, and let f:= S \tilde h. From what was seen above we have Tf=\tilde h and we get

    \begin{align*} 	\mathbf{E} [h(Z)-h(N_\lambda)] =& \ \mathbf{E} \tilde h (Z) = \mathbf{E} [Tf] (Z) 	= \mathbf{E} [Zf(Z)-\lambda f (Z+1)] \\ 	=& \ \sum_{i=0}^N \mathbf{E} [X_i f(Z) - p_i f(Z+1)] 	\stackrel{\text{(ii)}}{=} \sum_{i=0}^N \mathbf{E} [X_i f(Y_i+1) - p_i f(Z+1)]\\ 	=& \ \sum_{i=0}^N \mathbf{E} [p_i f(Y_i+1) - p_i f(Z+1)]  	+ \sum_{i=0}^N \mathbf{E} [(X_i - p_i)f(Y_i+1)]\\ 	\stackrel{\text{(iii)}}{=}& \ \sum_{i=0}^N p_i \mathbf{E} [X_i (f(Y_i+1) - f(Y_i+2))] \\ 	& \ + \sum_{i=0}^N \mathbf{E} [(X_i - p_i)(f(Y_i+1)-f(\tilde Z_i + 1))]  	+ \sum_{i=0}^N \mathbf{E} [(X_i - p_i)f(\tilde Z_i + 1)]. \end{align*}

The first term is bounded above by \| Df \|\sum_{i=0}^N p^2_i while the third term is equal to 0 since \tilde Z_i is independent of X_i. In order to bound the second term we want to rewrite each term f(Y_i+1)-f(\tilde Z_i + 1) as a telescoping sum. In what follows, fix i\in\{0,1,\ldots,N\}, label the elements of B_i\setminus\{i\} as \{j_1,\ldots,j_K\} and define

    \begin{align*} 	U_0 &:= \tilde Z_i = \sum_{j\in B^c_i} X_{j},\\ 	U_{k} &:= U_{k-1} + X_{j_k} \qquad \text{for $k\in\{1,\ldots,K\}$}. \end{align*}

Noticing that U_K=Y_i, we have

    \[f(Y_i+1)-f(\tilde Z_i + 1) = \sum_{k=1}^K [ f(U_{k-1}+X_{j_k}+1)  	- f(U_{k-1}+1)]\]

and we get

    \begin{align*} 	\lefteqn{\mathbf{E} [(X_i - p_i)(f(Y_i+1)-f(\tilde Z_i + 1))]=}\\ 	&\qquad\qquad= 	\sum_{k=1}^K \mathbf{E} [(X_i - p_i)(f(U_{k-1}+X_{j_k}+1) - f(U_{k-1}+1))]\\ 	&\qquad\qquad=  	\sum_{k=1}^K \mathbf{E} [(X_i - p_i) X_{j_k} (f(U_{k-1}+2) - f(U_{k-1}+1))]\\ 	&\qquad\qquad\le \| Df \| \sum_{k=1}^K \mathbf{E} [(X_i + p_i)X_{j_k}] 	= \| Df \| \sum_{k=1}^K (p_{ij_k} + p_ip_{j_k})\\ 	&\qquad\qquad= \| Df \| \sum_{j\in B_i\setminus \{i\}} ( p_{ij} + p_i p_j). \end{align*}

Therefore, combining all together we get

    \begin{align*} 	\mathbf{E} [h(Z)-h(N_\lambda)] &\le \| Df \|\sum_{i=0}^N p^2_i  	+ \| Df \| \sum_{i=0}^N \sum_{j\in B_i\setminus \{i\}} ( p_{ij} + p_i p_j)\\ 	&= \| Df \| \bigg( \sum_{i=0}^N \sum_{j\in B_i \setminus \{i\}} p_{ij}  	+ \sum_{i=0}^N\sum_{j\in B_i} p_i p_j \bigg)\\ 	&= \| Df \| (b_1+b_2). \end{align*}

Since the total variation distance can be characterized in terms of sets as

    \[\| \mathcal{L} (Z) - \mathcal{L} (N_\lambda) \|_{TV}  	= 2 \sup_{A\subset \mathbb{N}} | \mathbf{P}(Z\in A) - \mathbf{P}(N_\lambda\in A) |,\]

from this point on we restrict our analysis to indicator functions, which are easier to deal with than generic functions. For each A\subset\mathbb{N} define h_A:=\mathbf{1}_A, \tilde h_A := h_A-\mathbf{E}h_A(N_\lambda) = \mathbf{1}_A-\mathbf{P}\{N_\lambda \in A\} and f_A := S \tilde h_A. The previous result yields

    \[\| \mathcal{L} (Z) - \mathcal{L} (N_\lambda) \|_{TV}  	= 2 \sup_{A\subset \mathbb{N}} |\mathbf{E} [h_A(Z)-h_A(N_\lambda)]| 	\le 2 (b_1+b_2) \sup_{A\subset \mathbb{N}} \| Df_A \|\]

and the proof of the Lemma is concluded if we show that \| Df_A \| \le 1 for each A\subset \mathbb{N}. In fact, in what follows we will show that

    \[\sup_{A\subset \mathbb{N}} \| Df_A \| \le \frac{1-e^{-\lambda}}{\lambda},\]

where the right hand side is clearly upper bounded by 1. The proof that we are going to present is contained in the Appendix of “Poisson Approximation for Some Statistics Based on Exchangeable Trials” by Barbour and Eagleson.

First of all, note that for each A\subset \mathbb{N} we have f_A(0) = 0 and

    \begin{align*} 	f_A(n+1) &= - \frac{\mathbf{E}[\tilde h_A(N_\lambda) \mathbf{1}_{N_\lambda \le n}]} 	{\lambda \mathbf{P}\{N_\lambda = n\}} = \frac{e^\lambda n!}{\lambda^{n+1}}  	(\mathbf{P}\{N_\lambda \in A\} \mathbf{P}\{N_\lambda \le n\}  	- \mathbf{P}\{N_\lambda \in A \cap \{0,1,\ldots,n\}\}), \end{align*}

for each n\in\mathbb{N}. From this expression it is clear that f_A = \sum_{j\in A} f_{\{j\}} for each A\subset\mathbb{N}, which suggests that we can restrict our analysis to singletons. For each j\in\mathbb{N} we have

    \begin{align*} 	-f_{\{j\}}(n+1) = 	\begin{cases} 		-\frac{\lambda^j n!}{\lambda^{n+1}j!} \mathbf{P}\{N_\lambda \le n\}  		& \text{if }n < j\\ 		\frac{\lambda^j n!}{\lambda^{n+1}j!} \mathbf{P}\{N_\lambda > n\}  		& \text{if }n \ge j\\ 	\end{cases} \end{align*}

and from the series expansion of the Poisson probabilities it is easily seen that the function n\in\mathbb{N} \longrightarrow -f_{\{j\}}(n) is negative and decreasing if n \le j and is positive and decreasing if n \ge j+1. Hence, the only positive value taken by the difference function n\in\mathbb{N} \longrightarrow -f_{\{j\}}(n+1)+f_{\{j\}}(n) corresponds to the case n=j, which can be bounded as follows:

    \begin{align*} 	-f_{\{n\}}(n+1)+f_{\{n\}}(n) &= \frac{1}{\lambda} \mathbb{P}\{N_\lambda > n\} 	+ \frac{1}{n} \mathbb{P}\{N_\lambda \le n-1\}\\ 	&= \frac{e^{-\lambda}}{\lambda} \left( \sum_{k=n+1}^\infty \frac{\lambda^k}{k!} 	+ \sum_{k=1}^n \frac{\lambda^k}{k!} \frac{k}{n} \right) \\ 	&\le \frac{e^{-\lambda}}{\lambda} (e^\lambda - 1) = \frac{1-e^{-\lambda}}{\lambda}. \end{align*}

Therefore, for each A\subset\mathbb{N}, n\in\mathbb{N} we have

    \begin{align*} 	- Df_A(n) &= -f_A(n+1) + f_A(n) = \sum_{j\in A} (-f_{\{j\}}(n+1) + f_{\{j\}}(n))\\ 	&= \mathbf{1}_A (n) (-f_{\{n\}}(n+1) + f_{\{n\}}(n)) 	+ \sum_{j \in A : j\neq n} (-f_{\{j\}}(n+1) + f_{\{j\}}(n))\\ 	&\le \frac{1-e^{-\lambda}}{\lambda}. \end{align*}

Noticing that f_{A^c} = - f_A, for each A\subset\mathbb{N}, n\in\mathbb{N} we also get

    \begin{align*} 	Df_A(n) = - Df_{A^c}(n) \le \frac{1-e^{-\lambda}}{\lambda}, \end{align*}

and we have proved that \sup_{A\subset \mathbb{N}} \| Df_A \| \le \frac{1-e^{-\lambda}}{\lambda}. \square
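Before returning to games, here is a small numerical sanity check of Lemma 2 in the simplest setting (a sketch using scipy, assumed available): if the X_i are i.i.d. Bernoulli(p), we may take B_i=\{i\}, so that b_1=(N+1)p^2 and b_2=0, and the lemma bounds the total variation distance (in the normalization used in this post, i.e. the \ell^1 distance of the pmfs) between Bin(N+1,p) and the Poisson law with the same mean.

    import numpy as np
    from scipy.stats import binom, poisson

    m, p = 100, 0.05                  # m = N+1 i.i.d. Bernoulli(p) variables
    k = np.arange(0, 2 * m)
    # ||L(Z) - L(N_lambda)||_TV in this normalization is the l1 distance of the pmfs
    tv = np.abs(binom.pmf(k, m, p) - poisson.pmf(k, m * p)).sum()
    print(tv, "<=", 2 * m * p**2)     # Chen-Stein bound with b1 = m p^2, b2 = 0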

We now introduce the notation that will allow us to apply Lemma 2 to prove Theorem 1. We label the pure strategy profiles in S_V=\times_{v\in V} S_v as \{\sigma^0,\sigma^1,\ldots,\sigma^N\}, where N:=2^n-1 and, as always, n=|V|:=\mathop{\mathrm{card}} V. It will often be convenient to use the labels 1,2,\ldots,n to enumerate the vertices of the graph G. Accordingly, one can think of the strategy profiles \sigma^i as defined in a specific way, for example by positing that \sigma^i(1)\sigma^i(2)\ldots\sigma^i(n) is the binary decomposition of i. In particular, \sigma^0 becomes the strategy where each player plays zero, that is, \sigma^0_v=0 for each v\in V, and \sigma^N the strategy where each player plays one, that is, \sigma^N_v=1 for all v\in V. For each i\in\{0,1,\ldots,N\} define

    \[X_i := 	\begin{cases}  		1	& \text{if $\sigma^i$ is a pure Nash equilibrium},\\ 		0	& \text{otherwise}. 	\end{cases}\]

Clearly the quantity Z:=\sum_{i=0}^N X_i identifies the number of pure Nash equilibria and Z>0 corresponds to the existence of a pure Nash equilibrium. We recall that both the randomness in the choice of the graph and the randomness in the choice of the game are embedded in the random variables \{X_i\}_{i\in\{0,1,\ldots,N\}}. Note that conditionally on a given graph G sampled from G(n,p) we have, for each i\in\{0,1,\ldots,N\},

    \begin{align*} 	\mathbf{E}_{G} X_i  	&= \mathbf{P}_{G} \{\text{$\sigma^i$ is a pure Nash equilibrium}\}\\ 	&= \mathbf{P}_{G} \{\operatorname{BR}_v(\sigma^i_v,\sigma^i_{N(v)})  	= 1 \ \forall v\in V \}\\ 	&= 2^{-n}, \end{align*}

from which it follows that

    \begin{align*} 	\mathbf{E}_{G} Z  	= \sum_{i=0}^N \mathbf{E}_{G} X_i = (1+N) 2^{-n} = 1. \end{align*}

That is, the current definition of a game on a fixed graph implies that the expected number of pure Nash equilibria is 1 for any given graph. It follows that \mathbf{E} Z = 1 as well. Notice that Theorem 1 brings more information to the table, since it describes the asymptotic distribution of Z\equiv Z^{n,p(n)} in a particular regime of the Erdös-Rényi random graph.
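These facts are easy to probe by simulation (a sketch at small n, so only suggestive of the asymptotics): by the earlier remark, the bits \operatorname{BR}_v(0,\cdot) are i.i.d. fair coin flips and \operatorname{BR}_v(1,\cdot)=1-\operatorname{BR}_v(0,\cdot), so we can sample Z directly over Erdös-Rényi graphs with the high-connectivity scaling and compare its empirical law with Poisson(1).

    import itertools
    import math
    import numpy as np

    rng = np.random.default_rng(2)
    n, trials = 10, 500
    p = (2 + 1.0) * np.log(n) / n                     # high-connectivity regime, eps = 1

    counts = []
    for _ in range(trials):
        upper = np.triu(rng.random((n, n)) < p, k=1)  # Erdos-Renyi graph G(n,p)
        adj = upper | upper.T
        nbrs = [np.flatnonzero(adj[v]) for v in range(n)]
        # BR_v(0, t) i.i.d. uniform on {0,1}; sigma is a PNE iff BR_v(0, t) != sigma_v for all v
        br0 = [{t: int(rng.integers(2))
                for t in itertools.product((0, 1), repeat=len(nbrs[v]))}
               for v in range(n)]
        z = sum(all(br0[v][tuple(sigma[w] for w in nbrs[v])] != sigma[v] for v in range(n))
                for sigma in itertools.product((0, 1), repeat=n))
        counts.append(z)

    counts = np.array(counts)
    print("mean Z:", counts.mean())                   # E Z = 1 for every graph
    emp = np.bincount(counts, minlength=5)[:5] / trials
    poi = np.exp(-1.0) / np.array([math.factorial(k) for k in range(5)])
    print("empirical P(Z=k):", np.round(emp, 3))
    print("Poisson(1) pmf:  ", np.round(poi, 3))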

Having set the stage in this way, it is tempting to apply Lemma 2 directly to the random variables \{X_i\}_{i\in\{0,1,\ldots,N\}} just defined. However, this approach is not fruitful since, apart from trivial cases, any given random variable X_i has a neighborhood of dependence B_i that coincides with the entire set \{0,1,\ldots,N\}. To see this, consider any two strategy profiles \sigma^i and \sigma^j. As long as there exists v\in V such that \sigma^i_v=\sigma^j_v, we can always find a realization of the graph G such that v does not have any edges attached to it, that is, N(v)=\varnothing; this implies that \operatorname{BR}_v(0,\sigma^i_{N(v)}) = \operatorname{BR}_v(0,\sigma^j_{N(v)}) and, consequently, that X_i and X_j are not independent. Therefore, only X_0 and X_N are independent, where \sigma^0_v:=0 and \sigma^N_v:=1 for each v\in V. However, Lemma 2 can be fruitfully applied to the random variables \{X_i\}_{i\in\{0,1,\ldots,N\}} when we look at them conditionally on a given graph realization, as the following lemma demonstrates (Lemma 2.2 of Daskalakis, Dimakis, Mossel).

Lemma 3. Let G=(V,E) be a graph. Define

    \[B_0 := \{ j\in\{0,1,\ldots,N\}  	: \exists v \in V \text{ such that } \sigma^j_{v'}=0  	\text{ for each $v'$ satisfying } (v,v') \in E \}\]

and for each i\in\{1,2,\ldots,N\} define

    \[B_i := \{ j \in \{0,1,\ldots,N\} : \sigma^j = \sigma^i \oplus \sigma^k  	\text{ for some } k\in B_0\},\]

where

    \[\sigma^i \oplus \sigma^j := (\sigma^i(1) \oplus \sigma^j(1),\ldots,  	\sigma^i(|V|) \oplus \sigma^j(|V|))\]

and \oplus is the exclusive-or operation. Then, for each i\in\{0,1,\ldots,N\} we have that X_i is independent of \{X_j\}_{j\in B^c_i}.

Proof. We first show that X_0 is independent of \{X_j\}_{j\in B^c_0}. In order to make the independence structure manifest, we characterize B^c_0. Recall that B_0 is the index set of the pure strategy profiles for which there exists a player having all his neighbors playing 0. Therefore, each strategy profile corresponding to an index in B^c_0 is characterized by the fact that each player has at least one neighbor who is playing 1. Hence, for each j\in B^c_0 we have \sigma^0_{N(v)} \neq \sigma^j_{N(v)} for all v\in V and, consequently, the events

    \[\{X_0=\alpha\} = \{\operatorname{BR}_v(\sigma^0_v,\sigma^0_{N(v)})  	= \alpha \ \forall v\in V\} \qquad \alpha\in \{0,1\}\]

are independent of the events

    \[\{X_j=\alpha_j\} = \{\operatorname{BR}_v(\sigma^j_v,\sigma^j_{N(v)}) = \alpha_j 	 \ \forall v\in V\}  	 \qquad \alpha_j\in \{0,1\}, j\in B^c_0,\]

which proves our claim.

We now generalize this result to show that X_i is independent of \{X_j\}_{j\in B^c_i}, for each i\in \{1,2,\ldots,N\}. For any i\in\{0,1,\ldots,N\}, note that the exclusive-or map with respect to \sigma^i preserves the differences in the strategy profiles (and, of course, it also preserves the equalities). That is, if \sigma^j and \sigma^k are such that \sigma^j_v\neq \sigma^k_v for some v\in V, then also \sigma^i\oplus\sigma^j and \sigma^i\oplus\sigma^k are such that (\sigma^i\oplus\sigma^j)_v \neq (\sigma^i\oplus\sigma^k)_v. Therefore,

    \[\sigma^0_{N(v)} \neq \sigma^j_{N(v)} \qquad \text{for each $v\in V$}\]

holds true if and only if

    \[\sigma^i_{N(v)}  	= (\sigma^i\oplus\sigma^0)_{N(v)} \neq (\sigma^i\oplus\sigma^j)_{N(v)} 	\qquad\text{for each $v\in V$}\]

holds true. Equivalently stated, j\in B^c_0 if and only if k\in B^c_i, where k is the index set such that \sigma^k = \sigma^i\oplus\sigma^j. Hence, the proof is concluded once we notice that \sigma^i=\sigma^i\oplus\sigma^0 and that B_i is defined as the index set of the pure strategy profiles that are obtained by an exclusive-or map with respect to \sigma^i of a strategy profile in B_0. \square

For a given graph G=(V,E) with |V|=n, define p_i(G):=\mathbf{P}_G\{X_i=1\}=2^{-n}=(N+1)^{-1}, where N:=2^{n}-1 and \mathbf{P}_G represents the conditional probability conditionally on the graph G. Define B_i(G)\subseteq \{0,1,\ldots,N\} as in Lemma 3; then, conditionally on G we have that \{X_j\}_{j\in B^c_i(G)} is independent of X_i. Define

    \begin{align*} 	b_1(G) &:= \sum_{i=0}^N \sum_{j\in B_i(G)} p_i(G)p_j(G)  	= \frac{1}{(N+1)^2}\sum_{i=0}^N |B_i(G)|,\\ 	b_2(G) &:= \sum_{i=0}^N \sum_{j\in B_i(G)\setminus \{i\}} p_{ij}(G), \end{align*}

where p_{ij}(G) := \mathbf{P}_G\{X_i=1,X_j=1\}. Define Z:=\sum_{i=0}^N X_i and recall that \mathbf{E} Z = \mathbf{E} \mathbf{E}_G Z = 1. If N_1 is a Poisson random variable with mean 1, then Lemma 2 yields

    \[\| \mathcal{L}_G (Z) - \mathcal{L} (N_1) \|_{TV} \le 2 (b_1(G)+b_2(G)).\]

At this point, let us introduce the following two lemmas (Lemmas 2.4 and 2.5 of Daskalakis, Dimakis, Mossel).

Lemma 4. If G is sampled from the Erdös-Rényi distribution G(n,p), we have

    \begin{align*} 	\mathbf{E}[b_1(G)] &\le  	R(n,p) := \sum_{s=0}^n \binom{n}{s} 2^{-n} \min\{1,n(1-p)^{s-1}\},\\ 	\mathbf{E}[b_2(G)] &\le  	S(n,p) := \sum_{s=1}^n \binom{n}{s} 2^{-n} [(1+(1-p)^s)^{n-s} - (1-(1-p)^s)^{n-s}]. \end{align*}

Lemma 5. Under the assumptions of Theorem 1 there exist \alpha',\alpha'',\beta',\beta''\in \mathbb{R}_+ and n_0',n_0''\in\mathbb{N}_+ such that

    \begin{align*} 	R(n,p) &\le \alpha' n^{-\varepsilon/4} + e^{-\beta' n} &\text{for each $n>n_0'$},\\ 	S(n,p) &\le \alpha'' n^{-\varepsilon/4} + e^{-\beta'' n} &\text{for each $n>n_0''$}. \end{align*}

We now show how the proof of Theorem 1 follows easily from the two lemmas above.

Proof of Theorem 1. Let \alpha',\alpha'',\beta',\beta''\in \mathbb{R}_+ and n_0',n_0''\in\mathbb{N}_+ as in Lemma 5. Define \alpha^\star:=\max\{\alpha',\alpha''\}, \beta^\star:=\min\{\beta',\beta''\} and n_0^\star:=\max\{n_0',n_0''\}. Clearly, by Lemma 5 we have

    \begin{align*} 	R(n,p) &\le \alpha^\star n^{-\varepsilon/4}  	+ e^{-\beta^\star n} &\text{for each $n>n_0^\star$},\\ 	S(n,p) &\le \alpha^\star n^{-\varepsilon/4}  	+ e^{-\beta^\star n} &\text{for each $n>n_0^\star$}. \end{align*}

Define the event

    \[A_n := \{ \max\{b_1(G),b_2(G)\} 	\le 2\alpha^\star n^{-\varepsilon/8} + e^{- \beta^\star n} \}.\]

By the Markov inequality and the previous asymptotic bounds, for each n>n_0^\star we have

    \begin{align*} 	\mathbf{P}\{A^c_n\} &\le \mathbf{P}\{ b_1(G) > 	2\alpha^\star n^{-\varepsilon/8} + e^{- \beta^\star n} \} 	+ \mathbf{P}\{ b_2(G) > 	2\alpha^\star n^{-\varepsilon/8} + e^{- \beta^\star n} \}\\ 	&\le \mathbf{P}\{ b_1(G) > 	2\alpha^\star n^{-\varepsilon/8} \} 	+ \mathbf{P}\{ b_2(G) > 	2\alpha^\star n^{-\varepsilon/8} \}\\ 	&\le \frac{\mathbf{E}[b_1(G)]}{2\alpha^\star n^{-\varepsilon/8}} 	+ \frac{\mathbf{E}[b_2(G)]}{2\alpha^\star n^{-\varepsilon/8}}\\ 	&\le n^{-\varepsilon/8} + \frac{e^{-\beta^\star n}}{\alpha^\star  	n^{-\frac{\varepsilon}{8}}}, \end{align*}

where we used the result of Lemma 4 and the above estimates for R(n,p) and S(n,p). Since \varepsilon> c for some c\in\mathbb{R}_+, then there clearly exists n_0^{\star'} such that

    \[\mathbf{P}\{A^c_n\} \le 2 n^{-\varepsilon/8}\qquad\text{for each $n>n_0^{\star'}$}.\]

Hence, we have that \mathbf{P}\{A_n\} \ge 1-2n^{-\frac{\varepsilon}{8}} for n>n_0^{\star'}. Let us now define \alpha,\beta\in\mathbb{R}_+ and n_0\in\mathbb{N}_+ such that n_0>n_0^{\star'} and

    \[4\alpha^\star n^{-\varepsilon/8} + 2e^{- \beta^\star n} 	\le \alpha n^{-\varepsilon/8} + e^{- \beta n} 	\qquad\text{for each $n>n_0$}.\]

Then

    \[\mathbf{P} \{ \| \mathcal{L}_G(Z^{n,p(n)}) - \mathcal{L}(N_1) \|_{TV}  	\le \alpha n^{-\varepsilon/8} + e^{-\beta n} \}  	\ge 1- \frac{2}{n^{\varepsilon/8}},\]

which proves the first statement in Theorem 1. In fact, we have

    \begin{align*} 	&\mathbf{P} \{ \| \mathcal{L}_{G}(Z^{n,p(n)}) - \mathcal{L}(N_1) \|_{TV}  	\le \alpha n^{-\varepsilon/8} + e^{-\beta n} \} \\ 	&\qquad\ge \mathbf{P} \{ \| \mathcal{L}_{G}(Z^{n,p(n)}) - \mathcal{L}(N_1) \|_{TV}  	\le \alpha n^{-\varepsilon/8} + e^{-\beta n} | A_n \} \mathbf{P} \{A_n\}\\ 	&\qquad\ge 1- 2n^{-\frac{\varepsilon}{8}}, \end{align*}

since by definition of A_n, on the event A_n we have

    \begin{align*} 	\| \mathcal{L}_G(Z^{n,p(n)}) - \mathcal{L}(N_1) \|_{TV} &\le 2 (b_1(G)+b_2(G))  	\le 4\alpha^\star n^{-\varepsilon/8} + 2e^{- \beta^\star n} 	\le \alpha n^{-\varepsilon/8} + e^{- \beta n}. \end{align*}

By the properties of conditional expectations we can now prove the convergence in total variation of the unconditional law of Z^{n,p(n)} to the law of N_1. In fact, for n>n_0 we have

    \begin{align*} 	&\| \mathcal{L}(Z^{n,p(n)}) - \mathcal{L}(N_1) \|_{TV} \\ 	&\qquad= \sup_{h: \| h\| \le 1} | \mathbf{E} [h(Z^{n,p(n)}) - h(N_1)] |\\ 	&\qquad\le \sup_{h: \| h\| \le 1} | \mathbf{E} [(h(Z^{n,p(n)}) - h(N_1)) \mathbf{1}_{A_n}] | 	+ \sup_{h: \| h\| \le 1} \mathbf{E} [|h(Z^{n,p(n)}) - h(N_1)| \mathbf{1}_{A_n^c}] \\ 	&\qquad\le \sup_{h: \| h\| \le 1} \mathbf{E} [ 	\mathbf{1}_{A_n} | \mathbf{E}_G [h(Z^{n,p(n)}) - h(N_1)] |] 	+ 2 \mathbf{P}\{A^c_n\}\\ 	&\qquad\le \mathbf{E} [ 	\mathbf{1}_{A_n} \| \mathcal{L}_G(Z^{n,p(n)}) - \mathcal{L}(N_1) \|_{TV} ] 	+ 2 \mathbf{P}\{A^c_n\}\\ 	&\qquad\le \alpha n^{-\varepsilon/8} + e^{-\beta n} + 4 n^{-\varepsilon/8}, \end{align*}

from which it follows that

    \[\lim_{n\rightarrow\infty} \| \mathcal{L}(Z^{n,p(n)}) - \mathcal{L}(N_1) \|_{TV} = 0.\]

Since convergence in total variation implies convergence in distribution, the previous result implies that Z^{n,p(n)} converges in distribution to N_1, which concludes the proof of Theorem 1. \square
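Before turning to the proof of Lemma 4, here is a small Monte Carlo illustration of Theorem 1 (a sketch, ours). It assumes the random-game model used in these notes, in which each player's best response to every profile of neighbor actions is an independent fair coin flip (consistent with the computation of \mathbf{P}_G\{X_0=1,X_{i_s}=1\} in the proof of Lemma 4 below), and it takes the limit N_1 to be Poisson(1); the parameters n=10, p=0.5 are hypothetical, chosen only to keep the enumeration of all 2^n strategy profiles feasible.

    # Monte Carlo sketch: empirical law of the number of pure Nash
    # equilibria versus the Poisson(1) mass function.
    import random
    from itertools import product

    def count_pure_nash(n, p, rng):
        # sample an Erdos-Renyi graph G(n,p)
        nbrs = [[] for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                if rng.random() < p:
                    nbrs[i].append(j)
                    nbrs[j].append(i)
        # each player's best response is a uniformly random function of its
        # neighbors' actions (sampled lazily, cached for consistency)
        br = [dict() for _ in range(n)]
        count = 0
        for sigma in product((0, 1), repeat=n):
            if all(sigma[k] == br[k].setdefault(
                       tuple(sigma[w] for w in nbrs[k]), rng.randrange(2))
                   for k in range(n)):
                count += 1
        return count

    rng = random.Random(0)
    n, p, reps = 10, 0.5, 500
    counts = [count_pure_nash(n, p, rng) for _ in range(reps)]
    for z in range(5):
        print(z, counts.count(z) / reps)  # compare with e^{-1}/z!: .368, .368, .184, .061, .015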

We now provide the proof of Lemma 4, while we refer the reader to Daskalakis, Dimakis, Mossel for the proof of Lemma 5.

Proof of Lemma 4. We begin with the study of \mathbf{E} [b_1(G)]. By the symmetry of the model we have

    \begin{align*} 	\mathbf{E} [b_1(G)] &= \frac{1}{(N+1)^2}\sum_{i=0}^N \mathbf{E} [|B_i(G)|]  	= \frac{1}{N+1} \mathbf{E} [|B_0(G)|]. \end{align*}

Since |B_0(G)| = \sum_{i=0}^N \mathbf{1}_{i\in B_0(G)}, we have \mathbf{E} [|B_0(G)|] = \sum_{i=0}^N \mathbf{P}\{i\in B_0(G)\}. By the symmetry of the Erdös-Rényi distribution we also have that \mathbf{P}\{i\in B_0(G)\} = \mathbf{P}\{j\in B_0(G)\} if \sigma^i and \sigma^j have the same number of players playing 1 (equivalently, the same number of players playing 0).
Therefore, if we label the vertices of the graph as \{1,2,\ldots,n\}, we have

    \[\mathbf{E} [|B_0(G)|] = \sum_{s=0}^n \binom{n}{s}  	\mathbf{E} [\mathbf{1}_{i_s\in B_0(G)}]\]

where for each s\in\{0,1,\ldots,n\} the index i_s\in\{0,1,\ldots,N\} is such that the strategy \sigma^{i_s} satisfies \sigma^{i_s}_{k}=1 if k\le s and \sigma^{i_s}_{k}=0 if k> s. Hence, the bound for \mathbf{E} [b_1(G)] in the statement of the lemma is proved if we show that \mathbf{P} \{i_s\in B_0(G)\} \le n (1-p)^{s-1}. In fact, by definition of B_0(G) we have

    \begin{align*} 	\mathbf{P} \{i_s\in B_0(G)\}  	&= \mathbf{P} \{\text{$G : $ $\exists$ player $k\in\{1,\ldots,n\}$ such that  	$N(k)\cap\{1,\ldots,s\}=\varnothing$}\}\\ 	&\le \sum_{k=1}^n  	\mathbf{P} \{\text{$G : N(k)\cap\{1,\ldots,s\}=\varnothing$}\}\\ 	&\le n (1-p)^{s-1}. \end{align*}

We now study the term \mathbf{E} [b_2(G)]. Proceeding as above, by symmetry we have

    \begin{align*} 	\mathbf{E} [b_2(G)] &= (N+1) \sum_{j=1}^N  	\mathbf{E}[ \mathbf{P}_G\{X_0=1,X_j=1\} \mathbf{1}_{j\in B_0(G)} ]\\ 	&= 2^n \sum_{s=1}^n \binom{n}{s} 	\mathbf{E}[ \mathbf{P}_G\{X_0=1,X_{i_s}=1\} \mathbf{1}_{i_s\in B_0(G)} ] \end{align*}

We now analyze the term \mathbf{E}[ \mathbf{P}_G\{X_0=1,X_{i_s}=1\} \mathbf{1}_{i_s\in B_0(G)} ]. As noticed above, i_s\in B_0(G) if and only if the graph G is such that there exists a player k\in\{1,\ldots,n\} such that N(k)\cap \{1, \dots,s\}=\varnothing. In the case in which such k also satisfies k\in\{1,\ldots,s\}, then \sigma^{i_s}_k=1 and \sigma^{i_s}_{k'}=0 for each k'\in N(k), and it follows that \mathbf{P}_G\{X_0=1,X_{i_s}=1\} = 0. In fact, the event \{X_0=1,X_{i_s}=1\} corresponds to the realizations where both strategy \sigma^0 and strategy \sigma^{i_s} are pure Nash equilibria, that is, \operatorname{BR}_v(\sigma^0_v,\sigma^0_{N(v)})=\operatorname{BR}_v(\sigma^{i_s}_v,\sigma^{i_s}_{N(v)})=1 for each v\in V. But it cannot be that both 0 and 1 are best responses for player k when all players in its neighborhood play 0. Hence, \mathbf{P}_G\{X_0=1,X_{i_s}=1\} \mathbf{1}_{i_s\in B_0(G)} can be nonzero only on the intersection of the event

    \begin{align*} 	A:=& \ \{G: k\in\{1,\ldots,s\} \text{ implies } N(k)\cap \{1, 	\dots,s\} \neq \varnothing \}\\ 	=& \ \{ \text{$\not\exists$ isolated node in the subgraph induced by $\{1,\ldots,s\}$} \} \end{align*}

with the event

    \[B:=\{G: \text{$\exists k\in\{s+1,\ldots,n\}$ such that $N(k)\cap \{1, 	\dots,s\} = \varnothing$} \}.\]

Define p_s:=\mathbf{P} \{A\} and m_s := |\{k\in \{s+1,\ldots,n\} : N(k)\cap \{1,\dots,s\} = \varnothing\}|. Note that we have B=\bigcup_{t=1}^{n-s} \{G: m_s=t\}. On the event A\cap B we have \mathbf{P}_G\{X_0=1,X_{i_s}=1\} \mathbf{1}_{i_s\in B_0(G)}=\mathbf{P}_G\{X_0=1,X_{i_s}=1\} and we get

    \begin{align*} 	&\mathbf{E}[ \mathbf{P}_G\{X_0=1,X_{i_s}=1\} \mathbf{1}_{i_s\in B_0(G)} ]\\ 	&\qquad= \mathbf{E}[ \mathbf{P}_G\{X_0=1,X_{i_s}=1\}  	\mathbf{1}_{i_s\in B_0(G)} | A] \, p_s\\ 	&\qquad= \sum_{t=1}^{n-s} \mathbf{E}[ \mathbf{P}_G\{X_0=1,X_{i_s}=1\}  	| A, m_s = t] \, \mathbf{P}\{m_s=t | A\} \, p_s. \end{align*}

Because of the independence structure of the Erdös-Rényi random graph (the event A depends only on the edges within \{1,\ldots,s\}, while m_s depends only on the edges between \{1,\ldots,s\} and \{s+1,\ldots,n\}) we have

    \[\mathbf{P}\{m_s=t | A\} = \mathbf{P}\{m_s=t \} = \binom{n-s}{t}  	[(1-p)^s]^t [1-(1-p)^s]^{n-s-t}.\]

Furthermore, we have

    \[\mathbf{E}[ \mathbf{P}_G\{X_0=1,X_{i_s}=1\}  	| A, m_s = t] = \frac{1}{2^t} \frac{1}{2^{n-t}} \frac{1}{2^{n-t}}.\]

This follows immediately from the definition of pure Nash equilibrium in terms of the best response functions once it is noticed that on the event \{A, m_s = t\} there are exactly t players (each k\in\{s+1,\ldots,n\} such that N(k)\cap \{1,\dots,s\} = \varnothing) such that (\sigma^0_k,\sigma^0_{N(k)})=(\sigma^{i_s}_k,\sigma^{i_s}_{N(k)}), while for the remaining n-t players we have (\sigma^0_k,\sigma^0_{N(k)})\neq(\sigma^{i_s}_k,\sigma^{i_s}_{N(k)}). Putting everything together we get

    \begin{align*} 	&\mathbf{E}[ \mathbf{P}_G\{X_0=1,X_{i_s}=1\} \mathbf{1}_{i_s\in B_0(G)} ]\\ 	&\qquad= \sum_{t=1}^{n-s} \mathbf{E}[ \mathbf{P}_G\{X_0=1,X_{i_s}=1\}  	\mathbf{1}_{i_s\in B_0(G)} | A, m_s = t] \, \mathbf{P}\{m_s=t | A\} \, p_s\\ 	&\qquad= p_s \sum_{t=1}^{n-s} \binom{n-s}{t} [(1-p)^s]^t [1-(1-p)^s]^{n-s-t} 	\frac{2^t}{4^n}\\ 	&\qquad= \frac{p_s}{4^n} [(1+(1-p)^s)^{n-s} - (1-(1-p)^s)^{n-s}]. \end{align*}

Using the fact that p_s\le 1, it clearly follows that \mathbf{E} [b_2(G)] \le S(n,p). \square

01. May 2013 by Ramon van Handel
Categories: Random graphs

Giant component: final remarks

The past three lectures were devoted to the giant component theorem:

Theorem. Let C_v be the connected component of G(n, \frac{c}{n}) that contains v\in [n].

  1. If c<1, then \max_v|C_v|= O(\log n) in probability.
  2. If c>1, then \max_v|C_v|\sim (1-\rho) n in probability, for some 0<\rho<1.
  3. If c=1, then \max_v|C_v|\sim n^{2/3} in distribution.

We proved only the first (subcritical) and second (supercritical) cases: our presentation was largely inspired by the treatments of Janson, Luczak, and Rucinski and of Durrett. We have omitted the critical case, however, as the last two lectures of the semester will be on another topic. The goal of this post is to provide some final remarks and references on the giant component theorem.

Retrospective

At first sight, the “double jump” in the giant component theorem looks quite shocking. In hindsight, however, this does not seem quite so miraculous, as it mirrors an elementary phenomenon that is covered in many introductory probability courses: given a (nice) random walk S_t with initial condition S_0=a, define the hitting time \tau:=\inf\{t:S_t=b\} for some b<a. Then there are three cases:

  1. If S_t has negative drift, then \mathbb{E}[\tau]<\infty. In fact, the random variable \tau has a light (exponential) tail.
  2. If S_t has positive drift, then \mathbb{P}[\tau=\infty]>0.
  3. If S_t has zero drift, then \tau<\infty a.s. but \mathbb{E}[\tau]=\infty. That is, the random variable \tau has a heavy tail.

This “double jump” in the behavior of the hitting probabilities of a random walk is directly analogous to the behavior of the connected components of an Erdös-Rényi graph, and this was indeed the basic idea behind the proofs given in the previous lectures. Of course, it remains a bit of a miracle that the random walk approximation of the exploration process, which only holds for small times, is sufficiently powerful that it describes so completely the behavior of the random graph.

The critical case

In the subcritical case, the size of the largest component is of order \log n because the hitting time of a random walk with negative drift has an exponential tail: that is, we proved

    \[\mathbb{P}[\tau\ge k] \le e^{-\alpha k}    \quad\Longrightarrow\quad    \mathbb{P}\Big[\max_v|C_v| \ge \gamma\log n\Big] \le    n\,\mathbb{P}[\tau\ge\gamma\log n] \le n^{1-\alpha\gamma}\]

which goes to zero as n\to\infty for \gamma>1/\alpha.

Similarly, we would expect that we can obtain the size of the largest component in the critical case if we understand the heavy tail behavior of the hitting time of a random walk with zero drift. This is in fact the case. Indeed, when there is zero drift, one can show that

    \[\mathbb{P}[\tau\ge k] \lesssim \frac{1}{\sqrt{k}}.\]

The crude union bound argument used above no longer gives the correct answer, but only a slightly better argument is needed. Indeed, note that

    \begin{eqnarray*}     \mathbb{P}\Big[\max_v|C_v| \ge \gamma n^{2/3}\Big] &\le&     \mathbb{P}[|\{v:|C_v|\ge \gamma n^{2/3}\}|\ge \gamma n^{2/3}] \\     &\le&     \frac{\mathbb{E}[|\{v:|C_v|\ge \gamma n^{2/3}\}|]}{\gamma n^{2/3}} \\     &=&     \frac{n\,\mathbb{P}[\tau\ge \gamma n^{2/3}]}{\gamma n^{2/3}}     \lesssim \frac{1}{\gamma^{3/2}}. \end{eqnarray*}

Therefore, n^{-2/3}\max_v|C_v|=O(1). With some further work, a corresponding lower bound can also be proved. See the paper by Nachmias and Peres or the notes by van der Hofstad for the details.
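The square-root tail is also easy to observe in simulation. The following Monte Carlo sketch (ours, for illustration) runs the zero-drift walk with \text{Poisson}(1)-1 increments started at 1 and shows that \sqrt{k}\,\mathbb{P}[\tau\ge k] stabilizes near a constant.

    # Monte Carlo sketch: sqrt(k) * P[tau >= k] is roughly constant for
    # the zero-drift walk with Poisson(1)-1 increments.
    import math, random

    def poisson1(rng):
        # Knuth's method for a Poisson(1) sample
        L = math.exp(-1.0)
        k, p = 0, 1.0
        while p > L:
            k += 1
            p *= rng.random()
        return k - 1

    def hitting_time(rng, tmax):
        s = 1
        for t in range(1, tmax + 1):
            s += poisson1(rng) - 1
            if s == 0:
                return t
        return tmax + 1  # censored at tmax

    rng = random.Random(0)
    tmax, reps = 4096, 20000
    taus = [hitting_time(rng, tmax) for _ in range(reps)]
    for k in [16, 64, 256, 1024, 4096]:
        print(k, math.sqrt(k) * sum(t >= k for t in taus) / reps)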

It turns out that in the critical case n^{-2/3}\max_v|C_v| is not only bounded in probability, but in fact converges weakly to some limiting distribution. This distribution, and much more, is beautifully described by Aldous in terms of Brownian excursions. This is an interesting example of the application of stochastic analysis to discrete probability; unfortunately, we do not have the time to cover it.

In a different direction, it turns out that various additional phase transitions appear when we consider a finer scaling, for example, in the “critical window” c=1\pm c' n^{-1/3}. For an overview of the various transitions, see, for example, section 11.1 in Alon and Spencer.

Connectivity threshold

Rather than considering the size of the largest component, one could ask when the entire Erdös-Rényi graph is connected. Note that when 1<c<\infty, the constant \rho in the size (1-\rho)n of the giant component is always strictly positive, so with high probability the graph is not connected. Therefore, in order for the entire graph to be connected, we must let c\to\infty (that is, the edge probability p must be of larger order than \frac{1}{n}). It turns out that the appropriate scaling for this question is c\sim \log n, and another phase transition arises here.

Theorem. Let p=\frac{a\log n}{n}. If a>1, then the Erdös-Rényi graph G(n,p) is connected with probability tending to 1 as n\to\infty. If a<1, the graph is connected with probability tending to 0 as n\to\infty.

To get some intuition, consider the probability that a vertex v is isolated (that is, disconnected from every other vertex):

    \[\mathbb{P}[v\text{ is isolated}] =    \mathbb{P}[\eta_{vw}=0\text{ for all }w\ne v] =    \bigg(1-\frac{c}{n}\bigg)^{n-1}\sim e^{-c}.\]

Thus for c=a\log n, we have

    \[\mathbb{E}[|\{v:v\text{ is isolated}\}|] \sim    n \,e^{-a\log n} = n^{1-a}.\]

In particular, if a<1, then there exists an isolated vertex with probability tending to one as n\to\infty, in which case the graph is not connected (to make this precise, it is not hard to show that the variance of the number of isolated vertices is of the same order as its mean, so that the probability that the graph is connected tends to zero). Somewhat miraculously, it turns out that with probability tending to one the graph is already connected as soon as there are no isolated vertices, so that we do indeed obtain the sharp transition described in the Theorem above. For a proof of this fact (by a clever combinatorial argument) see, for example, the lecture notes by van der Hofstad. Alternatively, one can use a random walk argument entirely in the spirit of the proofs in the previous lectures to prove that the random graph is connected for a>1: by running simultaneous exploration processes from different vertices as we did in the proof of the supercritical case, one can show that all connected components must intersect when a>1 and thus the entire graph must be connected. See section 2.8 in Durrett for such an argument.
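The sharp transition is already visible in a small simulation. The following sketch (ours; the parameter n=300 and the values of a are hypothetical) estimates the probability that G(n, a\log n/n) is connected; at such modest n the transition is blurred by the n^{1-a} finite-size effects computed above, but the trend is clear.

    # Simulation sketch of the connectivity threshold p = a log n / n.
    import math, random

    def is_connected(n, p, rng):
        # sample G(n,p) and test connectivity by depth-first search from 0
        nbrs = [[] for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                if rng.random() < p:
                    nbrs[i].append(j)
                    nbrs[j].append(i)
        seen, stack = {0}, [0]
        while stack:
            for w in nbrs[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return len(seen) == n

    rng = random.Random(0)
    n, reps = 300, 50
    for a in [0.6, 0.8, 1.2, 1.5]:
        p = a * math.log(n) / n
        print(a, sum(is_connected(n, p, rng) for _ in range(reps)) / reps)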

28. April 2013 by Ramon van Handel
Categories: Random graphs

Lecture 6. Giant component (3)

Let us begin with a brief recap from the previous lecture. We consider the Erdös-Rényi random graph G(n,\frac{c}{n}) in the supercritical case c>1. Recall that C_v denotes the connected component of the graph that contains the vertex v. Our goal is to prove the existence of the giant component with size \max_v|C_v|\sim (1-\rho)n, while the remaining components have size \lesssim\log n.

Fix \beta>0 sufficiently large (to be chosen in the proof), and define the set

    \[K=\{v :  |C_v| > \beta \log n\}\]

of vertices contained in “large” components. The proof consists of two parts:

  • Part 1: \mathbb{P}[C_v = C_{v'}~\forall v,v' \in K] \to 1.
  • Part 2: \frac{|K|}{n} \to 1-\rho in probability.

Part 1 states that all the sufficiently large components must intersect, forming the giant component. Part 2 counts the number of vertices in the giant component. Part 2 was proved in the previous lecture. The goal of this lecture is to prove Part 1, which completes the proof of the giant component.

Overview

As in the previous lectures, the central idea in the study of the giant component is the exploration process (U_t,A_t,R_t), where

    \[|C_v| = \tau := \inf\{ t  :  |A_t|=0\}.\]

We have seen that |A_t| \approx S_t, where S_t is a random walk with increments

    \[X_t = S_t -S_{t-1} \sim \text{Binomial}(n,\tfrac{c}{n})-1, 	\qquad |A_0| = S_0 =1.\]

When c>1, we have \mathbb{E}[X_t] = c-1 > 0. Thus |A_t| is approximately a random walk with positive drift. The intuitive idea behind the proof of Part 1 is as follows. Initially, the random walk can hit 0 rapidly, in which case the component is small. However, if the random walk drifts away from zero, then with high probability it will never hit zero, in which case the component must keep growing until the random walk approximation is no longer accurate. Thus there do not exist any components of intermediate size: each component is either very small (|C_v|\le \beta\log n) or very large (we will show |C_v|\ge n^{2/3}, but the precise exponent is not important).

We now want to argue that any pair of large components must necessarily intersect. Consider two disjoint sets I and J of vertices of size |I|,|J|\ge n^{2/3}. As each edge is present in the graph with probability c/n, the probability that there is no edge between I and J is

    \[\mathbb{P}[\eta_{ij}=0\mbox{ for all }i\in I,~j\in J]= 	\bigg(1-\frac{c}{n}\bigg)^{|I|\,|J|} \le 	\bigg(1-\frac{c}{n}\bigg)^{n^{4/3}} \le 	e^{-cn^{1/3}}.\]

We therefore expect that any pair of large components must intersect with high probability. The problem with this argument is that we assumed that the sets I and J are nonrandom, while the random sets C_v themselves depend on the edge structure of the random graph (so the events \{C_v=I,C_{v'}=J\} and \{\mbox{no edges between }I,J\} are highly correlated). To actually implement this idea, we therefore need a somewhat more sophisticated approach.

To make the proof work, we revisit more carefully our earlier random walk argument. The process |A_t|\approx S_t has positive drift as \mathbb{E}[S_t] = S_0 + (c-1)t. Thus the process |A_t|-(c-1)t/2 is still approximately a random walk with positive drift! Applying the above intuition, either |A_t| dies rapidly (the component is small), or |A_t| grows linearly in t as is illustrated in the following figure:

[Figure: the number of active vertices |A_t| either hits zero quickly, or grows linearly in t.]

This means that the exploration process for a component of size >\beta\log n will not only grow large (|A_{n^{2/3}}|>0) with high probability, but that the exploration process will also possess a large number of active vertices (|A_{n^{2/3}}|\gtrsim n^{2/3}). To prove that all large components intersect, we will run different exploration processes simultaneously starting from different vertices. We will show that if two of these processes reach a large number of active vertices then there must be an edge between them with high probability, and thus the corresponding components must coincide. This resolves the dependence problem in our naive argument, as the edges between the sets of active vertices have not yet been explored and are therefore independent of the history of the exploration process.

The component size dichotomy

We now begin the proof in earnest. We will first show the dichotomy between large and small components: either the component size is \le\beta\log n, or the number of active vertices |A_t| grows linearly up to time n^{2/3}. To be precise, we consider the following event:

    \[B := \Big\{\text{either} ~  	|C_v| \le \beta \log n , 	~ \text{or} ~  	|A_t| \ge \Big(\frac{c-1}{2}\Big) t  	~ \text{for all} ~ \beta \log n \le t  	\le n^{2/3}\Big\}.\]

Our goal is to show that \mathbb{P}[B] is large.

Define the stopping time

    \[\sigma := \inf\Big\{t \ge \beta \log n  : 	|A_t| < \Big(\frac{c-1}{2}\Big)t \Big\}.\]

We can write

    \[B = \{ \tau \le \beta \log n ~ \text{or} ~ \sigma > n^{2/3} \}.\]

Now suppose \tau >\beta \log n and \sigma =t. Then \tau \ge t, as the exploration process is alive at time \beta \log n and stays alive until time t. We can therefore write

    \[\mathbb{P}[B^c] = \sum_{s=\beta \log n}^{n^{2/3}}  	\mathbb{P}[\tau>\beta \log n ~ \text{and} ~ \sigma = s] \le 	\sum_{s=\beta \log n}^{n^{2/3}}  	\mathbb{P}\Big[|A_s|  	< \Big(\frac{c-1}{2}\Big)s 	~ \text{and} ~ 	s\le\tau\Big].\]

To bound the probabilities inside the sum, we compare |A_s| to a suitable random walk.

The random walk argument

To bound the probability that |A_s| < (c-1)s/2, we must introduce a comparison random walk that lies beneath |A_t|. We use the same construction as was used in the previous lecture. Let

    \begin{equation*} 	\bar S_{t+1}=\left\{ 	\begin{array}{ll} 	\bar S_t -1 +  	\sum_{w \in U_t} \eta_{wi_t} +  	\sum_{w=1}^{n-(\frac{c+1}{2})n^{2/3} - |U_t|}  	\tilde \eta _w^t &\quad\text{if }  	|U_t|<n-(\frac{c+1}{2}) n^{2/3},\\ 	\bar S_t -1 +  	\sum_{w \in \bar U_t} \eta_{wi_t} & \quad   	\text{if } 	|U_t|\ge n-(\frac{c+1}{2}) n^{2/3} . 	\end{array}  	\right. \end{equation*}

where \bar S_0=1, \tilde \eta_w^t are i.i.d. \text{Bernoulli}(\frac{c}{n}) random variables independent of \eta_{ij}, i_t=\min A_t (the same i_t used in the exploration process), and \bar U_t is the set of the first n-(\frac{c+1}{2})n^{2/3} elements of U_t (if t \ge \tau, then A_t=\varnothing and thus i_t is undefined; in this case, we simply add n-(\frac{c+1}{2})n^{2/3} variables \tilde\eta_w^t).

As in the previous lecture, we have:

  • \bar S_t is a random walk with \text{Binomial}(n-(\frac{c+1}{2})n^{2/3},\frac{c}{n})-1 increments.
  • \bar S_t \le |A_t| whenever t \le \tau and |U_t| \ge n-(\frac{c+1}{2})n^{2/3}.

Now suppose that s\le n^{2/3} and |A_s|<(\frac{c-1}{2})s. Then

    \[|U_s| = n - |A_s| - |R_s| \ge 	n - (\tfrac{c+1}{2})s \ge n - (\tfrac{c+1}{2})n^{2/3}.\]

We therefore obtain for s\le n^{2/3}

    \[\mathbb{P}\Big[|A_s|  	< \Big(\frac{c-1}{2}\Big)s 	~ \text{and} ~ 	s\le\tau\Big] \le 	\mathbb{P}\Big[\bar S_s < \Big(\frac{c-1}{2}\Big)s\Big].\]

Thus computing \mathbb{P}[B^c] reduces to computing the tail probability of a random walk (or, in less fancy terms, of a sum of i.i.d. random variables). That is something we know how to do.

Lemma (Chernoff bound). Let X\sim \text{Binomial}(n,p). Then

    \[\mathbb{P}[X\le np-t] \le e^{-t^2/2np}.\]

Proof. Let \gamma>0. Then

    \begin{eqnarray*} 	\mathbb{P}[X\le np-t] &=& 	\mathbb{P}[e^{-\gamma X} \ge e^{-\gamma np + \gamma t}] \\ 	&\le& e^{\gamma np - \gamma t} \, \mathbb{E}[e^{-\gamma X}]\\ 	&=& e^{\gamma np - \gamma t} (1- (1-e^{-\gamma})p)^n \\ 	&\le& e^{\{\gamma-(1-e^{-\gamma})\}np - \gamma t}\\ 	&\le& e^{\gamma^2 np/2 - \gamma t}. \end{eqnarray*}

The result follows by optimizing over \gamma>0. \quad\square
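As a quick numerical sanity check (a sketch, not needed for the proof), one can compare the bound with the exact binomial lower tail; the bound is far from sharp, but it is exponentially small in t^2/np, which is all we will use.

    # Chernoff bound versus the exact lower tail of Binomial(n,p).
    import math
    from math import comb

    n, p = 1000, 0.5

    def exact_tail(t):
        # P[X <= np - t] by direct summation of the binomial pmf
        m = int(n * p - t)
        return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
                   for k in range(m + 1))

    for t in [20, 40, 60]:
        print(t, exact_tail(t), math.exp(-t * t / (2 * n * p)))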

Note that \bar S_s \sim 1 - s + \text{Binomial}(\{n-(\frac{c+1}{2})n^{2/3}\}s,\frac{c}{n}). We therefore have by the Chernoff bound

    \[\mathbb{P}\Big[\bar S_s < \Big(\frac{c-1}{2}\Big)s\Big] \le 	\mathbb{P}\Big[ 	\text{Binomial}(\{n-(\tfrac{c+1}{2})n^{2/3}\}s,\tfrac{c}{n}) 	\le \Big(\frac{c+1}{2}\Big)s\Big] \le 	e^{-(c-1-o(1))^2 s/8c}\]

for all s (here o(1) depends only on n and c). In particular, we have

    \[\mathbb{P}\Big[\bar S_s < \Big(\frac{c-1}{2}\Big)s\Big] \le 	n^{-\beta(c-1)^2/9c} 	\quad\text{for all }s\ge\beta\log n\]

provided n is sufficiently large. Thus we can estimate

    \[\mathbb{P}[B^c] \le 	\sum_{s=\beta \log n}^{n^{2/3}}  	\mathbb{P}\Big[\bar S_s < \Big(\frac{c-1}{2}\Big)s\Big]  	\le 	n^{2/3-\beta(c-1)^2/9c},\]

which goes to zero as n\to\infty provided that \beta is chosen sufficiently large. In particular, the component size dichotomy follows: choosing any \beta>15c/(c-1)^2, we obtain

    \[\mathbb{P}[|C_v|\le\beta\log n 	\text{ or }|C_v|\ge n^{2/3}\text{ for all }v] \ge 	1-n\mathbb{P}[B^c] \ge  	1-n^{5/3-\beta(c-1)^2/9c}\xrightarrow{n\to\infty}1.\]

Remark: Unlike in the proof of Part 2 in the previous lecture, here we do need to choose \beta sufficiently large for the proof to work. If \beta is too small, then the random walk \bar S_t cannot move sufficiently far away from zero to ensure that it will never return. In particular, even in the supercritical case, the second largest component has size of order \log n.
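Both halves of this dichotomy show up clearly in simulation. The following sketch (ours; the parameters n=2000, c=2 are hypothetical) lists all component sizes of a sample of G(n,\frac{c}{n}) and compares the largest with (1-\rho)n, where \rho is computed from the fixed-point equation of the previous lecture, and the second largest with \log n.

    # Simulation sketch: giant component versus second-largest component.
    import math, random

    def component_sizes(n, p, rng):
        # component sizes of a sample of G(n,p), via union-find
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for i in range(n):
            for j in range(i + 1, n):
                if rng.random() < p:
                    ri, rj = find(i), find(j)
                    if ri != rj:
                        parent[ri] = rj
        sizes = {}
        for v in range(n):
            sizes[find(v)] = sizes.get(find(v), 0) + 1
        return sorted(sizes.values(), reverse=True)

    rng = random.Random(0)
    n, c = 2000, 2.0
    sizes = component_sizes(n, c / n, rng)
    rho = 0.0
    for _ in range(200):
        rho = math.exp(c * (rho - 1))  # smallest root of rho = e^{c(rho-1)}
    print(sizes[0] / n, 1 - rho)       # largest component fraction vs 1 - rho
    print(sizes[1], math.log(n))       # second largest is of order log n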

Large components must intersect

To complete the proof, it remains to show that all large components must intersect. To do this, we will run several exploration processes at once starting from different vertices. If the sets of active vertices of two of these processes grow large, then there must be an edge between them with high probability, and thus the corresponding components intersect. Let us make this argument precise.

In the following, we denote by (U_t^v,A_t^v,R_t^v) the exploration process started at A_0=\{v\}. For each such process, we define the corresponding event B_v that we have investigated above:

    \[B_v := \Big\{\text{either} ~  	|C_v| \le \beta \log n , 	~ \text{or} ~  	|A_t^v| \ge \Big(\frac{c-1}{2}\Big) t  	~ \text{for all} ~ \beta \log n \le t  	\le n^{2/3}\Big\}.\]

We have shown above that, provided \beta>15c/(c-1)^2, we have

    \[\mathbb{P}\bigg[\bigcap_v B_v\bigg] 	\ge 1 - \sum_v \mathbb{P}[B_v^c] \ge 	1-o(1).\]

We can therefore estimate

    \begin{eqnarray*} 	&& 	\mathbb{P}[\exists\, v,v' \in K \text{ such that } C_v \neq C_{v'}] \\ 	&& \mbox{}= 	\mathbb{P}\big[\exists\, v,v' \in K \text{ such that } C_v \neq C_{v'}, 	\text{ and } 	|A^v_{n^{2/3}}| \ge (\tfrac{c-1}{2})n^{2/3} 	~\forall\, v \in K\big] + o(1) \phantom{\sum}\\ 	&& \mbox{}\le 	\sum_{v,v'} 	\mathbb{P}\big[C_v \neq C_{v'} \text{ and } 	 |A^v_{n^{2/3}}| \wedge |A_{n^{2/3}}^{v'}| \ge  	(\tfrac{c-1}{2})n^{2/3}\big]+o(1). \end{eqnarray*}

Now note that by time t, the exploration process (U_t^v,A_t^v,R_t^v) has only explored edges \eta_{ij} where i\in R_t^v (or j\in R_t^v), and similarly for (U_t^{v'},A_t^{v'},R_t^{v'}). It follows that

    \[\text{The conditional law of } 	\{\eta_{ij} : i,j \not\in R_t^v\cup R_t^{v'}\} 	\text{ given } 	(A_t^v,R_t^v,A_t^{v'},R_t^{v'}) 	\text{ is i.i.d.\ Bernoulli}(\tfrac{c}{n}).\]

In particular, if I^v,J^v,I^{v'},J^{v'} are disjoint subsets of vertices, then

    \[\mathbb{P}[\text{no edge between } I^v, I^{v'}  	| A_t^v =I^v, R_t^v=J^v,A_t^{v'}=I^{v'},R_t^{v'}=J^{v'}]  	= \bigg(1-\frac cn \bigg)^{|I^v|\, |I^{v'}|}.\]

On the other hand, C_v \ne C_{v'} implies that R_t^v,A_t^v,R_t^{v'},A_t^{v'} must be disjoint at every time t. Thus if C_v \ne C_{v'}, there can be no edges between vertices in A_t^v and A_t^{v'} at any time t (if such an edge exists, then the vertices connected by this edge will eventually be explored by both exploration processes, and then the sets of removed vertices will no longer be disjoint). Therefore,

    \begin{eqnarray*} 	&&  	\mathbb{P}\big[C_v \neq C_{v'} \text{ and } 	 |A^v_{n^{2/3}}| \wedge |A_{n^{2/3}}^{v'}| \ge  	(\tfrac{c-1}{2})n^{2/3}\big] \\ 	&& \mbox{} \le 	\mathbb{P}\big[ 	\text{no edge between }A^v_{n^{2/3}},A^{v'}_{n^{2/3}},\\ 	&& \phantom{\mbox{} \le \mathbb{P}\big[} 	A^v_{n^{2/3}},R^v_{n^{2/3}}, 	A^{v'}_{n^{2/3}},R^{v'}_{n^{2/3}}\text{ are disjoint}, ~ 	|A^v_{n^{2/3}}| \wedge |A_{n^{2/3}}^{v'}| \ge  	(\tfrac{c-1}{2})n^{2/3}\big] \\ 	&& \mbox{}\le 	\bigg(1-\frac{c}{n} \bigg)^{(c-1)^2n^{4/3}/4} 	\le e^{-c(c-1)^2n^{1/3}/4}. \end{eqnarray*}

Thus we finally obtain

    \[\mathbb{P}[C_v = C_{v'}~\forall v,v' \in K] \ge 	1- n^2e^{-c(c-1)^2n^{1/3}/4} - o(1) 	\xrightarrow{n\to\infty}1,\]

and the proof of the giant component theorem is complete.

Many thanks to Quentin Berthet for scribing this lecture!

27. April 2013 by Ramon van Handel
Categories: Random graphs

Lecture 5. Giant component (2)

Consider the Erdös-Rényi graph model G(n,\frac{c}{n}), and denote as usual by C_v the connected component of the graph that contains the vertex v. In the last lecture, we focused mostly on the subcritical case c < 1, where we showed that \max_v|C_v|\lesssim\log n. Today we will begin developing the supercritical case c > 1, where \max_v|C_v| \sim (1-\rho)n for a suitable constant 0<\rho<1. In particular, our aim for this and the next lecture is to prove the following theorem.

Theorem. Let c>1. Then

    \[\frac{\max_v|C_v|}{n}\xrightarrow{n\to\infty} 1 - \rho 	\quad\mbox{in probability},\]

where \rho is the smallest positive solution of the equation \rho = e^{c(\rho-1)}. Moreover, there is a \beta>0 such that all but one of the components have size \leq \beta\log n with probability tending to 1 as n\to\infty.

Beginning of the proof. Define the set

    \[K = \{ v: |C_v| > \beta \log n\}.\]

The proof of the Theorem consists of two main parts:

  • Part 1: \mathbb{P}[C_v = C_{v'}~ \forall v,v' \in K] \xrightarrow{n \to \infty} 1.
  • Part 2: \frac{|K|}{n} \xrightarrow{n \to \infty} 1-\rho in probability.

Part 1 states that all “large” components of the graph must intersect, forming one giant component. Some intuition for why this is the case was given at the end of the previous lecture. Part 2 computes the size of this giant component. In this lecture, we will concentrate on proving Part 2, and we will find out where the mysterious constant \rho comes from. In the next lecture, we will prove Part 1, and we will develop a detailed understanding of why all large components must intersect.

Before we proceed, let us complete the proof of the Theorem assuming Parts 1 and 2 have been proved. First, note that with probability tending to one, the set K is itself a connected component. Indeed, if v\in K and v'\not\in K then v,v' must lie in disjoint connected components by the definition of K. On the other hand, with probability tending to one, all v,v'\in K must lie in the same connected component by Part 1. Therefore, with probability tending to one, the set K forms a single connected component of the graph. By Part 2, the size of this component is \sim (1-\rho)n, while by the definition of K, all other components have size \le \beta\log n. This completes the proof. \quad\square

The remainder of this lecture is devoted to proving Part 2 above. We will first prove that the claim holds on average, and then prove concentration around the average. More precisely, we will show:

  1. \mathbb{E}\big[\frac{|K|}{n}\big] \xrightarrow{n\to\infty} 1-\rho.
  2. \mathrm{Var}\big[\frac{|K|}{n}\big] \xrightarrow{n\to\infty}0.

Together, these two claims evidently prove Part 2.

Mean size of the giant component

We begin by writing out the mean size of the giant component:

    \[\mathbb{E}\bigg[\frac{|K|}{n}\bigg] =  	\mathbb{E}\Bigg[ 	\frac{1}{n} \sum_{v \in [n]}  	\mathbf{1}_{|C_v|>\beta\log n} 	\Bigg] =  	\frac{1}{n} \sum_{v \in [n]}  	\mathbb{P}[\left|C_v\right| > \beta \log n ] = 	\mathbb{P}[\left|C_v\right| > \beta \log n ],\]

where we note that \mathbb{P}[\left|C_v\right| > \beta \log n ] does not depend on the vertex v by the symmetry of the Erdös-Rényi model. Therefore, to prove convergence of the mean size of the giant component, it suffices to prove that

    \[\mathbb{P}[\left|C_v\right| \le \beta \log n ] 	\xrightarrow{n\to\infty}\rho.\]

This is what we will now set out to accomplish.

In the previous lecture we defined the exploration process (U_t, A_t, R_t). We showed that

    \[|C_v| = \tau := \inf \{t: |A_t| = 0\}\]

and that for t<\tau

    \[|A_{t+1}| = |A_t| - 1 + \sum_{w \in U_t} \eta_{wi_t}, 	\qquad |A_0|=1,\]

where i_t \in A_t is an arbitrary nonanticipating choice, say, i_t = \min A_t (recall that (\eta_{ij})_{i,j\in[n]} denotes the adjacency matrix of the random graph). As \eta_{ij} are i.i.d. \mathrm{Bernoulli}(\frac{c}{n}) and as edges emanating from the set of unexplored vertices U_t have not yet appeared in previous steps, the process |A_t| is “almost'' a random walk: it fails to be a random walk as we only add |U_t| Bernoulli variables in each iteration, rather than a constant number. In the last lecture, we noted that we can estimate |A_t| from above by a genuine random walk S_t by adding some fictitious vertices. To be precise, we define

    \[S_{t+1}=S_t - 1 + \sum_{w\in U_t} \eta_{wi_t} + 	\sum_{w=1}^{n-|U_t|}\tilde\eta_w^t,\qquad S_0=1,\]

where \tilde\eta_w^t are i.i.d. \mathrm{Bernoulli}(\frac{c}{n}) independent of the \eta_{ij} (if t\ge\tau, then A_t=\varnothing and thus i_t is undefined; in this case, we simply add all n variables \tilde\eta_w^t). In the present lecture, we also need to bound |A_t| from below. To this end, we introduce another process \bar S_t as follows:

    \[\bar S_{t+1}=\left\{ 	\begin{array}{ll} 	\bar S_t - 1 + \sum_{w\in U_t} \eta_{wi_t} + 		\sum_{w=1}^{n-\beta\log n-|U_t|}\tilde\eta_w^t 	& \mbox{if }|U_t|< n-\beta\log n,\\ 	\bar S_t - 1 + \sum_{w\in \bar U_t} \eta_{wi_t} 	& \mbox{if }|U_t|\ge n-\beta\log n, 	\end{array}\right. 	\qquad \bar S_0=1,\]

where \bar U_t is the set consisting of the first n-\beta\log n elements of U_t in increasing order of the vertices (if t\ge\tau, we add n-\beta\log n variables \tilde\eta_w^t). The idea behind these processes is that S_t is engineered, by including “fictitious'' vertices, to always add n i.i.d. Bernoulli variables in every iteration, while \bar S_t is engineered, by including “fictitious'' vertices when |U_t| is small and omitting vertices when |U_t| is large, to always add n-\beta\log n i.i.d. Bernoulli variables in every iteration. The following facts are immediate:

  • S_t is a random walk with i.i.d. \mathrm{Binomial}(n,\frac{c}{n})-1 increments.
  • \bar S_t is a random walk with i.i.d. \mathrm{Binomial}(n-\beta\log n,\frac{c}{n})-1 increments.
  • S_t\ge |A_t| for all t\le\tau.
  • \bar S_t\le |A_t| for all t\le\tau on the event \{|C_v|\le\beta\log n\}.

To see the last property, note that the exploration process can only explore as many vertices as are present in the connected component C_v, so that |U_t|\ge n-|C_v| for all t; therefore, in this situation only the second possibility in the definition of \bar S_t occurs, and it is obvious by construction that then \bar S_t\le |A_t| (nonetheless, the first possibility in the definition must be included to ensure that \bar S_t is a random walk).

We now define the hitting times

    \[T = \inf\{t:S_t=0\},\qquad 	\bar T = \inf\{t:\bar S_t=0\}.\]

Then we evidently have

    \[\mathbb{P}[T \le \beta \log n] \le  	\mathbb{P}[|C_v| \le \beta \log n] \le  	\mathbb{P}[\bar T \le \beta \log n].\]

(Note how we cleverly chose the random walk \bar S_t precisely so that \bar T\le |C_v| whenever |C_v|\le \beta\log n). We have therefore reduced the problem of computing \mathbb{P}[|C_v| \le \beta \log n] to computing the hitting probabilities of random walks. Now we are in business, as this is something we know how to do using martingales!

The hitting time computation

Let us take a moment to gather some intuition. The random walks S_t and \bar S_t have increments distributed as \mathrm{Binomial}(n,\frac{c}{n})-1 and \mathrm{Binomial}(n-\beta\log n,\frac{c}{n})-1, respectively. As n\to\infty, both increment distributions converge to a \mathrm{Poisson}(c)-1 distribution, so we expect that \mathbb{P}[|C_v|\le\beta\log n]\sim\mathbb{P}[T_0\le\beta\log n] where T_0 is the first hitting time of the Poisson random walk. On the other hand, as \mathbb{P}[T_0\le\beta\log n] \to \mathbb{P}[T_0<\infty], we expect that \mathbb{P}[|C_v|\le\beta\log n]\to \mathbb{P}[T_0<\infty]. The problem then reduces to computing the probability that a Poisson random walk ever hits the origin. This computation can be done explicitly, and this is precisely where the mysterious constant \rho=\mathbb{P}[T_0<\infty] comes from!

We now proceed to make this intuition precise. First, we show that the probability \mathbb{P}[T<\beta\log n] can indeed be replaced by \mathbb{P}[T<\infty], as one might expect.

Lemma. \mathbb{P}[T \le \beta \log n] = \mathbb{P}[T < \infty] - o(1) as n\to\infty.

Proof. We need to show that

    \[\mathbb{P} [\beta \log n < T < \infty] \xrightarrow{n \to \infty} 0.\]

Note that as S_T=0 when T<\infty,

    \[\mathbb{P}[k \le T < \infty] = \sum_{t=k}^{\infty}  	\mathbb{P}[T=t] \le \sum_{t=k}^{\infty} \mathbb{P}[S_t = 0].\]

We can evidently write

    \[\mathbb{P}[S_t = 0] =  	\mathbb{E}[\mathbf{1}_{S_t=0} e^{\gamma S_t} ]  	\le  	\mathbb{E} [e^{\gamma S_t}] =  	e^{\gamma} \mathbb{E}[e^{\gamma X_1}]^t\]

where S_t - S_{t-1} =: X_t \sim \mathrm{Binomial}(n,\frac{c}{n})-1 and

    \[\mathbb{E} [e^{\gamma X_1}] = e^{-\gamma}  	(1+\tfrac{c}{n}(e^{\gamma}-1))^n  	\le e^{c(e^{\gamma}-1)-\gamma}.\]

Choosing \gamma = -\log c, we obtain \mathbb{E} [e^{\gamma X_1}] \le e^{1-c + \log c} < 1 for c \ne 1. Therefore,

    \[\mathbb{P} [\beta\log n < T < \infty] \le  	\frac{1}{c} \sum_{t=\beta\log n}^{\infty}  	e^{(1-c + \log c)t} 	\xrightarrow{n\to\infty}0.\]

This completes the proof. \qquad\square

By the above Lemma, and a trivial upper bound, we obtain

    \[\mathbb{P} [T< \infty] - o(1) \le  	\mathbb{P}[|C_v| \le \beta \log n] \le \mathbb{P}[\bar T < \infty].\]

To complete the computation of the mean size of the giant component, it therefore remains to show that \mathbb{P} [T<\infty] and \mathbb{P}[\bar T<\infty] converge to \rho. In fact, we can compute these quantities exactly.

Lemma. Let c>1, \frac{c}{n} < 1. Then

    \[\mathbb{P} [T<\infty] = \rho_n\]

where \rho_n is the smallest positive solution of \rho_n = (1+\frac{c}{n}(\rho_n-1))^n.

Proof. Recall the martingale M_t used in the last lecture:

    \[M_t = e^{\gamma S_t - \phi(\gamma)t}, 	\qquad 	\phi(\gamma)=\log \mathbb{E}[e^{\gamma X_1}] = 	\log\big[e^{-\gamma}(1+\tfrac{c}{n}(e^{\gamma}-1))^n\big].\]

Suppose that \gamma < 0 and \phi(\gamma) > 0. Then

    \[\mathbb{E} [e^{-\phi(\gamma)T}] =  	\mathbb{E} \Big[\lim_{k \to \infty} M_{k \wedge T}\Big]  	= \lim_{k \to \infty} \mathbb{E} [M_{k \wedge T}] = M_0 = e^{\gamma}.\]

The first equality holds since if T < \infty then S_T = 0 and M_{k \wedge T} \to M_T = e^{-\phi(\gamma)T}, while if T = \infty then S_{k} \ge 0 and M_k \to 0 = e^{-\phi(\gamma)T}. The second equality holds by dominated convergence since 0 \le M_{k \wedge T} \le 1, and the third equality is by the optional stopping theorem.

Now suppose we can find \gamma_n < 0 such that \phi(\gamma) \downarrow 0 as \gamma \uparrow \gamma_n. Then we have

    \[\rho_n := 	e^{\gamma_n} = \lim_{\gamma \uparrow \gamma_n}  	\mathbb{E}[e^{-\phi(\gamma)T}] = 	\mathbb{E} \Big[\lim_{\gamma \uparrow \gamma_n} e^{-\phi(\gamma)T}\Big]  	= \mathbb{P}[T < \infty]\]

by dominated convergence. Thus, evidently, it suffices to find \gamma_n with the requisite properties. Now note that as \phi(\gamma_n)=0, \gamma_n<0, and \phi(\gamma)>0 for \gamma<\gamma_n, we evidently must have

    \[\rho_n = (1+\tfrac{c}{n}(\rho_n-1))^n,\qquad 	\rho_n < 1,\qquad 	\rho < (1+\tfrac{c}{n}(\rho-1))^n \mbox{ for }\rho<\rho_n.\]

We can find such \rho_n by inspecting the following illustration:

[Figure: the curves \rho\mapsto\rho and \rho\mapsto(1+\frac{c}{n}(\rho-1))^n intersect at the smallest root \rho_n<1 and again at \rho=1.]

Evidently the requisite assumptions are satisfied when \rho_n is the smallest root of the equation \rho_n=(1+\tfrac{c}{n}(\rho_n-1))^n (but not for the larger root at 1!) \quad\square

Remark. Note that the supercritical case c>1 is essential here. If c\le 1 then the equation for \rho_n has no solutions <1, and the argument in the proof does not work. In fact, when c\le 1, we have \mathbb{P}[T<\infty]=1.

By an immediate adaptation of the proof of the previous lemma, we obtain

    \[\mathbb{P}[\bar T < \infty] = \bar \rho_n\]

where \bar\rho_n is the smallest positive solution of \bar \rho_n = (1+\frac{c}{n}(\bar \rho_n-1))^{n-\beta \log n}. Letting n\to\infty, we see that

    \[\lim_{n \to \infty} \mathbb{P}[|C_v|\le\beta\log n] =  	\lim_{n \to \infty} \mathbb{P}[T<\infty] =  	\lim_{n \to \infty} \mathbb{P}[\bar T<\infty] = \rho,\]

where \rho is the smallest solution of the equation \rho=e^{c(\rho-1)} (which is precisely the probability that the Poisson random walk hits zero, by a proof identical to that of the lemma above). We have therefore proved

    \[\mathbb{E}\bigg[\frac{|K|}{n}\bigg] \xrightarrow{n\to\infty} 1-\rho.\]
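The limiting constant is easy to compute numerically. In the following sketch (ours) we iterate the maps r\mapsto(1+\frac{c}{n}(r-1))^n and r\mapsto e^{c(r-1)} starting from 0; since each map is increasing and its value at 0 is positive, the iterates increase monotonically to precisely the smallest fixed point, which is the root appearing above.

    # Fixed-point computation of rho_n and its limit rho (a sketch).
    import math

    def rho_n(c, n, iters=200):
        # smallest positive root of r = (1 + c(r-1)/n)^n
        r = 0.0
        for _ in range(iters):
            r = (1 + c * (r - 1) / n) ** n
        return r

    def rho_limit(c, iters=200):
        # smallest positive root of r = e^{c(r-1)}
        r = 0.0
        for _ in range(iters):
            r = math.exp(c * (r - 1))
        return r

    c = 2.0
    print(rho_limit(c))                # approximately 0.203 for c = 2
    for n in [10, 100, 1000, 10000]:
        print(n, rho_n(c, n))          # approaches the same limit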

Variance of the giant component size

To complete the proof of Part 2 of the giant component theorem, it remains to show that

    \[\mathrm{Var}\bigg[\frac{|K|}{n}\bigg] = 	\mathrm{Var}\bigg[1-\frac{|K|}{n}\bigg]  	\xrightarrow{n\to\infty} 0.\]

To this end, let us consider

    \[\mathbb{E}\bigg[\bigg(1-\frac{|K|}{n}\bigg)^2\bigg] = 	\mathbb{E}\bigg[\bigg(\frac{1}{n} \sum_{v \in [n]}  		\mathbf{1}_{|C_v| \le \beta \log n}\bigg)^2\bigg] =  	\frac{1}{n^2} \sum_{v,w \in [n]}  	\mathbb{P}[|C_v| \le \beta \log n, |C_w| \le \beta \log n].\]

To estimate the terms in this sum, we condition on one of the components:

    \[\begin{array}{lcl} 	\mathbb{P}[|C_v| \le \beta \log n, |C_w| \le \beta \log n] 	&=& 	\mathbb{E}[ 	\,\mathbb{P}[|C_v| \le \beta \log n|C_w]\, 	\mathbf{1}_{|C_w| \le \beta \log n}\,] \\ 	&=& 	\sum_{I \subseteq [n], |I| \le \beta\log(n)}  	\mathbb{P}[C_w = I]\, \mathbb{P}[|C_v| \le \beta\log n|C_w = I]. 	\end{array}\]

To proceed, note that the event \{C_w=I\} can be written as

    \[\{C_w=I\} = \{(\eta_{ij})_{i,j\in I}\mbox{ defines a connected  	subgraph and }\eta_{ij}=0\mbox{ when }i\in I,~j\not\in I\}.\]

In particular, the event \{C_w=I\} is independent of the edges \eta_{ij} for i,j\not\in I. Therefore, for v\not\in I, the conditional law of C_v given C_w=I coincides with the (unconditional) law of C_v^{[n]\backslash I}, the connected component containing v in the induced subgraph on the vertices [n]\backslash I:

    \[\mathbb{P}[|C_v| \le \beta\log n|C_w = I] = 	\mathbb{P}[|C_v^{[n]\backslash I}|\le \beta\log n]\qquad 	\mbox{for }v\not\in I.\]

As this quantity only depends on |I| by the symmetry of the Erdös-Rényi model, we can evidently write

    \[\mathbb{P}[|C_v| \le \beta\log n|C_w = I] = 	\mathbb{P}[|C_1^{[n-|I|]}|\le \beta\log n] \le 	\mathbb{P}[|C_1^{[n-\beta\log n]}|\le \beta\log n]\]

for v\not\in I, |I|\le\beta\log n. In particular, we obtain

    \[\sum_{v,w\in[n]} 	\mathbb{P}[|C_v| \le \beta \log n, |C_w| \le \beta \log n] 	\le 	\mathbb{E}[n-|K|]\,\{ 	\beta\log n + n\, \mathbb{P}[|C_1^{[n-\beta\log n]}|\le \beta\log n]\}.\]

Now note that, by its definition, C_1^{[k]} is distributed precisely as the component containing vertex 1 in the G(k,\frac{c}{n}) random graph model. We can therefore show, repeating exactly the proof of the mean size of the giant component above, that

    \[\mathbb{P}[|C_1^{[n-\beta\log n]}|\le \beta\log n] = 	\rho + o(1) =  	\mathbb{E}\bigg[1-\frac{|K|}{n}\bigg] + o(1).\]

We have therefore shown that

    \[\mathbb{E}\bigg[\bigg(1-\frac{|K|}{n}\bigg)^2\bigg] = 	\frac{1}{n^2}\sum_{v,w\in[n]}  	\mathbb{P}[|C_v| \le \beta \log n, |C_w| \le \beta \log n] 	\le 	\mathbb{E}\bigg[1-\frac{|K|}{n}\bigg]^2+o(1),\]

which evidently implies

    \[\mathrm{Var}\bigg[1-\frac{|K|}{n}\bigg] \le o(1).\]

This is what we set out to prove.

Remark. It should be noted that the proof of Part 2 did not depend on the value of \beta, or even on the \beta\log n rate, in the definition of the set K: any sequence that grows sublinearly to infinity would have given the same result. This suggests that all but a vanishing fraction of vertices are contained in connected components of order \sim 1 or \sim n. We find out only in the next lecture why the rate \beta\log n (for \beta sufficiently large!) is important: only sufficiently large connected components are guaranteed to intersect, while there might (and do) exist components of order \sim\log n that are disjoint from the giant component. If we do not exclude the latter, we will not be able to prove Part 1.

Many thanks to Weichen Wang for scribing this lecture!

17. April 2013 by Ramon van Handel
Categories: Random graphs

Lecture 4. Giant component (1)

Consider the Erdös-Rényi graph model G(n,p). In previous lectures, we focused on the “high complexity regime”, i.e., as n goes to infinity, p is fixed. We discussed topics such as clique numbers and chromatic numbers. From now on, we shall consider the “low complexity regime”, where as n goes to infinity, p=\frac{c}{n} for a fixed constant c>0. As before, let \eta=(\eta_{i,j})_{i,j\in[n]} be the adjacency matrix of G(n, \frac{c}{n}). Then \{\eta_{ij}: i,j\in[n], i<j\} are i.i.d. Bernoulli random variables with success probability p=\frac{c}{n}.

Theorem 1. Let C_v be the connected component of G(n, \frac{c}{n}) that contains v\in [n].

  1. If c<1, then \max_v|C_v|= O(\log n) in probability.
  2. If c>1, then \max_v|C_v|\sim (1-\rho) n in probability, for some 0<\rho<1.
  3. If c=1, then \max_v|C_v|\sim n^{2/3} in distribution.

In the following lectures, we will aim to prove at least parts 1 and 2.

The exploration process

How to study |C_v|? We will explore C_v by starting an “exploration process'' at v that moves around C_v until all its sites have been visited. This walk will be constructed so that it hits each site once. So, the time it takes to explore all of C_v is exactly |C_v|. As a consequence, studying |C_v| reduces to studying a hitting time of a certain random walk, and to the latter we can apply martingale theory.

At each time t=0,1,2,\ldots, we maintain three sets of vertices:

    \begin{eqnarray*} 	R_t &=& \{\text{removed sites}\},\\ 	A_t &=& \{\text{active sites}\},\\  	U_t &=& \{\text{unexplored sites}\}. \end{eqnarray*}

Below is an illustration of how these sets are updated on a simple example.

  • At t=0, initialize A_0=\{v\}, U_0=[n]\backslash \{v\} and R_0=\varnothing. Namely, only v is active, all the vertices other than v are unexplored, and no vertices have been removed.

    [Figure: at t=0, only v is active; every other vertex is unexplored.]

  • At t=1, update A_1=A_0\backslash \{v\}\cup \{\text{neighbors of } v\}, U_1=U_0\backslash A_1 and R_1=\{v\}. Namely, all neighbors of v are moved from the unexplored set to the active set, and v itself is removed.

    [Figure: at t=1, the neighbors of v are active and v has been removed.]

  • At t=2, pick some x\in A_1 and update A_2=A_1\backslash \{x\}\cup \{\text{unexplored neighbors of } x\}, U_2=U_1\backslash A_2 and R_2=R_1\cup \{x\}. Namely, all unexplored neighbors of x are moved into the active set, and x itself is removed.

    [Figure: at t=2, the unexplored neighbors of x are active and x has been removed.]

  • At time t, we pick some vertex x from the current active set A_{t-1}, activate all unexplored neighbors of x and remove x itself.

This is a sort of local search along the connected component C_v: much like playing a game of Minesweeper! At each t, the choice of x\in A_{t-1} can be made arbitrarily (e.g., selecting the vertex with the smallest index or randomly selecting a vertex in A_{t-1}). The only requirement is that it is nonanticipating (only depending on the edges visited up to time t). For example, we cannot pick the vertex in A_{t-1} which has the largest number of unexplored neighbors, as this choice relies on unexplored edges.

A formal description of the “exploration process'':

  • Initialize A_0=\{v\}, U_0=[n]\backslash \{v\} and R_0=\varnothing.
  • For t\geq 0, we set

        \begin{eqnarray*} 	R_{t+1} &=& R_t\cup \{i_t\},\\ 	A_{t+1} &=& A_t\backslash \{i_t\}\cup \{w\in U_t: \eta_{wi_t}=1\},\\ 	U_{t+1} &=& U_t\backslash \{w\in U_t: \eta_{wi_t}=1\}, \end{eqnarray*}

    where i_t\in A_t is a nonanticipating but otherwise arbitrary choice.

This process stops when there are no more active vertices. It hits each vertex in C_v once and only once. At each time t, we remove one vertex in C_v. So the stopping time is exactly equal to the size of C_v:

    \[ \tau\equiv \inf\{t: |A_t|=0 \} = |C_v|. \]

So, we only need to study the stopping time \tau.
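The exploration process is easy to implement directly from this description. Here is a minimal sketch (ours; the parameters n=200, c=0.8 are hypothetical) that runs the process on a sampled adjacency matrix and verifies that the stopping time \tau indeed equals |C_v|, the latter computed independently by an ordinary graph search. (The removed set R_t is omitted since it is not needed to run the process.)

    # The exploration process (a sketch), with a check that tau = |C_v|.
    import random

    def sample_graph(n, p, rng):
        eta = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                if rng.random() < p:
                    eta[i][j] = eta[j][i] = 1
        return eta

    def explore(eta, v):
        n = len(eta)
        A, U = {v}, set(range(n)) - {v}
        t = 0
        while A:
            i_t = min(A)                         # a nonanticipating choice
            newly = {w for w in U if eta[w][i_t]}
            A = (A - {i_t}) | newly              # activate unexplored neighbors, remove i_t
            U -= newly
            t += 1
        return t                                 # tau = inf{t : |A_t| = 0}

    def comp_size(eta, v):
        # |C_v| by depth-first search, for comparison
        seen, stack = {v}, [v]
        while stack:
            u = stack.pop()
            for w in range(len(eta)):
                if eta[u][w] and w not in seen:
                    seen.add(w)
                    stack.append(w)
        return len(seen)

    rng = random.Random(0)
    n, c = 200, 0.8
    eta = sample_graph(n, c / n, rng)
    print(all(explore(eta, v) == comp_size(eta, v) for v in range(n)))  # True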

Recall that \eta_{ij} indicates whether there is an edge between vertices i and j, and \eta_{ij}\overset{\mathrm{i.i.d.}}{\sim} \text{Bernoulli}(\frac{c}{n}). By construction,

    \[ |A_{t+1}| = |A_t| - 1 + \sum_{w\in U_t} \eta_{wi_t}. \]

Now, let's do a thought experiment (wrong, but intuitive). Let's forget for the moment that some sites were previously visited, and assume that in each step all neighbors of i_t are still unvisited (note that when n is really large and t is relatively small, this assumption makes sense). Then |A_{t+1}|-|A_t|+1 is the sum of n independent \text{Bernoulli}(\frac{c}{n}) variables, which has a \text{Binomial}(n, \frac{c}{n}) distribution. This binomial variable is independent of the past because it only depends on unexplored edges; in addition, its distribution does not depend on |A_t|. Therefore, |A_t| would be a random walk with increment distribution \text{Binomial}(n, \frac{c}{n})-1\approx\text{Poisson}(c)-1. Then, studying |C_v| boils down to studying the first time a Poisson random walk hits zero! Of course, we cannot really ignore previously visited sites, but this rough intuition nonetheless captures the right idea as n\to\infty and will serve as a guiding principle for the proof.
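The quality of the \text{Binomial}(n,\frac{c}{n})\approx\text{Poisson}(c) approximation invoked here is easy to quantify numerically (a sketch; it is classical that the total variation distance is of order 1/n for fixed c):

    # d_TV(Binomial(n, c/n), Poisson(c)) by summing pmf differences.
    # The Poisson mass above n is negligible for the values used here.
    import math

    def tv_binomial_poisson(n, c):
        p = c / n
        b = (1 - p) ** n           # Binomial pmf at 0
        q = math.exp(-c)           # Poisson pmf at 0
        total = abs(b - q)
        for k in range(1, n + 1):
            b *= (n - k + 1) / k * p / (1 - p)   # binomial pmf recursion
            q *= c / k                           # Poisson pmf recursion
            total += abs(b - q)
        return total / 2

    for n in [10, 100, 1000, 10000]:
        print(n, tv_binomial_poisson(n, c=2.0))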

A comparison random walk

The reason that |A_t| is not a random walk is that there are only |U_t| edges (not n) to explore at time t. We can artificially create a random walk by adding n-|U_t| “fictitious'' points at each time t as follows.

Let \tilde{\eta}_{v}^t be i.i.d. \text{Bernoulli}(\frac{c}{n}) for t\geq 0, v\in[n], which are independent of (\eta_{ij}). Define

    \[ 	S_0 = 1, \qquad S_{t+1} = S_t - 1 +  	\sum_{w\in U_t}  	\eta_{wi_t} + \sum_{w\in [n]\backslash  	U_t}\tilde{\eta}^t_{w}. \]

(When t\ge\tau, then A_t=\varnothing and thus i_t is undefined; in this case, we simply add all n variables \tilde\eta^t_w.)
Note that \sum_{w\in U_t} \eta_{wi_t} is the sum of edges from A_t to U_t. Since we have not explored U_t yet, those edges are independent of all edges explored up to time t (here we use that i_t is nonanticipating). We therefore see that (S_t) is indeed a random walk with increment

    \[ S_{t}- S_{t-1} \sim \text{Binomial}(n, \tfrac{c}{n})-1. \]

Moreover, since all \tilde{\eta}_{w}^t are nonnegative,

    \[ S_{t+1} - S_t \geq |A_{t+1}| - |A_t| \]

as long as t<\tau. It follows that |A_t| is dominated by the genuine random walk S_t, that is,

    \[|A_t| \le S_t \quad\mbox{for all }t\le \tau.\]

We can now obtain bounds on |C_v| by analyzing hitting times of the random walk S_t.

The subcritical case c<1

Define the first time the comparison walk hits zero as

    \[T\equiv\inf\{t: S_t=0\}.\]

Since |A_t|\leq S_t for t\le\tau, it is obvious that

    \[|C_v|=\tau\leq T.\]

Now we study T. The intuition is that as \mathbb{E}[S_{t+1}-S_t]=c-1, (S_t) is a random walk with negative drift in the subcritical case c<1. Thus \mathbb{P}[T<\infty]=1, and in fact the hitting time T has nice tails!

Lemma 2. Let c<1 and \alpha=c-1-\log c>0. Then for any positive integer k,

    \[ \mathbb{P}[T\geq k]\leq \frac{1}{c} e^{-\alpha k}. \]

We will prove this lemma below. Using the lemma, we immediately obtain:

Corollary 3. If c<1, then for any a>\frac{1}{\alpha}=\frac{1}{c-1-\log c}

    \[ \mathbb{P}\big[\max_v|C_v|\geq a\log n\big] \xrightarrow{n\to\infty} 0.  \]

Proof. Applying Lemma 2 and the union bound,

    \begin{eqnarray*} \mathbb{P}\big[\max_v|C_v|\geq a\log n\big] &\leq & \sum_{v\in[n]} \mathbb{P}[|C_v|\geq a\log n]\\ &\leq & \sum_{v\in[n]} \mathbb{P}[T\geq a\log n] \\ &\leq & \frac{n}{c} n^{-a\alpha} \to 0. \qquad\square \end{eqnarray*}

This corollary proves part 1 of Theorem 1. In fact, it turns out that the constant \frac{1}{\alpha} is tight: by using the second moment method, one can prove a matching lower bound on \max_v|C_v| (see, for example, the lecture notes by van der Hofstad), which implies that in fact \max_v|C_v|\sim \frac{1}{\alpha}\log n in probability. The proof is not much more difficult, but we prefer to move on to the supercritical case.

Remark. It might seem somewhat surprising that the result we obtain is so sharp, considering that we have blindly replaced |A_t| by the larger quantity S_t. However, in going from |A_t| to S_t we do not lose as much as one might think at first sight. When n is large and t is relatively small, the excess term \sum_{w\in[n]\backslash U_t}\tilde{\eta}^t_w in the definition of S_t is zero with high probability, as most vertices are unexplored and the Bernoulli success probability \frac{c}{n} of the \tilde{\eta}^t_w is very small. With a bit of work, one can show that S_t and |A_t| will actually stick together for times t\lesssim\log n with probability going to one as n\to\infty. Thus, in the subcritical case where the random walk only lives for \sim\log n time steps, nothing is lost in going from |A_t| to S_t, and our rough intuition that |A_t| should behave like a random walk as n\to\infty is vindicated.

To wrap up the subcritical case, it remains to prove the lemma.

Proof of Lemma 2. By the Markov inequality,

    \[ \mathbb{P}[T\geq k] = \mathbb{P}[e^{\alpha T}\geq e^{\alpha k}] \leq e^{-\alpha k} \mathbb{E}[e^{\alpha T}].  \]

It remains to bound \mathbb{E}[e^{\alpha T}]\le\frac{1}{c}, which is a standard exercise in martingale theory.

Recall that

    \[S_t = 1+\sum_{k=1}^t X_k,\]

where X_k are i.i.d. Define the log-moment generating function \phi(\beta)= \log\mathbb{E}[e^{\beta X_k}], and let

    \[ M_t \equiv e^{\beta S_t-\phi(\beta)t}, \qquad \text{for }t\geq 0.  \]

Since e^{\beta S_t}= e^{\beta}\prod_{k=1}^t e^{\beta X_k} and X_t is independent of M_0,M_1,\cdots,M_{t-1},

    \[ 	\mathbb{E}[M_t|M_0,\cdots, M_{t-1}] = M_{t-1}\, 	\mathbb{E}[\tfrac{M_t}{M_{t-1}}|M_0,\cdots, M_{t-1}]  	= M_{t-1}\, \mathbb{E}[e^{\beta X_t-\phi(\beta)}] = M_{t-1}, \]

where we have used \mathbb{E}[e^{\beta X_k - \phi(\beta)}]=1. So (M_t) is a martingale.

In the case \beta>0 and \phi(\beta)<0,

    \[\mathbb{E}[e^{-\phi(\beta)T}] = \mathbb{E}\big[\lim_{n\to\infty} M_{T\wedge n}\big] \leq\liminf_{n\to\infty} \mathbb{E}[M_{T\wedge n}]= M_0 = e^{\beta}.\]

The inequality is by Fatou's lemma and the second equality is by the optional stopping theorem. To see the first equality, note that if T<\infty, then S_{T\wedge n}\to S_T=0 and T\wedge n\to T as n\to\infty, while if T=\infty, then S_{T\wedge n}>0 for all n and T\wedge n\to\infty. Thus e^{-\phi(\beta)T}=\lim_{n\to\infty}e^{\beta S_{T\wedge n}-\phi(\beta)(T\wedge n)}=\lim_{n\to\infty}M_{T\wedge n}.

Next, we compute \phi(\beta). Recall that (X_k+1)\sim \text{Binomial}(n, \frac{c}{n}). It has the same distribution as the sum of n i.i.d. \text{Bernoulli}(\frac{c}{n}) variables. For Y\sim\text{Bernoulli}(p), we have \mathbb{E}[e^{\beta Y}]= 1+(e^{\beta}-1)p. Therefore,

    \begin{eqnarray*} -\phi(\beta) &=& - \log \mathbb{E}[e^{\beta X_t}]\\ &=& -\log\big( e^{-\beta} (1+(e^{\beta}-1)\tfrac{c}{n})^n \big)\\ &=& \beta - n\log\big(1+(e^{\beta}-1)\tfrac{c}{n}\big)\\ &\geq& \beta - c(e^{\beta}-1), \end{eqnarray*}

where the last inequality is because -\log(1+x)\geq -x for any x. Now, by setting \beta=-\log c, we obtain that -\phi(\beta)\geq c-1-\log c=\alpha. Thus we have shown \mathbb{E}[e^{\alpha T}]\le\mathbb{E}[e^{-\phi(\beta)T}]\le e^\beta=\frac{1}{c}. \quad\square
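Lemma 2 can also be checked by simulation. The following Monte Carlo sketch (ours; the hypothetical choice c=1/2 is for illustration) compares the empirical tail of T with the bound \frac{1}{c}e^{-\alpha k}; the bound is conservative, but the exponential decay is apparent.

    # Monte Carlo check of Lemma 2 for a subcritical walk.
    import math, random

    def hitting_time(n, c, rng, tmax=10**6):
        s = 1
        for t in range(1, tmax + 1):
            # Binomial(n, c/n) - 1 increment
            s += sum(rng.random() < c / n for _ in range(n)) - 1
            if s == 0:
                return t
        return tmax + 1

    rng = random.Random(0)
    n, c, reps = 1000, 0.5, 2000
    alpha = c - 1 - math.log(c)
    T = [hitting_time(n, c, rng) for _ in range(reps)]
    for k in [1, 2, 4, 8, 16]:
        emp = sum(t >= k for t in T) / reps
        print(k, emp, math.exp(-alpha * k) / c)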

The supercritical case c>1

The goal of the following lectures is to prove part 2 of Theorem 1. More precisely, we will prove:

Theorem 4. Let c>1. Then

    \[ \frac{\max_v|C_v|}{n}\xrightarrow{n\to\infty} 1 - \rho \]

in probability, where \rho is the smallest positive solution of the equation \rho = e^{c(\rho-1)}. Moreover, there is \beta>0 such that all but one of the components have size \leq \beta\log n, with probability tending to 1 as n\to\infty.

This theorem says that with probability tending to 1, there is a unique giant component whose size is (1-\rho)n, and all other components are small with size \leq \beta\log n.

Here we provide some vague intuition for this theorem. When c>1, the random walk (S_t) satisfies \mathbb{E}[S_t-S_{t-1}]=c-1>0, i.e., (S_t) has positive drift. Then \mathbb{P}[T<\infty]<1! In fact, the further away it starts from 0, the smaller the probability it will ever hit 0. Consider the two situations:

  • S_t dies quickly: this implies that the component is small.
  • S_t lives long: then it must live very long, as once it gets far away from 0, the probability of returning is very small. This implies that the component must be very large (if we pretend that S_t=|A_t|).

Of course, S_t is not |A_t| (obviously |A_t| eventually hits 0). But the intuition explains that there cannot be components of intermediate size: given any vertex v, either |C_v| is small (\lesssim \log n), or it must get very large (\gtrsim n^{2/3}, say). In fact, we will find that all components of size \geq \beta\log n must grow all the way to \gtrsim n^{2/3}. However, any pair of large components must intersect with high probability, as there are many potential edges between them! Therefore, all vertices v with |C_v|\geq \beta\log n should be in the same giant component. We then show that the number of such vertices is (1-\rho)n with high probability.

Many thanks to Tracy Ke for scribing this lecture!

06. April 2013 by Ramon van Handel
Categories: Random graphs

Next lecture: April 4

Just a reminder that next week (March 21) is Spring Break, while the week after (March 28) there will be no lecture due to the ORFE/PACM colloquium of Elchanan Mossel.

The next Stochastic Analysis Seminar lecture will be on April 4. We will start fresh with a new topic: the study of the giant component of an Erdös-Rényi graph.

15. March 2013 by Ramon van Handel
Categories: Announcement
