Fall 2013: Information theoretic methods

While information theory has traditionally been based on probabilistic methods, ideas and methods from information theory have recently played an increasingly important role in various areas of probability theory itself (as well as in statistics, combinatorics, and other areas of mathematics). The goal of these informal lectures is to introduce some topics and tools at the intersection of probability theory and information theory.  No prior knowledge of information theory will be assumed.  Potential topics include: entropic central limit theorems, entropic Poisson approximation, and related information-theoretic and probabilistic inequalities; connections to logarithmic Sobolev inequalities and Stein’s method; entropic inequalities for additive, matroidal, and tree structures, and their applications; transportation cost-information inequalities and their relation to concentration of measure; basic large deviations theory.

Prerequisites: Probability at the level of ORF 526 is assumed.

Time and location: Thursdays, 4:30-6:00, Bendheim Center classroom 103.
The first lecture will be on September 19.


07. September 2013 by Ramon van Handel
Categories: Information theoretic methods

End of Spring 2013 seminar

That was it for the Spring 2013 stochastic analysis seminar! We will be back in the Fall. The topic and location/time slot will be announced in September.

03. May 2013 by Ramon van Handel
Categories: Announcement

Lectures 7/8. Games on random graphs

The following two lectures by Rene Carmona are on games on random graphs.
Many thanks to Patrick Rebeschini for scribing these lectures!

In the following we discuss some results from the paper “Connectivity and equilibrium in random games” by Daskalakis, Dimakis, and Mossel. We define a random game on a random graph and characterize the graphs that are likely to exhibit pure Nash equilibria for this game. We show that if the random graph is drawn from the Erdös-Rényi distribution, then in the high-connectivity regime the law of the number of pure Nash equilibria converges to a Poisson distribution as the size of the graph increases.

Let G=(V,E) be a simple (that is, undirected and with no self-edges) graph, and for each v\in V denote by N(v) the set of neighbors of v, that is, N(v):=\{v'\in V : (v,v')\in E\}. We think of each vertex in V as a player in the game that we are about to introduce. At the same time, we think of each edge (v,v')\in E as a strategic interaction between players v and v'.

Definition (Game on a graph). For each v\in V let S_v represent the set of strategies for player v, assumed to be a finite set. We naturally extend this definition to families of players: for each A\subseteq V, let S_A= \times_{v\in A} S_v be the set of joint strategies of the players in A. For each v\in V, denote by u_v:  S_v\times S_{N(v)}\ni(\sigma_v,\sigma_{N(v)})\rightarrow u_v(\sigma_v,\sigma_{N(v)}) \in \mathbb{R} the reward function for player v. A game is a collection (S_v,u_v)_{v\in V}.

The above definition describes a game that is static, in the sense that the game is played only once, and local, in the sense that the reward function of each player depends only on its own strategy and on the strategies of the players in its neighborhood. We now introduce the notion of pure Nash equilibrium.

Definition (Pure Nash equilibrium). We say that \sigma \in S_V is a pure Nash equilibrium (PNE) if for each v\in V we have

    \[u_v(\sigma_v,\sigma_{N(v)}) \ge u_v(\tau,\sigma_{N(v)}) 	\qquad\text{for each $\tau\in S_v$}.\]

A pure Nash equilibrium represents a state where no player can be better off by changing his own strategy if he is the only one who is allowed to do so. In order to investigate the existence of a pure Nash equilibrium it suffices to study the best response function defined below.

Definition (Best response function). Given a reward function u_v for player v\in V, we define the best response function \operatorname{BR}_v for v as

    \[\operatorname{BR}_v(\sigma_v,\sigma_{N(v)}) := 	\begin{cases}  		1	& \text{if } \sigma_v\in\arg\sup_{\tau\in S_v} u_v(\tau,\sigma_{N(v)}),\\ 		0	& \text{otherwise}. 	\end{cases}\]

Clearly, \sigma is a pure Nash equilibrium if and only if \operatorname{BR}_v(\sigma_v,\sigma_{N(v)})=1 for each v\in V. We now define the type of random games that we will be interested in; in order to do so, we need to specify the set of strategies and the reward function for each player.

Definition (Random game on a fixed graph). For a graph G=(V,E) and an atomless probability measure \mu on \mathbb{R}, let \mathcal{D}_{G,\mu} be the associated random game defined as follows:

  1. S_v=\{0,1\} for each v\in V;
  2. \{ u_v(\sigma_v,\sigma_{N(v)}) \}_{v\in V, \sigma_v \in S_v, \sigma_{N(v)} \in S_{N(v)}} is a collection of independent identically distributed random variables with distribution \mu.

Remark. For each game \mathcal{D}_{G,\mu} the family \{\operatorname{BR}_v(0,\sigma_{N(v)})\}_{v\in V, \sigma_{N(v)}\in S_{N(v)}} is a collection of independent random variables that are uniformly distributed in \{0,1\}, and for each v\in V, \sigma_{N(v)}\in S_{N(v)} we have \operatorname{BR}_v(1,\sigma_{N(v)}) = 1-\operatorname{BR}_v(0,\sigma_{N(v)}) almost surely. In fact, note that \operatorname{BR}_v(0,\sigma_{N(v)})=1 if and only if u_v(0,\sigma_{N(v)}) \ge u_v(1,\sigma_{N(v)}), and this event has probability 1/2 since the two random variables on either side of the inequality are independent with the same atomless law \mu. As far as the analysis of the existence of pure Nash equilibria is concerned, we could take the present notion of best response functions as the definition of our random game on a fixed graph. In fact, note that the choice of \mu in \mathcal{D}_{G,\mu} plays no role in our analysis, and we would obtain the same results by choosing different (atomless) distributions for sampling (independently) the reward functions of the players.
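
As a quick illustration (our addition, not part of the original lectures), the construction above is easy to simulate. The following Python sketch builds a random game \mathcal{D}_{G,\mu} on a small fixed graph with \mu=\mathrm{Uniform}[0,1], counts the pure Nash equilibria via the best response functions, and checks the coin-flip property from the remark; the 4-cycle graph and all names are illustrative choices.

```python
import itertools, random

random.seed(0)
n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # a 4-cycle, chosen arbitrarily
nbrs = {v: set() for v in range(n)}
for a, b in edges:
    nbrs[a].add(b); nbrs[b].add(a)

# i.i.d. rewards u_v(sigma_v, sigma_{N(v)}) with mu = Uniform[0,1] (atomless)
u = {(v, sv, snb): random.random()
     for v in range(n) for sv in (0, 1)
     for snb in itertools.product((0, 1), repeat=len(nbrs[v]))}

def BR(v, sv, snb):
    # best response indicator: 1 iff sv maximizes the reward given snb
    return 1 if u[(v, sv, snb)] >= u[(v, 1 - sv, snb)] else 0

def is_pne(sigma):
    # sigma is a pure Nash equilibrium iff every player is best-responding
    return all(BR(v, sigma[v], tuple(sigma[w] for w in sorted(nbrs[v])))
               for v in range(n))

Z = sum(is_pne(s) for s in itertools.product((0, 1), repeat=n))

# BR_v(1, .) = 1 - BR_v(0, .) almost surely (ties have probability zero)
for v in range(n):
    for snb in itertools.product((0, 1), repeat=len(nbrs[v])):
        assert BR(v, 0, snb) + BR(v, 1, snb) == 1
```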

Denote by G(n,p) the distribution of an Erdös-Rényi random graph with n vertices where each edge is present independently with probability p. We now introduce the notion of a random game on a random graph.

Definition (Random game on a random graph). For each n\in\mathbb{N}, p\in(0,1) and each probability measure \mu on \mathbb{R}, do the following:

  1. choose a graph G from G(n,p);
  2. choose a random game from \mathcal{D}_{G,\mu} for the graph G.

Henceforth, given a random variable X let \mathcal{L}(X) represent its distribution. Given two measures \mu and \nu on a measurable space, define the total variation distance between \mu and \nu as

    \[\| \mu - \nu \|_{TV} := \sup_{f: \| f\| \le 1} | \mu f - \nu f |,\]

where the supremum is taken over measurable functions such that the supremum norm is less than or equal to 1.
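
For discrete distributions this definition reduces to the \ell^1 distance between the probability mass functions; note that in this convention the total variation distance takes values in [0,2]. A minimal Python check (our addition, with distributions chosen for illustration):

```python
def tv(mu, nu, support):
    # || mu - nu ||_TV = sup_{|f| <= 1} |mu f - nu f| = sum_x |mu(x) - nu(x)|,
    # since the supremum is attained at f = sign(mu - nu)
    return sum(abs(mu.get(x, 0.0) - nu.get(x, 0.0)) for x in support)

mu = {0: 0.5, 1: 0.5}   # Bernoulli(1/2)
nu = {0: 0.7, 1: 0.3}   # Bernoulli(0.3)
assert abs(tv(mu, nu, {0, 1}) - 0.4) < 1e-12
assert tv({0: 1.0}, {1: 1.0}, {0, 1}) == 2.0  # disjoint supports give the maximum, 2
```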

We are now ready to state the main theorem that we will prove in the following (Theorem 1.9 in Daskalakis, Dimakis, and Mossel).

Theorem 1 (High connectivity regime). Let Z^{n,p} be the number of pure Nash equilibria in the random game on a random graph defined above. Define the high-connectivity regime as

    \[p(n)= \frac{(2+\varepsilon(n)) \log n}{n},\]

where \varepsilon:\mathbb{N}\rightarrow\mathbb{R} satisfies the following two properties:

    \begin{align*} 	\varepsilon (n) &> c &\text{for each $n\in\mathbb{N}$, for some $c>0$,}\\ 	\varepsilon (n) &\le \frac{n}{\log n} - 2 &\text{for each $n\in\mathbb{N}$.} \end{align*}

Then, we have

    \[\mathbf{P}_{G(n,p)} \{ \| \mathcal{L}(Z^{n,p(n)}) - \mathcal{L}(N_1) \|_{TV}  	\le O(n^{-\varepsilon/8}) + e^{-\Omega(n)} \}  	\ge 1- \frac{2}{n^{\varepsilon/8}},\]

where \mathbf{P}_{G(n,p)} denotes the conditional probability given the graph G(n,p) and N_1 is a Poisson random variable with mean 1. In particular,

    \[\lim_{n\rightarrow\infty} \mathcal{L}(Z^{n,p(n)}) = \mathcal{L}(N_1),\]

which shows that in this regime a pure Nash equilibrium exists with probability converging to \mathbf{P}\{N_1>0\}=1-\frac{1}{e} as the size of the network increases.

Remark. Using the terminology of statistical mechanics, the first result in Theorem 1 represents a quenched-type result since it involves the conditional distribution of a system (i.e., the game) given its environment (i.e., the graph). On the other hand, the second result represents an annealed-type result, where the unconditional probability is considered.

In order to prove Theorem 1 we need the following lemma on Poisson approximation. The lemma is adapted from the results of R. Arratia, L. Goldstein, and L. Gordon (“Two moments suffice for Poisson approximations: the Chen-Stein method”, Ann. Probab. 17, 9-25, 1989) and it shows how the total variation distance between the law of a sum of Bernoulli random variables and a Poisson distribution can be bounded in terms of the first and second moments of the Bernoulli random variables. This result is a particular instance of Stein’s method in probability theory.

Lemma 2 (Arratia, Goldstein, and Gordon, 1989). Let \{X_i\}_{i=0,1,\ldots,N} be a collection of Bernoulli random variables with p_i:=\mathbf{P}\{X_i=1\}. For each i\in\{0,1,\ldots,N\} let B_i\subseteq \{0,1,\ldots,N\} be such that \{X_j\}_{j\in B^c_i} is independent of X_i. Define

    \begin{align*} 	b_1 := \sum_{i=0}^N \sum_{j\in B_i} p_ip_j,\qquad\qquad 	b_2 := \sum_{i=0}^N \sum_{j\in B_i\setminus \{i\}} p_{ij}, \end{align*}

where p_{ij} := \mathbf{P}\{X_i=1,X_j=1\}. Define Z:=\sum_{i=0}^N X_i and \lambda := \mathbf{E} Z = \sum_{i=0}^N p_i. If N_{\lambda} is a Poisson random variable with mean \lambda, then

    \[\| \mathcal{L} (Z) - \mathcal{L} (N_\lambda) \|_{TV} \le 2 (b_1+b_2).\]
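
Before turning to the proof, a numerical sanity check may be helpful (our addition, with illustrative parameter values). For i.i.d. Bernoulli variables we may take B_i=\{i\}, so that b_1=(N+1)p^2 and b_2=0, and compare \mathrm{Binomial}(N+1,p) with \mathrm{Poisson}((N+1)p) directly:

```python
import math

N1, p = 100, 0.03          # N + 1 trials; illustrative values
lam = N1 * p
binom = [math.comb(N1, k) * p**k * (1 - p)**(N1 - k) for k in range(N1 + 1)]
poi = [math.exp(-lam) * lam**k / math.factorial(k) for k in range(N1 + 1)]

# TV in the convention of the text: sum of |differences| over the binomial
# support, plus the Poisson tail mass beyond it
tail = 1.0 - sum(poi)
tv = sum(abs(b - q) for b, q in zip(binom, poi)) + tail

b1, b2 = N1 * p**2, 0.0    # B_i = {i}: only the diagonal term survives
assert tv <= 2 * (b1 + b2)
```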

Proof. We define the following operators acting on functions f : \mathbb{N}\rightarrow\mathbb{R}:

    \begin{align*} 	[Df](n) &:= f(n+1)-f(n)&\text{for each $n\in\mathbb{N}$},\\ 	[Tf](n) &:= nf(n)-\lambda f(n+1)&\text{for each $n\in\mathbb{N}$},\\ 	[Sf](n+1) &:= - \frac{\mathbf{E}[f(N_\lambda) \mathbf{1}_{N_\lambda \le n}]} 	{\lambda \mathbf{P}\{N_\lambda = n\}} &\text{for each $n\in\mathbb{N}$}, 	& &[Sf](0):=0. \end{align*}

We point out that T characterizes N_\lambda in the sense that \mathbf{E} [[Tf](M)] = 0 for every bounded f : \mathbb{N}\rightarrow\mathbb{R} if and only if M is a Poisson random variable with mean \lambda; T is an example of a Stein operator. First of all, we show that for each f : \mathbb{N}\rightarrow\mathbb{R} we have TSf = f. In fact, if n=0 we have

    \[[TSf](0) = T[Sf](0) = - \lambda [Sf](1)  	= \frac{\mathbf{E}[f(N_\lambda) \mathbf{1}_{N_\lambda \le 0}]} 	{\mathbf{P}\{N_\lambda = 0\}} = f(0)\]

and if n\ge 1 we have

    \begin{align*} 	[TSf](n) &= T[Sf](n) =  n[Sf](n) - \lambda [Sf](n+1)\\ 	&= -n \frac{\mathbf{E}[f(N_\lambda) \mathbf{1}_{N_\lambda \le n-1}]} 	{\lambda \mathbf{P}\{N_\lambda = n-1\}}  	+ \frac{\mathbf{E}[f(N_\lambda) \mathbf{1}_{N_\lambda \le n}]} 	{\mathbf{P}\{N_\lambda = n\}}\\ 	&= \frac{\mathbf{E}[f(N_\lambda) \mathbf{1}_{N_\lambda \le n}] 	- \mathbf{E}[f(N_\lambda) \mathbf{1}_{N_\lambda \le n-1}]} 	{\mathbf{P}\{N_\lambda = n\}}\\ 	&= f(n). \end{align*}
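
These manipulations are easy to verify numerically. The following sketch (our addition) implements the operators S and T for a Poisson(\lambda) variable, exploiting the fact that the indicator truncates the expectation to a finite sum, and checks that TSf=f on a range of values:

```python
import math

lam = 1.3
def pmf(k):
    return math.exp(-lam) * lam**k / math.factorial(k)

def S(f):
    # [Sf](0) = 0 and [Sf](n+1) = -E[f(N) 1_{N <= n}] / (lam * P{N = n})
    def Sf(n):
        if n == 0:
            return 0.0
        m = n - 1
        return -sum(f(k) * pmf(k) for k in range(m + 1)) / (lam * pmf(m))
    return Sf

def T(f):
    # Stein operator for Poisson(lam): [Tf](n) = n f(n) - lam f(n+1)
    return lambda n: n * f(n) - lam * f(n + 1)

f = lambda n: n * n        # an arbitrary test function
TSf = T(S(f))
assert all(abs(TSf(n) - f(n)) < 1e-9 for n in range(10))
```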

For each i\in\{0,1,\ldots,N\} define \tilde Z_i := \sum_{j\in B^c_i}X_j and Y_i := Z-X_i = \sum_{j\in\{0,1,\ldots,N\}\setminus \{i\}} X_j. The following properties hold:

  (i) \tilde Z_i \le Y_i \le Z,
  (ii) X_i f(Z) = X_i f(Y_i+1),
  (iii) f(Y_i+1)-f(Z+1) = X_i[f(Y_i+1)-f(Y_i+2)].

In what follows, consider any given function h:\mathbb{N}\rightarrow \mathbb{R} such that \| h\| := \sup_{n\in\mathbb{N}} |h(n)| \le 1. Define the function \tilde h as \tilde h(n):=h(n)-\mathbf{E}h(N_\lambda) for each n\in\mathbb{N}, and let f:= S \tilde h. From what was seen above we have Tf=\tilde h and we get

    \begin{align*} 	\mathbf{E} [h(Z)-h(N_\lambda)] =& \ \mathbf{E} \tilde h (Z) = \mathbf{E} [Tf] (Z) 	= \mathbf{E} [Zf(Z)-\lambda f (Z+1)] \\ 	=& \ \sum_{i=0}^N \mathbf{E} [X_i f(Z) - p_i f(Z+1)] 	\stackrel{\text{(ii)}}{=} \sum_{i=0}^N \mathbf{E} [X_i f(Y_i+1) - p_i f(Z+1)]\\ 	=& \ \sum_{i=0}^N \mathbf{E} [p_i f(Y_i+1) - p_i f(Z+1)]  	+ \sum_{i=0}^N \mathbf{E} [(X_i - p_i)f(Y_i+1)]\\ 	\stackrel{\text{(iii)}}{=}& \ \sum_{i=0}^N p_i \mathbf{E} [X_i (f(Y_i+1) - f(Y_i+2))] \\ 	& \ + \sum_{i=0}^N \mathbf{E} [(X_i - p_i)(f(Y_i+1)-f(\tilde Z_i + 1))]  	+ \sum_{i=0}^N \mathbf{E} [(X_i - p_i)f(\tilde Z_i + 1)]. \end{align*}

The first term is bounded above by \| Df \|\sum_{i=0}^N p^2_i, while the third term is equal to 0 since \tilde Z_i is independent of X_i. In order to bound the second term we want to rewrite each term f(Y_i+1)-f(\tilde Z_i + 1) as a telescoping sum. In what follows, fix i\in\{0,1,\ldots,N\}, label the elements of B_i\setminus\{i\} as \{j_1,\ldots,j_K\} and define

    \begin{align*} 	U_0 &:= \tilde Z_i = \sum_{j\in B^c_i} X_{j},\\ 	U_{k} &:= U_{k-1} + X_{j_k} \qquad \text{for $k\in\{1,\ldots,K\}$}. \end{align*}

Noticing that U_K=Y_i, we have

    \[f(Y_i+1)-f(\tilde Z_i + 1) = \sum_{k=1}^K [ f(U_{k-1}+X_{j_k}+1)  	- f(U_{k-1}+1)]\]

and we get

    \begin{align*} 	\lefteqn{\mathbf{E} [(X_i - p_i)(f(Y_i+1)-f(\tilde Z_i + 1))]=}\\ 	&\qquad\qquad= 	\sum_{k=1}^K \mathbf{E} [(X_i - p_i)(f(U_{k-1}+X_{j_k}+1) - f(U_{k-1}+1))]\\ 	&\qquad\qquad=  	\sum_{k=1}^K \mathbf{E} [(X_i - p_i) X_{j_k} (f(U_{k-1}+2) - f(U_{k-1}+1))]\\ 	&\qquad\qquad\le \| Df \| \sum_{k=1}^K \mathbf{E} [(X_i + p_i)X_{j_k}] 	= \| Df \| \sum_{k=1}^K (p_{ij_k} + p_ip_{j_k})\\ 	&\qquad\qquad= \| Df \| \sum_{j\in B_i\setminus \{i\}} ( p_{ij} + p_i p_j). \end{align*}

Therefore, combining all together we get

    \begin{align*} 	\mathbf{E} [h(Z)-h(N_\lambda)] &\le \| Df \|\sum_{i=0}^N p^2_i  	+ \| Df \| \sum_{i=0}^N \sum_{j\in B_i\setminus \{i\}} ( p_{ij} + p_i p_j)\\ 	&= \| Df \| \bigg( \sum_{i=0}^N \sum_{j\in B_i \setminus \{i\}} p_{ij}  	+ \sum_{i=0}^N\sum_{j\in B_i} p_i p_j \bigg)\\ 	&= \| Df \| (b_1+b_2). \end{align*}

Since the total variation distance can be characterized in terms of sets as

    \[\| \mathcal{L} (Z) - \mathcal{L} (N_\lambda) \|_{TV}  	= 2 \sup_{A\subset \mathbb{N}} | \mathbf{P}(Z\in A) - \mathbf{P}(N_\lambda\in A) |,\]

from this point on we restrict our analysis to indicator functions, which are easier to deal with than generic functions. For each A\subset\mathbb{N} define h_A:=\mathbf{1}_A, \tilde h_A := h_A-\mathbf{E}h_A(N_\lambda) = \mathbf{1}_A-\mathbf{P}\{N_\lambda \in A\} and f_A := S \tilde h_A. The previous result yields

    \[\| \mathcal{L} (Z) - \mathcal{L} (N_\lambda) \|_{TV}  	= 2 \sup_{A\subset \mathbb{N}} |\mathbf{E} [h_A(Z)-h_A(N_\lambda)]| 	\le 2 (b_1+b_2) \sup_{A\subset \mathbb{N}} \| Df_A \|\]

and the proof of the Lemma is concluded if we show that \| Df_A \| \le 1 for each A\subset \mathbb{N}. In fact, in what follows we will show that

    \[\sup_{A\subset \mathbb{N}} \| Df_A \| \le \frac{1-e^{-\lambda}}{\lambda},\]

where the right hand side is clearly upper bounded by 1. The proof that we are going to present is contained in the Appendix of “Poisson Approximation for Some Statistics Based on Exchangeable Trials” by Barbour and Eagleson.

First of all, note that for each A\subset \mathbb{N} we have f_A(0) = 0 and

    \begin{align*} 	f_A(n+1) &= - \frac{\mathbf{E}[\tilde h_A(N_\lambda) \mathbf{1}_{N_\lambda \le n}]} 	{\lambda \mathbf{P}\{N_\lambda = n\}} = \frac{e^\lambda n!}{\lambda^{n+1}}  	(\mathbf{P}\{N_\lambda \in A\} \mathbf{P}\{N_\lambda \le n\}  	- \mathbf{P}\{N_\lambda \in A \cap \{0,1,\ldots,n\}\}), \end{align*}

for each n\in\mathbb{N}. From this expression it is clear that f_A = \sum_{j\in A} f_{\{j\}} for each A\subset\mathbb{N}, which suggests that we can restrict our analysis to singletons. For each j\in\mathbb{N} we have

    \begin{align*} 	-f_{\{j\}}(n+1) = 	\begin{cases} 		-\frac{\lambda^j n!}{\lambda^{n+1}j!} \mathbf{P}\{N_\lambda \le n\}  		& \text{if }n < j\\ 		\frac{\lambda^j n!}{\lambda^{n+1}j!} \mathbf{P}\{N_\lambda > n\}  		& \text{if }n \ge j\\ 	\end{cases} \end{align*}

and from the series expansion of the Poisson probabilities it is easily seen that the function n\in\mathbb{N} \longrightarrow -f_{\{j\}}(n) is negative and decreasing if n \le j and is positive and decreasing if n \ge j+1. Hence, the only positive value taken by the difference function n\in\mathbb{N} \longrightarrow -f_{\{j\}}(n+1)+f_{\{j\}}(n) corresponds to the case n=j, which can be bounded as follows:

    \begin{align*} 	-f_{\{n\}}(n+1)+f_{\{n\}}(n) &= \frac{1}{\lambda} \mathbf{P}\{N_\lambda > n\} 	+ \frac{1}{n} \mathbf{P}\{N_\lambda \le n-1\}\\ 	&= \frac{e^{-\lambda}}{\lambda} \left( \sum_{k=n+1}^\infty \frac{\lambda^k}{k!} 	+ \sum_{k=1}^n \frac{\lambda^k}{k!} \frac{k}{n} \right) \\ 	&\le \frac{e^{-\lambda}}{\lambda} (e^\lambda - 1) = \frac{1-e^{-\lambda}}{\lambda}. \end{align*}

Therefore, for each A\subset\mathbb{N}, n\in\mathbb{N} we have

    \begin{align*} 	- Df_A(n) &= -f_A(n+1) + f_A(n) = \sum_{j\in A} (-f_{\{j\}}(n+1) + f_{\{j\}}(n))\\ 	&= \mathbf{1}_A (n) (-f_{\{n\}}(n+1) + f_{\{n\}}(n)) 	+ \sum_{j \in A : j\neq n} (-f_{\{j\}}(n+1) + f_{\{j\}}(n))\\ 	&\le \frac{1-e^{-\lambda}}{\lambda}. \end{align*}

Noticing that f_{A^c} = - f_A, then for each A\subset\mathbb{N}, n\in\mathbb{N} we also get

    \begin{align*} 	Df_A(n) \le - Df_{A^c}(n) \le \frac{1-e^{-\lambda}}{\lambda}, \end{align*}

and we have proved that \sup_{A\subset \mathbb{N}} \| Df_A \| \le \frac{1-e^{-\lambda}}{\lambda}. \square
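
The explicit formula for f_{\{j\}} also makes this bound easy to check numerically. In the sketch below (our addition, with \lambda and the sets A chosen arbitrarily) we evaluate f_{\{j\}} from the closed-form expression above, sum over a few finite sets A, and verify Df_A(n) \le (1-e^{-\lambda})/\lambda on a range of values:

```python
import math

lam = 2.0
bound = (1 - math.exp(-lam)) / lam

def pmf(k):
    return math.exp(-lam) * lam**k / math.factorial(k)

def cdf(m):
    return sum(pmf(k) for k in range(m + 1))

def f_single(j, n):
    # f_{j}(m+1) = c * P{N <= m} if m < j, and -c * P{N > m} if m >= j,
    # where c = lam^j m! / (lam^(m+1) j!); also f_{j}(0) = 0
    if n == 0:
        return 0.0
    m = n - 1
    c = lam**j * math.factorial(m) / (lam**(m + 1) * math.factorial(j))
    return c * cdf(m) if m < j else -c * (1.0 - cdf(m))

def Df(A, n):
    # f_A = sum of singletons, Df_A(n) = f_A(n+1) - f_A(n)
    f = lambda m: sum(f_single(j, m) for j in A)
    return f(n + 1) - f(n)

for A in [{0}, {2}, {1, 3}, {0, 2, 5}, set(range(6))]:
    for n in range(12):
        assert Df(A, n) <= bound + 1e-12
```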

We now introduce the notation that will naturally allow us to use Lemma 2 to prove Theorem 1. We label the pure strategy profiles in S_V=\times_{v\in V} S_v as \{\sigma^0,\sigma^1,\ldots,\sigma^N\}, where N:=2^n-1 and, as always, n=|V|:=\mathop{\mathrm{card}} V. It will often be convenient to use the labels 1,2,\ldots,n to enumerate the vertices of the graph G. Accordingly, one can think of the strategy profiles \sigma^i as being defined in a specific way, for example by positing that \sigma^i(1)\sigma^i(2)\ldots\sigma^i(n) is the binary expansion of i. In particular, \sigma^0 becomes the strategy profile where each player plays zero, that is, \sigma^0_v=0 for each v\in V, and \sigma^N the strategy profile where each player plays one, that is, \sigma^N_v=1 for all v\in V. For each i\in\{0,1,\ldots,N\} define

    \[X_i := 	\begin{cases}  		1	& \text{if $\sigma^i$ is a pure Nash equilibrium},\\ 		0	& \text{otherwise}. 	\end{cases}\]

Clearly the quantity Z:=\sum_{i=0}^N X_i counts the number of pure Nash equilibria, and Z>0 corresponds to the existence of a pure Nash equilibrium. We recall that both the randomness in the choice of the graph and the randomness in the choice of the game are embedded in the random variables \{X_i\}_{i\in\{0,1,\ldots,N\}}. Note that conditionally on a given graph G sampled from G(n,p) we have, for each i\in\{0,1,\ldots,N\},

    \begin{align*} 	\mathbf{E}_{G} X_i  	&= \mathbf{P}_{G} \{\text{$\sigma^i$ is a pure Nash equilibrium}\}\\ 	&= \mathbf{P}_{G} \{\operatorname{BR}_v(\sigma^i_v,\sigma^i_{N(v)})  	= 1 \ \forall v\in V \}\\ 	&= 2^{-n}, \end{align*}

from which it follows that

    \begin{align*} 	\mathbf{E}_{G} Z  	= \sum_{i=0}^N \mathbf{E}_{G} X_i = (1+N) 2^{-n} = 1. \end{align*}

That is, the current definition of a game on a fixed graph implies that the expected number of pure Nash equilibria is 1 for any given graph. It follows that \mathbf{E} Z = 1 as well. Notice that Theorem 1 provides much more information, since it describes the asymptotic distribution of Z\equiv Z^{n,p(n)} in a particular regime of the Erdös-Rényi random graph.
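
The identity \mathbf{E}_G Z = 1 is easy to confirm by simulation (our addition; the path graph and the sample size are arbitrary choices):

```python
import itertools, random

random.seed(1)
n, trials = 4, 4000
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}   # a fixed path graph

def count_pne():
    # draw fresh i.i.d. Uniform[0,1] rewards, then count pure Nash equilibria
    u = {(v, sv, snb): random.random()
         for v in range(n) for sv in (0, 1)
         for snb in itertools.product((0, 1), repeat=len(adj[v]))}
    best = lambda v, sv, snb: u[(v, sv, snb)] >= u[(v, 1 - sv, snb)]
    return sum(all(best(v, s[v], tuple(s[w] for w in sorted(adj[v])))
                   for v in range(n))
               for s in itertools.product((0, 1), repeat=n))

mean = sum(count_pne() for _ in range(trials)) / trials
assert abs(mean - 1.0) < 0.2   # E_G[Z] = 1 exactly, for every fixed graph
```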

Given the way we have set the stage, it seems tempting to apply Lemma 2 directly to the random variables \{X_i\}_{i\in\{0,1,\ldots,N\}} just defined. However, this approach is not fruitful since, apart from trivial cases, any given random variable X_i has a neighborhood of dependence B_i that coincides with the entire set \{0,1,\ldots,N\}. To see this, consider any two strategy profiles \sigma^i and \sigma^j. As long as there exists v\in V such that \sigma^i_v=\sigma^j_v, we can always find a realization of the graph G such that v does not have any edges attached to it, that is, N(v)=\varnothing; this implies that \operatorname{BR}_v(0,\sigma^i_{N(v)}) = \operatorname{BR}_v(0,\sigma^j_{N(v)}) and, consequently, that X_i and X_j are not independent. Therefore, only X_0 and X_N are independent, where \sigma^0_v:=0 and \sigma^N_v:=1 for each v\in V. However, Lemma 2 can be fruitfully applied to the random variables \{X_i\}_{i\in\{0,1,\ldots,N\}} when we look at them conditionally on a given graph realization, as the following lemma demonstrates (Lemma 2.2 of Daskalakis, Dimakis, and Mossel).

Lemma 3. Let G=(V,E) be a graph. Define

    \[B_0 := \{ j\in\{0,1,\ldots,N\}  	: \exists v \in V \text{ such that } \sigma^j_{v'}=0  	\text{ for each $v'$ satisfying } (v,v') \in E \}\]

and for each i\in\{1,2,\ldots,N\} define

    \[B_i := \{ j \in \{0,1,\ldots,N\} : \sigma^j = \sigma^i \oplus \sigma^k  	\text{ for some } k\in B_0\},\]


where

    \[\sigma^i \oplus \sigma^j := (\sigma^i(1) \oplus \sigma^j(1),\ldots,  	\sigma^i(|V|) \oplus \sigma^j(|V|))\]

and \oplus is the exclusive-or operation. Then, for each i\in\{0,1,\ldots,N\} we have that X_i is independent of \{X_j\}_{j\in B^c_i}.
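
The sets B_i are easy to compute exhaustively for a small graph. The following sketch (our addition, with an arbitrarily chosen path graph) builds B_0 from its definition and each B_i via the exclusive-or map, and confirms that the map is a bijection on profiles, so that all the B_i have the same cardinality:

```python
import itertools

n = 3
adj = {0: {1}, 1: {0, 2}, 2: {1}}          # path graph 0 - 1 - 2
profiles = list(itertools.product((0, 1), repeat=n))   # sigma^0, ..., sigma^N

def in_B0(sigma):
    # some vertex has all of its neighbours playing 0
    return any(all(sigma[w] == 0 for w in adj[v]) for v in range(n))

B0 = {j for j, s in enumerate(profiles) if in_B0(s)}

def B(i):
    # B_i: indices j with sigma^j = sigma^i XOR sigma^k for some k in B_0
    xor = lambda a, b: tuple(x ^ y for x, y in zip(a, b))
    return {profiles.index(xor(profiles[i], profiles[k])) for k in B0}

assert B(0) == B0                       # sigma^0 is the all-zeros profile
assert all(len(B(i)) == len(B0) for i in range(len(profiles)))
```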

Proof. We first show that X_0 is independent of \{X_j\}_{j\in B^c_0}. In order to make the independence structure manifest, we characterize B^c_0. Recall that B_0 is the index set of the pure strategy profiles for which there exists a player having all his neighbors playing 0. Therefore, each strategy profile corresponding to an index in B^c_0 is characterized by the fact that each player has at least one neighbor who is playing 1. Hence, for each j\in B^c_0 we have \sigma^0_{N(v)} \neq \sigma^j_{N(v)} for all v\in V and, consequently, the events

    \[\{X_0=\alpha\} = \{\operatorname{BR}_v(\sigma^0_v,\sigma^0_{N(v)})  	= \alpha \ \forall v\in V\} \qquad \alpha\in \{0,1\}\]

are independent of the events

    \[\{X_j=\alpha_j\} = \{\operatorname{BR}_v(\sigma^j_v,\sigma^j_{N(v)}) = \alpha_j 	 \ \forall v\in V\}  	 \qquad \alpha_j\in \{0,1\}, j\in B^c_0,\]

which proves our claim.

We now generalize this result to show that X_i is independent of \{X_j\}_{j\in B^c_i}, for each i\in \{1,2,\ldots,N\}. For any i\in\{0,1,\ldots,N\}, note that the exclusive-or map with respect to \sigma^i preserves the differences in the strategy profiles (and, of course, it also preserves the equalities). That is, if \sigma^j and \sigma^k are such that \sigma^j_v\neq \sigma^k_v for some v\in V, then also \sigma^i\oplus\sigma^j and \sigma^i\oplus\sigma^k are such that (\sigma^i\oplus\sigma^j)_v \neq (\sigma^i\oplus\sigma^k)_v. Therefore,

    \[\sigma^0_{N(v)} \neq \sigma^j_{N(v)} \qquad \text{for each $v\in V$}\]

holds true if and only if

    \[\sigma^i_{N(v)}  	= (\sigma^i\oplus\sigma^0)_{N(v)} \neq (\sigma^i\oplus\sigma^j)_{N(v)} 	\qquad\text{for each $v\in V$}\]

holds true. Equivalently stated, j\in B^c_0 if and only if k\in B^c_i, where k is the index such that \sigma^k = \sigma^i\oplus\sigma^j. Hence, the proof is concluded once we notice that \sigma^i=\sigma^i\oplus\sigma^0 and that B_i is defined as the index set of the pure strategy profiles that are obtained by an exclusive-or map with respect to \sigma^i of a strategy profile in B_0. \square

For a given graph G=(V,E) with |V|=n, define p_i(G):=\mathbf{P}_G\{X_i=1\}=2^{-n}=(N+1)^{-1}, where N:=2^{n}-1 and \mathbf{P}_G represents the conditional probability given the graph G. Define B_i(G)\subseteq \{0,1,\ldots,N\} as in Lemma 3; then, conditionally on G we have that \{X_j\}_{j\in B^c_i(G)} is independent of X_i. Define

    \begin{align*} 	b_1(G) &:= \sum_{i=0}^N \sum_{j\in B_i(G)} p_i(G)p_j(G)  	= \frac{1}{(N+1)^2}\sum_{i=0}^N |B_i(G)|,\\ 	b_2(G) &:= \sum_{i=0}^N \sum_{j\in B_i(G)\setminus \{i\}} p_{ij}(G), \end{align*}

where p_{ij}(G) := \mathbf{P}_G\{X_i=1,X_j=1\}. Define Z:=\sum_{i=0}^N X_i and recall that \mathbf{E} Z = \mathbf{E} \mathbf{E}_G Z = 1. If N_1 is a Poisson random variable with mean 1, then Lemma 2 yields

    \[\| \mathcal{L}_G (Z) - \mathcal{L} (N_1) \|_{TV} \le 2 (b_1(G)+b_2(G)).\]

At this point, let us introduce the following two lemmas (Lemmas 2.4 and 2.5 of Daskalakis, Dimakis, Mossel).

Lemma 4. If G is sampled from the Erdös-Rényi distribution G(n,p), we have

    \begin{align*} 	\mathbf{E}[b_1(G)] &\le  	R(n,p) := \sum_{s=0}^n \binom{n}{s} 2^{-n} \min\{1,n(1-p)^{s-1}\},\\ 	\mathbf{E}[b_2(G)] &\le  	S(n,p) := \sum_{s=1}^n \binom{n}{s} 2^{-n} [(1+(1-p)^s)^{n-s} - (1-(1-p)^s)^{n-s}]. \end{align*}

Lemma 5. Under the assumptions of Theorem 1 there exist \alpha',\alpha'',\beta',\beta''\in \mathbb{R}_+ and n_0',n_0''\in\mathbb{N}_+ such that

    \begin{align*} 	R(n,p) &\le \alpha' n^{-\varepsilon/4} + e^{-\beta' n} &\text{for each $n>n_0'$},\\ 	S(n,p) &\le \alpha'' n^{-\varepsilon/4} + e^{-\beta'' n} &\text{for each $n>n_0''$}. \end{align*}
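
Although we refer to Daskalakis, Dimakis, and Mossel for the proof of Lemma 5, the decay of R(n,p) and S(n,p) in the high-connectivity regime can be observed numerically. A short sketch (our addition, with \varepsilon\equiv 1 chosen for illustration):

```python
import math

def R(n, p):
    return sum(math.comb(n, s) * 2.0**-n * min(1.0, n * (1 - p)**(s - 1))
               for s in range(n + 1))

def S(n, p):
    return sum(math.comb(n, s) * 2.0**-n *
               ((1 + (1 - p)**s)**(n - s) - (1 - (1 - p)**s)**(n - s))
               for s in range(1, n + 1))

def p_hc(n, eps=1.0):
    # high-connectivity regime p(n) = (2 + eps) log n / n
    return (2 + eps) * math.log(n) / n

r100, r400 = R(100, p_hc(100)), R(400, p_hc(400))
assert 0 < r400 < r100 < 1     # R(n, p(n)) decays as n grows
assert 0 < S(400, p_hc(400)) < 1
```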

We now show how the proof of Theorem 1 follows easily from the two lemmas above.

Proof of Theorem 1. Let \alpha',\alpha'',\beta',\beta''\in \mathbb{R}_+ and n_0',n_0''\in\mathbb{N}_+ be as in Lemma 5. Define \alpha^\star:=\max\{\alpha',\alpha''\}, \beta^\star:=\min\{\beta',\beta''\} and n_0^\star:=\max\{n_0',n_0''\}. Clearly, by Lemma 5 we have

    \begin{align*} 	R(n,p) &\le \alpha^\star n^{-\varepsilon/4}  	+ e^{-\beta^\star n} &\text{for each $n>n_0^\star$},\\ 	S(n,p) &\le \alpha^\star n^{-\varepsilon/4}  	+ e^{-\beta^\star n} &\text{for each $n>n_0^\star$}. \end{align*}

Define the event

    \[A_n := \{ \max\{b_1(G),b_2(G)\} 	\le 2\alpha^\star n^{-\varepsilon/8} + e^{- \beta^\star n} \}.\]

By the Markov inequality and the previous asymptotic bounds, for each n>n_0^\star we have

    \begin{align*} 	\mathbf{P}\{A^c_n\} &\le \mathbf{P}\{ b_1(G) > 	2\alpha^\star n^{-\varepsilon/8} + e^{- \beta^\star n} \} 	+ \mathbf{P}\{ b_2(G) > 	2\alpha^\star n^{-\varepsilon/8} + e^{- \beta^\star n} \}\\ 	&\le \mathbf{P}\{ b_1(G) > 	2\alpha^\star n^{-\varepsilon/8} \} 	+ \mathbf{P}\{ b_2(G) > 	2\alpha^\star n^{-\varepsilon/8} \}\\ 	&\le \frac{\mathbf{E}[b_1(G)]}{2\alpha^\star n^{-\varepsilon/8}} 	+ \frac{\mathbf{E}[b_2(G)]}{2\alpha^\star n^{-\varepsilon/8}}\\ 	&\le n^{-\varepsilon/8} + \frac{e^{-\beta^\star n}}{\alpha^\star  	n^{-\frac{\varepsilon}{8}}}, \end{align*}

where we used the result of Lemma 4 and the above estimates for R(n,p) and S(n,p). Since \varepsilon(n)> c for some c\in\mathbb{R}_+, there clearly exists n_0^{\star'} such that

    \[\mathbf{P}\{A^c_n\} \le 2 n^{-\varepsilon/8}\qquad\text{for each $n>n_0^{\star'}$}.\]

Hence, we have that \mathbf{P}\{A_n\} \ge 1-2n^{-\frac{\varepsilon}{8}} for n>n_0^{\star'}. Let us now define \alpha,\beta\in\mathbb{R}_+ and n_0\in\mathbb{N}_+ such that n_0>n_0^{\star'} and

    \[4\alpha^\star n^{-\varepsilon/8} + 2e^{- \beta^\star n} 	\le \alpha n^{-\varepsilon/8} + e^{- \beta n} 	\qquad\text{for each $n>n_0$}.\]


We claim that

    \[\mathbf{P} \{ \| \mathcal{L}_G(Z^{n,p(n)}) - \mathcal{L}(N_1) \|_{TV}  	\le \alpha n^{-\varepsilon/8} + e^{-\beta n} \}  	\ge 1- \frac{2}{n^{\varepsilon/8}},\]

which proves the first statement in Theorem 1. In fact, we have

    \begin{align*} 	&\mathbf{P} \{ \| \mathcal{L}_{G}(Z^{n,p(n)}) - \mathcal{L}(N_1) \|_{TV}  	\le \alpha n^{-\varepsilon/8} + e^{-\beta n} \} \\ 	&\qquad\ge \mathbf{P} \{ \| \mathcal{L}_{G}(Z^{n,p(n)}) - \mathcal{L}(N_1) \|_{TV}  	\le \alpha n^{-\varepsilon/8} + e^{-\beta n} | A_n \} \mathbf{P} \{A_n\}\\ 	&\qquad\ge 1- 2n^{-\frac{\varepsilon}{8}}, \end{align*}

since by definition of A_n, on the event A_n we have

    \begin{align*} 	\| \mathcal{L}_G(Z^{n,p(n)}) - \mathcal{L}(N_1) \|_{TV} &\le 2 (b_1(G)+b_2(G))  	\le 4\alpha^\star n^{-\varepsilon/8} + 2e^{- \beta^\star n} 	\le \alpha n^{-\varepsilon/8} + e^{- \beta n}. \end{align*}

By the properties of conditional expectations we can now prove the convergence in total variation of the unconditional law of Z^{n,p(n)} to the law of N_1. In fact, for n>n_0 we have

    \begin{align*} 	&\| \mathcal{L}(Z^{n,p(n)}) - \mathcal{L}(N_1) \|_{TV} \\ 	&\qquad= \sup_{h: \| h\| \le 1} | \mathbf{E} [h(Z^{n,p(n)}) - h(N_1)] |\\ 	&\qquad\le \sup_{h: \| h\| \le 1} | \mathbf{E} [(h(Z^{n,p(n)}) - h(N_1)) \mathbf{1}_{A_n}] | 	+ \sup_{h: \| h\| \le 1} \mathbf{E} [|h(Z^{n,p(n)}) - h(N_1)| \mathbf{1}_{A_n^c}] \\ 	&\qquad\le \sup_{h: \| h\| \le 1} \mathbf{E} [ 	\mathbf{1}_{A_n} | \mathbf{E}_G [h(Z^{n,p(n)}) - h(N_1)] |] 	+ 2 \mathbf{P}\{A^c_n\}\\ 	&\qquad\le \mathbf{E} [ 	\mathbf{1}_{A_n} \| \mathcal{L}_G(Z^{n,p(n)}) - \mathcal{L}(N_1) \|_{TV} ] 	+ 2 \mathbf{P}\{A^c_n\}\\ 	&\qquad\le \alpha n^{-\varepsilon/8} + e^{-\beta n} + 4 n^{-\varepsilon/8}, \end{align*}

from which it follows that

    \[\lim_{n\rightarrow\infty} \| \mathcal{L}(Z^{n,p(n)}) - \mathcal{L}(N_1) \|_{TV} = 0.\]

Since convergence in total variation implies convergence in distribution, the previous result implies that Z^{n,p(n)} converges in distribution to N_1, which concludes the proof of Theorem 1. \square

We now provide the proof of Lemma 4, while we refer the reader to Daskalakis, Dimakis, and Mossel for the proof of Lemma 5.

Proof of Lemma 4. We begin with the study of \mathbf{E} [b_1(G)]. By the symmetry of the model we have

    \begin{align*} 	\mathbf{E} [b_1(G)] &= \frac{1}{(N+1)^2}\sum_{i=0}^N \mathbf{E} [|B_i(G)|]  	= \frac{1}{N+1} \mathbf{E} [|B_0(G)|]. \end{align*}

Since |B_0(G)| = \sum_{i=0}^N \mathbf{1}_{i\in B_0(G)}, we have \mathbf{E} [|B_0(G)|] = \sum_{i=0}^N \mathbf{P}\{i\in B_0(G)\}. By the symmetry of the Erdös-Rényi distribution we also have that \mathbf{P}\{i\in B_0(G)\} = \mathbf{P}\{j\in B_0(G)\} if \sigma^i and \sigma^j have the same number of players playing 1 (equivalently, the same number of players playing 0).
Therefore, if we label the vertices of the graph as \{1,2,\ldots,n\}, we have

    \[\mathbf{E} [|B_0(G)|] = \sum_{s=0}^n \binom{n}{s}  	\mathbf{E} [\mathbf{1}_{i_s\in B_0(G)}]\]

where for each s\in\{0,1,\ldots,n\} the index i_s\in\{0,1,\ldots,N\} is such that the strategy \sigma^{i_s} satisfies \sigma^{i_s}_{k}=1 if k\le s and \sigma^{i_s}_{k}=0 if k> s. Hence, the bound for \mathbf{E} [b_1(G)] in the statement of the Lemma is proved if we show that \mathbf{P} \{i_s\in B_0(G)\} \le n (1-p)^{s-1}. In fact, by definition of B_0(G) we have

    \begin{align*} 	\mathbf{P} \{i_s\in B_0(G)\}  	&= \mathbf{P} \{\text{$G : $ $\exists$ player $k\in\{1,\ldots,n\}$ such that  	$N(k)\cap\{1,\ldots,s\}=\varnothing$}\}\\ 	&\le \sum_{k=1}^n  	\mathbf{P} \{\text{$G : N(k)\cap\{1,\ldots,s\}=\varnothing$}\}\\ 	&\le n (1-p)^{s-1}. \end{align*}

We now study the term \mathbf{E} [b_2(G)]. Proceeding as above, by symmetry we have

    \begin{align*} 	\mathbf{E} [b_2(G)] &= (N+1) \sum_{j=1}^N  	\mathbf{E}[ \mathbf{P}_G\{X_0=1,X_j=1\} \mathbf{1}_{j\in B_0(G)} ]\\ 	&= 2^n \sum_{s=1}^n \binom{n}{s} 	\mathbf{E}[ \mathbf{P}_G\{X_0=1,X_{i_s}=1\} \mathbf{1}_{i_s\in B_0(G)} ] \end{align*}

We now analyze the term \mathbf{E}[ \mathbf{P}_G\{X_0=1,X_{i_s}=1\} \mathbf{1}_{i_s\in B_0(G)} ]. As noticed above, i_s\in B_0(G) if and only if the graph G is such that there exists a player k\in\{1,\ldots,n\} with N(k)\cap \{1, \dots,s\}=\varnothing. If such a k also satisfies k\in\{1,\ldots,s\}, then \sigma^{i_s}_k=1 and \sigma^{i_s}_{k'}=0 for each k'\in N(k), and it follows that \mathbf{P}_G\{X_0=1,X_{i_s}=1\} = 0. In fact, the event \{X_0=1,X_{i_s}=1\} corresponds to the realizations where both strategy \sigma^0 and strategy \sigma^{i_s} are pure Nash equilibria, that is, \operatorname{BR}_v(\sigma^0_v,\sigma^0_{N(v)})=\operatorname{BR}_v(\sigma^{i_s}_v,\sigma^{i_s}_{N(v)})=1 for each v\in V. But it cannot be that both 0 and 1 are best responses for player k when all players in his neighborhood play 0. Hence, \mathbf{P}_G\{X_0=1,X_{i_s}=1\} \mathbf{1}_{i_s\in B_0(G)} can be different from 0 only on the intersection of the event

    \begin{align*} 	A:=& \ \{G: k\in\{1,\ldots,s\} \text{ implies } N(k)\cap \{1, 	\dots,s\} \neq \varnothing \}\\ 	=& \ \{ \text{$\not\exists$ isolated node in the subgraph induced by $\{1,\ldots,s\}$} \} \end{align*}

and the event

    \[B:=\{G: \text{$\exists k\in\{s+1,\ldots,n\}$ such that $N(k)\cap \{1, 	\dots,s\} = \varnothing$} \}.\]

Define p_s:=\mathbf{P} \{A\} and m_s := |\{k\in \{s+1,\ldots,n\} : N(k)\cap \{1,\dots,s\} = \varnothing\}|. Note that we have B=\bigcup_{t=1}^{n-s} \{G: m_s=t\}. On the event A\cap B we have \mathbf{P}_G\{X_0=1,X_{i_s}=1\} \mathbf{1}_{i_s\in B_0(G)}=\mathbf{P}_G\{X_0=1,X_{i_s}=1\}, and we get

    \begin{align*} 	&\mathbf{E}[ \mathbf{P}_G\{X_0=1,X_{i_s}=1\} \mathbf{1}_{i_s\in B_0(G)} ]\\ 	&\qquad= \mathbf{E}[ \mathbf{P}_G\{X_0=1,X_{i_s}=1\}  	\mathbf{1}_{i_s\in B_0(G)} | A] \, p_s\\ 	&\qquad= \sum_{t=1}^{n-s} \mathbf{E}[ \mathbf{P}_G\{X_0=1,X_{i_s}=1\}  	| A, m_s = t] \, \mathbf{P}\{m_s=t | A\} \, p_s. \end{align*}

Because of the independence structure of the Erdös-Rényi random graph we have

    \[\mathbf{P}\{m_s=t | A\} = \mathbf{P}\{m_s=t \} = \binom{n-s}{t}  	[(1-p)^s]^t [1-(1-p)^s]^{n-s-t}.\]

Furthermore, we have

    \[\mathbf{E}[ \mathbf{P}_G\{X_0=1,X_{i_s}=1\}  	| A, m_s = t] = \frac{1}{2^t} \frac{1}{2^{n-t}} \frac{1}{2^{n-t}}.\]

This follows immediately from the definition of pure Nash equilibrium in terms of the best response functions once one notices that on the event \{A, m_s = t\} there are exactly t players (those k\in\{s+1,\ldots,n\} with N(k)\cap \{1,\dots,s\} = \varnothing) such that (\sigma^0_k,\sigma^0_{N(k)})=(\sigma^{i_s}_k,\sigma^{i_s}_{N(k)}), while for the remaining n-t players we have (\sigma^0_k,\sigma^0_{N(k)})\neq(\sigma^{i_s}_k,\sigma^{i_s}_{N(k)}). Putting everything together we get

    \begin{align*} 	&\mathbf{E}[ \mathbf{P}_G\{X_0=1,X_{i_s}=1\} \mathbf{1}_{i_s\in B_0(G)} ]\\ 	&\qquad= \sum_{t=1}^{n-s} \mathbf{E}[ \mathbf{P}_G\{X_0=1,X_{i_s}=1\}  	\mathbf{1}_{i_s\in B_0(G)} | A, m_s = t] \, \mathbf{P}\{m_s=t | A\} \, p_s\\ 	&\qquad= \frac{p_s}{4^n} \sum_{t=1}^{n-s} \binom{n-s}{t} [2(1-p)^s]^t [1-(1-p)^s]^{n-s-t}\\ 	&\qquad= \frac{p_s}{4^n} [(1+(1-p)^s)^{n-s} - (1-(1-p)^s)^{n-s}]. \end{align*}

Using the fact that p_s\le 1, it clearly follows that \mathbf{E} [b_2(G)] \le S(n,p). \square
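The final equality in the computation above is just the binomial theorem: writing q=(1-p)^s and m=n-s, the conditional expectation contributes a weight 2^t/4^n, and \sum_{t=1}^{m}\binom{m}{t}(2q)^t(1-q)^{m-t}=(1+q)^m-(1-q)^m since 2q+(1-q)=1+q. A quick numerical check of this identity (a sketch; the helper names are ours):

```python
from math import comb

def lhs(m, q):
    # sum over t >= 1 of C(m,t) (2q)^t (1-q)^(m-t)
    return sum(comb(m, t) * (2 * q) ** t * (1 - q) ** (m - t)
               for t in range(1, m + 1))

def rhs(m, q):
    # binomial theorem applied to (2q + (1-q))^m = (1+q)^m, minus the t=0 term
    return (1 + q) ** m - (1 - q) ** m

for m in range(1, 12):
    for q in (0.05, 0.3, 0.9):
        assert abs(lhs(m, q) - rhs(m, q)) < 1e-9
```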

01. May 2013 by Ramon van Handel
Categories: Random graphs | Comments Off

Giant component: final remarks

The past three lectures were devoted to the giant component theorem:

Theorem Let C_v be the connected component of G(n, \frac{c}{n}) that contains v\in [n].

  1. If c<1, then \max_v|C_v|= O(\log n) in probability.
  2. If c>1, then \max_v|C_v|\sim (1-\rho) n in probability, for some 0<\rho<1.
  3. If c=1, then \max_v|C_v|\sim n^{2/3} in distribution.

We proved only the first (subcritical) and second (supercritical) cases: our presentation was largely inspired by the treatments of Janson, Luczak, and Rucinski and of Durrett. We have omitted the critical case, as the last two lectures of the semester will be devoted to another topic. The goal of this post is to provide some final remarks and references on the giant component theorem.


At first sight, the “double jump” in the giant component theorem looks quite shocking. In hindsight, however, this does not seem quite so miraculous, as it mirrors an elementary phenomenon that is covered in many introductory probability courses: given a (nice) random walk S_t with initial condition S_0=a, define the hitting time \tau:=\inf\{t:S_t=b\} for some b<a. Then there are three cases:

  1. If S_t has negative drift, then \mathbb{E}[\tau]<\infty. In fact, the random variable \tau has a light (exponential) tail.
  2. If S_t has positive drift, then \mathbb{P}[\tau=\infty]>0.
  3. If S_t has zero drift, then \tau<\infty a.s. but \mathbb{E}[\tau]=\infty. That is, the random variable \tau has a heavy tail.

This “double jump” in the behavior of the hitting probabilities of a random walk is directly analogous to the behavior of the connected components of an Erdös-Rényi graph, and this was indeed the basic idea behind the proofs given in the previous lectures. Of course, it remains a bit of a miracle that the random walk approximation of the exploration process, which only holds for small times, is sufficiently powerful that it describes so completely the behavior of the random graph.
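The random walk trichotomy is easy to observe numerically. The following minimal simulation (entirely our own illustration; the drift values and truncation are arbitrary choices) runs a \pm 1 random walk started at 10 and records whether and when it hits 0:

```python
import random

def hitting_time(drift, start=10, max_steps=4000, rng=random):
    """Steps until a +/-1 random walk with mean step `drift`, started at
    `start`, first hits 0; None if this does not happen within max_steps."""
    p_up = 0.5 + drift / 2.0          # P[step = +1], so E[step] = drift
    s, t = start, 0
    while s > 0 and t < max_steps:
        s += 1 if rng.random() < p_up else -1
        t += 1
    return t if s == 0 else None

rng = random.Random(0)
neg = [hitting_time(-0.2, rng=rng) for _ in range(500)]   # case 1
pos = [hitting_time(+0.2, rng=rng) for _ in range(200)]   # case 2
zero = [hitting_time(0.0, rng=rng) for _ in range(500)]   # case 3
```

With negative drift every walk hits 0 quickly; with positive drift most walks escape and never return; with zero drift every hitting time is finite in principle, but the empirical times are heavy-tailed, so some walks exceed the truncation.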

The critical case

In the subcritical case, the size of the largest component is of order \log n because the hitting time of a random walk with negative drift has an exponential tail: that is, we proved

    \[\mathbb{P}[\tau\ge k] \le e^{-\alpha k}    \quad\Longrightarrow\quad    \mathbb{P}\Big[\max_v|C_v| \ge \gamma\log n\Big] \le    n\,\mathbb{P}[\tau\ge\gamma\log n] \le n^{1-\alpha\gamma}\]

which goes to zero as n\to\infty for \gamma>1/\alpha.

Similarly, we would expect that we can obtain the size of the largest component in the critical case if we understand the heavy tail behavior of the hitting time of a random walk with zero drift. This is in fact the case. Indeed, when there is zero drift, one can show that

    \[\mathbb{P}[\tau\ge k] \lesssim \frac{1}{\sqrt{k}}.\]

The crude union bound argument used above no longer gives the correct answer, but only a slightly better argument is needed. Indeed, note that

    \begin{eqnarray*}     \mathbb{P}\Big[\max_v|C_v| \ge \gamma n^{2/3}\Big] &\le&     \mathbb{P}[|\{v:|C_v|\ge \gamma n^{2/3}\}|\ge \gamma n^{2/3}] \\     &\le&     \frac{\mathbb{E}[|\{v:|C_v|\ge \gamma n^{2/3}\}|]}{\gamma n^{2/3}} \\     &=&     \frac{n\,\mathbb{P}[\tau\ge \gamma n^{2/3}]}{\gamma n^{2/3}}     \lesssim \frac{1}{\gamma^{3/2}}. \end{eqnarray*}

Therefore, n^{-2/3}\max_v|C_v|=O(1). With some further work, a corresponding lower bound can also be proved. See the paper by Nachmias and Peres or the notes by van der Hofstad for the details.
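As a crude sanity check of the n^{2/3} scaling, one can simulate the critical graph G(n,\frac{1}{n}) directly. The sketch below (our own; the O(n^2) edge loop is naive but adequate at this size) computes the largest component with a union-find and normalizes by n^{2/3}:

```python
import random
from collections import Counter

def max_component_size(n, p, rng):
    """Largest component of a sampled G(n,p), via a simple union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for i in range(n):
        for j in range(i + 1, n):           # O(n^2) pair loop
            if rng.random() < p:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj
    return max(Counter(find(v) for v in range(n)).values())

rng = random.Random(1)
n = 500
ratios = [max_component_size(n, 1.0 / n, rng) / n ** (2.0 / 3.0)
          for _ in range(10)]
mean_ratio = sum(ratios) / len(ratios)
```

The normalized sizes fluctuate from sample to sample (the limit is a nondegenerate distribution), but stay of order one.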

It turns out that in the critical case n^{-2/3}\max_v|C_v| is not only bounded in probability, but in fact converges weakly to some limiting distribution. This distribution, and much more, is beautifully described by Aldous in terms of Brownian excursions. This is an interesting example of the application of stochastic analysis to discrete probability; unfortunately, we do not have the time to cover it.

In a different direction, it turns out that various additional phase transitions appear when we consider a finer scaling, for example, in the “critical window” c=1\pm c' n^{-1/3}. For an overview of the various transitions, see, for example, section 11.1 in Alon and Spencer.

Connectivity threshold

Rather than considering the size of the largest component, one could ask when the entire Erdös-Rényi graph is connected. Note that when 1<c<\infty, the constant \rho in the size (1-\rho)n of the giant component is always strictly positive, so the graph is not connected. Therefore, in order for the entire graph to be connected, we must let c\to\infty (that is, the edge probability p must be of larger order than 1/n). It turns out that the appropriate scaling for this question is c\sim \log n, and another phase transition arises here.

Theorem. Let p=\frac{a\log n}{n}. If a>1, then the Erdös-Rényi graph G(n,p) is connected with probability tending to 1 as n\to\infty. If a<1, the graph is connected with probability tending to 0 as n\to\infty.

To get some intuition, consider the probability that a vertex v is isolated (that is, disconnected from every other vertex):

    \[\mathbb{P}[v\text{ is isolated}] =    \mathbb{P}[\eta_{vw}=0\text{ for all }w\ne v] =    \bigg(1-\frac{c}{n}\bigg)^{n-1}\sim e^{-c}.\]

Thus for c=a\log n, we have

    \[\mathbb{E}[|\{v:v\text{ is isolated}\}|] \sim    n \,e^{-a\log n} = n^{1-a}.\]

In particular, if a<1, then there must exist an isolated vertex with positive probability as n\to\infty, in which case the graph is not connected (in fact, it is not hard to show that the variance of the number of isolated components is of the same order as its mean, so that the probability that the graph is connected tends to zero). Somewhat miraculously, it turns out that when there are no isolated vertices, then the graph must already be connected, so that we do indeed obtain the sharp transition described in the Theorem above. For a proof of this fact (by a clever combinatorial argument) see, for example, the lecture notes by van der Hofstad. Alternatively, one can use a random walk argument entirely in the spirit of the proofs in the previous lectures to prove that the random graph is connected for a>1: by running simultaneous exploration processes from different vertices as we did in the proof of the supercritical case, one can show that all connected components must intersect when a>1 and thus the entire graph must be connected. See section 2.8 in Durrett for such an argument.
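The threshold in the Theorem is already visible at modest n. The following simulation (our own; n=300 and the values of a are arbitrary choices) checks connectivity of G(n,\frac{a\log n}{n}) with a union-find:

```python
import math
import random

def is_connected(n, p, rng):
    """Sample G(n,p) and test connectivity with a union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj
    return len({find(v) for v in range(n)}) == 1

rng = random.Random(2)
n, trials = 300, 20
above = sum(is_connected(n, 2.0 * math.log(n) / n, rng) for _ in range(trials))
below = sum(is_connected(n, 0.3 * math.log(n) / n, rng) for _ in range(trials))
```

Already at n=300, the graph comes out connected in essentially every trial for a=2 and in essentially no trial for a=0.3.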

28. April 2013 by Ramon van Handel
Categories: Random graphs | Comments Off

Lecture 6. Giant component (3)

Let us begin with a brief recap from the previous lecture. We consider the Erdös-Rényi random graph G(n,\frac{c}{n}) in the supercritical case c>1. Recall that C_v denotes the connected component of the graph that contains the vertex v. Our goal is to prove the existence of the giant component with size \max_v|C_v|\sim (1-\rho)n, while the remaining components have size \lesssim\log n.

Fix \beta>0 sufficiently large (to be chosen in the proof), and define the set

    \[K=\{v :  |C_v| > \beta \log n\}\]

of vertices contained in “large” components. The proof consists of two parts:

  • Part 1: \mathbb{P}[C_v = C_{v'}~\forall v,v' \in K] \to 1.
  • Part 2: \frac{|K|}{n} \to 1-\rho in probability.

Part 1 states that all the sufficiently large components must intersect, forming the giant component. Part 2 counts the number of vertices in the giant component. Part 2 was proved in the previous lecture. The goal of this lecture is to prove Part 1, which completes the proof of the giant component.


As in the previous lectures, the central idea in the study of the giant component is the exploration process (U_t,A_t,R_t), where

    \[|C_v| = \tau := \inf\{ t  :  |A_t|=0\}.\]

We have seen that |A_t| \approx S_t, where S_t is a random walk with increments

    \[X_t = S_t -S_{t-1} \sim \text{Binomial}(n,\tfrac{c}{n})-1, 	\qquad |A_0| = S_0 =1.\]

When c>1, we have \mathbb{E}[X_t] = c-1 > 0. Thus |A_t| is approximately a random walk with positive drift. The intuitive idea behind the proof of Part 1 is as follows. Initially, the random walk can hit 0 rapidly, in which case the component is small. However, if the random walk drifts away from zero, then with high probability it will never hit zero, in which case the component must keep growing until the random walk approximation is no longer accurate. Thus there do not exist any components of intermediate size: each component is either very small (|C_v|\le \beta\log n) or very large (we will show |C_v|\ge n^{2/3}, but the precise exponent is not important).

We now want to argue that any pair of large components must necessarily intersect. Consider two disjoint sets I and J of vertices of size |I|,|J|\ge n^{2/3}. As each edge is present in the graph with probability c/n, the probability that there is no edge between I and J is

    \[\mathbb{P}[\eta_{ij}=0\mbox{ for all }i\in I,~j\in J]= 	\bigg(1-\frac{c}{n}\bigg)^{|I|\,|J|} \le 	\bigg(1-\frac{c}{n}\bigg)^{n^{4/3}} \le 	e^{-cn^{1/3}}.\]

We therefore expect that any pair of large components must intersect with high probability. The problem with this argument is that we assumed that the sets I and J are nonrandom, while the random sets C_v themselves depend on the edge structure of the random graph (so the events \{C_v=I,C_{v'}=J\} and \{\mbox{no edges between }I,J\} are highly correlated). To actually implement this idea, we therefore need a slightly more sophisticated approach.

To make the proof work, we revisit more carefully our earlier random walk argument. The process |A_t|\approx S_t has positive drift as \mathbb{E}[S_t] = S_0 + (c-1)t. Thus the process |A_t|-(c-1)t/2 is still approximately a random walk with positive drift! Applying the above intuition, either |A_t| dies rapidly (the component is small), or |A_t| grows linearly in t as is illustrated in the following figure:

[Figure: a sample path of the exploration process: the active set |A_t| either hits zero quickly, or grows linearly in t.]

This means that the exploration process for a component of size >\beta\log n will not only grow large (|A_{n^{2/3}}|>0) with high probability, but that the exploration process will also possess a large number of active vertices (|A_{n^{2/3}}|\gtrsim n^{2/3}). To prove that all large components intersect, we will run different exploration processes simultaneously starting from different vertices. We will show that if two of these processes reach a large number of active vertices then there must be an edge between them with high probability, and thus the corresponding components must coincide. This resolves the dependence problem in our naive argument, as the edges between the sets of active vertices have not yet been explored and are therefore independent of the history of the exploration process.
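The exploration process itself is easy to implement. Here is a minimal sketch (our own; it takes i_t to be the first active vertex, mirroring the nonanticipating choice i_t=\min A_t, and is run on a tiny fixed graph rather than a random one):

```python
def explore(adj, v):
    """Exploration process (U_t, A_t, R_t) started at v; returns tau = |C_v|."""
    n = len(adj)
    U = set(range(n)) - {v}      # unexplored vertices
    A = [v]                      # active vertices (A[0] plays the role of i_t)
    R = set()                    # removed vertices
    t = 0
    while A:                     # tau = inf{t : |A_t| = 0}
        i_t = A.pop(0)
        found = sorted(w for w in U if w in adj[i_t])  # unexplored neighbours
        U -= set(found)
        A.extend(found)
        R.add(i_t)
        t += 1
    return t

# Tiny deterministic example: components {0,1,2} and {3,4}.
adj = {0: {1, 2}, 1: {0}, 2: {0}, 3: {4}, 4: {3}}
assert explore(adj, 0) == 3      # |C_0| = 3
assert explore(adj, 3) == 2      # |C_3| = 2
```

Running the same function on a sampled Erdös-Rényi adjacency structure reproduces the component sizes studied in these lectures.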

The component size dichotomy

We now begin the proof in earnest. We will first show the dichotomy between large and small components: either the component size is \le\beta\log n, or the number of active vertices |A_t| grows linearly up to time n^{2/3}. To be precise, we consider the following event:

    \[B := \Big\{\text{either} ~  	|C_v| \le \beta \log n , 	~ \text{or} ~  	|A_t| \ge \Big(\frac{c-1}{2}\Big) t  	~ \text{for all} ~ \beta \log n \le t  	\le n^{2/3}\Big\}.\]

Our goal is to show that \mathbb{P}[B] is large.

Define the stopping time

    \[\sigma := \inf\Big\{t \ge \beta \log n  : 	|A_t| < \Big(\frac{c-1}{2}\Big)t \Big\}.\]

We can write

    \[B = \{ \tau \le \beta \log n ~ \text{or} ~ \sigma > n^{2/3} \}.\]

Now suppose \tau >\beta \log n and \sigma =t. Then \tau \ge t, as the exploration process is alive at time \beta \log n and stays alive until time t. We can therefore write

    \[\mathbb{P}[B^c] = \sum_{s=\beta \log n}^{n^{2/3}}  	\mathbb{P}[\tau>\beta \log n ~ \text{and} ~ \sigma = s] \le 	\sum_{s=\beta \log n}^{n^{2/3}}  	\mathbb{P}\Big[|A_s|  	< \Big(\frac{c-1}{2}\Big)s 	~ \text{and} ~ 	s\le\tau\Big].\]

To bound the probabilities inside the sum, we compare |A_s| to a suitable random walk.

The random walk argument

To bound the probability that |A_s| < (c-1)s/2, we must introduce a comparison random walk that lies beneath |A_t|. We use the same construction as was used in the previous lecture. Let

    \begin{equation*} 	\bar S_{t+1}=\left\{ 	\begin{array}{ll} 	\bar S_t -1 +  	\sum_{w \in U_t} \eta_{wi_t} +  	\sum_{w=1}^{n-(\frac{c+1}{2})n^{2/3} - |U_t|}  	\tilde \eta _w^t &\quad\text{if }  	|U_t|<n-(\frac{c+1}{2}) n^{2/3},\\ 	\bar S_t -1 +  	\sum_{w \in \bar U_t} \eta_{wi_t} & \quad   	\text{if } 	|U_t|\ge n-(\frac{c+1}{2}) n^{2/3} . 	\end{array}  	\right. \end{equation*}

where \bar S_0=1, \tilde \eta_w^t are i.i.d. \text{Bernoulli}(\frac{c}{n}) random variables independent of \eta_{ij}, i_t=\min A_t (the same i_t used in the exploration process), and \bar U_t is the set of the first n-(\frac{c+1}{2})n^{2/3} elements of U_t (if t \ge \tau, then A_t=\varnothing and thus i_t is undefined; in this case, we simply add n-(\frac{c+1}{2})n^{2/3} variables \tilde\eta_w^t).

As in the previous lecture, we have:

  • \bar S_t is a random walk with \text{Binomial}(n-(\frac{c+1}{2})n^{2/3},\frac{c}{n})-1 increments.
  • \bar S_t \le |A_t| whenever t \le \tau and |U_t| \ge n-(\frac{c+1}{2})n^{2/3}.

Now suppose that s\le n^{2/3} and |A_s|<(\frac{c-1}{2})s. Then

    \[|U_s| = n - |A_s| - |R_s| \ge 	n - (\tfrac{c+1}{2})s \ge n - (\tfrac{c+1}{2})n^{2/3}.\]

We therefore obtain for s\le n^{2/3}

    \[\mathbb{P}\Big[|A_s|  	< \Big(\frac{c-1}{2}\Big)s 	~ \text{and} ~ 	s\le\tau\Big] \le 	\mathbb{P}\Big[\bar S_s < \Big(\frac{c-1}{2}\Big)s\Big].\]

Thus computing \mathbb{P}[B^c] reduces to computing the tail probability of a random walk (or, in less fancy terms, of a sum of i.i.d. random variables). That is something we know how to do.

Lemma (Chernoff bound). Let X\sim \text{Binomial}(n,p). Then

    \[\mathbb{P}[X\le np-t] \le e^{-t^2/2np}.\]

Proof. Let \gamma>0. Then

    \begin{eqnarray*} 	\mathbb{P}[X\le np-t] &=& 	\mathbb{P}[e^{-\gamma X} \ge e^{-\gamma np + \gamma t}] \\ 	&\le& e^{\gamma np - \gamma t} \, \mathbb{E}[e^{-\gamma X}]\\ 	&=& e^{\gamma np - \gamma t} (1- (1-e^{-\gamma})p)^n \\ 	&\le& e^{\{\gamma-(1-e^{-\gamma})\}np - \gamma t}\\ 	&\le& e^{\gamma^2 np/2 - \gamma t}. \end{eqnarray*}

The result follows by optimizing over \gamma>0. \quad\square
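Since the Chernoff bound is used repeatedly below, it is worth checking numerically. The sketch below (helper names are ours) compares the exact binomial lower tail with the bound e^{-t^2/2np}:

```python
from math import comb, exp

def binom_cdf(n, p, k):
    # exact P[X <= k] for X ~ Binomial(n,p); fine for moderate n
    return sum(comb(n, j) * p ** j * (1 - p) ** (n - j)
               for j in range(int(k) + 1))

n, p = 200, 0.3
np_ = n * p
for t in (5, 10, 20, 30):
    # the Lemma guarantees this inequality for every t
    assert binom_cdf(n, p, np_ - t) <= exp(-t * t / (2 * np_))
```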

Note that \bar S_s \sim 1 - s + \text{Binomial}(\{n-(\frac{c+1}{2})n^{2/3}\}s,\frac{c}{n}). We therefore have by the Chernoff bound

    \[\mathbb{P}\Big[\bar S_s < \Big(\frac{c-1}{2}\Big)s\Big] \le 	\mathbb{P}\Big[ 	\text{Binomial}(\{n-(\tfrac{c+1}{2})n^{2/3}\}s,\tfrac{c}{n}) 	\le \Big(\frac{c+1}{2}\Big)s\Big] \le 	e^{-(c-1-o(1))^2 s/8c}\]

for all s (here o(1) depends only on n and c). In particular, we have

    \[\mathbb{P}\Big[\bar S_s < \Big(\frac{c-1}{2}\Big)s\Big] \le 	n^{-\beta(c-1)^2/9c} 	\quad\text{for all }s\ge\beta\log n\]

provided n is sufficiently large. Thus we can estimate

    \[\mathbb{P}[B^c] \le 	\sum_{s=\beta \log n}^{n^{2/3}}  	\mathbb{P}\Big[\bar S_s < \Big(\frac{c-1}{2}\Big)s\Big]  	\le 	n^{2/3-\beta(c-1)^2/9c},\]

which goes to zero as n\to\infty provided that \beta is chosen sufficiently large. In particular, the component size dichotomy follows: choosing any \beta>15c/(c-1)^2, we obtain

    \[\mathbb{P}[|C_v|\le\beta\log n 	\text{ or }|C_v|\ge n^{2/3}\text{ for all }v] \ge 	1-n\mathbb{P}[B^c] \ge  	1-n^{5/3-\beta(c-1)^2/9c}\xrightarrow{n\to\infty}0.\]

Remark: Unlike in the proof of Part 2 in the previous lecture, here we do need to choose \beta sufficiently large for the proof to work. If \beta is too small, then the random walk \bar S_t cannot move sufficiently far away from zero to ensure that it will never return. In particular, even in the supercritical case, the second largest component has size of order \log n.

Large components must intersect

To complete the proof, it remains to show that all large components must intersect. To do this, we will run several exploration processes at once starting from different vertices. If the sets of active vertices of two of these processes grow large, then there must be an edge between them with high probability, and thus the corresponding components intersect. Let us make this argument precise.

In the following, we denote by (U_t^v,A_t^v,R_t^v) the exploration process started at A_0^v=\{v\}. For each such process, we define the corresponding event B_v that we investigated above:

    \[B_v := \Big\{\text{either} ~  	|C_v| \le \beta \log n , 	~ \text{or} ~  	|A_t^v| \ge \Big(\frac{c-1}{2}\Big) t  	~ \text{for all} ~ \beta \log n \le t  	\le n^{2/3}\Big\}.\]

We have shown above that, provided \beta>15c/(c-1)^2, we have

    \[\mathbb{P}\bigg[\bigcap_v B_v\bigg] 	\ge 1 - \sum_v \mathbb{P}[B_v^c] \ge 	1-o(1).\]

We can therefore estimate

    \begin{eqnarray*} 	&& 	\mathbb{P}[\exists\, v,v' \in K \text{ such that } C_v \neq C_{v'}] \\ 	&& \mbox{}= 	\mathbb{P}\big[\exists\, v,v' \in K \text{ such that } C_v \neq C_{v'}, 	\text{ and } 	|A^v_{n^{2/3}}| \ge (\tfrac{c-1}{2})n^{2/3} 	~\forall\, v \in K\big] + o(1) \phantom{\sum}\\ 	&& \mbox{}\le 	\sum_{v,v'} 	\mathbb{P}\big[C_v \neq C_{v'} \text{ and } 	 |A^v_{n^{2/3}}| \wedge |A_{n^{2/3}}^{v'}| \ge  	(\tfrac{c-1}{2})n^{2/3}\big]+o(1). \end{eqnarray*}

Now note that by time t, the exploration process (U_t^v,A_t^v,R_t^v) has only explored edges \eta_{ij} where i\in R_t^v (or j\in R_t^v), and similarly for (U_t^{v'},A_t^{v'},R_t^{v'}). It follows that

    \[\text{The conditional law of } 	\{\eta_{ij} : i,j \not\in R_t^v\cup R_t^{v'}\} 	\text{ given } 	(A_t^v,R_t^v,A_t^{v'},R_t^{v'}) 	\text{ is i.i.d.\ Bernoulli}(\tfrac{c}{n}).\]

In particular, if I^v,J^v,I^{v'},J^{v'} are disjoint subsets of vertices, then

    \[\mathbb{P}[\text{no edge between } I^v, I^{v'}  	| A_t^v =I^v, R_t^v=J^v,A_t^{v'}=I^{v'},R_t^{v'}=J^{v'}]  	= \bigg(1-\frac cn \bigg)^{|I^v|\, |I^{v'}|}.\]

On the other hand, C_v \ne C_{v'} implies that R_t^v,A_t^v,R_t^{v'},A_t^{v'} must be disjoint at every time t. Thus if C_v \ne C_{v'}, there can be no edges between vertices in A_t^v and A_t^{v'} at any time t (if such an edge exists, then the vertices connected by this edge will eventually be explored by both exploration processes, and then the sets of removed vertices will no longer be disjoint). Therefore,

    \begin{eqnarray*} 	&&  	\mathbb{P}\big[C_v \neq C_{v'} \text{ and } 	 |A^v_{n^{2/3}}| \wedge |A_{n^{2/3}}^{v'}| \ge  	(\tfrac{c-1}{2})n^{2/3}\big] \\ 	&& \mbox{} \le 	\mathbb{P}\big[ 	\text{no edge between }A^v_{n^{2/3}},A^{v'}_{n^{2/3}},\\ 	&& \phantom{\mbox{} \le \mathbb{P}\big[} 	A^v_{n^{2/3}},R^v_{n^{2/3}}, 	A^{v'}_{n^{2/3}},R^{v'}_{n^{2/3}}\text{ are disjoint}, ~ 	|A^v_{n^{2/3}}| \wedge |A_{n^{2/3}}^{v'}| \ge  	(\tfrac{c-1}{2})n^{2/3}\big] \\ 	&& \mbox{}\le 	\bigg(1-\frac{c}{n} \bigg)^{(c-1)^2n^{4/3}/4} 	\le e^{-c(c-1)^2n^{1/3}/4}. \end{eqnarray*}

Thus we finally obtain

    \[\mathbb{P}[C_v = C_{v'}~\forall v,v' \in K] \ge 	1- n^2e^{-c(c-1)^2n^{1/3}/4} - o(1) 	\xrightarrow{n\to\infty}1,\]

and the proof of the giant component theorem is complete.

Many thanks to Quentin Berthet for scribing this lecture!

27. April 2013 by Ramon van Handel
Categories: Random graphs | Comments Off

Lecture 5. Giant component (2)

Consider the Erdös-Rényi graph model G(n,\frac{c}{n}), and denote as usual by C_v the connected component of the graph that contains the vertex v. In the last lecture, we focused mostly on the subcritical case c < 1, where we showed that \max_v|C_v|\lesssim\log n. Today we will begin developing the supercritical case c > 1, where \max_v|C_v| \sim (1-\rho)n for a suitable constant 0<\rho<1. In particular, our aim for this and the next lecture is to prove the following theorem.

Theorem. Let c>1. Then

    \[\frac{\max_v|C_v|}{n}\xrightarrow{n\to\infty} 1 - \rho 	\quad\mbox{in probability},\]

where \rho is the smallest positive solution of the equation \rho = e^{c(\rho-1)}. Moreover, there is a \beta>0 such that all but one of the components have size \leq \beta\log n with probability tending to 1 as n\to\infty.

Beginning of the proof. Define the set

    \[K = \{ v: |C_v| > \beta \log n\}.\]

The proof of the Theorem consists of two main parts:

  • Part 1: \mathbb{P}[C_v = C_{v'}~ \forall v,v' \in K] \xrightarrow{n \to \infty} 1.
  • Part 2: \frac{|K|}{n} \xrightarrow{n \to \infty} 1-\rho in probability.

Part 1 states that all “large” components of the graph must intersect, forming one giant component. Some intuition for why this is the case was given at the end of the previous lecture. Part 2 computes the size of this giant component. In this lecture, we will concentrate on proving Part 2, and we will find out where the mysterious constant \rho comes from. In the next lecture, we will prove Part 1, and we will develop a detailed understanding of why all large components must intersect.

Before we proceed, let us complete the proof of the Theorem assuming Parts 1 and 2 have been proved. First, note that with probability tending to one, the set K is itself a connected component. Indeed, if v\in K and v'\not\in K then v,v' must lie in disjoint connected components by the definition of K. On the other hand, with probability tending to one, all v,v'\in K must lie in the same connected component by Part 1. Therefore, with probability tending to one, the set K forms a single connected component of the graph. By Part 2, the size of this component is \sim (1-\rho)n, while by the definition of K, all other components have size \le \beta\log n. This completes the proof. \quad\square

The remainder of this lecture is devoted to proving Part 2 above. We will first prove that the claim holds on average, and then prove concentration around the average. More precisely, we will show:

  1. \mathbb{E}\big[\frac{|K|}{n}\big] \xrightarrow{n\to\infty} 1-\rho.
  2. \mathrm{Var}\big[\frac{|K|}{n}\big] \xrightarrow{n\to\infty}0.

Together, these two claims prove Part 2: convergence of the mean together with vanishing variance yields convergence in probability by Chebyshev's inequality.

Mean size of the giant component

We begin by writing out the mean size of the giant component:

    \[\mathbb{E}\bigg[\frac{|K|}{n}\bigg] =  	\mathbb{E}\Bigg[ 	\frac{1}{n} \sum_{v \in [n]}  	\mathbf{1}_{|C_v|>\beta\log n} 	\Bigg] =  	\frac{1}{n} \sum_{v \in [n]}  	\mathbb{P}[\left|C_v\right| > \beta \log n ] = 	\mathbb{P}[\left|C_v\right| > \beta \log n ],\]

where we note that \mathbb{P}[\left|C_v\right| > \beta \log n ] does not depend on the vertex v by the symmetry of the Erdös-Rényi model. Therefore, to prove convergence of the mean size of the giant component, it suffices to prove that

    \[\mathbb{P}[\left|C_v\right| \le \beta \log n ] 	\xrightarrow{n\to\infty}\rho.\]

This is what we will now set out to accomplish.

In the previous lecture we defined the exploration process (U_t, A_t, R_t). We showed that

    \[|C_v| = \tau := \inf \{t: |A_t| = 0\}\]

and that for t<\tau

    \[|A_{t+1}| = |A_t| - 1 + \sum_{w \in U_t} \eta_{wi_t}, 	\qquad |A_0|=1,\]

where i_t \in A_t is an arbitrary nonanticipating choice, say, i_t = \min A_t (recall that (\eta_{ij})_{i,j\in[n]} denotes the adjacency matrix of the random graph). As \eta_{ij} are i.i.d. \mathrm{Bernoulli}(\frac{c}{n}) and as edges emanating from the set of unexplored vertices U_t have not yet appeared in previous steps, the process |A_t| is “almost” a random walk: it fails to be a random walk as we only add |U_t| Bernoulli variables in each iteration, rather than a constant number. In the last lecture, we noted that we can estimate |A_t| from above by a genuine random walk S_t by adding some fictitious vertices. To be precise, we define

    \[S_{t+1}=S_t - 1 + \sum_{w\in U_t} \eta_{wi_t} + 	\sum_{w=1}^{n-|U_t|}\tilde\eta_w^t,\qquad S_0=1,\]

where \tilde\eta_w^t are i.i.d. \mathrm{Bernoulli}(\frac{c}{n}) independent of the \eta_{ij} (if t\ge\tau, then A_t=\varnothing and thus i_t is undefined; in this case, we simply add all n variables \tilde\eta_w^t). In the present lecture, we also need to bound |A_t| from below. To this end, we introduce another process \bar S_t as follows:

    \[\bar S_{t+1}=\left\{ 	\begin{array}{ll} 	\bar S_t - 1 + \sum_{w\in U_t} \eta_{wi_t} + 		\sum_{w=1}^{n-\beta\log n-|U_t|}\tilde\eta_w^t 	& \mbox{if }|U_t|< n-\beta\log n,\\ 	\bar S_t - 1 + \sum_{w\in \bar U_t} \eta_{wi_t} 	& \mbox{if }|U_t|\ge n-\beta\log n, 	\end{array}\right. 	\qquad \bar S_0=1,\]

where \bar U_t is the set consisting of the first n-\beta\log n elements of U_t in increasing order of the vertices (if t\ge\tau, we add n-\beta\log n variables \tilde\eta_w^t). The idea behind these processes is that S_t is engineered, by including “fictitious” vertices, to always add n i.i.d. Bernoulli variables in every iteration, while \bar S_t is engineered, by including “fictitious” vertices when |U_t| is small and omitting vertices when |U_t| is large, to always add n-\beta\log n i.i.d. Bernoulli variables in every iteration. The following facts are immediate:

  • S_t is a random walk with i.i.d. \mathrm{Binomial}(n,\frac{c}{n})-1 increments.
  • \bar S_t is a random walk with i.i.d. \mathrm{Binomial}(n-\beta\log n,\frac{c}{n})-1 increments.
  • S_t\ge |A_t| for all t\le\tau.
  • \bar S_t\le |A_t| for all t\le\tau on the event \{|C_v|\le\beta\log n\}.

To see the last property, note that the exploration process can only explore as many vertices as are present in the connected component C_v, so that |U_t|\ge n-|C_v| for all t; therefore, in this situation only the second possibility in the definition of \bar S_t occurs, and it is obvious by construction that then \bar S_t\le |A_t| (nonetheless, the first possibility in the definition must be included to ensure that \bar S_t is a random walk).

We now define the hitting times

    \[T = \inf\{t:S_t=0\},\qquad 	\bar T = \inf\{t:\bar S_t=0\}.\]

Then we evidently have

    \[\mathbb{P}[T \le \beta \log n] \le  	\mathbb{P}[|C_v| \le \beta \log n] \le  	\mathbb{P}[\bar T \le \beta \log n].\]

(Note how we cleverly chose the random walk \bar S_t precisely so that \bar T\le |C_v| whenever |C_v|\le \beta\log n). We have therefore reduced the problem of computing \mathbb{P}[|C_v| \le \beta \log n] to computing the hitting probabilities of random walks. Now we are in business, as this is something we know how to do using martingales!

The hitting time computation

Let us take a moment to gather some intuition. The random walks S_t and \bar S_t have increments distributed as \mathrm{Binomial}(n,\frac{c}{n})-1 and \mathrm{Binomial}(n-\beta\log n,\frac{c}{n})-1, respectively. As n\to\infty, both increment distributions converge to a \mathrm{Poisson}(c)-1 distribution, so we expect that \mathbb{P}[|C_v|\le\beta\log n]\sim\mathbb{P}[T_0\le\beta\log n] where T_0 is the first hitting time of the Poisson random walk. On the other hand, as \mathbb{P}[T_0\le\beta\log n] \to \mathbb{P}[T_0<\infty], we expect that \mathbb{P}[|C_v|\le\beta\log n]\to \mathbb{P}[T_0<\infty]. The problem then reduces to computing the probability that a Poisson random walk ever hits the origin. This computation can be done explicitly, and this is precisely where the mysterious constant \rho=\mathbb{P}[T_0<\infty] comes from!
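Both halves of this picture can be checked numerically: iterate \rho\mapsto e^{c(\rho-1)} to find the smallest root, and estimate \mathbb{P}[T_0<\infty] by Monte Carlo for the Poisson random walk (a sketch of our own; the standard library has no Poisson sampler, so we use Knuth's method, and the walk is truncated at a large time):

```python
import math
import random

def rho_fixed_point(c, iters=200):
    # smallest root of rho = exp(c*(rho-1)), by monotone iteration from 0
    r = 0.0
    for _ in range(iters):
        r = math.exp(c * (r - 1))
    return r

def poisson(lam, rng):
    # Knuth's product-of-uniforms sampler (adequate for small lam)
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def hits_zero(c, rng, max_steps=200):
    # walk S_0 = 1 with Poisson(c)-1 increments; downward steps are at most 1,
    # so the walk reaches 0 exactly at its first visit
    s = 1
    for _ in range(max_steps):
        s += poisson(c, rng) - 1
        if s == 0:
            return True
    return False

c, rng = 2.0, random.Random(3)
est = sum(hits_zero(c, rng) for _ in range(4000)) / 4000.0
```

For c=2 the fixed point is about 0.2, and the Monte Carlo estimate agrees with it to within sampling error.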

We now proceed to make this intuition precise. First, we show that the probability \mathbb{P}[T\le\beta\log n] can indeed be replaced by \mathbb{P}[T<\infty], as one might expect.

Lemma. \mathbb{P}[T \le \beta \log n] = \mathbb{P}[T < \infty] - o(1) as n\to\infty.

Proof. We need to show that

    \[\mathbb{P} [\beta \log n < T < \infty] \xrightarrow{n \to \infty} 0.\]

Note that as S_T=0 when T<\infty,

    \[\mathbb{P}[k \le T < \infty] = \sum_{t=k}^{\infty}  	\mathbb{P}[T=t] \le \sum_{t=k}^{\infty} \mathbb{P}[S_t = 0].\]

We can evidently write

    \[\mathbb{P}[S_t = 0] =  	\mathbb{E}[\mathbf{1}_{S_t=0} e^{\gamma S_t} ]  	\le  	\mathbb{E} [e^{\gamma S_t}] =  	e^{\gamma} \mathbb{E}[e^{\gamma X_1}]^t\]

where S_t - S_{t-1} =: X_t \sim \mathrm{Binomial}(n,\frac{c}{n})-1 and

    \[\mathbb{E} [e^{\gamma X_1}] = e^{-\gamma}  	(1+\tfrac{c}{n}(e^{\gamma}-1))^n  	\le e^{c(e^{\gamma}-1)-\gamma}.\]

Choosing \gamma = -\log c, we obtain \mathbb{E} [e^{\gamma X_1}] \le e^{1-c + \log c} < 1 for c \ne 1. Therefore,

    \[\mathbb{P} [\beta\log n < T < \infty] \le  	\frac{1}{c} \sum_{t=\beta\log n}^{\infty}  	e^{(1-c + \log c)t} 	\xrightarrow{n\to\infty}0.\]

This completes the proof. \qquad\square

By the above Lemma, and a trivial upper bound, we obtain

    \[\mathbb{P} [T< \infty] - o(1) \le  	\mathbb{P}[|C_v| \le \beta \log n] \le \mathbb{P}[\bar T < \infty].\]

To complete the computation of the mean size of the giant component, it therefore remains to show that \mathbb{P} [T<\infty] and \mathbb{P}[\bar T<\infty] converge to \rho. In fact, we can compute these quantities exactly.

Lemma. Let c>1, \frac{c}{n} < 1. Then

    \[\mathbb{P} [T<\infty] = \rho_n\]

where \rho_n is the smallest positive solution of \rho_n = (1+\frac{c}{n}(\rho_n-1))^n.

Proof. Recall the martingale M_t used in last lecture:

    \[M_t = e^{\gamma S_t - \phi(\gamma)t}, 	\qquad 	\phi(\gamma)=\log \mathbb{E}[e^{\gamma X_1}] = 	\log\big[e^{-\gamma}(1+\tfrac{c}{n}(e^{\gamma}-1))^n\big].\]

Suppose that \gamma < 0 and \phi(\gamma) > 0. Then

    \[\mathbb{E} [e^{-\phi(\gamma)T}] =  	\mathbb{E} \Big[\lim_{k \to \infty} M_{k \wedge T}\Big]  	= \lim_{k \to \infty} \mathbb{E} [M_{k \wedge T}] = M_0 = e^{\gamma}.\]

The first equality holds since if T < \infty then S_T = 0 and M_{k \wedge T} \to M_T = e^{-\phi(\gamma)T}, while if T = \infty then S_{k} \ge 0 and M_k \to 0 = e^{-\phi(\gamma)T}. The second equality holds by dominated convergence since 0 \le M_{k \wedge T} \le 1, and the third equality is by the optional stopping theorem.

Now suppose we can find \gamma_n < 0 such that \phi(\gamma) \downarrow 0 as \gamma \uparrow \gamma_n. Then we have

    \[\rho_n := 	e^{\gamma_n} = \lim_{\gamma \uparrow \gamma_n}  	\mathbb{E}[e^{-\phi(\gamma)T}] = 	\mathbb{E} \Big[\lim_{\gamma \uparrow \gamma_n} e^{-\phi(\gamma)T}\Big]  	= \mathbb{P}[T < \infty]\]

by dominated convergence. Thus, evidently, it suffices to find \gamma_n with the requisite properties. Now note that as \phi(\gamma_n)=0, \gamma_n<0, and \phi(\gamma)>0 for \gamma<\gamma_n, we evidently must have

    \[\rho_n = (1+\tfrac{c}{n}(\rho_n-1))^n,\qquad 	\rho_n < 1,\qquad 	\rho < (1+\tfrac{c}{n}(\rho-1))^n \mbox{ for }\rho<\rho_n.\]

We can find such \rho_n by inspecting the graphs of \rho\mapsto(1+\tfrac{c}{n}(\rho-1))^n and \rho\mapsto\rho: the former is increasing and convex with a positive value at \rho=0 and slope c>1 at \rho=1, so for c>1 the two curves cross at a unique point \rho_n<1 and again at \rho=1.

[Figure omitted: plot of \rho\mapsto(1+\tfrac{c}{n}(\rho-1))^n against the diagonal.]

Evidently the requisite assumptions are satisfied when \rho_n is the smallest root of the equation \rho_n=(1+\tfrac{c}{n}(\rho_n-1))^n (but not for the larger root at 1!) \quad\square

Remark. Note that the supercritical case c>1 is essential here. If c\le 1 then the equation for \rho_n has no solutions <1, and the argument in the proof does not work. In fact, when c\le 1, we have \mathbb{P}[T<\infty]=1.
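The fixed-point characterization is easy to check numerically. The sketch below (function names are ours, not from the lecture) iterates the maps \rho\mapsto(1+\tfrac{c}{n}(\rho-1))^n and \rho\mapsto e^{c(\rho-1)} starting from 0; both maps are increasing with a positive value at 0, so the iterates increase to the smallest root, and one sees \rho_n\to\rho as n grows:

```python
import math

def rho_finite(c, n, iters=2000):
    """Smallest positive root of rho = (1 + c*(rho-1)/n)**n, by
    fixed-point iteration from 0 (the map is increasing, so the
    iterates increase monotonically to the smallest root)."""
    rho = 0.0
    for _ in range(iters):
        rho = (1.0 + c * (rho - 1.0) / n) ** n
    return rho

def rho_limit(c, iters=2000):
    """Smallest positive root of the limiting equation rho = exp(c*(rho-1))."""
    rho = 0.0
    for _ in range(iters):
        rho = math.exp(c * (rho - 1.0))
    return rho

c = 2.0
rhos = [rho_finite(c, n) for n in (10, 100, 10000)]
rho = rho_limit(c)   # roughly 0.203 for c = 2
```

For c=2 the limiting root is roughly 0.203, so the exploration walk survives forever with probability about 0.8 in this regime.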

By an immediate adaptation of the proof of the previous lemma, we obtain

    \[\mathbb{P}[\bar T < \infty] = \bar \rho_n\]

where \bar\rho_n is the smallest positive solution of \bar \rho_n = (1+\frac{c}{n}(\bar \rho_n-1))^{n-\beta \log n}. Letting n\to\infty, we see that

    \[\lim_{n \to \infty} \mathbb{P}[|C_v|\le\beta\log n] =  	\lim_{n \to \infty} \mathbb{P}[T<\infty] =  	\lim_{n \to \infty} \mathbb{P}[\bar T<\infty] = \rho,\]

where \rho is the smallest solution of the equation \rho=e^{c(\rho-1)} (which is precisely the probability that the Poisson random walk hits zero, by a proof identical to that of the lemma above). We have therefore proved

    \[\mathbb{E}\bigg[\frac{|K|}{n}\bigg] \xrightarrow{n\to\infty} 1-\rho.\]
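As a sanity check on this limit, one can simulate the Erdös-Rényi graph directly and compare the largest-component fraction with 1-\rho. A rough Monte Carlo sketch (not part of the lecture; parameters are arbitrary, stdlib only):

```python
import random
from collections import deque

def largest_component_fraction(n, c, rng):
    """Sample G(n, c/n) and return |largest component| / n."""
    p = c / n
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    seen = [False] * n
    best = 0
    for s in range(n):
        if seen[s]:
            continue
        seen[s] = True
        size, queue = 1, deque([s])
        while queue:                       # breadth-first search
            u = queue.popleft()
            for w in adj[u]:
                if not seen[w]:
                    seen[w] = True
                    size += 1
                    queue.append(w)
        best = max(best, size)
    return best / n

rng = random.Random(0)
c, n = 2.0, 400
avg = sum(largest_component_fraction(n, c, rng) for _ in range(20)) / 20
# For c = 2, 1 - rho is roughly 0.797; avg should be close to it.
```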

Variance of the giant component size

To complete the proof of Part 2 of the giant component theorem, it remains to show that

    \[\mathrm{Var}\bigg[\frac{|K|}{n}\bigg] = 	\mathrm{Var}\bigg[1-\frac{|K|}{n}\bigg]  	\xrightarrow{n\to\infty} 0.\]

To this end, let us consider

    \[\mathbb{E}\bigg[\bigg(1-\frac{|K|}{n}\bigg)^2\bigg] = 	\mathbb{E}\bigg[\bigg(\frac{1}{n} \sum_{v \in [n]}  		\mathbf{1}_{|C_v| \le \beta \log n}\bigg)^2\bigg] =  	\frac{1}{n^2} \sum_{v,w \in [n]}  	\mathbb{P}[|C_v| \le \beta \log n, |C_w| \le \beta \log n].\]

To estimate the terms in this sum, we condition on one of the components:

    \[\begin{array}{lcl} 	\mathbb{P}[|C_v| \le \beta \log n, |C_w| \le \beta \log n] 	&=& 	\mathbb{E}[ 	\,\mathbb{P}[|C_v| \le \beta \log n|C_w]\, 	\mathbf{1}_{|C_w| \le \beta \log n}\,] \\ 	&=& 	\sum_{I \subseteq [n], |I| \le \beta\log(n)}  	\mathbb{P}[C_w = I]\, \mathbb{P}[|C_v| \le \beta\log n|C_w = I]. 	\end{array}\]

To proceed, note that the event \{C_w=I\} can be written as

    \[\{C_w=I\} = \{(\eta_{ij})_{i,j\in I}\mbox{ defines a connected  	subgraph and }\eta_{ij}=0\mbox{ when }i\in I,~j\not\in I\}.\]

In particular, the event \{C_w=I\} is independent of the edges \eta_{ij} for i,j\not\in I. Therefore, for v\not\in I, the conditional law of C_v given C_w=I coincides with the (unconditional) law of C_v^{[n]\backslash I}, the connected component containing v in the induced subgraph on the vertices [n]\backslash I:

    \[\mathbb{P}[|C_v| \le \beta\log n|C_w = I] = 	\mathbb{P}[|C_v^{[n]\backslash I}|\le \beta\log n]\qquad 	\mbox{for }v\not\in I.\]

As this quantity only depends on |I| by the symmetry of the Erdös-Rényi model, we can evidently write

    \[\mathbb{P}[|C_v| \le \beta\log n|C_w = I] = 	\mathbb{P}[|C_1^{[n-|I|]}|\le \beta\log n] \le 	\mathbb{P}[|C_1^{[n-\beta\log n]}|\le \beta\log n]\]

for v\not\in I, |I|\le\beta\log n. In particular, we obtain

    \[\sum_{v,w\in[n]} 	\mathbb{P}[|C_v| \le \beta \log n, |C_w| \le \beta \log n] 	\le 	\mathbb{E}[n-|K|]\,\{ 	\beta\log n + n\, \mathbb{P}[|C_1^{[n-\beta\log n]}|\le \beta\log n]\}.\]

Now note that, by its definition, C_1^{[k]} is distributed precisely as the component containing vertex 1 in the G(k,\frac{c}{n}) random graph model. We can therefore show, repeating exactly the proof of the mean size of the giant component above, that

    \[\mathbb{P}[|C_1^{[n-\beta\log n]}|\le \beta\log n] = 	\rho + o(1) =  	\mathbb{E}\bigg[1-\frac{|K|}{n}\bigg] + o(1).\]

We have therefore shown that

    \[\mathbb{E}\bigg[\bigg(1-\frac{|K|}{n}\bigg)^2\bigg] = 	\frac{1}{n^2}\sum_{v,w\in[n]}  	\mathbb{P}[|C_v| \le \beta \log n, |C_w| \le \beta \log n] 	\le 	\mathbb{E}\bigg[1-\frac{|K|}{n}\bigg]^2+o(1),\]

which evidently implies

    \[\mathrm{Var}\bigg[1-\frac{|K|}{n}\bigg] \le o(1).\]

This is what we set out to prove.

Remark. It should be noted that the proof of Part 2 did not depend on the value of \beta, or even on the \beta\log n rate, in the definition of the set K: any sequence that grows sublinearly to infinity would have given the same result. This suggests that all but a vanishing fraction of vertices are contained in connected components of order \sim 1 or \sim n. We find out only in the next lecture why the rate \beta\log n (for \beta sufficiently large!) is important: only sufficiently large connected components are guaranteed to intersect, while there might (and do) exist components of order \sim\log n that are disjoint from the giant component. If we do not exclude the latter, we will not be able to prove Part 1.

Many thanks to Weichen Wang for scribing this lecture!

17. April 2013 by Ramon van Handel
Categories: Random graphs | Comments Off

Lecture 4. Giant component (1)

Consider the Erdös-Rényi graph model G(n,p). In previous lectures, we focused on the “high complexity regime”, i.e., as n goes to infinity, p is fixed. We discussed topics such as clique numbers and chromatic numbers. From now on, we shall consider the “low complexity regime”, where as n goes to infinity, p=\frac{c}{n} for a fixed constant c>0. As before, let \eta=(\eta_{i,j})_{i,j\in[n]} be the adjacency matrix of G(n, \frac{c}{n}). Then \{\eta_{ij}: i,j\in[n], i<j\} are i.i.d. Bernoulli random variables with success probability p=\frac{c}{n}.

Theorem 1 Let C_v be the connected component of G(n, \frac{c}{n}) that contains v\in [n].

  1. If c<1, then \max_v|C_v|= O(\log n) in probability.
  2. If c>1, then \max_v|C_v|\sim (1-\rho) n in probability, for some 0<\rho<1.
  3. If c=1, then \max_v|C_v|\sim n^{2/3} in distribution.

In the following lectures, we will aim to prove at least parts 1 and 2.

The exploration process

How to study |C_v|? We will explore C_v by starting an “exploration process” at v that moves around C_v until all its sites have been visited. This walk will be constructed so that it hits each site once. So, the time it takes to explore all of C_v is exactly |C_v|. As a consequence, studying |C_v| reduces to studying a hitting time of a certain random walk, and to the latter we can apply martingale theory.

At each time t=0,1,2,\ldots, we maintain three sets of vertices:

    \begin{eqnarray*} 	R_t &=& \{\text{removed sites}\},\\ 	A_t &=& \{\text{active sites}\},\\  	U_t &=& \{\text{unexplored sites}\}. \end{eqnarray*}

Below is an illustration of how these sets are updated on a simple example.

  • At t=0, initialize A_0=\{v\}, U_0=[n]\backslash \{v\} and R_0=\varnothing. Namely, only v is active, all the vertices other than v are unexplored, and no vertices have been removed.


  • At t=1, update A_1=A_0\backslash \{v\}\cup \{\text{neighbors of } v\}, U_1=U_0\backslash A_1 and R_1=\{v\}. Namely, all neighbors of v are moved from the unexplored set to the active set, and v itself is removed.


  • At t=2, pick some x\in A_1 and update A_2=A_1\backslash \{x\}\cup \{\text{unexplored neighbors of } x\}, U_2=U_1\backslash A_2 and R_2=R_1\cup \{x\}. Namely, all unexplored neighbors of x are moved into the active set, and x itself is removed.


  • At time t, we pick some vertex x from the current active set A_{t-1}, activate all unexplored neighbors of x and remove x itself.

This is a sort of local search along the connected component C_v: much like playing a game of Minesweeper! At each t, the choice of x\in A_{t-1} can be made arbitrarily (e.g., selecting the vertex with the smallest index or randomly selecting a vertex in A_{t-1}). The only requirement is that it is nonanticipating (only depending on the edges visited up to time t). For example, we cannot pick the vertex in A_{t-1} which has the largest number of unexplored neighbors, as this choice relies on unexplored edges.

A formal description of the “exploration process”:

  • Initialize A_0=\{v\}, U_0=[n]\backslash \{v\} and R_0=\varnothing.
  • For t\geq 0, we set

        \begin{eqnarray*} 	R_{t+1} &=& R_t\cup \{i_t\},\\ 	A_{t+1} &=& A_t\backslash \{i_t\}\cup \{w\in U_t: \eta_{wi_t}=1\},\\ 	U_{t+1} &=& U_t\backslash \{w\in U_t: \eta_{wi_t}=1\}, \end{eqnarray*}

    where i_t\in A_t is a nonanticipating but otherwise arbitrary choice.

This process stops when there are no more active vertices. It hits each vertex in C_v once and only once. At each time t, we remove one vertex in C_v. So the stopping time is exactly equal to the size of C_v:

    \[ \tau\equiv \inf\{t: |A_t|=0 \} = |C_v|. \]

So, we only need to study the stopping time \tau.
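The exploration process is short to implement. The sketch below (an illustration with names of our choosing, not part of the lecture) runs the process on a sampled adjacency matrix and checks that the stopping time \tau equals |C_v| as computed by an ordinary depth-first search:

```python
import random

def explore(adj, v):
    """Run the exploration process from v; the number of steps until
    the active set empties equals |C_v| (one vertex removed per step)."""
    n = len(adj)
    active = {v}
    unexplored = set(range(n)) - {v}
    t = 0
    while active:
        i = min(active)                    # nonanticipating choice of i_t
        newly = {w for w in unexplored if adj[i][w]}
        active = (active - {i}) | newly    # activate unexplored neighbors
        unexplored -= newly
        t += 1                             # vertex i is removed
    return t

def component_size(adj, v):
    """Reference: size of C_v by depth-first search."""
    n, seen, stack = len(adj), {v}, [v]
    while stack:
        u = stack.pop()
        for w in range(n):
            if adj[u][w] and w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen)

rng = random.Random(1)
n, c = 60, 1.5
adj = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        if rng.random() < c / n:
            adj[i][j] = adj[j][i] = 1
sizes_match = all(explore(adj, v) == component_size(adj, v) for v in range(n))
```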

Recall that \eta_{ij} indicates whether there is an edge between vertices i and j, and \eta_{ij}\overset{\mathrm{i.i.d.}}{\sim} \text{Bernoulli}(\frac{c}{n}). By construction,

    \[ |A_{t+1}| = |A_t| - 1 + \sum_{w\in U_t} \eta_{wi_t}. \]

Now, let's do a thought experiment (wrong, but intuitive). Let's forget for the moment that some sites were previously visited, and assume that in each step all neighbors of i_t are still unvisited (note that when n is very large and t is relatively small, this assumption is reasonable). Then |A_{t+1}|-|A_t|+1 is the sum of n independent \text{Bernoulli}(\frac{c}{n}) variables, which has a \text{Binomial}(n, \frac{c}{n}) distribution. This binomial variable is independent of the past because it only depends on unexplored edges; in addition, its distribution does not depend on |A_t|. Therefore, |A_t| would be a random walk with increment distribution \text{Binomial}(n, \frac{c}{n})-1\approx\text{Poisson}(c)-1. Then, studying |C_v| boils down to studying the first time a Poisson random walk hits zero! Of course, we cannot really ignore previously visited sites, but this rough intuition nonetheless captures the right idea as n\to\infty and will serve as a guiding principle for the proof.

A comparison random walk

The reason that |A_t| is not a random walk is that there are only |U_t| edges (not n) to explore at time t. We can artificially create a random walk by adding n-|U_t| “fictitious” points at each time t as follows.

Let \tilde{\eta}_{v}^t be i.i.d. \text{Bernoulli}(\frac{c}{n}) for t\geq 0, v\in[n], which are independent of (\eta_{ij}). Define

    \[ 	S_0 = 1, \qquad S_{t+1} = S_t - 1 +  	\sum_{w\in U_t}  	\eta_{wi_t} + \sum_{w\in [n]\backslash  	U_t}\tilde{\eta}^t_{w}. \]

(When t\ge\tau, then A_t=\varnothing and thus i_t is undefined; in this case, we simply add all n variables \tilde\eta^t_w.)
Note that \sum_{w\in U_t} \eta_{wi_t} is the number of edges from i_t to U_t. Since we have not explored the edges incident to U_t yet, those edges are independent of all edges explored up to time t (here we use that i_t is nonanticipating). We therefore see that (S_t) is indeed a random walk with increment

    \[ S_{t}- S_{t-1} \sim \text{Binomial}(n, \tfrac{c}{n})-1. \]

Moreover, since all \tilde{\eta}_{w}^t are nonnegative,

    \[ S_{t+1} - S_t \geq |A_{t+1}| - |A_t| \]

as long as t<\tau. It follows that |A_t| is dominated by the genuine random walk S_t, that is,

    \[|A_t| \le S_t \quad\mbox{for all }t\le \tau.\]

We can now obtain bounds on |C_v| by analyzing hitting times of the random walk S_t.
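The coupling between |A_t| and the comparison walk S_t can likewise be checked in code. The sketch below (an illustration, with hypothetical names) runs both processes together on one sample and verifies the domination |A_t|\le S_t up to time \tau:

```python
import random

rng = random.Random(2)
n, c, v = 80, 2.0, 0
p = c / n

# One sample of G(n, c/n).
eta = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        if rng.random() < p:
            eta[i][j] = eta[j][i] = 1

# Run the exploration and the coupled comparison walk together.
active = {v}
unexplored = set(range(n)) - {v}
S, A_sizes, S_vals = 1, [1], [1]
while active:
    i = min(active)                        # nonanticipating choice of i_t
    newly = {w for w in unexplored if eta[i][w]}
    # Real increment: edges from i_t to U_t; fictitious Bernoulli(c/n)
    # padding for the n - |U_t| vertices already touched.
    fict = sum(rng.random() < p for _ in range(n - len(unexplored)))
    S += len(newly) + fict - 1
    active = (active - {i}) | newly
    unexplored -= newly
    A_sizes.append(len(active))
    S_vals.append(S)

dominates = all(a <= s for a, s in zip(A_sizes, S_vals))
```

Since the fictitious variables are nonnegative, the domination holds pathwise, exactly as in the display above.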

The subcritical case c<1

Define the first time the comparison walk hits zero as

    \[T\equiv\inf\{t: S_t=0\}.\]

Since |A_t|\leq S_t for t\le\tau, it is obvious that

    \[|C_v|=\tau\leq T.\]

Now we study T. The intuition is that as \mathbb{E}[S_{t+1}-S_t]=c-1, (S_t) is a random walk with negative drift in the subcritical case c<1. Thus \mathbb{P}[T<\infty]=1, and in fact the hitting time T has nice tails!

Lemma 2 Let c<1 and \alpha=c-1-\log c>0. Then for any positive integer k,

    \[ \mathbb{P}[T\geq k]\leq \frac{1}{c} e^{-\alpha k}. \]

We will prove this lemma below. Using the lemma, we immediately obtain:

Corollary 3 If c<1, then for any a>\frac{1}{\alpha}=\frac{1}{c-1-\log c}

    \[ \mathbb{P}\big[\max_v|C_v|\geq a\log n\big] \xrightarrow{n\to\infty} 0.  \]

Proof. Applying Lemma 2 and the union bound,

    \begin{eqnarray*} \mathbb{P}\big[\max_v|C_v|\geq a\log n\big] &\leq & \sum_{v\in[n]} \mathbb{P}[|C_v|\geq a\log n]\\ &\leq & \sum_{v\in[n]} \mathbb{P}[T\geq a\log n] \\ &\leq & \frac{n}{c} n^{-a\alpha} \to 0. \qquad\square \end{eqnarray*}

This corollary proves part 1 of Theorem 1. In fact, it turns out that the constant \frac{1}{\alpha} is tight: by using the second moment method, one can prove a matching lower bound on \max_v|C_v| (see, for example, the lecture notes by van der Hofstad), which implies that in fact \max_v|C_v|\sim \frac{1}{\alpha}\log n in probability. The proof is not much more difficult, but we prefer to move on to the supercritical case.

Remark. It might seem somewhat surprising that the result we obtain is so sharp, considering that we have blindly replaced |A_t| by the larger quantity S_t. However, in going from |A_t| to S_t we do not lose as much as one might think at first sight. When n is large and t is relatively small, the excess term \sum_{w\in[n]\backslash U_t}\tilde{\eta}^t_w in the definition of S_t is zero with high probability, as most vertices are unexplored and the Bernoulli success probability \frac{c}{n} of the \tilde{\eta}^t_w is very small. With a bit of work, one can show that S_t and |A_t| will actually stick together for times t\lesssim\log n with probability going to one as n\to\infty. Thus, in the subcritical case where the random walk only lives for \sim\log n time steps, nothing is lost in going from |A_t| to S_t, and our rough intuition that |A_t| should behave like a random walk as n\to\infty is vindicated.

To wrap up the subcritical case, it remains to prove the lemma.

Proof of Lemma 2. By the Markov inequality,

    \[ \mathbb{P}[T\geq k] = \mathbb{P}[e^{\alpha T}\geq e^{\alpha k}] \leq e^{-\alpha k} \mathbb{E}[e^{\alpha T}].  \]

It remains to show that \mathbb{E}[e^{\alpha T}]\le\frac{1}{c}, which is a standard exercise in martingale theory.

Recall that

    \[S_t = 1+\sum_{k=1}^t X_k,\]

where X_k are i.i.d. Define the log-moment generating function \phi(\beta)= \log\mathbb{E}[e^{\beta X_k}], and let

    \[ M_t \equiv e^{\beta S_t-\phi(\beta)t}, \qquad \text{for }t\geq 0.  \]

Since e^{\beta S_t}= e^{\beta}\prod_{k=1}^t e^{\beta X_k} and X_t is independent of M_0,M_1,\cdots,M_{t-1},

    \[ 	\mathbb{E}[M_t|M_0,\cdots, M_{t-1}] = M_{t-1}\, 	\mathbb{E}[\tfrac{M_t}{M_{t-1}}|M_0,\cdots, M_{t-1}]  	= M_{t-1}\, \mathbb{E}[e^{\beta X_t-\phi(\beta)}] = M_{t-1}, \]

where we have used \mathbb{E}[e^{\beta X_k - \phi(\beta)}]=1. So (M_t) is a martingale.

Suppose now that \beta>0 and \phi(\beta)<0. Then

    \[\mathbb{E}[e^{-\phi(\beta)T}] = \mathbb{E}\big[\lim_{n\to\infty} M_{T\wedge n}\big] \leq\liminf_{n\to\infty} \mathbb{E}[M_{T\wedge n}]= M_0 = e^{\beta}.\]

The inequality is by Fatou's lemma and the second equality is by the optional stopping theorem. To see the first equality, note that if T<\infty, then S_{T\wedge n}\to S_T=0 and T\wedge n\to T as n\to\infty, while if T=\infty, then S_{T\wedge n}>0 for all n and T\wedge n\to\infty. Thus e^{-\phi(\beta)T}=\lim_{n\to\infty}e^{\beta S_{T\wedge n}-\phi(\beta)(T\wedge n)}=\lim_{n\to\infty}M_{T\wedge n}.

Next, we compute \phi(\beta). Recall that (X_k+1)\sim \text{Binomial}(n, \frac{c}{n}). It has the same distribution as the sum of n i.i.d. \text{Bernoulli}(\frac{c}{n}) variables. For Y\sim\text{Bernoulli}(p), we have \mathbb{E}[e^{\beta Y}]= 1+(e^{\beta}-1)p. Therefore,

    \begin{eqnarray*} -\phi(\beta) &=& - \log \mathbb{E}[e^{\beta X_t}]\\ &=& -\log\big( e^{-\beta} (1+(e^{\beta}-1)\tfrac{c}{n})^n \big)\\ &=& \beta - n\log\big(1+(e^{\beta}-1)\tfrac{c}{n}\big)\\ &\geq& \beta - c(e^{\beta}-1), \end{eqnarray*}

where the last inequality is because -\log(1+x)\geq -x for any x. Now, by setting \beta=-\log c, we obtain that -\phi(\beta)\geq c-1-\log c=\alpha. Thus we have shown \mathbb{E}[e^{\alpha T}]\le\mathbb{E}[e^{-\phi(\beta)T}]\le e^\beta=\frac{1}{c}. \quad\square
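The tail bound of Lemma 2 is easy to test by Monte Carlo: simulate the walk S_t with \text{Binomial}(n,\frac{c}{n})-1 increments and compare the empirical frequency of \{T\ge k\} with \frac{1}{c}e^{-\alpha k}. A sketch (not from the lecture; a finite cap stands in for \infty, and the parameters are arbitrary):

```python
import math
import random

rng = random.Random(3)
n, c = 200, 0.5
alpha = c - 1 - math.log(c)        # alpha = c - 1 - log c > 0 for c < 1

def hitting_time(cap=10000):
    """First time the walk S_t (S_0 = 1, increments distributed as
    Binomial(n, c/n) - 1) hits 0; cap is a stand-in for infinity."""
    s, t = 1, 0
    while s > 0 and t < cap:
        s += sum(rng.random() < c / n for _ in range(n)) - 1
        t += 1
    return t

trials, k = 2000, 10
freq = sum(hitting_time() >= k for _ in range(trials)) / trials
bound = math.exp(-alpha * k) / c   # Lemma 2's bound on P[T >= k]
```

For c=0.5 and k=10 the bound is about 0.29, while the empirical frequency is much smaller; the lemma's bound is valid but not tight for these parameters.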

The supercritical case c>1

The goal of the following lectures is to prove part 2 of Theorem 1. More precisely, we will prove:

Theorem 4 Let c>1. Then

    \[ \frac{\max_v|C_v|}{n}\xrightarrow{n\to\infty} 1 - \rho \]

in probability, where \rho is the smallest positive solution of the equation \rho = e^{c(\rho-1)}. Moreover, there is \beta>0 such that all but one of the components have size \leq \beta\log n, with probability tending to 1 as n\to\infty.

This theorem says that with probability tending to 1, there is a unique giant component whose size is approximately (1-\rho)n, and all other components are small, with size \leq \beta\log n.

Here we provide some vague intuition for this theorem. When c>1, the random walk (S_t) satisfies \mathbb{E}[S_t-S_{t-1}]=c-1>0, i.e., (S_t) has positive drift. Then \mathbb{P}[T<\infty]<1! In fact, the further away it starts from 0, the smaller the probability it will ever hit 0. Consider the two situations:

  • S_t dies quickly: this implies that the component is small.
  • S_t lives long: then it must live very long, as once it gets far away from 0, the probability of returning is very small. This implies that the component must be very large (if we pretend that S_t=|A_t|).

Of course, S_t is not |A_t| (obviously |A_t| eventually hits 0). But the intuition explains that there cannot be components of intermediate size: given any vertex v, either |C_v| is small (\lesssim \log n), or it must get very large (\gtrsim n^{2/3}, say). In fact, we will find that all components of size \geq \beta\log n must grow all the way to \gtrsim n^{2/3}. However, any pair of large components must intersect with high probability, as there are many potential edges between them! Therefore, all vertices v with |C_v|\geq \beta\log n should be in the same giant component. We then show that the number of such vertices is (1-\rho)n with high probability.

Many thanks to Tracy Ke for scribing this lecture!

06. April 2013 by Ramon van Handel
Categories: Random graphs | Comments Off

Next lecture: April 4

Just a reminder that next week (March 21) is Spring Break, while the week after (March 28) there will be no lecture due to the ORFE/PACM colloquium of Elchanan Mossel.

The next Stochastic Analysis Seminar lecture will be on April 4. We will start fresh with a new topic: the study of the giant component of an Erdös-Rényi graph.

15. March 2013 by Ramon van Handel
Categories: Announcement | Comments Off

Lecture 3. Chromatic number

Let {C} be a set of integers that represent colors. A vertex coloring of a graph {G=([n], E)} is an assignment {c\,:\,\{1, \ldots, n\} \rightarrow C} of a color {c(i) \in C} to each vertex {i \in [n]}. Furthermore, a vertex coloring {c} is said to be proper if {c(i)\neq c(j)} for all {(i,j) \in E}. The chromatic number {\chi(G)} of a graph {G} is the smallest number {|C|} of colors needed for a proper vertex coloring to exist. In this lecture, every coloring is a proper vertex coloring. All logs are in base {2}.

Theorem [Bollobas, 1988]
The chromatic number {\chi(G)} of a random graph {G\sim G(n,\frac12)} satisfies

\displaystyle \chi(G) =\frac{n}{2\log n}(1+o_P(1))

1. Basic properties

In this section, we begin by reviewing some basic properties of the chromatic number, and in particular how it relates to independent sets. Recall that an independent set {S \subset [n]} of a graph {G=([n], E)} is a set of vertices such that {i,j \in S \Rightarrow (i,j) \notin E}. Independent sets are sometimes called stable sets. Denote by {\bar G=([n], \bar E)} the complement graph of {G}, where {(i,j) \in \bar E} iff {(i,j) \notin E}. If {S} is an independent set of {G}, then it is a clique in {\bar G}. The size of the largest independent set in {G} is called the independence number of {G} and is denoted by {\alpha(G)}.

Proposition For any graph {G=([n],E)} with independence number {\alpha(G)}, clique number {\omega(G)} and chromatic number {\chi(G)} the following holds

  1. {1\le \chi(G) \le n}
  2. {\chi(G) \ge \omega(G)}
  3. {n \le \chi(G)\alpha(G)}

Proof: Trivial. \Box

2. Chromatic, clique and independence numbers

Our goal is to study the chromatic number of a random graph {G \sim G(n,\frac12)}. It builds upon the clique number that we studied in the last lecture. Indeed, we will use the following observation: if {G \sim G(n,\frac12)}, then so does the complement graph {\bar G} of {G}. It implies that the independence number {\alpha(G)} has the same distribution as the clique number {\omega(G)}. We know from the last lecture that {\omega(G)=(2\log n)(1+o(1))}. Therefore, it follows from the above proposition that

\displaystyle \chi(G) \ge \frac{n}{\alpha(G)}= \frac{n}{2\log n}(1+o(1))\,.

Our main theorem suggests that this simple bound is, in fact, tight. Proving this requires a stronger lower bound on the clique number of a random graph.

Define {m=\lfloor n/\ln^2 n\rfloor}, let {V \subset [n]} be a subset of {|V|=m} vertices, and observe that the induced subgraph {G_V=(V, E \cap V^2)} on the vertices in {V} has distribution {G(m,\frac12)}. Define

\displaystyle \bar \alpha_m=\min_{\substack{V\subset [n]\\ |V|=m}} \alpha(G_{V})\,.

That is: every {m}-vertex induced subgraph of {G} contains an independent set of size at least {\bar \alpha_m}.

Consider now the following coloring procedure to produce a proper coloring. Pull out independent sets of size {\bar \alpha_m} one by one and give each a distinct color. According to the definition of {\bar \alpha_m}, this is possible as long as at least {m} vertices remain. If there are fewer than {m} vertices left, give each remaining vertex a distinct color.

The above algorithm gives one possible way of coloring the graph {G}, and thus the number of colors it uses is an upper bound on the chromatic number {\chi(G)}. Let us now compute this upper bound. By the definition of {\bar \alpha_m}, we have

\displaystyle \chi(G)\le \Big\lceil \frac{n-m}{\bar \alpha_m}\Big\rceil + m \le \frac{n}{\bar \alpha_m} +m\,. \ \ \ \ \ (1)
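The coloring procedure behind (1) can be sketched in code. Here greedily extracted maximal independent sets stand in for the size-{\bar \alpha_m} sets of the argument (so the resulting bound is weaker, but the coloring is still proper); all names are ours:

```python
import random

def color_by_independent_sets(adj):
    """Repeatedly pull out a greedy maximal independent set among the
    remaining vertices and give it a fresh color; returns a proper
    coloring as a list indexed by vertex."""
    n = len(adj)
    color = [-1] * n
    remaining = set(range(n))
    col = 0
    while remaining:
        indep = set()
        for v in sorted(remaining):        # greedy independent set
            if all(not adj[v][u] for u in indep):
                indep.add(v)
        for v in indep:
            color[v] = col
        remaining -= indep
        col += 1
    return color

rng = random.Random(4)
n = 50
adj = [[0] * n for _ in range(n)]          # a sample of G(n, 1/2)
for i in range(n):
    for j in range(i + 1, n):
        if rng.random() < 0.5:
            adj[i][j] = adj[j][i] = 1

coloring = color_by_independent_sets(adj)
proper = all(not (adj[i][j] and coloring[i] == coloring[j])
             for i in range(n) for j in range(i + 1, n))
num_colors = max(coloring) + 1
```

The number of colors used is exactly (number of independent sets pulled out), mirroring the counting in (1).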

Therefore, to find an upper bound on {\chi(G)}, we need to find a lower bound on {\bar \alpha_m} for an appropriately chosen {m}. To that end, observe that

\displaystyle \begin{array}{rcl} \mathbb{P}(\bar \alpha_m<k)&\le& \sum_{\substack{V\subset [n]\\ |V|=m}} \mathbb{P}(\alpha(G_{V})<k)\\ &\le& {n \choose m} \max_{\substack{V\subset [n]\\ |V|=m}}\mathbb{P}(\omega(G_{V})<k)\,. \end{array}

Therefore, we need to prove a bound on {\mathbb{P}(\omega(G_{V})<k)}, uniformly in {V}. This can be done using concentration properties of a suitable martingale.

3. Azuma-Hoeffding inequality

Martingales have useful concentration properties. Specifically the following lemma holds.

Lemma [Azuma-Hoeffding inequality.]
Let {(\Omega, \mathcal{F}, \{\mathcal{F}_i\}_{i \ge 0}, \mathbb{P})} be a filtered space and let {X} be a random variable on {(\Omega, \mathcal{F}, \mathbb{P})}. Assume that the martingale {X_i=\mathbb{E}[X|\mathcal{F}_i], i\ge 0} is such that for all {i \ge 1},

\displaystyle |X_{i}-X_{i-1}|\le 1\,, \qquad \text{a.s.}

Then for any positive integer {N} and any {t>0}, it holds,

\displaystyle \mathbb{P}(X_N-X_0\ge t) \le \exp\Big(-\frac{t^2}{2N}\Big)


\displaystyle \mathbb{P}(X_N-X_0\le -t) \le \exp\Big(-\frac{t^2}{2N}\Big)

Proof: We assume the following inequality due to Hoeffding. For any random variable {Z} such that {\mathbb{E}[Z]=0} and {|Z| \le 1} a.s., and any {s>0}, it holds

\displaystyle \mathbb{E}[e^{sZ}]\le e^{\frac{s^2}{2}}\,.

Now, using a Chernoff bound, we get for any {s>0},

\displaystyle \begin{array}{rcl} \mathbb{P}(X_N-X_0>t)&=&\mathbb{P}\Big(\sum_{i=1}^N\{X_i-X_{i-1}\}>t\Big)\\ &\le& e^{-st}\mathbb{E}\exp\Big(s\sum_{i=1}^N\{X_i-X_{i-1}\}\Big)\\ &=&e^{-st}\mathbb{E}\prod_{i=1}^N\exp\Big(s\{X_i-X_{i-1}\}\Big)\\ &=&e^{-st}\mathbb{E}\prod_{i=1}^N\mathbb{E}\Big[\exp\Big(s\{X_i-X_{i-1}\}\Big)\Big|\mathcal{F}_{i-1}\Big]\\ \end{array}

where the last equality follows by iterating the tower property of conditional expectation. Since {\mathbb{E}[X_i-X_{i-1}|\mathcal{F}_{i-1}]=0} and {|X_i-X_{i-1}|\le 1} a.s., Hoeffding's inequality applies to each conditional expectation, and the above two displays yield that

\displaystyle \mathbb{P}(X_N-X_0>t)\le \exp\Big(\inf_{s>0}\Big\{\frac{Ns^2}{2} -st\Big\}\Big)=\exp\Big(-\frac{t^2}{2N}\Big)\,.

The second part of the proof follows by applying the same argument to the martingale {(-X_i)_{i\ge 0}}. \Box
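The bound is simple to probe numerically on the most basic example, a martingale with {\pm 1} increments (partial sums of random signs). The empirical tail frequency should sit below {e^{-t^2/2N}}; a sketch with arbitrary parameters, not part of the lecture:

```python
import math
import random

rng = random.Random(5)
N, t = 100, 20.0
trials = 5000

def final_value():
    """X_N - X_0 for the simple random walk martingale:
    increments are +/-1 with probability 1/2, so |X_i - X_{i-1}| <= 1."""
    return sum(1 if rng.random() < 0.5 else -1 for _ in range(N))

freq = sum(final_value() >= t for _ in range(trials)) / trials
bound = math.exp(-t * t / (2 * N))   # Azuma-Hoeffding: e^{-t^2/(2N)}
```

With N=100 and t=20 the bound is e^{-2}, roughly 0.135, while the empirical frequency is a few percent, so the inequality holds with room to spare.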

Consider the lexicographic ordering of the {N={n \choose 2}} potential edges and let {E_1, \ldots, E_N \in \{0,1\}} be iid Bernoulli random variables with parameter {1/2} indicating the presence of each edge in a graph {G\sim G(n,\frac12)}. Define the {\sigma}-algebras {\mathcal{F}_i=\sigma(E_1, \ldots, E_i), i=1, \ldots, N}. We are going to apply the Azuma-Hoeffding inequality to the (Doob) martingale {Y_i=\mathbb{E}[Y|\mathcal{F}_i]\,, \quad i=1, \ldots, N}, where {Y} is the maximal number of edge-disjoint {k}-cliques in {G}.

Note first that {|Y_i-Y_{i-1}|\le 1} a.s., since adding/removing one edge can add/remove at most one edge-disjoint {k}-clique. Moreover, this inequality would not hold if we had chosen {Y} to be the number of cliques of size {k} (not necessarily edge-disjoint). Indeed, adding or removing an edge can create or destroy many overlapping cliques.

It follows that,

\displaystyle \begin{array}{rcl} \mathbb{P}[\omega(G)<k]&=&\mathbb{P}[Y=0]\\ &=&\mathbb{P}[Y-\mathbb{E}[Y]=-\mathbb{E}[Y]]\\ &=&\mathbb{P}[Y_N-Y_0=-\mathbb{E}[Y]]\\ \end{array}

Therefore, applying the Azuma-Hoeffding inequality, we get

\displaystyle \mathbb{P}[\omega(G)<k]\le e^{-\frac{(\mathbb{E}[Y])^2}{2N}}\,. \ \ \ \ \ (2)

It remains to show that {\mathbb{E}[Y]} is sufficiently large. Note first that {Y \ge X}, where {X} is the number of {k}-cliques that do not share an edge with any other {k}-clique of the graph.

4. A strong lower bound on the clique number

Proposition. Fix positive integers {m} and

\displaystyle k=(2-\varepsilon+o(1))\log m

and let {X} denote the number of {k} cliques in {G \sim G(m,\frac12)} that do not share an edge with any other {k}-clique of the graph. Then

\displaystyle \mathbb{E}[X]\ge m^{(2-\varepsilon)a+o(1)}

for all {a\in (0,1)}. Here the asymptotic notation {o(1)} is as {m \rightarrow \infty}.

Proof: Define {M=\left\lceil 2^ak2^{\frac{k-1}{2}}\right\rceil\ge 4k} (for {m} large enough) for some {a \in (0,1)} to be chosen later.

Moreover, note that for {m} large enough,

\displaystyle M\le 8m^{1-\frac{\varepsilon}2 +o(1)}\le m

so that {X \ge X'}, where {X'} is the number of {k}-cliques contained in the first {M} vertices that do not share an edge with any other {k}-clique of the graph. The subgraph induced on these {M} vertices is distributed as {G(M,\frac12)}; in the rest of the proof, let {G \sim G(M, \frac12)}.

For any {S \subset [M]} with {|S|=k}, define the indicator:

\displaystyle I_S=1\{S \text{ is a clique of size} \ k \ \text{in} \ G\}.

Moreover for any set {S}, let {Z(S)} denote the number of {k}-cliques in {G}, other than {S} itself, that share at least two vertices with {S}. Observe that

\displaystyle \begin{array}{rcl} \mathbb{E}[X']&=&\mathbb{E}\sum_{\substack{S \subset [M] \\ |S| =k}}I_S1\{Z(S)=0\}\\ &=&{M\choose k}\mathbb{P}(I=1, Z=0)\qquad \text{where } I=I_{[k]}\,, \ Z=Z([k])\\ &=&{M \choose k}\Big(\frac12\Big)^{k \choose 2}\mathbb{P}(Z=0|I=1)\,, \end{array}

We bound {\mathbb{P}(Z=0|I=1)} as follows

\displaystyle \begin{array}{rcl} \mathbb{P}(Z=0|I=1)&=&1-\mathbb{P}(Z\ge 1|I=1)\\ &\ge& 1-\mathbb{E}(Z|I=1)\\ &=&1-\sum_{\substack{S \subset [M], |S|=k, S\neq[k]\\ |S\cap [k]| \ge 2}}\mathbb{P}(I_S=1|I=1)\\ &=&1-\sum_{s=2}^{k-1}\underbrace{{k\choose s}{M-k \choose k-s}\Big(\frac12\Big)^{{k\choose 2}-{s\choose 2}}}_{F_s}\\ \end{array}

The following bound holds for all {2\le s \le k/2},

\displaystyle \begin{array}{rcl} \frac{F_s}{F_2}&=&\frac{(M-2k+2)!}{(M-2k+s)!}\Big[ \frac{(k-2)!}{(k-s)!} \Big]^2\frac2{s!}2^{\frac{(s+1)(s-2)}{2}}\\ &\le& \frac2{s!}\Big(\frac{(k-2)^22^{\frac{s+1}2}}{M-2k+2} \Big)^{s-2}\\ &\le&\frac1{(s-2)!}\Big(\frac{2k^22^{\frac{k/2+1}{2}}}{M} \Big)^{s-2}\qquad \text{for} \ M\ge 4k\\ &\le&\frac1{(s-2)!}(2^{2+a}k2^{-\frac{k}4})^{s-2} \end{array}

It yields

\displaystyle \begin{array}{rcl} \sum_{2\le s \le k/2}\frac{F_s}{F_2}&\le & \exp\Big(2^{2+a}k2^{-\frac{k}4}\Big)=1+o(1)\\ \end{array}

Moreover, for {s> k/2}, {M \ge k}, it holds

\displaystyle \begin{array}{rcl} {M-k \choose k-s}&\le& {M-k \choose s}\frac{s!(M-k-s)!}{(k-s)!(M-2k+s)!}\\ &\le& {M-k \choose s}\Big(\frac{s}{M-k+s}\Big)^k\\ &\le & {M-k \choose s}\Big(\frac{k}{M-k/2}\Big)^{k}\\ \end{array}


and hence

\displaystyle \begin{array}{rcl} F_s&=& {k\choose k-s}{M-k \choose k-s}\Big(\frac12\Big)^{{k\choose 2}-{s\choose 2}}\\ &\le& F_{k-s}\Big(\frac{k}{M-k/2}\Big)^{k}\Big(\frac12\Big)^{{k-s\choose 2}-{s\choose 2}}\\ &\le & F_{k-s}\Big(\frac{k2^{\frac{k-1}2}}{M-k/2}\Big)^{k}\\ &\le & F_{k-s}\Big(\frac{2^{-a}M}{M-k/2}\Big)^{k}\\ &\le &F_{k-s}\,o(1)\,. \end{array}


It follows that

\displaystyle \sum_{k/2\le s \le k-1}\frac{F_s}{F_2}\le \sum_{2\le s \le k/2}\frac{F_s}{F_2}\,o(1)=o(1)\,,


and therefore

\displaystyle \mathbb{P}(Z=0|I=1)\ge 1-F_2(1+o(1))\,.


Finally, we compute {F_2}:

\displaystyle \begin{array}{rcl} F_2&=&2{k \choose 2}{M-k \choose k-2} \frac{1}{2^{k \choose 2}}\\ &=& k^2\Big(\frac{M-k}{k-2}\Big)^{k-2}2^{-\frac{k(k-1)}{2}}(1+o(1))\\ &=& \frac{k^2}{M^2}\Big(\frac{M}{k2^{\frac{k-1}{2}}}\Big)^{k}(1+o(1))\\ &=& \frac{k^22^{ak}}{M^2}(1+o(1))\\ &\le&\frac{2}{2^{2a}}\Big(\frac{2^a}{2}\Big)^k(1+o(1))=o(1)\,, \end{array}

since {2^a<2}.

To conclude the proof, observe that by Stirling's formula, since {k=o(M)},

\displaystyle \begin{array}{rcl} {M \choose k}\Big(\frac12\Big)^{k \choose 2}&\ge&\Big(\frac{M}{k2^{\frac{k-1}{2}}}\Big)^k(1+o(1))\\ &=&2^{ak}(1+o(1))=m^{(2-\varepsilon) a+o(1)}\,. \end{array}

Putting everything together, {\mathbb{E}[X]\ge \mathbb{E}[X']={M\choose k}\big(\frac12\big)^{k \choose 2}\mathbb{P}(Z=0|I=1)\ge m^{(2-\varepsilon)a+o(1)}}. \Box


Let {G \sim G(m, \frac12)} and observe that, since {m=\lfloor n/\ln^2 n\rfloor},

\displaystyle (2-\varepsilon+o(1))\log m=(2-\varepsilon+o(1))\log n

We can therefore apply the previous proposition together with (2) to get

\displaystyle \mathbb{P}[\omega(G)<(2-\varepsilon+o(1))\log n]\le \exp\Big(-\frac{m^{2(2-\varepsilon)a+o(1)}}{n(n-1)}\Big)  

5. Proof of the main theorem


Since

\displaystyle {n \choose m}\le 2^n=2^{m^{1+o(1)}}\,,

we get for {k=(2-\varepsilon+o(1))\log n},

\displaystyle \begin{array}{rcl} \mathbb{P}(\bar \alpha_m<k)&\le &{n \choose m} \exp\Big(-\frac{m^{2(2-\varepsilon)a+o(1)}}{n(n-1)}\Big)\\ &\le& \exp\Big(m^{1+o(1)}(\ln 2)-\frac{m^{2(2-\varepsilon)a+o(1)}}{n(n-1)}\Big)=o(1)\,. \end{array}

Thus {\bar \alpha_m \ge (2\log n)(1+o_P(1))}. Together with (1), it yields

\displaystyle \begin{array}{rcl} \chi(G)&\le&\frac{n}{k}(1+o_P(1))+m \\ &=&\frac{n}{2\log n}(1+o_P(1))+o\Big(\frac{n}{\log n}\Big)\\ &=&\frac{n}{2\log n}(1+o_P(1)) \end{array}

This completes the proof of our theorem.

Lecture and scribing by Philippe Rigollet

14. March 2013 by Philippe
Categories: Random graphs | Comments Off

Lecture 2. Clique number

In this lecture all logs are in base {2}. We will prove the following.

Theorem For the centered and normalized clique number

\displaystyle \frac{\omega\left(G(n, \frac12)\right) - 2 \log n}{\log \log n}\xrightarrow{n\to\infty}-2 \quad\text{in probability.}

That is, as n\to\infty, the clique number of the Erdös-Rényi graph G(n,\frac12) is \omega\left(G(n, \frac12)\right)=2\log n-(2+o(1))\log\log n.

Proof: The proof is divided into two parts. First, we show that the clique number cannot be too big using a union bound. Then we show that the clique number cannot be too small using the second moment method. In the following, G_n denotes an Erdös-Rényi graph G(n,\frac12).

For the first part one has

\displaystyle \begin{array}{rcl} {\mathbb P}(\omega(G_n) \geq k) & = & {\mathbb P}( \exists S \subset [n], \; \text{s.t.} \; |S|=k \; \text{and} \; S \; \text{is a clique in} \; G_n) \\ & \leq & \sum_{S \subset [n], |S|=k} {\mathbb P}(S \; \text{is a clique in} \; G_n) \notag \\ & = & {n \choose k} \left(\frac12\right)^{{k \choose 2}} \\ & \leq & \left(\frac{e n}{k} \right)^k 2^{- \frac{k(k-1)}{2}} \\ & = & 2^{k \left( \log n - \log k + \log e- \frac{k-1}{2}\right)}. \end{array}

Thus, choosing k=2\log n - (2 - \epsilon) \log \log n, we obtain

{\mathbb P}(\omega(G_n) \geq 2\log n - (2 - \epsilon) \log \log n) \le n^{ - \frac{\epsilon}{2} \log \log n + \frac{1}{2} + \log e} \xrightarrow{n \rightarrow \infty} 0

for every {\epsilon >0}, where we used that k\ge\log n for large n.
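The first-moment threshold behind this union bound can be seen numerically: the expected number of k-cliques, {n \choose k}\left(\frac12\right)^{k \choose 2} (which reappears as {\mathbb E} Y_k in the second part below), drops below 1 at a value of k close to, but visibly smaller than, 2\log n. A quick check (illustrative only; working in \log_2 to avoid floating-point overflow):

```python
import math

def log2_expected_cliques(n, k):
    # log2 of E[Y_k] = C(n, k) * 2^{-C(k, 2)}
    return math.log2(math.comb(n, k)) - math.comb(k, 2)

n = 10**6
k = 1
# largest k with E[Y_k] >= 1
while log2_expected_cliques(n, k + 1) >= 0:
    k += 1
print(k, 2 * math.log2(n))  # threshold vs. 2 log2 n
```

The gap between the two printed values is precisely the -(2+o(1))\log\log n correction in the theorem.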

For the second part it is useful to introduce some notation. Define

\displaystyle {X_S = 1\{S \; \text{is a clique in} \; G_n\}},\qquad {Y_k = \sum_{S \subset [n], |S|=k} X_S}.

In particular, {\omega(G_n) < k} iff {Y_k=0}. Thus we want to show that for {k=2\log n - (2 + \epsilon) \log \log n} one has {{\mathbb P}(Y_k = 0) \to 0}. Using the trivial observation that x=0 implies (x-y)^2=y^2, we get

\displaystyle {\mathbb P}(Y_k=0) \leq {\mathbb P}((Y_k - {\mathbb E} Y_k)^2 = ({\mathbb E} Y_k)^2) \leq \frac{\text{Var}(Y_k)}{({\mathbb E} Y_k)^2}

where we have used Markov’s inequality. Note that by linearity of the expectation, {\mathbb E} Y_k = {n \choose k} \left(\frac12\right)^{k\choose 2}. Furthermore, we can write

\displaystyle \text{Var}(Y_k) = \sum_{S \subset [n], |S|=k} \text{Var}(X_S) + \sum_{S, T \subset [n], |S|=|T|=k, S \neq T} \{ {\mathbb E} X_S X_T - ({\mathbb E} X_S) ({\mathbb E} X_T) \}.

As {X_S} are boolean random variables we have {\text{Var}(X_S) \leq {\mathbb E} X_S} and thus

\displaystyle \frac{\sum_{S \subset [n], |S|=k} \text{Var}(X_S)}{\left( {\mathbb E} Y_k \right)^2} \leq \frac{\sum_{S \subset [n], |S|=k} {\mathbb E} X_S}{\left( {\mathbb E} Y_k \right)^2} = \frac{1}{{n \choose k} \left(\frac12\right)^{k\choose 2}},

which tends to {0} for {k=2\log n - (2 + \epsilon) \log \log n} (see the first part of the proof and use the inequality {{n \choose k} \geq \left(\frac{n}{k}\right)^k}). Thus it remains to show that the following quantity tends to {0}:

\displaystyle \frac{\sum_{S, T \subset [n], |S|=|T|=k, S \neq T} \{{\mathbb E} X_S X_T - ({\mathbb E} X_S) ({\mathbb E} X_T)\}}{\left({\mathbb E} Y_k\right)^2} .

First note that, by the independence of the edges, for {S, T} with {|S \cap T| \leq 1} the variables {X_S} and {X_T} are independent, so that in the numerator of the above quantity one can restrict to {S, T} with {|S \cap T| \geq 2}. Now, by symmetry, we have (with {S_0} an arbitrary but fixed subset of {k} vertices)

\displaystyle \begin{array}{rcl} && \sum_{S, T \subset [n], |S|=|T|=k, S \neq T, |S \cap T| \geq 2} {\mathbb E} X_S X_T \\ && = \sum_{S, T \subset [n], |S|=|T|=k, S \neq T, |S \cap T| \geq 2} {\mathbb P}(X_S = 1 \; \text{and} \; X_T = 1) \\ && = \sum_{S, T \subset [n], |S|=|T|=k, S \neq T, |S \cap T| \geq 2} {\mathbb P}(X_S = 1) {\mathbb P}(X_T =1 | X_S = 1) \\ && = \left( \sum_{T \subset [n], |T|=k, T \neq S_0, |S_0 \cap T| \geq 2} {\mathbb P}(X_T = 1 | X_{S_0} = 1) \right) \left(\sum_{S \subset [n], |S|=k} {\mathbb P}(X_S=1) \right) \\ && = \left( \sum_{T \subset [n], |T|=k, T \neq S_0, |S_0 \cap T| \geq 2} {\mathbb P}(X_T = 1 | X_{S_0} = 1) \right) \left({n \choose k} {\mathbb E} X_{S_0} \right) . \end{array}

Thus we are now left with proving that the following quantity goes to {0}:

\displaystyle \frac{\sum_{T \subset [n], |T|=k, T \neq S_0, |S_0 \cap T| \geq 2} {\mathbb P}(X_T = 1 | X_{S_0} = 1)}{{n \choose k} \left(\frac12 \right)^{{k \choose 2}}} . \ \ \ \ \ (1)

Clearly one has

\displaystyle {\mathbb P}(X_T = 1 | X_{S_0} = 1) = \left( \frac12 \right)^{{k \choose 2} - {|T \cap S_0| \choose 2}} ,

which shows that (1) can be rewritten as

\displaystyle \sum_{s=2}^{k-1} \frac{{k \choose s} {n-k \choose k-s}}{{n \choose k}} 2^{{s \choose 2}} . \ \ \ \ \ (2)
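Before bounding (2) analytically, one can check numerically (an illustration, not from the notes) that for a moderate n the sum is already tiny at k = \lfloor 2\log n - 2\log\log n \rfloor; again we work in \log_2 to keep the binomial coefficients in floating-point range:

```python
import math

def overlap_sum(n, k):
    """Evaluate the sum (2): sum over s of C(k,s) C(n-k,k-s) / C(n,k) * 2^C(s,2)."""
    total = 0.0
    for s in range(2, k):
        log2_term = (math.log2(math.comb(k, s))
                     + math.log2(math.comb(n - k, k - s))
                     - math.log2(math.comb(n, k))
                     + math.comb(s, 2))
        total += 2.0 ** log2_term
    return total

n = 10**6
k = int(2 * math.log2(n) - 2 * math.log2(math.log2(n)))
print(k, overlap_sum(n, k))
```

The dominant contribution comes from the smallest overlap s=2, in line with the analysis that follows.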

Now note that since

\displaystyle {n-1 \choose k-1} = \frac{k}{n} {n \choose k} ,

one has

\displaystyle {n-k \choose k-s} \leq {n-s \choose k-s} = \left( \prod_{\alpha=0}^{s-1} \frac{k-\alpha}{n-\alpha} \right) {n \choose k} \leq \left(\frac{k}{n}\right)^s {n \choose k} .
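A quick sanity check of this chain of binomial inequalities for sample values (illustrative only; the values n=100, k=12 are arbitrary):

```python
import math

n, k = 100, 12
for s in range(2, k):
    lhs = math.comb(n - k, k - s)
    mid = math.comb(n - s, k - s)
    rhs = (k / n) ** s * math.comb(n, k)
    # C(n-k, k-s) <= C(n-s, k-s) <= (k/n)^s * C(n, k)
    assert lhs <= mid <= rhs
print("inequalities hold for n=100, k=12 and all s in 2..11")
```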

Using { {k \choose s} \leq \left(\frac{e k}{s}\right)^s} one obtains that (2) is bounded from above by

\displaystyle \sum_{s=2}^{k-1} \left(\frac{ek}{s}\right)^s \left(\frac{k}{n}\right)^s 2^{{s \choose 2}} = \sum_{s=2}^{k-1} 2^{s \left(2 \log k - \log n + \frac{s}{2} - \log s + \log e - \frac12\right)} .

As \frac{s}{2}-\log s is a convex function and as {2 \leq s \leq k}, one has {\frac{s}{2} - \log s \leq \max(0, \frac{k}{2} - \log k)}. Thus for {k} large enough the exponent in the above display is bounded from above by {\log k - \log n + \frac{k}{2} + c}, and for {k=2\log n - (2 + \epsilon) \log \log n} this latter is bounded by {- \frac{\epsilon}{2} \log \log n + c}. Thus we proved that (2) is bounded from above by

\displaystyle \sum_{s=2}^{k-1} 2^{ - s (\frac{\epsilon}{2} \log \log n - c) } ,

which tends to {0} as {n} tends to infinity, concluding the proof. \Box

07. March 2013 by Sebastien Bubeck
Categories: Random graphs | Comments Off
