## Giant component: final remarks

The past three lectures were devoted to the giant component theorem:

**Theorem.** Let $C_v$ be the connected component of the Erdős–Rényi graph $G(n, \lambda/n)$ that contains the vertex $v$.

- If $\lambda < 1$, then $\max_v |C_v| / \log n \to 1/I_\lambda$ in probability, where $I_\lambda = \lambda - 1 - \log\lambda$.
- If $\lambda > 1$, then $\max_v |C_v| / n \to \zeta_\lambda$ in probability, for some constant $\zeta_\lambda > 0$.
- If $\lambda = 1$, then $\max_v |C_v| / n^{2/3}$ is bounded in distribution.

We proved only the first (subcritical) and second (supercritical) cases; our presentation was largely inspired by the treatments of Janson, Łuczak, and Ruciński and of Durrett. We have omitted the critical case, however, as the last two lectures of the semester will be devoted to another topic. The goal of this post is to provide some final remarks and references on the giant component theorem.
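The three regimes of the theorem are easy to observe numerically. The following small simulation is purely illustrative (it is not from the lectures; the graph size $n = 2000$, the random seed, and the helper functions are my own choices): it samples $G(n, \lambda/n)$ for a subcritical, critical, and supercritical value of $\lambda$ and reports the size of the largest component.

```python
import math
import random

random.seed(7)

def gnp_edges(n, p):
    """Sample the edge set of G(n, p) by geometric skipping: walk through
    the n*n ordered-pair slots, jumping ahead by Geometric(p) gaps, and
    keep only slots with i < j (each unordered pair appears once)."""
    edges, log_q, k = [], math.log(1.0 - p), -1
    while True:
        u = 1.0 - random.random()          # u in (0, 1], avoids log(0)
        k += 1 + int(math.log(u) / log_q)  # jump to the next kept slot
        if k >= n * n:
            break
        i, j = divmod(k, n)
        if i < j:
            edges.append((i, j))
    return edges

def largest_component(n, edges):
    """Size of the largest connected component, via union-find."""
    parent, size = list(range(n)), [1] * n

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            if size[ru] < size[rv]:
                ru, rv = rv, ru
            parent[rv] = ru
            size[ru] += size[rv]
    return max(size[find(v)] for v in range(n))

n = 2000
sizes = {}
for lam in (0.5, 1.0, 1.5):
    sizes[lam] = largest_component(n, gnp_edges(n, lam / n))
    print(f"lambda = {lam}: largest component has {sizes[lam]} vertices")
```

On a typical run the subcritical component has a few dozen vertices (order $\log n$), the critical one a few hundred (order $n^{2/3} \approx 159$), and the supercritical one contains a positive fraction of all vertices (roughly $\zeta_{1.5}\, n \approx 1166$, since $\zeta_{1.5} \approx 0.58$ solves $\zeta = 1 - e^{-1.5\zeta}$).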

**Retrospective**

At first sight, the “double jump” in the giant component theorem looks quite shocking. In hindsight, however, this does not seem quite so miraculous, as it mirrors an elementary phenomenon that is covered in many introductory probability courses: given a (nice) random walk $(S_t)_{t \ge 0}$ with initial condition $S_0 = s$ for some $s > 0$, define the hitting time $\tau = \inf\{t \ge 0 : S_t \le 0\}$. Then there are three cases:

- If $S_t$ has *negative drift*, then $\tau < \infty$ a.s. and $\mathbf{E}[\tau] < \infty$. In fact, the random variable $\tau$ has a light (exponential) tail.
- If $S_t$ has *positive drift*, then $\mathbf{P}[\tau = \infty] > 0$.
- If $S_t$ has *zero drift*, then $\tau < \infty$ a.s. but $\mathbf{E}[\tau] = \infty$. That is, the random variable $\tau$ has a heavy tail.
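This trichotomy can be checked by simulation in the simplest example, a $\pm 1$ random walk started at $S_0 = 1$ with up-probability $p$, which has negative, zero, or positive drift according as $p < 1/2$, $p = 1/2$, or $p > 1/2$. The sketch below is my own illustration (the horizon of 1000 steps and the 2000 trials are arbitrary choices): it estimates the probability that the walk has not yet hit $0$ by the end of the horizon.

```python
import random

random.seed(1)

def survival_fraction(p_up, horizon=1000, trials=2000):
    """Fraction of +/-1 random walks started at S_0 = 1 that have not
    hit 0 within `horizon` steps, estimated over `trials` runs."""
    alive = 0
    for _ in range(trials):
        s = 1
        for _ in range(horizon):
            s += 1 if random.random() < p_up else -1
            if s == 0:
                break
        else:
            alive += 1  # the walk never hit 0 within the horizon
    return alive / trials

frac = {p: survival_fraction(p) for p in (0.4, 0.5, 0.6)}
for p in sorted(frac):
    print(f"p_up = {p}: estimated P(tau > 1000) = {frac[p]:.3f}")
```

For $p = 0.4$ essentially no walk survives (the exponential tail of $\tau$); for $p = 0.5$ a small fraction of order $1/\sqrt{1000} \approx 0.03$ survives (the heavy tail); for $p = 0.6$ roughly a third of the walks never return, consistent with $\mathbf{P}[\tau = \infty] = 1 - 0.4/0.6 = 1/3$ for this walk.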

This “double jump” in the behavior of the hitting probabilities of a random walk is directly analogous to the behavior of the connected components of an Erdős–Rényi graph, and this was indeed the basic idea behind the proofs given in the previous lectures. Of course, it remains a bit of a miracle that the random walk approximation of the exploration process, which only holds for small times, is sufficiently powerful that it describes so completely the behavior of the random graph.

**The critical case**

In the *subcritical* case $\lambda < 1$, the size of the largest component is of order $\log n$ because the hitting time of a random walk with negative drift has an *exponential tail*: that is, we proved

$$
\mathbf{P}\Bigl[\max_v |C_v| \ge t\Bigr] \le n\,\mathbf{P}[|C_v| \ge t] \le n\,e^{-t I_\lambda},
$$

which goes to zero as $n \to \infty$ for $t = a \log n$ with $a > 1/I_\lambda$.

Similarly, we would expect that we can obtain the size of the largest component in the *critical* case $\lambda = 1$ if we understand the *heavy tail* behavior of the hitting time of a random walk with zero drift. This is in fact the case. Indeed, when there is zero drift, one can show that

$$
\mathbf{P}[|C_v| \ge t] \lesssim \frac{1}{\sqrt{t}}.
$$
The crude union bound argument used above now does not give the correct answer, but only a slightly better argument is needed. Indeed, if the largest component has size at least $t$, then at least $t$ vertices must lie in components of size at least $t$, so that by Markov's inequality

$$
\mathbf{P}\Bigl[\max_v |C_v| \ge t\Bigr]
= \mathbf{P}\bigl[\#\{v : |C_v| \ge t\} \ge t\bigr]
\le \frac{n\,\mathbf{P}[|C_v| \ge t]}{t}
\lesssim \frac{n}{t^{3/2}}.
$$

Choosing $t = K n^{2/3}$ makes the right-hand side of order $K^{-3/2}$, so $\max_v |C_v| = O(n^{2/3})$ in probability. With some further work, a corresponding lower bound can also be proved. See the paper by Nachmias and Peres or the notes by van der Hofstad for the details.
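The $n^{2/3}$ scaling can also be seen directly in simulation. The experiment below is illustrative only (the sizes $n \in \{400, 1600\}$, the 8 trials per size, and the helper functions are my own choices): it averages the largest component size of the critical graph $G(n, 1/n)$ and normalizes by $n^{2/3}$.

```python
import math
import random

random.seed(3)

def gnp_edges(n, p):
    """Sample the edge set of G(n, p) by geometric skipping over the
    n*n ordered-pair slots, keeping only slots with i < j."""
    edges, log_q, k = [], math.log(1.0 - p), -1
    while True:
        u = 1.0 - random.random()          # u in (0, 1]
        k += 1 + int(math.log(u) / log_q)  # Geometric(p) jump
        if k >= n * n:
            break
        i, j = divmod(k, n)
        if i < j:
            edges.append((i, j))
    return edges

def largest_component(n, edges):
    """Size of the largest connected component, via union-find."""
    parent, size = list(range(n)), [1] * n

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            if size[ru] < size[rv]:
                ru, rv = rv, ru
            parent[rv] = ru
            size[ru] += size[rv]
    return max(size[find(v)] for v in range(n))

ratio = {}
for n in (400, 1600):
    trials = 8
    avg = sum(largest_component(n, gnp_edges(n, 1.0 / n))
              for _ in range(trials)) / trials
    ratio[n] = avg / n ** (2 / 3)
    print(f"n = {n}: avg largest component / n^(2/3) = {ratio[n]:.2f}")
```

If the largest critical component grew linearly in $n$, the normalized ratio would grow like $n^{1/3}$ as $n$ quadruples; instead it stays of order one, consistent with the $n^{2/3}$ scaling.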

It turns out that $\max_v |C_v| / n^{2/3}$ in the critical case is not only bounded in probability, but in fact converges weakly to some limiting distribution. This distribution, and much more, is beautifully described by Aldous in terms of Brownian excursions. This is an interesting example of the application of stochastic analysis to discrete probability; unfortunately, we do not have the time to cover it.

In a different direction, it turns out that various additional phase transitions appear when we consider a finer scaling, for example, in the “critical window” $\lambda = 1 + t\,n^{-1/3}$. For an overview of the various transitions, see, for example, section 11.1 in Alon and Spencer.

**Connectivity threshold**

Rather than considering the size of the largest component, one could ask when the *entire* Erdős–Rényi graph is connected. Note that when $\lambda > 1$ is a fixed constant, the fraction $1 - \zeta_\lambda$ of vertices outside the giant component is always strictly positive, so the graph is not connected. Therefore, in order for the entire graph to be connected, we must let $\lambda \to \infty$ as $n \to \infty$ (that is, the expected number of edges must grow superlinearly in $n$). It turns out that the appropriate scaling for this question is $p = \lambda \log n / n$, and another phase transition arises here.

**Theorem.** Let $p = \lambda \log n / n$. If $\lambda > 1$, then the Erdős–Rényi graph is connected with probability tending to one as $n \to \infty$. If $\lambda < 1$, the graph is connected with probability tending to zero as $n \to \infty$.

To get some intuition, consider the probability that a given vertex $v$ is isolated (that is, disconnected from every other vertex):

$$
\mathbf{P}[v \text{ is isolated}] = (1-p)^{n-1}.
$$

Thus for $p = \lambda \log n / n$, we have

$$
\mathbf{E}\bigl[\#\{\text{isolated vertices}\}\bigr] = n\,(1-p)^{n-1} \approx n\,e^{-\lambda \log n} = n^{1-\lambda}.
$$

In particular, if $\lambda < 1$, then the expected number of isolated vertices diverges, so there must exist an isolated vertex with positive probability as $n \to \infty$, in which case the graph is not connected (in fact, it is not hard to show that the variance of the number of isolated vertices is of the same order as its mean, so that the probability that the graph is connected tends to zero). Somewhat miraculously, it turns out that when there are no isolated vertices, then the graph must already be connected, so that we do indeed obtain the sharp transition described in the Theorem above. For a proof of this fact (by a clever combinatorial argument) see, for example, the lecture notes by van der Hofstad. Alternatively, one can use a random walk argument entirely in the spirit of the proofs in the previous lectures to prove that the random graph is connected for $\lambda > 1$: by running simultaneous exploration processes from different vertices as we did in the proof of the supercritical case, one can show that *all* connected components must intersect when $\lambda > 1$, and thus the entire graph must be connected. See section 2.8 in Durrett for such an argument.
In particular, if , then there must exist an isolated vertex with positive probability as , in which case the graph is not connected (in fact, it is not hard to show that the variance of the number of isolated components is of the same order as its mean, so that the probability that the graph is connected tends to zero). Somewhat miraculously, it turns out that when there are no isolated vertices, then the graph must already be connected, so that we do indeed obtain the sharp transition described in the Theorem above. For a proof of this fact (by a clever combinatorial argument) see, for example, the lecture notes by van der Hofstad. Alternatively, one can use a random walk argument entirely in the spirit of the proofs in the previous lectures to prove that the random graph is connected for : by running simultaneous exploration processes from different vertices as we did in the proof of the supercritical case, one can show that *all* connected components must intersect when and thus the entire graph must be connected. See section 2.8 in Durrett for such an argument.