Lecture 2. Clique number

In this lecture all logs are in base {2}. We will prove the following.

Theorem. The centered and normalized clique number satisfies

\displaystyle \frac{\omega\left(G(n, \frac12)\right) - 2 \log n}{\log \log n}\xrightarrow{n\to\infty}-2 \quad\text{in probability.}

That is, as n\to\infty, the clique number of the Erdős-Rényi graph G(n,\frac12) satisfies \omega\left(G(n, \frac12)\right)=2\log n-(2+o(1))\log\log n, where the o(1) term tends to {0} in probability.
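Before starting the proof, here is a small numerical illustration (not part of the argument). It is a minimal sketch assuming the networkx package is available; the helper clique_number_gnp below is introduced only for this illustration. For moderate n the observed clique number already tracks 2\log n - 2\log\log n reasonably well (logs in base {2}, as above).

```python
# Numerical illustration of the theorem (not part of the proof).
# Assumes networkx is installed; for moderate n, enumerating the maximal
# cliques of G(n, 1/2) with nx.find_cliques is fast enough.
import math
import networkx as nx

def clique_number_gnp(n, p=0.5, seed=None):
    """Sample G(n, p) and return its clique number (illustrative helper)."""
    G = nx.gnp_random_graph(n, p, seed=seed)
    return max(len(c) for c in nx.find_cliques(G))

if __name__ == "__main__":
    for n in (32, 64, 128, 256):
        omega = clique_number_gnp(n, seed=0)
        prediction = 2 * math.log2(n) - 2 * math.log2(math.log2(n))
        print(f"n={n:4d}  omega={omega:2d}  2*log n - 2*log log n = {prediction:.2f}")
```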

Proof: The proof is divided into two parts. First we show, via a union bound, that the clique number cannot be much larger than 2\log n - 2\log\log n; then we show, via the second moment method, that it cannot be much smaller. In the following, G_n denotes an Erdős-Rényi graph G(n,\frac12).

For the first part one has

\displaystyle \begin{array}{rcl} {\mathbb P}(\omega(G_n) \geq k) & = & {\mathbb P}( \exists S \subset [n], \; \text{s.t.} \; |S|=k \; \text{and} \; S \; \text{is a clique in} \; G_n) \\ & \leq & \sum_{S \subset [n], |S|=k} {\mathbb P}(S \; \text{is a clique in} \; G_n) \\ & = & {n \choose k} \left(\frac12\right)^{{k \choose 2}} \\ & \leq & \left(\frac{e n}{k} \right)^k 2^{- \frac{k(k-1)}{2}} \\ & = & 2^{k \left( \log n - \log k + \log e- \frac{k-1}{2}\right)}. \end{array}

Thus, choosing k=2\log n - (2 - \epsilon) \log \log n, we obtain

{\mathbb P}(\omega(G_n) \geq 2\log n - (2 - \epsilon) \log \log n) \le n^{ - \frac{\epsilon}{2} \log \log n + \frac{1}{2} + \log e} \xrightarrow{n \rightarrow \infty} 0

for every {\epsilon >0}. Indeed, for this choice of {k} one has {\log n - \frac{k-1}{2} = (1-\frac{\epsilon}{2})\log\log n + \frac12}, and {\log k \geq \log\log n} since {k \geq \log n} for {n} large enough; thus the exponent in the last display above is at most {k\left(-\frac{\epsilon}{2}\log\log n + \frac12 + \log e\right)}, which for {n} large is negative and hence (using again {k \geq \log n}) at most {\log n \cdot \left(-\frac{\epsilon}{2}\log\log n + \frac12 + \log e\right)}.
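As a quick sanity check (again not part of the proof), one can evaluate the union bound {n \choose k} \left(\frac12\right)^{k \choose 2} numerically. The snippet below is a minimal sketch; the helper union_bound and the choice \epsilon = 1 are purely illustrative, and it reports the base-2 logarithm of the bound, which is more robust than computing the tiny probability directly.

```python
# Evaluate the union bound  P(omega(G_n) >= k) <= C(n,k) * 2^{-C(k,2)}
# at k = ceil(2*log n - (2 - eps)*log log n), all logs in base 2.
# Illustration only; union_bound is a hypothetical helper, not from the lecture.
import math

def union_bound(n, eps=1.0):
    k = math.ceil(2 * math.log2(n) - (2 - eps) * math.log2(math.log2(n)))
    # math.comb is exact; we return log2 of the bound to avoid tiny floats.
    log2_bound = math.log2(math.comb(n, k)) - math.comb(k, 2)
    return k, log2_bound

if __name__ == "__main__":
    for n in (10**3, 10**4, 10**5, 10**6):
        k, log2_bound = union_bound(n)
        print(f"n={n:7d}  k={k:2d}  log2 of the union bound <= {log2_bound:.1f}")
```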

For the second part it is useful to introduce some notation. Define

\displaystyle {X_S = 1\{S \; \text{is a clique in} \; G_n\}},\qquad {Y_k = \sum_{S \subset [n], |S|=k} X_S}.

In particular, {\omega(G_n) < k} iff {Y_k=0}. Thus we want to show that for {k=2\log n - (2 + \epsilon) \log \log n} one has {{\mathbb P}(Y_k = 0) \to 0}. Using the trivial observation that x=0 implies (x-y)^2=y^2, we get

\displaystyle {\mathbb P}(Y_k=0) \leq {\mathbb P}((Y_k - {\mathbb E} Y_k)^2 \geq ({\mathbb E} Y_k)^2) \leq \frac{\text{Var}(Y_k)}{({\mathbb E} Y_k)^2}

where we have used Markov's inequality applied to the nonnegative random variable {(Y_k - {\mathbb E} Y_k)^2} (that is, Chebyshev's inequality). Note that by linearity of expectation, {\mathbb E} Y_k = {n \choose k} \left(\frac12\right)^{k\choose 2}. Furthermore, we can write

\displaystyle \text{Var}(Y_k) = \sum_{S \subset [n], |S|=k} \text{Var}(X_S) + \sum_{S, T \subset [n], |S|=|T|=k, S \neq T} \{ {\mathbb E} X_S X_T - ({\mathbb E} X_S) ({\mathbb E} X_T) \}.

As {X_S} are boolean random variables we have {\text{Var}(X_S) \leq {\mathbb E} X_S} and thus

\displaystyle \frac{\sum_{S \subset [n], |S|=k} \text{Var}(X_S)}{\left( {\mathbb E} Y_k \right)^2} \leq \frac{\sum_{S \subset [n], |S|=k} {\mathbb E} X_S}{\left( {\mathbb E} Y_k \right)^2} = \frac{1}{{n \choose k} \left(\frac12\right)^{k\choose 2}},

which tends to {0} for {k=2\log n - (2 + \epsilon) \log \log n} (use the inequality {{n \choose k} \geq \left(\frac{n}{k}\right)^k} and argue as in the first part of the proof; the computation is spelled out after the next display). Thus it remains to show that the following quantity tends to {0}:

\displaystyle \frac{\sum_{S, T \subset [n], |S|=|T|=k, S \neq T} \{{\mathbb E} X_S X_T - ({\mathbb E} X_S) ({\mathbb E} X_T)\}}{\left({\mathbb E} Y_k\right)^2} .
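For completeness, here is the computation behind the claim that {{\mathbb E} Y_k = {n \choose k} \left(\frac12\right)^{k\choose 2}} tends to infinity for {k=2\log n - (2 + \epsilon) \log \log n}. One has

\displaystyle {n \choose k} \left(\frac12\right)^{k\choose 2} \geq \left(\frac{n}{k}\right)^k 2^{- \frac{k(k-1)}{2}} = 2^{k \left( \log n - \log k - \frac{k-1}{2}\right)} ,

and for this choice of {k} one has {\log n - \frac{k-1}{2} = (1+\frac{\epsilon}{2})\log\log n + \frac12} while {\log k \leq \log\log n + 1} for {n} large, so the exponent is at least {k \left( \frac{\epsilon}{2} \log\log n - \frac12 \right)}, which tends to {+\infty}.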

Returning to the ratio above: by the independence of the edges, for {S, T} with {|S \cap T| \leq 1} the two sets share no potential edge, so {X_S} and {X_T} are independent; hence in the numerator one can restrict to {S, T} with {|S \cap T| \geq 2}, and one may also drop the (negative) term {- ({\mathbb E} X_S) ({\mathbb E} X_T)}. Now, by symmetry (with {S_0} being an arbitrary fixed subset of {k} vertices) we have

\displaystyle \begin{array}{rcl} && \sum_{S, T \subset [n], |S|=|T|=k, S \neq T, |S \cap T| \geq 2} {\mathbb E} X_S X_T \\ && = \sum_{S, T \subset [n], |S|=|T|=k, S \neq T, |S \cap T| \geq 2} {\mathbb P}(X_S = 1 \; \text{and} \; X_T = 1) \\ && = \sum_{S, T \subset [n], |S|=|T|=k, S \neq T, |S \cap T| \geq 2} {\mathbb P}(X_S = 1) {\mathbb P}(X_T =1 | X_S = 1) \\ && = \left( \sum_{T \subset [n], |T|=k, T \neq S_0, |S_0 \cap T| \geq 2} {\mathbb P}(X_T = 1 | X_{S_0} = 1) \right) \left(\sum_{S \subset [n], |S|=k} {\mathbb P}(X_S=1) \right) \\ && = \left( \sum_{T \subset [n], |T|=k, T \neq S_0, |S_0 \cap T| \geq 2} {\mathbb P}(X_T = 1 | X_{S_0} = 1) \right) \left({n \choose k} {\mathbb E} X_{S_0} \right) . \end{array}

Thus we are now left with proving that the following quantity goes to {0}:

\displaystyle \frac{\sum_{T \subset [n], |T|=k, T \neq S_0, |S_0 \cap T| \geq 2} {\mathbb P}(X_T = 1 | X_{S_0} = 1)}{{n \choose k} \left(\frac12 \right)^{{k \choose 2}}} . \ \ \ \ \ (1)

Clearly one has

\displaystyle {\mathbb P}(X_T = 1 | X_{S_0} = 1) = \left( \frac12 \right)^{{k \choose 2} - {|T \cap S_0| \choose 2}} ,

which shows that (1) can be rewritten as

\displaystyle \sum_{s=2}^{k-1} \frac{{k \choose s} {n-k \choose k-s}}{{n \choose k}} 2^{{s \choose 2}} . \ \ \ \ \ (2)
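One can also evaluate (2) exactly for moderate {n} to see that it is indeed small. The following snippet is a minimal sketch (the helper quantity_2 and the choice \epsilon = 1 are purely illustrative), using exact rational arithmetic since the binomial coefficients involved are huge.

```python
# Exact evaluation of the sum (2) for moderate n (illustration only).
# quantity_2 is a hypothetical helper, not part of the lecture.
import math
from fractions import Fraction

def quantity_2(n, eps=1.0):
    """Evaluate sum_{s=2}^{k-1} C(k,s)*C(n-k,k-s)/C(n,k) * 2^{C(s,2)}
    at k = floor(2*log n - (2+eps)*log log n), logs in base 2."""
    k = math.floor(2 * math.log2(n) - (2 + eps) * math.log2(math.log2(n)))
    total = Fraction(0)
    for s in range(2, k):
        total += Fraction(math.comb(k, s) * math.comb(n - k, k - s),
                          math.comb(n, k)) * 2 ** math.comb(s, 2)
    return k, float(total)

if __name__ == "__main__":
    for n in (10**3, 10**4, 10**5):
        k, val = quantity_2(n)
        print(f"n={n:6d}  k={k:2d}  quantity (2) ~= {val:.3g}")
```

Returning to the proof, we now bound (2).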

Now note that since

\displaystyle {n-1 \choose k-1} = \frac{k}{n} {n \choose k} ,

one has

\displaystyle {n-k \choose k-s} \leq {n-s \choose k-s} = \left( \prod_{\alpha=0}^{s-1} \frac{k-\alpha}{n-\alpha} \right) {n \choose k} \leq \left(\frac{k}{n}\right)^s {n \choose k} .

Using { {k \choose s} \leq \left(\frac{e k}{s}\right)^s} one obtains that (2) is bounded from above by

\displaystyle \sum_{s=2}^{k-1} \left(\frac{e k}{s}\right)^s \left(\frac{k}{n}\right)^s 2^{{s \choose 2}} = \sum_{s=2}^{k-1} 2^{s \left(2 \log k - \log n + \frac{s}{2} - \log s + \log e - \frac12\right)} .

As \frac{s}{2}-\log s is a convex function of {s} and {2 \leq s \leq k}, one has {\frac{s}{2} - \log s \leq \max(0, \frac{k}{2} - \log k)}. Thus for {k} large enough the term in parentheses in the above exponent is bounded from above by {\log k - \log n + \frac{k}{2} + c} for some numerical constant {c}. For {k=2\log n - (2 + \epsilon) \log \log n} one has {\frac{k}{2} = \log n - (1+\frac{\epsilon}{2}) \log \log n} and {\log k \leq \log \log n + 1} for {n} large, so this latter quantity is bounded by {- \frac{\epsilon}{2} \log \log n + c} (for a possibly larger numerical constant, still denoted {c}). Thus we proved that (2) is bounded from above by

\displaystyle \sum_{s=2}^{k-1} 2^{ - s (\frac{\epsilon}{2} \log \log n - c) } ,

which tends to {0} as {n} tends to infinity, concluding the proof. \Box

07. March 2013 by Sebastien Bubeck