Welcome to the ‘I’m a bandit’ blog! If you are curious about this title, you can check my newly published book. However, despite its name, this blog will not be about bandits, but rather about more general problems in optimization, probability, and statistics. To kick off the blog I would like to recall one of the most famous results in the theory of random graphs: the largest clique in an Erdős–Rényi random graph is of logarithmic size. This result will form the basis for my second blog post, which should appear next week, where I will discuss more recent results on the clique number of random geometric graphs in high dimension.
In this post (and the next one) we consider graphs with vertices indexed by $[n] = \{1, \dots, n\}$. A random graph is defined by the random variable $E \subseteq \{\{i,j\} : i, j \in [n], i \neq j\}$ (the random set of edges). We shall be interested in the clique number $\omega(G)$ of a graph $G = ([n], E)$, which is the size of the largest clique in $G$, that is

$$ \omega(G) = \max \left\{ |S| : S \subseteq [n] \text{ and } \{i,j\} \in E \text{ for all } i \neq j \in S \right\}. $$

We will consider sequences of random graphs $G_n$ (on $n$ vertices), and we will study the behavior of the random variables $\omega(G_n)$ as $n$ tends to infinity.
The simplest model of random graphs was put forth by Erdős and Rényi around 1960. The random variable $E$ in an Erdős–Rényi random graph of size $n$ is defined by the following two properties (a short simulation sketch follows the list):
- (a) For any pair $\{i,j\}$, $\mathbb{P}(\{i,j\} \in E) = 1/2$.
- (b) The events $\{\{i,j\} \in E\}$, over all pairs $\{i,j\}$, are mutually independent.
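To make the definition concrete, here is a minimal simulation sketch in Python (assuming the `networkx` library, which is not part of the original post):

```python
import networkx as nx

n = 100
# Properties (a) and (b): each of the C(n,2) possible edges is present
# independently with probability 1/2.
g = nx.erdos_renyi_graph(n, 0.5, seed=0)

# Clique number: the size of the largest clique. nx.find_cliques
# enumerates the maximal cliques (exponential time in the worst case,
# but fine at this size).
omega = max(len(c) for c in nx.find_cliques(g))
print(omega)  # typically 9 or 10 here; Theorem 1 below is about the growth rate
```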
The clique number of Erdős–Rényi random graphs is very well understood. In this post I would like to prove the following result, which gives the asymptotic order of magnitude of the clique number.
Theorem 1 The normalized clique number $\omega(G_n) / \log_2 n$ tends to $2$ in probability.
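Unpacking the definition of convergence in probability, the statement reads:

$$ \forall \epsilon > 0, \qquad \mathbb{P} \left( \left| \frac{\omega(G_n)}{\log_2 n} - 2 \right| > \epsilon \right) \xrightarrow[n \to \infty]{} 0, $$

and the two parts of the proof below establish exactly the two corresponding one-sided bounds.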
Proof: The proof is divided into two parts. First, using a union bound, we show that the clique number cannot be too big. Then, using the second moment method, we show that the clique number cannot be too small.
For the first part one has

$$ \mathbb{P}(\omega(G_n) \geq k) \leq \binom{n}{k} 2^{-\binom{k}{2}} \leq \left( n \, 2^{-(k-1)/2} \right)^k. $$

Thus, for $k = (2+\epsilon) \log_2 n$, $\mathbb{P}(\omega(G_n) \geq k) \to 0$.
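To spell out the last step: for $k = (2+\epsilon) \log_2 n$,

$$ n \, 2^{-(k-1)/2} = \sqrt{2} \, n \, 2^{-k/2} = \sqrt{2} \, n \cdot n^{-(2+\epsilon)/2} = \sqrt{2} \, n^{-\epsilon/2} \longrightarrow 0, $$

so the union bound $\left( n \, 2^{-(k-1)/2} \right)^k$ indeed tends to $0$.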
For the second part it is useful to introduce a bit of notation. Let $X_S = \mathbb{1}\{S \text{ is a clique in } G_n\}$, and $Y_k = \sum_{S \subseteq [n] : |S| = k} X_S$. In particular we have that $\omega(G_n) < k$ iff $Y_k = 0$. Thus we want to show that for $k = (2-\epsilon) \log_2 n$ one has $\mathbb{P}(Y_k = 0) \to 0$. Now, using a simple rewriting and Markov's inequality (on the event $\{Y_k = 0\}$ one has $(Y_k - \mathbb{E} Y_k)^2 = (\mathbb{E} Y_k)^2$, which gives the first inequality; the second is Markov's inequality applied to the nonnegative variable $(Y_k - \mathbb{E} Y_k)^2$), one obtains

$$ \mathbb{P}(Y_k = 0) \leq \mathbb{P} \left( (Y_k - \mathbb{E} Y_k)^2 \geq (\mathbb{E} Y_k)^2 \right) \leq \frac{\mathrm{Var}(Y_k)}{(\mathbb{E} Y_k)^2}. $$
Note that by linearity of the expectation, $\mathbb{E} Y_k = \binom{n}{k} 2^{-\binom{k}{2}}$. Furthermore we can also write

$$ \mathrm{Var}(Y_k) = \sum_{S, T} \left( \mathbb{E}[X_S X_T] - \mathbb{E} X_S \, \mathbb{E} X_T \right). $$

Since the $X_S$ are boolean random variables we have

$$ \sum_{S} \left( \mathbb{E}[X_S^2] - (\mathbb{E} X_S)^2 \right) \leq \sum_S \mathbb{E} X_S = \mathbb{E} Y_k, $$

and thus

$$ \frac{\mathrm{Var}(Y_k)}{(\mathbb{E} Y_k)^2} \leq \frac{1}{\mathbb{E} Y_k} + \frac{\sum_{S \neq T} \left( \mathbb{E}[X_S X_T] - \mathbb{E} X_S \, \mathbb{E} X_T \right)}{(\mathbb{E} Y_k)^2}, $$

and the first term $1 / \mathbb{E} Y_k$ tends to $0$ for $k = (2-\epsilon) \log_2 n$ (the computation is similar to the first part of the proof). Thus it remains to show that the following quantity tends to $0$:

$$ \frac{\sum_{S \neq T} \left( \mathbb{E}[X_S X_T] - \mathbb{E} X_S \, \mathbb{E} X_T \right)}{(\mathbb{E} Y_k)^2}. $$
First note that, by the independence of the edges, for $S, T$ with $|S \cap T| \leq 1$ the variables $X_S$ and $X_T$ are independent (they involve disjoint sets of edges), so that in the numerator of the above quantity one can restrict to pairs $S, T$ with $|S \cap T| \geq 2$. Now, by an elementary counting argument, we have (with $S_0$ being an arbitrary subset of $k$ vertices, and using the symmetry between the sets $S$)

$$ \sum_{S \neq T : |S \cap T| \geq 2} \mathbb{E}[X_S X_T] = \binom{n}{k} \sum_{i=2}^{k-1} \binom{k}{i} \binom{n-k}{k-i} \, 2^{-2\binom{k}{2} + \binom{i}{2}}. $$
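For completeness, here is the counting behind this display: fix $S_0$; a set $T$ with $|T \cap S_0| = i$ can be chosen in $\binom{k}{i} \binom{n-k}{k-i}$ ways, and for such a $T$ the two cliques involve $2\binom{k}{2} - \binom{i}{2}$ distinct edges (the $\binom{i}{2}$ edges inside $S_0 \cap T$ are shared), each present independently with probability $1/2$, so that

$$ \mathbb{E}[X_{S_0} X_T] = 2^{-2\binom{k}{2} + \binom{i}{2}}. $$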
Thus we are now left with proving that the following quantity goes to $0$:

$$ \sum_{i=2}^{k-1} \frac{\binom{k}{i} \binom{n-k}{k-i}}{\binom{n}{k}} \, 2^{\binom{i}{2}}. \qquad (1) $$

Clearly one has

$$ \frac{\binom{n-k}{k-i}}{\binom{n}{k}} \leq \frac{k!}{(k-i)!} \cdot \frac{n^{k-i}}{(n-k)^k} \leq (1 + o(1)) \left( \frac{k}{n} \right)^i, $$

which shows that (1) can be bounded, up to a $(1+o(1))$ factor, by

$$ \sum_{i=2}^{k-1} \binom{k}{i} \left( \frac{k}{n} \right)^i 2^{\binom{i}{2}} \leq \sum_{i=2}^{k-1} \left( \frac{k^2 \, 2^{(i-1)/2}}{n} \right)^i. $$

An unpleasant calculation then shows that this quantity tends to $0$ for $k = (2-\epsilon) \log_2 n$, which finishes the proof. $\Box$
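As a numerical sanity check (not from the original post), one can estimate $\omega(G_n) / \log_2 n$ by simulation; note that the convergence is quite slow, because of the second-order term $-2 \log_2 \log_2 n$ in $\omega(G_n)$:

```python
import math
import statistics
import networkx as nx

def omega(n: int, seed: int) -> int:
    """Clique number of one sample of the Erdos-Renyi graph G(n, 1/2)."""
    g = nx.erdos_renyi_graph(n, 0.5, seed=seed)
    return max(len(c) for c in nx.find_cliques(g))

for n in (16, 32, 64, 128):
    # Average omega(G_n) / log2(n) over a few independent samples.
    ratio = statistics.mean(omega(n, s) / math.log2(n) for s in range(20))
    # The limit is 2, but at these sizes the ratio still sits well below 2.
    print(f"n={n:4d}  omega/log2(n) ~ {ratio:.2f}")
```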
By Abdullah July 29, 2015 - 1:56 pm
In the first line of rewriting P(Y_k=0) with Markov's inequality:
how is P(Y_k=0) *<=* P((Y_k-EY_k)^2 >= (EY_k)^2)? Would *=* be more precise?
By Xin January 17, 2013 - 8:00 am
When you say a sequence of graphs G_n, do you mean that G_n and G_{n+1} are independent?
Have you considered the case where G_n is a subgraph of G_{n+1}, that is, you grow the graph on itself?
Thank you!
By Sebastien Bubeck January 17, 2013 - 8:18 am
In fact it does not matter for what I presented: the distribution of G_{n+1} is the same in the two cases that you describe.
Of course there would be a difference between the two models that you propose if I were to look at events such as ‘does G_{n+1} contain G_n as a subgraph’, but usually this is not the type of question one is interested in.