Lectures 7/8. Games on random graphs

The following two lectures by Rene Carmona are on games on random graphs.
Many thanks to Patrick Rebeschini for scribing these lectures!

In the following we discuss some results from the paper “Connectivity and equilibrium in random games” by Daskalakis, Dimakis, and Mossel. We define a random game on a random graph and we characterize the graphs that are likely to exhibit pure Nash equilibria for this game. We show that if the random graph is drawn from the Erdös-Rényi distribution, then in the high connectivity regime the law of the number of pure Nash equilibria converges to a Poisson distribution as the size of the graph increases.

Let G=(V,E) be a simple (that is, undirected and with no self-edges) graph, and for each v\in V denote by N(v) the set of neighbors of v, that is, N(v):=\{v'\in V : (v,v')\in E\}. We think of each vertex in V as a player in the game that we are about to introduce. At the same time, we think of each edge (v,v')\in E as a strategic interaction between players v and v'.

Definition (Game on a graph). For each v\in V let S_v represent the set of strategies for player v, assumed to be a finite set. We naturally extend this definition to include families of players: for each A\subseteq V, let S_A= \times_{v\in A} S_v be the set of strategies for each player in A. For each v\in V, denote by u_v:  S_v\times S_{N(v)}\ni(\sigma_v,\sigma_{N(v)})\rightarrow u_v(\sigma_v,\sigma_{N(v)}) \in \mathbb{R} the reward function for player v. A game is a collection (S_v,u_v)_{v\in V}.

The above definition describes a game that is static, in the sense that the game is played only once, and local, in the sense that the reward function of each player depends only on its own strategy and on the strategies of the players in its neighborhood. We now introduce the notion of pure Nash equilibrium.

Definition (Pure Nash equilibrium). We say that \sigma \in S_V is a pure Nash equilibrium (PNE) if for each v\in V we have

    \[u_v(\sigma_v,\sigma_{N(v)}) \ge u_v(\tau,\sigma_{N(v)}) 	\qquad\text{for each $\tau\in S_v$}.\]

A pure Nash equilibrium represents a state where no player can be better off by changing his own strategy if he is the only one who is allowed to do so. In order to investigate the existence of a pure Nash equilibrium it suffices to study the best response function defined below.

Definition (Best response function). Given a reward function u_v for player v\in V, we define the best response function \operatorname{BR}_v for v as

    \[\operatorname{BR}_v(\sigma_v,\sigma_{N(v)}) := 	\begin{cases}  		1	& \text{if } \sigma_v\in\arg\sup_{\tau\in S_v} u_v(\tau,\sigma_{N(v)}),\\ 		0	& \text{otherwise}. 	\end{cases}\]

Clearly, \sigma is a pure Nash equilibrium if and only if \operatorname{BR}_v(\sigma_v,\sigma_{N(v)})=1 for each v\in V. We now define the type of random games that we will be interested in; in order to do so, we need to specify the set of strategies and the reward function for each player.

Definition (Random game on a fixed graph). For a graph G=(V,E) and an atomless probability measure \mu on \mathbb{R}, let \mathcal{D}_{G,\mu} be the associated random game defined as follows:

  1. S_v=\{0,1\} for each v\in V;
  2. \{ u_v(\sigma_v,\sigma_{N(v)}) \}_{v\in V, \sigma_v \in S_v, \sigma_{N(v)} \in S_{N(v)}} is a collection of independent identically distributed random variables with distribution \mu.

Remark. For each game \mathcal{D}_{G,\mu} the family \{\operatorname{BR}_v(0,\sigma_{N(v)})\}_{v\in V, \sigma_{N(v)}\in S_{N(v)}} is a collection of independent random variables that are uniformly distributed in \{0,1\}, and for each v\in V, \sigma_{N(v)}\in S_{N(v)} we have \operatorname{BR}_v(1,\sigma_{N(v)}) = 1-\operatorname{BR}_v(0,\sigma_{N(v)}) almost surely. In fact, note that \operatorname{BR}_v(0,\sigma_{N(v)})=1 if and only if u_v(0,\sigma_{N(v)}) \ge u_v(1,\sigma_{N(v)}) and this event has probability 1/2 since the two random variables appearing on the two sides of the inequality are independent with the same law \mu and \mu is atomless. As far as the analysis of the existence of pure Nash equilibria is concerned, we could take the present notion of best response functions as the definition of our random game on a fixed graph. In fact, note that the choice of \mu in \mathcal{D}_{G,\mu} does not play a role in our analysis, and we would obtain the same results by choosing different (atomless) distributions for sampling (independently) the reward function of each player.
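To make these definitions concrete, here is a small computational sketch (ours, not from the lectures; written in Python, with illustrative names such as sample_random_game). It samples a random game \mathcal{D}_{G,\mu} on a fixed graph, taking \mu to be the uniform distribution on [0,1] as one admissible atomless choice, and checks whether a given strategy profile is a pure Nash equilibrium through the best response functions.

    import itertools
    import random

    def sample_random_game(adj):
        """Sample a random game D_{G,mu} on a fixed graph.

        adj maps each vertex v to the tuple of its neighbors N(v).
        Returns payoff tables: payoff[v][(s_v, s_Nv)] are i.i.d. draws from mu,
        here taken to be Uniform(0,1); any atomless law gives the same analysis.
        """
        payoff = {}
        for v, nbrs in adj.items():
            payoff[v] = {}
            for s_v in (0, 1):
                for s_nbrs in itertools.product((0, 1), repeat=len(nbrs)):
                    payoff[v][(s_v, s_nbrs)] = random.random()
        return payoff

    def best_response(payoff, adj, v, sigma):
        """BR_v(sigma_v, sigma_{N(v)}): 1 iff sigma_v maximizes u_v given the neighbors' play."""
        s_nbrs = tuple(sigma[w] for w in adj[v])
        return int(payoff[v][(sigma[v], s_nbrs)] >= payoff[v][(1 - sigma[v], s_nbrs)])

    def is_pure_nash(payoff, adj, sigma):
        """sigma is a pure Nash equilibrium iff BR_v(sigma_v, sigma_{N(v)}) = 1 for every v."""
        return all(best_response(payoff, adj, v, sigma) == 1 for v in adj)

    # Example: a path on three players, 0 - 1 - 2.
    adj = {0: (1,), 1: (0, 2), 2: (1,)}
    payoff = sample_random_game(adj)
    sigma = {0: 0, 1: 1, 2: 0}
    print(is_pure_nash(payoff, adj, sigma))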

Denote by G(n,p) the distribution of an Erdös-Rényi random graph with n vertices where each edge is present independently with probability p. We now introduce the notion of a random game on a random graph.

Definition (Random game on a random graph). For each n\in\mathbb{N}, p\in(0,1) and each probability measure \mu on \mathbb{R}, do the following:

  1. choose a graph G from G(n,p);
  2. choose a random game from \mathcal{D}_{G,\mu} for the graph G.

Henceforth, given a random variable X let \mathcal{L}(X) represent its distribution. Given two measures \mu and \nu on a measurable space, define the total variation distance between \mu and \nu as

    \[\| \mu - \nu \|_{TV} := \sup_{f: \| f\| \le 1} | \mu f - \nu f |,\]

where the supremum is taken over measurable functions such that the supremum norm is less than or equal to 1.
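For distributions with finite support this distance is easy to compute; the following helper (ours) uses the fact that, with the above normalization, the total variation distance between two probability mass functions equals the sum of the absolute differences of their masses.

    def tv_distance(mu, nu):
        """Total variation distance between two pmfs on a countable set, with the
        normalization sup_{||f|| <= 1} |mu f - nu f|, i.e. the l1 distance.
        mu, nu: dicts mapping outcomes to probabilities."""
        support = set(mu) | set(nu)
        return sum(abs(mu.get(x, 0.0) - nu.get(x, 0.0)) for x in support)

    # Example: TV distance between Bernoulli(1/2) and Bernoulli(1/3).
    print(tv_distance({0: 0.5, 1: 0.5}, {0: 2/3, 1: 1/3}))  # = 1/3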

We are now ready to state the main theorem that we will prove in the following (Theorem 1.9 in Daskalakis, Dimakis, and Mossel).

Theorem 1 (High connectivity regime). Let Z^{n,p} be the number of pure Nash equilibria in the random game on a random graph defined above. Define the high-connectivity regime by

    \[p(n)= \frac{(2+\varepsilon(n)) \log n}{n},\]

where \varepsilon:\mathbb{N}\rightarrow\mathbb{R} satisfies the following two properties:

    \begin{align*} 	\varepsilon (n) &> c &\text{for each $n\in\mathbb{N}$, for some $c>0$,}\\ 	\varepsilon (n) &\le \frac{n}{\log n} - 2 &\text{for each $n\in\mathbb{N}$.} \end{align*}

Then, we have

    \[\mathbf{P} \{ \| \mathcal{L}_{G}(Z^{n,p(n)}) - \mathcal{L}(N_1) \|_{TV}  	\le O(n^{-\varepsilon/8}) + e^{-\Omega(n)} \}  	\ge 1- \frac{2}{n^{\varepsilon/8}},\]

where \mathcal{L}_{G}(Z^{n,p(n)}) denotes the conditional law of Z^{n,p(n)} given the graph G sampled from G(n,p), the outer probability is over the choice of the graph, and N_1 is a Poisson random variable with mean 1. In particular,

    \[\lim_{n\rightarrow\infty} \mathcal{L}(Z^{n,p(n)}) = \mathcal{L}(N_1),\]

which shows that in this regime a pure Nash equilibrium exists with probability converging to 1-\frac{1}{e} as the size of the network increases.
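Before turning to the proof, it may help to see the statement in action. The following Monte Carlo sketch (ours; it reuses sample_random_game, is_pure_nash and tv_distance from the snippets above, and the parameter values are purely illustrative) samples graphs in the high-connectivity regime, counts pure Nash equilibria by brute force, and compares the empirical law of Z^{n,p(n)} with the Poisson distribution of mean 1.

    import itertools, math, random
    from collections import Counter

    def sample_gnp(n, p):
        """Sample an Erdos-Renyi graph G(n, p) as an adjacency dict v -> tuple of neighbors."""
        nbrs = {v: [] for v in range(n)}
        for v in range(n):
            for w in range(v + 1, n):
                if random.random() < p:
                    nbrs[v].append(w)
                    nbrs[w].append(v)
        return {v: tuple(sorted(ws)) for v, ws in nbrs.items()}

    def count_pne(adj):
        """Sample one random game on the given graph and count its pure Nash equilibria
        by brute force over all 2^n strategy profiles (feasible only for small n)."""
        n = len(adj)
        payoff = sample_random_game(adj)            # defined in the earlier sketch
        return sum(is_pure_nash(payoff, adj, dict(enumerate(bits)))
                   for bits in itertools.product((0, 1), repeat=n))

    n, eps, trials = 10, 1.0, 500                   # illustrative parameter values
    p = (2 + eps) * math.log(n) / n
    counts = Counter(count_pne(sample_gnp(n, p)) for _ in range(trials))
    empirical = {k: c / trials for k, c in counts.items()}
    poisson1 = {k: math.exp(-1) / math.factorial(k) for k in range(15)}  # truncated Poisson(1)
    print(tv_distance(empirical, poisson1))         # should already be modest for small n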

Remark. Using the terminology of statistical mechanics, the first result in Theorem 1 represents a quenched-type result since it involves the conditional distribution of a system (i.e., the game) given its environment (i.e., the graph). On the other hand, the second result represents an annealed-type result, where the unconditional probability is considered.

In order to prove Theorem 1 we need the following lemma on Poisson approximation. The lemma is adapted from the results of R. Arratia, L. Goldstein, and L. Gordon (“Two moments suffice for Poisson approximations: the Chen-Stein method”, Ann. Probab. 17, 9-25, 1989) and it shows how the total variation distance between the law of a sum of Bernoulli random variables and a Poisson distribution can be bounded by the first and second moments of the Bernoulli random variables. This result is a particular instance of Stein's method in probability theory.

Lemma 2 (Arratia, Goldstein, and Gordon, 1989). Let \{X_i\}_{i=0,1,\ldots,N} be a collection of Bernoulli random variables with p_i:=\mathbf{P}\{X_i=1\}. For each i\in\{0,1,\ldots,N\} let B_i\subseteq \{0,1,\ldots,N\} be such that \{X_j\}_{j\in B^c_i} is independent of X_i. Define

    \begin{align*} 	b_1 := \sum_{i=0}^N \sum_{j\in B_i} p_ip_j,\qquad\qquad 	b_2 := \sum_{i=0}^N \sum_{j\in B_i\setminus \{i\}} p_{ij}, \end{align*}

where p_{ij} := \mathbf{P}\{X_i=1,X_j=1\}. Define Z:=\sum_{i=0}^N X_i and \lambda := \mathbf{E} Z = \sum_{i=0}^N p_i. If N_{\lambda} is a Poisson random variable with mean \lambda, then

    \[\| \mathcal{L} (Z) - \mathcal{L} (N_\lambda) \|_{TV} \le 2 (b_1+b_2).\]
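The quantities b_1 and b_2 are explicit once the neighborhoods of dependence are specified, so the bound of Lemma 2 can be evaluated directly; the sketch below (ours, with illustrative names and a toy example) does exactly that from the marginals p_i, the pairwise probabilities p_{ij} and the sets B_i.

    def chen_stein_bound(p, p_pair, B):
        """Bound 2*(b1 + b2) from Lemma 2.

        p:      dict i -> P{X_i = 1}
        p_pair: dict (i, j) -> P{X_i = 1, X_j = 1}, for j in B[i] with j != i
        B:      dict i -> neighborhood of dependence of X_i (with i in B[i])
        """
        b1 = sum(p[i] * p[j] for i in p for j in B[i])
        b2 = sum(p_pair[(i, j)] for i in p for j in B[i] if j != i)
        return 2 * (b1 + b2)

    # Toy example: three indicators, X_0 and X_1 dependent, X_2 independent of both.
    p = {0: 0.1, 1: 0.1, 2: 0.2}
    p_pair = {(0, 1): 0.05, (1, 0): 0.05}
    B = {0: {0, 1}, 1: {0, 1}, 2: {2}}
    print(chen_stein_bound(p, p_pair, B))            # = 2 * (0.08 + 0.1) = 0.36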

Proof. We define the following operators that act on each function f : \mathbb{N}\rightarrow\mathbb{R}:

    \begin{align*} 	[Df](n) &:= f(n+1)-f(n)&\text{for each $n\in\mathbb{N}$},\\ 	[Tf](n) &:= nf(n)-\lambda f(n+1)&\text{for each $n\in\mathbb{N}$},\\ 	[Sf](n+1) &:= - \frac{\mathbf{E}[f(N_\lambda) \mathbf{1}_{N_\lambda \le n}]} 	{\lambda \mathbf{P}\{N_\lambda = n\}} &\text{for each $n\in\mathbb{N}$}, 	& &[Sf](0):=0. \end{align*}

We point out that T characterizes N_\lambda in the sense that \mathbf{E} [[Tf](M)] = 0 if and only if M is a Poisson random variable with mean \lambda; T is an example of Stein’s operator. First of all, we show that for each f : \mathbb{N}\rightarrow\mathbb{R} we have TSf = f. In fact, if n=0 we have

    \[[TSf](0) = T[Sf](0) = - \lambda [Sf](1)  	= \frac{\mathbf{E}[f(N_\lambda) \mathbf{1}_{N_\lambda \le 0}]} 	{\mathbf{P}\{N_\lambda = 0\}} = f(0)\]

and if n\ge 1 we have

    \begin{align*} 	[TSf](n) &= T[Sf](n) =  n[Sf](n) - \lambda [Sf](n+1)\\ 	&= -n \frac{\mathbf{E}[f(N_\lambda) \mathbf{1}_{N_\lambda \le n-1}]} 	{\lambda \mathbf{P}\{N_\lambda = n-1\}}  	+ \frac{\mathbf{E}[f(N_\lambda) \mathbf{1}_{N_\lambda \le n}]} 	{\mathbf{P}\{N_\lambda = n\}}\\ 	&= \frac{\mathbf{E}[f(N_\lambda) \mathbf{1}_{N_\lambda \le n}] 	- \mathbf{E}[f(N_\lambda) \mathbf{1}_{N_\lambda \le n-1}]} 	{\mathbf{P}\{N_\lambda = n\}}\\ 	&= f(n). \end{align*}

For each i\in\{0,1,\ldots,N\} define \tilde Z_i := \sum_{j\in B^c_i}X_j and Y_i := Z-X_i = \sum_{j\in\{0,1,\ldots,N\}\setminus \{i\}} X_j. The following properties hold:

  1. \tilde Z_i \le Y_i \le Z,
  2. X_i f(Z) = X_i f(Y_i+1),
  3. f(Y_i+1)-f(Z+1) = X_i[f(Y_i+1)-f(Y_i+2)].

In what follows, consider any given function h:\mathbb{N}\rightarrow \mathbb{R} such that \| h\| := \sup_{n\in\mathbb{N}} |h(n)| \le 1. Define the function \tilde h as \tilde h(n):=h(n)-\mathbf{E}h(N_\lambda) for each n\in\mathbb{N}, and let f:= S \tilde h. From what was seen above we have Tf=\tilde h and we get

    \begin{align*} 	\mathbf{E} [h(Z)-h(N_\lambda)] =& \ \mathbf{E} \tilde h (Z) = \mathbf{E} [Tf] (Z) 	= \mathbf{E} [Zf(Z)-\lambda f (Z+1)] \\ 	=& \ \sum_{i=0}^N \mathbf{E} [X_i f(Z) - p_i f(Z+1)] 	\stackrel{\text{(ii)}}{=} \sum_{i=0}^N \mathbf{E} [X_i f(Y_i+1) - p_i f(Z+1)]\\ 	=& \ \sum_{i=0}^N \mathbf{E} [p_i f(Y_i+1) - p_i f(Z+1)]  	+ \sum_{i=0}^N \mathbf{E} [(X_i - p_i)f(Y_i+1)]\\ 	\stackrel{\text{(iii)}}{=}& \ \sum_{i=0}^N p_i \mathbf{E} [X_i (f(Y_i+1) - f(Y_i+2))] \\ 	& \ + \sum_{i=0}^N \mathbf{E} [(X_i - p_i)(f(Y_i+1)-f(\tilde Z_i + 1))]  	+ \sum_{i=0}^N \mathbf{E} [(X_i - p_i)f(\tilde Z_i + 1)]. \end{align*}

The first term is bounded above by \| Df \|\sum_{i=0}^N p^2_i while the third term is equal to 0 since \tilde Z_i is independent of X_i. In order to bound the second term we want to rewrite each term f(Y_i+1)-f(\tilde Z_i + 1) as a telescoping sum. In what follows, fix i\in\{0,1,\ldots,N\}, label the elements of B_i\setminus\{i\} as \{j_1,\ldots,j_K\} and define

    \begin{align*} 	U_0 &:= \tilde Z_i = \sum_{j\in B^c_i} X_{j},\\ 	U_{k} &:= U_{k-1} + X_{j_k} \qquad \text{for $k\in\{1,\ldots,K\}$}. \end{align*}

Noticing that U_K=Y_i, we have

    \[f(Y_i+1)-f(\tilde Z_i + 1) = \sum_{k=1}^K [ f(U_{k-1}+X_{j_k}+1)  	- f(U_{k-1}+1)]\]

and we get

    \begin{align*} 	\lefteqn{\mathbf{E} [(X_i - p_i)(f(Y_i+1)-f(\tilde Z_i + 1))]=}\\ 	&\qquad\qquad= 	\sum_{k=1}^K \mathbf{E} [(X_i - p_i)(f(U_{k-1}+X_{j_k}+1) - f(U_{k-1}+1))]\\ 	&\qquad\qquad=  	\sum_{k=1}^K \mathbf{E} [(X_i - p_i) X_{j_k} (f(U_{k-1}+2) - f(U_{k-1}+1))]\\ 	&\qquad\qquad\le \| Df \| \sum_{k=1}^K \mathbf{E} [(X_i + p_i)X_{j_k}] 	= \| Df \| \sum_{k=1}^K (p_{ij_k} + p_ip_{j_k})\\ 	&\qquad\qquad= \| Df \| \sum_{j\in B_i\setminus \{i\}} ( p_{ij} + p_i p_j). \end{align*}

Therefore, putting everything together we get

    \begin{align*} 	\mathbf{E} [h(Z)-h(N_\lambda)] &\le \| Df \|\sum_{i=0}^N p^2_i  	+ \| Df \| \sum_{i=0}^N \sum_{j\in B_i\setminus \{i\}} ( p_{ij} + p_i p_j)\\ 	&= \| Df \| \bigg( \sum_{i=0}^N \sum_{j\in B_i \setminus \{i\}} p_{ij}  	+ \sum_{i=0}^N\sum_{j\in B_i} p_i p_j \bigg)\\ 	&= \| Df \| (b_1+b_2). \end{align*}

Since the total variation distance can be characterized in terms of sets as

    \[\| \mathcal{L} (Z) - \mathcal{L} (N_\lambda) \|_{TV}  	= 2 \sup_{A\subset \mathbb{N}} | \mathbf{P}(Z\in A) - \mathbf{P}(N_\lambda\in A) |,\]

from this point on we restrict our analysis to indicator functions, which are easier to deal with than generic functions. For each A\subset\mathbb{N} define h_A:=\mathbf{1}_A, \tilde h_A := h_A-\mathbf{E}h_A(N_\lambda) = \mathbf{1}_A-\mathbf{P}\{N_\lambda \in A\} and f_A := S \tilde h_A. The previous result yields

    \[\| \mathcal{L} (Z) - \mathcal{L} (N_\lambda) \|_{TV}  	= 2 \sup_{A\subset \mathbb{N}} |\mathbf{E} [h_A(Z)-h_A(N_\lambda)]| 	\le 2 (b_1+b_2) \sup_{A\subset \mathbb{N}} \| Df_A \|\]

and the proof of the Lemma is concluded if we show that \| Df_A \| \le 1 for each A\subset \mathbb{N}. In fact, in what follows we will show that

    \[\sup_{A\subset \mathbb{N}} \| Df_A \| \le \frac{1-e^{-\lambda}}{\lambda},\]

where the right hand side is clearly upper bounded by 1. The proof that we are going to present is contained in the Appendix of “Poisson Approximation for Some Statistics Based on Exchangeable Trials” by Barbour and Eagleson.

First of all, note that for each A\subset \mathbb{N} we have f_A(0) = 0 and

    \begin{align*} 	f_A(n+1) &= - \frac{\mathbf{E}[\tilde h_A(N_\lambda) \mathbf{1}_{N_\lambda \le n}]} 	{\lambda \mathbf{P}\{N_\lambda = n\}} = \frac{e^\lambda n!}{\lambda^{n+1}}  	(\mathbf{P}\{N_\lambda \in A\} \mathbf{P}\{N_\lambda \le n\}  	- \mathbf{P}\{N_\lambda \in A \cap \{0,1,\ldots,n\}\}), \end{align*}

for each n\in\mathbb{N}. From this expression it is clear that f_A = \sum_{j\in A} f_{\{j\}} for each A\subset\mathbb{N}, which suggests that we can restrict our analysis to singletons. For each j\in\mathbb{N} we have

    \begin{align*} 	-f_{\{j\}}(n+1) = 	\begin{cases} 		-\frac{\lambda^j n!}{\lambda^{n+1}j!} \mathbf{P}\{N_\lambda \le n\}  		& \text{if }n < j\\ 		\frac{\lambda^j n!}{\lambda^{n+1}j!} \mathbf{P}\{N_\lambda > n\}  		& \text{if }n \ge j\\ 	\end{cases} \end{align*}

and from the series expansion of the Poisson probabilities it is easily seen that the function n\in\mathbb{N} \longrightarrow -f_{\{j\}}(n) is negative and decreasing if n \le j and is positive and decreasing if n \ge j+1. Hence, the only positive value taken by the difference function n\in\mathbb{N} \longrightarrow -f_{\{j\}}(n+1)+f_{\{j\}}(n) corresponds to the case n=j, which can be bounded as follows:

    \begin{align*} 	-f_{\{n\}}(n+1)+f_{\{n\}}(n) &= \frac{1}{\lambda} \mathbb{P}\{N_\lambda > n\} 	+ \frac{1}{n} \mathbb{P}\{N_\lambda \le n-1\}\\ 	&= \frac{e^{-\lambda}}{\lambda} \left( \sum_{k=n+1}^\infty \frac{\lambda^k}{k!} 	+ \sum_{k=1}^n \frac{\lambda^k}{k!} \frac{k}{n} \right) \\ 	&\le \frac{e^{-\lambda}}{\lambda} (e^\lambda - 1) = \frac{1-e^{-\lambda}}{\lambda}. \end{align*}

Therefore, for each A\subset\mathbb{N}, n\in\mathbb{N} we have

    \begin{align*} 	- Df_A(n) &= -f_A(n+1) + f_A(n) = \sum_{j\in A} (-f_{\{j\}}(n+1) + f_{\{j\}}(n))\\ 	&= \mathbf{1}_A (n) (-f_{\{n\}}(n+1) + f_{\{n\}}(n)) 	+ \sum_{j \in A : j\neq n} (-f_{\{j\}}(n+1) + f_{\{j\}}(n))\\ 	&\le \frac{1-e^{-\lambda}}{\lambda}. \end{align*}

Noticing that f_{A^c} = - f_A, then for each A\subset\mathbb{N}, n\in\mathbb{N} we also get

    \begin{align*} 	Df_A(n) \le - Df_{A^c}(n) \le \frac{1-e^{-\lambda}}{\lambda}, \end{align*}

and we have proved that \sup_{A\subset \mathbb{N}} \| Df_A \| \le \frac{1-e^{-\lambda}}{\lambda}. \square
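As a numerical sanity check (ours, not part of the original argument), one can evaluate f_A from the explicit formula displayed above for randomly chosen finite subsets A and verify the bound on \| Df_A \|; the truncation of A to \{0,\ldots,11\} and the numerical tolerance are implementation choices.

    import math, random

    def poisson_pmf(lam, k):
        return math.exp(-lam) * lam ** k / math.factorial(k)

    def stein_solution(lam, A, K):
        """Values f_A(0), ..., f_A(K) from the explicit formula for f_A above,
        for a subset A of {0, ..., K-1}; f_A(0) = 0 by definition of S."""
        pA = sum(poisson_pmf(lam, j) for j in A)
        cdf, pAn, f = 0.0, 0.0, [0.0]
        for n in range(K):
            cdf += poisson_pmf(lam, n)              # P{N_lam <= n}
            if n in A:
                pAn += poisson_pmf(lam, n)          # P{N_lam in A, N_lam <= n}
            f.append(math.exp(lam) * math.factorial(n) / lam ** (n + 1) * (pA * cdf - pAn))
        return f

    lam, K = 1.0, 12
    bound = (1 - math.exp(-lam)) / lam
    for _ in range(5):
        A = {j for j in range(K) if random.random() < 0.5}   # a random finite subset
        f = stein_solution(lam, A, K)
        print(max(abs(f[n + 1] - f[n]) for n in range(K)) <= bound + 1e-6)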

We now introduce the notation that will naturally allow us to use Lemma 2 to prove Theorem 1. We label the pure strategy profiles in S_V=\times_{v\in V} S_v as \{\sigma^0,\sigma^1,\ldots,\sigma^N\}, where N:=2^n-1 and as always n=|V|:=\mathop{\mathrm{card}} V. It will often be convenient to use the labels 1,2,\ldots,n to enumerate the vertices of the graph G. Accordingly, one can think of the strategy profiles \sigma^i as being defined in a specific way, for example by positing that \sigma^i(1)\sigma^i(2)\ldots\sigma^i(n) is the binary decomposition of i. In particular, \sigma^0 becomes the strategy where each player plays zero, that is, \sigma^0_v=0 for each v\in V, and \sigma^N the strategy where each player plays one, that is, \sigma^N_v=1 for all v\in V. For each i\in\{0,1,\ldots,N\} define

    \[X_i := 	\begin{cases}  		1	& \text{if $\sigma^i$ is a pure Nash equilibrium},\\ 		0	& \text{otherwise}. 	\end{cases}\]

Clearly the quantity Z:=\sum_{i=0}^N X_i identifies the number of pure Nash equilibria and Z>0 corresponds to the existence of a pure Nash equilibrium. We recall that both the randomness in the choice of the graph and the randomness in the choice of the game are embedded in the random variables \{X_i\}_{i\in\{0,1,\ldots,N\}}. Note that conditionally on a given graph G sampled from G(n,p) we have, for each i\in\{0,1,\ldots,N\},

    \begin{align*} 	\mathbf{E}_{G} X_i  	&= \mathbf{P}_{G} \{\text{$\sigma^i$ is a pure Nash equilibrium}\}\\ 	&= \mathbf{P}_{G} \{\operatorname{BR}_v(\sigma^i_v,\sigma^i_{N(v)})  	= 1 \ \text{for each } v\in V \}\\ 	&= 2^{-n}, \end{align*}

from which it follows that

    \begin{align*} 	\mathbf{E}_{G} Z  	= \sum_{i=0}^N \mathbf{E}_{G} X_i = (1+N) 2^{-n} = 1. \end{align*}

That is, the current definition of a game on a fixed graph implies that the expected number of pure Nash equilibria is 1 for any given graph. It follows that \mathbf{E} Z = 1 as well. Notice that Theorem 1 brings more information to the table, since it describes the asymptotic distribution of Z\equiv Z^{n,p(n)} in a particular regime of the Erdös-Rényi random graph.
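A quick quenched check of this identity (ours, reusing sample_gnp and count_pne from the sketch following Theorem 1): fixing a single graph and resampling the game many times, the empirical average of the number of pure Nash equilibria should be close to 1.

    # Quenched check: fix one graph and resample the game; the average number
    # of pure Nash equilibria should be close to E_G Z = 1.
    random.seed(0)
    adj = sample_gnp(8, 0.4)                    # any fixed graph will do
    print(sum(count_pne(adj) for _ in range(2000)) / 2000)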

Given the way we have set the stage, it seems tempting to apply Lemma 2 to the random variables \{X_i\}_{i\in\{0,1,\ldots,N\}} just defined. However, this approach is not fruitful since, apart from trivial cases, any given random variable X_i has a neighborhood of dependence B_i that coincides with the entire set \{0,1,\ldots,N\}. To see this, consider any two strategy profiles \sigma^i and \sigma^j. As long as there exists v\in V such that \sigma^i_v=\sigma^j_v, then we can always find a realization of the graph G such that v does not have any edges attached to it, that is, N(v)=\varnothing; this implies that \operatorname{BR}_v(0,\sigma^i_{N(v)}) = \operatorname{BR}_v(0,\sigma^j_{N(v)}) and, consequently, it implies that X_i and X_j are not independent. Therefore, only X_0 and X_N are independent, where \sigma^0_v:=0 and \sigma^N_v:=1 for each v\in V. However, Lemma 2 can be fruitfully applied to the random variables \{X_i\}_{i\in\{0,1,\ldots,N\}} when we look at them conditionally on a given graph realization, as the following lemma demonstrates (Lemma 2.2 of Daskalakis, Dimakis, Mossel).

Lemma 3. Let G=(V,E) be a graph. Define

    \[B_0 := \{ j\in\{0,1,\ldots,N\}  	: \exists v \in V \text{ such that } \sigma^j_{v'}=0  	\text{ for each $v'$ satisfying } (v,v') \in E \}\]

and for each i\in\{1,2,\ldots,N\} define

    \[B_i := \{ j \in \{0,1,\ldots,N\} : \sigma^j = \sigma^i \oplus \sigma^k  	\text{ for some } k\in B_0\},\]

where

    \[\sigma^i \oplus \sigma^j := (\sigma^i(1) \oplus \sigma^j(1),\ldots,  	\sigma^i(|V|) \oplus \sigma^j(|V|))\]

and \oplus is the exclusive-or operation. Then, for each i\in\{0,1,\ldots,N\} we have that X_i is independent of \{X_j\}_{j\in B^c_i}.
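Before giving the proof, here is a small sketch (ours) that builds these neighborhoods explicitly for a toy graph. A profile \sigma^j is encoded by the integer j, with bit v of j equal to \sigma^j(v) (an encoding choice of ours, immaterial for the argument), so that the exclusive-or of profiles becomes the bitwise XOR of their indices.

    def neighborhoods(adj):
        """Dependency neighborhoods B_0, ..., B_N of Lemma 3 for a graph on n = len(adj) players.

        Profile sigma^j is encoded by the integer j: bit v of j is sigma^j(v).
        B_0 = indices j such that some player has all neighbors playing 0 under sigma^j;
        B_i = { i XOR k : k in B_0 }, since XOR with sigma^i maps B_0 onto B_i.
        """
        n = len(adj)
        B0 = {j for j in range(2 ** n)
              if any(all(((j >> w) & 1) == 0 for w in adj[v]) for v in adj)}
        return {i: {i ^ k for k in B0} for i in range(2 ** n)}

    # Example: the path 0 - 1 - 2.
    adj = {0: (1,), 1: (0, 2), 2: (1,)}
    B = neighborhoods(adj)
    print(sorted(B[0]), sorted(B[5]))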

Proof. We first show that X_0 is independent of \{X_j\}_{j\in B^c_0}. In order to make the independence structure manifest, we characterize B^c_0. Recall that B_0 is the index set of the pure strategy profiles for which there exists a player having all his neighbors playing 0. Therefore, each strategy profile corresponding to an index in B^c_0 is characterized by the fact that each player has at least one neighbor who is playing 1. Hence, for each j\in B^c_0 we have \sigma^0_{N(v)} \neq \sigma^j_{N(v)} for all v\in V and, consequently, the events

    \[\{X_0=\alpha\} = \{\operatorname{BR}_v(\sigma^0_v,\sigma^0_{N(v)})  	= \alpha \ \forall v\in V\} \qquad \alpha\in \{0,1\}\]

are independent of the events

    \[\{X_j=\alpha_j\} = \{\operatorname{BR}_v(\sigma^j_v,\sigma^j_{N(v)}) = \alpha_j 	 \ \forall v\in V\}  	 \qquad \alpha_j\in \{0,1\}, j\in B^c_0,\]

which proves our claim.

We now generalize this result to show that X_i is independent of \{X_j\}_{j\in B^c_i}, for each i\in \{1,2,\ldots,N\}. For any i\in\{0,1,\ldots,N\}, note that the exclusive-or map with respect to \sigma^i preserves the differences in the strategy profiles (and, of course, it also preserves the equalities). That is, if \sigma^j and \sigma^k are such that \sigma^j_v\neq \sigma^k_v for some v\in V, then also \sigma^i\oplus\sigma^j and \sigma^i\oplus\sigma^k are such that (\sigma^i\oplus\sigma^j)_v \neq (\sigma^i\oplus\sigma^k)_v. Therefore,

    \[\sigma^0_{N(v)} \neq \sigma^j_{N(v)} \qquad \text{for each $v\in V$}\]

holds true if and only if

    \[\sigma^i_{N(v)}  	= (\sigma^i\oplus\sigma^0)_{N(v)} \neq (\sigma^i\oplus\sigma^j)_{N(v)} 	\qquad\text{for each $v\in V$}\]

holds true. Equivalently stated, j\in B^c_0 if and only if k\in B^c_i, where k is the index such that \sigma^k = \sigma^i\oplus\sigma^j. Hence, the proof is concluded once we notice that \sigma^i=\sigma^i\oplus\sigma^0 and that B_i is defined as the index set of the pure strategy profiles that are obtained by an exclusive-or map with respect to \sigma^i of a strategy profile in B_0. \square

For a given graph G=(V,E) with |V|=n, define p_i(G):=\mathbf{P}_G\{X_i=1\}=2^{-n}=(N+1)^{-1}, where N:=2^{n}-1 and \mathbf{P}_G represents the conditional probability given the graph G. Define B_i(G)\subseteq \{0,1,\ldots,N\} as in Lemma 3; then, conditionally on G we have that \{X_j\}_{j\in B^c_i(G)} is independent of X_i. Define

    \begin{align*} 	b_1(G) &:= \sum_{i=0}^N \sum_{j\in B_i(G)} p_i(G)p_j(G)  	= \frac{1}{(N+1)^2}\sum_{i=0}^N |B_i(G)|,\\ 	b_2(G) &:= \sum_{i=0}^N \sum_{j\in B_i(G)\setminus \{i\}} p_{ij}(G), \end{align*}

where p_{ij}(G) := \mathbf{P}_G\{X_i=1,X_j=1\}. Define Z:=\sum_{i=0}^N X_i and recall that \mathbf{E} Z = \mathbf{E} \mathbf{E}_G Z = 1. If N_1 is a Poisson random variable with mean 1, then Lemma 2 yields

    \[\| \mathcal{L}_G (Z) - \mathcal{L} (N_1) \|_{TV} \le 2 (b_1(G)+b_2(G)).\]

At this point, let us introduce the following two lemmas (Lemmas 2.4 and 2.5 of Daskalakis, Dimakis, Mossel).

Lemma 4. If G is sampled from the Erdös-Rényi distribution G(n,p), we have

    \begin{align*} 	\mathbf{E}[b_1(G)] &\le  	R(n,p) := \sum_{s=0}^n \binom{n}{s} 2^{-n} \min\{1,n(1-p)^{s-1}\},\\ 	\mathbf{E}[b_2(G)] &\le  	S(n,p) := \sum_{s=1}^n \binom{n}{s} 2^{-n} [(1+(1-p)^s)^{n-s} - (1-(1-p)^s)^{n-s}]. \end{align*}

Lemma 5. Under the assumptions of Theorem 1 there exist \alpha',\alpha'',\beta',\beta''\in \mathbb{R}_+ and n_0',n_0''\in\mathbb{N}_+ such that

    \begin{align*} 	R(n,p) &\le \alpha' n^{-\varepsilon/4} + e^{-\beta' n} &\text{for each $n>n_0'$},\\ 	S(n,p) &\le \alpha'' n^{-\varepsilon/4} + e^{-\beta'' n} &\text{for each $n>n_0''$}. \end{align*}
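Both R(n,p) and S(n,p) are explicit finite sums, so the decay asserted in Lemma 5 can be observed numerically; the sketch below (ours) evaluates them along p(n)=(2+\varepsilon)\log n / n with the illustrative choice \varepsilon=1.

    import math

    def R(n, p):
        return sum(math.comb(n, s) * 2.0 ** (-n) * min(1.0, n * (1 - p) ** (s - 1))
                   for s in range(n + 1))

    def S(n, p):
        return sum(math.comb(n, s) * 2.0 ** (-n)
                   * ((1 + (1 - p) ** s) ** (n - s) - (1 - (1 - p) ** s) ** (n - s))
                   for s in range(1, n + 1))

    eps = 1.0
    for n in (50, 100, 200, 400):
        p = (2 + eps) * math.log(n) / n
        print(n, R(n, p), S(n, p))              # both columns should decrease with n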

We now show how the proof of Theorem 1 follows easily from the two lemmas above.

Proof of Theorem 1. Let \alpha',\alpha'',\beta',\beta''\in \mathbb{R}_+ and n_0',n_0''\in\mathbb{N}_+ be as in Lemma 5. Define \alpha^\star:=\max\{\alpha',\alpha''\}, \beta^\star:=\min\{\beta',\beta''\} and n_0^\star:=\max\{n_0',n_0''\}. Clearly, by Lemma 5 we have

    \begin{align*} 	R(n,p) &\le \alpha^\star n^{-\varepsilon/4}  	+ e^{-\beta^\star n} &\text{for each $n>n_0^\star$},\\ 	S(n,p) &\le \alpha^\star n^{-\varepsilon/4}  	+ e^{-\beta^\star n} &\text{for each $n>n_0^\star$}. \end{align*}

Define the event

    \[A_n := \{ \max\{b_1(G),b_2(G)\} 	\le 2\alpha^\star n^{-\varepsilon/8} + e^{- \beta^\star n} \}.\]

By the Markov inequality and the previous asymptotic bounds, for each n>n_0^\star we have

    \begin{align*} 	\mathbf{P}\{A^c_n\} &\le \mathbf{P}\{ b_1(G) > 	2\alpha^\star n^{-\varepsilon/8} + e^{- \beta^\star n} \} 	+ \mathbf{P}\{ b_2(G) > 	2\alpha^\star n^{-\varepsilon/8} + e^{- \beta^\star n} \}\\ 	&\le \mathbf{P}\{ b_1(G) > 	2\alpha^\star n^{-\varepsilon/8} \} 	+ \mathbf{P}\{ b_2(G) > 	2\alpha^\star n^{-\varepsilon/8} \}\\ 	&\le \frac{\mathbf{E}[b_1(G)]}{2\alpha^\star n^{-\varepsilon/8}} 	+ \frac{\mathbf{E}[b_2(G)]}{2\alpha^\star n^{-\varepsilon/8}}\\ 	&\le n^{-\varepsilon/8} + \frac{e^{-\beta^\star n}}{\alpha^\star  	n^{-\frac{\varepsilon}{8}}}, \end{align*}

where we used the result of Lemma 4 and the above estimates for R(n,p) and S(n,p). Since \varepsilon(n) > c for some c\in\mathbb{R}_+, there clearly exists n_0^{\star'} such that

    \[\mathbf{P}\{A^c_n\} \le 2 n^{-\varepsilon/8}\qquad\text{for each $n>n_0^{\star'}$}.\]

Hence, we have that \mathbf{P}\{A_n\} \ge 1-2n^{-\frac{\varepsilon}{8}} for n>n_0^{\star'}. Let us now define \alpha,\beta\in\mathbb{R}_+ and n_0\in\mathbb{N}_+ such that n_0>n_0^{\star'} and

    \[4\alpha^\star n^{-\varepsilon/8} + 2e^{- \beta^\star n} 	\le \alpha n^{-\varepsilon/8} + e^{- \beta n} 	\qquad\text{for each $n>n_0$}.\]

Then

    \[\mathbf{P} \{ \| \mathcal{L}_G(Z^{n,p(n)}) - \mathcal{L}(N_1) \|_{TV}  	\le \alpha n^{-\varepsilon/8} + e^{-\beta n} \}  	\ge 1- \frac{2}{n^{\varepsilon/8}},\]

which proves the first statement in Theorem 1. In fact, we have

    \begin{align*} 	&\mathbf{P} \{ \| \mathcal{L}_{G}(Z^{n,p(n)}) - \mathcal{L}(N_1) \|_{TV}  	\le \alpha n^{-\varepsilon/8} + e^{-\beta n} \} \\ 	&\qquad\ge \mathbf{P} \{ \| \mathcal{L}_{G}(Z^{n,p(n)}) - \mathcal{L}(N_1) \|_{TV}  	\le \alpha n^{-\varepsilon/8} + e^{-\beta n} | A_n \} \mathbf{P} \{A_n\}\\ 	&\qquad\ge 1- 2n^{-\frac{\varepsilon}{8}}, \end{align*}

since by definition of A_n, on the event A_n we have

    \begin{align*} 	\| \mathcal{L}_G(Z^{n,p(n)}) - \mathcal{L}(N_1) \|_{TV} &\le 2 (b_1(G)+b_2(G))  	\le 4\alpha^\star n^{-\varepsilon/8} + 2e^{- \beta^\star n} 	\le \alpha n^{-\varepsilon/8} + e^{- \beta n}. \end{align*}

By the properties of conditional expectations we can now prove the convergence in total variation of the unconditional law of Z^{n,p(n)} to the law of N_1. In fact, for n>n_0 we have

    \begin{align*} 	&\| \mathcal{L}(Z^{n,p(n)}) - \mathcal{L}(N_1) \|_{TV} \\ 	&\qquad= \sup_{h: \| h\| \le 1} | \mathbf{E} [h(Z^{n,p(n)}) - h(N_1)] |\\ 	&\qquad\le \sup_{h: \| h\| \le 1} | \mathbf{E} [(h(Z^{n,p(n)}) - h(N_1)) \mathbf{1}_{A_n}] | 	+ \sup_{h: \| h\| \le 1} \mathbf{E} [|h(Z^{n,p(n)}) - h(N_1)| \mathbf{1}_{A_n^c}] \\ 	&\qquad\le \sup_{h: \| h\| \le 1} \mathbf{E} [ 	\mathbf{1}_{A_n} | \mathbf{E}_G [h(Z^{n,p(n)}) - h(N_1)] |] 	+ 2 \mathbf{P}\{A^c_n\}\\ 	&\qquad\le \mathbf{E} [ 	\mathbf{1}_{A_n} \| \mathcal{L}_G(Z^{n,p(n)}) - \mathcal{L}(N_1) \|_{TV} ] 	+ 2 \mathbf{P}\{A^c_n\}\\ 	&\qquad\le \alpha n^{-\varepsilon/8} + e^{-\beta n} + 4 n^{-\varepsilon/8}, \end{align*}

from which it follows that

    \[\lim_{n\rightarrow\infty} \| \mathcal{L}(Z^{n,p(n)}) - \mathcal{L}(N_1) \|_{TV} = 0.\]

Since convergence in total variation implies convergence in distribution, the previous result implies that Z^{n,p(n)} converges in distribution to N_1, which concludes the proof of Theorem 1. \square

We now provide the proof of Lemma 4, while we refer the reader to Daskalakis, Dimakis, Mossel for the proof of Lemma 5.

Proof of Lemma 4. We begin with the study of \mathbf{E} [b_1(G)]. By the symmetry of the model we have

    \begin{align*} 	\mathbf{E} [b_1(G)] &= \frac{1}{(N+1)^2}\sum_{i=0}^N \mathbf{E} [|B_i(G)|]  	= \frac{1}{N+1} \mathbf{E} [|B_0(G)|]. \end{align*}

Since |B_0(G)| = \sum_{i=0}^N \mathbf{1}_{i\in B_0(G)}, we have \mathbf{E} [|B_0(G)|] = \sum_{i=0}^N \mathbf{P}\{i\in B_0(G)\}. By the symmetry of the Erdös-Rényi distribution we also have that \mathbf{P}\{i\in B_0(G)\} = \mathbf{P}\{j\in B_0(G)\} if \sigma^i and \sigma^j have the same number of players playing 1 (equivalently, the same number of players playing 0).
Therefore, if we label the vertices of the graph as \{1,2,\ldots,n\}, we have

    \[\mathbf{E} [|B_0(G)|] = \sum_{s=0}^n \binom{n}{s}  	\mathbf{E} [\mathbf{1}_{i_s\in B_0(G)}]\]

where for each s\in\{0,1,\ldots,n\} the index i_s\in\{0,1,\ldots,N\} is such that the strategy \sigma^{i_s} satisfies \sigma^{i_s}_{k}=1 if k\le s and \sigma^{i_s}_{k}=0 if k> s. Hence, the bound for \mathbf{E} [b_1(G)] in the statement of the Lemma is proved if we show that \mathbf{P} \{i_s\in B_0(G)\} \le n (1-p)^{s-1}. In fact, by definition of B_0(G) we have

    \begin{align*} 	\mathbf{P} \{i_s\in B_0(G)\}  	&= \mathbf{P} \{\text{$G : $ $\exists$ player $k\in\{1,\ldots,n\}$ such that  	$N(k)\cap\{1,\ldots,s\}=\varnothing$}\}\\ 	&\le \sum_{k=1}^n  	\mathbf{P} \{\text{$G : N(k)\cap\{1,\ldots,s\}=\varnothing$}\}\\ 	&\le n (1-p)^{s-1}. \end{align*}

We now study the term \mathbf{E} [b_2(G)]. Proceeding as above, by symmetry we have

    \begin{align*} 	\mathbf{E} [b_2(G)] &= (N+1) \sum_{j=1}^N  	\mathbf{E}[ \mathbf{P}_G\{X_0=1,X_j=1\} \mathbf{1}_{j\in B_0(G)} ]\\ 	&= 2^n \sum_{s=1}^n \binom{n}{s} 	\mathbf{E}[ \mathbf{P}_G\{X_0=1,X_{i_s}=1\} \mathbf{1}_{i_s\in B_0(G)} ] \end{align*}

We now analyze the term \mathbf{E}[ \mathbf{P}_G\{X_0=1,X_{i_s}=1\} \mathbf{1}_{i_s\in B_0(G)} ]. As noticed above, i_s\in B_0(G) if and only if the graph G is such that there exists a player k\in\{1,\ldots,n\} such that N(k)\cap \{1, \dots,s\}=\varnothing. In the case in which such a k also satisfies k\in\{1,\ldots,s\}, then \sigma^{i_s}_k=1 and \sigma^{i_s}_{k'}=0 for each k'\in N(k), and it follows that \mathbf{P}_G\{X_0=1,X_{i_s}=1\} = 0. In fact, the event \{X_0=1,X_{i_s}=1\} corresponds to the realizations where both strategy \sigma^0 and strategy \sigma^{i_s} are pure Nash equilibria, that is, \operatorname{BR}_v(\sigma^0_v,\sigma^0_{N(v)})=\operatorname{BR}_v(\sigma^{i_s}_v,\sigma^{i_s}_{N(v)})=1 for each v\in V. But it cannot be that both 0 and 1 are best responses for player k when all players in his neighborhood play 0, since \mu is atomless. Hence, \mathbf{P}_G\{X_0=1,X_{i_s}=1\} \mathbf{1}_{i_s\in B_0(G)} can be different from 0 only on the intersection of the event

    \begin{align*} 	A:=& \ \{G: k\in\{1,\ldots,s\} \text{ implies } N(k)\cap \{1, 	\dots,s\} \neq \varnothing \}\\ 	=& \ \{ \text{$\not\exists$ isolated node in the subgraph induced by $\{1,\ldots,s\}$} \} \end{align*}

with the event

    \[B:=\{G: \text{$\exists k\in\{s+1,\ldots,n\}$ such that $N(k)\cap \{1, 	\dots,s\} = \varnothing$} \}.\]

Define p_s:=\mathbf{P} \{A\} and m_s := |\{k\in \{s+1,\ldots,n\} : N(k)\cap \{1,\dots,s\} = \varnothing\}|. Note that we have B=\bigcup_{t=1}^{n-s} \{G: m_s=t\}. On the event A\cap B we have \mathbf{P}_G\{X_0=1,X_{i_s}=1\} \mathbf{1}_{i_s\in B_0(G)}=\mathbf{P}_G\{X_0=1,X_{i_s}=1\} and we get

    \begin{align*} 	&\mathbf{E}[ \mathbf{P}_G\{X_0=1,X_{i_s}=1\} \mathbf{1}_{i_s\in B_0(G)} ]\\ 	&\qquad= \mathbf{E}[ \mathbf{P}_G\{X_0=1,X_{i_s}=1\}  	\mathbf{1}_{i_s\in B_0(G)} | A] \, p_s\\ 	&\qquad= \sum_{t=1}^{n-s} \mathbf{E}[ \mathbf{P}_G\{X_0=1,X_{i_s}=1\}  	| A, m_s = t] \, \mathbf{P}\{m_s=t | A\} \, p_s. \end{align*}

Because of the independence structure in the Erdös-Rényi random graph we have

    \[\mathbf{P}\{m_s=t | A\} = \mathbf{P}\{m_s=t \} = \binom{n-s}{t}  	[(1-p)^s]^t [1-(1-p)^s]^{n-s-t}.\]

Furthermore, we have

    \[\mathbf{E}[ \mathbf{P}_G\{X_0=1,X_{i_s}=1\}  	| A, m_s = t] = \frac{1}{2^t} \frac{1}{2^{n-t}} \frac{1}{2^{n-t}}.\]

This follows immediately from the definition of pure Nash equilibrium in terms of the best response functions once we notice that, on the event \{A, m_s = t\}, there are exactly t players (the players k\in\{s+1,\ldots,n\} such that N(k)\cap \{1,\dots,s\} = \varnothing) for which (\sigma^0_k,\sigma^0_{N(k)})=(\sigma^{i_s}_k,\sigma^{i_s}_{N(k)}), while each of the remaining n-t players k satisfies \sigma^0_{N(k)}\neq\sigma^{i_s}_{N(k)}. For each of the former t players the two equilibrium constraints coincide and hold with probability 1/2, while for each of the latter n-t players the two constraints involve independent best response variables and hold jointly with probability 1/4. Putting everything together we get

    \begin{align*} 	&\mathbf{E}[ \mathbf{P}_G\{X_0=1,X_{i_s}=1\} \mathbf{1}_{i_s\in B_0(G)} ]\\ 	&\qquad= \sum_{t=1}^{n-s} \mathbf{E}[ \mathbf{P}_G\{X_0=1,X_{i_s}=1\}  	| A, m_s = t] \, \mathbf{P}\{m_s=t | A\} \, p_s\\ 	&\qquad= p_s \sum_{t=1}^{n-s} \binom{n-s}{t} [(1-p)^s]^t [1-(1-p)^s]^{n-s-t} 	\frac{1}{2^t}\frac{1}{4^{n-t}}\\ 	&\qquad= \frac{p_s}{4^n} [(1+(1-p)^s)^{n-s} - (1-(1-p)^s)^{n-s}]. \end{align*}

Using the fact that p_s\le 1, it clearly follows that \mathbf{E} [b_2(G)] \le S(n,p). \square
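The last equality in the proof uses the binomial identity \sum_{t=1}^{m}\binom{m}{t}(2q)^t(1-q)^{m-t} = (1+q)^m-(1-q)^m with q=(1-p)^s and m=n-s, applied to the factor \frac{1}{2^t}\frac{1}{4^{n-t}} = \frac{2^t}{4^n}; a quick numerical check (ours, with arbitrary sample values of n, s, p) is below.

    import math

    n, s, p = 10, 4, 0.3                        # arbitrary sample values
    q, m = (1 - p) ** s, n - s
    lhs = sum(math.comb(m, t) * q ** t * (1 - q) ** (m - t) / (2 ** t * 4 ** (n - t))
              for t in range(1, m + 1))
    rhs = ((1 + q) ** m - (1 - q) ** m) / 4 ** n
    print(abs(lhs - rhs) < 1e-15)               # the two sides agree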

01. May 2013 by Ramon van Handel
Categories: Random graphs