## Lectures 7/8. Games on random graphs

*The following two lectures by Rene Carmona are on games on random graphs.
Many thanks to Patrick Rebeschini for scribing these lectures!*

In the following we discuss some results from the paper “Connectivity and equilibrium in random games” by Daskalakis, Dimakis, and Mossel. We define a random game on a random graph and we characterize the graphs that are likely to exhibit pure Nash equilibria for this game. We show that if the random graph is drawn from the Erdős–Rényi distribution, then in the high-connectivity regime the law of the number of pure Nash equilibria converges to a Poisson distribution as the size of the graph increases.

Let $G = (V, E)$ be a simple (that is, undirected and with no self-edges) graph, and for each $v \in V$ denote by $N(v)$ the set of neighbors of $v$, that is, $N(v) := \{ u \in V : (u, v) \in E \}$. We think of each vertex $v \in V$ as a *player* in the game that we are about to introduce. At the same time, we think of each edge $(u, v) \in E$ as a *strategic interaction* between players $u$ and $v$.

**Definition (Game on a graph).** For each $v \in V$ let $S_v$ represent the *set of strategies* for player $v$, assumed to be a finite set. We naturally extend this definition to include families of players: for each $U \subseteq V$, let $S_U := \prod_{u \in U} S_u$ be the set of joint strategies for the players in $U$. For each $v \in V$, denote by $R_v : S_v \times S_{N(v)} \to \mathbb{R}$ the *reward function* for player $v$. A *game* is a collection $\{ (S_v, R_v) \}_{v \in V}$.

The above definition describes a game that is *static*, in the sense that the game is played only once, and *local*, in the sense that the reward function of each player depends only on its own strategy and on the strategies of the players in its neighborhood. We now introduce the notion of pure Nash equilibrium.

**Definition (Pure Nash equilibrium).** We say that $s \in S_V$ is a *pure Nash equilibrium* (PNE) if for each $v \in V$ we have
$$R_v(s_v, s_{N(v)}) \ge R_v(t, s_{N(v)}) \qquad \text{for every } t \in S_v.$$

A pure Nash equilibrium represents a state where no player can be better off by changing his own strategy if he is the only one who is allowed to do so. In order to investigate the existence of a pure Nash equilibrium it suffices to study the *best response function* defined below.

**Definition (Best response function).** Given a reward function $R_v$ for player $v$, we define the *best response function* for $v$ as
$$BR_v(s_{N(v)}) := \arg\max_{t \in S_v} R_v(t, s_{N(v)}).$$

Clearly, $s \in S_V$ is a pure Nash equilibrium if and only if $s_v = BR_v(s_{N(v)})$ for each $v \in V$. We now define the type of random games that we will be interested in; in order to do so, we need to specify the set of strategies and the reward function for each player.

**Definition (Random game on a fixed graph).** For a graph $G = (V, E)$ and an atomless probability measure $\mu$ on $\mathbb{R}$, let $\Gamma(G, \mu)$ be the associated *random game* defined as follows:

- $S_v := \{0, 1\}$ for each $v \in V$;
- $\{ R_v(t, x) : v \in V,\ t \in S_v,\ x \in S_{N(v)} \}$ is a collection of independent identically distributed random variables with distribution $\mu$.

**Remark.** For each game $\Gamma(G, \mu)$ the family $\{ BR_v(x) : v \in V,\ x \in S_{N(v)} \}$ is a collection of independent random variables that are uniformly distributed in $\{0, 1\}$, and for each $v \in V$ and $x \in S_{N(v)}$ the argmax defining $BR_v(x)$ is attained at a single strategy almost surely. In fact, note that $BR_v(x) = 1$ if and only if $R_v(1, x) > R_v(0, x)$, and this event has probability $1/2$ since the two random variables appearing on both sides of the inequality sign are independent with the same law and $\mu$ is atomless. As far as the analysis of the existence of pure Nash equilibria is concerned, we could take the present notion of best response functions as the definition of our random game on a fixed graph. In fact, note that the choice of $\mu$ does not play a role in our analysis, and we would obtain the same results by choosing different (atomless) distributions for sampling (independently) the reward function of each player.
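
To make the construction concrete, the remark licenses a shortcut when simulating such games: instead of sampling full reward tables, one can sample the best-response bits directly, uniformly and independently. A minimal Python sketch (the function names and graph encoding are ours, not the paper's):

```python
import itertools
import random

def random_best_responses(nbrs, rng):
    # By the remark above, sampling the game reduces to drawing, for each
    # player v and each strategy profile x of its neighborhood, an
    # independent best-response bit BR_v(x), uniform on {0, 1}.
    return {(v, x): rng.randrange(2)
            for v, neigh in nbrs.items()
            for x in itertools.product((0, 1), repeat=len(neigh))}

def pure_nash_equilibria(nbrs, br):
    # Brute force: s is a PNE iff s_v = BR_v(s_{N(v)}) for every player v.
    # Players are labeled 0..n-1, so s[v] is the strategy of player v.
    players = sorted(nbrs)
    return [s for s in itertools.product((0, 1), repeat=len(players))
            if all(s[v] == br[(v, tuple(s[u] for u in nbrs[v]))]
                   for v in players)]

rng = random.Random(0)
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}  # three mutually interacting players
game = random_best_responses(triangle, rng)
print(pure_nash_equilibria(triangle, game))   # the (possibly empty) list of PNE
```

Here `nbrs` maps each player to an ordered list of neighbors, and the best-response table is keyed by `(player, neighborhood profile)`.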

Denote by $\mathcal{G}(n, p)$ the distribution of an Erdős–Rényi random graph with $n$ vertices where each edge is present independently with probability $p$. We now introduce the notion of a random game on a random graph.

**Definition (Random game on a random graph).** For each $n \in \mathbb{N}$, $p \in [0, 1]$, and each atomless probability measure $\mu$ on $\mathbb{R}$, do the following:

- choose a graph $G$ from $\mathcal{G}(n, p)$;
- choose a random game from $\Gamma(G, \mu)$ for the graph $G$.

Henceforth, given a random variable $X$, let $\mathcal{L}(X)$ represent its distribution. Given two measures $\mu$ and $\nu$ on a measurable space, define the *total variation distance* between $\mu$ and $\nu$ as
$$\| \mu - \nu \|_{TV} := \sup_{f : \|f\|_\infty \le 1} \left| \int f \, d\mu - \int f \, d\nu \right|,$$
where the supremum is taken over measurable functions $f$ such that the supremum norm $\|f\|_\infty$ is less than or equal to $1$.
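
For laws on a countable set, this supremum is attained by $f(k) = \operatorname{sign}(\mu(\{k\}) - \nu(\{k\}))$, so the distance reduces to a sum of absolute differences of point masses; a small helper illustrating the convention (names ours):

```python
def tv_distance(mu, nu):
    # mu, nu: dicts mapping points to probabilities. With the convention
    # above (supremum over functions bounded by 1), the total variation
    # distance equals the sum of absolute differences of point masses,
    # i.e. twice the supremum over measurable sets; values lie in [0, 2].
    support = set(mu) | set(nu)
    return sum(abs(mu.get(k, 0.0) - nu.get(k, 0.0)) for k in support)

print(tv_distance({0: 1.0}, {1: 1.0}))  # mutually singular laws: 2.0
print(tv_distance({0: 0.5, 1: 0.5}, {0: 0.4, 1: 0.6}))
```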

We are now ready to state the main theorem that we will prove in the following (Theorem 1.9 in Daskalakis, Dimakis, and Mossel).

**Theorem 1 (High connectivity regime).** Let $Z_n$ be the number of pure Nash equilibria in the random game on a random graph defined above, and define the high-connectivity regime as

where satisfies the following two properties:

Then, we have
$$\big\| \mathcal{L}(Z_n \mid G) - \mathcal{L}(W) \big\|_{TV} \longrightarrow 0 \qquad \text{in probability, as } n \to \infty,$$

where $\mathcal{L}(\,\cdot \mid G)$ denotes the conditional law given the graph $G$ and $W$ is a Poisson random variable with mean $1$. In particular,
$$P(Z_n \ge 1) \longrightarrow 1 - e^{-1},$$

which shows that in this regime a pure Nash equilibrium exists with probability converging to $1 - e^{-1} \approx 0.632$ as the size of the network increases.

**Remark.** Using the terminology of statistical mechanics, the first result in Theorem 1 represents a *quenched*-type result since it involves the conditional distribution of a system (i.e., the game) given its environment (i.e., the graph). On the other hand, the second result represents an *annealed*-type result, where the unconditional probability is considered.
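
As a rough numerical illustration of the annealed statement, one can estimate by simulation the probability that a random game on a dense Erdős–Rényi graph has at least one pure Nash equilibrium and compare it with $1 - e^{-1} \approx 0.632$. A seeded Monte Carlo sketch, with a graph size far from asymptotic and all helper names ours:

```python
import itertools
import math
import random

def er_graph(n, p, rng):
    # Sample neighbor lists of an Erdos-Renyi graph G(n, p).
    nbrs = {v: [] for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                nbrs[u].append(v)
                nbrs[v].append(u)
    return nbrs

def count_pne(nbrs, rng):
    # Sample i.i.d. uniform best-response bits, then count the strategy
    # profiles s with s_v = BR_v(s_{N(v)}) for every player v.
    br = {(v, x): rng.randrange(2)
          for v, neigh in nbrs.items()
          for x in itertools.product((0, 1), repeat=len(neigh))}
    players = sorted(nbrs)
    return sum(
        all(s[v] == br[(v, tuple(s[u] for u in nbrs[v]))] for v in players)
        for s in itertools.product((0, 1), repeat=len(players)))

rng = random.Random(0)
trials = 200
hits = sum(count_pne(er_graph(9, 0.9, rng), rng) >= 1 for _ in range(trials))
print(hits / trials, 1 - math.exp(-1))  # empirical frequency vs. 1 - 1/e
```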

In order to prove Theorem 1 we need the following lemma on Poisson approximations. The lemma is adapted from the results of R. Arratia, L. Goldstein, and L. Gordon (“Two moments suffice for Poisson approximations: the Chen–Stein method”, Ann. Probab. 17, 9–25, 1989) and it shows how the total variation distance between the law of a sum of Bernoulli random variables and a Poisson distribution can be bounded by the first and second moments of the Bernoulli random variables. This result is a particular instance of Stein's method in probability theory.

**Lemma 2 (Arratia, Goldstein, and Gordon, 1989).** Let $\{X_i\}_{i \in I}$ be a collection of Bernoulli random variables with $p_i := P(X_i = 1) > 0$. For each $i \in I$ let $N_i \subseteq I$, with $i \in N_i$, be such that $X_i$ is independent of $\{X_j\}_{j \notin N_i}$. Define
$$b_1 := \sum_{i \in I} \sum_{j \in N_i} p_i p_j, \qquad b_2 := \sum_{i \in I} \sum_{j \in N_i \setminus \{i\}} p_{ij},$$
where $p_{ij} := E[X_i X_j]$. Define $T := \sum_{i \in I} X_i$ and $\lambda := E[T] = \sum_{i \in I} p_i$. If $W$ is a Poisson random variable with mean $\lambda$, then
$$\| \mathcal{L}(T) - \mathcal{L}(W) \|_{TV} \le 2 \, ( b_1 + b_2 ).$$
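
In the simplest setting where the $X_i$ are fully independent we may take $N_i = \{i\}$, so that $b_2 = 0$ and $b_1 = \sum_i p_i^2$, and the lemma (under our reading of the constant, in the factor-two total variation convention used here) bounds the classical Poisson approximation for a sum of independent Bernoulli variables. A numerical comparison against the exact distance:

```python
import math

def poisson_pmf(lam, k):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def bernoulli_sum_pmf(ps):
    # Exact law of a sum of independent Bernoulli(p_i), by dynamic programming.
    dist = [1.0]
    for p in ps:
        new = [0.0] * (len(dist) + 1)
        for k, q in enumerate(dist):
            new[k] += q * (1 - p)
            new[k + 1] += q * p
        dist = new
    return dist

ps = [0.05] * 20          # twenty independent Bernoulli(0.05); lambda = 1
lam = sum(ps)
dist = bernoulli_sum_pmf(ps)
# Total variation in the sup-over-|f|<=1 convention: sum_k |P(T=k) - P(W=k)|.
tv = sum(abs(q - poisson_pmf(lam, k)) for k, q in enumerate(dist))
tv += 1 - sum(poisson_pmf(lam, k) for k in range(len(dist)))  # Poisson tail mass
b1 = sum(p * p for p in ps)  # independent summands: N_i = {i}, so b2 = 0
print(tv, 2 * b1)            # the exact distance sits well below the bound
```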

**Proof.** We define the following operator, which acts on each bounded function $g : \mathbb{N}_0 \to \mathbb{R}$:
$$(\mathcal{A} g)(k) := \lambda g(k+1) - k g(k), \qquad k \in \mathbb{N}_0.$$

We point out that $\mathcal{A}$ characterizes the Poisson distribution with mean $\lambda$, in the sense that $E[(\mathcal{A} g)(Y)] = 0$ for every bounded $g$ if and only if $Y$ is a Poisson random variable with mean $\lambda$; $\mathcal{A}$ is an example of a *Stein operator*. First of all, we show that for each we have . In fact, if we have

and if we have

For each define and . The following properties hold:

- ,
- ,
- .

In what follows, consider any given function such that . Define the function as for each , and let . From what was seen above we have and we get

The first term is bounded above by while the third term is equal to since is independent of . In order to bound the second term we want to rewrite each term as a telescoping sum. In what follows, fix , label the elements of as , and define

Noticing that , we have

and we get

Therefore, combining all together we get

Since the total variation distance can be characterized in terms of sets as

from this point on we restrict our analysis to indicator functions, which are easier to deal with than generic functions. For each define , and . The previous result yields

and the proof of the Lemma is concluded if we show that for each . In fact, in what follows we will show that

where the right hand side is clearly upper bounded by . The proof that we are going to present is contained in the Appendix of “Poisson Approximation for Some Statistics Based on Exchangeable Trials” by Barbour and Eagleson.

First of all, note that for each we have and

for each . From this expression it is clear that for each , which suggests that we can restrict our analysis to singletons. For each we have

and from the series expansion of the Poisson probabilities it is easily seen that the function is negative and decreasing if and is positive and decreasing if . Hence, the only positive value taken by the difference function corresponds to the case that can be bounded as follows:

Therefore, for each , we have

Noticing that , then for each , we also get

and we proved that .

We now introduce the notation that will naturally allow us to use Lemma 2 to prove Theorem 1. We label the *pure strategy profiles* in $S_V = \{0, 1\}^n$ as $s^0, s^1, \ldots, s^{2^n - 1}$, where $n := |V|$ and, as always, $S_v = \{0, 1\}$ for each player $v$. Oftentimes it will be convenient to use the labels $0, 1, \ldots, n - 1$ to enumerate the vertices of the graph $G$. Accordingly, one can think of the strategy profiles as defined in a specific way, for example by positing that $s^i$ is given by the binary decomposition of $i$. In particular, $s^0$ becomes the strategy where each player plays zero, that is, $s^0_v = 0$ for each $v \in V$, and $s^{2^n - 1}$ the strategy where each player plays one, that is, $s^{2^n - 1}_v = 1$ for all $v \in V$. For each $i \in I := \{0, 1, \ldots, 2^n - 1\}$ define
$$X_i := \mathbf{1}\{ s^i \text{ is a pure Nash equilibrium} \}.$$

Clearly the quantity $Z_n := \sum_{i \in I} X_i$ identifies the number of pure Nash equilibria, and the event $\{ Z_n \ge 1 \}$ corresponds to the existence of a pure Nash equilibrium. We recall that both the randomness in the choice of the graph and the randomness in the choice of the game are embedded in the random variables $\{X_i\}_{i \in I}$. Note that conditionally on a given graph $G$ sampled from $\mathcal{G}(n, p)$ we have, for each $i \in I$,
$$P(X_i = 1 \mid G) = \prod_{v \in V} P\big( BR_v(s^i_{N(v)}) = s^i_v \big) = 2^{-n},$$

from which it follows that
$$E[Z_n \mid G] = \sum_{i \in I} P(X_i = 1 \mid G) = 2^n \cdot 2^{-n} = 1.$$

That is, the current definition of a game on a fixed graph implies that the expected number of pure Nash equilibria equals $1$ for any given graph. It follows that also $E[Z_n] = 1$. Notice that Theorem 1 brings more information to the table, since it describes the asymptotic distribution of $Z_n$ in a particular regime of the Erdős–Rényi random graph.
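
The fact that the expected number of pure Nash equilibria equals one for every fixed graph is easy to probe by simulation; a seeded Python sketch (helper names ours), here on a 5-cycle:

```python
import itertools
import random

def count_pne(nbrs, rng):
    # i.i.d. uniform best-response bits, then a brute-force count of the
    # profiles in which every player is best responding.
    br = {(v, x): rng.randrange(2)
          for v, neigh in nbrs.items()
          for x in itertools.product((0, 1), repeat=len(neigh))}
    players = sorted(nbrs)
    return sum(
        all(s[v] == br[(v, tuple(s[u] for u in nbrs[v]))] for v in players)
        for s in itertools.product((0, 1), repeat=len(players)))

cycle = {v: [(v - 1) % 5, (v + 1) % 5] for v in range(5)}  # any fixed graph works
rng = random.Random(0)
trials = 2000
mean = sum(count_pne(cycle, rng) for _ in range(trials)) / trials
print(mean)  # concentrates around 1, whatever the underlying graph
```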

As we have set the stage, it seems tempting to apply Lemma 2 directly to the random variables $\{X_i\}_{i \in I}$ just defined. However, this approach is not fruitful since, apart from trivial cases, any given random variable $X_i$ has a neighborhood of dependence that coincides with the entire index set $I$. To see this, consider any two strategy profiles $s^i$ and $s^j$. As long as there exists $v \in V$ such that $s^i_v = s^j_v$, we can always find a realization of the graph such that $v$ does not have any edges attached to it, that is, $N(v) = \emptyset$; on such realizations $X_i$ and $X_j$ depend on the same best-response variable of the isolated player $v$, and, consequently, $X_i$ and $X_j$ are not independent. Therefore, only those pairs $X_i$ and $X_j$ are independent for which $s^i_v \neq s^j_v$ for each $v \in V$. However, Lemma 2 can be fruitfully applied to the random variables $\{X_i\}_{i \in I}$ when we look at them conditionally on a given graph realization, as the following lemma demonstrates (Lemma 2.2 of Daskalakis, Dimakis, and Mossel).

**Lemma 3.** Let $G = (V, E)$ be a graph. Define
$$N_0 := \big\{ j \in I : \text{there exists } v \in V \text{ such that } s^j_u = 0 \text{ for all } u \in N(v) \big\},$$
and for each $i \in I$ define
$$N_i := \big\{ j \in I : s^j = s^i \oplus s^k \text{ for some } k \in N_0 \big\},$$
where
$$(s \oplus t)_v := s_v \oplus t_v \quad \text{for each } v \in V,$$
and $\oplus$ denotes the exclusive-or operation. Then, for each $i \in I$ we have that $X_i$ is independent of $\{ X_j \}_{j \notin N_i}$.

**Proof.** We first show that $X_0$ is independent of $\{X_j\}_{j \notin N_0}$. In order to make the independence structure manifest, we characterize $I \setminus N_0$. Recall that $N_0$ is the index set of the pure strategy profiles for which there exists a player having all his neighbors playing $0$. Therefore, each strategy profile corresponding to an index in $I \setminus N_0$ is characterized by the fact that each player has at least one neighbor who is playing $1$. Hence, for each $j \notin N_0$ we have $s^j_{N(v)} \neq s^0_{N(v)}$ for all $v \in V$ and, consequently, the events
$$\big\{ BR_v(s^0_{N(v)}) = s^0_v \big\}, \qquad v \in V,$$
which determine $X_0$, are independent of the events
$$\big\{ BR_v(s^j_{N(v)}) = s^j_v \big\}, \qquad v \in V, \quad j \notin N_0,$$
which determine $\{X_j\}_{j \notin N_0}$, since the two families involve disjoint collections of the independent best-response random variables. This proves our claim.

We now generalize this result to show that $X_i$ is independent of $\{X_j\}_{j \notin N_i}$, for each $i \in I$. For any $i \in I$, note that the exclusive-or map $s \mapsto s \oplus s^i$ preserves the differences in the strategy profiles (and, of course, it also preserves the equalities). That is, if $s^j$ and $s^k$ are such that $s^j_v \neq s^k_v$ for some $v \in V$, then $s^j \oplus s^i$ and $s^k \oplus s^i$ are also such that $(s^j \oplus s^i)_v \neq (s^k \oplus s^i)_v$. Therefore,
$$s^j_{N(v)} \neq s^i_{N(v)}$$
holds true if and only if
$$(s^j \oplus s^i)_{N(v)} \neq s^0_{N(v)}$$
holds true. Equivalently stated, $j \notin N_i$ if and only if $k \notin N_0$, where $k$ is the index such that $s^k = s^j \oplus s^i$. Hence, the proof is concluded once we notice that $X_i$ is determined by the events $\{ BR_v(s^i_{N(v)}) = s^i_v \}$, $v \in V$, and that $N_i$ is defined as the index set of the pure strategy profiles that are obtained by an exclusive-or map with respect to $s^i$ of a strategy profile in $N_0$.
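
The XOR structure of the dependency neighborhoods can be verified exhaustively on a small example. Below we index profiles by integers, with bit $v$ of $i$ encoding the strategy of player $v$ in $s^i$ (an indexing convention of ours), and we compare, for every $i$, the set of profiles sharing a neighborhood configuration with $s^i$ against the XOR shift of $N_0$:

```python
def bit(i, v):
    # Bit v of the integer i: the strategy of player v in profile s^i.
    return (i >> v) & 1

nbrs = {0: [1], 1: [0, 2], 2: [1]}  # the path 0 - 1 - 2
n = len(nbrs)

# N_0: indices j such that some player sees all of its neighbors playing 0.
N0 = {j for j in range(2 ** n)
      if any(all(bit(j, u) == 0 for u in nbrs[v]) for v in nbrs)}

# X_i and X_j share a best-response variable iff some player sees the same
# neighborhood configuration in s^i and s^j; the lemma says this dependency
# set is exactly the XOR shift of N_0 by i.
for i in range(2 ** n):
    direct = {j for j in range(2 ** n)
              if any(all(bit(j, u) == bit(i, u) for u in nbrs[v])
                     for v in nbrs)}
    assert direct == {i ^ k for k in N0}

print(sorted(N0))  # -> [0, 1, 2, 4, 5]
```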

For a given graph $G$ with $n$ vertices, define $p_i := P_G(X_i = 1) = 2^{-n}$ for each $i \in I$, where $P_G$ represents the probability conditionally on the graph $G$. Define $N_i$ as in Lemma 3; then, conditionally on $G$, we have that $X_i$ is independent of $\{X_j\}_{j \notin N_i}$. Define
$$b_1(G) := \sum_{i \in I} \sum_{j \in N_i} p_i p_j = \sum_{i \in I} \sum_{j \in N_i} 2^{-2n}, \qquad b_2(G) := \sum_{i \in I} \sum_{j \in N_i \setminus \{i\}} p_{ij},$$
where $p_{ij} := E_G[X_i X_j]$. Define $\lambda := E_G[Z_n] = 1$ and recall that $Z_n = \sum_{i \in I} X_i$. If $W$ is a Poisson random variable with mean $1$, then Lemma 2, applied conditionally on $G$, yields
$$\| \mathcal{L}(Z_n \mid G) - \mathcal{L}(W) \|_{TV} \le 2 \big( b_1(G) + b_2(G) \big).$$
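
Conditionally on the graph all the $p_i$ coincide, so the first Chen–Stein term is really a count: since the XOR shift is a bijection, the dependency sets all have the same size and $b_1(G) = 2^{-2n} \sum_i |N_i| = |N_0| \, 2^{-n}$. A quick exhaustive check on a 3-player path (indexing conventions ours):

```python
def bit(i, v):
    return (i >> v) & 1

nbrs = {0: [1], 1: [0, 2], 2: [1]}  # the path 0 - 1 - 2 again
n = len(nbrs)

def shares_config(i, j):
    # True iff some player sees the same neighborhood configuration in
    # profiles s^i and s^j, i.e. iff j lies in the dependency set N_i.
    return any(all(bit(j, u) == bit(i, u) for u in nbrs[v]) for v in nbrs)

sizes = [sum(shares_config(i, j) for j in range(2 ** n)) for i in range(2 ** n)]
b1 = sum(sizes) / 4 ** n                  # b_1(G) = 2^{-2n} * sum_i |N_i|
assert all(s == sizes[0] for s in sizes)  # the XOR shift is a bijection
assert b1 == sizes[0] / 2 ** n            # hence b_1(G) = |N_0| * 2^{-n}
print(b1)
```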

At this point, let us introduce the following two lemmas (Lemmas 2.4 and 2.5 of Daskalakis, Dimakis, and Mossel).

**Lemma 4.** If $G$ is sampled from the Erdős–Rényi distribution $\mathcal{G}(n, p)$, we have

**Lemma 5.** Under the assumptions of Theorem 1 there exist and such that

We now show how the proof of Theorem 1 follows easily from the two lemmas above.

**Proof of Theorem 1.** Let and be as in Lemma 5. Define , and . Clearly, by Lemma 5 we have

Define the event

By Markov's inequality and the previous asymptotic bounds, for each we have

where we used the result of Lemma 4 and the above estimates for and . Since for some , there clearly exists such that

Hence, we have that for . Let us now define and such that and

Then

which proves the first statement in Theorem 1. In fact, we have

since by definition of , on the event we have

By the properties of conditional expectations we can now prove the convergence in total variation of the unconditional law of to the law of . In fact, for we have

from which it follows that

Since convergence in total variation implies convergence in distribution, the previous result implies that converges in distribution to , which concludes the proof of Theorem 1.

We now provide the proof of Lemma 4, while we refer the reader to Daskalakis, Dimakis, Mossel for the proof of Lemma 5.

**Proof of Lemma 4.** We begin with the study of . By the symmetry of the model we have

Since , we have . By the symmetry of the Erdős–Rényi distribution we also have that if and have the same number of players playing (equivalently, the same number of players playing ).

Therefore, if we label the vertices of the graph as , we have

where for each the index set is such that the strategy satisfies if and if . Hence, the bound for in the statement of the Lemma is proved if we show that . In fact, by definition of we have

We now study the term . Proceeding as above, by symmetry we have

We now analyze the term . As noticed above, if and only if the graph is such that there exists a player such that . In the case in which such a player also satisfies the property , then and for each , and it follows that . In fact, the event corresponds to the realizations where both strategy and strategy are pure Nash equilibria, that is, for each . But it cannot be that both and are best responses for the player when all players in the neighborhood play . Hence, is different from on the event

and on the event

Define and . Note that we have . On the events and we have and we get

Because of the independence structure in the Erdős–Rényi random graph we have

Furthermore, we have

This follows immediately from the definition of pure Nash equilibrium in terms of the best response functions, once we notice that on the event there are exactly players (each such that ) such that , while for the remaining players we have . Putting everything together we get

Using the fact that , it clearly follows that .