First week of activity at the Simons Institute

This first week at the Simons Institute was a lot of fun! I attended the first workshop in the Real Analysis program, which was about Testing, Learning and Inapproximability. There were plenty of good talks and I learned a lot of new things.


– First I learned about parallel repetition. Let \Sigma be a finite alphabet. Consider a bipartite graph G with vertex sets U and V and a labeling of the edges with mappings from \Sigma to \Sigma, that is, edge uv is labelled with some mapping \pi_{uv} : \Sigma \rightarrow \Sigma. Now consider the following two-player game (Alice and Bob are the players). First an edge uv is drawn at random from some known probability distribution \nu over the set of edges in G (the graph G and the ‘labels’ are also known to the players). The vertex u is revealed to Alice and the vertex v is revealed to Bob. Without communicating, each of them has to output an alphabet symbol, say \alpha for Alice and \beta for Bob, and they win if \beta = \pi_{uv} ( \alpha ), in other words if the pair of symbols is consistent with the labeling of the drawn edge. The value of the game v( G ) is the maximum probability, over all strategies, that Alice and Bob win, which can be written as:

    \[v(G) = \max_{f : U \rightarrow \Sigma, g : V \rightarrow \Sigma} \mathbb{P}_{uv \sim \nu}\big(g(v) = \pi_{uv} (f(u)) \big) .\]
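
To make the definition concrete, here is a small brute-force computation of v(G); the instance below, with its XOR labels, is entirely made up for illustration:

    from itertools import product

    # A toy instance: U = V = {0, 1}, alphabet {0, 1}, uniform distribution on
    # the four edges, and the (made-up) labels pi_uv(alpha) = alpha XOR u XOR v.
    U, V, Sigma = [0, 1], [0, 1], [0, 1]
    edges = [(u, v) for u in U for v in V]
    pi = {(u, v): (lambda a, u=u, v=v: a ^ u ^ v) for (u, v) in edges}

    def value(U, V, Sigma, edges, pi):
        """Brute-force v(G): maximize the win probability over all strategy
        pairs f : U -> Sigma (Alice) and g : V -> Sigma (Bob)."""
        best = 0.0
        for f in product(Sigma, repeat=len(U)):      # f[u] is Alice's answer on u
            for g in product(Sigma, repeat=len(V)):  # g[v] is Bob's answer on v
                wins = sum(1 for (u, v) in edges if g[v] == pi[(u, v)](f[u]))
                best = max(best, wins / len(edges))
        return best

    print(value(U, V, Sigma, edges, pi))  # 1.0 for this particular toy game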

Now we define the value of the repeated game as follows: the players simply play the above game k times (with no communication at all, except before the game starts when they agree on their strategy, and with the sequence of edges drawn i.i.d. from \nu), and the value of the repeated game, denoted v( G^{\otimes k} ), is the maximum probability that they win all k games. This can be rewritten as:

    \begin{eqnarray*} v(G^{\otimes k}) = \max_{f : U^k \rightarrow \Sigma^k, g : V^k \rightarrow \Sigma^k} & & \mathbb{P}_{(u_1v_1, \hdots, u_kv_k) \sim \nu^{\otimes k}}\big(\forall i \in [k], \\ & & [g(v_1, \hdots, v_k)]_i = \pi_{u_iv_i}([f(u_1, \hdots, u_k)]_i) \big) . \end{eqnarray*}

One is now interested in understanding the relation between v( G^{\otimes k} ) and v(G).

At this point it’s probably a good idea to pause. If this is the first time that you read about parallel repetition you are probably like ‘wtf, it’s obvious that v( G^{\otimes k} ) = v(G)^k !!!’ (at least this was my initial reaction). It turns out that this is very far from being true. Clearly it’s true that v( G^{\otimes k} ) \geq v(G)^k, since the players can simply play the k games independently. But in some cases v( G^{\otimes k} ) might be much larger than v(G)^k, which is kind of amazing! Here is a beautiful example (which is due to Lance Fortnow I think) where v(G) = 1/2 and v( G^{\otimes 2} ) = 1/2. The graph G is the complete bipartite graph on the vertex sets U=V=\{0,1\} and the distribution \nu is uniform on the set of edges (in other words Alice receives a random bit u \in \{0,1\} and Bob receives a random bit v \in \{0,1\}). The alphabet is \Sigma = (\{A,B\} \times \{0,1\}) \cup \{\mathrm{err}\}, where the symbol \mathrm{err} is introduced for the sake of simplicity and we assume that the players cannot respond with \mathrm{err}. The mappings \pi_{uv} are given by:

    \[\pi_{uv}((A, u) )=(A, u), \ \pi_{uv}((B, v) )=(B, v), \ \pi_{uv}((A, \bar{u}) )= \pi_{uv}((B, \bar{v}) )=\mathrm{err}.\]

In words, the players must choose either Alice or Bob, and then both have to output the bit of the chosen player. For instance if they choose Alice then Bob has to guess Alice’s bit (and Alice can just report her bit). Clearly there is not much one can do for the one-shot game: one has v(G) = 1/2. The situation becomes more interesting for the two-round game: the players can agree that in the first round Bob will try to guess Alice’s first bit, and in the second round Alice will try to guess Bob’s second bit. The trick is that when Bob tries to guess Alice’s first bit he can use his own second bit as the guess, and respectively Alice’s guess for Bob’s second bit can be her own first bit. That way they effectively reduce the two-round game to a one-round game: both predictions are correct exactly when Bob’s second bit equals Alice’s first bit, which happens with probability 1/2. In other words we proved v( G^{\otimes 2}) = 1/2, while naively one would have guessed that v( G^{\otimes 2} ) = 1/4.
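
As a sanity check, here is a short enumeration of all 16 input pairs, verifying that the strategy above wins the two-round game with probability exactly 1/2 (the encoding of the alphabet symbols as Python tuples is mine):

    from itertools import product

    ERR = "err"

    def pi(u, v, alpha):
        """Label of edge uv: maps Alice's symbol to the required Bob symbol."""
        tag, bit = alpha
        if tag == "A":
            return ("A", u) if bit == u else ERR
        else:
            return ("B", v) if bit == v else ERR

    wins = 0
    for u1, u2, v1, v2 in product([0, 1], repeat=4):
        # Round 1: target Alice's first bit; Bob guesses it with his second bit.
        a1, b1 = ("A", u1), ("A", v2)
        # Round 2: target Bob's second bit; Alice guesses it with her first bit.
        a2, b2 = ("B", u1), ("B", v2)
        if b1 == pi(u1, v1, a1) and b2 == pi(u2, v2, a2):
            wins += 1

    print(wins / 16)  # prints 0.5: both rounds are won exactly when u1 == v2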

Now that we have a better understanding of why the original question on the relation between v( G^{\otimes k} ) and v(G) is non-trivial, it is clear that the interesting thing to study is an upper bound on the value v( G^{\otimes k} ) as a function of v(G) (recall the obvious lower bound v( G^{\otimes k} ) \geq v(G)^k). In essence we are asking for a limit on the power of tricks such as the one described above. Such a result was first proved by Ran Raz in 1995. He showed that, essentially, if v(G) \leq 1- \epsilon then v( G^{\otimes k} ) \leq (1-\epsilon^2 /2)^k (the original proof gave \epsilon^{32} instead of \epsilon^2). At the Simons workshop David Steurer talked about a new proof of this result based on viewing the value of the game as the norm of some matrix, see this paper. This new proof method also led to new upper bounds, including v( G^{\otimes k} ) \leq v(G)^{\sqrt{k}} (which matches some known lower bounds from Ran Raz). As far as I understand there are still plenty of open questions on this fundamental problem of parallel repetition.
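
Just to get a feel for how these two bounds compare, here is a quick numerical sketch (ignoring the dependence on the alphabet size that the actual theorems carry):

    import math

    def raz_style(v, k):
        """(1 - eps^2/2)^k with eps = 1 - v: exponential decay in k."""
        eps = 1.0 - v
        return (1.0 - eps ** 2 / 2.0) ** k

    def low_value(v, k):
        """v^sqrt(k): the bound that kicks in when v(G) itself is small."""
        return v ** math.sqrt(k)

    for v in [0.9, 0.5, 0.01]:
        for k in [10, 100]:
            print(f"v={v}, k={k}:  (1-eps^2/2)^k = {raz_style(v, k):.2e},  "
                  f"v^sqrt(k) = {low_value(v, k):.2e}")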


– Joe Neeman talked about a new proof of Borell’s theorem, which states that if X and Y are two standard Gaussian vectors in \mathbb{R}^n with \mathrm{cov}(X, Y) = \rho I_n (with \rho \geq 0), then among sets A \subset \mathbb{R}^n of a given Gaussian measure, half-spaces are the maximizers of \mathbb{P}(X \in A, Y \in A). The proof is really slick, it’s basically one page of fairly simple calculations, see Section 2 here. Elchanan Mossel then talked about the ‘natural’ extension of this result to the case of uniform variables on the hypercube \{-1,1\}^n, see this paper.
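
Here is a toy Monte Carlo check of the one-dimensional case (my own illustration, with sets of Gaussian measure 1/2): the half-line indeed beats a symmetric interval of the same measure.

    import numpy as np

    rng = np.random.default_rng(0)
    rho, n_samples = 0.5, 10**6

    # Correlated standard Gaussians: Y = rho*X + sqrt(1 - rho^2)*Z.
    x = rng.standard_normal(n_samples)
    y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n_samples)

    half_line = lambda t: t <= 0.0            # Gaussian measure 1/2
    interval = lambda t: np.abs(t) <= 0.6745  # also measure ~1/2, as 0.6745 ~ Phi^{-1}(0.75)

    for name, A in [("half-line", half_line), ("interval", interval)]:
        stability = np.mean(A(x) & A(y))
        print(f"{name}: P(X in A, Y in A) ~ {stability:.4f}")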


– James Lee talked about a very interesting problem: we know how to prove lower bounds on the size of extended formulations (see this blog post for instance), but such bounds are interesting only for exact optimization. A natural question is whether we can prove lower bounds on the size of LPs even for approximate optimization. Apparently there is a rich literature on this for specific types of LP relaxations, such as the Sherali-Adams hierarchy. Now in their new paper (which is not yet online) James Lee and his co-authors are able to prove general bounds instead of the ones that were previously proved specifically for the Sherali-Adams hierarchy. Of course LPs are nice but SDPs are even nicer (see here), and a natural question is how to generalize these results to SDPs instead of LPs. This was touched upon in Raghu Meka‘s talk where he discussed lower bounds on the number of rounds needed to solve the hidden clique problem with the Lasserre hierarchy. The paper, joint work with Avi Wigderson, is available here.


– Nati Linial gave an absolutely wonderful talk, probably one of the best that I have seen in months. My advice is to look at the video as soon as it is released on the Simons website (this should be a link to it hopefully). Just as a teaser, here is one object that Linial is looking at. First a definition: for a graph G=(V,E) let p_i(G) be the fraction of sets of 3 vertices that span exactly i-1 edges. The vector p(G) = (p_1(G), \hdots, p_4(G)) is called the 3-profile of G. Now the object that we look at is the set S \subset \mathbb{R}^4 of possible ‘asymptotic profiles’, more precisely:

    \[S = \{x \in \mathbb{R}^4 : \exists (G_n = (V_n, E_n))_{n \in \mathbb{N}}, p(G_n) \xrightarrow[n \to \infty]{} x\} .\]

There is not much that we know about S. It is not convex: (1,0,0,0) and (0,0,0,1) are clearly in S (take empty and complete graphs), but (1/2, 0,0,1/2) is not in S (roughly speaking, a graph in which every triple of vertices is either complete or empty must be a disjoint union of cliques, and such a graph cannot avoid triples with exactly one edge while keeping both p_1 and p_4 bounded away from 0). We also know since the 60s that for x \in S one has x_1 + x_4 \geq 1/4, and Linial recently proved that \min(x_2, x_4) \leq 0.26. A better understanding of the properties of S could lead to breakthroughs in the emerging theory of statistics for networks (see Nati’s talk for more details!).
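
The 3-profile is straightforward to compute; as a toy illustration, the following sketch estimates it for an Erdős–Rényi random graph G(n,1/2), whose profile converges to the point (1/8, 3/8, 3/8, 1/8) \in S:

    import random
    from itertools import combinations

    def three_profile(n, adj):
        """p_i = fraction of 3-vertex sets spanning exactly i-1 edges."""
        counts = [0, 0, 0, 0]
        for a, b, c in combinations(range(n), 3):
            edges = adj[a][b] + adj[a][c] + adj[b][c]
            counts[edges] += 1
        total = sum(counts)
        return [ct / total for ct in counts]

    # Sample G(n, 1/2) as a symmetric 0/1 adjacency matrix.
    n = 60
    adj = [[0] * n for _ in range(n)]
    for i, j in combinations(range(n), 2):
        adj[i][j] = adj[j][i] = random.randint(0, 1)

    print(three_profile(n, adj))  # close to [0.125, 0.375, 0.375, 0.125]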


Tomorrow the Big Data program will start with its Boot Camp. Apparently there has been a lot of hype around this boot camp and the seats have been ‘sold out’. If you can’t enter the auditorium, or if you are not in Berkeley, the event will be streamed live here.
