First, two announcements about my own work:
A new problem around subgraph densities
Nati Linial and I have just uploaded our first paper together, titled ‘On the local profiles of trees‘. Some background on the paper: recently there has been a lot of interest in subgraph densities of very large graphs, mainly because of their importance for the emerging theory of graph limits; see here for a bunch of videos on this, here for a nice survey by Lovasz, and here for the book version of this survey.

A basic issue is that we know almost nothing about subgraph densities. Consider for instance the set $\mathcal{D}_k$ of distributions on $k$-vertex subgraphs induced by very large graphs (that is, a distribution $D_G$ corresponds to a large graph $G$ in the sense that, for a $k$-vertex graph $H$, $D_G(H)$ is the probability that by picking $k$ vertices at random in $G$ one obtains $H$ as the induced subgraph). We know very little about $\mathcal{D}_k$. It is non-convex (think of the complete graph, then the empty graph, and then try to take convex combinations of the corresponding distributions). Even worse, Hatami and Norine proved that it is undecidable to determine the validity of a linear inequality over the set $\mathcal{D}_k$.

Alright, so subgraph densities in general graphs are in some sense intractable. Can we make some simplifying assumptions and recover tractability? It turns out that the answer is essentially yes if you restrict your attention to trees and subtrees! For instance, we prove in our paper with Nati that in this case the set of possible distributions becomes convex! We also initiate the study of the defining inequalities for this set, but we are still far from a complete picture. The paper contains a list of 7 open problems, and I strongly recommend reading our short paper and trying to solve some of them: they are really fun to work on!
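To make the definition of a local profile concrete, here is a small sketch (the function name and setup are mine, not from the paper) that computes the exact distribution over induced 3-vertex subgraphs of a graph. For $k = 3$ the isomorphism class of an induced subgraph is determined by its number of edges, so the profile is a distribution over edge counts 0 through 3:

```python
from itertools import combinations

def local_profile_k3(n, edges):
    """Distribution over induced 3-vertex subgraphs of a graph on n vertices,
    represented by the probability of seeing 0, 1, 2, or 3 edges."""
    adj = set(frozenset(e) for e in edges)
    counts = [0, 0, 0, 0]
    for triple in combinations(range(n), 3):
        # Count the edges present among the 3 chosen vertices.
        m = sum(1 for pair in combinations(triple, 2) if frozenset(pair) in adj)
        counts[m] += 1
    total = sum(counts)
    return [c / total for c in counts]

# Complete graph on 4 vertices: every induced triple is a triangle.
print(local_profile_k3(4, [(i, j) for i in range(4) for j in range(i + 1, 4)]))
# -> [0.0, 0.0, 0.0, 1.0]
# Empty graph on 4 vertices: every induced triple is edgeless.
print(local_profile_k3(4, []))
# -> [1.0, 0.0, 0.0, 0.0]
```

The two extreme outputs illustrate the non-convexity remark: a convex combination of these two profiles (mass only on the empty and complete triple) is not realized by any single large graph.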
New paper on the multi-armed bandit
Che-Yu Liu (my first graduate student) and I uploaded a few weeks ago our first paper together, titled ‘Prior-free and prior-dependent regret bounds for Thompson Sampling‘. Let me say that there are still plenty of open questions around the theme developed in this paper. The abstract reads as follows:

We consider the stochastic multi-armed bandit problem with a prior distribution on the reward distributions. We are interested in studying prior-free and prior-dependent regret bounds, very much in the same spirit as the usual distribution-free and distribution-dependent bounds for the non-Bayesian stochastic bandit. Building on the techniques of Audibert and Bubeck and of Russo and Van Roy, we first show that Thompson Sampling attains an optimal prior-free bound in the sense that for any prior distribution its Bayesian regret is bounded from above by $14 \sqrt{n K}$ (with $n$ the time horizon and $K$ the number of arms). This result is unimprovable in the sense that there exists a prior distribution such that any algorithm has a Bayesian regret bounded from below by a quantity of order $\sqrt{n K}$. We also study the case of priors for the setting of Bubeck et al. (where the optimal mean is known as well as a lower bound on the smallest gap) and we show that in this case the regret of Thompson Sampling is in fact uniformly bounded over time, thus showing that Thompson Sampling can greatly take advantage of the nice properties of these priors.
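For readers unfamiliar with the algorithm, here is a minimal sketch of Thompson Sampling for Bernoulli reward distributions with independent uniform (Beta(1,1)) priors on each arm's mean. This is an illustrative special case with names of my own choosing, not the general prior setting analyzed in the paper:

```python
import random

def thompson_sampling(true_means, n_rounds, seed=0):
    """Run Thompson Sampling on a Bernoulli bandit; return the total reward."""
    rng = random.Random(seed)
    K = len(true_means)
    # Arm i's posterior is Beta(1 + successes[i], 1 + failures[i]).
    successes = [0] * K
    failures = [0] * K
    total_reward = 0
    for _ in range(n_rounds):
        # Draw one sample from each posterior and play the arm with the largest draw.
        samples = [rng.betavariate(1 + successes[i], 1 + failures[i]) for i in range(K)]
        arm = max(range(K), key=samples.__getitem__)
        reward = 1 if rng.random() < true_means[arm] else 0
        successes[arm] += reward
        failures[arm] += 1 - reward
        total_reward += reward
    return total_reward

# Usage: realized regret relative to always playing the best arm (mean 0.7).
reward = thompson_sampling([0.3, 0.5, 0.7], n_rounds=5000)
print(5000 * 0.7 - reward)
```

The (Bayesian) regret of this strategy, averaged over both the prior and the rewards, is what the $\sqrt{n K}$ bound above controls.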
Next, three announcements related to workshops and conferences:
Simons-Berkeley Research Fellowship 2014-2015
The Simons Institute at UC Berkeley has just posted its call for the Fellowships for next year. Next year’s programs are “Algorithmic Spectral Graph Theory”, “Algorithms & Complexity in Algebraic Geometry” and “Information Theory”. The deadline is December 15. Check out what Justin Thaler has to say about his experience as a research fellow this semester. Let me add that I am also very much enjoying my stay there, and the paper with Nati I talked about above would not have been possible without the Simons Institute.
COLT 2014
The call for papers for COLT 2014 is out, see the official website here. Let me also remind you that this edition will be in Barcelona, which should be a lot of fun.
Mathematics of Machine Learning
Together with Nicolo Cesa-Bianchi, Gabor Lugosi, and Sasha Rakhlin, I am organizing a special program in Barcelona on the Mathematics of Machine Learning from April 7, 2014 to July 14, 2014; see the website for all the details. We are still in the process of inviting people, but if you are interested in participating please feel free to send us an email. We should also soon have more details on the large workshop that will take place right after COLT.