If you work in a field related to probability theory you have most certainly already come across the concentration of measure phenomenon: ‘a function that depends on many independent variables, but not too much on any one of them, is almost constant’. This extremely strong statement can be made formal in various ways, the most basic one being Hoeffding’s inequality (and its well-known generalization, McDiarmid’s inequality). However, in many applications McDiarmid’s inequality is too weak: the resulting bound does not account for the variance, and the assumption on the underlying function is very stringent. Fortunately one can go far beyond McDiarmid’s inequality, and this is explored in depth in the new book (published a few weeks ago!) ‘Concentration Inequalities’, by Stéphane Boucheron, Gábor Lugosi and Pascal Massart.
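To see the basic phenomenon in action, here is a small sketch (my own illustrative choice, not from the book) that compares Hoeffding’s bound P(|S_n/n − EX| ≥ t) ≤ 2 exp(−2nt²) for i.i.d. variables in [0, 1] against a Monte Carlo estimate of the deviation probability for the mean of n fair coin flips:

```python
import math
import random

def hoeffding_bound(n, t):
    # Hoeffding: P(|S_n/n - E X| >= t) <= 2 exp(-2 n t^2)
    # for i.i.d. random variables taking values in [0, 1]
    return 2 * math.exp(-2 * n * t * t)

def empirical_tail(n, t, trials=20000, seed=0):
    # Monte Carlo estimate of the deviation probability for
    # the empirical mean of n fair coin flips
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        mean = sum(rng.random() < 0.5 for _ in range(n)) / n
        if abs(mean - 0.5) >= t:
            hits += 1
    return hits / trials

n, t = 100, 0.1
print(empirical_tail(n, t), "<=", hoeffding_bound(n, t))
```

With n = 100 and t = 0.1 the bound is 2e⁻² ≈ 0.27, while the true deviation probability is an order of magnitude smaller: this gap, caused by the bound ignoring the (small) variance, is exactly the kind of slack that the sharper inequalities in the book address.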
I read an early draft of the book during my postdoc with Gábor, and I strongly recommend it to anyone who is even just remotely interested in probabilistic phenomena. The first 100 pages form a gentle introduction to the subject, and they contain indispensable results for machine learners and the like. Then Chapters 5, 6 and 7 are (in my opinion) the core of the book: they masterfully describe results that had always been a bit daunting to me, including the geometric point of view on the concentration of measure. This is where I stopped my first reading two years ago, but now that I have the book in my hands I realize that this is only the first half of it!!! I have started to read the second half, and right now I am having a lot of fun with influences and monotone sets ;).