Nevanlinna Prize

Nominations are invited for people born on or after January 1, 1974, for outstanding contributions in Mathematical Aspects of Information Sciences, including:

  1. All mathematical aspects of computer science, including complexity theory, logic of programming languages, analysis of algorithms, cryptography, computer vision, pattern recognition, information processing and modelling of intelligence.
  2. Scientific computing and numerical analysis. Computational aspects of optimization and control theory. Computer algebra.

Nomination Procedure: http://www.mathunion.org/general/prizes/nevanlinna/details/

 

Information and Inference (new journal)

The first issue of Information and Inference has just appeared:

http://imaiai.oxfordjournals.org/content/current

It includes the following editorial:

In recent years, a great deal of energy and talent have been devoted to new research problems arising from our era of abundant and varied data/information. These efforts have combined advanced methods drawn from across the spectrum of established academic disciplines: discrete and applied mathematics, computer science, theoretical statistics, physics, engineering, biology and even finance. This new journal is designed to serve as a meeting place for ideas connecting the theory and application of information and inference from across these disciplines.

While the frontiers of research involving information and inference are dynamic, we are currently planning to publish in information theory, statistical inference, network analysis, numerical analysis, learning theory, applied and computational harmonic analysis, probability, combinatorics, signal and image processing, and high-dimensional geometry; we also encourage papers not fitting the above description, but which expose novel problems, innovative data types, surprising connections between disciplines and alternative approaches to inference. This first issue exemplifies this topical diversity of the subject matter, linked by the use of sophisticated mathematical modelling, techniques of analysis, and focus on timely applications.

To enhance the impact of each manuscript, authors are encouraged to provide software to illustrate their algorithm and, where possible, replicate the experiments presented in their manuscripts. Manuscripts with accompanying software are marked as “reproducible” and have the software linked on the journal website under supplementary material. It is with pleasure that we welcome the scientific community to this new publication venue.

Robert Calderbank, David L. Donoho, John Shawe-Taylor and Jared Tanner

Comparing Variability of Random Variables

Consider exchangeable random variables {X_1, \ldots, X_n, \ldots}. A couple of facts seem quite intuitive:

Statement 1. The “variability” of the sample mean {S_m = \frac{1}{m} \sum_{i=1}^{m} X_i} decreases with {m}.

Statement 2. Let the average of functions {f_1, f_2, \ldots, f_n} be defined as {\overline{f} (x) := \frac{1}{n} \sum_{i=1}^{n} f_i(x)}. Then {\max_{1\leq i \leq n} \overline{f}(X_i)} is less “variable” than {\max_{1\leq i \leq n} f_i (X_i)}.

 

To make these statements precise, one faces the fundamental question of comparing two random variables {W} and {Z} (or more precisely comparing two distributions). One common way we think of ordering random variables is the notion of stochastic dominance:

\displaystyle W \leq_{st} Z \Leftrightarrow F_W(t) \geq F_Z(t) \ \ \ \mbox{ for all real } t.
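As a quick empirical companion (not part of the original post), here is a minimal Python sketch that checks this definition on simulated data; the exponential distributions, sample sizes and tolerance are illustrative choices of mine.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastically_dominated(w_sample, z_sample, grid_size=200, tol=1e-3):
    """Empirical check of W <=_st Z, i.e. F_W(t) >= F_Z(t) on a grid of t values."""
    ts = np.linspace(min(w_sample.min(), z_sample.min()),
                     max(w_sample.max(), z_sample.max()), grid_size)
    F_W = np.array([(w_sample <= t).mean() for t in ts])
    F_Z = np.array([(z_sample <= t).mean() for t in ts])
    return np.all(F_W >= F_Z - tol)  # small tolerance for sampling noise

# Exp(1) is stochastically dominated by an exponential with mean 2 (the "larger" one).
W = rng.exponential(scale=1.0, size=100_000)
Z = rng.exponential(scale=2.0, size=100_000)
print(stochastically_dominated(W, Z))  # expected: True (up to sampling noise)
```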

However, this notion is only suitable when one is concerned with the actual size of the random quantities of interest; in our scenario, a more natural order is one that compares the variability of two random variables (or, more precisely again, of the two distributions). It turns out that a very useful notion, used in a variety of fields, is due to Ross (1983): random variable {W} is said to be stochastically less variable than random variable {Z} (denoted by {\leq_v}) when every risk-averse decision maker would choose {W} over {Z}, given that the two have equal means. More precisely, for random variables {W} and {Z} with finite means,

\displaystyle W \leq_{v} Z \Leftrightarrow \mathbb{E}[f(W)] \leq \mathbb{E}[f(Z)] \ \ \mbox{ for all increasing convex functions } f \in \mathcal{F},

where {\mathcal{F}} is the set of functions for which the above expectations exist.
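This order has a standard equivalent characterization through the stop-loss transform {t \mapsto \mathbb{E}[(X-t)_+]}: one has {W \leq_v Z} if and only if {\mathbb{E}[(W-t)_+] \leq \mathbb{E}[(Z-t)_+]} for all real {t}. Here is a minimal Python sketch of an empirical check based on that fact; the normal distributions are my own illustrative example.

```python
import numpy as np

rng = np.random.default_rng(1)

def less_variable(w_sample, z_sample, grid_size=200, tol=1e-3):
    """Empirical check of W <=_v Z via the stop-loss characterization:
    W <=_v Z  iff  E[(W - t)+] <= E[(Z - t)+] for all t."""
    ts = np.linspace(min(w_sample.min(), z_sample.min()),
                     max(w_sample.max(), z_sample.max()), grid_size)
    sl_W = np.array([np.maximum(w_sample - t, 0).mean() for t in ts])
    sl_Z = np.array([np.maximum(z_sample - t, 0).mean() for t in ts])
    return np.all(sl_W <= sl_Z + tol)  # small tolerance for sampling noise

# Same mean, different spread: N(0, 1) is less variable than N(0, 4).
W = rng.normal(0.0, 1.0, size=100_000)
Z = rng.normal(0.0, 2.0, size=100_000)
print(less_variable(W, Z))  # expected: True
print(less_variable(Z, W))  # expected: False
```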

One interesting, but perhaps not entirely obvious, fact is that the ordering {W\leq_v Z} is equivalent to saying that there is a sequence of mean-preserving spreads that in the limit transforms the distribution of {W} into the distribution of another random variable {W'} with finite mean such that {W'\leq_{st} Z}! Also, using results of Hardy, Littlewood and Pólya (1929), the stochastic variability order introduced above can be shown to be equivalent to the Lorenz (1905) ordering used in economics to measure income inequality.
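For concreteness, here is a tiny worked example of my own (not from the post): let {W} take the values {1} and {3} with probability {1/2} each, and let {Z} take the values {0} and {4} with probability {1/2} each. Then {Z} is obtained from {W} by a single mean-preserving spread (each atom is pushed outward by {1}, leaving the common mean {2} unchanged), and indeed {W \leq_v Z}; for instance, at {t = 2},

\displaystyle \mathbb{E}[(W-2)_+] = \tfrac{1}{2} \leq 1 = \mathbb{E}[(Z-2)_+].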

Now with this, we are ready to formalize our previous statements. The first statement is actually due to Arnold and Villasenor (1986):

\displaystyle \frac{1}{m} \sum_{i=1}^{m} X_i \leq_v \frac{1}{m-1} \sum_{i=1}^{m-1} X_i \ \ \ \ \mbox{ for all integers } m \geq 2.

Note that when applied to a sequence of iid random variables with finite mean {\mu}, this fact strengthens the strong law of large numbers: the almost sure convergence of the sample mean to {\mu} occurs with monotonically decreasing variability as the sample size grows.
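A small Monte Carlo sketch (my own toy setup, using the stop-loss characterization mentioned above) makes the monotonicity of Statement 1 visible:

```python
import numpy as np

rng = np.random.default_rng(2)

# Monte Carlo illustration of Statement 1: for iid Exp(1) variables, the
# empirical stop-loss E[(S_m - t)+] of the sample mean S_m should be
# non-increasing in m for every fixed threshold t.
n_reps, t = 200_000, 1.5
X = rng.exponential(scale=1.0, size=(n_reps, 5))
for m in range(1, 6):
    S_m = X[:, :m].mean(axis=1)              # sample mean of the first m variables
    print(m, np.maximum(S_m - t, 0).mean())  # decreases as m grows
```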

The second statement comes up in proving certain optimality results for sharing parallel servers in fork-join queueing systems (J. 2008) and has a similar flavor:

\displaystyle \max_{1\leq i \leq n} \overline{f}(X_i) \leq_v \max_{1\leq i \leq n} f_i (X_i).
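Here is an illustrative simulation of Statement 2, again via empirical stop-loss values; the functions {f_1(x) = x} and {f_2(x) = x^2} and the iid Exp(1) inputs are my own arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustration of Statement 2: compare max_i fbar(X_i) against max_i f_i(X_i)
# through empirical stop-loss values E[(. - t)+], which are ordered under <=_v.
f1 = lambda x: x
f2 = lambda x: x ** 2
fbar = lambda x: 0.5 * (f1(x) + f2(x))

X = rng.exponential(scale=1.0, size=(200_000, 2))  # iid, hence exchangeable
max_bar = np.maximum(fbar(X[:, 0]), fbar(X[:, 1]))  # max_i fbar(X_i)
max_ind = np.maximum(f1(X[:, 0]), f2(X[:, 1]))      # max_i f_i(X_i)
for t in (0.5, 1.0, 2.0):
    print(t, np.maximum(max_bar - t, 0).mean(),
          np.maximum(max_ind - t, 0).mean())  # first value <= second, up to noise
```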

The cleanest way to prove both statements, to the best of my knowledge, is based on the following theorem first proved by Blackwell in 1953 (later strengthened to random elements in separable Banach spaces by Strassen in 1965, hence referred to by some as Strassen’s theorem):

Theorem 1 Let {W} and {Z} be two random variables with finite means. A necessary and sufficient condition for {W \leq_v Z} is that there exist two random variables {\hat{W}} and {\hat{Z}}, defined on a common probability space, with the same marginal distributions as {W} and {Z}, respectively, such that {\mathbb{E}[\hat{Z} |\hat{W}] \geq \hat{W}} almost surely.
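As a sanity check (my own remark, not in the original), the sufficiency direction is a two-line consequence of conditional Jensen’s inequality: for any increasing convex {f} for which the expectations exist,

\displaystyle \mathbb{E}[f(\hat{Z})] = \mathbb{E}\big[\mathbb{E}[f(\hat{Z}) \mid \hat{W}]\big] \geq \mathbb{E}\big[f(\mathbb{E}[\hat{Z} \mid \hat{W}])\big] \geq \mathbb{E}[f(\hat{W})],

where the first inequality is Jensen’s inequality for the convex function {f}, and the second uses that {f} is increasing together with {\mathbb{E}[\hat{Z}|\hat{W}] \geq \hat{W}} a.s.; since {\hat{W}} and {\hat{Z}} have the same laws as {W} and {Z}, this is exactly {W \leq_v Z}.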

For instance, to prove the first statement take {\hat{W} = W = \frac{1}{m} \sum_{i=1}^m X_i} and {Z = \frac{1}{m-1} \sum_{i=1}^{m-1} X_i}. All that remains is to note that {\hat{Z} := \frac{1}{m-1} \sum_{i\in I, i \neq J} X_i}, where {J} is a uniform random variable on the set {I := \{1,2, \ldots, m\}}, independent of the {X_i}’s, has the same distribution as {Z} by exchangeability. Furthermore,

\displaystyle \mathbb{E} [ \hat{Z} \mid W ] = \mathbb{E} \left[ \frac{1}{m} \sum_{j=1}^{m} \left( \frac{1}{m-1} \sum_{i\in I, i \neq j} X_i \right) \Big| W \right] = \mathbb{E} \left[ \frac{1}{m} \sum_{j=1}^{m} X_j \Big| W \right] = W,

where the first equality averages over the independent index {J}, and the second holds because each {X_j} appears in exactly {m-1} of the inner sums.
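A quick numerical sanity check (my own) of the averaging identity in this display:

```python
import numpy as np

rng = np.random.default_rng(4)

# For any fixed realization x_1, ..., x_m, the average of the m leave-one-out
# means equals the overall sample mean, because each x_i appears in exactly
# m - 1 of the inner sums.
m = 7
x = rng.normal(size=m)
loo_means = np.array([np.delete(x, j).mean() for j in range(m)])
print(np.isclose(loo_means.mean(), x.mean()))  # True
```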

Similarly, to prove the second statement one constructs {\hat{Z}} by applying a uniformly random permutation to the functions {f_1, \ldots, f_n}.