If you work long enough in any mathematical science, at some point you will need to estimate an integral that does not have a simple closed form. Maybe your function is really complicated. Maybe it’s really high dimensional. Often you cannot even write it down: it could be a quantity associated with a complex system that you can only “query” at certain points by running an experiment. But you still need your integral, and then you turn to the trustworthy old Monte Carlo method. (Check this article by Nicholas Metropolis for the history of the method and what it has to do with Stanislaw Ulam’s uncle’s gambling habit.) My goal in this post is to tell you a little bit about how you can do better than Monte Carlo using discrepancy theory.
The Problem and the Monte Carlo Method
Let us fix some notation and look at the simplest possible setting. We have a function $f$ that maps the real interval $[0, 1]$ to the reals, and we want to know
$$\int_0^1 f(x) \, dx.$$
We will estimate this integral with the average $\frac{1}{n}\sum_{i=1}^{n} f(y_i)$, where $y_1, y_2, \ldots$ is a sequence of numbers in $[0, 1]$. The error of this estimate is
$$\mathrm{err}(f, n) = \left|\frac{1}{n}\sum_{i=1}^{n} f(y_i) - \int_0^1 f(x) \, dx\right|.$$
And here is the main problem I will talk about in this post:
How do we choose a sequence $y_1, y_2, \ldots$ of points in $[0, 1]$ so that the average $\frac{1}{n}\sum_{i=1}^{n} f(y_i)$ approximates the integral $\int_0^1 f(x) \, dx$ as closely as possible?
Intuitively, for larger $n$ the approximation will be better, but what is the best rate we can achieve? Notice that we want a single sequence, so that if we want a more accurate approximation, we just take a few more terms and re-normalize, rather than start everything from scratch. The insight of the Monte Carlo method (surely familiar to every reader of this blog) is to take each $y_i$ to be a fresh random sample from $[0, 1]$. Then for any $n$, the expectation of $\frac{1}{n}\sum_{i=1}^{n} f(y_i)$ is exactly $\int f$ (from now on I will skip the limits of my integrals: they will all be from $0$ to $1$). The standard deviation is of order $\frac{1}{\sqrt{n}}\left(\int f^2\right)^{1/2}$, and standard concentration inequalities tell us that, with high probability, $\mathrm{err}(f, n)$ will not be much larger than the latter quantity.
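To make the setup concrete, here is a minimal Monte Carlo sketch in Python; the test function $e^x$ and the sample sizes are arbitrary choices of mine, just for illustration.

```python
import math
import random

def monte_carlo_estimate(f, n, rng=random.Random(0)):
    """Estimate the integral of f over [0, 1] by averaging f at n uniform random samples."""
    return sum(f(rng.random()) for _ in range(n)) / n

# Example: integrate f(x) = exp(x), whose integral over [0, 1] is e - 1.
f = math.exp
exact = math.e - 1
for n in [100, 10_000, 1_000_000]:
    est = monte_carlo_estimate(f, n)
    # The error typically shrinks like n^{-1/2}: 100x more samples buys ~10x accuracy.
    print(n, abs(est - exact))
```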
Quasi-Monte Carlo and Discrepancy
For a fixed function $f$ of bounded norm, the Monte Carlo method achieves $\mathrm{err}(f, n)$ roughly on the order of $n^{-1/2}$. In other words, if we want to approximate our integral to within $\epsilon$, we need to take about $\epsilon^{-2}$ random samples. It’s clear that in general, even for smooth functions, we cannot hope to average over fewer than $\epsilon^{-1}$ points, but is $\epsilon^{-2}$ really the best we can do? It turns out that for reasonably nice $f$ we can do a lot better using discrepancy. We define the star discrepancy function of a sequence $y = y_1, y_2, \ldots$ as
$$D^*(n, y) = \sup_{t \in [0, 1]} \left| \frac{|\{i \le n : y_i \in [0, t)\}|}{n} - t \right|.$$
Notice that this is really just a special case of $\mathrm{err}(f, n)$ where $f$ is the indicator function of the interval $[0, t)$. A beautiful inequality due to Koksma shows that in a sense these are the only functions we need to care about:
$$\mathrm{err}(f, n) \le V(f) \, D^*(n, y).$$
Here $V(f)$ is the total variation of $f$, a measure of smoothness, and for continuously differentiable functions it is equal to $\int |f'(x)| \, dx$. The important part is that we have bounded the integration error by the product of a term that quantifies how nice $f$ is, and a term that quantifies how “random” the sequence $y$ is. With this, our task is reduced to finding a sequence $y$ with small star discrepancy, hopefully smaller than $n^{-1/2}$. This is the essence of the quasi-Monte Carlo method.
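For a concrete feel, here is a sketch that computes $D^*(n, y)$ exactly for the first $n$ terms of a sequence. It uses the standard closed form for points in $[0, 1)$ (textbook material, not something specific to this post): the supremum over $t$ is attained at the sample points.

```python
def star_discrepancy(points):
    """Exact star discrepancy of a finite point set in [0, 1).

    Uses the classical closed form: with the points sorted as x_(1) <= ... <= x_(n),
    D*_n = max_i max( i/n - x_(i), x_(i) - (i-1)/n ).
    """
    xs = sorted(points)
    n = len(xs)
    return max(
        max((i + 1) / n - x, x - i / n)
        for i, x in enumerate(xs)
    )

# Sanity check: n equally spaced points (i + 0.5)/n have star discrepancy 1/(2n).
print(star_discrepancy([(i + 0.5) / 8 for i in range(8)]))  # 0.0625
```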
The van der Corput Sequence
A simple sequence with low star discrepancy was discovered by van der Corput at the beginning of the previous century. Let us write the integer $i$ in binary as $i = (b_k b_{k-1} \cdots b_1 b_0)_2$, i.e. $i = \sum_{j=0}^{k} b_j 2^j$. Then we define $y_i$ to be the number we get by flipping the binary digits of $i$ around the radix point: $y_i = \sum_{j=0}^{k} b_j 2^{-j-1}$, or, in binary, $y_i = (0.b_0 b_1 \cdots b_k)_2$. The van der Corput sequence is $y_0, y_1, y_2, \ldots$.
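A direct implementation of this digit-reversal map, as a sketch (indexing from $0$, matching the convention above):

```python
def van_der_corput(i, base=2):
    """i-th term of the van der Corput sequence: reflect the digits of i about the radix point."""
    y, denom = 0.0, 1.0
    while i > 0:
        i, digit = divmod(i, base)
        denom *= base
        y += digit / denom
    return y

# First eight terms: 0, 1/2, 1/4, 3/4, 1/8, 5/8, 3/8, 7/8.
print([van_der_corput(i) for i in range(8)])
```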
I have plotted the first eight terms of the van der Corput sequence on the left in the above picture: the index $i$ is on the $x$-axis and the value $y_i$ on the $y$-axis. The terms alternate between $[0, 1/2)$ and $[1/2, 1)$; they also visit each of $[0, 1/4)$, $[1/4, 1/2)$, $[1/2, 3/4)$, $[3/4, 1)$ exactly once before they return, and so on. For example, each shaded rectangle on the right in the above picture contains exactly one point (the rectangles are open on the top). The key property of the van der Corput sequence then is that $y_i \in [j 2^{-k}, (j+1) 2^{-k})$ if and only if $i \equiv r_j \pmod{2^k}$, where $r_j$ is the $k$-bit reversal of $j$. So for any such dyadic interval, the discrepancy is at most $\frac{1}{n}$: the number of integers $i$ less than $n$ such that $i \equiv r_j \pmod{2^k}$ is either $\lfloor n 2^{-k} \rfloor$ or $\lceil n 2^{-k} \rceil$. We can greedily decompose an interval $[0, t)$ into at most $\log_2 n$ dyadic intervals plus a leftover interval of length at most $\frac{1}{n}$; therefore the star discrepancy of the van der Corput sequence is $O\!\left(\frac{\log n}{n}\right)$. Remember that, together with Koksma’s inequality, this means that we can estimate the integral of any function $f$ with bounded variation with error $O\!\left(\frac{\log n}{n}\right)$, which, for sufficiently smooth $f$, is almost quadratically better than the Monte Carlo method! And with a deterministic algorithm, to boot. This was not that hard, so maybe we can achieve the ultimate discrepancy bound $O\!\left(\frac{1}{n}\right)$? This is the question (asked by van der Corput) which essentially started discrepancy theory. The first proof that $O\!\left(\frac{1}{n}\right)$ is not achievable was given by van Aardenne-Ehrenfest. Klaus Roth simplified her bound and strengthened it to $\Omega\!\left(\frac{\sqrt{\log n}}{n}\right)$ using a brilliant proof based on Haar wavelets. Schmidt later proved that van der Corput’s $O\!\left(\frac{\log n}{n}\right)$ bound is asymptotically the best possible.
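To see the $O\!\left(\frac{\log n}{n}\right)$ versus $n^{-1/2}$ gap in action, here is a small comparison sketch. The test function $e^x$ is again an arbitrary smooth choice of mine, and `van_der_corput` repeats the generator sketched above.

```python
import math
import random

def van_der_corput(i):
    """i-th term of the base-2 van der Corput sequence (as sketched earlier)."""
    y, denom = 0.0, 1.0
    while i > 0:
        i, digit = divmod(i, 2)
        denom *= 2
        y += digit / denom
    return y

f = math.exp            # smooth, so it has bounded variation on [0, 1]
exact = math.e - 1      # its integral over [0, 1]
rng = random.Random(0)

for n in [1_000, 100_000]:
    mc = sum(f(rng.random()) for _ in range(n)) / n
    qmc = sum(f(van_der_corput(i)) for i in range(n)) / n
    # Koksma bounds the quasi-Monte Carlo error by roughly V(f) * (log n)/n,
    # almost quadratically smaller than the ~n^{-1/2} Monte Carlo error.
    print(n, abs(mc - exact), abs(qmc - exact))
```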
Higher Dimension, Other Measures, and Combinatorial Discrepancy
Quasi-Monte Carlo methods are used in real-world applications, for example in quantitative finance, because of the better convergence rates they offer. But there are many complications that arise in practice. One issue is that we usually need to estimate integrals of high-dimensional functions, i.e. functions of a large number of variables. The Koksma inequality generalizes to higher dimensions (the generalization is known as the Koksma-Hlawka inequality), but we need to redefine both discrepancy and total variation for that purpose. The star discrepancy of a sequence $y_1, y_2, \ldots$ of $d$-dimensional points measures the worst-case absolute difference between the $d$-dimensional volume (Lebesgue measure) of any anchored box $[0, t_1) \times \cdots \times [0, t_d)$ and the fraction of points among $y_1, \ldots, y_n$ that fall in the box. The generalization of total variation is the Hardy-Krause total variation. Naturally, the best achievable discrepancy increases with dimension, while the class of functions of bounded total variation becomes more restrictive. However, we do not even know what is the best achievable star discrepancy for $2$- or higher-dimensional sequences! We know that no $d$-dimensional sequence has discrepancy better than $\frac{(\log n)^{d/2 + \eta_d}}{n}$, where $\eta_d$ is some constant that goes to $0$ with $d$. The van der Corput construction generalizes to higher dimensions and gives sequences with discrepancy $O\!\left(\frac{(\log n)^{d}}{n}\right)$ (the implied constants here and in the lower bounds depend on $d$). The discrepancy theory community refers to closing this significant gap as “The Great Open Problem”.
Everything so far was about integration with respect to the Lebesgue measure, but in practice we are often interested in a different measure space. We could absorb the measure into the function to be integrated, but this can affect the total variation badly. We could do a change of variables, but, unless we have a nice product measure, this will result in a notion of discrepancy in which the test sets are not boxes anymore. Maybe the most natural solution is to redefine star discrepancy with respect to the measure we care about. But how do we find a low-discrepancy sequence with the new definition? It turns out that combinatorial discrepancy is very helpful here. A classical problem in combinatorial discrepancy, Tusnady’s problem, asks for the smallest function $t(n)$ such that any set of $n$ points in $[0, 1]^d$ can be colored with red and blue so that in any axis-aligned box $B$ the absolute difference between the number of red and the number of blue points is at most $t(n)$ (see this post for a generalization of this problem). A general transference theorem in discrepancy theory shows that for any probability measure $\mu$ on $[0, 1]^d$ there exists a sequence with star discrepancy (defined with respect to $\mu$) at most $O\!\left(\frac{t(n) \log n}{n}\right)$. The best known bounds on $t(n)$ are polylogarithmic in $n$, only slightly worse than the best star discrepancy known for the Lebesgue measure. This transference result has long been seen as purely existential, because most non-trivial results in combinatorial discrepancy were not constructive, but recently we have seen amazing progress in algorithms for minimizing combinatorial discrepancy. While even with these advances we don’t get sequences that are nearly as explicit as the van der Corput sequence, there certainly is hope we will get there.
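Here is a rough sketch of the shape of the transference argument: start with a large sample from the target measure and repeatedly discard one color class of a balanced low-discrepancy coloring, halving the set while approximately preserving the box counts. Everything below is illustrative: the names `transference`, `halve`, and `random_coloring` are mine, and the random coloring is only a placeholder for a real combinatorial-discrepancy algorithm (a random coloring does not, of course, achieve Tusnady-type bounds).

```python
import random

def random_coloring(points, rng):
    """Placeholder +/-1 coloring; a real instantiation would use a low-discrepancy coloring algorithm."""
    return [rng.choice((-1, 1)) for _ in points]

def halve(points, color, rng):
    """Keep the color class whose size is closer to half the points; discard the other."""
    signs = color(points, rng)
    plus = [p for p, s in zip(points, signs) if s == 1]
    minus = [p for p, s in zip(points, signs) if s == -1]
    half = len(points) / 2
    return plus if abs(len(plus) - half) <= abs(len(minus) - half) else minus

def transference(sample, target_size, color=random_coloring, seed=0):
    """Shrink a large sample of the target measure to roughly target_size points by repeated halving.
    Each halving changes the fraction of points in any box by roughly (coloring discrepancy)/|points|,
    which is where a t(n)/n-type bound on the resulting star discrepancy would come from."""
    rng = random.Random(seed)
    points = list(sample)
    while len(points) > target_size:
        points = halve(points, color, rng)
    return points

# Example: approximate a (hypothetical) non-uniform product measure on [0,1]^2 with ~128 points.
rng = random.Random(1)
big_sample = [(rng.betavariate(2, 5), rng.betavariate(5, 2)) for _ in range(100_000)]
small = transference(big_sample, 128)
print(len(small))
```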
Conclusion
I have barely scratched the surface of quasi-Monte Carlo methods and geometric discrepancy. Koksma-Hlawka type inequalities, discrepancy with respect to various test sets and measures, and combinatorial discrepancy are each a big topic in their own right. The sheer breadth of mathematical tools that bear on discrepancy questions is impressive: Diophantine approximation to construct low-discrepancy sequences, reproducing kernels in Hilbert spaces to prove Koksma-Hlawka inequalities, harmonic analysis to prove discrepancy lower bounds, convex geometry for upper and lower bounds in combinatorial discrepancy. Luckily, there are some really nice references available. Matousek has a very accessible book on geometric discrepancy. Chazelle focuses on computer science applications. A new collection of surveys edited by Chen, Srivastav, and Travaglini has many of the latest developments.
By Jiantao Jiao February 20, 2015 - 2:12 am
Dear Sasho, many thanks for this wonderful post! I have a basic question regarding the Monte Carlo method and the quadrature/cubature methods in numerical integration. When we use a quadrature rule to do numerical integration, we not only design the points y_i, but also design the weights w_i, and sometimes we want to select the minimum number of nodes such that the integration is exact for polynomials of the highest possible order. But it seems that in quasi-Monte Carlo all the w_i have been set to be equal. Probably in practice sometimes a quadrature rule is better, and sometimes Monte Carlo or quasi-Monte Carlo is better? In general how can we compare these two paradigms? Look forward to hearing from you! 🙂
By Sasho Nikolov April 6, 2015 - 3:18 pm
Dear Jiantao,
I think you mean things like Gaussian quadrature rules? As far as I know, the issue with them is that (1) they work best if the function being integrated has a lot of structure, for example is close to a polynomial; (2) there are no sufficiently accurate variants for higher dimensions. However, if you have a one dimensional integral and you think your function is close to a polynomial, a Gaussian quadrature rule might be the right thing to use.
Sasho
By Ravi Ganti January 12, 2015 - 5:15 pm
I think there should be a square root over the integral in the standard deviation formula. No?
While the Koksma inequality is interesting, are there other discrepancy measures that take into account the function f? The reason why I am asking this question is that it seems like the only way the sequence enters the picture has got nothing to do with the function at all. Is that correct?
By Sasho January 17, 2015 - 11:36 pm
Thanks, Ravi, you are absolutely right about the missing square root.
The fact that the guarantee works more or less regardless of the function is sort of the point here: in many applications you don’t know much about the function because it’s a complicated quantity that we don’t fully understand. The picture I have in mind is that in one extreme you have the Monte Carlo method in which you assume the least about the function (that it is square-integrable). In the other extreme you have simple functions with closed form expression for the integral. The Koksma inequality I wrote about in the post is an interesting point on this tradeoff in which you still make a fairly mild smoothness assumption on f but can dramatically improve the error guarantee.
But there are other tradeoffs possible. You can define discrepancy with respect to different test functions and then you can derive an appropriate Koksma-Hlawka type inequality using a standard (although not necessarily trivial to apply) method using reproducing kernel spaces. Identifying natural classes of functions and corresponding discrepancy measures and coming up with low discrepancy constructions is a challenging but important question. And then there is the field of Information Based Complexity (that I am decidedly not an expert in) which deals exactly with this problem of computing with continuous functions from discrete samples.
By vzn December 28, 2014 - 12:46 pm
its great to hear a very tangible motivating example for discrepancy theory which sometimes seems quite abstract.
this reminds me of methods of improving convergence efficiency of differential equation solvers eg runge kutta although encountered it many yrs ago.
see also this neat recent advance in discrepancy via SAT solvers, havent seen a lot of commentary in the blogosphere on it, undernoticed it seems.
By Suresh December 22, 2014 - 7:09 pm
Nice post, Sasho. I like the idea of coupling two separate terms (one for the function and one for the sequence).
By Sasho December 26, 2014 - 2:56 pm
Thanks Suresh! It’s the magic of Hölder :). Maybe I should mention that the total variation can be replaced by the two-norm of the first derivative, and the worst-case discrepancy I defined by average discrepancy. In general you can use an $L_p$ version of total variation and an $L_q$ version of discrepancy, for conjugate pairs $p$ and $q$.