The fact that the guarantee works more or less regardless of the function is sort of the point here: in many applications you don’t know much about the function because it is a complicated quantity that we don’t fully understand. The picture I have in mind is that at one extreme you have the Monte Carlo method, in which you assume the least about the function (only that it is square-integrable). At the other extreme you have simple functions with closed-form expressions for the integral. The Koksma inequality I wrote about in the post is an interesting point on this tradeoff: you still make only a fairly mild smoothness assumption on f, but you can dramatically improve the error guarantee.
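This tradeoff is easy to see numerically. The sketch below is my own illustration, not from the post: it compares plain Monte Carlo against the base-2 van der Corput sequence (a classic low-discrepancy sequence) for integrating a smooth test function on [0, 1].

```python
import random

def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput sequence on [0, 1),
    obtained by reversing the base-b digits of i about the radix point."""
    points = []
    for i in range(n):
        x, denom, k = 0.0, 1.0, i
        while k > 0:
            denom *= base
            x += (k % base) / denom
            k //= base
        points.append(x)
    return points

def integrate(f, points):
    """Equal-weight quadrature: the average of f over the sample points."""
    return sum(f(p) for p in points) / len(points)

f = lambda x: x * x  # smooth test function; exact integral over [0,1] is 1/3
n = 4096

random.seed(0)
mc_pts = [random.random() for _ in range(n)]
qmc_pts = van_der_corput(n)

mc_err = abs(integrate(f, mc_pts) - 1 / 3)    # typically O(1/sqrt(n))
qmc_err = abs(integrate(f, qmc_pts) - 1 / 3)  # typically O(log(n)/n)
print(f"Monte Carlo error:    {mc_err:.2e}")
print(f"van der Corput error: {qmc_err:.2e}")
```

Both estimators use the same number of function evaluations; the only thing that changes is where the points are placed, which is exactly the "the guarantee depends on the sequence, not the function" phenomenon discussed above.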

But other tradeoffs are possible. You can define discrepancy with respect to different classes of test functions and then derive an appropriate Koksma-Hlawka-type inequality by a standard (although not necessarily trivial to apply) method using reproducing kernel Hilbert spaces. Identifying natural classes of functions, the corresponding discrepancy measures, and low-discrepancy constructions for them is a challenging but important question. And then there is the field of Information-Based Complexity (in which I am decidedly not an expert), which deals with exactly this problem of computing with continuous functions from discrete samples.
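For the one-dimensional star discrepancy appearing in the Koksma inequality, there is a closed-form expression for finite point sets (Niederreiter's formula), so the quantity in the bound is cheap to compute exactly. A minimal sketch (the function name is mine):

```python
def star_discrepancy_1d(points):
    """Exact star discrepancy of a finite point set in [0, 1), via
    Niederreiter's formula: D* = 1/(2n) + max_i |x_(i) - (2i-1)/(2n)|,
    where x_(1) <= ... <= x_(n) are the sorted points."""
    xs = sorted(points)
    n = len(xs)
    return 1.0 / (2 * n) + max(
        abs(x - (2 * i + 1) / (2 * n)) for i, x in enumerate(xs)
    )

# The midpoint grid {(2i+1)/(2n)} attains the optimal value 1/(2n):
n = 100
midpoint_grid = [(2 * i + 1) / (2 * n) for i in range(n)]
print(star_discrepancy_1d(midpoint_grid))  # → 0.005, i.e. 1/(2n)
```

Combined with the Koksma inequality, this gives a fully computable a priori error bound of V(f) times D* for any f of bounded variation V(f).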

Are there other discrepancy measures that take into account the function f? The reason I am asking is that, while the Koksma inequality is interesting, it seems that the way the sequence enters the picture has nothing to do with the function at all. Is that correct?

At the moment I am mainly looking for students who already have a couple of preprints.

Best,

Sebastien

I am a first-year student in Mathematics. Reading your blog, I found myself really interested in your work, and I saw that you were looking for an intern. Is there a chance your internship offer is open to a first-year student?

Best Regards,

Paolo

This reminds me of methods for improving the convergence efficiency of differential equation solvers, e.g. Runge-Kutta, although I encountered them many years ago.

See also this neat recent advance in discrepancy via SAT solvers; I haven't seen much commentary on it in the blogosphere, so it seems under-noticed.