Category Archives: Theoretical Computer Science

k-server, part 3: entropy regularization for weighted k-paging

If you have been following the first two posts (post 1, post 2), now is the time to reap the rewards! I will show here how to obtain an O(log k)-competitive algorithm for (weighted) paging, i.e., when the metric space corresponds to … Continue reading

Posted in Theoretical Computer Science | 10 Comments

k-server, part 2: continuous time mirror descent

We continue our k-server series (see post 1 here). In this post we briefly discuss the concept of a fractional solution for k-server, which by analogy with MTS will in fact be a fractional “anti-solution”. Then we introduce the continuous … Continue reading

Posted in Theoretical Computer Science | 1 Comment

k-server, part 1: online learning and online algorithms

The k-server problem is a classical and very attractive instance of online decision making. The decisions to be made in this problem are simple: given a requested location in some finite metric space and a fleet of k servers currently sitting … Continue reading
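To make the setting concrete, here is a minimal Python sketch of the online k-server game paired with the naive greedy policy that always moves the closest server. The function name and the toy line metric are my own illustration, not the algorithm developed in this series (greedy is in fact known not to be competitive in general).

```python
import random

def greedy_k_server(distance, servers, requests):
    """Serve each request with the closest server (naive greedy baseline).

    distance : function (a, b) -> nonnegative distance in the metric space
    servers  : initial positions of the k servers (list of points)
    requests : sequence of requested locations, revealed online
    Returns the total movement cost incurred.
    """
    positions = list(servers)
    total_cost = 0.0
    for r in requests:
        # Pick the server closest to the requested location...
        i = min(range(len(positions)), key=lambda j: distance(positions[j], r))
        # ...and move it there, paying the distance travelled.
        total_cost += distance(positions[i], r)
        positions[i] = r
    return total_cost

# Toy usage on the line metric with k = 2 servers.
if __name__ == "__main__":
    dist = lambda a, b: abs(a - b)
    reqs = [random.choice([0, 1, 10]) for _ in range(20)]
    print(greedy_k_server(dist, servers=[0, 10], requests=reqs))
```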

Posted in Theoretical Computer Science | 4 Comments

Discrepancy algorithm inspired by gradient descent and multiplicative weights; after Levy, Ramadas and Rothvoss

A week or so ago at our Theory Lunch we had the pleasure of listening to Harishchandra Ramadas (a student of Thomas Rothvoss), who told us about their latest discrepancy algorithm. I think the algorithm is quite interesting as it combines … Continue reading
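As a reminder of one of the two ingredients named in the title, below is a textbook multiplicative-weights update in Python. This is only the generic update rule, not the Levy-Ramadas-Rothvoss discrepancy algorithm itself; the function name and the toy usage are illustrative.

```python
import math

def multiplicative_weights(losses, eta=0.1):
    """Textbook multiplicative-weights update over n experts.

    losses : iterable of loss vectors, one entry in [0, 1] per expert per round
    eta    : learning rate
    Yields the probability distribution played at each round.
    """
    weights = None
    for loss in losses:
        if weights is None:
            weights = [1.0] * len(loss)
        total = sum(weights)
        yield [w / total for w in weights]
        # Downweight each expert exponentially in its incurred loss.
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, loss)]

# Toy usage: two experts, the second one is consistently better.
if __name__ == "__main__":
    rounds = [[1.0, 0.0]] * 10
    for p in multiplicative_weights(rounds):
        pass
    print(p)  # probability mass has shifted toward the second expert
```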

Posted in Theoretical Computer Science | 1 Comment

STOC 2017 accepted papers

The list of accepted papers for STOC 2017 has just been released. Following the trend in recent years there are quite a few learning theory papers! I have already blogged about the kernel-based convex bandit algorithm; as well as the … Continue reading

Posted in Conference/workshop, Theoretical Computer Science | 2 Comments

Kernel-based methods for convex bandits, part 3

(This post absolutely requires having read Part 1 and Part 2.) A key assumption at the end of Part 1 was that, after rescaling space so that the current exponential weights distribution is isotropic, one has (1) for … Continue reading

Posted in Optimization, Theoretical Computer Science | Leave a comment

Kernel-based methods for convex bandits, part 2

The goal of this second lecture is to explain how to do the variance calculation we talked about at the end of Part 1 for the case where the exponential weights distribution is non-Gaussian. We will lose here a factor … Continue reading

Posted in Optimization, Theoretical Computer Science | Leave a comment

Kernel-based methods for bandit convex optimization, part 1

A month ago Ronen Eldan, Yin Tat Lee, and I posted our latest work on bandit convex optimization. I’m quite happy with the result (first polynomial time method with poly(dimension)-regret) but I’m even more excited by the techniques we developed. Next … Continue reading

Posted in Optimization, Theoretical Computer Science | Leave a comment

Guest post by Sasho Nikolov: Beating Monte Carlo

If you work long enough in any mathematical science, at some point you will need to estimate an integral that does not have a simple closed form. Maybe your function is really complicated. Maybe it’s really high dimensional. Often you … Continue reading
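As a point of reference for what the post improves on, here is the plain Monte Carlo estimator in Python: sample uniformly, average, and accept an error decaying like 1/sqrt(n) whatever the dimension. The function name and the toy integrand are illustrative assumptions, not taken from the post.

```python
import math
import random

def monte_carlo_integral(f, dim, n_samples, rng=random.random):
    """Naive Monte Carlo estimate of the integral of f over the unit cube [0,1]^dim.

    Draws n_samples uniform points and averages f(x); for finite-variance f the
    error decays like 1/sqrt(n_samples), independently of the dimension.
    """
    total = 0.0
    for _ in range(n_samples):
        x = [rng() for _ in range(dim)]
        total += f(x)
    return total / n_samples

# Toy usage: integrate f(x) = exp(-||x||^2) over [0,1]^5.
if __name__ == "__main__":
    f = lambda x: math.exp(-sum(xi * xi for xi in x))
    print(monte_carlo_integral(f, dim=5, n_samples=100_000))
```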

Posted in Theoretical Computer Science | 7 Comments

Komlos conjecture, Gaussian correlation conjecture, and a bit of machine learning

Today I would like to talk (somewhat indirectly) about a beautiful COLT 2014 paper by Nick Harvey and Samira Samadi. The problem studied in this paper goes as follows: imagine that you have a bunch of data points in some Euclidean space with a certain … Continue reading

Posted in Optimization, Probability theory, Theoretical Computer Science | 5 Comments