*Guest post by Mark Sellke.*

In the comments of the previous blog post we asked whether the new viewpoint on best of both worlds can be used to get clean "interpolation" results. The context is as follows: in a STOC 2018 paper by Lykouris, Mirrokni and Paes Leme, followed by a COLT 2019 paper by Gupta, Koren and Talwar, the following corruption model was discussed: stochastic bandits, except for at most $C$ rounds which are adversarial. The state of the art bounds were of the form: optimal (or almost optimal) stochastic term plus $K C$, and it was mentioned as an open problem whether $K C$ could be improved to $C$ (there is a lower bound showing that a linear dependence on $C$ is necessary). As was discussed in the comment section, it seemed that this clean best of both worlds approach should shed light on the corruption model. It turns out that this is indeed the case, and a one-line calculation resolves positively the open problem from the COLT paper. The formal result is as follows (recall the notation from the previous blog post: $K$ is the number of arms, $p_{t,i}$ is the probability that the strategy plays arm $i$ at round $t$, $i^*$ is the optimal arm, and $\Delta_i$ is the suboptimality gap of arm $i$):
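To make the setup concrete, here is a minimal simulation sketch of the $C$-corruption model. All names and the specific adversary below are illustrative choices, not from the papers; the point is only that most rounds draw losses from fixed arm distributions, while a budget of corrupt rounds is fully adversarial.

```python
import random

def run_corrupted_bandit(means, corrupt_rounds, policy, T, seed=0):
    """Play T rounds of K-armed bandits with Bernoulli(means[i]) losses,
    except on rounds in `corrupt_rounds`, where an adversary chooses the
    losses (here, an illustrative adversary that punishes arm 0)."""
    rng = random.Random(seed)
    total_loss = 0.0
    for t in range(1, T + 1):
        if t in corrupt_rounds:
            losses = [1.0] + [0.0] * (len(means) - 1)  # adversarial round
        else:
            losses = [float(rng.random() < m) for m in means]  # stochastic round
        total_loss += losses[policy(t)]
    return total_loss
```

Regret is then measured against the best fixed arm, and the open problem concerns how the achievable regret degrades as the number of corrupt rounds $C$ grows.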

**Lemma:** *Consider a strategy whose regret with respect to the optimal action $i^*$ is upper bounded by*

$$c \, \mathbb{E}\left[ \sum_{t=1}^{T} \sum_{i \neq i^*} \sqrt{\frac{p_{t,i}}{t}} \right] . \qquad (1)$$

Then in the $C$-corruption stochastic bandit model one has that the regret is bounded by:

$$4 c \sqrt{K C} + 2 C + 2 c^2 \sum_{i \neq i^*} \frac{\log T}{\Delta_i} .$$
Note that by the previous blog post we know strategies that satisfy (1) with $c = O(1)$ (see Lemma 2 in the previous post).
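As a quick numerical sanity check of what the Lemma buys over the earlier bounds, one can compare the shape of the two guarantees (the function names and the choice $c = 1$ are my own shorthand with constants from the Lemma as reconstructed here, not figures from either paper):

```python
import math

def lemma_bound(K, C, T, gaps, c=1.0):
    # Bound from the Lemma: 4c*sqrt(KC) + 2C + 2c^2 * sum_i log(T)/Delta_i
    return 4 * c * math.sqrt(K * C) + 2 * C + 2 * c**2 * sum(math.log(T) / d for d in gaps)

def prior_bound(K, C, T, gaps):
    # Earlier state-of-the-art shape: stochastic term plus K*C (constants ignored)
    return K * C + sum(math.log(T) / d for d in gaps)

K, C, T = 10, 10_000, 10**5
gaps = [0.1] * (K - 1)
print(lemma_bound(K, C, T, gaps))   # ~2.3e4
print(prior_bound(K, C, T, gaps))   # ~1.0e5: the K*C term dominates
```

For large corruption budgets the $K C$ term of the older bounds dwarfs the $\sqrt{K C} + C$ dependence obtained here.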

*Proof:* In equation (1) let us apply Jensen over the corrupt rounds: since $\sum_{i \neq i^*} \sqrt{p_{t,i}} \leq \sqrt{K}$ and there are at most $C$ corrupt rounds, this yields a term bounded by $c \sum_{t=1}^{C} \sqrt{K/t} \leq 2 c \sqrt{K C}$. For the non-corrupt rounds, let us use that (by AM-GM)

$$c \sqrt{\frac{p_{t,i}}{t}} \leq \frac{\Delta_i \, p_{t,i}}{2} + \frac{c^2}{2 t \Delta_i} .$$

The sum (over non-corrupt rounds and over $i \neq i^*$) of the second term on the right hand side is upper bounded by $c^2 \sum_{i \neq i^*} \frac{\log T}{\Delta_i}$. On the other hand the sum (over non-corrupt rounds) of the first term is equal to half of the regret over the non-corrupt rounds, which is certainly smaller than half of the total regret plus $C$. Thus we obtain (denoting $R_T$ for the total regret):

$$R_T \leq 2 c \sqrt{K C} + c^2 \sum_{i \neq i^*} \frac{\log T}{\Delta_i} + \frac{R_T}{2} + C ,$$

and rearranging concludes the proof.
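The proof rests on three elementary facts: Jensen over the arms ($\sum_{i \neq i^*} \sqrt{p_{t,i}} \leq \sqrt{K}$), the integral bound $\sum_{t \leq C} t^{-1/2} \leq 2\sqrt{C}$, and the AM-GM step. A small script can verify all three numerically on random inputs:

```python
import math
import random

rng = random.Random(1)

# AM-GM step: c*sqrt(p/t) <= Delta*p/2 + c^2/(2*t*Delta) for all valid inputs.
for _ in range(1000):
    p = rng.random()
    t = rng.randint(1, 10**6)
    delta = rng.uniform(0.01, 1.0)
    c = rng.uniform(0.1, 10.0)
    assert c * math.sqrt(p / t) <= delta * p / 2 + c**2 / (2 * t * delta) + 1e-12

# Corrupt-round term: sum_{t<=C} 1/sqrt(t) <= 2*sqrt(C).
for C in [1, 10, 1000]:
    assert sum(1 / math.sqrt(t) for t in range(1, C + 1)) <= 2 * math.sqrt(C)

# Jensen over the arms: sum_i sqrt(p_i) <= sqrt(K) for a probability vector.
K = 7
w = [rng.random() for _ in range(K)]
total = sum(w)
p = [x / total for x in w]
assert sum(math.sqrt(x) for x in p) <= math.sqrt(K)

print("all inequalities verified")
```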