Do you have a source for Nemirovski’s results? I was not able to find either the late 70’s result, or the 1981 improvement you mention, on his webpages.

It seems that Nemirovski’s acceleration does not need to know in advance how smooth the function f is, whereas Nesterov’s acceleration requires the smoothness parameter as a hyperparameter. So the question is: is it possible to get an accelerated gradient descent algorithm, without line search, when the smoothness parameter is not known?
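To make the question concrete, here is a minimal sketch of (one standard variant of) Nesterov’s accelerated gradient descent. Note how the smoothness constant L enters directly through the step size 1/L — this is exactly the hyperparameter the comment above asks about. The function names and the quadratic test problem are illustrative choices, not from the post.

```python
import numpy as np

def nesterov_agd(grad, x0, L, n_steps=100):
    """One standard variant of Nesterov's accelerated gradient descent.

    grad: gradient oracle for a convex, L-smooth function f.
    L:    smoothness constant -- needed up front to set the step size 1/L,
          which is the point of the question above.
    """
    x = np.asarray(x0, dtype=float)
    y = x.copy()
    lam = 1.0  # momentum sequence lambda_1 = 1
    for _ in range(n_steps):
        lam_next = (1 + np.sqrt(1 + 4 * lam**2)) / 2
        gamma = (1 - lam) / lam_next            # gamma <= 0: extrapolation weight
        y_next = x - grad(x) / L                # gradient step with step size 1/L
        x = (1 - gamma) * y_next + gamma * y    # momentum / extrapolation step
        y = y_next
        lam = lam_next
    return y

# Illustrative example: f(x) = x_0^2 + x_1^2 / 4, so grad f(x) = (2 x_0, x_1 / 2)
# and the smoothness constant is L = 2 (the largest Hessian eigenvalue).
grad = lambda x: np.array([2 * x[0], 0.5 * x[1]])
x_out = nesterov_agd(grad, x0=np.ones(2), L=2.0, n_steps=100)
```

If L is unknown, the usual workaround is a backtracking line search on the step size, which is precisely what the question hopes to avoid.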


When we write the expression for x_{t+1}, why can we ignore \phi(y_{t+1}) and the y_{t+1} term in the dot product?

Thanks!


Another open problem that I like a lot is whether you can obtain a best-of-both-worlds result for linear bandits. The catch here is that I don’t know what a PolyINF-type regularizer would look like (indeed, the most natural regularizer for the linear case is the entropic barrier).

This is really cool! Do you think there is any hope for a clean algorithm that interpolates between all amounts of adversarial perturbation?

Also, I think that in the proof of Lemma 1, the left-hand side of the displayed equation is wrong.

In the equation x_{t+1} = _{x \in P_t} f(x), where $P_t$ is the span, is there a missing \min?