Thank you for sharing.

The bandit monograph contains most of what you need to know to start doing research on bandit-related problems. The other two sets of lecture notes are useful for placing bandit problems within the broader context of optimization and online learning. Obviously there are many other useful references besides these, but it’s probably a good idea to first try to master the bandit monograph.

I’m a PhD student in statistics and a beginner in bandit-related problems.

I was wondering: in order to get involved in this fruitful field, besides your basic bandit monograph, should I also go through your two published sets of optimization lecture notes (the introduction to online optimization and the theory of convex optimization)? Any suggestions for beginners?

Thanks a lot!

Best,

Eric

Excellent work! I’ve been reading this for the past few days and have found it to be quite a helpful resource. One comment:

On page 43, in your discussion of Nesterov’s Accelerated Gradient Descent, you say that, intuitively, \Phi_s becomes a finer and finer approximation “from below” to f, in the sense of inequality (3.17).

I found the wording here a bit confusing (“from below”, in particular), since for some values of x we have \Phi_s(x) > f(x). For those x where \Phi_s(x) > f(x), inequality (3.17) is actually bounding how good an approximation \Phi_s is to f “from above”.
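For readers puzzled by the same point, the standard estimate-sequence property from Nesterov’s framework may help resolve it (note: the notation below is the generic one and may differ from the monograph’s inequality (3.17)):

```latex
% Standard estimate-sequence property (generic notation; this is an
% illustrative restatement, not a quote of inequality (3.17)):
%   \Phi_s(x) \le (1 - \lambda_s)\, f(x) + \lambda_s\, \Phi_0(x),
%   \qquad \lambda_s \to 0.
% Rearranging gives a bound on the excess of \Phi_s above f:
%   \Phi_s(x) - f(x) \le \lambda_s \bigl( \Phi_0(x) - f(x) \bigr) \to 0.
\[
  \Phi_s(x) \;\le\; (1 - \lambda_s)\, f(x) \;+\; \lambda_s\, \Phi_0(x),
  \qquad \lambda_s \to 0,
\]
\[
  \Phi_s(x) - f(x) \;\le\; \lambda_s \bigl( \Phi_0(x) - f(x) \bigr) \;\longrightarrow\; 0.
\]
```

Under this reading, \Phi_s may indeed lie above f at some points, but the amount by which it can exceed f is controlled and vanishes as s grows; “from below” is then meant in this asymptotic, controlled sense rather than pointwise.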

Erik
