This semester at Princeton I am going to teach a graduate course on ‘advanced optimization’. As you can see from the title, I’m allowing myself a lot of flexibility, and I will discuss various topics. The syllabus for the course reads as follows:
This course is a mathematical introduction to {convex, large-scale, stochastic}-optimization. Topics covered include the ellipsoid method, the analysis of diverse gradient-descent algorithms such as sub-gradient descent, Nesterov’s accelerated gradient descent, FISTA and mirror descent, as well as a discussion of complexity lower bounds à la Nemirovski. These methods will be compared to the conic programming approach, and applications to high-dimensional statistics and machine learning will also be discussed.
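To give a flavor of the simplest method on that list, here is a minimal one-dimensional sketch of the subgradient method. The objective, the 1/√t step size, and all names are my illustrative choices for this post, not the notation the course will use:

```python
import math

def subgradient_method(f, subgrad, x0, n_steps):
    """Run the basic subgradient method x_{t+1} = x_t - eta_t * g_t
    with step size eta_t = 1/sqrt(t), tracking the best iterate seen
    (for non-smooth f the last iterate need not be the best one)."""
    x = x0
    best_x, best_val = x0, f(x0)
    for t in range(1, n_steps + 1):
        g = subgrad(x)                 # any element of the subdifferential at x
        x = x - g / math.sqrt(t)       # classic diminishing step size
        if f(x) < best_val:
            best_x, best_val = x, f(x)
    return best_x, best_val

# Toy example: minimize the non-smooth convex function f(x) = |x - 3|,
# whose subgradient is sign(x - 3) away from the kink.
f = lambda x: abs(x - 3.0)
subgrad = lambda x: 1.0 if x > 3.0 else (-1.0 if x < 3.0 else 0.0)

x_best, f_best = subgradient_method(f, subgrad, 0.0, 1000)
```

After 1000 steps the best iterate sits close to the minimizer x = 3, which matches the O(1/√t) convergence rate that the lectures will establish for this method.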
I plan to post my lecture notes on this blog after each lecture (for those of you in Princeton: we meet twice a week, Tuesday and Thursday, 11:00am-12:20pm, starting on February 5th). For the next three months this blog is going to be almost exclusively about optimization, so be ready! Please feel free to post comments during the semester, regarding the organization of the material, the topics covered, etc. Also note that (almost) everything I will say is contained in the following two wonderful books: