COLT 2015 accepted papers and some cool videos

Like last year I compiled a list of the COLT 2015 accepted papers, together with links to the arxiv version whenever I could find one. These papers were selected from 180 submissions, a number that keeps rising in recent years (of course this is true for all the major learning conferences; for instance ICML had over 1000 submissions this year). This strong program, together with a pretty good location (Paris, 5th), should make COLT 2015 quite attractive! Also, following the trend of COLT 2013 and COLT 2014, we will have some “pre-COLT” activity, with an optimization workshop co-organized by Vianney Perchet (also the main organizer for COLT itself) and Patrick Louis Combettes.

On a completely different topic, I wanted to share some videos which many readers of this blog will enjoy. These are the videos of the 2015 Breakthrough Prize in Mathematics Symposium, with speakers (and prize winners) Jacob Lurie, Terence Tao, Maxim Kontsevich, Richard Taylor, and Simon Donaldson. They were asked to give talks to a general audience, and they succeeded at different levels. Both Taylor and Lurie took this request very seriously, perhaps even a bit too much, and their talks (here and here) are very elementary (yet still entertaining!). Tao talks about the Polymath projects, and the video can be safely skipped unless you have never heard of Polymath. I understood nothing of Kontsevich’s talk (it’s pretty funny to think that his talk was prepared with the guideline of aiming at a general audience). My favorite talk by far was the one by Donaldson. Thanks to him I finally understand what the extra 7 unobserved dimensions of our universe could look like!

There is also a panel discussion led by Yuri Milner with the 5 mathematicians. Unfortunately the questions are a bit dull, so there is not much that the panelists can do to make this interesting. Yet there are a few gems in the answers, such as Tao claiming that *universality* (such as in the Central Limit Theorem) is behind the unreasonable effectiveness of mathematics in physics, and Kontsevich replying to Tao that this is a valid point at the macroscopic level, but that the fact that mathematics works so well at a microscopic level (e.g., quantum mechanics) makes him question whether we live in a simulation. Kontsevich also says that there is no fundamental obstacle to building an A.I., and he even claims to have given some thought to this problem, though I could not find any paper written by him on this matter.

COLT 2015 accepted papers

– Arpit Agarwal and Shivani Agarwal. On Consistent Surrogate Risk Minimization and Property Elicitation
– Noga Alon, Nicolò Cesa-Bianchi, Ofer Dekel and Tomer Koren. Online Learning with Feedback Graphs: Beyond Bandits
– Sanjeev Arora, Rong Ge, Tengyu Ma and Ankur Moitra. Simple, Efficient, and Neural Algorithms for Sparse Coding
– Pranjal Awasthi, Maria Florina Balcan, Nika Haghtalab and Ruth Urner. Efficient Learning of Linear Separators under Bounded Noise
– Pranjal Awasthi, Moses Charikar, Kevin Lai and Andrej Risteski. Label optimal regret bounds for online local learning
– Maria-Florina Balcan, Avrim Blum and Santosh Vempala. Efficient Representations for Lifelong Learning and Autoencoding
– Akshay Balsubramani and Yoav Freund. Optimally Combining Classifiers Using Unlabeled Data
– Peter Bartlett, Wouter Koolen, Alan Malek, Eiji Takimoto and Manfred Warmuth. Minimax Fixed-Design Linear Regression
– Alexandre Belloni, Tengyuan Liang, Hari Narayanan and Alexander Rakhlin. Escaping the Local Minima via Simulated Annealing: Optimization of Approximately Convex Functions
– Sébastien Bubeck, Ofer Dekel, Tomer Koren and Yuval Peres. Bandit Convex Optimization: sqrt{T} Regret in One Dimension
– Yang Cai, Constantinos Daskalakis and Christos Papadimitriou. Optimum Statistical Estimation with Strategic Data Sources
– Nicolò Cesa-Bianchi, Yishay Mansour and Ohad Shamir. On the Complexity of Learning with Kernels
– Yuxin Chen, Hamed Hassani, Amin Karbasi and Andreas Krause. Sequential Information Maximization: When is Greedy Near-optimal?
– Hubie Chen and Matt Valeriote. Learnability of Solutions to Conjunctive Queries: The Full Dichotomy
– Dehua Cheng, Yu Cheng, Yan Liu, Richard Peng and Shang-Hua Teng. Efficient Sampling for Gaussian Graphical Models via Spectral Sparsification
– Corinna Cortes, Vitaly Kuznetsov, Mehryar Mohri and Manfred Warmuth. On-Line Learning Algorithms for Path Experts with Non-Additive Losses
– Rachel Cummings, Stratis Ioannidis and Katrina Ligett. Truthful Linear Regression
– Gautam Dasarathy, Robert Nowak and Xiaojin Zhu. $S^2$: An Efficient Graph Based Active Learning Algorithm with Application to Nonparametric Classification
– Miroslav Dudík, Katja Hofmann, Robert E. Schapire, Aleksandrs Slivkins and Masrour Zoghi. Contextual Dueling Bandits
– Miroslav Dudík, Robert Schapire and Matus Telgarsky. Convex Risk Minimization and Conditional Probability Estimation
– Justin Eldridge, Mikhail Belkin and Yusu Wang. Beyond Hartigan Consistency: Merge Distortion Metric for Hierarchical Clustering
– Moein Falahatgar, Ashkan Jafarpour, Alon Orlitsky, Venkatadheeraj Pichapati and Ananda Theertha Suresh. Faster Algorithms for Testing under Conditional Sampling
– Uriel Feige, Yishay Mansour and Robert Schapire. Learning and inference in the presence of corrupted inputs
– Nicolas Flammarion and Francis Bach. From Averaging to Acceleration, There is Only a Step-size
– Dean Foster, Howard Karloff and Justin Thaler. Variable Selection is Hard
– Rafael Frongillo and Ian Kash. Vector-Valued Property Elicitation
– Roy Frostig, Rong Ge, Sham Kakade and Aaron Sidford. Competing with the Empirical Risk Minimizer in a Single Pass
– Pierre Gaillard and Sébastien Gerchinovitz. A Chaining Algorithm for Online Nonparametric Regression
– Nicolas Goix, Anne Sabourin and Stéphan Clémençon. Learning the dependence structure of rare events: a non-asymptotic study
– Zaid Harchaoui, Anatoli Juditsky, Arkadi Nemirovski and Dmitry Ostrovsky. Adaptive recovery of signals by convex optimization
– Sam Hopkins, Jonathan Shi and David Steurer. Tensor principal component analysis
– Prateek Jain and Praneeth Netrapalli. Fast Exact Matrix Completion with Finite Samples
– Majid Janzamin, Animashree Anandkumar and Rong Ge. Learning Overcomplete Latent Variable Models through Tensor Methods
– Parameswaran Kamalaruban, Robert Williamson and Xinhua Zhang. Exp-Concavity of Proper Composite Losses
– Sudeep Kamath, Alon Orlitsky, Venkatadheeraj Pichapati and Ananda Theertha Suresh. On Learning Distributions from their Samples
– Varun Kanade and Elchanan Mossel. MCMC Learning
– Zohar Karnin and Edo Liberty. Online PCA with Spectral Bounds
– Junpei Komiyama, Junya Honda, Hisashi Kashima and Hiroshi Nakagawa. Regret Lower Bound and Optimal Algorithm in Dueling Bandit Problem
– Samory Kpotufe, Ruth Urner and Shai Ben-David. Hierarchical label queries with data-dependent partitions
– Rasmus Kyng, Anup Rao, Sushant Sachdeva and Daniel A. Spielman. Algorithms for Lipschitz Learning on Graphs
– Jan Leike and Marcus Hutter. Bad Universal Priors and Notions of Optimality
– Tengyuan Liang, Alexander Rakhlin and Karthik Sridharan. Learning with Square Loss: Localization through Offset Rademacher Complexity
– Haipeng Luo and Robert Schapire. Achieving All with No Parameters: Adaptive NormalHedge
– Mehrdad Mahdavi, Lijun Zhang and Rong Jin. Lower and Upper Bounds on the Generalization of Stochastic Exponentially Concave Optimization
– Konstantin Makarychev, Yury Makarychev and Aravindan Vijayaraghavan. Correlation Clustering with Noisy Partial Information
– Issei Matsumoto, Kohei Hatano and Eiji Takimoto. Online Density Estimation of Bradley-Terry Models
– Behnam Neyshabur, Ryota Tomioka and Nathan Srebro. Norm-Based Capacity Control in Neural Networks
– Christos Papadimitriou and Santosh Vempala. Cortical Learning via Prediction
– Richard Peng, He Sun and Luca Zanetti. Partitioning Well-Clustered Graphs: Spectral Clustering Works!
– Vianney Perchet, Philippe Rigollet, Sylvain Chassang and Erik Snowberg. Batched Bandit Problems
– Patrick Rebeschini and Amin Karbasi. Fast Mixing for Discrete Point Processes
– Mark Reid, Rafael Frongillo, Robert Williamson and Nishant Mehta. Generalized Mixability via Entropic Duality
– Hans Simon. An Almost Optimal PAC Algorithm
– Jacob Steinhardt and John Duchi. Minimax rates for memory-bounded sparse linear regression
– Christos Thrampoulidis, Samet Oymak and Babak Hassibi. Regularized Linear Regression: A Precise Analysis of the Estimation Error
– Santosh Vempala and Ying Xiao. Max vs Min: Tensor Decomposition and ICA with nearly Linear Sample Complexity
– Huizhen Yu and Richard Sutton. On Convergence of Emphatic Temporal-Difference Learning