COLT 2014 accepted papers

The accepted papers for COLT 2014 have just been posted! This year we had a record number of 140 submissions, out of which 52 were accepted (38 for a 20-minute presentation and 14 for a 5-minute presentation). In my opinion the program is very strong, and I am looking forward to listening to all these talks in Barcelona! By the way, now may be a very good time to register; the deadline for early registration is coming up fast!

Here is a list of the accepted papers, together with links where I could find them (if you want me to add a link, just send me an email).

– Jacob Abernethy, Chansoo Lee, Abhinav Sinha and Ambuj Tewari. Learning with Perturbations via Gaussian Smoothing

– Alekh Agarwal, Animashree Anandkumar, Prateek Jain, Praneeth Netrapalli and Rashish Tandon. Learning Sparsely Used Overcomplete Dictionaries

– Alekh Agarwal, Ashwin Badanidiyuru, Miroslav Dudik, Robert Schapire and Aleksandrs Slivkins. Robust Multi-objective Learning with Mentor Feedback

– Morteza Alamgir, Ulrike von Luxburg and Gabor Lugosi. Density-preserving quantization with application to graph downsampling

– Joseph Anderson, Mikhail Belkin, Navin Goyal, Luis Rademacher and James Voss. The More, the Merrier: the Blessing of Dimensionality for Learning Large Gaussian Mixtures

– Sanjeev Arora, Rong Ge and Ankur Moitra. New Algorithms for Learning Incoherent and Overcomplete Dictionaries

– Ashwinkumar Badanidiyuru, John Langford and Aleksandrs Slivkins. Resourceful Contextual Bandits

– Shai Ben-David and Ruth Urner. The sample complexity of agnostic learning under deterministic labels

– Aditya Bhaskara, Moses Charikar and Aravindan Vijayaraghavan. Uniqueness of Tensor Decompositions with Applications to Polynomial Identifiability

– Evgeny Burnaev and Vladimir Vovk. Efficiency of conformalized ridge regression

– Karthekeyan Chandrasekaran and Richard M. Karp. Finding a most biased coin with fewest flips

– Yudong Chen, Xinyang Yi and Constantine Caramanis. A Convex Formulation for Mixed Regression: Minimax Optimal Rates

– Amit Daniely, Nati Linial and Shai Shalev-Shwartz. The complexity of learning halfspaces using generalized linear methods

– Amit Daniely and Shai Shalev-Shwartz. Optimal Learners for Multiclass Problems

– Constantinos Daskalakis and Gautam Kamath. Faster and Sample Near-Optimal Algorithms for Proper Learning Mixtures of Gaussians

– Ofer Dekel, Jian Ding, Tomer Koren and Yuval Peres. Online Learning with Composite Loss Functions Can Be Hard

– Tim van Erven, Wojciech Kotlowski and Manfred K. Warmuth. Follow the Leader with Dropout Perturbations

– Vitaly Feldman and Pravesh Kothari. Learning Coverage Functions and Private Release of Marginals

– Vitaly Feldman and David Xiao. Sample Complexity Bounds on Differentially Private Learning via Communication Complexity

– Pierre Gaillard, Gilles Stoltz and Tim van Erven. A Second-order Bound with Excess Losses

– Eyal Gofer. Higher-Order Regret Bounds with Switching Costs

– Sudipto Guha and Kamesh Munagala. Stochastic Regret Minimization via Thompson Sampling

– Moritz Hardt, Raghu Meka, Prasad Raghavendra and Benjamin Weitz. Computational Limits for Matrix Completion

– Moritz Hardt and Mary Wootters. Fast Matrix Completion Without the Condition Number

– Elad Hazan, Zohar Karnin and Raghu Meka. Volumetric Spanners: an Efficient Exploration Basis for Learning

– Prateek Jain and Sewoong Oh. Learning Mixtures of Discrete Product Distributions using Spectral Decompositions

– Kevin Jamieson, Matthew Malloy, Robert Nowak and Sébastien Bubeck. lil’ UCB: An Optimal Exploration Algorithm for Multi-Armed Bandits

– Satyen Kale. Multiarmed Bandits With Limited Expert Advice

– Varun Kanade and Justin Thaler. Distribution-Independent Reliable Learning

– Ravindran Kannan, Santosh S. Vempala and David Woodruff. Principal Component Analysis and Higher Correlations for Distributed Data

– Emilie Kaufmann, Olivier Cappé and Aurélien Garivier. On the Complexity of A/B Testing

– Matthäus Kleindessner and Ulrike von Luxburg. Uniqueness of ordinal embedding

– Kfir Levy, Elad Hazan and Tomer Koren. Logistic Regression: Tight Bounds for Stochastic and Online Optimization

– Ping Li, Cun-Hui Zhang and Tong Zhang. Compressed Counting Meets Compressed Sensing

– Che-Yu Liu and Sébastien Bubeck. Most Correlated Arms Identification

– Stefan Magureanu, Richard Combes and Alexandre Proutière. Lipschitz Bandits: Regret Lower Bounds and Optimal Algorithms

– Shie Mannor, Vianney Perchet and Gilles Stoltz. Approachability in unknown games: Online learning meets multi-objective optimization

– Brendan McMahan and Francesco Orabona. Unconstrained Online Linear Learning in Hilbert Spaces: Minimax Algorithms and Normal Approximations

– Shahar Mendelson. Learning without Concentration

– Aditya Menon and Robert Williamson. Bayes-Optimal Scorers for Bipartite Ranking

– Elchanan Mossel, Joe Neeman and Allan Sly. Belief Propagation, Robust Reconstruction and Optimal Recovery of Block Models

– Andreas Maurer, Massimiliano Pontil and Bernardino Romera-Paredes. An Inequality with Applications to Structured Sparsity and Multitask Dictionary Learning

– Alexander Rakhlin and Karthik Sridharan. Online Nonparametric Regression

– Harish Ramaswamy, Balaji S.B., Shivani Agarwal and Robert Williamson. On the Consistency of Output Code Based Learning Algorithms for Multiclass Learning Problems

– Samira Samadi and Nick Harvey. Near-Optimal Herding

– Rahim Samei, Pavel Semukhin, Boting Yang and Sandra Zilles. Sample Compression for Multi-label Concept Classes

– Ingo Steinwart, Chloe Pasin and Robert Williamson. Elicitation and Identification of Properties

– Ilya Tolstikhin, Gilles Blanchard and Marius Kloft. Localized Complexities for Transductive Learning

– Robert Williamson. The Geometry of Losses

– Jiaming Xu, Marc Lelarge and Laurent Massoulie. Edge Label Inference in Generalized Stochastic Block Models: from Spectral Theory to Impossibility Results

– Se-Young Yun and Alexandre Proutière. Community Detection via Random and Adaptive Sampling

– Yuchen Zhang, Martin Wainwright and Michael Jordan. Lower bounds on the performance of polynomial-time algorithms for sparse linear regression
