COLT 2017 accepted papers

The list of accepted papers at COLT 2017 has been published, and it looks particularly good (see below, with links to the arXiv versions)! The growth trend of previous years continues, with 228 submissions (a 14% increase over 2016) and 73 accepted papers. Note also that this year's edition will be held in Amsterdam at the beginning of July, which should be fun. The deadline for early registration is in two days, so hurry up!

– Constantinos Daskalakis, Manolis Zampetakis and Christos Tzamos. Ten Steps of EM Suffice for Mixtures of Two Gaussians
– Shachar Lovett and Jiapeng Zhang. Noisy Population Recovery from Unknown Noise
– Jonathan Scarlett, Ilija Bogunovic and Volkan Cevher. Lower Bounds on Regret for Noisy Gaussian Process Bandit Optimization
– Avrim Blum and Yishay Mansour. Efficient Co-Training of Linear Separators under Weak Dependence
– Michal Moshkovitz and Dana Moshkovitz. Mixing Implies Lower Bounds for Space Bounded Learning
– Mitali Bafna and Jonathan Ullman. The Price of Selection in Differential Privacy
– Nader Bshouty, Dana Drachsler-Cohen, Martin Vechev and Eran Yahav. Learning Disjunctions of Predicates
– Avinatan Hassidim and Yaron Singer. Submodular Optimization under Noise
– Debarghya Ghoshdastidar, Ulrike von Luxburg, Maurilio Gutzeit and Alexandra Carpentier. Two-Sample Tests for Large Random Graphs using Network Statistics
– Andreas Maurer. A second-order look at stability and generalization
– Eric Balkanski and Yaron Singer. The Sample Complexity of Optimizing a Convex Function
– Daniel Vainsencher, Shie Mannor and Huan Xu. Ignoring Is a Bliss: Learning with Large Noise Through Reweighting-Minimization
– Alekh Agarwal, Haipeng Luo, Behnam Neyshabur and Robert Schapire. Corralling a Band of Bandit Algorithms
– Nikita Zhivotovskiy. Optimal learning via local entropies and sample compression
– Max Simchowitz, Kevin Jamieson and Benjamin Recht. The Simulator: Understanding Adaptive Sampling in the Moderate-Confidence Regime
– Lunjia Hu, Ruihan Wu, Tianhong Li and Liwei Wang. Quadratic Upper Bound for Recursive Teaching Dimension of Finite VC Classes
– Nicolas Brosse, Alain Durmus, Eric Moulines and Marcelo Pereyra. Sampling from a log-concave distribution with compact support with proximal Langevin Monte Carlo
– Maxim Raginsky, Alexander Rakhlin and Matus Telgarsky. Non-Convex Learning via Stochastic Gradient Langevin Dynamics: A Nonasymptotic Analysis
– Alon Cohen, Tamir Hazan and Tomer Koren. Tight Bounds for Bandit Combinatorial Optimization
– Moran Feldman, Christopher Harshaw and Amin Karbasi. Greed Is Good: Near-Optimal Submodular Maximization via Greedy Optimization
– Bin Hu, Peter Seiler and Anders Rantzer. A Unified Analysis of Stochastic Optimization Methods Using Jump System Theory and Quadratic Constraints
– Jerry Li. Robust Sparse Estimation Tasks in High Dimensions (*to be merged)
– Yeshwanth Cherapanamjeri, Prateek Jain and Praneeth Netrapalli. Thresholding based Efficient Outlier Robust PCA
– Amir Globerson, Roi Livni and Shai Shalev-Shwartz. Effective Semisupervised Learning on Manifolds
– Yuchen Zhang, Percy Liang and Moses Charikar. A Hitting Time Analysis of Stochastic Gradient Langevin Dynamics
– Ashok Cutkosky and Kwabena Boahen. Online Learning Without Prior Information
– Joon Kwon, Vianney Perchet and Claire Vernade. Sparse Stochastic Bandits
– Blake Woodworth, Suriya Gunasekar, Mesrob I. Ohannessian and Nathan Srebro. Learning Non-Discriminatory Predictors
– Arpit Agarwal, Shivani Agarwal, Sepehr Assadi and Sanjeev Khanna. Learning with Limited Rounds of Adaptivity: Coin Tossing, Multi-Armed Bandits, and Ranking from Pairwise Comparisons
– Jerry Li and Ludwig Schmidt. Robust Proper Learning for Mixtures of Gaussians via Systems of Polynomial Inequalities
– Alexandr Andoni, Daniel Hsu, Kevin Shi and Xiaorui Sun. Correspondence retrieval
– Andrea Locatelli, Alexandra Carpentier and Samory Kpotufe. Adaptivity to Noise Parameters in Nonparametric Active Learning
– Salil Vadhan. On Learning versus Refutation
– Sebastian Casalaina-Martin, Rafael Frongillo, Tom Morgan and Bo Waggoner. Multi-Observation Elicitation
– Vitaly Feldman and Thomas Steinke. Generalization for Adaptively-chosen Estimators via Stable Median
– Shipra Agrawal, Vashist Avadhanula, Vineet Goyal and Assaf Zeevi. Thompson Sampling for the MNL-Bandit
– Rafael Frongillo and Andrew Nobel. Memoryless Sequences for Differentiable Losses
– Pranjal Awasthi, Avrim Blum, Nika Haghtalab and Yishay Mansour. Efficient PAC Learning from the Crowd
– Tselil Schramm and David Steurer. Fast and robust tensor decomposition with applications to dictionary learning
– Yury Polyanskiy, Ananda Theertha Suresh and Yihong Wu. Sample complexity of population recovery
– Holden Lee, Rong Ge, Tengyu Ma, Andrej Risteski and Sanjeev Arora. On the Ability of Neural Nets to Express Distributions
– Lijie Chen, Anupam Gupta, Jian Li, Mingda Qiao and Ruosong Wang. Nearly Optimal Sampling Algorithms for Combinatorial Pure Exploration
– Dylan Foster, Alexander Rakhlin and Karthik Sridharan. ZigZag: A new approach to adaptive online learning