Me
Kevin Jamieson
Assistant Professor
Allen School of Computer Science & Engineering
University of Washington

jamieson@cs.washington.edu
Computer Science & Engineering, Office 666
Paul Allen Center
University of Washington
Seattle, WA 98195

About

Kevin Jamieson is an Assistant Professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington and the Guestrin Endowed Professor in Artificial Intelligence and Machine Learning. He received his B.S. in 2009 from the University of Washington, advised by Maya Gupta; his M.S. in 2010 from Columbia University, advised by Rui Castro; and his Ph.D. in 2015 from the University of Wisconsin–Madison, advised by Robert Nowak, all in electrical engineering. He returned to the University of Washington as faculty in 2017 after a postdoc in the AMP Lab at the University of California, Berkeley, working with Benjamin Recht.

Jamieson’s research explores how to use the data collected so far to decide which measurements to make next, in a closed loop. With the same statistical budget, such active learning can extract considerably richer insights than any measurement plan fixed in advance. His work spans theory, practical algorithms with guarantees, and open-source machine learning systems, and has been adopted in a range of applications, including measuring human perception in psychology studies, adaptive A/B/n testing in dynamic web environments, numerical optimization, and hyperparameter tuning for deep neural networks.


Research

Preprints

Active Learning for Convex Regression, Max Simchowitz, Kevin Jamieson, Jordan Suchow, Tom Griffiths, Preprint, 2017. PDF

Publications

A framework for Multi-A(rmed)/B(andit) testing with online FDR control, Fanny Yang, Aaditya Ramdas, Kevin Jamieson, Martin J. Wainwright, NIPS, 2017. PDF

The Simulator: Understanding Adaptive Sampling in the Moderate-Confidence Regime, Max Simchowitz, Kevin Jamieson, Benjamin Recht, COLT, 2017. PDF

Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization, Lisha Li, Kevin Jamieson, Giulia DeSalvo, Afshin Rostamizadeh, Ameet Talwalkar, ICLR, 2017*. PDF

Comparing Human-Centric and Robot-Centric Sampling for Robot Deep Learning from Demonstrations, Michael Laskey, Caleb Chuck, Jonathan Lee, Jeffrey Mahler, Sanjay Krishnan, Kevin Jamieson, Anca Dragan, Ken Goldberg, International Conference on Robotics and Automation (ICRA), 2017. PDF

The Power of Adaptivity in Identifying Statistical Alternatives, Kevin Jamieson, Daniel Haas, Ben Recht, NIPS, 2016*. PDF

Finite Sample Prediction and Recovery Bounds for Ordinal Embedding, Lalit Jain, Kevin Jamieson, Robert Nowak, NIPS, 2016. PDF

Best-of-K Bandits, Max Simchowitz, Kevin Jamieson, Benjamin Recht, COLT, 2016. PDF

Non-stochastic Best Arm Identification and Hyperparameter Optimization, Kevin Jamieson, Ameet Talwalkar, AISTATS, 2016. PDF

Top Arm Identification in Multi-Armed Bandits with Batch Arm Pulls, Kwang-Sung Jun, Kevin Jamieson, Robert Nowak, Xiaojin Zhu, AISTATS, 2016. PDF

NEXT: A System for Real-World Development, Evaluation, and Application of Active Learning, Kevin Jamieson, Lalit Jain, Chris Fernandez, Nick Glattard, Robert Nowak, NIPS, 2015. PDF

The Analysis of Adaptive Data Collection Methods for Machine Learning, Kevin Jamieson, PhD Thesis, University of Wisconsin - Madison, March 2015. PDF

Sparse Dueling Bandits, Kevin Jamieson, Sumeet Katariya, Atul Deshpande, and Robert Nowak, AISTATS, 2015. PDF

Best-arm identification algorithms for multi-armed bandits in the fixed confidence setting, Kevin Jamieson and Robert Nowak, CISS, 2014. PDF

lil' UCB : An Optimal Exploration Algorithm for Multi-Armed Bandits, Kevin Jamieson, Matt Malloy, Robert Nowak, and Sebastien Bubeck, COLT, 2014. PDF

On Finding the Largest Mean Among Many, Kevin Jamieson, Matt Malloy, Robert Nowak, and Sebastien Bubeck, Asilomar, 2013. PDF

Query Complexity of Derivative-Free Optimization, Kevin Jamieson, Robert Nowak, and Ben Recht, NIPS, 2012. PDF (Extended version)

Active Ranking using Pairwise Comparisons, Kevin Jamieson and Robert Nowak, NIPS, 2011. PDF (Extended version)

Low-Dimensional Embedding using Adaptively Selected Ordinal Data, Kevin Jamieson and Robert Nowak, Allerton Conference on Communication, Control, and Computing, 2011. PDF

Channel-Robust Classifiers, Hyrum S. Anderson, Maya R. Gupta, Eric Swanson, and Kevin Jamieson, IEEE Trans. on Signal Processing, 2010.

Training a support vector machine to classify signals in a real environment given clean training data, Kevin Jamieson, Maya R. Gupta, Eric Swanson and Hyrum S. Anderson, Proc. IEEE ICASSP, 2010.

Sequential Bayesian Estimation of the Probability of Detection for Tracking, Kevin Jamieson, Maya R Gupta, and David Krout, Proc. IEEE Conference on Information Fusion, 2009.


Coding Projects

Hyperband: Bandits for hyperparameter tuning

Hyperband is a method for speeding up hyperparameter search. In contrast to Bayesian optimization methods, which spend their effort on choosing better configurations to evaluate, Hyperband uses simple random search but exploits the iterative nature of training algorithms, drawing on recent advances in pure-exploration multi-armed bandits. On deep learning tasks it can yield improvements of up to orders of magnitude over Bayesian optimization.
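The core subroutine of Hyperband is successive halving: start many random configurations on a small training budget, then repeatedly keep the best fraction and grow the budget. Below is a minimal illustrative sketch (not the released implementation); `sample_config` and `train_and_score` are placeholders for your own hyperparameter sampler and resumable training routine, and the full Hyperband algorithm additionally repeats this procedure over several trade-offs between the number of configurations and the starting budget.

```python
import math
import random


def successive_halving(sample_config, train_and_score, n=27, max_iters=81, eta=3):
    """Start n random configurations on a small budget, then repeatedly keep
    the best 1/eta of them and grow the per-configuration budget by eta."""
    configs = [sample_config() for _ in range(n)]
    iters = max_iters / eta ** round(math.log(n, eta))  # smallest budget per config
    while len(configs) > 1 and iters <= max_iters:
        # Evaluate every surviving configuration with the current budget
        # (lower score = better, e.g. validation loss).
        scores = [train_and_score(cfg, int(iters)) for cfg in configs]
        keep = max(1, len(configs) // eta)
        order = sorted(range(len(configs)), key=lambda i: scores[i])
        configs = [configs[i] for i in order[:keep]]
        iters *= eta
    return configs[0]


if __name__ == "__main__":
    # Toy usage: the "score" is just a cheap function of the learning rate
    # and the budget, so the example runs instantly.
    best = successive_halving(
        sample_config=lambda: {"lr": 10 ** random.uniform(-5, 0)},
        train_and_score=lambda cfg, iters: abs(math.log10(cfg["lr"]) + 3) / iters,
    )
    print("best config:", best)
```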

Learn more Blog post 1 Blog post 2



The New Yorker Caption Contest

Each week, the New Yorker magazine runs a cartoon contest in which readers are invited to submit a caption for that week's cartoon; thousands are submitted. The NEXT team has partnered with Bob Mankoff, cartoon editor of the New Yorker, to use crowdsourcing and adaptive sampling techniques to help decide the caption contest winner each week. This is an example of state-of-the-art active learning being implemented and evaluated in the real world using the NEXT system and the principles developed in the NEXT paper.
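To give a rough flavor of the adaptive sampling involved (this is an illustrative sketch, not the algorithm actually deployed in the contest), the code below allocates a fixed rating budget by always requesting the next rating for the caption whose upper confidence bound is highest, so ratings concentrate on contenders rather than being split evenly. Here `get_rating` is a hypothetical stand-in for a single crowd worker's rating.

```python
import math


def adaptively_rate_captions(captions, get_rating, budget=10000, delta=0.05):
    """UCB-style allocation of crowd ratings: repeatedly rate the caption whose
    plausible mean rating is currently highest. Ratings are assumed to lie in
    [1, 3] (1 = unfunny, 3 = funny)."""
    counts = {c: 0 for c in captions}
    totals = {c: 0.0 for c in captions}

    def mean(c):
        return totals[c] / counts[c]

    def bonus(c):
        # Hoeffding-style confidence width for a rating bounded in [1, 3].
        return (3 - 1) * math.sqrt(math.log(2 * len(captions) / delta) / (2 * counts[c]))

    for c in captions:  # seed every caption with one rating
        totals[c] += get_rating(c)
        counts[c] += 1

    for _ in range(budget - len(captions)):
        c = max(captions, key=lambda x: mean(x) + bonus(x))
        totals[c] += get_rating(c)
        counts[c] += 1

    return sorted(captions, key=mean, reverse=True)  # best guess at the ranking
```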

Learn more This week's contest



NEXT

NEXT is a computational framework and open-source machine learning system that simplifies the deployment and evaluation of active learning algorithms that use human feedback, e.g. from Mechanical Turk. The system is optimized for the real-time computational demands of active learning algorithms and built to scale to a crowd of workers of any size. It is intended for active learning researchers as well as practitioners who want to collect data adaptively.
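At its core, an experiment served by a system like NEXT is a closed loop between an adaptive algorithm and a crowd. The sketch below (illustrative only, not the actual NEXT API) shows the shape of that loop; `ask_crowd` is a placeholder for whatever interface delivers queries to human workers.

```python
from typing import Any, Callable, Protocol


class ActiveLearner(Protocol):
    """Minimal interface an adaptive algorithm needs to expose."""
    def get_query(self) -> Any: ...                                  # pick the next question
    def process_answer(self, query: Any, answer: Any) -> None: ...   # update internal state


def run_experiment(learner: ActiveLearner,
                   ask_crowd: Callable[[Any], Any],
                   num_queries: int) -> ActiveLearner:
    """Closed loop: each human answer immediately informs the next query,
    so the algorithm must keep up with the real-time rate of the crowd."""
    for _ in range(num_queries):
        query = learner.get_query()
        answer = ask_crowd(query)  # e.g. a response from a Mechanical Turk worker
        learner.process_answer(query, answer)
    return learner
```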

Conference Paper GitHub Page Official website


Beer Mapper

Beer Mapper began as an iPhone/iPad implementation of my theoretical work on active ranking, intended simply as a proof of concept and a fun prop for presentations of the theory. A brief page on my website describing how it worked collected dust for several months until several blogs discovered it, generating significant traffic and interest in bringing the app to the App Store. I teamed up with Savvo, a tech startup based out of Chicago that is now leading development of the app.

Official website