Kevin Jamieson
Assistant Professor, Allen School of Computer Science & Engineering
Adjunct Professor, Department of Statistics
University of Washington
Office: CSE2 340
Gates Center for Computer Science & Engineering
University of Washington
Seattle, WA 98195


Kevin Jamieson is an Assistant Professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington and is the Guestrin Endowed Professor in Artificial Intelligence and Machine Learning. He received his B.S. in 2009 from the University of Washington under the advisement of Maya Gupta, his M.S. in 2010 from Columbia University under the advisement of Rui Castro, and his Ph.D. in 2015 from the University of Wisconsin - Madison under the advisement of Robert Nowak, all in electrical engineering. He returned to the University of Washington as faculty in 2017 after a postdoc in the AMP lab at the University of California, Berkeley, working with Benjamin Recht.

Jamieson’s research explores how to leverage already-collected data to inform which measurements to make next, in a closed loop. Such active learning can extract considerably richer insights from the same statistical budget than any measurement plan fixed in advance. His work spans theory, practical algorithms with guarantees, and open-source machine learning systems, and has been adopted in a range of applications, including measuring human perception in psychology studies, adaptive A/B/n testing in dynamic web environments, numerical optimization, and choosing hyperparameters for deep neural networks.


Postdocs

Stephen Mussmann (co-advised with Ludwig Schmidt)

Graduate students

Jennifer Brennan

Andrew Wagenmaker

Yuhao Wan

Yifang Chen

Romain Camilleri


Alumni

Lalit Jain -> Asst. Professor in UW Foster School of Business

Xin Yang (co-advised with Paul Beame) -> ByteDance

Julian Katz-Samuels -> Post-doc at U. Wisconsin - Madison

Jifan Zhang -> Graduate student at U. Wisconsin - Madison



Publications

First-Order Regret in Reinforcement Learning with Linear Function Approximation: A Robust Estimation Approach, Andrew Wagenmaker, Yifang Chen, Max Simchowitz, Simon S. Du, Kevin Jamieson, Preprint. PDF

Beyond No Regret: Instance-Dependent PAC Reinforcement Learning, Andrew Wagenmaker, Max Simchowitz, Kevin Jamieson, Preprint. PDF

Learning to Actively Learn: A Robust Approach, Jifan Zhang, Kevin Jamieson, Preprint. PDF

Adaptive Sampling for Convex Regression, Max Simchowitz, Kevin Jamieson, Jordan Suchow, Tom Griffiths, Preprint. PDF


Selective Sampling for Online Best-arm Identification, Romain Camilleri, Zhihan Xiong, Maryam Fazel, Lalit Jain, Kevin Jamieson, NeurIPS 2021. PDF

Practical, Provably-Correct Interactive Learning in the Realizable Setting: The Power of True Believers, Julian Katz-Samuels, Blake Mason, Kevin Jamieson, Rob Nowak, NeurIPS 2021. PDF

Corruption Robust Active Learning, Yifang Chen, Simon Shaolei Du, Kevin Jamieson, NeurIPS 2021. PDF

Improved Algorithms for Agnostic Pool-based Active Classification, Julian Katz-Samuels, Jifan Zhang, Lalit Jain, Kevin Jamieson, ICML 2021. PDF

High-Dimensional Experimental Design and Kernel Bandits, Romain Camilleri, Julian Katz-Samuels, Kevin Jamieson, ICML 2021. PDF

Improved Corruption Robust Algorithms for Episodic Reinforcement Learning, Yifang Chen, Simon S. Du, Kevin Jamieson, ICML 2021. PDF

Task-Optimal Exploration in Linear Dynamical Systems, Andrew Wagenmaker, Max Simchowitz, Kevin Jamieson, ICML 2021. PDF

Leveraging Post Hoc Context for Faster Learning in Bandit Settings with Applications in Robot-Assisted Feeding, Ethan K. Gordon, Sumegh Roychowdhury, Tapomayukh Bhattacharjee, Kevin Jamieson, Siddhartha S. Srinivasa, ICRA 2021. PDF

Experimental Design for Regret Minimization in Linear Bandits, Andrew Wagenmaker, Julian Katz-Samuels, Kevin Jamieson, AISTATS 2021. PDF

An Empirical Process Approach to the Union Bound: Practical Algorithms for Combinatorial and Linear Bandits, Julian Katz-Samuels, Lalit Jain, Zohar Karnin, Kevin Jamieson, NeurIPS 2020. PDF

Active Learning for Identification of Linear Dynamical Systems, Andrew Wagenmaker, Kevin Jamieson, COLT 2020. PDF

Estimating the number and effect sizes of non-null hypotheses, Jennifer Brennan, Ramya Korlakai Vinayak and Kevin Jamieson, ICML 2020. PDF

Massively Parallel Hyperparameter Tuning, Liam Li, Kevin Jamieson, Afshin Rostamizadeh, Ekaterina Gonina, Moritz Hardt, Benjamin Recht, Ameet Talwalkar, MLSys 2020. PDF

The True Sample Complexity of Identifying Good Arms, Julian Katz-Samuels, Kevin Jamieson, AISTATS 2020. PDF

Sequential Experimental Design for Transductive Linear Bandits, Tanner Fiez, Lalit Jain, Kevin Jamieson, Lillian Ratliff, NeurIPS 2019. PDF

Non-Asymptotic Gap-Dependent Regret Bounds for Tabular MDPs, Max Simchowitz, Kevin Jamieson, NeurIPS 2019. PDF

A New Perspective on Pool-Based Active Classification and False-Discovery Control, Lalit Jain, Kevin Jamieson, NeurIPS 2019. PDF

A Bandit Approach to Multiple Testing with False Discovery Control, Kevin Jamieson, Lalit Jain, NeurIPS, 2018. PDF

Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization, Lisha Li, Kevin Jamieson, Giulia DeSalvo, Afshin Rostamizadeh, Ameet Talwalkar, JMLR, 2018*. PDF

Firing Bandits: Optimizing Crowdfunding, Lalit Jain, Kevin Jamieson, ICML, 2018. PDF

A framework for Multi-A(rmed)/B(andit) testing with online FDR control, Fanny Yang, Aaditya Ramdas, Kevin Jamieson, Martin J. Wainwright, NeurIPS, 2017. PDF

The Simulator: Understanding Adaptive Sampling in the Moderate-Confidence Regime, Max Simchowitz, Kevin Jamieson, Benjamin Recht, COLT, 2017. PDF

Hyperband: Bandit-Based Configuration Evaluation for Hyperparameter Optimization, Lisha Li, Kevin Jamieson, Giulia DeSalvo, Afshin Rostamizadeh, Ameet Talwalkar, ICLR, 2017. PDF

Comparing Human-Centric and Robot-Centric Sampling for Robot Deep Learning from Demonstrations, Michael Laskey, Caleb Chuck, Jonathan Lee, Jeffrey Mahler, Sanjay Krishnan, Kevin Jamieson, Anca Dragan, Ken Goldberg, ICRA, 2017. PDF

The Power of Adaptivity in Identifying Statistical Alternatives, Kevin Jamieson, Daniel Haas, Ben Recht, NeurIPS, 2016*. PDF

Finite Sample Prediction and Recovery Bounds for Ordinal Embedding, Lalit Jain, Kevin Jamieson, Robert Nowak, NeurIPS, 2016. PDF

Best-of-K Bandits, Max Simchowitz, Kevin Jamieson, Benjamin Recht, COLT, 2016. PDF

Non-stochastic Best Arm Identification and Hyperparameter Optimization, Kevin Jamieson, Ameet Talwalkar, AISTATS, 2016. PDF

Top Arm Identification in Multi-Armed Bandits with Batch Arm Pulls, Kwang-Sung Jun, Kevin Jamieson, Robert Nowak, Xiaojin Zhu, AISTATS, 2016. PDF

NEXT: A System for Real-World Development, Evaluation, and Application of Active Learning, Kevin Jamieson, Lalit Jain, Chris Fernandez, Nick Glattard, Robert Nowak, NeurIPS, 2015. PDF

The Analysis of Adaptive Data Collection Methods for Machine Learning, Kevin Jamieson, PhD Thesis, University of Wisconsin - Madison, March 2015. PDF

Sparse Dueling Bandits, Kevin Jamieson, Sumeet Katariya, Atul Deshpande, and Robert Nowak, AISTATS, 2015. PDF

Best-arm identification algorithms for multi-armed bandits in the fixed confidence setting, Kevin Jamieson and Robert Nowak, CISS, 2014. PDF

lil' UCB : An Optimal Exploration Algorithm for Multi-Armed Bandits, Kevin Jamieson, Matt Malloy, Robert Nowak, and Sebastien Bubeck, COLT, 2014. PDF

On Finding the Largest Mean Among Many, Kevin Jamieson, Matt Malloy, Robert Nowak, and Sebastien Bubeck, Asilomar, 2013. PDF

Query Complexity of Derivative-Free Optimization, Kevin Jamieson, Robert Nowak, and Ben Recht, NeurIPS, 2012. PDF (Extended version)

Active Ranking using Pairwise Comparisons, Kevin Jamieson and Robert Nowak, NeurIPS, 2011. PDF (Extended version)

Low-Dimensional Embedding using Adaptively Selected Ordinal Data, Kevin Jamieson and Robert Nowak, Allerton Conference on Communication, Control, and Computing, 2011. PDF

Channel-Robust Classifiers, Hyrum S. Anderson, Maya R. Gupta, Eric Swanson, and Kevin Jamieson, IEEE Trans. on Signal Processing, 2010.

Training a support vector machine to classify signals in a real environment given clean training data, Kevin Jamieson, Maya R. Gupta, Eric Swanson and Hyrum S. Anderson, Proc. IEEE ICASSP, 2010.

Sequential Bayesian Estimation of the Probability of Detection for Tracking, Kevin Jamieson, Maya R Gupta, and David Krout, Proc. IEEE Conference on Information Fusion, 2009.

Selected Projects

Adaptive Multiple Testing with FDR Control


Consider N possible treatments, say, drugs in a clinical trial, where each treatment either has a positive expected effect or no effect at all. If evaluating the ith treatment yields a noisy outcome, how do we adaptively decide which treatment to try next when the goal is to discover as many true positives as possible, subject to the proportion of false discoveries being bounded by .05? Our solution to this problem was so efficient that we worked with Optimizely, the largest experimentation platform for A/B/n testing on the web, to deploy our algorithm, where it is now used by some of the web's most successful companies.

Conference paper Blog post
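The closed loop above can be illustrated with a small toy sketch. This is not the algorithm from the paper: the function name `adaptive_fdr_screen`, the UCB-style allocation rule, and the use of a plain Benjamini-Hochberg step-up at the end are all illustrative choices, just to show the shape of "adaptively sample promising treatments, then declare discoveries at a fixed false discovery level."

```python
import math
import random

def adaptive_fdr_screen(effects, budget, delta=0.05, seed=0):
    """Toy sketch: adaptively sample N treatments, favoring promising ones,
    then declare discoveries with a Benjamini-Hochberg step-up at level delta."""
    rng = random.Random(seed)
    N = len(effects)
    n = [0] * N      # samples drawn per treatment
    s = [0.0] * N    # running sum of observed outcomes per treatment
    for t in range(budget):
        # UCB-style score: unsampled treatments first, then high upper bounds
        def ucb(i):
            if n[i] == 0:
                return float("inf")
            return s[i] / n[i] + math.sqrt(2 * math.log(t + 1) / n[i])
        i = max(range(N), key=ucb)
        s[i] += effects[i] + rng.gauss(0, 1)   # noisy unit-variance outcome
        n[i] += 1
    # one-sided z-test p-value for each treatment against a zero-effect null
    pvals = sorted(
        (0.5 * math.erfc((s[i] / n[i]) * math.sqrt(n[i]) / math.sqrt(2)), i)
        for i in range(N)
    )
    # Benjamini-Hochberg step-up: reject the k smallest p-values, where k is
    # the largest rank satisfying p_(k) <= delta * k / N
    k = 0
    for rank, (p, _) in enumerate(pvals, start=1):
        if p <= delta * rank / N:
            k = rank
    return sorted(i for _, i in pvals[:k])
```

Note that naively computed p-values are not valid under adaptive sampling; handling that interaction rigorously, with always-valid confidence bounds, is exactly the technical content of the conference paper above, which this sketch does not attempt to reproduce.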

Optimizing Crowdfunding Platforms


Kiva is a nonprofit crowdfunding platform with a mission to help alleviate poverty around the world by enabling anyone in the crowd to lend as little as $25 to a borrower to help them start or grow a business, go to school, access clean energy, or realize their potential. Unfortunately, borrowers outnumber lenders, and not all projects can hit their reserve price and be funded. The challenge for the crowdfunding platform is deciding how to prioritize loans (what lenders see when they visit the website) to maximize the total number of fully funded projects. This objective is in contrast to maximizing the total number of lending events, analogous to click-through rate. We model this problem and propose an algorithm for this setting in our conference paper, and we are actively working with Kiva to implement it in their system. The same principles apply directly to other crowdfunding systems as well (e.g., Kickstarter, IndieGoGo).

Conference paper

Hyperband: Bandits for hyperparameter tuning


Hyperband is a method for speeding up hyperparameter search. In contrast to Bayesian methods, which focus their energy on making better selections, Hyperband uses simple random search but exploits the iterative nature of training algorithms using recent advances in pure-exploration multi-armed bandits. On deep learning tasks it can achieve improvements of orders of magnitude over Bayesian optimization.

Learn more New blog post (2018) Blog post (2016)
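The subroutine at the heart of Hyperband, successive halving, exploits iterative training directly: train many configurations briefly, keep the best fraction, and give the survivors a larger budget. A minimal sketch, with hypothetical names (`train_eval` stands in for an assumed user-supplied function returning validation loss after a given number of training iterations):

```python
def successive_halving(configs, train_eval, min_iters=1, eta=3):
    """Keep the best 1/eta fraction of configs at each rung, giving the
    survivors eta times more training iterations, until one remains."""
    survivors = list(configs)
    iters = min_iters
    while len(survivors) > 1:
        # evaluate every surviving config at the current iteration budget
        scored = sorted(survivors, key=lambda c: train_eval(c, iters))
        survivors = scored[: max(1, len(survivors) // eta)]
        iters *= eta   # survivors earn a larger training budget
    return survivors[0]
```

Hyperband itself hedges against the unknown trade-off between trying many configurations and training each one longer by running successive halving several times with different starting budgets.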

The New Yorker Caption Contest


Each week, the New Yorker magazine runs a cartoon contest where readers are invited to submit a caption for that week's cartoon; thousands are submitted. The NEXT team has teamed up with Bob Mankoff, cartoon editor of the New Yorker, to use crowdsourcing and adaptive sampling techniques to help decide the caption contest winner each week. This is an example of state-of-the-art active learning being implemented and evaluated in the real world using the NEXT system and the principles developed in that paper.

Learn more This week's contest



NEXT

NEXT is a computational framework and open-source machine learning system that simplifies the deployment and evaluation of active learning algorithms that use human feedback, e.g., from Mechanical Turk. The system is optimized for the real-time computational demands of active learning algorithms and built to scale to a crowd of workers of any size. It serves active learning researchers as well as practitioners who want to collect data adaptively.

Conference Paper GitHub Page Official website

Beer Mapper


Beer Mapper began as a practical implementation of my theoretical active ranking work on an iPhone/iPad, intended simply as a proof of concept and a cool prop for presentations of the theory. A brief page on my website describing how it worked collected dust for several months until several blogs found it, translating into heavy traffic and interest in bringing it to the App Store. I teamed up with Savvo, a Chicago-based tech startup that now leads development of the app.

Official website