Sham M. Kakade

University of Washington

Professor in:
Department of Computer Science
Department of Statistics

Senior Data Science Fellow,
eScience Institute

Adjunct Professor in
Department of Electrical Engineering

Sr. Principal Researcher at MSR-NYC


I am interested in the mathematical foundations of machine learning and AI. I focus on the design of provably efficient and practical algorithms relevant to a broad range of paradigms. My current interests include: (i) reinforcement learning and control; (ii) representation (and deep) learning; and (iii) natural language processing and memory. I enjoy collaborating with a diverse set of researchers to tackle these questions!


Book: Reinforcement Learning: Theory and Algorithms  

Alekh Agarwal, Nan Jiang, Wen Sun, and I are writing a monograph on reinforcement learning. We will periodically update the draft. Also, see here for a current course.
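To give a flavor of the kind of algorithm the monograph analyzes, here is a minimal value-iteration sketch on a randomly generated finite MDP. The sizes, seed, discount factor, and tolerance below are illustrative choices of mine, not taken from the book:

```python
import numpy as np

rng = np.random.default_rng(2)
S, A, gamma = 4, 2, 0.9                      # states, actions, discount
P = rng.dirichlet(np.ones(S), size=(S, A))   # random transition kernel P[s, a, s']
R = rng.uniform(size=(S, A))                 # random rewards R[s, a]

V = np.zeros(S)
for _ in range(500):
    Q = R + gamma * P @ V        # Bellman backup: Q[s, a]
    V_new = Q.max(axis=1)        # greedy improvement
    if np.max(np.abs(V_new - V)) < 1e-10:
        break                    # (approximate) fixed point reached
    V = V_new
```

Because the Bellman operator is a gamma-contraction, the iterates converge geometrically to the optimal value function; the loop above stops once successive iterates agree to within the tolerance.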

COVID19 Reports and CommonCircle

  • PACT: Privacy Sensitive Protocols and Mechanisms for Mobile Contact Tracing.
    Justin Chan, Landon Cox, Dean Foster, Shyam Gollakota, Eric Horvitz, Joseph Jaeger, Sham Kakade, Tadayoshi Kohno, John Langford, Jonathan Larson, Puneet Sharma, Sudheesh Singanamalla, Jacob Sunshine, Stefano Tessaro.
In the IEEE Data Engineering Bulletin, Vol. 43, No. 2, 2020.
arXiv report, arXiv:2004.03544.

  • Mitigate/Suppress/Maintain: Local Targets for Victory Over COVID
Divya Siddarth et al.
Rapid Response Initiative.
The Edmond J. Safra Center at Harvard University, May 2020.
    Report, PDF.

  • Pandemic Resilience: Getting it Done
Danielle Allen et al.
Rapid Response Initiative.
The Edmond J. Safra Center at Harvard University, May 2020.
    Report, PDF.

  • Outpacing the Virus: Digital Response to Containing the Spread of COVID-19 while Mitigating Privacy Risks.
Vi Hart et al.
Rapid Response Initiative.
The Edmond J. Safra Center at Harvard University, May 2020.
    Report, PDF.

  • WA Exposure Notification App

  • In the News: from the WA Governor's Office
  • WA Notify

  • News

    Martha White, Nicolas Le Roux, and I gave a NeurIPS 2020 tutorial on direct policy optimization methods in reinforcement learning. Check it out here! We have Colab notebooks that demonstrate all the theoretical ideas.
    I am part of the NSF AI Institute on the Foundations of Machine Learning, led by UT Austin with UW as a partner institution. See here and here.
    UW received an NSF TRIPODS Phase 2 grant; I helped set this up in Phase 1, see here.
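The policy-gradient idea behind those direct policy optimization methods can be sketched on a two-armed bandit. This is a toy illustration of REINFORCE with a running-average baseline; the arm means, step size, and horizon are made-up values, and it is not taken from the tutorial's notebooks:

```python
import numpy as np

rng = np.random.default_rng(1)
true_means = np.array([0.2, 0.8])   # hypothetical arm means (arm 1 is better)
theta = np.zeros(2)                 # softmax policy parameters
lr, baseline = 0.5, 0.0

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for _ in range(3000):
    p = softmax(theta)
    a = rng.choice(2, p=p)                       # sample an action from the policy
    r = rng.normal(true_means[a], 0.1)           # noisy reward
    grad = -p
    grad[a] += 1.0                               # grad of log pi(a): e_a - p
    theta += lr * (r - baseline) * grad          # REINFORCE step with baseline
    baseline += 0.05 * (r - baseline)            # running-average reward baseline

p = softmax(theta)   # final policy; mass should concentrate on the better arm
```

The baseline does not change the expected gradient but reduces its variance, which is one of the points such tutorials typically stress.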


    Sham Kakade is a professor in the Department of Computer Science and the Department of Statistics at the University of Washington. He works on the mathematical foundations of machine learning and AI. Sham's thesis helped lay the statistical foundations of reinforcement learning. With his collaborators, his contributions include: one of the first provably efficient policy search methods for reinforcement learning, Conservative Policy Iteration; the mathematical foundations of the widely used linear bandit and Gaussian process bandit models; tensor and spectral methods for provable estimation of latent variable models; and the first sharp analysis of the perturbed gradient descent algorithm, along with the design and analysis of numerous other convex and non-convex algorithms. He is a recipient of the ICML Test of Time Award (2020), the IBM Pat Goldberg Best Paper Award (2007), and the INFORMS Revenue Management and Pricing Prize (2014). He served as program chair for COLT 2011.
    Sham was an undergraduate at Caltech, where he studied physics and worked under the guidance of John Preskill in quantum computing. He completed his Ph.D. in computational neuroscience at the Gatsby Unit, University College London, under the supervision of Peter Dayan. He was a postdoc in the Department of Computer Science at the University of Pennsylvania, where he broadened his studies to include computational game theory and economics under the guidance of Michael Kearns. Sham has been a Principal Research Scientist at Microsoft Research New England, an associate professor in the Department of Statistics at the Wharton School, University of Pennsylvania, and an assistant professor at the Toyota Technological Institute at Chicago.
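The linear bandit model mentioned in the bio can be illustrated with a minimal optimism-based (LinUCB-style) sketch. The dimensions, horizon, noise level, and confidence parameter below are arbitrary illustrative choices, not tied to any particular paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, alpha = 5, 2000, 1.0
theta_star = rng.normal(size=d)
theta_star /= np.linalg.norm(theta_star)      # unknown unit-norm reward parameter

A = np.eye(d)        # regularized Gram matrix of played arms
b = np.zeros(d)      # sum of reward-weighted arm features
regret = 0.0

for t in range(T):
    arms = rng.normal(size=(10, d))           # 10 fresh candidate arms each round
    arms /= np.linalg.norm(arms, axis=1, keepdims=True)
    A_inv = np.linalg.inv(A)
    theta_hat = A_inv @ b                     # ridge estimate of theta_star
    # optimism: estimated mean reward plus a confidence-width bonus per arm
    width = np.sqrt(np.einsum('ij,jk,ik->i', arms, A_inv, arms))
    x = arms[np.argmax(arms @ theta_hat + alpha * width)]
    reward = x @ theta_star + 0.1 * rng.normal()
    A += np.outer(x, x)                       # update the design matrix
    b += reward * x
    regret += (arms @ theta_star).max() - x @ theta_star
```

The bonus term shrinks in directions the algorithm has already explored, so play concentrates on near-optimal arms and the average per-round regret decays over time.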



    Discovering Latent Structure in Societal-Scale Data  

    Cover Tree for Nearest Neighbor Search  

    Activities and Services

    Committee for the Sloan Research Fellowships in Computer Science (active).
    Co-organizer for the Simons Symposium on New Directions in Theoretical Machine Learning, May 2019.
    Co-organizer for the Simons program on Foundations of Machine Learning, Winter 2017.
    Co-chair for the Simons Representation Learning workshop, March 2017.
    Co-chair for the IMS-MSR Workshop: Foundations of Data Science, June 11th, 2015.
    Steering committee for the fourth New England Machine Learning Day, May 18th, 2015.
    Program committee for the third New England Machine Learning Day, May 13th, 2014.
    Co-chair for New York Computer Science and Economics Day V, Dec 3rd, 2012.
    Program committee for the first New England Machine Learning Day, May 16th, 2012.
    Program chair for the 24th Annual Conference on Learning Theory (COLT 2011) which took place in Budapest, Hungary, on July 9-11, 2011.


    Tensor Decompositions for Learning Latent Variable Models, AAAI 2014
    Tensor Decomposition Methods for Learning Latent Variable Models, ICML 2013

    Course Links

    CS 6789: Foundations of Reinforcement Learning, guest instructor, Fall 2020
    CSE 599m: Reinforcement Learning and Bandits, Spring 2019
    CSE 446: Machine Learning, Winter 2019
    CSE 547 / STAT 548: Machine Learning for Big Data, Spring 2018
    CSE 446: Machine Learning, Winter 2018
    CSE 547 / STAT 548: Machine Learning for Big Data, Spring 2017
    CSE 546: Machine Learning, Autumn 2016
    CSE 547 / STAT 548: Machine Learning for Big Data, Spring 2016
    CSE 546: Machine Learning, Autumn 2015
    Stat 928: Statistical Learning Theory
    Stat 991: Multivariate Analysis, Dimensionality Reduction, and Spectral Methods
    Large Scale Learning
    Learning Theory

    Current Postdocs

    Kendall Lowrey

    Current Students

    Motoya Ohnishi (co-advised with Emo Todorov)
    Aditya Kusupati (co-advised with Ali Farhadi)
    Krishna Pillutla (co-advised with Zaid Harchaoui)

    Former Postdocs

    Weihao Kong
    Ramya Korlakai Vinayak
    Praneeth Netrapalli
    Rong Ge
    Daniel Hsu

    Former Students and Interns (in reverse chronological order)

    Aravind Rajeswaran (co-advised with Emo Todorov)
    John Thickstun (co-advised with Zaid Harchaoui)
    Gaurav Mahajan
    Ben Edelman
    Ruosong Wang
    Gabriel Cadamuro (co-advised with Josh Blumenstock)
    Rahul Kidambi
    Chi Jin
    Aaron Sidford
    Roy Frostig
    David Belanger
    Chen Wang
    Qingqing Huang
    Jaehyun Park
    Karl Stratos
    Do-kyum Kim
    Praneeth Netrapalli
    Rashish Tandon
    Rong Ge
    Adel Javanmard
    Matus Telgarsky
    Daniel Hsu (while at TTI-C)
    Sathyanarayan Anand (while at TTI-C)

    Contact Info

    Email: sham [at] cs [dot] washington [dot] edu

    Computer Science & Engineering
    Gates 303

    University of Washington
    Seattle, WA 98195