Sham M. Kakade

Washington Research Foundation Data Science Chair

Associate Professor in both:
Department of Computer Science
Department of Statistics

Senior Data Science Fellow,
eScience Institute

Adjunct Professor,
Department of Electrical Engineering

University of Washington


Publications  


Reinforcement Learning: Theory and Algorithms  

Alekh Agarwal, Nan Jiang, and I are writing a monograph on reinforcement learning. We will post updates periodically.

News and Events

ADSI Workshop and Summer School:
Workshop, Aug 19-21: Foundations of Learning and Control Workshop
Summer School, Aug 13-17: Foundations of Data Science School
Algorithmic Foundations of Data Science Institute:
UW received an NSF TRIPODS grant, which supports our institute on the theoretical foundations of data science.

Research

I study the mathematical foundations of machine learning and artificial intelligence, with a focus on designing provably efficient and practical algorithms relevant to a broad range of paradigms. I seek to use these advances to make progress on core AI problems. My current interests include: (i) reinforcement learning and control, (ii) representation learning, and (iii) natural language processing and memory. I enjoy collaborating with a diverse set of researchers to tackle these problems!

Bio

Sham Kakade is a Washington Research Foundation Data Science Chair, with a joint appointment in the Department of Computer Science and the Department of Statistics at the University of Washington, and is a co-director of the Algorithmic Foundations of Data Science Institute. He works on the mathematical foundations of machine learning and AI. Sham's thesis helped lay the foundations of the PAC-MDP framework for reinforcement learning. His contributions with collaborators include: Conservative Policy Iteration, one of the first provably efficient policy search methods for reinforcement learning; the mathematical foundations of the widely used linear bandit and Gaussian process bandit models; tensor and spectral methods for provable estimation of latent variable models (applicable to mixtures of Gaussians, HMMs, and LDA); and the first sharp analysis of the perturbed gradient descent algorithm, along with the design and analysis of numerous other convex and non-convex algorithms. He received the IBM Goldberg Best Paper Award (2007) for contributions to fast nearest neighbor search and the INFORMS Revenue Management and Pricing Section Best Paper Prize (2014). He was program chair for COLT 2011.
Sham was an undergraduate at Caltech, where he studied physics and worked on quantum computing under the guidance of John Preskill. He then completed his Ph.D. in computational neuroscience at the Gatsby Unit at University College London, under the supervision of Peter Dayan. He was a postdoc in the Department of Computer Science at the University of Pennsylvania, where he broadened his studies to include computational game theory and economics under the guidance of Michael Kearns. Sham has been a Principal Research Scientist at Microsoft Research New England, an associate professor in the Department of Statistics at the Wharton School, University of Pennsylvania, and an assistant professor at the Toyota Technological Institute at Chicago.
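As an aside on the perturbed gradient descent result mentioned above: the underlying idea is that plain gradient descent, augmented with an occasional random perturbation whenever the gradient is small, provably escapes strict saddle points. Below is a minimal illustrative sketch in Python, not the paper's exact procedure; the step size eta, perturbation radius r, and the thresholds g_thresh and t_thresh are hypothetical placeholder values rather than the constants prescribed by the analysis.

    import numpy as np

    def perturbed_gradient_descent(grad, x0, eta=0.05, r=0.1,
                                   g_thresh=1e-3, t_thresh=20, n_iters=2000):
        # Gradient descent, plus a random perturbation (uniform in a ball
        # of radius r) whenever the gradient is small and no perturbation
        # has occurred within the last t_thresh iterations.
        x = np.asarray(x0, dtype=float).copy()
        t_last = -(t_thresh + 1)  # iteration of the most recent perturbation
        for t in range(n_iters):
            g = grad(x)
            if np.linalg.norm(g) <= g_thresh and t - t_last > t_thresh:
                # Near a critical point: jitter x uniformly in a small ball,
                # so descent can slide off a saddle along a direction of
                # negative curvature.
                u = np.random.randn(x.size)
                u *= r * np.random.rand() ** (1.0 / x.size) / np.linalg.norm(u)
                x = x + u.reshape(x.shape)
                t_last = t
            else:
                x = x - eta * g  # standard gradient step
        return x

    # Toy usage: f(x, y) = (x^2 - 1)^2 + y^2 has a strict saddle at the
    # origin. Started exactly there, unperturbed gradient descent never
    # moves, while this variant reaches a minimum near (1, 0) or (-1, 0).
    grad_f = lambda v: np.array([4 * v[0] * (v[0] ** 2 - 1), 2 * v[1]])
    print(perturbed_gradient_descent(grad_f, np.zeros(2)))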

Projects

MusicNet  

Discovering Latent Structure in Societal-Scale Data  

Cover Tree for Nearest Neighbor Search  


Activities and Services

Committee for the Sloan Research Fellowships in Computer Science (active).
Co-organizer for the Simons Symposium on New Directions in Theoretical Machine Learning, May 2019.
Co-organizer for the Simons Foundations of Machine Learning program, Winter 2017.
Co-chair for the Simons Representation Learning workshop, March 2017.
Co-chair for the IMS-MSR Workshop: Foundations of Data Science, June 11th, 2015.
Steering committee for the fourth New England Machine Learning Day, May 18th, 2015.
Program committee for the third New England Machine Learning Day, May 13th, 2014.
Co-chair for New York Computer Science and Economics Day V, Dec 3rd, 2012.
Program committee for the first New England Machine Learning Day, May 16th, 2012.
Program chair for the 24th Annual Conference on Learning Theory (COLT 2011) which took place in Budapest, Hungary, on July 9-11, 2011.

Tutorials

Tensor Decompositions for Learning Latent Variable Models, AAAI 2014
Tensor Decomposition Methods for Learning Latent Variable Models, ICML 2013

Course Links

CSE 599m: Reinforcement Learning and Bandits, Spring 2019
CSE 446: Machine Learning, Winter 2019
CSE 547 / STAT 548: Machine Learning for Big Data, Spring 2018
CSE 446: Machine Learning, Winter 2018
CSE 547 / STAT 548: Machine Learning for Big Data, Spring 2017
CSE 546: Machine Learning, Autumn 2016
CSE 547 / STAT 548: Machine Learning for Big Data, Spring 2016
CSE 546: Machine Learning, Autumn 2015
Stat 928: Statistical Learning Theory
Stat 991: Multivariate Analysis, Dimensionality Reduction, and Spectral Methods
Large Scale Learning
Learning Theory

Current Postdocs

Weihao Kong
Ramya Korlakai Vinayak

Current Students

Gabriel Cadamuro (co-advised with Josh Blumenstock)
Krishna Pillutla (co-advised with Zaid Harchaoui)
Aravind Rajeswaran (co-advised with Emo Todorov)
John Thickstun (co-advised with Zaid Harchaoui)

Former Postdocs

Praneeth Netrapalli
Rong Ge
Daniel Hsu

Former Students and Interns (in reverse chronological order)

Rahul Kidambi
Chi Jin
Aaron Sidford
Roy Frostig
David Belanger
Chen Wang
Qingqing Huang
Jaehyun Park
Karl Stratos
Do-kyum Kim
Praneeth Netrapalli
Rashish Tandon
Rong Ge
Adel Javanmard
Matus Telgarsky
Daniel Hsu (while at TTI-C)
Sathyanarayan Anand (while at TTI-C)

Contact Info

Email: sham [at] cs [dot] washington [dot] edu

Address:
Computer Science & Engineering
Gates 303
University of Washington
Seattle, WA 98195