Nan Jiang and I are writing a monograph on Reinforcement Learning. We will be making updates periodically.
News and Events
ADSI Workshop and Summer School:
Workshop, Aug 19-21: Foundations of Learning and Control Workshop
Summer School, Aug 13-17: Foundations of Data Science School
Algorithmic Foundations of Data Science Institute:
UW received an NSF TRIPODS grant. We have an institute on the theoretical
foundations of data science.
I study the mathematical
foundations of machine learning and artificial intelligence, with a focus on
designing provably efficient and practical
algorithms that are relevant to a broad range of paradigms.
I seek to use these advances to make progress on core AI problems.
My interests include: (i) reinforcement learning and control; (ii)
representation learning; and (iii) natural
language processing and memory. I enjoy collaborating with a diverse
set of researchers to tackle these problems!
Sham Kakade is a Washington Research Foundation Data Science Chair,
with a joint appointment in the Department of Computer Science
and the Department of
Statistics at the University
of Washington, and is a co-director for the Algorithmic Foundations
of Data Science Institute. He works on the mathematical
foundations of machine learning and AI. Sham's thesis helped lay the
foundations of the PAC-MDP framework for reinforcement learning. With his
collaborators, his additional contributions include: one of the first
provably efficient policy search methods, Conservative Policy
Iteration, for reinforcement learning; developing the mathematical
foundations for the widely used linear bandit models and the Gaussian
process bandit models; the tensor and spectral methodologies for
provable estimation of latent variable models (applicable to mixture
of Gaussians, HMMs, and LDA); the first sharp analysis of the
perturbed gradient descent algorithm, along with the design and
analysis of numerous other convex and non-convex algorithms. He is the recipient
of the IBM Goldberg Best Paper Award (2007) for contributions to
fast nearest neighbor search, and of the INFORMS Revenue
Management and Pricing Section Best Paper Prize (2014). He has been program chair
for COLT 2011.
Sham was an undergraduate at Caltech,
where he studied physics and worked under the guidance of
John Preskill in quantum computing.
He then completed his Ph.D. in computational neuroscience at the
Gatsby Computational Neuroscience Unit, University College London, under the supervision of Peter
Dayan.
He was a postdoc at the
Dept. of Computer Science, University of Pennsylvania,
where he broadened his studies to include computational game theory
and economics under the guidance of Michael Kearns.
Sham has been a Principal Research Scientist at
Microsoft Research, New England;
an associate professor in the
Department of Statistics, Wharton School,
University of Pennsylvania; and an assistant professor at the
Toyota Technological Institute at Chicago.
Activities and Service
Committee for the
Sloan Research Fellowships in Computer Science (active).
Co-organizer for the Simons Symposium on
New Directions in Theoretical Machine Learning, May 2019.
Co-organizer for the
Simons Foundations of Machine Learning program, Winter 2017.
Co-chair for the
Simons Representation Learning Workshop, March 2017.
Co-chair for the
IMS-MSR Workshop: Foundations of Data Science, June 11th, 2015.
Steering committee for the fourth
New England Machine Learning Day, May 18th, 2015.
Program committee for the third
New England Machine Learning Day, May 13th, 2014.
New York Computer Science and Economics Day V, Dec 3rd, 2012.
Program committee for the first
New England Machine Learning Day, May 16th, 2012.
Program chair for the
24th Annual Conference on Learning Theory (COLT 2011) which took place in Budapest, Hungary, on July 9-11, 2011.
Tensor Decompositions for Learning Latent Variable Models, AAAI 2014
Tensor Decomposition Methods for Learning Latent Variable Models, ICML 2013
CSE 599m: Reinforcement Learning and Bandits, Spring 2019
CSE 446: Machine Learning, Winter 2019
CSE 547 / STAT 548: Machine Learning for Big Data, Spring 2018
CSE 446: Machine Learning, Winter 2018
CSE 547 / STAT 548: Machine Learning for Big Data, Spring 2017
CSE 546: Machine Learning, Autumn 2016
CSE 547 / STAT 548: Machine Learning for Big Data, Spring 2016
CSE 546: Machine Learning, Autumn 2015
Stat 928: Statistical Learning Theory
Stat 991: Multivariate Analysis, Dimensionality Reduction, and Spectral Methods
Large Scale Learning
Ramya Korlakai Vinayak
Former Students and Interns
(in reverse chronological order)
Daniel Hsu (while at TTI-C)
Sathyanarayan Anand (while at TTI-C)
Email: sham [at] cs [dot] washington [dot] edu
Computer Science & Engineering
University of Washington
Seattle, WA 98195