Sham M. Kakade

Washington Research Foundation Data Science Chair

Associate Professor in both:
Department of Computer Science
Department of Statistics

Senior Data Science Fellow,
eScience Institute

Adjunct Professor in:
Department of Electrical Engineering

University of Washington


Publications  


News and Events:

ADSI Workshop and Summer School
Workshop, Aug 19-21: Foundations of Learning and Control
Summer School, Aug 13-17: Foundations of Data Science
Algorithmic Foundations of Data Science Institute
UW received an NSF TRIPODS grant, establishing an institute on the theoretical foundations of data science, along with additional support for our data science efforts.

Research:

I study the theoretical foundations of machine learning and AI across a broad range of paradigms, and I enjoy collaborating with a diverse set of researchers to tackle these questions. Understanding these foundations is central to designing algorithms that are efficient from both computational and statistical perspectives, and I seek to use these advances to further the state of the art in various AI domains. My current interests include: i) reinforcement learning and control, and ii) natural language processing and memory.

Bio:

Sham Kakade holds the Washington Research Foundation Data Science Chair, with a joint appointment in the Department of Computer Science and the Department of Statistics at the University of Washington. He works on the theoretical foundations of machine learning, focusing on designing algorithms that are both provable and practically efficient. His contributions, with a diverse set of collaborators, include: establishing principled approaches in reinforcement learning (including the natural policy gradient, conservative policy iteration, and the PAC-MDP framework); provably efficient tensor decomposition methods for estimating latent variable models (including mixtures of Gaussians, latent Dirichlet allocation, hidden Markov models, and overlapping communities in social networks); optimal algorithms for stochastic and non-stochastic multi-armed bandit problems (including the linear bandit and Gaussian process bandit models); and faster algorithms for large-scale convex and nonconvex optimization (including how to escape from saddle points efficiently). He is the recipient of the IBM Goldberg Best Paper Award (2007) for contributions to fast nearest neighbor search and of the INFORMS Revenue Management and Pricing Section Prize for best paper (2014). He served as program chair for COLT 2011.
Sham completed his Ph.D. at the Gatsby Computational Neuroscience Unit at University College London under the supervision of Peter Dayan, and he was a postdoc in the Department of Computer Science at the University of Pennsylvania under the supervision of Michael Kearns. As an undergraduate at Caltech, he studied physics under the supervision of John Preskill. Sham has been a Principal Research Scientist at Microsoft Research New England, an associate professor in the Department of Statistics at the Wharton School, University of Pennsylvania, and an assistant professor at the Toyota Technological Institute at Chicago.

Projects:

MusicNet  

Discovering Latent Structure in Societal-Scale Data  


Activities and Services

Co-organizer for the Simons Symposium on New Directions in Theoretical Machine Learning, May 2019.
Committee for the Sloan Research Fellowships in Computer Science (active).
Co-organizer for the Simons Foundations of Machine Learning program, Winter 2017
Co-chair for the Simons Representation Learning workshop, March 2017
Co-chair for the IMS-MSR Workshop: Foundations of Data Science, June 11th, 2015.
Steering committee for the fourth New England Machine Learning Day, May 18th, 2015.
Program committee for the third New England Machine Learning Day, May 13th, 2014.
Co-chair for New York Computer Science and Economics Day V, Dec 3rd, 2012.
Program committee for the first New England Machine Learning Day, May 16th, 2012.
Program chair for the 24th Annual Conference on Learning Theory (COLT 2011) which took place in Budapest, Hungary, on July 9-11, 2011.

Tutorials

Tensor Decompositions for Learning Latent Variable Models, AAAI 2014
Tensor Decomposition Methods for Learning Latent Variable Models, ICML 2013

Course Links

CSE 599m: Reinforcement Learning and Bandits, Spring 2019
CSE 446: Machine Learning, Winter 2019
CSE 547 / STAT 548: Machine Learning for Big Data, Spring 2018
CSE 446: Machine Learning, Winter 2018
CSE 547 / STAT 548: Machine Learning for Big Data, Spring 2017
CSE 546: Machine Learning, Autumn 2016
CSE 547 / STAT 548: Machine Learning for Big Data, Spring 2016
CSE 546: Machine Learning, Autumn 2015
Stat 928: Statistical Learning Theory
Stat 991: Multivariate Analysis, Dimensionality Reduction, and Spectral Methods
Large Scale Learning
Learning Theory

Current Postdocs

Weihao Kong
Ramya Korlakai Vinayak

Current Students

Gabriel Cadamuro (co-advised with Josh Blumenstock)
Rahul Kidambi
Krishna Pillutla (co-advised with Zaid Harchaoui)
Aravind Rajeswaran (co-advised with Emo Todorov)
John Thickstun (co-advised with Zaid Harchaoui)

Former Postdocs

Praneeth Netrapalli
Rong Ge
Daniel Hsu

Former Interns (in reverse chronological order)

Chi Jin
Aaron Sidford
Roy Frostig
David Belanger
Chen Wang
Qingqing Huang
Jaehyun Park
Karl Stratos
Do-kyum Kim
Praneeth Netrapalli
Rashish Tandon
Rong Ge
Adel Javanmard
Matus Telgarsky
Daniel Hsu (while at TTI-C)
Sathyanarayan Anand (while at TTI-C)

Contact Info

Email: sham [at] cs [dot] washington [dot] edu

Address:
Computer Science & Engineering
Gates 303

University of Washington
Seattle, WA 98195