MusicNet is out!
We have released the MusicNet dataset to the machine learning and music communities as a resource for training models and a common benchmark for comparing results. MusicNet is a collection of 330 freely-licensed classical music recordings, together with over 1 million annotated labels indicating the precise time of each note in every recording, the instrument that plays each note, and the note's position in the metrical structure of the composition. The labels are acquired from musical scores aligned to the recordings by dynamic time warping.
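For readers unfamiliar with the alignment step, here is a minimal dynamic time warping sketch in Python. It assumes the score and the recording have each already been converted to sequences of feature vectors; the feature representation and the Euclidean local distance are illustrative assumptions, not the actual MusicNet pipeline.

```python
import numpy as np

def dtw_align(score_feats, audio_feats):
    """Align score-derived feature vectors to audio-derived feature vectors
    with dynamic time warping; returns a list of (score_idx, audio_idx) pairs."""
    score_feats = np.asarray(score_feats, dtype=float)
    audio_feats = np.asarray(audio_feats, dtype=float)
    n, m = len(score_feats), len(audio_feats)

    # Pairwise local distances (Euclidean here, purely for illustration).
    cost = np.linalg.norm(score_feats[:, None, :] - audio_feats[None, :, :], axis=-1)

    # Accumulated-cost matrix filled in with the standard DTW recurrence.
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(
                acc[i - 1, j - 1],  # match
                acc[i - 1, j],      # score advances, audio stays
                acc[i, j - 1],      # audio advances, score stays
            )

    # Backtrack from the end to recover the warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]
```

Given such a path, note onsets in the score can be mapped onto the audio timeline, which is the basic idea behind producing note-level labels for the recordings.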
There is more information on the MusicNet website, along with some press coverage.
Foundations of Machine Learning 2017
We are putting together a machine learning semester at the Simons Institute in Berkeley from January to May 2017. We are seeking to bring together a range of researchers to help develop novel and effective approaches to machine learning.
Official Bio for publicity purposes.
Sham Kakade is a Washington Research Foundation Data Science Chair, with a joint appointment in the Department of Statistics and the Department of Computer Science at the University of Washington.
I completed my PhD at the Gatsby
Computational Neuroscience Unit under the supervision of Peter
Dayan, and I was an undergraduate at Caltech where I obtained my
BS in physics. Before joining UW, I was a principal research scientist at Microsoft Research, New England. Prior to that, I was an associate professor in the Department of Statistics at the Wharton School, University of Pennsylvania (2010-2012), and an assistant professor at the Toyota Technological Institute at Chicago (2005-2009). Before that, I was a postdoc in the Department of Computer and Information Science at the University of Pennsylvania under the supervision of Michael Kearns.
My research focuses on both theoretical and applied questions in machine learning and artificial intelligence. My hope is to help advance the state of the art on core scientific, technological, and AI problems in the near future. I enjoy collaborating with researchers across a variety of areas (including statistics, computer science, signal processing, social sciences, economics, psychology, and biology/neuroscience).
Here are a few of my recent research interests. My recent theoretical work focuses on developing computationally efficient algorithms (both provably so and in practice) for large-scale learning problems, where I think about convex and non-convex optimization issues along with issues of statistical efficiency. I have also been actively working on applied problems in music, computer vision, and robotics. Some of the challenges I am thinking about in these applied areas are representational (e.g. how to develop appropriate architectures) and some are computational (e.g. how to develop faster optimization algorithms, say for deep learning methods).
If you are interested in machine learning, optimization, statistics, CS theory, or related areas, please apply either through the computer science website or the statistics website. UW is a great place for these activities.
Some Activities and Services
Co-organizer for the Simons Foundations of Machine Learning program, Winter 2017.
Co-chair for the Simons Representation Learning workshop, March 2017.
Co-chair for the IMS-MSR Workshop: Foundations of Data Science, June 11th, 2015.
Steering committee for the fourth New England Machine Learning Day, May 18th, 2015.
Program committee for the third New England Machine Learning Day, May 13th, 2014.
New York Computer Science and Economics Day V, Dec 3rd, 2012.
Program committee for the first New England Machine Learning Day, May 16th, 2012.
Program chair for the 24th Annual Conference on Learning Theory (COLT 2011), which took place in Budapest, Hungary, July 9-11, 2011.
Tensor Decompositions for Learning Latent Variable Models, AAAI 2014
Tensor Decomposition Methods for Learning Latent Variable Models, ICML 2013
CSE 546: Machine Learning, Autumn 2016
CSE 547 / STAT 548: Machine Learning for Big Data, Spring 2016
CSE 546: Machine Learning, Autumn 2015
Stat 928: Statistical Learning Theory
Stat 991: Multivariate Analysis, Dimensionality Reduction, and Spectral Methods
Large Scale Learning
Gabriel Cadamuro (co-advised with
Daniel Hsu (while at UPenn)
(in reverse chronological order)
Daniel Hsu (while at TTI-C)
Sathyanarayan Anand (while at TTI-C)
Email: sham [at] cs [dot] washington [dot] edu
Department of Statistics, Office 303
Computer Science & Engineering, Office 436
Paul Allen Center
University of Washington
Seattle, WA 98195