Seminars, reading groups, activities

There are always several seminars and reading groups running at the same time. The list below is sure to be outdated by the time you read it; Slack me with questions.


Machine learning and optimization seminar (ML-OPT)

The Machine Learning and Optimization seminar is a venue for internal and external speakers to present their work on machine learning and data science. It meets every Friday at 1:30 pm and is primarily intended to give graduate students and post-docs an opportunity to publicize their work. Join the mailing list here. The seminar is supported by the Institute for the Foundations of Data Science (IFDS), an NSF-funded institute of which I am a co-PI. Please drop me an email if you would like to speak.


Distinguished Seminar in Optimization & Data (DSOD)

I am a co-organizer of the Distinguished Seminar in Optimization & Data (DSOD), an interdepartmental seminar series at the University of Washington focused on all aspects of optimization and data science. See our YouTube playlist. Speakers from 2023 and 2024 (see the website for details):
Date     | Speaker                                               | Title
1/9/23   | Éva Tardos, Cornell University                        | Stability and Learning in Strategic Queueing Systems
4/3/23   | Damek Davis, Cornell University                       | Leveraging "partial" smoothness for faster convergence in nonsmooth optimization
5/1/23   | Misha Belkin, University of California, San Diego     | The Challenges of Training Infinitely Large Neural Networks
5/22/23  | Philippe Rigollet, MIT                                | Statistical applications of Wasserstein gradient flows
6/5/23   | Ting-Kei Pong, Hong Kong Polytechnic University       | The Challenges of Training Infinitely Large Neural Networks
10/2/23  | Ryan O'Donnell, Carnegie Mellon University            | New directions in (quantum) distribution learning and testing
11/6/23  | Lin Xiao, Fundamental AI Research - Meta              | Non-negative Gauss-Newton Methods for Empirical Risk Minimization
4/22/24  | Jelena Diakonikolas, University of Wisconsin-Madison  | Nonsmooth Optimization on a Finer Scale
5/6/24   | Amirali Ahmadi, Princeton University                  | Complexity of Finding Local Minima in Continuous Optimization
6/3/24   | Amitabh Basu, Johns Hopkins University                | Information complexity of mixed-integer convex optimization


Summer 2024 Reading Group on Overparameterized Linear Regression

Date    | Presenter       | Paper                                                                                                                               | Slides
7/18/24 | Gavin Brown     | An elementary analysis of ridge regression with random design (Mourtada and Rosasco, 2022)                                          | Notes
7/25/24 | Divyansh Pareek | The optimal ridge penalty for real-world high-dimensional data can be zero or negative due to the implicit ridge regularization (Kobak, Lomond, and Sanchez, 2020) | Slides
8/1/24  | Eric Frankel    | Memorize to generalize: on the necessity of interpolation in high dimensional linear regression (Cheng, Duchi, and Kuditipudi, 2022) | Slides
8/8/24  | Adhyyan Narang  | Classification vs regression in overparameterized regimes: Does the loss function matter? (Muthukumar, Narang, Subramanian, Belkin, Hsu, and Sahai, 2021) | Slides
8/15/24 | Artin Tajdini   | Optimal regularization can mitigate double descent (Nakkiran, Venkat, Kakade, and Ma, 2020)                                         | Slides
8/22/24 | Noah Feinberg   | High-dimensional asymptotics of prediction: Ridge regression and classification (Dobriban and Wager, 2018)                          | Slides
8/29/24 | Gavin Brown     | More Data Can Hurt for Linear Regression: Sample-wise Double Descent (Nakkiran, 2019)                                               | Slides
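
A recurring theme across these papers is "double descent" in overparameterized linear regression: test risk can spike near the interpolation threshold and then improve again as the problem becomes more overparameterized. Below is a minimal, self-contained sketch of the sample-wise version of the phenomenon, in the spirit of Nakkiran (2019). It is not taken from the reading group materials, and all constants (dimension, noise level, trial counts) are illustrative assumptions.

```python
# Minimal sketch of sample-wise double descent for minimum-norm least squares.
# Illustrative only; parameter choices are assumptions, not from the papers above.
import numpy as np

rng = np.random.default_rng(0)
d = 50                                        # fixed feature dimension
w_star = rng.standard_normal(d) / np.sqrt(d)  # ground-truth weights, unit-scale signal
noise = 0.5                                   # label noise standard deviation

def test_risk(n, n_test=2000, trials=50):
    """Average test risk of the minimum-norm least-squares fit on n samples."""
    risks = []
    for _ in range(trials):
        X = rng.standard_normal((n, d))
        y = X @ w_star + noise * rng.standard_normal(n)
        # np.linalg.lstsq returns the minimum-norm solution when n < d,
        # i.e., the interpolating estimator studied in this literature.
        w_hat = np.linalg.lstsq(X, y, rcond=None)[0]
        X_te = rng.standard_normal((n_test, d))
        y_te = X_te @ w_star + noise * rng.standard_normal(n_test)
        risks.append(np.mean((X_te @ w_hat - y_te) ** 2))
    return np.mean(risks)

for n in [10, 25, 40, 50, 60, 100, 200]:
    print(f"n = {n:4d}   test risk = {test_risk(n):.3f}")
```

Sweeping the sample size this way, the test risk typically blows up near n = d (the interpolation threshold) and decreases on either side of it, so more data can indeed hurt in a range of sample sizes.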