Aravind Rajeswaran

Research Scientist at Facebook AI Research (FAIR)

PhD Candidate in Computer Science,
University of Washington

Contact:   aravraj@cs.uw.edu (or) aravraj@fb.com
Google Scholar | Bio | CV | GitHub | Calendar | Twitter


I am a research scientist at Facebook AI Research (FAIR). I obtained my PhD in computer science from the University of Washington under the guidance of Profs. Sham Kakade and Emo Todorov. During my PhD, I also collaborated closely with Sergey Levine and Chelsea Finn, and spent time as a student researcher at Google Brain and OpenAI. Before that, I received my Bachelor's degree, along with the best undergraduate thesis award, from IIT Madras.

I work on the algorithmic foundations and applications of deep learning and reinforcement learning. My recent focus is policy and representation learning from scalable, task-agnostic, and off-domain datasets. Towards this end, I work on topics like offline RL, self-supervised learning, and transfer learning. I also collaborate with Abhinav Gupta (FAIR/CMU) and Pieter Abbeel (UC Berkeley) on these topics.


Representative Papers

A Game Theoretic Framework for Model Based Reinforcement Learning
Aravind Rajeswaran, Igor Mordatch, Vikash Kumar
International Conference on Machine Learning (ICML) 2020; Project Webpage

MOReL : Model-Based Offline Reinforcement Learning
Rahul Kidambi, Aravind Rajeswaran, Praneeth Netrapalli, Thorsten Joachims
Neural Information Processing Systems (NeurIPS) 2020; Project Webpage

Meta Learning with Implicit Gradients
Aravind Rajeswaran, Chelsea Finn, Sham Kakade, Sergey Levine
Neural Information Processing Systems (NeurIPS) 2019; Project Webpage

Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations
Aravind Rajeswaran, Vikash Kumar, Abhishek Gupta, John Schulman, Emanuel Todorov, Sergey Levine
Robotics: Science and Systems (RSS) 2018; Project Webpage

Plan Online, Learn Offline: Efficient Learning and Exploration via Model-Based Control
Kendall Lowrey, Aravind Rajeswaran, Sham Kakade, Emanuel Todorov, Igor Mordatch
International Conference on Learning Representations (ICLR) 2019; Project Webpage


Mentoring

I enjoy collaborating with a diverse set of students and researchers. I have had the pleasure of mentoring some highly motivated students at both the undergraduate and PhD levels.


All Publications and Preprints

Behavioral Priors and Dynamics Models: Improving Performance and Domain Transfer in Offline RL
Catherine Cang, Aravind Rajeswaran, Pieter Abbeel, Michael Laskin
ArXiv Preprint 2021; arXiv: 2106.09119 | Project Website

Decision Transformer: Reinforcement Learning via Sequence Modeling
Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin,
Pieter Abbeel, Aravind Srinivas, Igor Mordatch
ArXiv Preprint 2021; arXiv: 2106.01345 | Project Website

COMBO: Conservative Offline Model-Based Policy Optimization
Tianhe Yu, Aviral Kumar, Rafael Rafailov, Aravind Rajeswaran, Sergey Levine, Chelsea Finn
ArXiv Preprint 2021; arXiv: 2102.08363

Reinforcement Learning with Latent Flow
Wenling Shang, Xiaofei Wang, Aravind Srinivas, Aravind Rajeswaran, Yang Gao,
Pieter Abbeel, Michael Laskin
ArXiv Preprint 2021; arXiv: 2101.01857

Offline Reinforcement Learning from Images with Latent Space Models
Rafael Rafailov, Tianhe Yu, Aravind Rajeswaran, Chelsea Finn
Learning for Dynamics and Control (L4DC) 2021; (Oral Presentation!) Project Webpage

MOReL : Model-Based Offline Reinforcement Learning
Rahul Kidambi, Aravind Rajeswaran, Praneeth Netrapalli, Thorsten Joachims
Neural Information Processing Systems (NeurIPS) 2020; Project Webpage

A Game Theoretic Framework for Model Based Reinforcement Learning
Aravind Rajeswaran, Igor Mordatch, Vikash Kumar
International Conference on Machine Learning (ICML) 2020; Project Webpage

Lyceum: An efficient and scalable ecosystem for robot learning
Colin Summers, Kendall Lowrey, Aravind Rajeswaran, Siddhartha Srinivasa, Emanuel Todorov
Learning for Dynamics and Control (L4DC) 2020; Project Webpage

Meta Learning with Implicit Gradients
Aravind Rajeswaran, Chelsea Finn, Sham Kakade, Sergey Levine
Neural Information Processing Systems (NeurIPS) 2019; arXiv:1909.04630

Online Meta-Learning
Chelsea Finn, Aravind Rajeswaran, Sham Kakade, Sergey Levine
International Conference on Machine Learning (ICML) 2019; arXiv:1902.08438

Plan Online, Learn Offline: Efficient Learning and Exploration via Model-Based Control
Kendall Lowrey, Aravind Rajeswaran, Sham Kakade, Emanuel Todorov, Igor Mordatch
International Conference on Learning Representations (ICLR) 2019; arXiv:1811.01848

Dexterous Manipulation with Deep Reinforcement Learning: Efficient, General, and Low Cost
Henry Zhu, Abhishek Gupta, Aravind Rajeswaran, Sergey Levine, Vikash Kumar
International Conference on Robotics and Automation (ICRA) 2019; arXiv:1810.06045

Learning Deep Visuomotor Policies for Dexterous Hand Manipulation
Divye Jain, Andrew Li, Shivam Singhal, Aravind Rajeswaran, Vikash Kumar, Emanuel Todorov
International Conference on Robotics and Automation (ICRA) 2019; download PDF

Reinforcement learning for non-prehensile manipulation: Transfer from simulation to physical system
Kendall Lowrey, Svetoslav Kolev, Jeremy Dao, Aravind Rajeswaran, Emanuel Todorov
IEEE SIMPAR 2018; arXiv:1803.10371 (Best paper award!)

Variance Reduction for Policy Gradient Using Action-Dependent Factorized Baselines
Cathy Wu, Aravind Rajeswaran, Yan Duan, Vikash Kumar, Alexandre M Bayen,
Sham Kakade, Igor Mordatch, Pieter Abbeel
International Conference on Learning Representations (ICLR) 2018; arXiv:1803.07246
(Oral Presentation; top 2%)

Divide-and-Conquer Reinforcement Learning
Dibya Ghosh, Avi Singh, Aravind Rajeswaran, Vikash Kumar, Sergey Levine
International Conference on Learning Representations (ICLR) 2018; arXiv:1711.09874

Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations
Aravind Rajeswaran, Vikash Kumar, Abhishek Gupta, John Schulman, Emanuel Todorov, Sergey Levine
Robotics: Science and Systems (RSS) 2018; arXiv:1709.10087

Towards Generalization and Simplicity in Continuous Control
Aravind Rajeswaran, Kendall Lowrey, Emanuel Todorov, Sham Kakade
Neural Information Processing Systems (NIPS) 2017; arXiv:1703.02660

EPOpt: Learning Robust Neural Network Policies Using Model Ensembles
Aravind Rajeswaran, Sarvjeet Ghotra, Balaraman Ravindran, Sergey Levine
International Conference on Learning Representations (ICLR) 2017; arXiv:1610.01283

A Novel Approach for Phase Identification in Smart Grids Using Graph Theory and Principal Component Analysis
Satya Jayadev P, Aravind Rajeswaran, Nirav P Bhatt, Ramkrishna Pasumarthy
American Control Conference (ACC) 2016; arXiv:1511.06063

Identifying Topology of Power Distribution Networks Based on Smart Meter Data
Jayadev P Satya, Nirav Bhatt, Ramkrishna Pasumarthy, Aravind Rajeswaran
IEEE Transactions on Smart Grid 2017; arXiv:1609.02678

A Graph Partitioning Approach for Leak Detection in Water Distribution Networks
Aravind Rajeswaran, Sridharakumar Narasimhan, Shankar Narasimhan
Computers & Chemical Engineering (C&ChE) 2017; arXiv:1606.01754


Teaching

CSE599G: Deep Reinforcement Learning (Instructor)
I designed and co-taught a course on deep reinforcement learning at UW in Spring 2018. The course presents a rigorous mathematical treatment of various RL algorithms, along with illustrative applications in robotics. Deep RL courses at UW, MIT, and CMU have since borrowed and built upon the material I developed for this course.

CSE547: Machine Learning for Big Data (Teaching Assistant)
This is an advanced graduate-level course on machine learning, with an emphasis on learning at scale and distributed algorithms. Topics covered include hashing, sketching, streaming, large-scale distributed optimization, federated learning, and contextual bandits. I was the lead TA for this class.

CSE546: Machine Learning (Teaching Assistant)
This is the introductory graduate-level machine learning class at UW.