Aravind Rajeswaran

PhD student in machine learning,
Computer Science & Engineering,
University of Washington

Contact: aravraj at
google scholar | github | twitter

I am a third year PhD student at UW with Sham Kakade and Emo Todorov. I also collaborate with Sergey Levine and his students. My current research focus is at the intersection of machine learning and robotics. In particular, I am investigating imitation learning and model-based reinforcement learning. I am also interested in specific robotics domains like dexterous hand manipulation, and learning in large scale and multi-agent settings.

I interned at NVIDIA research in Summer 2018 with Dieter Fox and Byron Boots. In Summer 2017, I interned at OpenAI with John Schulman and Igor Mordatch. Previously, I was an undergraduate student at IIT Madras with Balaraman Ravindran, and received the best undergraduate thesis award. Before switching focus to AI, I worked on the statistical physics of complex networks.

Publications and Preprints

Plan Online, Learn Offline: Efficient Learning and Exploration via Model-Based Control
Kendall Lowrey, Aravind Rajeswaran, Sham Kakade, Emanuel Todorov, Igor Mordatch
arXiv preprint 2018 (under review); arXiv:1811.01848

Dexterous Manipulation with Deep Reinforcement Learning: Efficient, General, and Low Cost
Henry Zhu, Abhishek Gupta, Aravind Rajeswaran, Sergey Levine, Vikash Kumar
arXiv preprint 2018 (under review); arXiv:1810.06045

Reinforcement Learning for Non-Prehensile Manipulation: Transfer from Simulation to Physical System
Kendall Lowrey, Svetoslav Kolev, Jeremy Dao, Aravind Rajeswaran, Emanuel Todorov
IEEE SIMPAR 2018; arXiv:1803.10371 (Best paper award!)

Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations
Aravind Rajeswaran, Vikash Kumar, Abhishek Gupta, John Schulman, Emanuel Todorov, Sergey Levine
Robotics: Science and Systems (RSS) 2018; arXiv:1709.10087

Variance Reduction for Policy Gradient Using Action-Dependent Factorized Baselines
Cathy Wu, Aravind Rajeswaran, Yan Duan, Vikash Kumar, Alexandre M Bayen,
Sham Kakade, Igor Mordatch, Pieter Abbeel
International Conference on Learning Representations (ICLR) 2018; arXiv:1803.07246

Divide-and-Conquer Reinforcement Learning
Dibya Ghosh, Avi Singh, Aravind Rajeswaran, Vikash Kumar, Sergey Levine
International Conference on Learning Representations (ICLR) 2018; arXiv:1711.09874

Towards Generalization and Simplicity in Continuous Control
Aravind Rajeswaran, Kendall Lowrey, Emanuel Todorov, Sham Kakade
Neural Information Processing Systems (NIPS) 2017; arXiv:1703.02660

EPOpt: Learning Robust Neural Network Policies Using Model Ensembles
Aravind Rajeswaran, Sarvjeet Ghotra, Balaraman Ravindran, Sergey Levine
International Conference on Learning Representations (ICLR) 2017; arXiv:1610.01283

A Novel Approach for Phase Identification in Smart Grids Using Graph Theory and Principal Component Analysis
Satya Jayadev P, Aravind Rajeswaran, Nirav P Bhatt, Ramkrishna Pasumarthy
American Control Conference (ACC) 2016; arXiv:1511.06063

Identifying Topology of Power Distribution Networks Based on Smart Meter Data
Jayadev P Satya, Nirav Bhatt, Ramkrishna Pasumarthy, Aravind Rajeswaran
IEEE Transactions on Smart Grid 2017; arXiv:1609.02678

A Graph Partitioning Approach for Leak Detection in Water Distribution Networks
Aravind Rajeswaran, Sridharakumar Narasimhan, Shankar Narasimhan
Computers & Chemical Engineering (C&ChE) 2017; arXiv:1606.01754


CSE599G: Deep Reinforcement Learning (Instructor)
I co-taught a course on deep reinforcement learning at UW in Spring 2018. The course takes a broad perspective on the reinforcement learning problem statement and covers techniques ranging from tabular dynamic programming methods to policy gradient methods and trajectory optimization. I co-designed the course structure and all of the teaching material. The course is inspired by the course offerings of Emo Todorov, Balaraman Ravindran, and Sergey Levine.

CSE547: Machine Learning for Big Data (Teaching Assistant)