I am a Ph.D. candidate in Computer Science and Engineering at the University of Washington. I am part of GRAIL, the UW Reality Lab, and the Animation Research Lab. I am advised by Linda Shapiro, and I collaborate with Wilmot Li, Alex Colburn, Gary Faigin, and Barbara Mones.


I am mainly interested in Computer Vision, Computer Graphics and Animation, and the intersection of Vision and Machine Learning. My research focuses on using machine learning to create more accurate, plausible, and perceptually valid character animation. Applications include enhancing performance-based animation, improving storytelling through better expression clarity and intensity, and teaching emotion recognition and conveyance to children with autism spectrum disorder (ASD) in an interactive environment.

Prior to joining the University of Washington, I received my Master of Technology from Netaji Subhas Institute of Technology, University of Delhi. During my time there, I worked with Tarun Kumar Rawat on fuzzy clustering techniques for effective medical image segmentation.


~ July 2018 ~ Research intern at Microsoft Research, Redmond. Excited to be part of the VIBE group!

~ April 2018 ~ Research proposal selected in the 2018 Research Proposal Competition held by the UW Reality Lab. Really excited to be part of the UW Reality Lab! Let's create magic!

~ March 2018 ~ Finished teaching my first class as an instructor. A very enjoyable and satisfying experience.

~ March 2018 ~ Gave a talk on 3D stylized character expression generation at WACV 2018.

~ Jan 2018 ~ Won the Adobe Research Fellowship 2018!

~ Dec 2017 ~ Teaching Research Topics in Computer Animation in Winter 2018.

~ Nov 2017 ~ Expressive character animation work covered by Geekwire.

~ Nov 2017 ~ Our character expression work won the People's Choice Award at the UW CSE Affiliates event!

~ Oct 2017 ~ Enhanced lip sync accuracy in Adobe Character Animator CC 2018 (shipped in v1.0).

~ July 2017 ~ Gave a talk with Mira Dontcheva on Adobe and Creativity at Girls Who Code.

~ July 2017 ~ Presenting our Stylized Character Expression Transfer work at WiCV, CVPR 2017.

~ June 2017 ~ Creative Intelligence Lab intern at Adobe Seattle.

~ July 2016 ~ Our team (D. Aneja and Y. Lee) ranked 5th among 60 teams in the CVPR 2016 ChaLearn Challenge, Track 3: Smile and Gender Classification.

~ June 2016 ~ Spending my summer as a Creative Technologies intern at Adobe Seattle!

~ June 2015 ~ Computer Vision research intern at Lawrence Livermore National Lab as part of the Data Heroes intern program.


Style matching in Embodied Conversational Agents active
Developing an expressive, embodied conversational agent that can recognize human behavior during open-ended conversations and automatically align its responses to the visual and conversational style of the other party.
Expressive Character Animation active
Developing a framework to generate more accurate, plausible, and perceptually valid animation, using deep learning to discover discriminative human facial features and learn feature mappings between humans and animated characters.
[People's Choice Award 2017]
[Geekwire article]
Lip Sync for cartoons
Developing a framework to generate high-quality lip sync for character animation.
[Adobe Character Animator used in the TV series Our Cartoon President]
3D Optical Metrology of Machined Internal Parts
Developed a proof of concept for 3D surface profiling of an internal tapped hole, for taking measurements and detecting defects.
Geometric Morphometric Analyses
Developed an algorithm to automatically locate landmarks on CT mouse hemi-mandible data for morphometric analyses.
Fine-grained classification of cars
Learning discriminative car parts for fine-grained classification on the Stanford Cars dataset using occlusion window analysis.
Tele-operation of the PR2 robot
We used an Oculus Rift to teleoperate a PR2 robot. The PR2's Kinect camera provides stereoscopic vision and the PR2 follows the head tracking on the Oculus Rift.
Medical Image Segmentation
In this work, we compared three fuzzy clustering techniques for segmenting noisy CT brain images.


An End-to-End Conversational Style Matching Agent
Rens Hoegen, Deepali Aneja, Daniel McDuff, Mary Czerwinski, IVA 2019

Learning to Generate 3D Stylized Character Expressions from Humans
Deepali Aneja, Bindita Chaudhuri, Alex Colburn, Gary Faigin, Linda Shapiro, Barbara Mones, WACV 2018
[Paper] [Poster]

Modeling Stylized Character Expressions via Deep Learning
Deepali Aneja, Alex Colburn, Gary Faigin, Linda G. Shapiro, Barbara Mones, ACCV 2016
[Paper] [Supplementary PDF] [Database]

Automated Detection of 3D Landmarks for the Elimination of Non-Biological Variation in Geometric Morphometric Analyses
Deepali Aneja, Siddharth Vora, Esra D. Camci, Linda G. Shapiro, Tim C. Cox, IEEE CBMS 2015

Fuzzy clustering algorithms for effective medical image segmentation
Deepali Aneja, Tarun Kumar Rawat, IJISA 2013

Posters, Workshop Papers

Designing Style Matching Conversational Agents
Deepali Aneja, Rens Hoegen, Daniel McDuff, Mary Czerwinski, Conversational Agents, CHI 2019
[Extended Abstract - Coming soon!]

Learning Stylized Character Expressions from Humans
Deepali Aneja, Alex Colburn, Gary Faigin, Linda G. Shapiro, Barbara Mones, WiCV, CVPR 2017
[Extended Abstract] [Poster] [Extended Database - Coming soon!]

3D Optical Metrology of Machined Internal Parts
Deepali Aneja, Yuanzheng Gong, Eric Seibel, Linda G. Shapiro, UW CV Symposium, Amazon, 2015

Learning discriminative car parts for Fine-Grained classification
Deepali Aneja, Kofi Boakye, CRA-W 2016


CSE 464B (Winter 2018) - Research Topics in Computer Animation

CSE 455 (Winter 2016) - Computer Vision

CSE 464B (Spring 2016) - Advanced topics in Computer Animation

CSE 421 (Fall 2016) - Introduction to Algorithms