About me

I am a fourth-year PhD student in Computer Science and Engineering at the University of Washington. I work at the intersection of accessibility and AI, particularly on enhancing sound awareness for d/Deaf and hard of hearing people. My research is published at top HCI venues such as CHI, ASSETS, and UIST and has received three best paper awards and honorable mentions. I am advised by Profs. Jon Froehlich and Leah Findlater.

Before starting my PhD, I completed my master's at the MIT Media Lab and my bachelor's at IIT Delhi, and took a gap year to backpack through 21 countries. By publishing a critical reflection on my travel experiences as a hard of hearing individual, I introduced the method of autoethnography to the field of accessible computing. I am proudest of an indoor navigation system for visually impaired users, which was installed in the Indian National Science Museum for two years.

Outside of my professional life, I am a scuba instructor. I also conduct DIY workshops: so far, I have organized 11 one-week workshops in six low-income countries, with over 200 students in total. At least two teams from these workshops have continued their projects and turned them into multinational companies.

Connect with me via email at djain [at] uw [dot] edu or on Twitter.

Note: I am an advocate of open science and research. Write to me and I will share all my data, methods, etc.

Recent news

Nov 19: Invited talk on NavGrad at Microsoft.
Oct 30: Glacier National park road trip!
Oct 26: Two talks at ASSETS 2020.
Oct 15: Invited talk on NavGrad at UW CSE Colloquia.
Oct 15: Invited talk on NavGrad at AccessComputing.
Oct 12: NavGrad nominated for the best paper award at ASSETS 2020!
Oct 8: Invited talk on SoundWatch at UW CSE Colloquia.
Sep 28: SouthWest (Utah, Arizona) road trip!
Sep 15: Presented Vibes at ISWC 2020.
Aug 8: Vibes accepted to ISWC 2020!
Jul 22: NavGrad and SoundWatch accepted to ASSETS 2020!
Jul 15: Three of my ugrad mentees (Hung, Greg, and Robin) accepted to UW CSE MS program! Congrats!



Navigating Graduate School with a Disability

Deep Learning for Sound Awareness on SmartWatches

Field Study of a Tactile Sound Awareness Device

First slide of the talk. A scene of a kitchen in the background with the talk title: Field Deployment of a Smarthome Sound Awareness System for Deaf and Hard of Hearing Users

Field Deployment of an In-Home Sound Awareness System

First slide of the talk. Shows DJ riding on a camel in a desert. The title of the talk reads: Autoethnography of a Hard of Hearing Traveler

Autoethnography of a hard of hearing traveler

First slide of the talk. A person claps in front of a tablet interface that visualizes the clapping sound using a pulsating bubble. The title reads: Exploring Sound Awareness in the Home for People who are Deaf or Hard of Hearing

Exploring sound awareness in the home

First slide of the talk with an image of an ear wearing a hearing aid. The title reads: Deaf and Hard of Hearing Individuals' Preferences for Wearable and Mobile Sound Awareness Technologies

Online Survey of Wearable Sound Awareness

First slide of the talk showing a person walking and talking with another person. The first person is wearing a HoloLens which shows real-time captions in Augmented Reality. Title is Towards Accessible Conversations in a Mobile Context for People who are Deaf and Hard of Hearing.

Towards accessible conversations in a mobile context

First slide of the talk showing a rocky beach with waves crashing over the beach. Talk title reads: Immersive Scuba Diving Simulator Using Virtual Reality

Immersive scuba diving simulator using virtual reality

First slide of the talk showing a round table conversation with a person wearing a Google Glass. The directions of the active speakers in the conversation are visualized as arrows on the Glass. Talk title is Head-Mounted Display Visualizations to Support Sound Awareness for the Deaf and Hard of Hearing.

HMD Visualizations to Support Sound Awareness