About me

I am a PhD candidate in Computer Science and Engineering at the University of Washington, advised by Jon Froehlich and Leah Findlater. My research intersects human-computer interaction (HCI) and applied machine learning, focusing on inventing novel sound sensing and feedback techniques to support accessibility applications. This work has been published at the most selective HCI venues, including CHI, UIST, DIS, and ASSETS; has received multiple best paper awards and honorable mentions; has drawn the attention of industry research labs such as Microsoft, Google, and Apple; and has been publicly released (e.g., SoundWatch). I am supported by a Microsoft Research Dissertation Grant and a Google-CMD-IT LEAP Alliance Fellowship.

Before starting my PhD, I completed my master's at the MIT Media Lab and my bachelor's at IIT Delhi, and took a gap year to backpack through 21 countries. By publishing a critical reflection on my travel experiences as a hard of hearing individual, I pioneered the use of autoethnography in accessible computing research. My proudest achievement is Roshni, a low-cost indoor navigation system for visually impaired users, which was installed in a National Science Museum for two years and has been used by over 75,000 people.

Besides research, I teach scuba diving and conduct DIY workshops; so far, I have taught seven one-week workshops in four countries. At least two teams from these workshops have continued their projects and launched multinational companies.

Excitingly, I am on the job market for the 2021-22 academic year, looking for tenure-track faculty and industrial research positions in accessibility, human-centered computing, applied AI, and acoustics. See my 1-minute pitch.

CV | Research Statement (3 pages) | Research Statement (5 pages) | Teaching Statement | Diversity Statement

If you think I can be a fruitful addition to your team, please reach out to me at djain [at] cs [dot] uw [dot] edu.

Recent news

May 26: SoundWatch featured in CACM Research Highlights
Jan 21: Our paper on Non-Verbal Sound Detection for Disordered Speech was accepted to ICASSP 2022!
Nov 19: Our paper on a personalizable sound recognition system was accepted to CHI 2022!
Nov 8: Invited talk on Sound Accessibility at University of Michigan, School of Information.
Aug 6: My thesis work is now supported by a Google-CMD-IT LEAP Alliance Fellowship!
May 28: Our paper "A Taxonomy of Sounds in VR" won the best paper award at DIS 2021!
May 28: My thesis work is now supported by a Microsoft Research Dissertation Grant!
Apr 30: Teachable Sound Recognition paper accepted to IMWUT 2021!
Apr 19: SoundWatch invited for publication at CACM Research Highlights!
Apr 9: Sound Taxonomy paper accepted to DIS 2021!
Mar 12: Invited talk on Sound Sensing and Feedback at BITS Pilani.
Mar 4: Invited talk on Sound Sensing and Feedback at CMU HCII.


A t-SNE low-dimensional cluster visualization of many short mouth sounds

Sound Actions

Non-Verbal Sound Detection
Commercialized on iPhone and iPad (Try it out)
A close up shot of a person attending a 10-person video conference on a laptop.


Sound Sensing for Deaf and Hard of Hearing Users

Navigating Graduate School with a Disability

Deep Learning for Sound Awareness on SmartWatches

Field Study of a Tactile Sound Awareness Device

First slide of the talk. A scene of a kitchen in the background with the talk title: Field Deployment of a Smarthome Sound Awareness System for Deaf and Hard of Hearing Users

Field Deployment of an In-Home Sound Awareness System

First slide of the talk. Shows DJ riding a camel in a desert. The title of the talk reads: Autoethnography of a Hard of Hearing Traveler

Autoethnography of a hard of hearing traveler

First slide of the talk. A person claps in front of a tablet interface that visualizes the clapping sound using a pulsating bubble. The title reads: Exploring Sound Awareness in the Home for People who are Deaf or Hard of Hearing

Exploring sound awareness in the home

First slide of the talk with an image of an ear wearing a hearing aid. The title reads: Deaf and Hard of Hearing Individuals' Preferences for Wearable and Mobile Sound Awareness Technologies

Online Survey of Wearable Sound Awareness

First slide of the talk showing a person walking and talking with another person. The first person is wearing a HoloLens which shows real-time captions in Augmented Reality. Title is Towards Accessible Conversations in a Mobile Context for People who are Deaf and Hard of Hearing.

Towards accessible conversations in a mobile context

First slide of the talk showing a rocky beach with waves crashing over the beach. Talk title reads: Immersive Scuba Diving Simulator Using Virtual Reality

Immersive scuba diving simulator using virtual reality

First slide of the talk showing a round table conversation with a person wearing a Google Glass. The directions of the active speakers in the conversation are visualized as arrows on the Glass. Talk title is Head-Mounted Display Visualizations to Support Sound Awareness for the Deaf and Hard of Hearing.

HMD Visualizations to Support Sound Awareness