I am a PhD candidate in Computer Science and Engineering at the University of Washington, advised by Jon Froehlich and Leah Findlater. My research sits at the intersection of human-computer interaction (HCI) and applied machine learning, focusing on inventing novel sound sensing and feedback techniques to support accessibility applications. This work has been published at the most selective HCI venues, including CHI, UIST, DIS, and ASSETS; has received multiple best paper awards and honorable mentions; has drawn attention from industry research labs at Microsoft, Google, and Apple; and has been publicly released (e.g., SoundWatch). I am supported by a Microsoft Dissertation Grant and a Google-CMD-IT LEAP Alliance Fellowship.
Before starting my PhD, I completed my master's at the MIT Media Lab and my bachelor's at IIT Delhi, and took a gap year to backpack through 21 countries. By publishing a critical reflection on my travel experiences as a hard of hearing individual, I pioneered the use of auto-ethnography in accessible computing research. My proudest achievement is Roshni, a low-cost indoor navigation system for visually impaired users, which was installed in a National Science Museum for two years and has been used by over 75,000 people.
Besides research, I teach scuba diving and conduct DIY workshops; so far, I have taught seven one-week workshops in four countries. At least two teams from these workshops have continued their projects and launched multinational companies.
Excitingly, I am on the job market for the 2021-22 year, looking for tenure-track faculty as well as industrial research positions in accessibility, human-centered computing, applied AI, and acoustics. See a 1-minute pitch.
CV | Research Statement (3 pages) | Research Statement (5 pages) | Teaching Statement | Diversity Statement
If you think I can be a fruitful addition to your team, please reach out to me at djain [at] cs [dot] uw [dot] edu.
May 26: SoundWatch featured in CACM Research Highlights
Jan 21: Our paper on Non Verbal Sound Detection for Disordered Speech was accepted to ICASSP 2022!
Nov 19: Our paper on a personalizable sound recognition system was accepted to CHI 2022!
Nov 8: Invited talk on Sound Accessibility at University of Michigan, School of Information.
Aug 6: My thesis work is now supported by a Google-CMD-IT LEAP Alliance Fellowship!
May 28: Our paper "A Taxonomy of Sounds in VR" won the best paper award at DIS 2021!
May 28: My thesis work is now supported by a Microsoft Research Dissertation Grant!
Apr 30: Teachable Sound Recognition paper accepted to IMWUT 2021!
Apr 19: SoundWatch invited for publication at CACM Research Highlights!
Apr 9: Sound Taxonomy paper accepted to DIS 2021!
Mar 12: Invited talk on Sound Sensing and Feedback at BITS Pilani.
Mar 4: Invited talk on Sound Sensing and Feedback at CMU HCII.
Video of our in-home sound awareness system for DHH users, shown at the UW CSE visit day showcase.
Project Amphibian: a virtual reality scuba diving simulator, my MS thesis at MIT and a UIST 2016 paper.
Video of HoloSound, an AR sound sensing and feedback system, accompanying the ASSETS poster.
A one-week intensive design and innovation workshop that I led in the remote rural area of Kutch, India.
Our future vision for interactive, digitally encoded construction materials, presented at CHI 2016.
A series of experiments leveraging the sensorial nature of food for making next-generation digital prototypes.
Project GlassEar: supplementary video to our CHI 2015 paper, presented in Seoul, Korea, April 18-23.