I'm a final-year PhD student in the University of Washington's natural language processing (NLP) group, advised by Noah Smith and Yejin Choi. My research focuses on endowing NLP systems with social intelligence and social commonsense, and on understanding social inequality and bias in language.
In the past, I've interned at AI2 on Project Mosaic, working on social commonsense reasoning, and at Microsoft Research, working on long-term memory and storytelling with Eric Horvitz.
June 2020 update: I will be looking for academic positions this coming year!
We create an unsupervised model for controllable debiasing that rewrites sentences to debias how they portray characters. Details · Watch EMNLP talk · Read paper

We introduce a new formalism to capture the implications of social bias in language, along with 150k structured annotations of social media posts. Details · Watch ACL talk · Read paper

We create a large-scale benchmark for social commonsense reasoning that remains challenging for modern NLP models. Details · Read paper

We quantify racial bias in hate speech datasets and algorithms, and study how this bias arises during annotation. Watch ACL talk · Read paper

We introduce a knowledge graph for machine commonsense covering if-then inferential knowledge around everyday situations. Browse · Demo · Read paper

We create a model and knowledge graph that enable commonsense inference about people's intents and reactions in relation to events. Browse · Demo · Read paper

We create a new annotation framework of simple commonsense stories that enables reasoning about the mental states of their characters. Browse stories · Read paper

Our user-centric, content-driven approach to conversational AI won the 2017 Alexa Prize. Learn more · Read paper

We create connotation frames of power and agency and use them to analyze gender bias in movies. Explore movies · Read paper