Torode Family Associate Professor
Paul G. Allen School of Computer Science and Engineering
Adjunct at: UW Electrical and Computer Engineering, Linguistics
University of Washington

Senior Director, AllenNLP
Allen Institute for AI

Email: hannaneh [at] cs [dot] washington [dot] edu


Hanna Hajishirzi is a Torode Family Associate Professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington and a Senior Research Manager at the Allen Institute for AI. Her research spans several areas of NLP and AI, focusing on developing general-purpose machine learning algorithms that can solve diverse NLP tasks. Applications of these algorithms include question answering, representation learning, green AI, knowledge extraction, and conversational dialogue. Her honors include the NSF CAREER Award, a Sloan Fellowship, an Allen Distinguished Investigator Award, an Intel Rising Star Award, best paper and honorable mention awards, and several industry research faculty awards. Hanna received her PhD from the University of Illinois and spent a year as a postdoc at Disney Research and CMU.

Recent awards:

  • NSF CAREER award
  • Sloan Fellowship
  • Intel Rising Star Faculty Award
  • Allen Distinguished Investigator Award
  • Research faculty awards: Amazon, Facebook, Google, Samsung GRO, Bloomberg

My lab (H2lab) mainly publishes at NLP conferences (ACL, NAACL, EMNLP) and AI and ML conferences (AAAI, ICLR) across these areas:


Effectively unlimited quantities of ever-changing knowledge are available online in diverse styles (e.g., news vs. science text) and formats (knowledge bases, web pages, and textual documents). My research addresses the challenge of enabling rich neural-symbolic comprehension and reasoning given this diversity: how can we build AI systems that comprehend and combine evidence from varied and evolving sources of textual and multi-modal knowledge to make complex inferences and draw logical conclusions? We build algorithms that balance three competing desiderata: interpretability, robustness with high performance, and efficiency and scalability, in the following categories:

  • General-purpose NLP. Building NLP models that go beyond solving individual tasks and can learn new tasks from their descriptions or a few examples.

  • Reasoning and question answering. Building benchmarks and algorithms that offer rich natural language comprehension through open-domain, multilingual, multi-hop, and interpretable reasoning; developing some of the first deep neural models for general reading comprehension (BiDAF), open-domain QA, cross-lingual QA, and multi-hop reasoning, as well as symbolic methods to solve math and geometry word problems.

  • Knowledge acquisition from multi-modal data. Devising general, high-performance algorithms to extract knowledge from textual and visual data; developing some of the first work on extracting knowledge from scientific text.

  • Representation learning. Integrating the capabilities of symbolic representations into neural models to represent knowledge acquired from diverse structured and unstructured resources, and forming knowledge-rich dense vectors to encode them; designing neural architectures that efficiently encode textual and visual data.


    Office: Paul Allen Center 654
    Phone: (206) 221-3921
    Email: hannaneh [at] cs [dot] washington [dot] edu