Gagan Bansal

I study and develop human-centered AI for creating better human-AI teams (e.g., for AI-assisted decision-making). Using insights from both Artificial Intelligence and Human-Computer Interaction, I've previously studied:

  • training models that maintain user trust across updates [1],
  • identifying properties of AI that affect users' ability to form a mental model of its competence [2],
  • training AI that optimizes for human-AI team performance rather than its individual accuracy alone [3], and
  • evaluating whether AI explanations help users decide when to trust AI recommendations [4].
My ongoing projects develop mechanisms to give users the agency to control and improve AI behavior using human-understandable vocabulary.

Select Publications

  • Human Evaluation of Spoken vs. Visual Explanations for Open-Domain QA
    Ana Valeria González, Gagan Bansal, Angela Fan, Yashar Mehdad, Robin Jia, and Srinivasan Iyer
    ACL 2021
    TLDR: First user study to show that explanations can lead to higher appropriate reliance on AI than simply communicating AI's calibrated confidence. However, the best explanation approach can change with the modality.
  • Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance
    Gagan Bansal*, Tongshuang Wu*, Joyce Zhou, Raymond Fok, Besmira Nushi, Ece Kamar, Marco Tulio Ribeiro, Daniel S. Weld
    CHI 2021
    TLDR: Many prior works argue that explanations improve decision-making, but those works observed improvements only when the AI was substantially more accurate than both the human and the human-AI team; in such settings, removing the human entirely would have performed even better. XAI should instead aim for appropriate reliance and complementary team performance.
  • Is the Most Accurate AI the Best Teammate? Optimizing AI for Teamwork
    Gagan Bansal, Besmira Nushi, Ece Kamar, Eric Horvitz, Daniel S. Weld
    AAAI 2021
    TLDR: For a simplified human-AI team, we formally show that the most accurate AI may not be the optimal teammate: there exists a lower-accuracy predictor that yields higher team performance (expected utility).
  • Data Staining: A Method for Comparing Faithfulness of Explainers
    Jacob Sippy, Gagan Bansal, Daniel S. Weld
    ICML-WHI 2020
    TLDR: A new method for creating unit tests that assess the faithfulness of post hoc explainers to black-box models. The method applies across domains (text, images) and is model-agnostic.
  • Updates in Human-AI Teams: Understanding and Addressing the Performance/Compatibility Tradeoff
    Gagan Bansal, Besmira Nushi, Ece Kamar, Walter Lasecki, Daniel S. Weld, Eric Horvitz
    AAAI 2019
    TLDR: In AI-assisted decision making, updates that increase AI's accuracy (e.g., from availability of more training data) can actually decrease human-AI team performance by introducing AI behavior that violates existing user expectations.
  • Beyond Accuracy: The Role of Mental Models in Human-AI Teams
    Gagan Bansal, Besmira Nushi, Ece Kamar, Walter Lasecki, Daniel S. Weld, Eric Horvitz
    HCOMP 2019
    TLDR: In AI-assisted decision-making settings, the complexity of an AI's error regions (their parsimony and stochasticity) and the task's dimensionality affect users' ability to create a mental model of the AI's competence.
  • The Challenge of Crafting Intelligible Intelligence
    Daniel S. Weld, Gagan Bansal
    CACM 2018
    TLDR: Argues that intelligibility is essential and highlights key challenges and research directions for interdisciplinary research spanning AI and HCI, including support for interactive explanations, drill-down, actionability, and control.
  • A Coverage-Based Utility Model for Identifying Unknown Unknowns
    Gagan Bansal, Daniel S. Weld
    AAAI 2018
    TLDR: A new utility function for discovery of high-confidence AI errors that optimizes for both salience and diversity of errors.
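The AAAI 2021 claim above can be illustrated with a toy calculation. This is a hypothetical sketch under made-up numbers, not the paper's formal model: the idea is simply that a less accurate AI whose errors are predictable can be overridden by the human, while a more accurate AI with scattered errors cannot.

```python
# Toy illustration (hypothetical numbers, not the paper's model): a less
# accurate AI can be the better teammate when its errors are predictable
# enough for the human to override them.

def team_accuracy(p_region, ai_acc_in_region, ai_acc_outside,
                  human_predicts_errors, human_acc):
    """Expected team accuracy under a simple delegation policy:
    the human overrides the AI only where they can anticipate its errors."""
    if human_predicts_errors:
        # Human solves the known error region alone, trusts the AI elsewhere.
        return p_region * human_acc + (1 - p_region) * ai_acc_outside
    # Errors are scattered unpredictably, so the human always accepts the AI.
    return p_region * ai_acc_in_region + (1 - p_region) * ai_acc_outside

# AI "A": 90% accurate overall, but its errors are unpredictable.
acc_a = team_accuracy(0.2, 0.5, 1.0, False, 0.7)  # standalone accuracy 0.90

# AI "B": only 80% accurate (always wrong on a known 20% region),
# but the human recognizes that region and overrides there.
acc_b = team_accuracy(0.2, 0.0, 1.0, True, 0.7)   # standalone accuracy 0.80

print(f"team with more accurate AI A: {acc_a:.2f}")  # 0.90
print(f"team with less accurate AI B: {acc_b:.2f}")  # 0.94
```

Here the team built around the 80%-accurate AI reaches 0.94 expected accuracy, beating the 0.90 of the team built around the 90%-accurate AI.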

I've had the opportunity to work with and advise many excellent undergraduate, Master's, and high-school students:

  • Prithvi Tarale (Autumn 2020-Present)
  • Joyce Zhou (Co-advised with Dan Weld, Autumn 2019-Summer 2020)
  • Jacob Sippy (Co-advised with Dan Weld, Autumn 2018-Summer 2020)
  • Lynsey Liu (Co-advised with Jonathan Bragg, Autumn 2017-2018)
  • Ziyao Huang (Co-advised with Jonathan Bragg, Autumn 2017-2018)
  • Diana Iftimie (Co-advised with Dan Weld, Winter 2017)


In my free time I enjoy social dancing, birding, biking, PNW outdoors, and classical music.