Niloofar Mireshghallah

I am a postdoctoral scholar at the Paul G. Allen School of Computer Science & Engineering at the University of Washington, advised by Yejin Choi and Yulia Tsvetkov. I received my PhD from UC San Diego, where I was advised by Taylor Berg-Kirkpatrick.

My research interests lie in privacy, natural language processing, and the societal implications of machine learning. I explore the interplay between data, its influence on models, and the expectations of the people who regulate and use those models. My work has been recognized by the NCWIT Collegiate Award and the Rising Star in Adversarial Machine Learning Award.

During my PhD, I was a part-time researcher/intern at Microsoft Research (Privacy in AI, Algorithms, and Semantic Machines teams), working on differential privacy, model compression, and data synthesis.

I am currently on the academic job market for faculty positions! Please contact me if you would like to discuss potential openings or collaborations.

✦ About my name: I used to publish under Fatemeh, which is my legal name on paperwork, but I now go by Niloofar, the word for the lily flower in Farsi!

News Highlights

🗺️ I will be giving an in-person talk at the Stanford NLP Seminar on January 16th! Reach out if you want to meet there!

🗺️ I will be visiting Johns Hopkins University to give a talk on December 9th! Reach out if you want to meet up!

🎙️ I appeared on a panel at the Future of Privacy Forum's Technologist Roundtable for Policymakers: Key Issues in Privacy and AI (write-up coming soon!)

🎙️ I appeared on the Thesis Review podcast with Sean Welleck, where I talked about my work on auditing and mitigating safety risks in large language models.

🎙️ I wrote a blog post on "Should I do a postdoc?" based on my experience. Check out the blog post and the video with Sasha Rush!

🎙️ I gave an invited keynote talk at the SRI International C3E workshop, hosted by SRI and the NSA. View the talk slides.

📰 I was interviewed by UW News about OpenAI's o1 update and advances in math and reasoning. Read the interview.

Selected Publications

For the full list, please refer to my Google Scholar page.

Invited Talks

  • Stanford University

    NLP Seminar, Jan. 2025

    Privacy, Copyright and Data Integrity: The Cascading Implications of Generative AI

  • University of California, Los Angeles

    Guest lecture for CS 269 - Computational Ethics, LLMs and the Future of NLP, Jan. 2025

    A False Sense of Privacy: Semantic Leakage and Non-literal Copying in LLMs

  • NeurIPS Conference

    Red Teaming GenAI workshop, Dec. 2024

    A False Sense of Privacy: Semantic Leakage and Non-literal Copying in LLMs

  • NeurIPS Conference

    Panelist, Dec. 2024

    PrivacyML: Meaningful Privacy-Preserving Machine Learning tutorial

  • Johns Hopkins University

    CS Department Seminar, Dec. 2024

    Privacy, Copyright and Data Integrity: The Cascading Implications of Generative AI

  • Future of Privacy Forum

    Panelist, Nov. 2024

    Technologist Roundtable for Policymakers: Key Issues in Privacy and AI

  • University of Utah

    Guest lecture for the School of Computing CS 6340/5340 NLP course, Nov. 2024

    Can LLMs Keep a Secret?

    Slides | Recording

  • UMass Amherst

    NLP Seminar, Oct. 2024

    Membership Inference Attacks and Contextual Integrity for Language

    Slides

  • Northeastern University

    Khoury College of Computer Sciences Security Seminar, Oct. 2024

    Membership Inference Attacks and Contextual Integrity for Language

    Slides

  • SRI International

    Computational Cybersecurity in Compromised Environments (C3E) workshop, Sep. 2024

    Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity

    Slides

  • LinkedIn Research

    Privacy Tech Talk, Sep. 2024

    Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity

  • National Academies (NASEM)

    Forum on Cyber Resilience, Aug. 2024

    Oversharing with LLMs is underrated: the curious case of personal disclosures in human-LLM conversations

    Slides

  • ML Collective

    DLCT reading group, Aug. 2024

    Privacy in LLMs: Understanding what data is imprinted in LMs and how it might surface!

    Slides | Recording

  • Carnegie Mellon University

    Invited Talk, Jun. 2024

    Alpaca against Vicuna: Using LLMs to Uncover Memorization of LLMs

    Slides

  • Generative AI and Law workshop, Washington DC

    Invited Talk, Apr. 2024

    What is differential privacy? And what is it not?

    Slides

  • Meta AI Research

    Invited Talk, Apr. 2024

    Membership Inference Attacks and Contextual Integrity for Language

  • Georgia Institute of Technology

    Guest lecture for the School of Interactive Computing, Apr. 2024

    Safety in LLMs: Privacy and Memorization

  • University of Washington

    Guest lecture for CSE 484 and 582 courses on Computer Security and Ethics in AI, Apr. 2024

    Safety in LLMs: Privacy and Memorization

  • Carnegie Mellon University

    Guest lecture for LTI 11-830 course on Computational Ethics in NLP, Mar. 2024

    Safety in LLMs: Privacy and Memorization

  • Simons Collaboration

    TOC4Fairness Seminar, Mar. 2024

    Membership Inference Attacks and Contextual Integrity for Language

    Slides | Recording

  • University of California, Santa Barbara

    NLP Seminar Invited Talk, Mar. 2024

    Can LLMs Keep a Secret? Testing Privacy Implications of LLMs

    Slides

  • University of California, Los Angeles

    NLP Seminar Invited Talk, Mar. 2024

    Can LLMs Keep a Secret? Testing Privacy Implications of LLMs

    Slides

  • University of Texas at Austin

    Guest lecture for LIN 393 course on Social Applications and Impact of NLP, Feb. 2024

    Can LLMs Keep a Secret? Testing Privacy Implications of LLMs

    Slides

  • Google Brain

    Google Tech Talk, Feb. 2024

    Can LLMs Keep a Secret? Testing Privacy Implications of LLMs

    Slides | Recording

  • University of Washington

    Allen School Colloquium, Jan. 2024

    Can LLMs Keep a Secret? Testing Privacy Implications of LLMs

    Slides | Recording

  • University of Washington

    eScience Institute Seminars, Nov. 2023

    Privacy Auditing and Protection in Large Language Models

    Slides

  • CISPA Helmholtz Center for Information Security

    Invited Talk, Sep. 2023

    What does privacy-preserving NLP entail?

  • Max Planck Institute for Software Systems

    Next 10 in AI Series, Sep. 2023

    Auditing and Mitigating Safety Risks in LLMs

    Slides

  • Mila / McGill University

    Invited Talk, May 2023

    Privacy Auditing and Protection in Large Language Models

  • EACL 2023

    Tutorial co-instructor, May 2023

    Private NLP: Federated Learning and Privacy Regularization

    Slides | Recording

  • LLM Interfaces Workshop and Hackathon

    Invited Talk, Apr. 2023

    Learning-free Controllable Text Generation

    Slides | Recording

  • University of Washington

    Invited Talk, Apr. 2023

    Auditing and Mitigating Safety Risks in Large Language Models

    Slides

  • NDSS Conference

    Keynote talk for EthiCS workshop, Feb. 2023

    How much can we trust large language models?

  • Google

    Federated Learning Seminar, Feb. 2023

    Privacy Auditing and Protection in Large Language Models

    Slides

  • University of Texas at Austin

    Invited Talk, Oct. 2022

    How much can we trust large language models?

    Slides

  • Johns Hopkins University

    Guest lecture for CS 601.670 course on Artificial Agents, Sep. 2022

    Mix and Match: Learning-free Controllable Text Generation

    Slides

  • KDD Conference

    Adversarial ML workshop, Aug. 2022

    How much can we trust large language models?

    Slides | Recording

  • Microsoft Research Cambridge

    Invited Talk, Mar. 2022

    What Does it Mean for a Language Model to Preserve Privacy?

    Slides

  • University of Maine

    Guest lecture for COS435/535 course on Information Privacy Engineering, Dec. 2021

    Improving Attribute Privacy and Fairness for Natural Language Processing

    Slides

  • National University of Singapore

    Invited Talk, Nov. 2021

    Style Pooling: Automatic Text Style Obfuscation for Fairness

    Slides

  • Big Science for Large Language Models

    Invited Panelist, Oct. 2021

    Privacy-Preserving Natural Language Processing

    Recording

  • Research Society MIT Manipal

    Cognizance Event Invited Talk, Jul. 2021

    Privacy and Interpretability of DNN Inference

    Slides | Recording

  • Alan Turing Institute

    Privacy and Security in ML Seminars, Jun. 2021

    Low-overhead Techniques for Privacy and Fairness of DNNs

    Slides | Recording

  • Split Learning Workshop

    Invited Talk, Mar. 2021

    Shredder: Learning Noise Distributions to Protect Inference Privacy

    Slides | Recording

  • University of Massachusetts Amherst

    Machine Learning and Friends Lunch, Oct. 2020

    Privacy and Fairness in DNN Inference

  • OpenMined Privacy Conference

    Invited Talk, Sep. 2020

    Privacy-Preserving Natural Language Processing

    Slides | Recording

  • Microsoft Research AI

    Breakthroughs Workshop, Sep. 2020

    Private Text Generation through Regularization

Awards and Honors

πŸ†

Momental Foundation Mistletoe Research Fellowship (MRF) Finalist, 2023

🌟

Rising Star in Adversarial Machine Learning (AdvML) Award Winner, 2022. AdvML Workshop

🌟

Rising Stars in EECS, 2022. Event Page

πŸŽ“

UCSD CSE Excellence in Leadership and Service Award Winner, 2022

🌟

FAccT Doctoral Consortium, 2022. FAccT 2022

πŸ‘©β€πŸ’»

Qualcomm Innovation Fellowship Finalist, 2021. Fellowship Page

πŸ‘©β€πŸ’»

NCWIT (National Center for Women & IT) Collegiate Award Winner, 2020. NCWIT Awards

πŸŽ“

National University Entrance Exam in Math, 2014. Ranked 249th of 223,000

πŸŽ“

National University Entrance Exam in Foreign Languages, 2014. Ranked 57th of 119,000

πŸŽ“

National Organization for Exceptional Talents (NODET), 2008. Admitted, ~2% Acceptance Rate

Featured Press & Media

Recent Co-organized Workshops

[For the full list, check my CV]

Industry Research Experience

  • Microsoft Semantic Machines

    Fall 2022-Fall 2023 (Part-time), Summer 2022 (Intern)

    Mentors: Richard Shin, Yu Su, Tatsunori Hashimoto, Jason Eisner

  • Microsoft Research, Algorithms Group, Redmond Lab

    Winter 2022 (Intern)

    Mentors: Sergey Yekhanin, Arturs Backurs

  • Microsoft Research, Language, Learning and Privacy Group, Redmond Lab

    Summer 2021 (Intern), Summer 2020 (Intern)

    Mentors: Dimitrios Dimitriadis, Robert Sim

  • Western Digital Co. Research and Development

    Summer 2019 (Intern)

    Mentor: Anand Kulkarni

Diversity, Inclusion & Mentorship

🔹 Mentor for the mentorship program at the WiML event at NeurIPS 2024

🔹 D&I chair at NAACL 2025

🔹 Widening NLP (WiNLP) co-chair

🔹 Socio-cultural D&I chair at NAACL 2022

🔹 Mentor for Graduate Women in Computing (GradWIC) at UCSD

🔹 Mentor for the UC San Diego Women Organization for Research Mentoring (WORM) in STEM

🔹 Co-leader of the "Feminist Perspectives for Machine Learning & Computer Vision" break-out session at the Women in Machine Learning (WiML) 2020 Un-workshop held at ICML 2020

🔹 Mentor for the USENIX Security 2020 Undergraduate Mentorship Program

🔹 Volunteer at the Women in Machine Learning 2019 Workshop held at NeurIPS 2019

🔹 Invited Speaker at the Women in Machine Learning and Data Science (WiMLDS) NeurIPS 2019 Meetup

🔹 Mentor for the UCSD CSE Early Research Scholars Program (CSE-ERSP) in 2018

Professional Services

[Outdated; for an updated version, check my CV]

◆ Reviewer for ICLR 2022

◆ Reviewer for NeurIPS 2021

◆ Reviewer for ICML 2021

◆ Shadow PC member for the IEEE Security and Privacy Conference, Winter 2021

◆ Artifact Evaluation Program Committee Member for USENIX Security 2021

◆ Reviewer for ICLR 2021

◆ Program Committee member for the LatinX in AI Research Workshop at ICML 2020 (LXAI)

◆ Reviewer for the 2020 Workshop on Human Interpretability in Machine Learning (WHI) at ICML 2020

◆ Program Committee member for the MLArchSys workshop at ISCA 2020

◆ Security & Privacy Committee Member and Session Chair for the Grace Hopper Celebration (GHC) 2020

◆ Reviewer for ICML 2020

◆ Artifact Evaluation Program Committee Member for ASPLOS 2020

◆ Reviewer for the IEEE Transactions on Computers (TC) journal

◆ Reviewer for the ACM Transactions on Architecture and Code Optimization (TACO) journal

Books I Like!

📚 Small Is Beautiful: Economics As If People Mattered by E. F. Schumacher

📚 Quarterlife by Satya Doyle Byock

📚 The Body Keeps the Score by Bessel van der Kolk

📚 36 Views of Mount Fuji by Cathy Davidson

📚 Indistractable by Nir Eyal

📚 Sapiens: A Brief History of Humankind by Yuval Noah Harari

📚 The Martian by Andy Weir

📚 The Solitaire Mystery by Jostein Gaarder

📚 The Orange Girl by Jostein Gaarder

📚 Life is Short: A Letter to St Augustine by Jostein Gaarder

📚 The Alchemist by Paulo Coelho