I am a postdoctoral scholar at the Paul G. Allen Center for Computer Science and Engineering at the University of Washington, advised by Yejin Choi and Yulia Tsvetkov. I received my PhD from UC San Diego, where I was advised by Taylor Berg-Kirkpatrick.
My research interests are privacy, natural language processing, and the societal implications of ML. I explore the interplay between data, its influence on models, and the expectations of the people who regulate and use these models. My work has been recognized with the NCWIT Collegiate Award and the Rising Star in Adversarial Machine Learning Award.
During my PhD, I was a part-time researcher and intern at Microsoft Research (Privacy in AI, Algorithms, and Semantic Machines teams), working on differential privacy, model compression, and data synthesis.
I am currently on the academic job market for faculty positions! Please contact me if you would like to discuss potential openings or collaborations.
✦ A note about my name: I used to publish under Fatemeh, which is my legal name on paperwork, but I now go by Niloofar, the Farsi name for the lily flower!
Featured Press Articles
Recent Co-organized Workshops
Selected Publications
For the full list, please refer to my Google Scholar page.
- Trust No Bot: Discovering Personal Disclosures in Human-LLM Conversations in the Wild (COLM 2024)
- Do membership inference attacks work on large language models? (COLM 2024)
- CopyBench: Measuring literal and non-literal reproduction of copyright-protected text in language model generation (Preprint)
- Alpaca against Vicuna: Using LLMs to Uncover Memorization of LLMs (Preprint)
- A Roadmap to Pluralistic Alignment (ICML 2024)
- Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory (ICLR 2024, Spotlight)
- Privacy-preserving in-context learning with differentially private few-shot generation (ICLR 2024)
- Smaller Language Models are Better Black-box Machine-Generated Text Detectors (EACL 2024)
- Privacy-Preserving Domain Adaptation of Semantic Parsers (ACL 2023)
- Membership Inference Attacks against Language Models via Neighbourhood Comparison (ACL 2023, findings)
- Differentially Private Model Compression (NeurIPS 2022)
- Memorization in NLP Fine-tuning Methods (EMNLP 2022)
- Quantifying Privacy Risks of Masked Language Models Using Membership Inference Attacks (EMNLP 2022)
- What Does it Mean for a Language Model to Preserve Privacy? (FAccT 2022)
- Mix and Match: Learning-free Controllable Text Generation (ACL 2022)
- Style Pooling: Automatic Text Style Obfuscation for Improved Classification Fairness (EMNLP 2021)
- Not All Features Are Equal: Discovering Essential Features for Preserving Prediction Privacy (WWW 2021). Recording here.
- Shredder: Learning Noise Distributions to Protect Inference Privacy (ASPLOS 2020). Recording here.
Industry Research Experience
- Microsoft Semantic Machines, Fall 2022-Fall 2023 (Part-time), Summer 2022 (Intern). Mentors: Richard Shin, Yu Su, Tatsunori Hashimoto, Jason Eisner
- Microsoft Research, Algorithms Group, Redmond Lab, Winter 2022 (Intern). Mentors: Sergey Yekhanin, Arturs Backurs
- Microsoft Research, Language, Learning and Privacy Group, Redmond Lab, Summer 2021 (Intern), Summer 2020 (Intern). Mentors: Dimitrios Dimitriadis, Robert Sim
- Western Digital Co. Research and Development, Summer 2019 (Intern). Mentor: Anand Kulkarni
Invited Talks
Selected Talks
On Differential Privacy and Contextual Integrity:
- Alpaca against Vicuna: Using LLMs to Uncover Memorization of LLMs, CMU, June 2024, slides.
- What is differential privacy? And what is it not?, Generative AI and Law workshop, DC, Apr 2024, slides.
- Membership Inference Attacks and Contextual Integrity for Language, TOC4Fairness Seminar, March 2024, slides & recording.
- Can LLMs Keep a Secret? Testing Privacy Implications of LLMs, UW CSE Colloquium, January 2024, slides and recording.
Course Material on Privacy
- Safety in LLMs: Privacy and Memorization, Guest lecture for UW CSE 484/584, CMU 11-830 and GaTech, March 2024, slides.
All Talks
- Privacy in LLMs: Understanding what data is imprinted in LMs and how it might surface!, ML Collective DLCT reading group, Aug 2024, slides and recording.
- Oversharing with LLMs is underrated: the curious case of personal disclosures in human-LLM conversations, National Academies Forum, Aug 2024, slides.
- Membership Inference Attacks and Contextual Integrity for Language, UCSB and UCLA, March 2024, slides.
- Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory, UT Austin LIN 393, Feb 2024, slides.
- Can LLMs Keep a Secret? Testing Privacy Implications of Language Models in Interactive Settings, Google, Feb 2024, slides and recording.
- Privacy Auditing and Protection in Large Language Models, UW eScience Institute, November 2023, slides.
- What does privacy-preserving NLP entail?, CISPA Helmholtz Center for Security, September 2023.
- Auditing and Mitigating Safety Risks in Large Language Models, Max Planck Institute - Next 10 in AI, September 2023, slides.
- Privacy Auditing and Protection in Large Language Models, Mila, May 2023.
- Federated Learning and Privacy Regularization, Tutorial on Privacy-Preserving NLP at EACL, May 2023, slides and recording.
- Auditing and Mitigating Safety Risks in Large Language Models, Cohere for AI, May 2023, slides and recording.
- Learning-free Controllable Text Generation, LLM Interfaces Workshop and Hackathon, Apr 2023, slides and recording.
- Auditing and Mitigating Safety Risks in Large Language Models, University of Washington, Apr 2023, slides.
- How much can we trust large language models?, Ethics Workshop at NDSS 2023, Feb 2023.
- Privacy Auditing and Protection in Large Language Models, Google's FL Seminar, Feb 2023, slides.
- How Much Can We Trust Large Language Models?, University of Texas at Austin, Oct 2022, slides.
- Mix and Match: Learning-free Controllable Text Generation, Johns Hopkins University, Sep 2022, slides.
- How Much Can We Trust Large Language Models?, Adversarial ML workshop at KDD, Rising Star Talk, Aug 2022 (slides, recording).
- What Does it Mean for a Language Model to Preserve Privacy?, Mar 2022, slides.
- Improving Attribute Privacy and Fairness for Natural Language Processing, University of Maine, Dec 2021, slides.
- Style Pooling: Automatic Text Style Obfuscation for Fairness, National University of Singapore, Nov 2021, slides.
- Privacy-Preserving Natural Language Processing Panel at the Big Science for Large Language Models, Oct 2021. Recording here.
- Privacy and Interpretability of DNN Inference slides at the Research Society MIT Manipal, July 2021. Recording here.
- Low-overhead Techniques for Privacy and Fairness of DNNs slides at the Alan Turing Institute's Privacy-preserving Data Analysis Seminar Series, June 2021. Recording here.
- Shredder: Learning Noise Distributions to Protect Inference Privacy slides at the Split Learning Workshop, March 2021. Recording here.
- Introduction to NLP and Career Prospects slides at the University Institute Of Engineering and Technology, February 2021. Recording here.
- Privacy and Fairness in Deep Neural Network Inference slides at Machine Learning and Friends Lunch at UMass Amherst, October 2020.
- Privacy-Preserving Natural Language Processing slides at the OpenMined Privacy Conference, September 2020. Recording here.
- Invited poster session at the Microsoft Research AI Breakthroughs Workshop, September 2020.
Diversity, Inclusion & Mentorship
- D&I chair at NAACL 2025
- Widening NLP (WiNLP) co-chair
- Socio-cultural D&I chair at NAACL 2022
- Mentor for the Graduate Women in Computing (GradWIC) at UCSD
- Mentor for the UC San Diego Women Organization for Research Mentoring (WORM) in STEM
- Co-leader for the "Feminist Perspectives for Machine Learning & Computer Vision" Break-out session at the Women in Machine Learning (WiML) 2020 Un-workshop Held at ICML 2020
- Mentor for the USENIX Security 2020 Undergraduate Mentorship Program
- Volunteer at the Women in Machine Learning 2019 Workshop Held at NeurIPS 2019
- Invited Speaker at the Women in Machine Learning and Data Science (WiMLDS) NeurIPS 2019 Meetup
- Mentor for the UCSD CSE Early Research Scholars Program (CSE-ERSP) in 2018
Professional Services
- Reviewer for ICLR 2022
- Reviewer for NeurIPS 2021
- Reviewer for ICML 2021
- Shadow PC member for IEEE Security and Privacy Conference Winter 2021
- Artifact Evaluation Program Committee Member for USENIX Security 2021
- Reviewer for ICLR 2021 Conference
- Program Committee member for the LatinX in AI Research Workshop at ICML 2020 (LXAI)
- Reviewer for the 2020 Workshop on Human Interpretability in Machine Learning (WHI) at ICML 2020
- Program Committee member for the MLArchSys workshop at ISCA 2020
- Security & Privacy Committee Member and Session Chair for Grace Hopper Celebration (GHC) 2020
- Reviewer for ICML 2020 Conference
- Artifact Evaluation Program Committee Member for ASPLOS 2020
- Reviewer for IEEE TC Journal
- Reviewer for ACM TACO Journal
Books I Like!
- Small Is Beautiful: Economics As If People Mattered by E. F. Schumacher
- Quarter-life by Satya Doyle Byock
- The Body Keeps the Score by Bessel van der Kolk
- 36 Views of Mount Fuji by Cathy Davidson
- Indistractable by Nir Eyal
- Sapiens: A Brief History of Humankind by Yuval Noah Harari
- The Martian by Andy Weir
- The Solitaire Mystery by Jostein Gaarder
- The Orange Girl by Jostein Gaarder
- Life is Short: A Letter to St Augustine by Jostein Gaarder
- The Alchemist by Paulo Coelho