niloofar@cs.washington.edu


Books I Like!

  • Small Is Beautiful: Economics As If People Mattered by E. F. Schumacher
  • Quarter-life by Satya Doyle Byock
  • The Body Keeps the Score by Bessel van der Kolk
  • 36 Views of Mount Fuji by Cathy Davidson
  • Indistractable by Nir Eyal
  • Sapiens: A Brief History of Humankind by Yuval Noah Harari
  • The Martian by Andy Weir
  • The Solitaire Mystery by Jostein Gaarder
  • The Orange Girl by Jostein Gaarder
  • Life is Short: A Letter to St Augustine by Jostein Gaarder
  • The Alchemist by Paulo Coelho


    Niloofar Mireshghallah (Fatemeh)

    I am a postdoctoral scholar at the Paul G. Allen School of Computer Science & Engineering at the University of Washington, advised by Yejin Choi and Yulia Tsvetkov. I received my PhD from the University of California, San Diego, where I was advised by Taylor Berg-Kirkpatrick; you can find my defense slides here and my proposal slides here. I am also a part-time researcher at Microsoft Semantic Machines and a volunteer research scientist at OpenMined. My research interests are privacy-preserving ML, natural language processing, and fairness. I am open to collaborations, so if you have a cool idea you'd like to discuss, feel free to email me!
    A note on my name: I used to publish under Fatemeh, which is my legal name on paperwork, and I cannot legally change it yet (long story lol). But I now go by Niloofar, which means 'water lily' in Farsi!
    Google Scholar | CV | Bio | GitHub | Twitter
      Recent Co-organized Workshops

  • Privacy Regulation and Protection in Machine Learning (PML @ICLR2024)
  • Privacy-Preserving Artificial Intelligence (PPAI @AAAI2024)
  • Generative AI + Law (GenLaw @ICML2023)
      Invited Talks

  • Safety in LLMs: Privacy and Memorization, Guest lecture for UW CSE 484/584, March 2024, slides.
  • Can LLMs Keep a Secret? Testing Privacy Implications of Language Models in Interactive Settings, Google, February 2024, slides.
  • Can LLMs Keep a Secret? Testing Privacy Implications of LLMs, UW CSE Colloquium, January 2024, slides and recording.
  • Privacy Auditing and Protection in Large Language Models, UW eScience Institute, November 2023, slides.
  • What does privacy-preserving NLP entail?, CISPA Helmholtz Center for Security, September 2023.
  • Auditing and Mitigating Safety Risks in Large Language Models, Max Planck Institute - Next 10 in AI, September 2023, slides.
  • Privacy Auditing and Protection in Large Language Models, Mila, May 2023.
  • Federated Learning and Privacy Regularization, Tutorial on Privacy-Preserving NLP at EACL, May 2023, slides and recording.
  • Auditing and Mitigating Safety Risks in Large Language Models, Cohere for AI, May 2023, slides and recording.
  • Learning-free Controllable Text Generation, LLM Interfaces Workshop and Hackathon, Apr 2023, slides and recording.
  • Auditing and Mitigating Safety Risks in Large Language Models, University of Washington, Apr 2023, (slides).
  • How Much Can We Trust Large Language Models?, Ethics Workshop at NDSS 2023, Feb 2023.
  • Privacy Auditing and Protection in Large Language Models, Google's FL Seminar, Feb 2023 (slides).
  • How Much Can We Trust Large Language Models?, University of Texas Austin, Oct 2022 (slides).
  • Mix and Match: Learning-free Controllable Text Generation, Johns Hopkins University, Sep 2022 (slides).
  • How Much Can We Trust Large Language Models?, Adversarial ML workshop at KDD, Rising Star Talk, Aug 2022 (slides, recording).
  • What Does it Mean for a Language Model to Preserve Privacy?, Mar 2022 (slides).
  • Improving Attribute Privacy and Fairness for Natural Language Processing at the University of Maine, Dec 2021 (slides).
  • Style Pooling: Automatic Text Style Obfuscation for Fairness at the National University of Singapore, Nov 2021 (slides).
  • Privacy-Preserving Natural Language Processing Panel at the Big Science for Large Language Models, Oct 2021. Recording here.
  • Privacy and Interpretability of DNN Inference (slides) at the Research Society MIT Manipal, July 2021. Recording here.
  • Low-overhead Techniques for Privacy and Fairness of DNNs (slides) at the Alan Turing Institute's Privacy-preserving Data Analysis Seminar Series, June 2021. Recording here.
  • Shredder: Learning Noise Distributions to Protect Inference Privacy (slides) at the Split Learning Workshop, March 2021. Recording here.
  • Introduction to NLP and Career Prospects (slides) at the University Institute of Engineering and Technology, February 2021. Recording here.
  • Privacy and Fairness in Deep Neural Network Inference (slides) at Machine Learning and Friends Lunch at UMass Amherst, October 2020. Recording here.
  • Privacy-Preserving Natural Language Processing (slides) at the OpenMined Privacy Conference, September 2020. Recording here.
  • Invited poster session at the Microsoft Research AI Breakthroughs Workshop, September 2020
      Research Experience
    Fall 2022-Present
    Part-time Researcher
    Microsoft Semantic Machines
    Mentors: Richard Shin, Yu Su, Tatsunori Hashimoto, Jason Eisner
    Summer 2022
    Research Intern
    Microsoft Semantic Machines
    Mentors: Richard Shin, Yu Su, Tatsunori Hashimoto, Jason Eisner
    Winter 2022
    Research Intern
    Microsoft Research, Algorithms Group, Redmond Lab
    Mentors: Sergey Yekhanin, Arturs Backurs
    Summer 2021
    Research Intern
    Microsoft Research, Language, Learning and Privacy Group, Redmond Lab
    Mentors: Dimitrios Dimitriadis, Robert Sim
    Summer 2020
    Research Intern
    Microsoft Research, Language, Learning and Privacy Group, Redmond Lab
    Mentor: Robert Sim
    Summer 2019
    Research Intern
    Western Digital Co. Research and Development
    Mentor: Anand Kulkarni
      Publications

    [For the full list, please refer to my Google Scholar page.]

  • Alpaca against Vicuna: Using LLMs to Uncover Memorization of LLMs Preprint

  • Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory ICLR 2024 (Spotlight)

  • Privacy-preserving in-context learning with differentially private few-shot generation ICLR 2024

  • Smaller Language Models are Better Black-box Machine-Generated Text Detectors EACL 2024

  • Simple Temporal Adaptation to Changing Label Sets: Hashtag Prediction via Dense KNN EMNLP 2023

  • A Block Metropolis-Hastings Sampler for Controllable Energy-based Text Generation CoNLL 2023

  • Privacy-Preserving Domain Adaptation of Semantic Parsers ACL 2023

  • Membership Inference Attacks against Language Models via Neighbourhood Comparison ACL 2023 (findings)

  • Differentially Private Model Compression NeurIPS 2022

  • Memorization in NLP Fine-tuning Methods EMNLP 2022

  • Quantifying Privacy Risks of Masked Language Models Using Membership Inference Attacks EMNLP 2022

  • What Does it Mean for a Language Model to Preserve Privacy? FAccT 2022

  • UserIdentifier: Implicit User Representations for Simple and Effective Personalized Sentiment Analysis, NAACL 2022.

  • Mix and Match: Learning-free Controllable Text Generation, ACL 2022.

  • Style Pooling: Automatic Text Style Obfuscation for Improved Classification Fairness, 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021).

  • U-Noise: Learnable Noise Masks for Interpretable Image Segmentation, 2021 IEEE International Conference on Image Processing (ICIP 2021).

  • Privacy Regularization: Joint Privacy-Utility Optimization in Language Models, 2021 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2021).

  • Not All Features Are Equal: Discovering Essential Features for Preserving Prediction Privacy, 30th Web Conference (WWW 2021). Recording here.

  • Privacy in Deep Learning: A Survey. Please let me know if there is any related work that is missing!

  • Neither Private Nor Fair: Impact of Data Imbalance on Utility and Fairness in Differential Privacy, 2020 CCS Workshop on Privacy-Preserving Machine Learning in Practice (PPMLP 2020).

  • Shredder: Learning Noise Distributions to Protect Inference Privacy, 25th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS 20). Recording here.

  • Shredder: Learning Noise Distributions to Protect Inference Privacy with a Self-Supervised Learning Approach, Thirty-third Annual Conference on Neural Information Processing Systems (NeurIPS19), Privacy in Machine Learning Workshop (PriML19).
    Code available at shredder-v2-self-supervised

  • Shredder: Learning Noise to Protect Privacy with Partial DNN Inference on the Edge, Thirty-sixth International Conference on Machine Learning (ICML19), Security and Privacy of Machine Learning Workshop (SPML19).
    Code available at shredder-v1

  • Energy-Efficient Permanent Fault Tolerance in Hard Real-Time Systems, IEEE Transactions on Computers, March 2019

  • ReLeQ: An Automatic Reinforcement Learning Approach for Deep Quantization of Neural Networks, NeurIPS ML for systems workshop, December 2018
      Diversity, Inclusion & Mentorship

  • Widening NLP (WiNLP) co-chair
  • Socio-cultural D&I chair at NAACL 2022
  • Mentor for the Graduate Women in Computing (GradWIC) at UCSD
  • Mentor for the UC San Diego Women Organization for Research Mentoring (WORM) in STEM
  • Co-leader for the "Feminist Perspectives for Machine Learning & Computer Vision" break-out session at the Women in Machine Learning (WiML) 2020 Un-workshop held at ICML 2020
  • Mentor for the USENIX Security 2020 Undergraduate Mentorship Program
  • Volunteer at the Women in Machine Learning 2019 Workshop held at NeurIPS 2019
  • Invited Speaker at the Women in Machine Learning and Data Science (WiMLDS) NeurIPS 2019 Meetup
  • Mentor for the UCSD CSE Early Research Scholars Program (CSE-ERSP) in 2018
      Professional Services

  • Reviewer for ICLR 2022
  • Reviewer for NeurIPS 2021
  • Reviewer for ICML 2021
  • Shadow PC member for IEEE Security and Privacy Conference Winter 2021
  • Artifact Evaluation Program Committee Member for USENIX Security 2021
  • Reviewer for ICLR 2021 Conference
  • Program Committee member for the LatinX in AI Research Workshop at ICML 2020 (LXAI)
  • Reviewer for the 2020 Workshop on Human Interpretability in Machine Learning (WHI) at ICML 2020
  • Program Committee member for the MLArchSys workshop at ISCA 2020
  • Security & Privacy Committee Member and Session Chair for Grace Hopper Celebration (GHC) 2020
  • Reviewer for ICML 2020 Conference
  • Artifact Evaluation Program Committee Member for ASPLOS 2020
  • Reviewer for IEEE TC Journal
  • Reviewer for ACM TACO Journal
      TA Experiences, UC San Diego
    Fall 2020

  • TA of CSE 276C: Mathematics for Robotics, Graduate Level, Instructor: Dr. Henrik I. Christensen
    Winter and Fall 2019

  • TA of CSE 240D: Accelerator Design for Deep Learning, Graduate Level, Instructor: Dr. Hadi Esmaeilzadeh
      Volunteer TA Experiences, Sharif University of Technology
    Fall 2017

  • Head TA of Digital Electronics
  • Head TA of Probability and Statistics
    Spring 2017

  • TA of Computer Architecture
  • TA of Signals and Systems
  • Head TA of Probability and Statistics
    Fall 2016

  • TA of Advanced Programming
  • Head TA of Numerical Methods