Postdoctoral Scholar
University of Washington
I am a postdoc in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, working with Prof. Baris Kasikci. My research focuses on building efficient systems infrastructure for modern storage and memory technologies. I build systems that extract the best performance from modern, heterogeneous memory technologies such as CXL-attached memory, GPU memory, and byte-addressable storage, improving the efficiency of both datacenter applications and machine learning systems.
Prior to joining UW, I completed my PhD at the University of Texas at Austin, advised by Prof. Vijay Chidambaram, where I built file systems and key-value stores for persistent memory. My doctoral research focused on transparently achieving high performance for datacenter applications on modern byte-addressable storage technologies.
Proceedings of the ACM on Programming Languages 9 (OOPSLA1), 477-506, 2025
In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI 2023)
In Proceedings of the 28th ACM Symposium on Operating Systems Principles, 2021. (SOSP 2021)
In Proceedings of the 27th ACM Symposium on Operating Systems Principles, pp. 494-508. ACM, 2019. (SOSP 2019)
In Proceedings of the 33rd Annual Conference on Neural Information Processing Systems, 2019. (NeurIPS 2019)
In Proceedings of the 26th ACM Symposium on Operating Systems Principles, pp. 497-514. ACM, 2017. (SOSP 2017)
In Proceedings of the 8th ACM SIGOPS Asia-Pacific Workshop on Systems, September 2017. Best Poster Award. (APSys 2017)
arXiv preprint arXiv:2510.00229, 2025
arXiv preprint arXiv:2506.17538, 2025
arXiv preprint arXiv:2502.12216, 2025
arXiv preprint arXiv:2502.20969, 2025
University of Washington (Ongoing)
Exploring fine-tuning techniques for local LLMs to improve their tool-calling capabilities in agentic systems.
Key contributions: Fine-tuning, tool calling, agentic systems, local LLMs
University of Washington (Ongoing)
Building a benchmark suite and evaluation leaderboard for measuring the performance of generative AI applications and agentic workflows on end-user devices. Also building a runtime system that accelerates local LLM performance for agentic workflows through smart GPU sharing, scheduling, and memory management on end-user devices.
Key contributions: Benchmark suite, performance evaluation, end-user devices, smart GPU sharing, scheduling, and memory management
University of Washington (Ongoing)
Building a tiered memory system that uses smart allocation policies to achieve fine-grained hot-data tracking, along with adaptive migrations that react to workload changes.
Key contributions: Smart allocation policies, fine-grained data tracking, adaptive migration strategies
University of Washington & UT Austin (Ongoing)
Building a distributed persistent memory manager for transparently scaling single-node memory-mapped applications across a cluster.
Key contributions: Distributed memory management, transparent scaling architecture, cluster-wide memory mapping
Microsoft Research (CHI 2023)
Building a shared-state system with client-side caching and an easy-to-use API to support distributed interactive applications in the cloud.
Key contributions: Client-side caching, distributed state management, cloud application support
UT Austin (SOSP 2021)
A persistent memory (PM) file system that preserves hugepages to improve the performance of emerging PM applications.
Key contributions: Hugepage preservation, performance optimization for PM applications, graceful aging behavior
UT Austin & VMware Research (SOSP 2019)
A user-space file system that improves the performance of POSIX applications on persistent memory by converting reads and writes into loads and stores in user space, while passing metadata operations to the kernel.
Key contributions: User-space file system design, POSIX compatibility, metadata operation optimization
UT Austin (SOSP 2017)
A key-value store based on fragmented log-structured merge trees, which reduces I/O amplification while increasing throughput.
Key contributions: Fragmented LSM-tree design, I/O amplification reduction, throughput optimization
Assistant Instructor, UT Austin
Spring 2023
Teaching Assistant, UT Austin
Spring 2022
Teaching Assistant, UT Austin
Fall 2019
May 2024
July 2022
October 2021
October 2021
October 2019
August 2018
University of Washington
Aug 2023 - Present
Mentor: Prof. Baris Kasikci
Microsoft Research, Redmond
May 2021 - Aug 2021
Mentor: Dr. Badrish Chandramouli
Microsoft Research, Redmond
May 2020 - Aug 2020
Mentor: Dr. Badrish Chandramouli
Microsoft Research, India
May 2019 - Aug 2019
Mentor: Dr. Harsha Vardhan Simhadri
VMware Research
May 2018 - Aug 2018
Mentor: Dr. Aasheesh Kolli
University of Wisconsin-Madison
Jan 2017 - July 2017
Mentors: Prof. Remzi Arpaci-Dusseau and Prof. Vijay Chidambaram