I am Zihao Ye, a third-year Ph.D. student at the University of Washington’s Paul G. Allen School of Computer Science and Engineering, advised by Luis Ceze in the SAMPL research group. I also work closely with Tianqi Chen on the Apache TVM project.
We organize talks at SAMPL on topics including Systems, Architecture, Compilers, Verification, and Machine Learning.
Besides research, I enjoy diving into software and hardware internals, as well as working on open-source projects.
I have broad interests in Computer Systems, Compilers, Programming Languages, and Computer Architecture. My current research centers on sparse computation:
- Programming Abstractions and Compilers for Sparse Workloads on Heterogeneous Hardware.
- Domain-Specific Accelerators and Memory Architectures for Sparsity.
- Scalable and Affordable Sparse Computation in Data Centers.
Feel free to drop me an email if our interests align; I’m open to collaborations.
- Jun 2023 I’ll give a talk on Sparsity in LLMs at CTSTA @ PLDI 2023, see you all in Orlando!
- May 2023 Please take a look at the MLC-LLM project, which enables the deployment of LLMs on a wide range of hardware platforms. I am honored to be a part of this project and to collaborate with an extraordinary team!
- Mar 2023 SparseTIR has been awarded Distinguished Artifact at ASPLOS 2023!
- Feb 2023 I will be a TA for CSE 599M (ML for ML Systems, taught by Luis Ceze) in Spring 2023. I’m really excited to be a part of this new course.
- Jan 2023 We are proud to announce that SparseTIR will appear at ASPLOS 2023. We’ll be heading to Vancouver and can’t wait to see everyone there!
Compiler for Sparsity in Deep Learning
SparseTIR is a tensor-level abstraction for representing and optimizing sparse/irregular operators in Deep Learning. The project is a close collaboration with Ruihang and Tianqi from CMU Catalyst and Junru from OctoML. We thank our advisors TQ and Luis for their support and advice, and the TensorIR team for their assistance.
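To make the problem concrete, here is a minimal sketch (plain Python, not SparseTIR’s actual API) of the kind of irregular operator such an abstraction must represent and optimize: a sparse-dense matrix multiply (SpMM) over a matrix stored in CSR format, where the inner loop bounds depend on data rather than being statically known.

```python
# Illustrative sketch only -- not the SparseTIR API.
# SpMM: multiply a CSR-format sparse matrix by a dense matrix.
# The per-row iteration space (indptr[i]..indptr[i+1]) is data-dependent,
# which is what makes sparse operators hard for dense tensor compilers.

def spmm_csr(indptr, indices, data, dense, n_cols_out):
    """out[i, k] = sum_j A[i, j] * dense[j, k], with A in CSR form."""
    n_rows = len(indptr) - 1
    out = [[0.0] * n_cols_out for _ in range(n_rows)]
    for i in range(n_rows):
        # Iterate only over the nonzeros of row i.
        for p in range(indptr[i], indptr[i + 1]):
            j = indices[p]
            v = data[p]
            for k in range(n_cols_out):
                out[i][k] += v * dense[j][k]
    return out

# 2x3 sparse matrix [[1, 0, 2], [0, 3, 0]] in CSR form:
indptr = [0, 2, 3]
indices = [0, 2, 1]
data = [1.0, 2.0, 3.0]
dense = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # 3x2 dense matrix
print(spmm_csr(indptr, indices, data, dense, 2))  # [[3.0, 2.0], [0.0, 3.0]]
```

A sparse compiler’s job is to take a high-level description of a computation like this and generate efficient code for it across formats and hardware targets.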
Composable Abstractions for Sparse Compilation in Deep Learning
- Mar 2023 @ ASPLOS 2023, Vancouver, Canada
- Mar 2023 @ TVMCon
- Nov 2022 @ Amazon AI
- Oct 2022 @ CRISP Liaison Meeting
- Aug 2022 @ Tsinghua NISF-EFC Group
- Aug 2022 @ Cornell Zhang Research Group
- July 2022 @ Google MLIR Reading Group
- Dec 2021 @ TVMCon
SparseTIR: Composable Abstractions for Sparse Compilation in Deep Learning.
The 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, 2023. Distinguished Artifact Award.