I am Zihao Ye, a second-year Ph.D. student at the University of Washington’s Paul G. Allen School of Computer Science and Engineering, advised by Luis Ceze in the SAMPL research group. I also work closely with Tianqi Chen on the Apache TVM project.
We organize talks at SAMPL on topics including Systems, Architecture, Compilers, Verification, and Machine Learning.
I have broad interests in Computer Systems, Compilers, Programming Languages, and Computer Architecture. My current research centers on sparse computation:
- Programming Abstractions/Compilers for Sparse Workloads on Heterogeneous Hardware.
- Domain-Specific Accelerators and Memory Architectures for Sparsity.
- Scalable and Affordable Sparse Computation in Data Centers.
Feel free to drop me an email if our interests align; I’m open to collaborations.
SparseTIR is a unified abstraction for representing and optimizing sparse/irregular workloads in deep learning, built on top of TVM’s TensorIR. It aims to generate efficient code for various sparse formats on heterogeneous hardware.
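To make the target concrete, here is a minimal sketch (plain Python/NumPy/SciPy, not the SparseTIR API) of the kind of sparse workload such a compiler optimizes: sparse-dense matrix multiplication (SpMM) over a CSR matrix, a core kernel in graph neural networks. The sizes and density below are illustrative assumptions.

```python
# Illustrative SpMM over a CSR sparse matrix (not SparseTIR code).
import numpy as np
import scipy.sparse as sp

# Hypothetical shapes: sparse A (m x k) times dense B (k x n),
# as in a GNN aggregation layer.
m, k, n = 128, 128, 64
A = sp.random(m, k, density=0.05, format="csr")
B = np.random.rand(k, n)

# Reference SpMM written as an explicit CSR loop nest; a sparse
# compiler would lower loops like these to efficient parallel code
# for the chosen format and hardware backend.
C = np.zeros((m, n))
for i in range(m):
    for idx in range(A.indptr[i], A.indptr[i + 1]):
        j = A.indices[idx]
        C[i, :] += A.data[idx] * B[j, :]

assert np.allclose(C, A @ B)  # matches the library SpMM
```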
Composable Abstractions for Sparse Compilation in Deep Learning
- Nov 2022 @ Amazon AI
- Oct 2022 @ CRISP Liaison Meeting
- Aug 2022 @ Tsinghua NISF-EFC Group
- Aug 2022 @ Cornell Zhang Research Group
- Jul 2022 @ Google MLIR Reading Group
- Dec 2021 @ TVMCon
SparseTIR: Composable Abstractions for Sparse Compilation in Deep Learning.
arXiv preprint arXiv:2207.04606, 2022.