Email: moreau [at] cs [dot] washington [dot] edu
I am a Ph.D. Candidate at the University of Washington in Computer Science & Engineering. My research focuses on cross-stack optimizations for Pareto-optimal deep learning systems.
I am advised by Luis Ceze in the Sampa architecture group. I received my B.A.Sc. in Electrical and Computer Engineering from the University of Toronto in 2012, and my M.S. in Computer Science and Engineering from the University of Washington in 2015.
My research focuses on novel ways to make computer systems more capable and energy-efficient. I have worked in approximate computing, which aims to expose cross-stack quality-performance trade-offs across a wide range of application domains, most commonly applications that process inherently noisy data.
More recently, I have been collaborating with Tianqi Chen on building a hardware accelerator back-end for TVM, an end-to-end compiler stack for deep learning systems. Our TVM paper was one of six selected for a contributed talk at SysML 2018.
This year, I am organizing the first ACM ReQuEST tournament on reproducible and Pareto-efficient deep learning systems research with Grigori Fursin and other collaborators. It will be co-located with the ASPLOS 2018 conference in Williamsburg.
As part of my Master's project, I developed SNNAP, a co-processor prototype that improves the energy efficiency of programs through an algorithmic transformation that approximates regions of code with neural networks. This work, partly funded by a Qualcomm Innovation Fellowship, is described in a paper that I presented at HPCA 2015, and it won 2nd Best Project Demo at the C-FAR semi-annual meeting in 2016.
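The core idea behind neural approximate acceleration can be sketched in a few lines: observe the input/output behavior of an error-tolerant region of code, train a small neural network on those pairs, and invoke the surrogate in place of the original region. The sketch below is a minimal illustration of that idea in pure NumPy, not the SNNAP implementation; `target_region` is a hypothetical stand-in kernel.

```python
import numpy as np

def target_region(x):
    # Hypothetical stand-in for an approximable, error-tolerant region of code.
    return np.sin(x) * x

rng = np.random.default_rng(0)

# Collect input/output training pairs by observing the original region.
X = rng.uniform(-np.pi, np.pi, size=(2048, 1))
Y = target_region(X)

# A tiny 1-8-1 MLP with tanh hidden units, trained by full-batch gradient descent.
W1 = rng.normal(0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)            # forward pass: hidden layer
    P = H @ W2 + b2                     # forward pass: output
    dP = 2 * (P - Y) / X.size           # gradient of mean squared error
    dW2 = H.T @ dP; db2 = dP.sum(0)     # backprop through output layer
    dH = dP @ W2.T * (1 - H**2)         # backprop through tanh
    dW1 = X.T @ dH; db1 = dH.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

def neural_surrogate(x):
    # Replaces target_region: same interface, approximate output. On a
    # hardware accelerator, this trades a small output error for lower
    # energy per invocation.
    return np.tanh(x @ W1 + b1) @ W2 + b2

err = np.mean(np.abs(neural_surrogate(X) - Y))
print(f"mean absolute error: {err:.3f}")
```

In a real system the surrogate would run on a dedicated neural accelerator rather than the CPU; the quality-energy trade-off then depends on how much output error the application can tolerate.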
I also helped teach Luis Ceze's computer architecture class in the spring of 2017, where we introduced students to hardware-software co-design for machine learning problems. The lab is open-sourced on GitHub and offers an approachable introduction to FPGA acceleration of machine learning algorithms. I have summarized the results from the lab's open-ended competition here.
MATIC: Learning Around Errors for Efficient Low-Voltage Neural Network Accelerators. Sung Kim, Patrick Howe, Thierry Moreau, Armin Alaghi, Luis Ceze and Visvesh Sathe. In DATE 2018 (application track best paper!).
TVM: End-to-End Compilation Stack for Deep Learning. Tianqi Chen, Thierry Moreau, Ziheng Jiang, Haichen Shen, Eddie Yan, Leyuan Wang, Yuwei Hu, Luis Ceze, Carlos Guestrin, Arvind Krishnamurthy. In SysML Conference 2018 (one of six contributed talks!).
Exploring Quality-Energy Tradeoffs with Arbitrary Quantization. Thierry Moreau, Felipe Augusto, Patrick Howe, Armin Alaghi, Luis Ceze. In CODES+ISSS 2017 (special session). [slides]
Exploring Computation-Communication Tradeoffs in Camera Systems. Amrita Mazumdar, Thierry Moreau, Sung Kim, Meghan Cowan, Armin Alaghi, Luis Ceze, Mark Oskin, Visvesh Sathe. In IISWC 2017.
Compilation and Hardware Support for Approximate Acceleration. Thierry Moreau, Adrian Sampson, Andre Baixo, Mark Wyse, Ben Ransford, Jacob Nelson, Luis Ceze, and Mark Oskin. In TECHCON 2015. [slides]
REACT: A Framework for Rapid Exploration of Approximate Computing Techniques. Mark Wyse, Andre Baixo, Thierry Moreau, Bill Zorn, James Bornholt, Adrian Sampson, Luis Ceze, and Mark Oskin. In WAX 2015 (co-located with PLDI).
SNNAP: Approximate Computing on Programmable SoCs via Neural Acceleration. Thierry Moreau, Mark Wyse, Jacob Nelson, Adrian Sampson, Hadi Esmaeilzadeh, Luis Ceze, and Mark Oskin. In HPCA 2015. [slides]
A Taxonomy of Approximate Computing Techniques. Thierry Moreau, Joshua San Miguel, Mark Wyse, James Bornholt, Armin Alaghi, Luis Ceze, Natalie Enright Jerger and Adrian Sampson. In IEEE Embedded Systems Letters, October 2017.
Approximate Computing: Making Mobile Systems More Efficient. Thierry Moreau, Adrian Sampson, and Luis Ceze. In IEEE Pervasive Computing, April/June 2015.
TVM: End-to-End Optimization Stack for Deep Learning. Tianqi Chen, Thierry Moreau, Ziheng Jiang, Haichen Shen, Eddie Yan, Leyuan Wang, Yuwei Hu, Luis Ceze, Carlos Guestrin, Arvind Krishnamurthy. ArXiv:1802.04799.
QAPPA: A Framework for Navigating Quality-Energy Tradeoffs with Arbitrary Quantization. Thierry Moreau, Felipe Augusto, Patrick Howe, Armin Alaghi, Luis Ceze. UW-CSE Tech Report (UW-CSE-17-03-02).
ACCEPT: A Programmer-Guided Compiler Framework for Practical Approximate Computing. Adrian Sampson, Andre Baixo, Benjamin Ransford, Thierry Moreau, Joshua Yip, Luis Ceze and Mark Oskin. UW-CSE Tech Report (UW-CSE-15-01-01).
CSE599S: Hardware/Software Co-Optimization for Machine Learning, Spring 2018, Co-Instructor, with Luis Ceze.
CSE548: Computer Architecture, Spring 2017, Head Teaching Assistant, with Luis Ceze.
CSE352: Hardware Design and Implementation, Spring 2013, Head Teaching Assistant, with Mark Oskin.
My work is generously supported by the Center for Future Architectures Research, the Qualcomm Innovation Fellowship, the Natural Sciences and Engineering Research Council of Canada, and the Weil Family Endowed Fellowship.