Title:
Tiramisu: A Polyhedral Compiler for Dense and Sparse Deep Learning
Abstract:
Tiramisu is a polyhedral compiler for deep learning. It has two unique
features: (1) it is the first sparse DNN compiler; and (2) it can
express and optimize general RNNs (Recurrent Neural Networks). Tiramisu
relies on a flexible representation based on the polyhedral model and
has a rich scheduling language allowing fine-grained control of
optimizations. We show that Tiramisu matches or outperforms Intel
MKL-DNN, with speedups of up to 5x on sparse DNNs. We also show that
Tiramisu enables new capabilities such as wavefront parallelization for RNNs.
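
To give a sense of what wavefront parallelization means here, the sketch
below skews a toy 2D recurrence (time steps x layers, as in a stacked RNN)
so that cells on the same anti-diagonal can be computed in parallel. It is
a minimal illustration in plain C++/OpenMP, not Tiramisu's own API; the
sizes and the cell update are placeholders.

#include <vector>
#include <algorithm>
#include <cstdio>

int main() {
    const int T = 8, L = 4;  // time steps and layers (illustrative sizes)
    std::vector<std::vector<float>> h(T, std::vector<float>(L, 0.0f));

    // Each cell h[t][l] depends on h[t-1][l] (previous time step) and
    // h[t][l-1] (previous layer), so cells with the same d = t + l
    // have no dependences among them and can run in parallel.
    for (int d = 0; d < T + L - 1; ++d) {
        int t_lo = std::max(0, d - (L - 1));
        int t_hi = std::min(d, T - 1);
        #pragma omp parallel for
        for (int t = t_lo; t <= t_hi; ++t) {
            int l = d - t;
            float prev_time  = (t > 0) ? h[t - 1][l] : 1.0f;
            float prev_layer = (l > 0) ? h[t][l - 1] : 1.0f;
            h[t][l] = 0.5f * (prev_time + prev_layer);  // stand-in for the RNN cell
        }
    }
    std::printf("h[T-1][L-1] = %f\n", h[T - 1][L - 1]);
    return 0;
}

Compile with -fopenmp to enable the parallel loop; without it, the pragma
is ignored and the recurrence runs serially with the same result.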
Bio:
Riyadh Baghdadi is a postdoctoral associate at CSAIL. He received his PhD
from ENS Paris. He works on compiler optimization and code generation
for high-performance architectures. Recently, he has been
interested in exploring the intersection of compilers and deep learning.