Abstract: Tensor and linear algebra are powerful tools with applications in data analytics, machine learning, science, and engineering. The massive growth of data in these applications makes performance critical. For applications that use sparse tensors, where most components are zero, programmers must choose between libraries with hand-optimized implementations of select operations and generalized software systems with poor performance. In this talk, I will present compiler abstractions and techniques that combine tensor expressions with specifications of sparse irregular tensor data structures to produce efficient parallel source code. I will show solutions to the three main problems of sparse tensor algebra compilation: how to represent tensor data structures, how to characterize sparse iteration spaces, and how to generate code to coiterate over irregular data structures. I will also show how to optimize sparse tensor algebra code in a compiler and how to programmatically map sparse data to tensors. We have implemented these techniques in the TACO sparse tensor algebra compiler. It is the first compiler to generate sparse code for any basic tensor expression on many sparse tensor representations. The generated code matches or exceeds the performance of hand-optimized libraries while generalizing to any expression and many user-specified irregular data structures.
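To give a flavor of the kind of kernel such a compiler emits, the following is an illustrative sketch (not TACO's actual output) of sparse matrix-vector multiplication over a CSR (compressed sparse row) matrix. The array names `pos`, `crd`, and `vals` are assumptions borrowed loosely from TACO-style terminology; the key point is that the loop iterates only over stored nonzeros rather than the full dense iteration space.

```python
# Illustrative sketch: the shape of code a sparse tensor algebra compiler
# might generate for y[i] = sum_j A[i][j] * x[j], with A stored in CSR.
# pos, crd, vals are assumed names for the CSR row pointers, column
# coordinates, and nonzero values.

def spmv_csr(pos, crd, vals, x, num_rows):
    """Multiply a CSR matrix by a dense vector x.

    pos  -- row pointers, length num_rows + 1
    crd  -- column coordinates of the stored nonzeros
    vals -- values of the stored nonzeros
    """
    y = [0.0] * num_rows
    for i in range(num_rows):
        # Visit only the nonzeros of row i, skipping all zeros.
        for p in range(pos[i], pos[i + 1]):
            y[i] += vals[p] * x[crd[p]]
    return y

# A = [[1, 0, 2],
#      [0, 0, 3],
#      [4, 5, 0]]
pos = [0, 2, 3, 5]
crd = [0, 2, 2, 0, 1]
vals = [1.0, 2.0, 3.0, 4.0, 5.0]
x = [1.0, 2.0, 3.0]
print(spmv_csr(pos, crd, vals, x, 3))  # [7.0, 9.0, 14.0]
```

The harder problems the talk addresses (coiterating over two or more irregular structures at once, and doing so for arbitrary expressions and formats) generalize this pattern.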

Bio: Fredrik Kjolstad is a PhD student at MIT, working with Saman Amarasinghe on topics in compilers and programming languages. He will join Stanford as an Assistant Professor in 2020. He received his master's degree from the University of Illinois at Urbana-Champaign and his bachelor's degree from the Norwegian University of Science and Technology in Gjøvik. He has received the Eureka and Rosing prizes for his bachelor's project, the Adobe Fellowship, a best poster award, and two best paper awards.