MIT Fast Code Seminar
Algorithms, Compilers, Accelerators, and Whatever It Takes

Description

The MIT Fast Code Seminar is a weekly seminar covering the latest research in the theory and practice of performance engineering. Topics of interest include, but are not limited to: algorithm design and implementation; techniques for improving parallelism and locality; high-performance programming languages and frameworks; compilers for parallel code; tools for analyzing performance; hardware techniques for improving performance; parallel and concurrent data structures; models and algorithms for emerging technologies; high-performance solutions for databases, operating systems, networking, and artificial intelligence; and just plain clever hacks. Beginning in Fall 2019, the seminar meets on Mondays from 2-3pm unless otherwise specified. To receive seminar announcements, please subscribe to this mailing list.

The seminar is currently held via Zoom.

Summer 2020 Schedule

Date | Speaker | Affiliation | Title | Video
Monday 6/1/2020 | John Owens | UC Davis | Dynamic Data Structures on the GPU | Link
Monday 6/8/2020 | David Bader | New Jersey Institute of Technology | Solving Global Grand Challenges with High Performance Data Analytics | Link
Monday 6/15/2020 | Wen-Mei Hwu | UIUC | Fast GPU Code for Graphs | Link, Slides
Monday 6/22/2020 | Aydin Buluc | Lawrence Berkeley National Lab/UC Berkeley | Sparse Matrices Beyond Solvers: Graphs, Biology, and Machine Learning | Link, Slides
Monday 6/29/2020 | Umit Catalyurek | Georgia Tech | Fast graph analytics on heterogeneous and deep-memory architectures | Link
Monday 7/13/2020 | Michael Axtmann and Peter Sanders | Karlsruhe Institute of Technology | Engineering Scalable Parallel Sorting Algorithms |
Monday 7/20/2020 | Stephen Chou | MIT | Format Abstractions for Sparse Tensor Algebra Compilation |
Monday 7/27/2020 | Larry Rudolph | Two Sigma | |
Monday 8/3/2020 | Kathy Yelick | UC Berkeley/Lawrence Berkeley National Lab | |

Previous Seminars

Date | Speaker | Affiliation | Title | Video
Tuesday 6/11/2019 | Charles Leiserson | MIT | The Resurgence of Software Performance Engineering | Link
Tuesday 7/9/2019 | Fredrik Kjolstad | MIT | The Sparse Tensor Algebra Compiler | Link
Tuesday 7/16/2019 | Song Han | MIT | AutoML for Efficiently Designing Efficient Neural Network Architectures | Link
Tuesday 7/23/2019 | Maurice Herlihy | Brown University | Speculative Concurrency for Ethereum Smart Contracts | Link
Tuesday 7/30/2019 | I-Ting Angelina Lee | Washington University in St. Louis | Advances in Determinacy Race Detection for Task-Parallel Code |
Tuesday 8/6/2019 | Jeremy Kepner | MIT Lincoln Laboratory Supercomputing Center | Optimal system settings: How to not lose before you begin |
Tuesday 8/20/2019 | Tao B. Schardl | MIT | Tapir: Embedding Recursive Fork-Join Parallelism into LLVM's Intermediate Representation |
Tuesday 8/27/2019 | Laxman Dhulipala | Carnegie Mellon University | Algorithms and Systems for Processing Massive Static and Evolving Graphs |
Monday 9/16/2019 | Bill Dally | NVIDIA Corporation and Stanford University | Domain-Specific Accelerators | Link
Monday 9/23/2019 | Riyadh Baghdadi | MIT | Tiramisu: A Polyhedral Compiler for Dense and Sparse Deep Learning | Link
Monday 9/30/2019 | Valentin Churavy | MIT | Julia: Making dynamic programs run fast |
Monday 10/21/2019 | Alex Conway | Rutgers University | SplinterDB: Closing the Bandwidth Gap on NVMe | Link
Monday 11/4/2019 | Yunming Zhang | MIT | GraphIt: A Domain-Specific Language for Writing High-Performance Graph Applications | Link
Monday 11/18/2019 | Charith Mendis | MIT | How to Modernize Compiler Technology | Link
Monday 11/25/2019 | Bruce Maggs | Duke University and Emerald Innovations | A Speed-of-Light Internet Service Provider | Link
Tuesday 2/18/2020 | S. Tucker Taft | AdaCore | Safe Parallel Programming -- ParaSail, Ada 202X, OpenMP, and Rust | Link
Monday 4/20/2020 | Neil Thompson & Yash Sherry | MIT | How fast are Algorithms Improving? |
Monday 5/4/2020 | Ariya Shajii | MIT | Seq: a high-performance language for bioinformatics |
Monday 5/11/2020 | Alex Aiken | Stanford | Program Optimization for Machine Learning | Link

Organizers

Julian Shun (lead organizer)
Saman Amarasinghe
Adam Belay
Charles Leiserson
Tao B. Schardl