MIT Fast Code Seminar
Algorithms, Compilers, Accelerators, and Whatever It Takes

Description

The MIT Fast Code Seminar is a weekly seminar covering the latest research in the theory and practice of performance engineering. Topics of interest include, but are not limited to, algorithm design and implementation; techniques for improving parallelism and locality; high-performance programming languages and frameworks; compilers for parallel code; tools for analyzing performance; hardware techniques for improving performance; parallel and concurrent data structures; models and algorithms for emerging technologies; high-performance solutions for databases, operating systems, networking, and artificial intelligence; and just plain clever hacks. Beginning in Fall 2019, the seminar meets on Mondays from 2-3pm unless specified otherwise. To receive seminar announcements, please subscribe to this mailing list.

Click below to watch the livestream of the talk:
Kiva livestream
Star livestream

This seminar is cancelled until further notice.

Spring 2020 Schedule

Date | Location | Speaker | Affiliation | Title | Video
Tuesday 2/18/2020 | 32-D449 (Kiva) | S. Tucker Taft | AdaCore | Safe Parallel Programming -- ParaSail, Ada 202X, OpenMP, and Rust | Link
TBD | 32-D463 (Star) | Neil Thompson | MIT | How fast are Algorithms Improving?
Monday 4/27/2020 | 32-D463 (Star) | Thiago Teixeira | UIUC
Monday 5/4/2020 | 32-D463 (Star) | Ariya Shajii | MIT
Monday 5/11/2020 | 32-D463 (Star) | Stephen Chou | MIT

Previous Seminars

Date | Speaker | Affiliation | Title | Video
Tuesday 6/11/2019 | Charles Leiserson | MIT | The Resurgence of Software Performance Engineering | Link
Tuesday 7/9/2019 | Fredrik Kjolstad | MIT | The Sparse Tensor Algebra Compiler | Link
Tuesday 7/16/2019 | Song Han | MIT | AutoML for Efficiently Designing Efficient Neural Network Architectures | Link
Tuesday 7/23/2019 | Maurice Herlihy | Brown University | Speculative Concurrency for Ethereum Smart Contracts | Link
Tuesday 7/30/2019 | I-Ting Angelina Lee | Washington University in St. Louis | Advances in Determinacy Race Detection for Task-Parallel Code
Tuesday 8/6/2019 | Jeremy Kepner | MIT Lincoln Laboratory Supercomputing Center | Optimal system settings: How to not lose before you begin
Tuesday 8/20/2019 | Tao B. Schardl | MIT | Tapir: Embedding Recursive Fork-Join Parallelism into LLVM's Intermediate Representation
Tuesday 8/27/2019 | Laxman Dhulipala | Carnegie Mellon University | Algorithms and Systems for Processing Massive Static and Evolving Graphs
Monday 9/16/2019 | Bill Dally | NVIDIA Corporation and Stanford University | Domain-Specific Accelerators | Link
Monday 9/23/2019 | Riyadh Baghdadi | MIT | Tiramisu: A Polyhedral Compiler for Dense and Sparse Deep Learning | Link
Monday 9/30/2019 | Valentin Churavy | MIT | Julia: Making dynamic programs run fast
Monday 10/21/2019 | Alex Conway | Rutgers University | SplinterDB: Closing the Bandwidth Gap on NVMe | Link
Monday 11/4/2019 | Yunming Zhang | MIT | GraphIt: A Domain-Specific Language for Writing High-Performance Graph Applications | Link
Monday 11/18/2019 | Charith Mendis | MIT | How to Modernize Compiler Technology | Link
Monday 11/25/2019 | Bruce Maggs | Duke University and Emerald Innovations | A Speed-of-Light Internet Service Provider | Link

Organizers

Julian Shun (lead organizer)
Saman Amarasinghe
Adam Belay
Charles Leiserson
Tao B. Schardl