
Abstract: We introduce algorithms that use predictions from machine learning applied to the input to circumvent worst-case analysis. We aim for algorithms that have near-optimal performance when these predictions are good, but recover the prediction-less worst-case behavior when the predictions have large errors. We look at examples of recent results in job scheduling, caching, and network measurement to show how predictions can be used effectively while still allowing for theoretical guarantees.
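
To make the idea concrete, below is a minimal, hypothetical Python sketch of caching with predictions: the cache evicts the item whose predicted next request is furthest in the future, mimicking Belady's offline-optimal rule when the predictor is accurate. The predictor interface and names are illustrative assumptions rather than the specific algorithms from the talk, and the robustness machinery that bounds the damage from bad predictions (e.g., marker-style randomization) is omitted for brevity.

# Illustrative sketch only: a cache that trusts a learned predictor of the
# next request time for each item. With perfect predictions, evicting the
# item needed furthest in the future matches Belady's optimal offline policy.
from typing import Callable, Hashable, Set

class PredictiveCache:
    def __init__(self, capacity: int,
                 predict_next_use: Callable[[Hashable, int], float]):
        self.capacity = capacity
        self.predict_next_use = predict_next_use  # assumed learned predictor
        self.items: Set[Hashable] = set()

    def access(self, key: Hashable, time: int) -> bool:
        """Serve a request at the given time; return True on a cache hit."""
        if key in self.items:
            return True
        if len(self.items) >= self.capacity:
            # Evict the item predicted to be requested furthest in the future.
            victim = max(self.items,
                         key=lambda k: self.predict_next_use(k, time))
            self.items.remove(victim)
        self.items.add(key)
        return False

if __name__ == "__main__":
    # Toy predictor: pretend item k is next requested at time (time + k).
    cache = PredictiveCache(capacity=2,
                            predict_next_use=lambda k, t: t + k)
    hits = sum(cache.access(k, t) for t, k in enumerate([1, 2, 3, 1, 2, 1]))
    print(f"hits: {hits}")
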
Bio: Michael Mitzenmacher is a Professor of Computer Science in the School of Engineering and Applied Sciences at Harvard University. Michael
has authored or co-authored over 200 conference and journal publications on a variety of topics, including algorithms for the Internet, efficient hash-based data structures, erasure and error-correcting codes, power laws, and compression. He is interested both in algorithms for AI applications and in how predictors from AI systems can yield better algorithms with rigorous performance bounds. He is an ACM Fellow and an IEEE Fellow. He is the author of a widely used textbook on randomized algorithms and probabilistic techniques in computer science, published by Cambridge University Press.
Michael Mitzenmacher graduated summa cum laude with a B.A. in mathematics and computer science from Harvard in 1991. After studying
mathematics for a year in Cambridge, England, on the Churchill Scholarship, he obtained his Ph.D. in computer science at U.C. Berkeley in 1996. He then worked at Digital Systems Research Center until joining the Harvard faculty in 1999. He served as chair for computer science from 2010 to 2013 and as co-chair in the 2018-2019 academic year.

Michael Mitzenmacher, PhD
Professor of Computer Science | Harvard University
