Ray: A System for High-performance, Distributed Python Applications


Ray is an open-source distributed computing framework from U.C. Berkeley's RISELab that scales Python applications from a laptop to a cluster with minimal code changes, with an emphasis on ML/AI systems such as reinforcement learning. It is now used in many production deployments.

In this tutorial, we'll use several hands-on examples to explore the problems Ray solves and the features it provides, such as the rapid distribution, scheduling, and execution of stateless "tasks" and the management of distributed state, a form of stateful "serverless" computing. We'll see how Ray is used in several ML libraries and play with examples that use those libraries. You'll learn when to use Ray and how to use it in your projects.


Dean Wampler (@deanwampler) is an expert in streaming data systems, focusing on applications of ML/AI. He is Head of Evangelism at Anyscale.io, which is focused on distributed Python for ML/AI. Previously, he was an engineering VP at Lightbend, where he led the development of Lightbend CloudFlow, an integrated system for building and running streaming data applications with Akka Streams, Apache Spark, Apache Flink, and Apache Kafka. Dean has written books for O'Reilly and contributed to several open source projects. He is a frequent conference speaker and a co-organizer of several conferences and user groups in Chicago. Dean has a Ph.D. in Physics from the University of Washington.