Scaling AI Workloads with the Ray Ecosystem

Abstract: 

Today, AI applications are becoming pervasive across every sector of the industry. Driven by a few fundamental trends, this growth shows no sign of slowing down; it is making distributed computing at scale both a norm and a necessity. But distributed computing is not easy. Building distributed applications requires deep expertise, putting it out of reach for many developers, and the current solutions to these challenges come with their own shortcomings and tradeoffs.

Ray aims to address these shortcomings. As a general-purpose distributed computing framework, it makes programming a cluster of machines as easy as programming a laptop, thereby enabling many more developers and practitioners to take advantage of the advances in cloud computing and scale their machine learning workloads to solve harder problems, without needing to be experts in distributed systems. Besides a core general-purpose distributed-compute system, Ray encompasses a collection of state-of-the-art native libraries targeting scalable machine learning. These include libraries for hyperparameter tuning, distributed training, reinforcement learning, model serving, and last-mile ML data pre-processing and ingestion for model training.
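To give a flavor of those core programming primitives, here is a minimal sketch (the function and values are illustrative, not taken from the talk) of how a plain Python function becomes a distributed Ray task that runs the same way on a laptop or on a cluster:

import ray

ray.init()  # starts Ray locally, or connects to an existing cluster

@ray.remote
def square(x):
    # Runs as a Ray task; Ray schedules tasks across the available CPUs.
    return x * x

# Launch tasks in parallel and gather the results.
futures = [square.remote(i) for i in range(8)]
print(ray.get(futures))  # [0, 1, 4, 9, 16, 25, 36, 49]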

This talk will give an overview of Ray; survey its ecosystem of both native and integrated ML libraries; and discuss key applications and developments in the Ray ecosystem, drawing on lessons from discussions with practitioners over the years of developing Ray with the community and at Anyscale. In particular, we will demonstrate how you can easily scale three common ML workloads, from your laptop to a cluster, with Ray’s native libraries: training, hyperparameter tuning and optimization (HPO), and large-scale batch inference.

Using the popular XGBoost library for classification, we will show how you can scale model training, hyperparameter tuning, and inference from a laptop or single node to a Ray cluster, with a tangible performance difference when using Ray.
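As a rough illustration of what the HPO portion looks like, here is a minimal sketch of tuning XGBoost with Ray Tune. It is not the speaker's exact code: the dataset, search space, and reporting call are assumptions, and the exact Tune API varies across Ray versions.

import xgboost as xgb
from ray import tune
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)

def train_xgboost(config):
    # Each Ray Tune trial trains one XGBoost model with a sampled config.
    dtrain = xgb.DMatrix(X_train, label=y_train)
    dtest = xgb.DMatrix(X_test, label=y_test)
    results = {}
    xgb.train(config, dtrain, num_boost_round=50,
              evals=[(dtest, "eval")], evals_result=results)
    tune.report(error=results["eval"]["error"][-1])

analysis = tune.run(
    train_xgboost,
    config={
        "objective": "binary:logistic",
        "eval_metric": "error",
        "max_depth": tune.randint(2, 10),
        "eta": tune.loguniform(1e-3, 3e-1),
    },
    num_samples=20,  # trials are scheduled in parallel across the cluster
)
print(analysis.get_best_config(metric="error", mode="min"))

The same script runs unchanged on a laptop or a Ray cluster; scaling the search out is largely a matter of raising num_samples and letting Ray Tune schedule the trials across the available nodes.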

The takeaways from this talk are:

Why distributed computing has become a norm and a necessity, not an exception
Learn Ray’s architecture, core concepts, and programming primitives
Understand Ray’s ecosystem of scalable ML libraries
Easily transition your workloads from a laptop to a Ray cluster
Scale three ML workloads using Ray’s native libraries:
Training on a single node vs. a Ray cluster, using XGBoost with and without Ray
Hyperparameter tuning (HPO), using XGBoost with Ray Tune
Batch inference at scale, using XGBoost with and without Ray (see the sketch after this list)
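For the batch-inference workload referenced above, a minimal sketch using plain Ray tasks might look like the following; the model file, shard count, and feature matrix are placeholders, not the talk's actual setup.

import numpy as np
import ray
import xgboost as xgb

ray.init()

# Assumes a previously trained model saved to disk (path is a placeholder).
booster = xgb.Booster()
booster.load_model("model.json")
booster_ref = ray.put(booster)  # place the model in Ray's object store once

@ray.remote
def predict_shard(model, shard):
    # Each task scores one shard of the feature matrix in parallel.
    return model.predict(xgb.DMatrix(shard))

features = np.random.rand(1_000_000, 20)  # stand-in for the real feature matrix
shards = np.array_split(features, 16)     # one shard per parallel task
predictions = np.concatenate(
    ray.get([predict_shard.remote(booster_ref, shard) for shard in shards])
)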

Bio: 

Robert Nishihara is one of the creators of Ray, a distributed system for scaling Python and machine learning applications. He is one of the co-founders and CEO of Anyscale, which is the company behind Ray. He did his PhD in machine learning and distributed systems in the computer science department at UC Berkeley. Before that, he majored in math at Harvard.
