
Abstract: In recent years, reinforcement learning (RL) has become a powerful tool in the machine learning toolbox. Its ability to produce end-to-end decision-making solutions by learning through interaction with a well-defined problem environment makes RL particularly attractive as an alternative to classic supervised learning methods. However, several issues remain problematic when using RL to solve real-world industry problems: 1) RL algorithms are difficult to understand and therefore hard to customize and tune, 2) experiments need to run at scale in order to yield useful results within a reasonable time, and 3) a safe and fast simulator of the particular problem often does not exist, even though historical sensor and actuator data are abundantly available.
In this tutorial, we will introduce RLlib (http://rllib.io/), an open-source RL library with a proven track record of solving real-life industry problems at scale. We will walk through different industrial RL use cases and the solutions RLlib offers for each. In particular, we will build a recommender system using offline RL, show how to train policies that master complex multi-agent games, and demonstrate how you can connect external simulators to RLlib at scale for faster learning.
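To give a concrete feel for the workflow the tutorial builds on, here is a minimal sketch of RLlib's config-and-train loop. It is not code from the tutorial itself: it assumes a Ray 2.x-era installation with the builder-style AlgorithmConfig API, and the environment name ("CartPole-v1"), worker count, and hyperparameters are illustrative placeholders.

    # Minimal RLlib training-loop sketch (assumes Ray 2.x; all values are placeholders).
    from ray.rllib.algorithms.ppo import PPOConfig

    config = (
        PPOConfig()
        .environment("CartPole-v1")        # swap in your own (possibly multi-agent) env
        .rollouts(num_rollout_workers=2)   # scale sampling across parallel workers
        .training(lr=5e-5, train_batch_size=4000)
    )

    algo = config.build()
    for i in range(5):
        result = algo.train()
        # Reward metric key names can differ slightly across Ray versions.
        print(i, result.get("episode_reward_mean"))
    algo.stop()

Offline datasets, multi-agent policy mappings, and external-environment connections are configured through this same config object, which is what lets the three use cases above reuse largely the same training workflow.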
This talk is targeted towards data scientists, research engineers, and software developers who are already familiar with machine learning concepts.
Bio: Avnish Narayan is an ML Engineer at Anyscale, where he works on RLlib. He's passionate about exploring where RL can improve upon existing solutions in industrial applications. He previously received his MS in Computer Science from USC, where he researched applications of RL to robotic manipulation problems.