Abstract: PPO is the most common reinforcement learning algorithm when sampling from a simulation is fast and inexpensive. Its magic lies in how it translates hard constraints into losses, which allows PPO to be built from standard TensorFlow components. TF-Agents does exactly that, letting us experiment with the most interesting parameters without implementing PPO from scratch (even though you will have an idea of how to do that after this workshop).
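To make the "hard constraint turned into a loss" idea concrete, here is a minimal, hypothetical sketch of PPO's clipped surrogate objective for a single sample. It is illustrative only; real implementations such as TF-Agents' PPOAgent vectorize this over whole batches of trajectories and combine it with value and entropy losses.

```python
def clipped_surrogate(ratio, advantage, epsilon=0.2):
    """PPO-Clip objective for one (state, action) sample.

    ratio: pi_new(a|s) / pi_old(a|s), the probability ratio.
    advantage: estimated advantage A(s, a).
    epsilon: clip range; replaces the hard trust-region constraint
             by keeping the update close to the old policy.
    """
    clipped = max(min(ratio, 1 + epsilon), 1 - epsilon)
    # Taking the minimum makes the objective pessimistic: a large
    # policy change cannot be rewarded beyond the clipped value.
    return min(ratio * advantage, clipped * advantage)

# If the new policy moves too far (ratio = 1.5) on a positive advantage,
# the incentive is capped at (1 + 0.2) * 2.0 = 2.4 instead of 3.0:
print(clipped_surrogate(1.5, 2.0))
```

In the workshop, TF-Agents supplies this loss for us, so we only tune parameters like the clip range rather than re-deriving the objective.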
In this slideless workshop, we will work our way through a Colab notebook. Along the way, you will learn the basic ideas of the PPO reinforcement learning algorithm and how to apply it to a route planning problem. You should be familiar with the basics of machine learning and notebooks, and ideally have worked with TensorFlow 2 before. To run the Colab notebooks, you will need a laptop with a current Chrome browser and a Google account. If you attended "Reinforcement Learning with TF-Agents & TensorFlow 2.0: Hands On" at a previous ODSC conference, this workshop is a natural sequel.
Bio: Oliver is a software developer from Hamburg, Germany, and has been a practitioner for more than three decades. He specializes in frontend development and machine learning, and is the author of many video courses and textbooks.