
Abstract: PyTorch has become the dominant tool used in machine learning research. It comes bundled with a wealth of features, and while the basics are well covered in many tutorials, some of the more advanced functionality and workflows deserve more attention than they usually get. In this session, I’ll aim to strengthen your grasp of PyTorch’s fundamentals, while also highlighting the parts that are often overlooked or misunderstood. Note that I will assume knowledge of some basic abstractions, but nothing beyond what can be found in the official “Getting started” tutorial.
As for a more concrete plan, the session will be centered around extensibility, performance and model packaging, and will be presented as a case study of potential modifications that could be applied to the OpenNMT-py repository. Because performance will be a recurring concern, we’ll kick off the session with a short description of the library’s execution model and best practices for benchmarking PyTorch code. Then, we’ll move on to custom extensions, a feature that allows loading plugins at run-time to, for example, expose third-party libraries or hand-written C++ or CUDA code. Finally, we will talk a fair amount about the JIT compiler, which aims both to optimize the run-time performance of your models and to enable easy export and later deployment on mobile devices. The sketches below give a brief preview of each of these topics.
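To give a flavor of the benchmarking pitfalls: CUDA kernels are launched asynchronously with respect to the host, so timing code that forgets to synchronize measures little more than kernel launch overhead. A minimal sketch of a more careful measurement, assuming a CUDA-capable machine (the model and sizes are made up for illustration):

```python
import time
import torch

# A toy workload, invented for this example.
model = torch.nn.Linear(1024, 1024).cuda()
x = torch.randn(64, 1024, device="cuda")

# Warm up: the first few iterations pay one-time costs (allocator
# warm-up, kernel selection) that would otherwise skew the numbers.
for _ in range(10):
    model(x)

# CUDA calls return before the GPU finishes its work, so synchronize
# on both ends of the timed region before reading the clock.
torch.cuda.synchronize()
start = time.perf_counter()
for _ in range(100):
    model(x)
torch.cuda.synchronize()
print(f"{(time.perf_counter() - start) / 100 * 1e3:.3f} ms / iteration")
```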
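As for custom extensions, torch.utils.cpp_extension can compile and load hand-written C++ at run-time. A rough sketch of the mechanism; the scaled_add operator and its body are invented for this example:

```python
import torch
from torch.utils.cpp_extension import load_inline

# A toy C++ operator; load_inline compiles it with the system compiler
# on first use (a working C++ toolchain is required) and caches the
# resulting shared library.
cpp_source = """
torch::Tensor scaled_add(torch::Tensor a, torch::Tensor b, double alpha) {
    return a + alpha * b;
}
"""

ext = load_inline(
    name="demo_extension",     # arbitrary module name
    cpp_sources=cpp_source,
    functions=["scaled_add"],  # generates the Python binding
)

a, b = torch.randn(3), torch.randn(3)
print(ext.scaled_add(a, b, 2.0))  # equivalent to a + 2.0 * b
```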
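Finally, a small sketch of the JIT workflow: scripting a module produces a self-contained artifact that can be optimized and later loaded without a Python interpreter, e.g. from C++ or a mobile runtime. The module here is a made-up toy:

```python
import torch

class ToyCell(torch.nn.Module):
    # A tiny recurrent-style cell, invented for this example.
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 4)

    def forward(self, x, h):
        return torch.tanh(self.linear(x) + h)

# torch.jit.script compiles the module into TorchScript...
scripted = torch.jit.script(ToyCell())
scripted.save("toy_cell.pt")

# ...and the saved archive round-trips through torch.jit.load (the same
# file can also be loaded from C++ via torch::jit::load).
loaded = torch.jit.load("toy_cell.pt")
x, h = torch.rand(3, 4), torch.rand(3, 4)
print(loaded(x, h))
```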
● PyTorch
https://drive.google.com/open?id=1k0qAboExzP0oAG9rNzE1F7w9gxvuJymu
Bio: Adam is an author and maintainer of PyTorch. He has worked with large organizations like Facebook AI Research, NVIDIA and Google, despite having graduated from the Master’s program in Computer Science at the University of Warsaw only last year. Currently, he is also finishing his second major in Mathematics. His general interests include graph theory, programming languages, numerical computing and machine learning.