MLOps in DL model development

Abstract: 

The Deep Learning boom is one of the most exciting things to happen in software development in the last decade. The field, though, is young and immature, and thus often lacks established practices, development guidelines, and tests. This often leads to an inability to reproduce the results of papers and repositories, and to models that run well only in lab conditions, but not in real-life use cases.

We believe that good processes matter in both research and production. Quite possibly, they are the single thing that determines whether an Artificial Intelligence model makes it into a real-world solution or remains just a nice concept that is not applicable anywhere. In this talk, Anna will share how her team implements good processes for Neural Network research and production.

The processes and practices cover datasets and data splits, model and metric selection, ways to think about the problem, and ways to store revisions of everything. Anna's team uses a variety of tools, some well-known in other software areas (like Docker), and some specific to Machine Learning and AI (like DVC or Weights & Biases). They also follow practices very similar to those in more established software areas - for example, in DevOps.

Anna will share how they apply these tools and practices, along with thoughts and experience from her 10-year journey in Computer Vision and Deep Learning at companies like Intel and in major open-source projects like OpenCV, with millions of users all around the world.

Bio: 

Anna is CTO of OpenCV.AI - the for-profit arm of OpenCV.org, the most popular Computer Vision library in the world. Anna is an expert in Deep Learning for Computer Vision with 10 years of experience in the industry. Previously, Anna created open-source optimized Machine Learning libraries and worked on state-of-the-art Deep Learning algorithms for autonomous driving, retail, medicine, and AR, most of them specifically optimized for fast inference on small edge devices.