
Abstract: As soon as your machine learning model goes into production, everything changes.
You will now need to continuously monitor your model's performance, evaluate whether it is still sufficient, and react accordingly.
This, however, can be a challenge when you have no reliable ground truth against which to evaluate its performance.
Typically, you will need to fall back on a surrogate metric that you can actually measure and that is correlated with your model's performance.
What these metrics can be and how to track and monitor them is the topic of this workshop.
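As a toy illustration of such a surrogate metric, one option is to watch the distribution of the model's own predictions and flag when it departs from a trusted reference window. The sketch below is not workshop code; the reference and current score arrays are hypothetical stand-ins for logged predictions, and the comparison simply uses SciPy's two-sample Kolmogorov-Smirnov test.

```python
import numpy as np
from scipy.stats import ks_2samp

# Hypothetical data: scores the model produced during a trusted reference
# period vs. scores from the most recent production window.
reference_scores = np.random.beta(2, 5, size=5_000)  # stand-in for logged predictions
current_scores = np.random.beta(2, 3, size=1_000)    # stand-in for live predictions

# Two-sample Kolmogorov-Smirnov test: a small p-value suggests the prediction
# distribution has shifted, even though no ground-truth labels are available yet.
statistic, p_value = ks_2samp(reference_scores, current_scores)
if p_value < 0.01:
    print(f"Possible prediction drift (KS statistic={statistic:.3f}, p={p_value:.4f})")
else:
    print("No significant shift in the prediction distribution detected")
```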
This workshop consists of two parts:
- Part I: Simulating production for an existing machine learning model and detecting drift
  - an OpenAPI machine learning service will be provided
  - we will use Evidently, Prometheus and Grafana to monitor and detect the drift (a minimal sketch of such a setup follows this list)
- Part II: Interpreting and analyzing drift and deciding what to do about it
  - once you have detected drift, you will need to interpret what happened and decide how to react
  - one of the steps you can take is to retrain your model on new data
  - we might also need to rethink the model architecture or the data we are using
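To give a flavour of Part I, here is a minimal sketch of how drift detection with Evidently could be wired up to Prometheus. It is an illustration rather than the exact workshop code: the two CSV files are hypothetical, the gauge name and port are arbitrary, and the imports assume the Evidently 0.4-style API (import paths have changed in newer releases).

```python
import time
import pandas as pd
from evidently.report import Report
from evidently.metric_preset import DataDriftPreset
from prometheus_client import Gauge, start_http_server

# Hypothetical data: a trusted reference window and the latest production window.
reference_df = pd.read_csv("reference_window.csv")
current_df = pd.read_csv("current_window.csv")

# Run Evidently's data drift preset, which compares feature distributions
# column by column and summarises how many of them have drifted.
report = Report(metrics=[DataDriftPreset()])
report.run(reference_data=reference_df, current_data=current_df)
report.save_html("drift_report.html")  # human-readable report for later analysis

# Pull the dataset-level drift share out of the result; the exact dict layout
# depends on the Evidently version, this matches the 0.4.x series.
result = report.as_dict()
drift_share = next(
    m["result"]["share_of_drifted_columns"]
    for m in result["metrics"]
    if m["metric"] == "DatasetDriftMetric"
)

# Expose the drift share on /metrics so Prometheus can scrape it and
# Grafana can visualise and alert on it.
drift_gauge = Gauge("data_drift_share", "Share of drifted feature columns")
drift_gauge.set(drift_share)
start_http_server(8000)
while True:
    time.sleep(30)  # keep the endpoint alive in this toy example
```

In the workshop itself, a scheduled job would recompute the report on each new batch of production data instead of running once.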
Our objective is to equip you with the essential knowledge and practical tools to proficiently manage
your machine learning models in a real-world production environment.
All services will be provided as Docker images and can be run locally on your machine.
To have the full experience, you should have Docker and Docker Compose installed.
As a fallback, the notebooks will also be provided on Colab, so you can run them without a local installation.
Bio: Oliver Zeigermann has been developing software with different approaches and programming languages for more than three decades. Over the past decade, he has focused on machine learning and its interactions with humans.

Oliver Zeigermann
Machine Learning Architect (Freelance)
