
Abstract: Powering your application with deep learning is no walk in the park, but it is certainly attainable with some tricks and good practice. Serving a deep learning model in a production system demands that the model be stable, reproducible, isolated, and packaged as a stand-alone unit. One possible solution is a containerized microservice.
Ideally, serving deep learning microservices should be quick and efficient, without having to dive deep into the underlying algorithms and their implementation. Too good to be true? Not anymore! Together, we will demystify the process of developing, training, and deploying deep learning models as web microservices.
We will kick off with an overview of how deep learning models are best published as Docker images on Docker Hub and prepared for deployment in local or cloud environments using Docker or Kubernetes.
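As a minimal sketch of what such a local deployment can look like, the snippet below pulls a published model image from Docker Hub and starts it with the Docker SDK for Python. The image name and port are assumptions modeled on a typical Model Asset Exchange image and are not part of this abstract; substitute the model you actually want to serve.

```python
# Minimal sketch: pull and run a published deep learning model image with the
# Docker SDK for Python. The image name "codait/max-object-detector" and port
# 5000 are assumptions; adjust them to the model you deploy.
import docker

client = docker.from_env()

# Pull the published model image from Docker Hub.
client.images.pull("codait/max-object-detector")

# Run the container and expose the model's REST API on localhost:5000.
container = client.containers.run(
    "codait/max-object-detector",
    ports={"5000/tcp": 5000},
    detach=True,
)
print(f"Model serving container started: {container.short_id}")
```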
We highlight the following benefits of such an approach:
- Standardized REST API implementation and application-friendly output format (JSON), as illustrated in the client sketch after this list
- Abstraction of the complex pre- and post-processing of model inputs and outputs
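To make these two benefits concrete, here is a minimal client sketch, assuming a model container is already serving on localhost:5000. The /model/predict endpoint and the "image" form field are assumptions based on a typical MAX-style microservice; consult the model's API documentation for the exact contract.

```python
# Minimal sketch of an application consuming a model microservice's REST API.
# The endpoint path and form field name are assumptions, not a documented
# contract for any specific model.
import requests

with open("dog.jpg", "rb") as image_file:
    response = requests.post(
        "http://localhost:5000/model/predict",
        files={"image": ("dog.jpg", image_file, "image/jpeg")},
    )
response.raise_for_status()

# The microservice returns application-friendly JSON, e.g. a list of predictions.
result = response.json()
for prediction in result.get("predictions", []):
    print(prediction)
```

Note that the client simply sends a raw image and receives plain JSON; all pre- and post-processing stays inside the container.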
We then demonstrate these concepts with the Model Asset Exchange, an open-source framework. The demo applications and the framework itself are all open source, and we conclude by inviting contributions and opening the gates for you to be a part of this exciting initiative!
Bio: Simon is a Developer Advocate at the Center for Open-Source Data & AI Technologies. Previously, he worked as a machine learning consultant in Europe, and before that he was with UC San Francisco. Simon holds a Master's degree in bioinformatics engineering and a Bachelor's degree in molecular biology.

Simon Plovyt
Title
Developer Advocate | Center for Open-Source Data & AI Technologies
Category
advanced-w19 | ai-for-engineers-w19 | beginner-w19 | deep-learning-w19 | intermediate-w19 | trainings-w19
