Abstract: In this hands-on workshop I will show you a straightforward method for deploying your own generative AI model in production using model checkpoints, open-source libraries from the Hugging Face ecosystem, and MLOps deployment pipelines.
Managed APIs are the easiest way to get up and running with the newest AI models. But knowing how to deploy your own API microservice can open up many opportunities in your AI workflow and help you meet your own software and data requirements.
By the end of this workshop, you will have:
- your own API endpoint running a generative AI model (Stable Diffusion)
- a model deployment pipeline template you can reuse and extend for your own use case
- conceptual knowledge of the end-to-end AI development lifecycle
Bio: Tim leads Graphcore’s Cloud Solutions product, helping AI and ML software development teams build AI products and deploy ML capabilities in production. Tim has worn many hats in his career, from research engineer and data scientist to MLOps team lead. Along the way, he has gained experience across all stages of the development lifecycle, taking AI applications from experimentation to deployment.
Director of Product, AI Cloud Solutions | Graphcore