Generative AI in Practice: How to build your own Stable Diffusion API

Abstract: 

In this hands-on workshop, I will show you a straightforward method for deploying your own generative AI model in production using model checkpoints, open-source libraries from Hugging Face, and MLOps deployment pipelines.

Managed APIs are the easiest way to get up and running with the newest AI models. But knowing how to deploy your own API microservice can open up opportunities in your AI workflow and help you meet your software and data requirements.

By the end of this workshop, you will have:

- your own API endpoint running a generative AI model (Stable Diffusion), as sketched below
- a template of a model deployment pipeline that can be reused and extended for your own use case
- conceptual knowledge of the end-to-end AI development lifecycle
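
As a rough illustration of the kind of endpoint the workshop builds toward, here is a minimal sketch that serves a Stable Diffusion checkpoint with Hugging Face Diffusers behind a FastAPI route. The checkpoint name, route path, and serving framework are illustrative assumptions, not necessarily the stack used in the workshop, and the sketch assumes a CUDA-capable GPU.

```python
import io

import torch
from diffusers import StableDiffusionPipeline
from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()

# Load the model checkpoint once at startup (illustrative checkpoint; assumes a GPU).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")


@app.get("/generate")
def generate(prompt: str):
    """Generate an image from a text prompt and return it as a PNG."""
    image = pipe(prompt).images[0]
    buffer = io.BytesIO()
    image.save(buffer, format="PNG")
    buffer.seek(0)
    return StreamingResponse(buffer, media_type="image/png")
```

Run locally with `uvicorn app:app` and request, for example, `/generate?prompt=a watercolor of a lighthouse` to receive a generated PNG. A production deployment pipeline would add batching, authentication, and model-version management on top of this core.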

Bio: 

Tim leads Graphcore’s Cloud Solutions product, helping AI & ML software development teams build AI products and deploy ML capabilities in production. Tim has worn many hats in his career, from research engineer and data scientist to leading MLOps teams. Along the way, he’s gained experience across all stages of the development lifecycle, taking AI applications from experimentation to deployment.
