
Abstract: Most Machine Learning (ML) models are conceived on a whiteboard or a napkin, and born on a laptop. Quickly enough, developers and data scientists face the double challenge of training and deploying them in a production environment. In this talk, rooted in personal experience and customer discussions, we’ll discuss how to gradually scale real-life ML from humble beginnings to world domination, always taking justified and reasonable steps along the way: no over-engineering, no Hype Driven Development, no “why don’t you just use XYZ?”. Using AWS services as a backdrop, we’ll cover virtual machines, containers and managed ML services, and we’ll compare pros, cons and trade-offs across a wide spectrum of topics: scalability of course, but also cost, security, automation, and more. Applicable to pretty much any language and framework, this session should appeal both to beginners who want to start in the right direction and to experienced practitioners looking to scale to the next level.
Bio: Before joining Amazon Web Services, Julien served for 10 years as CTO/VP Engineering at top-tier web startups. As a result, he’s particularly interested in all things architecture, deployment, performance, scalability and data. As a Principal Technical Evangelist, Julien speaks frequently at conferences and technical workshops, where he meets developers and enterprises to help them bring their ideas to life on the Amazon Web Services infrastructure.

Julien Simon
Principal Evangelist ML/AI EMEA | Amazon
