What to expect when you’re putting AI in production

Abstract: Machine learning and “AI” systems often fail in production in unexpected ways. This talk shares real-world case studies showing why this happens and explains what you can do about it, covering best practices and lessons learned from a decade of experience building and operating such systems at Fortune 500 companies across several industries.

Topics include concept drift (identifying and correcting for model decay caused by changes in the distribution of production data), common pitfalls in A/B testing (such as the primacy and novelty effects), offline versus online measurement, and systems that learn in production (such as adversarial learning use cases). This talk is intended for executives, technical leaders, and product managers who want to learn from others' mistakes and set their teams and products up for success.
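To make the concept-drift idea concrete: one common detection approach (not necessarily the one covered in the talk) is to compare a feature's training-time distribution against its live production distribution with a statistic such as the Population Stability Index (PSI). The bin count and thresholds below are illustrative rules of thumb, not prescriptions from the talk.

```python
# Minimal sketch of concept-drift detection via the Population
# Stability Index (PSI). Bin count and thresholds are illustrative
# assumptions, not part of the talk.
import math

def psi(expected, actual, bins=10):
    """Compare a training-time feature sample ('expected') with a
    production sample ('actual'); a larger PSI means more drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0
    def frac(sample):
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        n = len(sample)
        # smooth empty bins so log() below is always defined
        return [max(c / n, 1e-4) for c in counts]
    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Rule of thumb often used in practice:
#   PSI < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 significant drift.
train = [i / 100 for i in range(100)]          # uniform on [0, 1)
shifted = [0.5 + i / 200 for i in range(100)]  # mass moved to the right
stable = psi(train, train) < 0.1       # identical distributions: stable
drifted = psi(train, shifted) > 0.25   # shifted distribution: drift
```

In production, a check like this would run on a schedule per feature and alert (or trigger retraining) when the drift score crosses the chosen threshold.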

Bio: David Talby has been building real-world big data analytics systems in healthcare, finance and e-commerce for over a decade. David has extensive experience in building and operating web-scale data science and business platforms, as well as building world-class, Agile, distributed teams. Prior to joining the startup world, he was with Microsoft’s Bing group, where he led business operations for Bing Shopping in the US and Europe. Earlier, he worked at Amazon both in Seattle and the UK, where he built and ran distributed teams that helped scale Amazon’s financial systems. David holds a PhD in computer science and master’s degrees in both computer science and business administration.