Framework for Model Monitoring at Scale

Abstract: 

Being able to monitor production models in real time and respond to signals effectively is critical to the success of any organizational data science investment. Yet this seems to be a mountain that too many organizations can't climb. Why?

The reality is this: it's one thing to build a system that technically checks the monitoring box. It's quite another to address all the complexities of maintaining a monitoring program, especially at scale. How can we ensure that every model is monitored? Who should own the monitoring system? Which metrics are best for early signal detection? The barriers to success are many and varied, both technical and process-related.

In this talk, James Pearce, a long-time data science leader, will share first-hand perspectives on the importance of having a model monitoring framework in place, along with lessons he has learned while building out this capability at a large national bank. Josh Poduska, Chief Data Scientist at Domino Data Lab, will demonstrate one approach that incorporates technical and process-related best practices of model monitoring, helping to increase your odds of successfully implementing a monitoring program.

Bio: 

Dr. James Pearce is a data science leader with over two decades of experience. He has worked with large Australian banks, insurance companies, credit bureaux, and data consultancies to help businesses and analysts get value from their data using analytical techniques ranging from statistical analysis through to machine learning and AI. James is also passionate about teaching machine learning and making sense of data. Over the course of his career, he has developed several market-leading data products, as well as machine learning systems used operationally to cut costs and increase revenue. In his spare time, James enjoys writing about analytics and data; he also writes fiction.