Abstract: As machine learning in production becomes the norm, it is critical that we help our teams prepare to support and iterate on our models. While we have much to learn from the best practices of shipping software, we must also recognize the nuances required to support AI.
What differentiates AI from traditional software engineering? The conversation often focuses on the data, the model quality, and the massive business impact a model will have once deployed. But beyond the individual models and their first deployment exist the same realities as with all software: the need for iteration, quality, and scale. Here, we must blend the expertise of our engineers with the nuanced needs of our data scientists.
In this talk, we will dive into the specifics, choosing targets within the ML-development lifecycle where we have much to learn from each other. How can testing frameworks be used, and further adapted, to help us more safely iterate on our models in production? What role does modularity play in helping us be more agile? What does it mean to monitor models once they are deployed?
For each of these questions, we will explore the opportunities to leverage existing best practices and discuss how they must be adapted to the realities of AI.
Bio: Sarah Aerni is a Senior Manager of Data Science at Salesforce Einstein, where she leads teams building AI-powered applications across the Salesforce platform. Prior to Salesforce, she led the healthcare & life science and Federal teams at Pivotal. Sarah obtained her PhD in Biomedical Informatics from Stanford University, performing research at the interface of biomedicine and machine learning. She also co-founded a company offering expert informatics services to both academia and industry.