
Abstract: How should machine learning models be evaluated? Specifically, if you have an existing model and need to decide whether to replace it with a new version, how do you do that?
The most common approach is to compare the two models on a standard suite of metrics, such as F1 score, ROC-AUC, or perplexity. In this talk, I'll discuss why this approach is incomplete and describe a different approach that SentiLink uses before pushing new models to production: manually examining the "swap ins" and "swap outs", the cases where one model does especially poorly and the other model especially well.
I'll walk through some real-world examples of how SentiLink uses this approach to evaluate models. I'll also give a concrete illustration of using it to compare a "cutting edge" deep learning model to a more standard deep learning model on a popular NLP dataset, complete with code for attendees to take away.
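
As a rough illustration of the idea (not the talk's actual takeaway code), here is a minimal sketch of finding swap ins and swap outs for two binary classifiers scored on the same held-out set. The function name, the 0.5 threshold, and the toy data are assumptions for illustration only.

```python
import numpy as np

def swap_ins_and_swap_outs(y_true, scores_old, scores_new, threshold=0.5):
    """Find the examples where the two models disagree most consequentially.

    Swap ins:  the new model classifies correctly, the old model does not.
    Swap outs: the old model classifies correctly, the new model does not.
    (Names and signature are illustrative, not SentiLink's actual code.)
    """
    y_true = np.asarray(y_true)
    pred_old = np.asarray(scores_old) >= threshold
    pred_new = np.asarray(scores_new) >= threshold

    correct_old = pred_old == y_true
    correct_new = pred_new == y_true

    swap_ins = np.where(correct_new & ~correct_old)[0]   # new model wins here
    swap_outs = np.where(correct_old & ~correct_new)[0]  # old model wins here
    return swap_ins, swap_outs


# Toy example: two models scored on a small held-out set.
y_true = np.array([1, 0, 1, 1, 0, 0])
scores_old = np.array([0.9, 0.2, 0.3, 0.8, 0.6, 0.1])
scores_new = np.array([0.7, 0.1, 0.8, 0.4, 0.3, 0.2])

swap_ins, swap_outs = swap_ins_and_swap_outs(y_true, scores_old, scores_new)
print("swap ins (indices to inspect manually):", swap_ins)
print("swap outs (indices to inspect manually):", swap_outs)
```

The indices returned are the examples worth pulling up and reading by hand before deciding whether the new model should replace the old one.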
Bio: Seth Weidman is a data scientist at SentiLink, an Andreessen Horowitz-backed startup based in San Francisco; he works on SentiLink's core models that prevent various forms of fraud - especially synthetic identity fraud - and other malicious behavior for banks and lenders. Immediately before SentiLink, Seth did machine learning engineering at Facebook for the data centers team; he also wrote an introductory book on deep learning called Deep Learning from Scratch, published by O'Reilly in 2019. Seth has degrees in mathematics and economics from the University of Chicago.