Beyond Credit Scoring: Interpretable Models for Responsible Machine Learning


Machine learning (ML) technologies impact our lives in myriad ways, both visible and invisible, and there is already a clear need for "responsible ML" practices that promote the development and application of auditable, accountable machine learning systems. As ML continues to mature, and as individuals, corporations, and governments accelerate their adoption of ML technologies, this need will only grow more urgent. And as those building and releasing ML into the world, we especially need practical approaches that bias ourselves and our models toward responsible use as part of normal operations.

We find that the construction of interpretable ML models offers just such a practical approach: a way of building ML tools that, by design, produces systems that can be understood and held to account. In this presentation we draw the connection between interpretability and responsible machine learning and demonstrate how interpretable modeling can further benefit both our customers and ourselves as model developers. We also offer an example of how interpretable ML has helped us build and deploy sensitive fraud-detection models as part of an overarching ethical-ML framework, taking inspiration from credit-scoring models and adapting them to our customer's specific use case.

Interpretable modeling can satisfy our customers, help us build understandable and accurate models, and offer a practical foundation for responsible ML. By making our models understandable, we remove the barriers that would ordinarily stand between our models’ users and those affected by our models’ decisions, which in turn provides a jumping-off point for the responsible application of machine learning in the world.

Background Knowledge:

Attendees should be familiar with logistic regression and the Lasso. They should be able to follow R code, but the focus is on the techniques rather than the language or code.
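The talk itself presents R code, but the two prerequisite techniques are easy to sketch in any language. Below is a minimal, hypothetical Python illustration (using scikit-learn, which is not part of the talk) of a Lasso-penalized, i.e. L1-regularized, logistic regression: the penalty shrinks the coefficients of uninformative features toward zero, which is one reason such models are comparatively easy to interpret and audit.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))

# Synthetic outcome that depends only on the first two features;
# the Lasso penalty should shrink the remaining coefficients toward zero.
logits = 1.5 * X[:, 0] - 2.0 * X[:, 1]
y = rng.binomial(1, 1 / (1 + np.exp(-logits)))

# L1 (Lasso) penalty encourages a sparse, more interpretable coefficient vector.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
model.fit(X, y)
print(model.coef_)
```

In a credit-scoring-style workflow, the surviving coefficients can then be rescaled into additive "points," so that each feature's contribution to a decision can be read directly off the model.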


Tom Shafer works as a Lead Data Scientist at Elder Research, a recognized leader in data science, machine learning, and artificial intelligence consulting since its founding in 1995. As a lead scientist, Tom contributes technically to a wide variety of projects across the company, mentors data scientists, and helps to direct the company’s technical vision. His current interests focus on Bayesian modeling, interpretable ML, and data science workflow. Before joining Elder Research, Tom completed a PhD in Physics at the University of North Carolina, modeling nuclear radioactive decays using high-performance computing.

Open Data Science



