Interpreting and Explaining XGBoost Models

Abstract: 

Gradient boosted trees have become a widely used method for prediction on structured data. They generally provide the best predictive power but are sometimes criticized as difficult to interpret. To some degree, this criticism is misdirected: rather than being uninterpretable, these models simply have more complicated interpretations, reflecting a more sophisticated understanding of the underlying dynamics of the variables.

In this workshop, we will work hands-on with XGBoost on real-world data sets, pursuing the twin goals of prediction and understanding in a manner such that improvements in one yield improvements in the other. Using modern tooling such as Individual Conditional Expectation (ICE) plots and SHAP, along with a healthy sense of curiosity, we will extract powerful insights that could not be gained from simpler methods.
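To give a flavor of the tooling mentioned above, the core mechanic behind an ICE plot can be sketched in a few lines: for each observation, sweep one feature across a grid while holding the others fixed, and record the model's predictions. This is a minimal illustration only; the `predict` function below is a hypothetical stand-in for a trained XGBoost model (in practice you would pass `model.predict` from XGBoost's scikit-learn API), and `ice_curves` is a helper name invented for this sketch.

```python
import numpy as np

# Hypothetical stand-in for a trained model's predict function.
# A real ICE plot would use a fitted XGBoost model here instead.
def predict(X):
    return X[:, 0] ** 2 + np.where(X[:, 1] > 0, 3.0 * X[:, 0], 0.0)

def ice_curves(predict_fn, X, feature, grid):
    """For each row of X, sweep column `feature` over `grid` while
    holding the other features fixed; return one curve per row."""
    curves = np.empty((X.shape[0], len(grid)))
    for j, value in enumerate(grid):
        X_mod = X.copy()
        X_mod[:, feature] = value       # overwrite the swept feature
        curves[:, j] = predict_fn(X_mod)
    return curves

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))             # five observations, two features
grid = np.linspace(-2, 2, 9)
curves = ice_curves(predict, X, feature=0, grid=grid)

# Each row of `curves` is one observation's ICE curve; averaging over
# rows recovers the classical partial-dependence curve.
pdp = curves.mean(axis=0)
```

Plotting each row of `curves` as a separate line is what reveals heterogeneity across observations, which is precisely the information a single averaged partial-dependence curve hides.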

Bio: 

Brian Lucena is a Principal at Lucena Consulting and a consulting Data Scientist at Agentero. An applied mathematician in every sense, he is passionate about applying modern machine learning techniques to understand the world and act upon it. In previous roles, he has served as SVP of Analytics at PCCI, Principal Data Scientist at Clover Health, and Chief Mathematician at Guardian Analytics. He has taught at numerous institutions including UC-Berkeley, Brown, USF, and the Metis Data Science Bootcamp.