Abstract: In the simplest sense, building a model is nothing more than learning the best mapping from the input features to the output feature (or target) given the constraints of the model. But instead of simply using the input features in their natural state (i.e., raw features), we can try alternate transformations of those features that may help the model find a better mapping. In this session, we share experiences and results showing how several flavors of these transformations, which we call feature engineering, can help the model learn from the data. Specifically, we focus on techniques that are automated in nature, so the modeler doesn't need extensive prior knowledge or time to get good results.
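As a minimal illustration of the idea (using made-up data, not material from the talk), the sketch below fits a one-parameter linear model to a target that is actually quadratic in the raw feature. On the raw feature the fit fails, but an engineered feature (the square of the raw input) makes the mapping linear and the fit exact:

```python
# Hypothetical example: raw feature vs. an engineered (squared) feature.
xs = list(range(-5, 6))
ys = [x * x for x in xs]  # the true relationship is quadratic

def fit_slope(features, targets):
    """Least-squares slope for a through-the-origin fit: sum(f*t) / sum(f*f)."""
    num = sum(f * t for f, t in zip(features, targets))
    den = sum(f * f for f in features)
    return num / den

def sse(features, targets, slope):
    """Sum of squared errors of the fitted line."""
    return sum((t - slope * f) ** 2 for f, t in zip(features, targets))

# Fit on the raw feature x: the best slope is 0, so the error is large.
raw_slope = fit_slope(xs, ys)
raw_err = sse(xs, ys, raw_slope)

# Fit on the engineered feature x^2: the mapping is now linear and exact.
eng = [x * x for x in xs]
eng_slope = fit_slope(eng, ys)
eng_err = sse(eng, ys, eng_slope)

print(raw_err, eng_err)  # the engineered feature gives a far better fit
```

Automated feature engineering amounts to searching over families of such transformations (powers, logs, interactions, and so on) rather than hand-picking them.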
Bio: Abhishek Nandy is a data scientist at Liberty Mutual. He's part of a team of data scientists who research applications of machine learning for insurance pricing. Abhishek holds a PhD in Statistics from the University of Minnesota. Go Gophers! His areas of interest include machine learning, survey sampling, and statistical inference.