Data Efficiency Through Transfer Learning

Abstract: In recent years, supervised machine learning models have demonstrated tremendous success across a variety of application domains. Despite these promising results, such models are data hungry: their performance depends heavily on the size of the training set. In many real-world applications it is difficult to collect sufficiently large training datasets, forcing teams either to run under-performing models or to delay deployment until a critical mass of data has been collected. Transfer learning helps overcome these issues by transferring knowledge from readily available datasets to a new target task, providing significant value to companies. In this talk, you’ll learn how to apply state-of-the-art academic research in transfer learning to real-world situations and solve a range of business problems, including the cold-start problem. The first method is a hybrid instance-based transfer learning approach that outperforms a set of baselines, including state-of-the-art instance-based transfer learning approaches. It uses a probabilistic weighting strategy to fuse information from the source domain into the model learned in the target domain; the method is generic, applicable to multiple source domains, and robust to negative transfer. The second method is a framework for building differentially private aggregation approaches, enabling knowledge transfer from existing models trained on other companies’ datasets to a new company with limited or no labeled data. Applying these methods in your organization can increase customer trust and grow revenue for both you and your customers.
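To make the probabilistic weighting idea concrete, here is a minimal sketch of instance-based transfer, not the speaker’s actual method: a model fit on the scarce target data assigns each source instance a weight equal to the probability it gives that instance’s source label, and a final model is then refit on the combined data with those weights. The data, model choice, and weighting rule are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Illustrative synthetic data: a small target set and a larger,
# slightly shifted source set (stand-ins for real business data).
X_tgt = rng.normal(0.0, 1.0, size=(40, 2))
y_tgt = (X_tgt[:, 0] + X_tgt[:, 1] > 0.0).astype(int)
X_src = rng.normal(0.5, 1.0, size=(400, 2))
y_src = (X_src[:, 0] + X_src[:, 1] > 0.3).astype(int)

# Step 1: fit a model on the scarce target data alone.
tgt_model = LogisticRegression().fit(X_tgt, y_tgt)

# Step 2: weight each source instance by the probability the
# target model assigns to that instance's source label, so
# source points that conflict with the target task get down-weighted.
probs = tgt_model.predict_proba(X_src)
src_weights = probs[np.arange(len(y_src)), y_src]

# Step 3: refit on the union, with target instances at full weight.
X_all = np.vstack([X_tgt, X_src])
y_all = np.concatenate([y_tgt, y_src])
w_all = np.concatenate([np.ones(len(y_tgt)), src_weights])
model = LogisticRegression().fit(X_all, y_all, sample_weight=w_all)
```

Down-weighting rather than discarding conflicting source instances is one simple way such a scheme can stay robust to negative transfer: source data that contradicts the target task contributes little to the final fit.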

Bio: Parinaz Sobhani is the Director of Machine Learning on the Georgian Impact team, where she leads the development of cutting-edge machine learning solutions. Parinaz has more than 10 years of experience designing and developing new models and algorithms for various artificial intelligence tasks. Prior to joining Georgian Partners, she worked at Microsoft Research, where she developed end-to-end neural machine translation models. Before that, she worked at the National Research Council Canada, where she designed and developed deep neural network models for natural language understanding and sentiment analysis. Parinaz holds a Ph.D. in machine learning and natural language processing from the University of Ottawa, with a research focus on opinion mining in social media.