
Abstract: AI is constrained by data, compute, and the talented scientists who build it. The most limiting of these is data: the most valuable datasets are private and therefore difficult to acquire. We need methods for training machine learning models on private data without compromising either the personal data or the model itself. To this end, many groups across academia and industry are working on techniques such as differential privacy, federated learning, and multi-party computation to provide secure and private AI models. PySyft is an open source framework developed by the OpenMined community that combines these tools for building secure and private machine learning models. PySyft extends the APIs of popular deep learning frameworks so data scientists and machine learning engineers can immediately begin building privacy-preserving applications. In this way, we can accelerate the adoption of federated learning and other privacy-preserving tools, unlocking the potential of training AI models on all available data, not just the small fraction that is easily accessible.
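To make the API extension concrete, here is a minimal sketch of what remote computation looks like with the older PySyft 0.2-style PyTorch integration; the worker name "bob" and the tensors are purely illustrative, and more recent PySyft releases expose a different API.

    import torch
    import syft as sy

    # Hook PyTorch so tensors gain PySyft's remote-execution methods
    hook = sy.TorchHook(torch)

    # A simulated remote data owner (illustrative name)
    bob = sy.VirtualWorker(hook, id="bob")

    # Send data to bob; locally we hold only pointers to the remote tensors
    x = torch.tensor([1.0, 2.0, 3.0]).send(bob)
    y = torch.tensor([1.0, 1.0, 1.0]).send(bob)

    # The addition executes on bob's worker; z is also a pointer tensor
    z = x + y

    # Explicitly retrieve the result back to the local machine
    print(z.get())  # tensor([2., 3., 4.])

The key point is that the familiar PyTorch syntax is unchanged; only the location of the data and computation moves to the remote worker.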
Bio: Mat received a PhD in Physics from UC Berkeley, where he studied the neural correlates of short-term memory in the prefrontal cortex. During that time, he picked up Python, machine learning, and a love for education. He has been at Udacity for over two years, developing content for various data science courses, including the Deep Learning Nanodegree program. Mat is also the author of Sampyl, a Python library for Bayesian data analysis, and SeekWell, a library that makes it easier to work with SQL from Python.

Mat Leonard, PhD
Instructional Designer | Kaggle
