
Abstract: Federated Learning is an emerging paradigm that enables machine learning without centralizing training data in a single place: local clients collaboratively train a shared global model.
Federated learning offers a way for consortia, multi-national enterprises, and networks of edge devices to benefit from training across their individual datasets while respecting data privacy concerns and accommodating network bandwidth limitations and limited device availability.
In this workshop you will learn when and why federated learning should be used, basic algorithms for implementing it, as well as more advanced ones covering a variety of use cases. Towards the end of the workshop, participants will get hands-on experience training a federated model together.
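For readers curious about the "basic algorithms" mentioned above, the sketch below illustrates Federated Averaging (FedAvg), the canonical starting point for federated learning: clients train locally on their own data and a server averages the resulting models. The linear-regression local update, the synthetic client data, and all hyperparameters here are illustrative assumptions, not material from the workshop itself.

```python
import numpy as np

def local_update(w_global, X, y, lr=0.1, epochs=5):
    # Each client refines the current global model on its own data;
    # the "model" here is plain linear regression trained by gradient descent.
    w = w_global.copy()
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_averaging(clients, rounds=20, dim=3):
    # FedAvg: each round, clients train locally and the server averages the
    # returned models, weighted by local dataset size. Only model parameters
    # are exchanged; raw data never leaves a client.
    w_global = np.zeros(dim)
    for _ in range(rounds):
        local_models, sizes = [], []
        for X, y in clients:  # real systems sample a subset of available clients
            local_models.append(local_update(w_global, X, y))
            sizes.append(len(y))
        w_global = np.average(local_models, axis=0, weights=sizes)
    return w_global

# Toy usage (hypothetical data): three clients drawn from the same linear model.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
clients = []
for n in (50, 80, 30):
    X = rng.normal(size=(n, 3))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=n)))
print(federated_averaging(clients))  # should approach true_w
```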
Bio: Mikhail is a Research Staff Member at IBM Research and the MIT-IBM Watson AI Lab in Cambridge, Massachusetts. His research interests include model fusion and federated learning, algorithmic fairness, applications of optimal transport in machine learning, and Bayesian (nonparametric) modeling and inference. Before joining IBM, he completed a Ph.D. in Statistics at the University of Michigan, where he worked with Long Nguyen. He received his bachelor's degree in applied mathematics and physics from the Moscow Institute of Physics and Technology.

Mikhail Yurochkin, PhD
Research Staff Member | IBM Research and MIT-IBM Watson AI Lab
