Federated Learning: AI for the Privacism Movement

Abstract: 

AI, especially machine learning based on neural networks, is seen as a key technology in the digitization of society and of its private and public sectors. But the typical workflow of such AI is often at odds with data privacy, which matters more and more to consumers and regulators alike. This poses a challenge for the acceptability and wider adoption of AI, including its use in edge environments. Federated learning is a collaborative, distributed approach to AI that has the potential to resolve this challenge while still producing models with high utility. This session will introduce the concept and discuss one of its instances, the open-source framework XayNet, in greater detail: first in a hands-off presentation and then hands-on through a demo of XayNet.

Learning outcomes:

1. Understanding what the privacism movement is and why it matters for DS/AI/ML
2. Comprehending that today’s practice of AI is at odds with that movement and, increasingly, with territorial regulation of data processing and data movement
3. Understanding the concept of federated learning and acknowledging that it has the potential to support tomorrow’s regulated AI as well as the privacism movement
4. Gaining familiarity with the technical aspects of federated learning for its standard algorithm, federated averaging
5. Understanding that federated averaging will typically violate data-privacy regulations and that there is a privacy/utility trade-off in AI use cases
6. Achieving familiarity with a privacy-preserving version of federated learning, the open-source framework https://github.com/XayNetwork/XayNet
7. Understanding why/how the XayNet framework preserves privacy as well as the utility of vertical or horizontal AI use cases
8. Becoming familiar with the usage of the XayNet framework for a “hello world” AI use case in a tutorial demo session, including installation, configuration, and execution

Session Outline
Lesson 1: What is federated learning and why does it matter? This is a hands-off presentation of about 45 minutes that will essentially cover learning outcomes 1-4 above. The talk will provide strategic context (e.g., AI on the edge and upcoming regulation of AI), technical background (e.g., the federated averaging algorithm), and legal background (e.g., basic principles of data privacy common to the EU GDPR and the CCPA).
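To make the technical background of this lesson concrete, here is a minimal sketch of the federated averaging algorithm using a toy linear model and NumPy. All function names are illustrative, not part of any framework: each round, clients train locally on data that never leaves them, and a server averages the resulting weights in proportion to each client's data size.

```python
import numpy as np

def local_update(global_weights, features, labels, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps on its own data.
    The model is plain linear regression so the sketch stays tiny."""
    w = global_weights.copy()
    for _ in range(epochs):
        grad = features.T @ (features @ w - labels) / len(labels)
        w -= lr * grad
    return w

def federated_averaging(global_weights, client_data, rounds=10):
    """Federated averaging: each round, every client trains locally on its
    private data, and the server averages the returned weights, weighted
    by the number of samples each client holds."""
    w = global_weights
    for _ in range(rounds):
        updates, sizes = [], []
        for features, labels in client_data:
            updates.append(local_update(w, features, labels))
            sizes.append(len(labels))
        total = sum(sizes)
        # Weighted average of client weights becomes the new global model.
        w = sum(n / total * u for n, u in zip(sizes, updates))
    return w
```

Note that only model weights cross the network here, never raw data; the lesson discusses why this alone is not yet enough for legal privacy guarantees.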

Lesson 2: Building AI with high utility and data-privacy compliance. This is a hands-off presentation of about 45 minutes that will be a deeper technical dive into this topic and essentially cover learning outcomes 5-7 above. It will discuss the familiar trade-off between privacy and utility in AI use cases, show that federated learning can resolve this dilemma (e.g., with experimental evidence on a standard voice recognition benchmark), and develop an approach to federated learning that not only resolves this dilemma but also complies with data-privacy regulation: the open-source framework XayNet. The discussion of XayNet will focus on how its protocol for federated learning preserves privacy (in the legal sense) without compromising scalable performance.
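One building block often used to keep individual model updates hidden from the aggregating server is additive masking (a core idea in secure aggregation). The toy sketch below is a simplification for intuition only, not XayNet's actual protocol: pairs of clients share random masks that cancel out in the sum, so the server learns the aggregate but not any single update.

```python
import numpy as np

def pairwise_masks(n_clients, dim, seed=0):
    """Each client pair (i, j) with i < j shares one random mask vector;
    client i adds it and client j subtracts it, so all masks cancel when
    the masked updates are summed. (In a real protocol these masks would
    come from pairwise key agreement, not a central random generator.)"""
    rng = np.random.default_rng(seed)
    masks = [np.zeros(dim) for _ in range(n_clients)]
    for i in range(n_clients):
        for j in range(i + 1, n_clients):
            m = rng.normal(size=dim)
            masks[i] += m
            masks[j] -= m
    return masks

def masked_sum(updates):
    """The server receives only masked updates, yet their sum equals the
    true sum of the clients' unmasked updates."""
    masks = pairwise_masks(len(updates), len(updates[0]))
    masked = [u + m for u, m in zip(updates, masks)]
    return sum(masked)
```

The lesson explains how XayNet combines ideas of this kind with its protocol design to meet legal privacy requirements at scale.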

Lesson 3: How to use the open-source XayNet for federated learning. This is a hands-on demo of about 45 minutes (either live or pre-recorded, with the speaker – one of our senior developers – able to chat during the playing of the video) that will focus on learning outcome 8 above. The demo will show what needs to be installed to use the framework, how it can be configured for a “hello world” AI use case (the model will be a dummy one, as the emphasis is on the above learning outcomes), and what UI support there is for running and monitoring the execution of this federated learning use case. It may also give an outlook on what new features may be coming next in the open-source framework XayNet.

Background Knowledge
The audience has some familiarity with machine learning and its algorithms, but no prior knowledge of federated learning, its potential value, or its technical issues. The audience has some familiarity with programming languages and tools and can follow the gist of a practical demo showing how to install, configure, and run a federated learning use case within a framework.

Bio: 

Professor Michael Huth (Ph.D.) is Co-Founder and CTO of the technology company XAIN and teaches at Imperial College London. His research focuses on cybersecurity, cryptography, mathematical modeling, and security and privacy in machine learning. He served as the technical lead of the Harnessing Economic Value theme at the PETRAS IoT Cybersecurity Research Hub in the UK. In 2017, he founded XAIN AG together with Leif-Nissen Lundbæk and Felix Hahmann. The Berlin-based company aims to solve the challenge of combining AI with data privacy, with an emphasis on federated learning. XAIN won the first Porsche Innovation Contest and has already worked successfully with Porsche AG, Daimler AG, Deutsche Bahn, and Siemens.

Professor Huth studied Mathematics at TU Darmstadt and obtained his Ph.D. at Tulane University, New Orleans. He worked at TU Darmstadt and Kansas State University, and spent a research sabbatical at the University of Oxford. Huth has authored numerous scientific publications and is an experienced speaker on international stages.