
Abstract: This talk unveils the state-of-the-art deployment of sustainable federated machine learning models, considering different aspects of ethical AI. It highlights how to build and monitor private federated models in a large-scale enterprise while ensuring the sustainability of future smart ecosystems. By the end of the talk, the audience will know how to approach sustainable federated learning, which deployment monitoring metrics and KPIs to consider when scaling such ML models, and how to deploy them in a distributed architecture.
Abstract— Federated Learning has gained prominence and become increasingly popular in the present age, driven largely by the pandemic, during which dependence on devices increased tremendously due to social distancing, lockdown measures, limited human mobility, and restricted accessibility. Its impact on the Industrial IoT has been the largest, since the healthcare, retail, supply chain, and automotive domains hold a great deal of sensitive and private data about individuals. Further, as IoT sensor devices have become easier to deploy and use, Federated Learning (FL) based systems have contributed much to human health, predictive maintenance tasks in the auto industry, production process monitoring, and the discovery of new trends, patterns, and anomalies. The IoT sensor devices used in FL architectures are intelligent, time-sensitive, heterogeneous devices that can notify users of sudden changes in the environment that might unfavourably impact the underlying situation.
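For readers new to the setting, the sketch below illustrates the core idea behind federated averaging: clients train locally on their private data, and only model weights (never raw data) reach the server for aggregation. This is a minimal, illustrative example in plain NumPy under assumed shapes; the names (fed_avg, client_a, client_b) are hypothetical and not taken from the talk.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Aggregate per-layer client weights as a weighted average by local dataset size."""
    total = sum(client_sizes)
    return [
        sum(weights[layer] * (size / total)
            for weights, size in zip(client_weights, client_sizes))
        for layer in range(len(client_weights[0]))
    ]

# Example: two clients, each holding a one-layer model (weight matrix + bias vector).
client_a = [np.ones((3, 2)), np.zeros(2)]
client_b = [np.full((3, 2), 3.0), np.ones(2)]
global_model = fed_avg([client_a, client_b], client_sizes=[100, 300])
print(global_model[0])  # pulled toward client_b, which holds more local data
```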
This talk first introduces the audience to a few use cases where Federated Learning-based systems can be applied. Next, it demonstrates how automated deployment and monitoring help in designing robust AI/ML models under uncertainties such as COVID-19. Here we introduce the concept of 'concept drift' in ML models and highlight how AutoML and drift detection strategies play a vital role in a Federated Learning environment, particularly when data is aggregated from varied devices with different system configurations (a minimal drift-detection sketch follows below). It also addresses issues centered around drift on local devices and techniques aimed at minimizing its effect on model performance. The talk then illustrates, with examples, how to architect a real-time monitoring pipeline across a three-layer network edge. In addition, it provides a detailed overview of different model KPI metrics and deployment best practices that can be used to test the robustness and ethical aspects of an ML model.
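As a concrete illustration of the drift-detection idea, the sketch below compares a reference feature window against a live window on a local device using a two-sample Kolmogorov-Smirnov test and flags drift when the distributions diverge. The threshold, window sizes, and the detect_drift helper are illustrative assumptions, not the specific strategies presented in the talk.

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference, current, alpha=0.05):
    """Return True if the current window's distribution has drifted from the reference."""
    statistic, p_value = ks_2samp(reference, current)
    return p_value < alpha

rng = np.random.default_rng(0)
reference_window = rng.normal(loc=0.0, scale=1.0, size=1000)  # training-time feature values
current_window = rng.normal(loc=0.5, scale=1.0, size=1000)    # shifted live feature values

if detect_drift(reference_window, current_window):
    # In a federated setting, the device could flag this to the coordinator
    # and trigger local retraining or re-aggregation of the global model.
    print("Drift detected: schedule local retraining")
```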
Background Knowledge
Basic Familiarity with Python
Bio: Sharmistha Chatterjee is a Data Science Evangelist with 16+ years of professional experience in Machine Learning (AI research and productionizing scalable solutions) and Cloud applications. She has worked in Fortune 500 companies as well as in very early-stage startups. She is currently a Senior Manager of Data Sciences at Publicis Sapient, where she leads the digital transformation of clients across industry verticals. She is an active blogger, an international speaker at various tech conferences, and a 2X Google Developer Expert in Machine Learning and Google Cloud. She is also a Hackernoon Tech Award winner for 2020, has been listed among AIM's 40 Under 40 Data Scientists, and was named one of Google's 21 tech trailblazers for 2021. She mentors startups in the Google for Startups Accelerator program and has recently completed a Business in AI program at the London School of Business.

Sharmistha Chatterjee
Title
Senior Manager Data Sciences | Publicis Sapient
