The problem we face with artificial intelligence (AI) today is that many methods simply work, and we trust them without looking under the hood – we don’t examine the details. Yet it’s very important to understand how a prediction is made, not only what the architecture of the method looks like.

That’s why explainable AI (xAI) is becoming a hot topic today. 

Who uses xAI?

There are many scenarios in which xAI proves useful:

  • Domain experts, like doctors, need to trust the models they use and want to gain scientific knowledge from them. 
  • Regulatory agencies certify the model’s compliance with legislation in force. 
  • Managers assess regulatory compliance and analyze the model’s corporate applications. 
  • Data scientists use the model to improve product efficiency or develop new functionalities. 
  • Any other user affected by a model’s decisions usually wants to understand them and verify their fairness.

Explainable AI goals

There are many goals for an xAI model to fulfill. 

It’s important that domain experts using a model can trust it. Another very important goal is the ability to transfer the knowledge gained from a given model to other problems or challenges.

It’s also crucial that we understand a model well enough to know what happens to the data during training and prediction, so that we can ensure the data’s privacy. This is related to the topic of fairness: a model shouldn’t adversely affect any minority group – it should be fair and ethical. 

Two other goals are the robustness and informativeness of a model. We should be confident that the prediction holds up under small changes to the input and that it carries information relevant to the user’s decision. Finally, the model should be accessible to non-technical people, so that they understand how it works and, in some cases, can interact with it. 


It’s worth mentioning that:

  • Not every goal is met by every method, and each goal has a different target audience. 
  • Not every explainable AI model needs to fulfill all of these goals, and not every model that meets one of them is an xAI model.

Levels of transparency within Explainable AI

A model can be transparent on three levels: 

  1. A model reaches the first level of transparency when it’s simulatable, which means a human can fully simulate its reasoning. 
  2. The second level of transparency is reached when a model is decomposable, which means it can be divided into explainable parts – we can understand how each part works and how it processes data (see the sketch after this list). 
  3. The last level of transparency is algorithmic transparency, which means that it’s possible to comprehend how the model produces its output. 
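To make the second and third levels more concrete, here is a minimal sketch – assuming scikit-learn and its built-in breast cancer dataset, neither of which is prescribed by the discussion above – of a model that is both decomposable and algorithmically transparent: each part of the pipeline can be inspected on its own, and the final decision is just a weighted sum of the scaled features.

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()

# Two explainable parts: a scaler and a linear classifier (step names are illustrative).
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
pipe.fit(data.data, data.target)

# Decomposability: each part can be pulled out and inspected separately.
scaler = pipe.named_steps["scale"]
clf = pipe.named_steps["clf"]
print("means used for scaling:", scaler.mean_[:3])

# Algorithmic transparency: the decision is a weighted sum of the scaled
# features, so the learned weights show how each feature moves the output.
for name, weight in zip(data.feature_names, clf.coef_[0]):
    print(f"{name}: {weight:+.3f}")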

How to explain a model?

There are a couple of ways to explain how a model works. 

Typically, we use a textual form – including symbols and formulas – or visualizations such as charts. These are easy for humans to interpret and understand. We simply take a subspace of the model, for example a single prediction or a single feature, and explain it in different ways. 
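As an illustration of such a textual explanation – a sketch only, assuming a scikit-learn linear regression on the built-in diabetes dataset, choices made here rather than taken from the article – a single prediction can be written out as a formula of per-feature contributions:

from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression

data = load_diabetes()
model = LinearRegression().fit(data.data, data.target)

x = data.data[0]                      # one patient we want to explain
prediction = model.predict([x])[0]

# The prediction is the intercept plus one contribution per feature,
# which can be printed as a plain-text formula.
print(f"prediction = {model.intercept_:.1f} (intercept)")
for name, value, coef in zip(data.feature_names, x, model.coef_):
    print(f"  + {coef * value:8.2f}  ({name}: coefficient {coef:.1f} * value {value:.4f})")
print(f"  = {prediction:.1f}")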

Explaining through an example is another easy-to-understand method. Here, we take some input data and explain, step by step, what happens during the prediction process. If the model is complex, we can explain the way it works with a simplified surrogate model. One white-box method that is well-known and easy to interpret is the decision tree. We can draw a tree representing the model and explain all the decisions made at each node. 
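A minimal sketch of that idea – assuming scikit-learn’s DecisionTreeClassifier and the built-in iris dataset, both illustrative choices not prescribed by the text – prints the tree as readable rules and explains one prediction through the path it takes:

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(iris.data, iris.target)

# The whole model as human-readable if/else rules.
print(export_text(tree, feature_names=iris.feature_names))

# Explaining through an example: the path one flower takes through the tree.
x = iris.data[[0]]
path = tree.decision_path(x)
print("nodes visited:", path.indices)
print("predicted class:", iris.target_names[tree.predict(x)[0]])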

Bibliography on Explainable AI:

  1. Christoph Molnar, Interpretable Machine Learning. A Guide for Making Black Box Models Explainable, 2021
  2. Denis Rothman, Hands-On Explainable AI (XAI) with Python: Interpret, visualize, explain, and integrate reliable AI for fair, secure, and trustworthy AI, Packt 2020
  3. W. Samek, G. Montavon, A. Vedaldi, L. K. Hansen, K. Müller, Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, Springer 2019


About the author/ODSC Europe 2021 Speaker on Explainable AI: Karol Przystalski

Karol Przystalski obtained a PhD degree in Computer Science in 2015 at the Jagiellonian University in Cracow. He is the CTO and founder of Codete, where he leads and mentors teams working with Fortune 500 companies on data science projects. He also built a research lab for machine learning methods and big data solutions at Codete. Karol gives talks and trainings on data science, with a focus on applied machine learning, in German, Polish, and English.