Towards Interpretable Deep Learning

Abstract: Deep neural networks (DNNs) are reaching or even exceeding the human level on an increasing number of complex tasks. However, due to their complex non-linear structure, these models are usually applied in a black box manner, i.e., no information is provided about what exactly makes them arrive at their predictions. This lack of transparency can be a major drawback in practice. In my talk I will present a general technique, Layer-wise Relevance Propagation (LRP), for interpreting DNNs by explaining their predictions. I will demonstrate the effectiveness of LRP when applied to various data types (images, text, audio, video, EEG/fMRI signals) and neural architectures (ConvNets, LSTMs), and will summarize what we have learned so far by peering inside these black boxes.
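As a rough illustration of the idea behind LRP, the commonly used epsilon rule redistributes the relevance of a layer's outputs back onto its inputs in proportion to each input's contribution to the pre-activation. The sketch below (my own minimal NumPy illustration for a single linear layer, not code from the talk) shows the basic computation; the function name `lrp_epsilon` and the toy numbers are assumptions for illustration.

```python
import numpy as np

def lrp_epsilon(a, W, R_out, eps=1e-6):
    """LRP epsilon rule for one linear layer:
    R_j = a_j * sum_k W[j,k] * R_k / (z_k + eps*sign(z_k)),
    where z_k = sum_j a_j * W[j,k] is the pre-activation.
    (Illustrative sketch, not the speaker's reference implementation.)"""
    z = a @ W                     # pre-activations of the layer's outputs
    z = z + eps * np.sign(z)      # stabilize near-zero denominators
    s = R_out / z                 # relevance per unit of pre-activation
    return a * (W @ s)            # redistribute back to the inputs

# Toy example: 3 input units feeding 2 output units.
a = np.array([1.0, 0.5, 2.0])              # input activations
W = np.array([[0.2, -0.1],
              [0.4,  0.3],
              [-0.1, 0.5]])                # layer weights
R_out = np.array([0.7, 0.3])               # relevance arriving at the outputs
R_in = lrp_epsilon(a, W, R_out)
# For small eps the rule approximately conserves total relevance:
# R_in.sum() is close to R_out.sum().
```

Applied layer by layer from the output back to the input, this yields a relevance score for every input feature (e.g., every pixel), which is what produces the heatmaps shown in the talk.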

Bio: Wojciech Samek is head of the Machine Learning Group at Fraunhofer Heinrich Hertz Institute, Berlin, Germany. He studied Computer Science at Humboldt University of Berlin, Germany, Heriot-Watt University, UK, and the University of Edinburgh, UK, from 2004 to 2010 and received the Dr. rer. nat. degree (summa cum laude) from the Technical University of Berlin, Germany, in 2014. In 2009, he was a visiting researcher at NASA Ames Research Center, Mountain View, CA, and, in 2012 and 2013, he had several short-term research stays at ATR International, Kyoto, Japan. He was awarded scholarships from the European Union's Erasmus Mundus programme, the German National Academic Foundation, and the DFG Research Training Group GRK 1589/1. He is associated with the Berlin Big Data Center, is a member of the editorial boards of Digital Signal Processing and PLOS ONE, and has organized several deep learning workshops. He has authored more than 80 journal and conference papers, predominantly in the areas of deep learning, interpretable artificial intelligence, robust signal processing, and computer vision.

Open Data Science Conference