
Abstract: The recent application of deep neural networks to long-standing problems has brought a breakthrough in performance and prediction power. However, high accuracy often comes at the price of interpretability: many of these models are black boxes that fail to provide explanations for their predictions. This tutorial illustrates some of the recent advances in the field of interpretable artificial intelligence. We will show common techniques that can be used to explain the predictions of pretrained models and to shed light on their inner mechanisms. The tutorial aims to strike the right balance between theoretical input and practical exercises, and has been designed to provide participants not only with the theory behind deep learning interpretability, but also with a set of frameworks, tools and real-life examples that they can apply in their own projects.
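To give a flavor of the kind of explanation techniques mentioned above, the sketch below computes a simple gradient-based saliency map for a pretrained image classifier with PyTorch and torchvision. It is illustrative only, not necessarily one of the exact methods taught in the tutorial; the model choice (resnet18) and the random input tensor are assumptions made for demonstration.

```python
import torch
from torchvision import models

# Load a pretrained image classifier; any differentiable model works the same way.
model = models.resnet18(pretrained=True)
model.eval()

# Dummy input standing in for a preprocessed image (batch of 1, 3 x 224 x 224).
x = torch.rand(1, 3, 224, 224, requires_grad=True)

# Forward pass: take the score of the predicted class.
scores = model(x)
top_class = scores.argmax(dim=1).item()
score = scores[0, top_class]

# Backward pass: the gradient of the class score with respect to the input
# indicates which pixels most influence the prediction (a saliency map).
score.backward()
saliency = x.grad.abs().max(dim=1)[0]  # collapse color channels
print(saliency.shape)  # torch.Size([1, 224, 224])
```

In practice one would feed a real, normalized image and visualize the resulting saliency map; libraries such as Captum expose this and more advanced attribution methods behind a uniform API.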
Bio: Matteo is a Research Staff Member in Cognitive Health Care and Life Sciences at IBM Research Zürich. He is currently working on the development of multimodal deep learning models for drug discovery using chemical features and omics data. He also does research on multimodal learning techniques for the analysis of pediatric cancers within an H2020 EU project, iPC, with the aim of creating treatment models for patients. He received his degree in Mathematical Engineering from Politecnico di Milano in 2013. After completing his MSc, he worked at a startup, Moxoff spa, as a software engineer and analyst for scientific computing. In 2019 he obtained his doctoral degree at the end of a joint PhD program between IBM Research and the Institute of Molecular Systems Biology, ETH Zürich, with a thesis on multimodal learning approaches for precision medicine.

Matteo Manica, PhD
Title: Research Staff Member | Cognitive Health Care & Life Sciences, IBM Research Zürich
Category: advanced-europe19 | deep-learning-europe19 | intermediate-europe19 | trainings-europe19
