Abstract: This talk ventures into the nascent field of interpretable machine learning. Predictive models have begun to aid human decisions in a variety of domains, and the recent rise of deep learning is pushing the boundaries of the accuracy such models can achieve. At the same time, these deep learning systems have brought the notion of models-as-black-boxes to the forefront. A major hurdle in their wider adoption is the challenge of providing human-interpretable predictions. Some domains have clear requirements around model explanation, while in others interpretability is more a matter of model diagnostics. In addition, prominent forces from outside the field of machine learning (e.g., the General Data Protection Regulation in the EU and the New York City Council's law on automated decision systems) necessitate a discussion on the topic. We will discuss the following:
What is the need and scope of interpretability in statistical models?
Is there a possible common ground among the many interpretations of interpretability?
What are some issues and concerns around this topic?
We will also present a quick survey of existing tools and techniques for creating interpretable models, along with future directions and desiderata for interpretability.
Bio: Sneha Jha is a Senior Researcher at Nuance Communications, working at the intersection of natural language processing, machine learning, and healthcare. At Nuance, she primarily works on clinical NLP, information extraction, interpretability of statistical models, and knowledge engineering for rule-based expert systems. She has a keen interest in the role of technology in policy, law, and ethics.
Senior Researcher | Nuance Communications