Practical, Rigorous Explainability in AI

Abstract: 

Explainability is emerging as a critical aspect of AI, and of deep learning models in particular. It is useful for regulatory compliance and decision-maker acceptance, and also enables improved continuous training and stabilization of the model. We present a mathematically rigorous definition of explainability and show how it is applied to several vision tasks in the healthcare and video analytics domains. A basic need of commercial AI systems is the ability to learn new special cases, absent from the training dataset, from very few new samples; this too is accomplished using our formalism.

Bio: 

Tsvi Lev is the General Manager of NEC Corporation's Israeli Research Center, the first Open Innovation center of NEC globally. The center is engaged in applications of AI to the cyber, physical safety, and medical domains. Before that, Tsvi was VP of Strategy at Amdocs, leading the Amdocs team in the AT&T Open Innovation Organization, the AT&T Foundry. Previously, Tsvi managed the R&D Center of Samsung Electronics in Israel, a center devoted to innovation and cutting-edge development for Samsung's future consumer electronics and cloud products. Tsvi was an entrepreneur in the mobile multimedia and computer vision space and sold his company to Emblaze Systems. A Talpiot graduate, Tsvi holds a Master's degree in Theoretical Physics, is the author of numerous patents, and is a frequent speaker at events related to technological innovation.