
Abstract: Applied explainability is an emerging field in which data scientists use explainability techniques to improve their datasets, models, and testing procedures.
In this session, you will learn about innovative applied explainability techniques that help you overcome familiar challenges in neural network development, such as building balanced datasets, testing models, detecting edge cases, troubleshooting, and auditing. We will take a deep dive into specific use cases so you can apply these techniques to your own models and remove uncertainty from your workflow.
Bio: Yotam is a machine learning and deep learning expert with extensive hands-on experience in neural network development. Prior to co-founding Tensorleap, he developed and led AI and Big Data projects from research to production for companies in the automotive and other sectors, and developed machine learning algorithms for large government projects, including work for the Soreq Nuclear Research Center (Israel).