Explainable AI for Training with Weakly Annotated Data

Abstract: 

AI has become a promising predictive engine, approaching near-human accuracy on important medical applications such as the automatic detection of critical findings in medical images, where it can assist radiologists with clinical tasks like triaging time-sensitive cases, screening for incidental findings, and reducing burnout.

Deep learning technologies, however, commonly suffer from a lack of explainability, which is critical for the acceptance of AI in the highly regulated, high-stakes healthcare industry. For example, in addition to accurately classifying an image as containing a critical finding such as pneumothorax, it is important to also localize where the pneumothorax is in the image, in order to explain the algorithm’s prediction to the radiologist.

To this end, state-of-the-art supervised deep learning algorithms can accurately localize objects in images by training on large datasets with pixel-level annotations of object locations. However, unlike natural images, where local annotations of everyday objects can be crowd-sourced relatively easily, acquiring reliably labeled data at scale in the medical domain is an expensive undertaking: it requires detailed pixel-level annotations for a multitude of findings, agreed upon by multiple trained medical experts. This becomes a nearly impossible requirement and a major barrier to training competitive deep learning algorithms that can scale to the enormous number of different critical findings that may be present in medical images.

In this talk, we address these shortcomings with an interpretable AI algorithm that can classify and localize critical findings in medical images without the need for expensive pixel-level annotations, providing a general solution for training with weakly annotated data that has the potential to be adapted to a host of applications in the healthcare domain.
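
The abstract does not detail the specific algorithm presented in the talk. As a purely illustrative sketch of the general idea of weakly supervised localization, the example below uses class activation maps (CAM): a classifier trained only on image-level labels (finding present or absent) whose convolutional feature maps are projected through the class weights to produce a coarse localization heatmap. The model architecture, layer sizes, and names are hypothetical and are not taken from the talk.

```python
# Minimal sketch of weakly supervised localization via class activation maps
# (CAM). This is not the speaker's algorithm, only an illustration of how a
# classifier trained on image-level labels alone can also localize findings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CAMClassifier(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        # Small convolutional backbone (stand-in for e.g. a ResNet trunk).
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # Global average pooling + linear head: training requires only
        # image-level labels (finding present / absent).
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        fmap = self.features(x)                      # (B, 64, H, W)
        pooled = fmap.mean(dim=(2, 3))               # global average pooling
        logits = self.classifier(pooled)             # (B, num_classes)
        return logits, fmap

    def activation_map(self, fmap, class_idx):
        # Project feature maps through the class weights to obtain a heatmap
        # highlighting the image regions that drive the prediction.
        w = self.classifier.weight[class_idx]        # (64,)
        cam = torch.einsum("c,bchw->bhw", w, fmap)
        cam = F.relu(cam)
        cam = cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-8)
        return cam                                    # (B, H, W), values in [0, 1]

# Usage: train with a standard classification loss on image-level labels,
# then read off the activation map as a localization "explanation".
model = CAMClassifier()
images = torch.randn(4, 1, 128, 128)                 # toy single-channel image batch
labels = torch.randint(0, 2, (4,))                    # 1 = finding present
logits, fmap = model(images)
loss = F.cross_entropy(logits, labels)                # no pixel-level labels needed
heatmap = model.activation_map(fmap, class_idx=1)     # coarse localization map
```

The key design point this sketch illustrates is that the localization "explanation" comes for free from the classification weights; no pixel-level supervision enters the training loss.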

Bio: 

Evan Schwab is an AI Research Scientist at Philips Research North America in Cambridge, MA, where he works on building novel deep learning algorithms for the analysis of medical images. Evan is interested in developing interpretable AI frameworks that can adhere to the constraints of complicated healthcare systems and assist radiologists in their clinical workflows. Evan received his Ph.D. in Machine Learning, with a focus on neuroimaging, from The Johns Hopkins University in 2017 under the advisement of Dr. Rene Vidal, and his B.A. in Mathematics from Cornell University in 2010.
