
Abstract: In this talk we explore how in-context encoding algorithms, beginning with the breakthroughs of the past 18 months, impact real-world applications. We will take a deep dive into transformer-based architectures, with a technical focus on the registerables implementation paradigm, and compare performance against prior art for automated “Fake News” evaluation using contemporary deep learning article encodings. We explore how these techniques provide unique interpretability for the “Fake News” use case and close with a discussion of how they extend to time series forecasting and telemetry monitoring.
Bio: Mike serves as Chief ML Scientist and Head of Machine Learning at SIG, is a member of the UC Berkeley Data Science faculty, and is Director of Phronesis ML Labs. He has led teams of data scientists in the Bay Area as Head of Data Science at Uber ATG, Chief Data Scientist for InterTrust and Takt, Director of Data Science for MetaScale/Sears, and CSO for Galvanize, where he founded the accredited galvanizeU-UNH Master's in Data Science degree program and oversaw the company's transformation from a co-working space to a data science organization. Mike began his career in academia, serving as a mathematics teaching fellow at Columbia University before teaching at the University of Pittsburgh.

Michael Tamir, PhD
Title
Chief ML Scientist & Head of Machine Learning/AI | SIG
Category
advanced-w19 | deep-learning-w19 | intermediate-w19 | talks-w19
