Abstract: Many of us have used libraries like Keras and TensorFlow to train Deep Learning models. But very few of us fully understand what is going on "under the hood."
In this talk, we'll walk through how to create deep neural networks powerful enough to solve complex image classification tasks, from scratch, using Python. We'll do everything from coding the layers of our network as classes, to implementing the backpropagation algorithm so the layers work correctly together, to implementing a number of neural net training techniques such as Dropout, Momentum, and Weight Regularization. We'll end up with a flexible deep learning framework running live in a Jupyter notebook that can then be extended to create arbitrarily deep networks - all from scratch.
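The "layers as classes" design the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the talk's actual code: the `Dense` class name, the initialization scheme, and the method signatures are assumptions chosen for clarity.

```python
import numpy as np

class Dense:
    """A fully connected layer computing y = x @ W + b."""

    def __init__(self, n_in, n_out):
        # Small random weights; a real framework would use a smarter init.
        self.W = np.random.randn(n_in, n_out) * 0.01
        self.b = np.zeros(n_out)

    def forward(self, x):
        # Cache the input: the backward pass needs it for the chain rule.
        self.x = x
        return x @ self.W + self.b

    def backward(self, grad_out):
        # Given dL/dy, compute gradients for W, b, and the layer's input.
        # Chaining these backward() calls across layers is backpropagation.
        self.dW = self.x.T @ grad_out
        self.db = grad_out.sum(axis=0)
        return grad_out @ self.W.T
```

Each layer exposes the same `forward`/`backward` interface, so a network is just a list of layers: call `forward` left to right, then `backward` right to left, passing gradients along.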
Attendees will leave this talk with a deeper understanding of how and why neural nets work. This will help them solve deep learning problems and give them the confidence to explain the inner workings of neural nets to others - whether at conferences, as part of their jobs, or during job interviews.
Bio: Seth loves learning cutting-edge machine learning concepts, applying them to solve companies' problems, and teaching others to do the same. Seth discovered data science and machine learning while working in consulting in early 2014. After taking virtually every course Udacity and Coursera had to offer on data science, he joined Trunk Club as their first Data Scientist in December 2015. There, he worked on lead scoring, recommenders, and other projects, before joining Metis in April 2017 as a Senior Data Scientist, teaching the Chicago full-time course. Over the past six months, he has developed a passion for neural nets and deep learning, writing a neural net library from scratch and sharing what he has learned with others via blog posts (on sethweidman.com), as well as speaking at Meetups and conferences.