Abstract: We often work with sequential data such as text and want to infer a second sequence from the first. For example, in machine translation we read in a sequence of words in one language, then generate the corresponding sequence in another. Sequence-to-sequence models are also useful for other tasks such as natural language generation and time series modeling. In this talk, I'll demonstrate how to build sequence-to-sequence models in PyTorch, an expressive framework for building deep learning models in Python and C++. I'll also cover techniques such as attention, teacher forcing, and curriculum learning, which improve model performance and decrease training time.
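To make the teacher-forcing idea concrete, here is a minimal, framework-free sketch of a decoding loop: with some probability the decoder's next input is the ground-truth token, otherwise it is the model's own previous prediction. The `model_step` callable and token names are illustrative stand-ins, not part of any PyTorch API.

```python
import random

def decode(model_step, target, teacher_forcing_ratio=0.5, rng=random.random):
    """Toy decoding loop illustrating teacher forcing.

    model_step(prev_token) -> predicted next token (a stand-in for one
    RNN decoder step). target is the ground-truth output sequence.
    With probability teacher_forcing_ratio, the next decoder input is the
    true token; otherwise the model's own prediction is fed back in.
    """
    outputs = []
    prev = "<sos>"  # start-of-sequence token
    for true_token in target:
        pred = model_step(prev)
        outputs.append(pred)
        # Teacher forcing: sometimes feed the ground truth, sometimes the prediction.
        prev = true_token if rng() < teacher_forcing_ratio else pred
    return outputs
```

Curriculum learning in this setting often means starting training with a high `teacher_forcing_ratio` and decaying it, so the model gradually learns to condition on its own outputs.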
Bio: Mat received a PhD in Physics from UC Berkeley, where he studied the neural correlates of short-term memory in prefrontal cortex. During that time, he picked up Python, machine learning, and a love for education. He's been at Udacity for over two years, developing content for various data science courses, including the Deep Learning Nanodegree program. Mat is also the author of Sampyl, a Python library for Bayesian data analysis, and SeekWell, a library that makes it easier to work with SQL from Python.