Abstract: Big changes are underway in the world of Natural Language Processing (NLP).
The long reign of word embeddings as NLP’s core representation technique has recently been challenged by an exciting new line of pre-trained deep learning models: among them, ELMo, ULMFiT, and the OpenAI Transformer.
These works made headlines by demonstrating that pre-trained language models can achieve state-of-the-art results on a wide range of NLP tasks, such as Natural Language Inference (NLI), Natural Language Understanding (NLU), Machine Translation, Question Answering, and Language Generation.
These new methods gave birth to the notion of Transfer Learning in NLP and may have the same wide-ranging impact on the field as pre-trained ImageNet models had on computer vision.
This workshop gives an overview of these state-of-the-art deep learning models for NLP and offers hands-on tutorials on implementing them in TensorFlow 2.0.
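To give a flavor of the transfer-learning idea the tutorials build on, here is a minimal sketch in TensorFlow 2.0: a frozen embedding layer (randomly initialized here, standing in for genuinely pre-trained weights such as those from ELMo or ULMFiT) topped with a small trainable task head. All names, shapes, and data are illustrative assumptions, not taken from the workshop materials.

```python
import numpy as np
import tensorflow as tf

# Stand-in for pre-trained representations (in practice these would be
# loaded from a language model, not drawn at random).
vocab_size, embed_dim = 1000, 64
pretrained_weights = np.random.rand(vocab_size, embed_dim).astype("float32")

model = tf.keras.Sequential([
    # Frozen "pre-trained" layer: its weights are not updated during fine-tuning.
    tf.keras.layers.Embedding(
        vocab_size, embed_dim,
        embeddings_initializer=tf.keras.initializers.Constant(pretrained_weights),
        trainable=False),
    # Pool token embeddings into a single sentence vector.
    tf.keras.layers.GlobalAveragePooling1D(),
    # Small task-specific head, trained from scratch on the downstream task.
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# A batch of one toy token sequence produces one probability pair.
probs = model(np.array([[1, 2, 3]]))
```

Only the dense head is trainable here; swapping the random matrix for real pre-trained weights (and optionally unfreezing layers later) is the essence of the fine-tuning recipes covered in the workshop.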
Bio: Alice Martin holds an MSc in Applied Mathematics and Financial Engineering from one of France's top engineering schools.
She is currently working in the Applied Mathematics lab of Ecole Polytechnique (Paris, France) as a Machine Learning Engineer and a PhD candidate in Machine Learning.
Her research focuses on new perspectives in deep reinforcement learning for natural language processing, with applications to goal-oriented dialogue systems with visual context. She also teaches reinforcement learning, deep learning, and unsupervised learning, and has taught classes in Berlin (Data Science Retreat) and Morocco (Emines Engineering School).
Prior to her work at Ecole Polytechnique, she worked on a machine learning project to predict disease progression in patients with Parkinson's disease. She began her career four years ago as a Sales and Marketing Analyst in the aerospace industry in California.