🤗 Transformers & 🤗 Datasets for Research and Production

Abstract: 

This half-day training will teach attendees how to use the Hugging Face Hub, together with the Transformers and Datasets libraries, to efficiently prototype machine learning models and move them into production.
The training will cover the following topics:

1. Open-Source Philosophy
- Design principles of Transformers and Datasets
- Community Support
- How to contribute?
2. From Research to Prototyping:
- Find models and datasets for your target task
- Analyse and experiment with models in Transformers
- Training: pre-processing, modeling, and post-processing in Transformers
- Set up training in Transformers (a first sketch follows this list)
- Compare different models
3. From Prototype to Production:
- Optimize your model
- Experiment with different setups
- XLA, ONNX, Infinity (an ONNX export sketch also follows this list)
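
As a first sketch of the prototyping workflow in part 2, the Python snippet below finds a dataset and a pretrained checkpoint on the Hub, pre-processes the text, and sets up training with the Trainer API. The dataset ("imdb") and checkpoint ("distilbert-base-uncased") are illustrative assumptions, not choices prescribed by the training.

from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Find a dataset on the Hub and load a small slice for quick experiments.
dataset = load_dataset("imdb", split="train[:1000]")

# Load a pretrained checkpoint and its tokenizer from the Hub.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

# Pre-processing: tokenize the raw text with Datasets' map().
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

tokenized = dataset.map(tokenize, batched=True)

# Set up training with the Trainer API; swapping the checkpoint string for
# another Hub model is usually all it takes to compare different models.
args = TrainingArguments(
    output_dir="distilbert-imdb",
    num_train_epochs=1,
    per_device_train_batch_size=16,
)
trainer = Trainer(model=model, args=args, train_dataset=tokenized)
trainer.train()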
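
And a second sketch for part 3: exporting a fine-tuned checkpoint to ONNX with PyTorch's built-in torch.onnx.export. The checkpoint name, opset version, and axis names are assumptions for illustration; the training itself also covers other routes such as XLA and Infinity.

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# The checkpoint is an illustrative assumption; torchscript=True makes the
# model return plain tuples, which the tracer behind the ONNX exporter expects.
checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, torchscript=True)
model.eval()

# Trace the model on a sample input and write the ONNX graph to disk.
inputs = tokenizer("Exporting to ONNX for faster inference.", return_tensors="pt")
torch.onnx.export(
    model,
    (inputs["input_ids"], inputs["attention_mask"]),
    "model.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["logits"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "sequence"},
        "attention_mask": {0: "batch", 1: "sequence"},
        "logits": {0: "batch"},
    },
    opset_version=13,
)

The resulting model.onnx file can then be loaded with ONNX Runtime and benchmarked against the original PyTorch model.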

Background Knowledge:
Python; NumPy, PyTorch, or TensorFlow; and transfer learning in machine learning

Bio: 

Patrick von Platen is a research engineer at Hugging Face and one of the core maintainers of the popular Transformers library.

He specializes in speech recognition, encoder-decoder models and long-range sequence modeling.

Before joining Hugging Face, Patrick conducted research in speech recognition at Uber AI, Cambridge University, and RWTH Aachen University.
