Scalable Natural Language Processing Using BERT, OpenVINO, AI Kit, and Open Data Hub

Abstract: 

Bidirectional Encoder Representations from Transformers (BERT) is currently one of the most widely used NLP models. The combination of Open Data Hub, the Intel® oneAPI AI Analytics Toolkit (AI Kit), and the OpenVINO toolkit helps operationalize models like BERT following MLOps best practices. As a starting point, Open Data Hub provides a notebook-as-a-service environment through its JupyterHub implementation. We will show how data scientists, using custom resources, can initiate training of BERT models using AI Kit images with Intel-optimized deep learning frameworks such as PyTorch and TensorFlow. OpenVINO integrations with Open Data Hub augment its image catalog with pre-validated notebook images that can be used to optimize models like BERT to lower precision and, optionally, fine-tune them. Finally, we detail how to operationalize optimized, scalable inference on a multi-node Xeon CPU cluster using OpenVINO Model Server and the Istio service mesh.
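
To illustrate the inference stage described above (this sketch is not part of the original abstract), the Python snippet below assumes a BERT model that has already been converted to OpenVINO IR as bert.xml and uses the openvino.runtime API (OpenVINO 2022+) together with a Hugging Face tokenizer. The model path, the "bert-base-uncased" checkpoint, and the input tensor names are assumptions that depend on how the model was exported.

from openvino.runtime import Core
from transformers import AutoTokenizer

# Tokenize a sample sentence; "bert-base-uncased" is an assumed checkpoint.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoded = tokenizer(
    "OpenVINO helps scale BERT inference on Xeon CPUs.",
    return_tensors="np",
    padding="max_length",
    max_length=128,
    truncation=True,
)

# Load the (assumed) IR files bert.xml/bert.bin and compile for a CPU target.
core = Core()
model = core.read_model("bert.xml")
compiled = core.compile_model(model, device_name="CPU")

# Input names depend on how the model was converted; these are typical for a
# Hugging Face BERT export.
results = compiled({
    "input_ids": encoded["input_ids"],
    "attention_mask": encoded["attention_mask"],
    "token_type_ids": encoded["token_type_ids"],
})
print(results[compiled.output(0)].shape)

In the serving scenario covered by the talk, the same tokenized inputs would instead be sent to an OpenVINO Model Server endpoint behind Istio rather than compiled locally; the compile_model call here is only a local stand-in for that deployment.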

Bio: 

Ryan is a Product Manager for OpenVINO Developer Tools at Intel. He is passionate about making AI accessible to everyone and improving our lives with technology. In his spare time, he enjoys alpine skiing, traveling and training his dog.
