A Walkthrough of Low-Code Deep Learning with KNIME

Abstract: 

Deep learning is the cool kid in school: everybody talks about it and wants to be friends with it. There is hardly anything we cannot do with advanced deep learning architectures, from generating synthetic images and natural language descriptions to powering autonomous vehicle systems.

The complexity of these tasks cultivated the myth that building deep learning applications requires equally complex coding in scripting languages where packages such as TensorFlow and Keras can be leveraged. Visual programming-based software like KNIME Analytics Platform debunks this myth by removing the coding barrier and offering a friendly UI for no-code/low-code deep learning pipelines.

In this tutorial, we will illustrate the evolution of deep learning architectures and how KNIME Analytics Platform is naturally designed to keep up with these transformations. We will start off by introducing simple ANNs for a classification task. While easy to grasp, ANNs are not well suited to sequential data (e.g., texts and time series) or visual data (e.g., images and videos), where other, more complex architectures have proved superior. We will zoom in on RNNs with LSTM units for text generation and time series forecasting; CNNs for image classification and styling; and GANs for synthetic image generation.
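
For those curious about what the visual workflow corresponds to in code, the sketch below builds the kind of feed-forward network used in the binary classification walkthrough with the Keras/TensorFlow 2.x Python API; the input width, layer sizes, and dummy data are illustrative assumptions, not part of the actual exercise.

```python
# Minimal sketch of a feed-forward network for binary classification,
# roughly what the Keras layer nodes assemble in the KNIME workflow.
# Input width (10 features), layer sizes, and dummy data are illustrative.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(10,)),             # one unit per input feature
    layers.Dense(32, activation="relu"),   # hidden layer
    layers.Dense(1, activation="sigmoid"), # sigmoid output for a binary target
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Dummy table standing in for the data fed to the Keras Network Learner node.
X = np.random.rand(200, 10)
y = np.random.randint(0, 2, size=200)
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```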

Lately, the need for ever more accurate, multi-task models has led to the proliferation of LLMs. These models have billions of parameters and are the result of a long, data- and resource-intensive training process. In 2018, Google's BERT achieved SOTA performance on multiple NLU benchmarks. More recently, OpenAI's GPT-3 (2020) and GPT-4 (2023) have taken the data science community by storm. For data practitioners, it is simply infeasible to reinvent the wheel from scratch. Hence, we will show how to adopt transfer learning and consume models via REST APIs.
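
To make "consuming models via REST APIs" concrete, here is a minimal Python sketch of a single chat completion request to OpenAI's public endpoint; in the tutorial the equivalent call is configured visually with the REST Client extension. The model name, prompt, and the OPENAI_API_KEY environment variable are illustrative assumptions.

```python
# Hedged sketch: one HTTP POST to OpenAI's chat completions endpoint,
# mirroring what a POST Request node would be configured to send.
import os
import requests

api_key = os.environ["OPENAI_API_KEY"]  # assumed to be set beforehand

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-3.5-turbo",  # illustrative model name
        "messages": [
            {"role": "user", "content": "Summarize what KNIME is in one sentence."}
        ],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```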

Throughout our journey, we will rely on KNIME's Keras and TensorFlow integrations to define, train, and deploy deep learning models; on the BERT nodes for transfer learning; and on the REST Client extension to issue HTTP requests and display the results in interactive Data Apps. Finally, whenever KNIME nodes are not readily available, we always have the option to develop Python-based nodes, such as for Word2Vec and GNNs.
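
As an idea of what such a Python-based node might wrap, the following sketch trains a small Word2Vec model with gensim (4.x API assumed); the toy corpus and hyperparameters are purely illustrative.

```python
# Minimal sketch of the training step a Python-based Word2Vec node could wrap.
# gensim 4.x API assumed; corpus and hyperparameters are illustrative.
from gensim.models import Word2Vec

corpus = [
    ["knime", "makes", "deep", "learning", "accessible"],
    ["visual", "workflows", "replace", "boilerplate", "code"],
]

model = Word2Vec(
    sentences=corpus,
    vector_size=50,  # dimensionality of the word embeddings
    window=3,        # context window size
    min_count=1,     # keep every token in this toy corpus
    workers=1,
)

print(model.wv["knime"][:5])  # first few components of the learned embedding
```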

Session Outline:

Topic: Intro to Deep Learning in KNIME
Description:
What is it?
KNIME Integrations for Deep Learning (Keras and TensorFlow)

Topic: Simple Artificial Neural Networks
Description:
Feed-Forward Neural Network: Single vs. Multi-Layer Perceptron
Live Walkthrough: Feed-Forward Neural Network for Binary Classification

Topic: RNNs with LSTM Units
Description:
Sequential Data (e.g., Text and Time Series)
Text Generation & Demand Forecasting

Topic: CNNs
Description:
Visual Data (e.g., Images and Videos)
Image Classification & Image Neuro-Styling

Topic: GANs
Description:
Synthetic Image Generation
Live Walkthrough: GANs Data App for Image Generation

Topic: LLMs - BERT: Transfer Learning
Description:
BERT Nodes
Sentiment Analysis

Topic: LLMs - GPT-3: Model Consumption via REST APIs
Description:
Live Walkthrough: ChatGPT as a KNIME Data App

Topic: Python-Based Development
Description:
Word2Vec and GNNs

Topic: Wrap-Up
Description:
Summary and Q&A

All example workflows will be shared with the attendees.

Bio: 

Emilio Silvestri is a Junior Data Scientist on the Evangelism Team at KNIME. He holds a Master's degree in Computer Science from the University of Konstanz, with a special focus on Data Science and Artificial Intelligence. He is a certified KNIME Trainer and works with the KNIME Education Team to onboard and upskill people on their data science journey through courses and webinars.
