Abstract: ReAct is an approach that interleaves reasoning traces with actions: the LLM reasons about a problem, selects the best action from a set of available tools external to the LLM, and feeds the results back into further reasoning. This methodology mimics human chain-of-thought processes combined with the ability to engage with an external environment to solve problems, reducing the likelihood of hallucinations and reasoning errors.
In this workshop, you will learn how to employ the ReAct technique to allow an LLM to determine where to find information to service different types of user queries, using LangChain to orchestrate the process. You’ll see how to use Retrieval Augmented Generation (RAG) to answer questions based on external data, as well as other tools for performing more specialized tasks that enrich the output of your LLM.
All demo code and presentation material will be provided, as well as a temporary Amazon SageMaker Studio environment to build and deploy in.
Module 1: Overview of Retrieval Augmented Generation (RAG)
Module 2: Introduction to ReAct, and LangChain
Module 3: Building a ReAct workflow with LangChain
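The workflow built in Module 3 follows the Thought → Action → Observation pattern at the heart of ReAct. The sketch below illustrates that loop in plain Python, with no LangChain dependency; the tool names, the toy corpus, and the pre-scripted reasoning steps are illustrative stand-ins, not part of any real library.

```python
# Minimal sketch of a ReAct-style loop: the model alternates between
# reasoning ("Thought"), invoking a tool ("Action"), and reading the
# tool's result ("Observation") until it can answer.
# All names below (search_docs, calculator, react_loop) are hypothetical.

def search_docs(query: str) -> str:
    """Stand-in for a RAG retrieval tool backed by external data."""
    corpus = {"capital of France": "Paris is the capital of France."}
    return corpus.get(query, "No results found.")

def calculator(expression: str) -> str:
    """Stand-in for a specialized tool (here, simple arithmetic)."""
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"search_docs": search_docs, "calculator": calculator}

def react_loop(steps):
    """Run a pre-scripted list of (thought, action, argument) steps.

    A real agent would have an LLM generate each step from the prior
    observations; here the steps are scripted to show the control flow.
    """
    for thought, action, argument in steps:
        print(f"Thought: {thought}")
        if action == "finish":          # the agent decides it can answer
            print(f"Answer: {argument}")
            return argument
        observation = TOOLS[action](argument)
        print(f"Action: {action}[{argument}]")
        print(f"Observation: {observation}")

answer = react_loop([
    ("I should look up the capital of France.",
     "search_docs", "capital of France"),
    ("The observation contains the answer.", "finish", "Paris"),
])
```

In the workshop, LangChain replaces the scripted steps: the LLM itself emits each Thought and Action, and the orchestrator parses them, calls the chosen tool, and appends the Observation to the prompt for the next turn.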
Prerequisite and Background Knowledge Needed:
Basic Python knowledge
Basic GenAI/ML Understanding
Bio: Giuseppe Zappia is a Principal Solutions Architect at Amazon Web Services (AWS), where he focuses on helping customers leverage AWS to achieve their desired outcomes. He has over 22 years of experience in software development, systems design, and cloud architecture. Giuseppe specializes in machine learning and is an active contributor to ML initiatives at Amazon through published content and speaking events. He is a regular at the AWS ML/Analytics meetup group in Denver, a speaker at re:Invent 2022, and a guest on the Generative AI on AWS YouTube episode “Retrieval-Augmented Generation (RAG) using LangChain and Pinecone - The RAG Special Episode”. When not working on technical projects, Giuseppe is either deep into video games or building his next LEGO set.