Moving Beyond Statistical Parrots – Large Language Models and their Tooling

Abstract: 

Large language models like GPT-4 and Codex have demonstrated immense capabilities in generating fluent text. However, simply scaling up data and compute yields statistical parroting rather than genuine intelligence. This talk explores the emerging ecosystem of frameworks, services, and tooling that moves beyond statistical mimicry and enables developers to build impactful applications powered by large language models. We discuss mechanisms such as function calling and Retrieval Augmented Generation to ground models in external knowledge, prompt engineering to steer model behavior, monitoring systems to detect biases, and cloud offerings to deploy conversational agents. Navigating towards meaningful outputs also requires strong model governance frameworks that mitigate the biases and harmful ideologies embedded in training data, paving the way for beneficial application development. Developers play a crucial role in this process and should be empowered with the tools and knowledge to steer these models appropriately. Intentional use of these elements not only strengthens model governance but also enriches the developer experience, enabling substantial applications that are not mere parroting but sources of genuine value. From deploying conversational agents to crafting impactful applications across industries such as healthcare and education, a comprehensive understanding of this array of LLM mechanisms can push the boundaries of NLP and AI and help usher the technology into everyday life.

Session Outline:

"1. Understanding the Limits of Scaling Language Models:
- Grasp why simply increasing the size of datasets and compute resources for language models like GPT-4 and Codex does not equate to genuine intelligence.
- Learn the concept of statistical parroting and its implications for the development of AI.

2. Exploring Advanced Techniques to Enhance Language Model Performance:
- Gain insights into techniques such as Retrieval Augmented Generation (RAG) and how they improve the relevance and quality of language model outputs.
- Discover how prompt engineering can be used to guide language models more effectively.
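As an illustration of the retrieval-augmented prompting just mentioned, here is a minimal, hedged sketch in plain Python: a toy keyword retriever picks the most relevant snippets from a small in-memory document store and injects them into a prompt template. The documents, scoring, and prompt wording are placeholders, not part of any particular library; the resulting prompt string would then be sent to the LLM of your choice.

    # Minimal illustration of Retrieval Augmented Generation (RAG):
    # retrieve relevant snippets, then inject them into the prompt.

    documents = [
        "LangChain orchestrates calls to LLMs, tools, and vector stores.",
        "RAG retrieves external knowledge and adds it to the prompt.",
        "Prompt engineering shapes model behavior without retraining.",
    ]

    def retrieve(query: str, k: int = 2) -> list[str]:
        """Toy keyword retriever: rank documents by word overlap with the query."""
        words = set(query.lower().split())
        scored = sorted(
            documents,
            key=lambda d: len(words & set(d.lower().split())),
            reverse=True,
        )
        return scored[:k]

    def build_prompt(query: str) -> str:
        """Prompt engineering: constrain the model to the retrieved context."""
        context = "\n".join(retrieve(query))
        return (
            "Answer the question using only the context below.\n"
            f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
        )

    print(build_prompt("What does RAG add to a prompt?"))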

3. Model Governance and Bias Mitigation:
- Understand the importance of model governance frameworks in ensuring ethical AI development.
- Learn strategies for monitoring systems to detect and mitigate biases in machine-generated content.

4. Deployment of Language Models in Application Development:
- Learn how to utilize cloud-based services to deploy conversational agents and other applications powered by large language models.
- Investigate the architectural considerations for building scalable, robust NLP applications.

5. Practical Application Across Industries:
- Explore specific use cases of how large language models can be leveraged in industries like healthcare and education.
- Understand the potential impact and value large language models can bring to different domains.

6. Empowerment of Developers in Steering AI:
- Acknowledge the critical role developers play in shaping the future of NLP applications.
- Empower attendees with the knowledge of tools and best practices for guiding model behavior towards beneficial outcomes.

Open Source Tools:

1. LangChain:
- Demonstrations using LangChain's orchestration framework for text generation and for code-centric tasks with models such as Codex.
- Examples of how to interact with various models via APIs.
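As a taste of the kind of demonstration planned, here is a minimal, hedged sketch in LangChain's expression-language style of chaining. It assumes the langchain-openai package is installed and an OPENAI_API_KEY environment variable is set; the model name is only an example.

    # Minimal LangChain sketch: prompt template piped into a chat model.
    from langchain_openai import ChatOpenAI
    from langchain_core.prompts import ChatPromptTemplate

    llm = ChatOpenAI(model="gpt-4", temperature=0)  # any supported chat model works
    prompt = ChatPromptTemplate.from_template(
        "Explain what the following Python snippet does:\n{code}"
    )
    chain = prompt | llm  # LangChain Expression Language composition

    result = chain.invoke({"code": "print(sum(range(10)))"})
    print(result.content)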

2. Hugging Face's Transformers Library:
- Walkthrough on utilizing the Transformers library to implement and fine-tune large language models.
- Examples and practice sessions focused on RAG and other NLP tasks.
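To set expectations for the walkthrough, a minimal sketch with the Transformers pipeline API is shown below; the model name is only an example, and fine-tuning and RAG in the session build on this same library.

    # Minimal Hugging Face Transformers sketch: text generation via the pipeline API.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")  # example model; swap in any causal LM
    outputs = generator(
        "Large language models are",
        max_new_tokens=40,
        num_return_sequences=1,
    )
    print(outputs[0]["generated_text"])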

3. MLflow or Weights & Biases:
- Introduction to tools like MLflow or Weights & Biases for tracking experiments, managing the machine learning lifecycle, and supporting model governance.
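As a hedged sketch of what experiment tracking looks like in practice with MLflow (Weights & Biases offers a very similar logging API), the experiment, parameter, and metric names below are illustrative only.

    # Minimal MLflow sketch: log prompt/model settings and evaluation results per run.
    import mlflow

    mlflow.set_experiment("llm-prompt-experiments")
    with mlflow.start_run(run_name="baseline-prompt"):
        mlflow.log_param("model_name", "gpt-4")       # illustrative parameters
        mlflow.log_param("temperature", 0.2)
        mlflow.log_metric("answer_relevance", 0.87)   # illustrative evaluation metrics
        mlflow.log_metric("toxicity_rate", 0.01)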

4. Fairness and Bias Assessment Tools:
- Overview of tools such as AI Fairness 360 or Fairlearn that help detect and mitigate biases in machine learning models.
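As a minimal, hedged sketch of the disaggregated evaluation these tools support, the example below uses Fairlearn's MetricFrame to compare accuracy across groups; the data is synthetic and the sensitive feature is illustrative.

    # Minimal Fairlearn sketch: compare a metric across groups with MetricFrame.
    from fairlearn.metrics import MetricFrame
    from sklearn.metrics import accuracy_score

    y_true = [1, 0, 1, 1, 0, 1, 0, 0]                   # synthetic labels
    y_pred = [1, 0, 0, 1, 0, 1, 1, 0]                   # synthetic model predictions
    group = ["A", "A", "A", "B", "B", "B", "B", "A"]    # illustrative sensitive feature

    frame = MetricFrame(
        metrics=accuracy_score,
        y_true=y_true,
        y_pred=y_pred,
        sensitive_features=group,
    )
    print(frame.overall)    # overall accuracy
    print(frame.by_group)   # accuracy per group; large gaps hint at bias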

5. Cloud Services for AI Deployment:
- Guidance on deploying models using cloud services like Google Cloud AI Platform, Azure Machine Learning, or AWS SageMaker.
- Discussions on setting up secure and scalable infrastructure for real-time AI applications.
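As a small, hedged sketch of what calling a deployed model looks like, the example below invokes an AWS SageMaker real-time endpoint with boto3; the endpoint name and payload schema are hypothetical, and Google Cloud and Azure expose analogous prediction endpoints.

    # Minimal sketch: call a deployed LLM endpoint on AWS SageMaker via boto3.
    import json
    import boto3

    runtime = boto3.client("sagemaker-runtime")
    response = runtime.invoke_endpoint(
        EndpointName="my-llm-endpoint",                  # hypothetical endpoint name
        ContentType="application/json",
        Body=json.dumps({"inputs": "Summarize this support ticket: ..."}),
    )
    print(json.loads(response["Body"].read()))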

6. Jupyter Notebooks:
- Usage of Jupyter Notebooks to provide attendees with hands-on experience in writing and running Python code for language models.
- Collaboration and sharing techniques for experimental findings and results.

By the end of the session, attendees will be equipped with a robust understanding of the complexities of large language models, practical skills in deploying and governing these models, and inspiration to apply this powerful technology in various industry applications.

Bio: 

Ben Auffarth is a seasoned data science leader with a background and Ph.D. in computational neuroscience. Ben has analyzed terabytes of data, simulated brain activity on supercomputers with up to 64k cores, designed and conducted wet lab experiments, built production systems processing underwriting applications, and trained neural networks on millions of documents. He’s highly regarded in the London data science community and the best-selling author of the books Generative AI with LangChain, Machine Learning for Time Series, and Artificial Intelligence with Python Cookbook.
