As we continue to develop new AI applications that touch every aspect of our lives, it’s essential that we approach them thoughtfully to ensure they benefit everyone in society. That’s where the field of responsible AI comes in. At ODSC East, our entire track devoted to responsible AI will explore how it can help minimize harm and maximize the benefits of data science and AI applications. Check out a few of our upcoming sessions below.

Get your ODSC East 2024 pass today!

In-Person and Virtual Conference

April 23rd to 25th, 2024

Join us for a deep dive into the latest data science and AI trends, tools, and techniques, from LLMs to data analytics and from machine learning to responsible AI.

How AI Impacts the Online Information Ecosystem

Noah Giansiracusa, PhD | Associate Professor of Mathematics and Data Science | Bentley University

Explore the ways, both good and bad, that AI is influencing the online information ecosystem. This session covers both concrete examples and high-level concepts, with a focus on the creation of mis- and disinformation (LLMs, deepfake video and audio), its propagation (search rankings, social media algorithms), its funding (the targeted advertising industry), and AI-assisted fact-checking and bot detection and removal.

Resisting AI

Dr. Dan McQuillan | Lecturer in Creative and Social Computing | Goldsmiths, University of London

This session examines the operations of AI as social and political as well as technical, with a focus on their capacity to amplify social harms and on ways attendees can rethink their own work.

You’ll touch on AI’s direct social impacts, how it influences bureaucratic and institutional structures, and its environmental and global effects.

Social and Ethical Implications of Generative AI

Abeba Birhane | Senior Fellow in Trustworthy AI at the Mozilla Foundation | Adjunct Lecturer/Assistant Professor at Trinity College Dublin

Don’t miss this keynote talk from Abeba Birhane. In this session, she will address the necessity for AI systems to be fair, accurate, and robust. You’ll delve into concerns arising from large datasets and their downstream impacts, and discuss approaches for improvement and structural change.

Advancing Ethical Natural Language Processing: Towards Culture-Sensitive Language Models

Gopalan Oppiliappan | Head, AI Centre of Excellence | Intel

Though essential for many applications, NLP systems also raise concerns about equitable representation and cultural understanding. In this session, you’ll explore culture-sensitive language models, which aim to address these issues by diversifying training data to encompass a wide range of cultures, implementing bias detection and mitigation strategies, and fostering collaboration with cultural experts to enhance contextual understanding.

Strategies for Implementing Responsible AI Governance and Risk Management

Beatrice Botti | VP – Chief Privacy Officer | DoubleVerify

This session is a deep dive into the essential principles and practices that mitigate risk, ensure compliance, and build trust in your AI initiatives. You’ll explore practical tools and frameworks for responsible AI implementation, examine the real harms and risks of AI systems, unpack the principles of trustworthy AI, and discover the key elements of robust governance and risk management practices, empowering data scientists and business leaders to build trustworthy AI systems.

Language Modeling, Ethical Considerations of Generative AI, and Responsible AI

Madiha Shakil Mirza | NLP Engineer | Avanade

During this session, you’ll cover a range of topics, including:

  • The technological evolution in AI and NLP that led to Generative AI
  • What Generative AI is
  • How Generative AI differs from Machine Learning and Deep Learning
  • How Large Language Models are built
  • Statistical Language Models vs. Neural Language Models
  • Ethical considerations for Generative AI, such as bias, privacy, copyright, intellectual property rights, misinformation, and environmental impact
  • Responsible AI and how we can play our part to ensure that Large Language Models are developed and used responsibly

Nurturing Responsibility in our AI Endeavors

Rishu Gandhi | Senior Data Engineer in Cybersecurity | Wells Fargo

Explore Responsible AI through the lens of real-world examples and actionable insights to see why it’s become a necessity for industry practices. Don’t miss this discussion on building sustainable and accountable AI products.

2024 Data Engineering Summit tickets available now!

In-Person Data Engineering Conference

April 23rd to 24th, 2024 – Boston, MA

At our second annual Data Engineering Summit, Ai+ and ODSC are partnering to bring together the leading experts in data engineering and thousands of practitioners to explore different strategies for making data actionable.

Generative AI for Social Good

Colleen Molloy Farrelly | Chief Mathematician | Post Urban Ventures

Discuss the current ecosystem of generative AI methods, including image and text generation, with a focus on social good applications in medical imaging, diversity training, public health, and underrepresented languages. You’ll start with an overview of common generative AI algorithms for image and text generation before launching into a series of case studies with more detailed algorithm overviews and their successes on social good projects.

Overcoming the Limitations of LLM Safety Parameters with Human Testing and Monitoring

Josh Poduska | AI Advisor | Applause

Peter Pham | Senior Program Manager | Applause

As Large Language Models develop and evolve, it has become even more essential that we dedicate resources to ensuring safety, fairness, and responsibility. This session will examine a new approach to these concerns that leverages human testing and monitoring from a diverse global population, combining crowd-sourced and professional testers from a variety of locations, countries, cultures, and life experiences. By thoroughly scrutinizing the input and output spaces of LLMs and LLM applications, this approach helps ensure responsible and safe product delivery.

Experience these and many more expert-led sessions on responsible AI for our rapidly developing world of AI and LLMs at ODSC East, coming up April 23rd to 25th. Act now to get your in-person or virtual pass before they’re gone.