ODSC West 2020

Virtual AI Expo

2 Demo Theaters  |  30 Partners  |  Virtual Networking

28–29 October 2020  •  Virtual Event


AI EXPO PARTNERS

Limited Booth Availability

Join ODSC West Expo

REQUEST BROCHURE

UNDERSTAND THE APPLICATION OF AI IN THE REAL WORLD

Want to keep up with the latest AI developments, trends, and insights? Facing a build-vs-buy dilemma as you grow your business? Seeking to interact with data-obsessed peers and build your network?

Look no further: the ODSC AI Expo Hall is the right destination for you.

2 Demo Theaters: With our free expert-led demos and Expo Hall tutorials, learn how these platforms and products can accelerate the adoption of data science and AI within your organization. Understand the various AI adoption pathways in detail to inform your build-vs-buy decisions.
30 Virtual Booths: Visit our 30 partner booths to learn about the latest enterprise AI solutions from the most important players in the AI space. Technologies showcased include AutoML, data labeling, DevOps, DataOps, deep learning, cloud computing, and image, voice, and facial recognition.
Networking: Meet and network with 6,500 data scientists and join the most influential data science community worldwide. Build lasting professional relationships by being part of this unique community.

AI Expo Pass – Free

  • Access to ODSC AI Expo (Wed-Thur)

  • Access to 16 live Demo Talks (Wed-Thur)

  • Access to main ODSC Keynotes (Wed-Fri)

  • Access to Networking Events (Wed-Thur)

General Pass $159

  • Access to all main conference Talks and Keynotes (Thur-Fri)

  • Access to ODSC AI Expo (Wed-Thur)

  • Access to 16 live ODSC Demo Talks (Wed-Thur)

  • Access to Virtual Events

  • Full Access to Networking Events

One Day Pass $299

  • Access to One Day Hands-on Training and Workshops

  • Access to Keynotes and Talks (Thur-Fri)
  • Access to ODSC AI Expo (Wed-Thur)

  • Access to 16 live ODSC Demo Talks (Wed-Thur)

  • Access to Virtual Events

  • Full Access to Networking Events

Demo Theater Speakers


  • What to Expect When You’re Expecting ML: Lessons Learned from the New SDLC – Diego Oppenheimer, CEO and Cofounder of Algorithmia

  • Machine Learning to Detect Cyber Attacks: A Case Study – Harini Kannan, Data Scientist at Capsule8

  • How to Sell AI to a Fortune 500 Customer – Allison Sawyer, Partner at The League of Worthwhile Ventures

  • Running Effective Machine Learning Teams: Common Issues, Challenges and Solutions – Gideon Mendels, CEO & Cofounder of Comet.ml

  • The New AI Factory Model: How to Scale Quality Training Data – Matthew McMullen, Growth Strategist at CloudFactory

  • Growing Data Science at Scale – Sr. Director of Data Science at Pluralsight


West Demo Theater
10:30 - 10:55
Automated Model Management with ML Works

With data at the heart of major decision-making in today’s world, companies need a solution that helps them streamline their data science (DS) practice. ML Works, an E2E model management accelerator by Tredence, is a one-stop solution for exactly that. It accelerates each stage of the ML model lifecycle, from build to monitor.
The ML Works accelerator provides a model development framework and a feature store for better collaboration across teams. It also includes a visual provenance graph for end-to-end model visibility and pipeline traceability, as well as an Explainable AI module that makes ML output more accessible. It also enables continuous monitoring of production models for accuracy and relevance, with auto-triggered alerts in the event of model or data drift, and much more.

Automated Model Management with ML Works image
Pavan Nanjundaiah
Head of Engineering | Tredence Inc.
10:30 - 10:55
What if AI Could Craft the Next Generation of your AI?

Taking an AI model from the lab to production is extremely challenging. In fact, recent reports and surveys estimate that only 20%-30% of deep neural modelling attempts make it into production. One of the major bottlenecks on the path from lab to production is the poor latency or throughput performance of these neural models, which immediately translates to an excessively high cost-to-serve.

In this talk, we present an innovative solution to this problem, driven by Deci AI’s deep learning platform. Our platform tackles the challenge by using an AI-based neural architecture search (NAS). It is capable of crafting and improving almost any given deep neural network, thus allowing networks to achieve production performance grade without compromising accuracy. Our proprietary Automated Neural Architecture Construction engine (AutoNAC) unlocks a whole set of AI opportunities for cloud, on-prem, and edge deployments.

The session will begin with a presentation by Deci’s CEO, Yonatan Geifman, PhD, who will introduce AutoNAC, and provide a peek into its algorithmic principles. Following this, Sefi Bell-Kligler, Deci’s Director of AI, will present an end-to-end technical demo showcasing real-world cases; these examples demonstrate Deci’s platform, featuring its AutoNAC algorithmic optimization, and the associated user journeys.

What if AI Could Craft the Next Generation of your AI? image
Yonatan Geifman, PhD
CEO | Deci AI
11:00 - 11:25
Next-Generation Big Data Pipelines With Prefect and Dask

Data pipelines are crucial to an organization’s data science efforts. They ensure data is collected and organized in a timely and accurate manner, and is made available for analysis and modeling. In many cases, these pipelines require parallel computing. That might be because they involve “big compute” (many tasks to execute in parallel) or “big data” (large datasets which have to be processed in chunks). In this talk we’ll introduce the next-generation stack for big data pipelines built upon Prefect and Dask, and compare it to popular tools like Spark, Airflow, and the Hadoop ecosystem. We’ll discuss pros and cons of each, then take a deep dive into Prefect and Dask.

Dask is a Python-native parallel computing framework that can distribute computation, from arbitrary Python functions up to high-level DataFrame and Array objects. It also has machine learning modules that are optimized to take advantage of these distributed data structures. Prefect is a workflow management system created by engineers who contributed to Airflow, and was specifically designed to address some of Airflow’s shortcomings. It is built around the “negative engineering” paradigm – it takes care of all the little things that might go wrong in a data pipeline. When computations need to be distributed, Prefect integrates seamlessly with Dask clusters through its executor interface.
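As a minimal sketch of the “big compute” pattern the abstract describes (assuming Dask is installed; the `extract`/`transform`/`load` task names are illustrative, not from the talk), `dask.delayed` turns plain Python functions into a lazy task graph whose independent branches a scheduler can execute in parallel:

```python
import dask

# Wrap ordinary Python functions as lazy tasks; nothing runs yet.
@dask.delayed
def extract(i):
    return i * 10

@dask.delayed
def transform(x):
    return x + 1

@dask.delayed
def load(parts):
    return sum(parts)

# Build the task graph: four independent extract -> transform branches
# feeding a single aggregation step.
pipeline = load([transform(extract(i)) for i in range(4)])

# Execute the graph; Dask schedules the independent branches in parallel.
result = pipeline.compute()
print(result)  # (1 + 11 + 21 + 31) = 64
```

A workflow system like Prefect sits one level above this: it adds scheduling, retries, and failure handling around such graphs and, via its executor interface, can hand the actual computation to a Dask cluster.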

Next-Generation Big Data Pipelines With Prefect and Dask image
Aaron Richter, PhD
Senior Data Scientist | Saturn Cloud
11:00 - 11:25
Leverage Data Lineage to Maximize the Benefits of AI and Big Data

Attend this session to understand how data lineage is helping companies to:

• Decipher complex algorithms in systems that supply data to the “data lake”
• Increase trust in data for analysts and data citizens/scientists
• Manage the impact of application changes on downstream analytic systems
• Simplify the consumption and understanding of data flow throughout the enterprise

Learn through various use cases how companies are taking advantage of lineage to realize these benefits.

Leverage Data Lineage to Maximize the Benefits of AI and Big Data image
Ernie Ostic
Senior Vice President of Product | MANTA
11:00 - 11:25
Personalize.AI: Transforming businesses through personalization

Digitization has led to enormous amounts of content across different channels, significantly shrinking customers’ attention spans. To capture customer attention, e-commerce organizations are focusing on delivering a better user experience throughout the shopping journey via hyper-personalization across content and offers. These hyper-personalization efforts can improve the customer experience and an organization’s top-line revenue by up to ~15%.

In this session, ZS will demo its Personalize.AI™ (P.AI) application, which integrates directly with an organization’s data ecosystem to provide customer-level recommendations. The P.AI system combines heterogeneous datasets, including customer-level interactions, demographic data, loyalty programs, marketing offers, and item properties, to optimize the customer experience through personalization. We’ll share how the solution provides a scalable ecosystem with 6 core capabilities: promotion design, auto-feature engineering, advanced micro-segmentation, item-offer recommendation, dynamic test-control, and customer targeting.

Learn how the P.AI application is applied in the e-commerce industry and how it has helped firms transform their customer experience.

Personalize.AI: Transforming businesses through personalization image
Gopi Vikranth
Associate Principal | ZS
Personalize.AI: Transforming businesses through personalization image
Dr. Prakash
Associate Principal | ZS
11:30 - 11:55
How to Increase ML Server Utilization With MLOps Visualization Dashboards

Learn how to increase your ML server utilization with visibility dashboards and data-driven infrastructure management. In this webinar we will discuss key solutions for fragmented MLOps processes, and how to reduce technical and computational debt. Companies invest millions of dollars on compute that has the potential to dramatically accelerate AI workloads and improve performance, but end up only utilizing a small fraction of it, sometimes as low as 20% of these powerful resources.

In this talk we will introduce a way to streamline your MLOps process, monitor all GPU, CPU and memory resources, and maintain high utilization of your resources. You will learn key strategies to increase utilization for your ML/DL infrastructure with MLOps and resource management best practices. We will discuss the benefits of a hybrid cloud infrastructure, and how to maximize utilization with MLOps visibility dashboards.

What you’ll learn:
– How to increase utilization by up to 80% with infrastructure visibility
– How to monitor utilization, capacity and allocation of ML servers across all runs
– MLOps strategies to reduce computational debt in your infrastructure
– Benefits and strategies for managing a Hybrid Cloud environment
– How to use data-driven ML infrastructure and capacity planning

How to Increase ML Server Utilization With MLOps Visualization Dashboards image
Yochay Ettun
CEO and Co-founder | cnvrg.io
11:30 - 11:55
Improving Your Data Visualization Flow with Altair and Vega-Lite

Your boss just asked you to “whip up” a chart for a stakeholder review in an hour. You know you made that really slick visual a couple months ago, it would be a perfect fit for your current dataset. However, you can only find the final PNG, not the original notebook with your carefully crafted matplotlib magic. After a quarter hour’s search, you give up and start pummeling Stack Overflow, hoping to find that thread that had all the answers last time…

Sound familiar? If you’re a Pythonista whose data visualization process could use a makeover, then this talk is for you. We’ll identify the elements of an effective data visualization flow and explore how the Altair and Vega-Lite stack can improve your own data visualization practice.

The visualization demonstration will feature data sourced from S&P Global’s curated content sets, now available through the Snowflake Cloud Data Platform.

Improving Your Data Visualization Flow with Altair and Vega-Lite image
Rachel House
Senior Data Scientist | S&P Global
11:30 - 11:55
Integrating Open Source Modeling with SAS Model Manager

When it comes to data science, it is important to have the freedom to choose different approaches, languages, tools, techniques, and deployment environments, because this is what drives innovation and creativity. In this session, you will see how to leverage Python with SAS Model Manager, a software tool for data scientists and IT developers to govern, deploy, control, and monitor an entire open-source model ecosystem. We will showcase PythonZip-ModelManager (pzmm): a module that allows users to import, deploy, score, and monitor already-built Python models in SAS Model Manager directly from Python. With SAS Model Manager, you can then compare and contrast any type of model (including Python, R, and SAS based models). We will further show how SAS Model Manager includes easily consumable APIs, which cover many of the actions needed for a successful open-source model ecosystem.

Integrating Open Source Modeling with SAS Model Manager image
Scott Lindauer
Software Developer | SAS
Integrating Open Source Modeling with SAS Model Manager image
Diana Shaw
Sr. Product Manager | SAS
12:00 - 12:25
HPCC Systems – The Kit and Kaboodle for Big Data and Data Science

Learn why the truly open source HPCC Systems platform is better at Big Data and offers an end-to-end solution for Developers and Data Scientists. Learn how ECL can empower you to build powerful data queries with ease. HPCC Systems, a comprehensive and dedicated data lake platform makes combining different types of data easier and faster than competing platforms — even data stored in massive, mixed schema data lakes — and it scales very quickly as your data needs grow. Topics include HPCC Architecture, Embedded Languages and external datastores, Machine Learning Library, Visualization, Application Security and more.

HPCC Systems – The Kit and Kaboodle for Big Data and Data Science image
Bob Foreman
Senior Software Engineer | LexisNexis Risk Solutions
HPCC Systems – The Kit and Kaboodle for Big Data and Data Science image
Hugo Watanuki
Senior Technical Support Engineer | LexisNexis Risk Solutions
12:00 - 12:25
An Overview of Algorithmia: How to Deploy, Manage, and Scale Your Machine Learning Model Portfolio

Learn about Algorithmia’s machine learning operations and management platform that empowers teams to deploy models, connect to various data sources, automatically scale model inference, and manage the ML lifecycle in a centralized model catalog. We’ll demonstrate the process of deploying a credit risk model for real-time inference as well as a sentiment analysis ML pipeline to Algorithmia that uses different languages and frameworks. We’ll demonstrate data science workflows for deploying a model, adding an OCR tool to the pipeline, testing a new version of the model, then calling the model from different languages. We’ll also discuss how Algorithmia handles the underlying MLOps infrastructure and operations related to security, scalability, and governance.

An Overview of Algorithmia: How to Deploy, Manage, and Scale Your Machine Learning Model Portfolio image
Kristopher Overholt
Sales and Solution Engineer | Algorithmia
14:30 - 14:55
Meet the New Hot Analytics Stack – Apache Kafka, Spark and Druid

Come learn why thousands of companies use Apache Druid and Imply (powered by Apache Druid) for hot analytics alongside their data warehouses.

Learn from the experts at Imply, the creators of Apache Druid, as they show you how to:
– Graphically load streaming data from Kafka and Spark, and create dimensions and metrics
– Combine streaming and historical data from your data lake or data warehouse
– Get alerts, build and explore real-time dashboards, and see AI-powered explanations
– Perform drag-and-drop visual data exploration with sub-second response times

Meet the New Hot Analytics Stack – Apache Kafka, Spark and Druid image
Danny Leybzon
Senior Field Engineer | Imply
14:30 - 14:55
[Deep Learning] Fresh Data in Days Instead of Months

91% of teams take at least 1 month to create their first “Seed” Dataset – and many months to get into Beta. Learn about the top bottlenecks and blockers causing this, and how to solve them – reducing the process from months to days.

According to a July 2020 survey, 97% of projects iterate on their Datasets. 85% construct more than 1 Dataset. Yet only 12% use software to manage this process.

First we will explore the common blockers with creating and updating a single Major Dataset. Then we will zoom out and look at how many Datasets are needed to ship a Beta. We will introduce a novel non-blocking way to create Datasets in days instead of months. 10x your Development speed by digitizing the manual processes and using a novel Deep Learning specific Database.

[Deep Learning] Fresh Data in Days Instead of Months image
Anthony Sarkis
Founder | Diffgram


AI for Finance

AI for Marketing

AI for Healthcare

AI for Energy

AI for Biotech & Pharma

AI for Retail

AI for Climate

AI for Machines

AI Cyber & Fraud

AI for Manufacturing