AI EXPO PARTNERS
Demo Theater Speakers
Taking an AI model from the lab to production is extremely challenging. Recent reports and surveys estimate that only 20–30% of deep neural modeling efforts ever reach production deployment. One of the major bottlenecks on the path from lab to production is the poor latency or throughput of these neural models, which translates directly into an excessively high cost-to-serve.
In this talk, we present an innovative solution to this problem, driven by Deci AI’s deep learning platform. The platform tackles the challenge with AI-based neural architecture search (NAS): it can craft and improve almost any given deep neural network, allowing networks to reach production-grade performance without compromising accuracy. Our proprietary Automated Neural Architecture Construction engine (AutoNAC) unlocks a whole set of AI opportunities for cloud, on-prem, and edge deployments.
The session will begin with a presentation by Deci’s CEO, Yonatan Geifman, PhD, who will introduce AutoNAC and provide a peek into its algorithmic principles. Sefi Bell-Kligler, Deci’s Director of AI, will then present an end-to-end technical demo of real-world cases, showcasing Deci’s platform, its AutoNAC algorithmic optimization, and the associated user journeys.
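AutoNAC itself is proprietary, so no real detail of it appears below; purely to illustrate the general shape of a latency-constrained architecture search, here is a toy random-search sketch in which the search space, cost model, and quality model are all invented stand-ins:

```python
import random

# Toy, made-up search space: layer count and layer width.
SEARCH_SPACE = {"layers": [2, 4, 8], "width": [64, 128, 256]}

def latency_proxy(arch):
    # Stand-in cost model: more capacity -> higher serving latency.
    return arch["layers"] * arch["width"] / 1000.0

def accuracy_proxy(arch):
    # Stand-in quality model: capacity helps, with diminishing returns.
    return 1.0 - 1.0 / (arch["layers"] * arch["width"]) ** 0.5

def random_search(budget, max_latency, seed=0):
    """Return the best-scoring sampled architecture under the latency budget."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(budget):
        arch = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        if latency_proxy(arch) > max_latency:
            continue  # too slow to serve; reject this candidate
        score = accuracy_proxy(arch)
        if score > best_score:
            best, best_score = arch, score
    return best

print(random_search(budget=50, max_latency=1.0))
```

Production NAS engines search vastly larger spaces with learned predictors rather than random sampling, but the accuracy-under-a-latency-constraint objective is the same idea.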
With data at the heart of major decision making in today’s world, companies need a solution that streamlines their data science (DS) practice. ML Works, an end-to-end (E2E) model management accelerator by Tredence, is a one-stop solution that accelerates every stage of the ML model lifecycle, from build to monitor.
The ML Works accelerator provides a model development framework and a feature store for better collaboration across teams. It includes a visual provenance graph for end-to-end model visibility and pipeline traceability, as well as an Explainable AI module that makes ML output more accessible. It also continuously monitors production models for accuracy and relevance, with auto-triggered alerts in the event of model or data drift, and much more.
Data pipelines are crucial to an organization’s data science efforts. They ensure data is collected and organized in a timely and accurate manner, and is made available for analysis and modeling. In many cases, these pipelines require parallel computing. That might be because they involve “big compute” (many tasks to execute in parallel) or “big data” (large datasets which have to be processed in chunks). In this talk we’ll introduce the next-generation stack for big data pipelines built upon Prefect and Dask, and compare it to popular tools like Spark, Airflow, and the Hadoop ecosystem. We’ll discuss pros and cons of each, then take a deep dive into Prefect and Dask.
Dask is a Python-native parallel computing framework that can distribute everything from arbitrary Python functions to high-level DataFrame and Array objects. It also has machine learning modules optimized to take advantage of these distributed data structures. Prefect is a workflow management system created by engineers who contributed to Airflow, and was specifically designed to address some of Airflow’s shortcomings. It is built around the “negative engineering” paradigm: it takes care of all the little things that might go wrong in a data pipeline. When computations need to be distributed, Prefect integrates seamlessly with Dask clusters through its executor interface.
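Prefect’s and Dask’s real APIs differ across versions, so rather than pin one, here is a minimal standard-library sketch of the fan-out/fan-in task pattern described above; in a real flow, Prefect would define these tasks and a Dask executor would schedule the independent `transform` calls across a cluster (all task names here are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def extract(chunk_id):
    # Simulate pulling one chunk of source data (stand-in for real I/O).
    return list(range(chunk_id * 10, chunk_id * 10 + 10))

def transform(rows):
    # Per-chunk work; independent calls can run in parallel.
    return sum(rows)

def load(partials):
    # Fan-in step: combine the per-chunk results.
    return sum(partials)

def run_pipeline(n_chunks=4):
    # A local thread pool stands in for a Dask cluster scheduling the graph.
    with ThreadPoolExecutor() as pool:
        chunks = list(pool.map(extract, range(n_chunks)))
        partials = list(pool.map(transform, chunks))
    return load(partials)

print(run_pipeline())  # sums the numbers 0..39
```

The point of handing such a graph to Prefect + Dask rather than a local pool is retries, scheduling, observability, and scale-out across machines, the "negative engineering" the talk covers.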
Attend this session to understand how data lineage is helping companies to:
• Decipher complex algorithms in systems that supply data to the “data lake”
• Increase trust in data for analysts and data citizens/scientists
• Manage the impact of application changes on downstream analytic systems
• Simplify the consumption and understanding of data flow throughout the enterprise
Learn through various use cases how companies are taking advantage of lineage to realize these benefits.
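As a concrete illustration of managing the downstream impact of application changes, data lineage can be modeled as a directed graph from producers to consumers; a small hedged sketch (all asset names invented) shows how a breadth-first traversal answers "what breaks if this asset changes?":

```python
from collections import deque

# Hypothetical lineage graph: each asset maps to the assets that consume it.
LINEAGE = {
    "orders_db": ["data_lake.orders"],
    "data_lake.orders": ["dw.fact_orders"],
    "dw.fact_orders": ["dash.revenue", "ml.churn_features"],
    "ml.churn_features": ["ml.churn_model"],
}

def downstream_impact(asset):
    """Return every asset affected by a change to `asset` (BFS over lineage)."""
    seen, queue = set(), deque([asset])
    while queue:
        node = queue.popleft()
        for consumer in LINEAGE.get(node, []):
            if consumer not in seen:
                seen.add(consumer)
                queue.append(consumer)
    return sorted(seen)

print(downstream_impact("data_lake.orders"))
```

Commercial lineage tools build this graph automatically by parsing ETL code and query logs; the traversal itself is this simple.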
Digitization has led to enormous amounts of content across different channels, significantly shortening customers’ attention spans. To recapture that attention, e-commerce organizations are focusing on delivering a better user experience throughout the shopping journey via hyper-personalization across content and offers. These hyper-personalization efforts can improve customer experience and an organization’s top-line revenue by up to ~15%.
In this session, ZS will demo its Personalize.AI™ (P.AI) application, which integrates directly with an organization’s data ecosystem to provide customer-level recommendations. The P.AI system combines heterogeneous datasets, including customer-level interactions, demographic data, loyalty program activity, marketing offers, and item properties, to optimize the customer experience through personalization. We’ll share how the solution provides a scalable ecosystem with six core capabilities: promotion design, auto-feature engineering, advanced micro-segmentation, item-offer recommendation, dynamic test-control, and customer targeting.
Learn how the P.AI application is applied in the e-commerce industry and how it has helped firms transform their customer experience.
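P.AI’s internals are not public; purely as an illustration of what item-offer recommendation means, here is a toy linear-scoring ranker in which every trait, weight, and offer is invented (real systems learn these from the interaction data described above):

```python
# Hypothetical customer traits; a real system would learn these from data.
customer = {"price_sensitivity": 0.8, "loyalty_tier": 0.4}

# Hypothetical offers with hand-set appeal scores.
offers = [
    {"name": "10% off basket", "discount_appeal": 0.9, "loyalty_appeal": 0.2},
    {"name": "2x loyalty points", "discount_appeal": 0.1, "loyalty_appeal": 0.9},
    {"name": "free shipping", "discount_appeal": 0.5, "loyalty_appeal": 0.3},
]

def score(cust, offer):
    # Linear match between customer traits and offer traits.
    return (cust["price_sensitivity"] * offer["discount_appeal"]
            + cust["loyalty_tier"] * offer["loyalty_appeal"])

def recommend(cust, offers, k=1):
    """Rank offers by match score and return the top k."""
    return sorted(offers, key=lambda o: score(cust, o), reverse=True)[:k]

print(recommend(customer, offers)[0]["name"])
```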
Learn how to increase your ML server utilization with visibility dashboards and data-driven infrastructure management. In this webinar we will discuss key solutions for fragmented MLOps processes and how to reduce technical and computational debt. Companies invest millions of dollars in compute that has the potential to dramatically accelerate AI workloads and improve performance, but end up utilizing only a small fraction of these powerful resources, sometimes as low as 20%.
In this talk we will introduce a way to streamline your MLOps process, monitor all GPU, CPU and memory resources, and maintain high utilization of your resources. You will learn key strategies to increase utilization for your ML/DL infrastructure with MLOps and resource management best practices. We will discuss the benefits of a hybrid cloud infrastructure, and how to maximize utilization with MLOps visibility dashboards.
What you’ll learn:
– How to increase utilization by up to 80% with infrastructure visibility
– How to monitor utilization, capacity and allocation of ML servers across all runs
– MLOps strategies to reduce computational debt in your infrastructure
– Benefits and strategies for managing a Hybrid Cloud environment
– How to use data-driven ML infrastructure and capacity planning
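The monitoring strategies above start from raw utilization samples; as a hedged sketch (device names and numbers invented, and real dashboards poll tools like nvidia-smi rather than hard-code data), averaging per-device samples and flagging idle hardware looks like this:

```python
# Hypothetical per-GPU utilization samples (percent busy), e.g. polled
# periodically from a monitoring agent; names and numbers are made up.
samples = {
    "gpu0": [90, 85, 95, 80],
    "gpu1": [10, 0, 5, 5],
    "gpu2": [0, 0, 0, 0],
}

def utilization_report(samples, idle_threshold=10):
    """Average each device's samples and flag chronically idle devices."""
    return {
        gpu: {"avg_util": sum(vals) / len(vals),
              "idle": sum(vals) / len(vals) < idle_threshold}
        for gpu, vals in samples.items()
    }

report = utilization_report(samples)
cluster_avg = sum(r["avg_util"] for r in report.values()) / len(report)
print(report)
print(f"cluster-wide average utilization: {cluster_avg:.1f}%")
```

Even this toy report surfaces the talk’s core claim: one busy device can hide a cluster-wide average far below capacity.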
When it comes to data science, it is important to have the freedom to choose different approaches, languages, tools, techniques, and deployment environments, because this freedom is what drives innovation and creativity. In this session, you will see how to leverage Python in SAS Model Manager, a software tool for data scientists and IT developers to govern, deploy, control, and monitor an entire open-source model ecosystem. We will showcase PythonZip-ModelManager (pzmm), a module that allows users to import, deploy, score, and monitor already-built Python models in SAS Model Manager directly from Python. With SAS Model Manager, you can then compare and contrast models of any type (including Python, R, and SAS-based models). We will also show how SAS Model Manager includes easily consumable APIs that cover many of the actions a successful open-source model ecosystem needs.
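pzmm’s exact API is left to the session; as a rough standard-library illustration of the packaging step such a module automates, a Python model can be bundled with JSON metadata into a single zip artifact for a registry to ingest (the file names and metadata fields below are assumptions, not the real pzmm spec):

```python
import io
import json
import pickle
import zipfile

def package_model(model, name):
    """Bundle a pickled model plus JSON metadata into an in-memory zip.
    This loosely mirrors the artifact a module like pzmm prepares for a
    model registry; file names here are illustrative assumptions."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr(f"{name}.pickle", pickle.dumps(model))
        zf.writestr("ModelProperties.json",
                    json.dumps({"name": name, "scoreCodeType": "python"}))
    buf.seek(0)
    return buf

# A plain dict stands in for a trained model object.
model = {"type": "logistic_regression", "coef": [0.42, -1.3]}
archive = package_model(model, "credit_risk")
with zipfile.ZipFile(archive) as zf:
    print(zf.namelist())
```

The value of a tool like pzmm is doing this, plus generating score code and registering the artifact, in one call instead of by hand.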
Learn why the truly open-source HPCC Systems platform is better at Big Data and offers an end-to-end solution for developers and data scientists, and how ECL can empower you to build powerful data queries with ease. HPCC Systems, a comprehensive and dedicated data lake platform, makes combining different types of data easier and faster than competing platforms, even data stored in massive, mixed-schema data lakes, and it scales very quickly as your data needs grow. Topics include HPCC architecture, embedded languages and external datastores, the Machine Learning Library, visualization, application security, and more.
Learn about Algorithmia’s machine learning operations and management platform that empowers teams to deploy models, connect to various data sources, automatically scale model inference, and manage the ML lifecycle in a centralized model catalog. We’ll demonstrate the process of deploying a credit risk model for real-time inference, as well as a sentiment analysis ML pipeline on Algorithmia that uses different languages and frameworks. We’ll walk through data science workflows for deploying a model, adding an OCR tool to the pipeline, testing a new version of the model, and calling the model from different languages. We’ll also discuss how Algorithmia handles the underlying MLOps infrastructure and operations related to security, scalability, and governance.
Come learn why thousands of companies use Apache Druid and Imply (powered by Apache Druid) for hot analytics alongside their data warehouses.
Learn from the experts at Imply, the creators of Apache Druid, as they show you how to:
– Graphically load streaming data from Kafka and Spark, and create dimensions and metrics
– Combine streaming and historical data from your data lake or data warehouse
– Get alerts, build and explore real-time dashboards, and see AI-powered explanations
– Perform drag-and-drop visual data exploration with sub-second response times
91% of teams take at least one month to create their first “seed” dataset, and many months to get into beta. Learn the top bottlenecks and blockers that cause this, and how to solve them, reducing the process from months to days.
According to a July 2020 survey, 97% of projects iterate on their datasets and 85% construct more than one dataset, yet only 12% use software to manage this process.
First we will explore the common blockers in creating and updating a single major dataset. Then we will zoom out and look at how many datasets are needed to ship a beta. We will introduce a novel non-blocking way to create datasets in days instead of months, and show how to 10x your development speed by digitizing manual processes and using a novel deep-learning-specific database.
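The talk’s dataset database is its own product, but the non-blocking idea, letting a training job keep reading one dataset version while annotators build the next, can be sketched with a toy versioned store (everything below is an invented illustration, not the product’s design):

```python
import copy

class DatasetStore:
    """Toy versioned dataset store: each commit is an immutable snapshot,
    so readers of an old version are never blocked by new labeling work."""

    def __init__(self):
        self.versions = []

    def commit(self, examples):
        # Deep-copy so later mutations can't corrupt a published snapshot.
        self.versions.append(copy.deepcopy(examples))
        return len(self.versions)  # 1-based version id

    def checkout(self, version):
        return self.versions[version - 1]

store = DatasetStore()
v1 = store.commit([{"img": "cat.jpg", "label": "cat"}])
v2 = store.commit([{"img": "cat.jpg", "label": "cat"},
                   {"img": "dog.jpg", "label": "dog"}])
print(v1, v2, len(store.checkout(1)), len(store.checkout(2)))
```

The survey statistic above (97% of projects iterate on their datasets) is exactly why immutable, addressable versions beat overwriting a single folder of files.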
AI for Finance
AI for Marketing
AI for Healthcare
AI for Energy
AI for Biotech & Pharma
AI for Retail
AI for Climate
AI for Machines
AI Cyber & Fraud
AI for Manufacturing