ODSC Europe 2019

Preliminary Daily Session Schedule

Full schedule launching soon

Schedule Guide for Pass Holders

Europe Talks/Workshops schedule includes Thursday, Nov 21st and Friday, Nov 22nd. It is available to Silver, Gold, Platinum, and Diamond pass holders.

Europe Trainings schedule includes Tuesday, November 19th and Wednesday, November 20th. It is available to Training, Gold (Wed, Nov 20th only), Platinum, and Diamond pass holders.

Speakers and session times are subject to change.

Europe Talks & Workshops
Europe Trainings
Automatic Speech Recognition: a Paradigm Change in Motion

Workshop | Machine Learning | Deep Learning | Intermediate


From its advent more than 40 years ago, robust and high-performing approaches to automatic speech recognition (ASR) have been following a statistical approach based on Bayes decision rule. For decades, state-of-the-art ASR systems were based on standard signal processing for feature extraction, hidden Markov modeling, complex data-driven acoustic and language models, and advanced search concepts based on dynamic programming. This classical approach to automatic speech recognition has not been challenged significantly until recently. Even when artificial neural networks started to considerably boost ASR performance, the general architecture of state-of-the-art ASR systems was not altered considerably…more details
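The Bayes decision rule the abstract refers to picks the word sequence that maximizes the product of the acoustic model and language model probabilities. A minimal sketch of that decision, with entirely made-up log-probability scores (not a real ASR system):

```python
# Toy illustration of the Bayes decision rule in classical ASR: choose the
# hypothesis w maximizing p(x | w) * p(w), i.e. the sum of acoustic-model
# and language-model log-probabilities. All scores below are invented.
def decide(hypotheses, acoustic_logprob, lm_logprob):
    return max(hypotheses, key=lambda w: acoustic_logprob[w] + lm_logprob[w])

hyps = ["recognize speech", "wreck a nice beach"]
acoustic = {"recognize speech": -12.1, "wreck a nice beach": -11.8}
lm = {"recognize speech": -3.2, "wreck a nice beach": -7.9}
print(decide(hyps, acoustic, lm))  # "recognize speech" wins on combined score
```

Real systems search over an enormous hypothesis space with dynamic programming rather than enumerating candidates, but the decision criterion is the same.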

Ralf Schlüter, PhD
Academic Director | RWTH Aachen University
How to Make Machine Learning Fair and Accountable

Workshop | Machine Learning | Beginner-Intermediate


The suitability of machine learning models is traditionally measured by accuracy. Metrics like RMSE, MAPE, AUC, ROC, and Gini largely decide the ‘fate’ of machine learning models. However, if one digs deeper, the ‘fate’ of a machine learning model goes beyond a few accuracy-driven metrics to its capability of being Fair, Accountable, Transparent and Explainable, a.k.a. FATE.
Machine learning, as the name implies, learns whatever it is taught; it is a ramification of what it is fed. It is a fallacy that ML has no perspective: it has the same perspective as the data that was used to train it. In simple words, algorithms can echo prejudices that data explicitly or implicitly holds…more details
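One simple way to make the fairness question concrete is a demographic-parity check: compare the model's positive-decision rate across groups. A toy sketch (the decisions and the 0.8 threshold are illustrative; the "four-fifths rule" is a common rule of thumb, not a universal standard):

```python
# Toy fairness check (demographic parity): compare the positive-prediction
# rate across two groups. The decision lists below are made up.
def positive_rate(predictions):
    return sum(predictions) / len(predictions)

group_a = [1, 0, 1, 1, 0, 1, 1, 0]  # model decisions for group A
group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # model decisions for group B

ratio = positive_rate(group_b) / positive_rate(group_a)
print(f"disparate impact ratio: {ratio:.2f}")
# A common rule of thumb flags ratios below 0.8 (the "four-fifths rule").
print("flagged" if ratio < 0.8 else "ok")
```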

Sray Agarwal
Specialist - Data Science | Publicis Sapient
Not Always a Black Box: Explainability Applications for a Real Estate Problem

Talk | Machine Learning | Beginner-Intermediate


Many machine learning models are opaque in the way they make a prediction. Even with quite common ensemble models such as random forest and gradient boosting, it is difficult to explain why the model made a particular decision. For certain business contexts, this poses a challenge. We faced such a hurdle when working on various machine learning models in the real estate domain. Among the things we were interested in predicting were the rent and price of a property, and how long it would remain without a tenant if it were to become vacant. The business owners who were the users of the models were not satisfied with the common feature importance plot obtained with tree-based models.
In the talk I plan to briefly go over the business problem itself and the approach we took to solve it, explain what the Shapley value is, and show how it can be used in many applications…more details
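The Shapley value attributes a prediction to each feature by averaging that feature's marginal contribution over all coalitions of the other features. A minimal sketch of the exact definition on a toy additive model (real workflows would use a library such as SHAP; the rent numbers are invented):

```python
from itertools import combinations
from math import factorial

# Exact Shapley values for a toy 3-feature model, computed by enumerating
# all coalitions. This illustrates the definition only; libraries like SHAP
# approximate this efficiently for real models.
def shapley(features, value):
    n = len(features)
    phi = {}
    for i in features:
        others = [f for f in features if f != i]
        total = 0.0
        for k in range(n):
            for s in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(set(s) | {i}) - value(set(s)))
        phi[i] = total
    return phi

# Toy "model output given a feature coalition": base rent plus additive
# per-feature effects (numbers are invented).
effects = {"size": 400.0, "location": 250.0, "age": -50.0}
def v(coalition):
    return 1000.0 + sum(effects[f] for f in coalition)

print(shapley(list(effects), v))
```

For a purely additive model like this one, each feature's Shapley value equals its additive effect, which makes the toy easy to verify; the interesting cases in practice are models with feature interactions.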

Violeta Misheva, PhD
Data Scientist | ABN AMRO Bank N.V.
Integrating Real-Time Video Analysis with Clinical Data to Enable Digital Diagnostics

Tutorial | Machine Learning | Research Frontiers | Intermediate


The rapid digitization of healthcare has accelerated the adoption of artificial intelligence for clinical applications. A major opportunity for predictive clinical analytics is the ability to provide faster diagnoses and treatment plans tailored to individuals. Applications have been developed for multiple care settings, with many novel point-of-care diagnostics that can screen for diseases ranging from malaria to skin cancer. In this talk, we will demonstrate how to apply machine learning algorithms to identify regions of interest for detection of the pupil for point-of-care drug screening. We will highlight state-of-the-art technology to support real-time image analysis, approaches for the use of real-world clinical and patient-generated data, and best practices for translating novel AI algorithms from controlled research and development environments into real-world, clinical settings…more details

Wade Schulz, MD, PhD
Assistant Professor and Director of Computational Health | Yale School of Medicine
From Zero to Airflow: Bootstrapping into a Best-in-Class Risk Analytics Platform

Talk | Machine Learning | Open Source Data Science | Intermediate


To effectively compete in the fintech space, decisions have to be lightning fast and accurate. There is an inherent tradeoff between the two, and pushing the needle on both requires a union of excellent models and top-notch infrastructure. At BlueVine, we chose to leverage Apache Airflow as the engine for the wide array of models, heuristics, and supplemental processes that form our analytics ecosystem.

In this presentation we will describe the entire process end-to-end as a case study of how BlueVine implemented Airflow. We will cover what was planned, which unexpected problems were encountered, and what effect the project had on the relationships between the various data teams. We will also detail the mechanism in place today, note some real-world insights about the strengths and limitations of Airflow, and track several key metrics that were affected by the switch…more details
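The core abstraction Airflow provides is a DAG of tasks executed in dependency order. A minimal pure-Python sketch of that idea, with an invented risk-pipeline shape (a real deployment would define these as Airflow DAGs and operators, not stdlib calls):

```python
# Sketch of the core idea behind an Airflow-style pipeline: tasks with
# declared upstream dependencies, run in topological order. The task names
# and dependency structure below are invented for illustration.
from graphlib import TopologicalSorter

deps = {
    "extract": set(),                          # pull applicant data
    "score_model": {"extract"},                # run risk models
    "heuristics": {"extract"},                 # run rule-based checks
    "decision": {"score_model", "heuristics"}, # combine into a decision
}

order = list(TopologicalSorter(deps).static_order())
print(order)  # 'extract' first, 'decision' last
```

Airflow adds the parts this sketch omits and the talk actually focuses on: scheduling, retries, backfills, monitoring, and distributed execution.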

Ido Shlomo
Data Science Manager | BlueVine
Augmented Programming

Talk | DevOps & Management | Deep Learning | Intermediate-Advanced

In this talk, Gideon Mann, Head of Data Science in the Office of the CTO at Bloomberg, will look at some areas of the Code-Build-Test lifecycle where machine learning research has been applied, at Bloomberg and elsewhere. In the development of software code, potential machine learning applications include neural program synthesis (where code is generated automatically) or neural decomposition (where compiled code is reverse-engineered and re-written automatically in another programming language). During the build process, machine learning can be used to automatically optimize code or to perform fuzz testing, an automated testing approach to identify program exceptions like crashes or memory leaks. And finally, once code has been deployed, machine learning can be used for automated trace debugging and/or configuration management…more details

Gideon Mann, PhD
Head of Data Science | Bloomberg, LP
AI for Market Intelligence: Challenges and Opportunities

Talk | AI in Business | Intermediate


Market intelligence is the set of processes designed to provide insights into the dynamics of markets, at the micro and macro level, to investors and analysts. The processes involve gathering data on companies’ ecosystems (corporate, financial, and operational data) from a variety of formats and sources, processing the data for quality and accuracy, extracting the needed information and signals, and presenting those insights in a timely and relevant manner to the user.

The increasing use of AI and machine learning, with its successes in automating data acquisition and processing pipelines and providing augmented intelligence as a decisional aid to analysts and investors, hides the tremendous challenges faced by data scientists in creating data products for accurate market intelligence. In this talk, we provide an overview of the challenges to overcome in leveraging AI for market intelligence and highlight the opportunities that AI offers in the data and connected economy for generating improved market insights.

Dr. Alain Biem
Head of Data Science | S&P Global Market Intelligence
Taking Recommendation Systems to The Masses

Talk | Deep Learning | AI for Engineers | Intermediate-Advanced


Motivated by our extensive experience in productization of recommendation systems in a variety of real-world application domains, in this talk, we will review complete pipelines of building recommendation systems. We will start by introducing some standard factorization machine algorithms. Thereafter, we will address some of the latest advances in deep learning algorithms in the area, with an emphasis on knowledge graph models. Then we will analyse different methodologies for computing these algorithms at scale, reviewing some available techniques for hyperparameter tuning. Finally, we will discuss how these systems can be brought successfully into production…more details
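The factorization idea underlying these algorithms can be shown in miniature: learn low-dimensional user and item vectors whose dot products reconstruct observed ratings. A toy gradient-descent sketch in NumPy (simpler than the factorization machines the workshop covers, and all numbers are invented):

```python
import numpy as np

# Tiny matrix-factorization sketch on a toy user-item ratings matrix.
# Same core idea as factorization-machine recommenders: approximate
# ratings by dot products of learned latent vectors.
rng = np.random.default_rng(0)
R = np.array([[5., 3., 0.],
              [4., 0., 1.],
              [1., 1., 5.]])   # rows: users, cols: items, 0 = unobserved
mask = R > 0

k, lr, reg = 2, 0.01, 0.02     # latent dim, learning rate, L2 strength
U = rng.normal(scale=0.1, size=(3, k))   # user factors
V = rng.normal(scale=0.1, size=(3, k))   # item factors

for _ in range(2000):
    err = (R - U @ V.T) * mask           # error on observed entries only
    U += lr * (err @ V - reg * U)
    V += lr * (err.T @ U - reg * V)

rmse = np.sqrt((err ** 2).sum() / mask.sum())
print(f"training RMSE on observed entries: {rmse:.3f}")
```

Unobserved entries of `U @ V.T` then serve as predicted ratings; the scaling, tuning, and productization of this idea are what the session addresses.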

Miguel Gonzalez-Fierro, PhD
Sr. Data Scientist | Microsoft
Andreas Argyriou, PhD
Sr. Data Scientist | Microsoft
Ethical AI: A Practical Guideline For Data Scientists

Talk | Deep Learning | Machine Learning | Intermediate


Most of today’s Ethical AI debate at events revolves around how to do business in an ethical manner (company values and ethics boards), how to build teams that enforce ethical practices (e.g. diversity), and how to work in cross-domain settings (legal/tech/HR/etc.). Although all of these are very important, the core factor of Ethical AI is not getting nearly enough attention.

Applying ethical AI in day-to-day operations is a highly technical undertaking. Some of the main aspects of Ethical AI (algorithmic fairness and bias, interpretability, robustness, privacy by design) have to be taken into account from the very beginning of the data science process, e.g. when defining a classifier’s loss function…more details
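One concrete form of "ethics in the loss function" is adding a fairness penalty to a standard objective. A sketch with synthetic data: logistic loss plus a demographic-parity-style term penalizing the gap in mean predicted score between two groups (the data, weights, and penalty strength are all invented):

```python
import numpy as np

# Sketch of a fairness-regularized classifier objective: binary cross-entropy
# plus a penalty on the gap in average predicted probability between groups.
# All data below is synthetic; this is an illustration, not a recipe.
def loss(w, X, y, group, lam=1.0):
    p = 1 / (1 + np.exp(-X @ w))                     # predicted probabilities
    bce = -np.mean(y * np.log(p + 1e-9)
                   + (1 - y) * np.log(1 - p + 1e-9)) # standard logistic loss
    gap = abs(p[group == 0].mean() - p[group == 1].mean())
    return bce + lam * gap                           # fairness penalty term

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = (X[:, 0] > 0).astype(float)
group = rng.integers(0, 2, size=100)

w = np.array([1.0, 0.0, 0.0])
print(f"penalized loss: {loss(w, X, y, group):.3f}")
```

Minimizing this objective trades accuracy against the parity gap via `lam`, which is exactly the kind of design decision the abstract argues must be made at the start of the data science process.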

Vincent Spruyt
Chief Innovation Officer (former chief data scientist) | Sentiance
Deep Learning for Self Driving

Talk | AI in Business | All Levels


At the Uber ATG R&D centre, we are working on advanced, state-of-the-art models for solving a large range of problems in self-driving: perception and prediction, motion planning, mapping and localization, sensor simulation, and more. All of that work is publicly available through academic conferences and venues. In this talk I will cover some exciting recent advances and also discuss the path to production: how we go from research prototypes to systems deployed on the vehicle.

Inmar Givoni, PhD
Senior Autonomy Engineering Manager | Uber
Choosing The Right Deep Learning Framework: A Deep Learning Approach

Workshop | Deep Learning | Open Source Data Science | Beginner-Intermediate


As a developer advocate for IBM, I use machine learning in order to understand machine learning developers. I spend time building models to identify the problems they solve and the tools they use to do so. This talk will present how deep learning can be used to predict the deep learning framework that most closely resembles the style in which a machine learning developer programs. Many of these frameworks are very new (for instance, TensorFlow is only now putting out its second major release). While the field of deep learning and the frameworks enabling it continue to change rapidly, the community around a particular framework tends to be much more consistent. The talk will end by letting developers test the models themselves: they can upload examples of their work via Jupyter notebooks and predict the deep learning community and framework that is right for them.

Nick Acosta
Developer Advocate | IBM
Machine Learning Interpretability Toolkit

Workshop | Machine Learning | Data Visualization | Intermediate


With the recent popularity of machine learning algorithms such as neural networks and ensemble methods, machine learning models have become more of a ‘black box’, harder to understand and interpret. To gain the end user’s trust, there is a strong need for tools and methodologies that help users understand and explain how predictions are made. Data scientists also need insights into how a model can be improved. Much research has gone into model interpretability, and recently several open-source tools, including LIME, SHAP, and GAMs, have been published on GitHub. In this talk, we present Microsoft’s brand-new Machine Learning Interpretability toolkit, which incorporates cutting-edge technologies developed by Microsoft and leverages proven third-party libraries. It creates a common API and data structure across the integrated libraries and integrates with Azure Machine Learning services. Using this toolkit, data scientists can explain machine learning models with state-of-the-art technologies in an easy-to-use and scalable fashion, at both training and inference time.
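A flavor of what such tools compute, without any toolkit dependency: model-agnostic permutation importance, which shuffles one feature at a time and measures how much the model's error grows. This is a generic illustration of the technique family, not the toolkit's own API, and the toy data and model are invented:

```python
import numpy as np

# Model-agnostic permutation importance: shuffle each feature in turn and
# measure the increase in mean-squared error. Works for any black-box model.
def permutation_importance(model, X, y, rng):
    base = np.mean((model(X) - y) ** 2)
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        rng.shuffle(Xp[:, j])                       # destroy feature j's signal
        scores.append(np.mean((model(Xp) - y) ** 2) - base)
    return np.array(scores)

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))
y = 3 * X[:, 0] + 0.1 * X[:, 1]                      # feature 2 is irrelevant
model = lambda X: 3 * X[:, 0] + 0.1 * X[:, 1]        # a model that fits the toy data

imp = permutation_importance(model, X, y, rng)
print(np.round(imp, 3))  # feature 0 dominates; feature 2 contributes nothing
```

Methods like LIME and SHAP go further by attributing individual predictions rather than global error, which is what the toolkit exposes through a common API.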

Mehrnoosh Sameki, PhD
Technical Program Manager | Microsoft
Generative Adversarial Networks for Finance

Talk | Deep Learning | Quant Finance | Intermediate-Advanced


The Gaussian assumption in the Black-Scholes formula for option pricing has proven its limits. Although it is a good approximation, market returns do not adhere exactly to a Gaussian curve. This matters all the more because pricing options correctly is a very competitive task. Today, Generative Adversarial Networks (GANs) are the new gold standard for simulation. They have worked wonders in image generation, but can they be applied to option pricing? Here is the story of how two data scientists (including a former trader) deployed a GAN for option pricing, in real time, in 10 days.
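For reference, the closed-form Black-Scholes call price whose Gaussian return assumption the talk questions can be written in a few lines of standard-library Python (the input values at the bottom are illustrative):

```python
from math import log, sqrt, exp, erf

# Black-Scholes price of a European call option: the closed-form benchmark
# whose Gaussian assumption GAN-based simulation tries to relax.
def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """S: spot, K: strike, T: years to expiry, r: risk-free rate, sigma: vol."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Illustrative at-the-money example: spot 100, strike 100, 1 year, 5% rate, 20% vol.
print(f"{bs_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2):.2f}")
```

A GAN-based approach replaces the lognormal price dynamics implicit in this formula with simulated return paths learned from market data, while this closed form remains the sanity check.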