Practical Adversarial Learning: How to Evaluate, Test, and Build Better Models

Abstract: 

Machine learning has rapidly evolved into an industry-wide toolkit for solving a variety of automated tasks by extracting patterns from data. However, many pitfalls remain that can leave a learned model vulnerable to both mistakes and malfeasance, which adversaries can exploit to craft attacks. While the craft of training production-grade models from large datasets has largely been solved, there is still little consistency in how we validate the quality of these models and check them for vulnerabilities. In this training, we will introduce you to techniques for constructing adversarial examples against a model, tools that can find potential vulnerabilities in models, and training procedures that can produce more robust models.

Session Outline:

Lesson 1: Model Analysis

Familiarize yourself with techniques and toolkits that can be used to validate models. In this lesson, you will use state-of-the-art model validation tools to find both benign and adversarial vulnerabilities in a model; a minimal sketch of one such check follows below.
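As a flavor of the kind of benign-robustness check these tools automate, here is a minimal numpy sketch that measures how much a model's accuracy drops under small random input noise. The model object, noise scale, and metric are illustrative assumptions (any scikit-learn-style predict method would do), not the specific tooling used in the session.

# Minimal sketch of a benign-robustness check. Assumes a hypothetical
# `model` with a scikit-learn-style predict(X) method; the noise scale
# and trial count are illustrative placeholders.
import numpy as np

def noise_robustness(model, X, y, scale=0.05, trials=10, seed=0):
    """Compare clean accuracy against accuracy under small Gaussian input noise."""
    rng = np.random.default_rng(seed)
    clean_acc = float(np.mean(model.predict(X) == y))
    noisy_accs = []
    for _ in range(trials):
        X_noisy = X + rng.normal(0.0, scale, size=X.shape)
        noisy_accs.append(np.mean(model.predict(X_noisy) == y))
    return clean_acc, float(np.mean(noisy_accs))

A large gap between the two returned accuracies is a first hint that the model may also be fragile against deliberately crafted perturbations.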

Lesson 2: Constructing Adversarial Examples

Learn common techniques for constructing adversarial examples that expose your model's vulnerabilities to potential adversaries. We will cover different attack scenarios as well as a suite of methods that can be used to produce attacks.
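One classic method in this family is the fast gradient sign method (FGSM). The sketch below implements it in plain numpy against a binary logistic-regression model so the gradient can be written by hand; the weights w, bias b, and epsilon are placeholders, and real attacks would typically rely on an autodiff framework rather than this hand-derived gradient.

# Minimal numpy sketch of FGSM against a binary logistic-regression model.
# `w`, `b`, and `epsilon` are hypothetical placeholders for illustration.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_logistic(x, y, w, b, epsilon=0.1):
    """Perturb x in the direction that increases the cross-entropy loss on label y."""
    p = sigmoid(np.dot(w, x) + b)         # predicted probability of class 1
    grad_x = (p - y) * w                   # d(cross-entropy)/dx for logistic regression
    return x + epsilon * np.sign(grad_x)   # FGSM step: epsilon times the sign of the gradient

The same one-step recipe (take the sign of the input gradient of the loss and step by epsilon) carries over to neural networks once the gradient is obtained via backpropagation.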

Lesson 3: Adversarially Resilient Models and Detecting Attacks

Now that we’ve seen how adversarial examples can be crafted, we’ll learn how to build models that are resilient to these attacks and how to monitor for and detect possible attacks. With this final lesson, you’ll have the tools needed to build and protect models and, combined with the previous lessons, you can be confident in your model.
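A common defense along these lines is adversarial training: at each step, craft adversarial versions of the training points under the current model and fit on the mix of clean and adversarial data. The sketch below does this for the same logistic model, reusing the hypothetical fgsm_logistic helper from the previous lesson; the learning rate, epoch count, and epsilon are illustrative assumptions only.

# Minimal sketch of adversarial training for a logistic-regression model.
# Assumes the hypothetical fgsm_logistic helper from the previous sketch
# is in scope; hyperparameters are illustrative placeholders.
import numpy as np

def adversarial_train(X, y, epochs=100, lr=0.1, epsilon=0.1, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, 0.01, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        # Craft adversarial versions of each training point under the current model.
        X_adv = np.array([fgsm_logistic(x, yi, w, b, epsilon) for x, yi in zip(X, y)])
        X_mix = np.vstack([X, X_adv])            # train on clean plus adversarial examples
        y_mix = np.concatenate([y, y])
        p = 1.0 / (1.0 + np.exp(-(X_mix @ w + b)))
        w -= lr * X_mix.T @ (p - y_mix) / len(y_mix)   # gradient of mean cross-entropy w.r.t. w
        b -= lr * float(np.mean(p - y_mix))            # gradient w.r.t. b
    return w, b

In production, the same idea is applied with stronger iterative attacks and is typically paired with monitoring that flags inputs whose statistics look adversarial.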

Background Knowledge:

* Participants should bring a laptop
* We'll be using Python 3.9+
* Proficiency with numpy/pandas is strongly encouraged

Bio: 

Dr. Blaine Nelson earned his B.S. (University of South Carolina), M.S., and Ph.D. (UC Berkeley) degrees in Computer Science. He was a Humboldt Postdoctoral Research Fellow at the University of Tübingen (2011-13) and a Postdoctoral Researcher at the University of Potsdam (2013-14) in Germany. As a graduate student and postdoc, Dr. Nelson co-established the foundations of adversarial machine learning. He has twice co-chaired the ACM CCS workshop on Artificial Intelligence & Security, and co-coordinated the Dagstuhl Perspectives Workshop on Machine Learning Methods for Computer Security (2012).

Following his postdoctoral work, Dr. Nelson worked as a software engineer in Google's fraud detection group (2014-2016), where he built models and designed infrastructure for large-scale machine learning. He then became a senior software engineer on Google's counter-abuse technology team (2016-2021), where he designed and built a large-scale machine learning workflow system. Currently, Dr. Nelson is a principal machine learning engineer at Robust Intelligence, where he works in a multi-faceted role building infrastructure for testing the reliability and security of machine-learned models by finding potential flaws or vulnerabilities in their behavior.

