Adversarial Robustness: How to Make Artificial Intelligence Models Attack-proof!


We can easily trick a classifier into making embarrassingly false predictions. When this is done systematically and intentionally, it is called an adversarial attack; when the attacker perturbs inputs at inference time to cause misclassification, it is specifically called an evasion attack. In this session, we will examine an evasion use case and survey other forms of attack. We will then explain two defense methods, spatial smoothing preprocessing and adversarial training. Lastly, we will demonstrate one robustness evaluation method and one certification method to verify that a model can withstand such attacks.
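As a taste of the evasion topic, here is a minimal sketch of an FGSM-style attack against a toy linear classifier. All names and numbers are hypothetical illustrations, not the session's actual code; real attacks target deep networks and are typically run through a library such as the Adversarial Robustness Toolbox (ART).

```python
# Hypothetical sketch: FGSM-style evasion attack on a toy linear classifier.
import numpy as np

# Toy binary classifier: predict 1 if w.x + b > 0, else 0.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return 1 if x @ w + b > 0 else 0

# An input the model classifies correctly as class 1.
x = np.array([2.0, 0.3, 1.0])

# FGSM idea: nudge every feature by eps in the direction that increases
# the loss. For a linear model and the positive class, that direction is
# -sign(w), so a small, bounded perturbation can flip the prediction.
eps = 1.2
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # the prediction flips from 1 to 0
```

Even though each feature moves by at most `eps`, the perturbation is aligned with the model's weights, which is exactly why evasion attacks succeed with changes that look minor to a human.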

Session Outline
- How ML classifiers can be tricked
- The ways in which ML models are vulnerable
- How to defend ML models from evasion attacks
- How to certify ML model robustness
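Spatial smoothing, one of the defenses covered in the session, can be sketched as a simple preprocessing step: a median filter applied to the input tends to wash out the sparse, high-frequency noise that many evasion attacks rely on. The naive filter below is a hypothetical illustration, not the session's implementation; in practice you would use an optimized routine such as `scipy.ndimage.median_filter` or ART's `SpatialSmoothing` preprocessor.

```python
# Hypothetical sketch: spatial smoothing (median filter) as a
# preprocessing defense against sparse adversarial perturbations.
import numpy as np

def median_smooth(img, k=3):
    """Naive k x k median filter; border pixels are left unchanged."""
    out = img.copy()
    r = k // 2
    for i in range(r, img.shape[0] - r):
        for j in range(r, img.shape[1] - r):
            out[i, j] = np.median(img[i - r:i + r + 1, j - r:j + r + 1])
    return out

img = np.zeros((8, 8))
img[3, 3] = 1.0            # a single-pixel adversarial "speckle"
smoothed = median_smooth(img)
print(smoothed[3, 3])      # the speckle is removed by the median filter
```

Because the classifier only ever sees the smoothed input, an attacker must craft larger, lower-frequency perturbations to survive the filter, which raises the cost of the attack.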

Background Knowledge
Intended for ML Engineers, Data Scientists, MLOps Engineers, and SecOps Engineers, but any somewhat experienced programmer with an interest in AI/ML can attend.


Serg Masís is a Data Scientist in agriculture with a lengthy background in entrepreneurship and web/app development, and the author of the bestselling book "Interpretable Machine Learning with Python". He is passionate about machine learning interpretability, responsible AI, behavioral economics, and causal inference.

Open Data Science




One Broadway
Cambridge, MA 02142
