Abstract: The development of advanced deep neural language models has dramatically improved performance on a wide range of natural language processing (NLP) tasks. However, these models are increasingly intricate and opaque, making them particularly vulnerable to failure on input data that differs from the data they were trained on. This brittleness of neural language models presents a significant challenge that grows as their complexity increases. Unless it is addressed, progress in NLP could be hindered and the potential benefits of these models may not be fully realized.
This workshop will equip participants with the skills and knowledge to conduct adversarial evaluation of NLP systems. Through hands-on exercises and examples, we will discuss how to identify and address system weaknesses, and explore how this approach can improve accuracy, reduce risk, and uncover potential blind spots. Participants will gain a deeper understanding of how to use adversarial evaluation to detect and prevent errors in their NLP systems.
Lesson 1: Introduction to Adversarial NLP Evaluation (30 minutes)
The objective of this lesson is to understand adversarial NLP evaluation and why it is needed. After completing this lesson, students will be able to explain the concept of adversarial NLP evaluation, articulate the need for such evaluations, and identify different approaches to performing them.
Lesson 2: Constructing Adversarial Examples for NLP Tasks (30 minutes)
The goal of this lesson is to teach students how to construct adversarial examples for NLP tasks such as relation extraction and natural language inference. Students will learn techniques for generating adversarial data, such as leveraging linguistic principles, semantic modeling, and natural language generation. These techniques allow students to create perturbations of existing data that can be used to stress-test NLP models and make them more robust to real-world variations.
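To make the idea of perturbation-based adversarial data concrete, here is a minimal sketch of one of the simplest techniques: character-level typo perturbations that preserve a sentence's meaning for a human reader while shifting its surface form. The function names and parameters are illustrative, not from any particular library; real generation techniques (synonym substitution, paraphrasing, templated generation) are considerably richer.

```python
import random

def swap_adjacent_chars(word, rng):
    """Introduce a realistic typo by swapping two adjacent inner characters."""
    if len(word) < 4:
        return word  # too short to perturb without destroying it
    i = rng.randrange(1, len(word) - 2)  # keep first and last characters intact
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

def perturb_sentence(sentence, rate=0.3, seed=0):
    """Create an adversarial variant by perturbing a fraction of the words.

    `rate` is the probability that any given word receives a typo; a fixed
    `seed` keeps the perturbation reproducible across evaluation runs.
    """
    rng = random.Random(seed)
    return " ".join(
        swap_adjacent_chars(w, rng) if rng.random() < rate else w
        for w in sentence.split()
    )
```

For example, `perturb_sentence("The company acquired the startup last year", rate=1.0)` yields a sentence every human reads the same way, but whose token forms no longer match what the model saw in training, which is exactly the mismatch adversarial evaluation probes.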
Lesson 3: Using Adversarial Examples to Evaluate NLP Robustness (30 minutes)
The goal of this lesson is to explore how to use adversarial data to evaluate NLP systems. We will look at metrics and methods for measuring robustness and fragility, as well as ways to identify systematic errors and blind spots. Continuing with relation extraction and natural language inference as case studies, we will also discuss how to interpret evaluation results and use them to inform further development of NLP systems.
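One simple robustness metric of the kind discussed here is the gap between a model's accuracy on clean data and its accuracy on adversarially perturbed counterparts of the same examples. The sketch below assumes a model exposed as a plain `text -> label` callable and a dataset of `(text, label)` pairs; both are illustrative conventions, not a prescribed interface.

```python
def accuracy(model, examples):
    """Fraction of (text, label) examples the model labels correctly."""
    correct = sum(1 for text, label in examples if model(text) == label)
    return correct / len(examples)

def robustness_report(model, clean, adversarial):
    """Compare clean vs. adversarial accuracy.

    The drop in accuracy ("robustness gap") is a coarse fragility score:
    a large gap signals the model relies on surface patterns that the
    perturbations disrupt.
    """
    clean_acc = accuracy(model, clean)
    adv_acc = accuracy(model, adversarial)
    return {
        "clean_accuracy": clean_acc,
        "adversarial_accuracy": adv_acc,
        "robustness_gap": clean_acc - adv_acc,
    }
```

In practice one would aggregate this per perturbation type (typos, paraphrases, negation insertion, entity swaps) rather than as a single number, since a model can be robust to one class of perturbation and brittle to another; the per-type breakdown is what exposes systematic blind spots.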
Prerequisites:
* Familiarity with Natural Language Processing (NLP) systems and algorithms.
* Ability to use Python and Jupyter Notebooks.
Bio: Panos Alexopoulos has been working since 2006 at the intersection of data, semantics, and software, building intelligent systems that deliver value to business and society. Born and raised in Athens, Greece, he currently works as Head of Ontology at Textkernel, in Amsterdam, Netherlands, where he leads a team of data professionals in developing and delivering a large cross-lingual Knowledge Graph in the HR and recruitment domain. Panos holds a PhD in Knowledge Engineering and Management from the National Technical University of Athens, and has published more than 60 papers in international conferences, journals, and books. He is the author of the book “Semantic Modeling for Data – Avoiding Pitfalls and Breaking Dilemmas” (O’Reilly, 2020), and a regular speaker and trainer in both academic and industry venues.