
Abstract: With the wide adoption of generative artificial intelligence (AI), ensuring the robustness of machine learning (ML) models is more crucial than ever. One of the most concerning security threats to ML is adversarial attacks, a technique that exploits model vulnerabilities to cause incorrect outputs. The perturbations these attacks introduce are imperceptible to humans yet sufficient to make models misclassify data, potentially harming end users. Therefore, adversarial training is important to include in the ML lifecycle as you build your model and prepare it for production use. Join this talk to learn how the symbiosis between ML security open source projects like the Adversarial Robustness Toolbox and the ML operations (MLOps) project Kubeflow can streamline your machine learning workflow and improve your model's robustness and security. With MLOps assisting in both generating adversarial examples and defending against them, you can accelerate the development of secure machine learning models.
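As a taste of the attack-then-defend loop the talk describes, here is a minimal sketch using the Adversarial Robustness Toolbox; the toy model, random data, and hyperparameters are illustrative assumptions, not material from the talk:

```python
# Minimal sketch of the ART workflow: craft adversarial examples with FGSM,
# then harden the model with adversarial training. The simple model and
# random data below are placeholders, not the talk's actual pipeline.
import numpy as np
import torch
import torch.nn as nn

from art.attacks.evasion import FastGradientMethod
from art.defences.trainer import AdversarialTrainer
from art.estimators.classification import PyTorchClassifier

# Toy model standing in for whatever model your pipeline produces.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam(model.parameters(), lr=1e-3),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Placeholder data; in practice this would come from your training pipeline.
x_train = np.random.rand(64, 1, 28, 28).astype(np.float32)
y_train = np.eye(10)[np.random.randint(0, 10, 64)].astype(np.float32)

# Attack: small, human-imperceptible perturbations that flip predictions.
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x_train)

# Defence: mix adversarial examples into training to improve robustness.
trainer = AdversarialTrainer(classifier, attacks=attack, ratio=0.5)
trainer.fit(x_train, y_train, nb_epochs=1, batch_size=32)
```

FastGradientMethod is only one of the evasion attacks ART ships; swapping in another, such as ProjectedGradientDescent, changes only the attack construction line while the rest of the loop stays the same.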
Background Knowledge:
Tools that will be used: https://github.com/kubeflow
https://github.com/Trusted-AI/adversarial-robustness-toolbox
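To illustrate how the two projects meet, here is a hedged sketch of packaging a robustness step as a Kubeflow Pipelines (KFP v2) component; the component name, base image, and placeholder metric are assumptions for illustration, not the talk's actual pipeline:

```python
# Illustrative sketch: an adversarial-training step wrapped as a Kubeflow
# Pipelines v2 component, so robustness checks run inside the MLOps workflow.
from kfp import dsl


@dsl.component(
    base_image="python:3.11",  # assumed image; any Python-capable image works
    packages_to_install=["adversarial-robustness-toolbox", "torch"],
)
def adversarial_training_step(epsilon: float) -> float:
    """Run ART adversarial training and return robust accuracy."""
    # The body would hold the ART attack/defence code sketched above.
    robust_accuracy = 0.0  # placeholder metric, computed on adversarial inputs
    return robust_accuracy


@dsl.pipeline(name="robust-training-pipeline")
def robust_training_pipeline(epsilon: float = 0.1):
    # Chain data prep, training, attack generation, and evaluation here.
    adversarial_training_step(epsilon=epsilon)
```

Compiling this with `kfp.compiler.Compiler().compile(robust_training_pipeline, "pipeline.yaml")` yields a spec Kubeflow can run repeatedly, making adversarial evaluation a standing part of the ML lifecycle rather than a one-off check.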
Bio: Anna Jung is a Senior ML Open Source Engineer at VMware, leading the open source team within VMware AI Labs. She currently contributes to various upstream ML-related open source projects, focusing on their overall health, adoption, and innovation. She believes in the importance of giving back to the community and is passionate about increasing diversity in open source. When away from the keyboard, Anna is often at film festivals supporting independent filmmakers.