
Abstract: This talk will cover the basics of facial recognition and the importance of having diverse datasets when building out a model. We'll explore racial bias in datasets using real-world examples and cover a use case for developing an OpenFace model for a celebrity look-alike app.
From police surveillance software scanning for criminals to retail stores that want to deliver a personalized shopping experience, facial recognition software is increasingly being used across industries. However, as this software sees broader use, it's crucial that diverse datasets are used to curb the racial bias that occurs in facial recognition.
Using OpenFace as an example face recognition model, we'll walk through the basics of how facial recognition works and why diverse datasets matter when building out a model. We'll then explore racial bias in datasets through real-world examples and a concrete use case: developing an OpenFace model for a celebrity look-alike app, and how it can fail with homogeneous datasets.
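As a rough illustration of the kind of pipeline the talk describes, the sketch below uses the OpenFace Python bindings to embed aligned faces as 128-dimensional vectors and match a query photo to its nearest celebrity by Euclidean distance. This is a minimal sketch, not the speaker's actual implementation: the model file paths and the celebrity_photos gallery are placeholders, and it assumes OpenFace is installed with its pretrained dlib and Torch models downloaded.

```python
# Minimal sketch of a celebrity look-alike matcher built on OpenFace.
# The model paths and the celebrity photo gallery are placeholders.
import cv2
import numpy as np
import openface

DLIB_PREDICTOR = "models/dlib/shape_predictor_68_face_landmarks.dat"  # assumed path
TORCH_MODEL = "models/openface/nn4.small2.v1.t7"                      # assumed path
IMG_DIM = 96  # input size expected by the nn4.small2 network

align = openface.AlignDlib(DLIB_PREDICTOR)
net = openface.TorchNeuralNet(TORCH_MODEL, IMG_DIM)

def embed(image_path):
    """Detect, align, and embed the largest face as a 128-d vector."""
    bgr = cv2.imread(image_path)
    if bgr is None:
        raise IOError("Unable to read %s" % image_path)
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
    box = align.getLargestFaceBoundingBox(rgb)
    if box is None:
        raise ValueError("No face found in %s" % image_path)
    face = align.align(IMG_DIM, rgb, box,
                       landmarkIndices=openface.AlignDlib.OUTER_EYES_AND_NOSE)
    return net.forward(face)

# Hypothetical gallery: celebrity name -> reference photo path.
celebrity_photos = {"Celebrity A": "gallery/a.jpg", "Celebrity B": "gallery/b.jpg"}
gallery = {name: embed(path) for name, path in celebrity_photos.items()}

def best_match(query_path):
    """Return the (name, embedding) pair closest to the query in L2 distance."""
    query = embed(query_path)
    return min(gallery.items(), key=lambda kv: np.linalg.norm(query - kv[1]))

name, _ = best_match("query.jpg")
print("Closest look-alike:", name)
```

Note that the matcher is only as good as its gallery: if the gallery (or the data the embedding network was trained on) is homogeneous, queries from underrepresented groups will still return a nearest neighbor, just a poor and potentially embarrassing one, which is exactly the failure mode the talk examines.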
Bio: Stephanie Kim is a developer advocate at Algorithmia, where she enjoys writing accessible documentation, tutorials, and scripts to help data scientists learn how to productionize their models quickly and painlessly. Stephanie is the founder of Seattle PyLadies and a co-organizer of the Seattle Building Intelligent Applications Meetup. She enjoys machine learning projects, particularly ones where she gets to dive into unstructured text data to discover friction points in the UI or find out what users are thinking with natural language processing techniques. She loves to learn, write, and talk about data science, machine learning, and deep learning, especially as it relates to racial bias in AI, and to write helpful, fun articles that make machine learning accessible to anyone.

Stephanie Kim
Title: Developer Evangelist/Software Engineer at Algorithmia
