Abstract: While much work has been done on defining the risks, goals, and policies for Responsible AI, less is known about what you can apply today to build safe, fair, and reliable models. This session introduces open-source tools, with examples of using them in real-world projects, to address four common challenges.
The first is robustness - testing & improving a model's ability to handle accidental or intentional minor changes in input, which can uncover model fragility and failure points. The second is the detection & fixing of labeling errors, which set an upper limit on achievable accuracy and exist in most widely used datasets. The third is bias - testing that a model performs equally across gender, age, race, ethnicity, or other critical groups. The fourth is data leakage, in particular leakage caused by including personally identifiable information in training data.
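To make the first challenge concrete, here is a minimal, self-contained sketch of a robustness test in the spirit described above: perturb each input with a small accidental-style change (a typo) and measure how often the model's prediction survives. The `toy_model`, `perturb`, and `robustness_score` names are illustrative stand-ins, not part of any specific library the session covers.

```python
import random

def perturb(text, seed=0):
    """Introduce a small, accidental-style change: swap two
    adjacent characters (a common typo pattern)."""
    rng = random.Random(seed)
    chars = list(text)
    if len(chars) > 2:
        i = rng.randrange(len(chars) - 1)
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def toy_model(text):
    """Stand-in classifier: 'positive' iff the word 'good' appears.
    A real test would wrap an actual trained model here."""
    return "positive" if "good" in text.lower() else "negative"

def robustness_score(model, inputs, n_perturbations=20):
    """Fraction of (input, perturbation) pairs whose prediction
    is unchanged by the perturbation. 1.0 = fully stable."""
    stable = total = 0
    for text in inputs:
        original = model(text)
        for seed in range(n_perturbations):
            stable += model(perturb(text, seed)) == original
            total += 1
    return stable / total

inputs = ["This product is good", "Terrible service", "good value overall"]
print(robustness_score(toy_model, inputs))
```

A score well below 1.0 flags inputs where a single typo flips the prediction - exactly the kind of fragility and failure point the session's tools surface at scale.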
This session is intended for data science practitioners and leaders who need to know what they can & should do today to build AI systems that work safely & correctly in the real world.
Basic familiarity with machine learning is assumed.
Bio: David Talby is the Chief Technology Officer at John Snow Labs, helping companies apply artificial intelligence to solve real-world problems in healthcare and life science. David is the creator of Spark NLP – the world’s most widely used natural language processing library in the enterprise.
He has extensive experience building and running web-scale software platforms and teams – in startups, for Microsoft’s Bing in the US and Europe, and scaling Amazon’s financial systems in Seattle and the UK.
David holds a Ph.D. in Computer Science and Master’s degrees in both Computer Science and Business Administration. He was named USA CTO of the Year by the Global 100 Awards and GameChangers Awards in 2022.