Community-Specific AI: Building Solutions for Any Audience

Abstract: With half of the world's population online, and spending over five hours a day there, online communities are flourishing. It is now easier than ever for niche communities to form: gamers can find other players and form teams, dating adults can find better matches, students of particular subjects can find teachers and help each other. With faster networks, images, audio, and video are increasingly complementing text, creating a richer experience.

However, as these communities grow wider and deeper, they can become a target for toxic behavior. Forums for underage users can be subverted by users attempting illicit solicitation and exploitation. Chat rooms can see participants engaging in cyberbullying and toxic language. According to Pew Research, over half of online users have seen offensive name-calling and intentional embarrassment, while a quarter have witnessed physical threats and even prolonged harassment. This clearly has to stop. Businesses must now manage their communities in a way that first and foremost protects their users, while also guarding their brand's reputation.

The traditional way to address this issue was through human moderation. Companies like Google and Facebook hire thousands of moderators to respond to flagged content and unwanted activities while respecting users' desire for sharing and self-expression. While effective, this approach does not scale and is beyond the means of most other businesses. More recently, advancements in technologies such as Natural Language Processing (NLP) have shown great promise. But off-the-shelf solutions typically lack the power to represent the unique shared terminology and conversational patterns (e.g., dating chats vs. gaming chats) that each community exhibits, limiting their usefulness.

At Spectrum Labs, we develop community-specific AI solutions that identify and adapt to toxic online behaviors. We aid our clients in detecting inappropriate content and deliver insights into how their users interact with their products and with each other, multiplying the impact of moderators by enabling them to respond not only quickly but also proactively. Rather than offering an off-the-shelf solution, we give each community a unique set of models that address and adapt to its specific needs. In our talk, we review how we tackle the problem of identifying toxic content (e.g., hate speech, cyberbullying, illicit solicitations) while handling the issues of cold start and audience-specific language. We'll cover topics of interest to teams that need to tackle multiple NLP problems across domains where speech and content patterns may change significantly.

Bio: Jonathan Purnell is the VP of Data Science at Spectrum Labs, building tools to recognize and respond to harmful user-generated content and behaviors. Previously a Data Scientist for Krux and Salesforce DMP, Jon delivered Internet-scale distributed products using innovative machine learning techniques, including deep learning and NLP. Before that, he was an Applied Scientist at Bing Ads and a collaborative researcher with BBN Technologies (a division of Raytheon). He holds a Ph.D. in computer science focused on machine learning.