How Natural Language Processing Helps Promote Inclusivity In Online Communities

To create healthy online communities, businesses need better strategies for moderating harmful posts. In this VB on-demand event, AI/ML experts from Cohere and Google Cloud share insight into the new tools that are transforming moderation.

Gamers experience a startling amount of harassment online. According to a recent study, five out of six adults (ages 18-45) experienced harassment in online multiplayer games, amounting to more than 80 million players. Three out of five younger players (ages 13-17) have been harassed. Identity-based harassment is on the rise, as are instances of white supremacist rhetoric.

All of this is happening in an increasingly noisy online world, where roughly 2.5 quintillion bytes of data are generated every day, making content moderation, which has always been a tricky and largely human-powered task, more difficult than ever.

“Competing arguments suggest it’s not that harassment is rising, it’s just more visible because gaming and social media have become more popular, but what that really means is that more people than ever are experiencing toxicity,” says Mike Lavia, enterprise sales lead at Cohere. “It’s causing a lot of harm to people, and it’s causing a lot of harm in the way it creates negative PR for gaming and other social communities. It’s also pushing developers to balance moderation and monetization, so now developers are trying to play catch-up.”

Human-based methods are insufficient

The traditional way of handling content moderation has been to have people examine content, determine whether it violates trust and safety rules, and flag it as toxic or not. Humans are still primarily used because they are considered the most accurate at identifying harmful content, especially in images and video. But training people on trust and safety guidelines and on recognizing harmful behavior takes a long time, Lavia says.

“The way people communicate and use language on social media and in games has changed rapidly over the last couple of years, especially with constant global upheaval affecting conversations,” Lavia says. “By the time humans are trained to understand toxic patterns, those patterns may already be obsolete, and things start slipping through the cracks.”

Natural language processing (NLP), the ability of computers to understand human language, has made great strides in recent years and is emerging as a revolutionary way to detect toxicity in text in real time. Powerful models that understand human language are finally available to developers, and affordable enough, in cost, resources, and scalability, to integrate into existing workflows and technology stacks.
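As a rough illustration of what real-time toxicity detection looks like in code, the sketch below scores chat messages with an open-source toxicity classifier from Hugging Face. It is not Cohere's product or API, which the article does not detail; it is simply a minimal stand-in to show the idea.

```python
# Minimal sketch: score incoming chat messages for toxicity in real time.
# Uses the open "unitary/toxic-bert" model as an illustrative stand-in;
# the specific moderation API a platform uses may differ.
from transformers import pipeline

toxicity = pipeline("text-classification", model="unitary/toxic-bert")

messages = [
    "gg everyone, that was a great match",
    "you are worthless, uninstall the game",
]

for msg in messages:
    result = toxicity(msg)[0]  # e.g. {"label": "toxic", "score": 0.98}
    print(f"{result['label']:>10} {result['score']:.2f}  {msg}")
```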

How language models evolve in real time

The outside world never stands still; it constantly shapes online communities and conversations, so moderation needs to stay up to date with current events. A base model is trained on terabytes of data scraped from the web, and fine-tuning keeps the model relevant to your community, the wider world, and your business. Companies bring their own IP and data to fine-tune the model so it understands their specific business and mission.
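What that company-specific fine-tuning data might look like is sketched below: a handful of community-specific examples written out as JSONL. The labels and file name are hypothetical, and the exact format depends on the fine-tuning service or framework being used.

```python
# Minimal sketch: package community-specific moderation examples as JSONL
# for fine-tuning. Labels and file name here are hypothetical.
import json

examples = [
    {"text": "trash player, go back to the lobby", "label": "harassment"},
    {"text": "anyone want to queue for ranked?", "label": "benign"},
    {"text": "DM me to buy cheap in-game gold", "label": "spam"},
]

with open("moderation_finetune.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```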

“This is where you can extend the model to understand your business and do its job at a very powerful level, and the model can be updated very quickly,” Lavia says. “And over time, you can create thresholds and start to retrain and set new thresholds, so you can capture new intents of toxicity.”
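A hedged sketch of what such thresholds might look like in practice is below: classifier scores are routed into remove, review, or allow buckets. The cutoff values are purely illustrative and would be tuned for each community, then revisited whenever the model is retrained.

```python
# Minimal sketch: route content based on a classifier's toxicity score.
# Threshold values are hypothetical and would be tuned per community.
REMOVE_THRESHOLD = 0.90   # near-certain toxicity: remove and report
REVIEW_THRESHOLD = 0.60   # ambiguous: queue for human review

def route(message: str, toxicity_score: float) -> str:
    if toxicity_score >= REMOVE_THRESHOLD:
        return "remove"
    if toxicity_score >= REVIEW_THRESHOLD:
        return "human_review"
    return "allow"

print(route("example message", 0.95))   # remove
print(route("example message", 0.72))   # human_review
print(route("example message", 0.10))   # allow
```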

For example, conversations about Russia and Ukraine can be tagged as not necessarily harmful, but worth following. If a user is flagged multiple times in a session, they can be tagged, monitored, and reported as appropriate.
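One way that kind of session-level monitoring could be implemented is sketched below. The flag limit and escalation behavior are assumptions for illustration; the article only says that repeat flags within a session are monitored and reported.

```python
# Minimal sketch: count flags per user within a session and escalate after
# repeated flags. The limit and escalation action are hypothetical.
from collections import defaultdict

FLAGS_BEFORE_ESCALATION = 3

class SessionMonitor:
    def __init__(self):
        self.flag_counts = defaultdict(int)

    def record_flag(self, user_id: str, topic: str) -> None:
        self.flag_counts[user_id] += 1
        if self.flag_counts[user_id] >= FLAGS_BEFORE_ESCALATION:
            self.escalate(user_id, topic)

    def escalate(self, user_id: str, topic: str) -> None:
        # In practice this might open a trust-and-safety ticket or mute the user.
        print(f"escalating {user_id}: flagged {self.flag_counts[user_id]}x ({topic})")

monitor = SessionMonitor()
for _ in range(3):
    monitor.record_flag("player_42", "watch-topic: geopolitical conflict")
```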

“Previously, the model couldn’t see that,” he says. “By retraining the model to account for that kind of training data, you can start monitoring and identifying this type of content. It’s very easy to retrain, and to keep retraining over time as needed.”

Misinformation, political talk, current affairs, and all sorts of topics that don’t fit a community, and that cause the kind of division that alienates users, can be flagged.

“The extremely high churn rates seen on Facebook, Twitter, and some gaming platforms are largely due to these toxic environments,” he says. “It’s hard to talk about inclusiveness without talking about toxicity, because toxicity prevents inclusiveness. We need to figure out what the happy medium is in moderation.”

To learn more about how NLP models work, how developers can use them, how to cost-effectively build and scale inclusive communities, and more, don’t miss this on-demand event!