How to Create a Winning Content Moderation Strategy

Technology is developing at a dizzying pace. The Internet has been around for roughly a quarter of a century, and we are still in its youth. It has given many people the ability to connect one-to-one via instant messaging, websites for any niche interest imaginable and social media. Facebook alone has 2 billion active users per month.

While many herald social media as the greatest gift from the internet gods, it can also be a dangerous place that allows sexual predators to lurk behind fake profiles or enables individuals to spread messages of hate. Conversely, it has reunited long-lost friends, enabled new connections and helped charities raise money.

The challenge is that content is being produced rapidly and in massive amounts, hour after hour. For example, according to Smart Insights, the following happens in any given 60 seconds:

  • YouTube: 500 hours of video uploaded
  • Facebook: 3.3 million posts
  • WordPress: 1,440 posts
  • Instagram: 65,972 photos uploaded

With so much content being uploaded daily, including live video, content as disturbing as suicide or murder has already made its way into the news feeds of Facebook users, including underage ones. Facebook, like any popular social network, doesn’t want to be responsible for that.

As a result, an army of humans has been hired to keep scandalous images and videos out of the news feeds of users to the tune of “well over 100,000 (employees) – that is, about twice the total head count of Google and nearly 14 times that of Facebook,” according to Wired.

Facebook’s formula for content reviewers, according to a recent ProPublica article, is reported to be:

[Image: Facebook’s content-review formula, courtesy of ProPublica.]

While it’s safe to say that something like child pornography easily falls into the prohibited category for most major social networks, it gets much trickier when the issue is more political.

For example, Google announced in June that it had implemented a strategy to combat the sharing of terrorist propaganda on YouTube after it was hit with an advertising scandal earlier this year. When extremist content was positioned next to ads for AT&T and Verizon on YouTube, those companies pulled their ad dollars from the video-sharing site until it could guarantee that wouldn’t happen again. While programmatic advertising was the responsible party here, at the end of the day it still happened on YouTube’s turf. And in this case it wasn’t just bad ad placement; it resulted in a decline in ad revenue.

Until artificial intelligence can accurately identify controversial images or live video in real time and remove them without censoring too much “safe” content, such material may continue to surface on social media. And until artificial intelligence has more human input and more data to learn from, a hybrid approach of the two will remain.

In the meantime, the work assigned to human moderators will heavily outweigh the work assigned to artificial intelligence. Currently, there are two approaches to content moderation: active and reactive. Active content moderation is the most costly approach because it requires content to be monitored in real time, which, given the statistics above, demands a hefty staff of humans.

Reactive content moderation is an approach that allows other users on a platform to flag content for review. A human then reviews it and decides whether to pull the image, or the user entirely, from the platform. By then, the content might already have been seen and shared by thousands or millions of people.
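
The difference between the two approaches is easier to see in code. Below is a minimal, hypothetical Python sketch (the Post and ModerationPipeline classes and the is_unsafe and human_decision callbacks are our own placeholders, not any platform’s API): active moderation screens every post before it is published, while reactive moderation publishes first and queues user-flagged posts for a human to review after the fact.

    from collections import deque
    from dataclasses import dataclass

    @dataclass
    class Post:
        post_id: int
        author: str
        text: str

    class ModerationPipeline:
        """Toy pipeline contrasting active and reactive moderation."""

        def __init__(self, is_unsafe):
            self.is_unsafe = is_unsafe    # placeholder screening check (AI- or human-backed)
            self.review_queue = deque()   # flagged posts awaiting human review
            self.published = {}           # post_id -> Post currently live

        # Active moderation: every post is screened *before* it goes live.
        def submit(self, post):
            if self.is_unsafe(post.text):
                return False              # blocked; it never reaches the feed
            self.published[post.post_id] = post
            return True

        # Reactive moderation: the post is already live; other users flag it.
        def flag(self, post_id):
            post = self.published.get(post_id)
            if post is not None:
                self.review_queue.append(post)

        # A human moderator works through the flag queue after the fact.
        def review_next(self, human_decision):
            if self.review_queue:
                post = self.review_queue.popleft()
                if human_decision(post):  # True means "take it down"
                    self.published.pop(post.post_id, None)

    # Usage: under reactive moderation the post is live instantly and is only
    # reviewed once somebody flags it.
    pipeline = ModerationPipeline(is_unsafe=lambda text: "blockedword" in text.lower())
    pipeline.submit(Post(1, "alice", "hello world"))          # published immediately
    pipeline.flag(1)                                          # another user objects
    pipeline.review_next(human_decision=lambda post: False)   # moderator leaves it up

The trade-off is visible in the two paths: the active path spends review effort up front on every single post, while the reactive path spends exposure time between publication and takedown.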

For many companies, it can be difficult to determine how to handle massive amounts of user-generated content. Whether you decide to monitor content reactively or actively, you also have to decide which tasks make sense for artificial intelligence and which make sense for humans. In many cases today, a hybrid approach makes sense.
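
One common way to split that hybrid work, sketched below under our own assumptions (the thresholds and the score_content callback are illustrative placeholders, not a vendor API), is to let the AI act on its high-confidence calls and escalate everything uncertain to a human review queue.

    # Minimal sketch of a hybrid routing rule, assuming a model that returns a
    # probability that a piece of content violates policy. The thresholds and the
    # score_content callback are illustrative placeholders, not any vendor's API.

    AUTO_REMOVE_THRESHOLD = 0.95    # AI is confident enough to act on its own
    HUMAN_REVIEW_THRESHOLD = 0.60   # uncertain cases are escalated to a person

    def route(text, score_content):
        score = score_content(text)          # 0.0 (clearly safe) .. 1.0 (clear violation)
        if score >= AUTO_REMOVE_THRESHOLD:
            return "remove"                  # handled entirely by AI
        if score >= HUMAN_REVIEW_THRESHOLD:
            return "human_review"            # queued for a moderator
        return "publish"                     # allowed through automatically

    # Example with a stand-in scoring function:
    print(route("friendly post", score_content=lambda t: 0.05))    # publish
    print(route("borderline post", score_content=lambda t: 0.70))  # human_review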

Because every company has unique business processes and needs, many aren’t sure which content moderation approach makes sense, which approaches are even possible with their current software tools, or where to apply artificial intelligence versus human capital. In many cases, we’ve helped companies determine when it makes the most sense to use AI to do the following (a brief sketch of the first task appears after the list):

  • Apply text filters to identify obscene language
  • Identify whether there is intent to solicit
  • Apply deep learning algorithms to existing data sets
  • Apply image filters
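
To make the first item on that list concrete, here is a minimal keyword-filter sketch; the word list, pattern and function name are our own placeholders, and a production filter would pair a maintained lexicon and a trained classifier with this kind of first pass.

    import re

    # Deliberately tiny, hypothetical word list; a production filter would combine
    # a maintained lexicon with a trained classifier rather than a hard-coded set.
    BLOCKED_TERMS = ["obsceneword", "anotherbadword"]

    # Whole-word, case-insensitive matching so substrings inside innocent words
    # don't trip the filter.
    BLOCKED_PATTERN = re.compile(
        r"\b(" + "|".join(map(re.escape, BLOCKED_TERMS)) + r")\b",
        re.IGNORECASE,
    )

    def contains_obscene_language(text):
        """First-pass text filter: flag any post matching the blocked-term list."""
        return BLOCKED_PATTERN.search(text) is not None

    print(contains_obscene_language("nothing to see here"))        # False
    print(contains_obscene_language("an OBSCENEWORD slipped in"))  # True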

If you don’t have the internal bandwidth to plan, test and implement a content moderation solution, we can help. By applying the right balance of artificial intelligence and human capital, we’ve helped companies take a modern approach to content moderation that satisfies both their audiences and their business objectives.