As hate content spiked, cost cuts at Facebook hit its review team


By Aashish Aryan, Pranav Mukul, Karunjit Singh

As inflammatory and divisive content increased across most markets, including India, the global team responsible for reviewing hate speech at Facebook faced cost cuts, internal documents reviewed by The Indian Express show.

To reduce expenses, three potential levers were proposed internally at the social media company — reviewing fewer user reports, reviewing fewer proactively detected pieces of content, and reviewing fewer appeals, according to an internal strategy note dated August 6, 2019. In effect, the clean-up was the casualty.

As The Indian Express reported on Thursday, it was in July 2020 that an internal document pointed to a “marked increase” in “anti-Muslim” rhetoric on the platform in the preceding 18 months in India, Facebook’s biggest market by the number of users.

“Everyone understands that the cost reductions are coming no matter what we do: teams will be taking a haircut on their CO capacity…,” said the August 6, 2019 note, titled ‘Cost-control: a hate speech exploration’. CO — community operations — refers to the contract labour force at Facebook.

“The question is not how to cut capacity, but how far we can cut without eliminating our ability to review user reports and do proactive work,” the note said.

The note discussed specific ways to employ the three levers to cut costs — including ignoring “benign user reports” and asking users to “be more thoughtful before submitting a request for re-review”.

The push to review fewer user reports stemmed from Facebook's finding that, although it reviewed the majority of user reports, the action rate on reactively reported content was "at best 25%".

The document pointed out that nearly three-quarters of the costs incurred on reviewing content were on account of reactive capacity — meaning the capacity used to review content that was already flagged by users or third parties. Only 25% of the review costs were incurred on proactive capacity.

“In H1 (first half, January-June 2019), we worked hard in accordance with the ‘Hate 2019 H1 capacity reduction plan’ to significantly increase the volume of actions we can take while maintaining the same levels of capacity… We need to significantly increase the rigour with which we make decisions on how to spend our human review capacity across the board, and indeed in case of hate speech we need to cut a significant amount of our current capacity in order to fund new initiatives,” the strategy note said.

According to this plan, Facebook aimed to cut the dollar cost of its total hate review capacity by 15 per cent by the end of June 2019. As per this document, the company was spending over $2 million per week on reviewing hate content.

In response to a request for comment, a spokesperson for Meta Platforms Inc — Facebook was rebranded as Meta on October 28 — told The Indian Express: “This document does not advocate for any budget cuts to remove hate speech, nor have we made any. In fact, we’ve increased the number of hours our teams spend on addressing hate speech every year. The document shows how we were considering ways to make our work more efficient to remove more hate speech at scale.

“In the past decade, we’ve created technology to proactively detect, prioritize and remove content that breaks our rules, rather than having to rely only on individual reports created by users. Every company regularly considers how to execute their business priorities more efficiently so they can do it even better, which is all that this document reflects.”

The spokesperson added: “In the last two years, we’ve hired more people with language, country and topic expertise. Adding more language expertise has been a key focus area for us. They are part of the over 40,000 people we have working on safety and security, including global content review teams in over 20 sites around the world reviewing content in over 70 languages, including 20 Indian languages.”

The company did not, however, respond to a specific query on the total expenditure undertaken by Facebook for hate speech review annually, and how this figure had changed since 2019.

These reports are part of documents disclosed to the United States Securities and Exchange Commission (SEC) and provided to Congress in redacted form by the legal counsel of former Facebook employee and whistleblower Frances Haugen.

The redacted versions received by Congress have been reviewed by a consortium of global news organisations including The Indian Express.

In addition to the cost-control hurdles the content review team ran into, there is evidence that Facebook recognised conflicts between the teams handling civic quality issues such as misinformation, hate speech and spam, and those designing the algorithms to make the news feeds of Facebook users more relevant.

In a May 3, 2019 post on an internal group named “Election Integrity Discussions”, a Facebook employee noted that “feed relevance work may have inadvertently pushed some civic quality cuts 60% in the wrong direction”, indicating that changes to the news feed algorithm had increased the prevalence of problematic content.

This story first appeared on indianexpress.com
