A demonstrator attends a protest against riots following clashes between people demonstrating for and against a new citizenship law in New Delhi, India, March 3, 2020. REUTERS/Adnan Abidi

By Raksha Kumar

Facebook removed a sitting US president from its platform after his words inspired people to violence in January 2021. But it has not done the same with Yati Narsinghanand, an Indian Hindu priest accused of posting hateful messages against Muslims. Several of Narsinghanand’s pages and profiles continue to exist on the platform even though a judge rejected his bail plea, saying he is “repeatedly making comments to incite communal passions and spoiling religious harmony through social media and there is a strong possibility of serious crimes being committed in the area”.

Why the double standards? “Facebook, and western companies in general, follow two sets of rules: one for [their] home country and the other for the rest of the world,” said Dr Zafarul-Islam Khan, former chairman of the Delhi Minorities Commission, a governmental body. In July 2020 the Commission released a report on how hate speech on social media, and specifically on Facebook, fuelled the religious violence in North East Delhi earlier that year.

“[It is] racism of a sort for sure, as it is treating the lives and dignity of India’s minorities with little or no value,” said Teesta Setalvad, secretary of human rights organisation Citizens for Justice and Peace, which tracks hate speech and has sent several complaints to Facebook over the years.

They have had very few successes. Setalvad and her colleagues had been flagging hate speech by T. Raja Singh, a politician from the ruling party, since 2019. In September 2020 Facebook banned him for calling Muslims traitors, threatening to raze mosques, and saying Rohingya immigrants should be shot.

Facebook’s own team had declared him ‘dangerous’ in March 2020. But a fan page with more than 219,000 followers and another one with more than 17,000 followers were allowed to operate and generate content for several more months.

The removal of Singh’s account was a significant step towards addressing hate speech on online platforms in India, just as conservative radio host Alex Jones and various white supremacist groups have been permanently banned from the platform. But activists believe the politician’s case is a one-off.

Online calls lead to real-life violence

According to the National Crime Records Bureau, the number of religious riots in India nearly doubled in 2020 despite strict lockdowns: 857 cases of communal or religious rioting were registered in 2020, up from 438 in 2019. “Social media platforms, particularly Facebook, share a lot of responsibility in making hate normal, popular and accessible everywhere in the country. I fear an attempt to unleash genocide may take place anytime before the general elections in 2022,” said Dr Khan, speaking in January at a press briefing organised by the Real Facebook Oversight Board, a group of Facebook critics set up by activists, lawyers, journalists and concerned citizens to monitor the 2020 US elections.

The group is named after the Oversight Board, a body created by Facebook in May 2020 to make binding, independent decisions about content on Facebook and Instagram. Its 20 members are empowered to select cases for review and to uphold or reverse Facebook’s content decisions. Both the board and its administration are funded by an independent trust and supported by a company that is separate from Facebook. The board has made 21 decisions so far, including one overturning Facebook’s decision to remove a video from a Punjabi-language media company shared by one of its users in November 2020.

In 2020, Facebook commissioned the law firm Foley Hoag to conduct a Human Rights Impact Assessment (HRIA) evaluating its role in spreading hate speech and incitement to violence on its services in India, the company’s largest market. A year and a half later, the HRIA’s findings have yet to be published.

On 3 January, over 20 organisations sent a letter to Miranda Sissons, Director for Human Rights at Facebook, urging the company to release the long-delayed HRIA and address serious concerns about its human rights record in India. “The power of Facebook is Mark [Zuckerberg],” said Brian Boland, a former Facebook executive. “Mark’s hands are on everything, Mark’s decisions lead everything. For there to be changes around any of this it needs to come from Mark. This is the kind of report that, at some point, will be elevated to him.” Boland said that if internal teams at Facebook worked faster to put the report in front of Zuckerberg, an answer could be expected sooner.

I reached out to Ms Sissons while reporting this article. We will update this piece if she responds.

Double standards

Facebook not only takes violence and hate speech more seriously in Western countries; it also applies different standards to the powerful within India.

According to the Guardian, Facebook planned to take down a network of fake accounts in the run-up to the 2020 Delhi elections, but stopped when it found the network was linked to a BJP politician.

In October 2021, an investigation by Article-14, an independent website focused on research and reportage, brought a set of fake accounts to Facebook’s attention. “We reported a profile called Sanatani Tiger and the hateful content he was putting out, and Facebook said the profile was not fake. A bunch of profiles with obviously fake [names], such as Pagal Ladka (Mad Guy), Ram Bhakt (Devotee of Ram), Sanatani Yodha (Religious Warrior), Kattar Hindu (Hardcore Hindu) continue unhindered,” they wrote. Within 12 hours of the piece being published, the reporter’s Facebook account was targeted in a hacking attempt.

Anti-Muslim rhetoric on Facebook’s platforms is not unique to India. In April 2021, the Australian Muslim Advocacy Network (Aman) lodged a complaint with the Australian Human Rights Commission claiming that Facebook allows pages that describe themselves as “anti-Islam” and hosts hateful content about Middle Eastern, African, South Asian and Asian people.

The patterns are similar across countries because hateful content attracts more user engagement, according to former Facebook employee Frances Haugen, and user engagement equals money. “The only way we will begin to get Facebook to make different decisions is if the company faces different incentives for how to conduct its business,” she said.

Meta’s investments in India 

Facebook Group’s revenues in India increased by almost 40% year on year, to around $1.2 billion in 2020-21. Globally, Facebook reported $85.9 billion in revenue and an operating profit of $32.6 billion in 2020.

Close to 300 million people in India use Facebook, nearly as many as the entire population of the United States. The country also has 487 million WhatsApp users and about 201 million Instagram users. In short, India is arguably Meta’s most important market, with a young population rapidly taking to digital technology.

When Facebook launched in India in 2010, it had only 20 employees. By 2019 Facebook India had grown to five offices, in Hyderabad, Delhi, Gurgaon, Mumbai and Bangalore, and had expanded its teams to cover sales, marketing, partnerships, policy and other areas of the business.

In September 2021 Facebook said it had spent $13 billion on keeping its users safe over the previous five years. “Over the same five years… they spent 50 billion on stock buyback,” said former Facebook executive Brian Boland at a press briefing on 18 January.

Lack of transparency 

When Facebook’s voluntary disclosures turn up results the company doesn’t like, “they stonewall, they delay”, said whistleblower Frances Haugen, who added that people and governments should push for mandatory transparency.

Facebook is not required to disclose the number of hate speech posts it hosts or the safety systems in place for each language. Unless the company is required to publish data on its hate speech classifiers in Hindi, for example, India will not get the safety it deserves, Haugen said.

A year into the violent conflict between the Ethiopian government and rebel forces from the northern Tigray region, thousands have died and millions have been displaced. On 13 January, Facebook said it would “assess the feasibility” of commissioning an independent human rights assessment of its work in Ethiopia.

In December 2021, the company released a human rights impact report on its platforms in the Philippines. “Many stakeholders felt that Meta’s efforts were largely a ‘band-aid solution’ and that real change needed to occur at the level of the business model,” its conclusion reads. The report included a list of recommendations, but it is unclear what the company has done to implement them.

It took Facebook 25 days after the military seized power in Myanmar and arrested the country’s democratic leaders to ban the military’s accounts. What will it take for Meta to release its report on human rights violations in India? And what will happen once it is released?

“Comprehending the difference between hate speech and free speech requires a candid engagement with an understanding of India’s diversity and India’s track record of vicious communal violence,” said Setalvad, who added that Facebook currently lacks such an understanding.

“Even if the awaited report is released by Facebook, nothing much, beyond some cosmetic steps, will take place in India,” said Dr Khan. “[The amount of money] Facebook makes in India and other Third World countries is more important [for Facebook] than ethics and morality.”

This story first appeared on reutersinstitute.politics.ox.ac.uk