A secret blacklist of individuals and groups used to regulate speech across Facebook’s platforms, instituted by the company in 2012 amid growing public concerns over terrorists being recruited through the site, was published in its entirety by The Intercept on October 12, 2021.
The 100-page Dangerous Individuals and Organisations (DIO) list contains more than 4,000 people and groups, categorised into Terror, Crime, and Hate, with two additional categories that apply only to groups: Violent Non-State Actor and Militarised Social Movement. Supposedly, no one on the DIO list is allowed to maintain a presence on Facebook’s platforms, nor are individuals allowed to represent themselves as members of the listed groups.
Having operated over the years with deliberate obfuscation of its processes and motivations, including ignoring its Oversight Board’s recommendation to make this list public, Facebook has drawn criticism of its content moderation – and of the DIO policy specifically – as an unaccountable system that disproportionately punishes certain communities.
The leaked list’s demographic composition reveals the ways in which Facebook principally echoes American values and conceptions of danger. The list’s disproportionate focus on groups in the Middle East and South Asia, compared with white nationalist groups, has been described as reflecting American anxieties, political concerns and foreign policy values post-9/11 – even though the majority of Facebook’s users reside outside the US.
The list is organised into a three-tier system in which the top two tiers (comprising Terror, Crime, and Hate, and Violent Non-State Actors) are made up mostly of non-white groups, with a heavy focus on Muslim regions and communities – including, inexplicably, Palestinian relief funds.
Tier 3, developed in 2020 and comprising Militarised Social Movements, consists mostly of right-wing American militias. This effectively creates disparate systems of regulation, with heavier penalties applied to the first two tiers in the form of stricter monitoring and censorship of users who discuss or voice support for them.
The implications of Facebook’s dubious, rather ephemeral approach to content moderation are particularly visible in India, its largest market with over 340 million users. The DIO list mentions around 10 India-specific groups, almost all marked as terror groups. These largely comprise a range of communist, insurgent and separatist groups – the Communist Party of India-Maoist, Kangleipak Communist Party, Khalistan Tiger Force, Khalistan Zindabad Force, Nationalist Socialist Council of Nagaland-Isak-Muivah, People’s Revolutionary Party of Kangleipak, United Liberation Front of Asom, People’s Liberation Army of Manipur, and the Garo National Liberation Army.
Apart from the Indian groups, several Islamic militant organisations from other countries, such as the Islamic State and Al Qaeda, are listed as having local affiliates and sub-groups in India.
However, what is striking is the list’s near-complete omission of right-wing Hindutva terror outfits and Sangh-affiliated organisations and individuals. The only one mentioned is Sanatan Sanstha, an extremist Hindu nationalist organisation well established in multiple states, whose followers have been accused of murdering journalist Gauri Lankesh and rationalists Narendra Dabholkar and M.M. Kalburgi. Even this limited designation has been less than effective: despite Facebook quietly banning three Sanatan Sanstha pages in October 2020, TIME magazine uncovered a network of more than 30 pages related to the group, with over 2.7 million followers, that were still running in April 2021.
Just days before the DIO list was revealed to the public, the disclosures of whistle-blower and former Facebook employee Frances Haugen made global headlines, with multiple references to the promotion of ethnic violence in India. They cite internal company documents about fear-mongering, anti-Muslim narratives posted by “Rashtriya Swayamsevak Sangh users, groups and pages” aimed at pro-Hindu audiences, and note that “political sensitivities” prevented Facebook from designating the group.
Haugen’s complaints to the US Securities and Exchange Commission (SEC) against Facebook specify that the company lacks algorithmic systems to adequately track hate speech in Hindi and Bengali, but make clear that Facebook is extremely aware of not only the types and sources of incendiary speech against Muslims in India, but also the structural factors that cause its spread.
In the Indian context, political and economic motivations dictate the ways in which Facebook polices extremism on its platforms. After several reports in recent years documented how social media fuels hate and offline violence against minority communities, a Wall Street Journal article in August 2020 revealed how Facebook India’s top public policy executive, Ankhi Das, thwarted Facebook employees who tried to label a BJP politician a ‘dangerous individual’ and ban him from the website after he posted content calling for Rohingya Muslim immigrants to be shot.
She also opposed applying hate speech rules to “at least three other Hindu nationalist individuals and groups flagged internally for promoting or participating in violence”, and was quoted as saying that “punishing violations by politicians from the ruling party would damage the company’s business prospects in India”.
Follow-up reports further implicated Facebook’s internal decision-making on hate speech and incitement by other BJP lawmakers as well as the RSS-affiliated Bajrang Dal. The latter was flagged by Facebook’s security team for supporting violence against minorities but was ultimately not blacklisted – a decision the Wall Street Journal revealed was driven by fears that the company’s business interests would be “endangered” and that its personnel could face physical retaliation.
Incidentally, while the DIO list draws heavily – nearly 1,000 entries, particularly from Muslim countries – on the Specially Designated Global Terrorist (SDGT) list maintained by the US Treasury Department, it does not include right-wing vigilante groups such as the Vishwa Hindu Parishad or Bajrang Dal, which the CIA also classifies as religious militant organisations.
Against this backdrop, Haugen’s disclosures accuse the company of failing to invest in structural, technical systems to moderate content in Indian languages, particularly since divisive content drives high user engagement and is thus profitable – despite the role Facebook played in facilitating ethnic cleansing in Sri Lanka and Myanmar.
While the DIO list has taken on an almost totemic power since its inception, consistently invoked by Facebook to defend its regulation processes and preserve its monopoly over social networking infrastructures, the whistle-blower complaints revealed that Facebook acts on only 3-5% of hate speech and 0.2% of ‘violence and incitement’ content, and spends 87% of its misinformation-tackling resources on users in the US, who account for less than 10% of its total user base. Perhaps, then, the DIO list is best understood through its symbolic capacities, as a tool intended to minimise legal and PR liabilities. It is expansive but demonstrably evasive – it includes long-dead figures such as Benito Mussolini and Joseph Goebbels but leaves out present threats.
Crucially, the DIO list lends credibility to criticisms about the efficacy of moderating speech in countries whose linguistic, political and cultural contexts remain underrepresented in Facebook’s technical and regulatory systems. The Intercept cited several experts who evaluated the leaked material and said its lack of nuance amounts to harsh suppression of speech about the Middle East. For India, the seemingly deliberate omission from the list of known perpetrators and inciters of violence reveals Facebook’s dissonance from the country’s socio-political reality and its lack of intent to deal with existing threats to Indian democracy.
This story first appeared on thewire.in