In India, the chat app WhatsApp has no serious competitor and counts around 400 million active users, meaning it is likely used by almost all smartphone owners. But after widespread reports that misinformation and rumours spread on the platform have fuelled violence and even murders, many are deeply concerned about the app's role in its biggest market. Here, LSE's Shakuntala Banaji and Ram Bhat explain the findings of their recent research into how WhatsApp is used to spread misinformation in India, and make recommendations to improve how the platform and others tackle this problem.

In late 2018, WhatsApp awarded us one of 20 misinformation and social science research awards for an independent study of the different types of misinformation leading to mob lynching in India, of the WhatsApp users who pass on this disinformation and misinformation, and of the ways in which citizens and experts imagine solutions to this problem.

The research process

After ensuring that we had ethical approval, that we would retain full intellectual control of our data, and that WhatsApp and its parent company Facebook would not interfere with our findings and conclusions in any way, our team conducted in-depth interviews and focus groups with experts from civil society institutions, lawyers, journalists and law enforcement agencies, and with 275 ordinary WhatsApp users across four very diverse Indian states: Karnataka, Maharashtra, Madhya Pradesh and Uttar Pradesh. Like many other places in the country, these states had witnessed multiple incidents of vigilante mob violence and lynching. Many of these incidents, particularly those related to allegations of cow theft or other bovine matters, or to supposed child kidnapping with a view to kidney theft, had been mobilised for on WhatsApp and/or filmed and circulated on the platform afterwards.

In parallel, we reviewed more than 1,000 separate forwarded messages shared by WhatsApp users, including materials reviewed by prominent fact-checking institutions in India. What we found was painful and shocking in its graphic depiction of violence across multiple domains: from accidents, natural disasters and politically motivated lynchings to Islamophobic hate speech, religious nationalist propaganda and misogyny. Combining our analysis of the types of mis- and disinformation with the analysis of our focus group and interview data led us to some startling conclusions about the content circulating amongst ordinary citizens, and about the demographics, views and values of the citizens who circulate it.

The national context

In the last five years, under the Hindu nationalist BJP government and in states ruled by the BJP and its allies, incidents of targeted vigilante mob violence against Muslims, Dalits, Christians, Adivasis and women have risen dramatically. In an equally disturbing development, mobilisations of lynch mobs based on WhatsApp rumours about strangers alleged to be child kidnappers or kidney snatchers have resulted in more than forty murders since 2017. WhatsApp, TikTok, YouTube, ShareChat, Facebook, Twitter, Instagram and a host of other lesser-known digital social media applications have been heavily implicated in the circulation of what appears to be systematic disinformation by politically motivated groups and malicious persons wishing to destabilise communities. Three state governments (Manipur, Rajasthan and West Bengal) have passed anti-lynching laws, and there is increasing pressure from the Indian government on intermediaries to assume greater liability for enabling mob violence, including responsibility for pre-emptive filtering of content, tracing the originators of misinformation and removing encryption. So how does our report connect to this context?

Our findings

Contrary to popular wisdom, our research suggests that there is no straightforward causal connection between levels of functional media literacy and the sharing of misinformation. When we started out, we encountered a widely held assumption that if only users knew (or were trained) how to distinguish "facts" from "fakes", they would stop sharing misinformation. In other words, the assumption is that most users who share misinformation are 'tricked' into doing so, or do so because they are ignorant and digitally illiterate. On the contrary, we found that users who make or pass on hate speech, disinformation and misinformation have diverse motivations and, in some cases, surprisingly up-to-date digital skill sets. So, if not digital illiteracy and ignorance, what explains the spread of deadly misinformation across India?

First, we discovered deeply held and widespread prejudices, loaded with resentment, suspicion, disgust, contempt and hatred towards minority groups, especially Muslims and Dalits, amongst a significant section of upper- and middle-caste Hindu WhatsApp users. These prejudices appear to be legitimised and reinforced by several factors, including anti-Muslim statements from BJP leaders and swift attacks by fellow citizens, both physical and online, against anyone who criticises the BJP, its leaders or Indian militarism, or who argues for secularism and human rights to be extended to India's minorities.

Furthermore, we found an intimate connection between the formats and prejudices aired constantly on mainstream media (especially television news and popular films) and the misinformation circulating on WhatsApp: the lascivious Muslim terrorist or rapist, the devout Hindu leader, the suspicious Dalit hanging around to steal or harm cows, the meat-eating Congress voting liberal, the foreign-identified anti-nationals – all were common tropes on WhatsApp, in mainstream media and in the conversations and minds of many WhatsApp users.

Thus, different news channels, Hindu nationalist supporters on social media and active WhatsApp user groups full of forwarded messages all promote the same ‘narrative’ in different ways, across different platforms.

A significant number of women we spoke to mentioned the devastating consequences of gendered and sexist usage of WhatsApp. These included self-censorship or strong policing of women’s activities (including online/mobile use) by family members, intimidation and bullying of women in groups, blackmail, depression, and feelings of humiliation and powerlessness that can lead to suicide.

Some interviewees reported feeling angry, disturbed, shocked, uneasy or frightened when encountering some of the content they forwarded. These reactions apparently pushed users into indiscriminate practices such as bulk-deleting or bulk-forwarding messages, a response to the fatigue produced by the sheer amount of information received daily.

Finally, we found that the sharing of misinformation was closely tied to users' beliefs about their own national and civic duty (including the notion of protecting a religious community). In this vein, older users often forwarded misinformation based on their respect for and trust in the sender rather than on the content of the message. Some users asserted that during 'highly charged' occasions (elections, cross-border conflicts), it was their duty to share a specific ideological position ensconced in a text message or image rather than to quibble about its truth status. For example, heavy circulation of disinformation followed an attack in February 2019, when a suicide bomber killed 40 members of the Central Reserve Police Force in the Pulwama district of Jammu and Kashmir.

Our typology of misinformation identifies four primary categories:

  • targeting opposition political parties and leaders (especially ex-prime minister Manmohan Singh and members of the Gandhi family)
  • targeting individuals/institutions from vulnerable groups, especially Muslims and Dalits
  • targeting women
  • miscellaneous – quasi-religious greetings, landscapes, accidents, health and other kinds of news, false historical allusions and more.

Each of these categories of misinformation produces specific targets and has varying and deadly consequences for vulnerable groups.

What we describe as functional media literacy – the ability to carry out Google searches, in-app editing, uploading, downloading, and changes to messages – is no guarantee against the sharing of misinformation. On the contrary, those involved in producing, curating and bulk-sharing disinformation appear to be highly skilled and media savvy upper caste male users in semi-urban or urban areas.

Those from lower classes and lower castes, as well as women users, are at best sharers of misinformation, while the most oppressed groups have very little access to the social infrastructures and resources needed to be part of social media networks. Ergo: misinformation, disinformation and fake news will continue to circulate, and to lead to the murders of people in marginalised and discriminated-against groups, until far more is done than merely showing users how to spot a manipulated image.

We recommend

  • Hate speech and the targeting of specific groups such as Muslims and Dalits must be treated as seriously as technology companies treat instances of child exploitation. Accordingly, there needs to be greater investment (financial and otherwise) in making it easier to trace and punish repeat offenders, even those with political protection, and in making it easier for users to report specific types of hate speech (including violence against women).
  • In order to prevent bulk sharing (especially of misinformation), WhatsApp should restrict the sharing of messages to groups, with the default being that a message can be shared to only one group at a time. (In our view, the harm this curtailment would do to human-rights-oriented users is outweighed by the good it would do in preventing propagandists from spreading hatred and mobilising murderous mobs.)
  • WhatsApp should introduce a ‘beacon’ feature through which users in a specific area can be warned about potential loss of life and property. We also recommend that WhatsApp work more closely with other stakeholders such as Google to ban the use of unauthorised versions of WhatsApp in which restrictions on bulk sharing can be bypassed.
  • WhatsApp and grassroots civil society organisations should work more closely to invest in and promote critical media literacy that recognises how media representation (in both mainstream and digital media) is deeply entwined with power struggles along the lines of caste, class, gender, language and so on. The struggle to make meaning involves production, distribution and consumption, and must include both producers and audiences.
  • Rather than positioning themselves as neutral arbiters of speech, technology companies must adopt a more ethical and proactive position in support of human rights, secularism and other constitutional values.

This story first appeared on the London School of Economics blog. The full research report can be accessed here.