By HRW Team
India’s general elections are scheduled to begin on April 19, 2024, and last six weeks. The results are to be announced on June 4. Voters will elect 543 members to the lower house of Parliament for five-year terms. The party or coalition of parties that wins a majority of seats will nominate a candidate for prime minister and form a government.
What are India’s human rights obligations around the 2024 general elections?
What international human rights laws or standards apply to the use of technology in elections?
What role are online platforms expected to play in the Indian elections?
What authority does the government have to block online content?
Does the use of personal data in the context of the election pose risks?
What responsibilities do tech companies have?
Have social media companies met their human rights responsibilities in previous Indian elections?
What are online platforms doing to protect human rights during the 2024 elections?
What else should tech companies be doing to respect the right to participate in the elections?
What are India’s human rights obligations around the 2024 general elections?
Under international human rights treaties and customary international law, India is obligated to conduct elections freely and fairly, including by ensuring that citizens are able to vote without undue influence or coercion. In addition to ensuring the right to participate in public affairs, India must also secure other rights in the context of elections. These include the rights to freedom of expression, peaceful assembly, association, and privacy; the right of people to run for public office with the freedom to convey their ideas; and the obligation to ensure that voters are able to vote free of abusive or manipulative interference.
India is party to the International Covenant on Civil and Political Rights (ICCPR), the International Convention on the Elimination of All Forms of Discrimination against Women, and the International Convention on the Elimination of All Forms of Racial Discrimination, among other core human rights treaties.
What international human rights laws or standards apply to the use of technology in elections?
The United Nations Human Rights Council and General Assembly have recognized that human rights protections apply online. The UN Human Rights Committee, which monitors compliance with the ICCPR, has recognized that multiple human rights are engaged during elections, and are integral to the right to participate in public affairs. Governments should ensure that these rights are protected online and offline in the context of elections.
The UN Special Rapporteur on freedom of opinion and expression has highlighted internet shutdowns, initiatives to combat “fake news” and disinformation, attacks on election infrastructure, and interference with voter records and voters’ data as key technology-related threats to elections. Internet shutdowns are incompatible with international human rights law, and governments should refrain from imposing them. Restrictions on online advocacy of democratic values and human rights are never permissible under international standards.
The UN General Assembly has issued resolutions recognizing the important role that social media platforms can play during elections and expressing concern about the manipulative use of these platforms to spread disinformation, which can undermine informed decision-making by the electorate. The resolutions have also highlighted the growing prevalence of internet shutdowns as a means of disrupting access to online information during elections.
Freedom of expression experts from the UN, the Organization for Security and Co-operation in Europe, and the Organization of American States have also jointly denounced the adoption of general or ambiguous laws on false information, underscoring the increased likelihood that such laws will be misused to curtail rights during elections.
What role are online platforms expected to play in the Indian elections?
Technology is expected to play a significant role in India’s upcoming election. Indian political parties campaign extensively through digital platforms. Ahead of the upcoming elections, political advertising on Google surged in the first three months of 2024. The governing Bharatiya Janata Party (BJP) has been the largest advertiser among political parties on both Google and Meta over the past three months and has built a massive messaging operation through WhatsApp. “Diffuse actors” with no institutional or organizational affiliations also play a significant, but less transparent, role in disseminating and amplifying political speech on social networks to mobilize voters in India.
As whistleblower reports have made clear, Meta, the parent company of Facebook, Instagram, and WhatsApp, has been selective in curbing, and in some cases has amplified, hate speech, misinformation, and inflammatory posts in India, particularly anti-Muslim hate speech and misinformation, which are likely to play a part in electoral campaigning. Networks of inauthentic accounts, some reported to be associated with government authorities, have also been shown to spread misinformation and hateful content.
The widespread availability of generative artificial intelligence (AI) tools that are low-cost and require little technical expertise to use raises new challenges for India’s 2024 elections. India’s information technology minister, Ashwini Vaishnaw, called AI-generated audiovisual content a “threat to democracy.” In the context of elections, generative AI can be used to create deceptive videos, audio messages, and images impersonating a candidate, official, or media outlet, which can then be disseminated quickly across social media platforms, undermining the integrity of the election or inciting violence, hatred, or discrimination against religious minorities. In the lead-up to the 2024 elections, several parties are using AI in their campaigns.
What authority does the government have to block online content?
Indian authorities have exerted increasing control over online spaces in recent years to shut down criticism and dissent. They have banned at least 509 apps, according to media reports, including TikTok after tensions with China escalated.
The government’s legal authority for blocking the internet and other online content comes mainly from the Information Technology Act and related rules. Additionally, the Election Commission of India (ECI) forbids “any activity which may aggravate existing differences or create mutual hatred or cause tension between different castes and communities, religious or linguistic.”
Indian authorities have a history of applying these laws to block online content critical of the government. In February 2024, the authorities arbitrarily used their powers to block online content and the social media accounts of critics and journalists. For example, the Global Government Affairs team at X (formerly known as Twitter) stated that the Indian government issued “executive orders” on February 21 requiring it to take down specific accounts. Most of these accounts belong to journalists who reported on peaceful protests held by farmers, farmers’ union leaders, and others supporting the farmers’ actions.
The IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, ostensibly aimed at curbing misuse of social media, including to spread “fake news,” in fact enhance government control over online platforms. In April 2023, the government amended the 2021 IT rules, authorizing the authorities to set up a “fact checking” unit with arbitrary, overbroad, and unchecked censorship powers to order online intermediaries to take down content deemed false or misleading about “any business” of the government. The Ministry of Electronics and Information Technology established the fact-checking unit on March 20. However, the Supreme Court put the fact-checking unit on hold until the Bombay High Court decides its constitutionality.
The authorities frequently use internet shutdowns to stem political protests and criticism of the government or as a default policing action, violating domestic and international legal standards that require such shutdowns to further a legitimate aim, and to be necessary, proportionate, and lawful. Shutting down the internet ahead of or during elections risks accelerating the spread of electoral disinformation and incitement to violence, hatred, or discrimination, and it hinders the reporting of human rights violations.
Does the use of personal data in the context of the election pose risks?
Misuse of personal data is a major concern in India’s elections. Personal data can contain sensitive and revealing insights about people’s identity, age, religion, caste, location, behavior, associations, activities, and political beliefs.
India has developed an extensive digital public infrastructure through which Indians access social-protection programs. At the heart of this is “Aadhaar,” the world’s largest biometric identity database, which is required to access all government programs. The Indian government has collected massive amounts of personal data in the absence of adequate data protection laws to properly protect privacy rights.
In August 2023, the Indian government adopted a personal data protection law, but it is not yet operational. The law fails to protect citizens from privacy violations, and instead grants the government sweeping powers to exempt itself from compliance, enabling unchecked data collection and state surveillance. In particular, large amounts of government-held personal data are being made available to the ruling BJP, potentially allowing the party to develop targeted campaigns before the 2024 general elections. Human Rights Watch has documented in other contexts that government authorities repurposed personal data collected for the administration of public services to spread campaign messages and further tilt an already uneven playing field in favor of the ruling party.
In recent years, there have been instances of Aadhaar data being made publicly available because of weak information security practices, which can have serious implications for privacy and misuse in the context of elections. For example, in 2019, the personal data of over 78 million residents in two Indian states was misused to build a mobile app for the Telugu Desam Party, a regional political party with influence in the states of Andhra Pradesh and Telangana. This data reportedly included voters’ Aadhaar numbers, demographic details, party affiliation, and beneficiary details of government schemes, among other information.
Additionally, the Indian government has been proposing to link voter ID cards (and the voter database) with Aadhaar since 2015. In December 2021, Parliament passed the Election Laws Amendment Bill, which created a legal framework for integrating the two systems. However, civil society and experts warned that this could lead to voter fraud, disenfranchisement based on identity, targeted advertisements, and commercial exploitation of sensitive personal data.
In September 2023, the ECI told the Supreme Court that it would clarify that the submission of Aadhaar numbers is not mandatory. However, The Hindu had reported in February 2023 that, according to the ECI, roughly 60 percent of voters had already linked their Aadhaar numbers to their voter IDs. Furthermore, voter registration forms lack a clear option for voters to decline to provide their Aadhaar number.
There have already been reports of misuse of personal data during the campaign period that started on March 16. On March 21, the ECI told the government to stop sending messages promoting government policies to voters because doing so violated the campaign guidelines. The message and accompanying letter from Prime Minister Narendra Modi that prompted the ECI intervention listed several government programs and sparked concerns over data privacy, as well as over the abuse of government communications for political purposes.
What responsibilities do tech companies have?
Under the UN Guiding Principles on Business and Human Rights, companies have a responsibility to respect human rights. This requires them to avoid causing or contributing to adverse human rights impacts, remedy such impacts when they occur, and prevent or mitigate human rights risks linked to their operations. Specifically, companies need to identify human rights risks in their own operations, products, services, and business relationships, in consultation with rights groups, including human rights defenders and journalists at risk, and develop plans and processes to prevent and mitigate these risks.
In the context of elections, tech companies have a responsibility to conduct ongoing human rights due diligence and to revisit existing due diligence measures to take into account the heightened risks to human rights that elections present. As part of this process, companies should address any aspects of their products, services, and business practices that may cause, contribute to, or be linked with undermining free and fair elections, including threats to the right to vote or to participate freely in elections. The risks include the spread of electoral disinformation, manipulative interference with voters’ ability to form independent opinions, and the spread of content that could incite hatred or violence.
Companies should clearly define what constitutes political advertising, so that it is clear to voters who is behind a particular campaign message, and put in place adequate measures to comply with campaign regulations. Actions that companies take should be in line with international human rights standards and conducted in a consistent, transparent, and accountable manner.
Companies should publicize all available measures that the public can take to report electoral disinformation and content that could incite hatred or violence, in all state languages and in multiple formats, including easy-to-access formats, to reach users across India, both literate and otherwise.
In 2019, the Election Commission of India and social media platforms created a Voluntary Code of Ethics for General Elections aimed at increasing transparency in paid political advertising, bringing political ads on platforms like Facebook and Google under the purview of the campaign guidelines for parties and candidates. However, the code was drafted without transparency, public input, or civil society engagement. It lacks a clear definition of what constitutes “political advertising,” making detailed comparisons of political ad spending across different platforms difficult.
Additionally, there is no provision for the Election Commission of India or an independent organization to monitor the platforms’ compliance with the code. Guidance from the Electronics and Information Technology Ministry requires companies to label AI-generated content and to inform users about the possible inherent fallibility or unreliability of the output generated by their AI tools.
Have social media companies met their human rights responsibilities in previous Indian elections?
Indian authorities have applied significant formal and informal pressure on tech companies, both to suppress critical speech and to leave up speech by government-aligned actors that would otherwise violate the companies’ policies.
In 2022, an in-depth investigation by the Reporters’ Collective and ad.watch of political advertisements in India, spanning February 2019 to November 2020, raised questions about whether Facebook was giving the BJP cheaper ad rates than those offered to its opponents in 9 out of the 10 elections analyzed. The investigation also found that Meta allowed the BJP to run surrogate advertisements, in violation of the company’s own rules. When Facebook did crack down on surrogate advertisements, it mostly targeted advertisers promoting the opposition Congress Party.
Collectively, such actions can have the effect of contributing to an uneven playing field by giving the BJP an unfair advantage in political campaigning online. In March 2022, Meta denied in broad terms accusations of favoring the BJP, and repeated previous statements that its policies apply uniformly “without regard to anyone’s political positions or party affiliations.”
Moreover, according to a report in the Washington Post based on an investigation by an outside law firm that Meta contracted in 2019, Meta did not stop hate speech and incitement to violence ahead of a 2020 riot in Delhi in which at least 53 people died. In comments to the Post, Meta referenced its policies on hate speech and incitement, saying it enforced them globally. But Meta has refused to publish this human rights impact assessment, showing a continued disregard for the serious human rights concerns that civil society groups have been raising for years.
What are online platforms doing to protect human rights during the 2024 elections?
In response to public pressure, some platforms and messaging apps in recent years have announced steps they are taking to prepare for elections. Of the major tech companies, Google and Meta announced specific measures in preparation for India’s 2024 elections.
Meta said in March that it will activate an India-specific Elections Operations Center to bring together experts from across the company. The company says its efforts will center on combating misinformation and false news, addressing viral messaging on its subsidiary WhatsApp, making political advertising more transparent, combating election interference, and encouraging civic engagement.
Meta says it will remove AI-generated content that violates its policies, and that AI-generated content can also be reviewed and rated by fact-checking partners. Fact-checkers can rate a piece of content as “Altered,” which includes “faked, manipulated or transformed audio, video, or photos.” Once content is labeled “altered,” its distribution is limited. Meta also requires advertisers globally to disclose, in certain cases, when they use AI or other digital methods to create or alter a political or social issue ad. However, relying on self-disclosure means that AI-altered content, including images, videos, and audio recordings, can spread before it is properly identified.
Meta announced in March that it had joined forces with the Misinformation Combat Alliance (MCA), a cross-industry alliance working to combat misinformation and fake news, to introduce a WhatsApp helpline to address AI-generated misinformation, especially synthetic media (AI-generated audio and visual content). The helpline creates an avenue for reporting and verifying suspicious media.
As part of this initiative, Meta is working with the MCA to conduct training sessions for law enforcement officials and other stakeholders on advanced techniques for combating misinformation, including identifying synthetic audiovisual material. However, training law enforcement officials has significant limitations in India because of long-pending reforms needed to insulate law enforcement from political interference and control, and to protect its independence.
Meta noted that it is closely engaged with the Election Commission of India via the 2019 Voluntary Code of Ethics, and gives the commission a high-priority channel to flag unlawful content.
Google announced in March that it would elevate authoritative electoral information in searches and on its subsidiary YouTube, and provide transparency around election ads. The company said it would combat misinformation, including by working with fact-checkers and using AI models to fight abuse at scale. However, automated content moderation often falls short by missing necessary context and nuance, and it is unlikely to capture all content, particularly in non-English and low-resource languages. Google also announced that it has begun to roll out restrictions on the types of election-related queries for which its Gemini generative AI chatbot will return responses.
X has general policies around elections but has not released specific information on its efforts around India’s election to inform citizens of measures they can take to safeguard their election rights, including reporting misinformation and manipulative uses of AI. X’s general approach to elections focuses on elevating credible information, promoting safety on the platform, promoting transparency, and collaborating with partners. X’s policies state that it prohibits the use of its services for manipulating or interfering in elections or other civic processes. This includes posting or sharing content that may suppress participation or mislead people about when, where, or how to participate in a civic process.
Under the policies, X may label and reduce the visibility of posts containing false or misleading information about civic processes in order to provide additional context. Severe or repeated violations of this policy by specific accounts may lead to permanent suspension.
Some generative-AI-focused tech companies have announced their approach to elections generally. The ChatGPT creator OpenAI said in January 2024 that they “don’t allow people to build applications for political campaigning and lobbying” with their technology. However, an analysis by the Washington Post in August 2023 showed that OpenAI was failing to enforce its March 2023 policy prohibiting political messaging on its products. The Post noted that an OpenAI representative told them the company was “exploring tools to detect when people are using ChatGPT to generate campaign materials,” and that its rules reflected an evolution in how the company thinks about politics and elections.
Anthropic, an AI company, similarly stated that, effective September 15, 2023, its generative AI products should not be used for political campaigning and lobbying, and said in February that it was using technical evaluations to detect potential “election misuses,” including when systems deliver misinformation and bias.
Stability AI, a generative AI company, also has an “Acceptable Use Policy” that asks users not to use its technology to violate the law or others’ rights, impersonate another individual, or generate or promote disinformation. For fact-checkers, AI-generated audio can be harder to identify than visual content. The audio generator developer ElevenLabs has stated that it aims to prevent its technology from being used to mimic prominent politicians’ voices. Though it is focusing first on the US and UK, it says it is “working to expand this safeguard to other languages and election cycles.”
In February, companies that create or disseminate AI-generated content initiated the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections,” a set of voluntary commitments to manage the risks globally arising from deceptive AI election content. However, voluntary commitments are a floor, not a ceiling, and lack enforcement mechanisms needed for genuine accountability.
Digital rights organizations have called on the Election Commission of India to take urgent measures on generative AI and manipulated media content to uphold electoral integrity.
What else should tech companies be doing to respect the right to participate in the elections?
India presents a challenging environment for social media platforms and messaging apps, so companies urgently need to take effective steps to respect human rights in India. They should make the human rights of people in India a priority, including at the expense of profits. This means treating all parties and candidates equitably, and not bending to central government pressure or giving the government or the ruling BJP special allowances, especially when it comes to spreading speech that incites violence or hatred.
Despite the 2021 IT Rules, and other restrictive legislation in India, companies should continue to resist pressure from the authorities when responding to requests to remove content or provide access to data. This is particularly important for content shared by civil society groups, which is crucial for election monitoring and the removal or blocking of which might have an adverse impact on election results.
Companies should also be transparent about data access requests and government takedown orders, including by linking to the Lumen database, a Harvard University-hosted database of takedown notices and other legal removal requests and demands, and by reporting how they responded, whether by proactively reporting a violation to law enforcement or by taking other steps in compliance with Indian law.
Companies that provide tools that generate AI images, videos, audio, and text should demonstrate that they have thought through how their tools can be used and abused in the context of India’s elections, and should specifically outline how they will mitigate those risks, in consultation with human rights and technology experts.
Ahead of elections, and in between election cycles, companies should demonstrate that they have adequately invested in responsible content moderation, both human and automated; carry out rigorous human rights impact assessments for product and policy development; engage in ongoing assessment and reassessment; and consult with civil society in a meaningful way.
This story was originally published on hrw.org.