This article is written by Ishika Kushwah, a third-year B.A. LL.B (H) student, Sage University, Indore, during her internship at LeDroit India.
Abstract
The rapid development of generative Artificial Intelligence (AI) has turned “deepfakes” into a danger to truth, trust, and human dignity, especially in India, where high internet penetration coexists with low digital literacy. This paper examines the multifaceted threat posed by deepfakes, which can be used to commit serious gender-based violence (GBV), undermine democratic integrity, and facilitate large-scale financial fraud. The article critically examines the existing reactive legal framework, a patchwork of legislation comprising the Digital Personal Data Protection (DPDP) Act, 2023, the Bharatiya Nyaya Sanhita (BNS), 2023, and the Information Technology (IT) Act, 2000.
Among the major legal gaps identified are the lack of a deepfake-specific definition, the conflict between the proposed mandatory labelling regime and established intermediary safe harbour (Section 79) jurisprudence, and the significant evidentiary obstacles imposed by the Bharatiya Sakshya Adhiniyam (BSA), 2023. To fortify the public against generative AI-driven deception, the paper ultimately calls for a shift from this legal patchwork to a coherent, technologically informed governance framework that emphasises upstream accountability for AI developers, proportionate penalties for malicious actors, and, most importantly, a massive national mission for digital literacy.
Keywords
Deepfakes, AI Ethics, India Legal Framework, Information Technology (IT) Act, 2000, Digital Personal Data Protection (DPDP) Act, 2023, Bharatiya Nyaya Sanhita (BNS), Intermediary Liability (Section 79), Non-Consensual Intimate Imagery (NCII), Generative AI, Digital Literacy, Mandatory Labelling, Legal Gaps
Introduction: The Deepfake and AI Ethics Landscape
The rapid development of generative Artificial Intelligence (AI) systems has significantly changed how people interact with digital media. While AI holds enormous potential for social and economic advancement, it also presents an existential danger to truth and trust in the form of “deepfakes.” Deepfakes, extremely realistic synthetic or altered audio-visual content produced by deep learning algorithms, have quickly evolved from a niche online curiosity to a common tool for gender-based abuse, financial fraud, and political manipulation.
The deepfake problem is especially severe in India, a country with high internet penetration (85.5% of households possess a smartphone) but persistently low digital literacy. The simplicity, speed, and anonymity with which malicious synthetic content can be produced and distributed directly conflict with the constitutional and social obligation to defend democratic integrity, individual dignity, and data privacy. India’s foundational legal framework, based largely on the Information Technology (IT) Act, 2000 and the newly enacted criminal and data protection statutes, was never designed to address hyper-realistic AI-driven fraud.
This article explores the distinctive features of the deepfake threat in India, critically analyses the current patchwork of applicable laws, including the IT Act, the Bharatiya Nyaya Sanhita (BNS), 2023, and the Digital Personal Data Protection (DPDP) Act, 2023, and identifies the crucial gaps and legal obstacles impeding effective enforcement. Finally, it assesses the proposed regulatory remedies, such as mandatory labelling and enhanced intermediary obligations, required to create an ethical and accountable AI environment.
The Threat of Deepfakes in the Indian Context
The threat posed by deepfakes in India is multi-dimensional, impacting democratic processes, financial stability, and individual safety, particularly for women. Recent high-profile incidents underscore the urgency of a cohesive legal response.
- Undermining Democratic Integrity
Deepfakes directly threaten the integrity of public debate and the electoral process. Where generative AI is abused to fabricate the speeches, remarks, or actions of political leaders, voters may be misled and public confidence in reliable sources undermined. The ability of synthetic media to target regional voter sentiments was demonstrated during the 2020 Delhi assembly elections, when a deepfake video of a well-known politician speaking in a different regional dialect went viral.
More recently, in the lead-up to the general elections of 2024, satirical and manipulated deepfakes featuring clips of deceased leaders being “resurrected” or public figures appearing in deceptive circumstances brought attention to the hazy distinction between satire and malicious misinformation, potentially distorting public opinion. The Election Commission of India (ECI) has had to issue advisories mandating the swift removal of such content during the Model Code of Conduct period, illustrating the legal system’s reactive struggle against rapid technological advancement.
- Financial Fraud and Corporate Scams
Deepfakes are now a crucial component of sophisticated fraud and cybercrime. Finance Minister Nirmala Sitharaman publicly exposed the abuse of AI to mimic her voice and create phoney endorsement videos, warning the public about potential financial scams and the erosion of confidence in official announcements. The threat is just as serious on the corporate side.
In early 2024, a finance officer at a British engineering company with operations in India was tricked into transferring almost $25 million after engaging with AI-generated deepfakes posing as trusted colleagues in a video conference. Such incidents confirm the transition from straightforward phishing to hyper-realistic deception that preys on human trust. Additionally, there has been an increase in AI-generated endorsement frauds that use well-known figures like Virat Kohli, Ratan Tata, and N. R. Narayana Murthy to promote fraudulent investment schemes; some debunked videos have been found to be over 80% AI-generated.
- Gender-Based Violence and Privacy Violations
The production and distribution of non-consensual intimate imagery (NCII), often known as “DeepNude” content, is arguably the most harmful example of deepfake abuse. The faces of women, both private individuals and well-known actors like Rashmika Mandanna, are regularly superimposed onto compromising videos using this technique.
This is a serious invasion of privacy and dignity that can result in cyberbullying, significant emotional suffering, and reputational harm. Given India’s size and social media usage rates, the scope of this privacy threat is enormous. It targets vulnerable groups and exploits personal image data without consent, violating the right to privacy guaranteed under Article 21 of the Constitution. The alarming 280% year-over-year increase in deepfake incidents recorded in Q1 2024 highlights how quickly this threat is growing.
Current Indian Legal Framework for Deepfakes
India does not possess a singular, dedicated ‘Deepfake Law.’ Instead, law enforcement relies on a patchwork of existing and newly introduced legislation to prosecute the consequences of deepfake creation and dissemination.
1. The Information Technology (IT) Act, 2000 and Rules
The IT Act forms the backbone of cyber law, but its application to deepfakes is primarily through analogous offences:
Section 66C (Identity Theft): Punishes the fraudulent or dishonest use of another person’s electronic signature, password, or any unique identifying feature. This is applicable when a deepfake is used for identity impersonation.
Section 66D (Cheating by Personation): Deals with cheating by personation using a computer resource, directly relevant to deepfake scams like the corporate fraud cases.
Section 66E (Violation of Privacy): Criminalises the publishing or transmitting of images of a person’s private area without their consent, particularly useful against DeepNude, though the application to synthetic imagery requires judicial interpretation.
Sections 67 and 67A (Obscenity and Explicit Material): Penalise the publication or transmission of obscene material and sexually explicit material in electronic form, which applies directly to NCII deepfakes.
Section 79 (Intermediary Liability): Provides ‘safe harbour’ protection to social media platforms (Intermediaries) from liability for third-party content, provided they observe due diligence, which includes removing content violating Rule 3(1)(b) of the IT Rules, 2021. The Ministry of Electronics and Information Technology (MeitY) has leveraged this by issuing advisories mandating platforms to remove deepfakes within 36 hours of receiving a complaint, failing which they risk losing their safe harbour immunity.
2. The New Criminal Codes (BNS and BSA 2023)
The recent overhaul of India’s colonial-era criminal justice statutes introduces specific provisions that can be mapped onto deepfake offences:
Bharatiya Nyaya Sanhita (BNS), 2023:
Section 353 (Public Mischief/Misinformation): Penalises making statements, rumours, or reports with the intent to cause public mischief or alarm, directly addressing the use of deepfakes for political misinformation or inciting panic.
Section 319 (Cheating by Personation) & Section 336 (Forgery): These sections provide updated frameworks for prosecuting digital impersonation and the fabrication of electronic documents or records (like deepfake evidence).
Section 111 (Organised Crime): Can be invoked against organised criminal groups that utilise deepfakes for large-scale financial fraud or scams.
Bharatiya Sakshya Adhiniyam (BSA), 2023: The BSA governs the admissibility of electronic records.
Section 63 (Admissibility of Electronic Records): This is critical. It requires an authentication certificate for electronic records, including hash and source verification details. While intended to strengthen digital evidence, this imposes a very high evidentiary bar on victims and law enforcement, as tracing the source and proving the “authenticity” of a manipulated or synthetic file can be technically near-impossible, creating a significant judicial bottleneck.
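To make this evidentiary hurdle concrete, the following is a minimal sketch (not a legal template) of the kind of hash verification a Section 63 certificate contemplates; the file name is hypothetical, and SHA-256 is assumed as the hash function:

```python
# Minimal sketch: computing and comparing file hashes of the kind a
# BSA Section 63 certificate records, so that the copy produced in court
# can be checked against the hash taken at the time of seizure.
import hashlib

def sha256_of_file(path: str, chunk_size: int = 8192) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical file names, for illustration only.
hash_at_seizure = sha256_of_file("evidence_video.mp4")
hash_in_court = sha256_of_file("evidence_video.mp4")

# Any single-bit alteration changes the digest, so a mismatch shows tampering.
# A match, however, only proves integrity since seizure; it says nothing about
# whether the content itself is genuine or synthetic, which is the deeper
# forensic problem the BSA leaves unresolved.
assert hash_at_seizure == hash_in_court
```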
3. The Digital Personal Data Protection (DPDP) Act, 2023
The DPDP Act offers a powerful civil liability framework. Deepfakes fundamentally rely on processing personal data (images, voice, biometric identifiers) without consent.
Section 6 (Consent): Mandates clear, informed, and unambiguous consent from the Data Principal for the processing of their personal data. Non-consensual creation or use of a person’s visual likeness or voice through deepfake technology constitutes a breach of data fiduciary duty.
Penalties: The Act prescribes steep financial penalties (up to ₹250 crore) for breaches of fiduciary duty, complementing criminal law by creating a strong deterrent against commercial entities or platforms that fail to protect user data from deepfake misuse.
Gaps and Judicial Challenges
Despite the robust framework of intersecting statutes, the existing legal landscape suffers from critical gaps, largely due to a legislative and judicial lag in recognising the nuanced nature of generative AI.
1. The Absence of a Deepfake-Specific Definition
The core statutes (IT Act, BNS) were framed before the advent of sophisticated generative AI. As a result, they lack a specific legal definition for “deepfake” or “synthetically generated information.” Although the MeitY draft amendments to the IT Rules, 2021, propose a definition, the absence of this term in primary legislation creates ambiguity, forcing prosecutors to apply provisions intended for traditional identity theft or forgery, which may not fully cover the scope of harm or intent associated with AI-driven deception.
2. The Intermediary Liability (Section 79) Conflict
The most significant legal tension arises from the government’s push for proactive content governance via amendments to the IT Rules, 2021. The draft rules propose to impose on Significant Social Media Intermediaries (SSMIs) a duty to deploy technical measures for verifying user declarations regarding AI-generated content and to add labels/metadata (watermarking).
The Conflict: Section 79 of the IT Act grants safe harbour immunity only when an intermediary does not “initiate the transmission, select the receiver of the transmission, and select or modify the information contained in the transmission.” By mandating platforms to modify content (by adding labels) and to verify content before publication (by checking user declarations), the proposed rules legally stretch the definition of an intermediary, potentially nullifying their safe harbour protection.
The Consequence: This ambiguity creates a chilling effect, potentially leading to over-censorship (or over-removal) by platforms attempting to avoid liability, thereby infringing upon the fundamental right to free speech and expression (Article 19(1)(a)), a principle upheld in cases like Shreya Singhal v. Union of India (2015), which cautions against a “general monitoring obligation.”
3. Evidentiary and Forensic Roadblocks
The reliance on the Bharatiya Sakshya Adhiniyam (BSA) 2023 for authenticating electronic records presents a severe judicial challenge. The technical requirements for proving the chain of custody, source of creation, and integrity (hash verification) of a deepfake are extremely high. Deepfakes are, by design, intended to destroy these forensic trails. Even when detection tools claim high accuracy, the judiciary and police forces often lack the specialised training and equipment to distinguish AI fingerprints (such as unnatural blinking patterns, lack of background noise in audio, or inconsistent light variations) from genuine content, undermining the efficacy of prosecution even when guilt is clear.
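For illustration, one of the fingerprints mentioned above, the absence of ambient background noise in fully synthetic audio, can be approximated with a crude triage heuristic. The sketch below assumes a 16-bit mono PCM WAV file; the file name and the -70 dBFS threshold are illustrative assumptions, not forensic standards, and a court-grade analysis would require validated tooling and expert testimony:

```python
# Toy triage heuristic: fully synthetic voice clips often lack ambient
# background noise, so an unusually low noise floor in the quietest frames
# can flag a clip for expert forensic review. Illustrative only.
import wave
import numpy as np

def noise_floor_dbfs(path: str, frame_ms: int = 20) -> float:
    # Assumes 16-bit mono PCM WAV input.
    with wave.open(path, "rb") as wf:
        rate = wf.getframerate()
        samples = np.frombuffer(wf.readframes(wf.getnframes()), dtype=np.int16)
    frame_len = int(rate * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    frames = samples[: n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt((frames.astype(np.float64) ** 2).mean(axis=1)) + 1e-9
    # Noise floor ~ RMS energy of the quietest 10% of frames, in dBFS.
    return 20 * np.log10(np.percentile(rms, 10) / 32768.0)

if __name__ == "__main__":
    level = noise_floor_dbfs("clip.wav")  # hypothetical evidence file
    print(f"Estimated noise floor: {level:.1f} dBFS")
    if level < -70:  # threshold is an assumption, not a standard
        print("Unusually clean audio: flag for expert examination.")
```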
4. Jurisdictional Ambiguity
Deepfakes are often created by an anonymous user in one country, hosted on a server in a second country, and downloaded or consumed by a victim in India. This global chain of dissemination creates complex jurisdictional challenges. Prosecuting the original creator requires international cooperation under treaties like the Budapest Convention (to which India is not a signatory) or bilateral agreements, often making enforcement difficult for domestic law enforcement agencies.
Proposed Regulatory and Ethical Solutions
Recognising the inadequacy of the current framework, India has moved toward implementing specific AI governance rules, primarily through amendments to the IT Rules, 2021, and proposing ethical obligations on AI developers.
1. Mandatory Labelling and Traceability Regime
The draft amendments to the IT Rules, 2021, released in late 2025, propose a comprehensive mandatory labelling regime focused on transparency and traceability:
Definition: The rules propose legally defining “synthetically generated information” as any content created or altered using a computer resource that “reasonably appears to be authentic or true.”
Label Requirements: All such synthetic content must be clearly and unambiguously marked. For visual content, the label must cover at least 10% of the total display area; for audio content, the disclaimer must be audible during at least 10% of the initial duration.
Permanent Metadata: The draft mandates embedding unique, permanent metadata or watermarks into the synthetic content, ensuring traceability across different platforms and limiting the ability of intermediaries to remove or alter these identifiers (a hedged code sketch after this list illustrates the label-sizing arithmetic and a naive form of metadata embedding).
User Declaration: Platforms are obliged to obtain a declaration from the user at the time of upload stating whether the content was created or modified using AI.
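Because the draft rules prescribe outcomes rather than implementations, the following minimal Python sketch (using the Pillow imaging library; the file names and metadata keys are hypothetical) illustrates the 10% display-area arithmetic and a naive form of metadata embedding. A bare text chunk like this is trivially strippable, which is precisely why the draft contemplates permanent, tamper-resistant identifiers along the lines of C2PA manifests:

```python
# Illustrative sketch only; the draft rules do not prescribe an implementation.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def min_label_area(width: int, height: int) -> int:
    """Minimum label area in pixels under a 10%-of-display-area rule."""
    return (width * height) // 10

img = Image.open("synthetic.png")  # hypothetical AI-generated image
total = img.size[0] * img.size[1]
print(f"Label must cover at least {min_label_area(*img.size)} of {total} pixels.")

meta = PngInfo()
# A PNG text chunk is easily removed on re-encoding; a real regime would rely
# on cryptographically signed provenance manifests (e.g. the C2PA standard).
meta.add_text("ai_generated", "true")
meta.add_text("generator", "example-model-v1")  # hypothetical identifier
img.save("synthetic_labelled.png", pnginfo=meta)
```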
While technologically ambitious, this regime aligns with international norms (like the EU’s AI Act) and addresses accountability by attempting to tie content back to its originator. However, as noted, its implementation risks legal conflict with the established Section 79 jurisprudence.
2. Ethical and Due Diligence Obligations
Beyond mandatory labelling, the proposed governance structure pushes ethical responsibility upstream to the creators of AI models:
Model Developer Accountability: There is an implicit expectation for companies building Generative AI models (text, image, audio) to implement “compliance by design,” embedding labelling features and audit trails directly into their products. Companies like Google, Meta, and others, as steering committee members of initiatives like the Coalition for Content Provenance and Authenticity (C2PA), are already working towards open technical standards for content provenance and authenticity, suggesting a hybrid model of regulation and industry self-governance (co-regulation).
Focus on Intent and Harm: Future legislation, possibly via the forthcoming Digital India Act, must clearly define penalties based on the intent behind the deepfake (malicious fraud vs. protected parody) and the actual harm caused, ensuring a proportional response that safeguards free expression. Comparable proposals abroad, such as the United States’ Preventing Deepfakes of Intimate Images Act, aim to criminalise non-consensual intimate digital depictions made with malicious intent.
3. The Imperative of Digital Literacy
Legal or technical solutions alone cannot solve the deepfake crisis. The problem is fundamentally a social one, predicated on the human tendency to believe visual evidence.
National Mission: The most critical ethical and policy response must be a massive national mission for digital literacy. Educating the population—especially the high percentage of smartphone users—on the subtle markers of AI-generated content, the concept of content provenance, and the existence of deepfake scams is essential to create a resilient public sphere. Without this, mandatory labels risk being a “socially naïve” solution that fails to tackle the root cause of misinformation consumption.
Conclusion
The deepfake challenge forces India’s legal system into a rapid and complex evolution. The current reliance on the Information Technology Act, augmented by the Bharatiya Nyaya Sanhita and the Digital Personal Data Protection Act, provides a reactive legal toolkit capable of punishing the consequences of deepfakes (fraud, defamation, NCII) but is ill-equipped to govern the technology itself. The true legal and ethical test lies in reconciling the need for transparency and accountability with the foundational principles of intermediary non-liability and the right to free speech.
As the government moves to implement the detailed but controversial IT Rules amendments, a balanced approach is paramount. Future AI governance in India must evolve from a legal patchwork into a coherent framework—likely through the Digital India Act—that is technologically informed, ethically grounded, and focuses on three pillars: upstream accountability for AI developers, swift, proportionate penalties for malicious actors, and, most critically, a robust national investment in digital literacy to fortify the public against the digital deception that defines the age of generative AI.