Deepfakes: Legal Solutions for Misinformation

This article is written by Silvia M Jacob, BA LLB. 5th year at Kristu Jayanti College of Law, Bangalore, during her internship at LeDroit India.

Introduction

Artificial Intelligence, commonly known as AI, began as text-generative software that answered questions. As the technology has advanced, however, AI has become capable of generating not only images but also videos and audio that look and sound strikingly like a real individual.

Now imagine scrolling through social media and seeing yourself on screen, saying something you never said. Dangerous, right? That is what a Deepfake is.

Well-known figures such as Ranveer Singh, Amitabh Bachchan, Narayana Murthy and several others have been victims of Deepfakes, portrayed as saying things they never did.

The rise of Deepfake technology, which uses artificial intelligence to create hyper-realistic but fabricated media, has introduced new challenges in the fight against misinformation and placed the legal system under strain. Deepfakes enable users to manipulate facts, impersonate individuals, and disseminate false narratives, eroding public trust and harming the people depicted.

This article explores the threat of Deepfakes and the legal landscape aimed at combating their misuse.

Synopsis:

  • Understanding Deepfakes
  • How Deepfakes spread misinformation
  • Existing legal challenges
  • Viable solutions
  • Conclusion

Understanding Deepfakes:

As technology advances, so does Artificial Intelligence. There was a time when we merely speculated about AI and its potential to dominate the world.

With AI's emergence, that speculation no longer feels so remote. AI gives humans the support they need to work faster and with greater ease. It helps us manage our tasks, speeds up our work many times over, and allows our creativity to expand to lengths never reached before.

However, every helpful tool comes with its drawbacks. AI, powerful as it is, also poses a danger when used with the wrong intentions, particularly when it generates images and videos. Prompts play a crucial role here: after feeding an AI tool enough material and information, the user only has to ask it to generate an image or video, or can upload media from their own phone, specify the changes they want, and generate the altered media.

Even as AI reaches such milestones, it has also enabled the emergence of Deepfake technology, fuelling fake news, defamation, fraud, privacy invasion, and even the fabrication of crimes that a person never committed.

How Deepfakes spread misinformation:

Deepfake videos, images, and audio are generated from the prompts and data the user supplies, and each round of prompting and feedback gives the underlying model more information with which to produce the desired outcome. Once released, this content thrives on confirmation: media that aligns with an audience's ideas or pre-existing beliefs is accepted, and spread, far more readily.

How convincing a Deepfake is depends on how much information the creator has supplied. Research suggests that humans tend to believe what they see, which leaves a gap between what is real and what is not. AI-generated Deepfakes make that difference hard to detect, even for the human eye.

If Deepfake content makes the audience laugh, cry, or feel outraged, it is likely to be shared and spread rapidly across the internet. Emotional reactions usually override the rational, critical thinking needed to remember that, in this age of AI, not every piece of content online is as real as it seems.

Artificial Intelligence has advanced enough to blur the line between reality and illusion for humanity. Behind the dismissive phrase "It's not that deep!" lies a serious question: are we safe in the era of AI? Deepfake technology enables the creation of "evidence" of events that never happened. A tangible record of something that never occurred breeds deep uncertainty about what is real.

In the judicial system, evidence plays a crucial role in resolving cases and administering justice. If fabricated evidence produced by this technology is submitted, the justice system cannot uphold its central pillar, and the legal system as a whole is damaged.

Deepfake technology uses advanced AI to mimic voices, mannerisms, and appearances, defeating the traditional methods used to verify the authenticity of material. This makes it hard for journalists, courts, and the public to distinguish true from false in the realm of digital evidence, at a time when the public relies ever more heavily on digital platforms for information.
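One baseline safeguard courts and investigators already use for digital evidence is cryptographic hashing: recording a file's digest at the moment of seizure, so that any later alteration is detectable. The sketch below illustrates the idea in Python; the function names are illustrative, not drawn from any particular forensic tool.

```python
import hashlib
from pathlib import Path

def file_sha256(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_evidence(path: Path, recorded_digest: str) -> bool:
    """True only if the file still matches the digest recorded earlier."""
    return file_sha256(path) == recorded_digest
```

Note that hashing proves a file has not changed since the digest was recorded; it cannot, by itself, prove the recording was genuine when first captured, which is why provenance standards discussed later matter.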

Existing legal challenges:

Instead of using such technology in good faith, scammers routinely create Deepfake videos and voice recordings of Tom Cruise, Elon Musk, and other well-known celebrities and political leaders to promote fraudulent investment schemes. Such impersonations exploit the trust audiences place in famous figures and lure victims into financial scams.

Such instances pose multifaceted legal challenges, not only for famous figures but also for the general public. Seeing a video of oneself admitting to something one never did threatens the future of the judicial system, calling into question the relevance and accuracy of evidence in a court of law.

The primary challenges that the legal system faces include:

  1. Lack of specific legislation:

However simple it may sound, proper legislation governing Deepfakes would not only secure the future of evidence but also protect citizens from becoming victims. Few jurisdictions currently have a comprehensive statute tailored to Deepfakes. Most countries, including India, rely on a patchwork of existing laws on defamation, privacy, fraud, copyright and cybercrime, which often do not map cleanly onto AI-generated media and may not adequately address the problem. This gap leaves victims in fear and scammers without penalty or punishment.

  2. Authentication of evidence:

The Indian courts run on evidence, whether in civil or criminal cases, and evidence plays a crucial role in determining justice. Deepfakes undermine the foundational legal requirement that evidence be authentic and non-fabricated. Where a party disputes whether a piece of evidence is genuine or fake, litigation costs rise sharply just to determine whether the evidence is reliable.

  3. Cross-border scams:

In the era of cross-border transactions, malicious actors can generate Deepfakes anonymously across global platforms for profit. When content originates abroad, jurisdiction over the problem is limited, complicating cooperation between law-enforcement agencies.

  4. Digital platforms and their liability:

Digital platforms such as social media sites and hosting services benefit from the "safe harbour" under Section 79 of the Information Technology Act, 2000: these social media and e-commerce platforms are not liable for illegal content posted by their users, so long as they meet the statutory conditions and observe due diligence. This, however, leaves a loophole in policing Deepfake content.

  5. Balancing free expression and privacy rights:

In efforts to curb harmful Deepfakes, it is difficult to strike a balance between freedom of speech and personal rights: regulate too loosely, and scammers compromise individuals' privacy without punishment; regulate too broadly, and legitimate artistic, educational or satirical uses are prohibited.

  6. Intellectual property and copyright:

Training the AI models behind Deepfakes on copyrighted images, videos or audio without authorisation raises disputes over whether such use qualifies as fair use. The "black box" nature of AI training makes specific infringements difficult to trace.

  7. Detection technology reliability:

Tools like Hive's AI detector, Intel's FakeCatcher and many others may help in combating Deepfakes. This "fight AI with AI" approach, however, can be inconsistent: detectors are sometimes flawed, and sophisticated Deepfakes can evade them.

Viable solutions:

Every innovation is both a boon and a curse to the world, and AI's generative content carries the risk of Deepfakes in just this way. Combating Deepfakes requires a multi-pronged approach: specific legislation, advanced detection systems, effective platform policies, and public education. Such measures must be taken not by one country alone but by the whole world if this unethical use of generative AI is to be defeated. Certain measures and solutions are as follows:

  1. Deepfake Legislation:

Every harmful act needs its legislation, and it is high time to enact a law that not only defines Deepfake content but also punishes scammers and provides relief to victims. Without proper legislation governing the production and use of such content, the judicial system will struggle to distinguish what is real from what is fake, causing delays in providing justice.

Such legislation should govern the AI tools that facilitate Deepfake content and regulate their use. India has passed no such legislation yet, and can learn from the United States, whose Take It Down Act criminalises non-consensual intimate Deepfakes with imprisonment and fines, and from China, whose "deep synthesis" rules demand visible labels on AI-generated content and require verification from providers.

The legislation should also make digital platforms liable for the content they host, requiring them to run authenticity checks before content goes online. This would ensure compliance from social media and e-commerce platforms, and would make users more cautious about whether the content they post is ethical.

  2. Technological aspect:

In the present scenario, AI-based detection tools are a key part of the solution to combat Deepfake content. Tools like Sensity AI, Hive AI Deepfake Detection, Intel FakeCatcher, and others offer real-time assessment of media, determining whether it is a Deepfake or not.

Cryptographic watermarks like C2PA standards embed verifiable provenance metadata at content creation, enabling automated checks for AI-generated or altered media. Blockchain-based verification can also be used to create immutable audit trails for video evidence.
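The "immutable audit trail" idea can be illustrated with a simple hash chain, the core structure behind blockchain-based verification: each log entry commits to the hash of the previous one, so altering any earlier entry breaks every subsequent link. This is a minimal sketch, not a real blockchain or C2PA implementation; the class and field names are illustrative.

```python
import hashlib
import json
import time

def _hash_entry(body: dict) -> str:
    """Deterministic SHA-256 over a JSON-serialised entry body."""
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

class AuditTrail:
    """Append-only log in which each entry commits to the previous hash."""

    def __init__(self):
        self.entries = []

    def append(self, media_digest: str, note: str) -> None:
        # Link this entry to the hash of the previous one (or a zero root).
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"prev": prev, "media": media_digest,
                "note": note, "ts": time.time()}
        self.entries.append({"body": body, "hash": _hash_entry(body)})

    def verify(self) -> bool:
        """Recompute every link; any tampering breaks the chain."""
        prev = "0" * 64
        for entry in self.entries:
            if entry["body"]["prev"] != prev:
                return False
            if _hash_entry(entry["body"]) != entry["hash"]:
                return False
            prev = entry["hash"]
        return True
```

A production system would additionally sign entries and distribute copies of the chain, since a single locally stored chain can simply be rewritten wholesale.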

  3. Platform practices and take-down protocols:

Social media platforms let users report AI-generated content or label it as AI, but users often have no way of telling which content is a Deepfake. When a user reports media as a Deepfake that violates the guidelines, the platform should treat the report seriously and, after thorough investigation, take the media down through a standardised notice-and-action mechanism. This would build trust among users, creating a safe and reliable platform for them.
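A standardised notice-and-action mechanism is, at its core, a small state machine: a report moves from received, to under review, to a recorded outcome, with every transition logged. The following sketch shows one hypothetical shape such a workflow could take; no platform's actual moderation API is being described.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Status(Enum):
    REPORTED = auto()
    UNDER_REVIEW = auto()
    TAKEN_DOWN = auto()
    DISMISSED = auto()

@dataclass
class DeepfakeReport:
    """A user report moving through a notice-and-action pipeline."""
    content_id: str
    reporter: str
    status: Status = Status.REPORTED
    history: list = field(default_factory=list)  # (old, new, reason) tuples

    def _move(self, new: Status, reason: str) -> None:
        self.history.append((self.status, new, reason))
        self.status = new

    def start_review(self) -> None:
        assert self.status is Status.REPORTED, "review must start from REPORTED"
        self._move(Status.UNDER_REVIEW, "review opened")

    def resolve(self, is_deepfake: bool, reason: str) -> None:
        assert self.status is Status.UNDER_REVIEW, "must be under review"
        self._move(Status.TAKEN_DOWN if is_deepfake else Status.DISMISSED,
                   reason)
```

Keeping the full transition history is what makes the process auditable: a regulator or court can later reconstruct exactly when the platform was notified and how it acted.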

  4. Public awareness and media literacy:

Many individuals encounter Deepfakes only through social media, without any background knowledge of them. Exposure to such content can even tempt users to misuse the mechanism themselves, profiting at the expense of other people's privacy. Making the public aware of what Deepfakes are is therefore essential, not only for educational purposes but also for the prevention of future Deepfake crimes.

Broadcasting campaigns, conducting workshops in schools, colleges, and workplaces, and including the topic in academic curricula can spread awareness of Deepfake content and help people learn to use detection tools and report such content.

Conclusion:

Deepfake technology makes it very easy to create convincing fake media that can cause real harm and danger to people's lives. While existing law can help for the time being, there is a need for a strong push towards clearer, better and stronger legal solutions specifically designed to stop Deepfakes, together with effective, transparent reporting mechanisms and reliable tools to identify misinformation whose spread can cost lives, not just for one individual but for the world as a whole.

