Legal Implications of Synthetic Media: Who Owns Deepfakes?

This article was written by Nishita Singh during her internship with LeDroit India.

Introduction

The emergence of synthetic media, especially deepfakes, powered by artificial intelligence (AI) has transformed the production of digital content. Deepfakes manipulate pre-existing photos, videos, or audio to create highly lifelike, often indistinguishable synthetic media. Although the technology has opened new opportunities in marketing, education, entertainment, and the arts, its rapid spread has also raised serious ethical and legal concerns. Among these, intellectual property rights (IPR) present a particularly complex and contested issue. The central question is who owns content produced with deepfake technology: the person whose voice or image is imitated, the developer of the AI program, or the user who deploys the technology to create the content? These questions demonstrate how ill-equipped current legal frameworks are to handle the ramifications of synthetic media. As the line between the real and the synthetic grows increasingly blurred, this article examines the intellectual property issues raised by deepfakes and possible ways to regulate this transformative technology.

Synopsis

Deepfakes, a state-of-the-art application of AI technology, can produce highly lifelike synthetic media with a wide range of uses and implications. Although they offer creative possibilities in fields such as education, satire, and entertainment, their abuse has raised grave concerns about disinformation, copyright infringement, and privacy. This article examines the intellectual property issues associated with deepfakes, including questions of copyright ownership, moral rights, and trademark infringement. It also considers key legal problems, including the absence of clear legislation, ethical dilemmas, and the technology's effects on society. Case studies and precedents illustrate how different platforms and jurisdictions are tackling these issues. The article concludes by proposing legal frameworks to better regulate deepfake technology, such as new laws on AI-generated content, expanded personality rights, mandatory disclosure, and international cooperation on harmonized standards.

Understanding Synthetic Media and Deepfakes

Deepfakes are created using deep learning, a branch of artificial intelligence in which algorithms are trained on enormous datasets of photos, videos, or audio. By analysing patterns and attributes in this data, the AI can generate highly realistic content: videos of people saying or doing things they never did, audio clips that mimic someone's voice, or even entirely fabricated personas. Although deepfakes' creative potential is undeniable, their capacity to mimic a person's voice or appearance without consent has raised serious concerns[1]. Deepfakes can be weaponized in ways that carry major ethical and legal repercussions, from financial fraud and political propaganda to non-consensual explicit material. Their rise challenges traditional legal concepts such as copyright, privacy, and moral rights, which were not designed to handle the complexities of synthetic media. The creation and use of deepfakes therefore demand a careful review of existing intellectual property regulations.
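For readers unfamiliar with the underlying mechanics, the following toy Python sketch illustrates the core idea described above in miniature: a model learns statistical patterns from one person's images and then re-renders another person's image through that learned representation. This is purely a conceptual stand-in, not a real deepfake pipeline; actual systems use deep neural networks (shared-encoder autoencoders or generative adversarial networks) trained on real footage, whereas here simple principal component analysis over random placeholder data plays the role of the learned model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "training set": 200 flattened 16x16 images of person A.
# (Random numbers here; a real system would use thousands of real frames.)
faces_a = rng.normal(size=(200, 256))

# "Training": learn the statistical structure (principal components)
# of person A's images -- a crude analogue of a learned face model.
mean_a = faces_a.mean(axis=0)
_, _, components = np.linalg.svd(faces_a - mean_a, full_matrices=False)
face_space = components[:32]  # keep 32 components as the "model"

# A single image of person B, never seen during training.
face_b = rng.normal(size=256)

# "Generation": express B's image in A's learned face space and
# reconstruct it -- a synthetic blend of B's pose with A's appearance model.
coeffs = face_space @ (face_b - mean_a)
synthetic = mean_a + face_space.T @ coeffs

print(synthetic.shape)  # (256,) -- one synthetic 16x16 image, flattened
```

The legal questions discussed in this article attach to each stage of this process: the training data (whose images were used?), the learned model (who owns it?), and the synthetic output (who, if anyone, authored it?).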

Copyright Ownership and Intellectual Property Rights in Deepfakes

The foundation of copyright law is human authorship: copyright in an original work is normally vested in its author. Deepfakes challenge this idea because they involve several people and technological intermediaries. The first layer of complexity is the input content[2]. Deepfake algorithms are frequently trained on datasets containing copyrighted material, such as images, films, or audio recordings. Even if the finished product is transformed or synthesized, using these components without permission may still constitute copyright infringement. Replicating a celebrity's voice or appearance from a copyrighted film, for example, may infringe copyright and possibly the performer's rights as well.

The second layer concerns the generated material itself. Because deepfakes are produced by algorithms rather than by human hands, they may not satisfy the originality requirement under copyright law. Courts around the world are still debating whether AI-generated works can be copyrighted and, if so, who owns the rights: the person portrayed in the media, the AI developer, or the user operating the AI. Until these questions are resolved, ownership of deepfake content remains legally uncertain.

Moral Rights

Moral rights, such as the right of attribution and the right to object to derogatory treatment of a work, further complicate the deepfake debate. These rights matter most when people are portrayed in deepfakes in an inaccurate or harmful way. For instance, even if the content is entirely fictional, a deepfake depicting a public figure engaging in unlawful or immoral behaviour may damage their reputation[3].

In jurisdictions with robust moral rights provisions, such as the European Union, people may have legal avenues to stop the use of their likeness in deepfakes. Exercising these rights, however, often requires overcoming difficult legal and technical obstacles, particularly when the content originates in another country.

Considerations for Trademarks

Deepfakes may also infringe trademark rights when they exploit a prominent figure's image in a way that harms their reputation or misrepresents them. For example, a deepfake advertisement that uses a celebrity's image without consent may amount to false endorsement or passing off. Such misuse highlights the risk of reputational harm in addition to intellectual property concerns.

Principal Legal Difficulties

1. Privacy and Consent

Using someone's likeness in a deepfake without permission raises serious privacy issues. Legal doctrines such as the right of publicity, which gives individuals control over the commercial use of their identity, may provide the basis for a claim. Ordinary citizens, however, enjoy fewer safeguards, since these rights are often limited to specific contexts or to well-known figures. The absence of global standards for handling consent in the creation and distribution of deepfakes compounds the problem.

2. Absence of Clear Regulation

Because they were drafted long before artificial intelligence (AI) existed, current intellectual property laws are ill-equipped to handle its complexity. Conventional frameworks struggle to assign rights among multiple stakeholders, to distinguish human from machine authorship, and to settle disputes over content distributed across international borders. As deepfake technology develops, the lack of defined legislation leaves serious gaps in the protection of creators' and individuals' rights.

3. Implications for Ethics and Society

Deepfakes affect society in ways that go far beyond legal ownership. Their capacity to spread false information, sway public opinion, and exploit others for financial or private gain adds to the ethical complexity. Regulating deepfakes matters not only for protecting intellectual property but also for preserving individual dignity and social trust.

Illustrations and Cases

California's Deepfake Laws

California's legislation prohibiting the use of deepfakes in political campaigns and non-consensual sexual material is one step toward resolving the ethical problems of synthetic media. These laws, however, focus on consent and misuse rather than settling intellectual property conflicts.

Zhao v. Douyin (2020)

In this landmark Chinese case, the court held that a person's face constitutes personal data and that its unauthorized use violates the right to privacy. Although the case does not concern intellectual property directly, it underscores the importance of regulating unauthorized use of a person's likeness in the age of synthetic media[4].


Platform Deepfake Policies

Social media companies such as Facebook and TikTok have put policies in place to curb the abuse of deepfakes. Although commendable, these measures underscore the need for comprehensive legal frameworks that extend beyond platform-level enforcement.

Proposed Legal Frameworks

To tackle the difficulties deepfakes present, policymakers need to take a comprehensive approach.

AI-Generated Content Law: Enact legislation that precisely defines who owns AI-generated works, balancing the interests of developers, users, and the people portrayed in the material[6].

Expanded Personality Rights: Extend the scope of personality rights to include safeguards against unauthorized deepfakes, ensuring that individuals retain control over their identity and likeness[7].

Mandatory Disclosure: Require creators to disclose when content is synthetically generated, reducing the potential for abuse and providing a legal basis for addressing harm.

International Cooperation: Encourage collaboration among nations to create uniform standards for regulating synthetic media, resolving jurisdictional conflicts, and promoting accountability[8].

Conclusion

Deepfakes exemplify AI's transformative potential, pushing the limits of creativity and invention. Their moral and legal difficulties, however, cannot be ignored. The question of ownership, whether it lies with the person portrayed, the user, or the AI developer, demands urgent attention.

Policymakers can strike a balance between innovation and accountability by expanding individual rights, reforming intellectual property laws, and promoting international cooperation. With careful regulation, deepfake technology can be used responsibly, maximizing its positive effects while minimizing its negative ones.


[1] Sheikh Inam Ul Mansoor, Legal Implications of Deepfake Technology: In the Context of Manipulation, Privacy, and Identity Theft, ResearchGate, December 2024, https://www.researchgate.net/publication/387499036_Legal_Implications_of_Deepfake_Technology_In_the_Context_of_Manipulation_Privacy_and_Identity_Theft

[2] Sheikh Inam Ul Mansoor, Legal Implications of Deepfake Technology: In the Context of Manipulation, Privacy, and Identity Theft, ResearchGate, December 2024, https://www.researchgate.net/publication/387499036_Legal_Implications_of_Deepfake_Technology_In_the_Context_of_Manipulation_Privacy_and_Identity_Theft

[3] Ravi Goyal and Heba Ajaz, Mitigating Deepfake Threats: How Existing Laws Can Tackle Misuse, LiveLaw, 16 July 2024, https://www.livelaw.in/mitigating-deepfake-threats-how-existing-laws-can-tackle-misuse

[4] Zhengwei Zhao, Analysis on the "Douyin (Tiktok) Mania" Phenomenon Based on Recommendation Algorithms, ResearchGate, January 2021, https://www.researchgate.net/publication/349000779_Analysis_on_the_Douyin_Tiktok_Mania_Phenomenon_Based_on_Recommendation_Algorithms

[5] Zhengwei Zhao, Analysis on the "Douyin (Tiktok) Mania" Phenomenon Based on Recommendation Algorithms, ResearchGate, January 2021, https://www.researchgate.net/publication/349000779_Analysis_on_the_Douyin_Tiktok_Mania_Phenomenon_Based_on_Recommendation_Algorithms

[6] Dr. Santosh Kumar Tiwari, AI-Generated Content and Copyright Law: Challenges and Adaptations in India, 12 March 2020, https://www.ijamsr.com/issues/6_Volume%203_Issue%2012/20240629_110951_3957.pdf

[7] Dr. Santosh Kumar Tiwari, AI-Generated Content and Copyright Law: Challenges and Adaptations in India, 12 March 2020, https://www.ijamsr.com/issues/6_Volume%203_Issue%2012/20240629_110951_3957.pdf

[8] Dr. Santosh Kumar Tiwari, AI-Generated Content and Copyright Law: Challenges and Adaptations in India, 12 March 2020, https://www.ijamsr.com/issues/6_Volume%203_Issue%2012/20240629_110951_3957.pdf
