UNMASKING DEEPFAKES: THREATS, CHALLENGES, AND GLOBAL RESPONSES


Suhana Roy

11 min read • July 07, 2024


Introduction

Deepfakes are digital media (videos, audio, and images) artificially manipulated or generated using Artificial Intelligence (AI). Because they can produce hyper-realistic digital fabrications, deepfakes pose a significant risk: they can tarnish reputations and erode trust in democratic institutions. Recently, a video featuring renowned actor Rashmika Mandanna went viral across social media platforms, evoking shock and horror among online users. The brief clip of Mandanna had been manipulated using deepfake technology.

What is a deepfake or synthetic media?

The term “deepfake” was coined in 2017 by a Reddit user who, using open-source face-swapping software, shared pornographic videos on the news aggregation site.

Beyond its Reddit origins, the term has come to cover “synthetic media applications” more broadly, including newer creations such as StyleGAN, which produces “realistic-looking still images of people that don't exist,” as Henry Ajder, director of Deeptrace's threat intelligence team, put it.

The Genesis of Deception: Tracing the Roots of Deepfake Technology and Its Evolution

Deepfake technology emerged at the intersection of AI and machine learning, blurring the boundary between reality and fiction. While these technologies offer advantages in fields as diverse as education, film production, criminal forensics, and artistic expression, it is crucial to acknowledge their potential for misuse, including election interference and the dissemination of misinformation at scale.

Today, even semi-skilled or unskilled individuals can easily generate deepfakes by manipulating audio-visual clips and images. As the technology advances and detection becomes more challenging, resources to equip individuals against potential misuse are also growing. Initiatives like the Detect Fakes website from the Massachusetts Institute of Technology (MIT) exemplify efforts to help people identify deepfakes by focusing on subtle details. Nonetheless, the misuse of deepfakes, particularly in online gendered violence, is a rising concern: a 2019 study found that a staggering 96% of deepfake videos were pornographic, and 99% of those featured women.

What are the laws against the misuse of deepfakes?

Section 66E of the IT Act, 2000 becomes relevant in deepfake cases where individuals capture, publish, or transmit a person's images in mass media, thereby violating their privacy. It states:

“Whoever, intentionally or knowingly captures, publishes or transmits the image of a private area of any person without his or her consent, under circumstances violating the privacy of that person, shall be punished with imprisonment which may extend to three years or with fine not exceeding two lakh rupees, or with both.”

Another pertinent provision is Section 66D of the IT Act, which states:

“Whoever, by means of any communication device or computer resource cheats by personation, shall be punished with imprisonment of either description for a term which may extend to three years and shall also be liable to fine which may extend to one lakh rupees.”

Individuals engaged in deepfake cybercrimes in India can be prosecuted by invoking the relevant provisions within the IT Act.

The Indian legal framework also provides copyright protection for various works, including films, music, and other creative content. Where deepfakes are created without permission, copyright owners may take legal action against those infringing their rights.

Section 51 of the Indian Copyright Act, 1957 addresses infringement of copyright, prohibiting the unauthorized use of any work over which another person holds an exclusive right. Furthermore, Indian law explicitly prohibits fraud, including identity theft and financial fraud.

Complementing these legal provisions, the Ministry of Information and Broadcasting issued an advisory to media organizations on January 9, 2023, urging caution in airing content susceptible to manipulation or tampering. Media outlets are advised to clearly label any manipulated content as "manipulated" or "modified" so that viewers are informed of its altered nature.

Indian law does not yet address deepfakes specifically, though the government initiatives and legal provisions described above can be invoked. As deepfakes grow more widespread and sophisticated, the Indian government will have to take further measures to address the issue.

Despite these legal provisions, concerns persist about the adequacy of existing laws in addressing the nuanced challenges posed by emerging technologies. A comprehensive regulatory approach is needed, grounded in a market study assessing the harms caused by AI technology. Critically, the current framework focuses on addressing harm after the fact, underscoring the need for preventive measures and user awareness.

Global Initiatives: Tackling Deepfake Proliferation on the International Stage

The Bletchley Declaration - A collective effort in a collaborative spirit

Twenty-eight countries, including the US, Canada, Australia, China, Germany, and India, alongside the European Union, have joined in an effort to prevent the “catastrophic harm, either deliberate or unintentional,” that may arise from the ever-increasing use of AI.

The Declaration marks a step forward in cooperation and collaboration among nations on the existing and potential risks of AI, and sets out an agenda aimed at:

  • “identifying AI safety risks of shared concern, building a shared scientific and evidence-based understanding of these risks, and sustaining that understanding as capabilities continue to increase, in the context of a wider global approach to understanding the impact of AI in our societies.

  • building respective risk-based policies across our countries to ensure safety in light of such risks, collaborating as appropriate while recognising our approaches may differ based on national circumstances and applicable legal frameworks. This includes, alongside increased transparency by private actors developing frontier AI capabilities, appropriate evaluation metrics, tools for safety testing, and developing relevant public sector capability and scientific research.”

The UK government is set to introduce national guidelines for the AI industry, considering the implementation of legislation that would mandate clear labeling for photos and videos generated by AI.

The European Union has enacted the Digital Services Act, compelling social media platforms to meet labeling obligations, thereby enhancing transparency and aiding users in assessing the authenticity of media content.

In South Korea, a law has been passed that deems the distribution of harmful deepfakes illegal, with offenders facing penalties of up to five years of imprisonment or fines up to 50 million won (approximately 43,000 USD).

In January 2023, China, through the Cyberspace Administration of China, the Ministry of Industry and Information Technology, and the Ministry of Public Security, emphasized the necessity of clear labeling for deepfakes to prevent public confusion.

In the United States, lawmakers have urged the Department of Homeland Security (DHS) to establish a task force dedicated to addressing digital content forgeries, commonly referred to as "deepfakes." Many US states have enacted their own legislation to counteract the impact of deepfakes.

Battling the Digital Mirage: Technological Solutions and Social Media Responsibility in the Fight Against Deepfakes

Technological Solutions - Integration of Blockchain to Counteract Deepfakes

Axon Enterprise Inc., the primary manufacturer of US police body cameras, has enhanced its security technology to combat the threat of deepfake videos. Its Body 3 camera has become pivotal in addressing allegations of police misconduct, particularly where defense attorneys have questioned the credibility of police videos, citing noticeable edits that shorten scenes or adjust timestamps. The upgraded camera incorporates additional safeguards: by default, captured footage cannot be played back, downloaded, or edited unless authenticated, for example with a password.
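The core idea behind such tamper-evident footage is worth unpacking: fingerprint each recording cryptographically at the moment of capture and commit that fingerprint to an append-only, hash-chained log (the property a blockchain generalizes), so that any later edit to the video, or to the log itself, becomes detectable. The following Python sketch is a minimal, hypothetical illustration of that idea only; it is not Axon's actual system, and the names used here (sha256_file, FootageLedger, record, verify) are invented for this example.

```python
import hashlib
import json
import time


def sha256_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Fingerprint a (potentially large) video file with SHA-256."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()


class FootageLedger:
    """An append-only, hash-chained log of footage fingerprints.

    Each entry commits to the previous entry's hash, so altering any
    recorded video, or rewriting any earlier entry, breaks the chain
    and is detectable on verification.
    """

    def __init__(self) -> None:
        self.entries: list[dict] = []

    def record(self, path: str, camera_id: str) -> dict:
        """Log a new recording's fingerprint, chained to the last entry."""
        prev_hash = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        entry = {
            "camera_id": camera_id,
            "file_hash": sha256_file(path),
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        # The entry's own hash covers every field, chaining it to its predecessor.
        serialized = json.dumps(entry, sort_keys=True).encode()
        entry["entry_hash"] = hashlib.sha256(serialized).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self, path: str, entry: dict) -> bool:
        """Re-hash the file and compare it with the fingerprint on record."""
        return sha256_file(path) == entry["file_hash"]
```

In use, a camera would call record() once per clip at capture time, and any subsequent edit to the file would cause verify() to fail. In a real deployment, the head of the chain would also be anchored somewhere the operator cannot rewrite, such as a public blockchain or a write-once store, which is where the blockchain integration named in the heading above would come in.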

Responsibility and Accountability of Social Media Platforms

Photographs and images that can identify an individual qualify as personal data under the Digital Personal Data Protection Act, 2023. Deepfakes therefore not only constitute a breach of personal data but also violate an individual's right to privacy. While publicly available data may not be fully covered by the Act, social media giants must take responsibility if information on their platforms can be exploited for misinformation.

The dissemination of such misinformation frequently occurs through social media channels, necessitating controls. Notably, YouTube has recently introduced measures requiring content creators to disclose whether their content was generated with AI tools. There is a pressing need for a uniform standard that all platforms can adhere to, fostering consistency across borders.

Conclusion and suggestions

The existing legal framework in India pertaining to cyber offenses facilitated by deepfakes is insufficient to comprehensively address the problem. The IT Act, 2000 does not specifically regulate AI, machine learning, or deepfakes, which makes it difficult to oversee their use effectively. To better regulate deepfake-related offenses, it may be imperative to amend the IT Act, 2000 with provisions explicitly addressing deepfakes and outlining penalties for their misuse. Such amendments could include heightened penalties for those who create or disseminate deepfakes for malicious purposes, along with stronger legal safeguards for individuals whose images or likenesses are exploited without their consent.

It is crucial to acknowledge that the development and deployment of deepfakes constitute a global issue, necessitating international cooperation and collaboration to regulate their usage effectively and prevent privacy violations. Meanwhile, individuals and organizations should remain cognizant of the potential risks associated with deepfakes, exercising vigilance in verifying the veracity of online content.

In the interim, governments can adopt various approaches:

(a) “The censorship approach” involves blocking public access to misinformation through orders issued by the relevant authorities.

(b) “The punitive approach” charges individuals or organizations for the creation or distribution of false information.

(c) “The intermediary regulation approach” imposes on online intermediaries a duty of care to remove false information from their platforms as soon as possible. If they fail to do so, they can be held liable under Sections 69A and 79 of the Information Technology Act, 2000.


Written by Suhana Roy

A second-year BA LLB student at Hidayatullah National Law University.

