
Guarding the Likeness of Public Figures and Artists in the AI Era

As artificial intelligence reshapes the creative sector, safeguarding artists' and celebrities' likenesses is becoming increasingly important.


In the rapidly evolving digital landscape, the protection of public figures' and artists' likenesses from AI exploitation is becoming a pressing concern. This issue is being addressed through various regulations and emerging legislation across different jurisdictions.

In the United States, Pennsylvania has taken a significant step with its new law, SB 649, making it a third-degree felony to use AI to generate non-consensual forged digital likenesses for fraudulent or harmful purposes. This law targets AI scams and financial exploitation involving fake voices, images, or videos of individuals [3].

A broader and more comprehensive proposed legislation, known as "NO FAKES 2025," seeks to create a "digital replication right" that would allow individuals to control and license the use of their highly realistic digital replicas. However, this bill has raised concerns due to its broad and unclear scope [1].

Internationally, Denmark is leading an initiative to recognize human face, voice, and body as intellectual property, positioning these as forms of personal identity protected from AI exploitation. This recognition suggests a strong legal framework to protect individuals, including public figures and artists, from AI-generated misuse of their likeness globally [2].

Separate from likeness protection for adults, legal efforts are underway to update child sexual abuse material (CSAM) laws to include AI-generated content. The focus is on prosecuting synthetically generated harmful content [4].

Criminal laws against non-consensual synthetic representations used for fraud or harm, proposals for digital rights over one's likeness, and intellectual property recognition initiatives reflect global momentum to both criminalize harmful AI misuse and establish individual rights over digital replicas. However, legislative clarity and balance with free expression remain key concerns in ongoing debates [5].

On the technology side, identity-verification platforms are contributing to a more secure and user-friendly online experience. They offer streamlined verification processes that promote a user-centric internet where individuals maintain control over their data, and some operate as open-source ecosystems providing secure, on-chain verification [6].

Services such as Truepic, Adobe's Content Authenticity Initiative (CAI), the watermarking embedded in Stable Diffusion outputs, and YouTube's disclosure and watermarking systems are also playing a crucial role in identifying manipulated or AI-generated content [7].

With over 70% of consumers expressing concern about the authenticity of AI-generated content in 2024 [8], the need for comprehensive and balanced legislation is clearer than ever. The Copyright Act does not extend to AI-generated content, leaving artists' and public figures' likenesses unprotected from unauthorized AI use [9].

In light of these developments, the No AI FRAUD Act aims to address unauthorized AI-generated impersonations of public figures, artists, and everyday individuals. California's AB 1836 extends protections to deceased individuals by prohibiting the use of their digital likenesses without consent from their estate [10].

These advancements highlight the urgent need for continued dialogue and action to protect individuals' digital identities from AI exploitation, ensuring a secure and trustworthy online environment for all.

References:

  1. NO FAKES 2025: A Bill to Establish a Digital Replication Right
  2. Denmark's Initiative to Recognize Human Face, Voice, and Body as Intellectual Property
  3. Pennsylvania's New Felony Law Targets AI Scams and Financial Exploitation
  4. Updating CSAM Laws to Include AI-Generated Content
  5. Legislative Summaries on AI-Related Bills
  6. Our Website Solutions for a More Secure Online Experience
  7. Identifying Manipulated or AI-Generated Content
  8. Consumer Concerns about AI-Generated Content
  9. Copyright Act and AI-Generated Content
  10. California's AB 1836: Protections for Deceased Individuals' Digital Likenesses
