The Fight Against Deepfakes Ramps Up

On February 15, 2024, the Federal Trade Commission (FTC) finalized its Government and Businesses Impersonation Rule, marking the first time since 1980 that the agency has issued a new rule to address a deceptive practice. In its press release, the FTC notes that Americans are cheated “out of billions of dollars every year” by impersonation schemes in which fraudsters pretend to represent government agencies or to be affiliated with well-known consumer brands.

Perhaps even more interesting is the FTC’s issuance of a supplemental notice seeking comment on a proposed rule that would extend this prohibition to the impersonation of individuals. According to FTC Chair Lina M. Khan, “Fraudsters are using AI tools to impersonate individuals with eerie precision and at a much wider scale. With voice cloning and other AI-driven scams on the rise, protecting Americans from impersonator fraud is more critical than ever.” The proposed expansions would strengthen the FTC’s ability to fight AI-enabled scams that impersonate individuals by “declaring it unlawful for a firm, such as an AI platform that creates images, video, or text, to provide goods or services that they know or have reason to know is being used to harm consumers through impersonation.”

Isabel Gottlieb, writing for Bloomberg Law, observes that if the proposed expansion goes through, “many AI models could find themselves implicated.” She explains, “AI tools like OpenAI’s Dall-E and its new text-to-video model Sora, or Stability AI’s Stable Diffusion, let users create deepfakes—often convincingly realistic, but manufactured, depictions of people.” The proposed rule asks whether such tools should be liable if they know, or “have reason to know,” their services are being used to create deceptive content.

Recent examples of deepfakes include:

  • Fabricated headlines and news segments featuring anchors such as CBS Evening News’ Norah O’Donnell and journalists from CNN and the BBC went viral “under the guise of being authoritative news.”
  • Earlier this year, New Hampshire residents received a robocall, delivered in an AI-generated voice mimicking President Joe Biden, that advised them against voting in the presidential primary.
  • In January, disturbing deepfake images of Taylor Swift circulated widely on social media platforms, including X and Facebook.

The AP reports that, as of January 31, at least 10 states have adopted deepfake-related laws, with “scores of more measures” under consideration in legislatures across the country.

  • Georgia, Hawaii, Texas and Virginia have laws that criminalize nonconsensual deepfake porn.
  • In California and Illinois, victims can sue the creators of deepfake images that use their likenesses.
  • Minnesota and New York have done both of the above, with Minnesota’s law also targeting the use of deepfakes in politics.

Related post: We Need to Talk About Deepfakes

Image by PublicDomainPictures from Pixabay
