On February 15, 2024, the Federal Trade Commission (FTC) issued a Supplemental Notice of Proposed Rulemaking (SNPRM) seeking comment on a proposed rule that, if implemented, would prohibit the impersonation of any individual and prohibit companies from supplying technology that they know or reasonably should know would facilitate impersonation. The proposed rule, which could go into effect as soon as this spring, would substantially increase the FTC’s enforcement powers over AI and could expose any provider of generative AI services to significant liability.
The SNPRM is part of the FTC’s expanding investigations and enforcement in the AI industry. The agency used the second half of 2023 to signal an aggressive regulatory and enforcement agenda relating to AI. That trend has only accelerated in 2024 with additional policy statements, formal investigations, and now this SNPRM.
Scope of the Proposed Rule
The SNPRM would expand, in two respects, a recently finalized rule prohibiting the impersonation of government officials and business representatives.
First, it would declare the impersonation of any individual, whether real or fictitious, an unfair or deceptive act or practice in violation of the FTC Act. The SNPRM defines “impersonation” in two ways:
- “materially misrepresent[ing], directly or by implication, affiliation with, including endorsement or sponsorship by … an individual, in or affecting commerce” and
- “materially and falsely pos[ing] as, directly or by implication … an individual, in or affecting commerce.”
Second, and perhaps most impactfully, it would declare the provision of “goods or services with knowledge or reason to know” that they will be used for unlawful impersonation to be an unfair or deceptive act. In its press release, the FTC states that this provision is directed toward “AI platform[s] that create[] images, video, or text, [] being used to harm consumers through impersonation.”
The SNPRM is the most recent step in the FTC’s long-standing efforts to discourage and police impersonation scams. Increasingly, these efforts are focused on the role of AI. For example, a March 2023 FTC blog post alerted consumers to the risk of chatbots, deepfakes, and voice cloning and their use in “grandparent” or “loved ones” scams. And in November 2023, the FTC launched a novel exploratory challenge to encourage the public to develop detection methods for AI voice cloning. In announcing the challenge, the FTC warned that AI companies “are responsible for the first- and second-order effects of the products they release.”
Implications of the Proposed Rule
The SNPRM has the potential to significantly affect generative AI providers by increasing the likelihood of investigations and enforcement and by exposing violators to monetary and injunctive remedies.
Broad “Knowledge or Reason to Know” Liability
The SNPRM would dramatically increase the risk that a generative AI company could face enforcement over scams perpetrated using its products. The SNPRM would combine two fringe aspects of the FTC’s enforcement authority: (1) enforcing against providers of the “means and instrumentalities” of misconduct; and (2) imposing liability where the provider “knew or should have known” of the misconduct. In addition, the proposed rule broadly renders unlawful any act of falsely posing as an individual, without exception, potentially sweeping in a whole range of First Amendment-protected activity and rendering the rule vulnerable to legal challenge if adopted in its current form.
The proposed rule could thus enable open-ended, sweeping enforcement against generative AI providers. The SNPRM does not define when a company would have “reason to know” that a scam has occurred using content created by its generative AI product. The FTC’s mere allegation that scams are being perpetrated using a specific AI product could suffice to initiate an investigation, and possibly enforcement, against a target company. A company’s popularity, and even its own efforts to detect and prevent scams, could be used against it. The proposed rule offers no safe harbor for good-faith efforts to detect, prevent, and take action against scammers.
Monetary Relief for First-Time Violations
The SNPRM’s expansive liability standard is compounded by an increased likelihood of FTC enforcement seeking monetary relief. The FTC makes clear in the SNPRM that it intends to pursue such relief from impersonators and their facilitators, even for first-time violations (the FTC is ordinarily limited to injunctive relief for first-time violations under Section 5 of the FTC Act). By proactively declaring this conduct an “unfair or deceptive act or practice,” the FTC can make use of an obscure portion of the FTC Act, Section 19, to file a case directly in federal court seeking monetary relief for violations. The FTC’s reliance on Section 19 is a direct response to AMG Capital Management, LLC v. FTC, the 2021 Supreme Court decision that curtailed the agency’s authority to obtain monetary relief under other provisions of the FTC Act.
Companies that offer generative AI products should consider the potential consequences of FTC enforcement, which, under the proposed rule, could include injunctive relief and a substantial monetary component.
Collateral Implications for Right of Publicity
Several US states already recognize a right of publicity, and some are looking to expand their existing right of publicity laws. In addition, efforts to pass a federal right of publicity law could introduce an additional or alternative source of potential liability for AI providers whose services are used to replicate the likenesses of individuals. The FTC could use the proposed rule to target similar replications of an individual’s likeness, but with none of the nuances or limitations of the existing or proposed right of publicity laws. For instance, state laws often include specific exceptions permitting the use of likenesses in news, public affairs, and political campaigns.
Situating the Proposed Rule in the FTC’s AI Road Map
The SNPRM is just one component of the FTC’s multifaceted effort to regulate AI through its competition and consumer protection authority. The agency’s recent investigative activity and public statements include:
- An AI Tech Summit held on January 25, 2024, to scrutinize the adoption of AI technologies and industry participants. FTC Chair Khan’s opening remarks highlighted the dangers of consolidated markets and expressed concern that certain AI firms would achieve dominance and use their market power to harm consumers. Commissioner Slaughter asserted that gatekeeping by large incumbents may deny competitors access to key inputs such as chips and training data and advocated proactively using the FTC’s Section 5 authority. Commissioner Bedoya and panel participants warned about the lack of transparency surrounding AI models and noted that companies may be tempted to misuse consumer data.
- 6(b) orders issued during the AI Tech Summit to leading AI technology companies, requiring them to provide information regarding investments and partnerships involving generative AI. Chair Khan explained the agency’s interest in these collaborations by stating, “History shows that new technologies can create new markets and healthy competition. As companies race to develop and monetize AI, we must guard against tactics that foreclose this opportunity.”
- A blog post published on February 13, 2024, that warns AI companies to resist the “powerful business incentives to turn the abundant flow of user data into more fuel for their AI products.” The FTC emphasized that retroactively changing terms of use to expand access to user data, including for AI training, could be an unfair or deceptive practice.
This post comes to us from Latham & Watkins. It is based on the firm’s memorandum, “FTC Sharpens Its AI Agenda With Novel Impersonation Rulemaking,” dated February 26, 2024, and available here.