
Debevoise & Plimpton Discusses Federal Regulators’ Focus on AI and Consumer Protection in Finance

As financial institutions increasingly deploy artificial intelligence (“AI”), including machine learning and automated decision-making technologies, across their business lines, U.S. federal regulators have started to scrutinize the consumer protection implications of these technologies. Most recently, the Department of Justice (“DOJ”), in partnership with the Consumer Financial Protection Bureau (“CFPB”) and the Office of the Comptroller of the Currency (“OCC”), announced a new interagency “Combatting Redlining Initiative,” with a particular focus by the CFPB on “digital redlining” resulting from biased underwriting algorithms. The DOJ, OCC and CFPB initiative follows closely on the heels of another recent announcement by the White House Office of Science and Technology Policy of its intention to develop an “AI bill of rights,” which may include consumers’ rights to know when and how AI influences decisions that affect their civil liberties and to obtain meaningful recourse if an algorithm causes them harm.

Given this growing focus on the consumer protection implications of AI, financial institutions should plan for increased regulatory oversight of and investigations involving these emerging technologies. In this update, we assess recent developments and enforcement trends and offer guidance on how companies can take steps to mitigate legal, regulatory and reputational risks.

DOJ, CFPB and OCC Announce Combatting Redlining Initiative

On October 22, 2021, DOJ, CFPB and OCC announced a sweeping new initiative to combat redlining and lending discrimination prohibited under the Fair Housing Act (“FHA”) and the Equal Credit Opportunity Act (“ECOA”). The Combatting Redlining Initiative, which will be led by the Housing and Civil Enforcement Section of DOJ’s Civil Rights Division in partnership with U.S. Attorneys’ Offices, will focus, among other things, on:

- expanding DOJ’s analyses of potential redlining to both depository and non-depository institutions;
- strengthening partnerships with financial regulatory agencies to ensure the identification and referral of fair lending violations; and
- increasing coordination with State Attorneys General on potential fair lending violations.

In announcing this Initiative, Attorney General Merrick Garland stated that lending and housing discrimination have a long history in the United States and that the Combatting Redlining Initiative represents DOJ’s “most aggressive and coordinated enforcement effort to address redlining,” including by addressing “fair lending concerns on a broader geographic scale than the Justice Department has ever done before.” To that end, Attorney General Garland noted that DOJ currently has several open redlining investigations and expects to “open more in the months ahead.”

CFPB Director Rohit Chopra has also stated that the CFPB plans to play an active role in the Combatting Redlining Initiative, particularly with respect to “digital and algorithmic redlining.” The Dodd-Frank Wall Street Reform and Consumer Protection Act of 2010 granted the CFPB authority to supervise and enforce compliance with ECOA for entities within the CFPB’s jurisdiction and to issue regulations and guidance interpreting ECOA. In his statement, Director Chopra expressed concern about the speed with which both banks and non-bank lenders are turning their lending and advertising decisions over to algorithms, explaining that in situations implicating potential discrimination, “[w]hen consumers and regulators do not know how decisions are made by the algorithms, consumers are unable to participate in a fair and competitive market free from bias.” Accordingly, he stated that the CFPB will be “closely watching for digital redlining, disguised through so-called neutral algorithms,” that may reinforce longstanding biases.
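To make the mechanism behind that concern concrete, consider the following minimal Python sketch. It is entirely synthetic; the features, weights and data are our hypothetical illustration, not any regulator’s or lender’s model. It shows how an underwriting model that never sees a protected attribute can still produce starkly different approval rates across groups when a facially “neutral” input, such as a neighborhood code, is correlated with that attribute:

```python
# Synthetic illustration of "proxy discrimination": the model below is
# trained without the protected attribute, yet reproduces a group
# disparity through a correlated, facially neutral feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                       # protected attribute (never shown to the model)
neighborhood = (group + (rng.random(n) < 0.2)) % 2  # "neutral" feature correlated with group
income = rng.normal(50 + 10 * neighborhood, 10, n)  # income also varies by neighborhood
approved = income + 20 * neighborhood + rng.normal(0, 5, n) > 60  # historical outcomes

# Train only on the "neutral" features: income and neighborhood.
model = LogisticRegression(max_iter=1000).fit(np.c_[income, neighborhood], approved)
pred = model.predict(np.c_[income, neighborhood])

for g in (0, 1):
    print(f"group {g} predicted approval rate: {pred[group == g].mean():.2f}")
# The printed rates diverge sharply even though "group" was never an input.
```

Real-world proxies are typically subtler, which is why Director Chopra’s emphasis on visibility into how algorithmic decisions are made matters: detecting this pattern generally requires testing model outcomes against protected-class data that the model itself never used.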

CFPB Director Chopra provided additional clarity on the CFPB’s enforcement focus around digital redlining in his subsequent testimony before the Senate Committee on Banking, Housing, and Urban Affairs on October 28, 2021. Director Chopra testified that the CFPB will focus primarily on enforcement against large companies and repeat offenders, particularly violators of agency and federal court orders, and that companies that self-identify violations will be given leeway.

This heightened priority around digital redlining has broad implications for banks and other financial institutions that develop or use artificial intelligence in their lending services:

Accordingly, the OCC’s Retail Lending booklet of the Comptroller’s Handbook suggests that OCC examiners will be looking at the compliance of these technologies with fair lending laws, including by scrutinizing, among other things:

CFPB Advance Notice of Proposed Rulemaking on Consumer Data Access and Related Information Requests

In October 2020, the CFPB issued an advance notice of proposed rulemaking (“ANPR”) requesting information regarding the scope of consumers’ access to their financial data under Section 1033 of the Dodd-Frank Act, including with respect to concerns about data security, privacy, control and accountability. The CFPB received close to 100 letters from banks, fintechs, aggregators and industry trade groups in response to the ANPR.

Although the CFPB has not yet moved this issue forward to a notice of proposed rulemaking setting forth its suggested regulatory approach, in October 2021 the CFPB ordered six technology companies to provide information regarding their payment systems and technologies pursuant to Section 1022(c)(4) of the Consumer Financial Protection Act. The orders, which showcased the CFPB’s willingness to direct sweeping information requests at a selection of the largest payment service providers, asked Amazon, Apple, Facebook, Google, PayPal and Square to provide information about the features of, marketing efforts for, fees charged for, and future plans for various financial products. The orders also requested information regarding each company’s data collection, retention, generation, use, monetization, protection and measurement practices.

In concert with the orders, CFPB Director Chopra issued a statement noting specific areas of concern around the collection of consumer financial data, including potential behavioral targeting, financial surveillance or discriminatory pricing by “big tech” payment companies. Because these issues extend beyond the Section 1033 data access rights at the focus of the CFPB’s ANPR, Director Chopra appears to be signaling that the CFPB will take a broader look at the consumer protection implications of emerging fintech products involving the use of consumer data and AI.

FTC Emphasis on Disclosures and Transparency to Consumers on AI

Over the past several years, the FTC has become increasingly vocal about consumer disclosures and transparency around AI. For example, in remarks early last year, FTC Commissioner Rebecca Kelly Slaughter stated that “[p]roprietary algorithmic models are often cloaked in secrecy… and frustration with the opacity of the ‘black box’ can lead consumers to feel powerless and distrustful”; however, “[i]ncreasing transparency lifts the curtain on these opaque processes.”

The FTC followed these remarks by publishing two blog posts focused on AI in April 2020 and April 2021, emphasizing that companies can manage the consumer protection risks of AI by ensuring that their tools are transparent, explainable, fair, empirically sound and accountable. In particular, the FTC encourages companies to:

- be transparent with consumers about how automated tools are used and how consumer data is collected;
- explain adverse decisions to the consumers they affect (a simplified sketch follows this list);
- ensure that algorithmic decisions are fair and do not discriminate against protected classes;
- ensure that the underlying data and models are robust and empirically sound; and
- hold themselves accountable for compliance, ethics, fairness and non-discrimination.
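On the explainability point in particular, one simplified approach is to rank the features of a linear scoring model by how much each pulled an applicant’s score down and surface the top contributors as candidate “principal reasons” for an adverse decision. The weights, feature names and applicant below are hypothetical illustrations, not an FTC-endorsed methodology, and real adverse action notices under ECOA and Regulation B require careful legal review:

```python
# Sketch of deriving candidate "principal reasons" for an adverse
# decision from a linear credit-scoring model. Weights, feature names
# and the applicant below are hypothetical illustrations.

# Hypothetical weights over standardized features (higher score = approve).
WEIGHTS = {"income": 0.4, "credit_history_years": 0.3, "debt_ratio": -0.6}

def principal_reasons(applicant, top_n=2):
    """Return the features that contributed most negatively to the score."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sorted(contributions, key=contributions.get)[:top_n]

# Example: applicant with below-average income and above-average debt.
applicant = {"income": -1.2, "credit_history_years": 0.5, "debt_ratio": 1.4}
print(principal_reasons(applicant))  # ['debt_ratio', 'income']
```

For non-linear models the same idea generalizes through attribution techniques such as Shapley values, though those raise their own soundness questions of the kind the FTC’s “empirically sound” prong targets.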

The FTC has not hesitated to use its century-old anti-deception authority to police modern-day practices when an algorithm fails to meet these expectations. In its settlement with Everalbum, Inc., the FTC ordered the company to “forfeit the fruits of its deception” by deleting any facial recognition models and algorithms developed using photos or videos uploaded by its users without their consent. The FTC may rely on this type of injunctive relief in future enforcement actions involving AI, increasing the potential costs to companies of inadequate consumer disclosures around AI tools.

SEC Focus on AI-Based Delivery of Financial Services

Regulatory concerns around AI are not limited to the lending arena. As to investing, the SEC has become increasingly watchful of the use of AI in the provision of financial services, including automated trading and wealth-management tools. In prepared remarks at DC Fintech Week in October 2021, SEC Chair Gary Gensler said he believes “machine learning and artificial intelligence are changing decision-making and the models behind that decision-making more dramatically than crypto.” Gensler emphasized the importance of centering public policy goals in light of technologically driven changes in finance, highlighting in particular the need to consider the “conflicts of interest, bias, and systemic risks” accompanying these developments.

Gensler elaborated on each of these three areas of concern. On conflicts, he believes the use of digital analytics raises important questions, including whether platforms are optimizing for factors other than investor returns, such as their own revenues. On bias, he raised the need to prevent analytics from “reinforc[ing] societal inequities that may be embedded in data.” And on systemic risk, he warned that failing to “guard against herding, interconnectedness, and concentration into certain datasets, providers, or investments . . . [could] lead to system-wide issues.” Gensler ended his remarks by describing himself as “technology-neutral,” but called for public policy that continues to protect investors and the financial markets.

Gensler’s sentiments are in line with recent SEC enforcement actions, which illustrate the agency’s new focus on policing the use of AI. In 2020, the SEC ordered investment adviser BlueCrest Capital to pay $170 million for failing to disclose its use of an underperforming algorithm as a substitute for live traders. In September 2021, App Annie, an app data and analytics company, paid a penalty of more than $10 million to settle fraud and misrepresentation charges in connection with its use of data in a statistical model underlying one of its product offerings. Heightened SEC enforcement scrutiny of AI and data misuse can be expected in light of Gensler’s comments and the attention to automated platforms and alternative data usage identified in the SEC Division of Examinations’ 2021 Examination Priorities.

SEC divisions beyond Enforcement are also interested in the use of AI and have begun developing guidance for registrants. Recently, SEC staff participating in the “SEC Speaks” virtual event discussed requests for information and comment regarding digital engagement practices by broker-dealers and investment advisers. Sarah ten Siethoff, Acting Director of the Division of Investment Management, explained that “on the investment adviser side, [the SEC] really focused on asking questions about how advisers are using . . . artificial intelligence or other types of digital tools in providing investment advice . . . seeking to better understand these practices and any legal questions they raise as well as their relationship with existing rules that we have out there.” The SEC is also working with international counterparts to develop AI-specific guidance. Parisa Haghshenas, Branch Chief in the Chief Counsel’s Office of the Division of Investment Management, shared that the SEC “engaged with” the International Organization of Securities Commissions, which recently published several recommendations for its member regulators to consider in developing their own regulatory frameworks concerning AI, “on its guidance for intermediaries and asset managers’ use of artificial intelligence and machine learning.”

As AI-related regulations take shape, the Enforcement Division will likely have sharper tools beyond general anti-fraud provisions by which to evaluate the use of AI by broker-dealers and registered investment advisers.

Key Takeaways

The Combatting Redlining Initiative and the accompanying remarks by Attorney General Garland and CFPB Director Chopra suggest that financial institutions should expect increased enforcement and supervision around “digital redlining,” including with respect to AI technologies. While it is too early to know how the Initiative will be enforced in practice, lenders and creditors should consider preparing for potential supervisory examinations or enforcement actions focused on whether their AI-related practices involve potential proxy discrimination or lead to a disparate impact in lending decisions (one common screening metric is sketched below). State Attorneys General also share authority with the CFPB to enforce the CFPB’s regulations interpreting ECOA, as well as state fair lending laws, and have expressed their focus on disparate-impact theories to combat lending discrimination. The SEC and FTC may also increase their scrutiny of transparency, bias, systemic risk and conflicts of interest related to AI systems.
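As a starting point for that kind of disparate-impact testing, many practitioners compute an adverse impact ratio and compare it against the “four-fifths” heuristic borrowed from the EEOC’s employment-selection guidelines. The minimal sketch below is only an illustration with hypothetical sample data; neither the CFPB nor DOJ has endorsed any single metric or threshold for fair lending analysis:

```python
# Minimal "four-fifths rule" screen: flag any group whose approval rate
# falls below 80% of the most-favored group's rate. Illustrative only.
from collections import defaultdict

def adverse_impact_ratios(decisions):
    """decisions: iterable of (group_label, approved_bool) pairs."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    rates = {g: approvals[g] / totals[g] for g in totals}
    benchmark = max(rates.values())  # highest group approval rate
    return {g: rate / benchmark for g, rate in rates.items()}

# Hypothetical sample: group A approved 80%, group B approved 50%.
sample = ([("A", True)] * 80 + [("A", False)] * 20
          + [("B", True)] * 50 + [("B", False)] * 50)
for group, ratio in sorted(adverse_impact_ratios(sample).items()):
    print(group, round(ratio, 2), "FLAG" if ratio < 0.8 else "ok")
# Output: A 1.0 ok / B 0.62 FLAG. A flag is a reason to investigate
# further, not by itself proof of unlawful discrimination.
```

A flagged ratio would typically prompt further statistical analysis and review of the business justification for the model’s inputs, rather than serve as a legal conclusion on its own.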

Companies should therefore consider taking steps to mitigate legal, regulatory and reputational risks related to their AI models, including by:

This post comes to us from Debevoise & Plimpton LLP. It is based on the firm’s memorandum, “Increased Focus by Federal Regulators on AI and Consumer Protection in the Financial Sector,” dated November 10, 2021, and available here. Caroline Novogrod Swett, Frank Colleluori, Adrian Gonzalez, Alexandra N. Mogul, Lorena Rodriguez, and Amy Aixi Zhang also contributed to the memorandum.
