As financial institutions increasingly deploy artificial intelligence (“AI”), including machine learning and automated decision-making technologies, across their business lines, U.S. federal regulators have started to scrutinize the consumer protection implications of these technologies. Most recently, the Department of Justice (“DOJ”), in partnership with the Consumer Financial Protection Bureau (“CFPB”) and the Office of the Comptroller of the Currency (“OCC”), announced a new interagency “Combatting Redlining Initiative,” with a particular focus by the CFPB on “digital redlining” resulting from biased underwriting algorithms. The DOJ, OCC and CFPB initiative follows closely on the heels of the White House Office of Science and Technology Policy’s recent announcement of its intention to develop an “AI bill of rights,” which may include a right for consumers to know when and how AI influences decisions that affect their civil liberties or a right to meaningful recourse if an algorithm causes them harm.
Given this growing focus on the consumer protection implications of AI, financial institutions should plan for increased regulatory oversight of and investigations involving these emerging technologies. In this update, we assess recent developments and enforcement trends and offer guidance on how companies can take steps to mitigate legal, regulatory and reputational risks.
DOJ, CFPB and OCC Announce Combatting Redlining Initiative
On October 22, 2021, DOJ, CFPB and OCC announced a sweeping new initiative to combat redlining and lending discrimination prohibited under the Fair Housing Act (“FHA”) and the Equal Credit Opportunity Act (“ECOA”). The Combatting Redlining Initiative, which will be led by the Housing and Civil Enforcement Section of DOJ’s Civil Rights Division in partnership with U.S. Attorneys’ Offices, will focus, among other things, on:
- Ensuring that fair lending enforcement is “informed by local expertise on housing markets and the credit needs of local communities of color” by partnering with local U.S. Attorneys’ Offices;
- Expanding DOJ’s analyses of potential redlining from traditional depository institutions to also encompass non-depository institutions, which now make the majority of mortgages in the United States (a step closely followed in New York by an expansion of the state’s Community Reinvestment Act, New York State Banking Law § 28-b, to similarly cover non-depository mortgage lenders);
- Strengthening financial regulator relationships to ensure fair lending violations are identified and referred to DOJ; and
- “Increasing coordination with State Attorneys General on fair lending violations.”
In announcing this Initiative, Attorney General Merrick Garland stated that lending and housing discrimination have a long history in the United States and that the Combatting Redlining Initiative represents DOJ’s “most aggressive and coordinated enforcement effort to address redlining,” including by addressing “fair lending concerns on a broader geographic scale than the Justice Department has ever done before.” To that end, Attorney General Garland noted that DOJ currently has several open redlining investigations and expects to “open more in the months ahead.”
CFPB Director Rohit Chopra has also stated that the CFPB plans to play an active role in implementing the Combatting Redlining Initiative, particularly with respect to “digital and algorithmic redlining.” The Dodd-Frank Wall Street Reform and Consumer Protection Act of 2010 granted the CFPB authority to supervise and enforce compliance with ECOA for entities within the CFPB’s jurisdiction and to issue regulations and guidance to interpret ECOA. In his statement, Director Chopra noted concern about the speed with which both banks and non-bank lenders are turning their lending and advertising decisions over to algorithms, explaining that in situations implicating potential discrimination, “[w]hen consumers and regulators do not know how decisions are made by the algorithms, consumers are unable to participate in a fair and competitive market free from bias.” Accordingly, he stated that the CFPB will be “closely watching for digital redlining, disguised through so-called neutral algorithms,” that may reinforce longstanding biases.
CFPB Director Chopra provided additional clarity on the CFPB’s enforcement focus around digital redlining in his subsequent testimony before the Senate Committee on Banking, Housing, and Urban Affairs on October 28, 2021. Director Chopra testified that the CFPB will focus primarily on enforcement against large companies and repeat offenders, particularly violators of agency and federal court orders, and that companies that self-identify violations will be given leeway.
This heightened priority around digital redlining has broad implications for banks and other financial institutions that develop or use artificial intelligence in their lending services:
- ECOA Enforcement Focus on Disparate Impact Claims. The CFPB’s approach is expected to change significantly under the Biden administration, including through the potential revival of the disparate impact doctrine to pursue fair lending violations under ECOA and its implementing regulation, Regulation B. Under the Obama administration, the CFPB reaffirmed that it would supervise and enforce fair lending violations under ECOA based on evidence of disparate impact, which focuses on practices that, regardless of intent, are deemed discriminatory due to their disproportionately negative impact on a protected class. The Obama administration also codified disparate impact standards under the FHA. While the Trump administration sought to weaken this rule, Biden-appointee HUD Secretary Marcia Fudge issued a notice of proposed rulemaking on June 25, 2021 that would restore the Obama-era discriminatory effects standard, which HUD described as more consistent with the FHA’s “broad remedial purpose of eradicating unnecessary discriminatory practices from the housing market.” It is thus likely that a Biden-led DOJ, CFPB and OCC will similarly seek to apply a disparate impact standard in fair lending supervision and enforcement actions arising under ECOA.
- Focus on Proxies for Protected Classes. Lenders and creditors often assess credit risk from alternative data (i.e., information not typically found in the consumer’s credit files), such as criminal history, residential stability, employment history and social media profiles. The CFPB has previously emphasized that alternative data, when fed into an algorithm, may serve as proxies for protected classes under ECOA, such as race, color, religion, national origin, sex, marital status, age or receipt of public benefits. Unlike the disparate impact theory, proxy discrimination focuses on whether a seemingly neutral variable (such as zip code) might be so highly correlated with a legally protected class (such as race) that it serves as a proxy for that class. When CFPB Director Chopra was an FTC Commissioner, he argued that “[w]ith more data points and more volume, any input or combination of inputs can turn into a substitute or proxy for a protected class.” (A simplified, illustrative screening sketch covering proxies and disparate impact appears after this list.)
- Fair Lending Examinations Focused on AI. Banks should also anticipate increased attention to digital redlining and AI issues from federal bank examiners. The OCC’s Fiscal Year 2022 Bank Supervision Operating Plan, which sets forth annual examination priorities, calls for examiners to focus on banks’ consumer compliance, fair lending and implementation of new technologies such as artificial intelligence, including the “appropriateness of governance processes when banks undertake significant changes.” Moreover, following the announcement of the Combatting Redlining Initiative, on October 28, 2021, the OCC issued a revised “Retail Lending” booklet of the Comptroller’s Handbook. Notably, the new Retail Lending booklet instructs examiners to consider the risks associated with the bank’s use of alternative data and AI, which the booklet emphasizes “must be done in a manner consistent with applicable consumer protection laws and regulations.”
Accordingly, the Retail Lending booklet suggests that OCC examiners will be looking at the compliance of these technologies with fair lending laws, including by scrutinizing, among other things:
- Which aspects of the underwriting process are automated versus manual;
- What sources of information are used and required (e.g., credit bureau reports, written applications, alternative data);
- How loan amounts or credit line assignments are determined;
- Where and how credit scores and scoring models are used (types of models, history of model use, monitoring and validation);
- Differences in the underwriting processes based on products, target markets, application channels, etc.;
- Whether third-party due diligence includes assessing the third party’s reputation, products, financial condition, systems for compliance with applicable laws and regulations, and information security audit results; and
- Whether board and senior management oversight includes explanations of all automated decision tools and judgmental decision points within the approval process.
- Explainability of Black Box Underwriting Algorithms. Under ECOA, creditors must provide consumers with the main reasons for a denial of credit or other adverse action. However, the opacity of certain “black box” AI models can create challenges in ascertaining how those complex models reached their decisions, which in turn can make it difficult for companies to provide such explanations. Although the CFPB has previously emphasized that the “existing regulatory framework has built-in flexibility that can be compatible with AI algorithms,” CFPB Director Chopra underscored in his recent testimony that companies cannot avoid fair lending laws under the pretext of secret algorithms, suggesting an expectation that financial institutions that rely on algorithms should have at least a working understanding of how they function and generate results. Understanding how an AI system arrives at a particular decision will better allow lenders to explain adverse decisions to consumers and thereby help mitigate the risk of discrimination claims by consumers who were denied credit.
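To make the proxy and disparate impact concepts above more concrete, the following is a minimal, purely illustrative sketch of how a lender’s compliance team might run a first-pass screen on lending outcomes. The dataset, column names and thresholds are hypothetical; the adverse impact ratio (often compared against the informal “four-fifths” benchmark) and the simple correlation check shown here are rough heuristics only, not a methodology endorsed by DOJ, the CFPB or the OCC, and any real fair lending analysis requires validated statistical methods and review by counsel.

```python
# Illustrative only: a simplified screen for disparate impact and proxy
# variables in lending outcomes. All data and thresholds are hypothetical.
import pandas as pd


def adverse_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Approval rate of each group divided by the most favored group's rate.

    Ratios well below 0.8 (the informal "four-fifths" benchmark) are often
    treated as a flag for further disparate impact review.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()


def proxy_correlation(df: pd.DataFrame, feature_col: str, group_col: str) -> float:
    """Crude proxy check: absolute correlation between a facially neutral
    feature and membership in each protected-class group (higher values
    suggest the feature may act as a proxy)."""
    dummies = pd.get_dummies(df[group_col], dtype=float)
    return dummies.corrwith(df[feature_col]).abs().max()


# Hypothetical applicant-level data (all values invented for illustration).
apps = pd.DataFrame({
    "approved":   [1, 0, 1, 1, 0, 0, 1, 0],
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "zip_income": [82, 79, 85, 90, 41, 38, 45, 50],  # candidate proxy variable
})

print(adverse_impact_ratio(apps, "group", "approved"))  # group B's ratio falls well below 0.8
print(proxy_correlation(apps, "zip_income", "group"))   # near 1.0 -> strong proxy signal
```

In practice, screens of this kind would run on far larger datasets, be paired with rigorous statistical testing, and be documented alongside any business justification for variables that are flagged.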
CFPB Advance Notice of Proposed Rulemaking on Consumer Data Access and Related Information Requests
In October 2020, the CFPB issued an advance notice of proposed rulemaking (“ANPR”) requesting information regarding the scope of consumers’ access to their financial data under Section 1033 of the Dodd-Frank Act, including with respect to concerns about data security, privacy, control and accountability. The CFPB received close to 100 letters from banks, fintechs, aggregators and industry trade groups in response to the ANPR.
Although the CFPB has not moved this issue forward sufficiently to issue a notice of proposed rulemaking setting forth its suggested regulatory approach, in October 2021 the CFPB ordered six technology companies to provide information regarding their payment systems and technologies pursuant to Section 1022(c)(4) of the Consumer Financial Protection Act. The orders, which showcased the CFPB’s willingness to direct sweeping information disclosure requests at a selection of the largest payment service providers, asked Amazon, Apple, Facebook, Google, PayPal and Square to provide information about the features of, marketing efforts for, fees charged for and future plans for various financial products. The orders also requested information regarding each company’s data collection, retention, generation, use, monetization, protection and measurement practices.
In concert with the orders, CFPB Director Chopra issued a statement noting specific areas of concern around the collection of consumer financial data, including potential behavioral targeting, financial surveillance or discriminatory pricing by “big tech” payment companies. Because these issues extend beyond the Section 1033 data access rights that are the focus of the CFPB’s ANPR, Director Chopra appears to be signaling that the CFPB will take a broader view of the consumer protection implications of emerging fintech products involving the use of consumer data and AI.
FTC Emphasis on Disclosures and Transparency to Consumers on AI
Over the past several years, the FTC has become increasingly vocal about consumer disclosures and transparency around AI. For example, in remarks early last year, FTC Commissioner Rebecca Kelly Slaughter stated that “[p]roprietary algorithmic models are often cloaked in secrecy… and frustration with the opacity of the ‘black box’ can lead consumers to feel powerless and distrustful”; however, “[i]ncreasing transparency lifts the curtain on these opaque processes.”
The FTC followed these remarks by publishing two blog posts focused on AI in April 2020 and April 2021, emphasizing that companies can manage the consumer protection risks of AI by ensuring that their tools are transparent, explainable, fair, empirically sound and accountable. In particular, the FTC encourages companies to:
- Ensure that statements to customers and consumers about AI are truthful, non-deceptive and backed up by evidence;
- Avoid overpromising what an algorithm can deliver or misleading consumers about the nature of their interaction with AI models;
- Understand and be able to explain to consumers what data is used in an AI model and how that data is used to arrive at a decision;
- Be transparent when collecting sensitive consumer data and disclose changes to data usage to consumers;
- Disclose the key factors that affected a consumer’s risk score if an algorithm assigns risk scores to consumers (a simplified sketch of surfacing such key factors follows this list);
- Be prepared to provide the consumer with an adverse action notice if an algorithm makes decisions based on third-party vendor information and such a notice is required under the Fair Credit Reporting Act; and
- Embrace transparency by using independent standards, conducting and publishing the results of independent audits or opening their data or source code to outside inspection.
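As a concrete, heavily simplified illustration of the “key factors” point above, the sketch below shows one way a firm using a transparent, linear scoring model might rank the features that most reduced an applicant’s score. The feature names, weights and applicant record are hypothetical; opaque models generally require dedicated feature-attribution tooling, and what must actually be disclosed under the Fair Credit Reporting Act or ECOA is a legal question for counsel.

```python
# Illustrative only: surfacing the "key factors" behind a score from a simple,
# transparent linear scoring model. All names, weights and values are invented.

# Toy model: the score moves by weight * (applicant value - baseline value).
WEIGHTS = {"credit_utilization": -40.0, "months_since_delinquency": 0.5,
           "income_to_debt_ratio": 25.0, "account_age_years": 3.0}
BASELINES = {"credit_utilization": 0.30, "months_since_delinquency": 24,
             "income_to_debt_ratio": 2.0, "account_age_years": 5.0}


def key_factors(applicant: dict, top_n: int = 3) -> list[tuple[str, float]]:
    """Rank the features that pulled the applicant's score down the most,
    measured by each feature's contribution relative to the baseline."""
    contributions = {
        name: WEIGHTS[name] * (applicant[name] - BASELINES[name])
        for name in WEIGHTS
    }
    return sorted(contributions.items(), key=lambda kv: kv[1])[:top_n]


applicant = {"credit_utilization": 0.85, "months_since_delinquency": 6,
             "income_to_debt_ratio": 1.1, "account_age_years": 2.0}

for factor, impact in key_factors(applicant):
    print(f"{factor}: contribution of {impact:+.1f} points to the score")
```

The same ranking could feed both consumer-facing disclosures and internal model documentation, though more complex models would substitute formal attribution methods for the simple contribution arithmetic shown here.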
The FTC has not hesitated to use its century-old anti-deception authority against modern-day practices when an algorithm fails to meet these expectations. In its settlement with Everalbum, Inc., the FTC ordered the company to “forfeit the fruits of its deception” by deleting any facial recognition models and algorithms developed using photos or videos uploaded by its users without their consent. The FTC may rely on this type of injunctive relief in future enforcement actions involving AI, increasing the potential costs to companies of inadequate consumer disclosures around AI tools.
SEC Focus on AI-Based Delivery of Financial Services
Regulatory concerns around AI are not limited to the lending arena. As to investing, the SEC has become increasingly watchful of the use of AI in the provision of financial services, including automated trading and wealth-management tools. This month, SEC Chairman Gary Gensler offered prepared remarks at DC Fintech Week in which he said he believes “machine learning and artificial intelligence are changing decision-making and the models behind that decision-making more dramatically than crypto.” Gensler emphasized the importance of centering public policy goals in light of technologically driven changes in finance. Specifically, Gensler highlighted the need to consider “conflicts of interest, bias, and systemic risks” accompanying these developments.
Gensler elaborated on these three areas of concern. He believes the use of digital analytics poses important questions, including whether platforms are optimizing for factors other than investor returns, such as their own revenues, which could pose conflicts of interest. He also raised the need to prevent analytics from “reinforc[ing] societal inequities that may be embedded in data,” thereby deepening bias. Lastly, he warned that failing to “guard against herding, interconnectedness, and concentration into certain datasets, providers, or investments . . . [could] lead to system-wide issues” and increased systemic risk. Gensler ended his remarks by saying he was “technology-neutral,” but called for public policy to continue protecting investors and the financial markets.
Gensler’s sentiments are in line with recent enforcement actions by the SEC, which illustrate the SEC’s new focus on policing the use of AI. In 2020, the SEC ordered investment advisor BlueCrest Capital to pay $170 million for failing to disclose its use of an underperforming algorithm as a substitute for live traders. In September 2021, App Annie, an app data and analytics company, paid a penalty of more than $10 million to settle fraud and misrepresentation charges in connection with its use of data in a statistical model used in one of its product offerings. Heightened SEC enforcement scrutiny related to AI and data misuse can be expected in the future, in light of Gensler’s comments and the attention to automated platforms and alternative data usage identified in the SEC Division of Examinations’ announcement of 2021 Examination Priorities.
SEC divisions beyond Enforcement are also interested in the use of AI and have begun promulgating guidance for registrants. Recently, SEC staff participating in the “SEC Speaks” virtual event discussed requests for information and comment regarding digital engagement practices by broker-dealers and investment advisers. Sarah ten Siethoff, Acting Director of the Division of Investment Management, explained that “on the investment adviser side, [the SEC] really focused on asking questions about how advisers are using . . . artificial intelligence or other types of digital tools in providing investment advice . . . seeking to better understand these practices and any legal questions they raise as well as their relationship with existing rules that we have out there.” The SEC is also working with international counterparts to develop AI-specific guidance. Parisa Haghshenas, Branch Chief in the Chief Counsel’s Office in the Division of Investment Management, shared that the SEC “engaged with” the International Organization of Securities Commissions – which recently published several recommendations for its member regulators in developing their own regulatory frameworks concerning AI – “on its guidance for intermediaries and asset managers’ use of artificial intelligence and machine learning.”
As AI-related regulations take shape, the Enforcement Division will likely have sharper tools beyond general anti-fraud provisions by which to evaluate the use of AI by broker-dealers and registered investment advisers.
Key Takeaways
The Combatting Redlining Initiative, and accompanying remarks by Attorney General Garland and CFPB Director Rohit Chopra, suggest that financial institutions should expect increased enforcement and supervision around “digital redlining,” including with respect to AI technologies. While it is too early to know how the Initiative will be enforced in practice, lenders and creditors should consider preparing for potential supervisory examinations or enforcement actions focused on whether their AI-related practices involve potential proxy discrimination or lead to a disparate impact in lending decisions. State Attorneys General also share authority with the CFPB to enforce the CFPB’s regulations interpreting ECOA, as well as state fair lending laws, and have expressed their focus on disparate impact theories to combat lending discrimination. The SEC and FTC may also increase their scrutiny around transparency, bias, systemic risk and conflicts of interest related to AI systems.
Companies should therefore consider taking steps to mitigate legal, regulatory and reputational risks related to their AI models, including by:
- Developing an inventory of AI and machine learning models and establishing a risk-assessment framework for AI uses that considers the company’s legal, regulatory, compliance, operational and reputational risks;
- Assessing which models implicate consumer protection or fair lending considerations and therefore might fall within the scope of a potential examination or enforcement action;
- Assessing whether lending practices involving AI might produce a discriminatory effect on protected classes under the FHA or ECOA, and if so, taking steps to mitigate those risks;
- Determining whether any variables used by the AI system could have a significant correlation to a protected class such that they might be considered a proxy and, if so, taking mitigation steps;
- Developing robust risk-management processes that include assessments of third-party AI vendors to ensure compliance with fair-lending laws and regulations;
- Considering whether the company can explain how its algorithms arrived at decisions with significant consumer implications, as well as what information should be provided to consumers about the consequences of automated decision-making, the key factors underpinning any adverse actions and how to contest or correct an erroneous determination; and
- Establishing proper governance around AI models, including board and senior management oversight, cross-functional AI teams, risk assessments and sufficient training for the individuals involved.
This post comes to us from Debevoise & Plimpton LLP. It is based on the firm’s memorandum, “Increased Focus by Federal Regulators on AI and Consumer Protection in the Financial Sector,” dated November 10, 2021, and available here. Caroline Novogrod Swett, Frank Colleluori, Adrian Gonzalez, Alexandra N. Mogul, Lorena Rodriguez, and Amy Aixi Zhang also contributed to the memorandum.