Skadden Discusses SEC Focus Amid Lack of Final AI Rules

Last year, the U.S. Securities and Exchange Commission (SEC) proposed ambitious rules relating to artificial intelligence (AI) that have drawn significant commentary and criticism. While it is unlikely that any changes in the law are imminent, other initiatives by the SEC indicate that it is not content to wait for those changes before addressing AI-related problems and risks that it perceives. The SEC’s Division of Examinations has launched a sweep focusing on the development and use of AI models by investment advisers, and the Division of Enforcement has confirmed that it has AI-focused investigations underway.1 While the proposed rules raise the bar in the areas of conflicts of interest, disclosures, and policies and procedures, existing provisions of the federal securities laws already address these areas and may bear on how firms currently develop and use AI models. Given the lengthy rulemaking process, the SEC likely launched these initiatives with an eye to those existing provisions, which are discussed below.

The SEC’s Focus on AI

Following significant technological developments in recent years, AI is becoming a tool used by many broker-dealers and investment advisers. Firms currently use programs leveraging AI — which includes machine learning, deep learning and generative AI — in various ways, such as to forecast the price movements of certain investment products, program robo-advisers to assist in automated planning and investment services, address basic client questions via virtual assistants, aid in risk management, anticipate cyberthreats and bolster compliance efforts by enhancing surveillance capabilities.

The SEC is paying attention. Reiterating concerns he expressed while a professor at MIT’s Sloan School of Management, SEC Chairman Gensler has stated that AI poses risks to individual investors as well as to the financial system generally. For example, according to the chairman, AI models’ decisions and outcomes often are unpredictable, making it difficult to determine whether the models may be placing the interests of the firms developing them ahead of those of firm clients. The chairman also perceives the potential for systemic risk because the use of AI in the financial sector eventually may be driven by a small handful of foundation models, creating a “monoculture” in which many market participants rely on the same dataset or model.2 In that event, Chairman Gensler posits, various AI models are more likely to produce similar outputs, leading those who rely on them to make similar financial decisions and thereby concentrating risk.

To address these risks, the SEC has proposed broad new rules that would govern how broker-dealers and investment advisers can use AI.3 The rules, if adopted in their current form, would prevent firms from using predictive data analytics (PDA), a category that includes AI and other technologies, in a manner that creates a conflict of interest placing the firm’s interests ahead of those of its customers and clients. The rules appear to reflect an SEC concern with “black box” PDA technologies, where the firm may not be fully aware of how the technology reached a certain conclusion, as well as with the potential for PDA to use corrupted, mislabeled or biased data or to exacerbate conflicts of interest with investors. Specifically, the rules as proposed would require that broker-dealers and investment advisers:

  • evaluate the PDA they use and identify and eliminate or neutralize any related conflicts of interest that could place the firm’s interests ahead of those of its customers or clients;
  • adopt, implement and maintain written policies and procedures to come into compliance with the proposed rules; and
  • comply with recordkeeping requirements by maintaining records of evaluations done on PDA, including when the technology was implemented and materially modified, the date of any testing and any actual or potential conflicts of interest identified (a minimal, illustrative sketch of such a record appears after this list).
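
Purely for illustration, the sketch below shows one way a firm might structure such an evaluation record in code, loosely tracking the categories named in the bullet above. The PDAEvaluationRecord class and all field names are hypothetical assumptions, not terms drawn from the proposed rule text.

```python
# Hypothetical sketch of a PDA/AI evaluation record, loosely tracking the
# categories in the proposed recordkeeping requirement. All names here are
# illustrative assumptions, not terms from the proposed rules.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class PDAEvaluationRecord:
    technology_name: str                    # the PDA/AI tool evaluated
    date_implemented: date                  # when the tool went into use
    dates_materially_modified: list[date] = field(default_factory=list)
    dates_tested: list[date] = field(default_factory=list)
    conflicts_identified: list[str] = field(default_factory=list)  # actual or potential
    mitigation_taken: str = ""              # how conflicts were eliminated or neutralized

# Example: recording an evaluation of a hypothetical recommendation model.
record = PDAEvaluationRecord(
    technology_name="equity-recommendation-model-v2",
    date_implemented=date(2024, 1, 15),
    dates_tested=[date(2024, 1, 10)],
    conflicts_identified=["model ranks proprietary products above comparable alternatives"],
    mitigation_taken="re-weighted the ranking objective, then re-tested and documented the change",
)
```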

Commentators have noted that these proposed rules, if adopted in their current form, would present new challenges to firms relying on AI models.4 The rules would apply to technology beyond AI and to interactions not just with individual investors but also with institutional investors and even prospective investors.5 The requirement to eliminate or neutralize conflicts of interest goes well beyond how the federal securities laws typically deal with conflicts, which is by mandating full and fair disclosure. To satisfy this requirement, firms would have to evaluate and document all conflicts of interest that may arise from using AI. Chairman Gensler’s own remarks indicate this is no easy task: “AI models’ decisions and outcomes often are unexplainable. Part of this is inherent to the models themselves. The math is nonlinear and hyper-dimensional, from thousands to potentially billions of parameters.”6 And, as he succinctly observed in a paper he wrote at MIT, “if deep learning predictions were explainable, they wouldn’t be used in the first place.”7 How broker-dealers and investment advisers will nevertheless be able to fully document conflicts arising from unexplainable outcomes is not clear.

Given these challenges, the proposed rules have been met with industry pushback. Some firms have argued that the rules, in their current form, would force them to halt business until they could evaluate every piece of technology they use, would risk industry consolidation and would put U.S. investors at a competitive disadvantage in the global market.8

Other efforts by the SEC suggest that it is not content to wait until final rules are in effect before addressing the problems and risks it perceives related to AI technologies. Around the time the SEC proposed its rules on PDA, the SEC’s Division of Examinations launched an AI-related sweep, asking firms how they are using AI and requesting that they provide, among other things, a description of their models and techniques, the sources and providers of their data, and internal reports of any incidents in which AI use raised regulatory, ethical or legal issues.9 The SEC has also requested copies of the firms’ AI compliance policies and procedures, contingency plans in case of AI system failures or inaccuracies, a sample of the client profile documents used by AI systems to understand clients’ risk tolerance and investment objectives, and all disclosure and marketing documents provided to clients that disclose the firm’s use of AI.10

The Division of Enforcement is also involved. After Chairman Gensler cautioned firms against “AI washing,” which is the practice of making false AI-related claims that the chairman likened to greenwashing, a senior SEC enforcement official confirmed that the agency has active investigations in this area.11

Existing Regulatory Framework and AI

In the absence of final AI rules, the SEC’s efforts indicate that it is considering using existing regulatory provisions to address risks the SEC apparently perceives with respect to AI, as discussed below. Fixing a problem in an operational AI model can be challenging because subsequent machine learning by the model may have been influenced by the problem in ways that are difficult to see and trace. Consequently, awareness of the regulatory landscape is crucial so that problems can be avoided or, at least, minimized in the first instance.

AI Development

Current regulatory provisions could apply to both the inputs and outputs of AI models. As to inputs, AI models that draw on datasets the firm may not have the authority to use, or that may contain non-public information, could implicate insider trading laws and other provisions, such as the requirement that investment advisers establish and enforce policies and procedures to prevent the misuse of material non-public information. A robust understanding of the origin of the data used by AI models may help firms navigate these requirements.

AI outputs may implicate the fiduciary duty owed by investment advisers or a broker-dealer’s obligations under Regulation Best Interest, each of which requires that firms act in their clients’ best interest. While these obligations do not go as far as the proposed rules, which would require that conflicts of interest be eliminated, they do require that firms take steps to identify and fully and fairly disclose those conflicts. Firms will need to consider whether AI models used in making securities recommendations might run afoul of these requirements if they fail to take into account reasonably available alternatives. And while there is no existing requirement that firms demonstrate how the AI models they use make decisions, under current regulatory requirements the SEC need only demonstrate that a conflict of interest exists, which it can do from a pattern of investment recommendations or other actions dictated or influenced by AI models; it does not need to show how or why that pattern exists.

AI Disclosures

As Chairman Gensler’s warnings against AI washing demonstrate, the SEC is focused on disclosures concerning AI technology that do not fairly or accurately describe the design or use of the technology. Given the SEC’s broad authority to pursue inaccurate disclosures, the agency’s focus likely extends beyond broker-dealers and investment advisers to public issuers and others.

If companies choose to make disclosures about the AI models they use, they should seek not only to accurately disclose how AI is being developed and deployed, but also to monitor the models so they can timely identify any “drift” over time based on the training scenarios provided, which may render previous disclosures outdated. Even disclosures not specifically related to AI may be affected. For example, the use of AI in risk management — such as technology that semi-automates the review of risk limit breaches or modifies them with a more dynamic approach — may render disclosures about risk management controls and processes outdated if the AI model has the capability to adapt its review based on training scenarios and other information it obtains.
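
For illustration only, one way a compliance team might monitor for such drift is to compare a model’s current output distribution against a baseline snapshot captured when the disclosure was made. The sketch below assumes numeric model outputs and uses a two-sample Kolmogorov-Smirnov test; the drift_detected function, the 0.05 threshold and the synthetic data are all hypothetical choices, not a prescribed method.

```python
# Minimal, hypothetical sketch: flag distributional "drift" in a model's
# numeric outputs relative to a baseline snapshot taken at disclosure time.
# The test choice, threshold and data below are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

def drift_detected(baseline_outputs, current_outputs, p_threshold=0.05):
    """Return True if the two samples likely come from different
    distributions, i.e., the model's behavior may have drifted."""
    statistic, p_value = ks_2samp(baseline_outputs, current_outputs)
    return p_value < p_threshold

# Synthetic example: today's outputs have shifted away from the baseline.
rng = np.random.default_rng(seed=0)
baseline = rng.normal(loc=0.0, scale=1.0, size=1000)  # outputs at disclosure time
current = rng.normal(loc=0.4, scale=1.0, size=1000)   # outputs today
if drift_detected(baseline, current):
    print("Drift detected: review whether AI-related disclosures remain accurate.")
```

A flag from a check like this would not itself establish that a disclosure is stale, but it marks a point at which the disclosure deserves a fresh look.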

AI Compliance

While the proposed AI rules would mandate the design and implementation of AI-related policies, investment advisers are already required to have and implement policies and procedures designed to prevent violations of the federal securities laws, and broker-dealer supervisory requirements take into account whether a broker-dealer has established policies and procedures that would reasonably be expected to prevent violations. The Division of Examinations’ request for AI-related policies and procedures suggests the SEC is interested in assessing whether a firm’s compliance program specifically addresses the regulatory risks that using AI models may pose.

AI Data Protection

Under SEC rules such as Regulations S-P and S-ID, broker-dealers, investment advisers and investment companies must take certain steps to safeguard client information and respond appropriately to red flags related to possible identity theft. Firms should, therefore, take steps to ensure that any AI models with access to customer information are properly safeguarded and surveilled for indicia of cyberthreats. Conversely, AI models used by third parties may treat the information a broker-dealer or investment adviser makes available online as a data source. Vulnerabilities in a firm’s network architecture may allow access to more information than the firm intended and possibly implicate its obligation to safeguard customer and other non-public information.
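
As a purely illustrative example of one such safeguard, the sketch below redacts obvious customer identifiers from free text before it is passed to an AI model. The redact function and the patterns it uses are hypothetical and far from exhaustive; a production control would need a much broader inventory of identifiers.

```python
# Hypothetical sketch: strip obvious customer identifiers from free text
# before it reaches an AI model. Patterns are illustrative, not exhaustive.
import re

REDACTION_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US Social Security numbers
    "ACCOUNT": re.compile(r"\b\d{8,12}\b"),               # bare 8-12 digit account numbers
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
}

def redact(text: str) -> str:
    """Replace matched identifiers with bracketed placeholders."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Client 123-45-6789 (acct 1234567890, jane@example.com) raised her risk tolerance."))
# -> Client [SSN REDACTED] (acct [ACCOUNT REDACTED], [EMAIL REDACTED]) raised her risk tolerance.
```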

Conclusion

AI has already demonstrated that it can revolutionize many aspects of financial services, and rapid developments with the technology suggest even more significant uses ahead. The SEC’s ongoing efforts related to this technology indicate that the agency may not be content to wait until it adopts rules specifically designed to address AI, and an understanding of the existing regulatory landscape will help firms stay ahead of future developments.

ENDNOTES

1 Richard Vanderford, “SEC Head Warns Against ‘AI Washing,’ the High-Tech Version of ‘Greenwashing’,” The Wall Street Journal (Dec. 5, 2023).

2 Sarah Jarvis, “Gensler Warns AI ‘Monoculture’ May Weaken Financial System,” Law360 (Jan. 17, 2024).

3 The SEC has not yet announced a date for voting on a final rule.

4 See, e.g., Ken Kumayama et al., “SEC Proposes New Conflicts of Interest Rule for Use of AI by Broker-Dealers and Investment Advisers,” Skadden (Aug. 10, 2023).

5 Id.; see also Commissioner Hester M. Peirce, “Through the Looking Glass: Conflicts of Interest Associated with the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers,” Proposal, SEC (July 26, 2023); Commissioner Mark T. Uyeda, “Statement on the Proposals re: Conflicts of Interest Associated with the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers,” SEC (July 26, 2023).

6 Gary Gensler, “Isaac Newton to AI,” Remarks before the National Press Club, SEC (July 17, 2023).

7 Gary Gensler and Lily Bailey, “Deep Learning and Financial Stability,” MIT Sloan School of Management (Nov. 1, 2020).

8 See, e.g., Virtu Financial, Comment Letter on Proposed Rule on Conflicts of Interest Associated with the Use of PDA (Oct. 10, 2023).

9 Bill Myers, “SEC launches AI Sweep,” Private Funds CFO (Aug. 30, 2023).

10 Id.

11 Richard Vanderford, “SEC Head Warns Against ‘AI Washing,’ the High-Tech Version of ‘Greenwashing’,” The Wall Street Journal (Dec. 5, 2023).

This post comes to us from Skadden, Arps, Slate, Meagher & Flom LLP. It is based on the firm’s memorandum, “Understanding SEC’s Focus Amid Lack of Final AI Rules,” dated February 23, 2024, and available here. 
