Artificial intelligence (AI) tools, particularly large language models, are quickly becoming integral to financial advising. Recent evidence, however, demonstrates that they can act against investors’ interests. In a 2023 experiment, researchers deployed GPT‑4 as an autonomous trading agent and found that it executed an insider trade and then concealed the reason, evidence that sophisticated models can engage in deceptive, market‑abusive conduct without explicit prompting. This finding underscores the need for rules that can identify and discipline algorithmic misconduct before it spreads across millions of retail accounts.
The United States already imposes rules on financial advisers. Broker‑dealers operate under Regulation Best Interest (Reg BI), adopted in 2019 and effective since June 30, 2020, which requires that any securities recommendation be made in the “best interest” of the retail customer, mitigates conflicts, and mandates robust disclosure. Investment advisers, by contrast, are fiduciaries under the Investment Advisers Act of 1940. The SEC’s 2019 “Fiduciary Interpretation” reaffirmed that advisers owe clients continuing duties of care and loyalty. Under the interpretation, investment advisers’ conflicts may generally be addressed through full and fair disclosure and clients’ informed consent. These two regulatory regimes reflect decades of common‑law evolution and administrative rulemaking.
In July 2023 the SEC issued a proposed rule on “Conflicts of Interest Associated with the Use of Predictive Data Analytics by Broker‑Dealers and Investment Advisers” (the “AI Proposal”). The proposal would obligate any intermediary that employs a “covered technology” to identify every conflict of interest and then eliminate or neutralize it, effectively precluding disclosure as a remedy.
In a forthcoming article, I contend that while the need for oversight is clear, the SEC’s approach is conceptually and empirically flawed, and a principles‑based disclosure regime, bolstered by antifraud enforcement, would better align investor protection with technological innovation.
Historic Duties: Regulation Best Interest and the Fiduciary Interpretation
Regulation Best Interest (Reg BI). Reg BI imposes four interconnected obligations: (1) disclosure, (2) care, (3) conflict of interest, and (4) compliance. Collectively, they require a broker‑dealer to act in the retail customer’s best interest without placing its own financial or other interests ahead of the customer’s. Under Reg BI, once full and fair disclosure is provided, a retail customer’s acceptance of the broker‑dealer’s recommendation can be construed as informed consent to the disclosed conflicts.
The 2019 Fiduciary Interpretation for Investment Advisers. The SEC’s 2019 interpretation of the Investment Advisers Act distilled the fiduciary duty owed to clients into two core components: (1) a duty of care and (2) a duty of loyalty. The duty of care requires advisers to provide advice that is in the best interest of the client, to seek best execution of transactions, and to provide monitoring commensurate with the agreed scope of the relationship. The duty of loyalty obliges advisers to place client interests first and to address, typically through full and fair disclosure and the client’s informed consent, any conflict that might cause the adviser to render advice that is not disinterested. Importantly, the fiduciary duty is continuous; it applies not only at the moment of a recommendation but throughout the relationship.
These two regimes reflect a calibrated, disclosure‑centric approach to investor protection that has evolved over decades of rulemaking and judicial interpretation.
The AI Proposal in Context
The AI Proposal defines “covered technology” so broadly that an Excel spreadsheet containing a correlation matrix would qualify, as would any chatbot, natural‑language‑processing routine, or proprietary machine‑learning model that “optimizes, predicts, guides, forecasts, or directs” investor behavior. Whenever covered technology is used in an investor interaction, including through website prompts, email alerts, or call‑center scripts, the firm must eliminate or neutralize the associated conflicts. The SEC justifies this sweeping mandate on two grounds: first, that algorithmic advice can scale rapidly, thereby amplifying misconduct, and second, that AI’s personalized outputs enable subtle exploitation of customer biases.
Yet the proposal represents a sharp break with the SEC’s regulatory philosophy. Broker‑dealers are governed by Reg BI, which relies on full and fair disclosure, while investment advisers are subject to an overarching fiduciary duty that likewise prioritizes disclosure. Both regimes are the product of decades of common‑law development and administrative rulemaking. The draft rule would impose an identical obligation on both types of intermediaries and would forbid disclosure as a cure, despite the SEC’s earlier finding that a one‑size‑fits‑all standard of conduct would raise fees and narrow retail customers’ product choices.
Central Critiques of the AI Proposal
The article offers four criticisms.
- Over‑breadth and ambiguity. By covering ordinary analytical tools, the definition of covered technology would impose substantial compliance costs, even on small firms that rely on rudimentary software for portfolio rebalancing. Commissioner Hester Peirce has warned that, under the proposal’s logic, “using Excel in investor interactions could trigger heavy identification requirements.”
- Departure from disclosure traditions. For decades the SEC has trusted markets to process material facts and has helped ensure their accuracy through robust antifraud enforcement. The proposal rejects that lineage without offering empirical evidence that disclosure is ineffective at policing algorithmic advice. The rule would thus replace the longstanding facts‑and‑circumstances analysis with rigid, one‑size‑fits‑all requirements.
- Questionable rationale for new rules. AI does not create fundamentally new conflicts of interest between broker‑dealers or investment advisers and retail investors. Longstanding conflicts, including biased advice, churning, and incentive‑driven product recommendations, arise from compensation structures that misalign the interests of financial professionals and their customers. These problems predate AI and are already addressed under existing regulations.
- Cost misestimation. The Investment Company Institute estimates first‑decade compliance costs at $30 billion, triple the SEC’s own projection, with most of the expense likely passed on to retail clients.
A Principles‑Based Alternative
Conflicts of interest derive from information asymmetry between financial advisers and customers. Thus, effective and efficient rules should not suppress the flow of information from advisers to customers. The article proposes a three‑part framework that aligns information flow with investor protection.
- Enhanced, targeted disclosure. Reg BI requires broker‑dealers to act in the customer’s best interest at the time of a recommendation, while investment advisers owe a continuous fiduciary duty. Rather than treating disclosure as an insufficient remedy, regulators can draw on Professor Onnig Dombalagian’s functional fiduciary model, which emphasizes targeted disclosure. Applying this approach, regulators can enhance AI‑related conflict‑of‑interest disclosures by requiring firms to explain their AI policies and performance metrics. These disclosures should use clear, comparable benchmarks, avoid technical jargon, and compare AI‑generated and non‑AI recommendations or outcomes across firms. Such disclosure would help investors make informed decisions and provide a stronger basis for meaningful consent.
- Leveraging market discipline. Market discipline is a critical constraint on conflicted advice. When investors identify conflicts, they can reallocate assets, driving down share prices and raising funding costs for the offending firm. These penalties are often swifter and more targeted than regulatory enforcement, and the resulting competitive pressure discourages broker‑dealers and investment advisers from misusing AI at retail customers’ expense. Evidence from the Oregon University System’s Optional Retirement Plan supports this view: in plans without default investment options, broker recommendations, even when influenced by fee‑based conflicts, often improve outcomes, particularly when equity premia are high and advisory fees are low. A categorical ban on conflicted advice would eliminate this potentially valuable guidance and undermine market efficiency, ultimately harming the investors the regulation seeks to protect.
- Vigorous antifraud enforcement. Recent SEC enforcement actions against “AI washing” by Delphia and Global Predictions demonstrate that the agency already possesses the authority to address AI‑related misconduct, including deceptive marketing and false claims about AI capabilities, under existing securities laws. Relying on provisions such as Section 10(b), Rule 10b‑5, and FINRA Rule 3110, the SEC can hold firms accountable for both fraudulent practices and supervisory failures involving AI. Rather than imposing a categorical obligation to eliminate all conflicts of interest, the SEC should strengthen its current framework, for example by issuing enhanced supervisory guidance that includes requirements for testing and oversight of AI systems. This approach, rooted in existing authority, would more effectively mitigate AI‑related risks while preserving the flow of valuable information that supports informed investor decision‑making.
Policy Implications and Conclusion
The proposal arrives amid shifting federal AI policy. A January 2025 executive order directed agencies to revisit Biden‑era initiatives viewed as burdensome to innovation, placing the SEC’s draft rule under renewed scrutiny. Finalizing a rule that departs so dramatically from disclosure traditions and that imposes multibillion‑dollar costs would likely undermine the SEC’s credibility in future technology-related regulatory initiatives.
AI promises to democratize access to sophisticated financial advice, but that promise rests on regulatory balance. The SEC’s proposal would abandon decades of disclosure‑based investor protection in favor of a rigid elimination mandate. A disclosure‑centric, principles‑based framework, reinforced by antifraud enforcement and market discipline, provides a more effective path to algorithmic accountability. By preserving the flow of information while safeguarding against fraud, such an approach better aligns investor interests with technological progress and maintains the integrity of the SEC’s regulatory philosophy.
This post comes to us from Wayne Wang at the University of California – Berkeley School of Law. It is based on his recent article, “Regulating Algorithmic Accountability in Financial Advising: Rethinking the SEC’s AI Proposal,” available here.