What should fiduciary duty require when new technology allows corporate directors to see risk more clearly? Though corporate law has long assumed that directors must make decisions under conditions of uncertainty, artificial intelligence and emerging quantum-computing tools may change what directors can know, when they can know it, and how responsibly they can act on that information. In a new article, I explore how those technologies may reshape the law of corporate oversight.
For decades, corporate fiduciary law has assumed that directors have limited information. Courts do not expect directors to foresee every risk or prevent every corporate failure. The business judgment rule protects most informed, good-faith decisions from judicial second-guessing, and Caremark oversight liability remains difficult to establish.[1] That structure made sense in a world where directors often had no practical way to see deeply into a complex corporation’s operations.
That world is changing. Large companies now generate immense streams of operational, financial, compliance, consumer, workforce, and supply-chain data. At the same time, artificial intelligence and emerging quantum-computing tools are making it possible to analyze that information with increasing speed and sophistication. These technologies can reveal patterns that would otherwise remain invisible. They can show when management’s assumptions are fragile. They can also identify risks before those risks become public scandals or catastrophic losses.
This technological shift should matter for fiduciary law. If directors have access to tools that can help them understand the corporation more accurately, the law should recognize that those tools exist. My article therefore proposes a shift from fiduciary standards centered on “gross negligence” and “utter failure” toward a framework of reasonable tech-enabled diligence and proactive oversight. When advanced tools are reasonably available, directors should sometimes have to show that they used them, considered them, or had a sound reason for declining to do so.
Behavioral economics strengthens the case for this shift. Corporate law often imagines directors as rational monitors who will notice when something is wrong. In reality, directors, like all humans, may become overconfident, defer too readily to management, and discount information that conflicts with the preferred narrative in the boardroom. Group dynamics can make those tendencies worse, especially in high-status environments where dissent feels costly.[2]
Technology cannot eliminate those human limitations. But it can make them harder to ignore. An AI-enabled compliance system might flag weaknesses that management has downplayed. A predictive model might show that a strategic plan depends on unrealistic assumptions. Quantum-enhanced simulations might reveal that a proposed transaction carries risks that conventional modeling failed to capture. In each case, technology would not replace board judgment. It would discipline it.
This matters especially because modern corporate risk rarely fits neatly within old doctrinal categories. A privacy failure can become a consumer-protection problem, a securities problem, and a reputational crisis at the same time. Workplace misconduct can affect employee retention, brand value, and long-term performance. Climate risk can alter supply chains, insurance costs, financing, and market demand. A board that treats these issues as unconnected to corporate value may misunderstand the corporation itself.
That point also reframes the relationship between shareholder and stakeholder governance. Directors need not choose between long-term firm value and careful attention to workers, consumers, communities, or the environment. In many contexts, stakeholder harms are early warnings of enterprise risk. Advanced analytics can help boards see those connections more clearly. Ignoring stakeholder data may therefore be both normatively troubling and economically shortsighted.
A workable legal standard would need limits. Courts should not require every company to adopt the most advanced technology available. A small private firm and a multinational public company do not face the same obligations. Nor should directors be punished simply because an AI system failed to predict a harm. Fiduciary law should still depend on context, and it should continue to protect good-faith business judgment.
For that reason, my article argues for safe harbors. Directors who make good-faith efforts to understand relevant technologies, seek appropriate expertise, adopt reasonable monitoring systems, and document their deliberations should receive substantial protection from liability. The law should encourage serious engagement with technology rather than after-the-fact blame.
But the converse should also be true. A board of a large, complex company should not be able to ignore standard analytic tools and then claim ignorance when foreseeable risks materialize. A board should not be able to dismiss data anomalies without inquiry. Nor should it be able to rely on outdated reporting systems when more effective monitoring tools are reasonably within reach. At some point, technological indifference becomes a governance failure.
Corporate law has adapted to new governance realities before. Smith v. Van Gorkom made process matter in major transactions.[3] Caremark recognized that directors have a duty to attend to corporate oversight, even if liability remains rare.[4] Marchand v. Barnhill confirmed that boards must take critical risks seriously.[5] The next step is to recognize that oversight where data are plentiful requires more than passive receipt of management reports.
Quantum AI will bring risks of its own. Algorithmic systems may be opaque. They may reproduce bias. They may create privacy and cybersecurity vulnerabilities. They may also tempt directors to defer to those systems without fully understanding them. These concerns are serious. But rather than reasons to avoid advanced technology altogether, they are admonitions for boards to oversee it carefully.
The deeper point is that fiduciary law should remain tied to the actual conditions of corporate decision-making. As corporations become more complex, directors need better tools to understand them. As data become more central to governance, ignoring data becomes harder to justify. And as AI and quantum computing develop, corporate law must decide whether fiduciary duty still means conscientious stewardship in practice.
ENDNOTES
[1] See In re Caremark Int’l Inc. Derivative Litig., 698 A.2d 959 (Del. Ch. 1996); Stone v. Ritter, 911 A.2d 362 (Del. 2006).
[2] See Christine Jolls, Cass R. Sunstein & Richard Thaler, A Behavioral Approach to Law and Economics, 50 Stan. L. Rev. 1471 (1998).
[3] Smith v. Van Gorkom, 488 A.2d 858 (Del. 1985).
[4] Caremark, 698 A.2d 959; Stone, 911 A.2d 362.
[5] Marchand v. Barnhill, 212 A.3d 805 (Del. 2019).
Michael R. Siebecker is the Maxine Kurtz Faculty Research Scholar and a professor of law at the University of Denver’s Sturm College of Law. This post is based on his article, “Quantum AI and the Future of Corporate Law,” published in the Cardozo Law Review and available here.