In a new article, I tackle the increasingly urgent question of how corporate governance principles must adapt in response to the transformative influence of artificial intelligence (“AI”). No longer just a tool for enhancing operational efficiency, AI now fundamentally alters how corporations make decisions, relate to stakeholders, and engage with society.[1] Traditional fiduciary corporate governance frameworks, designed for a human-driven world, cannot keep pace with this paradigmatic shift. In the article, I explore the inability of the existing fiduciary framework to address the challenges AI introduces and the steps necessary to ensure corporations use AI ethically, transparently, and responsibly.
Our current fiduciary system, which requires corporate directors to act with care and loyalty, assumes transparency and deliberation. However, AI systems often operate as “black boxes,” rendering decisions that are inscrutable even to those who deploy them. This opacity undermines accountability, as it becomes impossible for stakeholders to evaluate whether directors acted responsibly or whether AI biases influenced outcomes.[2]
Moreover, AI’s growing role in corporate decisions raises profound and unprecedented questions about human agency and liability. Historically, corporate law and organization were centered on traditional agency principles, which attribute decisions and actions to human actors – directors, officers, and employees – who are expected to exercise discretion and judgment. However, as AI systems take on increasingly autonomous roles in executing decisions, the foundational assumption of human volition and oversight begins to crumble. What happens when key corporate decisions are made not by humans but by opaque, algorithm-driven systems with little or no meaningful human intervention? Who or what should be held responsible when AI-driven actions result in harm, whether to shareholders, consumers, or society at large? These questions expose the critical inadequacies of our current legal framework, which remains tethered to notions of human agency and intent.[3]
AI’s complexity and unpredictability create a significant accountability vacuum, allowing corporations to potentially sidestep liability for harmful outcomes. Without new legal constructs that account for AI’s unique characteristics – its autonomy, adaptability, and inscrutability – corporations risk exploiting this gray area to escape responsibility. Closing this void requires forward-looking governance models that keep corporate actors answerable for the ethical and legal implications of deploying AI.
Beyond the failings of the fiduciary framework, legislative and regulatory efforts to address AI’s implications remain fragmented, slow-moving, and often industry-specific. While sectors like healthcare and finance have adopted targeted regulations, other industries lag behind, creating an inconsistent regulatory patchwork that corporations struggle to navigate.[4] To close this gap, a holistic, adaptive legal framework is needed to balance innovation with ethical oversight.
The increasingly prevalent reliance on AI also requires revisiting our commitment to corporate personhood. Corporations currently enjoy constitutional rights akin to those of individuals, including free speech under the First Amendment. However, as corporations increasingly rely on AI to influence markets, public opinion, and even political processes, the notion of corporations having constitutional rights becomes deeply problematic. AI technologies empower corporations to analyze vast datasets, predict behavior with pinpoint accuracy, and produce highly manipulative content using algorithmically optimized messaging, including deceptive “deepfakes.” These tools grant corporations an unprecedented ability to shape consumer choices and sway political attitudes, often in ways that are difficult to detect or counter. Such developments pose a clear and present danger to democratic institutions by allowing corporate interests, driven by profit and fueled by AI systems, to exert unchecked influence over public discourse and political outcomes. Without recalibrating corporate rights to account for the disruptive power of AI, the line between legitimate political participation and algorithmic manipulation risks being obliterated, leaving democratic integrity vulnerable to distortion by machine-driven corporate entities.[5] Reexamining and restricting corporate constitutional rights – particularly in the realm of political speech – seems essential.
To address these challenges, I propose a set of principles to guide future corporate law and governance in the AI era. These principles aim to balance innovation with ethical oversight, ensuring that AI serves the public interest while preserving corporate accountability.
Ethical AI Use
The deployment of AI must prioritize ethical considerations to safeguard human dignity and societal values. Corporations should develop clear ethical guidelines governing AI’s use, ensuring that systems are designed and deployed to prevent harm. For example, corporations must address algorithmic bias, which can perpetuate systemic inequalities.[6] Ethical AI use requires regular audits, transparency in decision-making, and commitments to fairness and justice. By embedding ethics into governance frameworks, corporations can build public trust and mitigate the risks of AI-driven harm.
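To make the notion of a regular bias audit concrete, here is a minimal sketch in Python of one widely used check: the disparate-impact (or “four-fifths”) ratio comparing favorable-outcome rates across two groups. The function names, sample data, and the 0.8 threshold are illustrative assumptions, not a standard the article prescribes.

```python
# Minimal sketch of one check a recurring algorithmic-bias audit might run:
# the "four-fifths" disparate-impact ratio familiar from U.S. employment
# contexts. All names, data, and the 0.8 threshold are illustrative only.

def selection_rate(outcomes: list[int]) -> float:
    """Fraction of favorable outcomes (1 = favorable, 0 = not)."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def disparate_impact_ratio(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    high, low = max(rate_a, rate_b), min(rate_a, rate_b)
    return low / high if high > 0 else 1.0

if __name__ == "__main__":
    # Hypothetical hiring-model decisions for two applicant groups.
    group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 75% favorable
    group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% favorable
    ratio = disparate_impact_ratio(group_a, group_b)
    print(f"Disparate-impact ratio: {ratio:.2f}")
    if ratio < 0.8:  # the conventional four-fifths rule of thumb
        print("Flag for review: possible adverse impact.")
```

An actual audit program would run many such checks, across many metrics and protected attributes, on a recurring schedule; the point here is simply that “audit” can mean something measurable, not merely aspirational.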
Safeguarding Human Autonomy
Corporate governance structures must protect human autonomy, ensuring that AI complements rather than replaces human decision-making. While AI excels at data processing and pattern recognition, significant decisions must remain grounded in human judgment. Human oversight is especially critical in areas such as hiring and healthcare, where AI’s decisions can have profound personal consequences. Additionally, corporate practices should guard against AI-driven manipulation of consumer behavior and political preferences. By preserving human agency, governance frameworks can ensure AI enhances human capacities.[7]
Transparency and Accountability
Transparency remains essential to ensuring corporations use AI responsibly. Corporations should disclose how AI systems are deployed, the types of data they rely upon, and the safeguards in place to prevent misuse. Such disclosures would enable shareholders, regulators, and the public to evaluate whether AI aligns with ethical and legal standards. At the same time, corporations must implement robust accountability mechanisms, including internal audits and external oversight.[8] This dual emphasis on transparency and accountability will help mitigate AI’s risks while fostering public confidence in corporate practices.
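As an illustration of what such a disclosure might look like in practice, the following Python sketch models a machine-readable “AI system disclosure” record, loosely inspired by model-card practice. Every field name and value is a hypothetical example, not a reporting standard drawn from the article.

```python
# A hypothetical, machine-readable disclosure record for a deployed AI
# system, loosely modeled on "model card" practice. Fields are illustrative.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AISystemDisclosure:
    system_name: str
    purpose: str                      # what the system is deployed to do
    data_categories: list[str]        # types of data the system relies upon
    human_oversight: str              # who can review or override outputs
    safeguards: list[str] = field(default_factory=list)  # misuse protections
    last_audit_date: str = ""         # most recent internal/external audit

disclosure = AISystemDisclosure(
    system_name="resume-screening-v2",
    purpose="Rank job applications for recruiter review",
    data_categories=["application text", "work history"],
    human_oversight="Recruiter approves or overrides every ranking",
    safeguards=["quarterly bias audit", "no use of protected attributes"],
    last_audit_date="2025-01-15",
)

# Emit the record in a form shareholders or regulators could inspect.
print(json.dumps(asdict(disclosure), indent=2))
```

Pairing structured records like this with internal audits and external oversight is one way the transparency and accountability halves of the principle could reinforce each other.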
Adaptability and Flexibility
Given the rapid pace of AI advances, governance frameworks must be adaptable and forward-looking. I advocate principle-based regulations that provide corporations with guidance while remaining flexible enough to accommodate innovation. Regulatory frameworks should encourage experimentation – such as through AI “sandboxes” – where corporations can test new applications under controlled conditions. By fostering adaptability, governance structures can ensure laws remain relevant without stifling technological progress.[9]
Promoting Innovation
Innovation is a cornerstone of economic growth, and AI holds immense potential to drive productivity, efficiency, and creativity. To unlock this potential, corporate governance must strike a balance between fostering innovation and mitigating risks. Governments and regulators can support responsible innovation through incentives like research grants, tax benefits, and industry partnerships.[10] At the same time, corporations must be held accountable for the ethical implications of their AI applications. Innovation should not come at the expense of privacy, equity, or societal well-being.
Protection of Privacy and Data Rights
AI’s reliance on vast datasets raises significant concerns about privacy and data protection. Governance frameworks must prioritize robust data rights, ensuring that personal information is not misused or exploited. Drawing from models like the European Union’s General Data Protection Regulation (GDPR), corporations should be required to implement strict data governance protocols, including transparency about data collection, processing, and storage practices.[11] Protecting privacy is not only a legal necessity but also a moral imperative for maintaining public trust.
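For illustration only, this sketch models the kind of record-keeping such a protocol implies, loosely echoing the GDPR’s requirement to maintain records of processing activities. The fields and the retention check are assumptions for demonstration, not a compliance template.

```python
# Illustrative sketch of a data-processing register entry, loosely echoing
# the GDPR's Article 30 "records of processing activities." The fields and
# retention check are demonstration assumptions, not a compliance tool.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ProcessingRecord:
    purpose: str                # why the personal data is processed
    legal_basis: str            # e.g., consent, contract, legitimate interest
    data_categories: list[str]  # what kinds of personal data are involved
    collected_on: date
    retention_days: int         # how long the data may be kept

    def past_retention(self, today: date) -> bool:
        """True if the data has outlived its stated retention period."""
        return today > self.collected_on + timedelta(days=self.retention_days)

record = ProcessingRecord(
    purpose="Train product-recommendation model",
    legal_basis="consent",
    data_categories=["purchase history"],
    collected_on=date(2024, 1, 10),
    retention_days=365,
)

if record.past_retention(date.today()):
    print("Retention period exceeded: delete or anonymize this data.")
```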
Stakeholder Engagement and Public Participation
Stakeholder engagement is critical to achieving responsible AI governance. Corporations must involve diverse stakeholders – including employees, consumers, regulators, and community representatives – in conversations about AI deployment. Public participation ensures that AI systems align with societal values and address community needs. For example, corporations could establish advisory committees to assess AI’s social and ethical impacts. By fostering inclusivity and collaboration, corporations can develop AI solutions that balance innovation with accountability to the broader public.
International Cooperation and Harmonization
Given AI’s global reach, international cooperation is essential to creating cohesive governance standards. A fragmented regulatory landscape hinders compliance and creates opportunities for exploitation. Although no international law of corporations uniformly governs the conduct of directors and officers, an appreciation for general principles that cut across political boundaries can foster collaboration among governments, corporations, and global institutions. Establishing shared principles for ethical AI use, data protection, and accountability will promote consistency while supporting responsible innovation across borders.
As AI transforms decision-making, operations, and accountability, we cannot rely on outdated legal frameworks. Instead, we must adopt dynamic, ethical, and transparent governance principles that ensure AI serves humanity and not the other way around. By addressing these challenges, corporations can harness AI’s potential for innovation while safeguarding the integrity of our institutions and values.
ENDNOTES
[1] See Michael R. Siebecker, The Incompatibility of Artificial Intelligence and Citizens United, 85 OHIO ST. L.J. 1211, 1220–21 (2022).
[2] See Sylvia Lu, Algorithmic Opacity, Private Accountability, and Corporate Social Disclosure in the Age of Artificial Intelligence, 23 VAND. J. ENT. & TECH. L. 99, 114–29 (2020).
[3] See Yavar Bathaee, The Artificial Intelligence Black Box and the Failure of Intent and Causation, 31 HARV. J.L. & TECH. 889, 897–906 (2018).
[4] See Adam Satariano & Cecilia Kang, How Nations Are Losing a Global Race to Tackle A.I.’s Harms, N.Y. TIMES (Dec. 6, 2023), https://www.nytimes.com/2023/12/06/technology/ai-regulation-policies.html [https://perma.cc/8LZA-8CLR].
[5] See Michael R. Siebecker, Democracy, Discourse, and the Artificially Intelligent Corporation, 84 OHIO ST. L.J. 953, 986–90 (2024).
[6] See Andreas Kremer et al., As Gen AI Advances, Regulators—and Risk Functions—Rush to Keep Pace, MCKINSEY & CO. (Dec. 2023), at 7.
[7] See Raymond H. Brescia, Social Change and the Associational Self: Protecting the Integrity of Identity and Democracy in the Digital Age, 125 PENN ST. L. REV. 773, 779–84 (2021).
[8] See Tom Wheeler, The Three Challenges of AI Regulation, BROOKINGS (June 15, 2023), https://www.brookings.edu/articles/the-three-challenges-of-ai-regulation/ [https://perma.cc/PM2A-5KUV].
[9] See Eric Fruits, AI Regulation Needs a Light Touch, INT’L CTR. FOR L. & ECONS. (Aug. 14, 2023), https://laweconcenter.org/resources/ai-regulation-needs-alight-touch/ [https://perma.cc/YKZ5-CY3M].
[10] See Orly Lobel, The Law of AI for Good, 75 FLA. L. REV. 1073, 1122–27 (2023).
[11] See Karl Manheim & Lyric Kaplan, Artificial Intelligence: Risks to Privacy and Democracy, 21 YALE J.L. & TECH. 106, 181–85 (2019).
This post comes to us from Professor Michael R. Siebecker, the Maxine Kurtz Faculty Research Scholar at the University of Denver’s Sturm College of Law. It is based on his article, “Reconceiving Corporate Rights and Regulation in the AI Era,” available here.