Democracy, Discourse, and the Artificially Intelligent Corporation

Does the rise of artificially intelligent corporations threaten the integrity and legitimacy of democracy? This question looms large as the 2024 presidential election approaches. A growing number of academics, business leaders, and politicians warn that unchecked development and dissemination of artificial intelligence (AI) could irreparably damage vital institutions of civil society.[1] Despite these warnings, the reliance on AI technologies is proliferating at breakneck speed. In a new article, I argue that this development, combined with the increasing dominance of corporations in our society, calls for revamping basic principles of corporate governance.

In virtually every business sector and across many operations, AI can be essential to success. Managers increasingly rely on it to improve decision-making, whether in identifying new business strategies, conducting due diligence, improving workplace safety, managing human resources, mitigating risk, or handling other essential functions.[2] In some cases, AI even takes on formal management roles and serves as a functional member of the board of directors.[3] Some suggest that corporations may soon be wholly owned and operated by algorithmic entities without human oversight.[4] While predicting all applications of AI is impossible, its impact on corporate practices will undoubtedly endure. Recognizing AI’s potential and capacity for disruption underscores the need to revitalize corporate governance principles to guide and direct the use of AI in the future.

The concern about AI’s potentially destructive impact is exacerbated by the increasing dominance of corporations in politics. Following the decision in Citizens United v. FEC,[5] corporations enjoy essentially the same speech rights as humans. They attempt to dominate politics to enhance their bottom line.[6] Because existing law generally does not require disclosure of corporate political expenditures, corporations can clandestinely manipulate voters to increase sales or secure a favorable regulatory environment. The ultimate fear is that corporations will deploy AI technology to manipulate political opinion without detection, thereby undermining our confidence in democracy itself.

Indeed, AI technologies have already been used to manipulate public opinion and voting behavior. In the 2016 presidential election, both Cambridge Analytica and the Russian Internet Research Agency used AI tools to influence voters by creating fake social media personas.[7] Despite concerns about the use of data exchanges in elections, both major presidential campaigns relied on AI technology to develop election strategies and disseminate political messaging. Experts predict that AI will permeate political discourse in the 2024 election cycle, most notably with the use of “deep fake” videos that falsely depict people doing things that never occurred.[8]

There are numerous calls to restrain the development and dissemination of AI. President Biden recently persuaded some top AI developers, including Microsoft, Google, and OpenAI, to implement ethical guidelines in designing and deploying AI.[9] However, the voluntary nature of these pledges may not inspire confidence in their longevity. Despite the potential ethical fickleness of corporations when profits hang in the balance, no comprehensive legislative fix seems forthcoming.

Additionally, the current fiduciary framework governing officers and directors remains impotent to prevent AI from wreaking havoc on our social institutions in the name of increasing corporate profits. Even before the rise of AI, corporate executives and managers were regularly criticized for pursuing selfish ends rather than promoting shareholder interests. Corporate scandals persist, and the duties of care and loyalty simply do not adequately address the changing nature of corporate decision-making in the AI era. Instead of fostering trust between shareholders and corporate managers, the enfeebled fiduciary framework permits the blind pursuit of wealth maximization, often to the detriment of consumers, stakeholders, and the communities corporations affect.

Interpreting the existing fiduciary governance structure through political “discourse theory,” however, could more effectively guide corporate decision-making in the AI era. Discourse theory, as a philosophical framework, explores how embracing just rules for discussion and engagement can improve organizational structures and lend legitimacy to institutional decisions. Based on Jürgen Habermas’ political philosophy, applying discourse theory in the corporate setting would require adopting rules and incentives that encourage the independent expression of ideas, fair participation of corporate constituencies in decision-making, respectful consideration of diverse viewpoints, and the ability to revise positions through reflective discourse.[10]

While AI might pose existential threats, it could also make dialogue between corporations and their shareholders much more meaningful and efficient. AI communication technologies can sift through vast amounts of data and open new channels for creative engagement with shareholders and other groups. As a result, indifference to actual shareholder preferences should dissipate, and reliance on the false sensibility that shareholders uniformly prioritize short-term wealth should diminish. Although the existing fiduciary framework’s business judgment rule tolerates managerial apathy toward shareholder preferences, fiduciary duties as viewed through discourse theory require much more. Rather than using AI technologies to manipulate opinions and interests for profit, under a discourse theory of the firm, managers would need to direct AI technologies toward gaining a better understanding of actual shareholder preferences and how the corporation might better serve its owners’ interests.[11]

Especially in the AI era, a discourse theory of the firm could bring about a renewed sense of democratic legitimacy. Due to corporations’ increasing dominance in politics, the integrity of corporate governance structures directly affects the trust we have in our democratic processes. The presence of special interests, managerial imperialism, or antidemocratic values within corporations erodes a sense of meaningful citizenship in our society.[12] To counteract corporate corruption of politics, a discourse theory of the firm aims to establish fair and just internal corporate structures. Even as corporate power grows, we can maintain faith in democratic processes as long as corporate governance principles require effective consideration of shareholder, consumer, and stakeholder voices. By ensuring a greater sense of democratic participation in the corporate realm, a discourse theory of the firm could secure democratic legitimacy in the public sphere.

As corporations increasingly use AI in their operations, communication, management, and political engagement, there is a pressing need for a more nuanced and holistic approach to corporate governance that places continual discourse at its core. Interpreting corporate fiduciary duties through political discourse theory could better ensure that corporate practices align with the preferences of shareholders and other stakeholders. Continual and robust engagement between managers and constituencies affected by corporate behavior would not only enhance managerial competence but also preserve a sense of democratic legitimacy in our polity. Without reinvigorating governance structures around democratic discourse, however, we might surrender political sovereignty to artificially intelligent corporations.

ENDNOTES

[1] See, e.g., Kevin Roose, A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn, N.Y. TIMES (May 30, 2023), https://www.nytimes.com/2023/05/30/technology/ai-threat-warning.html. For a survey of academic views regarding the threats AI poses to human agency, see JANNA ANDERSON & LEE RAINIE, PEW RSCH. CTR., THE FUTURE OF HUMAN AGENCY 4–5 (Feb. 2023), https://www.pewresearch.org/internet/2023/02/24/the-future-of-human-agency/ [https://perma.cc/5Q39-YFXM].

[2] See DUANE S. BONING ET AL., MCKINSEY & CO., TOWARD SMART PRODUCTION: MACHINE INTELLIGENCE IN BUSINESS OPERATIONS 9 (Feb. 2022), https://www.mckinsey.com/~/media/mckinsey/business%20functions/operations/our%20insights/toward%20smart%20production%20machine%20intelligence%20in%20business%20operations/toward-smart-production-machine-intelligence-in-business-operations-vf.pdf [https://perma.cc/88ZP-HQTT]; Dan Reilly, How A.I. Is Being Used as a Tool for Innovation, Not Just Efficiency, FORTUNE (June 8, 2022), https://fortune.com/2022/06/08/artificial-intelligence-innovation-sefficiency/ [https://perma.cc/86AS-8GSH].

[3] See Florian Möslein, Robots in the Boardroom: Artificial Intelligence and Corporate Law, in RESEARCH HANDBOOK ON THE LAW OF ARTIFICIAL INTELLIGENCE 649, 658–60, 665–66 (Woodrow Barfield & Ugo Pagallo eds., 2019).

[4] See Shawn Bayern, Are Autonomous Entities Possible?, 114 NW. U. L. REV. ONLINE 23, 47 (2019); Lynn M. LoPucki, Algorithmic Entities, 95 WASH. U. L. REV. 887, 898–99 (2018).

[5] Citizens United v. FEC, 558 U.S. 310 (2010).

[6] See Michael R. Siebecker, Political Insider Trading, 85 FORDHAM L. REV. 2717, 2723–24 (2017).

[7] See Elizabeth Dwoskin, Craig Timberg & Adam Entous, Russians Took a Page from Corporate America by Using Facebook Tool to ID and Influence Voters, WASH. POST (Oct. 2, 2017), https://www.washingtonpost.com/business/economy/russians-took-a-page-from-corporate-america-by-using-facebook-tool-to-id-and-influence-voters/2017/10/02/681e40d8-a7c5-11e7-850e-2bdd1236be5d_story.html [https://perma.cc/9F2D-D2P9]; Nicholas Confessore, Cambridge Analytica and Facebook: The Scandal and the Fallout So Far, N.Y. TIMES (Apr. 4, 2018), https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html.

[8] See James Bickerton, Deepfakes Could Destroy the 2024 Election, NEWSWEEK (Mar. 24, 2023), https://www.newsweek.com/deepfakes-could-destroy-2024-election-1790037 [https://perma.cc/PVC7-LKR9].

[9] See Michael D. Shear, Cecilia Kang & David E. Sanger, Pressured by Biden, A.I. Companies Agree to Guardrails on New Tools, N.Y. TIMES (July 21, 2023), https://www.nytimes.com/2023/07/21/us/politics/ai-regulation-biden.html [https://perma.cc/5VKH-CPBP].

[10] See JÜRGEN HABERMAS, BETWEEN FACTS AND NORMS 166–67 (William Rehg trans., MIT Press 1996) (1992); Michael R. Siebecker, A New Discourse Theory of the Firm After Citizens United, 79 GEO. WASH. L. REV. 161, 198–208 (2010).

[11] See Michael R. Siebecker, The Incompatibility of Artificial Intelligence and Citizens United, 83 OHIO ST. L.J. 1211, 1230 (2022).

[12] See Michael R. Siebecker, Bridging Troubled Waters: Linking Corporate Efficiency and Political Legitimacy Through a Discourse Theory of the Firm, 75 OHIO ST. L.J. 103, 152 (2014).

This post comes to us from Professor Michael R. Siebecker, the Maxine Kurtz Faculty Research Scholar at the University of Denver’s Sturm College of Law. It is based on his article, “Democracy, Discourse, and the Artificially Intelligent Corporation,” available here.