ChatGPT and artificial intelligence (“AI”) generally might enhance our lives in many ways, but some people fear significant, pernicious outcomes as well. For instance, in a recent poll of 1,000 U.S. business leaders, almost half said their companies had already replaced some human employees with the ChatGPT interface. Goldman Sachs recently predicted that generative AI could eventually eliminate 300 million human jobs globally. Obviously for those displaced workers, AI technologies might seem more of a threat than a benefit. Either way, the speed and intensity of impending change drives home the need to adopt an appropriate legal framework to guide the development and implementation of AI technologies.
In a recent article, I examine whether the proliferation of AI in the corporate realm calls into question the jurisprudential soundness of treating corporations as constitutional rights bearers, especially with respect to corporate political speech. In the landmark case of Citizens United v. FEC, the U.S. Supreme Court granted corporations essentially the same political speech rights that human beings enjoy. But does the growing use of AI in directing the content and dissemination of corporate political communications require revisiting whether corporations should be considered constitutional persons? Moreover, would construing a corporation as a constitutional rights holder make much practical or philosophical sense if AI entities could wholly own and operate businesses without any human oversight?
Even though AI might allow corporate directors to engage in more reflective and arguably more humane decision-making, the rapid changes that AI produces in corporate organization and strategy seem to require cabining AI's potential unintended or harmful effects outside the corporate realm.
To illustrate, consider a hypothetical scenario involving a fictional company, Fun Guns Corporation, which manufactures and sells firearms. Due to legal restrictions on marketing guns through social media and other online platforms, Fun Guns suffers a decline in sales. In an attempt to stem the sales drop, Fun Guns hires a consulting firm that utilizes an AI technology called “Ethel” to develop new sales strategies. Using its sophisticated predictive analytics and data mining capabilities, Ethel identifies an indirect marketing approach to promote certain political beliefs and social conditions that strongly correlate with increased gun sales. In particular, Ethel predicts that widespread promotion of racism, nationalism, distrust in media, and fear of government will lead to increased sales of guns.
To achieve this goal, Ethel builds detailed individual consumer profiles by collecting and analyzing all historical online interactions and other observable records. These profiles include demographic, behavioral, and attitudinal data, such as race, religion, age, gender, sexual orientation, wealth, social preferences, political leanings, and purchasing history. Ethel then uses these profiles to deliver personalized political messages that aim to promote the political and social conditions predicted to enhance gun sales.
For example, Ethel creates fake personas on social media platforms to engage in political discourse with consumers, with the goal of inciting racial hatred, fear of law enforcement, distrust of media, and sympathy for political extremism. These personas adopt human-like appearances, speech patterns, and a host of other “human” characteristics to build trust with the targeted consumers. Through postings, interactive chat, and personalized videos, Ethel’s fake personas (which appear indistinguishable from real humans) cajole and manipulate the views and behaviors of the targeted consumers, without explicitly marketing the sale of Fun Guns’ products.
Although this hypothetical may initially seem implausible, a combination of historical factors suggests that it fairly closely describes corporate strategies already at work. First, following the Supreme Court’s decision in Citizens United, corporations have become increasingly involved in the political sphere, primarily with the aim of securing greater profits. As corporations recognize that altering individuals’ political views can result in more favorable business environments and increased consumer purchasing, corporate efforts to dominate politics are becoming more extensive. For example, similar to the Fun Guns hypothetical, recent reports reveal a troublesome link between enhanced gun sales and the promotion of social unrest, which exemplifies the types of strategies that corporations may use to further their financial interests. Regardless of the particular product or service being marketed, the substantial amounts of money that corporate executives spend on political campaigns and lobbying demonstrate that corporations continue to seek monetary gain through political activity.
Second, corporations increasingly employ AI technologies to influence human behavior, whether as consumers, investors, or political actors. With access to big data, detailed consumer and investor profiles can be created and used for individualized corporate communication aimed at producing a particular attitude or behavior. AI technologies are already being used in the political realm to collect and analyze vast amounts of data, producing strategic messaging designed to influence election outcomes. The Cambridge Analytica scandal and the Russian Internet Research Agency’s attempts to manipulate the 2016 presidential election provide just two of many notable examples. Moreover, AI’s ability to create “deep fake” videos, which are digitally altered videos of humans doing or saying something that did not actually occur, has also become widespread. As one AI industry expert recently lamented, “The tools are going to get better, they’re going to get cheaper, and there will come a day when nothing you see on the internet can be believed.” Although much of the current AI communication technology remains focused on creating more effective marketing, there is a significant fear that AI-driven deep fake communication practices will consistently combine consumer marketing with just enough political messaging to create an amalgam of “politically tinged corporate speech” immune from regulation or liability under the First Amendment.
Third, as corporations seek greater profits, they increasingly rely on AI technologies and algorithmic entities that can outperform human actors. Although AI was initially perceived as a tool to enhance human performance, it is now encroaching on human volition in numerous corporate settings. Important decisions regarding business planning, strategy, and goal setting are heavily influenced, if not controlled, by AI technologies and entities. Managerial functions that were once the exclusive domain of human actors now frequently get carried out by algorithmic entities. Some corporations even permit AI entities to serve as functional members of their boards of directors. Perhaps most shockingly, AI entities can now own and operate their own business ventures without effective human oversight.
The rapid transformation of corporate practices in the era of AI raises profound concerns about granting corporations full constitutional personhood and robust political speech rights. If corporations can use AI to manipulate political preferences and election outcomes to increase their profits, the basic viability and legitimacy of our democratic processes are in jeopardy. Furthermore, if AI technology plays an increasingly important, if not controlling, role in determining the content of corporate political communication, granting corporations the same political speech rights as humans surrenders the political realm to algorithmic entities. While it is certainly possible that AI may help corporations act more humanely, the idea of a corporation heavily influenced or controlled by non-human entities requires at least curtailing our jurisprudential commitment to corporations as full constitutional rights bearers. Specifically with respect to the viability of our democratic institutions, the growing prevalence of AI in managerial (and possibly ownership) positions makes granting corporations the same political speech rights as humans incompatible with maintaining human sovereignty.
AI may inevitably reshape the nature of corporations. And with that fundamental change must come a reconsideration of the constitutional status of corporations.
 See Trey Williams, Some Companies Are Already Replacing Workers With ChatGPT, Despite Warnings It Shouldn’t Be Relied on For “Anything Important”, Fortune (Feb. 25, 2023), https://fortune.com/2023/02/25/companies-replacing-workers-chatgpt-ai/.
 See Joseph Briggs and Devesh Kodnani, The Potentially Large Effects of Artificial Intelligence on Economic Growth, Goldman Sachs Economics Research (Mar. 26, 2023), https://www.key4biz.it/wp-content/uploads/2023/03/Global-Economics-Analyst_-The-Potentially-Large-Effects-of-Artificial-Intelligence-on-Economic-Growth-Briggs_Kodnani.pdf.
 See Michael R. Siebecker, The Incompatibility of Artificial Intelligence and Citizens United, 83 Ohio St. L. J. 1211, 1241-46 (2022) (describing how business entities no longer need human owners or managers).
 See Michael R. Siebecker, Making Corporations More Humane Through Artificial Intelligence, 45 J. Corp. L. 95, 127-43 (2019).
 See Michael R. Siebecker, Political Insider Trading, 85 Fordham L. Rev. 2717, 2720-28 (2017); Michael R. Siebecker, Bridging Troubled Waters: Linking Corporate Efficiency and Political Legitimacy Through a Discourse Theory of the Firm, 75 Ohio St. L.J. 103, 116-19 (2014).
 See Marc Fisher, Miranda Green, Kelly Glass & Andrea Eger, ‘Fear on Top of Fear’: Why Anti-Gun Americans Joined the Wave of New Gun Owners, Wash. Post (July 10, 2021), https://www.washingtonpost.com/nation/interactive/2021/anti-gun-gun-owners/; Rukmani Bhatia, Ctr. for Am. Progress, Guns, Lies, and Fear: Exposing the NRA’s Messaging Playbook 1, 11, 27 (Apr. 2019), https://www.americanprogress.org/wp-content/uploads/2019/04/NRA-report.pdf [https://perma.cc/P6AW-2FGU]; Ben Winck, Gun Manufacturer Stocks Rise After Weekend Mass Shootings and Renewed Calls for Tougher Firearm Laws, Bus. Insider (Aug. 5, 2019), https://markets.businessinsider.com/news/stocks/gun-stocks-rise-after-dual-weekend-shootings-calls-for-laws-2019-8-1028418220 [https://perma.cc/DSG7-5ART].
 See Nicholas Confessore, Cambridge Analytica and Facebook: The Scandal and the Fallout So Far, N.Y. Times (Apr. 4, 2018), https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html [https://perma.cc/BX7N-RVBE].
 See Tiffany Hsu and Steven Lee Myers, Can We No Longer Believe Anything We See?, N.Y. Times (Apr. 8, 2023), https://www.nytimes.com/2023/04/08/business/media/ai-generated-images.html.
 See The Emerging Threat of Deepfakes to Brands and Executives, Constella Intel. (Mar. 2, 2021), https://constellaintelligence.com/the-emerging-threat-of-deepfakes-to-brands-and-executives-2/ [https://perma.cc/U629-2G5d] (“[D]eepfakes have already been used in a wide array of contexts, including in the production of ‘fake news’ and manipulated content or malicious impersonations with the objective of obtaining sensitive data for financial gain (also known as ‘social engineering’ within this context) or influencing public opinion for corporate or political reputational damage.”); Hannah Smith & Katherine Mansted, ASPI Int’l Cyber Pol’y Ctr., Weaponised Deep Fakes: National Security and Democracy 4 (Apr. 2020), https://ad-aspi.s3.ap-southeast-2.amazonaws.com/2020-04Weaponised%20deep%20fakes.pdf?VersionId=lgwT9eN66cRbWTovhN74WI2z4zO4zJ5H [https://perma.cc/N83L-RXMB] (“Deep fakes will pose the most risk when combined with other technologies and social trends: they’ll enhance cyberattacks, accelerate the spread of propaganda and disinformation online and exacerbate declining trust in democratic institutions.”).
 See Tim Fountaine, Brian McCarthy & Tamim Saleh, Building the AI-Powered Organization, Harv. Bus. Rev., July-Aug. 2019, at 62, 64.
 See Joe McKendrick, It’s Managers, Not Workers, Who Are Losing Jobs to AI and Robots, Study Shows, Forbes (Nov. 15, 2020), https://www.forbes.com/sites/joemckendrick/2020/11/15/its-managers-not-workers-who-are-losing-jobs-to-ai-and-robots-study-shows/?sh=77e741f820d5.
 See Michael R. Siebecker, The Incompatibility of Artificial Intelligence and Citizens United, 83 Ohio St. L. J. 1211, 1241-46 (2022).
This post comes from Michael R. Siebecker, Maxine Kurtz Faculty Research Scholar and Professor of Law at the University of Denver’s Sturm College of Law. It is based on his recent article, “The Incompatibility of Artificial Intelligence and Citizens United,” available here.