Corporate Governance Lessons from the OpenAI Controversy

The ongoing controversy surrounding the artificial intelligence company OpenAI, Inc. (OpenAI) offers valuable, broad-based governance lessons for corporate boards across industry sectors, whether for-profit or non-profit. The lessons include those relating to mission restrictions, corporate structure selection, the board/management dynamic, risk oversight, and workforce culture.

While much of the controversy occurred in late November and early December 2023, OpenAI remains at the forefront of AI news and commentary,[1] providing continuing opportunities for leadership discussion and education on its organization and evolution.

Mission and Structure

Though OpenAI has been well covered in the news, familiarity with the unique way that OpenAI – the developer of the groundbreaking ChatGPT – was formed is necessary to appreciate the governance lessons it offers.[2]

OpenAI was initially formed as a single, tax-exempt Section 501(c)(3) organization with the goal of “building safe and beneficial artificial general intelligence for the benefit of humanity.”[3] Consistent with themes of so-called “effective altruism,”[4] OpenAI’s founding premise was that a tax-exempt structure would be the best way to develop the technology because it would not be affected by profit incentives.

Several years later, under the leadership of new CEO Sam Altman, OpenAI reorganized its corporate structure to add several non-tax-exempt subsidiaries formed as partnerships and limited liability companies. This was done to attract third-party investment sufficient to cover the costs of computational power, and to recruit and retain necessary talent, without detracting from the company’s core missions.

The reorganization was designed to leave the original OpenAI “nonprofit” organization intact, with its board continuing to serve as the overall governing body for all OpenAI activities. The equity structure of the primary for-profit subsidiary was capped to limit the financial returns to investors and to balance commercial goals with safety and sustainability.

The structure was recently revised to provide an investor in the for-profit subsidiary with “observer” status on the board.

From this corporate structure came tension among leadership over the competing goals of altruism, safety, research, and profitmaking, and that tension fueled much of the controversy that began in November 2023 and continues in some degree to this day. This controversy in turn offers the following governance lessons and observations.

Governance Lessons

Lesson No. One: Board Authority. Corporate law generally does not recognize the unitary executive theory, i.e., that of the all-powerful CEO to whom the board provides advice and counsel but does not supervise. The OpenAI controversy is a reminder that it is the board, not the CEO, that is responsible by law for the conduct of the business, and that the CEO acts as the board’s delegate. There may be value in confirming with leadership the fundamentals underscoring the accepted board/management dynamic.

Lesson No. Two: Oversight of CEO. Given the nature of the Altman dispute, it is very important to confirm with all constituents of an organization the fiduciary obligation of the board to monitor the performance of the CEO, no matter how valuable he or she is. Material concerns about the CEO’s ability to be “consistently candid in his communications with the board,”[5] as the OpenAI board expressed in referring to Altman, could constitute grounds for termination.

Lesson No. Three: Clarity of Decision-Making. The quality of board decision-making, especially when it comes to CEO hiring and termination, is essential given its significance to the corporation. Boards should have the courage to reverse decisions they believe were based on mistaken impressions, inaccurate data, or exculpatory information.[6] Reversing decisions primarily on the basis of external pressure will almost always result in a loss of credibility for the board.

Lesson No. Four: Risk Monitoring. The Altman termination controversy provides a vivid reminder of the law’s likely expectations of board oversight of AI-related risks, and the need for an internal AI safety reporting mechanism from the executive leadership team grounded in the McDonald’s decisions.[7] In addition, the alleged deep concern of the OpenAI board about the safety of Altman’s development strategy would have gone to the heart of one of OpenAI’s core missions.

Lesson No. Five: Workforce Culture. In a corporate governance environment increasingly sensitive to preserving an empowered workforce, the overwhelming opposition of OpenAI’s workforce to the Altman termination stands out. To the extent that the workforce action is perceived as influencing the decision to reinstate Altman, it could embolden other employees to defy unpopular board decisions. In essence, it suggests informal unionization without the actual union.

Lesson No. Six: The Limits of Altruism. “Good governance” is important in demonstrating that a Section 501(c)(3) entity operates exclusively for charitable purposes. Concepts such as “effective altruism” and “capped profit model” don’t necessarily equate to operation exclusively for charitable purposes. Accordingly, decisions by the (c)(3)’s board with respect to its subsidiary(ies) must thoughtfully and formally reflect how such decisions support the charitable purposes behind the (c)(3)’s actions, not just the potential financial return.

While it is perfectly permissible for a charitable organization to hold subsidiaries purely as investments, that is more difficult to accomplish when the subsidiary’s activity was once the exclusive activity of the Section 501(c)(3) organization itself.

Lesson No. Seven: Mission Confusion. Directors of a nonprofit organization owe a fiduciary duty to the charitable mission – even when that purpose is seemingly amorphous, such as “developing technology for the broad benefit for humankind.” In those circumstances, directors must be disciplined in their service to the mission, particularly when the not-for-profit company owns or controls a for-profit subsidiary that is more economically significant than its owner – an owner whose mission beneficiary is “humanity,” not investors.[8]

Lesson No. Eight: Know Your Structure. There arose from the OpenAI controversy an inference that several of the company’s constituent groups did not fully appreciate the nonprofit-grounded corporate structure and the authority granted to the nonprofit’s governing board. That’s a general reminder to make sure that the choice of legal entity and its implications, and the governance structure for that entity, are fully appreciated in advance of formation.

Summary

The OpenAI saga involves a fascinating combination of the mystery and promise of AI with the complexity of corporate governance principles, in a way that could affect the nature of board/CEO relationships and ultimately affect future regulation of AI technology.

The open question as to whether “[T]he destruction of the company could be consistent with the board’s mission” invites additional, thoughtful governance law discourse.[9]

Additional insight on the controversy may be gleaned from the results of the independent counsel review that has been commissioned by the restructured OpenAI board, should the report ultimately be made public.[10]

Ultimately, the OpenAI controversy offers unique governance lessons for corporate leadership, entrepreneurs, and other promoters of technology-related enterprises – especially those that seek to combine altruistic and investment objectives.

ENDNOTES

[1] See, e.g., https://www.axios.com/2024/01/15/chatgpt-openai-2024-elections; https://www.dailymail.co.uk/news/article-12974603/OpenAI-CEO-Sam-Altman-new-technology-uncomfortable.html

[2] https://openai.com/our-structure

[3] https://openai.com/our-structure. See also https://lawprofessors.typepad.com/nonprofit/2024/01/the-openai-corporate-structure.html; https://www.philanthropy.com/article/how-openais-nonprofit-corporate-structure-fueled-the-tumult-around-ceo-sam-altmans-short-lived-ouster

[4] See, e.g., https://www.effectivealtruism.org/; https://www.economist.com/special-report/2024/01/10/the-effective-altruism-movement-is-louder-than-it-is-large

[5] https://openai.com/blog/openai-announces-leadership-transition

[6] See, e.g., https://www.forbes.com/sites/michaelperegrine/2024/01/09/its-never-too-late-to-detour-from-the-wrong-ethical-route/?sh=428b7026707a

[7] In re McDonald’s Corp. Stockholder Derivative Litig., C.A. No. 2021-0324-JTL (Del. Ch. Jan. 25, 2023); In re McDonald’s Corp. Stockholder Derivative Litig., C.A. No. 2021-0324-JTL (Del. Ch. Mar. 1, 2023).

[8] https://openai.com/blog/planning-for-agi-and-beyond; see also https://hbr.org/2023/11/openais-failed-experiment-in-governance

[9] https://www.nytimes.com/2023/12/09/technology/openai-altman-inside-crisis.html?smid=nytcore-ios-share

[10] https://news.bloomberglaw.com/business-and-practice/openai-probe-is-wilmerhale-test-after-college-president-debacle

This post comes to us from Michael W. Peregrine and Robert C. Louthian III, attorneys at McDermott Will & Emery LLP, and Charles Elson, founding director of the Weinberg Center for Corporate Governance and Woolard Chair in Corporate Governance (retired) at the University of Delaware.