
Why the Organizational Form of Corporations Matters for AI Governance

As Elon Musk’s lawsuit against OpenAI advances to trial in April 2026, the case offers more than courtroom drama – it provides a critical test of whether hybrid corporate structures can sustain public-interest commitments in the face of extraordinary commercial pressures. With U.S. District Judge Yvonne Gonzalez Rogers indicating during a hearing that there is “plenty of evidence” to support Musk’s fraud claims, and with damages potentially reaching $134 billion, the litigation exposes fundamental tensions in how we govern institutions that produce knowledge infrastructure.

In a new article, we provide context for the litigation. The article examines how corporate organizational form shapes the capacity of generative AI companies to serve the public good. The Musk-OpenAI case highlights our central argument: Hybrid models combining for-profit incentives with nonprofit oversight suffer from structural incoherence, inheriting the limitations of both organizational forms while securing the strengths of neither.

The Charitable Trust at the Heart of the Dispute

Musk’s surviving legal claims center on breach of charitable trust – the allegation that his approximately $38 million in seed funding was donated with specific charitable conditions that OpenAI violated when it pivoted toward a for-profit structure and an exclusive partnership with Microsoft. Judge Rogers found this claim viable based on Musk’s contention – disputed but triable – that his contributions to OpenAI “had a specific charitable purpose and that he attached two fundamental terms to it: that OpenAI be open-source and that it would remain nonprofit – purposes consistent with OpenAI’s charter and mission.”

The lawsuit’s evidentiary foundation includes Greg Brockman’s November 2017 diary entry: “[C]annot say that we are committed to the non-profit. don’t wanna say that we’re committed. If three months later we’re doing b-corp then it was a lie.” This private acknowledgment reveals what our article identifies as a defining feature of hybrid structures: internal recognition that dual commitments are fundamentally incompatible, even as public rhetoric maintains they can be reconciled.

Hybrid Structures as Governance Theatre

OpenAI’s organizational evolution exemplifies the hybrid model’s structural weaknesses. Founded in 2015 as a Delaware nonprofit with assets “irrevocably dedicated” to charitable purposes, the organization created its “capped-profit” subsidiary in 2019 to attract capital while ostensibly maintaining mission fidelity. Following its October 2025 restructuring, the OpenAI Foundation retains approximately 26% equity (roughly $130 billion) and formal control over the for-profit entity, now valued at $500 billion, through overlapping board structures.

This configuration embodies what we term “dual incoherence.” The nonprofit lacks the accountability mechanisms that discipline traditional charitable institutions: transparent governance, enforceable mission constraints, and community-oriented boards independent of commercial interests. Meanwhile, the for-profit subsidiary escapes the market discipline that constrains traditional corporations: shareholder primacy, hostile takeover threats, and fiduciary duties running clearly to investors.

The result, as we argue, is not the best of both worlds but organizational opacity and symbolic ethics. The nonprofit provides reputational legitimacy while the for-profit structure enables capital mobilization and competitive strategy, yet neither governance regime effectively constrains the other. As Lucian Bebchuk and Roberto Tallarita have demonstrated in the ESG context, injecting purpose into corporate governance through soft mechanisms does not replicate the binding structural constraints of genuinely mission-driven entities.

The Knowledge Production Problem

Our article frames generative AI systems as epistemic infrastructure: institutions that shape how knowledge is produced, validated, and disseminated. This framing helps explain why OpenAI’s organizational form matters beyond investor returns or competitive dynamics. When GenAI models function as authoritative sources for legal research, medical advice, educational instruction, and public understanding, their institutional architecture becomes a matter of urgent concern.

Knowledge production, as Kenneth Arrow established, exhibits public-good characteristics: It is non-rivalrous, non-excludable, and vital to collective welfare. Historically, societies developed specialized institutions – universities, research institutes, public libraries – governed by norms prioritizing the integrity, transparency, and availability of knowledge over profit maximization. These institutions operate under what Cathy Hwang and Dorothy Lund call the “non-distribution constraint”: legal prohibitions on distributing surplus revenues to private actors, which structurally support long-term, mission-driven governance even under financial pressure.

The hybrid model attempts to produce public knowledge through organizations whose fundamental legal architecture prioritizes private wealth accumulation. OpenAI’s partnership with Microsoft – a partner that exercises substantial operational influence – illustrates the tension. Microsoft’s fiduciary duties run to its shareholders; OpenAI’s nonprofit board ostensibly serves the public interest. Yet when these duties conflict, which governance regime prevails?

The Altman Reinstatement as Case Study

The November 2023 crisis surrounding Sam Altman’s brief removal and rapid reinstatement provides empirical support for our theoretical claims. The nonprofit board, exercising its formal authority, terminated Altman, reportedly over concerns about his approach to AI safety. Microsoft, with billions of dollars and exclusive technology at stake, immediately pressed for his return. Within five days, Altman was reinstated under a restructured board, while directors who had supported his removal were dismissed.

As we argue in our article, this episode reveals the illusory nature of nonprofit control in hybrid structures. The market logic of investor expectations, employee retention, and competitive positioning overwhelmed the nonprofit’s governance authority. Chen Wang’s analysis of “superstar CEOs” helps explain this dynamic: Leaders like Altman gain power through non-institutional channels – investor relationships, media influence, employee loyalty – that formal governance structures cannot effectively constrain.

The incident also demonstrates what we call “fiduciary fragmentation.” Directors appointed to represent the public interest lack shareholder backing, financial leverage, or organizational resources. Those aligned with investor priorities, while perhaps formally subordinate to nonprofit oversight, control operational decisions, strategic planning, and resource allocation. The symbolic power of nonprofit governance provides legitimacy without substantive constraint.

Regulatory Arbitrage and the Conversion Question

Musk’s lawsuit arrives as OpenAI navigates state regulatory approval for its restructuring – a process that has attracted unprecedented third-party scrutiny. Even before these recent developments, Public Citizen argued that, under California charitable trust law, OpenAI should pay a minimum $30 billion “control premium” to independent charitable organizations, analogizing to the 1990s Blue Cross of California conversion, which required over $3.2 billion in charitable distributions. Furthermore, 12 former OpenAI employees, represented by Harvard Law Professor Lawrence Lessig, filed an amicus brief arguing that removing nonprofit control would “fundamentally violate its mission” and create pressure to “cut corners” on safety. However, in October 2025, Delaware Attorney General Kathy Jennings issued a Statement of No Objection to OpenAI’s proposed recapitalization, subject to her office’s understanding that the nonprofit OpenAI will retain control and oversight of the newly formed public benefit corporation (PBC), including the sole power and authority to appoint the members of the PBC’s board of directors, and that the PBC will publish the “OpenAI Charter,” which describes the principles the PBC will use to execute the mission.

These interventions highlight what our article identifies as regulatory arbitrage: Hybrid structures leverage the symbolic capital of nonprofit status while operating according to for-profit logic, obscuring deep entanglements with monopolistic capital. The nonprofit form provides tax advantages, reputational benefits, and reduced regulatory scrutiny, creating information asymmetries that erode public trust and enable the private exploitation of charitable resources.

The Path Dependency Problem

Our article emphasizes that organizational form creates path dependency. Once hybrid entities achieve massive scale – OpenAI’s $500 billion valuation, Microsoft’s embedded position, thousands of employees – the practical options for governance reform narrow dramatically. Dismantling these structures becomes economically and politically difficult, even when their dysfunction becomes evident.

The Musk litigation illustrates this dynamic. OpenAI argues that for-profit options were “openly discussed” as early as 2017, that nonprofit constraints proved incompatible with capital requirements, and that the restructuring represents legitimate institutional evolution rather than betrayal of founding commitments. From this perspective, the hybrid model’s failure was inevitable – a predictable result of attempting to sustain charitable purposes while competing with Google, Microsoft, and Meta in frontier AI development.

Yet this defense proves our point: If the hybrid structure’s commercial transformation was foreseeable, then the nonprofit framing was misleading from the start. Either the charitable commitments were genuine – in which case the pivot constitutes breach of trust – or they were always subordinate to commercial ambitions, making the nonprofit structure a governance facade.

Implications for AI Governance

The approaching trial will likely turn on witness credibility – whether jurors believe that Sam Altman, Greg Brockman, and other OpenAI leaders made enforceable commitments to Musk about maintaining nonprofit status and open-source principles. But the broader implications extend beyond contractual interpretation to fundamental questions about institutional design for AI governance.

Our article argues that preserving democratic legitimacy and the integrity of knowledge production in GenAI systems requires legal innovation in organizational form. Hybrid structures do not solve the tension between mission and market; they obscure it. Policymakers face a choice: Either strengthen nonprofit constraints to make them genuinely binding (through enforceable mission provisions, independent governance, and regulatory oversight), or acknowledge that for-profit structures will dominate and regulate them accordingly through antitrust enforcement, transparency requirements, and safety mandates.

The Swiss initiative “Apertus” – a fully open-source large language model developed by EPFL, ETH Zurich, and the Swiss National Supercomputing Centre and explicitly framed as public infrastructure – demonstrates an alternative path. By maintaining genuine nonprofit governance, transparent development processes, and a commitment to linguistic diversity and legal compliance, Apertus embodies the institutional architecture our article recommends for knowledge-producing AI systems.

Conclusion: The Hybrid Model on Trial

The trial of Musk v. OpenAI will test more than fraud allegations or contractual obligations. It will examine whether hybrid organizations can sustain public-interest commitments at the scale of contemporary AI development. Our article’s theoretical framework predicts they cannot: The structural incoherence of combining nonprofit oversight with for-profit incentives produces merely symbolic governance, fiduciary fragmentation, and regulatory arbitrage rather than genuine accountability.

Judge Rogers’s refusal to dismiss Musk’s claims suggests the court recognizes these stakes. The case offers an opportunity to clarify the legal obligations that accompany nonprofit formation, the enforceability of charitable commitments, and the standards for converting mission-driven institutions into for-profit entities. Whatever the jury decides about Musk’s allegations, the litigation has already exposed the hybrid model’s inherent instability – a governance structure that aspires to serve the public while remaining bound to for-profit logic.

Moran Ofir is a professor at the Haifa University Faculty of Law, and Ronit Levine-Schnur is a senior lecturer at the Tel Aviv University Faculty of Law. This post is based on their forthcoming article, “GenAI Models and the Hybrid Governance Trap,” available here.
