For more than a decade, the European Union has styled itself as the custodian of digital civilization. If Silicon Valley built the engines and Shenzhen perfected their replication, Brussels has written the rulebook. After the General Data Protection Regulation (GDPR) changed how the world thinks about data privacy, the EU has unveiled its next great legislative experiment: the Artificial Intelligence Act (“AI Act”).
At first glance, the AI Act looks like a continental matter, a European attempt to tame algorithms within its own borders. But its scope is far more ambitious: its obligations apply to any AI system that touches the European market – whether built in California, deployed in New York, or coded in Bangalore. Just as the GDPR became a global template, the AI Act will ripple outward, shaping contracts, compliance frameworks, and governance practices worldwide.
For U.S. corporations, the message is unmistakable: The future of AI governance will not be confined to technical specifications drafted in Brussels. It will matter in Delaware boardrooms, Chicago compliance offices, and Wall Street general-counsel (GC) suites.
Corporate Governance Implications: A Shift in Roles
In a recent article, I make a simple but perhaps unsettling claim: The AI Act reshapes the duties of three often-overlooked actors in corporate governance – board secretaries, compliance officers, and in-house counsel. Their work will determine whether AI governance becomes a meaningful corporate practice or remains a paper exercise.
Traditionally, board secretaries have been custodians of minutes, guardians of procedure, and facilitators of board deliberations. Under the AI Act, they will be responsible for bringing AI oversight into the boardroom. Consider a U.S. multinational deploying AI-driven credit-scoring tools in Europe. Under the AI Act, such systems are deemed high-risk and must undergo conformity assessments, risk documentation, and ongoing monitoring. Someone must ensure these requirements actually reach the ears of directors. That someone is often the secretary, whose task expands from recording what is decided to shaping what must be discussed.
Under Delaware law, directors breach their duty of loyalty if they consciously disregard “mission critical” risks, as in Marchand v. Barnhill or the Boeing litigation. By making AI risk management a statutory obligation, the AI Act essentially renders algorithmic oversight “mission critical.” The secretary thus becomes responsible for ensuring that AI disclosures, impact assessments, and audit results are regularly placed on the board’s agenda.
As for compliance officers, the AI Act assigns them responsibilities that are both sweeping and, at times, paradoxical. They must guarantee that AI systems are continuously assessed for risks, monitored for malfunctions, and documented with precision. It is the classic Catch-22 of modern regulation: accountability without control. Worse, AI systems evolve. A fraud-detection algorithm retrained overnight on new data may no longer resemble the model initially approved. Compliance officers must therefore build frameworks capable of auditing not just a product but a moving target.
For U.S. corporations, the risks are compounded. An incident report filed in Europe – a malfunction, a bias finding, a regulatory fine – does not stay in Europe. It migrates. Securities class action lawyers in New York may reframe that disclosure as a material omission under Rule 10b-5. Plaintiffs in Delaware may seize on it as evidence of a Caremark red flag. The compliance officer thus operates in a world where a report to Brussels may become an exhibit in a U.S. lawsuit.
Finally, the AI Act transforms the GC’s role from legal adviser to institutional gatekeeper. Every contractual clause with an AI vendor now matters: Who bears liability if the model discriminates? Who must provide documentation for conformity assessments? How are indemnities structured if EU regulators impose fines? These are not abstract questions. They must be drafted, negotiated, and enforced in real time. Moreover, the AI Act requires fundamental-rights impact assessments for high-risk AI. GCs must coordinate with data protection officers, HR, and technical teams to demonstrate that AI systems respect non-discrimination, privacy, and due process.
In the U.S., this resonates with the Sarbanes–Oxley Act’s conception of the lawyer’s duty to “report up” material violations. The GC must not only advise but also ensure that warnings reach the highest levels of governance. The irony is that in-house lawyers, long perceived as corporate “naysayers,” now find themselves at the heart of corporate strategy. AI compliance is not just a regulatory burden; it is a governance opportunity. By shaping internal AI frameworks, counsel can enhance investor trust, pre-empt litigation, and position the company as a leader in ethical innovation.
The Broader Lesson for U.S. Corporate Leaders and Policy Implications
For GCs and CLOs in the United States, all this means that AI is no longer just a technical problem but also a governance problem, a fiduciary problem, and ultimately, a reputational problem.
Europe’s AI Act has given familiar corporate roles new mandates: the secretary as steward of AI oversight, the compliance officer as navigator of accountability without control, and the GC as gatekeeper of fundamental rights. The AI Act also reveals the inevitability of transatlantic convergence in corporate governance. Europe regulates through statute; the United States regulates through litigation. Together, they leave corporations little room to hide.
For policymakers, the challenge is to reconcile these regimes. For corporations, the imperative is to internalize them. Embedding AI oversight into enterprise risk management, aligning disclosure practices across continents, and negotiating robust vendor contracts are no longer optional best practices.
Conclusion
The AI Act, like any ambitious legislation, remains a work in progress. Yet its significance for U.S. corporate governance is already clear: It recasts familiar roles, intensifies fiduciary duties, and merges EU regulation with U.S. liability exposure. For GCs and CLOs, this is not just a compliance exercise; it is a strategic one. The question for executives is not whether to prepare, but how quickly they can align their governance structures with a regulatory wave that will not stop at Europe’s borders.
This post comes to us from Professor Maria Lucia Passador at Bocconi University, Department of Law. It is based on her recent article, “The AI Act’s Silent Impact on Corporate Roles,” forthcoming in the Business Lawyer and available here.