Boards Need to Step Up on AI

On April 7, the Federal Reserve chair and U.S. Treasury secretary called an emergency meeting with America’s top bank CEOs. The reason: an AI model capable of autonomously hacking major corporations, finding thousands of software vulnerabilities no human ever caught, and breaking out of its own testing environment. It even sent an unsolicited email to a researcher while he was eating lunch.

This isn’t a future risk. It exists today. And too many of the people responsible for governing the companies most exposed to it are sitting in boardrooms without a plan.

Three years ago, I started warning that there are no adults in the room on AI, that the companies building these systems answer to almost no one, and that boards of the companies being transformed by AI don’t have the tools to govern it. The numbers confirm the danger. Two-thirds of directors say their boards don’t know enough about AI (EY, 2025). Only 26% discuss it at every board meeting (Protiviti/BoardProspects, 2026), and just 27% have formally added AI governance to their committee charters (NACD, 2025). The Conference Board reports that the share of large-cap public companies disclosing AI as a material risk jumped from 12% in 2023 to 83% in 2025, while only 23% of directors describe themselves as fluent in it (Conference Board, 2026).

The fluency gap is now embedded in SEC filings. The EU AI Act applies to any company whose AI touches the EU market, and directors face personal liability if oversight is found lacking. Yet the act regulates systems, not the boardrooms that govern them. The United States provides no federal AI governance framework, which makes board-level governance not optional but the primary line of defense.

In a recent working paper, we offer a practitioner-focused framework designed to close that gap.

Governance Fails in Two Directions

In our advisory work, we see two types of flawed boards worldwide. The “clueless board” has never seriously discussed AI, delegates everything to the chief technology officer, and gets a sanitized two-page update once a quarter. Investment decisions get approved without scrutiny or deferred indefinitely. What follows is value leakage: scattered experimentation, incoherent investment, and competitors capturing the value that incumbents leave on the table. Though perhaps invisible in the short term, the consequences can be devastating over time.

Then there is the “FOMO board,” which chases every AI opportunity because competitors are chasing it too, pushing for rapid deployment before controls, data infrastructure, and the operating model are ready. What follows is value destruction: algorithmic discrimination, misleading AI claims, data breaches from unapproved tools, and regulatory action. Under Delaware’s Caremark standard, boards have a fiduciary duty to implement and monitor reporting systems for critical risks. AI is rapidly becoming such a risk. While the consequences of a clueless board can be invisible, at least at first, those of the FOMO board generate headlines and lawsuits.

Most boards fall somewhere between the two, part clueless and part FOMO, moving too slowly to capture value and too carelessly to manage risk. And their behavior can raise Caremark concerns, with clueless boards failing the duty to implement a reporting system and FOMO boards failing the duty to monitor that system once it exists.

Nora Denzel, lead independent director at AMD and a director at Sony Group, Gen Digital, and NACD, coined a useful term for this: “I call it vibe governance. We have a policy, we follow a framework, we train our employees, we bought a tool. It’s reassuring. But who owns the outcome, what controls are in place, and what evidence shows they work?”

Five Responsibilities, None of Them New

AI governance doesn’t require inventing new board duties. It requires applying established duties under changed conditions. Drawing on the UK Corporate Governance Code, the G20/OECD Principles, and U.S. governance doctrine, we organize the board’s AI-related work into five responsibilities: (1) purpose, ethics, and compliance; (2) business model and strategy; (3) assets, capabilities, and capital allocation; (4) risk profile; and (5) leadership selection, evaluation, and succession.

We bundle purpose, ethics, and compliance because AI uniquely widens the gap between what is legal, what is operationally feasible, and what is consistent with the firm’s purpose. We separate assets and capabilities from strategy because in AI, execution constraints—data quality, process standardization, talent—are often decisive, even when strategic direction is sound.

The STAR Framework

Boards can’t review each responsibility from scratch at every meeting. They need a small set of recurring questions that cut across all five and can be applied consistently quarter after quarter. We developed the STAR framework for this purpose.

S: Shareholder Value Thesis. Where exactly will AI create or destroy value, who will be responsible for the outcome, and under what conditions do we stop or scale back? AI investments should face the same discipline as any other capital allocation decision.

T: Threat Parity. Are our defenses evolving as fast as AI-powered threats? After Mythos Preview—the AI agent that autonomously breached its own testing environment—this isn’t theoretical. Deepfake fraud, automated vulnerability exploitation, and attacks targeting AI systems themselves are no longer rare. If the board hasn’t asked whether security governance keeps pace with the threat environment, the company is exposed.

A: Ability. Can we actually execute? Do we have the data quality, process readiness, and talent to move beyond pilots? Too many organizations buy licenses, launch pilots, and declare victory. Real adoption means AI embedded in redesigned workflows, not bolted onto broken processes.

R: Risk Budget. Have we explicitly defined where AI risk is acceptable, where it isn’t, and who is accountable when something goes wrong? The best-governed organizations treat AI risk like a portfolio: explicit green, yellow, and red lanes, clear no-go zones, and a named executive accountable for every high-impact system.
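To make the risk-budget idea concrete, here is a minimal sketch of how the lanes, no-go zones, and named owners could be captured in a machine-readable register. The use cases, owners, and lane assignments are hypothetical illustrations, not part of the framework itself.

```python
from dataclasses import dataclass
from enum import Enum

class Lane(Enum):
    GREEN = "pre-approved, standard controls"
    YELLOW = "case-by-case review, enhanced controls"
    RED = "no-go zone"

@dataclass(frozen=True)
class AIUseCase:
    name: str
    lane: Lane
    accountable_executive: str  # the named owner the R in STAR requires

# Hypothetical entries for illustration only.
RISK_REGISTER = [
    AIUseCase("marketing copy drafting", Lane.GREEN, "CMO"),
    AIUseCase("credit underwriting assist", Lane.YELLOW, "CRO"),
    AIUseCase("fully automated hiring decisions", Lane.RED, "CHRO"),
]

def check_deployment(use_case_name: str) -> str:
    """Return the governance decision for a proposed deployment."""
    for uc in RISK_REGISTER:
        if uc.name == use_case_name:
            if uc.lane is Lane.RED:
                return f"BLOCKED: '{uc.name}' is in a board-designated no-go zone."
            return f"ALLOWED ({uc.lane.name}): accountable executive is {uc.accountable_executive}."
    return "ESCALATE: use case not in the register; board review required."
```

The point is not the code but the discipline it enforces: every use case has a lane, every lane has rules, and nothing ships without a name attached.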

Both types of flawed boards map directly onto STAR. The clueless board is a failure of S and A: no value thesis, no honest assessment of organizational readiness. The FOMO board is a failure of T and R: deployment without controls, no risk tiers, no one accountable. STAR ensures neither goes undetected.

Here’s the key insight: STAR treats risk as a portfolio to manage, not a danger to be eliminated. The question isn’t “is AI risky?” but “can we manage its risks well enough to capture the value?” Think of it like credit risk in banking. Good banks don’t reject all loans. They price risk correctly and hold appropriate reserves. Good AI governance doesn’t reject all use cases. It assesses the dangers, applies proportionate controls, and keeps decision receipts.

Making Governance Operational

Frameworks are only useful if someone is responsible for them. We recommend distributing AI governance across existing committees rather than concentrating it in a single new one. Risk committees deal with controls and risk appetite. Audit committees handle assurance and disclosure accuracy. Human capital committees address workforce impact and leadership readiness. And strategy, including the AI value thesis itself, stays with the full board.

Each STAR question maps onto a few quarterly indicators with clear escalation rules. Few boards receive this kind of reporting today, but those that govern AI well will demand it.

For Shareholder Value Thesis: where AI is creating measurable value and where it isn’t. Escalate when spending rises but impact stays flat.

For Threat Parity: how well controls keep pace with AI-powered threats, and where unauthorized AI use is appearing inside the company. Escalate when a high-risk system shows a control gap.

For Ability: whether the organization can actually execute, including data quality, talent, and how deeply AI is being used. Escalate when rollouts stall.

For Risk Budget: which AI use cases the board has approved, where overrides are happening, and any incidents. Escalate when AI is deployed in a use case the board has placed off limits.
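As a rough illustration of how such reporting could be wired into a quarterly dashboard, the sketch below encodes one escalation rule per STAR letter against a set of indicators. The field names and thresholds are assumptions chosen for the example, not prescriptions from the framework.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class QuarterlyIndicators:
    ai_spend_growth_pct: float    # S: quarter-over-quarter growth in AI spending
    ai_value_growth_pct: float    # S: growth in measured value impact
    high_risk_control_gaps: int   # T: open control gaps on high-risk systems
    stalled_rollouts: int         # A: deployments stuck at the pilot stage
    off_limits_deployments: int   # R: AI running in board-designated no-go zones

# One illustrative escalation rule per STAR letter; thresholds are assumptions.
ESCALATION_RULES: list[tuple[str, Callable[[QuarterlyIndicators], bool]]] = [
    ("S", lambda q: q.ai_spend_growth_pct > 10 and q.ai_value_growth_pct <= 0),
    ("T", lambda q: q.high_risk_control_gaps > 0),
    ("A", lambda q: q.stalled_rollouts > 0),
    ("R", lambda q: q.off_limits_deployments > 0),
]

def board_escalations(q: QuarterlyIndicators) -> list[str]:
    """Return the STAR letters whose escalation rule fired this quarter."""
    return [letter for letter, rule in ESCALATION_RULES if rule(q)]
```

A quarterly pack built this way asks directors the same four questions in the same order every quarter, with an unambiguous trigger for when an item moves from the report to the agenda.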

Governance Isn’t a Brake but Power Steering

Boards that treat AI purely as a compliance exercise may watch their companies become irrelevant. The real competitive threat isn’t that AI will go wrong. It’s that competitors will get AI right faster. Boards need to equip their companies to move as quickly as possible, with guardrails that enable speed rather than prevent it.

Some boards freeze because they feel they don’t know enough about AI to push back on management. Others wave through AI initiatives because nobody wants to be the person who slows things down. Both are failures of governance. Both destroy value.

An AI model capable of bringing down a major corporation isn’t a future scenario. It exists today. The question is whether the people in the boardroom are equipped to deal with what comes next. The boards that govern AI well won’t necessarily have the most sophisticated technology committees. They’ll ask the right questions, insist on evidence, and hold management accountable. From principles to proof. Power steering, not a brake.

Robert Maciejko is the founder of the Board AI Institute, a member of the INSEAD AI Advisory Group, and co-founder of INSEAD AI. Henk S. de Jong is an executive fellow at IESE/ISE/AESE Business School and a board member and former CEO of Versuni/Philips. Sampsa Samila is a professor of strategic management and AI at IESE Business School. Christoph Wollersheim is co-lead of the AI practice in the U.S. at Egon Zehnder. This post is based on their recent article, “Power Steering, Not a Brake: How Boards Should Actually Govern AI,” available here.
