The European Union’s Artificial Intelligence Act is now law. Some provisions are already in effect, but others are still being finalized, so boards are being asked to take digital governance seriously while the legal floor under their feet is still moving.
The question for directors is no longer whether artificial intelligence and data systems matter. The challenge is how to exercise real oversight without turning the board into a technology department or drifting into what many practitioners describe as artificial intelligence theater. Corporate law still expects boards to govern systems, information flows, and allocations of responsibility, not to design models. Recent posts on this blog, here, here, and here, have mapped the broader paradigm shift and its doctrinal implications. I take a narrower, operational angle and ask what a minimal digital-governance architecture for boards would look like at companies subject to the AI Act, and how that architecture can respect the boundary between oversight and management.
Act in Force, Rulebook Still Changing
The AI Act classifies artificial intelligence systems by risk level and imposes obligations in stages. Bans on practices that are considered unacceptable, such as certain forms of social scoring and intrusive emotion recognition, have applied since early 2025. Requirements for providers of general-purpose models and related transparency duties have applied since August 2025. High-risk obligations are due to take effect in steps in 2026 and 2027, and the European Commission’s implementation overview sets out the timetable. At the same time, the Commission has proposed a Digital Omnibus package that would simplify parts of the wider digital rulebook and could extend some high-risk deadlines. Boards are therefore designing digital governance systems in the middle of a staggered rollout.
From Cyber and ESG to Digital Governance
Over the past two decades, boards have seen certain specialized issues become routine areas for oversight. After accounting scandals, audit committees gained explicit responsibility for internal control over financial reporting. After the global financial crisis, many large issuers created risk committees and gave more structure to enterprise risk management. Climate and ESG oversight, which began as a matter for investor relations, has since been written into the formal responsibilities of boards and their committees, helped by frameworks such as the Task Force on Climate-related Financial Disclosures and later European sustainability reporting rules.
The pattern is familiar. A technical topic starts in management, becomes a focus of regulators and investors, and eventually lands in a board’s remit. Digital governance has followed this path and become a fourth area for board oversight that cuts across audit, risk, and ESG. It covers artificial intelligence, data, and critical digital systems. It intersects with cybersecurity and operational resilience but is not reducible to either.
Oversight, Management, and What the Act Changes
Corporate law already draws a line between oversight and management. In Delaware, the duty of oversight associated with the Caremark line of cases requires directors to make a good faith effort to implement and monitor systems of reporting and control. It does not protect directors who sustainedly fail to install any system capable of bringing critical risks to the board’s attention. Although Caremark is a U.S. doctrine, its basic logic about board oversight of risk systems has influenced how lawyers and policymakers think about board duties in other jurisdictions, including Europe.
Artificial intelligence complicates this division. Many systems are embedded in everyday processes, and their failures can cut across existing areas of risk. Their internal workings can be hard to explain even to technically literate executives, which makes it harder for directors to show that they were reasonably informed when they relied on those systems or on management’s assurances about them. The temptation is to respond by pulling more detailed technical decisions into the boardroom. That response is not what the law expects. Boards are not meant to approve individual models, but to ensure that the systems by which models are designed, tested, deployed, and retired are robust, monitored, and aligned with the company’s risk appetite and legal obligations.
The AI Act does not rewrite the duty of oversight, but it does specify what a reasonable system of digital governance should contain. Providers of high-risk systems must implement risk management, data and data governance practices, technical documentation, human oversight measures, and post-market monitoring, and deployers of those systems have complementary duties around use, oversight, and monitoring. Providers of general-purpose models face their own transparency and governance obligations. Together with other digital regimes, these rules set expectations for what an internal governance system should look like. Firms must classify their artificial intelligence systems against the act’s risk tiers and maintain an inventory tied to business functions. They must monitor systems once they are in use and report serious incidents to authorities, using existing systems for incident management, whistleblowing, and regulatory engagement rather than building entirely separate ones.
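To make the inventory point concrete, the sketch below shows one way such a register could be expressed in code. It is illustrative only and is not drawn from the act or any standard: the RiskTier labels, the AISystemRecord fields, and the exposure_by_tier helper are assumptions made for the example.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class RiskTier(Enum):
    """Simplified mirror of the AI Act's risk categories; labels are illustrative."""
    PROHIBITED = "prohibited"
    HIGH_RISK = "high_risk"
    LIMITED_RISK = "limited_risk"    # transparency obligations only
    MINIMAL_RISK = "minimal_risk"


@dataclass
class AISystemRecord:
    """One entry in an internal AI system inventory (field names are illustrative)."""
    system_id: str
    name: str
    business_function: str           # e.g. "credit underwriting", "recruitment screening"
    role: str                        # "provider" or "deployer" under the act
    risk_tier: RiskTier
    accountable_executive: str
    last_reviewed: date
    open_incidents: int = 0
    notes: list[str] = field(default_factory=list)


def exposure_by_tier(inventory: list[AISystemRecord]) -> dict[str, int]:
    """Aggregate the inventory into the kind of exposure view a committee might receive."""
    counts: dict[str, int] = {}
    for record in inventory:
        counts[record.risk_tier.value] = counts.get(record.risk_tier.value, 0) + 1
    return counts
```

The substance lies in keeping one record per system, tied to a business function and an accountable owner, so that board-level views are aggregations of the inventory rather than separate documents.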
A Minimal Digital-Governance Architecture for Boards
For a listed company with meaningful exposure to the AI Act, a minimal digital-governance architecture for the board has four elements.
Committee ownership. Boards need to decide where digital governance sits. In some firms, the audit committee takes the lead because internal control and assurance are central. In others, the risk committee is a better home because artificial-intelligence risk is one of a broad range of risks. Some companies in highly digital sectors have a dedicated technology or digital committee. Whatever the structure, at least one committee should state plainly that artificial intelligence, data, and critical digital systems form part of its remit and explain how that remit connects to cyber and ESG oversight.
Executive accountability. On the management side, firms are experimenting with chief artificial intelligence officers, chief data officers, and other digital leadership roles. From a board perspective, the questions are simple. Is there a senior executive who is clearly accountable for the artificial intelligence and data governance framework as a whole? How is that person’s responsibility integrated with risk, compliance, internal audit, information security, and data protection? Does that executive appear regularly before the relevant committee with a report, rather than only when there is a new initiative to showcase?
Information architecture. Directors cannot and should not track the performance of every model, but they need a consistent view of exposure, controls, and incidents. A workable digital-governance dashboard might include three types of indicators. Exposure indicators cover the number of artificial intelligence systems in use, organized by risk tier and business function. Control indicators cover the status of alignment with the NIST Artificial Intelligence Risk Management Framework and its Generative AI Profile, or progress toward adoption of governance standards such as ISO/IEC 42001, the first international artificial intelligence management system standard. Incident indicators cover the number of significant artificial-intelligence-related events and how lessons learned have been translated into control changes.
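As an illustration, and not a template any framework prescribes, the three indicator families could be collected in a structure along the following lines; the DigitalGovernanceDashboard fields and the headline helper are assumptions made for the sketch.

```python
from dataclasses import dataclass


@dataclass
class DigitalGovernanceDashboard:
    """Quarterly board dashboard; every field here is an illustrative assumption."""
    # Exposure indicators
    systems_in_use: int
    high_risk_systems: int
    systems_by_function: dict[str, int]
    # Control indicators
    nist_ai_rmf_alignment_pct: float    # self-assessed alignment, 0-100
    iso_42001_adoption_status: str      # e.g. "gap analysis", "implementing", "certified"
    overdue_control_actions: int
    # Incident indicators
    significant_incidents_this_quarter: int
    incidents_escalated_to_committee: int
    control_changes_from_lessons_learned: int


def headline(d: DigitalGovernanceDashboard) -> str:
    """One-line summary a committee chair might read first."""
    return (
        f"{d.systems_in_use} AI systems in use ({d.high_risk_systems} high-risk), "
        f"{d.significant_incidents_this_quarter} significant incidents this quarter, "
        f"{d.overdue_control_actions} overdue control actions"
    )
```

The point of the sketch is the shape of the reporting, not the particular fields: exposure, controls, and incidents appear side by side, in the same document, every quarter.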
Escalation thresholds. Boards should expect management to define thresholds for what counts as a digital incident that must be escalated to the relevant committee. Examples might include any artificial-intelligence-related event that triggers notification under the AI Act, any incident with clear implications for fundamental rights or safety, and any major investigation or information request from a regulator focused on artificial intelligence systems. The board does not need to approve each threshold, but it should test whether the escalation regime exists, is known in practice, and is consistent with the firm’s appetite for digital risk and with the requirements of the act.
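Purely as a sketch, an escalation regime of this kind can be written down as a simple rule over an incident record; the Incident fields and the must_escalate_to_committee function below are hypothetical and stand in for thresholds that management would actually define and the board would test.

```python
from dataclasses import dataclass


@dataclass
class Incident:
    """Minimal incident record; the fields are assumptions made for this sketch."""
    description: str
    reportable_under_ai_act: bool      # e.g. a serious incident notifiable to authorities
    affects_fundamental_rights: bool
    affects_safety: bool
    regulator_inquiry: bool            # a major investigation or information request


def must_escalate_to_committee(incident: Incident) -> bool:
    """Example thresholds only; a real regime is set by management and tested by the board."""
    return (
        incident.reportable_under_ai_act
        or incident.affects_fundamental_rights
        or incident.affects_safety
        or incident.regulator_inquiry
    )
```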
Avoiding Extremes and Looking Ahead
As digital governance gains prominence, it is easy for boards to slide toward one of two extremes. At one extreme, directors attempt to interrogate the internal workings of particular models, which adds little to oversight and blurs the line with management. At the other extreme, boards receive polished presentations on artificial intelligence strategy and ethics, but little concrete information about inventories, incidents, or control gaps.
A more sustainable approach treats digital governance the way mature boards treat financial reporting or climate risk. Directors focus on systems, responsibilities, and assurance, not on individual decisions. They ask whether internal audit and external assurance providers are looking at artificial intelligence controls. They seek independent views on how their governance compares with that of peers and ensure that significant incidents lead to visible changes in the governance system.
By the time high-risk obligations are fully in force, it is reasonable to expect that investors, regulators, and courts will look for signs that boards have treated digital governance as a settled part of their work. Boards that reach that point will not have done so by turning directors into algorithm designers. They will have done it by applying familiar governance principles to a new domain in a way that fits the evolving rulebook.
Kostakis Bouzoukas is a principal technology leader and founder of Breakthrough Pursuit, a platform focused on AI governance and digital trust. He writes in a personal capacity.