The Argument for Strong Board Oversight of Artificial Intelligence

Corporate governance can play an important role in a company’s approach to artificial intelligence (“AI”) and other machine learning technologies, and in the mitigation of risks associated with their use.

A thoughtfully developed governance structure for AI will reflect oversight, decision-making, and compliance protocols consistent with core fiduciary responsibilities. It will supplement the AI supervision already taking place at the operational level. And it will help communicate to corporate constituents how AI is being used – responsibly – in the company’s operations and delivery of goods and services.

Yet in the absence of any AI regulatory framework, and of any accepted best practices for AI oversight, many boards are uncertain about the role governance may perform with respect to AI matters. This uncertainty has been compounded by a lack of internal and external appreciation for the contributions governance can make to addressing AI risks and shaping AI strategy.

The concern is that, without deliberate progress towards a formal governance role for these emerging technologies, AI implementation may soon overwhelm the company’s ability to effectively monitor its use across the organization. That could quickly become a liability problem for both the company and its directors, not to mention the company’s consumers.

These problems can be avoided in large part through (i) greater awareness of the fundamental AI-related role of corporate governance (“The Why”), (ii) recognition of the current challenges to board oversight of AI (“The Regulation” and “The Resistance”), and (iii) the adoption of an interim role for governance in advance of specific fiduciary principles and best practices for that role (“A Pathway Forward”).

The Why

There is not, of course, any statutory or judicial mandate that boards adopt particular fiduciary protocols for use of AI.  Neither have any of the leading business trade, leadership, or policy organizations introduced specific guidelines or recommendations concerning the board’s approach to AI.  And certainly there are no governance principles that advocate for formal board oversight of every individual business line, strategy, or technology implemented by a corporation; that would result in an unsupportable board bureaucracy.

Rather, the mandate for specific board treatment of the company’s investment in, and implementation of, AI lies in the technology’s unique combination of promise, complexity, and especially risk. This combination includes:

  • (i) the extraordinary value-creation opportunities within the company’s business model presented by AI;
  • (ii) the complexity (and often opacity) of the underlying technology, especially when compared with existing data and information systems currently applied or under development by the company;
  • (iii) the social, legal, compliance, ethical, and privacy issues that AI use presents – as well as its implications for international competitiveness and national security;
  • (iv) the trust and safety-related concerns AI generates for both consumers and employees;
  • (v) the impact of data-intensive AI development, training, and deployment activities on an organization’s carbon footprint and sustainability commitments;
  • (vi) the inevitability of AI-focused government regulation, the uncertainty of its possible scope, and the need for related corporate legal and compliance mechanisms;
  • (vii) the appropriate extent of corporate resources committed to AI operations;
  • (viii) the prudence of the company’s AI strategy and of its “AI governance” process; and
  • (ix) the ability of the company to effectively manage the risks if the technology goes wrong.

Of these factors, the most compelling from a board oversight perspective, and perhaps the most enduring, is the set of risks associated with enterprise use of AI.

From a theoretical perspective, it would be irresponsible to ignore the risk implications of a technology that some have described as capable of posing an existential threat to humanity, on a par with other societal-scale risks.

From a more practical perspective, it is also difficult to ignore media reports of notable AI incidents resulting in tangible harms: e.g., biased hiring and search algorithms; rogue advice to patients from a mental health chatbot; routine denial of care based on a popular proprietary Medicare Advantage algorithm; and AI-generated diagnoses so incorrect that doctors must override them.

The Regulation

AI-focused enforcement and emerging federal and state regulation are adding to the pressure on corporate boards to develop some form of AI monitoring system.

Efforts at the federal level include the Biden Administration’s 2022 non-binding “Blueprint for an AI Bill of Rights”; the Federal Trade Commission’s guidance on (and investigations into) unfair and deceptive trade practices related to AI; requests for comment from several federal agencies (e.g., USPTO, NTIA, HHS ONC) on AI accountability and regulation; congressional hearings and bipartisan efforts toward comprehensive AI legislation; and voluntary safeguards on AI development agreed to by leading U.S. technology companies.

Also informative are emerging state and local efforts to regulate AI, including increased interest from state attorneys general in securing enforcement authority concurrent with any federal regulatory regime, as well as dozens of state-level legislative proposals focused on the use of AI in various contexts.

The Resistance

Then there is the resistance to board oversight emerging from corporate leaders themselves, including technology executives, researchers, scientists, and developers who believe that any kind of corporate monitoring system will needlessly frustrate innovation and erode competitive advantage.

It also includes the rise of what is (somewhat confusingly) referred to as “AI Governance”; i.e., an internal operational framework designed to establish and implement policies, procedures, and standards for the proper development, use, and management of AI and machine learning algorithms. While a laudable process, it must not be confused with, or treated as a substitute for, actual governance by the organization’s board of directors.

A Pathway Forward

A platform from which boards may develop their own (at least interim) structure for AI oversight might reasonably address the following:

The Board’s Basic Role.  This begins with a shared internal appreciation for the differing roles of the board and management as they relate to AI.

As noted above, the board’s duties and responsibilities would likely involve monitoring, risk management, basic policy formation, and decision-making as to key acquisition and implementation strategies. Management’s roles would include the development of AI strategy for board approval; acquisition and implementation; legal and compliance measures; supervision of key risk areas; operational leadership; and staffing. This would include the operational tasks that are becoming known as “AI Governance.”

Board-Level Expertise.  The nominating or governance committee should identify candidates with AI expertise who would be qualified to support AI oversight, whether as a member of the board of directors or as a lay member of a committee with board-delegated powers.

Risk Profile.  The board would be well advised to work with management and the AI governance team to develop a profile of the level of risk the organization is willing to assume in connection with potential AI use cases.  The knowns/unknowns risk identification framework popularized by former U.S. Defense Secretary Donald Rumsfeld might be particularly appropriate for such a profile. Of the risks inherent in enterprise AI applications, data management, ethics, and the perils of black-box decision-making are of critical concern.

The Monitoring Platform.  This is the structural approach by which the board’s monitoring obligations are pursued; e.g., by the board as a whole; by a standing committee with board-delegated powers (e.g., an “AI Committee”); by a subcommittee of an existing board committee (e.g., an “AI Subcommittee” of the Data and Technology Committee); or by a special committee with advisory powers only.

The most appropriate structure will depend upon the specific facts and circumstances of the company, the extent of its involvement in technology in general and AI in particular, the size of the board, and the extent of AI knowledge among its members.

Information Reporting.  Essential to any governance-level monitoring of AI use is a set of reporting systems that provide key officers, and the board, with information sufficient to apprise them of AI risks. Examples can be drawn from recent Delaware decisions interpreting the Caremark standard.

Education:  A Prerequisite.  Any effective board-level AI monitoring system must contend with the significant learning curve facing board and committee members. As difficult and complex as the technical concepts may be, boards must nevertheless increase their engagement with the functions and limitations of generative AI if they are to fulfill their monitoring responsibilities.

Conclusion

The combination of the rapid development of artificial intelligence, the exceptional opportunities it provides, and the significant risks it may create is sufficient to require the attention and monitoring of the corporate governing board.

There may be no comprehensive body of AI regulation, and no published standards of board conduct, that provide specific guidance for developing a board-level AI monitoring framework.

There is, however, sufficient direction from emerging enforcement actions, Biden Administration policy, legislative proposals, agency recommendations, and Delaware case law to guide many boards in developing such a monitoring framework. Given the increasing focus on enterprise AI applications, time is of the essence for such frameworks to be implemented, even if in preliminary form.

This post comes to us from Michael W. Peregrine and Alya Sulaiman, partners at the law firm of McDermott Will & Emery LLP.

1 Comment

  1. Michael Gabriel

    AI regulation is currently even less clearly defined than cybersecurity incident reporting requirements are, or than the initial Sarbanes-Oxley guidelines once were.

    In the absence of such clarity, at a minimum, boards should request an evolving preliminary framework that first highlights AI (and ML) risks and opportunities for their company and industry, as well as the goals of any projects in motion or under consideration.

    The board can then begin to determine how best to govern this area for the company and to include it in quarterly meetings, before significant issues occur that might otherwise have been avoided.
