
Skadden Discusses Evaluating and Managing AI Risk Using the NIST Framework

The rapid adoption of artificial intelligence (AI) technology into corporate environments has left many organizations understandably struggling with how to identify, measure and manage the unique risks of these nascent systems. Organizations are also trying to determine a pathway to build trustworthy AI systems in order to avoid the significant business and reputational risks that can arise from implementing AI systems that do not function as intended. One approach to address these issues is to adopt, in whole or in part, an AI risk framework released by the National Institute of Standards and Technology (NIST), an agency of the U.S. Department of Commerce that promotes U.S. innovation, typically through establishing standards and frameworks.

NIST designed the Artificial Intelligence Risk Management Framework (AI RMF) to help organizations better identify, manage and mitigate AI risks and create more trustworthy AI systems. Along with the AI RMF, NIST has released a companion “playbook” with further implementation guidelines for organizations, a roadmap of its plans regarding AI developments and a “crosswalk” explaining how the AI RMF matches up to the OECD Recommendation on AI, the proposed EU AI Act, U.S. Executive Order 13960 on promoting the use of trustworthy artificial intelligence in the federal government, and the Biden administration’s Blueprint for an AI Bill of Rights. NIST has also launched the Trustworthy and Responsible AI Resource Center to facilitate implementation of, and international alignment with, the AI RMF.

We provide below an overview of the AI RMF and the related materials NIST has issued.

Background

NIST has explained that it released an AI-specific framework, in addition to the other standards and frameworks that already exist for information technology systems, privacy and cybersecurity, because the risks posed by AI are unique. For example, AI systems may be trained on data that changes over time, sometimes in unexpected ways; their behavior can be opaque and difficult to predict or explain; and their failures can be hard to detect and correct, which complicates traditional software testing.

The premise of the AI RMF is that AI risk management (namely, minimizing negative impacts, such as threats to civil liberties and rights, while maximizing the positive outcomes of AI use) is a key component of the responsible development and use of AI systems. Such an approach will help "AI actors" (primarily those who design, develop, deploy, evaluate and manage the risks of AI systems) consider potential negative impacts, and thereby enhance the reliability of, and cultivate public trust in, AI systems. NIST characterizes the AI RMF as "voluntary, rights-preserving, non-sector-specific, use-case agnostic" guidance that is intended to be readily adaptable throughout the AI lifecycle.

The first part of the AI RMF outlines the various risks presented by AI, and the second part provides a framework for considering and managing those risks. One key focus of the AI RMF is the set of actors involved in testing, evaluation, verification and validation (TEVV) processes throughout the AI lifecycle. However, NIST emphasizes that groups not normally involved in technology development are also critical to AI risk management, such as advocacy groups that can assist primary AI actors by providing context and an understanding of the potential and actual impacts of AI usage.

Overall AI Risks

According to NIST, AI risk is unique because of the breadth of harm it can cause. Under the AI RMF, this includes:

- Harm to people, including harm to an individual's civil liberties, rights, physical or psychological safety or economic opportunity, as well as harm to communities and to society.
- Harm to an organization, including harm to its business operations, its security or its reputation.
- Harm to an ecosystem, including harm to interconnected and interdependent elements and resources, such as the global financial system, supply chains and natural resources.

The AI RMF seeks to take into account each of these, and encourages stakeholders to do the same.

Unique Challenges in Managing AI Risks

The AI RMF sets forth some unique challenges in AI risk management:

- Risk measurement: AI risks that are not well defined or adequately understood are difficult to measure quantitatively or qualitatively, particularly where risks stem from third-party software, models or data, or emerge only once a system is deployed in real-world settings.
- Risk tolerance: The level of risk an organization is willing to accept varies by use case and legal or regulatory context, and the AI RMF does not prescribe risk tolerances.
- Risk prioritization: Attempting to eliminate all negative risk can be counterproductive; organizations should direct resources to the highest-risk AI systems first.
- Organizational integration: AI risk management should not be practiced in isolation but should be integrated into an organization's broader enterprise risk management.

AI Trustworthiness

The AI RMF also provides a framework for assessing whether an AI system is "trustworthy," a key aspect of a risk assessment. In most cases, the AI RMF draws on standards from the International Organization for Standardization (ISO). Under the AI RMF, trustworthy AI systems are:

- Valid and reliable
- Safe
- Secure and resilient
- Accountable and transparent
- Explainable and interpretable
- Privacy-enhanced
- Fair, with harmful bias managed

NIST notes that AI systems might increase the “speed and scale of biases” and perpetuate and amplify resultant harms.

Managing AI Risks

The AI RMF Core consists of four functions (govern, map, measure and manage), each broken down into categories and subcategories that provide organizations and individuals with specific recommended actions and outcomes for managing AI risks. NIST cautions that these four functions should not be seen as a checklist or as an ordered and complete set of oversight actions.
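For organizations that track compliance work in software, the Core's function/subcategory hierarchy lends itself to a simple internal data model. The sketch below is purely illustrative and is not part of the NIST materials; the field names and the evidence entry are hypothetical assumptions, though the GOVERN 1.1 outcome paraphrases a subcategory from the AI RMF itself. Consistent with NIST's caveat, it models a record of outcomes being worked toward, not a checklist.

```python
from dataclasses import dataclass, field

@dataclass
class Subcategory:
    """One recommended outcome within a Core function (illustrative only)."""
    identifier: str     # numbering in the style the AI RMF uses, e.g. "GOVERN 1.1"
    outcome: str        # the outcome the organization is working toward
    evidence: str = ""  # hypothetical internal notes on how the outcome is pursued

@dataclass
class CoreFunction:
    """One of the four AI RMF Core functions: govern, map, measure, manage."""
    name: str
    subcategories: list[Subcategory] = field(default_factory=list)

# Hypothetical internal record for one governance outcome.
govern = CoreFunction(
    name="govern",
    subcategories=[
        Subcategory(
            identifier="GOVERN 1.1",
            outcome=(
                "Legal and regulatory requirements involving AI are "
                "understood, managed and documented"
            ),
            evidence="Annual review by legal and compliance teams",
        )
    ],
)

print(f"{govern.name}: {len(govern.subcategories)} tracked outcome(s)")
```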

AI RMF Profiles

NIST suggests establishing use-case profiles as a means to evaluate how risk can be managed at various stages of the AI lifecycle or in a specific sector, technology or end-use application. For example, an organization might create a “hiring profile” where AI is used for hiring, while a comparison of a “current profile” and “target profile” might help an organization conduct a risk gap analysis. In NIST’s view, profiles will help organizations manage AI risk in a manner that aligns with their organizational goals, takes into account legal/regulatory requirements and best practices, and reflects an organization’s risk management priorities.
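As a concrete, purely hypothetical illustration of that current-versus-target comparison, the sketch below assumes an organization scores its maturity on each Core function for a given use case on a 0-5 scale; the scale, the scores and the scoring approach are invented for illustration and are not drawn from the NIST materials.

```python
# Hypothetical maturity scores (0-5) per Core function for a given use case.
current_profile = {"govern": 2, "map": 1, "measure": 1, "manage": 2}
target_profile = {"govern": 4, "map": 3, "measure": 3, "manage": 4}

def risk_gaps(current: dict[str, int], target: dict[str, int]) -> dict[str, int]:
    """Return the shortfall between target and current maturity per function."""
    return {fn: target[fn] - current[fn] for fn in target}

# Report the largest gaps first so remediation can be prioritized.
for function, gap in sorted(
    risk_gaps(current_profile, target_profile).items(),
    key=lambda item: item[1],
    reverse=True,
):
    print(f"{function}: {gap} maturity level(s) below target")
```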

Additional NIST Resources

The NIST playbook released with the AI RMF provides additional recommendations and actionable steps for organizations, including further details on the AI RMF Core functions (govern, map, measure and manage). NIST also plans to release The Language of Trustworthy AI: An In-Depth Glossary of Terms to provide organizations and individuals with a shared understanding of AI terms and to improve communication among those interested in trustworthy and responsible AI.

In March 2023, NIST established the Trustworthy and Responsible AI Resource Center (AIRC), which hosts the AI RMF and will feature related resources to facilitate implementation of, and international alignment with, the framework. The resource center is expected to include technical documents and AI toolkits, stakeholder-produced content, case studies and educational materials, and to serve as a repository hub for standards, measurement methods and metrics.

Key Takeaways

With the use of AI expanding in ways for which most companies were not prepared, the AI RMF provides companies with a comprehensive tool to evaluate the risks posed by AI and to understand how to build trustworthy systems. The AIRC will also be a resource for companies to reference new documents related to AI regulation. NIST has emphasized that AI technology is rapidly evolving, and the institute expects to continuously update its frameworks and resources, including ways to measure improvements in the trustworthiness of AI systems. Finally, NIST has encouraged those who use the AI RMF to periodically evaluate whether the framework has improved their ability to manage AI risks, including through their policies, processes and expected outcomes.

This post comes to us from Skadden, Arps, Slate, Meagher & Flom LLP. It is based on the firm’s memorandum, “AI Risk: Evaluating and Managing It Using the NIST Framework,” dated May 18, 2023, and available here. 
