Cleary Discusses Managing AI Risks: Legal and Governance Imperatives for Boards

AI adoption is now mainstream: 88% of businesses use AI in at least one function, with global spending expected to exceed $1.5 trillion in 2025 and approach $2 trillion in 2026. As organizations race to scale AI, many have relied upon traditional vendor risk management policies to vet third-party AI vendors and tools; however, implementation of third-party AI tools presents distinctive risks that require tailored due diligence, auditing, contracting and governance. Because businesses are accountable for outputs generated by third-party AI tools and for vendors’ processing of prompts and other business data, boards and management should ensure legal, IT and procurement teams apply a principled, risk-based approach to vendor management that addresses AI‑specific considerations.

General Risks Inherent in AI Tools

The inherent nature of AI models presents unique risks beyond those addressed by typical vendor management:

Information Security Risks

AI systems face novel attack vectors (including prompt injection, data poisoning and model inversion), where attackers manipulate inputs or infer sensitive information from model behavior. Frequent model updates and opaque decision logic complicate security testing and auditing. These challenges are particularly acute with third-party AI vendors, where businesses often lack direct visibility into the vendor’s security practices, model training environments and data handling procedures. Unlike internally developed systems, third-party AI tools often operate as “black boxes,” preventing businesses from conducting comprehensive security assessments or verifying that security patches and model updates have not introduced new vulnerabilities.

The growth of agentic AI creates additional risks. Recent disclosures show that agentic AI can now independently execute complex offensive campaigns at nation-state scale, and enterprise assistants, once granted access and operational autonomy, can trigger actions that circumvent traditional enterprise controls.1 When these agentic capabilities are embedded in third-party vendor solutions, businesses face compounded risk: they must trust not only the vendor’s security controls but also its governance over autonomous agent behavior, with limited ability to monitor or constrain agent actions that occur within the vendor’s infrastructure.

Privacy Risks

Third-party AI tools pose privacy risks because sensitive personally identifiable information (PII) may be shared, processed or stored outside the business’ direct control. PII triggers specific legal obligations under data protection regimes such as Europe’s General Data Protection Regulation (the GDPR), the California Consumer Privacy Act (the CCPA), China’s Personal Information Protection Law (PIPL) and other privacy laws, which impose strict requirements on PII processing, retention and cross-border transfers.

When PII is input into AI tools provided by a third-party vendor, it may be retained, logged or reused for model improvement, increasing risks of inadvertent disclosure, unauthorized access and secondary use. Organizations may be unable to honor data subject rights requests (e.g., rights to access, deletion and rectification) when data resides in opaque AI systems controlled by third parties or has been incorporated into training datasets. Data ingested by AI tools may also be transferred across jurisdictions, creating compliance challenges with privacy and data protection regulations.2

Intellectual Property Risks

Use of third-party generative AI tools also poses unique IP risks, particularly with respect to copyright.3

Regulatory Risks

Regulatory risk associated with AI adoption is increasingly driven by the application of existing consumer protection, securities, civil rights and data protection laws to AI-enabled activities. Regulators have made clear that businesses deploying AI remain fully accountable for legal compliance, even where AI functionality is sourced from third-party vendors. Enforcement actions by the FTC and SEC demonstrate that reliance on vendor representations, without independent validation and governance, is insufficient and exposes businesses to enforcement risk.

Third-party AI tools materially amplify regulatory exposure because legal accountability remains with the deployer, while technical control, design decisions and underlying data inputs often sit with the vendor. Many AI vendors do not design products around a business’ specific compliance obligations, making it difficult to implement required transparency and consumer disclosures, explain automated decision-making outcomes, support consumer data protection rights or document how outcomes are generated. Limited audit rights, restricted access to training data and system logs, and weak data provenance frequently leave businesses unable to substantiate compliance during regulatory inquiries or to remediate issues once identified.

Recent enforcement underscores these risks. The FTC’s action against Rite Aid, its most significant AI-related enforcement action to date, illustrates that businesses cannot outsource accountability for AI governance: the alleged failures stemmed from inadequate oversight, testing, monitoring and auditability of a third-party AI system. Similarly, SEC actions targeting “AI washing” reflect regulatory skepticism toward overstated AI claims where organizations lack demonstrable controls, validation or understanding of vendor-provided tools. In both contexts, regulators focused on the gap between marketing or deployment claims and the business’ actual ability to govern and explain the AI system in use.

These risks are intensifying as AI-specific legislation emerges globally. In the United States, laws such as Colorado’s AI Act, Texas’s Responsible AI Governance Act and New York City Local Law 144 impose affirmative obligations on AI deployers, including impact assessments, transparency requirements and safeguards against discriminatory outcomes, all of which are difficult to satisfy without deep visibility into vendor systems. In the EU, the AI Act imposes stringent obligations on high-risk AI uses, with penalties of up to €35 million or 7% of global annual turnover, whichever is higher. For boards, the key issue is structural: third-party AI can create a misalignment between regulatory responsibility and operational control, significantly increasing the likelihood of non-compliance, enforcement and reputational harm unless proactively governed.

It is worth noting, however, that despite these regulatory developments, significant political pressure exists in the U.S. to minimize regulatory burdens in favor of supporting innovation, as exemplified by President Trump’s December 11, 2025, Executive Order, which escalates federal efforts to preempt state-level AI regulation in pursuit of a “minimally burdensome national policy framework,” including through a task force empowered to challenge state laws inconsistent with federal innovation priorities.4

Recommendations for Board Action

Boards play a critical role in AI governance by setting strategic goals, supervising management and assessing organization-wide AI risks. Boards of companies that incur financial losses stemming from AI may face Caremark shareholder derivative suits alleging that directors breached their fiduciary duty of oversight with respect to AI-related risks. Given the rapid evolution of AI technology, the fragmented regulatory landscape and the significant legal and operational risks associated with third-party AI tools, boards and management should prioritize safe and compliant AI use by supporting centralized AI governing bodies, subject to board-level oversight, to guide implementation.

ENDNOTES

1 For comprehensive analysis of the security implications of these incidents and recommended due diligence measures, see our November blog post available here.

2 For example, where data, including PII, is hosted or stored in certain jurisdictions, particularly the United States, organizations may face additional legal complexities arising from government access laws such as the U.S. CLOUD Act and the concerns raised in the Schrems decisions regarding adequacy of data protection for EU PII transferred outside the EEA.

3 For additional information, see our article on AI Copyright Litigation.

4 For analysis of the Executive Order, see our December blog post available here.

This post is based on a Cleary Gottlieb Steen & Hamilton LLP memorandum, “Managing AI Risk: Legal and Governance Imperatives for the Board,” available here. 