For decades, artificial intelligence (AI) was the stuff of science fiction. Today, it is fueling one of the biggest investment booms in history. In 2024 alone, venture capitalists poured over $209 billion into AI startups—a 30% jump from the previous year. Major tech deals, led by Cisco's $28 billion acquisition of Splunk, focused on expanding AI capabilities, while biotech companies invested $5.6 billion in AI-powered innovation, including a $1 billion deal between Novartis and Generate:Biomedicines. Even the U.S. government is betting big on AI, with the recently announced $500 billion Stargate initiative involving Oracle, OpenAI, SoftBank, and MGX.
But not all the news has been good. In late January, Chinese AI platform DeepSeek sent shockwaves through the market, first by rattling U.S. chipmakers with AI models it claimed were trained far faster and more cheaply than its American rivals', then by promptly falling victim to a massive cyberattack. The episode raised concerns that some companies may be concealing vulnerabilities or inflating their AI potential, echoing the dot-com bubble of the 1990s.
In this article, we explore the key risks in the AI gold rush, investors' expectations for transparency about AI capabilities, and steps corporate boards can take to minimize AI litigation risk.
A Brave New World
AI is not just a passing trend—it's a foundational technology that can be woven into every facet of business operations, from optimizing supply chains and financial modeling to revolutionizing drug discovery. For instance, biotech startup Absci uses generative AI to design entirely new antibodies, accelerating drug development in ways previously thought impossible. Given AI's broad applicability and transformative potential, corporate directors should be mindful of how it will evolve and whether they are appropriately navigating the risks inherent in exploiting it. Observers say the rush to monetize AI has outpaced regulatory frameworks and risk-mitigation efforts, and some industry leaders and experts fear that AI's rapid evolution could outstrip companies' ability to control it, creating unforeseen risks both for boards and for humankind more broadly.
Potential AI Risks: Just Fool’s Gold for Nerds?
Without meaningful oversight, the AI gold rush might simply lead corporations and their investors to a pot of fool’s gold. Indeed, despite its promise, AI raises several risks corporate boards must navigate.
One major concern is "AI washing," where companies exaggerate or misrepresent their AI capabilities to suggest they have a competitive edge. Similar to "greenwashing," where companies falsely tout environmental achievements, AI washing can mislead investors and inflate a company's valuation, and the temptation to overstate only grows as companies race to demonstrate AI supremacy.
Communication risks also pose a challenge. Generative AI systems, like ChatGPT, are prone to "hallucinations," confidently producing incorrect, biased, or entirely fabricated information. These AI-generated inaccuracies could expose companies to litigation, including fraud, defamation, and consumer protection claims. Legal scholars caution that as businesses increasingly rely on AI for customer service, marketing, and decision-making, the risk posed by hallucinations will escalate.
Third-party risks are another significant issue. Many businesses rely on external AI-powered tools and APIs, such as PayPal's API for online payments or GitHub Copilot for software development. These services often require companies to upload sensitive and proprietary data onto supposedly secure platforms. However, recent data breaches at OpenAI and DeepSeek highlight how attractive AI-driven systems have become as targets for hackers. The MOVEit data breach involving Progress Software, though not AI-specific, illustrates how a single compromised third-party vendor can cascade across an entire customer base. Beyond security concerns, companies that fail to properly vet AI vendors may also introduce unintentional bias into human resources platforms and other enterprise-wide systems, leading to reputational damage and potential legal consequences.
Internal risks are also a pressing concern. As AI becomes more deeply integrated into business operations, a systemwide failure of a critical AI technology could have far-reaching consequences. Last year's CrowdStrike outage—though not AI-related—disrupted global travel, healthcare systems, stock markets, and banking services, underscoring the dangers of over-reliance on complex technologies that are not fully understood. New AI systems could pose similarly systemwide risks that require heightened board attention and oversight.
Legal risks are another growing concern, particularly in healthcare and insurance, where AI-driven claim denials face increasing scrutiny. Over time, AI models can develop self-reinforcing behaviors that, while initially legal, may evolve into unlawful discrimination; a claims model retrained on its own past denials, for example, can gradually amplify patterns that disadvantage protected groups. This underscores a key principle of AI governance: companies cannot simply deploy AI and assume it will operate fairly and legally without continuous monitoring.
Stepping Into the Void: Investors’ Role in Holding Companies Accountable for Accurate AI Discussions
Investors are already actively pursuing AI-related securities class action and shareholder derivative litigation. Early signs suggest that AI washing is taking center stage in these lawsuits: according to NERA, 2024 saw 13 AI-washing-related cases, more than double the number filed in 2023.
One of the most notable AI-washing securities class actions filed to date is a recently certified case against Zillow (Nasdaq: Z) and its derivative lawsuit counterpart. Both cases allege that Zillow overstated the forecasting capabilities of the proprietary AI-driven pricing model used in its now-defunct Zillow Offers program.
Other recent AI-related securities class actions include:
- Oddity Tech (Nasdaq: ODD): The Israeli beauty and wellness platform allegedly misrepresented its proprietary AI technology’s ability to target customer needs and drive sales before its IPO. The “AI” turned out to be little more than a basic questionnaire.
- Innodata (Nasdaq: INOD): The company claimed to have a proprietary AI system, but in reality much of the work was done by thousands of low-wage offshore workers.
- Elastic NV (NYSE: ESTC): Investors allege that the company repeatedly overstated the stability of its sales operations and that, as a result, it would likely fail to meet its previously issued FY 2025 revenue guidance.
The rise of AI-washing litigation suggests investors are already pushing back on misinformation and misrepresentations about companies' AI capabilities. The AI gold rush bears striking similarities to past market bubbles, from the dot-com era to the SPAC frenzy, in which exaggerated claims led to market corrections and waves of investor lawsuits. If companies continue to overpromise and underdeliver on their claimed AI-driven competitive advantages, they could face a similar reckoning.
What’s a Director to Do? Adopting AI Safeguards to Protect Shareholder Value
Corporate boards should take notice: the legal landscape and potential for liability require recognizing not only the power of AI but also the urgency of establishing oversight of AI risk management. Institutional investors will insist on corporate accountability for transparency around AI capabilities, responsible AI deployment, and mechanisms to manage emerging risks.
Despite the increasing integration of AI into business operations, companies' AI governance in many cases remains alarmingly weak. A 2024 Deloitte Global survey of nearly 500 board members and C-suite executives across 57 countries found that only 14% of boards discuss AI at every meeting, while 45% have yet to include AI on their agendas. Additionally, while 94% of businesses are increasing AI spending, only 6% of companies have adopted policies for the responsible use of AI within the last two years, and merely 5% of executives report having implemented any AI governance framework.
To strengthen AI oversight, investors will expect directors to take several key steps. Corporate boards should establish dedicated AI governance committees to assess risks, oversee AI development and implementation, and ensure regulatory compliance. Given the growing use of AI across industries—72% of organizations worldwide have integrated AI into at least one business function, and 21% have fully embedded AI into their operations—it would be prudent for all or some combination of the company's Chief Technology Officer, Chief Legal Officer, Chief Risk Officer, and Audit Committee to discuss AI governance at least annually, and perhaps quarterly. Directors should engage third parties to identify risks, vulnerabilities, and ethical concerns. Finally, boards should enhance their AI literacy to ensure they fully understand the technologies they are tasked with overseeing.
This post comes to us from Cohen Milstein Sellers & Toll PLLC.