Artificial Intelligence, Misinformation, and Market Misconduct

Artificial intelligence (AI) poses a clear and present danger to our money and our markets.  With AI, bad actors and rogue nations can readily and cheaply engage in market manipulation, financial misinformation, and regulatory misconduct that threaten the stability, integrity, and security of our financial system as never before.  The intelligence behind this new technology may be artificial, but the losses in market value and confidence are very real.

In a recent article, I examine the impact of AI on market misinformation and misconduct and the risks it poses.

Misinformation, Manipulation, and Financial Deepfakes

Since markets have existed, there have been attempts to manipulate them.  AI is but the latest and arguably most consequential means for bad actors to engage in market manipulation and financial misconduct.  An axiom of the marketplace going forward may be: Anything that can be manipulated with AI will be manipulated with AI.  AI-driven applications like ChatGPT, Sora, and Gemini make it cheaper and easier to engage in misconduct at a larger scale, especially misconduct involving information and misinformation.

These methods include sophisticated, high-speed schemes such as pinging, where AI programs rapidly submit and cancel voluminous orders to induce other machines to disclose their trading intentions, and spoofing, where fraudulent orders are placed to bait other participants’ AI systems into reacting, distorting price discovery.  Furthermore, the use of “financial deepfakes” warrants additional scrutiny because of its pernicious impact.  With relatively inexpensive AI tools, bad actors can easily produce highly convincing fake images, documents, videos, and audio recordings of businesses and executives to move a stock or the entire market.  This proliferation of financial deepfakes could erode confidence in the integrity of the marketplace as investors grow wary of trusting financial information.
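To make these mechanics concrete, the sketch below is a minimal, hypothetical illustration, not drawn from the article, of how a surveillance program might flag spoofing-like order flow: it simply looks for traders whose submitted orders are overwhelmingly cancelled rather than filled.  The data structure, field names, and thresholds are assumptions for illustration only; real market surveillance relies on far richer signals.

```python
# Minimal, hypothetical sketch of a spoofing-surveillance heuristic.
# OrderEvent, its fields, and the thresholds are illustrative assumptions,
# not a description of any real surveillance system.
from dataclasses import dataclass
from typing import Dict, Iterable, List

@dataclass
class OrderEvent:
    trader_id: str
    action: str  # "submit", "cancel", or "fill"

def flag_spoofing_candidates(
    events: Iterable[OrderEvent],
    min_submissions: int = 100,
    cancel_ratio_threshold: float = 0.95,
) -> List[str]:
    """Flag traders whose order flow is dominated by cancellations.

    A very high cancel-to-submit ratio over many orders is one crude signal
    that orders may have been placed to bait other algorithms rather than
    to trade.
    """
    counts: Dict[str, Dict[str, int]] = {}
    for event in events:
        stats = counts.setdefault(
            event.trader_id, {"submit": 0, "cancel": 0, "fill": 0}
        )
        if event.action in stats:
            stats[event.action] += 1

    flagged = []
    for trader, stats in counts.items():
        if stats["submit"] >= min_submissions:
            if stats["cancel"] / stats["submit"] >= cancel_ratio_threshold:
                flagged.append(trader)
    return flagged
```

A trader who submits thousands of large orders and cancels nearly all of them before execution would be flagged for human review.  The point is only that even simple ratios can surface the pattern described above, although sophisticated AI-driven schemes are designed to evade precisely such simple checks.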

Systemic Risks of Speed and Opacity

The infusion of AI into finance creates new, interrelated systemic risks.  In the 2008 financial crisis, the key vulnerability was institutional size (“too big to fail”).  With AI, regulators must now also watch for risks related to speed (“too fast to stop”) and opacity (“too opaque to understand”).

First, as for “too fast to stop,” AI accelerates the velocity of trading and information flows, creating a systemic risk that misinformation and misconduct could destabilize the financial system before corrective or preventive measures can stem the fallout.

Second, as for “too opaque to understand,” the black-box nature of many AI systems adds a layer of complexity that obscures the identification and correction of systemic vulnerabilities and market misconduct.  Once a machine is programmed to achieve an objective, the means by which it learns and operates to achieve that objective, whether legal or illegal, can be a mystery to its human overseers.  This opacity makes it challenging to predict, diagnose, and rectify problems promptly, or to address them precisely after the fact.

Geopolitical Threats

The proliferation of AI in finance also introduces a new dimension of geopolitical risk, as adversaries wield relatively inexpensive AI tools to engineer sophisticated attacks that exploit weaknesses in financial AI systems.  Rather than engaging in costly and protracted direct military conflicts with uncertain outcomes, nations increasingly resort to economic and business warfare.  Rogue states can use AI to spread misinformation, damage rival economies, reap illicit gains, or hurt the political prospects of foreign leaders.  The inherent complexity and opacity of AI algorithms, and their proliferation into many aspects of our daily and financial lives, exacerbate these threats by increasing the speed and scale at which an attack can escalate into a catastrophic systemic failure.

Pragmatic Recommendations

While a broad consensus on AI regulation remains politically elusive, there are steps that can be taken now.  First, regulators should enhance enforcement incentives and penalties to encourage financial intermediaries to better manage AI-related risks.  This approach can swiftly address immediate concerns and encourage firms to build robust protections.  Second, investors, particularly retail investors, should focus on long-term passive investment strategies, which reduce their exposure to the costly short-term volatility of an AI-affected market.  Finally, policymakers should revitalize traditional regulatory tools like public disclosures, human-led examinations, and stress tests to include scenarios that specifically address AI-related risks.  The reinvention of these tools should leverage AI itself to help regulators and policymakers better keep pace with those risks.

Conclusion

Addressing the issues arising at the nexus of AI, misinformation, and market misconduct will be one of the most consequential and demanding challenges for policymakers, regulators, and business leaders.  While no failsafe solution exists, an early blueprint for a safer and more robust marketplace is firmly within our capabilities.

This post comes to us from Professor Tom C.W. Lin at Temple University’s Beasley School of Law.  It is based on his recent article in the Ohio State Law Journal, “Artificial Intelligence, Misinformation, and Market Misconduct,” available here.