Regulating the Development of Driverless Finance

Before permitting driverless cars to operate on the open road without a licensed driver, lawmakers and innovators are working to ensure the safety not only of the passengers in those cars, but also of third parties – particularly other drivers and pedestrians.  Safety concerns figure much less prominently, however, in discussions about fintech and the increasing algorithmic automation of finance.  While the use of algorithms in finance is nothing new, the ubiquity, sophistication, and autonomy of financial algorithms have increased significantly in recent years with advances in computing power and data usage techniques.  Increasingly automated financial decision-making (a phenomenon that I have termed “driverless finance”) has generally been praised for increasing efficiency and inclusiveness, particularly as applied in the marketplace lending and robo-investing business models. To the extent that any concerns have been raised about these business models, the focus has been on privacy violations of, and discrimination against, financial consumers.[1]  But there has been very limited consideration of the potential negative externalities that driverless finance could generate for third parties.  In a forthcoming Harvard Business Law Review article, I begin to address this lacuna by examining how increasingly autonomous financial algorithms could cause or exacerbate financial crises, generating economic conditions that harm society as a whole.

Innovations in smart contracts and machine learning are particularly relevant to any consideration of the potential impact of driverless finance on financial stability.  Smart contracts are algorithms programmed to self-execute once they receive the necessary instructions or data.  They are recorded on a distributed ledger (at least for now, the Ethereum blockchain is the ledger of choice), and are intended to be self-enforcing, in the sense that once the algorithm is instructed to execute the contract, it will immediately force a transfer of virtual assets from one account to another on the distributed ledger.  This means that, at least in theory, smart contracts do not allow for interpretation, adjudication, or enforcement by the courts or any other outside entities.  Tokens sold in initial coin offerings are the most familiar example of financial assets expressed as smart contracts, but it is not difficult to conceive of an increasingly algorithmic world of finance where self-executing smart contracts come to represent mainstream financial assets.  Payments could be made in the form of debits and credits of digitized sovereign currency on a distributed ledger, and transfers of ownership of assets could also be reflected on the distributed ledger.  These payments and transfers could be triggered by a variety of contractually predetermined events; already, smart contracts can be programmed to consult “oracles” (outside information sources) to determine when to effect a transaction.  By design, humans would have no real opportunity to interrupt the performance of the obligations associated with these financial assets, and the efficiency and certainty associated with self-executing, self-enforcing financial assets could be very appealing to financial market participants.
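The mechanics described above – an oracle-triggered, self-executing transfer with no human checkpoint – can be illustrated with a toy sketch in Python. This is not a real blockchain or the Ethereum programming model; the ledger, account names, and price trigger are all hypothetical, chosen only to show why execution is automatic once the oracle condition is observed.

```python
# Toy illustration (not a real distributed ledger): a self-executing
# "smart contract" that watches an oracle and, once its condition is
# met, immediately moves balances with no opportunity for human review.

class Ledger:
    """Simplified stand-in for a distributed ledger's account balances."""
    def __init__(self, balances):
        self.balances = dict(balances)

    def transfer(self, sender, recipient, amount):
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount

class SmartContract:
    """Pays `amount` from buyer to seller as soon as the oracle-reported
    price reaches `strike`.  Note the absence of any pause, appeal, or
    adjudication step: execution fires the moment the condition holds."""
    def __init__(self, ledger, buyer, seller, strike, amount):
        self.ledger, self.buyer, self.seller = ledger, buyer, seller
        self.strike, self.amount, self.executed = strike, amount, False

    def on_oracle_update(self, price):
        if not self.executed and price >= self.strike:
            self.ledger.transfer(self.buyer, self.seller, self.amount)
            self.executed = True  # irreversible in this toy model

ledger = Ledger({"buyer": 100, "seller": 0})
contract = SmartContract(ledger, "buyer", "seller", strike=50.0, amount=30)
contract.on_oracle_update(49.9)  # condition not met: nothing happens
contract.on_oracle_update(50.1)  # condition met: transfer fires instantly
```

Note that if the oracle feed in this sketch reported a bad price, the transfer would fire just the same – which is the fragility discussed next.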

Unfortunately, oracles can be wrong or misinterpreted by algorithms, and no algorithm can ever envisage all possible states of the world.  As a result, smart contracts can never be equipped to deal appropriately with all possible contingencies.  Legal systems interpreting paper financial contracts have developed the ability to relax and suspend contractual obligations in order to help preserve financial stability, and as Katharina Pistor has noted, “in the context of a highly instable financial system, the elasticity of law has proved time and again critical for avoiding a complete financial meltdown.”[2]  Smart contracts have the potential to harm financial stability by depriving the financial system of some of its flexibility.  Although there may be more opportunities to undo smart contracts than proponents care to admit, the speed with which smart contracts execute their programming can cause damage, even if the transaction is ultimately reversed. Furthermore, the fact that whole new classes of “smart assets” can be created out of whole cloth by anyone with computer programming knowledge means that there is no real limit on the supply of these assets – this exponentially multiplies their potential risks.

Smart contracts are typically predictive algorithms (meaning that they are given linear instructions by a human programmer, which they follow to carry out a particular task), but technological advances may result in increasing numbers of smart contracts capable of machine learning (a form of artificial intelligence).  Advances in machine learning are already revolutionizing the provision of many types of financial services. Instead of being programmed to execute a particular task, machine learning algorithms are programmed to learn how to execute tasks by studying a data set.  There is particular interest in machine learning as a tool for risk management, because of the ability of artificial intelligence to process vast amounts of information dispassionately and quickly.  However, a machine learning algorithm’s ability to properly manage risk will be circumscribed by any limitations in the data set it is exposed to (particularly if the data set is recent and doesn’t contemplate low-probability but high-consequence tail events – the type of events that precipitate financial crises).  Machine learning algorithms can only observe correlations; they cannot infer causation, and the process by which such algorithms decide which data are relevant and how to weight them in making decisions is opaque.  If it does become clear that a machine learning algorithm has made a mistake, the technology does not yet exist to teach it not to make the same mistake again.  We should therefore be wary about the degree to which the process of risk-management is delegated to artificial intelligence.
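The data-set limitation can be made concrete with a stylized example. Suppose a risk model is "trained" only on a calm recent window of daily returns; here the model is just an empirical loss percentile (a crude stand-in for Value-at-Risk), but the point generalizes to more sophisticated machine learning: the model cannot price an event that appears nowhere in its data. All figures are illustrative.

```python
# Stylized illustration of training-data limitations: a risk measure
# estimated from calm history drastically understates a tail event
# that falls outside the observed window.

import random

random.seed(0)

# A year-plus of simulated "calm" daily returns: small drift, modest
# volatility, and no crash anywhere in the sample.
calm_history = [random.gauss(0.0005, 0.01) for _ in range(1000)]

crash_day = -0.20  # a 20% one-day loss, absent from the training window

def var_99(returns):
    """Empirical 99% Value-at-Risk: the daily loss exceeded on ~1% of days
    in the sample.  A model fit only to calm data can't see beyond it."""
    losses = sorted(-r for r in returns)
    return losses[int(0.99 * len(losses))]

model_var = var_99(calm_history)
print(f"model's 99% VaR from calm data: {model_var:.3f}")
print(f"realized crash-day loss:        {-crash_day:.3f}")
# The estimated "worst plausible" daily loss is a small fraction of the
# realized tail loss, because the tail event was outside the data set.
```

Hypothetical data sets that deliberately include such tail scenarios – an idea returned to below in the discussion of regulation – are one way to address exactly this blind spot.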

The automation of financial decision-making can also undermine basic assumptions about the use of diversification to manage risk. When financial decision-making is automated and performed by a few algorithms learning from the same data set of historical market information, preferences may become monolithic, and market behavior may become even more correlated than it is at present (a phenomenon that I refer to as “correlation by algorithm”).  In the robo-investment context, for example, asset bubbles could form if numerous consumers were to be advised to invest in the same financial portfolio, or if they were to be steered to a particular asset by an algorithm that underestimates the asset’s associated risks.  If that same algorithm advises selling assets, then that could have a sudden impact on the price of those assets system-wide, and depressed asset prices might force other market participants to sell other assets to deleverage, creating problems for asset pricing in general and the stability of the financial system as a whole.
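The feedback loop described in this paragraph can be sketched as a toy simulation. The adviser count, trading rule, and price-impact parameter below are all invented for illustration; the point is only that when every adviser runs the same algorithm, a small dip triggers synchronized selling, which deepens the dip and triggers the next synchronized round.

```python
# Toy simulation of "correlation by algorithm": robo-advisers sharing one
# model issue identical signals, so their clients trade in lockstep and
# amplify price moves.  All parameters are illustrative, not calibrated.

def shared_signal(price, moving_avg):
    """The single rule every adviser uses: sell when price dips below trend."""
    return "SELL" if price < moving_avg else "HOLD"

def simulate_round(n_advisers, price, moving_avg, impact_per_seller=0.002):
    signals = [shared_signal(price, moving_avg) for _ in range(n_advisers)]
    sellers = signals.count("SELL")
    # Each simultaneous seller pushes the price down a little further.
    return price * (1 - impact_per_seller * sellers)

# A small dip below trend triggers ALL 50 advisers at once...
p1 = simulate_round(n_advisers=50, price=99.0, moving_avg=100.0)
# ...and the resulting fall keeps the price below trend, triggering a
# second synchronized round of selling, and so on.
p2 = simulate_round(n_advisers=50, price=p1, moving_avg=100.0)
print(f"after round 1: {p1:.2f}, after round 2: {p2:.2f}")
```

With heterogeneous rules, some advisers would buy the dip and dampen the move; it is the shared algorithm that converts a small shock into a self-reinforcing decline.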

The rise of driverless finance has the potential to create these and many other threats to financial stability.  While it may be tempting to defer consideration of the negative impact of driverless finance until we have actually observed a failure of the relevant technology, its potential to cause serious economic harm (and attendant social costs) necessitates creative thinking about the risks that new financial technologies could create and proposals for regulation to address those risks ex ante.  There will soon come an inflection point after which the ability of regulators to influence the application of such technology in the financial markets will be circumscribed – the technology and the industry will be set, and regulators will have missed their opportunity to shape it. It is therefore troubling that the Treasury Department’s recent report on “Nonbank Financials, Fintech and Innovation” makes almost no mention of the impact that machine learning, smart contracts, and other technological innovations could have on financial stability.[3]

Regulators around the world can have the biggest impact by regulating the processes by which these sophisticated financial algorithms are being created, to guide the development of driverless finance in a way that minimizes harm to third parties.  For example, regulators could adopt principles-based regulation that requires that all machine learning algorithms be exposed to the possibility of tail events in their data sets.  Best practices may ultimately involve developing hypothetical data sets through war-gaming, to be used to train machine learning algorithms to deal with the types of events that cannot be observed in the historical data.  Another example of process-oriented regulation might be to require that any smart contract that represents a financial asset be programmed to include some form of circuit breaker to allow execution to be paused in emergency circumstances.  Steps such as these can only be effectively implemented if regulators have sufficient programming and data science expertise, however – hiring personnel with such expertise will no doubt be expensive, and funding support will be needed as a matter of priority.  Financial regulators are also likely to lack jurisdiction over some of the developers of driverless finance technologies – these jurisdictional issues ultimately will also need to be addressed, but in the interim, regulators can still affect the way that financial technologies are developed through their regulation of the financial institutions that deploy those technologies.
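The circuit-breaker requirement proposed above can be sketched in the same toy style as the earlier smart-contract example: the self-executing logic is unchanged, except that a designated emergency authority can pause execution before it fires. The "guardian" role and all names here are hypothetical design choices, not features of any existing platform.

```python
# Sketch of a mandated circuit breaker: self-executing logic with a pause
# switch that only a designated emergency authority may flip.  This is a
# conceptual illustration, not a real smart-contract platform's API.

class BreakeredContract:
    def __init__(self, action, guardian):
        self.action = action      # callable performing the asset transfer
        self.guardian = guardian  # party authorized to pause execution
        self.paused = False
        self.executed = False

    def pause(self, caller):
        if caller != self.guardian:
            raise PermissionError("only the guardian may pause")
        self.paused = True

    def resume(self, caller):
        if caller != self.guardian:
            raise PermissionError("only the guardian may resume")
        self.paused = False

    def on_trigger(self):
        """Execute unless the circuit breaker is engaged."""
        if self.paused or self.executed:
            return False
        self.action()
        self.executed = True
        return True

log = []
c = BreakeredContract(action=lambda: log.append("transfer"),
                      guardian="regulator")
c.pause("regulator")
held = c.on_trigger()    # False: execution held during the emergency
c.resume("regulator")
fired = c.on_trigger()   # True: executes once the pause is lifted
```

The design trade-off is visible even in this sketch: the pause switch restores the legal system's "elasticity," but it also reintroduces a trusted party into an arrangement whose appeal was the absence of one.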



[2] Katharina Pistor, A Legal Theory of Finance, 41 J. COMP. ECON. 315 (2013).


This post comes to us from Professor Hilary J. Allen at American University’s Washington College of Law. It is based on her recent article, “Driverless Finance,” available here.