The development of self-learning and independent computers has long captured our imagination. The HAL 9000 computer in the 1968 film 2001: A Space Odyssey, for example, assured, “I am putting myself to the fullest possible use, which is all I think that any conscious entity can ever hope to do.” Today businesses (and governments) are increasingly relying on big data and big analytics. As the cost of storing and analyzing data drops, companies are investing in developing ‘smart’ and ‘self-learning’ machines to assist in pricing decisions, planning, trade, and logistics. With the Internet of Things, more of our daily activities will be collected and used to enhance (or exploit) our immediate living environment – the way we commute, shop and communicate. Machine learning raises many challenging legal and ethical questions as to the relationship between man and machine, humans’ control — or lack of it — over machines, and accountability for machine activities.
While these issues have long captivated our interest, few would envision the day when these developments (and the legal and ethical challenges raised by them) would become an antitrust issue. The antitrust community is accustomed to company executives fixing prices, allocating markets, and rigging bids. The film The Informant! captures the real-life executives who conspire around the world every year to fix prices and reduce output. Price-fixing cartels are generally regarded in the antitrust world as ‘no-brainers.’ The cartel agreement (even if unsuccessful) is typically condemned as per se illegal; the executives and companies have few, if any, legal defenses. And in the U.S., among other jurisdictions, the guilty executives are often thrown into prison.
So it made the news when the Department of Justice recently warned antitrust lawyers, economists and scholars of the dangers of complex pricing algorithms. In 2015, the DOJ brought criminal charges over a price-fixing scheme involving posters sold in the United States through Amazon Marketplace.[1] According to the DOJ, David Topkins and his co-conspirators adopted specific pricing algorithms that collected competitor pricing information for specific posters sold online and applied the sellers’ pricing rules. The competitors used the computer algorithms with the goal of coordinating changes to their respective prices.
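The mechanics of such a scheme can be sketched in a few lines. The following is a hypothetical illustration, not the actual software at issue in the case; the function, the parameter names, and the ‘agreed floor’ rule are our invention, chosen only to show how a shared repricing rule can coordinate sellers’ prices:

```python
# Hypothetical sketch of a rule-based repricer of the kind alleged in Topkins:
# each seller's algorithm watches rivals' listed prices and applies an agreed
# pricing rule, so the 'competing' prices move together by design.

def reprice(my_cost: float, rival_prices: list, agreed_floor: float) -> float:
    """Match the lowest rival price, but never drop below the agreed floor."""
    lowest_rival = min(rival_prices) if rival_prices else agreed_floor
    target = max(lowest_rival, agreed_floor)   # the coordinated element
    return round(max(target, my_cost), 2)      # and never sell below cost

# Two sellers running the same rule converge on the floor instead of undercutting:
assert reprice(4.00, [9.50, 8.75], agreed_floor=10.00) == 10.00
assert reprice(4.00, [10.00, 12.00], agreed_floor=10.00) == 10.00
```

Absent the agreed floor, each seller acting alone would simply undercut its rivals; the shared floor is what converts an ordinary repricer into an instrument of the conspiracy.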
As the DOJ’s recent criminal prosecution reflects, sophisticated computer pricing algorithms are changing the competitive landscape and the nature of competitive restraints. Take, for example, the way in which prices are determined. When we were growing up, humans monitored the market activity, determined whether, and by how much, to raise or lower prices, and physically stamped products with price stickers. We recall the clerks along the supermarket aisle stamping each food can. Pricing decisions took weeks—if not months—to implement. Now with online trade platforms, computers can assess and adjust prices—even for particular individuals at particular times—within milliseconds. Pricing algorithms dominate online sales of goods—optimising the price based on available stock and anticipated demand—and are widely used in hotel booking, and the travel, retail, sport and entertainment industries.
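A dynamic-pricing rule of the kind described above — one that optimises price based on available stock and anticipated demand — might look like the following minimal sketch. The formula and its coefficients are invented for illustration, not drawn from any actual system:

```python
# Illustrative dynamic-pricing rule: the price rises as remaining stock falls
# and as forecast demand grows. Coefficients are arbitrary, for demonstration.

def dynamic_price(base: float, stock: int, capacity: int, demand_index: float) -> float:
    """Scale a base price by scarcity and a demand forecast in [0, 2]."""
    scarcity = 1.0 - stock / capacity          # 0 when fully stocked, near 1 when almost sold out
    return round(base * (1 + 0.5 * scarcity) * (0.8 + 0.2 * demand_index), 2)

# A hotel room gets pricier as the hotel fills and demand picks up:
print(dynamic_price(100.0, stock=90, capacity=100, demand_index=1.0))  # 105.0
print(dynamic_price(100.0, stock=5, capacity=100, demand_index=2.0))   # 177.0
```

Run continuously against live inventory and booking data, a rule like this repriced within milliseconds — which is precisely what makes industry-wide transparency and mutual responsiveness so much faster than in the era of price stickers.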
As pricing mechanisms shift, so too will the types of collusion. We are shifting from the world where executives expressly collude in smoke-filled hotel rooms to a world where pricing algorithms continually monitor and adjust to each other’s prices and market data. We are leaving the antitrust ‘no-brainer’ jurisprudence for one where the ethical and legal issues are unsettled. In this new world, there isn’t necessarily any collusive agreement among executives. Each firm may unilaterally adopt its own pricing algorithm, which sets its own price. In this new world, there isn’t necessarily anticompetitive intent. The executives cannot predict if, when, and for how long the industry-wide use of pricing algorithms will lead to inflated prices. The danger here isn’t express collusion, but tacit collusion.
The use of advanced algorithms in this scenario transforms an oligopolistic market in which transparency is limited, and in which conscious parallelism therefore cannot be sustained, into a market susceptible to tacit collusion/conscious parallelism in which prices will rise. Importantly, the price increases are not the result of express collusion but rather the natural outcome of tacit collusion. While the latter is not itself illegal, as it reflects a rational reaction to market characteristics, one may ask whether its creation should give rise to antitrust intervention.
As technology advances, human involvement in setting up these machines will diminish, to the point that pricing strategies may be determined independently by smart, self-learning machines, which learn how to operate in the market through trial and error. A self-learning machine may find the optimal strategy is to enhance market transparency and thereby sustain conscious parallelism and foster price increases. Importantly, tacit coordination — when executed — is not the fruit of explicit human design but rather the outcome of evolution, self-learning and independent machine execution. Here pricing algorithms learn through experience to maximise profit. Such machines may promote a stable market environment in which they could predict each other’s reaction and dominant strategy. An industry-wide adoption of similar algorithms may foster interdependent action, conscious parallelism, and higher prices. Interestingly, the execution of tacit collusion via algorithms does not mark the end of the spectrum of possible antitrust infringements, as advanced computers may undermine competition through more subtle means. Importantly, without evidence of a human ‘agreement’ or ‘anticompetitive intent,’ the industry-wide use of pricing algorithms may evade antitrust scrutiny under current laws.
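To make the trial-and-error idea concrete, here is a minimal sketch of a single pricing agent that learns, from observed profits alone, which price to charge. This is our own toy construction — a simple epsilon-greedy bandit against a stylised linear demand curve — and not an implementation from the authors’ paper; the concern described above is that several such agents reacting to one another could settle on supra-competitive prices without any human instruction to do so:

```python
import random

# Toy self-learning pricer (our construction, for illustration only):
# the agent experiments with candidate prices, remembers the average profit
# each one earned, and gravitates toward the most profitable price.

def demand(price: float) -> float:
    """Stylised linear demand: quantity sold falls as price rises."""
    return max(0.0, 10.0 - price)

def learn_price(prices, episodes=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy bandit: explore prices at random 10% of the time,
    otherwise exploit the price with the best observed average profit."""
    rng = random.Random(seed)
    total = {p: 0.0 for p in prices}
    count = {p: 0 for p in prices}
    for _ in range(episodes):
        if rng.random() < epsilon or not any(count.values()):
            p = rng.choice(prices)                                      # explore
        else:
            p = max(prices, key=lambda q: total[q] / max(count[q], 1))  # exploit
        total[p] += p * demand(p)                                       # observed payoff
        count[p] += 1
    return max(prices, key=lambda q: total[q] / max(count[q], 1))

# With demand q = 10 - p, profit p * (10 - p) peaks at p = 5, and the agent
# discovers that price purely through experimentation:
best = learn_price([2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
print(best)  # 5.0
```

No price is ever ‘agreed’ or hard-coded: the profit-maximising price emerges from the learning loop. The antitrust worry is the multi-agent analogue, where each firm’s learner treats rivals’ algorithmic responses as part of the environment and learns that undercutting does not pay.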
Such technological developments raise thought-provoking legal and ethical challenges. For instance, the increased market transparency, which often enhances competition, may in such circumstances facilitate anticompetitive effects. Enforcers may find it difficult to fine-tune antitrust policy to condemn ‘excessive’ market transparency. This may be particularly challenging when the information and data are otherwise available to consumers and traders and it is the intelligent use of that information that leads to price increases. Importantly, such unilateral use of information, which may facilitate conscious parallelism,[2] is not illegal and does not infringe Section One of the Sherman Act.[3] In other words, under current law, competition agencies may find it difficult to challenge each firm’s unilateral decision to use sophisticated algorithms to analyse market information and determine prices, even if such use results in parallel price increases, to the detriment of consumers.
Another challenge concerns the relationship between humans and machines. For instance, when should the law attribute liability to companies for their computers’ actions? The answer is easy when humans design and program the algorithm to further their illegal scheme. The answer is harder when no human command directed the computer’s actions. Instead, the computer’s action is the outcome of many intermediate steps of machine learning, evolved from evaluating a voluminous variety of data, so that tacit collusion was not reasonably foreseeable as the likely and natural consequence of deploying the algorithm. Granted, on one level the firm is accountable: it created, used, and profited from the algorithm. But at what point, if any, does the designer or operator relinquish responsibility for the acts of the machine? If companies face strict liability for their computer’s tacit collusion, how could they constrain their computer’s actions to avoid less competitive outcomes?
In this new world, many of the old deterrents don’t work. Computers do not fear imprisonment, feel guilt or shame, or respond in anger to a competitor’s move. Even if computers are programmed to refrain from any violation of the competition laws, they may still operate in awareness of each other, thereby reducing competition without infringing those laws. Importantly, they may do so through self-learning and rational decision-making, in a deterrent-free environment, bypassing the safeguards that inhibit traditional price fixing.
Consequently, it will be challenging for lawmakers to identify a clear, enforceable triggering event for intervention that would prevent the shift in market dynamics that fosters conscious parallelism. Furthermore, it is likely to be challenging for competition agencies to enforce such a provision. Competition authorities would have a difficult time overseeing firms’ attempts to design a machine that optimises performance while instructing it to ignore, or respond irrationally to, market information and competitors’ moves, or to pursue inefficient outcomes. Under an ex post approach, enforcers can intervene when they believe that computers are tacitly colluding. But an ex-post monitoring exercise would still require the legislator to determine whether liability ought to be imputed to the companies involved—even when the companies were unaware of the tacit collusion (or had few viable alternatives to prevent it).
Be it an ex-post or ex-ante regime, one still has to confront the challenge of identifying the adequate level of intervention, if such exists, when dealing with the creation of market conditions for conscious parallelism. A policy that requires firms to develop algorithms that ignore market prices or do not react to market changes may well undermine competition. Since efforts to reduce price transparency can harm consumers, intervention would require careful technological and policy fine-tuning. Some may argue that these challenges should tilt the balance in favour of nonintervention. A non-interventionist approach, however, risks creating a lacuna, which market players can exploit, again to consumers’ detriment.
So it is noteworthy that we enter this new antitrust world with a small-stakes case like Topkins. The companies used algorithm-based pricing software to sell their posters, prints and framed art online. But the executives still had one foot in the old cartel world as they allegedly discussed and agreed among themselves to fix prices for their products. How will competition officials respond when the executives leave this old world behind? When the executives no longer need to meet in hotel rooms since their pricing algorithms, in enhancing market transparency, foster classic tacit collusion and new forms of anticompetitive conduct? How will the agencies and courts respond to this new world of collusion? This remains unclear.
ENDNOTES
[1] Information, United States v. Topkins, Case No. 3:15-cr-00201-WHO (N.D. Cal. filed April 6, 2015), available at http://www.justice.gov/atr/cases/topkins.html.
[2] Conscious parallelism, also known as oligopolistic price coordination or tacit collusion, describes the process, not in itself unlawful, by which firms recognize their shared economic interests and their interdependence with respect to price and output decisions and subsequently unilaterally set their prices above the competitive level. Brooke Group Ltd. v. Brown & Williamson Tobacco Corp., 509 U.S. 209 (1993); Glossary of Industrial Organisation Economics and Competition Law, compiled by R. S. Khemani and D. M. Shapiro, commissioned by the Directorate for Financial, Fiscal and Enterprise Affairs, OECD, 1993, available at http://www.oecd.org/dataoecd/8/61/2376087.pdf.
[3] Bell Atl. Corp. v. Twombly, 550 U.S. 544, 554 (2007) (noting that “[t]he inadequacy of showing parallel conduct or interdependence, without more, mirrors the ambiguity of the behavior: consistent with conspiracy, but just as much in line with a wide swath of rational and competitive business strategy unilaterally prompted by common perceptions of the market”).
The preceding post comes to us from Ariel Ezrachi, the Slaughter and May Professor of Competition Law at The University of Oxford and Director of the Oxford University Centre for Competition Law and Policy, and Maurice E. Stucke, Associate Professor at the University of Tennessee College of Law and Co-founder of the Data Competition Institute. For an in-depth discussion of these themes, see their working paper, Artificial Intelligence & Collusion: When Computers Inhibit Competition, available here.